Concise but cold: Facebook’s text summarisation feature to change reading habits

Rhodri Marsden writes about why a new AI assistant from Facebook could be problematic


The online abbreviation TL;DR is never well received by those of us who enjoy writing. Standing for “too long; didn’t read”, it curtly informs writers that they have used too many words to get their point across, that they have therefore not been read at all, and that their efforts were a waste of time.

It's unclear if Facebook appreciates how dismissive TL;DR sounds to anyone who regularly types out more than 20 words in a row, but that's the name the firm has given to a new AI assistant introduced during a company meeting last week. An audio recording of the meeting, obtained by BuzzFeed News, reveals that TL;DR will summarise articles – such as this one – into bullet points, saving you from having to read them. Unfortunately, dear reader, it doesn't exist yet, so you'll have to press on with this. I promise to wrap it up as quickly as I can.

TL;DR is deemed necessary because of accelerating information overload. Scholars have complained about an excess of information for hundreds of years, but there has never been as much of it as there is today. It's easily generated and distributed, and the competition for eyeballs is feverish. "Attention is zero-sum because every click [one company] gains, [another] loses," writes Vedant Misra at AI research firm OpenAI in a blog post. "That's why your email inbox is a battleground of people vying for your attention. So is the results page for every Google search."

Companies such as Facebook, Twitter and Google have been described as “resellers” of our attention. Algorithms are dedicated to finding the best way to get us to read a tweet, open a message, click on a link. But what if we didn’t have to click the link at all?

That’s the aim of TL;DR: to save us time by automatically summarising information. In the same way we can now listen to audiobooks at higher speeds, this is about getting us to consume more information more quickly. It’s also seen as an attempt by Facebook to become the gatekeeper of that information.

TL;DR is, however, likely to do a much better job than its antecedents, thanks to advances in natural language processing. Back in 2012, a Reddit community called AutoTLDR began using a bot to produce summaries of online articles, reducing them in length by 50 per cent or more. It worked, but it felt like more of an experiment than a service. By 2016, Facebook was describing its deep learning algorithm, DeepText, as having “near-human accuracy”, and across the field, computers were showing marked improvements at processing text and pinpointing which words mattered most. One study, at the University of Maryland, used AI to digest legalese and present the meaning of lengthy “terms of service” documents in a way people could understand. Firms such as Primer began using AI to help businesses process millions of documents in multiple languages to avoid anyone having to actually read them.
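To give a rough sense of how those early, pre-deep-learning bots approached the task, here is a minimal extractive-summarisation sketch in Python. It simply scores each sentence by how frequent its words are across the whole text and keeps the top-scoring half in their original order; the function name, the keep_ratio parameter and the sample text are illustrative assumptions, not anything taken from AutoTLDR, DeepText or Facebook's TL;DR.

    # A toy extractive summariser: keep the sentences whose vocabulary recurs
    # most often in the text. Illustrative only, not any company's actual method.
    import re
    from collections import Counter

    def summarise(text, keep_ratio=0.5):
        # Split into sentences on ., ! or ? followed by whitespace (crude but serviceable).
        sentences = re.split(r'(?<=[.!?])\s+', text.strip())
        # Count how often each word appears across the whole text, ignoring case.
        freq = Counter(re.findall(r'[a-z]+', text.lower()))
        # Score a sentence as the average frequency of its words.
        def score(sentence):
            tokens = re.findall(r'[a-z]+', sentence.lower())
            return sum(freq[t] for t in tokens) / len(tokens) if tokens else 0
        # Keep roughly keep_ratio of the sentences: take the highest-scoring ones,
        # then restore their original order so the summary still reads coherently.
        n_keep = max(1, int(len(sentences) * keep_ratio))
        chosen = sorted(sentences, key=score, reverse=True)[:n_keep]
        return ' '.join(s for s in sentences if s in chosen)

    print(summarise("Attention is scarce. Every click one company gains, another loses. "
                    "Summaries promise to save time, but they strip away detail and tone."))

Run on this column, something like the above would keep the sentences whose vocabulary recurs most often and discard the rest, which is exactly the trade-off under discussion: the gist survives, the nuance does not.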

Facebook now claims that its AI systems can immediately detect 95 per cent of hate speech posted on the platform, as opposed to just 52 per cent last year. Google has reported similar advances this month; its experiments with Imperial College London show “high linguistic quality in terms of fluency and coherence”.

But do these systems really understand the words they're processing? Do we want our skim reading to be done by a machine that merely simulates understanding? If a human being were asked to summarise some text, they would read it through, comprehend it and put it into simpler words. Computers are hamstrung by a lack of background knowledge, and can still struggle to get over linguistic hurdles. "Even the simplest sentences can be semantic minefields, littered with connotations and grammatical nuance that only native speakers can instantly make sense of," writes Christine Maroti, AI Research Engineer at language firm Unbabel.

So, the concerns surrounding TL;DR appear to be threefold. Firstly, that it might misunderstand texts and make simple mistakes, unintentionally propagating misinformation. Nick Inzucchi, a former Facebook employee who left the company earlier this month, has expressed concern in this regard. “AI will not save us,” he wrote. “The implicit vision guiding most of our integrity work today is one where all human discourse is overseen by perfect, fair, omniscient robots owned by Mark Zuckerberg. This is clearly a dystopia, but one so deeply ingrained we hardly notice it any more.”

Secondly, that it does a disservice to its users by assuming that they wish to outsource their understanding to an algorithm. Thirdly, that revenue which might otherwise accrue to the organisations that pay writers may end up diverted into Facebook's coffers, as people consume its summaries instead of the original work. Perhaps more fundamentally, it raises the profound question of whether writing is merely information. Many of us love to read precisely because of the detail it conveys and the emotions it provokes. TL;DR, even if it did its job perfectly, would be short on detail and lack emotional depth. And that, surely, has to be a shame.