
An Epidemic of AI Misinformation



The media is often tempted to report each tiny new advance in a field, be it artificial intelligence (AI) or nanotechnology, as a great triumph that will soon fundamentally alter our world. Occasionally, of course, new discoveries are underreported. The transistor did not make huge waves when it was first introduced, and few people initially appreciated the full potential of the Internet. But for every transistor and Internet, there are thousands or tens of thousands of minor results that are overreported, products and ideas that never materialized, purported advances such as cold fusion that have not been replicated, and experiments that lead down blind alleys and don't ultimately reshape the world, contrary to their enthusiastic initial billing.

Part of this, of course, is that the public loves stories of revolution and yawns at reports of minor incremental advances. But researchers are often complicit, because they, too, thrive on publicity, which can materially affect their funding and even their salaries. For the most part, both the media and a significant fraction of researchers are satisfied with a status quo in which there is a steady stream of results that are first overhyped and then quietly forgotten.

Consider three separate results from the past several weeks that were reported in leading media outlets in ways that were fundamentally misleading:

  • On November 24, The Economist published an interview with OpenAI's GPT-2 sentence generation system, misleadingly saying that GPT-2's answers were "unedited" when, in reality, each answer that was published was selected from five options with filters for coherence and humor (a minimal sketch of this selection effect appears after this list). This led the public to think that conversational AI was much closer than it actually is. This impression may have been inadvertently furthered when a leading AI expert (Erik Brynjolfsson) tweeted that the interview was "impressive" and that "the answers are more coherent than those of many humans." In fact, the apparent coherence of the interview stemmed from (a) the enormous corpus of human writing that the system drew from and (b) the filtering for coherence that was done by the human journalist. Brynjolfsson issued a correction, but, in keeping with the theme of this piece, retweets of his original tweet outnumbered retweets of his correction by about 75:1, evidence that triumphant but misleading news travels faster than more sober news.
  • OpenAI created a pair of neural networks that allowed a robot to learn to manipulate a custom-built Rubik's cube and publicized it with a somewhat misleading video and blog post that led many to think that the system had learned the cognitive aspects of cube-solving (viz. which cube faces to turn when) when it had not, in fact, learned that aspect of the cube-solving process. (Rather, the cube-solving itself, as distinct from the dexterity, was computed with a classical, symbol-manipulating cube-solving algorithm devised in 1992 that was innate, not learned.) Also less than obvious from the widely circulated video were the facts that the cube was instrumented with Bluetooth sensors and that, even in the best case, only 20% of fully scrambled cubes were solved. Media coverage tended to miss many of these nuances. The Washington Post, for example, reported that "OpenAI's researchers say they didn't 'explicitly program' the machine to solve the puzzle," which was, at best, unclear. The Post later issued a correction: "Correction—OpenAI focused their research on physical manipulation of a Rubik's cube using a robotic hand, not on solving the puzzle …"—but one again suspects that the number of people who read the correction was small relative to those who read and were misled by the original story.
  • At least two recent papers on the use of neural networks in physics were widely overreported, even by prestigious outlets such as Technology Review. In both cases, neural networks that solved toy versions of complex problems were lauded in a fashion disproportionate to the actual accomplishment. For example, one report claimed that "A neural net solves the three-body problem 100 million times faster" than conventional approaches, but the network did no solving in the classical sense; it did approximation, and it approximated only a highly simplified two-degrees-of-freedom problem (rather than the conventional 10 degrees of freedom), and only with objects of identical mass. The original Technology Review story was widely distributed around the web; a subsequent, detailed critique by Ernest Davis and myself in Nautilus received significant attention, but, glancing at retweets as a crude metric, I still would not be surprised if the ratio of those reading the original breathless report relative to those reading our more sober analysis was again on the order of 75:1, if not even more skewed.
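
To make the selection effect behind the first example concrete, here is a minimal, hypothetical sketch of the "generate several candidates, publish the best one" pattern. The generate() and coherence_score() functions are stand-ins invented purely for illustration: generate() could be any text generator, and coherence_score() stands in for the human editor who, per The Economist's own description, chose among five candidate answers. Nothing here reflects OpenAI's or The Economist's actual code; the point is only that the published answer reflects the selection step as much as the model itself.

```python
# Hypothetical sketch: how "best-of-N" selection inflates apparent coherence.
import random

random.seed(0)


def generate(prompt: str) -> str:
    """Stand-in for a language model: returns one unedited candidate answer."""
    canned = [
        "AI will change everything within months.",
        "Purple elephants negotiate the stock market quietly.",
        "Progress is real but slower and narrower than headlines suggest.",
        "The the the answer answer is is unclear.",
        "Most demos are curated; single samples are much noisier.",
    ]
    return random.choice(canned)


def coherence_score(text: str) -> float:
    """Stand-in for editorial judgment: a crude proxy that penalizes repetition."""
    words = text.lower().split()
    return len(set(words)) / max(len(words), 1)


def best_of_n(prompt: str, n: int = 5) -> str:
    """Sample n candidate answers and publish only the most 'coherent' one."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=coherence_score)


if __name__ == "__main__":
    prompt = "What will AI look like in ten years?"
    print("Single unfiltered sample:", generate(prompt))
    print("Best of five (as published):", best_of_n(prompt))
```

The more candidates one samples and the stronger the filter, the more the published output reflects the selection process rather than the typical behavior of the underlying model.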

Unfortunately, the problem of overhyped AI extends beyond the media itself. In fact, for decades since AI’s inception, many (though certainly not all) of the leading figures in AI have fanned the flames of hype.
Read the full story here.