Science & technology | Generative AI

How generative models could go wrong

A big problem is that they are black boxes

In 1960 Norbert Wiener published a prescient essay. In it, the father of cybernetics worried about a world in which “machines learn” and “develop unforeseen strategies at rates that baffle their programmers.” Such strategies, he thought, might involve actions that those programmers did not “really desire” and were instead “merely colourful imitation[s] of it.” Wiener illustrated his point with the German poet Goethe’s fable, “The Sorcerer’s Apprentice”, in which a trainee magician enchants a broom to fetch water to fill his master’s bath. But the trainee is unable to stop the broom when its task is complete. It eventually brings so much water that it floods the room, having lacked the common sense to know when to stop.

This article appeared in the Science & technology section of the print edition under the headline “How generative models could go wrong”

From the April 22nd 2023 edition
