Earlier this year, the AI model Sora created an entire video, and a wave of concern, from a single text prompt. Image / openai.com/sora
The new book Code Dependent: Living in the Shadow of AI sounds a warning about how the technology is changing what it means to be human. Author and Financial Times journalist Madhumita Murgia tells Greg Bruce we must act now to make sure it's not for the worse.
In a book packed with dystopian real-life stories about the threat of AI, the one about London Uber driver Alexandru Iftimie may be the most instructive.
Iftimie had been driving for Uber for more than two years, and had a flawless five-star rating, when the company sent him an automated warning saying he'd been flagged for fraudulent activity. He assumed it was a mistake and ignored it, but two weeks later he received another warning. One more, he knew, and he would be terminated.
He had no idea what he’d done wrong because the warnings came with no additional information. He called Uber driver support to find out. They didn’t know either. He says they told him: “The system can’t be wrong, what have you done?”
He joined the union, which took up his case with the company. Afraid of losing his job permanently, he stopped taking rides. Three months later, Uber contacted him to say its automated system had made an error. The company apologised and cleared Iftimie's record, but he still has no idea why or how it happened.
He told Murgia: “It’s like you find a warning left on your desk by your boss, but they are inaccessible, and if you knock on their door… they won’t tell you what it is.”
This is a classic example of what’s known in the world of AI as the “black box problem”: We can see the information that goes into the black box and the information that comes out, but what happens inside? No one knows. The algorithms are too complex.
The black box problem is just one of many issues confronting the exploding world of AI that Murgia explores in Code Dependent. It’s a book about one of the most important issues of our time from an author who has spent years on its front lines, reporting in real time as it changes the world in ways we are only just beginning to understand.
She tells another story in the book about how authorities in Amsterdam used AI software called ProKid to predict which of the city's children and young people were most likely to go on to commit crimes. If the story sounds familiar, it's probably because you've seen the movie Minority Report.
When parents received letters informing them their children were on the watchlist, they took their concerns to city officials. Those officials then relayed the parents' concerns to the mayor, in an email that was later obtained under the Freedom of Information Act and is republished in Code Dependent: "Of course," the officials wrote, "we cannot yet answer the most important question parents have. Why am I or my child on this list?"
Murgia writes: “The algorithm-generated lists were more than predictors. They were curses. Inclusion on one of these lists permanently changed the life-course of many young people. It made them question who they were. Just by being included on the list, they were branded, judgements following them around wherever they went. And even if they were to come off it in time, they were worried their chances of being admitted to college, getting a job, or buying a home would be permanently ruined. The ProKid software had taken events they had little control over, such as witnessing violence or being the child of a broken home, and twisted it into something that could set them up for lifelong failure.”
And so the book goes on. Murgia travels the world, meeting people whose lives have been affected by the AI revolution in disturbing and sometimes horrifying ways: people working for peanuts to train AI models that generate billions for their owners; victims of privacy invasions and racial bias; women whose faces and names have been used in AI-generated sex videos; content moderators who have developed PTSD from watching beheadings, mass shootings, terrorism, sexual violence and child abuse.
But, in spite of all this, she is far from pessimistic. At the Financial Times, she covers artificial intelligence, data and emerging technologies, and prior to that she worked at the famously techno-optimistic Wired magazine. She has written frequently about the many “amazing things” AI can do for us. Her message is that we need to make sure we are in control of it.
So, while the book offers plenty of cautionary tales, it’s also a call to arms: “It’s not a pessimistic outlook of how things are going to go wrong,” she says. “I think it’s a warning of what things could look like if we don’t actively participate in how the systems are designed and rolled out.”
She wrote the book in part to answer the question, “How is artificial intelligence changing what it means to be human?”
These changes are already happening. She believes it will become much more normal to have relationships with AI in ways we currently think of as "weird". She already sees these changes in her preschool-aged children and the way they perceive, and interact with, the AI assistant Alexa in their home. She believes these relationships will become even deeper as such systems are upgraded with generative AI.
“I think the thing to really watch out for is when AI systems have personalities and start to learn your preferences as a human… really understanding who you are and how you’re going to respond to things. And then I think the question – it’s a much more pressing question – of how can that be manipulated if it continues to be owned by corporations, but you think it’s your friend or your mentor or your adviser? What can it plant in your head? How does that change how you see the world?”
She says she is careful about the technology her children can access, and about the way she talks to them about it, particularly regarding the difference between a machine and a human. “Which sounds really silly, but it’s very easy to blur, even as an adult. I think it will be so interesting to see how our children’s generation see machines and humans – whether they even see them as different.”
In the final chapter, Murgia includes a list of 10 questions she asks herself every time she encounters an AI tool (#8: "In consequential areas such as employment, criminal justice or welfare, who is accountable for decisions or outcomes of an AI tool, and do they have meaningful control?"; #9: "How do we compensate people (e.g. artists, writers, photographers) for their creativity and expertise, which technology builders today mostly scrape for free to build new AI systems?").
What she wants is to encourage people to think about these sorts of questions, and to help shape the way that we think about, use and police AI. This is work we all need to be involved in, she says. “It’s too important to be left to technologists.”
The overriding message of Code Dependent is one of hope: that technology is powerful, but not as powerful as people.
“I found that the most inspirational parts of the stories I uncovered were not the sophisticated algorithms and their outputs,” she writes in the book’s epilogue, “but the human beings using and adapting to the technology: doctors, scientists, gig workers, activists and creatives, who represent the very best of us.”
Code Dependent: Living in the Shadow of AI by Madhumita Murgia is published by Macmillan and available from August 27.