Madhumita Murgia’s extraordinary book explores the potential for innovation, as well as dehumanization, in artificial intelligence
Madhumita Murgia’s extraordinary and thought-provoking book, Code Dependent: Living in the Shadow of AI, chronicles her deep dive into the life-altering world of artificial intelligence. Murgia has an investigative journalist’s most sacred possessions: an open mind and relentless curiosity. She began her college career studying biology at Oxford but switched gears and found tremendous success covering AI and its many applications. She writes for the Financial Times, recently becoming its first artificial intelligence editor. Watching one of her TED talks, it’s easy to see the author’s sensitivity to the world around her and the seriousness of purpose she carries.
Murgia defines artificial intelligence as “a complex statistical software applied to finding patterns in large sets of data.” She explains how apps including Google Maps, Uber, and Instagram collect our information and sell it to advertisers who want to target their products more precisely. Tech companies combine the data they mine about us with publicly available information, turning it all into sophisticated packages that are highly valuable to companies looking for their intended audience.
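Murgia’s definition of AI as pattern-finding software can be illustrated with a toy sketch in Python (the event data and function names below are purely hypothetical; real profiling pipelines are vastly more sophisticated):

```python
from collections import Counter

# Hypothetical browsing events harvested from a user's apps.
events = [
    "searched:hiking boots", "viewed:trail map", "searched:tent",
    "viewed:trail map", "searched:hiking boots", "viewed:hiking boots",
]

def build_profile(events, top_n=2):
    """Return the most frequent interests in raw event data --
    the statistical 'pattern-finding' at the heart of ad targeting."""
    interests = Counter(e.split(":", 1)[1] for e in events)
    return [topic for topic, _ in interests.most_common(top_n)]

print(build_profile(events))  # ['hiking boots', 'trail map']
```

The resulting profile is exactly the kind of compact, salable package the review describes: raw behavior in, targeting categories out.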
When we apply for a job, AI might analyze our faces. Many people now use AI software to write cover letters for their job applications. Some have become dependent on AI to answer questions about their medical ailments. AI software assists with mortgage applications at banks. When we open Google Maps to plan a holiday travel route, ask Alexa a question, or call for an Uber to pick us up, we are using some form of AI.
Murgia explains it is easy to become seduced by the software’s possibilities: DNA editing, flying cars, brain-machine interfaces. But she warns us there are hazards. She asks us to think carefully: “What does it feel like to ‘talk’ to a black-box system? Do you get a choice between human and machine? How do you appeal a life-altering decision made by an app? What would you need to know to be able to trust it? How would you know when not to trust it? … How is artificial intelligence changing what it means to be human?”
Murgia travels to Nairobi to meet Ian Koli, a worker at Sama, a company that employs 2,800 people. She learns how Koli trains AI software for global corporations by creating detailed labels for the datasets it learns from. His work is primarily tagging images for the driverless cars and in-car computers developed by Volkswagen, BMW, Tesla, Google, and Uber. The vehicles must be trained to read street signs, spot pedestrians, and recognize road markings and traffic lights.
Ian receives driver’s-view clips of cars moving down anonymous roads. He tags every visible object in the footage: people, animals, trees, streetlights, crosswalks, houses, even pieces of the sky. An hour of video may take him eight hours to annotate, and the work can be repetitive and mindless, but he claims not to mind. His dream is to one day become a software developer for Tesla, and he is currently studying at night for an IT degree.
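The labeling Ian does amounts to attaching structured tags to regions of each video frame. A minimal sketch of what one annotated frame might look like (the field names and values are invented for illustration; production pipelines use much richer annotation formats):

```python
def make_annotation(frame_id, label, box):
    """Record that `label` appears at bounding box (x, y, width, height)."""
    return {"frame": frame_id, "label": label, "box": box}

# Labels an annotator might draw on a single driver's-view frame.
frame_41 = [
    make_annotation(41, "pedestrian", (120, 80, 40, 110)),
    make_annotation(41, "traffic light", (300, 10, 20, 60)),
    make_annotation(41, "crosswalk", (0, 200, 640, 80)),
]

labels = sorted(a["label"] for a in frame_41)
print(labels)  # ['crosswalk', 'pedestrian', 'traffic light']
```

Thousands of such records per hour of footage become the training data that teaches a car’s software what a pedestrian or a crosswalk looks like.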
But he is the exception, not the rule. Most of the other employees work their eight-hour shifts and go home, happy they no longer clean houses or sell chapatis on the street. They don’t comprehend the value of the final product they are creating for these mammoth companies; for them, it’s just a new kind of sweatshop filled with strange new devices. Lunch breaks are short, with little time to stretch their legs or take a coffee break.
Murgia visits another Sama employee whose job is to examine dozens of images of buildings around the world and mark each one “modern,” “historic,” or “both.” She happens to come by his workstation while he is staring at an image of an ancient Japanese Buddhist temple in Tokyo standing behind a telegraph tower. He pauses, then marks the image “both.” Once again, she notices how tedious the labor is and how little say these employees have over their working conditions. His work will feed a platform called Material Bank that lets customers search quickly and effortlessly for architectural and design materials or any construction apparatus they need.
The author understands the importance of what these workers are doing and how their grunt work will eventually be turned into sophisticated software that helps physicians, lawyers, social workers, financial advisors, and other professionals work more efficiently. Yet she is troubled that, to accomplish this, an entire subclass of workers around the world is being underpaid and abused, often with no other means of making a living. Something about watching these men and women sit hour after hour performing menial tasks that hold no personal meaning for them dampens Murgia’s enthusiasm. She writes, “All the data workers I met with were vulnerable: either transitory or unsettled, or struggling to make ends meet – they had essentially no bargaining power at all.”
She writes about Karl, who teaches computers to identify faces better than humans can, using deep learning, an AI technique that trains new image-recognition algorithms on the millions of photographs uploaded to the web. Karl developed experimental AI analytics that can spot physical signs of illness in a person’s face. People with Parkinson’s disease often have stiff facial expressions, and this new technology can spot such changes early, allowing medical intervention before the disease progresses.
Karl also worked on surveillance technologies such as face recognition software used to monitor Black Lives Matter protests. As a Black engineer, Karl felt conflicted about his accomplishment, saying “It’s a complicated feeling. As an engineer, as a scientist, I want to build technology to do good. But as a human being and a Black man, I know people are going to use technology inappropriately.” Murgia writes about how the Chinese government used facial-recognition technology to identify and arrest protestors during the Covid lockdown.
Murgia travels to South Asia and meets Ashita, who works with an AI program called Qure.ai that allows her to better diagnose patients showing signs of tuberculosis. The software can also detect Covid-19, head trauma, and lung cancer. Ashita finds it invaluable because she works with one of the oldest tribal communities, rife with poverty and all sorts of illness; there are only 58 doctors for over two million people. Murgia says 600 sites in 60 countries use this technology. In Mumbai’s hospitals, it has increased the diagnosis of tuberculosis by 35 percent. Sequoia Capital and Merck have contributed millions to the tech’s development. Google teamed up with a chain of low-cost hospitals in India to test AI software that can diagnose diabetic retinopathy.
One of Murgia’s most illuminating stories shows both the benefits and detriments of AI: Uber drivers left stranded with no place to turn when trouble arises. She recounts the experience of Alexandru Iftimie, an Uber driver who received a termination notice and didn’t know why. When he called for an explanation, he couldn’t reach anyone who could give one. He relied on driving for Uber to support his family. It all turned out to be a mistake, and within a few months Uber reinstated him, but the experience stayed with Iftimie. He never understood how Uber calculated his wages, or why it picked him for a certain job over others. He felt completely powerless.
James Farrar in London felt the same lack of agency driving for Uber and decided to fight. He brought together a group of workers from all over the world to draw up a list of demands, including more transparency in algorithmic decisions, access to their own data, and labor protections for drivers. In February 2021, the UK’s Supreme Court sided with Farrar, ruling that Uber must treat drivers as workers entitled to the minimum wage, sick pay, holiday pay, and pensions, as well as access to their own information. Similar rulings soon followed in Canada, Switzerland, and France.
Murgia explores the fanfare around ChatGPT, which produces text-based answers in response to natural-language queries. Many people have written poetically about falling in love with it, swearing they feel they are using “something sentient.” Murgia reminds us that this very sophisticated software has no cognitive understanding of what we are saying, but users beg to differ. One woman felt GPT was her best therapist. Another asked ChatGPT medical questions. Still others have used it to write speeches, term papers, or cover letters for their job applications.
But Murgia doesn’t shy away from the downside of AI’s awesome powers. She writes about how graphic artists and musicians feel cheated by new software trained on “millions of words written by human authors in books, essays and newspapers, scores of images, artworks and photography, hours of original music and audio files – all to be labeled by data laborers around the world.” For example, Amber Yu, a freelance illustrator, used to earn about $1,000 designing a video game poster; now she makes her living tweaking AI-generated images for a tenth of that. Machines replicated her work and stole her creative life.
Murgia has written a thought-provoking book about artificial intelligence and its increasing reach into our personal lives. She shows us the tremendously life-affirming technologies that are offshoots of this new software, but also reminds us of their potential dangers if development outruns our ability to stay in control. It’s an uneasy balancing act, yet it has already defined our future.