When people are starved of human contact, does it undermine their dignity to fob them off with something less? Emphatically yes, says Alegre: removing the human aspects of care work dehumanises the carers and those they care for. Rather than replacing humans, we need machines that perform specific functional tasks better, such as lifting. To create a digital illusion of loving – or caring – will compound isolation, she warns. It is the “corporate capture of human connection”.
Generative AI, like ChatGPT, may have fired the public imagination, but Alegre sees human creativity at risk. Asking it “Who is Susie Alegre?” proved dispiriting: the system failed to find her despite 25 years’ worth of publications, including a well-received book, Freedom to Think. When pressed, it attributed her work to various male authors.
“Writing women out of their own work is not unique to AI,” she observes dryly. Nevertheless, the failure is striking. “Whatever you ask it, ChatGPT will just make up plausible stuff.” In effect, it “sucks up all the ideas it can find” – using copyright-protected work to feed large language models – and “regurgitates them without attribution” (or consent and remuneration). If you are an independent thinker or creator, she says, it is nothing less than “intellectual asset-stripping”.
Copyright and human rights laws give artists economic and moral rights in their work, but for individuals the costs of mounting a legal challenge are too great. In 2023, strikes in the US by screenwriters – partly about the threat of generative AI to their future work – secured restrictions on the use of AI: collective action that mitigated this persistent problem of access to justice.
Alegre gives other examples where AI has failed. There are legal submissions drafted using ChatGPT which cite precedents that do not exist. Technology that sifts and screens documents is useful to lawyers, and there is a role for automation in handling certain disputes. But not knowing its limits risks creating a legal process “as fair as a medieval witch trial”. Judges who have relied on ChatGPT have not emerged well either. Judging disputes often involves a context-dependent, textured mix of intellect and feeling – which AI cannot undertake.
The dangers inherent in trying to replace human reasoning also arise in warfare. Drones can kill the enemy without exposing your own forces to immediate jeopardy. But, as viewers of the 2015 film Eye in the Sky will recall, when to strike can still pose a moral dilemma. A lawful, proportionate attack must weigh the military advantage against the risk of harm to civilians – which it is duty bound to minimise.
Alegre doubts AI can reliably work out what is lawful and what is not. Furthermore, AI remains prone to “hallucinations” and “inexplicably incorrect outputs”. If fully autonomous weapons are in use, “this means arbitrary killing”. Who is accountable when mass death results from a glitch in the system? The use of AI blurs lines of responsibility when things go wrong, as they inevitably do.
Acknowledging the fallibility of machines and the need to hold those responsible for harm to account are key themes of this interesting book.
There are cases where this is happening. Errors in the Horizon software system resulted in hundreds of sub-postmasters and sub-postmistresses being wrongfully convicted (and in some cases imprisoned) for fraud, false accounting and theft.
Alegre sees “automation bias” in Post Office management’s refusal to accept Horizon was flawed: whereby “people tend to favour information produced by a machine over contrary information right in front of their noses”. In a high-profile public inquiry, Post Office and Fujitsu employees are being confronted (in the presence of some they helped send to prison) with their own automation bias, and their willingness to tolerate injustice to “protect” a corporate reputation.
In another set of legal proceedings – the inquest into the death of 14-year-old Molly Russell – executives from social media companies Meta and Pinterest were summoned and challenged. As the inquest heard, Molly suffered from a depressive illness and was vulnerable due to her age. Recommendation algorithms provided her with text, images and video of suicide and self-harm that negatively impacted her mental health, leading her to take her own life in 2017. This tragic case informed, and helped enact, the Online Safety Act 2023. Meanwhile, in the US 41 states are suing Meta, accusing it of encouraging addictive behaviour in children while concealing its research into the harm caused.
Alegre shines a light on the less visible social costs of our use of AI. Workers on pitiful pay in Kenya, Uganda and India moderate content to enable ChatGPT to recognise graphic descriptions of child abuse, bestiality, murder, suicide, torture, self-harm and incest. To do so, they will have sifted a mass of disturbing material. This “exporting of trauma” is recognised as a global human rights problem, says Alegre, and has already led Meta to pay US$52 million to settle a single dispute brought by content moderators in the US.
It is also a “fallacy”, she says, “that virtual worlds are somehow greener”. Manufacturing and disposing of the devices we use has an enormous environmental impact, and even before the AI boom, the information and communications technology (ICT) sector produced more emissions than global aviation.
Urgent calls for global AI regulation can be misleading, warns Alegre, as they suggest these new technologies are currently beyond our control. They aren’t. We have privacy laws, anti-discrimination laws, labour laws, environmental laws, intellectual property laws, criminal laws and the human rights laws that underpin them. Rather than creating “grandiose new global regulators”, she says, we need to enforce existing laws and “identify specific gaps that need to be filled”.
Anticipating being labelled a Luddite, Alegre explains that the Luddites are misunderstood. When they destroyed cloth-making machinery in the Industrial Revolution they weren’t resisting innovation but protesting against “unethical entrepreneurs” using technology to depress wages “while trampling over workers’ rights” and “cheating consumers with inferior products”. They lost, she says, “not because they were wrong, but because they were crushed”.
AI is not evil, Alegre believes, but it has no moral compass. Guiding the direction of “scientific endeavour” to safeguard human rights is up to us as sentient beings. There is nothing inevitable about where we are going. We have a choice.
In writing Human Rights, Robot Wrongs, Alegre plainly hopes to enlist the like-minded and avoid the fate of the Luddites. I sincerely hope that she does.
Human Rights, Robot Wrongs: Being Human in the Age of AI by Susie Alegre (Atlantic).
New Statesman