Why AI shouldn’t be making life-and-death decisions

Let me introduce Philip Nitschke, also known as “Dr. Death” and “the Elon Musk of assisted suicide.” Nitschke wants to “demedicalize death” and make assisted suicide as unassisted as possible through technology. As my colleague Will Heaven reports, Nitschke has developed a coffin-size machine called the Sarco. People who wish to end their lives can enter the machine after undergoing an algorithm-based psychiatric self-assessment; if they pass, the Sarco will release nitrogen gas, asphyxiating them. The Sarco will ask three questions of a person who has made the decision to die: Who are you? Where are you? And do you know what will happen when you push that button?

In Switzerland, where assisted suicide is legal, candidates for euthanasia must demonstrate mental capacity, which is typically assessed by a psychiatrist. Nitschke wants to take humans out of the equation entirely.

Nitschke’s is an extreme example. But as Will writes, AI is already being used to triage and treat patients in a growing number of health-care fields. Algorithms are becoming an increasingly important part of care, and we must ensure that their role is limited to medical decisions, not moral ones.

Will examines the morality of efforts to develop AI that can make life-and-death decisions here.

I’m not the only one who feels uneasy about letting algorithms decide whether people live or die. Nitschke’s work is a classic case of misplaced faith in the capabilities of algorithms: he is trying to sidestep complicated human judgments by turning to a technology that he believes can make “unbiased” and “objective” decisions.

This is a dangerous road, and we know where it leads. AI systems reflect the humans who build them, and they are riddled with biases. We’ve seen facial recognition systems that fail to recognize Black people or that label them as criminals or gorillas. In the Netherlands, the tax authorities used an algorithm to try to identify benefits fraud, but it penalized innocent people, mostly those with lower incomes and members of minority groups. The consequences were devastating for thousands of lives: suicide, divorce, and children placed in foster care.

As AI becomes more common in health care, it is crucial to critically examine how these systems are built. Even if we could create a perfectly fair algorithm with zero bias, algorithms still lack the nuance and complexity to make decisions about humans and society on their own. How much decision-making we hand over to AI is something we should weigh carefully. There is nothing inevitable about letting it penetrate deeper into our lives and societies; that is a choice made by humans.

Deeper Learning

Meta wants to use AI to give people legs in the metaverse

Last week, Meta unveiled its latest virtual-reality headset, which carries an eye-watering $1,499.99 price tag. At the virtual event, Meta presented its vision for a next-generation social platform accessible to all. As my colleague Tanya Basu points out: “Even if you are among the lucky few who can shell out a grand and a half for a virtual-reality headset, would you really want to?”

The legs were fake: One of the big selling points of the metaverse was the promise that avatars would have legs. Legs! A leggy avatar of Meta CEO Mark Zuckerberg announced that artificial intelligence would enable the feature, allowing avatars not only to walk and run but also to wear digital clothes. There is just one problem: Meta hasn’t actually figured out how to do this yet, and the “segment featured animations created from motion capture,” as Kotaku reports.

Meta’s AI lab is one of the largest and most successful in the industry, and it’s home to some of the best engineers in the field. It’s hard to imagine that Meta’s multibillion-dollar effort to make VR Sims a reality is very rewarding work for its AI researchers. Are you on the AI/ML team at Meta? I’d love to hear from you. (Drop me a line: [email protected])

Bits and Bytes

The exploited labor behind artificial intelligence
In an essay, Timnit Gebru, former co-lead of Google’s ethical AI team, and researchers at her Distributed AI Research Institute argue that AI systems are driven by labor exploitation, and that AI ethics discussions should prioritize transnational worker organization efforts. (Noema)

AI-generated art is the new clip art
Microsoft has teamed up with OpenAI to add the text-to-image AI DALL-E 2 to its Office suite. You can now generate images for greeting cards and PowerPoint presentations just by entering prompts. (The Verge)

An AI version of Joe Rogan interviewed an AI Steve Jobs
This is pretty mind-blowing. Play.ht, a text-to-voice AI startup, trained an AI model on Steve Jobs’s biography and all the recordings of him it could find online to imitate the way Jobs might speak in a podcast. The content is quite silly, but it won’t be long before this technology can fool anyone. (Podcast.ai)

Tour Amazon’s dream home, where every appliance is also a spy
This story offers a clever way to visualize how invasive Amazon’s push to embed “smart” devices in our homes really is. (The Washington Post)
