The Daily Tar Heel
Printing news. Raising hell. Since 1893.
Monday, April 22, 2024


Editorial: Even robots need ethics training

Le Ha, Toluwanimi Dapo-Adeyemo, Kezia Kennedy, and Adam Sherif, first-year students at UNC, pose for facial detection by artificial intelligence.

The fear that robots are taking over the world might not be that far-fetched. 

While artificial intelligence is praised for its responsiveness to expansive amounts of information, AI seems to be absorbing biases, too. An experiment published in June demonstrated that robots trained with artificial intelligence exhibited racism and sexism in their decision-making.

While sorting through billions of images, robots in this experiment routinely categorized Black men as "criminals." Similarly, the label "homemaker" was given to women more regularly than it was given to men.

In their efforts to create a machine that could eliminate human error, programmers have failed.

AI is made by humans and deployed into existing systems and institutions – none of which has been immune to sexist and racist practices throughout history. Without studies like this one, such ethical concerns could have been missed entirely.

This technology is already ingrained in our lives, often in ways that seem relatively low-stakes. AI can be used to restock grocery shelves, power online shopping algorithms or even play chess.

Systemic issues emerge when AI systems are embedded in more than simply shelf-stocking robots.

AI tools are used to screen potential tenants or job candidates, relying on criminal histories and evictions to make their choices. These screening tools reflect longstanding racial disparities in the criminal legal system, disproportionately harming the members of marginalized communities these systems consistently discriminate against.

And while this might be only a setback in the development of AI, it has the potential to become something more if the issue goes unexamined.

Zac Stewart Rogers, a supply chain management professor from Colorado State University, told the Washington Post that new coding software is often built on pre-existing structures. This, he said, explains the prevalence of continuing issues in new robot systems. 

Time and time again, systemic injustice has become ingrained in our institutions – even in the realm of technology. This issue could root itself in future artificial intelligence unless it is addressed while AI is still in its early stages of development.

Artificial intelligence has been called the future of technology, and it has many potential benefits if developed and introduced responsibly. Many cite the benefits of expanding its use, including its potential to drive economic growth, cost-effectively increase productivity, create jobs and raise GDP.

With a growing population, job creation is important. Bias must be eliminated from the data used in AI algorithms, beginning with a more representative group of artificial intelligence developers. Either artificial intelligence algorithms incorporate corrective measures, or the solutions to our growing world's problems just got a lot more complicated.

This raises a question — is it really possible to have a robot without human error if it is made by humans?

