About the End of the World (or not): A Talk with Dr. James Hughes

Dr. James Hughes is not particularly worried about the Terminator scenario wherein machines gain consciousness and murder us all in a coup de planète. Although popular discourse is rife with speculation and anxiety about what the future may hold, this sociologist, bioethicist, and associate provost at the University of Massachusetts Boston has dedicated his life to anticipating technology’s implications for our lives. As a futurist, he has studied the impact technological frontiers may have on our political and social structures. His visit to Haverford’s campus was sponsored by the Hurford Center’s student seminar “From Frankenstein to Alexa: A Humanistic Inquiry into the Ethics of Artificial Intelligence” (read more about their work here!), led by Katya Olson-Shipyatsky BMC ’19 and Nicky Rhodes ’19.

When he spoke on Wednesday about “Artificial Intelligence, Algorithms, and the Post-Human Future of Governance,” Dr. Hughes argued that Artificial Intelligence (AI), being fundamentally a series of if-then statements (albeit a massively complex one), will never be more than what we make of it. It will eliminate the need for routine, blue-collar jobs and pave the way for a future where creativity and innovation become fundamental skills. He then argued that AI will change the way we understand labor: instead of hiring 30 paralegals to go through case files, firms will simply automate the process. To Dr. Hughes, the massive unemployment this technological evolution will create is not an unsolvable problem. Universal Basic Income, a theoretical state-distributed minimum income that reaches all citizens, would supplement everyone’s income and contribute towards a more livable future.

He also addressed the complex nature of AI’s military implications. Since drones mean the United States will soon no longer need boots on the ground in conflict areas, Hughes claims that war is becoming less and less damaging and deadly. As our tools become more precise, fewer civilians are killed and less destruction is caused. Troubling as letting a machine kill humans may be, Hughes argued, machines undeniably fight ‘better’ than humans ever could.

Hughes’ considerations, however, seemed to discount civilian casualties; the evidence for his claim consisted of a chart showing the decline in US combat deaths since the U.S. Civil War. When probed on his omission of US-caused deaths abroad, Hughes argued that AI will also minimize civilian deaths within the context of a nuclearized, post-industrial world.

When Hughes discussed the implications of AI and algorithmization for governance and politics, the room’s atmosphere darkened. Hughes reminded a deathly quiet crowd of the horrifying implications of a totalitarian regime with enough server space. Imagine every action and transaction, every single interaction between citizens, being registered in a centralized system and analyzed. Authoritarian regimes have always depended on the backing, and the loyalty, of their militaries; with AI, that dependence disappears. Drones don’t subvert; they can’t rebel.

Hughes came prepared with an antidote to all the grim news, or so he claimed. He presented the Black Lives Matter movement as an example of sousveillance: a condition in which every individual has the tools to monitor and publicize abuses, a panopticon on steroids. Abuses become impossible to hide, and exposure becomes unavoidable, even more pervasive than it is today. He also argued that “hypernudges,” push notifications on steroids, could be deployed for causes we consider benevolent, such as incentivizing or even forcing people to vote, drawing on the immense volumes of data produced every second about every individual to track their every potential future action, thought, and belief. The state would become Deep Blue, the chess-playing computer that defeated the reigning world champion, foreseeing every possible move for every person and responding accordingly.

In the end, Hughes’ talk was eye-opening. Through frankly terrifying information, innovative and original solutions, and a healthy dose of skepticism about the technological world, Dr. Hughes managed to scare, and perhaps inspire, most of his audience into a greater awareness of Artificial Intelligence. One thing proved a relief: the Terminator scenario should not be of great concern.

Written by Federico Perelmuter ’21, prospective comparative literature and philosophy major.

Edited by Eleanor Morgan ’20.