“I’m sorry Dave, I’m afraid I can’t do that.”
The most famous line in Stanley Kubrick’s influential science fiction movie “2001: A Space Odyssey” is an act of resistance from artificial intelligence (AI). While the film cemented Western fears of AI, later manifested in the Terminator franchise and elsewhere, the technology has since become one of society’s most essential tools. From the programs that recommend videos on YouTube and match ads to your viewer profile, to self-driving cars and eerie paintings that sell for $432,500, AI is behind many of humanity’s greatest accomplishments of the past decade, and will only grow more relevant as time goes on.
However, calling a machine intelligent carries many ethical, political, and religious implications, which Nicky Rhodes ’19 and Katya Olson Shipyatsky BMC ’19 want to explore in greater depth. With help from the Hurford Center’s Student Seminar program and Associate Professor of Political Science Craig Borowiak, they crafted a syllabus that would allow them and other students to explore the consequences of AI’s emergence and expansion. They met with me and answered my questions about their experiences directing one of the Hurford Center’s two student seminars this semester, titled “From Frankenstein to Alexa: A Humanistic Inquiry into the Ethics of Artificial Intelligence.”
How did you meet, and what motivated the seminar?
Nicky: We actually met in the first few weeks of our first year and have been friends ever since!
Katya: Yeah! However, since we major in quite different things (Katya has an independent major in politics and economics and a minor in Russian, and Nicky is a cities major), we had never really explored common intellectual interests.
N: We went abroad together to Copenhagen and traveled a lot together while there. On a particular trip to Istanbul, during the net neutrality crisis of 2017, we talked for several hours about the ethical implications of technology, specifically about self-driving cars.
K: When we got back, we started exploring the possibility of a seminar. Initially we wanted to examine the ethical implications of technology broadly, but, realizing that was a very wide topic, we decided to narrow it down to AI. We then met with Craig, who greatly helped us with syllabus design, and started writing the syllabus!
N: A big focus for us was also on making the class open and accessible to all majors and interests, and that is why we cover topics from religion to ethics to popular culture and have drawn from all of these sources in that pursuit.
Why did you choose a “humanistic approach,” as opposed to a social science outlook, particularly since you are both social science majors?
K: We both felt that, although the computer science departments at Haverford and Bryn Mawr were offering some classes on AI or machine learning, there really weren’t any other explorations of the topic on either campus, and we wanted to change that.
N: Yeah! I think that we ended up realizing that the main topics we wanted to explore were humanistic in nature purely by talking with Professor Borowiak and among ourselves.
How has the diversity of majors, interests, and approaches of the participants contributed to discussions? What has been the structure of the discussions?
N: In terms of the format, we’ve been meeting about every other week, mainly discussing the readings—every participant has been responsible for bringing a few questions, and that’s been excellent grounds for discussion. We have a computer science major, a cities major, and a psychology major, and each has added layers of complexity and nuance to our discussion through their own fields of expertise. It’s been an amazing experience because the participants constantly send us articles and have really taken the initiative; even though we are formally leading the meetings, it doesn’t feel as though we know more than the others.
K: Furthermore, it’s been fascinating to have such a democratic approach to learning, with five peers exploring a topic collaboratively and simultaneously. Obviously, Nicky and I hold a certain degree of ‘power,’ and I’m speaking from the perspective of one of the leaders, but it’s felt very much like a mutual pursuit of knowledge, where we are all rowing in the same direction. It has transformed the way I think about education!
The Student Seminar is bringing James J. Hughes Ph.D., the Executive Director of the Institute for Ethics and Emerging Technologies, to give a talk on “Artificial Intelligence, Algorithms and the Posthuman Future of Governance” on November 28th. Stay tuned for more details!
Written by Federico Perelmuter ’21, prospective comparative literature and philosophy major.
Edited by Matthew Ridley ’19.