Amor Mundi

Can We Stop Ourselves?

05-07-2023

Roger Berkowitz
Technology is not bad. On their own, social media, the internet, and artificial intelligence are tools. They can be used to further humanity and also to destroy it. This is true of artificial intelligence today. It can be used to increase food production and cure illnesses. But it can also be used to create intelligent machines that would make humans superfluous and robotic warriors who might eliminate those superfluous humans. The danger in AI is not necessarily in the technology itself, but in how we will use it. And Geoffrey Hinton, one of the pioneers of AI research, has recently quit his job at Google to spread the word about his fear that AI will be used in ways that do fantastic harm. “It is hard to see,” he says, “how you can prevent the bad actors from using it for bad things.” In an essay on Hinton’s worries, Cade Metz writes:


His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.

“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes that the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
