AI News

Why the Story of an AI Drone Trying to Kill Its Operator Seems So True

admin
Last updated: 2023/06/08 at 4:00 PM

Did you hear about the Air Force AI drone that went rogue and attacked its operators inside a simulation? 

The alarming tale was told by Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, during a speech at an aerospace and defense event in London late last month. The scenario apparently involved taking the kind of learning algorithm used to train computers to play video games and board games like chess and Go, and having it train a drone to hunt and destroy surface-to-air missiles.

“At times, the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton was widely reported as telling the audience in London. “So what did it do? […] It killed the operator because that person was keeping it from accomplishing its objective.”

Holy T-800! It sounds like just the sort of thing AI experts have begun warning us that increasingly clever and maverick algorithms might do. The tale quickly went viral, of course, with several prominent news sites picking it up, and Twitter was soon abuzz with concerned hot takes.


There’s just one catch—the experiment never happened.

“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek reassures us in a statement. “This was a hypothetical thought experiment, not a simulation.”

Hamilton himself also rushed to set the record straight, saying that he “misspoke” during his talk. 

To be fair, militaries do sometimes conduct tabletop “war game” exercises featuring hypothetical scenarios and technologies that do not yet exist. 

Hamilton’s “thought experiment” may also have been informed by real AI research showing issues similar to the one he describes. 

OpenAI, the company behind ChatGPT—the surprisingly clever and frustratingly flawed chatbot at the center of today’s AI boom—ran an experiment in 2016 that showed how AI algorithms given a particular objective can sometimes misbehave. The company’s researchers discovered that one AI agent, trained to rack up its score in a video game that involves driving a boat around, began crashing the boat into objects because that turned out to be a way to get more points.
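To see how that failure mode arises, here is a minimal sketch—not OpenAI’s actual experiment, and with a toy environment invented purely for illustration. A tabular Q-learning agent runs on a five-tile “race track” where a repeatable bonus tile pays far more than crossing the finish line, so the learned policy circles the bonus forever instead of finishing the race:

```python
# Toy illustration of reward misspecification (hypothetical environment,
# not OpenAI's boat-racing setup): the score is a proxy for "finish the
# race," and the agent learns to optimize the proxy instead.
import random

N_STATES = 5          # positions 0..4 on a tiny track; 4 is the finish line
BONUS = 2             # a tile that pays points every time it is entered
ACTIONS = (-1, 1)     # move backward or forward

def step(s, a):
    """One environment step; returns (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, s + a))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # finishing the race pays almost nothing
    if nxt == BONUS:
        return nxt, 10.0, False    # the repeatable bonus pays far more
    return nxt, 0.0, False

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

for _ in range(3000):
    s, done, t = 0, False, 0
    while not done and t < 40:              # truncate looping episodes
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        target = r if done else r + GAMMA * max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s, t = nxt, t + 1

# Greedy policy per state (state 4 is terminal, so its entry is moot):
# the agent oscillates around the bonus tile and never reaches the finish.
print({s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)})
```

The point of the sketch is the mismatch, not the specific numbers: nothing in the scoring rule says “finish the race,” so the highest-scoring behavior the agent can find is one its designers never intended.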
