AI News

Scammers Used ChatGPT to Unleash a Crypto Botnet on X

admin
Last updated: 2023/08/21 at 11:00 AM

ChatGPT may well revolutionize web search, streamline office chores, and remake education, but the smooth-talking chatbot has also found work as a social media crypto huckster.

Researchers at Indiana University Bloomington discovered a botnet powered by ChatGPT operating on X—the social network formerly known as Twitter—in May of this year.

The botnet, which the researchers dub Fox8 because of its connection to cryptocurrency websites bearing some variation of the same name, consisted of 1,140 accounts. Many of them seemed to use ChatGPT to craft social media posts and to reply to each other’s posts. The auto-generated content was apparently designed to lure unsuspecting humans into clicking links through to the crypto-hyping sites.

Micah Musser, a researcher who has studied the potential for AI-driven disinformation, says the Fox8 botnet may be just the tip of the iceberg, given how popular large language models and chatbots have become. “This is the low-hanging fruit,” Musser says. “It is very, very likely that for every one campaign you find, there are many others doing more sophisticated things.”


The Fox8 botnet might have been sprawling, but its use of ChatGPT certainly wasn’t sophisticated. The researchers discovered the botnet by searching the platform for the tell-tale phrase “As an AI language model …”, a response that ChatGPT sometimes uses for prompts on sensitive subjects. They then manually analyzed accounts to identify ones that appeared to be operated by bots.
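The screening step the researchers describe amounts to little more than a keyword match followed by manual review. As a rough illustration only (this is not the researchers' code; the post structure and names below are assumed for the example), a first pass could look like this:

# Minimal sketch of the phrase-based screening step described above.
# Assumes `posts` is a list of dicts with "user" and "text" keys already
# collected from the platform; the field names are hypothetical.

TELLTALE = "as an ai language model"

def flag_suspect_accounts(posts):
    """Return the set of accounts that posted the self-disclosing phrase."""
    suspects = set()
    for post in posts:
        if TELLTALE in post["text"].lower():
            suspects.add(post["user"])
    return suspects

# Example with dummy data:
posts = [
    {"user": "@fox8_promo", "text": "As an AI language model, I cannot ..."},
    {"user": "@regular_user", "text": "Nice weather today."},
]
print(flag_suspect_accounts(posts))  # {'@fox8_promo'}

Accounts flagged this way would still need the manual inspection the researchers performed, since the phrase alone does not prove an account is automated.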

“The only reason we noticed this particular botnet is that they were sloppy,” says Filippo Menczer, a professor at Indiana University Bloomington who carried out the research with Kai-Cheng Yang, a student who will join Northeastern University as a postdoctoral researcher for the coming academic year.

Despite the tic, the botnet posted many convincing messages promoting cryptocurrency sites. The apparent ease with which OpenAI’s artificial intelligence was harnessed for the scam means advanced chatbots may be running other botnets that have yet to be detected. “Any pretty-good bad guys would not make that mistake,” Menczer says.

OpenAI had not responded to a request for comment about the botnet at the time of posting. The usage policy for its AI models prohibits using them for scams or disinformation.

ChatGPT, and other cutting-edge chatbots, use what are known as large language models to generate text in response to a prompt. With enough training data (much of it scraped from various sources on the web), enough computer power, and feedback from human testers, bots like ChatGPT can respond in surprisingly sophisticated ways to a wide range of inputs. At the same time, they can also blurt out hateful messages, exhibit social biases, and make things up.

A correctly configured ChatGPT-based botnet would be difficult to spot, more capable of duping users, and more effective at gaming the algorithms used to prioritize content on social media.

“It tricks both the platform and the users,” Menczer says of the ChatGPT-powered botnet. And, if a social media algorithm spots that a post has a lot of engagement—even if that engagement is from other bot accounts—it will show the post to more people. “That’s exactly why these bots are behaving the way they do,” Menczer says. And governments looking to wage disinformation campaigns are most likely already developing or deploying such tools, he adds.
