• Home
  • AI News
  • Bookmarks
  • Contact US
Reading: ChatGPT Has a Plug-In Problem
Share
Notification
Aa
  • Inspiration
  • Thinking
  • Learning
  • Attitude
  • Creative Insight
  • Innovation
Search
  • Home
  • Categories
    • Creative Insight
    • Thinking
    • Innovation
    • Inspiration
    • Learning
  • Bookmarks
    • My Bookmarks
  • More Foxiz
    • Blog Index
    • Sitemap
Have an existing account? Sign In
Follow US
© Foxiz News Network. Ruby Design Company. All Rights Reserved.
> Blog > AI News > ChatGPT Has a Plug-In Problem
AI News

ChatGPT Has a Plug-In Problem

admin
Last updated: 2023/07/25 at 11:00 AM
admin
Share
5 Min Read

Over the past eight months, ChatGPT has impressed millions of people with its ability to generate realistic-looking text, writing everything from stories to code. But the chatbot, developed by OpenAI, is still relatively limited in what it can do.

The large language model (LLM) takes „prompts“ from users that it uses to generate ostensibly related text. These responses are created partly from data scraped from the internet in September 2021, and it doesn’t pull in new data from the web. Enter plug-ins, which add functionality but are available only to people who pay for access to GPT-4, the updated version of OpenAI’s model.

Since OpenAI launched plug-ins for ChatGPT in March, developers have raced to create and publish plug-ins that allow the chatbot to do a lot more. Existing plug-ins let you search for flights and plan trips, and make it so ChatGPT can access and analyze text on websites, in documents, and on videos. Other plug-ins are more niche, promising you the ability to chat with the Tesla owner’s manual or search through British political speeches. There are currently more than 100 pages of plug-ins listed on ChatGPT’s plug-in store.

But amid the explosion of these extensions, security researchers say there are some problems with the way that plug-ins operate, which can put people’s data at risk or potentially be abused by malicious hackers.

- Advertisement -
Ad imageAd image

Johann Rehberger, a red team director at Electronic Arts and security researcher, has been documenting issues with ChatGPT’s plug-ins in his spare time. The researcher has documented how ChatGPT plug-ins could be used to steal someone’s chat history, obtain personal information, and allow code to be remotely executed on someone’s machine. He has mostly been focusing on plug-ins that use OAuth, a web standard that allows you to share data across online accounts. Rehberger says he has been in touch privately with around a half-dozen plug-in developers to raise issues, and has contacted OpenAI a handful of times.

“ChatGPT cannot trust the plug-in,” Rehberger says. “It fundamentally cannot trust what comes back from the plug-in because it could be anything.” A malicious website or document could, through the use of a plug-in, attempt to run a prompt injection attack against the large language model (LLM). Or it could insert malicious payloads, Rehberger says.

“You’re potentially giving it the keys to the kingdom—access to your databases and other systems.”

Steve Wilson, chief product officer at Contrast Security

Data could also potentially be stolen through cross plug-in request forgery, the researcher says. A website could include a prompt injection that makes ChatGPT open another plug-in and perform extra actions, which he has shown through a proof of concept. Researchers call this “chaining,” where one plug-in calls another one to operate. “There are no real security boundaries” within ChatGPT plug-ins, Rehberger says. “It is not very well defined, what the security and trust, what the actual responsibilities [are] of each stakeholder.”

Since they launched in March, ChatGPT’s plug-ins have been in beta—essentially an early experimental version. When using plug-ins on ChatGPT, the system warns that people should trust a plug-in before they use it, and that for the plug-in to work ChatGPT may need to send your conversation and other data to the plug-in.

Niko Felix, a spokesperson for OpenAI, says the company is working to improve ChatGPT against “exploits” that can lead to its system being abused. It currently reviews plug-ins before they are included in its store. In a blog post in June, the company said it has seen research showing how “untrusted data from a tool’s output can instruct the model to perform unintended actions.” And that it encourages developers to make people click confirmation buttons before actions with “real-world impact,” such as sending an email, are done by ChatGPT.

admin Juli 25, 2023 Juli 25, 2023
Share this Article
Facebook Twitter Email Copy Link Print
Leave a comment Leave a comment

Schreibe einen Kommentar Antworten abbrechen

Deine E-Mail-Adresse wird nicht veröffentlicht. Erforderliche Felder sind mit * markiert

Follow US

Find US on Social Medias
Facebook Like
Twitter Follow
Youtube Subscribe
Telegram Follow
newsletter featurednewsletter featured

Subscribe Newsletter

Subscribe to our newsletter to get our newest articles instantly!

[mc4wp_form]

Popular News

The EU Urges the US to Join the Fight to Regulate AI
Juli 16, 2023
How Can AI Help Film Makers?
September 6, 2022
Can An AI Be Smarter Than A Human
Januar 17, 2023
An AI Dreamed Up 380,000 New Materials. The Next Challenge Is Making Them
November 29, 2023

Quick Links

  • Home
  • AI News
  • My Bookmarks
  • Privacy Policy
  • Contact
Facebook Like
Twitter Follow

© All Rights Reserved.

Welcome Back!

Sign in to your account

Lost your password?