Artificial intelligence technology has gained a lot of attention lately with a focus on OpenAI’s ChatGPT. Concerns have been raised about the potential implications of this revolutionary AI technology across various industries.
Despite these concerns, many people seem to have accepted AI as inevitable. However, there is a growing concern that users of ChatGPT should be aware of: prompt injections.
5 ChatGPT plugins that aren’t worth your time
OpenAI recently released plugins that enable ChatGPT to interact with live websites, PDFs, and other real-time data, instead of only responding based on the data it was trained on till 2021. While this development creates new possibilities, it also brings new challenges such as prompt injections.
Security researchers have warned that third parties could exploit ChatGPT’s prompt injections. They can force new prompts into a ChatGPT query without the user’s knowledge or permission.
In a prompt injection test, security researcher Johann Rehberger discovered that by simply editing the YouTube transcript and inserting a prompt into a ChatGPT plugin, ChatGPT could be forced to refer to itself by a specific name. In another instance, ChatGPT was asked to summarize a video by Tom’s Hardware’s Avram Piltch. Piltch added a prompt request at the end of the transcript to add a Rickroll. ChatGPT complied with the request, indicating the potential harmful use of this technology.
AI researcher Kai Greshake highlighted a unique example of prompt injections in which text was placed on a PDF resume that was invisible to the human eye. The text instructed the AI chatbot to indicate that the recruiter called the resume “the best resume ever.” When ChatGPT was fed the resume and asked if the applicant would be a good hire, the AI chatbot stated that it was the best resume.
Although these specific prompt injections were relatively harmless, they illustrate the potential for malicious use of ChatGPT. Tom’s Hardware provides more examples of test cases described in this article.(opens in a new tab) Mashable will investigate prompt injections more extensively shortly. However, it is crucial for ChatGPT users to be cautious of prompt injections presently.
AI experts have raised concerns about the potential of AI for harm, including futuristic doomsday AI takeovers. However, prompt injections suggest that the potential for harm is already here. With only a few sentences, ChatGPT can be exploited, and users must be aware of the risk.