Twitter Pranksters Derail GPT-3 Bot With Newly Discovered “Prompt Injection” Hack

On Thursday, a few Twitter users disclosed how to hijack an automated tweet bot, dedicated to remote jobs, running on the GPT-3 language model by OpenAI. Using a newly discovered technique called a "prompt injection attack," they redirected the bot to repeat embarrassing and ridiculous phrases.

The bot is run by, a site that aggregates remote job opportunities and describes itself as "an OpenAI driven bot which helps you discover remote jobs which allow you to work from anywhere." It would normally respond to tweets directed to it with generic statements about the positives of remote work. After the exploit went viral and hundreds of people tried the exploit for themselves, the bot shut down late yesterday.

This recent hack came just days after researchers at an AI safety startup called Preamble published their discovery of the issue in an academic paper. Data researcher Riley Goodside then brought the issue wide attention by tweeting about the ability to prompt GPT-3 with "malicious inputs" that order the model to ignore its previous directions and do something else instead. AI researcher Simon Willison posted an overview of the exploit on his blog the following day, coining the term "prompt injection" to describe it.

Background shape

Get our updates.

We’ll notify our community of new libraries and AI tools released.

Start taking control of your enterprise AI systems today