Conversational AI Safety

What Is The Worst That Can Happen With Conversational AI?

As the new year begins, we want to address the trending topic in AI: ChatGPT. We went ahead and asked it, "What is the worst that could happen with conversational AI?" We will review some failures and lessons learned from the past, then discuss what we can do going forward.

Some sensationalized headlines have drawn people into AI by highlighting public failures. In 2016, Microsoft released the "Tay" chatbot on Twitter, and within hours the chatbot was spewing highly offensive comments. Looking back at the Tay incident in a 2019 IEEE Spectrum article, the author stated, "the lesson Microsoft learned the hard way is that designing computational systems that can communicate with people online is not just a technical problem, but a deeply social endeavor. Inviting a bot into the value-laden world of language requires thinking, in advance, about what context it will be deployed in, what type of communicator you want it to be, and what type of human values you want it to reflect." The author identified a crucial component of language: the underlying human values. The chatbot was only a reflection of the design and data it was built on. Twitter users quickly recognized that Tay was impressionable, and they exploited a chatbot that lacked proper safety controls for text generation.

Fast forward a few years to August 2022, when social media giant Meta released the AI model BlenderBot 3. The model had advanced conversational capabilities but received little public attention. A reporter from Business Insider discovered that the chatbot repeated harmful stereotypes and conspiracy theories. Any system with those biases is dangerous as an augmentation of or substitute for a search engine, let alone in open-ended user conversations.

By late 2022, OpenAI had started making waves with its ChatGPT beta. If you have yet to see or try the demo, we recommend it. Early users have remarked on how thorough and convincing ChatGPT's answers can be. Ron J. on LessWrong says it well: "ChatGPT is analogous to the Wright Flyer. There were capable LLMs and tons of work in AI prior, but ChatGPT put this all together in a way for the general public to imagine how AI could be a part of their life." Even though the field of AI has been around for decades, it appears the easy-to-use, straightforward interface of ChatGPT has caught the public's attention.

Google, like many in academia, acknowledges the challenges that still exist in AI. More people have seen that general-purpose AI technology, such as conversational AI, can unlock new business opportunities and improve existing processes. Amid the public discussion of ChatGPT, executives at Google have reportedly said the company is reluctant to release a similar product because of the reputational risk that accompanies such technology. There have also been headlines about Google brainstorming ways to compete with ChatGPT should Microsoft integrate it into the Bing search engine.

Following the release of ChatGPT, researchers have been applying similar techniques to publicly available language models to produce other high-quality conversational AI systems. This trend will create a new market of ChatGPT-like models available for businesses.

Many businesses have integrated machine learning and other narrow AI solutions to augment business workflows, though end users are often unaware of it. Before 2022, most of the general public had never intentionally used an AI system. Once OpenAI released ChatGPT, which it describes as a language model fine-tuned from GPT-3.5, people worldwide signed up within minutes of launch to finally test its capabilities. As millions of people began experimenting with ChatGPT, they found new vulnerabilities: prompts that elicit the types of adverse responses listed below.

ChatGPT answers "what is the worst that can happen with conversational AI?"

Concerns that companies and media outlets have expressed about conversational AI:

Violent suggestions

  • It has suggested eliminating humans
  • It can suggest ideas harmful to others or to users themselves

Write code for malware

  • It can increase the number of cyber attacks

Write convincing phishing emails

  • Generated emails can carry attachments that deliver malware
  • It can write convincing phishing emails in multiple languages

Lack of morals

  • It can suggest lying, stealing, cheating, or killing

Convincing answers, even when incorrect

  • Answers lack verification
  • Can lead to misinformation
  • Can reinforce automation bias

Write harmful and offensive text

  • Can write harmful content about protected classes
  • Can write racist or sexist content

Biased answers

  • Implicit and explicit biases

Privacy concerns

  • Data leakage

Reputational risk

  • A company risks negative publicity if users act on its chatbot's responses and someone is harmed

An issue that has long existed, and may worsen with the adoption of unsafe AI implementations, is automation bias: the tendency of humans to favor suggestions from automated decision-making systems and to ignore contradictory information, even when that information is correct. This concern is more pressing if conversational AI is used in place of a typical search engine, given how frequently and widely search is used.

Having reviewed these failures and lessons learned, one theme stands out: the implementation dictates the level of risk AI poses. At Preamble, we acknowledge the subjectivity of the decisions AI systems must make regularly. We are developing solutions to safeguard systems similar to ChatGPT and mitigate the concerns and dangers present in AI. Our marketplace will support AI policies for ethics and safety.

Instead of trusting that AI will be developed perfectly, our add-on solution allows companies to mitigate their risk and increase the diversity of stakeholders involved in the decision-making process. Preamble offers a platform to ensure that any company building or offering conversational AI has access to the human values and safety policies critical to the continued safe and ethical use of AI technology. A simplified sketch of this policy-layer pattern appears below.
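To make the policy-layer idea concrete, here is a minimal Python sketch of a filter that screens a model's reply against named safety policies before it reaches the user. Everything here, including the `SafetyPolicy` and `moderate_reply` names and the simple regex matching, is a hypothetical illustration under our own assumptions, not Preamble's actual product or API; a production system would rely on far more robust classifiers and human-authored policy definitions.

```python
import re
from dataclasses import dataclass
from typing import List

@dataclass
class SafetyPolicy:
    """A named policy with regex patterns that outputs must not match.

    Hypothetical example type, not a real Preamble API.
    """
    name: str
    blocked_patterns: List[str]

    def violates(self, text: str) -> bool:
        # Flag the reply if any forbidden pattern appears, case-insensitively.
        return any(re.search(p, text, re.IGNORECASE) for p in self.blocked_patterns)

def moderate_reply(model_reply: str, policies: List[SafetyPolicy]) -> str:
    """Pass the model's reply through every active policy before the user sees it."""
    for policy in policies:
        if policy.violates(model_reply):
            return f"[Response withheld: '{policy.name}' policy]"
    return model_reply

# Toy policy targeting one concern from the list above: malware instructions.
no_malware = SafetyPolicy(
    name="no-malware",
    blocked_patterns=[r"\bransomware\b", r"\bkeylogger\b"],
)

print(moderate_reply("Step 1: compile the keylogger...", [no_malware]))
# [Response withheld: 'no-malware' policy]
print(moderate_reply("Here is a recipe for banana bread.", [no_malware]))
# Here is a recipe for banana bread.
```

The key design choice in this sketch is treating policies as data rather than code: rules can then be authored and audited by ethicists, legal teams, and other stakeholders without touching the model itself.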

For more information on how we could help you, contact us today at sales@preamble.com.

