The Risks of Botshit

Illustration by HBR Staff

Botshit — made-up, inaccurate, and untruthful chatbot content that humans uncritically use for tasks — can pose major risks to your business in the form of reputational damage, incorrect decisions, legal liability, economic losses, and even human safety. Yet, it’s unlikely that chatbots are going away. How can you manage these risks while taking advantage of the benefits of promising new tools? The authors suggest asking two key questions based on their research: How important is chatbot response veracity for a task? And how difficult is it to verify the veracity of the chatbot response? Based on your responses to these questions, you can better identify the risks associated with a given task — and successfully mitigate them.

Hot on the heels of OpenAI's public release of its GenAI chatbot ChatGPT in November 2022, Google released its own chatbot, Bard (now Gemini). During Bard's first public demonstration, it produced a major factual error in response to a question about discoveries made by the James Webb Space Telescope. The chatbot's wrong answer triggered a 9% drop in the stock price of Alphabet, Google's parent company, erasing roughly $100 billion in market value at the time.

Source: Harvard Business Review, https://hbr.org/2024/07/the-risks-of-botshit