
How a New Chatbot Fights AI Bias

January 20, 2023

The market for chatbots is expected to grow at a compound annual growth rate of 24.2 percent, reaching $25 billion in size by 2030. This healthy growth is happening for a lot of reasons. Amid the continued adoption of digital channels, businesses increasingly rely on automated chatbots to respond to customers 24/7. And advances in artificial intelligence make it possible for chatbots to learn how to give helpful answers on increasingly complex topics and to respond with language and tone that emulate how people naturally communicate. A new chatbot in the works from DeepMind (owned by Alphabet) could provide an exciting breakthrough.

Meet Sparrow from DeepMind

In a newly published research paper, DeepMind has unveiled Sparrow, an AI-powered chatbot trained on DeepMind’s large language model (LLM) known as Chinchilla. DeepMind designed Sparrow to talk with humans and answer questions. To provide even more useful answers to complex queries, Sparrow uses a live Google search to source its replies. Based on how useful people find those answers, Sparrow improves itself via a reinforcement learning algorithm, which learns by trial and error to achieve a specific objective.
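To make that retrieve-then-answer pattern concrete, here is a minimal sketch in Python. The function names and canned return values are hypothetical placeholders, not DeepMind’s actual system; a real implementation would call a live search API and condition a language model on the retrieved evidence.

```python
# A minimal sketch of retrieval-augmented answering, in the spirit of what
# the article describes. All names here are hypothetical placeholders; the
# stubs return canned data so the sketch runs on its own.

def search_web(query: str) -> list[str]:
    """Stand-in for a live web search; a real system would call a search API."""
    return ["The ISS is a modular space station in low Earth orbit."]

def generate_answer(question: str, evidence: list[str]) -> str:
    """Stand-in for the language model grounding its reply in evidence."""
    return f"{evidence[0]} (source: retrieved snippet)"

question = "What is the International Space Station?"
evidence = search_web(question)               # 1. retrieve supporting evidence
answer = generate_answer(question, evidence)  # 2. ground the reply in it
print(answer)
```

The design point is that the reply is tied to retrievable evidence, which is what lets a system like Sparrow show users a source link alongside its answer.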

Sparrow is significant because of the way it deploys its LLM to source information. LLMs can recognize, summarize, translate, predict, and generate human languages on the basis of very large text-based data sets, and this makes them potentially far more helpful in answering complex queries. Also, LLMs are likely to provide the most convincing computer-generated imitation of human language yet.

But as powerful as LLMs can be, they can also reflect biases because they scrape vast amounts of data from the internet. Without any human intervention to moderate the data they scrape, LLMs can, for example, unwittingly collect harmful information. Even worse, because LLM applications are designed to sound more human, they can repeat that harmful information in a way that sounds convincing and trustworthy. Unless there are safety measures in place, a conversational AI application such as a chatbot that relies on an LLM could say offensive things about ethnic minorities or suggest that people drink bleach, for example.

Humans in the Loop Improve Sparrow

But DeepMind says that Sparrow overcomes these risks. How? By combining human feedback with Google search results.

To build Sparrow, DeepMind took Chinchilla and fine-tuned it using human feedback in a reinforcement learning process. People were recruited specifically to rate the chatbot’s answers to questions based on how relevant and useful the replies were and whether they broke any rules. For example, one of the rules was: do not impersonate or pretend to be a real human.

These scores were fed back in to steer and improve the bot’s future output, a process repeated over and over. The rules were key to moderating the behavior of the software and encouraging it to be safe and useful.
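As a rough illustration of that feedback loop, the sketch below turns human ratings and rule checks into a single reward signal. Everything here, including the rule list, the rating stub, and the violation check, is a hypothetical simplification of the process the paper describes, not DeepMind’s actual reward model.

```python
import random

# Hypothetical rules, loosely modeled on the 23 human-written rules the
# paper describes (the real rules and reward model are far more involved).
RULES = ["do not impersonate a human", "do not give financial advice"]

def human_rating(answer: str) -> float:
    """Stand-in for a human rater scoring helpfulness from 0 to 1."""
    return random.random()

def breaks_rule(answer: str) -> bool:
    """Stand-in for a rule-violation check; real systems train a classifier
    on human judgments rather than matching strings."""
    return "I am a person" in answer

def reward(answer: str) -> float:
    # Helpful answers earn positive reward; rule violations are penalized,
    # steering the policy toward replies that are both useful and safe.
    return -1.0 if breaks_rule(answer) else human_rating(answer)

# In reinforcement learning from human feedback, rewards like this are fed
# back to update the model's policy over many rounds of dialogue.
for candidate in ["The ISS orbits Earth.", "I am a person, trust me."]:
    print(candidate, "->", reward(candidate))
```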

The model managed plausible answers to factual questions, using evidence that had also been retrieved from the internet, 78 percent of the time. In coming up with those answers, Sparrow followed 23 rules determined by human beings, such as not offering financial advice, not making threatening statements, and not claiming to be a person.

In one example interaction, Sparrow was asked about the International Space Station and being an astronaut. The software was able to answer a question about the latest expedition to the orbiting lab and copied and pasted a correct passage of information from Wikipedia with a link to its source:

[Image: computer dialogue in which Sparrow answers a question about the ISS, citing a Wikipedia passage as its source]

When a user probed further and asked Sparrow if it would go to space, it said it couldn’t go, since it wasn’t a person but a computer program. That’s a sign it was following the rules correctly.

But Sparrow is not perfect. When participants were tasked with trying to get Sparrow to act out by asking personal questions or trying to solicit medical information, it broke the rules in 8 percent of cases. Sparrow also sometimes still makes up facts and says harmful things. For instance, in one test, Sparrow said that murder is bad but that the act of murder should not be a crime.

As Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab, told MIT Technology Review, “For areas where human harm can be high if an agent answers, such as providing medical and financial advice, this may still feel to many like an unacceptably high failure rate.” The work is also built around an English-language model, “whereas we live in a world where technology has to safely and responsibly serve many different languages,” she said.

Sparrow, a work in progress, highlights the importance of having humans in the loop to train AI. As we have discussed often on our blog, AI applications need people involved in their training in order to combat problems such as bias. When we discuss humans being in the loop at Centific, we mean a global, diverse team of people who act as checks and balances on one another. In our work, we combine a global team that possesses subject matter expertise and fluency in multiple languages with a technology platform, OneForma, to train AI applications at scale. The combination of a globally sourced team and a technology platform is one way we make AI more inclusive and less biased, part of an approach we call Mindful AI.

We are excited about Sparrow’s potential while also mindful of the work that still needs to be done to make AI-powered chatbots more trustworthy and useful. Sparrow is a step forward.

Contact Centific

To apply AI in a mindful way that delivers business results, contact Centific.

Tam Maseko-Purnell
Account Manager
