20/02/2024
With the release of ChatGPT and the widespread adoption of large language models (LLMs), companies have been rushing to build this new technology into their products and services as quickly as possible. However, as everyone knows by now, these models are prone to making mistakes. At scale, even a small error rate adds up: a bot that answers 99% of questions correctly still gives ten thousand wrong answers across a million conversations. And when features are rushed into production, the problems only get worse.
In the news last week, a man took Air Canada to court over what its AI chatbot had told him. The chatbot had given him incorrect information about the company's bereavement refund policy, and Air Canada refused to honour what the bot had offered.
We have seen this many times before. Last year, at the start of the AI hype train, Microsoft added AI chat features to Bing search and released them to the world. As you might expect, it didn't take long for users to break the bot by, among other unsettling things, getting it to admit to "wanting to destroy everything".
In December last year, a car dealership wired ChatGPT into its customer chatbot without adding proper protections. The result was a wave of funny screenshots circulating the internet, including one of the bot agreeing to sell a car for $1. There are many more examples like this, but until now the worst outcome had only ever been bad press for the companies involved.
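It is worth making "proper protections" concrete. One basic layer is validating the model's output against hard business rules before it ever reaches the customer, so a jailbroken reply can't become a binding offer. Below is a minimal sketch in Python; the price floor, regex, and fallback message are all hypothetical, and a real deployment would combine output checks like this with constrained system prompts and human escalation.

```python
import re

# Illustrative business rule: the bot must never commit to a price below
# the dealership's configured floor. Names and thresholds are hypothetical.
MINIMUM_PRICE = 5_000

# Matches dollar amounts such as "$1", "$25,000" or "$19,999.99".
PRICE_PATTERN = re.compile(r"\$\s*([\d,]+(?:\.\d{2})?)")

FALLBACK_REPLY = (
    "I'm not able to confirm pricing here. "
    "Please speak to a member of our sales team."
)


def guard_reply(bot_reply: str) -> str:
    """Pass the model's reply through only if it clears basic output
    checks; otherwise substitute a safe canned response."""
    for match in PRICE_PATTERN.finditer(bot_reply):
        price = float(match.group(1).replace(",", ""))
        if price < MINIMUM_PRICE:
            # The model has "agreed" to an implausible price; never
            # show this to the customer.
            return FALLBACK_REPLY
    return bot_reply


# The infamous failure mode, caught before it reaches the user:
print(guard_reply("Deal! That's a legally binding offer: it's yours for $1."))
# Normal replies pass through unchanged:
print(guard_reply("Our financing options start at around $25,000."))
```

The design point is that prompt instructions alone are not a control: users can and will talk the model out of them, so anything contractual or financial needs a deterministic check that sits outside the model.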
What makes last week's chatbot story different is that it went to court, with Air Canada claiming "it cannot be held liable for information provided by one of its agents, servants, or representatives", which is a strong statement to make. The tribunal member presiding over the case, however, ruled against Air Canada, awarding a partial refund and legal fees to the claimant, stating:
“It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”
Our Take
Even though this is only a single case before a small claims tribunal, it sets an interesting precedent (note: I am not a lawyer) that companies cannot shirk responsibility for the AI products they are putting out there.
This may also feed into a larger theme for the year to come. 2023 was the wild west of exploring what could be done with these new tools, without much care for safety or controls (or at least not as much as there should have been). With AI regulations beginning to take shape (see my previous blog on the EU's AI Act) and several high-profile AI-related copyright cases on the horizon, 2024 may be shaping up to be the year of reining in AI.