Videoconferencing tools and other communication technologies got many of us through the pandemic and led to a boom in remote work that has since changed office and urban dynamics. At the same time, machine learning and generative AI are turning things upside down with their rapid development. New uses of these tools appear nearly every day, raising the question of whether AI technology oversight is needed. Understanding this, it's not too surprising that a company's chatbot got it into a bit of trouble. In a case that had to be resolved in court, Air Canada was found liable for misinformation that its chatbot provided. And it's an outcome other companies might want to take notice of.
Chatbots are customer assistance tools that many companies now use to perform mundane, repetitive tasks, such as resetting a site password or handling warranty claims. At the same time, many customers prefer using chatbots over live customer service representatives. A chatbot can often be faster and more efficient when the issue being addressed is straightforward. But what companies may not realize is that they hold some degree of chatbot liability when these tools go awry. Even though chatbots interact with customers in a manner similar to live representatives, they're not autonomous legal actors, and responsibility for what they say falls back on the company. Of course, some companies may wish this weren't the case, with Air Canada providing a prime example.
Bereavement and Chatbot Liability
The case involving Air Canada and chatbot liability isn't a pleasant one. The story begins with a grandson looking for discounted airfare from Vancouver to Toronto to attend his grandmother's funeral. Choosing to book his airline ticket with the help of a chatbot, he inquired about reduced rates. Many airlines offer such reductions under bereavement policies, and in fact, so does Air Canada. According to the chatbot, he could apply for the bereavement discount after travel by filling out a Ticket Refund Application Form, as long as he did so within 90 days of travel. But when the grandson went to seek his refund, he was informed that bereavement discounts do not apply to completed travel. Clearly, there was a miscommunication.
In speaking with an Air Canada representative after his travel, he was told the chatbot's statements were misleading. The representative reassured him that the company would look into it. But no discount was forthcoming, and this led him to file suit against the airline. Claiming a lack of AI technology oversight, he believed Air Canada to be at fault. As it turns out, so did the Canadian judge who oversaw the case. Agreeing that adequate safeguards and AI technology oversight were lacking, the judge ruled against Air Canada. The company was ordered to pay the overcharge on the flight ticket along with interest and court costs. This is one of the few cases thus far involving chatbot liability for companies using such customer assistance tools.
Exploring Chatbot Liability
Given how pervasive chatbots are today, it's somewhat surprising that more cases of chatbot liability haven't appeared. In reality, the number of such cases remains rather limited. Key risks associated with these customer assistance tools relate to data privacy protections. Chatbots handling consumer information demand more advanced AI technology oversight, since breaches in this area can expose a company to greater risk. At the same time, cases of fraud in which chatbots have been "tricked" have also been reported. In these instances, someone may pose as another customer or attempt to return an item that doesn't belong to them; with poor AI technology oversight, a fraudulent return might be fully processed. These issues, along with providing customers with false information, are the main risks with these technology assistants.
Interestingly, Air Canada didn't see AI technology oversight as the key issue in this case. It claimed that chatbot liability didn't exist because the chatbot was its own legal entity. In addition, the company argued that it shouldn't be held liable for anything its agents, representatives, or even chatbots state. Of course, this didn't fly in court, but it raises interesting questions moving forward. What if generative AI becomes increasingly autonomous in its activities? Companies are already implementing generative AI technologies in their existing chatbots. And though the current case went in favor of the customer, that may not always be the outcome in the future. Could generative AI be liable for its own actions? And if so, what assets would an AI have to compensate those wronged? These are just the latest legal issues surrounding the use of AI.
A Very Fuzzy Landscape
Generative AI and chatbots are advancing so quickly that many legal issues surrounding their use remain unsettled. For example, artists whose art and images are used to train AI systems are generally not believed to have any ownership claims over the results. As a result, artists are fighting back with their own software. The New York Times has also filed a lawsuit against OpenAI for using its published content to train AI, and the outcome of that case is highly anticipated as a guide for AI legal debates. It now appears that chatbot liability will be added to the mix as generative AI plays a larger role. For the time being, companies bear the burden of accurately training and overseeing these adjunctive technologies.
Understanding this, it's important to recognize that these new technologies provide powerful opportunities. The range of activities that AI, chatbots, and related tools might offer in the coming months and years is expansive. In fact, personal AI assistants are right around the corner, ready to help with a variety of daily tasks. But in the end, these remain technology tools that require AI technology oversight and responsible use. Air Canada has to ensure the information it provides, whether via its website, a representative, or a chatbot, is accurate. And if it fails in this regard, chatbot liability risks will remain.