You are accountable for your AI’s decisions
When you grant agency to an AI agent to provide advice, and it subsequently misleads a customer, you can become liable for the outcome. In this piece Isaac Schultz explores some of the questions investors can ask to explore whether a business has appropriate AI guardrails in place.
Last week, a lawsuit was settled between Air Canada and one of their passengers. Air Canada’s chatbot had provided misleading information to a passenger, which the passenger then acted upon. The lawsuit resulted in a ruling in favour of the passenger, awarding them $812.02 [1]. Although the amount awarded in this case was small, the outcome sends an increasingly clear message: you are likely to be accountable for your AI’s decisions.
Although that last statement sounds definitive, it may not be true in all contexts; for example, if an output is “created” by an AI chatbot versus passed in as reference material. [2] Regardless, the ruling is a sign that regulators are catching up and that caution should be exercised when investing in a business with AI capabilities. Investors should consider the following:
Australian Consumer Law provides consumers with strong protections under circumstances when advice influences consumer decisions on purchasing goods or services [3].
Certain industries are held to higher standards of accuracy - meaning health, legal, and financial advice should be approached with increased caution.
Although it seems as though they might be, disclaimers are neither comprehensive nor bulletproof solutions. While clear Terms of Service and disclaimers can mitigate some liability, disclaimers in Australian law must be reasonable, and businesses can’t rely on small print and disclaimers as an excuse for a misleading overall message [4].
Operating across different countries and jurisdictions will introduce additional complexity for large multinationals.
If a business you are considering investing in operates within a heavily regulated industry or an industry with strong consumer protections, you will want to ensure that adequate scenario testing and appropriate controls are in place (so that you don’t accidentally sell a customer a car for a dollar [5]).
Starting with the following questions will make AI risks and opportunities clearer when considering investing in a business:
Does the business have AI chatbots? Are they internally facing, externally facing, or both? What are they used for, and how much traffic do they get?
Does the business have a baseline level of education when it comes to engaging with AI (e.g., knowledge of the ASD’s “Engaging with AI” guidelines [6])?
Does the business follow any AI risk management frameworks (e.g., the NIST AI RMF [7]), and how are these frameworks managed and checked against?
How has the business consulted with internal stakeholders and business units during implementation? Working with the right people from the start can bring in domain knowledge, prevent simple errors, and ultimately mitigate process risk.
Contact us today to learn more about how to manage the risks associated with complex and emerging technologies in your portfolio businesses.
Interested in reading more insights from the team? Here's some of the latest.