
AI Spills? You Clean It Up!

Love it or hate it, AI is here to stay. Businesses need to understand that, for all AI’s impressive ability and apparent intelligence, responsibility for its mistakes stays with humans. From a legal, regulatory and, I’d argue, moral point of view, if AI makes a mess, humans have to clean it up.


Recently, a Canadian court ruled that “AI made a mistake” is not a defence. As a result, an airline had to honour a refund that its AI-powered chatbot erroneously promised a customer. This highlights an important issue with generative AI: it creates new content, and when that content is wrong, the business that deployed it still owns the consequences.


AI’s output is only as good as the information that goes in, and the internet, on which AI tools are trained, is riddled with errors, ambiguities, bias, outright lies and out-of-date information. We all witnessed the rise of fake news over the last decade. Yet it is extraordinary how readily we believe the output of AI tools such as ChatGPT.


Legal cases have been thrown out of court because lawyers based their arguments on fabricated case law that generative AI presented as the truth. In New York in June 2023, a judge fined lawyers for citing fictitious cases in a legal brief. Then in July, lawyers arguing a case in Johannesburg were found to have cited fictitious, AI-generated case references to support their client’s position.


But ignoring AI is not an option. That would be the equivalent of sticking with snail mail when the rest of the world moved to email. Generative AI is already part of our daily lives, helping us keep pace with change and deliver excellent service to our clients. As leaders, we should be encouraging the use of AI in our businesses and reassuring our people that AI is an enabler, not a threat to their jobs. As accountants, we should tap AI for shorter budget cycles and quicker decision-making, both obvious competitive advantages.


So yes, use AI to automate work that is boring and easily tackled by machines. But spend some of the time you gain interrogating the output. AI doesn’t know or care if it’s referencing out-of-date tax law, for instance, but your client and the law do. And when AI makes a mess, they’re going to hand you the mop to clean up the spill.


The takeaway: Dial up the scepticism on your BS radar when using AI. Even the tools themselves acknowledge that they are flawed, have no common sense and should be fact-checked. Use your nous, your expertise and trusted sources to verify any critical information an over-eager-to-please AI tool has presented as fact.

 

Androids dreaming of electric sheep

The fictions that AI produces are called hallucinations. They happen because generative AI learns patterns from its training data and bases its answers on the statistical likelihood of certain words appearing in a certain order, not on any true understanding of the subject. It can’t recognise bad data, and it will confidently present errors as fact in replies that sound authentic and authoritative.
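

For the technically curious, here is a minimal sketch of that idea in Python. It is a hypothetical toy, nothing like the scale or architecture of a real chatbot: a “bigram” model that chooses each next word purely by how often it followed the previous word in its training text. The training sentences and every name in the code are invented for illustration.

import random
from collections import defaultdict

# Toy training text containing two contradictory "rulings".
training_text = (
    "the court ruled in favour of the airline . "
    "the court ruled in favour of the customer . "
    "the airline must refund the customer ."
)

# Count how often each word follows each other word.
follows = defaultdict(list)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

# Generate a sentence by repeatedly sampling a statistically
# likely next word, given only the previous word.
word = "the"
output = [word]
while word != "." and len(output) < 12:
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))
# Possible output: "the airline must refund the airline ."
# Fluent, confident and wrong: the model strings likely words
# together and has no way to check whether the sentence is true.

Real generative AI is vastly more sophisticated, but the principle holds: these systems are trained to predict plausible text, not to verify it.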


 
