Regulating AI – the digital can of worms is open

As it did with data protection, the European Union has fired the first significant shot across the bows of digital technology regulation with its recently passed Artificial Intelligence Act (AI Act). And sure, as accountants, we're no strangers to regulatory complexity, but the EU law opens up a whole new can of worms for anyone doing business in the region.


Any business developing, deploying, or using AI systems within the EU is affected, even if it is based elsewhere. And before you stop reading, thinking that you don’t, or won’t, offer AI services to your EU clients, perhaps pause and think again.

AI has so many competitive advantages to offer accountants in customer-facing services and the back office that I don’t think any accountancy firm can afford to ignore it. AI could be introduced to customer-facing services such as financial reporting and insights, fraud detection, tax planning and optimisation, and personalised financial advice. Back-office applications of AI could include automated data entry, workflow optimisation and prioritisation, anomaly detection and employee training.


The new law, in brief

The AI Act, which came into effect on 1 August 2024 and will roll out over the next 24 to 36 months, is the world's most comprehensive attempt to regulate AI technology. Its stated aim is to protect people from harm while encouraging innovation. It would, of course, have been a fool’s errand for the EU to take a rules-based approach – any rules around AI would be out of date before they saw the light of day. Instead, the law takes a risk-based approach underpinned by a set of guiding principles.


A digital can of worms

When I first started brainstorming ideas for this article, there were as many questions as there were people in the room. They ranged from whether the law introduces loopholes that bad actors will be quick to exploit, to whether the GDPR and other laws already cover much of this – do we even need standalone AI laws? What about the catch-22 that AI needs heaps of data to be accurate, yet individuals also need protection for their data rights? And another catch-22: companies are told that AI is the key to unlocking competitiveness, innovation, productivity and improved service, yet if they fall foul of this law the fines could put them out of business.


Of course, much of this detail will come out in the wash over the coming months and years as the law rolls out. But it’s a complex matter, and likely to be a pivotal point in the evolution of a highly impactful technology, so it bears some consideration today.


GDPR playbook

The most relevant question for us as accountants offering AI services to clients in the EU is how this affects our businesses, today and going forward. The AI Act follows a similar playbook to the General Data Protection Regulation (GDPR): if you want to do business in the EU, you'll need to comply. This means any business developing, deploying, or using AI systems within the EU, even if it is based abroad (which, post Brexit, includes the UK), falls under the remit of this law.


The AI supply chain

Here’s where I get uneasy. I stand by my assertion that every company is, or will be, an AI-powered company. You are very likely using AI every day without realising it (think spam filters). Your people are definitely using AI to do their jobs, including tapping ChatGPT for research and using automatic meeting note-takers and transcribers to streamline their day.


But even though we use AI in our businesses, it is unlikely that we actually develop the AI ourselves or have much control over how it is built. Take customer service chatbots. Sure, the business has a role to play in setting up a chatbot, specifically supplying company-specific data and information, and testing the bot. However, the vendor does most of the heavy lifting, from providing the core AI tool and infrastructure to handling the natural language processing and integrating the chatbot into existing systems.


And even your chatbot vendor is very unlikely to have developed the artificial intelligence from scratch. At its core will be technology from one of the U.S. tech giants – Google, Microsoft, Amazon, Apple and Meta – surrounded by your vendor’s customisation and innovation.


Where does the AI buck stop now?

So surely, one might reasonably argue, the buck stops with the tech giants building the core artificial intelligence that powers everything built on top of it? Yet the wording of the law says that it is the service provider to the EU resident, i.e. your business or mine, that is responsible. In the event of a compliance issue, do we then counter-claim upstream? Do we audit our IT suppliers for their compliance? Or do we ringfence our European services and rip AI out of them? (This sounds dramatic but is already happening: because of GDPR data privacy concerns, Facebook has not made one of its AI models available in Europe.)


I don’t have the answers yet. But the AI revolution in accounting is not on the horizon – it's here. And it’s complicated. If nothing else, this should encourage you to put AI on your agenda today, not in five or ten years. By proactively addressing these regulatory challenges, we can position ourselves both to comply and to thrive in this new business landscape. The firms that adapt quickly to this new reality will be the ones leading our profession into the future.

