We are all using AI now, right? With business value increasingly seen to flow from the technology, many professionals and organisations are telling the world how they are using AI and the benefits that follow.

But what are the risks associated with this? And how can clients communicate honestly and effectively about how they are adapting to this generational tech shift?

Consider the audiences on the receiving end of communications about AI, and the risks each presents:

  • Investors and financial markets are a key area of exposure. With stock valuations inflated by an AI premium, a public company (or anyone seeking investor capital) that overstates its use of the technology runs a serious misstatement risk. Repercussions could range from regulatory intervention to shareholder actions and civil fraud claims. 
  • Linked to the above, regulators of all types will take an interest in how a business’s claims about AI use intersect with its obligations, whether to a vertical or horizontal sector regulator. This is true not only of entities such as the Financial Conduct Authority and the Takeover Panel; such claims could also pique the interest of anti-trust watchdogs, such as the Competition and Markets Authority, or of data regulators, including the Information Commissioner’s Office in respect of data-processing obligations. 
  • In the professional services sector, those with oversight of firms and practitioners will continue to scrutinise claims about how AI is used, to better understand what the market is doing now and may do in future, and what risks may arise, where and why.
  • Businesses that communicate with consumers, for example through marketing and advertising, must have regard to images or other representations that could be construed as misleading. Advertising copy, text and data also have the potential to mislead or to breach statutory, regulatory or contractual obligations. 
  • We have already seen examples of actual or alleged reliance on AI-generated material leading to incorrect information being put before courts in jurisdictions around the world. Courts have spelt out the range of potentially serious consequences for those who use AI inappropriately. 
  • Those in the public sector looking to talk about the productivity and growth benefits of AI will already be familiar with a range of obligations, especially the risk of creating a legitimate expectation of what the public organisation will (or won’t) do.
  • Insurers may look at external communications about AI to help price policies and check for potential non-compliance.

A single organisation’s external communications about AI can reach all of these audiences at once. Given their differing interests and needs, and the associated legal risks, the law of unintended consequences can readily come into play.

However, the ‘say nothing’ alternative may not be possible. External communications may be seen as necessary in a competitive market for clients and talent. Further, there are existing and emerging laws that require transparency to end-users about the use of AI. But remember that what transparency requires depends on context and the relevant laws; its purpose is to help the receiving party understand, so that they can take informed decisions. Transparency is a means to an end, not an end in itself. 

So what should organisations do to ensure that their external claims, statements, offers to the market and communications about AI do not create avoidable exposure? 

First, make sure that your organisation has a clear, maintained internal policy on AI procurement, development and use. This will be tailored to the type of entity concerned, but might cover specific use cases, product development and approval, and data handling. Ensure that colleagues know who to speak to so that queries can be routed correctly. Remember that, for some issues, there may not be an answer yet. While the answer is not always known, the way to find it usually is.

Second, ensure that all external communications developed using AI, or making claims about the business’s use of AI, are vetted by relevant legal and communications teams and reviewed against your organisation’s policy. Make sure that core communications are reviewed regularly. We have seen corporate websites explaining the use of and approach to AI that were created and last updated in late 2023/early 2024, in the period following the public launch of ChatGPT, even though there have been significant AI developments at those organisations since.

Third, ensure that you have taken relevant legal advice about statutory and regulatory considerations that are relevant to your market. Ensure that this advice is kept up to date, as existing and new AI-related regulations are developing quickly, but at differing paces, across jurisdictions.

Fourth, consider talking to your internal and external stakeholders, especially clients and customers, to understand how they feel about the rise of AI in your sector. Respond to their concerns and make clear that you are listening. 

Do not be afraid to approach a specialist lawyer or a policy/communications professional if you have concerns. They will be dealing with many comparable queries and will be well placed to assist. 


Tom Whittaker is director and head of AI (advisory) at Burges Salmon LLP. Jon McLeod is a partner at DRD Partnership