5 min read

The FCA hasn't told us what to do about AI customer communications. Here's what I think the rules should be.

Regulation · AI · FCA · Consumer Duty · Financial Services

Right now, somewhere in a regulated financial services firm, an AI is drafting a response to a customer complaint. A member of staff will glance at it, click approve, and that response will go out. Nobody outside that firm knows it happened, and nobody has told that firm what rules apply.

Firms are already using AI to draft, personalise, and respond to customer communications across collections, complaints, and servicing. The tools are cheap, fast, and convincing. The FCA has broad principles - Consumer Duty, CONC, TCF - that set out what firms owe their customers. Specific guidance on AI-generated content does not exist. That gap is being filled by individual firms making individual decisions, which is how you get inconsistency across the industry and, eventually, harm to customers who had no idea the communication they received was machine-generated.

What the FCA has actually said

PS24/17, the FCA's feedback statement on AI published in late 2024, acknowledged that AI is significant and that the regulator is paying attention. It also deferred the hard questions. The FCA's position is that existing rules apply regardless of the technology used - technically correct, practically insufficient.

Consumer Duty requires firms to deliver good outcomes across four areas: products and services, price and value, consumer understanding, and consumer support. Those obligations apply whether a human or an AI wrote the letter. The FCA is right about that. The problem is that "existing rules apply" does not tell a compliance team how to assess whether an AI-assisted communication meets the consumer understanding outcome, what oversight controls are expected, or how to document the process for a supervision visit.

The principles-based approach has real advantages. It is flexible, it does not lock firms into specific technologies, and it creates space for judgment. Principles without specificity also create a compliance lottery where firms that invest in doing this properly are held to the same standard as firms that treat AI as an IT efficiency tool and ignore the regulatory implications entirely.

Where the real risk sits

The public conversation about AI risk in financial services focuses on chatbots giving wrong answers or automated systems making unfair credit decisions. Those risks are real. The deeper risk is somewhere less visible.

It is the grey zone of AI-assisted communications: the complaint acknowledgement that was 90% AI-generated, reviewed by a handler in 30 seconds, sent with wording that is technically not wrong but that a careful human reviewer would have changed. The arrears letter that uses a tone the firm's vulnerable customer policy would not endorse, but which nobody caught because the review process assumes AI output is good enough by default.

The primary risk in regulated communications is AI lowering the quality of human attention. When people know the AI is drafting, they review less carefully. That is a human failure, and the regulatory framework needs to account for it.

What the rules should actually say

I think the FCA should issue specific guidance covering four areas.

Disclosure. Customers should be able to know when a communication was materially assisted by AI. A disclaimer on every email is unnecessary. A clear, documented firm policy and a straightforward way for customers to access that information on request - consistent with how firms already disclose automated decision-making under GDPR Article 22 - is the right standard. The principle is the same: where AI has played a material role in something affecting a customer, that information should be accessible.

Defined categories requiring substantive human review. Not sign-off as a formality, but genuine review by someone with the knowledge to assess what they are approving. At minimum this should cover:

  • Complaint responses under DISP
  • Arrears and collections communications
  • Any communication that could constitute advice
  • Any communication where a vulnerable customer flag is present

These are the categories where poor communications cause the most harm. They are also the categories where AI-assisted drafting is already in use.
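To make the routing concrete, here is a minimal sketch of how a firm might gate AI-assisted drafts in those four categories into a substantive-review queue. All class names, fields, and flags here are hypothetical illustrations, not a reference implementation; a real system would derive these signals from case-management data rather than free-form booleans.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReviewTier(Enum):
    STANDARD = auto()
    SUBSTANTIVE_HUMAN_REVIEW = auto()

@dataclass
class DraftCommunication:
    # Illustrative flags only; in practice these would come from
    # the case-management system, not manual input.
    is_complaint_response: bool = False      # complaint responses under DISP
    is_arrears_or_collections: bool = False  # arrears / collections letters
    could_constitute_advice: bool = False    # anything that might be advice
    vulnerable_customer_flag: bool = False   # vulnerability marker on the account
    ai_assisted: bool = False

def required_review_tier(draft: DraftCommunication) -> ReviewTier:
    """Route AI-assisted drafts in the four high-harm categories to
    substantive human review rather than formality sign-off."""
    high_harm = (
        draft.is_complaint_response
        or draft.is_arrears_or_collections
        or draft.could_constitute_advice
        or draft.vulnerable_customer_flag
    )
    if draft.ai_assisted and high_harm:
        return ReviewTier.SUBSTANTIVE_HUMAN_REVIEW
    return ReviewTier.STANDARD
```

The point of encoding it this way is that the review requirement becomes a testable system property rather than a line in a policy document.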

Audit trail. Firms should be able to demonstrate, on request, which communications were AI-assisted, what the human oversight process was, and who approved what. If a firm can tell a supervisor exactly how a credit decision was made and by what model, it should be able to do the same for a regulated customer communication.
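A minimal sketch of what such an audit record might capture, assuming the firm hashes both the AI draft and the final sent text so a supervisor can see whether the reviewer actually changed anything. Every name and field below is a hypothetical illustration of the kind of evidence described above, not a prescribed schema.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class CommunicationAuditRecord:
    communication_id: str
    customer_ref: str
    channel: str                      # e.g. "letter", "email"
    ai_assisted: bool
    model_identifier: Optional[str]   # which model drafted it, if any
    draft_sha256: str                 # hash of the draft shown to the reviewer
    final_sha256: str                 # hash of what was actually sent
    reviewer_id: Optional[str]
    review_seconds: Optional[float]   # how long the reviewer actually spent
    approved_at_utc: str

def make_record(comm_id: str, customer_ref: str, channel: str,
                draft_text: str, final_text: str,
                model_id: Optional[str], reviewer_id: Optional[str],
                review_seconds: Optional[float]) -> CommunicationAuditRecord:
    """Build an immutable audit record for a single outbound communication."""
    sha = lambda t: hashlib.sha256(t.encode("utf-8")).hexdigest()
    return CommunicationAuditRecord(
        communication_id=comm_id,
        customer_ref=customer_ref,
        channel=channel,
        ai_assisted=model_id is not None,
        model_identifier=model_id,
        draft_sha256=sha(draft_text),
        final_sha256=sha(final_text),
        reviewer_id=reviewer_id,
        review_seconds=review_seconds,
        approved_at_utc=datetime.now(timezone.utc).isoformat(),
    )
```

Notice that identical draft and final hashes, combined with a short review time, are exactly the "30-second glance" pattern described earlier, surfaced as data rather than anecdote.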

Model governance. The AI used to draft customer communications should sit within the same model risk management framework as any other model touching regulated outcomes. Firms that have robust governance around their credit scorecards - validation, change controls, performance monitoring - and then run customer communications through an unchecked large language model have an obvious inconsistency. The PRA's SS1/23 on model risk management provides a workable template.

The harder question Consumer Duty creates

Consumer Duty raises a question that goes beyond rule compliance. If an AI-assisted communication technically meets the requirements of CONC but leaves a customer less well-informed than a carefully written human response would have done, is that a good outcome? Probably not.

The Duty's outcomes framework is designed to ask exactly this kind of question. The question is not whether the AI followed the rules. The question is whether the customer, at the end of the interaction, understood their situation, understood their options, and was treated fairly. Answering that requires firms to treat AI communications as a regulated activity - with the governance, oversight, and documentation that implies.

The firms that get ahead of this

The FCA does not pre-approve. It audits. The firms that will be in difficult conversations with supervisors in three years are the ones that treated AI-generated customer communications as an IT cost-saving exercise and never asked the compliance question at all. The ones in a better position will be the ones building standards now, documenting their rationale, and creating an audit trail that shows they took the question seriously before anyone told them they had to.

The guidance does not exist yet. That is not a reason to wait.