
AI ROI in Financial Services: What 25 to 40% Efficiency Actually Looks Like

Every AI vendor quotes a 25 to 40% efficiency number. Very few will show you where it comes from. Here is what it actually looks like inside a real financial services firm.

Every second slide deck in financial services quotes the same number. 25 to 40% efficiency gains from AI. Sometimes it is 35%. Sometimes it is 50%. The source is usually a vendor case study, a consulting firm's survey, or an industry report that cites the consulting firm's survey. The number is real. The way it is presented is not useful.

The worked example below is an advice practice because the public data on AI ROI is thickest in advice. The same bucket structure applies across wealth management, insurance, mortgage broking and banking, with different time multipliers in each bucket. Use the shape, change the numbers to fit your vertical.

What follows is the same number, broken down into where it actually lives, what it feels like inside a real firm, and the places where it quietly does not show up.

Where the number comes from

There are four buckets. Most published efficiency numbers are aggregates of these.

Bucket one. Post-meeting administration. The time between a client meeting ending and the file note being complete, the next task assigned, and the CRM updated. Industry studies put this at two to four hours per meeting per adviser. AI meeting notes compress it to thirty to ninety minutes. Inside a practice of five advisers with three to four client meetings each per week, that is roughly a recovered day to a day and a half per adviser per week.

Bucket two. Review pack preparation. The assembly of portfolio data, performance summaries, market commentary, and prior meeting context into a document for the annual review. A paraplanner typically spends two to six hours per client review. AI-assisted generation, with a human reviewing against a defined source of truth, runs at forty-five minutes to two hours. Inside a practice running 250 reviews a year, that is somewhere between 250 and 1000 paraplanner hours recovered annually.

Bucket three. Compliance and audit preparation. The cyclical work of preparing documentation for licensee audits, ASIC requests, or internal file reviews. A typical advice practice spends hundreds of hours per year producing summaries, reconstructing adviser rationale, and cross-referencing documents that already exist. AI that can read across systems and produce audit-ready summaries turns days into hours. This is the bucket that rarely shows up in vendor demos because it is unglamorous. It is also where some of the highest leverage sits.

Bucket four. Document generation. SOAs, ROAs, engagement letters, fee disclosure statements, advice summaries. The pattern is that 60 to 80% of the content in these documents is boilerplate or re-used, and 20 to 40% is genuinely bespoke. AI-assisted drafting, with clear authority boundaries, takes the boilerplate component to near-zero effort while leaving the bespoke content for human judgement.

Add those four buckets together for a mid-sized practice. The numbers land in the 25 to 40% range by themselves, without any of the more speculative revenue-side claims.

A worked example

Ten advisers. Four paraplanners. One practice manager. Average adviser book of 85 clients, 850 clients in total. Typical service proposition of an annual review plus quarterly check-ins for A clients, annual review plus two interactions for B clients, and an annual review for C clients.

Pre-AI operating baseline.

  • 30 client meetings per week across the practice.
  • 90 hours per week of post-meeting administration.
  • 250 annual reviews across the year at 4 hours each, equals 1000 paraplanner hours.
  • 600 hours per year of compliance and audit preparation.
  • 1200 documents generated per year across all types, at an average of 2 hours each, equals 2400 hours.

Total operational time in these four buckets. Roughly 8700 hours per year, annualising the weekly admin load over 52 weeks.

Deploy AI across the four buckets with governance in place.

  • Post-meeting admin compressed by 70%. Saves 3300 hours per year.
  • Review pack prep compressed by 55%. Saves 550 hours per year.
  • Compliance and audit prep compressed by 50%. Saves 300 hours per year.
  • Document generation compressed by 40%. Saves 960 hours per year.

Total hours recovered. About 5100 hours per year in a ten-adviser practice.
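The bucket arithmetic above can be checked in a few lines. Every input below is the worked example's illustrative figure; the 52-week annualisation of the weekly admin load is an assumption, not something the example states.

```python
# Worked-example baseline and savings for a ten-adviser practice.
# All inputs are the article's illustrative figures; WEEKS is an assumption.
WEEKS = 52

baseline_hours = {
    "post_meeting_admin": 90 * WEEKS,   # 90 h/week across the practice
    "review_prep": 250 * 4,             # 250 reviews at 4 h each
    "compliance_audit": 600,            # annual audit-prep hours
    "document_generation": 1200 * 2,    # 1,200 documents at 2 h each
}

compression = {                         # share of each bucket removed by AI
    "post_meeting_admin": 0.70,
    "review_prep": 0.55,
    "compliance_audit": 0.50,
    "document_generation": 0.40,
}

saved = {k: hours * compression[k] for k, hours in baseline_hours.items()}
total_baseline = sum(baseline_hours.values())
total_saved = sum(saved.values())

print(f"Baseline: {total_baseline:,} hours/year")        # Baseline: 8,680 hours/year
print(f"Recovered: {total_saved:,.0f} hours/year")       # Recovered: 5,086 hours/year
print(f"Share of bucket hours: {total_saved / total_baseline:.0%}")  # 59%
```

The 59% share is why the raw total overshoots the quoted 25 to 40% band before any of the leaks below are applied.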

That is not a 25 to 40% efficiency gain. It is higher. It is also the theoretical ceiling. Inside a real practice, three things happen.

Where the ceiling leaks

The theoretical number is never the actual number. Not because AI does not work. Because of three predictable leaks.

Leak one. The human review step. AI drafts need to be reviewed. Review time is real. If the review is thorough, it offsets some of the generation saving. If the review is cursory, you are accumulating quality risk you will pay for later. The honest mid-point is that review takes 20 to 30% of the time the task used to take. Factor it in.

Leak two. Data quality drag. Every AI workflow reads from the firm's data. Where data is clean, the AI output is useful on the first pass. Where it is not, the output needs correction, the correction takes time, and the time eats the saving. Firms with clean data achieve 80 to 90% of the theoretical ceiling. Firms with messy data achieve 40 to 60%. This is not the AI's fault. It is the firm's data hygiene showing up in the efficiency report.

Leak three. Behavioural inertia. A senior adviser who has written SOAs the same way for twenty years will not use an AI-drafted SOA the same way a newer adviser does. That is neither good nor bad. It is real. Some of the theoretical efficiency is absorbed by the human operating model adjusting around the AI. It takes twelve to eighteen months to flow through.

Net effect. In a firm with clean data, a genuine operating model change, and disciplined review, the 25 to 40% number is real and achievable. In a firm skipping the data work and bolting AI onto the current workflow, the number is 10 to 15% and fragile.
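The leaks can be combined into a rough model. The attainment bands (80 to 90% of the ceiling with clean data, 40 to 60% with messy data) and the 20 to 30% review overhead come from the text; how they interact, with review cost scaling alongside adoption, is a simplifying assumption for illustration only.

```python
# Leak-adjusted view of the theoretical ~5,100-hour ceiling.
# Attainment bands and review overhead come from the article; combining
# them (review cost scaling with adoption) is a simplifying assumption.
CEILING_HOURS = 5100     # theoretical hours recovered
BUCKET_HOURS = 8700      # annual baseline across the four buckets

def net_recovery(attainment: float, review_share: float) -> float:
    """Hours actually recovered after data-quality drag and review time.

    attainment   -- fraction of the ceiling achieved (data quality)
    review_share -- review time as a fraction of the original task time
    """
    gross = CEILING_HOURS * attainment
    review_cost = BUCKET_HOURS * review_share * attainment
    return gross - review_cost

clean = net_recovery(attainment=0.85, review_share=0.20)
messy = net_recovery(attainment=0.50, review_share=0.30)
print(f"Clean data, disciplined review: {clean:,.0f} h ({clean / BUCKET_HOURS:.0%})")
print(f"Messy data, heavy rework:       {messy:,.0f} h ({messy / BUCKET_HOURS:.0%})")
```

Under these assumptions the clean-data firm lands around 33% of bucket hours and the messy-data firm around 14%, which is consistent with the 25 to 40% and 10 to 15% bands quoted above.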

Where the number does not show up

Two places.

It does not show up as headcount reduction. This is the question every executive asks and every adviser worries about. The honest answer from what we see in the market is that AI shifts work rather than eliminating it. Advisers see more clients. Paraplanners move up the value chain into more complex work. Practice managers spend more time on growth and less on admin coordination. The headcount stays stable or grows. The mix of work changes.

It does not show up as a straight-line saving. The 5100 hours recovered in the worked example do not translate to a 5100-hour cost reduction on the payroll. They translate to 5100 hours of capacity that the practice can redeploy. Firms that redeploy well see revenue growth, client satisfaction uplift, and better adviser retention. Firms that do not redeploy well see the hours slowly get absorbed back into other administrative expansion.

This is why the conversation about AI ROI has to include the deployment of recovered capacity. Otherwise the efficiency is on paper and the P&L never sees it.

What the number looks like on a P&L

For a ten-adviser practice with a typical fee structure, here is how the efficiency translates into financials.

Assume an average fully loaded cost per adviser of $200,000 per year and per paraplanner of $95,000. Assume the 5100 recovered hours are redeployed into an extra 85 clients served. Assume average client revenue of $5,500. Gross revenue impact of $467,500. Incremental cost of serving the extra clients, net of AI platform costs, of roughly $150,000. Net margin impact in year two, after deployment is stable, of around $317,500.
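As a sketch, the P&L arithmetic above, using only the stated assumptions:

```python
# P&L translation for the ten-adviser worked example.
# Every figure is the article's illustrative assumption, not a benchmark.
recovered_hours = 5100
extra_clients = 85                  # recovered capacity redeployed
revenue_per_client = 5_500
incremental_cost = 150_000          # cost to serve, net of AI platform fees

gross_revenue = extra_clients * revenue_per_client
net_margin_impact = gross_revenue - incremental_cost
hours_per_extra_client = recovered_hours / extra_clients

print(f"Gross revenue impact: ${gross_revenue:,}")       # $467,500
print(f"Net margin impact:    ${net_margin_impact:,}")   # $317,500
print(f"Hours behind each extra client: {hours_per_extra_client:.0f}")  # 60
```

The implied 60 hours of recovered capacity per additional client is worth sanity-checking against your own service proposition before you trust the revenue line.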

That is a ten-adviser practice. Scale it up. A twenty-five-adviser practice sees roughly 2.5 times the number if the operating model adjusts cleanly.

These numbers are ranges, not guarantees. They are closer to reality than any vendor case study you will read.

Why most firms will not see this

The firms that will capture the top of the range share four characteristics. The firms that will see the bottom of the range share the inverse.

Top of the range. Clean CRM data. A single system of record for client context. Defined governance for what AI is allowed to do. A leader in the practice whose job includes AI adoption. A willingness to change the workflow, not just add a tool.

Bottom of the range. Messy data across multiple systems. Governance added after procurement. AI treated as a productivity feature for individuals rather than an operating model change. No change to how work flows between adviser, paraplanner, and compliance.

The technology is identical in both cases. The operating environment is not.

The honest ROI conversation

Most boards make two mistakes when they ask for an AI business case.

They anchor on the top of the vendor range and plan as if every firm gets 40%. Reality for most firms is 15 to 25% in year one, and 25 to 35% in year two once the operating model adjusts. Planning for the top of the range leaves the board disappointed when year one underdelivers, and that disappointment often kills the programme before year two arrives.

They focus on the cost side and ignore capacity redeployment. An AI programme that only reduces cost sub-optimises the investment. An AI programme that redeploys recovered capacity into revenue growth, client experience, or adviser retention produces returns that are two to three times higher over three years.

The better business case runs two numbers.

Year one efficiency, conservatively modelled, assumed to be fully absorbed by the transition.

Year two and three capacity redeployment, modelled against a specific growth or service outcome.

That is the shape of an honest AI ROI plan for a wealth firm.

What to do on Monday morning

If you are running an advice practice and trying to size the AI opportunity, do this before you sign anything.

  1. Measure your current baseline in hours across the four buckets. Post-meeting admin, review prep, compliance and audit prep, document generation. One week of timesheet tracking across the team is enough.
  2. Assume 60% of the theoretical efficiency ceiling in year one.
  3. Pick what you will do with the recovered capacity before you start. Write it down.
  4. Track the baseline forward against a clean before-after comparison at months six, twelve, and eighteen.
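Steps one and two can be sized with a short sketch. The weekly hours below are placeholders to be replaced with your own timesheet data; the compression rates follow the worked example, and the 48-week working year is an assumption.

```python
# Monday-morning sizing sketch. The weekly hours below are placeholders;
# replace them with one week of measured timesheet data. Compression
# rates follow the worked example; the 48-week year is an assumption.
WEEKS = 48
YEAR_ONE_FACTOR = 0.60   # share of the theoretical ceiling assumed in year one

weekly_hours = {         # measured hours per week per bucket (placeholders)
    "post_meeting_admin": 90,
    "review_prep": 20,
    "compliance_audit": 12,
    "document_generation": 46,
}
compression = {          # theoretical compression per bucket
    "post_meeting_admin": 0.70,
    "review_prep": 0.55,
    "compliance_audit": 0.50,
    "document_generation": 0.40,
}

ceiling = sum(weekly_hours[b] * WEEKS * compression[b] for b in weekly_hours)
year_one = ceiling * YEAR_ONE_FACTOR
print(f"Theoretical ceiling:      {ceiling:,.0f} hours/year")
print(f"Year-one planning figure: {year_one:,.0f} hours/year")
```

The year-one figure, not the ceiling, is the number to put in front of the board.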

The number you end up with will be specific to your practice. It will also be credible, because it will be measured in your operation, not modelled from a vendor case study.

That is the only ROI number that matters.

The 25 to 40% statistic is directionally right. It is not your business case.

Your business case is the specific hours your specific practice recovers, minus the specific review and adjustment cost, multiplied by the specific capacity decision you make.

Do that number. Trust that number. Plan against that number.

Everything else is marketing.