
What the Adviser Remembers That the Transcript Misses

AI note-takers absorb the writing step. The cost surfaces in year two as lost compression: the adviser's chunked model of each client quietly stops forming. Here is the fix.

Three weeks ago I sat in a paraplanner's office while she opened the AI-generated note for a complex SMSF review meeting. Eleven pages. Every sub-topic timestamped, four follow-up actions pulled out, a sentiment line at the bottom that said the client "appeared engaged but cautious."

She read it for forty seconds, then closed it.

"It's all there," she said. "But I still don't know what we're meant to do."

The adviser had run that meeting. Fourteen years of SMSF work behind him. Walking out of the room he had already registered that the wife was the real decision-maker, that the husband had a brother burned in a 2015 SMSF wind-up, and that the question the couple came in to answer was not the one they had asked.

None of that was in the note.

This is the gap AI note-takers will not close, however many transcripts they ingest. It is also the gap that decides which practices get more out of these tools and which ones quietly get slower.

AI meeting transcription has moved from novelty to default in the practice-software stack over the past two years. Fact-find platforms bundle it. Risk-research tools will draft the file note from a recording. For a practice already running at capacity, the time saved on note authorship is real, and it goes straight to the bottom line.

But I keep seeing the same thing happen in year two. The note time falls. The synthesis time does not. Advisers who used the act of writing the note to think the case through start producing thinner recommendations afterwards. The paraplanners notice first. Then the principal notices. Then nobody quite names it.

The mistake is treating note-taking as one workflow that AI either does or does not do. It is two.

The first workflow is capture: what was said, by whom, in what order, with what actions attached. AI does this well. It is verbatim, searchable, timestamped, and consistent in a way handwritten notes never were.

The second workflow is compression: what the meeting means for this client, set against what you already knew about them, their goals from the last review, the family dynamics you read in the room. The adviser does this in the back of their head, usually without naming it. It is the difference between a recording of a conversation and an understanding of one.

Take the same SMSF meeting. The transcript holds the contribution-caps discussion, the pension question, the property the husband keeps raising. The compression is the adviser's read: the property is not really a strategy question, it is the husband looking for permission, and the next review needs the wife in the room and the brother's wind-up on the table. Same meeting, two different things to carry forward.

The cognitive science term is chunking. In their 1973 studies of chess expertise, Chase and Simon showed that masters do not hold more raw information than novices. They hold it differently, compressed into a few meaningful structures. A grandmaster does not recall thirty-two piece positions. They recall three or four patterns. A senior adviser does not recall sixty minutes of conversation. They recall the shape of the case: who decides, what they are afraid of, where the last plan drifted, what has to be true for this one to work.

A verbatim transcript is the opposite of a chunk. It is the uncompressed source data. Reading one back is like trying to play chess by replaying every move in a database instead of recognising the position on the board.

When AI writes the file note, it captures the source data perfectly and flattens the chunk structure. The compression the adviser used to do, partly in the room and partly while writing it up afterwards, never gets externalised, because the writing-up step has been automated away.

That is the year-two problem. The time saving is genuine. The compression that used to happen alongside it has stopped happening.

We see this across firms. The adviser who once sat down to write the file note while the meeting was still warm now reviews an AI draft and clicks approve. Approval is faster than authorship. It does not trigger the same synthesis. The chunk does not form, the case stays loose in working memory, and within a week it has thinned out to a few stray details.

The clearest tell shows up twelve months later, when the adviser comes back to that client. Before AI, they would carry a compressed model of where the case sat, and open the file only to confirm the details. After AI, they open the file to learn the details for the first time. The model was never built. The version of this I see most often is an adviser reading the AI summary in the car park before a review, because there was nothing already in their head to walk in with.

Nelson Cowan's research on working memory puts human capacity at about four meaningful items held at once. That is a hard ceiling, and it does not move. The job of a senior adviser in the minutes after a meeting is to load the right four items into that slot. A transcript helps with everything except choosing which four.

There is a regulatory edge to this as well. An advice file is meant to show the basis on which the adviser formed their recommendation, not only a record of what was discussed. ASIC's conduct guidance for advice providers (RG 175) expects the reasoning to be legible in the file. A verbatim transcript shows what was said. It does not show what the adviser concluded, why, or how the recommendation followed from it. The adviser's working compression is exactly what the file is supposed to evidence, and exactly what the AI note leaves out.

This is not a crisis today. The AI note still leaves an audit trail, and the SOA still records the advice. But the file now sits one step further from the adviser's reasoning than it used to. In a complaint, or a remediation review, that step is the thing someone goes looking for.

The instinct, when a firm notices this, is to make the AI note better. Add a summary section. Add an action tracker. Add a "key themes" block. That makes the artefact longer, and it makes the problem worse, because it gives the adviser more reason to treat the note as the finished output. A richer transcript is still a transcript. The missing step is not a feature of the document. It is something the adviser has to do.

The pattern is bigger than note-taking. It will repeat across every workflow AI half-absorbs. The procurement lead who used to write the vendor scoring rubric receives an AI rubric and stops forming the comparison. The investment committee member who used to write the minutes receives AI minutes and stops crystallising the dissent. The compliance officer who used to translate a file into a defensible position receives an AI translation and stops building the defence in their own words. The document still gets produced. The person's own grasp of why stops getting built.

Junior consultants show the same thing. Use AI to generate a first-draft client deck and the slides come faster while the argument comes out weaker. Building the deck was how the argument got formed. Take that step away and the argument stays scattered across prompts and PDFs, and never lands in the consultant's head.

That is the real cost of handing AI the writing step. The writing was the thinking. Treat it as a separable task and hand it off, and the thinking leaves with it, because it was happening inside the writing the whole time.

The fix is a small synthesis step, added back into the routine after the meeting. It takes three to five minutes. Before looking at the AI note, the adviser opens a separate short document and writes four lines:

  1. What changed in the client's situation since the last review.
  2. Where the meeting went somewhere I did not expect.
  3. What the case really hangs on now.
  4. What I want to remember when this comes back across my desk in twelve months.

Those four are chosen deliberately. The first forces a comparison against the prior model. The second captures the thing a transcript records but never flags as important. The third is the load-bearing judgement. The fourth writes the brief for the adviser's future self. Together they are the chunk, and they are what the AI cannot produce, because it does not hold the prior model of the client, the comparative context, or the instinct of someone who has run five hundred meetings like this one.
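
For a practice that wants this captured as structured data rather than a loose document, a minimal sketch of one shape it could take is below, in Python. Every name in it is an illustrative assumption, not any platform's API; the point is only that the four prompts become four required fields, filed next to the transcript.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class CompressionNote:
        """The adviser's four-line chunk, kept separate from the AI transcript.
        Field names are illustrative; map them to whatever custom note
        type your CRM exposes."""
        client_id: str
        meeting_date: date
        what_changed: str          # 1. against the prior model of the client
        surprise: str              # 2. where the meeting went somewhere unexpected
        hinge: str                 # 3. what the case really hangs on now
        note_to_future_self: str   # 4. the brief for the next review

        def as_file_entry(self) -> str:
            """Render the four lines for filing alongside the AI note."""
            return "\n".join([
                f"Compression Note: {self.client_id}, {self.meeting_date:%d %b %Y}",
                f"1. Changed since last review: {self.what_changed}",
                f"2. Unexpected: {self.surprise}",
                f"3. The case hangs on: {self.hinge}",
                f"4. For the review in twelve months: {self.note_to_future_self}",
            ])

Nothing about the structure is load-bearing; a dated text file with four numbered lines does the same job. What matters is that every field is compulsory, and that the adviser fills them in before opening the AI note.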

Call it the Compression Note. It sits in the file alongside the AI transcript. Both go into the CRM. Six months on, when another adviser or a paraplanner opens the file, the AI note answers "what was said" and the Compression Note answers "what it meant." The file needs both to be worth anything.

Firms that have added this step recover most of the lost compression. The advisers who do it say they walk into the next meeting in the series with a sharper brief than they used to carry. The paraplanners say the case files read as coherent again. The principals see retention hold. The cost is roughly fifteen minutes a day for an adviser running three meetings.

Before you scale the note-taker across the practice, run a short test. Ask three of your senior advisers to describe, from memory, the current state of three of their most complex clients. Score how many load-bearing details each can produce without opening the file. Compare the advisers who lean hardest on AI note-takers against those who do not. The difference is what the tool is costing you, and it will not appear on any invoice.
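
If you want that comparison as a number rather than an impression, the arithmetic is trivial. A sketch follows, with invented advisers and invented scores; the only real structure is the grouping by how heavily each adviser leans on the note-taker.

    # Load-bearing details recalled from memory, per complex client,
    # with the file closed. All names and scores here are invented.
    recall = {
        "adviser_a": (True,  [2, 1, 3]),   # heavy AI note-taker use
        "adviser_b": (True,  [1, 2, 2]),
        "adviser_c": (False, [5, 4, 6]),   # still writes their own notes
    }

    def mean(xs):
        return sum(xs) / len(xs)

    ai_heavy = [mean(scores) for heavy, scores in recall.values() if heavy]
    others   = [mean(scores) for heavy, scores in recall.values() if not heavy]

    print(f"AI-heavy advisers: {mean(ai_heavy):.1f} details per client")
    print(f"Others:            {mean(others):.1f} details per client")

The gap between the two printed numbers is the line item that never appears on an invoice.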

Then put the Compression Note into the post-meeting routine. A time sheet will read it as fifteen minutes of overhead. Those fifteen minutes are how the firm gets the compression step back, recorded somewhere it can be seen.

The paraplanner closed the eleven-page note and asked the adviser what he wanted her to draft. He told her in two sentences, from memory, without reopening the document. The note held the meeting word for word. What the practice actually runs on was the part the adviser had carried out in his head.