Glossary
The terms defined plainly.
AI, agentic systems, and Australian financial services regulation produce a lot of acronyms. Here are plain-English definitions of the ones that come up most often in our work.
AI Concepts
- Agentic AI
- An AI system that takes a goal, plans the steps required to achieve it, executes those steps across tools and data sources, and reports back. Where a copilot waits for instructions, an agent keeps moving until it hits a boundary. In financial services, agentic AI typically performs end-to-end workflows like file note generation, statement of advice drafting, or claims triage with a human reviewing and approving the output.
- AI Agent
- A software system built on a large language model that can plan, take actions, and use tools (calling APIs, reading data, writing files) without step-by-step human direction. Agents differ from chatbots in that they can complete multi-step workflows. They differ from traditional automation in that the steps and decisions are dynamic rather than pre-coded.
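To make the plan-act-observe pattern concrete, here is a minimal agent-loop sketch in Python. Every function and field name is an illustrative placeholder, not any vendor's API; a production agent would sit behind the containment and escalation controls described elsewhere in this glossary.

```python
# Minimal agent-loop sketch. All names are illustrative placeholders.

MAX_STEPS = 10  # hard boundary: the agent stops rather than run forever

def llm_plan_next_step(goal: str, history: list) -> dict:
    """Placeholder for a model call that returns the next action,
    e.g. {"tool": "read_crm", "args": {...}} or {"tool": "finish"}."""
    raise NotImplementedError("wire up your model provider here")

def run_agent(goal: str, tools: dict) -> list:
    history = []
    for _ in range(MAX_STEPS):
        step = llm_plan_next_step(goal, history)      # plan
        if step["tool"] == "finish":                  # agent decides it is done
            break
        if step["tool"] not in tools:                 # containment: unknown tool, stop
            raise PermissionError(f"tool not permitted: {step['tool']}")
        result = tools[step["tool"]](**step["args"])  # act
        history.append({"step": step, "result": result})  # observe + audit trail
    return history
```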
- Copilot
- An AI assistant that responds to user instructions inside a workflow the human still drives. Microsoft Copilot, GitHub Copilot, and most note-taker tools are copilots. The user types or asks; the AI suggests. The human remains the operator. Copilots are easier to govern than agents because the human is in the loop on every action.
- RAG (Retrieval-Augmented Generation)
- A technique where an AI model retrieves relevant documents from a knowledge base before generating its answer, so the response is grounded in specific, up-to-date sources rather than the model's training data. In financial services, RAG is what makes AI safe to use against client data, policy documents, or compliance manuals: the model cites sources rather than hallucinating.
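A minimal sketch of the retrieve-then-generate pattern, assuming a `search` function over an indexed document store and a generic `generate` model call; both are placeholder stubs, not a specific product's API.

```python
# Retrieval-augmented generation, reduced to its two moves:
# fetch relevant passages first, then generate an answer that
# cites them. `search` and `generate` are illustrative stubs.

def search(query: str, k: int = 3) -> list[dict]:
    """Placeholder: return the k most relevant passages, each with a
    source id, e.g. {"source": "policy_manual.pdf#p12", "text": "..."}."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Placeholder for the model call."""
    raise NotImplementedError

def answer_with_sources(question: str) -> str:
    passages = search(question)
    context = "\n\n".join(f"[{p['source']}]\n{p['text']}" for p in passages)
    prompt = (
        "Answer using ONLY the sources below, and cite the source "
        "identifier for every claim. If the sources do not contain "
        f"the answer, say so.\n\nSOURCES:\n{context}\n\nQUESTION: {question}"
    )
    return generate(prompt)
```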
- Foundation Model
- A large AI model trained on broad data (text, code, images) that serves as the base layer for specific applications. Examples: GPT, Claude, Gemini, Llama. Foundation models are built by a small number of companies with research and capital structures no advice firm could replicate. Every wealth firm using AI is using a foundation model, whether they know it or not.
- Hallucination
- When an AI model generates plausible-sounding but factually incorrect output. Hallucinations are confident, well-formatted, and indistinguishable from accurate output without verification. The risk in financial services is that polished AI-generated text gets accepted at face value. The defence is sourcing: AI that cites the document it drew from is auditable; AI that generates from training memory is not.
- Prompt
- The input text that instructs an AI model on what to do. In production AI workflows, prompts are designed by the firm, version-controlled, and tested. The prompt determines tone, scope, source preference, and what the AI is allowed to assume. Most AI failures in financial services trace back to prompts that were never properly designed.
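One way to treat a prompt as a designed artefact rather than ad-hoc text is to keep it as structured config with an explicit version and a regression test. The schema and field names below are assumptions for the sketch, not a standard.

```python
# Illustrative prompt-as-artefact: versioned, parameterised, testable.
# The schema is an assumption for this sketch, not an industry standard.

FILE_NOTE_PROMPT = {
    "id": "file-note-draft",
    "version": "2.3.0",              # bumped and reviewed like code
    "template": (
        "Draft a client file note from the transcript below.\n"
        "Tone: {tone}. Use only facts stated in the transcript; "
        "flag anything you could not verify.\n\nTRANSCRIPT:\n{transcript}"
    ),
    "allowed_assumptions": [],       # the prompt states what the AI may assume
}

def render(prompt: dict, **params: str) -> str:
    return prompt["template"].format(**params)

# A regression test pins behaviour before a new version ships.
assert "flag anything" in render(
    FILE_NOTE_PROMPT, tone="neutral", transcript="(test)"
)
```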
- Tool Use / Tool Calling
- An AI model's ability to call external functions: querying a database, sending an email, updating a CRM record, calculating a tax position. Tool use is what turns an AI from a writing assistant into an agent that can act inside business systems. Every tool call expands what the AI can do — and what it can break.
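In practice, a tool is a declared function the model may request by name: the firm, not the model, decides what is on the list. A sketch of an allow-listed registry that gates writes separately from reads, with illustrative tool names:

```python
# Allow-listed tool registry sketch. Tool names and the read/write
# split are illustrative; the point is that the firm declares what
# the model may call, and side effects need separate approval.

from typing import Callable

READ_ONLY: dict[str, Callable] = {
    "lookup_client": lambda client_id: {"id": client_id, "name": "..."},
}
WRITE: dict[str, Callable] = {
    "update_crm_note": lambda client_id, note: f"wrote note for {client_id}",
}

def call_tool(name: str, args: dict, writes_approved: bool = False):
    if name in READ_ONLY:
        return READ_ONLY[name](**args)
    if name in WRITE:
        if not writes_approved:                # escalation: a human signs off
            raise PermissionError(f"{name} requires human approval")
        return WRITE[name](**args)
    raise KeyError(f"unknown tool: {name}")    # containment: no improvising

print(call_tool("lookup_client", {"client_id": "C-104"}))
```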
- Vector Database
- A database that stores text (and other content) as numerical embeddings, allowing fast retrieval of semantically similar content. Vector databases are how RAG systems find relevant documents quickly. In a wealth firm, a vector database might index every client meeting note, file, and email so an agent can pull the right context on demand.
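The core operation is nearest-neighbour search over embeddings. A toy version with cosine similarity follows; the `embed` stub is a crude stand-in for a real embedding model so the sketch runs on its own.

```python
# Toy semantic search: real systems use a learned embedding model and
# an indexed store; this stub only illustrates the retrieval mechanics.

import math

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding model: a crude character histogram."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

documents = [
    "Meeting note: client asked about super contribution caps",
    "Email: insurance premium renewal for the Smith file",
]
index = [(doc, embed(doc)) for doc in documents]  # what a vector DB stores

query = embed("superannuation contributions")
best = max(index, key=lambda item: cosine(query, item[1]))
print(best[0])  # nearest match under the toy embedding
```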
Unwired Wealth Frameworks
- Control Stack
- The four-component model every agent in financial services needs before it earns autonomy: Mandate (what it is for), Evidence (what it reads from), Containment (what it can touch), Escalation (when it must stop). Miss one and the others compensate badly. Originally published in our Insights.
- Trust Layer
- The four-component infrastructure that makes AI outputs reliable enough to act on: Data Integrity, Decision Boundaries, Audit Trail, and Team Capability. Most firms talk about AI trust as a feeling. The ones getting results treat it as architecture.
- Production Stack
- The four conditions that turn an AI pilot into production: Workflow Clarity, Data Fit, Risk Ownership, and Change Design. Most pilots stall not because the model is weak but because the firm tries to scale a demo before any of the four conditions are met.
- Compliance Translation Layer
- A three-column mapping (Artefact → Obligation → Evidence) that connects every AI output to the existing rule it sits inside (Corporations Act, Credit Act, AML/CTF Act, Privacy Act, APRA standards). Built once, applied to every AI artefact. Replaces the standalone AI policy that nobody can apply to a real client situation.
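Expressed as data, the layer is just a table of rows, one per AI artefact. The rows below are illustrative examples only, not legal advice:

```python
# Compliance translation layer as data. Each row links an AI artefact
# to the existing obligation it sits inside and the evidence that
# proves compliance. Rows here are illustrative examples only.

TRANSLATION_LAYER = [
    {
        "artefact": "AI-drafted statement of advice",
        "obligation": "Corporations Act s946A (SOA requirement)",
        "evidence": "adviser sign-off record + logged source citations",
    },
    {
        "artefact": "AI-assisted onboarding identity check",
        "obligation": "AML/CTF Act customer due diligence",
        "evidence": "verification log retained per AUSTRAC rules",
    },
]

def obligations_for(artefact: str) -> list[dict]:
    """Look up every obligation and evidence requirement for an artefact."""
    return [row for row in TRANSLATION_LAYER if row["artefact"] == artefact]
```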
- The Three Lanes
- The mental model that separates AI adoption into three categories with different governance needs: Personal Productivity (one person faster), Team Productivity (workflows changing), and Business Automation (systems changing). Most firms collapse all three into one decision and end up with policy that fits none of them.
- Four Gates
- The agent governance model: Data Gate (what it can see), Decision Gate (what it can decide), Traceability Gate (what you can reconstruct), Escalation Gate (what forces it to stop). Each gate has to be passed before the agent earns more autonomy.
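Because every gate must pass before autonomy increases, the model maps naturally onto a sequential check. A sketch with illustrative gate functions and agent fields:

```python
# Four Gates as a sequential check: an agent only gains autonomy when
# every gate passes. Gate functions and fields are illustrative stubs;
# real checks would query access policies, logs, and escalation rules.

def data_gate(agent: dict) -> bool:
    return set(agent["data_sources"]) <= set(agent["approved_sources"])

def decision_gate(agent: dict) -> bool:
    return set(agent["decisions"]) <= set(agent["approved_decisions"])

def traceability_gate(agent: dict) -> bool:
    return agent["logs_every_action"]

def escalation_gate(agent: dict) -> bool:
    return agent["stop_conditions"] != []

GATES = [data_gate, decision_gate, traceability_gate, escalation_gate]

def may_increase_autonomy(agent: dict) -> bool:
    # Fail closed: one failed gate blocks the increase.
    return all(gate(agent) for gate in GATES)
```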
- Operating Model
- The set of decisions, controls, and workflows that govern how AI is used inside the business. Where it sits in the workflow, who reviews, who approves, what gets logged, what triggers an override. An operating model is what stops AI from being a series of disconnected pilots.
Related: AI Operating Model service
Australian Financial Services Terms
- Statement of Advice (SOA)
- A document an Australian financial adviser must give a retail client when providing personal financial advice. The SOA records the advice, the basis for it, fees, and any conflicts. SOAs are governed by section 946A of the Corporations Act and are one of the most labour-intensive deliverables in advice. AI-assisted SOA drafting is one of the highest-leverage AI use cases in wealth management.
- Record of Advice (ROA)
- A simpler advice document used when a client receives further advice that is similar to advice already given in an SOA, and circumstances have not changed materially. ROAs are common for ongoing service clients. They suit AI generation well because most of the content is template plus the client's current circumstances.
- Paraplanning
- The role responsible for assembling the evidence base for advice, applying licensee rules, and translating adviser recommendations into compliant documents. Often described as the operational backbone of an advice practice. AI is reshaping paraplanning rather than eliminating it: mechanical work compresses, judgement work expands.
- Best Interests Duty (BID)
- An obligation under section 961B of the Corporations Act for advisers to act in the best interests of the client when providing personal advice. The duty is one of the highest-stakes obligations in Australian financial services. Any AI system that touches advice preparation operates under BID, regardless of whether the human or the algorithm produced the first draft.
- ASIC RG 271
- ASIC Regulatory Guide 271 on internal dispute resolution. Sets the standards for how Australian financial services firms must handle complaints. Relevant to AI because firms using AI in client-facing workflows must still meet RG 271 obligations on response timing, documentation, and outcomes.
- ASIC RG 274
- ASIC Regulatory Guide 274 on Product Design and Distribution Obligations (Part 7.8A of the Corporations Act). Requires product issuers and distributors to have a Target Market Determination and to act consistently with it. AI-assisted product distribution must still align with RG 274.
- APRA CPS 230
- APRA Prudential Standard CPS 230 on Operational Risk Management. Sets requirements for prudentially regulated entities on operational resilience, third-party arrangements, and critical operations. In force since 1 July 2025. AI vendor contracts for critical operations must align with CPS 230 by the next renewal or 1 July 2026, whichever is earlier.
- APRA CPS 234
- APRA Prudential Standard CPS 234 on Information Security. Requires regulated entities to maintain information security capability commensurate with the size and extent of threats. Pulls AI model inputs and outputs into the information security frame for prudentially regulated firms.
- AUSTRAC AML/CTF
- Australian Transaction Reports and Analysis Centre obligations under the Anti-Money Laundering and Counter-Terrorism Financing Act. Reform deadlines through 2026 expand obligations to many financial services firms. AI used in onboarding, transaction monitoring, or suspicious matter reporting must support these controls, not weaken them.
- AFSL
- Australian Financial Services Licence. The licence required to operate a financial services business in Australia. AFSL holders are governed by section 912A obligations to provide services efficiently, honestly and fairly. AI does not change the licence obligations; it changes the artefacts the licensee has to defend.
Need this applied to your firm?
The terms are the easy part. The hard part is mapping them to your workflow, your regulators, and your operating model. That is the conversation we have every week.