Money Matters Episode 339 - AI Workers, Not Chatbots: How Advisors Scale with Digital Co-Workers w/ Jonathan Michael
What if financial advisors didn’t just use AI tools—but had AI workers supporting their business?
In this episode of Money Matters, host Chris Hensley is joined by Jonathan Michael, Director of Growth at TIFIN AXIS, to unpack what AI agents really are, how they differ from chatbots, and why they’re quickly becoming a practical way for advisory firms to scale.
Jonathan brings a founder’s perspective from both EdTech and WealthTech and focuses on one core idea: AI should take on the operational, repetitive work that slows firms down—so advisors can spend more time where they add the most value.
In this conversation, we discuss:
What defines an AI agent (and what doesn’t)
Why AI workers are best thought of as digital co-workers
Where agents outperform traditional software and manual workflows
Why structured prompting is a best practice for reliability and oversight
How verification loops reduce AI errors and improve confidence
The role of data infrastructure in deploying AI responsibly
What the RIA of the future looks like with humans and AI working together
If you’re a financial advisor or RIA leader trying to make sense of AI without the hype, this episode offers a grounded, practical look at how firms are actually using it today—and where it’s heading next.
🔗 Guest
Jonathan Michael
Director of Growth, TIFIN AXIS
Host, AI for Wealth
Author, Wealth Management Prompts
Jonathan Michael
[00:00:00] So that's how I would define an agent. What is not an agent, right? So you have chatbots, which were the first version of an AI system. Chatbots are just giving you responses to your questions, right? They're relying on pre-training data. They're not using tools, they're not using any data sources to answer questions, right?
What makes an agent uniquely an agent is that it has some degree of autonomy to execute on a plan, and it has access to tools, right? I think that's the best way I would distinguish between what is an agent and what is not an agent. Something is not an agent when it's not using external data sources to produce an action. So that's how I would define it.
Christopher Hensley RICP, CES: What if I told you every financial advisor will soon have a team of AI workers, not tools, not chatbots, actual digital coworkers that take on 70% of their operational workload. No more onboarding bottlenecks. No more [00:01:00] data chasing. No more juggling 20 disconnected platforms. And the firms that adopt this first will grow faster than anything we've seen in modern wealth management. I'm Chris Hensley, your host, and this is Money Matters. Today's guest, Jonathan Michael, isn't imagining this future. He's building it. A two-time founder with 10-plus years in EdTech and WealthTech, Jonathan now leads growth at TIFIN, am I saying that right? T-I-F-I-N, AXIS?
Jonathan Michael: Yes.
Christopher Hensley RICP, CES: AXIS, where he's focused on one mission: giving advisors an AI-powered digital workforce that handles the operational heavy lifting so that they can serve more clients more deeply. In this conversation, Jonathan breaks down what an AI agent actually is, why structured prompting is now a compliance requirement, how agents will change advisor onboarding forever, and what the RIA of the future looks like when humans and AI coworkers operate side by side. If you wanna understand the biggest [00:02:00] shift we're heading towards in wealth management and how to prepare your firm before it hits, this is the episode for you. Jonathan, welcome to the show.
Jonathan Michael: Thanks, Christopher. Thanks for having me on.
Christopher Hensley RICP, CES: Excited to have you on. So we ran into each other through LinkedIn, right? Of all places.
Jonathan Michael: Yes.
Christopher Hensley RICP, CES: Originally you saw an article by Rob Bures that was talking about, let me see, say all these letters together: A-I-S-E-O, right?
Jonathan Michael: Right.
Christopher Hensley RICP, CES: And then you made an additional article that kind of built on top of that. And for listeners, on my webpage, if you go to MoneyMattersPodcast.com, I've got a link to that article that you wrote on there.
Jonathan Michael: Yeah.
Christopher Hensley RICP, CES: That was kind of how we got connected, through LinkedIn and through that conversation. And I had been following your videos, because what I really liked about 'em is that you are showing us practical things. I'm seeing a lot of these [00:03:00] marketing companies that are giving us, I wouldn't say polished, ways of coming at AI, but you were giving us really kind of nuts and bolts.
Christopher Hensley RICP, CES: Like, here's something you can do. So I really liked that. We talked about having a conversation about agents, 'cause you know, that's kind of a buzzword going on out there right now. We're hearing a lot of hype around that. When we hear that word agent, in its simplest terms, what is an AI agent and what is not an agent?
Jonathan Michael: Yeah, you know, that's a great question, and I'll try to illustrate it with a simple example that advisors can resonate with, right? So think of knowledge work today. I would consider advisors knowledge workers too, right? So, you know, we use a laptop, we have software subscriptions.
Jonathan Michael: We have a, you know, CRM tool that we use. We have financial planning software, we have tax planning software. We have all these different tools. And at the start of a week or at the start of [00:04:00] the month, you know, you have a plan, right? You create a plan, and then you try to execute on the plan.
Jonathan Michael: Hopefully you do have a plan or you have a to-do list, for example. All of us can resonate with that. We all have to-do lists and we try to check it off to get things done, right? You can illustrate that example with how agents are designed to function. Agents create plans. They have access to tools.
Jonathan Michael: That could be data sources. It could be web search. It could be, uh, CRM data, it could be your Google Drive, right? And then it goes and executes on those tasks, and it uses tools to accomplish that. So that's the simplest way to illustrate what an agent is. It's some version of an LLM using tools and memory to accomplish tasks, right?
Christopher Hensley RICP, CES: Mm-hmm.
Jonathan Michael: These are essential components that make knowledge workers distinct. That is what we are as humans, right? As human knowledge [00:05:00] workers, we have memory, short-term memory, long-term memory. We have tools, and we have plans to, uh, execute on. So that's how I would, uh, define an agent.
Jonathan Michael: What is not an agent, right? So you have chatbots, which were the first version of an AI system. Chatbots are just giving you responses to your questions, right? They're relying on pre-training data. They're not using tools, they're not using any data sources to answer questions, right? What makes an agent uniquely an agent is that it has some degree of autonomy to execute on a plan.
Jonathan Michael: And it has access to tools, right? I think that's the best way I would distinguish between what is an agent and what is not an agent. Something is not agentic when it's not using external data sources to produce an action. So that's how I would define it.
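The plan, tools, and memory loop Jonathan describes can be sketched in a few lines of Python. This is a minimal illustration only; `call_llm` and `search_web` are hypothetical stand-ins for a real model API and a real tool, not any vendor's implementation.

```python
# Minimal sketch of an agent loop: plan with an LLM, act with tools,
# remember the results. The LLM and tool below are placeholder stubs.

def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (an actual agent would query an LLM API).
    return "search_web: latest 529 contribution limits"

def search_web(query: str) -> str:
    # Placeholder tool; a real agent might hit a search, CRM, or planning API.
    return f"results for: {query}"

TOOLS = {"search_web": search_web}

def run_agent(task: str, max_steps: int = 3) -> list:
    memory = []  # short-term memory: observations from previous steps
    for _ in range(max_steps):
        # 1. Plan: ask the model for the next step, given the task and memory.
        step = call_llm(f"Task: {task}\nDone so far: {memory}\nNext step?")
        if ":" not in step:
            break  # model signalled it is done (no tool call requested)
        tool_name, arg = step.split(":", 1)
        # 2. Act: a chatbot stops at text; an agent executes with a tool.
        result = TOOLS[tool_name.strip()](arg.strip())
        # 3. Remember: feed the observation into the next planning step.
        memory.append(result)
    return memory

print(run_agent("Research 529 plan limits for a client"))
```

The three commented steps map directly onto Jonathan's definition: a plan, access to tools, and memory, with the autonomy bounded by `max_steps`.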
Christopher Hensley RICP, CES: I love...
Jonathan Michael: That makes sense. Yep.
Christopher Hensley RICP, CES: ...that. So I'm gonna kind of repeat what you just said, because we're wanting to bring in the layperson here, 'cause we're wanting to try to, you know, [00:06:00] decode this word agent that we're hearing around. So, you know, let's start backwards. You talked about chatbots.
Christopher Hensley RICP, CES: So there's a juxtaposition with what we saw roll out at the beginning, where there were these chatbots, but they didn't have some of these additional features. They had preloaded data, but they did not have the ability to create a plan, use tools, or execute on tasks. And that kind of makes them different there.
Christopher Hensley RICP, CES: And then when you talked about the agent, I kind of mapped this out as you were saying that. So the idea of being able to create a plan, having a set of tools, whatever that may be. And then the idea of being able to, to complete task. And then we also talked a little bit, you, what you talked about was autonomy and the ability for the agents to act on their own.
Christopher Hensley RICP, CES: Maybe at first not so much, and then as you get more confident in them, giving 'em more autonomy. I think it's a really good explanation there. Let's talk about some of the practical value here. What's one real advisor [00:07:00] workflow today where an agent is clearly better than a human or a traditional software tool?
Jonathan Michael: Man, I have two examples here, Christopher, and I might be a little bit controversial with my examples, right?
Christopher Hensley RICP, CES: controversial examples. Go for it.
Jonathan Michael: So I'll use the example of something that's core to what an advisor does, right? It's financial planning. I know there's lots of compliance protocols around this, and I'm definitely not trying to promote that you do this, so you wanna take this with caution, but: running Monte Carlo simulations, right?
Jonathan Michael: Think about how many probabilities you need to run, uh, how much time it takes to run an actual Monte Carlo simulation. You can now do that in an AI agent that's built for Excel. So if you're an advisor that's still using Excel to build out your financial plans, hopefully you're not; there [00:08:00] are tons of financial planning softwares out there, and hopefully you're using those.
Jonathan Michael: But, uh, when it comes to building out bespoke financial models, something that's core to what financial analysts do, core to even what some advisors have done in the past, and some advisors still do, I believe, I think an AI agent that natively understands all the components and artifacts of Excel can go ahead and create these models that would otherwise take a lot of time.
Jonathan Michael: Uh, I think that is becoming increasingly good with every new iteration of a model. The company that I'm referring to is called Shortcut (tryshortcut.ai). I've actually created a video on this on my YouTube channel, where you can, you know, see how I'm using a system prompt
Jonathan Michael: that functions, uh, like a CFP would. So it's using the seven-step planning process, and it's creating the plan and the constraints. So that's important to remember, right? I'm giving it constraints about all the, you know, different [00:09:00] parameters I want it to follow. I'm giving it the standard deviation basis, uh, the percentage points.
Jonathan Michael: Uh, I'm giving it very specific information about the client, right? So that's an example of an AI agent for Excel that's getting really, really good at, you know, creating financial plans. The other example, which I think a lot of advisors may resonate with more, is, um, Claude, right? So if you look at what a lot of advisors are doing day in and day out, it's, you know, writing reports, doing research, creating presentations,
Jonathan Michael: building out models in Excel maybe, or extracting and analyzing PDFs, form-filling PDFs. Claude is a general purpose agent. You go to claude.ai, you could sign up for it. I would say it's agentic, it's not a chatbot anymore. It's a general purpose agent that Anthropic has designed to do these tasks now.
Jonathan Michael: So it can use Excel; it'll use, you know, uh, the XLSX [00:10:00] format to actually create these models. It'll use PDFs to analyze data. It'll literally create PowerPoint presentations for you. So I think that that is something that's so central to what an advisor does. Like, you know, previously you might hire a client operations associate or associate advisor to do some of these tasks, but now you can do that within these two tools.
Jonathan Michael: Right? So I, I would say those are examples that stick out to me. Yeah.
Christopher Hensley RICP, CES: I love that. Yeah, no, the idea of financial planning, the idea of the Monte Carlo simulation, right? I don't think that's too controversial. Hopefully clients don't think we have that in our head. When we're doing the financial planning, we've always had some form of assistance, right?
Christopher Hensley RICP, CES: Whether it's financial planning software, or, I've been in the industry for 22 years, so yes, I fall in the category of somebody who has used Excel spreadsheets for many years and then stopped, right? Like, we've got these tools that are much better, and this is a progression of that. And so some of the things [00:11:00] that you're seeing, being able to put constraints based on CFP guidelines, and it's not a guesswork thing. The AI is really good at putting those into place, adopting those old Excel spreadsheets and working in that direction. And you mentioned Claude. You know, when I was doing my due diligence on LLMs, you know, now we're several years into the future here, but it was, who's gonna be last man standing, right? So you kind of had to pick a lane. I would try a little Claude, a little bit of ChatGPT, and I got deep into ChatGPT, and I really haven't gone back into Claude. But I'm seeing a lot of the, like, programmers, people who were building out stuff, where Claude's almost, like, preferred.
Christopher Hensley RICP, CES: And so what I'm hearing from you is maybe I need to go back and look at it. I know we had that news this week where, like, Claude and Google have kind of pulled to the front of the race, right? And then ChatGPT, or OpenAI, right? They said, okay, we're gonna do code red [00:12:00] and go back in and try to compete.
Christopher Hensley RICP, CES: So we're still looking to see who's last man standing there. But any additional thoughts on Claude? I'm intrigued. I want to kind of go back into that.
Jonathan Michael: I'm glad you asked, and I really wish more advisors were looking into Claude. ChatGPT is a general purpose consumer chatbot at this point, and, you know, it can do some pretty cool tasks, it's great. But Claude, the way they think about model development and model progress, is really designed around the enterprise, right?
Jonathan Michael: So they're thinking about business-first use cases. They're designing Claude around business-first use cases, and that is very relevant to what an advisor does. And so, you know, that's something that a lot of other foundational model companies, like, you know, OpenAI, aren't really thinking about. Anthropic takes, you know, data protection really seriously; data retention is a big deal.
Jonathan Michael: They don't train, you know, on your data. Uh, they have very clear policies around it. It's transparent, uh, and easy [00:13:00] to understand. They came up with a constitutional AI framework, uh, as well, which is basically a regulatory framework for, uh, model usage. And so I really like Claude because, I would say, they are the model for financial purposes.
Jonathan Michael: And more firms ought to be using it, for sure. Yeah.
Christopher Hensley RICP, CES: I gotta take a second look at it, for sure. I was like, alright, if I'm gonna get better at this, I gotta pick a lane and stay in it. And I got data analytics certified and prompt engineering certified, like, a few years back. And I said, okay. But you know, as advisors, uh, we might be interested in this stuff, but we could spend a lot of time breaking stuff, right? So if you hire somebody like Jonathan to help you with this... I encourage people to learn as much as they possibly can so you can, you know, even speak the language, right? Know what you're talking about.
Jonathan Michael: Yeah, I think one simple way to look at this Claude versus ChatGPT thing is to observe the way the companies are releasing products. [00:14:00] In ChatGPT, you can create images, right? Uh, for a lot of advisors, it's probably not gonna be super valuable to create Ghibli images of yourself all day long, right? Uh, that's really a waste of compute power.
Jonathan Michael: Like, you know, it's a lot of compute that it takes to generate these images. Anthropic has gone the other route. They don't have any kind of native image generation capabilities, right? Instead, they focus on, like, really hardcore data analysis. Uh, they have Claude Code, which is the best coding agent out there.
Jonathan Michael: Those are the features that advisors really want. You want a model, you wanna be able to use a general purpose AI agent, uh, that's focused on delivering great data analysis and helping you build out documents. One quick example: once you build out a financial plan, let's assume you wanna create a personalized investment proposal, right?
Jonathan Michael: You can take that data, throw it into Claude, and it'll create an investment proposal that's in your firm's brand guidelines. You would otherwise spend maybe five hours in [00:15:00] Canva doing this yourself. Well, imagine that you can now use Claude to do that. And Claude has a feature called Claude Skills. Uh, my newsletter covered that a little bit, and maybe I should make more videos about that.
Jonathan Michael: But Claude Skills is really incredible, because you can take your firm's brand guidelines, feed them into Claude, and have it follow the same colors, the same fonts. I mean, that's something that's unbelievable. It's incredibly useful for advisors.
Christopher Hensley RICP, CES: Yeah, that's what I've seen. I think I read that article, and that probably pointed me to Claude Skills, 'cause what I've looked into on it, it looks fascinating. It looks like you're basically training it to repeat things that you're doing often in Claude. And that's powerful, to be able to do something like that.
Christopher Hensley RICP, CES: I'm gonna pivot a little bit because one of the things
Jonathan Michael: It's good.
Christopher Hensley RICP, CES: know, as I'm going down the rabbit hole here with AI, the idea of data sovereignty is something, memory and data sovereignty are two things, that I'm kind of leaning into for my Digital Kaizen book that I'm working on [00:16:00] now. You talked about how data infrastructure, not the model, is the real bottleneck. Why might small sovereign models actually be better for advisors than massive frontier LLMs?
Jonathan Michael: Yeah, I mean, I think that's a really good question. In fact, NVIDIA research came out, you know, a few months ago talking about how small language models might be more token efficient. And what that means is that it just consumes, you know, fewer tokens, and that means less compute, which means it's a lot faster to run, for one.
Jonathan Michael: Right? So small language models are useful in the fact that the latency is really low, and you can run them really fast, you know, run them more effectively. So that's on the speed side, as far as model inference is concerned, right?
Jonathan Michael: Being able to run these models really fast. But I think the data protection that you get from [00:17:00] that is also useful. I think small language models are really useful for specific tasks, versus large language models. Like, I don't need to know what Shakespeare wrote in a poem, uh, when I'm trying to run a very specific client onboarding task that has to be able to retrieve data in a specific format and populate it into a specific software system. That's a very narrow task that I think a small language model would be really good at.
Jonathan Michael: So I think that you have a unique ability to train on extremely specific skills or specific use cases, versus a general model like, you know, Opus 4.5 or GPT-5.1. So I see the value in a small language model. Running it locally, data sovereignty, like you said, being able to host all your data and run it in your own environment.
Jonathan Michael: I think the challenge there, though, let me, uh, you know, kind of push back on what I just said a little bit. I think the [00:18:00] challenge there is in the orchestration piece, right? So you've gotta be able to orchestrate all your workflows yourself. Let's assume you have a model running locally,
Jonathan Michael: but then your data sits in the cloud, right? You have your CRM, your financial planning, tax planning data; all of that is in the cloud. How do you pull all that data? How do you run it effectively? How do you run evaluations, making sure that the use cases are actually doing the job they're supposed to do?
Jonathan Michael: That is where it gets tricky, and I don't know if advisors wanna do that. Uh, you're better off trusting a company to actually run that for you. Now, you can run a small language model locally on your desktop for fun and, you know, have at it, check it out. You know, you can use your data.
Jonathan Michael: You can, you know, probably feel safer using client information in a small language model that runs locally inside your environment. But [00:19:00] then to run real business workflows, I think that takes a level of orchestration, you know, bringing data together nicely into one unified interface and being able to run those agent workflows.
Jonathan Michael: So I think that's where you're gonna run into some problems.
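For anyone curious what "run a small language model locally on your desktop" looks like in practice, here is a rough sketch assuming an Ollama-style local HTTP endpoint. The host, port, and model name are example assumptions, not recommendations from the episode, and the orchestration caveat Jonathan raises still applies.

```python
# Sketch: sending one narrow task to a small model served locally.
# Assumes an Ollama-style endpoint at localhost:11434; model name is an example.
import json
import urllib.request

def build_payload(prompt: str, model: str = "llama3.2:3b") -> dict:
    # Narrow, single-task prompts are where small models tend to do best.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(prompt: str, host: str = "http://localhost:11434") -> str:
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"})
    # The request stays on your own machine; no client data leaves the box.
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

A call like `ask_local_model("Extract the account number from this note: ...")` keeps the data on-device, which is the sovereignty benefit; pulling in CRM or planning data from the cloud is the orchestration problem discussed above.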
Christopher Hensley RICP, CES: Understood. That makes a lot of sense here. So even just introducing the idea, 'cause I don't know if a lot of advisors or people have heard about this, but the idea of small language models instead of large language models. The benefits that you mentioned were speed, that it consumes fewer tokens,
Christopher Hensley RICP, CES: right? It's less compute, right? It works less, as far as spinning up the tokens. Then data protection: if it's something like a NAS or a local server, it's possibly more private. But then again, if you're backing it up on the cloud, who knows, right? And then the idea of noise, right?
Christopher Hensley RICP, CES: And stuff. If you're using these smaller models and being able to really give it content, sometimes I think it's a big thing of less is more, right? Where [00:20:00] some jobs that you're trying to have it do, it doesn't, like, you used the example of Shakespeare: it doesn't need to know the entire Shakespeare, right, to be able to do one specific task.
Christopher Hensley RICP, CES: I'm seeing some of these white papers out there where they're talking about content curation, content training, and really kind of filtering what you're putting the AI towards. So that part of it is good. But on the other hand, if you're doing some of these larger enterprise-level, full business things, there's just the friction, right, of having to jump from one system that's local to maybe your CRM or something like that.
Christopher Hensley RICP, CES: So as we're, you know, rolling through this stuff, we're getting all of these ideas. But let's talk about structured prompting, 'cause this is something that you talk about. You say wealth firms
Jonathan Michael: Yeah.
Christopher Hensley RICP, CES: using AI without structured prompts have no verification loop. What does that mean in practical terms, and why is it a compliance risk?
Jonathan Michael: Well, I think you've gotta give an answer, you've gotta show a reasoning [00:21:00] trace of how the AI came to an answer or conclusion. If you're using AI for anything that's client-specific or data-sensitive, where you're using client data exclusively to, like, run specific work, maybe it's planning or investment research, whatever it is, you've gotta give a clear response,
Jonathan Michael: a clear reasoning trace, right? Or an audit trail, as one might call it.
Christopher Hensley RICP, CES: Yep.
Jonathan Michael: And I think structured prompting is really important because it reduces your hallucination risk by many factors, I mean, by 77, 80%. Like, you can really dramatically drop, you know, the rate of hallucinations with structured prompting, because these models really like structure, right?
Jonathan Michael: They're designed to receive structured input, generalize and synthesize information from their training dataset, from the web, and then give you a response, right? So when you give clear instructions, the model is clear about what it needs to do. [00:22:00]
Jonathan Michael: An example of structured prompting is chain of verification, right? So everyone knows chain-of-thought prompting; hopefully you've heard of chain-of-thought prompting and you know what it is, uh, which is, you know, structuring your prompt through simple steps and making it clear what the steps are.
Jonathan Michael: So the model knows what it needs to do, and you're able to test the steps, right? But chain of verification lets the model verify its own output automatically, right? A simple example: it's really the phrases, Christopher. It's, like, being able to know what phrases to throw at the model, right?
Jonathan Michael: Let's assume you asked the AI to do something for you, maybe writing investment research or market commentary or something, right? Market commentary, for example: you need to look at a lot of numbers, a lot of data sources, and you're trying to come up with, you know, output that your clients are gonna connect with.
Jonathan Michael: A simple example is: write me a market commentary for today, and make sure you run a chain of verification [00:23:00] after you do that. Right? It's really a simple phrase, and you'll see the model kind of verify its own output.
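The same chain-of-verification pattern can be scripted if you're calling a model from code rather than a chat window. A minimal sketch, with `call_llm` as a hypothetical stand-in for whatever model API you use:

```python
# Chain of verification: draft, list checkable claims, verify them, then
# revise. `call_llm` is a placeholder stub, not a real API client.

def call_llm(prompt: str) -> str:
    # Swap in a real model call here.
    return f"[model output for: {prompt[:30]}]"

def chain_of_verification(task: str) -> str:
    draft = call_llm(task)
    # Step 1: have the model list the factual claims in its own draft.
    claims = call_llm(f"List the factual claims to verify in:\n{draft}")
    # Step 2: verify each claim independently of the draft's wording.
    verified = call_llm(f"Answer each verification question:\n{claims}")
    # Step 3: rewrite the draft, keeping only what survived verification.
    return call_llm(f"Revise using only verified claims.\nDraft:\n{draft}\n"
                    f"Verified:\n{verified}")

final = chain_of_verification("Write me a market commentary for today.")
```

In a chat interface, the single phrase Jonathan gives ("run a chain of verification after you do that") collapses these four calls into one instruction.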
Christopher Hensley RICP, CES: Mm-hmm. Mm-hmm.
Jonathan Michael: The other one is reflect and refine, right? Models are, you know, there's a lot of work that
Jonathan Michael: AI research companies do before a model goes out into production, but one of the, you know, features that they spend a lot of time on is reinforcement learning, right? Teaching a model, based on human feedback, how to get better. The thing is, you can actually use that feature with a simple phrase like "reflect and refine," and you get the model to reflect on its own answers again, and then refine them too.
Jonathan Michael: So with chain of verification, you're verifying facts and data, right? But with reflect and refine, you're having the model reflect on what it said and then refine it. So the refining piece, right? That refinement piece is where you can really get some extra gains. And one other technique that I personally love a lot is called multi-hop retrieval.
Jonathan Michael: I know for a fact, Christopher, [00:24:00] I'm not sure about you, but I think advisors are putting more documents and data into AI than we think. Uh, a lot of them are using it to parse and...
Christopher Hensley RICP, CES: ...to that. But I'm gonna say yes, I think, and not just us; I think all industries are moving in that direction. So it's a problem we're gonna have to solve and figure out.
Jonathan Michael: Oh yeah. Yeah. I think multi-hop retrieval, so that's the phrase; these are all phrases, right? I don't like just throwing out techniques for the sake of it and, you know, sounding too technical or whatever. But multi-hop retrieval: really think of a use case where you have estate planning documents, tax documents, financial planning documents.
Jonathan Michael: Uh, you have the advisor, you have the client's risk profile, you have all this information about the client, right? You want the AI to traverse across all of those data sources and come back with a clear synthesis of what you're looking for. Maybe it's able to find insights that you might not otherwise see, right?
Jonathan Michael: But you run into the question of groundedness, right? When you have the AI parse through all this data, how do you know the model is [00:25:00] grounded? How do you know the model's grounded in the data? So that's when you run that prompt and you say: run a multi-hop document retrieval, and make sure all your responses are grounded in the documents that I shared with you.
Jonathan Michael: And make sure you add source attribution to each data point that you've analyzed. So multi-hop document retrieval, to put it in simple terms, is, uh, getting the AI to hop through all those documents, parse each page, and make sure that every line it's generated for you is actually grounded in the documents.
Jonathan Michael: Right? These are, like, simple things that advisors can start implementing and start, you know, getting value on, like, ASAP. So, yeah.
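As a rough illustration of what such a prompt can look like when assembled in code, here is a sketch; the document names and contents are invented examples, and the instruction wording mirrors the phrases Jonathan suggests.

```python
# Sketch: building a multi-hop retrieval prompt with a groundedness
# constraint and per-claim source attribution. Documents are fabricated
# examples, not real client data.

DOCS = {
    "estate_plan.pdf": "Trust funded 2021; beneficiaries: two children.",
    "tax_return.pdf": "2023 AGI $310,000; $12,000 in charitable gifts.",
    "risk_profile.pdf": "Moderate risk tolerance; 15-year horizon.",
}

def build_multihop_prompt(question: str, docs: dict) -> str:
    # Label each document so the model can cite it by name.
    corpus = "\n".join(f"[{name}]\n{text}" for name, text in docs.items())
    return (
        f"Question: {question}\n\n"
        "Run a multi-hop document retrieval across the documents below. "
        "Make sure every statement in your answer is grounded in these "
        "documents, and add source attribution (the document name) to "
        "each data point. If something is not in the documents, say so.\n\n"
        f"Documents:\n{corpus}"
    )

prompt = build_multihop_prompt(
    "What charitable-giving strategies fit this client?", DOCS)
print(prompt)
```

The closing instruction ("If something is not in the documents, say so") is the groundedness guard: it gives the model an explicit alternative to inventing an answer.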
Christopher Hensley RICP, CES: I love it. So you said a whole bunch right there, so I'm gonna go back and point out some of the things that stood out to me. One, you just talked about the idea of it being grounded, and the idea of going back through the data. I think [00:26:00] about indexing, like going through a giant document but then cutting it up, so that you're making sure it's actually seeing every piece, that it's in those prompts and in the way the LLM is hitting it when it goes back. You know, one of the things we started with, before we talk about documents, right? We started with compliance. So I kind of put a firewall here, 'cause advisors are human beings, right? So there's gonna be cases they're using for their business where we have to answer that compliance question right away.
Christopher Hensley RICP, CES: No ifs, ands, or buts. Right? But then there's stuff they're doing in their personal life where, you know, if you're over on Nano Banana and you're messing around, you don't necessarily have to worry about compliance, right? But from an advisor's standpoint, you know, being able to prove your work, right?
Christopher Hensley RICP, CES: Or I think about when we make an investment recommendation. When we do that, if we ever get called out, right? If someone says, hey, why did you make this recommendation? If you've got data, if you've got investment reports and [00:27:00] information backing that up, that's the kind of thing that will help solve a compliance issue. Same thing with what you talk about with structured prompting and the idea of this verification loop. You're asking it to show
Jonathan Michael: Yeah.
Christopher Hensley RICP, CES: its work. You're asking it to show a trail that if you do get audited, what was your reasoning in making this recommendation? You have a way to answer that. No, if ands or buts.
Christopher Hensley RICP, CES: You've got a re show your receipts. Right? so I, I like that. I like that, that idea there.
Jonathan Michael: That's a prompt too. What you just said is a prompt too.
Christopher Hensley RICP, CES: I didn't
Jonathan Michael: Prove to me that what you just generated is grounded in SEC regulations. That's a prompt. Right?
Christopher Hensley RICP, CES: Yep. So having your
Jonathan Michael: That's.
Christopher Hensley RICP, CES: marketing angle, just run it through that and just make sure. Now, you did mention the word hallucination. That's one of the things this is helping with, right? We're having fewer and fewer hallucinations, and you mentioned that this way of prompting helps [00:28:00] drop that rate of hallucination. You only need one, right? You've got everything working, and one crazy hallucination messes you up. So the idea of structuring things in such a way that you're dropping the rate of those hallucinations is good. We're moving in a good direction there. Alright. Wow, we've got about two minutes left, so we're bumping up against the end here.
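The verification loop the two discuss (generate, then make the model prove its work, then keep that proof as an audit trail) can be sketched in a few lines. The `ask_model` callable is a stand-in for any LLM API, and the PASS/FAIL convention is an assumption for illustration, not a described product feature.

```python
# Hedged sketch of a verification loop: generate an answer, run a second
# "show your work" pass that checks each claim against the source documents,
# and retain every check as an audit trail for compliance. `ask_model` is a
# hypothetical stand-in for whatever LLM API the firm uses.

from typing import Callable

def verified_answer(ask_model: Callable[[str], str],
                    question: str, documents: str,
                    max_retries: int = 2) -> tuple[str, list[str]]:
    audit_trail: list[str] = []
    answer = ask_model(
        f"Using only these documents:\n{documents}\n\nAnswer: {question}"
    )
    for _attempt in range(max_retries):
        check = ask_model(
            "Prove that every claim in this answer is grounded in the "
            "documents. Reply starting with PASS or FAIL, then your "
            "reasoning.\n\n"
            f"Documents:\n{documents}\n\nAnswer:\n{answer}"
        )
        audit_trail.append(check)  # keep the reasoning for a future audit
        if check.startswith("PASS"):
            break
        answer = ask_model(f"Rewrite the answer fixing these issues:\n{check}")
    return answer, audit_trail
```

The audit trail is the "receipts" Chris mentions: if a recommendation is ever questioned, the stored verification passes record why the answer was accepted.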
Christopher Hensley RICP, CES: Let me pick one last question for you. We could talk all day. You can tell. I love, I love this stuff. Let's
Jonathan Michael: So.
Christopher Hensley RICP, CES: let's talk about agent deployment. You mentioned that building an agent is the easy part. What are the hard parts when deploying agents inside a financial advisory firm?
Jonathan Michael: Well, if you look at what we're doing here at AXIS, you know, we're working with firms, aggregators who are working with advisors in transition, right? So you're moving across firms and you're trying to pull a lot of data, trying to bring it into another data environment. When you look at data migration for an advisor's tech [00:29:00] stack, think about the entire spectrum.
Jonathan Michael: You have CRM, portfolio management, portfolio accounting, custodian data. Then you have document management. That's a lot of structured and unstructured data. What do you do with all of that? How do you migrate it really well, safely, and how do you run AI workflows on top of it? And so what we've done is we've built out this data infrastructure that lets you pull all of these disparate data sources into an ontology.
Jonathan Michael: Basically, an ontology is, you know, converting and transforming all these data sources into objects that have relationships with each other, right? You can quickly form relationships between all these different data objects, and that's really important when you're running AI systems.
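The ontology idea (disparate records becoming typed objects with explicit relationships) can be pictured with a toy data structure. The class and field names below are illustrative assumptions, not TIFIN AXIS's actual schema.

```python
# Toy sketch of an ontology: CRM, custodian, and document records become
# typed objects with two-way relationships, so an AI workflow can traverse
# connected data instead of raw files. Names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                 # e.g. "client", "account", "document"
    key: str                  # unique identifier within the ontology
    attrs: dict = field(default_factory=dict)
    links: list["Node"] = field(default_factory=list)

    def link(self, other: "Node") -> None:
        """Form a two-way relationship between data objects."""
        self.links.append(other)
        other.links.append(self)

# Usage: connect a client to an account and a document from different systems.
client = Node("client", "C-001", {"name": "Jane Doe"})
account = Node("account", "A-9")
doc = Node("document", "risk_profile.pdf")
client.link(account)
client.link(doc)

# Traverse: everything connected to the client in one hop.
related = [n.kind for n in client.links]
```

Once the relationships exist, a question like "summarize this client" can pull the CRM record, the custodian account, and the risk document in one traversal, which is the connectedness Jonathan says matters for AI systems.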
Jonathan Michael: Because you wanna make sure that all of this data is connected and that you can run AI on top of it. But there's also the physical data isolation, right? So it's not like a logic-based multi-tenant [00:30:00] SaaS-type environment where you have your data sitting in the cloud and your client IDs could get mixed up and whatnot.
Christopher Hensley RICP, CES: Right.
Jonathan Michael: When you're dealing with advisor data, you wanna make sure the data is physically isolated, the PII is anonymized, all of that. On top of all of that, you have zero data retention, so that you can run AI smoothly without any fear of data commingling and whatnot.
Jonathan Michael: Right. So the physical data isolation piece, like literally having physical hardware dedicated to your data sources, that's a big piece. That's defense-grade architecture that we're trying to deploy within AXIS for running our agents.
Christopher Hensley RICP, CES: love
Jonathan Michael: And so in the transition process, as you can imagine, there's a lot of data to bring in and deploy. So we try to take that very seriously. Yeah, that's the hard part. That's the hard,
Christopher Hensley RICP, CES: So
Jonathan Michael: yeah.
Christopher Hensley RICP, CES: that's the hard part, what's the easy part? So we are right here at the end. Jonathan, [00:31:00] thank you so much for being on the show today. What a good place to, to leave it at, because I think it gets everybody's cogs moving in the right direction. Jonathan, for advisors who'd like to learn more about your company, but also I know you have the YouTube channel, we'll
Jonathan Michael: Phone.
Christopher Hensley RICP, CES: to the YouTube channel on our YouTube channel to to, so that people can find it.
Christopher Hensley RICP, CES: Where can people find you at?
Jonathan Michael: So there's multiple places. There's tifin.com/axis, where you can learn about TIFIN AXIS and you can schedule demos to learn more about how we're building AI. But also my personal LinkedIn, where I create content every week. I have a YouTube channel called AI for Wealth, and a newsletter called Wealth Management Prompts where I
Jonathan Michael: try to publish best practices for prompt engineering for advisors every week. So you can find me on LinkedIn too.
Christopher Hensley RICP, CES: I
Jonathan Michael: Yep.
Christopher Hensley RICP, CES: got multiple hats. You've got your main gig and you've got your creator gig much like me, so
Jonathan Michael: Yep.
Christopher Hensley RICP, CES: I love it. Jonathan, thank you so much for being on the show. We'll have to have you back. It's such a big topic. I feel like we just kind of hit the surface here, so [00:32:00] have a good rest of the day there.
Jonathan Michael: Thank you, Christopher.
Director of Growth @TIFIN AXIS
Jonathan has 10+ years of experience as both an EdTech and WealthTech founder. After founding startups and consulting for leading wealth managers as a go-to-market leader and product builder, he's now at TIFIN AXIS solving wealth management's biggest scaling challenge: giving firms an AI-powered digital workforce that takes on the operational heavy lifting. He sees a future where every advisor has a team of AI workers amplifying their expertise, allowing them to serve more clients more deeply than ever before. Jonathan is an avid runner and passionate about pushing the boundaries of personal development. He writes a weekly LinkedIn newsletter called "Wealth Management Prompts" and interviews leaders in wealth on his YouTube channel, "AI for Wealth".



