Joseph L. Breeden, CEO of Deep Future Analytics, on the hidden math behind credit risk

Enjoying the podcast? Don’t miss out on future episodes! Please hit that subscribe button on Apple, Spotify, or your favorite podcast platform to stay updated with our latest content. Thank you for your support!

Joseph L. Breeden, CEO & Founder, Deep Future Analytics

Today, I sit down with Joe Breeden, CEO and founder of Deep Future Analytics (DFA), for what is, in effect, a two-part conversation. First we do a deep dive into credit risk management and where it is falling short today and then we discuss AI monitoring and governance and how it is going to revolutionize software.

The conversation covers why traditional machine learning models are missing critical components for accurate risk assessment and how adverse selection has dramatically impacted loan quality in recent vintages. Then Joe makes a bold prediction that software user interfaces are on the verge of a transformation that will render them unrecognizable from previous versions. Curious? All is revealed in this fascinating conversation.

In this podcast you will learn:

  • How he got started with Deep Future Analytics
  • The state of credit risk management in banking today
  • What is missing, even with fintech lenders who are using machine learning
  • What lenders get wrong when they focus on credit score
  • Why loans booked since 2022 are lower quality than even 2006-07
  • What kind of lift lenders or investors can see with DFA’s models
  • What Joe sees in the pool of borrowers today
  • Why DFA moved into AI monitoring and governance
  • Who is using these new AI models they have developed
  • Why a “human in the loop” is not an effective monitoring method
  • What their new Strategic Recommendation Agent (SyRA) does
  • How they incorporate Large Language Models to ensure zero hallucinations
  • Why the future of software is dashboards on demand and analytics on demand
  • How Deep Future Analytics is different from others in the market
  • What it is going to take before software moves to a chat-based interface

Read a transcription of our conversation below.

FINTECH ONE-ON-ONE PODCAST NO. 549 – Joseph L. Breeden

Peter Renton:

Today’s episode is brought to you by the AI Native Banking and Fintech Conference, happening in Salt Lake City on September 30. This event, hosted by Spring Labs and in its second year, is all about practical applications of AI that are in production at banks today. There’ll be no hype, no fluff at this event, just real-world use cases for banks and fintechs. Join the more than 400 people who will be learning about AI solutions they can purchase today. There’ll be big name keynotes, panels of bankers comparing notes, case studies and demos from some of the cutting-edge AI companies. Use the coupon code PODCAST20 to receive 20% off your ticket at conference.springlabs.com.

Joe Breeden:

When rates are good, you bring in the value shoppers, those who are financially savvy. When they’re bad, you’re left with the others who are always borrowing regardless of conditions. In fact, in commercial lending, it was even more dramatic. You can see this in our data also, in C&I and CRE: in 2021, just as rates were starting to go up, there was a rush to borrow by the CFOs who were more savvy. They saw their last chance to get the funds they needed. Those were good quality loans. In 2022, you got the rebound. The ones that were left, who didn’t come in in time: something is wrong there. They should have done better.

PR: This is the Fintech One-on-One podcast, the show for Fintech enthusiasts looking to better understand the leaders shaping Fintech and banking today. My name is Peter Renton and since 2013, I’ve been conducting in-depth interviews with Fintech founders and banking executives. Today on the show, we are talking credit risk and AI with Joe Breeden, the CEO and founder of Deep Future Analytics. Now, I think Joe is one of the smartest people in all of credit. He certainly has more experience building credit risk models than just about anyone. He talks about how most machine learning models used today are missing a key ingredient. He also discusses why recent vintages have not performed well and why he saw that coming in 2021. We also do a deep dive into their AI monitoring product, which was incredibly fascinating. Joe also makes the bold prediction that software user interfaces are about to change dramatically. Now let’s get on with the show.

PR: Welcome to the podcast, Joe.

JB: Thank you. Good to be here.

PR: Great to have you. So let’s kick it off by giving the listeners a little bit of background about yourself. I know you have a PhD in physics, which is not all that common in FinTech, I would say. I don’t know, you may even be the first person on the show that has had that illustrious qualification. But tell us a little bit about your career and your academic work that you’ve done so far.

JB: Well, my physics background, I was doing chaos theory primarily in astrophysics and some other examples, a lot of simulation and numerical methods, all of which gets wrapped into data science these days. So it’s probably just as accurate to say that I’ve been doing data science for 30 years. Right. And I do love data and science in general, but I have modeled all sorts of data: sports betting data, crop yield data. I could go on for a while about different things, tree rings and climate change and such, but it’s all connected. It’s all the same stuff.

PR: Interesting. Tell us a little bit about your career as far as how you started some companies. Take us through that journey.

JB: Yeah, so my first engagement in the banking industry was when I was hired into a think tank that Citibank created jointly with Los Alamos National Lab. They brought in a bunch of physicists and gave them all assignments. John Reed, CEO of Citibank at the time, kind of randomly pointed across the room at me, and I was tasked with stress testing in emerging markets. And so while sitting in Thailand gathering data, I was educated on vintage data and said, well, if you can get these insights from graphing this, I bet there’s math you could do to actually analyze it. That analysis wound up being my first company, called Strategic Analytics, based on this idea of vintage analysis to do stress testing, forecasting and such. In fact, it was quite successful. I have probably too many stories there, but through the global financial crisis, those models were quite effective. But by the crisis it was too late; the loans had already been booked. It didn’t help that we could predict two, three years in advance how much was going to go bad. They were already sitting there. So we decided that we needed to solve that account-level stress testing problem upfront, to have these insights at point of origination.

So we founded Prescient Models initially to do some consulting around the area and then we developed software and created a joint venture company called Deep Future Analytics, which some years later wound up being the parent. So that’s where I sit today.

PR: Right, right. That’s where you first came onto my radar, with Prescient Models back in our LendIt days. But before we get into Deep Future Analytics, I want to talk about the current state of credit risk management in banks, and fintechs for that matter. There’s a lot of work that has been done, but it seems to me that particularly in some of the banks, they’re still doing the same thing they did 10, 20, even 30 years ago. Tell us what your perspective is on credit risk management in banking.

JB: I do think that we have a tendency to get excited about new technologies to do the old things in a more refined way, when the system is more the problem than the model that sits in it. This is not a new insight across the business universe; there are plenty of examples like this. And so if you look at machine learning, this revolution that was supposed to solve a lot of problems, in fact almost all machine learning models that are deployed today are basically a slightly nonlinear version of the same credit score structure that you had in the 60s. And therefore, we’re still booking the same kinds of loans that we’ve had in the past. In the last few years, you’ve seen a dramatic downturn in loan quality because of some shifts in the environment. I wouldn’t blame the machine learning models for not capturing that, but I would say that the excitement about machine learning missed some fundamental problems. We haven’t solved the basic problem of being able to predict cash flows and yield at point of origination with reliability. I say we as an industry; we at Deep Future Analytics in fact have. That’s what we intended to do. That’s why I’m here.

PR: Right, right. Well, I mean, the fintech companies certainly have positioned themselves, and some of them are public companies these days. You know, if I had one of those fintech CEOs talking to us right now, they would challenge your assertion, I’m sure. So maybe you can push back on that challenge. There are a lot of fintech companies that have built their brand around the fact that they do machine learning that is far more predictive than, and superior to, the FICO models of the past.

JB: Well, I would say yes, they’re doing better, but the systemic problem is pricing. It’s the connection between your score and finance. So you’re building a score with a higher Gini that does a better job of ranking risk between consumers. What it doesn’t tell you is when the entire pool of borrowers shifts. We call this adverse selection. There are very few lenders that are properly measuring and incorporating adverse selection into their underwriting, and I can say more about what that is. And then even when you do, even if you have a good estimate of these things, finance is setting pricing by looking at the competitive market: what are my competitors charging? What can I charge? But it’s very rare for finance to be taking a credit score and connecting it to account-level cash flow projections and yield, because they don’t connect. There’s this kind of magic dust in the middle that tells you what the PD and the prepayment are gonna be based on your origination score. There’s some missing math there, and that’s what we’ve solved. So I wouldn’t challenge somebody for saying that they’ve built a more accurate frontline score. I would contest whether they’re really getting the yield forecast right.

PR: So we’re not PhD-level mathematicians here generally, nor are the listeners of this show. So what math is missing? Can you explain that in relatively simple terms?

JB: Yeah. So for any analysts listening, the shift is this: they use cross-sectional data, which basically asks, will this account default in the future or not? Yes or no. That’s not good enough, because there’s a difference between somebody who defaults in a recession and somebody who defaults in a good time. It’s not yes or no; it’s if yes, then when. They’re missing the why of it based on the environment. And also, there are times when there simply are no good applicants. When people focus on credit score, there’s a presumption that the good borrowers are out there and I just have to find them. As a physicist, I would say your credit score is the second-order effect. That is not the first thing that happens. The first thing that happens is a self-selection by the borrower: do I want a loan now? If I do, then I will apply. And the reason they might say no is if interest rates run up suddenly, as they did a few years ago. Your value shoppers, your financially savvy consumers, will say, not now. I’ll keep driving this car a couple of years more. I’ll wait to get a new home. And so when I said that our models predicted very well, back in early 2006, what was going to come in ’08, ’09, it was because we could see that the pool of borrowers, the applicant pool, was not what you would normally expect. So this is what’s missing in the credit scoring process: there’s no recognition that you may choose the best applicants, but they still might be bad (the adverse selection), and that you need to normalize these things for what’s going on in the economy before you build your score.

PR: So then are you saying we need more data points? Or are you saying that with the typical data points on an application, combined with macroeconomic data, you can build a better model?

JB: You can build a better model out of what you have. It’s great if you’ve got unique, exotic variables, and that’s where your machine learning is going to come in and all that cool stuff. But what we’ve said, and in fact we’ve done the research, is: I can take anybody’s machine learning model and say, just let me restructure how you estimate things so that we adapt to the environment. And then we measure the residual risk, and both those things will allow us to get you an account-level yield forecast rather than just an account-level, arbitrarily calibrated FICO-type score.

PR: Okay, so then the big question is, what are you seeing today? Because there is certainly a lot of economic uncertainty. We’ve got a lot of data that is murky, given the impact, or the potential impact, of tariffs and what that may do down the road to the unemployment rate if we hit a recession. It seems like there’s a lot of unknowns. How are you incorporating that, and what are you seeing?

JB: I’m a little surprised, career-wise, that I’m at a point where I can say I’ve been doing this for 30 years, literally. And in my experience, this is the most volatile period. The last few years have been surprisingly volatile, volatile in the sense that the through-the-door credit quality changed dramatically. I would say the loans booked in the last few years, in intrinsic quality, are worse than what was being booked in ’07, ’08, right? But we haven’t had a recession and we haven’t had a collapse in house prices. In 2009, you had bad quality loans in a recession and a collapse in house prices. So you had high default rates and low collateral values. Right now we’ve only got one of those three problems, but it is across all product types. I’ve been saying for the last year at least that rates would start to come down, but things keep happening and they haven’t.

But if rates come down, you’ll draw back some of the better borrowers and you always book the best loans at that peak of a recession, at that peak of risk, because that’s when the Fed lowers rates. And so your savvy consumers with resources, with a job will say, now’s a good time to buy a car. So maybe early next year, there’ll be that better wave of borrowers, even as we face a recession.

PR: So then I want to talk about like with your clients, how are they using models, your software and what kind of a lift do they see?

JB: It’s probably worth mentioning what kind of clients we have.

PR: Yes, let’s talk about that too.

JB: Because it’s an interesting point that what I’ve described, math that connects credit risk, underwriting and financial calculations, cuts across at least three different silos at most of the bigger organizations. And that makes it tough to put in even what I consider a small systemic change. When we go to fintechs, finance companies, digital banks in Asia, investors buying loans, all these groups have less legacy and a kind of yield-first mentality. So it’s been much easier to say, hey, let me just tweak how the score is built and then you’ll have a yield forecast, or to talk to an investment group who say, we need good cash flows and yield forecasts, so we do this directly. Those are the groups we’re talking to about putting in this kind of system, basically.

PR: And so what about, you know, let’s just take the investors, for example. Typically an investor just wants to get yield, and they want a more accurate projection of the pool of loans that they’re about to buy. What does your system tell them, and how much more accurate is it than what they have been using?

JB: Let me give you a story. There was a group buying a large pool from a fintech, and we ran a model based on the internal monthly account-level data, a purely custom model, but we can do this pretty fast. As part of the output, there was a monthly forecast for each account, and it included the residual risk relative to the credit score, the adverse selection, basically this adjustment for the intrinsic risk of the borrowing pools. And you can stress it under different economic scenarios going forward. So it had all those things; you have a stressed yield forecast. When we delivered that, they came back and said, well, wait a minute, these loss rates are higher than anything this fintech originator has experienced before. I said, yeah, because they’ve never been through a credit cycle where all the applicants are bad, the way we had back in ’06. New product, new experience. And they said, well, we had negotiated a credit insurance add-on to our purchase, and you’re saying we’re going to blow through that day one. It’s like, yeah, it’s been that bad. So that’s the value proposition here: measuring these residual risks and giving stressed yield forecasts that are right down to the accounts. Because then they can go back and say, here’s the percentage of these accounts that we’re going to shave off the deal; it’s just too high a risk to be worthwhile.

PR: Was that this year you’re talking or is that in like last couple of years?

JB: I lose track of time on this a little bit. It may have been the end of last year.

PR: Right, it was relatively recent. You say it’s been pretty volatile. I think everyone expected, when we had all the tariff talk in April, that by now we would be seeing a lot more bad economic news, and it hasn’t happened. So maybe I can ask this: the projections you did a year ago, how are they holding up?

JB: Well, I’m always careful to distinguish between the projection and the scenario. On the economic scenarios, we actually do a lot. We have a lot of smaller lenders who can’t afford expensive scenarios, so we have an ensemble of deep learning neural nets that generate economic scenarios, and we just pass those out to our clients for free. Those scenarios are, of course, conditional on some of these things. We are not going to predict a tariff policy and whatnot. Our economic scenarios, I would say, are no better, no worse than anybody else’s on average. The models have been quite effective because we did pick up on adverse selection early, and we’ve been able to track the quality of these pools. And I have to say, back in late 2021, early 2022, this time, well…in ’06, all we did was measure and report. This time we predicted: if you book loans now in 2022, you should expect a quality deterioration of this magnitude. We had a couple of clients who took that warning, changed their underwriting emphasis, and have told us that the ROI on that advice was dramatic. They have a 0.5% loss rate on certain products compared to peers with 1.5%, which kind of saved the organization. You know, that’s what should happen with some of these things.

PR: Okay, okay. So you’re saying in 2022 you started to see this adverse selection creeping in, where interest rates rose dramatically faster than they’ve risen for decades, and so people who were borrowing then really had to borrow. Is that what caused the adverse selection?

JB: Yeah. So you could say that there’s always a mix of personality types in the market. When rates are good, you bring in the value shoppers, those who are financially savvy. When they’re bad, you’re left with the others who are always borrowing regardless of conditions. In fact, in commercial lending, it was even more dramatic. You can see this in our data also, in C&I and CRE: in 2021, just as rates were starting to go up, there was a rush to borrow by the CFOs who were more savvy. They saw their last chance to get the funds they needed. Those were good quality loans. In 2022, you got the rebound. The ones that were left, who didn’t come in in time: something is wrong there. They should have done better.

PR: Interesting, interesting. Okay. I could keep talking about that for the next half an hour, but we need to move on, because there’s a whole nother side of your business that I want to get into, and that is the AI piece. Why don’t you tell us a little bit about the AI governance and monitoring side of your business.

JB: Yeah, so I’ve volunteered as president of the Model Risk Managers’ International Association for about the last eight years, I guess. And in that capacity, I’ve been managing the model risk management of AI and machine learning track for our conferences and our publications for several years now. And just last year, people started saying, you know, we’re installing all these gen AI systems and we’re not sure how we’re going to oversee them. And human-in-the-loop oversight was the standard answer that everyone gives. And you’ve met humans, right, Peter? This is not going to work, whether from capacity or boredom or distraction. The psychologists understand this quite well. So the answer was obvious, and I just wasn’t seeing anybody do it. So we decided to step up and create a product, which is essentially AI-augmented human in the loop, where we use the AI to scan communications, filter out the ones that look fine, and highlight the ones that are questionable, promoting those to a dashboard for a human to oversee.

Now, what qualifies as good or questionable is based on a set of business rules. These could be regulations, ethics. It gets fascinating. It’s obvious if you think about an AI call center and you’re worried about what the AI agent is going to say. It’s more subtle when you think about an internal AI chatbot for your staff to use. Then it’s not the answers that I’m worried about. It’s the questions people ask.
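The monitoring pattern Joe describes, scanning everything automatically and promoting only rule-tripping items to a human reviewer, can be sketched in a few lines. This is a minimal illustration only; the class names, rules, and structure below are hypothetical, not DFA's actual product.

```python
# Minimal sketch of AI-augmented human-in-the-loop monitoring:
# business rules score each message, and only messages that trip
# a rule are promoted to a human review queue (the "dashboard").
from dataclasses import dataclass
from typing import Callable

@dataclass
class Message:
    author: str
    text: str

@dataclass
class Rule:
    """A business rule: a label (regulation, ethics, policy...) plus a predicate."""
    label: str
    matches: Callable[[Message], bool]

def monitor(messages: list[Message], rules: list[Rule]) -> list[tuple[Message, list[str]]]:
    """Filter out messages that look fine; promote questionable ones,
    tagged with the rules they tripped, for human review."""
    review_queue = []
    for msg in messages:
        tripped = [r.label for r in rules if r.matches(msg)]
        if tripped:  # only questionable messages reach a human
            review_queue.append((msg, tripped))
    return review_queue

# Note the rules can target the *questions* staff ask, not just AI answers.
rules = [
    Rule("hiring-screen", lambda m: "resume" in m.text.lower()),
    Rule("policy-loophole", lambda m: "another way" in m.text.lower()),
]

flagged = monitor(
    [Message("staff", "Can I run resumes through this?"),
     Message("staff", "What's our refund policy?")],
    rules,
)
```

Here only the first message reaches the review queue; the second looks fine and is filtered out, which is what keeps the human workload manageable.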

PR: Right. Right.

JB: And so AI monitoring can actually mean monitoring the humans using the AI because you don’t want to introduce an AI accomplice to inappropriate or nefarious behavior amongst staff.

PR: Right, right, for sure. So we are talking about, this is obviously beyond the lending space, right? So who is using this? Who are you targeting for this type of monitoring?

JB: Well, it was easy to talk first to people in regulated industries, or high-risk, high-legal-risk industries. So we still have our contacts in finance and insurance, and questions are starting to come in from healthcare. A lot of these go beyond just regulatory compliance, getting into a lot of business risks. Some of the areas that AI is being put into, and the possible risks that come up, are surprising. In fact, I’ve had people describe certain things as, well, we’re doing this because it’s low risk, you know, starting with AI here. And once you start to see the creative ways that your staff might put things to use, you realize there are very few low-risk situations. It’s more unknown risk. You’ll find out later what the risks are.

PR: Right, people think, if this is just going to be used by my internal staff, how much damage is it going to do, really? Can you give me some examples, then, of how this is being used in banking and finance? How are your clients using it today?

JB: Yeah, recently one of the entertaining use cases: we often hear about RAG models. Retrieval-Augmented Generation is where you create one of these large language model systems whose job is to go find a policy document, hand it to the support agent or whoever, and say, here’s your answer. Well, that sounds fine. It sounds safe, low risk, until you think about the follow-up question. The staff, the human agent, says, well, I’m trying to help this consumer do what they need, and I see the policy, but is there another way that I can help solve this customer’s problems? Well, that’s where the AI accomplice comes in. Do you really want to give them a system that can help them find loopholes? So even there, the so-called low-risk, low-hallucination RAG system actually is a backdoor to getting around policies instead of complying with policies.

And then there’s just the general internal enterprise chatbot. I’m friends with the head of AI for a big insurance company. They rolled out one of these, had a training session, and the first hand that goes up says, well, I’m from HR and this sounds great. I can run resumes through this to see who complies with our job posting. No, no. Legal jumps up and says, you can’t do that. AWS has problems with that right now. And somebody from corporate policy underwriting says, well, this sounds good. I can do some background research on these companies as I’m writing policy. No, not for decision-making. You can’t do that. So you wind up with a do-not-do list because of the legal risks if you did those things. One of the other team members at MRMIA led the creation of a document on safe use of AI, how to get started. And he and his team came up with a one-page do-not-do list of about 10 different items. There are millions of dollars of legal risk on that list, nothing to do with regulation. But yeah, you have to think it through. And you can train people not to do these things, but you still have to watch and make sure they pay attention.

PR: Right. Because they might go, that risk is overblown, let’s just do it this one time, kind of thing.

JB: Got a deadline tomorrow. I’ve been out of the office. There’s always a reason, right?

PR: They may not be intentionally doing it, or may not be meaning to take shortcuts, but they have to for whatever reason. But anyway, I’m really curious about this new product of yours. It’s called the Strategic Recommendation Agent. How do you pronounce it? SyRA, is that how you say it? S-Y-R-A.

JB: I like “sera.” It’s easier for me to say.

PR: So tell us about it. What is it for?

JB: I have to say that even with all the changes in AI that have come out lately, my inspirational moment was watching AlphaGo defeat Lee Sedol in Go.

PR: How long ago was that? That was a while ago.

JB: It was like 2016, something like that. That’s what I thought. Because if you read up on the structure of the system they had, it was an agent creating essentially simulations, and another system for scoring. So there are various pieces there. And as I looked at it, I thought, well, we have all those pieces in our API. Analytically, I can simulate the overall P&L of a financial institution down to the individual loans and deposits, month by month. I’ve got everything from the lowest level to the highest level that I can connect and run a simulation. There’s way too much there. The reason people don’t buy that is it’s too much information. They’re overloaded. Nobody has the capacity to drill into every segment to find all the detailed strategies.

What you need is something like an AlphaGo that can run those strategies in the background and find what you’ve been missing: the risks, the opportunities, the strategy changes, and come up every month and say, hey, here are some things I found. So that was something that we started working towards back in 2016. There were a lot of pieces to put in place, data sources, algorithms. Then when the large language models came out, I realized, well, I don’t have to create the strategic agent now. I have a large language model. Now, people have already slapped a large language model on top of call report data so you can ask it questions. It’s not impressive, because they’re language models, right? They don’t do math well. They don’t do strategy and logic all that well. They do some of it based on what’s encoded in our language and our documents.

But what they’re really good at is translation. And so we realized the way you should be doing this is take a human question, translate it into a server request, let the API and the data sources generate the answer and bring it back to the large language model that then puts it into a report, a paragraph, a sentence, whatever for the user. So that large language model in the middle is just having a two-way conversation. It’s not trying to invent an answer. So it means you’ve got a zero hallucination system tapping into our resources. Now, I thought I had this great idea a few years ago on how to do this. And I think Anthropic is doing something very similar in some of their products now. So I won’t claim to be unique, but I can unlock a lot of our analytics and data resources with a system like this, so that our clients can tap into things that we’ll never have time to build a user interface for. They can tap into our full libraries and kind of do self-serve consulting.

PR: Hmm. That’s really interesting. And I know when we chatted a little while ago, you talked about this whole kind of approach where software interfaces like the way traditional software, whether you’re using Salesforce or QuickBooks or whatever it might be, it has a kind of a structure to it, right? There’s menus and you go make selections and that takes you down to a different part of the software.

And what you were saying, which I thought was fascinating, is that you want to replace all that with basically a chatbot, or a conversational interface, let’s just say. Tell us a little bit about what you mean.

JB: Yeah, literally that chat window, you’ll see it immediately appear in our existing products. We have products like everyone else that has tabs and menus and windows and just at the bottom of the homepage, we’ll put in this prompt window where you can ask for whatever you want. If you remember the path to get to a certain thing, you can do it as you do now, but you can ask for new things that we’ve never built a user interface for.

And at the same time, we’re developing a system that has none of that other stuff. It is just the prompt window. In addition to bringing you back the pieces of answers that you want, think about how you can create a dashboard from scratch right now by going to one of these tools and saying, I want tables and graphs and whatever. I believe the future of software is just dashboards on demand. There’s no point building user interfaces; the consumer will do that with their question. And they’ll be able to memorize that dashboard if they want to come back to it in the future. So yeah, it’s basically analytics on demand, dashboards on demand.

PR: I want to go back, actually, to what you said around the zero hallucinations, because I don’t want to gloss over that. It’s one of the biggest pushbacks that you get from bankers when they’re talking about implementing new AI software. So the reason you get zero hallucinations, I just want to be clear, is that because you’re training it just on the bank’s own data? Or maybe you can just explicitly explain how you end up with zero hallucinations.

JB: Yeah, well, SyRA, as an agent, the initiation prompt she gets is: don’t answer questions yourself. Any question you get is a translation request. Translate the question to code, API, server calls, data requests; translate the result of that back to English for the user. You’re a translation agent: human to code, result to human.

Now, can SyRA make a mistake in translation? Sure. In which case, either the code doesn’t run or the answer is not what they asked for, but it’s not a hallucinated answer. It’s always based on the actual data and algorithms that we have. And I think this is where these algorithms are most powerful: anytime you turn it into translation, they can make mistakes, but they’re not hallucinating. This is like when you use one as a coding copilot. Nobody says that it hallucinates when it writes code. It just may make mistakes. Well, you can correct the mistakes.
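The translation-agent pattern Joe is describing can be sketched as a pipeline. Everything below is illustrative: SyRA's real prompt, API, and data sources are not public, so the function names, endpoint, and toy data store are assumptions standing in for an LLM and a real backend.

```python
# Sketch of the zero-hallucination "translation agent" pattern:
# the language model never invents an answer, it only translates
# between the user's question and a deterministic API.

def translate_to_request(question: str) -> dict:
    """Stand-in for the LLM's first translation step: question -> API call.
    A real system would prompt a model; here we just key off the text."""
    if "loss rate" in question.lower():
        return {"endpoint": "portfolio_metrics", "metric": "loss_rate"}
    # A failed translation is a visible error, not a made-up answer.
    raise ValueError("could not translate question")

def run_request(request: dict) -> float:
    """The deterministic backend: actual data and algorithms produce
    the number. The language model never touches this step."""
    data = {"portfolio_metrics": {"loss_rate": 0.005}}  # toy data store
    return data[request["endpoint"]][request["metric"]]

def translate_to_answer(result: float) -> str:
    """Stand-in for the second translation step: result -> English."""
    return f"The loss rate on this portfolio is {result:.1%}."

def ask(question: str) -> str:
    # Two-way translation: human -> code, result -> human.
    request = translate_to_request(question)
    result = run_request(request)
    return translate_to_answer(result)
```

The point of the design is that a mistranslation either fails to run or returns the wrong metric; it never fabricates a number, which is the sense in which such a system is "zero hallucination."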

PR: Right, because it’s translating it from human language to programming language. Yeah. Okay.

Interesting. Interesting. All right, Joe, I want to talk about your position in the market here, because for a lot of the things we’ve described, particularly with the credit risk, there are others that would say they’re doing similar things to you. And you just mentioned Anthropic with AI, although it seems to me their financial stuff is really geared more towards financial data for trading. That’s what I’ve seen. Maybe you could talk about how you are different from others in the market.

JB: Well, we focus on lending: fintechs, investors, credit unions, community banks, regional banks. And when I say investors, I mean loan pool buyers, private credit, those folks. So it’s a unique set of data, a unique field of analytics. Our expertise from the last 30 years, developed and put into algorithms that can be called, is a unique advantage. There are folks who can create algorithms to mimic some of that. I would argue about whether they really do what we do, but there are very few who’ve built this into a reusable API the way we have. So that’s where I think we have an opportunity to put a large language model on top of it and provide something unique. Now, unique in our industry, and I do agree with you that I think this is the future of all industries. There’ll be an industry-specific data set and API for everybody. Everyone will have an agent for their industry to help them do what we’re talking about. We just want to be that for lending.

PR: Gotcha. Interesting. Okay. So you’ve painted a really interesting picture here. The transition to that future you just described: what are the steps that need to happen for us to get to this place where every industry has its own LLM AI agent doing a lot of this translating, shall we say? What’s your timeline and what needs to happen on the development side?

JB: Well, you can get to a dumb agent really fast, and that’s what has happened already: they have the freedom to answer the questions themselves and look up their own data, and you get hallucinations. It takes a few months to build the backend system to pass the queries back and forth, and to build a user interface and some of that stuff. How long it takes to have the algorithms behind that which return quality results is a question of how long you’ve been doing this and whether you thought ahead to needing an API. You know, if I were a VC, I would not be putting money into a fresh startup to build these algorithms, because in any industry, what you’re talking about is being able to simulate an entire institution. And it requires a lot of background knowledge. For example, I would not try to roll this out in healthcare. There’s too much I don’t know about health insurance or hospital maintenance or all these different things. You need that expertise.

PR: Right. Okay. Well, I’ll tell you what, this has been such an interesting discussion. You’ve given the audience a lot of food for thought here. So thank you very much for coming on the show today.

JB: Thank you. It’s great to be here.

PR: I hope you enjoyed that interview as much as I did. There were so many great takeaways. It’s hard for me to focus on just one, but I love Joe’s thinking around user interfaces. As people get more used to using the mainstream AI tools in their everyday life, like ChatGPT, Gemini, or my favorite, Claude, they will start to wonder why they can’t interact with all their software tools this way. So I think Joe was right on the money here to design his software with a chat interface. I think in a very short time, probably within two years, most tools will have vastly different UIs than what we see today.

Anyway, that’s it for today’s show. If you enjoy these episodes, please go ahead and subscribe, tell a friend or leave a review. And thank you so much for listening.