AI Meets a $2T High-Trust Industry: How Newfront CEO Made It Work

Wade (00:01.356)
All right, folks, welcome to Agents of Scale. It's a show where we talk to CEOs and leaders who are figuring out how to operationalize AI across their companies. I'm Wade Foster. I'm the co-founder and CEO at Zapier. And today we've got a great guest, Spike Lipkin, who's the co-founder and CEO of Newfront. It's a company that is rethinking how the $2 trillion commercial insurance industry works all the way from the ground up. Spike started at Blackstone, worked at Opendoor, and moved on to start

Newfront. Spike, welcome to the show.

Spike Lipkin (00:34.079)
Thanks for having me, Wade. Excited to be here.

Wade (00:36.142)
Well, let's start with the softball. Tell us a little more about what exactly Newfront does.

Spike Lipkin (00:43.432)
Yeah, so prior to Newfront, I had been the person buying insurance. I worked at a large firm and at a startup, and I realized the experience was terrible. It's mountains of paperwork. The product you're buying is incredibly confusing, and then you buy this thing and you don't even know if it's going to respond when you need it. And I got really excited thinking, you know, here's a product where technology and eventually artificial intelligence can play a big role in bringing about a better client experience.

And so when we started, we looked at: do we just build software for this space and try to make it better? Or do we become a full-stack brokerage ourselves and provide a better client experience through this combination of experts and software? And that was the approach we landed on. We started in 2017. Today we transact about $3 billion of premium annually. We work with one in five unicorns, hundreds of public companies.

We win by combining experts, so people that go really deep into various verticals of insurance, with software and increasingly with AI, to bring about a better client experience. So that's the really brief backstory. If you're a business, you probably know about these products: you are required to have a general liability policy to sign a lease. You're required to have workers' comp in California to hire people. If you raise money, you need directors and officers coverage in order to have people serve on your board. You probably need health insurance. And we provide all those products.

Wade (02:11.234)
Love it. So you talked about how at Opendoor you saw firsthand how broken insurance workflows are. Now, I think a lot of listeners work inside these older industries. They have these painful spots, but they don't know exactly what to do about them. So maybe talk a little bit about what you observed and why you thought that could be better.

Spike Lipkin (02:34.013)
So what was fascinating to me is, I was part of the setup of two different businesses, one at Blackstone and one at Opendoor. And when you see how businesses get set up, they run on these cloud-based systems. So all the data about employees, all the data about customers, all the financial data exists in the cloud. And so when you get to insurance, you think, OK, well, great. I've got all my information that I need in order for someone to underwrite me and tell me how much my insurance is and provide coverage.

Like I should just be able to pull that data down and send it off. Turns out that you go from a digital interface online in the cloud to this pen-and-paper process, literally PDFs being emailed back and forth. And so it started occurring to me that every market is going to move online eventually. And all the data that flows into this marketplace is already structured. It's already online. This marketplace has the opportunity to digitize. And so that

thought experiment of, well, the data is here and we're sort of pulling it offline, physically putting it into PDFs and sending it off, and then we're getting things mailed to us. That's going to change because all the inputs have been digitized. That got me really excited. You know, Wade, I don't know if this resonates for you, but I think to start a business, you have to be incredibly naive. So we had very little understanding of what we were getting into. And I think there was a benefit to that, right? We looked at things with a fresh perspective. I think if we knew more, we might've been scared away from sort of the

Spike Lipkin (04:01.804)
massive scope of the problem and the difficulty of the problem. But I think the opportunity plus kind of a healthy dose of optimism and naivete, that's what got us started.

Wade (04:14.571)
Yeah, I think every founder's sort of, you know, famous last words is: how hard could it be? Yeah, a few more steps than I thought. So another interesting thing. You mentioned that, you know, part of the founding thesis was how AI could impact the workflows here, but...

Spike Lipkin (04:21.851)
Yeah, exactly. Turns out really hard.

Yeah.

Wade (04:38.369)
You're founding the company in 2017, so this is before Transformers and Generative AI. So when you say AI, you're probably talking classic machine learning algorithms, things like that. Tell me a little bit about what you saw there, and how is that different from what you're now seeing is possible within your business using Generative AI?

Spike Lipkin (04:47.386)
Yeah. Yes.

Spike Lipkin (04:58.266)
Yes, so when we started the company, we had this thesis that data moves online and gets structured, and computers are going to get smarter and smarter about interpreting that data. And, you know, when you think about the role of the insurance professional, there's this transactional component, which is just the blocking and tackling of getting the information to the underwriters, getting the quotes back to you, doing billing. Technology is going to be really good at that. But then there's this more strategic layer, which is interpreting information, drawing conclusions from that information. And so we had this view that if we built the infrastructure,

as computers got smarter and smarter, we could leverage that infrastructure to help our clients make better decisions and draw insights. The big paradigm shift for us is that the structured data is actually less important, right? We used to think you take a policy, you describe the policy in a structured manner, and you put that into a database. And then on top of that, you can build interesting reasoning and intelligence, and you can glean client insights from that. What we're finding with these new LLMs today is the document becomes the source of truth.

So rather than the database being a structured understanding of the document, we can actually just go straight to the policy, and the AI model reads the policy and can glean insights from it. We were right that computers would get smarter. We were right that you could provide these really transformative client experiences with AI. But I think what we were wrong about was how these LLMs would allow you to go straight to the source documents in order to glean insights.

Wade (06:26.029)
One of the things I have consistently heard from folks more in the enterprise is that a lot of these AI projects are somewhat fizzling out. They're using it for prototypes and demos, and they're getting decent proofs of concept but struggling to get to production. Can you maybe share a little bit about the places where you all are seeing the most success versus the areas where you're like, yeah, there still needs to be...

maybe another technology breakthrough or two to be able to really get those humming in a way that is impactful for the industry.

Spike Lipkin (07:02.681)
Yeah. So, you know, we certainly had that experience, I think, early on in the AI revolution of feeling like the promise outpaced the results. Two years ago, we had a hackathon, and it was right when, you know, ChatGPT was really gaining popularity and these tools were coming out, and we shipped 50% of the products we built. And they're in production, used heavily, today. And so I think it was this perfect moment where the technology had sort of caught up to the promise.

A couple examples. So a number of products are client facing and they help clients either take work off their plate or make better decisions. So one of the products we've built, we answer benefit questions from employees. So let's say that you're enrolled in Zapier's benefit program and you have questions about how to get a free pair of glasses because under the vision program, you're entitled to a free pair of prescription sunglasses every year.

Today you're probably reaching out to someone on Zapier's HR team and you're asking a question, and that person is typing out an answer, or maybe they have a form response to it. Turns out that HR teams keep doing that, and it takes a huge amount of time. And so with the product we built, we call it Benji, for Benefit Genie, we think we save HR teams about a month a year, in case studies we did with early clients. I think the key to that product really working is

it's agentic. You're replacing what used to be a human, and you're actually delegating the full work product to them and getting back completed work product in terms of the information you're looking for. And in that case, you're taking work off the client. We find those products work really well. Another type of example is gleaning insights that would have previously taken a really long time. So most insurance requirements come from contracts.

And we have teams of specialists whose whole job is to review contracts, look at insurance requirements, and tell our clients, hey, you can sign this contract because it meets your insurance program, or it doesn't. We have an AI-powered tool now where the client or the team member on our side can upload the contract and get an immediate answer on whether it complies with the insurance program. So in that case, you're sort of providing this experience that feels almost magical, right? Because if you're a client, you're used to sending off a contract and, you know, maybe best

Spike Lipkin (09:23.566)
best case, you're getting a response hours later. Now you're getting it like 20 seconds later. So in both those client use cases, these are not workflows, these are agentic, right? These are providing experiences to clients and completed work product. On the business side, what we found works really well, similarly, is areas where there's not a huge amount of change in process. So,

you know, the prior generation of technology we built was all workflow-based, right? It was: do the work that you were previously doing in this new way using this new workflow, and the workflow will help you, and maybe there's some automation in the workflow, and as a consequence of using the workflow we'll capture really interesting data. What really has product-market fit today is: hey, ignore these workflows. Just email this email address and send us raw work, and we will send you back completed work product.

Wade (10:13.228)
Mm.

Spike Lipkin (10:17.486)
Right? You know, no new workflow to learn, no set of clicks to learn, no web page you need to go to, literally an email address that you email and you get back the completed work. We find that stuff works really well. Where we've stumbled is anything requiring lots of change management, new workflows. The other kind of internal rallying cry we have is that if we ask people to eat vegetables, there needs to be some benefit. And in our case, eating vegetables is learning new workflows, inputting data.

So if we're gonna ask you to eat the broccoli, you need to see that the next day you're starting to feel healthier and stronger. If the next day the corresponding product benefit is not there, we try not to ask you to do that. So those are the cases where we've stumbled: where we've required a bunch of data entry in order to make a product work, but it doesn't correspond to an immediate client or team impact.

Wade (10:51.82)
Mm.

Wade (11:07.148)
Got it. So it sounds like the kind of magic unlock for you is most of your employees are used to working in email. So we're going to just say, hey, just keep sending email, and we'll have somebody on the other end that is going to abstract away all the messy workflow components of this. And so that team is responsible for designing out what to do with the data you're getting from the email, and they're going to sort of go figure out how to return that output back to the person who sent the email

based on what they're looking for. So that person doesn't really have to know what's going on beneath the hood.

Spike Lipkin (11:43.961)
Absolutely. And this dovetails nicely with how, even pre-LLMs, a well-run operational process should operate. You have your experts delegate work to specialists that work on individual verticals, and they give you back completed work product for those verticals. So it's not a huge shift from how things worked before LLMs.

Wade (11:49.27)
Mm-hmm.

Wade (12:04.684)
I think that's a great insight. You know, for folks who are really trying to get AI to have an impact in their organizations, there is something so compelling about saying: we're just going to put it in the exact place where people already do their work. We're not going to force them to change habits. We're not going to force them to change rituals. We're just going to say, keep doing what you're always doing, and we're going to make it better on the other side. Okay, let's talk hackathons. You mentioned that

kind of a key unlock for you was this hackathon you all ran, it sounds like maybe about a year ago. We're huge fans of hackathons too. We internally have seen the biggest leap in AI usage whenever we run hackathons, and I think when we talk to customers, it's sort of similar. How did you design the initial hackathon? What was the approach and the thought to say, hey, we're gonna go do one of these as a way to unlock

internal productivity and knowledge around AI usage.

Spike Lipkin (13:04.546)
So yeah, it was a couple of years ago. We've done this forever, but it was a couple of years ago that we saw this big unlock. I think anyone who's worked with these AI models, one of the things that they see is the out-of-the-box version works pretty well. And so it really is good at prototyping products. That doesn't mean that, to your point, those products are all production-ready and you can use them immediately. But in terms of prototyping something where you can see the vision and sort of test it out with customers or users, you can do that really quickly. And so I think we got the timing right.

A lot of it had to do with the hackathon design. I think if your hackathons are just engineers or product managers sitting in a room, having had some discussions with users, it's not going to work very well. What we found is our teams are a combination of, obviously, technical folks, engineers and product managers and designers, but every team had insurance professionals on it. And so literally embedded into every conversation was

Wade (13:46.016)
Mm.

Spike Lipkin (14:00.812)
either, you know, the voice of the user, because some of the products are used directly by the insurance professional, or we were really close to understanding the customer needs, because the insurance professionals are talking with customers every day. And so I think that is a huge part of it. I think, you know, for organizations that are doing this around clients: can you get clients to join you for three days on site for a hackathon? I just think that voice is so important to making these products work and to prototyping them. And then, you know, you have...

I don't know how you all structure your demos or how you structure your hackathons, but I think another part of it is you've got to demo it at the end and there's judging. I think there is something to having this ranking system and the fact that you've got to show it live at the end. And we will have professionals and clients in the room for those demos, so they have to be really good. And I think that's a component of it as well.

Wade (14:51.281)
Mm-hmm. Yeah, I would agree. I think the demo element promotes accountability, so folks are going to show up and give it their best. And it promotes knowledge sharing, which I think is really important right now when there's so much learning that has to be done. You want to see what other people tried to do and what worked for them and what didn't. And then you can just steal the best stuff and be like, I'm going to take that, that's a great prompt or that's a great tactic, and kind of go from there.

Spike Lipkin (15:20.531)
If I may add, one of the interesting areas where, and it surprised me, we've driven a fair amount of product innovation is we added, you know, an LLM-enabled chatbot into our product, which hooks up to our database, and you can ask questions. You know, I was kind of a skeptic here initially of building this product, because I was sort of like, you know, everyone's got a ChatGPT subscription, and also, let's build products, not just a chatbot.

Wade (15:22.763)
Mm-hmm.

Spike Lipkin (15:47.383)
What's interesting about that is it actually becomes the future pipeline for products, because you can read all the queries. So the thing I've started doing is every week I just read all the queries. And from that, people are using the chatbot in ways I never would have imagined. And we can actually build better products now, maybe better than a chatbot interface, to deliver the experience. But I don't think I would have ever guessed that had I not been able to literally read every chatbot query and say, okay, this is an interesting,

Wade (16:06.346)
Mm-hmm.

Spike Lipkin (16:14.078)
very unexpected way people are using the product. So that can become sort of a pipeline to product ideas.

Wade (16:18.811)
Mm, yeah, that makes total sense. Another question. So I think a lot of organizations are struggling with, you know...

how exactly do they want to approach this? Are they trying to empower every single employee of the company to become an AI builder and use the tools? Are they going to designate a cluster of power users who are going to work cross-functionally, go department to department, and fix up different things? Some are trying a hybrid of these approaches. How do you think about it, and what has worked for you to raise the bar on AI's impact internally?

Spike Lipkin (16:59.446)
So I think it's a combination. I mean, we haven't cracked this, but I think it's a combination of all of the above. One of the coolest things we've done is, with non-technical folks, we've created these lunch-and-learns where power users who, you know, they're not building products, they're just using these tools out of the box, will host a lunch and say, here are the 10 ways I'm using these products on a day-to-day basis to help me do my job. And...

that ends up leading to pretty wide-scale adoption of some of these use cases. So this can be as simple as, I use this to draft compliance emails, something that you wouldn't have guessed previously. So the sort of lunch-and-learn format where the speaker is not a technical person but a professional has worked pretty well for us. And then when we build products, I do think there's value in having these kind of tight-knit small teams that are able to iterate quickly, where there is either a client or, you

know, a user, in our case an insurance professional, embedded in the team, and then expanding that out from there, versus having it be these sort of broad-based organizational initiatives. So those are the two approaches we've taken, but, you know, it's still a work in progress. And, you know, we're still trying to learn how other organizations are doing this.

Wade (18:11.499)
You all obviously work in insurance, which I think most folks would think of as more of an older-school industry. And so obviously you have parts of the team that are very forward-thinking on engineering, but I have to imagine other parts are maybe not traditionally doing this. What advice do you have for other folks who kind of have this similar setup, where it's like, hey, we've got big chunks of the areas we work in that are maybe not filled with people who are cutting-edge technologists,

but still can benefit a lot from this technology?

Spike Lipkin (18:43.487)
Right. I think with any technology, it gains adoption to the extent it's useful. And so what we've found is there are two big drivers of technology adoption. We created an internal wins channel where you can see every client we're winning, and you can see the reason we're winning those clients. And we tag whether technology and which technology was demoed as part of the sales process. And we're not selling technology, we're selling insurance.

What we found is that as we started seeing lots and lots of wins come through that wins channel where technology was tagged as the reason we were winning, that, you know, drove a lot of adoption among the folks who were out there selling. Internally, I think to the extent these products are agentic, it's really easy to gain adoption, right? People want to work on the most strategic, interesting parts of their job. And if they can delegate less interesting work to a tool, it's great.

And, you know, they'll take advantage of that. And if the adoption requires forwarding an email versus, oh, I have to log into this new system and I have to learn this new workflow, like, make it as easy as possible. You were alluding to this earlier, Wade, but I do think in industries like ours, the inbox actually becomes the workbench, where, you know, all the communication is coming through the inbox. We've been saying the email is the API, right? That's the way that we're interacting with computers now, via these emails.

So I think that's the other way to drive adoption. One of the cool tools we built is called coverage analysis. So the core of what insurance brokers do is we will review coverage information and we'll try to find gaps. So, you know, does your policy have holes in it? Are there exclusions or endorsements that it's missing? And that work is super laborious, and we trained a model to be able to do it. We're going line by line of coverage, and the model is getting better and better. And what we find is that

it allows someone who might be an expert in one area but not in another to very quickly get up to speed. And so again, it provides a ton of utility. You can get an analysis back in a minute when usually it takes hours or days to do this analysis. So I think if these tools have great utility, you'll see the fastest adoption of them.

Wade (20:49.193)
Mm-hmm.

Wade (20:56.616)
Yeah, you know, kind of coming back to that lunch-and-learn topic, it's interesting. I was talking to a customer this week that works for a professional basketball team. And similarly, these folks have, you know, folks in their back office that have worked there 20, 30, in some cases 40 years. And so these are folks that are very used to doing the job a particular way. And

they found a similar thing, where they would do these, they call them luncheon launches, where they just invite, you know, the back office staff to sit down and come complain, talk to us about what are the most annoying parts of your day, and they'll just start building stuff together. And they found that, you know, for those folks who are, like, you know, maybe conventional wisdom would say, hey, these folks are, you know,

Luddites, maybe not going to figure out how to use this stuff, many of them will come back a week, two weeks later and have built more things. To your point, if it's useful, people get into it. They're like, this is exciting. And the technology is pretty darn accessible. I know there's still a lot more we can do to improve the ease of use, but it's not crazy difficult.

Spike Lipkin (22:08.795)
You know, it seems so obvious that the more useful your technology is, the easier adoption will be. But also, we haven't seen a trend where, you know, only this segment of really tech-savvy early-career folks are the adopters. We've actually seen a trend where the people who have this work that's painful to do, where they can delegate it, are the adopters, regardless of sort of the demographic or where they are in their career.

Wade (22:13.002)
Hahaha

Wade (22:32.616)
Yeah. Are there any other tactics beyond hackathons and lunch-and-learns that have worked really well for you to drive adoption? Or is it kind of the 80-20 rule, where those two things are going to get you most of the bang for your buck?

Spike Lipkin (22:49.491)
So like any great product idea, I think, especially with this new technology, where LLMs are really new and what they can do is really new, I think you need to be open to ideas coming from unconventional sources. If you're an organization that has a roadmap and a product planning process and PMs and a head of product, you might be used to this really rigorous process in which you build things.

I think with these tools, because they work so well out of the box and everyone can experiment with them, you also just have to be open to kind of novel use cases coming out of the woodwork and coming from sources that are not always the obvious sources. I think that's the only other thing we've seen. A couple of years ago, our product ideas were coming from different places. And today, they're very much organically coming from the professionals on our team who are using products every day and coming up with ideas that I think our

Wade (23:30.762)
Hmm.

Spike Lipkin (23:43.292)
product team is not necessarily coming up with.

Wade (23:45.863)
What's the most unexpected place where you were like, man, I can't believe we got a great idea from there?

Spike Lipkin (23:51.162)
We got a great idea from there... Well, I think it's this chatbot that we installed. I think that the myriad of use cases around analyzing, like uploading documents to analyze, where I was like, I didn't even think people needed to do these types of analyses between documents, has sort of led to the most unexpected use cases. And I don't even know that our team knows they're contributing to the product roadmap when they do that.

Wade (24:13.544)
Mm.

Spike Lipkin (24:19.034)
I think it's been most surprising to see the ways in which people are using it.

Wade (24:23.389)
Yeah, I often think of that stuff as the exhaust. It's the stuff that just gets produced in the making of the stuff you do. If you take time to just look at it, you often find that there's a lot of value in there, and then you can repackage it into more interesting things and kind of get a little bit of a product flywheel going based on it.

Spike Lipkin (24:28.24)
Yes.

Spike Lipkin (24:45.369)
Right, and it's sort of the highest-fidelity version of product analytics, right? You're not reading the tea leaves on how people are clicking on your product. They're literally asking a question they need help with, and you're like, I guess I could build a product to solve that. Right, exactly. Yeah, no reading of tea leaves. Just read the query.

Wade (24:52.327)
Yeah. Yeah. Yeah. How do I do X? And it's like, we should make that easier.

Yep, it's right there. It's what it says on the tin. Looking forward into the future, tell me a little bit about how you think about the various roles changing inside of Newfront. Are you finding the types of people you hire, the types of skills you look for, are shifting, or the ways in which you assemble teams are shifting? What lessons learned do you have there?

Spike Lipkin (25:32.113)
So I think there's going to be this big paradigm shift, and I think it's going to happen in our business, and I expect it will happen in lots of industries and lots of other businesses. Today, if you're a professional who's serving clients, you have these two roles. One is, you know, sitting down with a client and having a strategic conversation about the decisions your clients are making. Do we insure this risk or not? How much retention do we buy? What are the trade-offs in our business? And, you know, really that's what we get paid for.

But in order to do that, there's all this pre-work that happens, which is sort of the transactional work of collecting information, getting quotes from carriers, building a proposal, doing billing. And that work, I think, will increasingly be handed off to agents. And I think the role of the professional will go from today, where you're part strategic, part doer, to mostly strategic, a small part doer, and then a lot of delegation.

And I think that's gonna require a really different skill set. I think, again, the email inbox becomes the workbench for that delegation. So the professional role is: I'm spending a lot of time with clients, I'm spending a lot of time delegating work and getting back completed work product. Maybe early on you're spending a decent amount of time QA-ing that work product, but over time, as these models get better, you're spending less time, or maybe no time, QA-ing it; it's just going directly to clients. And so I think that is the evolution.

Wade (26:55.72)
Mm-hmm.

Spike Lipkin (26:55.865)
Ironically, our thesis is that relationships become more important, because the transactional components are going to increasingly be table stakes. And, you know, the quality of your advice and the quality of your relationship, I think, will be the big differentiator in the future. So as our hiring practices change, we're increasingly looking for folks who are really good at interfacing with clients, really good at building those relationships with clients, and then really good at delegating work.

Because if you're doing the work yourself, you're competing with an agent who never has to sleep and can work 24 hours a day. And you could go spend time with clients.

Wade (27:36.158)
What have you learned about teaching people to delegate, like helping folks to build that skill? Like I find that for some folks that comes more naturally, and then some folks are like, it's just better if I just do it myself, you know?

Spike Lipkin (27:51.332)
There is definitely the component of, it's better if I do it myself. And I think, you know, anyone who is in a client service role, you want to do really well for your clients, you want to show up for them. And so there's nothing negative about wanting to do it yourself, because usually it comes from a positive place. I think it goes back to this: if the agents are really good at getting you back completed work product, you're more likely to delegate, right? Like, you know, imagine if you have the world's best intern.

Wade (27:54.57)
huh.

Wade (27:59.818)
Mm-hmm.

Spike Lipkin (28:20.387)
You're like, yeah, I'll get them to do all this work, versus an intern that makes mistakes every third task you assign to them. And so we try to make it less around how do we win hearts and minds; the best product is going to win hearts and minds. If you had a genius-level agent that can complete every task, a lot of things are going to get delegated to them. Other than that, we haven't cracked the code on changing behavior. Just build a great product.

Wade (28:27.72)
Mm-hmm.

Wade (28:48.583)
Yeah. You mentioned that it's still going to be mostly strategy, a little bit of doer, and then delegating a lot. What's in that little bit of doer slice? What are the things where you're like, yeah, we still got to do that?

Spike Lipkin (28:59.929)
Yeah.

Well, I think so. You know, what we do is very high stakes, and so that little bit of doer for us, I think, is a lot around QA and edge cases. And I think, you know, if you've built any of these technologies, what you learn is that getting to a 90% solution is straightforward. Getting that last 10% that accommodates every edge case may just not be worth the squeeze, or it may not be worth the squeeze early on.

And so that last element of doing is accommodating edge cases, checking to make sure that this is right, giving feedback. The product obviously gets better when we catch a mistake or we find an edge case. And so I think that's the doing component. That bucket is undoubtedly going to get squeezed. Even relative to where we were a couple of years ago, that bucket's gotten much, much smaller.

Wade (29:48.659)
Mm.

Wade (29:55.261)
Yeah, like how do you spot those areas where you still actually do want to have a human in the loop? How do you know when it's time to hand it off? Like, walk through the mechanisms you have around that.

Spike Lipkin (30:07.203)
Yeah, it's an interesting question because it's really a risk question, right? You know, like an interesting example for us is this coverage analysis. It's really high stakes, right? If we miss something, it could lead to a client having a claim that's uncovered and there could be legal liability there. Our view is over time, these computers, these agents are going to get way better than humans at reviewing complex, lengthy documents.

Wade (30:35.431)
Mm-hmm.

Spike Lipkin (30:35.513)
The question is change of behavior. Like, the analogy we think a lot about internally is, I'm not intimately familiar with the stats on self-driving cars, but at least some of the stats seem to suggest that self-driving cars are safer than human drivers. But when a self-driving car has an accident, it's a really big deal. So I think there will be this paradigm shift at some point where we sort of all take the leap and we realize, yeah, self-driving cars are safer.

That doesn't mean there are zero accidents, but at a population level, we're going to have fewer accidents and it's better. Today, I think the tolerance for accidents is low. And it's particularly low in a high-stakes business like ours. And so that last mile of work for us is around checking, particularly in high-stakes moments.

Where I think these products are applicable and it's different is, you know, a use case like e-commerce, right? If you send someone the wrong size clothing, it's not the end of the world. You know, you can probably give them a refund and a store credit. And so I think there can be a little bit more tolerance for error there. So much of it is just the nature of the business, right? How critical are you? How much liability is there? What's the downside for your client if you make a mistake? In our case, it's high. So we're very careful.

Wade (31:53.533)
Yeah, kind of riffing off that self-driving car analogy, the thing that I've heard is that, yes, they're safer than humans driving. The challenge partly is that when they do have accidents, oftentimes it can be in situations that are actually quite easy for humans.

And that's the part that's weird for human psychology, where we're like, how could it have made such a simple mistake? And I think that will apply to other domains as well, where you'll see it make a mistake on something that for us would seem so incredibly simple. But the inverse is also true: it will do something that there's no way we as humans could actually do. And so it really kind of messes with your brain a little bit.

Spike Lipkin (32:18.263)
Right.

Spike Lipkin (32:41.198)
100%. I mean, we've seen, you know, with our coverage analysis product, for example, we've seen areas where the LLM will find things that the experts didn't find. But then the LLM is only as good as the expert that trains it. And so, you know, part of making these models good and safe is you need a lot of people to train them and provide feedback. So we've seen exactly that dynamic you're talking about play out.

Wade (33:05.416)
Yeah. What's different for brokers starting out today? Like, what should they be learning? What does the job look like five years from now for those folks compared to, you know, people who got into the industry in 2017 when you started the company?

Spike Lipkin (33:20.949)
Right. I think the big paradigm shift today goes back to this idea that what's going to matter is the quality of your advice and the strength of the relationships you're building. And I think there is this view in a lot of industries where you kind of work your way up from the mailroom, and you do a lot of the transactional work and you cut your teeth on that and you earn your stripes on that.

I think that's less relevant today because I think the mailroom is going to increasingly be AI agents. And so, whereas previously the mailroom taught you things that would make you a better advisor to your clients, it's not so clear to me that doing that transactional work today is going to make you a better advisor. You actually might just want to start with: how do I become the best advisor possible? And I think to become the best advisor possible, it's learning how to leverage all these different tools and then take insights out of the tools and deliver them to clients,

versus, again, cutting your teeth by learning the transactional work. I think that's the big shift.

Wade (34:21.094)
Now, I love it, Spike. This was awesome. Love that you're not just building AI that's kind of checking the boxes; you're using it to make it so that it's used, it's trusted. You're using the email as a work inbox. Pretty great. For those of you who are listening, hopefully you can subscribe, leave a review, and send us a recommendation on who we should chat with next. Like I said, I'm Wade Foster, and this is Agents of Scale. We'll see you all next time.