Box’s Ben Kus on Building AI for Millions of Users and Billions of Files

Ben: That idea of very rapid change is critical, because if you change too much, it breaks some teams. You have to almost build it in, be ready for everything to change maybe every day, if not every week, and you have to develop a process around that.

Wade: Alright folks, welcome back to Agents of Scale. I'm Wade Foster, co-founder and CEO here at Zapier. This is a show where we go behind the scenes and talk to leaders who are figuring out how to operationalize AI at scale. Today I'm talking with Ben. Ben's the CTO at Box, and he's spent more than a decade there helping enterprises store everything. A lot of that is changing in the age of AI, and I've been impressed with how fast they've jumped on it and all the learnings they've been sharing. So Ben, I'm stoked to welcome you to the show.

Ben: Yeah, thanks for having me on.

Wade: To kick it off, I'd love to understand what's different about how Box operates today from when you joined, shoot, 10 years ago.
Ben: Yeah. So there's the whole world of how not only using AI but building for AI has changed everything. For me, one of the key ideas was that in our world of unstructured data, of storing files as you mentioned, there was an interesting dynamic. We've been through all of these revolutions in technology over time, and it's very interesting to see the data revolution. When I say data revolution, you're probably thinking about structured data. Most companies today will say that they're data-driven, and they have all these interesting technologies, great technology from Snowflake, from Databricks, from across the whole industry, for using data more effectively. But for many organizations, most of their data is actually unstructured. Our focus is the unstructured data within files, within content. And this is a really interesting world, because until a few years ago there was very little you could do to help automate it. When you talked about unstructured data, you were typically talking about sharing it, collaborating on it, and making it easier for people to see and understand it. But it took a person to sit down and see and understand and create these things. That's all changing now, because generative AI was born on unstructured data. That's how it learned; these big training sets are all geared around it. And this became very interesting because the AI learned to do what a person might with unstructured data, and it changed the world we operate in at Box, because suddenly you could actually automate, and you could use AI on unstructured data in ways that were never before possible. This was a night-and-day difference from what we used to do to apply machine-learning-style algorithms to your data: get 10 data scientists, spend months building new models and assembling training sets, all so we could do one little thing, like pick a piece of data out of a contract. That has all changed, because now AI inherently understands the data, kind of like the way people do, and this has changed everything.

Wade: So you went right where I was most
curious. We've seen this too. Zapier started a little over 14 years ago, and we're automating, but to your point, it's almost entirely structured data that's available. It's, hey, you've got a form, people are filling out these form fields, and you're mapping those into a CRM in a very particular way. Everything is labeled and structured perfectly. And we struggled with the big old document, the big old email, the big old blob of text. There's just not much you could do with that. We had parsers and some other things, but they're pretty fragile, and it locked out entire use cases, entire industries. So we have seen unstructured data become so valuable in the age of AI. I'm curious for you: what were some of the early customers and early use cases that made you go, oh wow, there's a huge opportunity here?

Ben: Yeah. And to your point, I had a customer
tell me a couple of days ago that generative AI makes unstructured data cool again. I'm like, it's always been cool, but it was interesting to hear that view. So, one of the first things we did: over time we had spent a lot of effort on specialized models, before generative AI. Then we saw GPT-2-style stuff coming on, and it was, oh, that's interesting, not quite there yet. But right around the same time that ChatGPT became a phenomenon is when AI became what you would call enterprise-grade, production-ready, at least in the earliest phases. So the very first thing we did was go back to some of the challenges we'd struggled with in the past, in particular structuring unstructured information: taking something like a contract, or a project proposal, or a digital asset like an image, and asking, can you pull out some key attributes of this? Because in many aspects of content, the content is the key thing: this is the contract, all the words matter. But there's a lot of data about it that you care about: who signed it, when did they sign it, what are the key terms, all these different aspects. And this is across almost all file types, all industries, all lines of business. So this idea of arbitrary structure, where you ask for the structure you want, almost like, here's a table, fill it in from all your content, is very valuable for a lot of these business processes. We started there in the very early days, and it was just amazing. Immediately you're like, I can't believe the AI can figure out four different forms or pieces of data that are very different. The AI is more objective-driven: you say, I want this kind of info, and it says, I'll figure it out. And it almost works the way people do, which is that you don't specify to a person, if you want to know the effective date, that it's in this pixel-square block after this term. You say, no, it's roughly over here, figure it out. And so that became something very interesting to do.
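Box's actual extraction pipeline isn't shown here, but the "here's a table, fill it in" idea Ben describes can be sketched as schema-driven extraction. In this illustration the model call is stubbed so the example runs offline; the `call_llm` function, the field names, and the canned response are all assumptions, not real APIs.

```python
import json

# The "table" we want filled in for every contract.
CONTRACT_SCHEMA = {"effective_date": None, "signers": None, "key_terms": None}

def call_llm(prompt: str) -> str:
    # Stand-in for a real chat-completion call; returns a canned JSON
    # response so this sketch runs without network access.
    return json.dumps({
        "effective_date": "2024-03-01",
        "signers": ["Acme Corp", "Box Inc"],
        "key_terms": ["net-30 payment", "12-month term"],
        "extra_field": "models sometimes add fields you did not ask for",
    })

def extract_fields(document_text: str, schema: dict) -> dict:
    # Objective-driven prompt: name the fields you want, not where they sit
    # on the page; the model finds them "roughly over here", like a person.
    prompt = (
        f"Fill in this JSON from the document. Fields: {list(schema)}.\n\n"
        f"Document:\n{document_text}"
    )
    raw = json.loads(call_llm(prompt))
    # Keep only the requested fields so the output always matches the schema.
    return {field: raw.get(field) for field in schema}

row = extract_fields("This agreement is effective March 1, 2024 ...", CONTRACT_SCHEMA)
print(row["effective_date"])
```

The key design point is that the schema, not the document layout, drives the prompt, so the same function handles contracts, proposals, and other file types.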
This idea of structuring unstructured data, we call it data extraction. But then also, one of the first features we did was almost a simple RAG-based solution, where you basically say: I have a question, and here's a bunch of info, can you give me the answer? And this is revolutionarily different. Instead of "find me the file and I'll read it," it's "no, I have a question and I want the info back." Say you have a bunch of sales material and someone asks, "Do you support AES-256 encryption?" You don't want to get back the 300-page security document. You want: "Yes, this product does. It says it right here." And that change, even today I still think not everybody realizes how powerful the retrieval-augmented generation style of looking up information, versus finding documents, is.
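The retrieve-then-answer pattern Ben describes can be sketched in a few lines. This is a toy illustration, not Box's implementation: production systems rank passages with vector embeddings and phrase the answer with an LLM, while here simple keyword overlap and string formatting stand in so the example runs offline. The document names and passages are made up.

```python
# Toy corpus: passage text keyed by its source, standing in for indexed chunks.
DOCS = {
    "security-whitepaper.pdf, p. 12": "All content is encrypted at rest with AES-256.",
    "pricing-sheet.pdf, p. 2": "Plans start at the Business tier.",
}

def words(text: str) -> set:
    # Crude normalization so "AES-256." and "AES-256?" compare equal.
    return set(text.lower().replace(".", " ").replace("?", " ").replace(",", " ").split())

def retrieve(question: str):
    # Rank passages by shared words with the question; return the best one.
    q = words(question)
    return max(DOCS.items(), key=lambda item: len(q & words(item[1])))

def answer(question: str) -> str:
    source, passage = retrieve(question)
    # Return the answer plus a citation, not the whole 300-page document.
    return f"{passage} (source: {source})"

print(answer("Do you support AES-256 encryption?"))
```

The returned string carries both the answer and where it was found, which is the "it says it right here" part of the feature.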
Wade: Yeah, and that makes total sense for Box. Your customers have put so much content inside their Box accounts for years and years. And now, the ability to ask a question and get an answer back, versus sift through, okay, you've got to go through the folder tree and find the file, or maybe search returns, well, it's in one of these four files, and you can click into each of them and see if it's the right one. That is a pretty incredible new way to search for an answer. Okay, so that's kind of the first unlock.

Ben: Yeah.

Wade: And I imagine a lot of your customers kind of start there. It's, okay, great, I can search over stuff. What's the difference between where they start and the best customers? What are the best things they're doing with automation, with unstructured data, today?

Ben: Yeah. So I think one of the really
interesting things is the day you stop talking about AI models and start talking about AI agents. It's a tiny twist, because there's almost a continuum between an AI model that responds to you and an AI agent that does more complex work. And I'll define my terms, just in case, because there's always a fun dispute over what an agent is.

Wade: Yeah.

Ben: So I use this definition: an agent is something where the AI is making decisions about when it's done and about how to progress through a workflow. That sounds reasonable, but that's a very critical set of things. The thing we used to call a basic, single-shot AI-model response could in many cases be considered just a really simple, basic agent, where the agent is only deciding when it's done. I'll give a simple example. In our first versions of AI, when we were doing "I have a question for this document," that kind of thing is simply: customize a prompt, here's the data, pull out the most important pieces, and ask the AI to formulate a response. That's just a basic response. But in an agentic version, one of the things the agent does is ask: is this a good enough response? Let me double-check, almost the way humans would. I'm about to give you an answer, but let me think about it for a second, let me look through the pages again. Just that simple little loop makes it agentic in a full way, because the agent decides this is a good enough answer. And so I believe almost everything in the enterprise in particular is going to become agentic, because it's a really interesting paradigm, in particular because it's kind of like the way people work. Agents become the vehicle to do more and more complex things. So to your question about the next generation of what people are interested in: using these AI agents not only as assistants, helping you do more complex tasks, the way you would with somebody quite smart sitting next to you. Can you look through these 15 documents? Tell me the difference between this contract and that one. Figure out this research proposal and create a summary of it, but take the info from these three areas and create a new version. You can do that. But in addition, you can have AI start to participate in workflows, taking the spot where otherwise you'd have to stop the workflow and wait for a human. And I think this is a very valuable area, where agents contribute their intelligence not just to help a person but to actually automate a process.

Wade: Yeah, I think the big unlock we've seen
is when you can combine that with automation. So instead of turning to the person and saying, "Hey, can you do this?" you can say, "Hey, anytime this happens, I want you to just keep doing it." Now you don't have to remind them anymore. It can wake up on its own and just perpetually do this task for you, which is incredibly valuable.
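The "simple little loop" Ben described, where the agent rather than the caller decides when an answer is good enough, can be sketched like this. Both model calls are stubs and the specific function names are illustrative; a real version would call an LLM for the draft and for the self-check.

```python
def draft_answer(question: str, attempt: int) -> str:
    # Stand-in for an LLM draft; in this stub, later attempts get "better".
    return f"draft {attempt} for: {question}"

def good_enough(answer: str) -> bool:
    # Stand-in for the self-check pass ("let me look through the pages again").
    return "draft 2" in answer

def agentic_answer(question: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        answer = draft_answer(question, attempt)
        if good_enough(answer):  # the agent, not the caller,
            break                # decides when it is done
    return answer

print(agentic_answer("What are the key terms of this contract?"))
```

The single-shot version is just this loop with `max_attempts = 1` and no check, which is why Ben calls the basic response "a really simple basic agent."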
Ben: Yeah. And what I've seen, and what I'm excited about, is this. Some people ask, how much time are you going to save, or how is it going to cut a person out? But I've seen two things. Number one, many of our teams, both internally at Box and when working with customers, are starting on the things they most don't want to do. Especially legal teams, compliance teams, or anybody who has to do a review. Not only does it take a long time, some people hate it. Personally, if somebody sends me something to review, which I have to do quite often, it's often not pleasant; I have to stop and review a bunch of stuff. So one interesting idea is to have agents do more of this first-pass review: going through and checking the 45 rules that maybe Ben knows about, or somebody on the legal team or in brand knows about. Can I check those intelligently, full intelligent-level checks? Does this meet the criteria for how we normally do a blog post? Is this piece of content okay from a brand perspective: does it use the right fonts, the right words, is it consistent with the way we talk about our product? And then have all those checks done automatically, intelligently, informed by your content, your policies, all of it. I think this is a major area, and interestingly, not because it replaces somebody, but because it makes things faster and lets you do more of them. Many types of reviews often don't get done today, and that's a big challenge. I think this is one prototypical example of something agents can do for you that will actually help enterprises across the board.

Wade: Yeah, I think you're spot on. You
know, I think what's lost in the discussion is that there's a large set of work that is not economically possible today, and that work just doesn't get done, even though it might be useful, because we can't deploy it cost-effectively. Now with AI, there's a new curve of work that actually can be economically useful, because it's just a lot cheaper to go do it. And so there's a whole bunch of unlocks inside these organizations: we're doing tasks, we're solving more things for our customers, that just weren't possible before. And that gets lost in the discussion.
Ben: Yeah, I had this exact thing happen last week. We were working with a customer who had millions of client files that they really wanted to keep track of, for all sorts of reasons including compliance, and to look things up. But they were all different types; this is a financial institution with high-end clients and all this data. So we ran a job that had AI in Box go through and say, this is this type of file, this is the key information, this is the client number, this is what it's about. When it was done, they were like, "This is amazing, this will increase our ability to audit things." And we said, okay, let's collect some interesting stats: how much did it save you over how you would have done it in the past? And their answer was, "I never would have done this in the past. I would have had to hire an army of people, paralegals, and they'd all quit immediately because they'd hate this job, going through millions of documents." They just never would have done it. So suddenly you have this unlock, this ability to better understand some of your most critical data. If you wanted to put numbers on it, they could say it would have cost this much, but really, the idea of what's possible now is different, in ways I think enterprises are still trying to understand, even for some of the more basic use cases. To your point, I think more people will start to do this over time, and it may not show up in headline numbers like dollars saved, because that's actually weirdly hard to calculate. Instead it's, "I can work better now than I used to," and that will just become obvious over time.

Wade: I was
talking to the person who runs AI for the Portland Trail Blazers, and they shared a similar story. They had a process for responding to customer complaints from people who had a bad time at the arena for one reason or another.

Ben: Yeah.

Wade: And they struggled to actually do support for those, just because of the staff and the dollars they had. They implemented a couple of AI workflows around it and were able to massively improve their response times. So much so that they said, "Hey, I wonder if we could actually do something awesome for the people who had an awesome time at the arena." They weren't doing that before. There were people who loved it, who were having amazing experiences, and they wanted to double down on that: "Oh great, glad you had an amazing time, here's a jersey," or some other cool thing. So now they've created this entirely new workflow that focuses on those fans, where in the past it was just, hey, glad you had a good time, we don't have any time to do anything for you.

Ben: Yeah, I think it's a great example. The real question, when you get those kinds of responses, should be: which ones should I respond to? But in the past it was almost impossible to figure that out; there were just too many of them to read. You couldn't even route them. Now with AI, you can, and you can find the ones you most want to respond to, the good in addition to the bad, in addition to routing them to the right people. Yeah, totally great.
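The triage implied by the arena-feedback story, classify each message and route bad experiences to support and great ones to a rewards workflow, might look like this in miniature. The keyword-based classifier is a stand-in for a real LLM sentiment call, and the queue names are invented for the sketch.

```python
def classify(message: str) -> str:
    # Stand-in for an LLM sentiment call, keyed on obvious words for the demo.
    negative = {"bad", "terrible", "rude", "broken"}
    positive = {"loved", "amazing", "awesome", "great"}
    tokens = set(message.lower().split())
    if tokens & negative:
        return "negative"
    if tokens & positive:
        return "positive"
    return "neutral"

def route(message: str) -> str:
    # Bad experiences go to support; great ones feed the new rewards workflow.
    queues = {
        "negative": "support-queue",
        "positive": "rewards-queue",
        "neutral": "archive",
    }
    return queues[classify(message)]

print(route("The line for parking was terrible"))
print(route("We had an amazing time at the arena"))
```

The point of the sketch is the routing step: once every message gets a label, the previously impossible "which ones should I respond to?" question becomes a dictionary lookup.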
Wade: Incredible. So, I want to back up. One of the things you mentioned is that this started to get useful for you at about the time the models became more suitable for enterprise.

Ben: Yeah.

Wade: Box has long focused on serving enterprise clientele. What have you all learned about how these large institutions are deploying AI? What's working for them that's maybe different from your typical startup?
Ben: Yeah. So it is interesting, because the first moment, of course, was when AI came out and was becoming production-grade, enterprise-grade. At Box we would never use anything that didn't meet our security and compliance standards. So when I say this became ready for enterprise, I mean it was the first moment where you could use one of these models in a way where the data you give it is controlled, the same way the data you give to, say, AWS or GCP is controlled. It's not actually their data; it's our data, or really our customers' data, and we need to treat it that way the whole time. That goes into your operational practices, your security practices, what's available to learn from or to do support on. Even something that simple is very different for enterprise. Some people ask us, how do you know that something like your questions and answers, this data, is accurate? And we say, well, our customers tell us. No, no, how do you systematically know? Well, we give them the ability to say yes or no, but we can't look, we can't read, we can't learn, and we can't train. It's strictly built into the system that it's not our data and we're not allowed to, and we make that very clear in our contracts and our operational procedures, so that people can start to trust it. And the thing is, enterprises even today are still somewhat suspicious of this. They're very aware that many consumer-style tools will, by default, say yes, of course I'm going to train on your data and use it. That leads to concerns that something confidential, something you didn't want to appear, will end up in a future training set. And the moment a model knows something, it would be incredibly hard to get it back out of that model's training knowledge. That idea underpinned a lot of people's lack of trust in AI for a long time. Then what you started to see was people going through the process and learning to adopt it. They learn not only the normal infrastructure rules about what it takes to do AI securely and safely, but also the specifics of AI, like which training set is or isn't available. And once they get through that, they ask, okay, how am I going to use this? How am I going to get my people set up to use it? Here we see things like the philosophy of: give it to people and have them try it. And I think this is a great philosophy, by the way. People will typically figure out how to use tools better than a top-down mandate. You can see it in any survey: how many people in the world have used something like ChatGPT? A very high number. How many used it this week? A very high number. Whether it's Gemini or Anthropic or ChatGPT, it's this important thing. And then how many use it at work? The number shrinks, because they perceive that they're not allowed to, or they haven't been given the mandate. I think the more you make the tools available, the more people will figure out ways to use them. We see with our customers and ourselves that at some point people do things you never would have imagined as a way to use a disruptive technology like this. And I think that's very helpful overall.

Wade: Yeah, but you've got to
think that those folks whose hands go down are using it at work. They're just not saying that they do, right?

Ben: I think yes, although some companies had an initial reaction which was to go very much out of their way, and I respect them for it. You should never provide your data to anyone who doesn't guarantee you a bunch of things. And sure enough, go to almost any publicly available AI tool and dig through the settings, and you'll find something, probably checked by default, that says: I will use your data to learn. Many of them publish it in their terms of service; it's not a secret, whether it's a free service or something you pay for, but you have to uncheck it. And no corporation would ever say, I'm okay with people having my data. So it's the data-protection rules, before you even get to the AI rules, that usually prevent people from allowing that. But the more companies lock it down, the more they end up in what we used to call the shadow-IT world, where they're trying to stop people from doing things. To your point, I believe you want to give people the capabilities, integrated with the platforms they already use, so that you can protect this kind of stuff going forward.

Wade: Yeah, just good, proper governance that allows people to do it, and do it in a way that meets the corporate policies at the end of the day. What has changed
inside of Box to enable you all to move fast in this area? What were the characteristics, the traits, the cultural attributes that enabled you to go fast in this moment in time?

Ben: And you're mostly curious about the way we build AI, right?

Wade: Specifically how you build with AI. But I also think you all are fairly good consumers of it as well, so I'm curious how both sides of that have played out.

Ben: So it's very interesting
23:07
to see that like um when you have a company an enterprise class company that
23:10
needs to sort of deliver like sort of systematically on a bunch of of um uh
23:15
like enterprise class features like you you you see a certain deliberate like
23:20
approach to it that has evolved on most companies over time like if you're going
23:23
to like everything we do is like you know um it it matters to us. Is it fed
23:27
compliant? Is it is it HIPPA? Is it is it like just this whole list of things
23:30
and and to to make sure something's truly secure and um with all the best
23:33
practices and and compliance so on you spend you have a long list of things to
23:36
do and sometimes that gets in the way of something that's moving so fast and so
23:41
something like so very interestingly like uh for something like AI technology
23:45
like uh it changes so fast that it gets in the way of of in of processes that
23:50
are traditional for like software companies have to deliver to
23:53
enterprises. So for instance um when we first started using um AI we we picked
23:58
uh like a vector embedding solution uh for the database and we picked uh some
24:02
some some models and um and we said oh these are like you know pretty good ones
24:06
we'll go with that and then I it was a few months later we're like let's
24:09
reassess and and then and then and then we you know we finished and then then
24:13
six months later let's reassess and similar to our agent frameworks and and
24:16
certainly from the models perspective like what is the frontier-based models
24:19
is continuing to change o over time and so you you find yourself constantly ly
24:24
questioning these underlying assumptions that normally would take like like you
24:29
know once you pick a database you keep it for a long time like years usually
24:32
like um you know or even even even even longer and so this idea of having to
24:35
constantly recheck your choices is something that requires a
24:40
two-speed system internally. Some things about the
24:44
company you don't want to change that fast; it unnecessarily
24:48
causes problems for people internally or for customers. But for AI,
24:52
you need to set everything up so that it can
24:55
constantly change and be re-evaluated. And this is a whole new way of working
24:59
for some people. If you've been in a startup, it seems completely
25:03
obvious, but if you've been at a large company for a long time, it's, what
25:06
do you mean? You told me six months ago that in six months we'd do
25:10
such and such. And you're like, well, the world changed dramatically. The
25:13
competition changed, the technology changed, the paradigms
25:16
changed, and it's different now. And so you have to constantly be building that
25:19
into your internal processes, the way you build, the technology choices you
25:23
make, and even the philosophy changes sometimes, which is
25:26
not normal in my experience over the last 20
25:29
years. The idea of what your strategy is is changing more
25:35
often, I think, for almost every company. It used to be, oh, this
25:40
strategy will last for five years, that kind of thing. No, that doesn't
25:43
make any sense right now for the AI-specific stuff.
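The constant re-evaluation described here (swapping embedding models or frameworks every few months) is easier when the fast-changing choices sit behind one seam. A minimal sketch in Python; all names and the toy embedders are invented for illustration, not Box's actual implementation:

```python
"""Sketch: isolating fast-changing AI choices behind a thin config layer.

The pieces you expect to re-evaluate every few months (embedding model,
agent framework, frontier model) live behind one seam, so swapping them
doesn't ripple through the stable parts of the system.
"""
from dataclasses import dataclass
from typing import Callable

# A provider is just a callable from text to an embedding vector.
EmbedFn = Callable[[str], list[float]]

def toy_embedder_v1(text: str) -> list[float]:
    # Stand-in for last quarter's embedding model.
    return [float(len(text)), 0.0]

def toy_embedder_v2(text: str) -> list[float]:
    # Stand-in for the model being re-evaluated this quarter.
    return [float(len(text)), float(text.count(" "))]

@dataclass
class AIConfig:
    """The 'fast lane': revisited constantly, versioned explicitly."""
    embedder_name: str

REGISTRY: dict[str, EmbedFn] = {
    "v1": toy_embedder_v1,
    "v2": toy_embedder_v2,
}

def embed(cfg: AIConfig, text: str) -> list[float]:
    # Stable callers depend only on this function's signature,
    # never on which model currently sits behind it.
    return REGISTRY[cfg.embedder_name](text)
```

With this shape, re-checking an underlying choice is a one-line config change rather than a replatforming project.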
25:46
Yeah. Um, double click on that. So, I've heard you talk about this before that,
25:51
you know, companies need to have this like multi-speed mode. Um, where you
25:55
sort of have like the fast team and then like the stable team. How does that
25:58
actually get operationalized? Yeah. I mean, so the important thing is
26:02
that it's not the cool team and the not-cool team, or the important
26:06
team and the not-important team. It's the team that must respond quickly and
26:09
then the team that has the ability to plan for longer horizons. And
26:16
so the way that we do it at Box is that when we're building
26:22
something new, we try very hard to make sure that the sprint cycles
26:26
are shorter. We make sure that we have more reviews. Sometimes
26:31
in organizations, I mean, we've
26:35
got many different teams who are building many different really critical
26:39
things, but some of them you review every quarter or every six
26:43
months: what's the strategy, any big updates, progress
26:46
over time. But with AI you almost have to do it every week or two, and you have to
26:49
have almost all of the key stakeholders involved. And at some
26:53
point you have to shrink down that number of stakeholders
26:56
specifically to get work done. And so, of course, at Box, Aaron,
27:00
who is our CEO, is very involved in a lot of the newest
27:06
updates. And what's almost interesting is we even put
27:09
people in certain areas, and so it's like
27:13
you're looking in one direction, thinking about the traditional way, your
27:15
day job of making sure that the company runs
27:18
effectively and so on, and then you kind of turn the other way and it's
27:21
like, okay, what changed since this morning?
27:25
Nobody knew that this thing was going to happen; did
27:28
you see this latest thing this company is releasing? And
27:31
then that idea of very rapid change is very critical, because some teams, if
27:37
you change too much, it breaks them, and you have to almost prepare the team to
27:40
say, we don't know what we're going to do in a month. Of course
27:43
we have an idea, but it's probably going to change. You have to
27:46
almost build it in: be ready for everything to change almost
27:50
every day, if not every week, and you have
27:54
to develop a process around it. But importantly, it's probably not
27:58
necessary to change everything about your
28:01
development methodology, because there are certain things that you still want
28:04
to have reliably predictable: important, and in some cases absolutely
28:09
critical, things that just happen on the quote-unquote normal software
28:13
process. Yeah. So, I mean, that makes sense to me
28:16
that you would sort of want to have this like fast response team where, you know,
28:20
I don't know, OpenAI, Anthropic, whatever, they sort of announce some new
28:23
capability or whatever, and you want to be able to jump on that right away and
28:26
figure this out. And on the other side, you want to have a team that's
28:29
investing in stable, durable, trusted APIs and
28:34
infrastructure that folks can rely on. What happens when those two teams
28:38
meet? Like, what happens in the middle? So, the chaos is in every
28:44
company that I've ever been involved in. But this is
28:47
I think where, well, I use the term platform quite a bit, and to me
28:52
the reason you do that is so that you develop this idea of
28:58
almost a slip area. You can have pieces of it that are moving
29:02
reliably, solving for quality, solving for scale, solving for
29:07
reliability, and then a team that is able to quickly iterate. So one of
29:11
the things that we do at Box is that we have multiple layers of our platform. So
29:16
We have our traditional platform, our content
29:19
management, where we're doing security, unlimited storage,
29:22
internal and external collaboration, and all this stuff; you know,
29:25
there's 20 years of history of great features there, and depending on how many
29:28
days you have, I can tell you all of them. Then you have the AI layer, the
29:32
infrastructure AI layer, which is secure, where we
29:36
get our compliance audits and we make sure of that. And
29:39
then on top of that we have this agentic AI layer, whose job is to change very
29:45
rapidly. So the idea of what that agent is
29:47
doing is something that we consider to be built on top of this. So
29:52
there's a contract between what an agent can do in the system and what
29:56
the system can do. For something like retrieval-augmented generation, where you
29:59
have to go look up a bunch of data: the piece where you go look up the data,
30:02
across over an exabyte of data for us, with
30:05
hundreds of billions of files, you want that to be
30:09
extremely permissions-aware, secure, not changing all the time. But then
30:14
what the agent's going to do with that will change all the time. To the
30:17
point that when a new model comes out, it can substantially change what your agents
30:21
are able to do. And so you should almost rewrite it that
30:24
night, just to see what happens. And then there's the idea that
30:29
you could never rebuild the platform that fast. Even
30:33
the process of updating the platform might take longer
30:36
than the time you want to spend on it. And so you have to
30:40
have a flexible layer. So in Box we have the ability to load and to
30:43
customize these agents inside the platform, so they can do those tests to
30:46
see what we should be doing next based on the results of just trying
30:49
it out. And I think for every company, you need to have that idea of
30:53
a platform underneath it, stable, reliable, at scale, just a different set of
30:57
requirements, versus the thing that's allowed to change. And the
31:00
friction you're going to hit, to your question, is that if you don't have the
31:04
right layers, then you're going to end up with somebody trying to move a
31:07
giant object and somebody else pushing back on the other side, and
31:11
that just never works out well. Okay, I want to circle back to our
31:16
discussion on unstructured data again. So you have loads of customers that have
31:20
lots of unstructured data. But just because AI is good at
31:26
working with unstructured data doesn't necessarily mean that companies who have
31:29
lots of unstructured data can just flip a light switch and now, oh great,
31:33
we can do all sorts of interesting analysis or workflows or automations on
31:38
top of that. So say you're
31:42
in one of these companies where you have just reams of decades' worth of
31:47
unstructured data. What are the things that they have to do to actually
31:50
be ready to deploy AI and actually get practical value out of it? It
31:54
probably can't still be sitting in a file folder somewhere in a
31:57
basement. Well, yeah, hopefully it's in a
32:00
system that is able to support the latest AI capabilities, which
32:04
is of course one of the things we focus on at Box; we think of
32:08
ourselves as: if you're going to be dealing with unstructured content, you'd
32:12
better be keeping up with AI, otherwise somebody else is going to
32:16
do it for you. In general, one of the things I would always
32:20
recommend is to look across all of your platforms:
32:23
which of your platforms are keeping up with the latest AI? Not the ones
32:26
that made an announcement a year ago, but the ones keeping
32:29
up with all these changes, because that's going to be very important for you,
32:31
because probably the most value you're going to get out of some of these
32:35
platforms is going to be the thing that they're about to build tomorrow based on
32:38
the latest AI capabilities. But in general, for unstructured data,
32:42
one of the challenges is that there's so much of it, and there's
32:47
this long history to it, that you have to make sure
32:52
you're able to use it effectively and securely. That's often one of the
32:56
biggest challenges. In the
33:00
first versions of some of the stuff that came out from some
33:03
companies, it was, look what you can do with your unstructured data:
33:06
if you do this, this wonderful thing happens. But they didn't take into
33:08
account the idea that every person in every company has
33:13
access to different things. Some people are like, well, I'm
33:16
going to add role-based access control. And no, it's not even role-based
33:19
access control: just because you're in marketing doesn't mean that you get
33:21
to see all the marketing stuff, and just because you're in
33:23
engineering doesn't mean you see all the engineering stuff. At some point you as a person have
33:27
access to a bunch of unstructured data that nobody else does. So you
33:32
have to take that into account, because you probably want to use all of it. So
33:34
then you end up in this world where you have to be very aware of the
33:37
identity and the permissions associated with the users.
33:41
And this is really critical. You should never deploy any solution that hasn't
33:44
completely checked every one of these boxes, because
33:49
the AI is so helpful, and AI doesn't keep secrets. It will
33:53
go find information and tell you about it, and it doesn't know whether or
33:56
not you're authorized to see it. So you basically cannot have the AI ever look
33:59
at something that you're not allowed to look at when it's operating on your
34:02
behalf. That's very critical. And then
34:05
it makes its way into the world where one of the interesting things is to take
34:09
the data that you have and begin to provide it to people
34:15
in a way where you not only make sure that you have the right
34:17
permissions, but that is also authoritative. This is one of the other key
34:20
pieces of your question: there's a lot of data in a lot of organizations,
34:24
and only some of it is authoritative. So one of the hard parts about context
34:29
engineering, and about going through this kind of data, is to make sure that you
34:32
get not just an answer but the best answer given the data it has.
34:36
So a fun thing that we did early on: we were
34:42
testing out how well this idea of looking up information
34:45
works. We dropped all the financial data we could find
34:49
at Box into what we call a hub, a big folder. And then we
34:53
basically asked a question: what was the Q3 revenue? And it
34:57
came out with an answer. We were like, oh, high five, I can't believe this
35:00
works, it's so awesome. And then we looked at it and went, wait a minute,
35:02
that's a little bit wrong. At first we thought, oh, it must be Q3 of
35:05
last year or something. No, it got that part right. But then we
35:08
looked and found that it had pulled data from a board deck.
35:11
Now, a board deck is quite authoritative, but it
35:16
turned out that was from early on, before the quarter had closed. And so
35:19
it should have been looking at the document that was the official
35:22
public report, which was also in there somewhere. And so that kind of thing is
35:26
really interesting, because there's so much context in an organization, and
35:29
selecting the right piece is itself an intelligent task. And it
35:33
gets harder the more data you have, and the more you say, I'm going to
35:36
look at messages, I'm going to look at emails, I'm going to look at all your
35:39
data. So one thing that we do is give people the ability to
35:43
curate a set of data, which is often a great way to do it: have
35:46
humans say, I want you to really care about this set of data, like product
35:49
material or policies or that kind of thing, so the AI naturally only
35:53
looks at that data. In addition, we have the AI try to intelligently figure out,
35:57
if it has a conflicting set of data, can it resolve it itself? And usually
36:01
we also then say, if it can't, just surface everything: the Q3 revenue is either
36:05
this, this, or this, and here are the seven documents I found it in. So which
36:08
one did you want, because all of them seem like answers to your question.
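The behavior described here, prefer the authoritative source and surface all candidates with their documents when they conflict, can be sketched roughly like this. All data and names are hypothetical, not Box's implementation:

```python
"""Sketch: single answer when sources agree, full candidate list when not."""
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    source_doc: str
    authoritative: bool  # e.g. official filings vs. draft board decks

def answer_or_surface(candidates: list[Candidate]) -> dict:
    """Return one answer when possible, else every option with its sources."""
    # Prefer authoritative sources when any exist.
    authoritative = [c for c in candidates if c.authoritative]
    pool = authoritative or candidates
    answers = {c.answer for c in pool}
    if len(answers) == 1:
        return {"answer": pool[0].answer,
                "sources": [c.source_doc for c in pool]}
    # Unresolvable conflict: show the user all candidates and their documents.
    return {"conflict": [(c.answer, c.source_doc) for c in pool]}
```

The point is the shape of the output: the system never silently picks one of several disagreeing answers.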
36:11
Yeah. It's got to know that the answer is not in the document named "final." It's in
36:15
"final final, this time I mean it."
36:18
Yeah, that's actually an exact trick we learned early:
36:21
we used to not provide the AI the file name; we would just find the chunk
36:25
of the data. But it turned out that the file name often conveyed a lot of
36:28
information, to your point. So these days, if you gave
36:31
it one of those draft files and asked it a question, it would say,
36:35
well, the one that says final, this is the answer, but note that there are other
36:38
versions, and the draft ones were different. That kind of
36:40
thing, I think, is the only way to resolve this kind of challenge, because
36:43
even the smartest human in your company, if you ask them that question,
36:46
might not be able to give you a single definitive answer; they might
36:49
often have to give you some extra context. Well, and you know
36:52
what's funny is, there's a joke that goes around that one
36:56
of the hardest things in computer science is naming stuff. And I think
36:58
humans are just very sloppy with naming things, but AI is actually quite good at
37:02
this. So if you just ask the AI to name your things, it does a
37:05
pretty darn good job of naming them in ways that it will recognize down the
37:09
line. We definitely have that
37:12
challenge in Box, the challenge of naming, and using AI wherever
37:16
possible to help us with reasonable naming structures is
37:18
helpful. Yeah. So you said one interesting thing
37:21
which was, hey, you've got to be working with vendors who are
37:26
staying up to date with this stuff. So hidden behind that is clearly some
37:30
experience in vendor procurement. And I'm curious, for
37:35
folks who don't work in tech, or heck, maybe even if you do, every
37:39
vendor out there is saying, I'm doing AI, I'm on the front line. It's
37:42
hard to tell who actually is doing this versus who is just putting
37:45
window dressing on it at this moment in time. So, for folks who are out there
37:48
trying to buy from vendors, are there certain things where you're like,
37:51
"Hey, here's some tricks that you can use to help you better understand who is
37:56
likely doing legitimate work here versus who is just trying to surf a
38:01
wave." Yeah. I think probably one
38:06
of the answers would be: get four presentations from your vendors about
38:10
their AI strategies, and you'll probably very quickly figure out
38:13
which ones are which. It's almost one of those things where you
38:16
just try a few times and you'll get it. But I'd also say, if you
38:19
were looking for specifics, I personally believe that AI agents
38:24
are a big part of the enterprise paradigm that will matter a lot. So
38:28
vendors who are talking about their AI, and who are
38:31
telling you about AI agents now and in the future, where you start to see
38:34
them evolving, I think that's really critical. Many
38:38
times when you're talking to vendors, you should see
38:42
whether they can tell you: this is what works today, this is
38:46
what is brand new, and this is what's coming later. And you have to
38:49
have all three, because if you only get people telling you about later,
38:53
you're like, I'm not so sure they actually built anything. I'm not sure
38:55
they solved the hard problems. If they tell you only what they have, you're
38:58
like, oh, maybe they got stuck, because sometimes if you're not
39:01
rearchitecting constantly, it's a challenge. So I think you should
39:04
ask them which is which: which are the things that are
39:08
brand new, which are the things that are old, old being like a year
39:11
old, and which of the things are the next generation. That
39:15
works. And if you want to give them a hard question, ask them what an agent is
39:18
and then rate them on their answer, because that often
39:20
differentiates the people who are either using AI models in their
39:24
base form, which I think is great, there's no problem there,
39:27
versus the ones that are preparing for, or currently using, the idea of more
39:30
complex tasks with agents. Yeah. So you
39:35
basically just want to see the trend line. You want to say,
39:37
hey, I want to see where you started, I want to see where you're at, and I want
39:40
to see where you're going. And if there actually is a line there,
39:42
you can say, ah, this is a company actively working on this. Versus if it's
39:45
just a dot in time, you're like, ah, this was a flash in the
39:49
pan. Totally, that's a great way to say it.
39:51
What are you seeing that separates the companies that are figuring this out
39:55
the fastest? What is it about them? Are there certain skills? Are there
39:59
certain tools? Certain cultural attributes? What are the
40:03
things where you're like, ah, if you're doing this you are in the
40:06
1%? I think,
40:10
for me it really is all about AI agents. I think I started
40:15
saying that maybe six months or a year ago, and I think I'm going to be
40:18
saying it for the next couple of years. My mental model of an AI
40:22
agent is one where it can do things kind of like a
40:27
person can do. And so getting to the definition of exactly what that
40:31
means in your organization, and how you're going to use it, covers a very long
40:34
and broad set of things that you can do. So you start to try to
40:39
figure out both the road map and what's currently available
40:42
for companies that do these AI agents, and you start to see some
40:46
really interesting things, like how a very high-end programmer, who
40:50
of course is on the forefront of many of these, has the
40:53
capacity to handle some of the earliest, more complex technical
40:57
versions of what AI can do. And of course, AI being able to program
41:01
is just one of the fundamental attributes that has always
41:04
been amazing. But you start to see that some people are taking
41:04
the newest, best AI code generation tools, and
41:08
when they get to work, they become managers of a bunch of agents
41:12
doing a bunch of work for them. And you see the difference in how people are
41:16
getting good at this kind of thing. And as people
41:18
are doing that, you start to realize there's a difference between a good
41:22
agent and a less good agent. And so I think people at the forefront
41:25
of using AI effectively are the ones who have very good context engineering that
41:29
makes its way into very good agents that basically let you then delegate to them.
41:34
It's almost like how we evaluate people:
41:38
according to this role, they're good at this or good
41:41
at that. You almost need to start evaluating agents that way. You're like,
41:45
"Yeah, you did the job, but you required a little bit more guidance, and the
41:47
first version was maybe off. So you don't get promoted
41:54
yet." That kind of mentality, you start to see coming out of
41:57
companies. You're like, I can't believe that when I asked this thing, the
42:00
agent was able to do it. That's amazing. Versus another one where
42:05
it took so many shots to get it right that maybe I should have
42:08
just done it myself. That kind of thing, I think, is an
42:11
idea that will carry forward across many companies: seeing how smart the
42:15
agents are. It's almost like agents are the new application. And then
42:18
what's going to matter most to companies is how good that agent is. And
42:21
funny enough, it might even turn into like 100 lines of code, but that 100 lines of
42:24
code is more important than many other things you're doing.
42:26
Yeah, effectively it's like establishing a rubric
42:32
and then keeping tedious notes: yes, that was great; no, that was not
42:35
great; and just doing that over and over again. And the humans that
42:38
are really diligent about doing that can deploy agents way
42:42
better than folks who are just like, I set up a prompt and
42:45
it's fine, you know, at the end of the day.
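The rubric-and-notes idea can be sketched as a tiny scorecard: grade every agent run against the same rubric and accumulate the results. The rubric items here are invented examples:

```python
"""Sketch: a running scorecard for agent runs, graded on a fixed rubric."""
from collections import defaultdict

# Hypothetical rubric items; a real one would reflect your own quality bar.
RUBRIC = ("correct", "needed_no_retries", "followed_format")

class Scorecard:
    def __init__(self):
        self.runs = defaultdict(list)  # agent name -> list of per-run scores

    def record(self, agent: str, **checks: bool) -> None:
        # Score one run as the fraction of rubric items it satisfied.
        score = sum(checks.get(k, False) for k in RUBRIC) / len(RUBRIC)
        self.runs[agent].append(score)

    def average(self, agent: str) -> float:
        scores = self.runs[agent]
        return sum(scores) / len(scores) if scores else 0.0
```

Over time, the averages tell you which agents you can actually delegate to and which ones "don't get promoted yet."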
42:49
Yeah. What um let's look ahead a little bit.
42:52
What are you excited for uh in the next six to 12 months? What are the things
42:56
that you think are just on the cusp of organizations being able to figure out
43:01
how to do? To stay true to my promise from a
43:04
moment ago, I'm going to say the words AI agents. Yeah. But okay,
43:08
there are multiple flavors of it, to make it more specific. Yeah. Okay. So AI
43:12
agents that can do more for you, I think, is very critical. The idea where
43:15
you give them complex tasks, and you can almost tell:
43:20
some things you want answered quickly. I have this piece of information I want,
43:22
sort of classic retrieval-augmented generation; have an agent think about it.
43:25
Even if it reflects on it, it should give it to you really quickly. But then
43:27
at some point you want to do something complex, and it's going to take a
43:30
while. And this is something that I don't know if everyone has
43:32
gotten used to yet, but for people who are on the forefront of AI, thinking
43:36
modes or reasoning modes are something where you start to
43:40
realize that, in the same way, with a person, if I said, hey, can you give me
43:43
the answer to this question, it's: here you go. But, can you make me a
43:45
report that I'm going to give to the board that's going to be completely
43:48
accurate? They're like, give me a while. And in that
43:51
sense, agents have that too. The longer you give them, the more work they can
43:55
do and the more they can get you really reasonable things. So
43:57
something like deep research, which was a capability that many model
44:01
providers, like ChatGPT and Gemini and Anthropic, added a while ago,
44:05
deep research on the internet, turned out to be something where, if you really
44:08
wanted a good answer, you'd do deep research versus the answer-in-10-seconds
44:12
mode. So we've incorporated agents that do deep research on your content,
44:16
and you start to realize that you can get an answer quickly, but if
44:19
you want research that goes on for a minute, sometimes 10 or 20 minutes,
44:23
then giving the AI time to reflect and to look at and understand more
44:26
things, get a reference, follow up on that reference, those
44:29
kinds of loops are really critical for making sure
44:34
that you are getting the best kind of answers. And so, going forward, I
44:38
think you're going to start to see these agents helping you.
44:41
So that's one major aspect: agents taking longer to do things, so
44:44
you start to think of yourself as a manager of agents. But the other side of
44:47
it is agents and workflows. I think some of the really interesting things are
44:51
when you take the power of these agents and put them inside a
44:54
workflow, and sometimes these agents can be calling other agents and
44:57
other systems. The more that you can have a
45:02
workflow go from end to end without stopping, because along the way you
45:05
intelligently do something that used to require
45:08
a person to do it, the more you can probably accelerate a process.
45:12
And to the point earlier, you can do this in areas that you just
45:15
don't automate today, because it's too hard or too complex or it bothers people
45:18
too much. So I think this is going to be a big part of the next six to
45:22
12 months: people using workflows more, with agents and different platforms
45:26
together, to start to automate more of the things that hopefully are
45:30
delivering what you'd call real business value.
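The reflect-look-up-follow-references loop behind deep research can be caricatured as a budgeted traversal: a small budget gives the quick answer, a larger one lets the agent keep following what it finds. This is a toy sketch over a fake in-memory corpus, not any vendor's actual implementation:

```python
"""Toy sketch of a deep-research loop: read a document, follow its
references, and keep going until a step budget runs out."""

# Fake corpus: each doc has text and references to other docs.
CORPUS = {
    "intro": {"text": "overview", "refs": ["details"]},
    "details": {"text": "specifics", "refs": ["appendix"]},
    "appendix": {"text": "fine print", "refs": []},
}

def deep_research(start: str, budget: int) -> list[str]:
    """Follow references breadth-first until the step budget runs out."""
    gathered, frontier, seen = [], [start], set()
    while frontier and budget > 0:
        doc_id = frontier.pop(0)
        if doc_id in seen or doc_id not in CORPUS:
            continue
        seen.add(doc_id)
        budget -= 1
        doc = CORPUS[doc_id]
        gathered.append(doc["text"])  # "read" the document
        frontier.extend(doc["refs"])  # follow up on its references
    return gathered
```

The trade-off in the transcript falls out directly: `budget=1` is the 10-second answer, a large budget is the 20-minute research run that has seen far more context.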
45:33
So, follow up on that. One of the things we're observing is that when
45:38
you put these agents in a workflow right now, they benefit massively from being
45:42
constrained inside a workflow, where you've sort of predetermined, saying,
45:45
"Hey, we want you to go from this step to this step to that step." You get
45:48
a lot better accuracy, better reliability, cost advantages,
45:53
things like that. But I think we're all more excited about this world
45:56
where the agent can figure out what the next step is, etc. What do you think the
46:00
big unlock is for those to actually work at the reliability that most enterprises
46:04
need? There are some cases where it's working, but by and large I
46:08
think that still is not yet solved. Yeah. I mean, there's an interesting
46:12
question of what is an agent versus what is a workflow, right? Because if you look
46:14
at it, what is a good agent? It
46:17
either has a predefined sense of what it's going to do, and it is intelligently
46:21
progressing through this and making decisions and looping along the way, or
46:24
it is making its own plan completely on its own, or
46:28
some combination of both. And if you looked at what an agent
46:33
does in many of these more complex cases, it does look like a workflow.
46:36
So there's this question of, well, if I'm using that agent
46:39
in a workflow, where does it start and stop? And to me there's this
46:42
question of the deterministic aspect, when something always happens, versus
46:46
when something intelligent is happening. This is why I believe in the
46:49
paradigm where you actually use agents in a workflow. And so you might have an
46:53
incredibly complex agent that does all these things, intelligently working
46:56
through them, but its job is to arrive at a single answer, or
47:01
something to branch on, and then you use that inside of a workflow. Yeah.
47:04
And so, given an input, it does a lot of interesting things and then comes
47:09
out with an output that then leads you down a deterministic path.
47:12
And again, I think of this very much like: imagine that you had a very
47:16
long list of intelligent contractors who come into your organization who can do
47:21
whatever you want. You just have to specify it to them. At some point
47:24
you specify, please do all this intelligent stuff. You have this long
47:27
list of instructions and things you give them. But usually, them just
47:32
saying yes or no, or risky or not risky, or here's how to
47:37
categorize this thing, isn't actually the value. The value is then to take that
47:40
and do seven more steps. And those seven more steps, I think, are the
47:43
workflow. So I think of it like this: if you were to draw a
47:46
workflow, you typically have a box in there which is your AI agent, to
47:49
take the input and give the output. That's going to become more and more
47:52
complex. Some people draw these workflows as
47:56
really long, complex things; at some point they're going to get smaller and
47:59
smaller, but the box that is the AI agent doing work is going to become more and
48:03
more sophisticated, and that's okay, because I think that's the way that
48:05
people interact with other people. At some point, in whatever group
48:09
you're in in your organization, you go to somebody on some team and say, you do
48:12
this, and it's one line that turns out to be hugely complex, or
48:16
that person then goes and kicks off their own workflows. But it's a
48:19
great way to think about things: this hard thing to do will be the
48:22
responsibility of one entity, in this case an agent, to go do it.
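The pattern described here, an arbitrarily complex agent whose contract is just to emit one branchable answer, wrapped in deterministic steps, might look like the sketch below. The risk-review scenario and all names are invented for illustration:

```python
"""Sketch: an 'agent' box that returns a single label, embedded in an
otherwise deterministic workflow that branches on that label."""

def classify_risk(document: str) -> str:
    # Stand-in for the intelligent step: internally this could be a long
    # agent loop, but its contract is just to return one label.
    return "risky" if "unlimited liability" in document else "ok"

def contract_workflow(document: str) -> list[str]:
    """Deterministic steps branch on the agent's single output."""
    steps = ["received"]
    label = classify_risk(document)  # the one intelligent box
    if label == "risky":
        steps += ["escalate_to_legal", "notify_owner"]
    else:
        steps += ["auto_approve", "file"]
    return steps
```

The agent box can grow arbitrarily sophisticated inside `classify_risk` without the surrounding workflow ever changing shape.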
48:26
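The pattern Ben is describing can be sketched in a few lines of Python. This is a minimal illustration, not a Box API: the `agent_classify` function is a hypothetical stand-in for a real agent call, and its only job is to collapse open-ended reasoning into one branchable answer, after which the workflow's remaining steps are fully deterministic.

```python
def agent_classify(document: str) -> str:
    """Hypothetical agent step: whatever reasoning happens inside,
    it must emit a single branchable answer ("risky" / "not_risky").
    A real implementation would call an LLM or agent here."""
    return "risky" if "breach" in document.lower() else "not_risky"

def workflow(document: str) -> list[str]:
    """Deterministic workflow: the agent is one box that supplies a
    decision; the follow-up steps are fixed and auditable."""
    decision = agent_classify(document)
    steps = [f"decision:{decision}"]
    if decision == "risky":
        # The "seven more steps" Ben mentions: the value lives here.
        steps += ["notify_legal", "quarantine_file", "open_ticket"]
    else:
        steps += ["archive_file"]
    return steps
```

The agent's internals can grow arbitrarily sophisticated without changing the shape of the workflow around it, which is exactly the point being made.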
Yeah. It's almost like you're just chaining along milestones where
48:29
it's like, hey, we want to check back in at this agreed-upon point in time, and I
48:33
want this agreed-upon artifact to exist. And how you get to that, that's the
48:37
agent. It's like, ah, here's all this magic. And then boom, now we're talking
48:40
about the output of this thing. And then based on that output, we're
48:43
going to go kick it off again and do another... Yeah, the next step in the
48:46
process. I remember the first time I started to build agents. You
48:51
start to realize that they resemble the state diagrams that you have
48:54
in school. At some point you learn these, and then as you
48:58
go through it, you start to realize, wow, many things that I do or
49:01
that my company does are representable in these state diagrams. And
49:05
then there's this little kind of moment you have where you're like,
49:07
whoa, if I just had an arbitrarily complex, huge set of these things,
49:12
then I can do almost anything with the agent, because you have the ability
49:15
to have intelligence. And then of course your next question is, how am I going to
49:18
possibly manage that overall? And then you start to build these
49:22
layers: this agent calls this agent, this agent uses an MCP server to go to
49:26
this platform, and all of that can be represented in a
49:30
deterministic workflow. And I think that is how people will start to think of the
49:33
world going forward. And again, all of it powered by the idea that that's not
49:36
too far from how you think of it today, when you have that agent be kind of
49:40
the way you deal with a person or a team or whatever. And not to say the
49:43
agents are going to take over and do those things by themselves, but you
49:46
will think of them in that way so that it makes it easy for you to figure out
49:49
what the agent can do for your organization.
49:51
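The state-diagram view Ben describes can be sketched the same way. This is a hypothetical illustration, with plain functions standing in for agents: each state runs one "agent" whose output names the next state, so layered agent-calls-agent behavior stays inside a single manageable, deterministic state machine.

```python
def triage(ctx):
    """Agent 1 (stubbed): decide the route by emitting the next state."""
    return ("summarize", ctx) if ctx["pages"] > 10 else ("done", ctx)

def summarize(ctx):
    """Agent 2 (stubbed): do the work, then hand off to the end state."""
    ctx["summary"] = f"summary of {ctx['name']}"
    return ("done", ctx)

# The state diagram itself: state name -> agent that handles it.
STATES = {"triage": triage, "summarize": summarize}

def run(ctx, state="triage"):
    """Walk the diagram until the terminal state is reached."""
    while state != "done":
        state, ctx = STATES[state](ctx)
    return ctx
```

Adding capability means adding states and transitions; the runner never changes, which is one way to keep an arbitrarily large set of these diagrams manageable.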
Love it, Ben. This is a good place to end it. I could keep going on for hours
49:56
on this topic. Agents, agents, agents. Hopefully folks enjoyed this
50:00
conversation. You know, if you're in an organization that has lots of
50:02
unstructured data and you're bemoaning how it's been hard to do workflows,
50:07
do automation, all that sort of stuff, you heard it from Ben. Now's an
50:10
exciting time for you because AI is going to make it a lot easier. Thanks
50:13
for joining us, Ben. Thanks for having me on.
