EPISODE 02 TRANSCRIPT | SIGNAL & NOISE PODCAST

AI & The Future of SaaS with Ben Wilde

Brett (00:00)
Hey everybody, welcome back to Signal & Noise. This is episode two, which we're calling Agentic AI and the Future of SaaS, or as Rio Longacre, my co-host, likes to say, is AI the death of SaaS? Or, thematically, I think you said, Rio, something like, did video kill the radio star? Which, you might be dating yourself with that statement. Do you remember when that video first aired on MTV? Yeah.

Rio (00:25)
That was their first one, right? I think that was

'81, something like that, 1980.

Brett (00:27)
Yeah, it was

like shortly after they put the sort of MTV flag on the moon in that video. And that video was one of the first music videos I ever saw. But I think it's a really interesting topic. We've got Ben Wilde, who's the head of innovation at Georgian. And for those of you that don't know who Georgian is, they're a VC that invests in growth stage, AI- and automation-focused B2B software companies. Super. Oh, yeah.

Rio (00:32)
Flag on the moon.

It's a great discussion. I think

everyone's going to be so excited to listen to this.

Brett (00:58)
Yes, a great discussion that you quickly can get into almost beyond your depth. But I think it was a really 360 view of every aspect of agentic AI and AI, and how it's impacting, in very realistic terms, people's jobs, the advertising ecosystem, and larger industries in general today. So I was thrilled by the conversation.

Rio (01:25)
You see people posting all the time: is AI going to kill SaaS? Obviously, that's being a little hyperbolic, potentially, but it definitely is going to change it. So it's a great discussion. Stay tuned for that; I think you'll enjoy it.

Brett (01:38)
Yeah. And so before we get into that, we're going to do our kind of news rundown. Right now, I think it's pretty important to cover the topic of economic uncertainty, and obviously the Upfronts and Newfronts. Rio will take you through some of his observations of the biggest news coming out of those over the last two weeks, with the IAB Newfronts and then the Upfronts last week.

I watched some of them; I was able to get some streaming links. There's a lot of entertainment, but also a lot of ad tech, martech, and advertising talk during these, at least in the preambles, that you didn't typically hear at Upfronts. So it was a pretty interesting experience. But before that, I think it would behoove us to talk about the fact that the marketing ecosystem, as our CEO, Matt Krepsik, says, is about to get weird.

Right, so there's certainly some economic cycles that we've experienced over the last few years. Rio and I have been around for more time than we care to admit in the industry. And what we've noticed is that every economic cycle in the advertising industry specifically represents a good opportunity to evolve. And so...

Just to get an idea of where we are now, I did a little bit of research, and we've seen some various percentages in terms of the likelihood of a recession in 2025. JPMorgan is saying 60%, even this morning. Goldman Sachs says there's about a 45% chance of recession. The New York Fed, via Forbes, suggests a 30% chance of recession over the course of the year. So we don't know exactly yet. It seems like things are stabilizing.

But I think overall, we're expecting, at MediaRadar and in a lot of the research that I've done, and I know Rio has done a lot of this research as well, there's gonna be an impact on advertising. Marketers are expecting a 6 to 10%-ish cut in advertising spend. We expect that SMBs are gonna be more exposed to volatility, which is generally the case. There's likely to be structural

shifts in the advertising market, right? You look at the Google breakup; you look at what potentially is going to happen with Meta. So there's a lot going on in the ecosystem right now that might leave some of us a bit unsettled. But I think, you know, if you look at what happened with COVID, and Brexit in the UK, and the '07 and '08 financial crisis in the US, you'll see some room for optimism, I think. And Rio, tell me if you disagree.

I mean, ad spend in the UK during Brexit and the COVID era, sort of 2019, remained pretty strong post-COVID. It was up 8%. It eventually fell to low single digits. But what they did see, and I think we saw this in the US as well, is a tremendous shift to digital. I was in the digital ecosystem, the digital advertising world. So were you, Rio. Both of our businesses, Slalom Consulting and Neustar, saw significant boons from

the COVID period, which I don't think we expected.

Rio (04:45)
Yeah, that was interesting, Brett. So I remember, I was working in management consulting during the 2008-2009 financial crisis. It was catastrophic. There were immediate, across-the-board 40% cuts throughout the industry, and teams that focused on financial services saw up to 60% headcount reductions. It was brutal. It was immediate. Then

Brett (05:06)
yeah.

Rio (05:09)
when COVID hit, we expected the same thing. Everyone was creating lists of people. It was really sad, right? And it felt like the world was ending, but then the opposite happened. Business took off. We grew headcount by over 50% during that year, starting, let's say, July, when we realized the world wasn't going to end. We realized, okay, there's more money going into digital. I was hiring four people a week at some point, and losing some too, because they were being poached by Adobe, Salesforce, other consultancies.

Brett (05:11)
Yeah.

Rio (05:37)
But it was a wild time.

Brett (05:39)
Which is pretty astonishing

if you think about it. I mean, during that period when you were going to supermarkets with gloves on and masks on and nobody knew what was happening, you were Cloroxing your produce for God's sake, and you're hiring four people a week.

Rio (05:53)
That was incredible. Yeah, without meeting them. I mean, hiring someone without meeting them pre-COVID was almost unimaginable. No way you'd do that. You'd want to sit down in a room with them, or at least have someone do that. But the fact that we weren't doing that, and we were hiring at such high volume, it wasn't just us, it was everyone. All the tech companies, all the hyperscalers, certainly all the consultancies, and nearly anyone in digital. So to your point, Brett, we don't know what's going to happen. But short-term ad spend definitely looks like it's down.

Marketers are being told to clip their budgets. I'm seeing that big expenditures are being delayed, deferred, or canceled altogether. That's already starting. And the way this works, too, is there's always a short-term freakout, but the impact really is going to be next year, right? Because the new budgets get made this fall. So I guess it'll depend on how this year ends, but next year is usually when you see the budgets really get clipped.

Brett (06:42)
Yeah, yeah. In 2007, 2008, certainly there was a decline, I think a 14% decline in US ad spend during that period. But there was a structural shift, another digital advertising structural shift. And this year, digital advertising is going to surpass an 80% share of total ad spend for the first time ever, which is remarkable. And part of that was driven

by that economic cycle. The other things that happened in that period are pretty interesting. One, obviously, the continuous decline of local TV, newspapers, radio, right? Largely taken over by the digital ecosystem. Search and social saw steady growth. YouTube introduced their ads in 2007. That may not have happened at that time had there not been an economic cycle like this, right? Hulu introduced ads in 2009.

Right, which is really the advent of CTV as we know it today. And with the Upfronts and the Newfronts, Rio's got a full analysis on that. CTV is a big play in a number of respects. And it was a big part of the conversations that I was listening to. They're saying it's going to account for all growth in television advertising going forward, which is a remarkable shift if you look back just five, seven years ago in the ecosystem. So...

Rio (08:02)
There's going to be

around $20 billion this year; that's the estimated CTV spend. I mean, it's bigger than either cable or broadcast individually, but it's not as big as them combined, right? Linear is gradually declining. You look at cable subscriptions; I think I read they're falling six percent a quarter or something crazy like that. So over time, it's definitely not going to go to zero, but it's going in that direction, and

yet spend on CTV is rising very dramatically. So yeah, not surprisingly, to your point, Brett, that was one of the main discussions in ad tech, right? It's interesting, too: ad tech's kind of eating the world, right? You look at the Upfronts and Newfronts; five, six years ago, 10 years ago, ad tech was maybe an afterthought, but now all the discussions, which we're going to get to in a moment, really were centered on different tech advances that were being rolled out.

Brett (08:52)
And I read a report from Activate Consulting recently that said by 2028, they're expecting 28% growth

in streaming television households specifically, from 60 to 77 million. And there's going to be a 78% decline in broadcast households. So the shift to almost entirely digital, from a media and advertising ecosystem perspective, is definitely underway. And it was certainly a big topic at the Upfronts last week.

Rio (09:26)
Well, and

not surprisingly, too, looking at both of those, performance television, performance TV, and shoppable streaming, shoppable content, that was a big focus of it. Not surprising: we are in the outcomes era, and I don't think there's any debate anymore. Tying views to conversions, tying it to commerce, is going to become more and more important. In that vein, a

couple things to call out. Netflix unveiled its AI-generated seamless ad format. And this is cool. You kind of think of a can of Coke floating within a title card, right? Brands will be able to do this. It's a really cool advertising breakthrough, I think, that's now being enabled by technology. I had a good conversation with the TripleLift CRO at a dinner a couple of weeks ago, and even TripleLift, they're rolling out these

content formats where you can actually insert logos at a frame level within creative. It's really cool within streaming content. So this is going to become really the norm. We can have product placement taken to a micro level here. I think CTV is enabling a lot of that. Amazon had some big news, very focused on tech. They had their interactive pause ads, shoppable live event overlays, and then a lot of updates to their DSP. And they were touting the fact that they now have access to 300

million U.S. users across all the different properties. And for those who have not been paying attention, they've been adding additional properties, both their owned and operated, you know, Twitch, for example, but also they've been continuing to beef up additional content within the DSP, with a big focus on CTV. And it seems like their goal is to really become a rival to The Trade Desk and become one of the main DSPs that's being used today, not just for their own inventory. So, very interesting there. And then

Brett (11:10)
But also, you

can't go without saying that Amazon is making a huge play in the live television sports world, right, in terms of Thursday night. Yeah, and so is Apple. So that's a big play for sure.

Rio (11:18)
So is Apple.

Yep. And then WBD, they had their Neo ad platform and Storyverse program. And the goal of this was to let brands use their kind of historic IP to create custom spots. So I think that was kind of a cool feature they rolled out. And last but not least here, Google, during the Newfronts. I thought this was interesting, because they announced that it's going to be piping retailer

network data directly into DV360, their DSP, and this would be for YouTube campaigns. This kind of shows this ongoing merger between retail media and what's going on in CTV. I think the combination of that's really powerful. Google's smart to get on board. It's been an interesting few months for them, but they continue to roll out new features, and they're going to remain a dominant player.

Brett (12:12)
Yeah,

as long as they can connect that retail data to the point of sale. I mean, that's kind of the golden goose for retail media, as they call it, right?

Rio (12:19)
That's why Amazon's, you know, that's one of the big reasons they've done so well, right? It's having...

Brett (12:23)
They can claim attribution at

a level of accuracy that I think previously was sort of educated guessing, you know, with some algorithmic logic behind it. But I think you're getting a little bit closer to proving outcomes.

Rio (12:35)
Yeah, closed loop. And then you add their emphasis on CAPI; they've been pushing out the conversions API. I think it's not surprising. But then the backdrop for this, as we talked about before, is the layoffs, right? You mentioned a couple, Brett, but, you know, GroupM, WPP Media, they announced, I think, 40 to 45%, right? Could be. They didn't say eliminated; they just said impacted in some way, whether that's new jobs or...

or roles being eliminated. Those are big numbers, right? That's a big company. I mean, that's...

Brett (13:06)
And Rio, do you think

this was an inevitability that ties to the WPP GroupM business model, the agency holdco business model, or do you think it's something else that's driving these changes?

Rio (13:18)
Well, it could be that.

I mean, I think there's always been questions over the years. Okay, these holdcos, they're a combination of many, many different agencies. They almost seem federated, you know; they all have their own CEOs. They all have their own trafficking teams. They don't even have common technology platforms a lot of the time, right? Is that starting to change? Does this...

does this environment accelerate that change, right? As they find efficiencies or optimizations across the holdco, I mean, there's probably tons of room for improvement, right? You look at how much duplication of different roles there is. My guess is it accelerates it, Brett. I think a lot of them probably want to do it, and maybe this pushes them to do it. And how much is AI-driven, right? I mean, hard to say.

Brett (14:02)
Yeah, and

the economic uncertainty is certainly a forcing function, back to what we were talking about a bit earlier, right? One of my old colleagues, a guy named Eugene Becker, who was from eXelate; he and I went through the Nielsen acquisition, and he became the EVP of data and analytics at Acxiom. He put a post up on LinkedIn that I thought was super interesting. I'll just read it word for word: "It's crazy that the existing model with multiple agency brands duplicating effort has persisted

as it has in a mature sector. Is there any other business that does this? At some point, agency holdcos need to be simply companies." And I thought, you couldn't have said it better. Rio, you and I were talking beforehand about what that means, when you talk about them being simply companies. And you had some good thoughts on that.

Rio (14:46)
Well, I think that was part of the value prop of the holdcos historically: we have all these different agencies that have their own flavors, their own specialties, and we can bring this to bear, right? We have great creative shops, great media shops, great analytics shops, and they're all a little different and they all run differently. I think that was always the value prop, but moving into this modern era, I mean, does that hold water anymore? Are you better off saying, you know, we've built a common data platform, we've built a common...

you know, let's say activation platform, we've come up with processes for trafficking media, for everything; hey, brand, we're gonna be able to do this more efficiently for you. I mean, I think we're gonna be seeing more of that. And even, you look at some of the big indies, right? Like Horizon Media, Stagwell, they've grown a lot. I think Horizon is over $9 billion in managed media, and I think Stagwell is over five, right? They're approaching holdco size, right? Which is kind of interesting. And they've done this, you know, by...

Brett (15:38)
Yep.

Rio (15:41)
They made acquisitions, sure, but I think they're going to market a little differently. So will we see the big holdcos take that approach? I think what you're seeing with GroupM, WPP Media, it signals maybe that's the direction they're going in.

Brett (15:55)
Which is the "simply a company," right? Where you've got EBITDA and you've got efficiency built in, and there aren't overlapping or redundant functions. Horizon probably is taking that direction, arguably.

Rio (16:06)
I think Dentsu announced $2.2 billion in planned optimization. TBD what that looks like. And then, remember, at the end of last year, Omnicom announced 3,000 roles were being impacted. And that's not counting what's going to happen with the IPG merger. I think those will probably be much bigger numbers; they haven't really announced those yet. But to your point, Brett, as the holdco evolves, I think we're going to see more of it, would be my guess.

Brett (16:27)
Yeah,

and Dentsu talked about it as "optimization," in quotes, right? So what do you think they mean by that? You know, the topic of this podcast, when we get to our Ben Wilde interview, is certainly AI and the role that it plays in all of our lives.

Rio (16:45)
Well, AI is probably behind some of it. You wonder, is AI the excuse, though? For sure, AI over time is going to change many jobs and eliminate some. I mean, you heard what Brian Lesser said, we talked about GroupM: human hands won't touch media planning in the future. May or may not be true. There was the quote about how in 25 years we won't be trafficking media, right? It'll just be done by AI. But it's not happened yet, right?

I think AI is an excuse to make these reductions, right? That maybe were needed anyway. Maybe, maybe not, but I think it's an excuse. But over time, as AI does get implemented, it won't just be an excuse; it'll be the reality that these jobs have changed.

Brett (17:27)
Yeah, no, for sure. And the US advertising sector in general has seen a bunch of net job losses, right?

Rio (17:35)
Yeah. It's interesting, too, though. I talked about this in that little list in the post I made about AI. I think it's just started to impact advertising and marketing. It's wild. You look at tech: everything in tech is AI. I'm not suggesting everyone's vibe coding, but developers are using it constantly in their jobs. I'm not seeing the same penetration in marketing right now. People are using it, for sure, but they're using it for small things. I don't see it completely restructuring

the approach to how they do their work now. I think that's going to change really quickly. Correct. Yeah, it's like you have ChatGPT or Perplexity running on your desktop; you have a question, you'll use it to answer some questions. I don't see it fundamentally restructuring how work is done yet, but that's going to come.

Brett (18:08)
Yeah, it's more supplemental if anything.

Well, with that, let's go to our interview with Ben, which was fascinating. Stay tuned, everybody. It's a terrific conversation.

Rio (18:34)
Enjoy.

Brett House (18:37)
Hey, Ben. Welcome to Signal & Noise. How are you today? Absolutely. So we're going to be talking about agentic AI and the future of SaaS, a really great topic. I know you and Rio have chatted a bit, both at the Digital Velocity event, from a company that you invest in, Tealium, and had some real deep conversations about AI. Just for those that don't know, Georgian's a VC, right? That's the right way to put it, I think.

Ben (18:39)
Thanks, Brett. Good, thanks for having me.

Correct. Yep.

Brett House (19:06)
not a PE but a VC.

Ben (19:08)
Yeah, we're growth stage investors so sort of in between the two.

Brett House (19:10)
Growth stage investors.

Yeah, and they invest in companies that do AI and automation and the like. And they put out a bunch of really interesting research that Ben was kind enough to share with us in advance of this podcast. One was your most recent report, the AI Landscape Series: Agentic Platforms and Applications, for which you surveyed 600 executives, right? I think it was across both GTM as well as tech.

If I'm...

Ben (19:39)
That's correct.

Yeah, we put out a couple of things. One actually was a white paper, which was AI Applications and Agentic Software. And then we put out a piece of research with NewtonX that interviewed 600 executives, 400 at what you'd call startups and a couple of hundred at enterprises, and asked them a bunch of questions, which we can talk about today.

Brett House (20:02)
Yeah, and you get good perspective from both the GTM, sort of marketing, crowd plus the IT, CIO crowd, which I think is a good balance between those two audiences. And then I thought your crawl-walk-run model was really interesting; took a little bit of a look at that. And then I looked at some of your past stuff, the AI Applied report from November of 2024. So we'll definitely dive into some of those details. A ton of really interesting findings that I'm sure everybody in the audience will be interested to hear and learn about.

Ben (20:08)
Great, yeah.

Brett House (20:27)
You know, from some early conversations with Ben, I can tell you he knows more than just about anybody I've spoken to about this world, which is complex, and there's a lot of acronyms. So I'm gonna let Rio set the stage for the thesis of this particular conversation.

Rio (20:41)
Yeah, thanks, Brett. And Ben, great to see you again. The background here is, as Brett mentioned, Ben and I ran into each other at Digital Velocity, that's the Tealium annual conference, a couple of months ago, had just great conversations, and I've continued to have them ever since and done a lot of thinking. And it's just such a cool topic. I mean, the premise here is, okay, we have this thing called AI. What's the impact going to be on software as a service, on the SaaS business, on not only how people are interacting with SaaS, but also on the business model itself? And you think about it.

Some people are starting to say, and I'm not saying I necessarily agree with them, and that's what we'll dig into today, that AI, specifically agentic and generative AI, could potentially pose a mortal threat to the traditional SaaS business model, because it does fundamentally change, or at least promise to change, the way users interact with software and how they derive value. I mean, if you think about it, over millions of years of evolution, we evolved to interact with other humans, right? So if we can have...

agentic software that interacts with us with natural language and speech and things like this, maybe video, it's going to be a new way of interacting, and this is going to impact everything, including SaaS, right? You think about it: traditional SaaS is buttons, knobs, controls, predefined user interfaces, workflows. That's just how it works, how it's evolved, right? A lot of these SaaS companies charge based on features. Compare or contrast that to AI and AI agents, where they can...

dynamically execute tasks based on natural language. They can maybe even spin up UIs on the fly, just based on demand. It's pretty cool, right? Given this is where it seems like we're going, why would anyone want to pay for 10 different SaaS tools, right? With complex UIs and all those buttons and knobs, why would they do that when one single AI agent could, in theory, do it all?

In SaaS, if you're paying for features, right, and that's how the models work, why would you pay for features, not outcomes, when AI could potentially enable task- or outcome-based computing? Let's say you talk to the agent. You say: run a campaign, generate a report, build a journey for me. It's going to execute all those things. This, in theory, reduces the need to log in and manually operate individual SaaS platforms, right? So that is a threat to the business model. Could be, right? It really all begs the question,

given this future that we all are starting to believe is coming, is the SaaS business model potentially in jeopardy? Again, Ben, welcome. We're thrilled to have you here, Head of Innovation at Georgian. Maybe we can kick things off; I'd love to hear a little bit about your background and a little bit about what Georgian does. The floor is yours, Ben.

Ben (23:13)
Thanks, Rio. Appreciate it. Yeah, I've enjoyed our conversations so far, and this should be another good one. So Georgian's a growth stage investor, as we said, and we're focused on B2B SaaS, enterprise software, and a few infrastructure investments as well. So a little bit of hardware here and there. But the common theme about everything that we do is really around

the use of data in a trustworthy way, and in particular around AI use cases. So a lot of AI investing. I've been with the fund since Fund One. We started out in 2008, and we're deploying out of our sixth growth fund at the moment. And we predominantly invest in the US, Canada, Israel, and the UK at the moment. And then my role in the organization is Head of Innovation. It's a bit of a nebulous term, but

what that really means is I work really closely with our portfolio, spend a lot of time on product strategy around AI and agentic AI. I also support the pipeline activity, and I do, you know, thought leadership; I get out there and I talk about this stuff as well. My background, prior to getting into investing at Georgian, was in the software industry. Just prior to Georgian, I was at IBM, in the Software Group

division, after being acquired in the early 2000s. I previously worked at Informix, in the database management system business, competing with Oracle and DB2 and Microsoft back in the day. A lot of background in systems and databases, I think I was a DBA before that, and data management, and then I just naturally evolved through the years into

more into the data science, ML, AI side, but always with a very applied perspective and always coming from the product management side of things.

Brett House (25:11)
So

the innovation title lives up to the hype. I mean, you were an operator on the product side, on the systems development side, right? You ran teams that did this sort of stuff. So, you know, maybe it's not as nebulous as you think.

Ben (25:22)
Yeah, yeah, a bit of M&A. Well, it's

a lot of fun, that's for sure. And then we just get to the really fun part, which is the disclaimer. And I just always have to say this: I do work for an investment firm, but nothing I say should be taken as investment, tax, or legal advice. In fact, sometimes you'd probably do well just not to listen to a lot of what I say at all. Take everything with a healthy dose of skepticism, and I think it will actually make the conversation better anyway.

Brett House (25:51)
This is purely speculative. Purely speculative. And just for our audience, don't call him an Aussie. He'll get very offended for any of those that can't tell the difference between the Kiwi and the Aussie accents. It's kind of like Toronto and New York or Toronto and Boston. Sometimes hard to tell. You're in Toronto, right?

Ben (25:53)
Purely speculative, all of this. Yeah, exactly.

Yeah.

Yeah, we are based in Toronto, yeah, that's correct.

Brett House (26:12)
And I find that hard sometimes to tell the difference, right? Yeah, you gotta listen carefully and be like, there's certain words. Yeah, exactly.

Ben (26:19)
You do, aye

Brett House (26:22)
let's dive into that report that you penned, or researched. You were telling us about some of the deep learning algorithms that you run to pull and source a lot of research, which I thought was fascinating from a use case perspective, because we do a lot of that on our side. But it was the AI Landscape series; it's about agentic platforms and applications. What really motivated you to do this research?

And can you sort of define, just set the stage for, what agentic AI is in your mind, and what were some of the findings that were important?

Ben (26:55)
Yeah, sure. I think the motivation really was to help just catalog some of what's happening at the moment. You know, there's so much change going on in the industry in almost every aspect of software at the moment, both in terms of applications, but also in terms of the infrastructure, and also in terms of the underlying hardware as well. And there's more change than I've seen in the 30 years that I've been in technology. We were sort of motivated

Brett House (27:23)
It's hard to make sense of

it all,

Ben (27:25)
Well, it is. So part of it was just a bit of an exercise in cataloging all the different areas where things are changing, in part to help explain some of this to our investors, you know, we have investors in the fund, and then also to help us give our perspective to the companies that we're either already working with, because they're in the portfolio, or the companies that we'd like to work with, because they're in the pipeline. So that's,

Brett House (27:37)
Yep.

Yeah, we call that

we call those state of industry reports.

Ben (27:51)
Yeah, exactly. That sort of thing. And it's been super helpful to go through, just for our own learning, to dig into areas that are familiar to us in terms of categories of software, but that are changing quite quickly. And as you mentioned, it's a little bit meta, but I'm using AI tools to do a lot of this, right? So I make heavy use of deep research style tools. One of our own companies,

you.com, has ARI, their deep research agent. We also use Gemini Deep Research, and I use OpenAI's Deep Research as well. I use all these tools sort of side by side, see how they all do, and they've been a really good productivity boost for me, and we can talk more about that. I think it's a good example of how, in fact, I'll talk about it now, which is, when you think about what an agent is, to me, the definition of an agent is really that

it's a piece of autonomous software. It's able to perceive its environment, so it can get context from the environment. It can understand and pick apart what it's being asked to do. It can then do some amount of thinking, or what we call reasoning, about that task, or what appears to be reasoning. That's a whole other topic; exactly, with some limitations. Then it makes decisions about what needs to be done, maybe thinks about some subtasks, etc.

Brett House (29:09)
Yeah, with some limitations, right? Yeah, that's all another topic. We'll get to that.

Ben (29:20)
And it can do stuff, so it can use what we call tools, analogous to tools in the real world, to get stuff done, to achieve those goals, and do all that without constant human intervention. So, just to give an example through the years: if you go back to Google, when we just used to do Google searches, you're doing a search on Google, you put in a pretty short query,

you get back a response in something like less than a tenth of a second, and then it's over to you. Then you're reading through each link, and you typically don't go past the first page, but you read a few things and you click around, you read some more, and you might write it up. We saw that change with the arrival of ChatGPT, once it started being able to search the web. Now you could put a query into something like ChatGPT or you.com or Perplexity, and it will

It'll actually start to rewrite that query. It'll think about the query. It'll rewrite it. It'll then go and do a few different searches for you, and then it will come back with a more sophisticated answer, which is really a summary of what it thinks it's seen. It's going to take a few seconds.

Brett House (30:31)
Yeah. Yeah.

Rio (30:32)
Yeah, the search summary,

Brett House (30:35)
Yeah, and I was

pretty amazed by how quickly Google adjusted their approach to search with the advent of ChatGPT, because you were doing searches in Google that just weren't giving you the answers you were looking for. So I would go to ChatGPT, and I'm sure large numbers of advanced users were going to those types of platforms to get the answers they wanted, because Google was just

you know, tons of results that required way too much effort for me to search through; it was a lot of retail stuff, right? So they adapted pretty quickly, and now it's kind of changed the nature of search, in a sense.

Rio (31:07)
yeah,

Ben (31:08)
Yeah,

Rio (31:08)
They're only really doing that for some of the ones they can't monetize. They have the search summaries, right? But then for the other ones they can monetize, they still have the ads. It's interesting.

Ben (31:15)
I was just going to say, it's improved. You can see the Gemini summary is improving, I think, in my view. It's gotten better pretty quickly. But what I was going to say as well is, there's another step, Brett. You think about what I said about search, and then you had chatbots. With these research agents, now you're talking about, and this is what I'm using a lot in my own job, starting to automate what I used to have to do manually, which is,

Brett House (31:24)
Tata.

Ben (31:44)
I'll give it a more detailed prompt. In some cases, I'm giving it a short paragraph as a prompt. Then it's coming back; these tools, like ARI or ChatGPT's deep research, will come back and ask you some clarifying questions. Then it'll go off, and I've run queries where a typical one will be five or 10 minutes. Some are 20. I've even had a 40-minute prompt run. It'll go away, and it's doing

Rio (32:08)
wild.

Ben (32:12)
I think, qualitatively, a much better job than when you just use something like a chatbot directly. That gets into this whole area, this notion that we can improve the performance of these models by spending more time during inference. It's what they call test-time compute. So just running the model more, running more prompts against it, having it do more things, and getting a better result.
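
A minimal sketch of one common test-time-compute pattern Ben is gesturing at here, self-consistency sampling: spend more inference on the same question and keep the majority answer. The ask_model callable is a hypothetical stand-in for any chat-completion call, not a real API.

```python
from collections import Counter
from typing import Callable

def self_consistent_answer(ask_model: Callable[[str], str],
                           prompt: str, samples: int = 5) -> str:
    """Sample the model several times and keep the most common answer."""
    answers = [ask_model(prompt) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

# Toy usage with a stand-in "model" that is right most of the time.
if __name__ == "__main__":
    import random
    flaky_model = lambda p: random.choice(["42", "42", "42", "41"])
    print(self_consistent_answer(flaky_model, "What is 6 x 7?", samples=9))
```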

Brett House (32:37)
It's optimizing based on its interaction

with you.

Ben (32:41)
Well, actually, the way I would put it is that it's spending more time on the task, and it's also able to go off and do more things. These research agents are going to do more searches, getting results back, reading them, and then thinking about that. They kind of go in these loops of think, search, read; think, search, read; think, search, read. And they do that a few times and they come back. And at least

For me, qualitatively, that is going to result in much better output. So it's pretty helpful.
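
A minimal sketch of the think-search-read loop Ben describes; think, search, read, and write_up are hypothetical stand-ins for the model call, a web-search tool, page summarization, and the final report pass, not any real product's API.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Plan:
    done: bool                                   # model judges it has enough material
    queries: list[str] = field(default_factory=list)

def deep_research(question: str,
                  think: Callable[[str, list[str]], Plan],
                  search: Callable[[str], list[str]],
                  read: Callable[[str], str],
                  write_up: Callable[[str, list[str]], str],
                  max_rounds: int = 5) -> str:
    notes: list[str] = []
    for _ in range(max_rounds):
        plan = think(question, notes)            # think: decide what to search next
        if plan.done:
            break
        for query in plan.queries:               # search: run the proposed queries
            for url in search(query):
                notes.append(read(url))          # read: fetch, summarize, accumulate
    return write_up(question, notes)             # final pass: report from the notes
```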

Brett House (33:16)
Just to set the stage before we go into some of the specific topics, could you define for the audience, we've talked a little bit about agentic AI, you've talked about how you've conducted research, right? LLMs, to agentic AI, to AGI, which is artificial general intelligence, which is hypothetical, I think, still at this point, and then artificial superintelligence, which is like Blade Runner.

Right? You know, which is ridiculously hypothetical at this point, but hypothetical might be five years away. Can you define the core terms that you're going to be referencing and that you think we should all understand at this point in AI's evolution?

Ben (33:55)
Yeah, no, for sure. I don't want to sound dismissive of people's concerns, or optimism, depending on who you talk to, about AGI. But I really don't spend a lot of time, or any time, thinking or talking about AGI and ASI, this idea of superintelligence. The idea of AGI, of course, is that AI would have

a level of general intelligence similar to that of a human. But there's no agreement on what that means. There's no agreement as to whether getting to a similar outcome, but using completely different methods and having no understanding of the underlying concepts, which is basically what language models do, right? They don't really understand the words they're spitting out, at least not in the same way that we do. Could you ever consider that AGI? So there's all these definitional arguments that people are having. And so

I'm more interested in how you build systems, and how we help our companies, and how we talk to new companies. How are they building systems that increase the level of automation, because they've got these new technologies, and then deal with all the challenges that are still there? Because if you've used these technologies, what strikes me is that they're the most amazing tools one moment, and the most

frustrating and dumb the next. And a lot of what we're working on is, well, how do you engineer products using these things? And everyone's working on this, right? And some really big companies are struggling. Amazon's had a pretty hard time; it's taken them a long time to try and get to the next version of Alexa, integrating generative models and things, because it's hard, right?

Brett House (35:21)
Yeah, that's what I was going to think. Yeah.

Ben (35:47)
It's hard to make this stuff work consistently. By the way, you referenced some of the research that we've done. Our most recent stuff, which came out in the last couple of weeks around agentic adoption, pointed to the number one concern for both the R&D technical folks and the go-to-market folks in that survey of 600 people: reliability.

Brett House (36:14)
Yep, trustworthiness of the results that it's spitting out, right?

Ben (36:17)
Is it actually working? And so that's certainly front of mind. But the other interesting point, quickly, then I'll let Rio jump in: the one thing I do find interesting is that my intelligence has been exceeded by the tools I use in a number of discrete areas. And that started happening in 1998, if you're both old enough to remember when Google was released in 1998.

And you were probably like me; maybe you were using AltaVista at the time, but most likely you were still using Yahoo's directory a lot, right? And that was a very manual, hunt-and-peck sort of approach to finding what you wanted on the internet. It's absurd to think of it now, but we used to use the directory. And immediately when Google came out, it was like magic, at least for me. And it completely exceeded my capabilities of finding things on the internet.

So that's almost 30 years ago that my ability to research on the internet was exceeded by a search engine.

Brett House (37:21)
Yeah, it was exceeding

like microfiche and phone books

Ben (37:25)
Yeah,

Rio (37:25)
The model pre-AltaVista was more like a phone book put online. You just had a list, a directory, right? And then Google suddenly, you know, they were able to say: okay, there's a search engine; we can display the results this way in a browser. It was a better way to display the results, based on a new format. It just didn't take someone long to invent it. I think it's kind of like what's going on with SaaS here, right? I mean, this is a new medium, right? There's going to be a new way of interaction.

Ben (37:31)
Yeah, exactly.

Rio (37:50)
I mean, we're still assuming the SaaS way of using tools is going to carry through. I think it won't; it's going to look different. So I think it's a good analogy you just brought up.

Ben (37:52)
Yeah.

Yeah, let's come back to that in a minute. I just wanted to say, if you fast forward to these deep research agents, I would say we've got intelligence that exceeds my own ability, at least in a narrow task. And I heard someone talk about this the other day: if and when we ever achieve AGI or whatever, it could be in specific domains, or tasks within domains. Coding is a good example, where maybe it

could happen sooner, because we're making really good progress at the moment around using AI in coding tools. But this idea that we'll get sort of general, across-the-board human capability, I'm reasonably skeptical that that happens in the near future. My own personal view is that I don't think we have all the inventions and technologies that we need today to do that. I think it's proving out that

it's unlikely we can just scale what we have today in terms of language models, for example, and get to this. I mean, obviously we were scaling training, and we've gotten amazing results and continue to see good progress, right? But that sort of appears to be maybe topping out a little bit. And then we're moving to this test-time compute, and that's really improving, as I said, the quality of how these things work,

but it doesn't eliminate hallucinations. There's research that indicates that hallucinations are a feature, not a bug, of how these systems work. And so they're probably going to be with us. And therefore, if you have that inherent unreliability built into the system, it's hard to see how we can have full automation.

Brett House (39:47)
Yeah. Is that a data

problem or is that an algorithmic logic problem or something else?

Ben (39:53)
I wouldn't couch myself as an expert on that. I'm an enthusiastic hobbyist, but my thesis would be that it's architectural, so what you would probably put in the algorithmic category. The nature of how we build these transformer-based models is that there's this inherent tendency towards some amount of hallucination. As I said, there are some indications that it's part of creativity, part of how these things create.

And then you can get into a whole argument about whether it just needs to be better than humans, just a bit more reliable than giving the task to Rio. And it's like, well, if Rio makes a mistake, we're probably much more forgiving of Rio. Exactly, sometimes recreationally, I'm sure. But the point is that in IT systems,

Brett House (40:35)
Rio hallucinates all the time.

Hahaha

Rio (40:42)
I was going to say, it depends on the day of the week.

Ben (40:45)
And in IT systems, we have different expectations. And so again, this is coming out in the data. So people are saying that reliability is one of the top issues that people are dealing with here.
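
A quick worked version of the reliability math Ben returns to below (per-step error rates of a few percent compounding across many steps). The numbers here are purely illustrative:

```python
# If each step succeeds with probability (1 - p), a chain of n independent
# steps succeeds end-to-end with probability (1 - p) ** n.
for p in (0.01, 0.02, 0.03):      # 1-3% per-step error, as in the discussion
    for n in (10, 50, 200):       # dozens to hundreds of steps
        print(f"p={p:.0%}, n={n}: end-to-end success = {(1 - p) ** n:.1%}")
# At 2% error per step and 50 steps, only about 36% of runs finish error-free.
```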

Rio (40:57)
Ben, the number of hallucinations has gone down quite a bit, is my understanding. It was really high when we first started with this stuff, and I've seen studies where it's getting close to human error, right? And why couldn't we have a future where, I mean, in journalism, or if you're writing code, you wouldn't publish anything or push anything into production until it's been reviewed by other people, right? So is it conceivable that we're going to have teams of agents supporting each other,

looking for hallucinations, fact-checking? Could we conceivably, once we gain that trust, get to a scenario where, you know, there's a human in the loop almost all the time now; is there a scenario in the future where we don't need that? I mean, I can see that potentially happening. And how quickly will it happen? Maybe a couple of years. I don't think it's further out than that.

Ben (41:43)
Yeah, I'm probably a little bit more cautious on that. I think there's been a lot of progress and experimentation with things like what's called LLM-as-a-judge, so judging the output of one language model with another. And I think there's good work being done around that, but I don't think it's a silver bullet. In fact, some research I read showed that the larger the model, the more capable the model, the less good it was at detecting

errors; in fact, smaller models were better at detecting errors than larger models. There are all these sorts of intricacies that people have to work through, and there is an inherent issue with using something that hallucinates to detect hallucination. So I think what will happen is we will figure out what is an acceptable level of reliability, and I think we're still just finding our way there. Maybe it's not what we expect from a cloud system, like

five nines of reliability, right? So maybe it's less than that. But, you know, I don't think it's 50%, obviously; it should be better than a coin toss. And a lot of these models have gotten a lot better. But if you look at some of the benchmarks and the error rates and things: a 90% score on a benchmark is pretty unusual, a pretty high score, and we're still seeing pretty big error rates in some things. And if you have

error rates of one, two, three percent on each step in the process, and you've got dozens or hundreds or thousands of steps, the potential overall error rate of the system gets very large. So I think a lot of this stuff still has to be worked through, and it will vary by industry, really. In some areas it doesn't matter. Like when I'm doing web research, if it hallucinates, I pick up on that really quickly, because I can click through and I can see that the link doesn't exist. On the other hand, if

Brett House (43:36)
You

Ben (43:37)
You have it.

Brett House (43:37)
can't tell necessarily by the nature of the answer in all cases. It's more of the source of the answer.

Ben (43:43)
Exactly. Yeah,

I mean, yeah, exactly. But with other tasks, it's much more difficult. And then, if you've got a regulated environment, it's more difficult again. I do a bit of work with one of our firms, WorkFusion; they're in the financial crimes and compliance space. They provide agentic workflows for doing things like

adverse media monitoring, so checking articles online about people, KYC, AML screening, stuff like that. So it's a regulated task, if you like, within a regulated industry, and it's quite high consequence. You can't get it wrong. So the idea that you would leave that to chance and have a language model reason about it, that's not really what that industry is looking for, right? They're looking for

Brett House (44:22)
Can't get it wrong. Yeah.

Ben (44:34)
the use of AI to enable covering more edge cases and increase the level of automation, but there's a well-defined set of steps that need to be followed. They don't necessarily want the language model to spit out a different way of doing things each time. There needs to be a well-defined workflow. This actually gets into the definition of agency and agents. My view on that is that

it's a continuum rather than a black or white thing. So just because there are well-defined steps or it's using pre-existing workflows or pre-existing tools or accessing a database, and it's not just trying to do everything on the fly, it doesn't mean it's not an agent, from my perspective.

Brett House (45:19)
It doesn't mean it's not an agent. You're just taking advantage of the automation capabilities to facilitate speed to completion of a task, to facilitate cost reduction, which is obviously a concern for people in a lot of industries, that certain tasks, especially the repetitive tasks, are going to be replaced by agents. So what are the main drivers of

Rio (45:21)
That's interesting.

Ben (45:21)
Yeah.

Yeah.

Brett House (45:46)
why a financial services firm would use this. Is it that? Is it cost reduction, plus speed to answer, a combination of those things?

Ben (45:52)
Yeah, I

think a combination of those things. Efficiency is one of them. But also just being able to look across more information than was possible before. So that's where you get into this whole thing of: partly it's about displacing labor to some extent, but a lot of it's also about doing things that weren't economic to do before.

An example I heard on a podcast the other day was the founder of Flexport talking about how they were starting to use voice agents to call all their truck drivers. It wasn't economic before to ring each truck driver individually with a human to figure out what their capacity is, when you're down to the level of literally the person that's sitting at the wheel. And so that's an example of

doing something that a human could do, but that wasn't economic to do, so they can now do it on a much bigger scale. There's a variety of different ways that people are using this. That actually gets to your question, Rio, if we go back now to the future of SaaS: is SaaS dead? Obviously, this was kicked off at the end of last year a bit by the CEO of Microsoft. He never actually said that SaaS was dead, but I think that was

the headline that was put out. The way I've been thinking about this and talking about it with people is: I don't view it that way. My view of the future is that agentic capabilities will infuse themselves into most software, and in the enterprise space, probably almost everything will have some sort of agentic capabilities over the long run. Not today, but

more and more as we go into the future. So if that's the future, what are the different ways that you can get there? And I think that existing SaaS applications are one of the ways that you get to agentic. Upstarts, new agentic-native companies, are also there. And then there are whole new categories of software that are going to get created, right? It's not just the cannibalization of SaaS at all. It's more like this

Rio (47:39)
Yeah.

Ben (48:04)
evolution, revolution, this new stuff that's coming along.

Rio (48:08)
Yeah, Ben, one thing we've seen so far, for sure, is these legacy SaaS platforms embedding, let's say, agentic workflows at certain points. And I think that's been done really well. You look at Adobe, for example; I know Salesforce has been doing that. Some of the big legacy SaaS players, I think, have been doing a really good job. And I think adoption has been a little spotty, but it is a new kind of way of interacting, right? But the bigger question, though, is, okay, we know that's going to happen. It's going to be, let's say, audience builders, report generators,

specific tasks within SaaS will become, let's say, more agentic. But look at the trend of composability and combine that with agentic AI. If you think about composable, API-first systems, AI agents can orchestrate calls across these multiple systems, pull together data from different ones, trigger actions, and access tools directly, without the UIs.

Are we going towards maybe a potentially completely new architecture, right? Where they're calling on, let's say, Stripe or Slack or HubSpot and just hitting these APIs? I mean, it's very different. You're still using the capabilities these SaaS platforms deliver, but you're using them in a totally different way, where you're not actually accessing the SaaS platform; you're just accessing services within it. I'd love to hear your thoughts on that.

Ben (49:29)
Yeah, sure. First of all, as you alluded to, I don't think the SaaS applications, these systems, will go away. I think the nature of what they do changes slightly. We still need the databases. We still need the data storage, the data models. And I think in a lot of cases, we also need the well-thought-through, predefined workflows. I'm personally skeptical that you want an agent to figure out everything every time,

anew, right? For one thing, it's economically inefficient. If, for example, there are always the right six steps to take to solve a task, with a few exceptions, you probably want to try and follow those six steps. So this is the argument for: you might generate that code using an agent once and then test it, and it becomes a tool that another agent can use to execute. But you're not trying to recreate it every time. So

Brett House (50:26)
Yeah, and it's the consistency

of results and outputs that you

Ben (50:29)
Exactly. So you have these SaaS systems. Yeah, to your point, you're maybe not hunting and clicking around as much in them, so there's potentially less of that busy work. But you still need the system to be there. And then, yes, there's obviously a lot of work at the moment around things like the Model Context Protocol. MCP is a standard that was released

late-ish last year by Anthropic; OpenAI is now adopting it too, and there's quite a lot of activity around implementing this thing. Really what it does is it allows you to expose software, so it could be a database, could be an application, could be another agent or language model. It's getting towards having a standardized way of presenting these things so they can be used by agents. And so you can do what you were talking about, Rio. You can start to compose these things together.
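
To make "exposing software as a tool" concrete, here's a library-free sketch: a tool is a name, a machine-readable schema, and a handler an agent can invoke. Real MCP servers do this over a standardized protocol (JSON-RPC); the shape below is illustrative only, not the MCP wire format, and the adverse-media workflow is a stub borrowed from Ben's WorkFusion example.

```python
import json

# A predefined, tested workflow (per Ben: generate and test it once, then reuse it).
def adverse_media_check(person: str) -> dict:
    # A real implementation would call screening sources; stubbed for the sketch.
    return {"person": person, "flags": []}

# Expose it as a "tool": name + schema + handler, the shape agents consume.
TOOLS = {
    "adverse_media_check": {
        "description": "Screen a person for adverse media coverage.",
        "input_schema": {
            "type": "object",
            "properties": {"person": {"type": "string"}},
            "required": ["person"],
        },
        "handler": adverse_media_check,
    }
}

def handle_tool_call(request_json: str) -> str:
    """What a tool server does when an agent invokes one of its tools."""
    request = json.loads(request_json)
    tool = TOOLS[request["tool"]]
    return json.dumps(tool["handler"](**request["arguments"]))

# An agent-side call would then look like:
print(handle_tool_call(json.dumps(
    {"tool": "adverse_media_check", "arguments": {"person": "Jane Doe"}})))
```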

But there's still an open question as to who you're going to go to to get that, who's going to actually provide that agent that does this for you? And will it be an agent, a single agent, kind of like you were alluding to that can access all these many, many different things? Or will it actually be a multitude of agents all cooperating with each other? So Google has released this other standard called A2A, so agent to agent. And so that pattern is more

you might have a bunch of specialist agents and it's a way of having them talk to each other in a standardized way. So that's another way of looking at it. And the truth is probably in between. You might end up, you know, an A2A agent might use multiple tools and then talk to another agent that uses a client tool or whatever. So you probably compose these things together. But I think you'll find that there may be advantages to, you know, going to your CRM vendor to use a CRM agent that's been

Maybe they've got some sort of insight or a particular data set; they've been able to train that agent to be really reliable at that set of tasks. Then it's specialized for that task. And the pricing model, I think,

Brett House (52:33)
Yeah, it's specialized specifically for that type, for that SaaS, yeah.

Rio (52:38)
Do you think it challenges the model though?

Ben (52:45)
is challenged, right?

Brett House (52:47)
Yeah, it challenges the pricing model of the CRM because you're leveraging it. Is it a third-party CRM agent that would be doing tasks that the CRM can't achieve on its own? Is that it?

Ben (52:59)
Well, I think, Brett, it challenges the model, and this is Rio's point, more because it's less clear that it's per seat. It used to be easy. You used to run around: how many salespeople have you got? Well, we've got 100 salespeople, so we have to buy 100 seats.

Brett House (53:11)
Yeah. Yeah.

Yeah,

so it challenges the per-seat model, though not all software operates on a per-seat model; MediaRadar is an example of that. But yeah, that's an interesting point.

Rio (53:24)
Yeah, I

think that's a fundamental problem. It's going to happen to the per-seat model: just like the FTE model at agencies is going to be severely challenged and probably go away, I think the per-seat SaaS model gets severely challenged and potentially goes away. But we've been seeing that anyway, the move to a more consumption model; a lot of CDPs switched to the Snowflake-style model anyway. So I think we're seeing constant evolution of how these things price themselves. I think that's normal and okay.

Ben (53:33)
Yeah.

Brett House (53:43)
totally.

Totally. Yeah, because with or without the per-seat model, it's really about usage, how often you're using the SaaS platform, and then your cost per action within the platform. In aggregate, that's the most important number, whether it's per seat or not. So get people to use your software, whether it's through an agent or not, as much as possible, because you're going to drive down the relative cost for the company that's paying for the software.
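
To put rough numbers on Brett's point, here is a back-of-the-envelope sketch of effective cost per action under seat pricing when agents take over the clicking; every figure is invented.

```python
def cost_per_action_seats(seats: int, price_per_seat: float, actions: int) -> float:
    """Effective cost per action when you pay per seat regardless of usage."""
    return seats * price_per_seat / actions

# 100 seats at $50/month. If agents do the clicking and monthly human
# actions fall from 200,000 to 50,000, each remaining action looks 4x pricier.
print(cost_per_action_seats(100, 50.0, 200_000))  # 0.025
print(cost_per_action_seats(100, 50.0, 50_000))   # 0.1
# Consumption pricing (say $0.03 per action) tracks usage directly, which is
# one reason buyers push toward it as agents change who does the work.
```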

Ben (53:50)
That's it.

I think so, but one thing to bear in mind is that just because there are some things we'll do less of in these applications, there might be other things that we do more of. As we move to this model of having more agents, say running on top of a CRM, there's still work to be done. If we go back to the discussion before about reliability, in my view we're still a few years at least, maybe more, away from being fully

at the point where you don't need a human in the loop. And so working from that assumption, you're still going to want to log into these SaaS systems. You're going to want to check the work of the agents, make sure things are done properly, validate things. You're going to need to log in to authorize certain work and all this sort of stuff. So there's still a supervisory role, a human reasoning and interpretation role, because there are certain types of thinking that these models are not

good at, things that humans can do really easily that they struggle with.

Brett House (55:19)
And do you think that's

a threat to sort of the hockey stick adoption growth that I'm sure some people are predicting out there in terms of how quickly this industry and these software providers are going to grow? Right.

Ben (55:32)
I think that's a great question. I would have to say I don't know. I think it should probably temper expectations around what level of automation we get to. So if the revenue hockey stick you're talking about is dependent on the idea that we can automate 100% of a particular task or a role, then I would say, yeah,

Brett House (55:48)
Yeah.

That's a pipe dream.

Ben (56:00)
I would temper those expectations. But I think the opportunity here is still super exciting, because now we're able to do a lot more. If you look at someone like myself, I'm using these technologies as a force multiplier. It's changing my role and allowing me to do a lot more, but I'm still there having to move the chess pieces.

Brett House (56:26)
Yeah, and on my team, I mean on the B2B side, the GTM side, it's definitely a force multiplier. But then you go to B2C, big brands, where, we were talking about this earlier, Rio, they could force-multiply a creative team, for example, that might have to do thousands of variations of personalized ads or whatever it might be, any size, shape or form. A force multiplier of 10x, 100x, right? No longer do you have to have people doing these things. You could do the work of 40 with a team of four.

Isn't that the argument behind a lot of this and a lot of the economic value that they're expecting from this?

Rio (56:58)
Yeah, but that's interesting. I mean, the bit I've seen,

we were talking about this earlier, about adoption among marketers. In IT, everyone's adopting it. Every CEO of a tech company I talk to, all their engineers are using it. But I think marketing adoption is lower. So there's a question for you then: adoption, where is it? What's impeding it? Are humans the bottleneck? I kind of see them being, right? I think AI is better than most people know.

Brett House (57:17)
for coding.

Rio (57:29)
It's more effective than what most people are using it for, for sure. And whether or not we get to AGI or ASI at some point, it doesn't need to get there to be really useful. So what are the bottlenecks for adoption, and what kind of adoption? Where's it working, where's it not?

Ben (57:44)
I think that's a great point, Rio. I think one of the reasons that it's being adopted quickly with coding in particular is that, while not perfect, with coding you can run the code. You can compile the code and see if it works, right? That doesn't tell you whether it's efficient. It doesn't necessarily tell you whether there are security problems with it. But you can also verify the code using static analysis tools and things. There are ways of validating that what

came out of the model was valid, was good, right? That's actually more difficult in other areas. If the three of us were to say, let's go write a white paper on a cancer drug that we have no understanding of, then our ability to evaluate what's coming out of the language model and to know whether it's correct or not is almost zero, right?

So if you're a marketing person, and I'm not saying marketers have zero knowledge of the industries they're in, but if you're a marketing person, not a technical person, and you're in a software company and you're tasked with writing a blog piece, sure, you can use these technologies to produce that. But you may not have the skills to work out whether it's correct or not, and you've got to read through the whole thing. There isn't some compiler that you can run it through to tell you whether it's actually right.

Now, you can certainly use other language models to test things. There are tricks you can do, and you can improve that. But that could be one of the reasons that in some fields, like marketing, there's maybe not quite as much adoption as others. There's sort of a garbage-in, garbage-out problem, and it's hard to assess the output in some cases.
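
Ben's contrast between code and prose can be shown directly: generated code has cheap mechanical checks, while a generated white paper has no equivalent. A minimal sketch, with the snippets standing in for model output.

```python
import ast

def passes_syntax_gate(generated_code: str) -> bool:
    """Cheapest possible check: does the model's code even parse?"""
    try:
        ast.parse(generated_code)
        return True
    except SyntaxError:
        return False

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"  # missing colon

print(passes_syntax_gate(good))  # True: now run tests, static analysis, review
print(passes_syntax_gate(bad))   # False: feed the error back to the model
# As Ben notes, parsing says nothing about efficiency or security, but there
# is no analogous one-line gate for judging a blog post or a white paper.
```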

Rio (59:34)
Looking at agentic software, Ben, it's a term people throw around, right? So what truly agentic tools have you seen out there that are getting adoption or getting traction? Are these companies revenue positive? Can you comment on that?

Ben (59:50)
It's hard for me to say. I mean, I don't really have a lot of insight into what private companies are doing from a revenue perspective. But there's certainly been adoption in some areas that are talked about a lot, like web research. The whole idea of putting a language model in front of a search engine or a search index is obviously a use case the technology is really well suited to.

It's got a lot of traction, and it's relatively low risk, although obviously there was a whole bunch of copyright issues and things around this which have bubbled up. But that's one area where there's been quite a bit of traction. And I can probably speak best to what I see happening in our portfolio. Where I see things working really well, adding value and getting used,

is where the technology is being used in a relatively narrow and task-specific way.

So, helping augment something that was already happening in the software process, or where the language model can be used in a very specific way whose result can then be measured as correct or not. The more open-ended the task is, the harder it is to provide a feature based on these technologies that's reliable, right? So the most success we see is when you're being incremental and you're

building on it step by step, rather than coming in with magical thinking and having this sort of all-seeing, all-doing agent try to take on a big, complicated task.

Brett House (1:01:35)
Yeah, is that a little bit of what you guys have talked about with the crawl-walk-run approach to doing this? How do you get started? Whether you're a CIO running a tech organization or a CMO running a GTM or other organization, what would your recommendations be? Let's say for larger organizations that are sophisticated, that have huge manpower,

that are looking for efficiencies, potentially cost savings, automation. How do they get started? What's the framework to get from A to B to C?

Ben (1:02:13)
Well, I mean, the first thing is to pick up the tools and start using them, right? Don't just say, I've used ChatGPT or whatever once, and then not go back to it for a while. What's been really effective for me is to reach for these technologies first. Obviously I'm a knowledge worker, so it makes sense, but even at home now, if I have something wrong with the washing machine, I'm

on one of these language models trying to debug it. That shift in behavior, to start to use these technologies first and think that way so that they augment what you do, and then build on that and try things that you might not think will work, to see if they do; you'll often be surprised. By doing that, you're starting to prototype how these technologies can be used, and then you start to think about, do I want to take this further and build something

focused around this? Like, do I get my tech team to actually build something out that does this kind of thing for our business, or do I look for a vendor who's thought more about it and is maybe incorporating the technology into a wider package? And that's a big decision for a lot of people, right? It's the whole build-versus-buy thing. That's a challenge in itself. But it's: get started, be incremental, and just experiment and move forward.

Rio (1:03:38)
So, thinking about what to build, Ben, I think it's interesting. A lot of the success I've seen so far is, you take the SaaS platform and you drop in an agent that does a specific task, which makes a lot of sense to me, right? It's not disrupting what's there. It's using the foundational database and the models they have. And in a lot of cases you almost have this agentic layer, where the MCP hooks are kind of twisting the knobs and controls in the SaaS, right? And I get it, that's where we've seen success so far. But almost as a counterfactual, you could argue, okay, well,

the true AI companies are the future. They're not gonna try to take the old index and just make it better. They're gonna blow it away, right? They're gonna say, forget it, we're going to re-architect this thing from the ground up. We're gonna go completely AI native, right? And as I mentioned earlier, no one really knows what this is gonna look like. Is it gonna be an iterative thing where you're talking to the computer and it's asking you questions back? Is it gonna be...

I mean, no one really knows. I wonder, have you thought about this? I'm sure you have, right? I'd love to get your two cents on that.

Ben (1:04:39)
Yeah, I think a lot of stuff does change, but I think a lot of things stay the same. As we talked about earlier, we're still going to store the data. Are we going to completely abandon database management systems? Are we going to abandon network software, storage software? The answer is no, we're not. And there's so much underlying this agentic capability that needs to be there to support it, right?

So I think the things that are going to change, the more agent-native thinking, are more about using these technologies to deal in flexible ways with situations that can't easily be figured out ahead of time, right? But my personal view is that it's more iterative, and this is more like a new technology wave than a complete

discontinuity. I think when we look back at this in hindsight, it's probably going to look more like the shift to cloud than the invention of fire. There's a lot of hyperbole around it, but it's likely more to evolve software than to completely replace it. And I was listening to a podcast earlier today, and it was

Brett House (1:05:40)
Yeah, that's a good point.

Ben (1:06:05)
from the CTO of one of these agentic memory companies. She made the point that with everyone that's building agents, suddenly everything's written in JSON and then persisted out to a file. It's stuff that makes no sense at all.

Rio (1:06:17)
That's funny.

Brett House (1:06:26)
Well, does that make no sense though? I mean, isn't that kind of fulfilling the promise of interoperability, which martech and adtech and the data space have been talking about for a decade? Like data interoperability, and then came APIs. Doesn't it help connect the dots between all of these systems and platforms?

Ben (1:06:47)
I'm pretty confident that if you wrote out every piece of data to a JSON file and stuck it on the file system, you'd have a hard time finding anything over the long run. That's why there are indexes, query languages, all of that. Software is more like a lasagna than anything else, right? We have all this capability, and we add more as we go. And I think that's the likely outcome here. I think,

Brett House (1:06:57)
Yeah.

Ben (1:07:13)
Rio, to your point, some parts of the stack will be completely reimagined, and in my view there will be whole new categories of human work that we can do now, because we can use these thinking models, if you want to call them that, to do things that deterministic software could never do, or even classical AI couldn't do. But the idea that everything just disappears? I don't buy that.
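
Ben's point about flat JSON files versus a real database is easy to demonstrate: once you need to find things, you want indexes and a query language. The schema and records below are invented for illustration.

```python
import json
import sqlite3

memories = [
    {"agent": "crm-agent", "topic": "acme", "note": "renewal risk flagged"},
    {"agent": "crm-agent", "topic": "globex", "note": "upsell opportunity"},
]

# The "persist everything to a JSON file" approach: to find one record you
# must scan and parse every line, forever.
json_dump = "\n".join(json.dumps(m) for m in memories)
hits = [json.loads(line) for line in json_dump.splitlines()
        if json.loads(line)["topic"] == "acme"]

# The database approach: same data, indexed and queryable in one line.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE memory (agent TEXT, topic TEXT, note TEXT)")
db.execute("CREATE INDEX idx_topic ON memory (topic)")
db.executemany("INSERT INTO memory VALUES (?, ?, ?)",
               [(m["agent"], m["topic"], m["note"]) for m in memories])
print(db.execute("SELECT note FROM memory WHERE topic = ?",
                 ("acme",)).fetchone())
print(hits[0]["note"])  # same answer, but only after a full scan
```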

Brett House (1:07:34)
Yeah.

Rio (1:07:37)
It's interesting, though.

Brett House (1:07:41)
Did everything, did

Rio (1:07:42)
It's funny, when the AI thing first started two, three years ago, a client asked me, okay, does AI mean I don't need a DAM anymore? Can I just throw my images somewhere and have an AI rip through them, tag them, put metadata on them, and fetch them whenever I need them? And my answer at the time was,

no, you still need it. You still need the underlying taxonomy. And there's been a big movement recently: maybe we don't need taxonomies anymore, maybe AI can immediately go in, analyze things, tag them. It offers the promise of that, but I think we're far off from that. For now, AIs can tag things better and look at them quicker, but I still think you need the underlying superstructure, agreeing with what you said, Ben.

Ben (1:08:20)
Yeah, I think you're right there. To the point about the digital asset management system, the nature of it might change. So maybe you don't have a rigid taxonomy sitting in there, but you're still going to have security access controls. You're still going to have the tagging service that you mentioned, which might be provided by that system. You're certainly going to have storage decisions, especially if it's a distributed system, right? So it's not running in one place, it's a

cluster of different storage. There are all these problems that still need to be solved. And we've got this new technology, this new capability to reason about things, think about them, even plan a little bit about what to do. It doesn't suddenly replace everything. What it does is open up potentially a lot of new, interesting use cases. And I'm much more excited about the combination of these things together than I am concerned about these systems of record suddenly disappearing.

In some cases, maybe there will be more disruption. One of our portfolio companies, PolyAI, is in the voice AI market, and there's going to be, I would expect, quite a bit of disruption there. That goes back to your earlier question, Brett, or maybe it was you, Rio, asking about areas where there was adoption of these technologies. Certainly in call centers, there appears to be some

rapid adoption of these automation technologies.

Brett House (1:09:53)
No, that's for sure.

We've all experienced that before. More than once. For quite a few years, in fact.

Ben (1:09:59)
Yeah, but even with an agentic upstart like Poly that's coming in with innovative technology around voice AI, they still have to plug into other things. They still have to plug into the telephony system. There are still controls around it. It still has to access real-time customer information to get context on the call and everything. So that's why it's this whole lasagna metaphor, which is,

it's more additive than it is destructive. It's got time to play out. And I think, Rio, you're probably right: in some areas of the industry there will be lots of change. But overall, probably the biggest changes are more around the software model. To your point, the level of interoperability might improve, and the things that we can do will expand. I think that's where the opportunity is greatest.

Brett House (1:10:57)
Yeah.

And it seems to me like what's going to take the most time is the organizational change, the human change, which Rio mentioned, right? Because there has to be human oversight. You've talked about it in a couple of different flavors, in terms of establishing frameworks, of which there's been a relative lack, right? But how do you see this playing out from an organizational perspective? Because generally, big human organizations take a long time to change, whether it's the government or big Fortune 500, Fortune 1000 companies.

Ben (1:11:07)
Yeah.

Brett House (1:11:27)
This seems to be moving light years faster than how we as people can change. How do you see organizations establishing frameworks to organize and manage this stuff going forward, to reap its benefits, I guess?

Ben (1:11:45)
Yeah, I mean, that's a big question. I think it probably varies by organization, by industry, by job role. It won't be evenly distributed; it'll be different in all cases, probably. But one of the discussions I've been hearing recently is that perhaps one of the biggest shifts is we all start to become managers of agents, right? And so, yeah.

Brett House (1:12:10)
prompt engineers.

Ben (1:12:12)
Well, yeah. Certainly, by the end of the day, when I get up from my desk, I'll have half a dozen deep research tools of various descriptions open in my browser. And I'm constantly assigning them work, evaluating it, interpreting it, merging it into something else. That's a different workflow from what I was doing two, three years ago. So I think one of the changes we might see

in organizations is that they start to train people to think more like this, to have some of those management skills and apply them to using these tools, treating these tools a little bit like employees. And so that'll be one change.
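
That "manager of agents" workflow can be sketched as a simple fan-out: launch several long-running jobs, then review and merge the results. `run_research` is a hypothetical stand-in for a real deep-research call.

```python
import asyncio

async def run_research(question: str) -> str:
    """Stand-in for a deep-research agent; sleeps instead of working."""
    await asyncio.sleep(1)
    return f"findings for: {question}"

async def main() -> None:
    questions = [
        "state of per-seat vs consumption SaaS pricing",
        "MCP adoption among CRM vendors",
        "voice AI traction in call centers",
    ]
    # Fan out all jobs at once, like half a dozen research tabs in a browser.
    results = await asyncio.gather(*(run_research(q) for q in questions))
    # The human manager's job shifts to evaluating and merging the outputs.
    for question, result in zip(questions, results):
        print(f"{question!r} -> {result}")

asyncio.run(main())
```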

Brett House (1:12:57)
I may need to connect with you offline to see if you could help me run some of these simultaneous deep research actions, because it's super helpful. I do a lot of that state-of-the-industry stuff for our board and for our C-suite, and there's so much information out there; collecting, collating, and curating that stuff manually is a lot. And I'm already starting to move towards

Ben (1:13:11)
Yes.

Brett House (1:13:24)
leveraging AI for these purposes, but it seems like you've taken that to a whole other level.

Ben (1:13:29)
Well, yeah, mean, there's

other skills in there as well. I mean, critical thinking is as important as ever, right? We shouldn't be using these tools blindly. So I think there will be a lot of change, and there will probably be organizational issues, liability, accountability, that will cause us to need to think carefully about how we apply these technologies. It will slow things down, and probably rightly so.

Brett House (1:13:39)
yeah.

Ben (1:13:57)
That's why, when I step back and look at this, I look at the technology and how amazing it is, but also, in my view, at what some of the challenges are around reliability, and then I look at the organizational change. This feels like a big, but normal, technology shift to me. I'm much more in that camp than in the camp of everything's changing, it's AGI in two years, we're all going to be out of a job.

Brett House (1:14:26)
Yeah,

Ben (1:14:27)
Like, you know.

Brett House (1:14:27)
and how much of that stuff was real? Because I like this; it's practical, it's rational, it's incremental changes to our organizations, to the way that we do work, to the software that we leverage every day. But there's a lot of this fear mongering, some of which I think was politically driven, and we won't go into those details, where people jump right to Battlestar Galactica. They jump right to the human race being exterminated and taken over

Rio (1:14:52)
Cylons, right?

Brett House (1:14:55)
by super intelligence that's gonna force us off the planet and then we're gonna have to find our way back home. Why do we always jump to that part of the conversation?

Rio (1:15:01)
It's going to make slaves out of us,

Brett House (1:15:03)
Yeah, you're very practical and rational and I think this is the way that we should be thinking about how it actually affects our daily lives.

Ben (1:15:04)
Yeah, I mean...

Yeah, there are much smarter people than me that do worry about these things, though. In my view, I look at the technology and I don't see a pathway from what we have today to those fears, but that doesn't mean we shouldn't have people thinking about that and debating it. I'm probably with you, Brett, in that, in my view, it shouldn't overshadow either the application of the technology or near-term concerns.

Brett House (1:15:40)
The investment,

yeah, the investment, the innovation, yeah, all that.

Ben (1:15:41)
the investment. yeah.

And also, my personal view is we have to think about how we roll this stuff out. We shouldn't have big discontinuities in terms of people's jobs and things. We should bring people along for that journey. I would think more about that before I would worry about superintelligence. But that's not to say it shouldn't be a concern for some people. And

it's just not my field. It's not my area. It's not anything I claim to have expertise in.

Rio (1:16:15)
Well, Ben, looking at

how it's being used by, let's say, developers and even marketers or business managers today: with a very small team and these tools, you can do a hell of a lot. Say, one junior designer who can now write copy. Brett and I were talking about a designer I know. He's now vibe coding

tools, right, instead of having engineers. So smaller teams can do a lot more with these. It's really accelerating work and making them more productive, but is it potentially preventing them from hiring? It could be, right? But I'm a believer that this is going to create more jobs than it destroys, in both the short and the long term. I know people argue with me that in the short term that's not the case. What's your opinion on that? Do you think it will create more jobs than it eliminates?

Ben (1:17:06)
That's a good question, Rio. I think it probably depends on the industry. At the moment, there does seem to be quite a bit of automation being done in call centers, and it's not necessarily true that it's going to create a lot more jobs in call centers. But then if you look at, say, the creative side or game development, there's a really interesting conversation between

Tim Sweeney and Lex Fridman on Lex's podcast. Tim's the founder of Epic Games, and he was debating this with Lex. Tim's perspective, and I think I would agree with it, is that rather than replacing creatives in the game or film industry, given the nature of these technologies it's much more likely that AI would be a force multiplier working with a gaming or movie engine, rather than replacing that whole thing.

Brett House (1:17:56)
Yep.

Ben (1:17:59)
So it depends. I'm with you, Rio, in that I think overall it'll create a lot of opportunity and new categories. But it could be that whole other categories do go away. Again, it won't be evenly distributed. My personal view is that it's net positive overall. An example I give is if you look at midwifery

as an industry. A number of years ago, my ex-partner was a midwife, and we looked at it: there's no good software for managing midwifery practices. Well, there's no money in it either. So it's an example of an industry that would benefit if the cost of development came down. You'd potentially be able to, to your point, Rio, vibe code, or at least

produce solutions more efficiently for whole categories of economic activity that just weren't viable before. And so we could end up with more software, not less.

Brett House (1:19:12)
Yeah. And that's an example of a relatively low-tech industry where there's real benefit, which is interesting, right?

Ben (1:19:19)
Yeah,

and, to go back to the question about where you're seeing growth and success, vibe coding is definitely one of them. There are a number of companies, like Replit and Lovable, who are getting pretty interesting traction at the moment. And then,

Brett House (1:19:35)
And can you

define that for the audience just so we know what that means exactly?

Ben (1:19:40)
So, vibe coding, and I've also heard the term vibe marketing now. With vibe coding, the idea is that you're not a programmer, or at least you're not as accomplished a programmer as what you're trying to create would normally require, right? So you're using a language model or a reasoning model, through prompts, to create software, and then seeing if it runs. And when it fails, you go back to the thing and you cut and

Rio (1:19:47)
Yeah.

Ben (1:20:10)
paste the error in and ask it what went wrong. There are tools that make that cycle a little more efficient, but you can just do this in something like ChatGPT if you want to. So that's kind of what vibe coding is: you try something and see if it works.
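
The loop Ben just walked through can be written down almost literally. `ask_model` is a hypothetical stand-in for whatever chat model you use, and executing generated code like this should only ever be done in a sandbox.

```python
import traceback

def ask_model(prompt: str) -> str:
    """Hypothetical call to your language model of choice."""
    raise NotImplementedError("wire this to a real model")

def vibe_code(task: str, max_rounds: int = 5) -> str:
    """Generate code, run it, and paste any error back until it runs."""
    code = ask_model(f"Write Python code to: {task}")
    for _ in range(max_rounds):
        try:
            # Danger: this runs untrusted generated code; sandbox in practice.
            exec(compile(code, "<generated>", "exec"), {})
            return code  # it ran without raising: the basic vibe check passes
        except Exception:
            error = traceback.format_exc()
            code = ask_model(f"This failed with:\n{error}\nFix the code:\n{code}")
    raise RuntimeError("still failing after several rounds")
```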

Brett House (1:20:23)
It up-levels

Rio (1:20:24)
Yeah.

Brett House (1:20:25)
your technical abilities if you're not quite there yet.

Ben (1:20:28)
Yeah.

Yeah. And it's a fun way to learn, a fun way to build.

Rio (1:20:29)
If you think about it, like the friend I mentioned, the example I was giving.

What I was getting at with the guy who vibe codes: he's a designer, right? So he's now able to be a great designer. Some of the design work he was billing for before, whether it's production, resizing banners, changing things, that's gone, right? That's been taken by AI. But he's able to be a much more productive designer because he can spend more time selling his work, dealing with clients, even creating apps without having to hire or pay for engineers,

or at least not as much engineering. So it's really interesting. It's making him more productive and it's changing his role. If you thought he was just gonna have a team of people doing production work sitting in a studio, not really anymore. He might now have one person plus a bunch of LLMs doing the work for him. So the nature of the job has shifted, and people are just gonna have to adjust. We've always been adjusting, right? We've been adjusting forever. But I think it accelerates now.

Ben, this was really awesome. It's great to hear your opinion. I think I can speak for both Brett and myself when I say we really appreciate you coming on here. A lot of stuff to think about.

This is going to continue evolving, so maybe we can even do this again at some point, later in the year or early next year. I'd love to check in and see where things are. I know Georgian's going to continue to put out really good thought leadership and reports as you work with different clients and take the pulse of the industry.

Ben (1:21:51)
Thanks, Rio. Appreciate it. Thanks for having me on. It's been a great discussion and yeah, we'll talk again soon.

Brett House (1:21:51)
All right.