The Real Reasons Companies Aren’t Moving Faster on AI
Featuring: Brian Betkowski, Ed Haines, and Will Funderburg
AI technology is advancing extremely fast, but business adoption is lagging due to practical, human, and organizational barriers—not ignorance.
PODCAST TRANSCRIPT
Will Funderburg
The ability to ingest unstructured data and then feed it into an enterprise system to augment it is super real.
Brian Betkowski
Welcome back to Jabian’s Strategy That Works podcast. I’m Brian Betkowski. I’m here with my friend and colleague, Ed Haines, for another fun episode. We get to talk about a topic that’s pretty near and dear to our hearts, and a popular one these days: the AI topic. We’re excited to be able to do that again. We’ve got a really fun guest, and what we’re going to talk about today is that things are moving at warp speed in the AI world, technology-wise. No technology in the past has evolved as fast as AI is evolving. Yet there’s this interesting thing going on where, arguably, in many companies the adoption is not as fast as the technology is moving forward. And there’s plenty of good reasons for that, but it’s an interesting phenomenon. That’s basically what we’re going to talk about today.
Ed Haines
Exactly right. And I’ve got some stats that were given to me that we can use to frame up the conversation. The first one goes along with what you were just saying: McKinsey estimates that between $2.6 trillion and $4.4 trillion in productivity globally will be attained through generative AI in the next few years. So that’s a huge number, obviously. The vision’s there, the expectation is there. And then you also have reports that come out that basically say, look, 20 to 30% of companies are really seeing some sort of benefit, not the whole benefit, but some sort of benefit from using generative AI. And actually that number’s up a bit from a few months ago, when there was a sort of infamous report from MIT where they talked about 95% not seeing results. I think honestly that number’s probably a little bit bigger, because you’re not always getting the whole breadth of everyone who’s actually seeing all the different value pieces. But generally speaking, it doesn’t feel like we’re on the trajectory for that trillion-plus number that McKinsey pointed out.
And then when you look at who’s actually doing it, so that’s the value you get, but then who’s actually doing it: 75% of enterprises, this is from a Gartner report, are actually trying something. So there’s 25% out there who aren’t really doing much at all, and of the 75% who are, only 20 to 30% are seeing some sort of value from it. And really, what that says to us is the capability’s there and the capability’s understood, but the idea of how you match that up with actually starting to see the value is not quite there yet.
Brian Betkowski
Yeah. And we’re definitely not bringing this up today because we think it’s all irrational fears or irrational reasons for not adopting. There are some really good reasons. And honestly, even being an executive these days is really tough. There’s a lot of pressure from boards and private equity firms, peer pressure to do this, as well as the fact that things are changing fast and there’s stuff going on in the economy. Just crazy pressure all around. So they’re not irrational fears, but they are real.
Ed Haines
Yeah.
Brian Betkowski
And that’s what we’re going to talk about today. We’re lucky to have a really good guest, one we’ve actually had before: Will Funderburg, who runs our Charlotte office and also leads our operational excellence service area, as well as our optic offering. And what’s really cool is that Will has a lot of experience not only in how to change companies and do operational improvements, but also in doing real work in the AI space. So he’s got a lot of insight into these thoughts and fears and anxieties and reasons to change or not to change. And that’s what we’re going to talk to today. So welcome, Will.
Ed Haines
Excellent. Welcome, Will.
Will Funderburg
Very good. Thank you for having me. Yeah, I’m in probably five or six executive conversations a week on this topic. Certainly some are much further down the path, but I thought what would be kind of fun today is to talk about maybe the top 10 reasons folks are not activating or adopting as fast as they should. I think both of you hit the nail on the head that the technology’s advancing really quickly. It’s just: how does it diffuse into companies or the broader economy, and when are we actually going to see those reports of productivity numbers jumping much past the 20 to 30%?
Brian Betkowski
Yeah.
Will Funderburg
So the thing I’ve noticed is it’s not that leaders are unaware. It’s friction. And for fun, I’ve broken these top 10 reasons up by the levels in the org where we commonly hear them. So I’m excited to walk you through it, see if y’all hear the same things, and then, since, as you said, these reasons are rational, talk about some practical ways folks can think about mitigating them and adopting faster.
Brian Betkowski
Let’s do it.
Will Funderburg
All right. So I’m going to start at the executive level. The first one, and maybe the most common, is: I’m not sure the ROI will be real or provable. I talked to one executive this week, in fact, who said, “I’m getting 15 pings a day from different vendors. The speed of all the ideas is not the problem, but how confident can I be in actually proving out that value? I’m worried I’ll sponsor something and not be able to show that measurable impact.” So are you all hearing that one?
Brian Betkowski
Oh, this is real. Yeah. I think one of the main things in doing any transformational thing, not just AI, is that when you talk about ROI, you’re measuring against what baseline? What’s the baseline? There are core metrics; if you’re measuring against revenue and things people are always tracking, that’s usually an easier one. But if it’s more of a submetric, like the speed to do a sub-task, you need to be measuring that sub-task. Now, of course, you could do things qualitatively, but to do what you’re talking about quantitatively requires a good baseline.
Ed Haines
Yeah. And then there’s also, again, not just with AI but with all big transformational projects, especially technology projects, the “cash the check” element. We talk about savings, and savings get captured into ideas, and you can track any type of savings on an Excel spreadsheet. But when do they become real, and when do you actually see them on the bottom line? There’s always been that disconnect, especially in larger enterprises: “Well, I’m going to save this number of resources over here,” but suddenly they pop up somewhere else or they do certain things. So how do you make sure you tie it back to the KPIs and really ensure that you’re in some way cashing that check?
Brian Betkowski
Yeah. And the reason why we brought this up, Will, when we were introducing it, is to say that this is not irrational. It’s also that usually the thing’s not the thing. In a lot of cases, I don’t believe an executive only feels, “I don’t know how I’m going to measure it. I don’t know how I’m going to get the ROI.” They’re actually thinking forward: “Okay, what am I going to have to do to actually make that happen?” And in a lot of cases, that does require headcount reductions or other significant impacts to the organization. That can be scary, it causes anxiety, and that’s just a real human thing that goes into the calculus. We can’t really avoid that.
Ed Haines
Right. And sorry, I know we probably need to move on to the next one, but there’s also this idea: we think about savings as cost savings and resources because it’s an easier way to think about it, but there’s also, how do you use generative AI to improve sales and drive growth? And that’s not always immediate. Unlike savings, which you can find fairly quickly, with growth you get closer to it, but there’s a requirement to actually do that extra piece. It’s kind of like leads: I’ve got a thousand leads. Well, you still need to go close them. It’s the same with this. So make sure whatever you’re doing is actually reaching into that close part as well. That’s where the cashing of the check happens.
Brian Betkowski
Yeah. So, Will, what’s your thought on this one?
Will Funderburg
Yeah. Y’all hit most of it. The one thing I’d add is: don’t have the misconception that you need to take really big swings for your first one. Pick one you’re confident in, one that’s a well-worn path, maybe one that depends less on unproven growth or innovation. Pick a workflow and understand where you can embed AI automation. The assumption in the past was that you couldn’t apply that to judgment-heavy spaces, but start small, prove it out, and then shadow and grow from there. It doesn’t have to be a big transformational swing. It can be super focused.
Brian Betkowski
Yeah, I like that. Start small.
Will Funderburg
So the next one I hear very often, and it’s a good segue from our last one, is: well, if I start small, am I going to build fragmented systems or solutions? Am I going to incent this shadow IT culture, when we’ve spent all this time since the early 2000s trying to get to platform systems? So I’m just going to wait on the ERP vendor, who’s also thinking about this, to roll out features. I’m going to wait on my platform vendors.
Brian Betkowski
Yeah. It’s so funny you bring this one up. We were literally on a call yesterday with an HR leader, a very dynamic, awesome HR leader. And she built the first AI chatbot for her HR organization herself. Literally herself. And so-
Will Funderburg
Very nice.
Brian Betkowski
Part of me says, we can say we don’t want shadow IT, but when people say that, they normally mean official shadow IT, like officially buying IT things from another vendor when you’re not the IT person. There’s a new version of shadow IT now: it’s literally your whole entire company building things, because anybody can build something. There’s this weird thing going on where we enable our employees with these awesome tools, and then they get really creative about how to use them and create IT systems. It’s happening. So you almost want to get ahead of it and embrace it, so that you can control it and it doesn’t control you.
Ed Haines
Right. Yeah. Having run IT shops before, I feel for this, because the last thing you want is to suddenly shift all the things that need to be done in the business over to IT. That’s sometimes how it feels: oh, we’re going to have these agents so we don’t have to do as much in, pick the department, and now suddenly you’ve got more to do in the IT group. So it is real. But at the same time, is Excel shadow IT? I know that’s probably a little extreme, but it starts to get that way. If you can put the tools into people’s hands and trust them to do things in the right way, then yes. But if you’re buying in different tools and suddenly you’ve got all these different things to manage, I think that’s a different situation.
Brian Betkowski
What do you think, Will?
Will Funderburg
Very good. Yeah. The only thing I’d add to that one is the ability to ingest unstructured data and then feed it into an enterprise system to augment it is super real. We’ve seen a lot of those use cases prove value pretty quickly with clients, interfacing with a Workday, a Salesforce, enterprise systems that we’re not trying to disrupt. So that can be a cool one to show folks. Again, we don’t want shadow IT; that culture was hard-won over many years.
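[Editor's note: a minimal sketch of the ingest-and-feed pattern Will describes, in Python. The extraction rules, field names, and payload shape are all hypothetical, not a real Workday or Salesforce schema, and in practice an LLM or document-AI service would do the extraction step rather than regexes.]

```python
import json
import re

def extract_invoice_fields(raw_text: str) -> dict:
    """Pull structured fields out of unstructured text (e.g. an email body).

    Regexes stand in for the AI extraction step so the sketch runs
    without external services; a real pipeline would use an LLM or
    document-AI model here.
    """
    vendor = re.search(r"From:\s*(.+)", raw_text)
    amount = re.search(r"Total:\s*\$?([\d,]+\.\d{2})", raw_text)
    po = re.search(r"PO[#\s]*(\d+)", raw_text)
    return {
        "vendor": vendor.group(1).strip() if vendor else None,
        "amount": float(amount.group(1).replace(",", "")) if amount else None,
        "po_number": po.group(1) if po else None,
    }

def to_enterprise_payload(fields: dict) -> str:
    """Shape extracted fields into JSON for a (hypothetical) enterprise API.

    The point is that the system of record stays the system of record:
    the AI layer only feeds it cleaner, structured input.
    """
    return json.dumps({"record_type": "invoice", "data": fields})

email_body = """From: Acme Supply Co.
Thanks for your order. Total: $1,240.50 against PO# 88123."""

fields = extract_invoice_fields(email_body)
payload = to_enterprise_payload(fields)
```

The design choice worth noting is that nothing here replaces the enterprise system; the sketch only augments its intake, which is why this pattern avoids the shadow IT concern.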
Ed Haines
And just to add to that, Will, there is a spectrum, isn’t there? On one end, it’s: let’s lock it down, let’s not have any kind of extracurricular systems around. On the other end, it’s: anyone can do what they want. The answer is somewhere in between, where you have the ability to understand what people are doing and how they’re using it, and have some governance around that. I think that’s the reality of the situation. I don’t think either of those two extremes works.
Brian Betkowski
Yeah. Now, Will, there was another dimension to what you brought up at the beginning, which was waiting on the enterprise systems. I think we’re hearing that a lot, because over the last 20 years or more, people have invested a lot in enterprise systems, and there’s a really good reason for that. However, it takes longer for enterprise systems to turn the ship and create real, meaningful, embedded functionality. And with the world moving so fast, this is the dilemma every executive has: “I’m pulled one way by wanting to use enterprise systems and not have shadow IT. I’m pulled the other way by my board and my peers and my competitors telling me I need to move faster. And then I’m frozen.” I think that’s what we’re seeing, and it’s real. This is a real problem.
Ed Haines
Yeah. And a lot of these enterprise systems become platforms, and suddenly you actually create a little bit of shadow IT on a platform if you’re not careful, too. So there are layers of complexity to this.
Brian Betkowski
Yeah.
Will Funderburg
Yeah. Not to overcook this one, but it’s near to my heart. I haven’t been at a client, a multi-billion dollar manufacturer, distributor, et cetera, that has the latest SAP and doesn’t also have Excel processes going on around it right now, right? So it’s like, what if we get away from that and move toward feeding data, with a log of that data, into your system? That’s actually helping get things closer to something controlled.
Brian Betkowski
Right. That’s a good one. All right.
Will Funderburg
Let’s jump to the next one. This one popped up probably over a year ago, but I still hear it in conversations: to me, AI is a black box, and if it gets it wrong once, trust is gone, whether with my employees or with my customers. So it’s that reputation risk or personal accountability: I don’t understand all the ins and outs of it, and I’m worried that if it gets it wrong, all of a sudden there’s a story folks are telling themselves.
Ed Haines
I’ll take a stab at this one. I had this situation last week with a customer. They were basically using their knowledge of ChatGPT or Claude in the app, using it the way everybody else uses it, as a way to understand what it could do. And of course, that all depends on the prompt and what you say. In this case, the particular issue was that the model was older, something had happened in the last few months, and it wasn’t in the report it gave the person. So the answer I always give people is: think about AI systems beyond the ChatGPT website, and think about them as being agentic. If you really want to get a job done, you put it through three or four times to make sure it’s right, you use external resources, you make sure the data coming in is correct.
And so I do think that’s an educational piece in a sense, like to say, yes, AI is this over here, but it’s also this.
Brian Betkowski
Yeah. I have a lot of compassion for executives right now, especially IT executives, because this is really the first time that a lot of IT executives have to make a decision about something they don’t have prior personal experience with in their career. Until now, by the time you became an IT executive, you had either grown up as a developer or grown up in technology as part of a big implementation of something like an ERP system. So when you became an executive, you were making decisions about something that you, or someone on your team, had real, deep firsthand knowledge of. Then all of a sudden, poof, something changes, we get this new generative AI thing that’s all everyone wants to talk about, and there’s a lot less firsthand experience. And so it’s a double trust thing, Will.
Now an executive has to trust their advisors or the people who work for them, and they’re also dealing with the trust you mentioned: the trust of the user in the system. So there’s a lot at stake here. And I think people realize that, and again, what we’re seeing that drive is just freezing. You have all these things happening, they’re real, and so you freeze. So to your point, how do you handle that? You start smaller, or you start with an agentic system where you have controls, multiple layers, multiple iterations, things like that. There are ways around this. Again, we’re not bringing these up because they’re irrational. They are totally rational, but there are ways to handle them.
Will Funderburg
Yeah. And just last week I was in a conversation where the framing that landed was: we’re just talking about the last mile. Of course, keep the human in the loop, especially through a shadow and pilot phase. But we’re not talking about going from 0% to 100%. We’re talking about 0% to 80%, and then of course having that last mile covered. That tended to be a good learning in that conversation.
Ed Haines
It’s a good one.
Will Funderburg
All right. The last one within the executive level: AI will create workforce backlash. The classic change management pushback. If I free up 20% capacity and I don’t have a really solid story on exactly what we’re going to do, exactly how we’re going to do the reskilling and training, et cetera, does this mean I’m committing to layoffs? I haven’t made that decision yet, but is that what I’ll be talking about a few months from now? Do y’all hear that one?
Brian Betkowski
Oh yeah. And not just from executives; this is the internal thing that goes on when you give productivity tools to someone and want to get that productivity back. There’s a human thing going on here: wait a minute, am I putting my own career at risk? Am I putting my own family at risk? That’s a human thing. You’re going to weigh that. So, not irrational. But this is where, as an executive, we’ve seen some really good executives say, “Okay, let’s think through this.” We have growth that we can grow into with folks. There might be some people who aren’t the right people. There are some people who are going to retool their entire careers and create new opportunities for themselves and their companies.
And so yes, for sure, there could be a negative side of this, but there’s so many other upsides that need to be just balanced.
Ed Haines
Yeah. In other big technology transformations, when new technologies came on the scene, you often had visibility into what the new jobs were going to look like, and therefore some of the angst was taken off, because you could see: I just need to train myself up on using a computer, or using email, from years ago. Now that’s a little less visible, I think, and that’s probably where some of this comes from; it’s so quick and immediate. But it’s a real situation. And one of the things we always talk to clients about is this idea of starting with the things that people don’t want to do but that are of good value, and getting them to really understand how to use it there. Things they don’t want to do that are of good value.
And then the things they do want to do and are more fun in their work, the reason why they turn up to work and have high value, then use AI to make them do even more of that. And that’s actually where the productivity comes from, but you also get people to feel like they’re not being sort of pushed out by AI.
Brian Betkowski
Yeah. And it’s also a good reminder to connect people with the cause: what are we really doing? Are we just working help desk tickets? You could look at it that way, or are we really enabling our customers to be more successful in their own businesses? That’s just one example. But connecting to the cause, I think, gets people remotivated on what we’re actually doing and why we’re trying to be more effective. Because if a firm doesn’t get more effective, it’s going to be passed up by a competitor. I don’t mean our firm, Jabian, but any company: if you don’t get effective, you’re going to be passed up, and that’s going to affect everybody in the company. So it’s for your own personal benefit to stay on the right side of the curve, and it’s in the best interest of your company.
What about you?
Will Funderburg
And a little plug for maybe the next podcast topic: the reinvestment playbook, like you said. Your peers aren’t going to sit there and say, “You know what? The products we’ve got today, let’s just do them as cheap as possible from now till forever.” Right? No. So invest in quicker product releases and serving more customers. Take the white-glove level of service you provide now to your top-tier customers and drag that down through the other tiers of your customer set. Having a really good idea around that reinvestment playbook is something we can talk about next time, but we’re spending a lot of time thinking about that now.
All right. So moving into the middle manager or the functional leader set of reasons that I hear pretty often. This all sounds great, but we don’t have the data where it needs to be.
So I don’t know about investing in this right now. We just have a data strategy project coming in, and I don’t want y’all to get started and be held up by our data not being where it needs to be.
Brian Betkowski
Yeah. I mean, this is one of those things. In some ways it’s so true. In some ways it’s like, will it ever be?
Ed Haines
Yeah.
Brian Betkowski
I mean, will it ever be where you want it? That sounds so nice like, oh, it’ll just all be in order and then we’ll just be able to act on it and then the next day it won’t be because things change. And so this is one of those cases you mentioned earlier, Will, you just got to break it down to the right size chunk to go after it to say like, “What do I have that’s good enough and balance good enough with speed?”
Ed Haines
Yeah. And it depends on what you’re trying to do as well, right? Because the old way of thinking about data was that it needs to be in a relational database, it needs to be connected, it’s structured. And that’s the beauty of AI, right? You can use a lot of this unstructured data, and maybe it’s not quite right, but in some circumstances you can actually use AI to fix it while you’re solving the problem you’re trying to solve. So you don’t necessarily need to get it all tightly wrapped up. You can pull it in and use AI in the moment.
Brian Betkowski
Well, to your point, a lot of data gets sloppy or uncurated because there was no natural reason to curate it.
Ed Haines
Yeah.
Brian Betkowski
But if you create a reason to curate it, meaning an AI agent or an automation workflow is going to need that data, then when you start to use that data, you’ll see it’s not 100% wrong or 100% right. There are issues with the data, and as they come up, you fix the data and continue to curate it. You basically put in a process to curate the data, because now it’s needed where it wasn’t needed before, which is one of the reasons it probably wasn’t in good shape to start with.
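[Editor's note: a minimal sketch of the curate-on-use loop Brian describes. The record fields, region codes, and validation rules are illustrative, not from any particular system: the workflow flags bad records for curation as it consumes them, rather than waiting for the whole lake to be cleaned first.]

```python
from dataclasses import dataclass, field

@dataclass
class CurationLog:
    """Collects data issues surfaced as an automation actually uses records."""
    issues: list = field(default_factory=list)

    def flag(self, record_id, problem):
        self.issues.append((record_id, problem))

def validate_customer(record: dict, log: CurationLog) -> bool:
    """Check a record only when a workflow needs it; flag rather than halt.

    Hypothetical field names; the point is that each flagged issue
    becomes a concrete, prioritized curation task.
    """
    ok = True
    if not record.get("email"):
        log.flag(record["id"], "missing email")
        ok = False
    if record.get("region") not in {"NA", "EMEA", "APAC"}:
        log.flag(record["id"], f"unknown region: {record.get('region')}")
        ok = False
    return ok

records = [
    {"id": 1, "email": "a@example.com", "region": "NA"},
    {"id": 2, "email": "", "region": "NA"},
    {"id": 3, "email": "c@example.com", "region": "LATAM"},
]
log = CurationLog()
usable = [r for r in records if validate_customer(r, log)]
```

The workflow proceeds with the usable records while the log gives the data team a ranked fix list, which is exactly the "curate because it's now needed" dynamic.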
Ed Haines
And knowing how much wrong data you have is actually data in its own right, isn’t it?
Brian Betkowski
Yeah.
Ed Haines
I mean, you start there even.
Brian Betkowski
Yeah.
What about you, Will?
Will Funderburg
Two points on this one. And again, this isn’t AI-specific, but every time I get into a data conversation like this, it’s, “Well, there’s this lake of dirty data we have to fix.” And it’s like, well, why don’t we start, like you said, Brian, with the factories that are feeding the lake, and work on that flow of current transactions, fixing those as we go. The second point is that sometimes, between the lines, people see this as: we’re going to look at this data and it’s going to be individual evaluation. Instead, what we often say is, let’s start at the value chain, just the highest-level input, cycle time, output, and make it about how efficient the function can be. You don’t have to have perfect data; the workflow level, I think, is good enough to get started and say, “Hey, look, we made a real improvement here.”
Brian Betkowski
I think, like anything you do in life, you’ve got to get started to gain a little confidence in making the bigger decisions. Will, you and I were actually just in a meeting today, and the irony of it was that six months ago this was a company dipping their toe into this, and in today’s meeting they were saying things like, “We should think bigger. We can totally automate this whole entire thing,” not the whole company, but this whole entire function. It just gives you confidence to start thinking like that. So starting small, biting off a little piece of this thing, going and figuring out what data you’re going to make better, will give you the confidence that you can do it.
Ed Haines
Yeah. It makes me think of, I mean, this is a few years back now, and I’m going to say it was the CEO of P&G, one of the new ones came in and he said, “I want to put all the numbers up on a big screen in the boardroom.” So every time we have a big leadership meeting, we’ve got all the numbers, all the brands up there. And someone said, “We can’t do that because it’s not right. The data’s not right.” And he said, “Well, it will be if we put it up in the boardroom.” So I think it was P&G. I have to go back and check on that one, but it’s so true. It’s like, got to start somewhere, perhaps that’s where you start.
Brian Betkowski
That’s a good one.
Will Funderburg
Very good. All right. Well, this next one, and I think we can all relate to it, but we’re too busy. This is going to add workload. My team’s already stretched. I don’t have the bandwidth to sponsor something like this.
Brian Betkowski
Yeah. Again, one of those ones where there’s truth to it. So some of this is obviously just about prioritization: how important is this? When we work with clients to do this, it’s not like we’re in meetings with them 40 hours a week. In some cases we’re meeting a couple of hours a week, getting their real SME input, making material progress behind the scenes, and then getting back together to move it forward. So whether you use consulting or other groups inside your own organization, most likely an SME or the person executing the job does not need to go from a 40- to an 80-hour week. That’s usually not the case. People usually overestimate the amount of time they’re going to need to invest in this. So it’s just a matter of prioritizing.
Ed Haines
It’s the classic management problem; it’s been around for years and it’s part of the world we live in, right? There’s always going to be more stuff to do than you have time for, so prioritization is where it plays out. And I think it’s really about framing the project, because “I don’t have time to put AI in” is different from “I don’t have time to save $10 million off my operational expenses.” So you’ve really got to think about how you frame the project itself, not just the technology you’re using.
Brian Betkowski
Yeah. What do you think?
Will Funderburg
I think things are advancing so quickly that the pressure is high, right? People are already busy on something else, maybe even some pilots in the personal-productivity type of AI, like you said, a chat window or a Copilot. And so they think another thing is going to take a ton of time. One of the ways we try to address this is what we call automatic discovery: being able to ingest hundreds of SOPs or procedures or messaging traffic, relevant to the business problem of course. The point is, if we have a couple-hour session to cover the high-level vanilla happy path of a business function, then we can start to bring in this unstructured data, whether through process mining or just generating process flows inferred from the data. We can get really close, way better than we could a couple of years ago.
So it doesn’t have to be weeks of sessions with people, taking them out of their jobs. It’s just about updating people’s understanding of how we do discovery now.
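[Editor's note: a tiny sketch of the process-mining idea behind "automatic discovery": inferring a process flow from event data instead of interviewing people step by step. The event log, step names, and timestamps are invented for illustration; real discovery tools add frequencies, filtering, and conformance checking on top of this directly-follows counting.]

```python
from collections import Counter

def infer_flow(events):
    """Infer directly-follows transitions from an event log.

    The simplest form of process mining: for each case, sort its
    events by time and count which step follows which. The resulting
    counts sketch the as-is process flow without workshops.
    """
    by_case = {}
    for case_id, step, ts in events:
        by_case.setdefault(case_id, []).append((ts, step))
    transitions = Counter()
    for steps in by_case.values():
        ordered = [s for _, s in sorted(steps)]
        for a, b in zip(ordered, ordered[1:]):
            transitions[(a, b)] += 1
    return transitions

# Hypothetical event log: (case_id, step, timestamp)
log = [
    (1, "receive_order", 1), (1, "approve", 2), (1, "ship", 3),
    (2, "receive_order", 1), (2, "approve", 2), (2, "ship", 4),
    (3, "receive_order", 1), (3, "reject", 2),
]
flow = infer_flow(log)
```

Even this toy version surfaces the happy path (receive, approve, ship) and the exception branch (reject) directly from the data, which is why discovery no longer requires weeks of sessions.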
Brian Betkowski
You made me think of one other thing, which is that people’s past experiences with big IT projects are usually around big enterprise systems. That’s hundreds of people, months of work, tons of consultants, all this documentation. And I think people just need to experience the new world, which isn’t that. It’s a pressure on our own industry, too: we can now do a lot quicker what used to take a 12-week project with four consultants. I think that’s a good thing, and it trickles down to non-consulting companies as well. People are starting to realize, wow, you can do things a lot quicker, with a lot less pain, when you don’t have to do the painful parts yourself. That’s the beauty of generative AI.
Will Funderburg
Absolutely. All right. Last one in the middle manager functional leader section. We’ve already tried automation and frankly it disappointed. I don’t want another visible automation project that didn’t deliver what it was supposed to under my purview.
Brian Betkowski
Yeah. Again, the truth. It’s just interesting truth in these.
Will Funderburg
The word automation’s been around for a long time, right?
Ed Haines
Well, and that’s one of the questions, right? Well, when did you try it? I mean, people say, well, that was six or seven years ago, so maybe try it again. Well-
Brian Betkowski
Go ahead.
Ed Haines
I think with any kind of project, any kind of innovation, if you’ve got the idea and the concept and you know conceptually it could work, you’ve got to look at why it failed. And it often fails because maybe it wasn’t designed the right way, or maybe there wasn’t enough adoption. It’s not necessarily the concept of automation; normally there are a lot of things surrounding it. And to write something off like that, I think, is kind of the tail wagging the dog, isn’t it?
Brian Betkowski
Think of any technology: if a technology project goes awry or is tough, is it usually because the technology literally could not work? Never. There’s something else. Either there was an adoption issue, or maybe we didn’t really understand the requirements, didn’t really understand what we were trying to solve at the beginning, and we jumped into a technical project really quickly and threw technology at it. There’s a long list of reasons why that happens. I’m not saying AI solves that, but it’s worth a step back to say, “Hey, these AI projects aren’t technology projects.” They’re usually behavioral change projects: you’re trying to get someone, either your employee or your customer, to behave or interact with you in a different way. So there’s a much bigger human element. A lot of those past projects were looked at as just technical projects, and that’s the difference, I think.
Will Funderburg
Yeah. And RPA, let’s call it 2018, was literally recording yourself doing click by click. And if the screen popped up differently, or if it was full screen versus a partial screen, those things would break RPA automation back then. So I think people still have that connotation of automation, or some do. The other thing, and Brian and I have talked about this before, is calling something a proof of concept right now. It’s almost like, let’s skip that stage. For 99% of the use cases we’re talking about, the ones right down the center of the fairway, standard business transactions, there’s no need to go through a proof of concept. It’s been done a hundred times already, and many of your competitors have probably proven those things. It’s the prototype part. Let’s talk about your scenarios and get really tight on every scenario we’re going to address, to raise the probability of it being successful.
Brian Betkowski
You’re bringing up a huge point, which is that the original RPA concept was to literally take the exact steps that a human was doing down to the clicks in a lot of cases and replicate those with a machine. And there are still really good uses for that. But I think if people, to your point, if people had that as like, oh, that’s what you mean? Oh man, that was brutal. Well, yeah, it was brutal, but that’s not what it necessarily means in today’s world. And so I think that there’s a lot of opportunity to just rethink and maybe use different words. Sometimes it’s semantics to just use different words and frame it in a different way because those barriers aren’t what exists today.
Ed Haines
Yeah. Yeah. I mean, the thing that comes to mind for me, because we went to CES earlier in the year, we saw a lot of robots, humanoid robots-
Brian Betkowski
Lots of robots.
Ed Haines
Jumping up and down and dancing. We think about robots that can do the jobs we do. So imagine if you’d had robotics before washing machines came out. You would say, “Yeah, I need a robot that can stand at this sink and clean these clothes.” But actually the solution was a cylinder that turns around and looks nothing like a human. And I think that’s very-
Brian Betkowski
Doing it in the way a human would.
Ed Haines
Yeah, exactly. And it’s similar kind of thinking around some of these. RPA was the robot standing at the sink. In this case, it’s like, no, actually AI is the washing machine that turns around and does it in a different way. Yeah.
Will Funderburg
I like that analogy. Very good. All right. So moving into the individual contributor level: our processes are too unique or too complex. And I’m going to jump in with that one. I think what’s behind it is, is my specific expertise or experience going to be less central, does this change my value? So, my process is too complex.
Brian Betkowski
I mean, I think the game changer on this one is the LLM, because what a person’s usually saying by that is, technically speaking, this isn’t too complex, but there’s a lot of knowledge in my mind, and my mind is what makes this thing work. My mind and my teammates’ minds are what make this work. How could we ever get all of that out of our minds, and how is a computer going to do that? And I’m like, “Oh, that’s the sweet spot.” It’s the sweet spot of LLMs right now. So the gathering of knowledge, and then, once you have it, the using of it, isn’t the barrier anymore. Of course, like you were saying before, you have to have methods to gather that. Some of those are manual methods, some are automated. But once you gather the information, it’s usually not that complex. Unless we’re talking about truly brain surgery and some really, really fancy topics, the general business topics are never too complex.
Ed Haines
Yeah. And it’s not just one time, it’s additive, right? You can set up systems where what you’re talking about there are the instructions, at a base level, aren’t they? And you can continue to iterate and add. We tend to think about technology and systems as: I put in a system and then I wait for the release to come. But actually the releases are the iterations on those instructions, at the end of the day, that allow you to do more, and you can continue to add to that on an hourly, daily, weekly basis.
Brian Betkowski
Yeah. And this goes, again, back to a human thing, which is that I think the average human is going to be three to five, maybe 10x as productive as right now, and some 100x. I feel like I’m 10 times more productive just with all the tools. And so to that resistance it’s like, well, hey, maybe you could play a bigger part in this if you had AI and agents helping you as a person involved in this process. How do you upskill yourself and get ready for the next level? Because it’s coming. It’s coming.
Will Funderburg
Well, and in a lot of cases, folks in the individual contributor role may be closest to the customer. So for all the promises around personalization, they may have the best ideas for ways to grow personalization and white glove service. So for the AI automation part, focus on the common, happy-path vanilla stuff first and get that down solid, then keep those experts focused on what’s the next greatest thing we could do in this space.
Brian Betkowski
I think people also underestimate speed versus quality. There are a lot of cases where speed is more important to a customer, even if there was a little less quality or a little, I shouldn’t say quality, a little less human interaction or a little less white glove service, someone would take speed over that. And you get that a lot of times when you automate, for sure.
Will Funderburg
Well, perfect. That was my list. A couple of them we grouped together as we talked. So, a couple of patterns I wanted to point out. One is this risk asymmetry, right? I know I’m going to spend the cost now, and I know I’m going to potentially spook folks around what AI means for them, and that feels really immediate, while the upside feels theoretical. And so what I advise folks is, again, start small, start in the lower-risk areas, prove it out. And then, like you shared, Brian, as we’ve seen with several clients, the imaginations start to get dialed in on what this can really be for them. The second one is the mental model lag. Whether it’s RPA or the big ERP project, folks are using that experience as a way to frame this, so it’s about updating that frame with the right assumptions for today’s technology.
And then lastly, I think incentives are not necessarily aligned to how I get 20% more efficient. They’re certainly aligned to how I get 20% more customer interactions or growth or sales. So as you automate, make sure you have a really good reinvestment playbook for where you’re going to reinvest, so that everyone, the customers and the whole company, gets the benefits from the savings or effectiveness you gain.
Brian Betkowski
Yeah. I think it’s worth just acknowledging that all these barriers and anxieties we talked about are rooted in human behavior, and they’re natural. It’s natural to have a little bit of fear about things, a little bit of protectiveness about your own job. Those are natural things. And as leaders, it’s figuring out how to balance that with the fact of, “Hey, we’ve got to grow or die. We’ve got to change or die. There’s going to be a competitor who will figure this out, who is figuring this out. Never mind a competitor. There’s a kid graduating college right now who’s figuring this out, who can create a company overnight that can displace much larger companies. So this is happening.”
Ed Haines
And I think it’s also important to recognize that when we talk about large enterprises, this is actually really hard. All these issues at an individual level, they’re kind of like, “Well, you could do this, you could do that.” But when you’re talking about shifting the mindset of thousands of people, when you’re talking about changing thousands of processes, it’s not an easy undertaking. I think it starts with really putting a stake in the ground. This is classic leadership, right? How do you bring people along? How do you make them recognize that this is where we’re headed, and this is the good train to be on?
Brian Betkowski
Yeah. I mean, the mental exercise I would tell people to do is: “Okay, this gives you anxiety.” Make it a little smaller. How does the anxiety feel? Still there? Okay, make it a little smaller. Break it down until you feel, “Okay, maybe that would be possible,” and then try that. And when you go back the other way to scale it up, I can guarantee you’ll move in the other direction much faster once you start small.
Will Funderburg
Right. You look back two months after that small start and you’ll have made huge progress.
Ed Haines
Yeah.
Brian Betkowski
Yeah. Well, great. Will, this was awesome. I really enjoyed it. It’s always a good one. I appreciate you bringing that list to us, Will, and letting us opine a little bit. That’s always fun. So, yeah, until next time.