Navigating AI as an Executive
Featuring: Brian Betkowski and Ed Haines
Overview
AI is advancing at a pace that is overwhelming even the most experienced executives. In this episode, Brian Betkowski and Ed Haines explore a growing pattern: leaders aren’t resisting AI, they’re freezing. Faced with rapid change, fragmented tools, unclear ROI, and unfamiliar technology, many organizations are choosing inaction. The discussion unpacks why this happens, the risks of waiting, and how leaders can shift from hesitation to progress by focusing on experimentation, process transformation, and managed risk rather than perfect solutions.
PODCAST TRANSCRIPT
Ed Haines
The studies have been done to show that those product companies that tried something, failed, tried something, failed super quickly, were enabled to get even better products because they’re learning from those mistakes.
Brian Betkowski
Welcome to Jabian’s Strategy That Works podcast. I’m Brian Betkowski. I’m here with my friend and colleague, Ed Haines. And we’re here to have a lively conversation, just the two of us today. No guests today.
Brian Betkowski
Yeah, we’ve got different things on our mind. We’re going to talk about a little AI topic, a little thing we’ve noticed happening out there. It’d be interesting to see if anybody else is noticing the same thing. The fact is, to be an IT executive right now, really any executive, but for sure an IT executive, it’s a really interesting time and a really hard time. There are a few things we’ve observed and want to talk about today. One, there’s so much change so fast, and so much pressure to adopt some of the new things, mostly AI and the other newer technologies. At the same time, they’re changing so fast. And then you combine that with the fact that, because these things are so new, they were not part of most executives’ own careers as they came up through the ranks. They don’t have personal firsthand knowledge, so they’re relying on trust and relying a lot more on others. You combine all three of those things, and in most cases we’re seeing that it causes people to do nothing.
Ed Haines
Right. Yeah. It’s the classic idea that in nature you have the fight-or-flight reaction, but there’s also a middle one, which is freeze. You see it in certain animals, right? They get into danger and suddenly they just stand like a statue. And they’ve actually done studies showing humans do this as well in certain emergency situations. People fly from a scene or fight when they need to, but sometimes they don’t know what to do, so they freeze. And I think it’s interesting with the technology we have today: the speed of it, but also the fact that it’s so fractured. You add that into the mix, and the thinking becomes, okay, what do I do? I’m used to buying big enterprise software to solve my problems, so I’ll wait. I don’t want to add a bunch of applications. But in that case, by freezing, by waiting, are you actually doing yourself harm while you wait for all this stuff to eventually consolidate? We’ve seen that happen before. So this idea of how you make decisions that may seem counterintuitive to what you’re used to, because we’re in a different world in a lot of ways, it’s an interesting time.
Brian Betkowski
Yeah. And think about it from a competition perspective. Every day that you’re not doing this is one more day you’re behind. And the amount of progress that happens on a daily basis is literally staggering. So you combine that with the problem of, wait a minute, if I don’t do anything, if I wait, if I put my chips on the enterprise software that will surely bring this to me soon, what if someone either builds it themselves or finds another solution faster than the enterprise software does, and now you’re left with nothing?
Ed Haines
Yeah. And you’re behind on three things, really. Right? You’re behind on solving the problem you set out to solve, and that opens you up to competition. You’re behind on your workforce, or yourself, being better trained and more knowledgeable about how to use this type of technology. And the third one you’re behind on is the data you’re collecting. In a way, that’s probably the most important one, because AI inherently is a data collection tool as much as it is a solution tool. As you get into this world, eventually the people with the longest history of data will be the winners in some of this. So that’s another reason not to freeze.
Brian Betkowski
Yeah. And that learning aspect that you’re mentioning, the term is like, well, it’s all about reps and you got to actually do it. And a long time ago, randomly, I learned how to ride the unicycle. And I can tell you for sure that you could watch unicycle videos until you die, you could also read unicycle books, you would never be able to ride a unicycle unless you just get on it and fall a thousand times.
Ed Haines
Yeah.
Brian Betkowski
And that’s really the world we’re in right now. Because it’s changing so fast, because there’s no one right way to do anything, and because there’s a lot of style and art in these things right now, especially across all the different AI tools, it’s about iterations. It’s about getting your hands dirty. It’s about trying it. And in my opinion, it doesn’t mean everything needs to be something you build separately. A lot of the things we’ve been able to build have a component that sits outside of an enterprise software package but also a component that integrates with the enterprise software. And there’s, again, an art in how to balance that. But I think there is a way to make progress without having to go all in on something too early. Because that’s the fear: you go all in right now, you get locked in, you spend a bunch of money on some big, long IT projects, and when you wake up, the whole world has changed. Which is why folks are having trouble making decisions.
Ed Haines
Well, it’s very similar to how products get developed, too. Studies have been done showing that product companies that tried something, failed, tried something else, failed, super quickly, ended up with even better products because they were learning from those mistakes. But you’ve got to get that failure cycle under control. It’s got to be something that can easily pop up and easily be thrown away if it doesn’t work. You can’t do that with really big product-type things. If I said, “Well, I’m going to build the next big earth-moving machine,” that will cost me a lot of money, and it’s hard to iterate that way. If I’m just building an application that allows me to learn from it, then yes, you can iterate on that. And I think that’s where a lot of this knowledge is going to come from: the speed at which we learn, the LQ in AI.
Brian Betkowski
Yeah. I think there’s also, and this is a human thing, continuing to push yourself. A lot of what we’re seeing at different companies is the big first step, which was to adopt some of the different LLMs, perhaps put a wrapper around them, and get those out to your workforce, whether that’s in the form of out-of-the-box software like Copilot or leveraging one of the other LLMs. But then there’s the, “Okay, so what’s next?” I remember when we were getting into the generative AI stuff three years ago, we put together that big matrix, that picture of where we needed to go. What are the things we need to learn? What don’t we know yet, and what should be the next couple of things we learn? In probably less than a year, we had not only covered all those things, we had created another 20 boxes on that picture. So when we first created that picture of all the things we didn’t know we needed to know, it probably didn’t even include 50 percent of what we know now.
Ed Haines
Yeah.
Brian Betkowski
And so that concept of saying like, “Hey, put it on paper, go do it, move forward.” And then as you do that, it opens up new doors and new things is really important.
Ed Haines
Yeah, it’s the known unknowns or the unknown unknowns, right? Which is-
Brian Betkowski
Yeah.
Ed Haines
Yeah.
Brian Betkowski
I would say also, one of the things that I think that we’re seeing is just the ROI, the ROI topic. And okay, so I push myself to do these things or I push my company to do things and I adopt them and I spend money on them, but I just don’t see an ROI.
Ed Haines
Yeah.
Brian Betkowski
And I think, I mean, there are so many articles being written about that right now, but from a practical perspective, the focus is on the wrong thing. The focus seems to be on the thing, the tool, the solution, as opposed to saying, no, the focus is really on what we’re trying to change in our business, whether that’s a process enhancement or some other outcome. And then the tool could be, and quite often is, a little component of AI. But that mindset of saying, “Oh, the goal was to use AI,” or, “The goal was to get an AI system,” seems wrong, because once you hit that goal, you realize, “Well, that wasn’t the goal. That just cost me money.”
Ed Haines
Yeah.
Brian Betkowski
The goal is really to figure out how to use that to do something faster, cheaper, better.
Ed Haines
Yeah. Actually, I had one of these today with someone I was talking to about a problem; we were just brainstorming how they might solve it. And the question was, “Hey, I have this portal.” That’s an old term, isn’t it? “This sort of dashboard, and I’d like to use AI on it.” And you look at it and go, “Well, do you actually want to just add AI to this portal, or what are you actually doing with it? What are you actually trying to achieve in the first place?” And when you really have that conversation, you end up going all the way back to, “Oh, well, we want to put this piece of information in and we need to get this piece of information out.” And in the middle, someone’s making some decisions. You go, “Well, that’s AI.” Adding AI to a portal is very different.
Brian Betkowski
Yep.
Ed Haines
And I think that’s where people struggle a lot is really trying to sort of step away from the technology a bit and really kind of focus in on that process part.
Brian Betkowski
Yeah. And I think a lot of these barriers are deeply human, especially when there’s this much disruption and this much change. We’re not talking about, “Hey, we’ve got some tools that can make a single human 5 percent better.” We’re talking about, in some cases, tools and technologies that could truly displace a large number of people. And I’m not trying to get into the whole “AI’s taking everyone’s job” thing; I think it will enable us to elevate and do bigger and better things. But in the near term, for a single human thinking about being able to support their own family, there’s a conscious, or sometimes unconscious, feeling of, “Man, if I do go all in and let it totally into my job and into my group, I could be displaced in the near term, and I don’t have the skills to move that fast.” I think people are feeling that.
Ed Haines
Yeah, there is an element of that. But even before that, I think there’s the … We’ve worked on a couple of projects like this, where you say, “Here are all the things that you do. This particular one here, how long do you spend on it?” And they’ll say, “30% of my time.” And you go, “Well, brilliant. Let’s automate that, and you get 30% of your time back, right?” “Oh yeah, definitely, we can do that.” And then you automate it and put it in, or you start having conversations about the technology and how to use it. And especially if you just hand someone a tool to do that 30%, suddenly it’s not 30% anymore. It’s 10% or 5%. Part of that is a people-reaction thing, but part of it is really how you package up the process in a way that you can truly get the 30% out. And that comes back to what you were just talking about, right? Don’t give them the tool; give them access to the process, or make it part of the process, use the tool to monitor the process, and then use the time you get back to add more value to your role.
Brian Betkowski
Yeah. It’s almost like if you think the next step is give the person the tool to do it better, just before you make that decision, force yourself to say, “Well, could we go one thing past that? Could we just have the tool do the process?”
Ed Haines
Well, what would be a better value statement for that person in that role? And then you build the tool to help them do that additional valuable work.
Brian Betkowski
Yeah. Or take the un-valuable thing off their plate so they don’t even have to do it.
Ed Haines
Yeah, right.
Brian Betkowski
There’s also the risk factor, which I think is another big thing holding people back. Fundamentally, AI is a non-discrete thing, whereas all the technology we’ve been working on for the last 40, 50 years has been discrete. You code a discrete thing. It may take you a while to figure out all the permutations, but once you have it working, you can be fairly certain it’s not going to just wake up one day and do something different, because the code doesn’t change itself. Nowadays, along with the speed and the amazing things generative AI can do, you also get its creativity sometimes. So the question is, what’s the sweet spot? What’s the balance between leveraging the creativity and the speed versus the non-discrete nature of it?
Ed Haines
Yeah. And that’s a real struggle. Look, as a programmer, the idea of moving from discrete to non-discrete is a really hard jump. We were used to: I get a value back and that value is binary. It’s true or false, zero or one, and there are two things I can do with it; or maybe three, because it could be null.
Brian Betkowski
Yeah.
Ed Haines
But you basically, you know those three things and you cover those three things off. And I remember actually when we were doing this, it was about three years ago now, we were trying to get, I think it was like ChatGPT to come back and just say yes. Remember this?
Brian Betkowski
Yeah, just answer it yes or no.
Ed Haines
Just say yes.
Brian Betkowski
Or just say yes or no. It was yes or no.
Ed Haines
And it said, “Okay.” And then it comes back and says, “Yeah, yes, the answer is blah, blah, blah.” And we go, “No, no, no. Just say yes or no.”
Brian Betkowski
“Yes or no.”
Ed Haines
It came back and it said, “Yes, okay.” I was like, “No, I still can’t use that. Tell me just yes.” That’s all. Yes, period. And it came back and it said, “Yes, period.”
Brian Betkowski
Period, yeah.
Ed Haines
And again, as a programmer, what do you do with that? Of course, we were barking up the wrong tree in a way. You take that non-discreteness, flip it around, and use it as a powerful part. And there are obviously ways in AI then to get that discrete element out by using-
Brian Betkowski
Now there are. Yeah.
Ed Haines
And all that kind of stuff. But yeah, it’s quite a change in how you think about things and how you solve things.
Brian Betkowski
Yeah. Because in the end, it’s really just a risk. How much risk do you want to take?
Ed Haines
Yeah.
Brian Betkowski
It’s the same classic case as testing: even when you’re testing discrete software, you usually can’t test every permutation.
Ed Haines
Right.
Brian Betkowski
You have to ask yourself, “Based on the risk of what this thing is doing, how much do I want to test it?” I think there’s a similar concept now with AI, where you say, “Okay, based on whatever you’re asking it to do, how much risk is that?” Do you want other AI agents checking those AI agents’ work? Do you want to put a human in the loop? Do you want to do both? Do you want to also add some discrete barriers? Maybe you let it do a mathematical calculation, but if that calculation ends up outside of some rational bounds, and you set those bounds in discrete code, you catch it. There are all sorts of ways to do it. But I think the key is not saying, “Oh, because AI is non-discrete and it can make a mistake, I shouldn’t use AI.”
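Brian’s “discrete barriers” idea can be sketched directly: the AI does the calculation, but plain deterministic code decides whether the result is rational enough to use, escalating anything out of bounds to a person. A hypothetical illustration; the bounds, field names, and escalation path are all assumptions:

```python
def guard_ai_number(ai_value: float, lower: float, upper: float) -> dict:
    """Discrete barrier around a non-discrete system: accept the AI-produced
    number only if it falls inside bounds set in deterministic code."""
    if lower <= ai_value <= upper:
        return {"status": "accepted", "value": ai_value}
    # Out-of-bounds results go to a human in the loop instead of flowing downstream.
    return {"status": "needs_human_review", "value": ai_value}

# Say the AI estimated a monthly invoice total; for this business anything
# outside $0-$50,000 is suspicious, so a person reviews it.
print(guard_ai_number(12_300.0, 0, 50_000))
print(guard_ai_number(9_800_000.0, 0, 50_000))
```

The same pattern composes with the other checks mentioned: an agent reviewing an agent, or a human sign-off, can each sit behind a barrier like this.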
Ed Haines
Yeah.
Brian Betkowski
No. I mean, that’s like saying, “Because perhaps I can miss a decimal point in Excel, I shouldn’t use Excel. I should make sure I write it down on paper.” No, that has nothing to do with that. You have to just act accordingly for the risk that you-
Ed Haines
With the processes around it, right? Yeah. I mean, if something’s really important to you, you might have five or six people before AI doing those things and then passing it to the next person to check it, to check it, to check it. And you do the same with AI. It’s no different. And the most important thing I think with AI that people also miss a bit is, if AI were a person working on your team, you wouldn’t just put them into your team and then not manage them.
Brian Betkowski
That’s right.
Ed Haines
And so you got to make sure any agent is managed by someone and that accountability is there. And that’s the human in the loop still, but human in the loop in the sense of how do we eventually just roll this up to the person? Because ultimately it ain’t going to fly being able to say, “Well, AI made the decision for me.” Companies are going to be on the hook for that and people are going to be on the hook for it.
Brian Betkowski
Yeah, that’s true. The analogy I use is: imagine you hired a new grad, someone junior in their career, and you taught them with training or some instructions, like you do when you’re using AI, and then you let them at it. To your point, you wouldn’t just not manage them. Wouldn’t you check in on them? Wouldn’t you check their work, or ask their manager to check it, or have a peer check their … You would do something.
Ed Haines
Yeah.
Brian Betkowski
But you wouldn’t say, “Well, because they’re junior and they could make mistakes, I’ll never let a junior person do this.” No, you would just do it appropriately and smartly, and you would manage the risk with the process. And I think that’s the takeaway: we all need to challenge ourselves more.
Ed Haines
Right.
Brian Betkowski
It’s like can we just balance the risk with the process?
Ed Haines
Yeah. Here’s a phone, off you go, start talking to my customers.
Brian Betkowski
Yeah, of course.
Ed Haines
We wouldn’t do that.
Brian Betkowski
You’d monitor.
Ed Haines
Yeah.
Brian Betkowski
You’d sit in on every once in a while, you’d listen to a call, you’d check the transcripts.
Ed Haines
Thank you for joining us on our Strategy That Works podcast. We’ve put this together to basically share ideas and thought leadership around various topics. We like to anchor around technology and AI, but as you’ll see, we like to also talk to people from different industries, from different areas of expertise. And so if you’d like to reach out to us, please do. We’re always looking for commentary and engagement. You can find us on our website and our YouTube channel.