
E26 Successfully Adopting AI ft. Stephan Ledain

November 2024

69 minutes

Stream on:

Audible Spotify Apple YouTube

Episode Notes

With the explosion of AI and LLMs like ChatGPT, we must reckon with a world where technology might outpace us.

Yet at this very moment in time, we are in a weird place where AI has all the promise in the world yet paradoxically cannot meet our basic needs.

Companies are rushing to implement something they don't understand. Employees are reacting out of fear and resisting change.

In this episode we feature AI Adoption expert Stephan Ledain, Founder of AdaptAI, a consulting company dedicated to helping companies capture opportunities in technology. We explore how to really think about the new wave of AI and how to drive implementation.

0:00 - 2:03 Introduction To AI In Leadership
2:03 - 11:00 The Positive Aspects Of AI
11:00 - 14:49 AI's Role: Replacement Or Augmentation?
14:49 - 22:51 The Limitations And Misconceptions Of AI
22:51 - 35:50 The Human Element In AI Adoption
35:50 - 44:51 Organizational Roadblocks To AI Implementation
44:51 - 51:24 Addressing Individual Fears Around AI
51:24 - 57:51 Ethical Considerations In AI
57:51 - 01:02:12 Practical Steps For AI Adoption
01:02:12 - 01:09:00 How AdaptAI Supports Organizations

3Peak Coaching & Solutions is a leadership consultancy dedicated to Elevating Executive Mastery.

We specialize in transforming businesses through leadership and team development during transitions and times of crisis.

We focus on the 3 critical areas where chaos and conflict are most likely to appear:

  • Board, CEO, and C-Suite Misalignment
  • Transitions into Executive Leadership
  • Conflict Between Functional Departments

By addressing these flashpoints, we assist you in navigating change to build unity, create certainty, and establish clear direction.

Our approach empowers leaders to master complex challenges and transform their companies to thrive now and in the future.

Transcript

Mino Vlachos: Hello and welcome to the 3Peak Master Leadership Experience. My name is Mino Vlachos and I'm the co-founder of 3Peak Coaching and Solutions, where we support executives to master leadership. Our company provides coaching and team workshops to support leadership transitions. Today we're very excited to have a guest on our podcast, Stephan Ledain. Stephan is a practicing organizational scientist, a people consultant and the founder of AdaptAI. His research specializes in artificial intelligence, social identification and creativity in the workplace. He leverages the latest in behavioral science, data science and technology to create tools that drive workplace performance. As you might guess, our topic for today is all about AI, artificial intelligence, technology, and also the human parts of how those actually get implemented into the workplace and what behaviors can really drive those changes within organizations. So I want to welcome and thank Stephan for joining us here today to talk about this topic.


Stephan Ledain: Thank you so much, Mino. It's an absolute pleasure being here and chatting with you today.


Mino Vlachos: Thank you. So of course, AI is truly the hot topic. It seems like everywhere I go, every client, every CEO I'm coaching, AI is on everyone's lips. And yet when I look at stats, it doesn't seem like there's actually as much penetration as one might expect. So there's this kind of dichotomy between us all talking about it and obsessing over it, but then not all of us truly using it in the workplace. And so in today's conversation, we're gonna start to tease this apart and start to really understand what the place of AI is and how we really use it, and of course use your expertise to help make those connections for us. So just to begin, I wanna actually start with the more positive side, because sometimes I think I can come off a little negative, cynical, a bit of a hater. So I actually wanna talk about the good things that have come from AI. What are some of the benefits or strengths of utilizing AI, or just technology more broadly, within an organization?


Stephan Ledain: Yeah, well, first I want to show appreciation for how you intro'd the conversation, because there's a lot of hype cycle and hoopla going around this conversation that is often distorting and at best confusing to people. And I think it's really important to appreciate that the negative sides, or the myths that are often pervading this conversation, do need to be dispelled before we can properly go into the benefits and the advantages of actually using this technology. So I really appreciate how you framed the question and the topic. But going into what the good is, I think the obvious ones, the low-hanging fruit: we're seeing potential for operational efficiency gains. You know, AI has the opportunity of really streamlining processes, automating things that were once seen as mundane, rote tasks that really consumed workforce time and were frankly expensive for little return on value. I think we are really faced with an amazing shift in how we can sort of outsource that work to the technology. But we're also seeing more sophisticated use cases and applications, such as enhancing decision making. You know, we often lack real scientific, rigorous decision architecture within firms, and oftentimes really consequential directions are taken in arbitrary, at times lazy, ways of coming to what is right to do for an organization. And so I really love the opportunity that AI introduces in synthesizing large amounts of data across time, across contexts, across domains, and helping you think through your options and weigh them accordingly. Another really cool case is how we're able to interact with people a little bit more thoughtfully. We're seeing it applied in things like customer service experiences. We're seeing a lot of AI chatbots emerge. We all know the automated voice message on the end of the phone, which honestly can feel a little bit disengaging, disenchanting, inhuman and impersonal. But what it allows is for us to actually engage and interact with businesses that simply can't handle the influx of customer service inquiries. And so you're able to streamline the process of connecting to the business, and maybe a live person on the end of the phone. But zooming out from this, what I really love, philosophically speaking, that AI brings to the table as an advantage is the ability to bring the humanness and the humanity back to work. I think in this productivity age, building off from the Taylorist point of view of the workplace, we've really come to see work as a series of tasks. You know, there's that joke or meme going around that a lot of professional service work is just emailing back and forth. And a lot of that is doing really non- or low-value tasks like coordinating calendars or following project plans. And I really think we have this opportunity now to actually realize and create space for the things that energize us and make us feel whole at work. Things like thinking creatively, thinking about new ways of approaching things, allowing ourselves to introspect at work, allowing ourselves to try new things and to explore new ideas. But the most important of all, I think, is the opportunity to bridge connection, to create community, to create identity at work, which again becomes secondary or peripheral to the central work of just getting tasks done. And so I really like that there's a potential, an opening, for us to reconnect and reengage with each other, and to remember that work, or the workplace at the end of the day, is an institution of people being able to connect over some sort of contribution to society. And so I think we might see shades of that as we continue to evolve our application of the technology.


Mino Vlachos: And so from what I'm hearing, the question that comes up for me is that there are kind of two ways I could look at it potentially. One is that if AI and technology more broadly replace some of the task-based work, what's left is the relational kind of tasks, or whatever we call that, right? So it's the human side we always talk about, right? It automates PowerPoints, emails or what have you, and then what's left for me to do is build connections, relationships. That's one way of looking at it. But there's another way, which you're also bringing up, which is that it actually has a capacity to augment the human as well: in decision making, in creating processes or frameworks or structures. And I know from my work, when I go into organizations, if I'm doing team development with a C-suite or I'm doing board work, there are usually some guiding principles that we're walking them through and facilitating so that they have a more structured, methodical way of going through the way they make decisions. Do you see it as one or the other, or both? Is it here to just replace rote tasks, or is it here to also augment the relational side, the humanistic side? What's your perspective on what the true potential of some of this technology is?


Stephan Ledain: Absolutely. And I think it's worthwhile at this stage parsing between what we project as technologists, what we imagine the application of these tools will be, and what will actually happen. And we can see this time and time again with repeated relevant cases of new tech emergence, where the original intention for the tool mirrors and maybe rhymes with what actually happens, but oftentimes is not actually what we imagined. And so I do think what we hope is that human ingenuity and creativity, and parts of our explorative brain and generative drive, can emerge, and that's what the intention of this tool would be: to help us automate and support some of our work. But I could see interesting and non-obvious use cases emerge where we're looking to have it do things that we didn't even realize we needed support around. Things like making sure you're keeping in touch with friends and family at a cycle that makes sense, that's ideal. Or helping us understand what are the conduits or variables that really support our emotional and mental health. Things that we typically thought were stuff we had to onboard on our own and work silently on in the workplace. I do see this being a tool that emerges in interesting ways. An example of this is the use of messaging apps like WhatsApp or Slack at work. You know, Slack was meant to replace email originally, and is supposed to be a better way of organizing and synthesizing conversations and information and data. But it sort of emerged as this interesting in-between of having light, casual conversations with friends and family and work and colleagues, and getting work done. And of course there are limitations to that, and opportunities for it to not be good or healthy and to blur the line between work and life. But there's also this amazing rich new space where we're talking to each other a little bit more friendly, a little bit more lightly, where we're engaging in a cadence and a form of working that feels a little bit more human. And so I do see more of these use cases emerging, where it fills in this messy in-between, accentuating things that we just didn't actually predict originally.


Mino Vlachos: Are there any other use cases you've seen with clients? You founded a company called AdaptAI, so are there other positive case studies or ways you've seen AI integrated that have really supported organizations? Like, if I come to you and I'm like, why should I take a bet on this? It's going to take up time and resources. So, you know, tell me why I should do this within my organization.


Stephan Ledain: Yeah, absolutely. One of the ones that I feel the most proud of and love knowing that I was a part of was, we'll call it, a legacy professional services company with a very traditional, heavy-laden, high-touch way of doing things, that sort of realized it had this long history of data, decades of data, of information that could really be turned into useful insight for clients. And they were feeling stressed and a little bit worried about their inability to actually collate and collect that information and then turn it into insights. They simply didn't know where to begin. And what I really appreciated in this work was severalfold. The first was that we actually were able to sit down and talk them through critical use cases. And in that process, you sort of realize what your value is as a firm, what you've established the firm on, and what you try to purport to your employees and to your customers, which is: this is the work that we actually want to do, we want to help clients in this way, and so this is why this will emerge as a priority or a use case. And we have data that already exists that has the opportunity to save the time we're otherwise spending trying to make sense of it. And in that process, we were able to galvanize and have everyone align to the agenda and make it feel valuable for them as individuals and their working lives at every level. We're talking at the executive level and at the intern level, because they were themselves so occupied just trying to make sense of the data. And the technical work that we actually did, and I'll regret using overly technical terms here, was we created data lakes and centralized data systems that they could then pull from, have a dashboard of, and extract what I like calling meaningful and useful conversations for clients, instead of just these random data points that otherwise have no rooting in their work or working styles. And what I really appreciate about this opportunity is, aside from it saving them a ton of time and operational inefficiency, what we've allowed them to do is feel more emboldened and more confident in their legacy, their traditional understanding and history, their information and collective knowledge within the firm, and having that brought forward into client conversations, and proactively brought to these clients so they can understand just how much of a wealth of expertise and understanding this firm actually has. And so I think the benefits of this are wide, but it ultimately brought the organization together on the things that really matter. And I think we have a real golden opportunity in this age of AI to start doing that a little bit more intentionally.
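To picture the kind of centralized data system Stephan describes, here is a minimal sketch in Python. Everything in it, the table, columns and records, is hypothetical: a toy stand-in for decades of firm records collated into one queryable store that a dashboard could sit on top of.

```python
import sqlite3

# Hypothetical stand-in for a centralized data store: scattered
# engagement records collated into one queryable table.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE engagements (
        year INTEGER, client TEXT, service TEXT, hours REAL, outcome TEXT
    )
""")
con.executemany(
    "INSERT INTO engagements VALUES (?, ?, ?, ?, ?)",
    [
        (1998, "Acme Co", "audit", 320.0, "renewed"),
        (2005, "Acme Co", "advisory", 150.0, "renewed"),
        (2012, "Birch LLP", "advisory", 410.0, "lapsed"),
        (2021, "Acme Co", "advisory", 95.0, "renewed"),
    ],
)

# A dashboard-style query: turning raw records into a "useful
# conversation," e.g. which service lines a client has leaned on.
rows = con.execute("""
    SELECT client, service, COUNT(*) AS n, SUM(hours) AS total_hours
    FROM engagements
    GROUP BY client, service
    ORDER BY total_hours DESC
""").fetchall()
for client, service, n, total_hours in rows:
    print(f"{client}: {n} {service} engagement(s), {total_hours:.0f} hours")
```

A real engagement would involve far messier ingestion and governance; the point is only the shape: collate once, then query for insight.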


Mino Vlachos: So now I turn us to the other side of the coin, which is where I sometimes tend to go. So thank you for starting with some of the positives here. I am someone who likes technology, even loves technology, so I'm not someone that's against technology in any way. I love exploring, I love experimenting, but I like having a kind of crisp understanding of the limitations and the use cases, of why and how we should be using technology. And one of the things, and it's not the fault of the technology, obviously, but the way in which we've started to talk about and use AI, for me, it feels like there's so much of an emotional high and excitement, and then fear of missing out, FOMO, that what's lost in that is: what is actually the core use? If I look at a large language model like ChatGPT, right, and there are others out there, but that's just one of the more famous ones, one of the first to market, the more I use it, the more I live this dual life of being simultaneously completely blown away by this thing and utterly unimpressed at the exact same time. And I carry both of them with me every single day, where I'm so wowed that technology has made it this far, and yet anytime I try to use it in a practical setting, it's not good enough for what I'm doing. Right? And so I'm gonna ask you: what are some of the ways that we might be either overhyping or misusing or miscategorizing AI in the workplace?


Stephan Ledain: Yeah. It's such a brilliant and necessary facet of this conversation. And I first want to make sure that we're level setting on terms and understanding, because we've hopped, skipped and jumped so quickly into this era of AI that we've skipped past how it came to be and what is actually happening in these tools. And there's still a lot of opaqueness, and explainability is required and needs to happen. But at its baseline, what a large language model is, for example, is a very sophisticated word prediction mechanism. The algorithm that sits underneath just has a really brilliant methodology of putting together what one might respond given the previous context of the query, the prompt. This is a really important starting point, because this explains the instance of things like hallucinations or misquotes or biases. Because all it's doing is reflecting on its existing sample of data, which is large and extremely vast but limited in some ways, and on the way that it was trained and what it held as a positive answer or criterion. And it's saying: this is what one should say given this query. That's not the same as a sentient being that's encoding all the information, the body posture, the history of the person, the nature of the query, before giving an answer. And so I think it's really important to level set on that to begin with. Then we get into things like overpromising on results. As with any new technology, the technologists and their salespeople often get more excited than what they're able to deliver. And I think a lot of organizations have come to expect AI to solve problems, to be the end-all cure for all organizational ailments, and maybe societal ailments if we're that ambitious. But these are really unrealistic expectations. I think that we should really posture it and understand it as a tool that complements human efforts. It's nowhere near, actually, it's not quite yet the thing that can solve for human things preemptively. And I sort of feel as though that's what we imagine it to be. Aside from that, there are things like bias, which is a really important conversation and facet of all this as well. These models are trained on biased data. At every stage there's the potential for lack of fairness to be introduced: the spaces we collect data in, the method through which we collect data, the way we label and assign weights and values to the data so that it makes predictions. There are so many points of potential bias. And the reality is we're collecting, oftentimes, human data, human-generated information that's being synthesized in these models. And we know human beings have a tendency to practice and employ bias. We can't expect that these models are so sophisticated that they can see that happening. And so they often replicate or reinforce the biases that already exist in humanity. The more practical thing that I think is plaguing a lot of organizational adoption is simply misalignment with business strategy.
One of the things we make sure we spend ample time on, in a constructive and deliberate and systematic way, is outlining clear use cases, as I was sharing in the case study. Because a lot of people, especially executives, are really enthralled by this new big shiny thing that has entered organizational life and is expected to revolutionize everything and save and make a whole bunch of money. But they don't actually know, in practical terms, how this is to align with their business strategy. And so they start to introduce this thing in a misaligned way and hope that it cures whatever ailments they're feeling, without actually understanding what it needs to be aligned to. And I think we're best served, and we set ourselves up for success, when we have a clearly defined purpose for the AI tools that we want, understanding their limitations, rather than throwing ourselves into the excitement of innovation for innovation's sake. I'm actually quite conservative in my innovation approach and advice for clients, which feels oxymoronic, but it isn't. I think you do need to put in sufficient thoughtful, proactive guardrails in how you're implementing these tools for them to actually be effective. Now, with all that, although there are vast limitations, and there's also the looming threat of job displacement, we also need to appreciate that we are so early on in this, and people need to understand the conversation itself is still forming. Now is the right time for individuals and organizations to get involved in the conversation, to voice your grievances, to express where you're seeing bad behavior from these models, for example, and to help these machine learning engineers, AI engineers and data scientists understand where we can make improvements and tweaks now, while the tools are still amenable to our changes.
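To make the "sophisticated word prediction mechanism" framing concrete, here is a minimal, purely illustrative sketch in Python: a toy bigram model that picks each next word from counts of what followed the previous word in its training sample. Real large language models use neural networks over huge token vocabularies, not word counts, but the loop, predict, append, repeat, has the same shape, and the sketch shows why output is bounded by the sample the model was trained on.

```python
import random
from collections import Counter, defaultdict

# Toy training sample. A real model sees vastly more text, but is still
# bounded by whatever its sample contains -- one root of bias and hallucination.
corpus = "the model predicts the next word given the previous word".split()

# Count what follows each word: a crude stand-in for learned weights.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(prompt_word, length=6):
    """Repeatedly sample a likely next word given the previous one."""
    out = [prompt_word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break  # nothing in the sample ever followed this word
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts, k=1)[0])
    return out

print(" ".join(generate("the")))
```

The model never "knows" anything; it only continues text in ways its sample makes likely, which is exactly the level-setting Stephan is asking for.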


Mino Vlachos: I'm going to throw out a series of observations I've had, using particularly ChatGPT, and I'm going to just freewheel, and then I'd love anything that comes up for you. A few things I've observed. One, and this is just my position, I'm not saying in any way I'm correct, but this is just my opinion: when I've studied, and there are people who've researched large technological changes in society, one thing we find is that typically when something new comes out, and I think you've touched a bit on this at moments, we don't really know how to use it. And so we kind of try to invent the use case versus having an actual use case, and there tends to be a lost decade of productivity. This is something that Nate Silver, the statistician, wrote about in his book The Signal and the Noise, I think: that even with the printing press, when it came out, whatever it was, the 1500s or 1600s, I forget exactly, there's like a decade where it's, we don't really know what to do with this. Or when personal computing started to come out around the 70s and 80s: we don't really know how to use this. And so in the pursuit of adoption you kind of lose a lot of efficiency and productivity so you can go experiment and fail and find the limitations. The irony, though, is you actually do need that period. Because if you don't have that, then you don't know how eventually to change it, to adapt it, and to eventually put it into your company in a way that makes sense. So paradoxically, I think we almost need to allow for that creativity and that experimentation and that failure and that, you know, overstretching it, and the "no, it's not really for that, it is for this." Because from that, and actually Slack might be a good analogy, we find out what it really could be used for, which we might not have intended. Because I know when I do little experiments here and there, for instance: I coach executives, CEOs, and one of the things I've done is I've asked ChatGPT to coach me. And day one of coaching training, what you're asked to learn is how to ask quote-unquote powerful questions, which are open-ended questions. You always ask what or how questions, and if you think about it, those open up your choices, they open up your thinking. They're very open-ended questions, right? For brainstorming, divergent thinking. But every time I ask ChatGPT to coach me, it asks me yes or no questions. So it's very narrowing, right? It's a binary: you're going in either one direction or the other direction. And sometimes it's a false dichotomy, a false binary it presents to you. But okay, you could say we can reprogram that, right? It's not that hard to tell it to go from asking these very binary questions to the more open-ended questions. But one thing that I am interested to see: one of the greatest gifts we have in coaching, or if you think about going even further into therapeutic services, is our intuition, where there are things that just come that are so non-linear and so out of the box and so out there. But those are the things that I think really support our clients to make breakthroughs. And so I'm seeing this as, yeah, there's a tool, it can be helpful, but also: where does it really fit? What is its place?
We don't really quite know yet. And so since you're already working with actual clients and you're really supporting adaptation and adoption of AI, where do you see it fitting? And I'm not saying that this is the answer for all answers, but to the top of your knowledge right now, from your current position, where do you see this thing sitting or supporting? What is it good at?
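Mino's aside that you could "reprogram" the questioning style is, in practice, a system-prompt exercise. Here is a minimal sketch of the idea using the OpenAI Python SDK; the model name and the instruction wording are illustrative assumptions, not a tested coaching prompt.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative instruction nudging the model away from yes/no binaries
# and toward open-ended "powerful questions," per coaching practice.
COACH_STYLE = (
    "You are a leadership coach. Ask exactly one question per reply. "
    "Questions must be open-ended and begin with 'what' or 'how'. "
    "Never ask yes/no or either/or questions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system", "content": COACH_STYLE},
        {"role": "user", "content": "I'm torn about restructuring my team."},
    ],
)
print(response.choices[0].message.content)
```

Steering the form of the questions is easy; Mino's deeper point stands, since no prompt supplies the non-linear intuition a human coach brings.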


Stephan Ledain: Yeah, I think that's a brilliant question, and the question that we try to tease out when we're working with and speaking to executives or sponsors of our work. It's like: what is the actual best-case scenario? What does good look like? Where are we actually going to extract ROI if we were to use these tools? And in that, we never really land at a complete, clear, succinct answer, but we begin the journey of being more thoughtful and efficient and intentional with our approach. And what I will say, from what I'm experiencing, is most people really just don't want to do things that feel soul-sucking. That's the only way I can say it. The number one use case is: I don't want to feel like I am a machine or a cog in the wheel at work, how can you help me with that? And we'll decorate that in so many ways. We'll use words like operational effectiveness, or words like sales enablement. But it's really: these are the things that I don't feel like my humanity is expansive in, how can this thing take some of that? And I think that's a beautiful opportunity for us, again, to be introspective and reflect at work about what are the values we're trying to put forward, and to understand how we use this tool to help us scale, embolden, enforce and share those values. That, to me, is where we'll feel the best about the investment we made in AI. That's where we'll feel aligned. I'm really interested, in this work and in my personal life, in really addressing and tackling AI alignment, because I think it is very possible, and extremely dangerous, for these tools to be built and flushed with capabilities and features that are divorced from our deeper human needs and desires. And if we are not actually enforcing this agenda of inclusion, of creativity, of self-expression and reflection, then we're simply building more opportunities for problems and self-oppression and collective oppression within work and more broadly. And so that's a long and fanciful way of answering your question, which is: let's really get underneath what are the things that feel non-human to us, and let's have it tackle that, because we have created it as a tool to support our lives. And let's see how we can enforce and contribute to human flourishing through the outsourcing of that work.


Mino Vlachos: So I'm going to go a little rogue here because I got inspired. So this is the moment of intuition. One thing I've noticed, because I do a lot of writing: anytime I've asked ChatGPT to do any form of writing, and I imagine it's because it has training data, as you said, and it's essentially the average of the training data, so what is the most likely word to come out, everything I read has this very generic voice. And writing has a voice to it, right? Like, you can tell a Hemingway versus, like, a Shakespeare. There's a voice that we use when we write. And the ChatGPT voice, it's fine, but it's, like, very boring to me. And when I write in my own style, it has edge, it has personality. And you're not just a random guest on our podcast, you're also a friend. So I know that you also have a lot of creative endeavors. You're a brilliant behavioral scientist and a founder, but you're also a brilliant creative, and you have an amazing creative magazine that's a big outlet for that creativity, of which I'm a huge fan. And so there's also this, and this is going to be a crazy philosophical question, but: what is true creativity, in contrast with a ChatGPT? I'm not saying define all of creativity, but in contrast with this output that I get from a prompt, what is this thing that's creative and alive that you just can't get through a large language model?


Stephan Ledain: Yeah. You're asking, what is the special sauce? What makes humans special? And it's maybe the most important question of our era, because I think if we don't answer it in clear and deeply encoded ways, we risk a profound irrelevance. And it may not be as obvious as robots taking over; it may be a little bit more insidious, something like not feeling like you have any purpose on earth. That's just as dangerous, in my opinion. But to take a stab at your question, I'm gonna highlight and admit that this is going to be a controversial take, but I'll stand on it. I think the beauty of this whole era, and why I believe in this counterintuitive balance between AI and creativity, is there's this thing that happens in daily life experiences that we don't have a word for. But if we were looking at it algorithmically, it would be error. It'd be the things that emerge, the properties that are emergent, outside of the parameters that we think life fits. And I think that manifests in beautiful writing, it manifests in beautiful music, it manifests in creative moments and good conversations. I can predict what you're going to say and do to a certain degree, but it's the 20%, maybe, that I don't know is coming that feels like life. And where I think AI will be limited, at least for now, at least for the foreseeable future, is it simply cannot bake error into its prediction models, because that's a bit of an oxymoron ideologically: how do you predict error? If you could predict it, it's not error. And I think this is what we're missing, actually, as a research area and domain: let's not throw away the bit that these tools can't predict. Let's actually look at that, understand that, sit with that, and wonder what it is. I'll give you an example of where this might appear in a creative piece of work. If you were to ask people who are the best, most technical singers in the world, you'd have an answer for that. I know someone like you, who has a refined palate, has an answer for that. Now, if you were to ask the same people who are your favorite singers, I can say with a high degree of confidence that those won't be the same list. Okay, so let's double-click into that. Why might that be? And I think something they might touch on or try to articulate is: my favorite singers touch notes that feel resonant in me. It's not the best notes. It's not the perfect notes. It's not the notes that feel the most refined or technically astute. It's the notes that touch something in me. That's error. That's something that's incredibly hard to put into a framework with parameters. And I think we are at the point now where our models are so sophisticated, they have millions of parameters, but there's still, again, a percentage of variance that isn't accounted for that I think really points to the richness of life.
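Stephan's "percentage of variance that isn't accounted for" has a standard statistical reading: one minus R-squared. A tiny sketch with made-up numbers, just to show the quantity he is pointing at, the residual that no fitted model explains:

```python
import numpy as np

# Made-up observations and a toy model's predictions for them.
observed = np.array([3.1, 4.0, 5.2, 6.1, 7.3, 8.0])
predicted = np.array([3.0, 4.2, 5.0, 6.3, 7.0, 8.4])

# R^2 is the share of variance the model accounts for; the remainder
# is the "error" the conversation dwells on.
ss_res = np.sum((observed - predicted) ** 2)
ss_tot = np.sum((observed - observed.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"explained: {r_squared:.1%}, unexplained: {1 - r_squared:.1%}")
```

However many parameters a model has, that unexplained remainder is where, on Stephan's account, the "notes that touch something in me" live.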


Mino Vlachos: That's such a beautiful answer, and I'm going to take that one with me for my personal reflection. So thank you for satiating that prized, selfish question I asked; you gave such a profound, beautiful answer. And it reminded me of, I think it's Good Will Hunting, when Robin Williams' character is talking about his wife, and he talks about all the little quirky things that are so unique. And when ChatGPT, which I keep using as an example just because it's what I use, I know there are other models out there, this is not a plug or ad for ChatGPT, but when it does that, it's a hallucination, and then I'm pretty angry because I can't trust what it's telling me. When a human does it, we can find it endearing. So it's a pretty good answer there.


Stephan Ledain: Yeah, yeah, yeah, yeah.


Mino Vlachos: So we've talked a lot about the technology, and of course in relation to the human, but now we're going to really talk about the human side of things. Now we're going to go into: okay, this is a tool, there's a use for it, there's a place for it, it can support us. Now we're going to talk about organizations. You're such a brilliant behavioral scientist and consultant, and I know you've worked with a lot of executives. So now we're going to talk about the human part of how to implement this, adopt this. And we're going to start with the roadblocks. All the studies I've seen so far show that there is not that much actual adoption. This is a very general statement, but what the studies show is that it's more individualistic: someone is personally interested, versus it being a big corporate initiative. It's some people using it here and there, if I generalize, because they are early adopters. But let's talk about organizations. What are some of the blockers that you think are stopping or hindering these organizations from taking a realistic look at how to implement this, or, even if they're trying, that stop them from doing it properly?


Stephan Ledain: Yeah, I think at the highest level, most things, or all things, fall under a simple lack of change management. I think most organizations want to introduce new things without proper support infrastructure, without addressing potential points of resistance, and without making people feel like they're stakeholders, or involved, or meaningful partners in the change initiative being brought. It feels very top-down and heavily indexed on that. And with that comes things like cultural resistance. A lot of natural resistance will come, especially in the era of AI, where the looming threat of job displacement is very present and real. People want to know: why is the organization looking to reduce the time to deliver a project, to optimize efficiency? How does that reflect on their worth and value to the organization, and how will that change as we continue to adopt more tools and see results? And so I think it's natural for a culture, and for individuals within the culture, to resist. Another under-spoken reality is the lack of identifying as an innovative or technologically astute organization. That's a huge hurdle that I'm seeing: when we do our assessments of AI readiness, we learn that most people within these organizations don't actually see themselves as individuals in an organization that does this sort of thing. It feels like pandering, not very well founded. It feels whimsical, instead of something that actually ties to the core values and intent that the organization has outlined and contracted with these employees on. And so it feels like a strange initiative that's coming out of nowhere, instead of something that feels meaningful and core to who they are. With that come all the particulars, like the lack of upskilling and training, and realizing that employees simply lack the skills to work effectively with this stuff. The technology has moved so fast. We went from sample sizes in studies of 100 or 200 to sample sizes of a trillion as we're looking at large models. The scale of this has grown so quickly and is so vast now that people simply don't know how to wrap their heads around all that's going on in this sort of technology and change. And for fear of not being able to utilize things properly, or out of intimidation by the technology, they simply won't engage with it, because they don't know how. Then, more practically, there's just gross under-communication and poor communication when introducing the strategy and thinking behind tools and their rollout. Without clear communication infrastructure, employees just won't know how and where and why this change is happening and what their place is in it, and it opens up pockets for misinformation and fear-mongering to spread, building implicit barriers to adoption. So I think there's a lot of real, practical lack of implementation, but also of really understanding where values and strategy and intent come into play. And all of that leads to not having a clear understanding of what the return on investment will be. I think that ties everything together: change management needs an understanding of what the eventual outcomes or consequences of the change will be.
If the executive team, or those who are the decision makers bringing in this new technology, don't have a way of articulating the return on investment, or of outlining the timelines and expectations of that return, it feels, again, just like a loose agenda item that simply doesn't have rooting in the organization.


Mino Vlachos: I want to go one layer deeper into some of the individual fears that might come up. I tend to think of change resistance as, generally speaking, fueled by some unarticulated fear that then becomes more reactionary, more "no, I don't want to do this." And maybe some of those fears are founded and maybe some are unfounded. What have you observed in terms of the fears that come up related to AI and technology?


Stephan Ledain: Yeah, I think I'd probably categorize the fear in two different categories. There's the group that has always done things this way, is used to things, and simply doesn't want to fix something that doesn't feel broken. And there's the group that may feel as though things are broken, but that the technology is coming to replace me, because I'm a part of the issue or a contributor to the breaking. The first group, the "don't fix what isn't broken" group, really just needs to be educated and convinced that this is additive to their lives and to their work, because they feel fine the way they are. And I think sometimes that is emblematic of a lack of foresight: not really seeing what's coming around the corner, how their competitors and their clients will be adopting these tools and will be expecting that of them. And they don't actually have a holistic understanding or prediction of just how much work it might take to stand up this sort of thing. The other group is the ones who are afraid because they think they're going to be replaced in some way, or made redundant, or will have to reposition themselves within the organization. That's a more difficult nuance to tackle, and it's best served by emboldening them to realize how, day to day, this new tool or technology can actually enhance, support and help their workflows and ways of working, and make them more relevant, more useful, more integral to the business operations. And again, I think that has to do with very thoughtful, intentional and science-based change management. Because if you don't have not only a process with which to address these two groups, but also a way of evaluating, measuring and monitoring it over time, these agendas will fall flat, be lost, or, in the worst case, be outright rejected.


Mino Vlachos: So I'm going to share, and I haven't shared this publicly before, one of my kind of conclusions or observations about one of the fears around AI, and I just want to get your general reaction or take. One of the things that I hear a lot is this kind of rise of the robots, right? This unfeeling, cold kind of technology that is just cerebral. And when I really sat with it, when ChatGPT very first came out, and I was very pleasantly surprised with it and, like I said, very mesmerized by its capability, I was thinking: if we think of our kind of cognitive brain, the part that is our conscious mind, that's the tool that we used to create this technology, and we're using something to almost replicate our own brain. So it's like the brain is so fascinated with itself that it created a shrine to its own capabilities. And the mind, because the brain is more than just the mind, the mind is just that active, talking, conscious part, that's how I define it, so the mind is really fascinated by the brain and wanted to replicate it. And in that process, people were like: okay, there's a mind out there that's thinking, but it's not feeling, and that's very scary. And this is the moment where I say: well, actually, a lot of human beings are trained, I think unfortunately, to not feel, to be in a dissociated state, to be disconnected from the body, to not be able to either understand or regulate their emotions. I think we already have the very thing that we're afraid of, which is the disconnected mind: not embodied, not in the emotional intelligence, not in the kind of communal feeling space. It is dissociated and potentially psychopathic. And we already have people like that walking around us. So that's why I joke that the AI mind, the robot that you're scared of, already exists. It's already walking side by side with you. The only thing that can separate us is a more embodied, feeling-based kind of societal structure, culture. So that's just my opinion, and I'm not saying again that it's right, but that's just what I've observed, and I'd love to get your raw reaction to what I shared, whether it resonates or you disagree with it. I'm happy to hear anywhere you go with it.


Stephan Ledain: I think that's a fascinating take. And I do think AI sort of highlights moments of depravity within ourselves that we outsource onto the tool: our own ability to be cold, our own ability to be emotionless and, as you say, disembodied. And, more practically speaking, I think the reason why we are quite a good distance from AGI, in my opinion, which is artificial general intelligence, the end-all, be-all AI application, is the lack of embodiment. An example I constantly reflect on is the failures of our ability to have autonomous vehicles. A 17-year-old can learn how to drive in two weeks. We have models with trillions of data points that still can't drive flawlessly after years, decades, of research. And I think there's something about the embodied experience and its ability to converge intelligence and analytical thinking with sensory, intuitive experiences, and to know how to prioritize and weight those accordingly in real time, very fast. There's something really important about that that is core to our ability to exist and function properly as individuals in broader society. But I want to go back to your original point, which I think is brilliant and really useful to how we move forward with AI: just thinking about and understanding what it actually means to have a mind, a thinking brain, and how it has the ability to reflect on itself. I think that's underappreciated. And I will say that the large language models are showing signs now of pretty sophisticated self-reflection and understanding. But there's something amazing about the desire of our brain to want to replicate that which it is. And I think the simple and accessible definition or justification for that would be: because our brains do things. We go out into the world, and our minds allow us to interact, engage, produce. But what's interesting about that, again going to the error point, is our minds also wander. They also seek stillness. They also have value in non-doing. And I think that will be a really fascinating thing for us to look to introduce into models. And it's why I don't like personifying my AI tools at all. I think it's strange that we want to give them pronouns and personalize them and speak gently to them. I think that's more of a reflection of how we want to be treated. But it removes us from appreciating the fact that these are tools, tools without sentience. They do replicate thinking processes, but they don't have the holistic embodied experience that we have, where although we could be reasoning ourselves through a situation, there's so much other information and data that is inputted before we actually make a decision. And so I really love that that's your point of view, and I can see how that framing could help leaders understand what the role of leadership, of human leadership, is, and how that might apply to dealing with a workforce that is going through the adoption of AI.


Mino Vlachos: I'm going to be cruel here: I'm going to semi-open something but then close it immediately. Because when we start to talk more about the ramifications, at some point we have to cross into the field of ethics, and ethics is essentially philosophy. I remember, years ago, hearing a thought experiment about when cars learn how to self-drive. What happens if a kid runs in front of the car and there are other cars in front of you? If you swerve to avoid the kid, you'll die, but if you continue on, the kid will die. And you could in theory one day program an autonomous self-driving car with a pre-programmed ethical choice that says: I would rather die than the kid, or whatever have you. Right? It's the trolley problem. That's the classic day one of philosophy class: the train is running, and if you let it run it hits five people, but if you pull the lever, it'll only kill one person; you have to pull the lever, though. And then there are variations of that trolley problem, like: imagine there are five people on the track that could die, and if you push someone in front of the train, you know that one person is enough to stop it. Would you physically push someone in front of the train? And whereas some of us would say, okay, five versus one with the lever, I can rationalize pulling a lever, there's something about the viscerality of pushing another human being, even though it would save the five, that almost all people would say they'd have a tough time with, even though it's the exact same mathematical equation. So math, utilitarianism, does not fully encapsulate everything that we think and feel and are encoded for. So these ethical conundrums, and how we would choose to structure our society and code AI programs that will need to make decisions, is a very vast field. We cannot open it on this podcast, and I don't know that I'm even qualified to open it on this podcast. But this is where we have some serious reflection to do, to figure out what our values are and what our philosophy is. And all of a sudden these things become important for us to really consider.


Stephan Ledain: I'll just quickly comment on that, because I think it's really important, and we'd honestly be remiss if we didn't even touch on it lightly. I think it's naive and overly ambitious to think that we can divorce AI application from real philosophical questions. I don't see how good outcomes for AI aren't directly aligned to good philosophical reasoning, and I think a lot of our leaders in the AI space are of that mind as well. And what we should take away from that, and what I want to hammer home, is: even though you may not feel qualified or necessarily equipped to have a strong point of view on that, I think it's absolutely critical that you continue to think through how you feel about it out loud. Collective engagement in AI's development at this stage, and especially in consideration of ethics, is absolutely essential for us to have good AI outcomes in the next little while.


Mino Vlachos: And as you mentioned through a different lens earlier, it's a reflection of us, ultimately, because whatever we put in it is what we value. Even in the cases where horrific biases have come out, right, where the training data is Reddit or Twitter or whatever it is, and then it's horrifically racist or becomes a neo-Nazi parrot, it's a reflection of us. It's us being fed back to us. It's the mirror. It's the same thing with social media, the TikTok version of social media, with YouTube, and now I know Instagram has it too. People complain because, and I'm going to be very crude here, if you're a guy and you end up looking at some women, it feeds you more women and more women and more women, and you're like, why is my feed just all models or this and that? And it's like: well, that's what you're looking at. It just feeds you the thing you're looking at; it's a reflection of yourself. So if you don't like it, it's not the algorithm, it's you. And that's one of the things I don't know if we're ready to take responsibility for. It's just a mirror for our own consciousness.


Stephan Ledain: Absolutely. And that realization is critical. We really need to be mindful of our laziness in outsourcing the depravity of our own issues onto AI as a tool, because it is actually a reflection of us. In the same way we committed that error with other people, in our sexism, in our racism, in our ageism, etc., it's really just us trying to outsource the pain of having to go through what we think is weak or unacceptable in us. And I think a lot of the AI pushback is just that: we don't want these things to exist in the way that they do, because we see them in ourselves, and we don't appreciate that this is just a tool that synthesizes data in a very large, sophisticated way. It is an entity that is a derivative of our own thinking and articulations in the world. And we should use that as information for how we can make it better.


Mino Vlachos: And so let me bring it back to a more practical place, because I always enjoy going to these more interesting, thought-provoking places, but ultimately, I also founded a company, just like you did. So I'm sitting here, I'm a CEO, and I'm like: all right, I need a few points, right? I'm looking to adopt AI and I want some practical takeaways. What are a few things that you would guide someone with, let's say a CEO or a head of a department, when it comes to AI adoption?


Stephan Ledain: Yeah, absolutely. The first thing is: begin the slow journey of education and awareness. And that doesn't need to be you sitting down and conducting a meta-analysis of AI systems. Simply sit down with a large language model, you have many to choose from that have free versions these days, and just talk to it. See where it feels good and productive; see where it feels like it has limitations. Understand what is actually going on, learn a little bit about the terms, and make sure you're agreeing on the high-level terms. Then start asking some questions that will deepen your learning journey: Are there any traps I'm falling into when applying AI that could be avoided? Are there any AI startups or companies working on problems that I find interesting or am passionate about? What are some small steps I can take, following an expert online, continuing my learning, or sitting down with someone who's already a little bit ahead of the curve? How can I stay ahead of some of the potential issues and downfalls of AI? What have I heard, what have I experienced, and what questions do I still have that I want to throw to the ether, to someone who might have thought about this a little bit already? And do I know anyone personally, within my circle, who's working in AI? How could I ask them questions and learn more? What I'm really getting at is: start by building a small ecosystem of AI-forward reflections and experiences, and concurrently keep up the practice of playing with AI. I think when you spend a little bit of time, you start to realize that it is actually just a novel technological tool that can be additive to our lives if we get to know it a little bit more. We also need to get ahead of, and on top of, the ethics and implications of using these tools. And then, of course, you can do things like listen to this podcast and get more technical understanding and help. But I'd say treat it like any other revolution, especially a technological revolution. Treat it like what it was like when you first started using a cell phone or the Internet, and appreciate that the parallels will be the same. It's going to feel uncomfortable and confusing and overly academic at first; then you sort of feel comfortable with it and understand it; and eventually you develop the maturity to know where it could be problematic. But bear this in mind and treat it as an opportunity for a clear learning journey. Seek coaching through that. Take questions, or build questions, to your coach to see how you're changing through the implementation of this. But all that is just to say: start building this cultural ecosystem of AI enablement in your life.


Mino Vlachos: And so I'm the founder of a company, that's not hypothetical, it's true. But now let's say, hypothetically, I'm interested in adopting AI, and everything you said sounds like a whole lot of work. And I know you're the founder of a consulting company called AdaptAI. So in what ways does your company support someone like me when it comes to AI adoption?


Stephan Ledain: Yeah, this is where you can actually outsource all the hard stuff and give it to us. We take you through a very concise, evidence-based, scientific journey of starting from zero and getting you on your way to actually leveraging the value of AI. We start with a simple, quick online assessment to understand your organizational AI readiness. From there, we understand the landscape a bit better and outline some use cases for you. Then we start thinking about where there are areas and existing tools that we can leverage to help you on this journey of adoption, and what case studies or best-practice examples can get you there. Then we start thinking about the ways we can actually implement this at scale. That's probably the meat of the journey we take with clients: do we need to think about a buy-or-build strategy? Do we work with partners who have already built tools that will solve the needs you have at present and synchronize smoothly, or do we want to build that internally? We have a tech team at AdaptAI that can help you with your data housing and infrastructure, and with understanding and insights gathering. And then, throughout the final portions of our engagements, which I think are the most important part, we really look at what the data shows about where you're at on your progress. Have you actually seen ROI? Where are you seeing ROI? Where's the most momentum? Where are the areas where we need to refine and improve the model and processes? And most importantly, how can you report that to your team, to your executives, to your board, and to the people that you're working with? How can you show them how you're tracking with this AI initiative, how we're actually creating value for the business, and how their individual contributions matter to that? And so we take all of that, we do the project management of it, and we basically hold your hand through this very confusing, winding process of starting to leverage this tool.


Mino Vlachos: I love it, and it sounds like a true end-to-end process that's very holistic and comprehensive. So today we covered quite a lot of ground. We started all the way from what the benefits of AI as a tool are and what some of the hindrances or fears are that pop up. We moved to the more behavioral realm and talked about some of the sticking points, why AI isn't adopted once an organization tries to go on that journey. We talked about some of the more philosophical aspects, the ethical aspects, and then really landed on some of the practical tools, tips and implementation that folks can use on their journey, whether it's a self-guided journey where they create their own ecosystem, or it's through outsourcing, which is what I would vote for, because I love technology, but everything you said, just as a founder, I'm like, that's a lot of work. I would really need someone to help me with that because I've got a business to run. So AdaptAI can be a really beneficial partner in that journey for you. And so, as we think about everything we've discussed today, what are your final thoughts for this podcast episode as we wrap up? What would you like to leave us with today?


Stephan Ledain: Yeah, I actually would love to take a moment to show appreciation for what you're doing here, because I think it's actually critical in the overall work of properly partnering with AI, which is: let's create space for dialogue. Let's allow ourselves to be honest about the points of insecurity, fear, trepidation, the places where we don't think we're actually getting value, but also let's be honest and share wins, the places where we are actually adding meaningful use cases and understanding and value through AI. And let's realize that we are all on this journey together, and that no one is an expert on all facets of this. Some people know data best, some people know technology best, some people know the people cost of all this best. Let's gather and exchange information as a community. And I think for you, Mino, and the 3Peak group, just being a supportive coach through this strange and nerve-wracking time is absolutely critical. We are, for the first time, faced with a frontier that could fundamentally disrupt the way we exist and work, and that, for a leader or for someone entering the workforce, is a lot to bear. I think having an objective, supportive presence in your professional life at this point is very useful and very critical. And so I really love the space that you're creating, and I hope we can continue to emulate that in our own work as well.


Mino Vlachos: And what I'll leave us with is that many times we talk about leadership, we talk about relationships. I've said many times on this podcast, and just generally with clients, that I really believe leadership is a social process. It's neither a role nor an innate ability; it's something that happens between two people. And tools are an important part of leadership. How we relate to our tools is an important input and output of what we do in our jobs every single day. And so we cannot neglect the relationship we have with the tools available to us. You can look throughout history: how we relate to tools as humans is sometimes the difference between healthy, harmonious cultures and war. We can use the same tool in two different ways. You can use an ax to split wood or to attack a person. So the relationship with tools is vital, and we need to explore it. We need to understand how leadership interacts with the way we use technology in our day-to-day. Any talk of leadership, which is what we focus on, without the relationship to the tool is incomplete. And so I'm very happy that you were able to join us today so we could talk about probably one of the most revolutionary tools coming into our ecosystems and consciousness, because we need to understand how we're going to integrate it and use it for, hopefully, the benefit of ourselves, our organizations and our societies, and not to their detriment. And so with that, I'm going to conclude the podcast for today, and thank you so much, everyone, for listening. We hope to see you again soon.