Episode 38: Busting Common AI Myths with Sarah Alt, Founder of the Ethical AI Consortium

Understanding Common AI Myths

On today’s episode of The Leadership Habit podcast, Jenn DeWall discusses common AI myths with AI thought leader Sarah Alt. Sarah is the founder and CEO of the Ethical AI Consortium, a nonprofit membership organization of industry partners, institutions of higher education, and professionals dedicated to raising awareness of the ethical and responsible use of advanced algorithms, analytics, automation, and AI. The EAIC supports the development and recognition of ethical AI leadership in organizations, government, education, and research. Sarah has over 20 years of experience in the technology industry, and she continues her mission to create a more explainable future. For those who are new to AI, Sarah will help you get intrigued and engaged with this amazing technology we have in our hands. Enjoy today’s episode as we talk about the common myths about artificial intelligence.

Full Transcript Below:

Jenn DeWall:

Hi everyone. It’s Jenn DeWall, and I’m so excited to be sitting down right now, talking to Sarah Alt. Sarah, thank you so much for joining us in our conversation all around artificial intelligence.

Sarah Alt:

Yeah. Thank you. Thanks for having me, Jenn, a pleasure to be here.

Jenn DeWall:

So Sarah, for people that may not know you or may not be familiar with the work that you do within the space of AI, could you go ahead and introduce yourself for us?

Meet Sarah Alt, Public Interest Technologist

Sarah Alt:

Yeah, sure. Thank you. I recently founded the Ethical AI Consortium, which is a nonprofit organization here in the US focused on bringing standards and governance models from the marketplace around AI, in particular as they pertain to the ethical, trustworthy, and responsible use of AI, to organizations that need help with that or that have committed to their ethical AI journey.

We do a lot of research and development to get them the information that they need and to bring them on board with the various frameworks that are available, to learn: how do I navigate everything that’s going on in the ethical AI space? And we make sure that you can navigate that awkward dance of, am I responsible for this, or is it my vendors and suppliers who are responsible for it? Who all plays a role in making sure that we’re both buying and using AI in trustworthy and ethical ways? So that’s what we’re doing with the EAIC today.

What does Ethics Have to do with AI?

Jenn DeWall:

What does that mean? And I know we’re going to talk a little bit more about AI, but someone might be unfamiliar with attaching ethics to artificial intelligence, because we might think: that’s technology, why would I have to add ethics to that? So at a high level, what does it mean to have ethical AI?

Sarah Alt:

Yeah, so I think it really boils down to a really important word, which is trust in our technologies, because humans are still developing these technologies today. Yes, there may be a day when technology is developing itself and there are no humans involved, but that day isn’t here yet. So humans are still developing these technologies, and they’re developing them for other humans, to either make decisions or at least get recommendations for decisions. Being ethical and responsible in that means making sure that we’re not violating fundamental human rights, that we’re not amplifying biases, that we’re not discriminating against human beings, potentially not even purposefully or knowingly, but maybe unconsciously or unintentionally. And it’s making sure that we’re doing that in some kind of safe and responsible way. It does not necessarily mean that we’re rushing to legislation or making laws as much as it is making sure we understand how to be more trustworthy and more ethical when we are using these technologies, because we want to embrace them. Technology is a great thing, as long as we know that we’re making good decisions with it.

Jenn DeWall:

And thank you so much for clearing that up, because the topic of AI, for someone like me, is still relatively new. I absolutely am the person that thinks, oh, it’s technology, or, oh, it’s a robot, right? It’s artificial intelligence. And so I don’t necessarily think about ethics initially. And I think that’s an important bridge. It’s not where we’re going to go in our conversation today, but it is important to think that, yeah, this is technology, but it’s also a tool. It’s a skill. It can be used for great things, or maybe not-so-great things. And we need to understand how to make sure that it’s doing things in a responsible way. But today we’re actually going to be talking about AI myths. Why do you think there are so many myths around AI?

Sarah Alt:

I found that debunking an AI myth is a great place to start with most audiences, because AI sort of feels like this mythical thing that we’re supposed to be fascinated by, but we end up being potentially frustrated with. How do I take advantage of AI? I know I’m supposed to be listening to the fact that these technologies are coming, but what does that really mean? And what does taking advantage of AI, or incorporating AI in our business or organization, really mean? So we like to start with the myths, because it helps us, again, as humans, to consume this, to understand it: is this something that I need to be excited about, afraid of, or a combination of those? So that’s what we like to do with these AI myths.

AI Myth 1: Are Robots and AI the Same Thing?

Jenn DeWall:

Well, and there’s so much that I don’t know. So I know that I’m probably leaning into assumptions that I’m making about this, but let’s dive into one of the myths. So I know that we have a few that we’re going to talk about today, but one of the AI myths is the one that conflates robots with AI. What is that myth specifically?

Sarah Alt:

So this is like the first real “aha moment” for most people that we talk with, because there’s, again, this AI myth that’s been either implanted into us or that we’ve arrived at ourselves: that if I don’t have robots, like if I don’t see robots here next to me, then I must not have AI yet. And so I’m good, right? What we remind people is that some of the most basic components of AI— algorithms, data analytics, machine learning— have been around in the workplace for a long time. Of course, in the proper recipe, and with all the great intelligence that’s put into building AI, they produce some very powerful automation. So we invite people to use the term artificial intelligence more broadly to mean a computer program that uses data to execute a task that a human would typically perform.

By that definition, AI is not really new, so that would be an AI myth. Robots have been performing human tasks for a long time; we’ve had robots in manufacturing facilities for decades. But the volume and variety of data that we have available to us, the speed at which we can process it, and the ability of the machine, or the AI, to learn more sophisticated human-like decision-making are what’s making AI more interesting and novel than any other automation that we’ve seen. And it’s where things can get a little bit slippery when you’re not certain about the quality of the data that you’re feeding the machine.

So we shouldn’t wait for robots to show up at the door to suddenly take responsibility for how the basic components are developed and used in our organizations, or how we program them, or how we sell them. If any part of your decision-making in your organization today uses algorithms, advanced analytics, or any kind of advanced automation, then, we invite you to consider, you have some version of artificial intelligence, even without robots. And we need to focus on the quality of our decisions and take more responsibility for those machines.

Jenn DeWall:

This may be the time to say that AI is—and I know many people have made this joke, but that feeling when you just looked at this on Google, and now it’s showing up in my social media, what in the heck is happening? That is artificial intelligence, right? Or is that an AI myth?

Sarah Alt:

Absolutely. It’s an algorithm in its simplest form. The data is: who are you, and what are you looking up? And what are the results of what you’re looking up? And what do you do with those results? Do you click on it? Do you pause there? Do you stare at it? Do you consider it? And the algorithm takes all those data points and makes inferences, makes decisions, makes recommendations that eventually say, oh, if Jenn is interested in that, then she must be interested in this. So it’s a simple algorithm, very complex, but at its simplest form, it’s an algorithm that is helping to guide that decision. And so really, without robots, we’re using and working close to AI every single day.
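The inference loop Sarah describes, turning engagement signals into ranked interests and then into recommendations, can be sketched in a few lines. Everything here is invented for illustration (the signal names, the weights, the related-topics map); no real platform’s recommender works from a table this simple:

```python
# Toy sketch of "take all those data points and make inferences."
# Signal names and weights are made up for illustration only.
SIGNAL_WEIGHTS = {"clicked": 3.0, "paused_on": 1.5, "searched": 1.0}

def infer_interests(events):
    """Turn raw (signal, topic) events into interest scores, strongest first."""
    scores = {}
    for signal, topic in events:
        scores[topic] = scores.get(topic, 0.0) + SIGNAL_WEIGHTS.get(signal, 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def recommend(events, related):
    """'If Jenn is interested in that, she must be interested in this':
    map the top inferred interest onto related topics."""
    ranked = infer_interests(events)
    if not ranked:
        return []
    top_topic, _ = ranked[0]
    return related.get(top_topic, [])

events = [("searched", "hiking boots"), ("clicked", "hiking boots"),
          ("paused_on", "tents"), ("searched", "rain jackets")]
related = {"hiking boots": ["wool socks", "trail maps"]}
print(recommend(events, related))  # "hiking boots" scores highest, so its related items surface
```

The point of the sketch is only that a search plus a click is enough signal for a system to start showing you adjacent products, which is exactly the "I looked this up on Google and now it’s in my social media" experience.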

Jenn DeWall:

Gosh, yeah. It is still relatively new. I remember getting my master’s, this was almost ten years ago, and just thinking about how people were starting to use data in their marketing to understand who their consumer was and what they need. They gave examples of a retailer being able to understand and project customer behavior, like if someone was pregnant and then became a mom, tracking when retailers should send them the next ad for something. Like, wow, the ways that AI can think are pretty incredible. I mean, I know there’s the spooky side, and that’s not what we’re here to talk about, but it’s really amazing. AI is around us whether we realize it or not.

So another AI myth is, you know, that we need our own data scientists. Kind of like innovation, how people assume research and development is responsible for innovation, I think AI gets the same assumption: that, hey, there’s someone over in the corner that’s responsible for our AI. But what is that AI myth about?

AI Myth 2: Isn’t AI Just for Data Scientists?

Sarah Alt:

Yeah. I love the R & D example, because I often use that one as well: we want to have innovation embedded in what we do. Likewise, the AI myth that we are trying to debunk and help organizations understand is this idea of, wait, we need data scientists in order to explore AI, don’t we? That’s a common AI myth. And so, when we talk about the fact that we already kind of have AI in our workplaces, it follows that since we’ve been making these decisions even before, or without, data scientists, we don’t necessarily need them to explore AI. And to be fair to all those data scientists out there, this is not to say that they aren’t valuable or necessary. They absolutely are. But when we have this conversation with organizations, it’s to remind them that the disciplines and competencies we need in order to get great value out of AI investments can and should be built into many roles.

You shouldn’t just package it up and say, that’s for the data scientists to worry about. Because, in fact, the data scientists I talk to want their colleagues in other business functions to be skilled at working alongside the awesome stuff that they’re building. They don’t want to be the only people in the organization responsible for making sure that it fits its purpose, that it meets the needs, and that it isn’t biased or discriminatory. There’s actually a real condition in decision-making today called automation addiction, and even automation bias, part of this ever-evolving list of unconscious biases that we have.

The idea is that if I over-rely on technology and automation to tell me what to do or what to believe, then we’re at risk of making too many decisions based on automation that was built by somebody that, perhaps not intentionally but maybe unintentionally, had built bias into it. So we can’t completely turn off that critical-thinking part of our brains: making sure that it makes sense, that it’s fit for purpose, and that we’re using it the right way. And so we want to upskill and reskill all of our talent in our organizations to work alongside the AI, not only rely on our data scientists to bring that.

Jenn DeWall:

I love that. It’s all about the upskilling. And I think you nailed it: think about it as something that we want to build into every single level, from our frontline to the C-suite. We should all have the skillset to leverage the power of AI, to implement AI, even if you’re maybe more at the frontline, to execute and utilize it so it can capture the data that you need for future decisions or strategies or growth, what have you. And I think it’s really empowering, because it makes the subject of AI feel more attainable to think, oh, I could do this. I can look at this to figure out how we could potentially leverage this piece of data, instead of feeling like, I don’t know, that’s data, and I’m afraid of that, and I don’t know what to do with it, or I’m not an analytical person, so I can’t understand it. Your approach makes it feel like I could probably learn this.

Sarah Alt:

Yeah. And I love that phrase, because I often hear that, right? This, oh, I’m not good at math, or I’m not an analytical person. That’s okay. We don’t want you to only be good at math or good at data science to understand how to work alongside AI. In fact, I would argue that in some cases we need that diversity, and we’re encouraging, in most cases, bringing these other sorts of experiences and expertise to the table much earlier in the thought process and the buying or development process of AI, frankly, so that we can eliminate or reduce the biases that could result and make sure we don’t make mistakes. But another side benefit is that the earlier people from these other areas of your business are engaged in that thought process, and even in the decision-making process of bringing AI or automation to your workplace, the more likely they are going to be to adopt and be willing to work alongside the AI instead of refusing it or fearing it.

AI Myth 3: Can’t We Just Set It and Forget It?

Jenn DeWall:

So get people, or essentially invite people, into the conversation to evaluate, to try it, because you’re introducing change. And we all know that people struggle with change; it takes us time to adapt. But bring them in at the beginning, instead of waiting until you maybe have found the ways that you can use it and then trying to force people to use it. Show them how it can make things better. Fantastic. So another AI myth that you talked about, and I think this is one of my favorites: the set and forget. I can see companies doing this, because I feel like I do it on my own. Whenever I look at technology, it’s, hey, what can I do to simplify something for my life and then never think about it again. But the set and forget myth?

Sarah Alt:

Yeah. So this AI myth is also one of my favorites, because having been on the buy-side of the software equation for many years, you know, I have the scars to prove that we’ve spent millions of dollars on software platforms. And so there’s this myth of: I paid a lot of money for it, so I should just be able to set it and forget it. We’ve already invested in it, so it should be fine, right? This was even hard for me as a technology professional to make that shift in my career. But today, technology changes rapidly. The days of perfecting software before it goes to market are long gone. Software is very iterative. We update apps and software regularly to fix bugs and add features as more data, more variety of data points, and faster processors become available. The upside is we get to take advantage of that. The downside, of course, is that the investment you made initially does carry a sort of maintenance and upkeep requirement that can sometimes surprise people. Likewise, the governance and the standards for what is lawful and acceptable or preferred in the way AI, or any technology, behaves also change over time.

What is considered acceptable practice in one culture one day may not be acceptable a few years later, or what is acceptable in one culture may not be acceptable in another culture. AI is no different in that. It is constantly evolving and learning. And again, this is why we remind our members and our clients not to take our eyes off the adjustments we need to make for the more responsible use of that technology, as well as off the technology itself. There’s this belief that it should just work, especially if it costs so much money to implement. And I totally get that, but those critical sense-making and critical thinking skills that we talked about, the ones we want people working alongside the AI to have, are really important to make sure we’re training the algorithms to meet the standards, the protocols, and the cultural nuances of where they’re used.

So, a real quick example, or story, from my own “aha moment” regarding AI in the workplace. A few years ago, we posted a role to recruit for a manager on our team, and our HR recruiter showed me a batch of resumes. I didn’t really see any candidates in that group that fit what we were looking for, so I asked her to get more candidates. I totally expected that she would repost it, come back in two weeks with a new batch of resumes, and we would sit down and go through it. But she came back two hours later that afternoon with a new batch. And in it, we found a few more candidates that seemed a bit of a better fit for us. But when I asked her where the second batch of resumes came from, she said that they were originally screened out. And I asked, well, who screened them out? And she said the system screened them out.

And I asked, how does the system, or the software in this case, know that it was wrong this time? In this case, it was wrong, because what it screened out was actually where I found value. And she didn’t know. She didn’t have an answer. So I asked, could we update the system? Could we tell it, here’s how to do better for Sarah’s needs next time? And again, she didn’t know. And the more I pressed, and eventually got in front of the software vendor to do the same, the more apparent it became to me that no one could explain it, or was willing to explain it. There’s a whole different conversation we can get into about the willingness to be transparent about the algorithms. But regardless of that situation, I realized that if I couldn’t explain it for myself, I would never be able to explain it for others.

Can You Explain Your Algorithms?

Sarah Alt:

And I became that person, which, you know, is shameful of me to admit, but I became that person who refused to use the technology because of the bad experience. And I was supposed to be a technology leader. So for me, that was very much an existential moment in my career, where I realized, wait a second, this lack of transparency does not sit right with me. I’m pretty sure we weren’t biased or discriminatory in that situation, but if we had been in a situation where we were, and I couldn’t explain it, that would not bode well for us. We weren’t even talking about AI in the workplace at that time, and, to tie it back to the first myth, we didn’t have robots either. We were just making decisions based on a recommendation from an algorithm in a system. And so for me, the takeaway was: we need to go back and figure out how to tune the software, and ask ourselves, how do we get to a point where we can feel comfortable explaining the outcome? Because not being able to, I don’t think, is good enough.

Jenn DeWall:

Oh my gosh. And I think you’re also bringing awareness to something that many people don’t realize: there are applicant tracking systems. So when you go to apply for a job, the format of the resume that you upload can determine whether or not it gets kicked out. I know one of the things for resumes is, let’s say it’s in an Adobe PDF (.pdf) versus a Word document (.docx): if the system can’t pull out those keywords, it could be discarded. And so I think it’s important for people to know, hey, do you understand what type of system your organization is using, or do you know what kind of system another could be using?
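The format-and-keyword screening Jenn describes can be sketched like this. The required keywords, the match threshold, and the `parseable` flag are all hypothetical stand-ins; real applicant tracking systems use their own parsers and scoring, but the failure mode is the same:

```python
# Toy sketch of keyword-based resume screening. Keywords and threshold
# are hypothetical, not any real applicant tracking system's rules.
REQUIRED_KEYWORDS = {"python", "sql", "leadership"}

def extract_text(resume):
    """Stand-in for a format-aware parser. If the file can't be parsed
    (e.g. an image-based PDF), no keywords can be pulled out at all."""
    return resume.get("text", "") if resume.get("parseable") else ""

def screen(resume, min_matches=2):
    """Pass the resume only if enough required keywords are found."""
    words = set(extract_text(resume).lower().split())
    return len(REQUIRED_KEYWORDS & words) >= min_matches

# Two candidates with identical content; one format defeats the parser,
# so that candidate is screened out before a human ever sees them.
good_docx = {"parseable": True, "text": "Python SQL leadership experience"}
good_scan = {"parseable": False, "text": "Python SQL leadership experience"}
print(screen(good_docx))  # keywords found, passes
print(screen(good_scan))  # same content, unparseable format, rejected
```

This is why "it’s not always just you" holds: a qualified candidate can be rejected purely because the parser never saw their words.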

Jenn DeWall:

Because I also think that, you know, as a career coach, this is one of the easier ways to say: it’s not always just you. It could be a system that automatically kicked you out. And so you don’t have to feel bad, or feel like you’re a bad person, or unqualified, or not confident in the direction you want to go, because if we don’t learn how to leverage this technology, there could be things working against you that you absolutely have no control over.

Sarah Alt:

Right. And you know, it’s funny because you made me think about sort of this other side of the argument that, well, if we explain it too much, and if we’re too transparent about how the algorithm works, people will find a way to beat the system. They’ll find a way to put just the right words or do just the right things to sort of game the system and beat the algorithm.

And frankly, that’s going to happen regardless. Especially in places where fundamental human rights are being considered, where bias and discrimination can enter, which for just about every organization is going to be in your hiring practices, we invite people to say: look, if the worst thing that happens is that people game the system, you can find that out. You can weed that out. Wouldn’t you rather make sure that you weren’t introducing or amplifying bias or discrimination in your AI or your software? And that you could explain it, so that you could feel confident you’re not violating fundamental human rights? Then you have that foothold to stand on while you entertain this whole idea of people trying to game the system because they know how the algorithm works. Right?

Can People Outsmart the Algorithms?

Jenn DeWall:

I was just in a meeting this morning, and they were talking about algorithms on a social media platform. That, hey, initially, you could partner with all your friends, have them comment, and it would move your quote or your post higher and get visibility. But once that platform figures out that you kind of have the same people doing it, they take that, and they can also do the opposite.

Sarah Alt:

That’s right.

Jenn DeWall:

You can try to beat the system, but there are also systems, I think, built in to anticipate that level of, let’s say, I don’t want to call it cheating. I don’t know if you’d call it that.

Sarah Alt:

Outsmarting, or gaming, in some cases cheating. It all depends on the lens that you look at it with. But to your point, humans should still be at least involved in being able to screen that and see if that’s what’s happening. And I think that’s the part where, going back to the automation addiction model, we often believe, oh, we’ve spent all the money on this, so I should be able to rely on the automated decision.

And it’s not to say that we’re fear-mongering. We’re not trying to suggest that anybody should be fearful of that. But we are asking for that sort of critical thinking and sense-making, to say, is this really what we intend to do with this? And is this really how we intended to use it? Because remember, and this kind of segues already into our next myth, these technologies are built the way software manufacturers build them: for the common denominator of all subscribers. Otherwise, they would be tailoring and building custom software specifically for Jenn’s needs. I’m sure certain people will offer to do that, but that’s not necessarily sustainable. It’s just not how the market works. So they build it for that sort of general use.

And it’s really up to you as the subscriber, or the buyer, or the user, to figure out the right configurations that are going to fit your culture, both national culture and organizational culture, and that are going to fit the legal and regulatory requirements of your industry. There are a lot of nuances that go into that, and the software vendors know that. And what we’re encouraging on the software development side is to make sure that you’re really intentional about saying that to your clients and your subscribers. Don’t just assume they know it, because they may not. And not because they’re not intellectually capable of knowing it. They just may have a reliance on, and a bias toward, automation, especially when they’ve spent all that money on it. Right?

Jenn DeWall:

Yeah. It can be hard to think about having to make another investment, or ongoing, long-term investments. The thing that comes to mind for me, from a normal-person perspective, if you will, is the concept of a gym membership. You can’t pay for a luxurious gym and then assume that, without going there, your membership alone would be the thing that gets you where you need to be. You have to put in the work, and you have to assess where you’re at. You need to set goals. You need to know what machinery is going to work for you. I mean, if I look at my gym, I might be more of a “set and forget” person.

Sarah Alt:

It’s great. No, I totally get it. It’s a great analogy.

AI Myth 4: If My AI Is Biased, It’s the Vendor’s Fault

Jenn DeWall:

And then, the next AI myth is about my technology vendor. What’s the AI myth that people have around, I guess, the expectation of what the person that builds the technology has to do and offer them? What’s that myth?

Sarah Alt:

Yeah. So very early on, you could get away with this AI myth: that it’s the vendor or supplier’s responsibility if I make mistakes with AI, right? Very early on, when the market was sort of immature, and even our legal advisors were, and still are, catching up with the software game here, and with AI in particular, you could get away with that. But very quickly, things have changed. And there’s an expectation, both in the practice of law and in any good ethical practice, of shared accountability.

Vendors and suppliers are responsible, no doubt, for the quality and performance of their AI technology, but they’re not necessarily responsible for any mistakes or biases that already exist in your data or your processes, or in how people make decisions with that technology. Your people apply the AI to your data, your decisions, and your processes. That’s sort of the view of the vendors and suppliers of software. It doesn’t mean they completely wash their hands of it. But to use my HR recruitment story as an example: the software that we subscribed to is the same one that the organization down the road subscribes to. They may be configured slightly differently than we are, but for the practical purposes of what it’s intended to be used for, we were using it correctly. Where we lacked was the ability to explain how it worked in the situation of screening out certain candidates. That screening out may have been happening because of data that we were feeding it, or ways in which we were using it.

So all of this needs to be reviewed and assessed regularly to make sure that we’re preventing mistakes. Here’s another recruiting example, and I’ll deliberately stay in recruitment, because every organization, for the most part, has some version of employees, and we want to make sure that people understand that this is one of the top places we need to look for this. Say you’re subscribing to an AI-based screening system. Some companies will subscribe to a cognitive testing system, where after candidates have gotten so far in the process, you administer some kind of cognitive test to screen them and see whether they’re up to your cognitive expectations. Suppose your measure of successful candidates even partially considered past candidate success, so the data that you fed it was based on, say, Jenn was successful in her role.

She stayed in her role for two years from the time that we hired her, so we’ve taught the algorithm that Jenn’s cognitive results are good for our organization. But if none of your past candidates had a cognitive disability, for example, you could be unintentionally biased against those candidates. And because recruiters feel they can trust the AI, going back to our point about automation addiction, more than human screening, you may unknowingly be amplifying this bias against people with a cognitive disability. It may be your own data, processes, and people that we need to tune for that. Not because you were intentionally trying to do that, but because your data did not historically represent the population that you may need to serve today, and we need to make sure that it is not discriminating. So that’s just another recruiting example of why we don’t just take the software that the vendor gives us and say that it must be good, and therefore, if it’s bad, it’s only their fault. There is also a responsibility for how we use it and the data that we feed it.
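The bias mechanism Sarah walks through, a screen tuned to past hires quietly penalizing anyone unlike them, can be made concrete with a toy example. All of the numbers and features below are invented; the only point is how an unrepresentative history becomes a discriminatory rule without anyone intending it:

```python
# Toy illustration of bias inherited from unrepresentative history.
# Numbers and features are invented for illustration only.

# Historical successes: everyone took the timed test under standard
# conditions, so fast completion *looks like* a predictor of success.
past_hires = [{"test_score": 88, "minutes": 20},
              {"test_score": 91, "minutes": 18},
              {"test_score": 85, "minutes": 22}]

avg_minutes = sum(p["minutes"] for p in past_hires) / len(past_hires)

def screen(candidate, score_cutoff=80):
    """Pass candidates who resemble past hires on BOTH dimensions."""
    return (candidate["test_score"] >= score_cutoff
            and candidate["minutes"] <= avg_minutes * 1.2)

# A candidate with a cognitive disability who uses extra time scores
# well but is screened out, because no past hire ever looked like them.
typical = {"test_score": 84, "minutes": 21}
extra_time = {"test_score": 90, "minutes": 35}
print(screen(typical))     # passes: looks like past hires
print(screen(extra_time))  # rejected on time taken, not on ability
```

Notice that the time rule was never written by a person; it fell out of the historical data. That is why reviewing your own data and processes matters, not just the vendor’s software.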

Jenn DeWall:

Yeah. It’s like going to a car dealership and buying a brand new car. If you then don’t pay attention to the traffic signal and get into an accident, you can’t go back and say, well, you did this, right? But I think that’s important. I love that. I think that’s a really important thing to note, because the thing that we have to remember is that most people likely aren’t the ones developing the software. You’re the ones learning how to partner with the software. And so, yeah, you need to understand how it works. You don’t necessarily need to be as concerned about intent if you’re not programming it, but you need to understand how it might be making decisions for you, or what specific decisions it’s making on your behalf.

Sarah Alt:

Yeah. We recently saw this also play out in a really significant way regarding facial recognition AI used in law enforcement. There have been several instances where the AI has shown bias based on skin color, and it may not have been the intention of the AI developers. It may have been an unconscious bias. It may have been the data that was used to train it. It could have been data used by the specific law enforcement subscribers that was already biased against any offenders who were not Caucasian. Whatever the precise reasons for that bias, the major software companies supplying the AI for law enforcement either paused or completely exited that business as a result of what they learned from those mistakes, at least until the practice can be better understood. But the point here is that they took some level of accountability. They shared that accountability with their law enforcement subscribers to say, listen, it’s not ready, or maybe it’s not ready just for your specific use. Let’s learn from this. Let’s teach ourselves, let’s teach the algorithm, let’s teach the AI, and make sure that we aren’t biased and discriminatory in how we’re using facial recognition in that law enforcement instance.

AI Myth 5: Will AI Replace Human Workers in the Future?

Jenn DeWall:

Yeah, that’s the ethics. So let’s go into that fifth one, which I feel is absolutely the one that, for me as a non-AI’er, if that’s a thing, as someone outside the land of artificial intelligence, looms largest: that AI will replace all humans in the workplace. I feel like we really hold onto that one, the AI myth that our jobs are going to be gone pretty soon, everything’s going to be done by a robot, and we can’t embrace AI because it’s out to get us. What is that myth? Why do people believe it?

Sarah Alt:

Yeah. A lot of that AI myth has to do with how technology is portrayed culturally, and with the fact that, for very good reasons, we want to see technology advance and evolve continuously. If you’ve studied Moore’s Law, or anything else about how fast technology changes, it can feel fascinating to think about the idea of robots replacing us. That’s the myth: AI is replacing all humans in the workplace. But it’s also frustrating at the same time, because you realize, wait a second, does that mean that AI is going to replace me? Right?

And so there’s no doubt, and I don’t know the exact year, but recent graduates and people entering the workforce in pretty much every country in the world can expect to be working alongside AI at some point in their careers from here on out. What we actually see is more of a preference for co-bots, or collaborative robots: robots that operate at slower speeds than what you and I might picture from an automobile manufacturing scenario, where robots work really fast because they’re tuned for one specific task. We’re talking about AI that’s learning and evolving even in its active state, sitting right there next to you. We see that being collaborative; you are going to be working alongside them. That’s not an AI myth at all.

We see applications like this in hospital settings, for example, where AI-powered robots run manual routines that a human being would normally do, like collecting test samples and delivering them to a location. Now you could say, well, that’s just robotics. But in some cases, AI is built into that robot to make better decisions about the path it’s going to take, or to know whether or not it’s doing the right thing. This again creates opportunities for the critical thinking and analytical skills that we want and need as we work alongside robots. We’re actually really looking forward to opportunities where robots and AI can replace humans in very dangerous jobs or menial tasks. With the right investments in upskilling and reskilling people in those jobs, there’s real hope for more fulfillment and job satisfaction, because I don’t need to worry about very dangerous tasks, and I don’t need to be demeaned by menial ones. So we’re really looking for opportunities to elevate people to work alongside the robots, not necessarily be replaced by them.

Jenn DeWall:

And I think it’s exciting to look at it that way: they are true partners here that can potentially provide greater job satisfaction, or at least a safer job, if yours involves more hazardous tasks. So it sounds like there are a lot of ways we can actually embrace AI. But what advice would you give to leaders today? What would you say they need to do now in terms of AI?

Sarah Alt:

Yeah. Obviously, from our work, we’re very focused on the ethical and trustworthy use of AI. So from that standpoint, we absolutely invite organizations and leaders to prioritize where the greatest value from your AI investments is going to be, and prioritize where to look. Because, as we said earlier, you’re probably already making sophisticated, analytical decisions elsewhere in your business. In almost all organizations, you can start with recruiting and hiring, and any of those areas where you’re making decisions about employees, and work from there as your priorities. Really ask what seems like a fairly simple, straightforward question, but perhaps one that we haven’t asked in this light: how are we using algorithms to make decisions today? Make sure you’re following the laws in those areas. Hopefully, you are. Then take the next lens: okay, we may be following the laws, but is this representative of ethical use and ethical practices, and of the cultural norms, both national culture and the corporate or organizational culture, that we live by?

And one of the most important things we coach organizations on is defining what transparency and explainability mean for you. Think back to my earlier example, where nobody could really explain to me how the technology worked. That’s a pretty risky situation when you’re talking about a recruiting or employment decision, so you don’t want to be there. Just as we audit our systems, for example, for the fitness of our financials, or audit our processes to make sure we’re meeting regulatory and compliance requirements, we invite organizations to do the same thing with their advanced analytics and advanced automation: ask whether we are compliant with an ethical standard, one that we have set or that we have shared with others. Make sure you’re committed to making those changes internally, but also hold your vendors and suppliers to the same standard. The good news is that, built carefully and in the right context, AI actually has the potential to be used to de-bias human decision-making. But that’s a whole other podcast.

Jenn DeWall:

I was going to say, what? How can it be biased, and then we use it to remove bias? That is so interesting.

Sarah Alt:

I think there are ways to do that. And back to my data scientist friends, who were probably getting a rash earlier when I said they weren’t important: they are. They’re exactly the people who can help us build the right technologies, ones that actually take the bias out, if it’s done properly. For now, it still remains up to humans to make sure that’s done right, and that’s what we have to hold ourselves accountable for.
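Editor’s note: the kind of audit Sarah describes can start very simply, by comparing outcome rates across groups. Below is a minimal illustrative sketch, not from the episode; the group labels and sample numbers are hypothetical, and the 0.8 threshold follows the common "four-fifths rule" heuristic used in employment-selection analysis.

```python
# Toy bias audit: compare selection rates across groups and apply
# the four-fifths rule (flag if min rate / max rate < 0.8).
# All data below is hypothetical, for illustration only.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs -> selection rate per group."""
    totals, picked = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        picked[group] = picked.get(group, 0) + (1 if selected else 0)
    return {g: picked[g] / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Return (ratio, passes): lowest rate divided by highest rate."""
    rates = selection_rates(decisions)
    ratio = min(rates.values()) / max(rates.values())
    return ratio, ratio >= 0.8

# Hypothetical hiring outcomes: group A selected 4 of 8, group B 2 of 8.
sample = ([("A", True)] * 4 + [("A", False)] * 4 +
          [("B", True)] * 2 + [("B", False)] * 6)
ratio, passes = four_fifths_check(sample)
# ratio = 0.25 / 0.5 = 0.5, so this sample would be flagged for review.
```

A check like this doesn’t prove or remove bias on its own, but it is the sort of simple, explainable question ("how are we using algorithms to make decisions today?") that Sarah suggests organizations start with.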

What is Your Leadership Habit for Success?

Jenn DeWall:

Sarah, I’ve really enjoyed our podcast. I also loved, even in closing, thinking about what you can do right now. One piece is just to see if you can understand AI well enough to describe it. It doesn’t have to be at the level of the back-end programming, because I wouldn’t be able to speak to that, but at least understand how it works. So be curious about how it can help, and also about the potential blind spots that could get in your way down the line if you set it and forget it. But Sarah, thank you so much for joining us on the podcast. This last question is not about an AI myth, but I do close every single podcast episode with one question: what is your leadership habit for success?

Sarah Alt:

Yeah. Thank you. It’s funny, because that tees up my answer as well. I believe it’s okay to not know all the answers. Be comfortable in your willingness to ask questions, with a genuine commitment to learning. Having answers is important, and having a vision is important, but so is being mindful and intentional about asking questions. I just find that when we’re working and collaborating together, we arrive at so many more aha moments and quality decisions when we’ve had a chance to ask some questions. It doesn’t necessarily mean we’re not committed, or that we’re going to flip-flop on our answers; we’re just going to make sure that we understand, and that we see it from others’ perspectives. And I think that’s where I’ve seen the most success with our leadership and with our teams when we’re working together.

Jenn DeWall:

Practice curiosity! And I think that’s important, because AI does feel intimidating, and there are so many nuances to it. I love even just the recognition that bias exists, which means no one person can know everything about it, because there’s likely a bias in there, or an AI myth or two. Sarah, thank you so much for sharing your knowledge, your wisdom, your insights, and everything else with our leaders. It was a joy to have you on the podcast today. Thank you so much.

Sarah Alt:

You’re welcome. Thank you.

Jenn DeWall:

Thank you so much for joining in on our podcast today with Sarah Alt. For more information about her organization, the Ethical AI Consortium, or about how you or your organization can start your commitment to the ethical use of algorithms, analytics, automation, and AI, you can connect with Sarah on LinkedIn and follow the EAIC’s progress at explainyourai.org. And of course, don’t forget to share this with your friends and help spread the news about AI. And if you enjoyed the podcast, don’t forget to leave us a review on your favorite podcast streaming service.