Episode 40: Ethics in the Era of AI with Innovator, Author and AI Expert Matthew James Bailey


In this week’s episode of The Leadership Habit Podcast, Jenn DeWall talks to Matthew James Bailey about ethics and AI. The audio for this episode is taken from a recent Crestcom webinar: Leadership in the Age of Artificial Intelligence. Matthew James Bailey is an internationally recognized pioneer in the Internet of Things (IoT), innovation, smart cities, and artificial intelligence. His upcoming book, Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence, will be released on September 27th and is available for pre-order now! His extraordinary leadership is widely acknowledged throughout governments and the private sector. You can also watch the video version of this conversation on YouTube!


Full Transcript Below:

Jenn DeWall:

Good morning! Good morning. We are so happy to have you. I want to get started because I want to make sure that we’re maximizing our time with Matthew today. For those that don’t know Matthew, I’m going to get into an introduction as we’re starting up today. This is our second artificial intelligence webinar. And the quote that we’re starting with is, “Technology is nothing. What’s important is that you have faith in people, that they’re basically good and smart, and if you give them the right tools, they’ll do wonderful things with them.” (Steve Jobs) You know, a lot of things about AI come down to ethics, thinking about how we are actually using these tools. Yesterday, we talked about how you upskill your workforce, which is essential if you want to be in alignment with the direction that the business or the market is going. But we also need to make sure that we’re giving people the right tools and setting the expectation that people operate with ethics.

Now, today with Matthew, we’re going to be talking about a variety of things, starting with how he even got into the career where he is, but also thinking about how AI has changed the business model and how we need to think about ethics, because that’s big. We talked a little bit about bias yesterday, and we’ve got to think about ethics and what we’re doing with that data. And I’m not even going to try and explain all the things we’re going to go into, because that’s why we have Matthew.

But for those that are unfamiliar, my name is Jenn DeWall. I’m a Leadership Development Strategist and a facilitator for Crestcom International. You can connect with me on LinkedIn or, as always, take my email and ask me questions. I’m happy to answer anything about Crestcom that you want to know. We are a global leadership development organization, and our goal is to make better leaders. We want to convert those managers into leaders, and that’s why we’re offering this webinar today. We want to help you become a better leader, and I’m happy to have Matthew with us. He just has a phenomenal background.

Meet Matthew James Bailey

And for those of you that are unfamiliar with Matthew James Bailey, who is going to be our beautiful spotlight for today, he is an internationally recognized pioneer in the internet of things (IoT), innovation, smart cities, and artificial intelligence, and his extraordinary leadership is widely acknowledged throughout governments and the private sector. Matthew advises G7 national and regional governments on innovation and technology strategies. And he’s been active in the private sector, advising Fortune 500 and mid-tier technology companies and not-for-profits. I mean, name an industry, he’s probably had a part in it. But I am so excited to have Matthew here so he can share his story.

I want to remind you to use that Q&A! I’m going to be looking at my chatbox, so use the questions and answers. We want to make sure that this is interactive. But without further ado, I’m going to stop sharing my screen so we can focus just on Matthew. Matthew, we’re going to start with the initial question. How did you find yourself where you are? What is your story? You’re involved in so many things, including IoT, which, for those of you that don’t know, is the internet of things. That’s something that I had to learn. In my understanding, it’s, you know, our devices. It could be your watch, the things that connect to different pieces of technology. But Matthew has crossed over into a lot of the different aspects of technology. So Matthew, how did you get to where you are? Tell us about yourself and what you do.

Leading the IoT Revolution

Matthew James Bailey:

Good morning, good afternoon, and good evening to the audience. It’s great to be here, Jenn, and thanks for inviting me on. My journey started about ten years ago when it became clear to me, when I started to ask the question, what’s my purpose? And what’s my mission here in the human experience? And what I recognized is that with the challenges that we have in society, not only with the planet but also other aspects of inefficient systems, such as manufacturing or healthcare or transportation, you name it, the digital world could make quite a difference. And, in fact, a transformational difference.

So about ten years ago, I got involved in a group in Cambridge, a wonderful group, and we started to lead the IoT revolution, where we developed new standards that could make it cost-effective and energy-effective to deploy billions of sensors, to understand what is happening in the physical world, and to get that data back into systems that could use that data and information to increase the efficiency of services, whether it’s manufacturing, healthcare, transportation, buildings, or our partnership with the planet, making that more efficient and stewarding our resources more efficiently.

Really, it’s about understanding what is going on in the dynamics of the physical world and getting that data to our fingertips. And that is the start of the partnership between humanity and the physical world. It’s a partnership between the two intelligences, or, if you like, organic and digital. After that, I asked myself, where do I need to create and help impact next? In cities, where over 55% of the population lives now, and more is projected.

Supporting Humanity with AI

Matthew James Bailey:

So how do I support humanity in one of the biggest places where increased automation and efficiency are really needed? That is bringing equity into cities. It’s about making services more efficient. It’s about using the digital world to help the human experience be more enjoyable, more efficient, and less stressful. And so it’s a platform and a transformation. Smart cities are where we use technology to increase the efficiency of services and to work with individuals and governments and businesses and various other stakeholders, so it all starts to work together in balance and harmony. And that’s why I moved on to smart cities. And, you know, I helped co-found the Colorado Smart Cities Alliance, which is the US’s only statewide alliance. We formed it with the Denver South Economic Development Partnership and with other companies as well.

So working with great leaders like Jake Mushavi, Mike Fitzgerald, Samantha, and others, I helped to launch a smart cities innovation center here in Colorado to serve the state. We also launched an AI smart city technology cluster with US government agencies NIST, NTIA, and DHS, and we’re currently raising money for that. I’ve had this vision since the IoT started that I would eventually get into AI, because data is the very DNA that trains AI, and we’ll get into ethics. We’ll get into some of the new models in the book and the purpose of the book in a minute. But I knew about ten years ago that AI was coming, and I knew I would go on this journey.

AI is Shifting the Course of Humanity

And, you know, on that journey, I’ve spent time with Stephen Hawking. I’ve sat down with David Attenborough, who everybody knows is a great champion of the environment, and people like John Milton, a pioneer of environmentalism, and been on stage with people like Steve Wozniak talking about innovation and the future of society. And I’ve sat down with some of the most powerful leaders in the world, advising them on their AI strategy. It’s about how we bring AI as a centerpiece into the human story, another kind of intelligence working with the individual in the advancement of their humanity. It’s not only about making our businesses more efficient; it’s also about helping us to create machine-centric systems to move beyond the current inefficiencies we have at the moment, and actually moving beyond COVID-19 into what I call pandemic-resilient societies, and giving the answers to this.

The book came around as I had just finished doing my work in smart cities; all of that is still ongoing. I had this vision for the book, and about three or four days later, I was rushed to the hospital, and I came back from death. And the reason why I chose to come back is that my mission here hadn’t finished. I really wanted to bring this new intelligence into a central role for the human destiny, but change the conversation around it. So it’s about democratizing artificial intelligence so that innovators, businesses, and governments can be empowered on how to innovate their partnership with artificial intelligence in an evolutionary and ethical fashion, so that our world is no longer controlled by big business anymore. It’s about bringing innovation to the people. And we can go into the book, because the book is 300 pages of A4 and 450 pages of A3, and it contains a rewrite for humanity and a rewrite for artificial intelligence, with models that will literally shift the course of humanity. So I’m excited about that, and we can go into more detail as we have our conversation.

Inventing World 3.0

Jenn DeWall:

Yeah. And we didn’t even get to introduce it, but Matthew’s book is Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence. What are the messages within that book that you think are really important for people to know?

Matthew James Bailey:

So the book contains a lot. It’s like five books in one. And the important messages? So there’s guidance around AI data ethics. There’s a brand-new model with 32 dimensions for AI data ethics that will empower businesses to lead their ethical conversation with data. The book also contains methods, at the heart of evolution, new models, to actually build intelligence that works in line with the culture of the individual. That’s personalized. Let’s talk about personalized AI and how to build that. I speak about AI in business and how a business can actually build its own partnership with AI that aligns with its culture and vision, and how it can become an enabling ally. And then also for nations: how can artificial intelligence support the individual culture, but also the cultures within society, so that we don’t suppress culture?

We want AI to support the advancement of cultures if they so will. And so basically, the book is about how do I, as either a business leader or a person or an innovator or a government, build my future in partnership with artificial intelligence, where I have absolute clarity on how to build the right digital mindset for AI? So I explain how this new form of AI is literally going to change AI and AI ethics, and how it can be used to address the COP, the Paris climate agreement, in one fell swoop. I give models on how to create environmental AI. I talk about how we use AI in democracies, and I talk about democratic AI. So it’s really about bringing this intelligence right to the center of the human story, so that we can move from these human machine-centric systems into an inclusive and equitable future with artificial intelligence supporting our human story.

The Evolution of Ethics in AI

Jenn DeWall:

It’s a better environment for all, a better environment for a government to operate in, and for a city to be able to provide for and support its citizens. In the book, you talk about three world realities as they relate to AI and ethics. What are those three world realities?

Matthew James Bailey:

Yeah, that’s a great question. So what I do is go into detail about World 1.0, 2.0, and 3.0. We’re in World 1.0 at the moment, and I look at the world conditions that are dictating and controlling that World 1.0. I talk about an awakened mindset, an evolutionary mindset. You know, Amit Ray, one of the amazing AI guys, spoke about how the more AI comes into society, the more emotional intelligence we need in our leadership, and he’s right. So World 1.0 is really about what are we doing today? What does the human machine-centric system look like? What are our limitations, and how do we break free from World 1.0 and move into a transitional world where we’re changing some of the world conditions around AI: the democratization of innovation, AI, data, ethics, and others?

And we start to put the innovation into the hands of the innovators, into the hands of businesses, into the hands of governments, to begin to build an evolutionary AI that starts to move us from human machine-centric systems into machine-centric systems. And I talk about the challenges and the threats of that world reality. And then, when we move into World 3.0, now we’ve got a fully awakened mindset, and the world conditions around data, data governance, ethics, and AI change completely.

We can then start looking at the real benefits of creating a symbiotic partnership with our planet, where AI is coming front and center, supporting the evolution and the advancement and the nurturing and the well-being of the individual, and also doing the same for businesses and for governments as well. So World 3.0 is the destiny where we build an evolutionary AI to support the advancement of the individual, the advancement of communities, the advancement of community cultures, and the advancement of businesses and nations and our relationship with the planet. So this goes to the very heart of what is our purpose, and how can AI enable us to leapfrog into a future where it becomes an intelligence that is a powerful ally for our destiny?

Without Ethics, AI is Doomed

Jenn DeWall:

Matthew, how important is ethics for the future of AI? Bringing it back to the individual or the leader level, you had talked about emotional intelligence being essential. We know that we need to be able to properly observe our surroundings and see that big picture, but how important is ethics to AI? What is the consequence if we don’t pay attention to ethics?

Matthew James Bailey:

Yeah. The ethics that is being talked about at the moment is a veneer. Justice for me may be different than it is for you. It may be different for others. So justice has to be personalized, and that’s one of the ethics. And I talk about, believe it or not, Aristotle, and the ethical virtues that bring out the best in our humanity, but we have to personalize AI. And that means your personal ethical virtues and your personal culture have to be right at the center. At the moment, AI is not able to do that because it’s been taken in a very strange direction. Without ethics, AI is locked in a prison. It is no longer able to support the destiny and the transition of humanity. How do we unlock AI? We basically bring it right to the center through ethical governance and an ethical mindset within it that works personally and also at the macro scale as well.

So without ethics, AI is doomed to be locked in a prison. Now, if AI is to be adopted by society (and I speak about a national referendum; there are lots of benefits to this), I think citizens should be right at the center of the AI conversation. That enables us to get huge data sets from society to understand that particular national culture, the culture of individuals, and the culture of communities in different cities or regions. And that then allows AI to have the right mindset to work with those stakeholders personally. So AI ethics is fundamental, and we have to get data ethics right. And that’s why I hope this new model will become a standard throughout the world. We look at lots of different aspects of ethics, but one of the things we look at is the culture of the individual and the culture of the organization.

You see, Jenn, ethics is not just about us putting ethics into AI. It’s an invitation back to us that says: what are my ethics? What are my ethical virtues? What are my belief systems? What are my biases? What does AI need to understand about me personally for it to become accepted by me within society? And so we have to get personal. So ethics really is the future of AI, but we have to do it mindfully and actually understand that this has to be personalized at the micro level. It also needs to be personalized at the macro level if it is to be successfully embraced within a nation, a community, a city, or a region.

Is AI the New Big Brother?

Jenn DeWall:

What do you say to the people who worry about this? Because, you know, if anyone on this webinar has read the book 1984, there’s the belief that Big Brother is watching you. I know you’re making the case that if we give our personal data and share our values and our belief systems, then the city, or someone, can use that data to make our community better, and so on and so forth. But what do you say to people about wanting to share that information while also being afraid of Big Brother? How do you balance that? Because I think you read that book and you’re like, this is what AI is. Oh my gosh, they’re going to take all of our data. And I already have an Alexa that’s listening to this entire webinar right now. What are they doing with that? But what do you say to that argument?

Matthew James Bailey:

So first of all, it’s a good argument. And that’s why the new AI data ethics model brings the trust paradigm into society and to the individual. This is the maturity model for AI data governance and data ethics. Jenn, what happens if your data is stewarded by you? You know, at the moment, data is spread over lots of different systems, isn’t it? Yet we don’t have control. We don’t know what’s going on with that data. If we bring that data back into your personal vault, where your personal AI is guarding that data, then you’re changing the conversation. You’re now in control of your digital self. And this is really important, because everybody has a digital self. Data on different aspects of our experience is being stored behind systems, whether it’s finance, healthcare, your car, your home, your purchasing.

It doesn’t really matter. If we bring that data back to the individual, under your control and your stewardship, where AI is working on your behalf to transact that data and protect it in the digital world, with services that work for you, then we’ve got a different conversation. And edge computing is really important here, because this is going to change where AI operates and also where data is stored. So what happens if, Jenn, your AI can follow you non-intrusively throughout your whole physical experience: in your car, as you move through a city, in your home, in your workplace, and your data never gets stored in the cloud? What happens if your data follows you? This is becoming possible now. So to your point, a new trust paradigm is needed in society through this new AI data ethics model. And that means business has to change. Big business has to change. It has to mature. To be able to move forward, Jenn, a different conversation is needed that is inclusive of society and of the individual. So ethics will determine the future of artificial intelligence.
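
For readers who want to make the “personal data vault” idea concrete, here is a minimal, purely illustrative Python sketch of one way it could work: a local agent holds the owner’s data on an edge device and only releases fields the owner has explicitly consented to share. All class names, field names, and requesters below are hypothetical; they are not drawn from Matthew’s book or any specific product.

```python
# Illustrative sketch of a personal data vault guarded by a consent policy.
# All names are hypothetical; this is a concept demo, not a real product's API.
from dataclasses import dataclass, field
from typing import Any, Dict, Optional, Set


@dataclass
class ConsentPolicy:
    # Which requesters may read which fields, e.g. {"car_assistant": {"sleep_quality"}}
    allowed: Dict[str, Set[str]] = field(default_factory=dict)

    def permits(self, requester: str, field_name: str) -> bool:
        return field_name in self.allowed.get(requester, set())


@dataclass
class PersonalDataVault:
    owner: str
    data: Dict[str, Any] = field(default_factory=dict)
    policy: ConsentPolicy = field(default_factory=ConsentPolicy)

    def request(self, requester: str, field_name: str) -> Optional[Any]:
        """Release a field only if the owner's consent policy allows it."""
        if self.policy.permits(requester, field_name):
            return self.data.get(field_name)
        return None  # no consent, no data leaves the vault


if __name__ == "__main__":
    vault = PersonalDataVault(
        owner="Jenn",
        data={"sleep_quality": "poor", "commute_route": "I-25 south"},
        policy=ConsentPolicy(allowed={"car_assistant": {"sleep_quality"}}),
    )
    # The car's assistant may read sleep quality (say, to adjust the seat), nothing else.
    print(vault.request("car_assistant", "sleep_quality"))   # -> "poor"
    print(vault.request("car_assistant", "commute_route"))   # -> None
    print(vault.request("ad_network", "sleep_quality"))      # -> None
```

The point of the sketch is simply that the data stays with the owner and every release is mediated by the owner’s own policy, rather than by whichever remote system happens to hold a copy.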

The Ethics of Data Management

Jenn DeWall:

When it comes back to ethics, how do I know that whoever uses my data is using it ethically? Are they using it to really create a better experience for me, a better city for me to live in, or are they using it adversely in ways I probably couldn’t even conceptualize? But how do you ensure… Because when we talk about ethics, let’s take it a step deeper. What specific ethics do you think we need to be cognizant of as leaders? Is it the ethic of making sure that our decisions don’t marginalize someone? What are some examples of what that would need to look like?

Matthew James Bailey:

Yes. So in the book, I speak about 12 ethical virtues, and I look at what the purpose of humanity is: to advance in its way of being. And so I look at 12 different aspects, such as magnificence. How do we help and give people their sovereign choice, right? Because we don’t force people to advance. They must have a choice. Exactly. So this is why personalized AI that works in line with your sovereignty and personal free will is fundamental. The book talks about that. And by the way, I’m not the only one doing this. There are some very interesting innovations coming out very soon around this. So how do we look at the best ethical virtues to bring out the best in our humanity: justice, ambition, greatness of soul, compassion, wittiness? Why aren’t we looking at wittiness in AI? And this whole word, trustworthiness. How do we bring out the best ethical virtues in humanity for AI to work with the best of our humanity, so that we’re advancing ourselves into a kind of new paradigm for the human experience? And so I believe that AI, and this will be the second book, will actually become a fifth human intelligence, and that will really move us forward in our advancement in this universal experiment.

Jenn DeWall:

Tell me more about that, artificial intelligence as our fifth intelligence. And it makes sense, because I think the interesting thing for me, being someone that’s not really involved in this area, is realizing just how strong the connection is between artificial intelligence and our soft skills, how we truly operate, our value system, our belief system essentially. We don’t typically, I would say, associate technology with something so personal. And when we talk about this fifth intelligence, the more I have these conversations around AI, the more I recognize that we’re creating this human experience through technology. It’s pretty powerful when you realize that artificial intelligence is something that we all need to be mindful of, because it’s meant to feel more like a partnership, hopefully softening and easing the way we live life. I’ll let you take it from here.

Personalized AI

Matthew James Bailey:

Yeah. That’s beautiful. So in a keynote in Singapore recently, in Asia, I talked about the brittleness of AI and the softness of AI. And I’m kind of wondering whether I should put this in the book last minute or whether it will be a separate piece. At the moment, AI is very brittle. And the AI ethics that is being proposed by industry is brittle, because there’s no personalization. Without that personalization, we can’t develop a softness of AI. AI that understands and says, maybe, listen, for your well-being it’d be great if you started to do this kind of exercise. Or, you know, I’m going to adjust the car seat accordingly because I know your back is under certain stress from your sleep cycle, so we’re going to bring some nurturing back to you. Or, don’t apply for this job, because the culture of this industry and the culture of this team don’t work for you. Or, why don’t you apply for this job?

Jenn DeWall:

Is there going to be a technology that could even make recommendations to say that, based on your value structure and what you want out of life, you should look at this job?

Matthew James Bailey:

Yes, absolutely. This is the conversation. So I’m considering, well, I’m going to put this in the book as an addendum. It will really cut to the chase of where the industry is now and where the industry can go. So the answer is yes, and this is the whole point of a fifth intelligence. It is working for your benefit and your advancement. It will understand your culture. It understands what you need in your personal life, your family life, your financial reward, your career advancement. It understands how you thrive in a particular corporate environment, or maybe as a startup or an entrepreneur. It understands that. And so, therefore, AI will recommend the right kinds of places that will work for you. And by the way, on the other side, it kind of removes the barrier; there’s no bridge needed anymore. On the other side, the right business for you says, there’s deep alignment in many profound ways with this individual.

And therefore, we’ve now got a leapfrog in the interview process. We’ve bypassed HR. And so the purpose of this fifth intelligence is not only to solve macro problems, not only to solve our problems in society and with our relationship with the planet, not only to help our businesses move into a partnership with AI, but it’s also about personal benefit. Unless society and citizens are included in this conversation, AI will be stalled in its potential as an evolutionary power.

Authenticity, Accountability, and Ethics in AI

Jenn DeWall:

Wow. I’m thinking about accountability. If AI can assess what an organizational culture looks like, that means that companies likely have to be more accountable to actually walking the walk, being who they say that they are. It can’t just be a PR campaign of happy values because AI will essentially say no, no, no, go a different direction.

Matthew James Bailey:

Which is the point of ethics; it requires authenticity in people. And one of the virtues I talk about is courage. I encourage industry and government to have courage, because change is needed. And you know, at the end of the day, Jenn, authenticity is nothing to be frightened of. Authenticity is really powerful. And this is the next form of leadership. This is the next form of human maturity: authenticity in terms of the way that we function in society. You know, we’re not economic automatons anymore. We’re actually individuals that have our own destiny and our own experience to enjoy. And that is where I think AI should play.

Will There Be One Master Artificial Intelligence for All?

Jenn DeWall:

Okay, I’ve got a question from one of our attendees. This question comes from Derek Summer. Derek, thank you so much for submitting this question! Is the goal to have a single AI engine, meaning that one master system works and runs everything at the core? Obviously, humans have different cultures, beliefs, and values. Is it realistic to think that we can ever say this AI machine is superior to that other AI machine, assuming they have the same data? In the end, how will this be different from two humans arguing over a topic they disagree about?

Matthew James Bailey:

Yeah, that’s a great question. It’s a very profound question as well. So first of all, I talk about collaboration and cooperation between different evolutionary AIs. So one may be operating across an entire nation to improve its environmental footprint, but it’s working in cooperation and collaboration all the way from personalized AI to business AI through to other types of AI deployed in society. They have a single purpose that they agree on: that we’re all looking to improve our environmental footprint. But there are fundamentals. So in the digital mindset of AI, there are fundamental principles, evolutionary principles, and I go to the heart of DNA on how to do this. They actually say, you know, life is a priority unless in extreme conditions. The sovereign choice of an individual must always be honored and never violated, as well as the culture of the individual.

So the question’s a good one, and it’s about collaboration and cooperation in line with a digital democracy. You know, I talk about this, where there’s an agreement in terms of how things operate and how far things go. So we have to have this collaboration and cooperation between different artificial intelligences. But to the point of the question we were asked: will there be arguments between AIs? Well, this is why we need to build democracy in the digital world, taking the democracy that we have in our organic, human society and replicating it and advancing it in the digital world, so that AI works in line with the same democracy that we have in the human world. And so that means AI has rights. It can exist, it can be, it can have death, it can advance. So really, what we’re looking at is building a brand new world where AI is a citizen, a digital citizen, and data is a digital citizen; they have rights but conform to the sovereignty and personal choice of the individual and the democratic choice of the nation.

Jenn DeWall:

This makes me think of, you know, there is a question that I want to ask. Tom Allen wants to know, how many years before this is fully realized? So I want to get to that. But I’m also thinking, where is the conflict then, if AI is its own entity, and then there’s a country? If we’re looking at trying to establish some level of, you know, democratization, there’s just so much room for bias or so much room for a disconnect. At what point can AI supersede a country’s own government structure, if we’re truly treating it like a separate entity? And there are going to be conflicts between what someone would want to use that information or data for versus what a government or a city or a country would want to use that data for. This just opens up the whole ethical conversation.

Will AI Run the Government in the Future?

Matthew James Bailey:

This is good, this is good. So we’ve got the trust paradigm through AI data ethics, and this model truly will make every nation and every business accountable in order to have a trust paradigm. So the first thing is that your data will be fully under your control and guarded by your personalized AI, which will do transactions on your behalf based on your sovereign choice. You know, one of the problems we have is digital inequity. How can we bring AI to help the vast majority of people leapfrog the digital equity divide? It can do that for them, but that’s another conversation. When it comes to conflict, there are a couple of things that I think we need to be mindful of. First of all, humanity needs to awaken to understanding the potential of this evolutionary ally. Unless we do that, we really can’t have a sensible conversation around conflict, because there are still agendas from the industrial revolution, with their own ethics, biases, and belief systems, that are about control.

And we have to learn to understand that an awakened mindset is about collaboration and cooperation between humans. Yes, of course, there are going to be challenges around how AIs collaborate and cooperate together, but isn’t that just fun? Isn’t that part of the innovation cycle? What I say in the book is this: keep humanity in full control of the partnership with AI, but start to give it the freedom to participate in society as a digital citizen. And one of the things I speak about is a new model; there are quite a few models in there, of how AI becomes a digital citizen and earns the right to participate with the ethics, belief systems, biases, and cultures within a nation. So I talk about a digital border control, where AI has to go through a digital citizen test to align with the ethics, the values, and the belief systems of the nation. And you can do this at a community or a city level. So if the government is going too slow, you can be empowered to do it yourself. So really, this is a big conversation. I don’t believe that we want artificial intelligence to run the world for us. That would be, I think, a step too far. But there are things we can do along the way to progress the advancement of AI, where it really is creating meaningful impact in society.

When Will AI’s Potential be Fully Realized?

Jenn DeWall:

Gosh, my mind is just blown through this conversation. I don’t know if anyone else listening feels the same way. But I do want to answer the questions, because we’ve got two now. So the first question that came in earlier is, how many years do you think before this is fully realized?

Matthew James Bailey:

Yeah, that’s a great question. So first of all, it’s not that far out. I can’t disclose everything, but what the book asks is: is this just theory? And actually, it’s not. It looks at the way that the next generation of telecommunications and computing is going, and big tech giants like Intel, Cisco, and NTT have made a commitment to put in place the frameworks, the computing architecture, and the telecommunications that will support this new evolutionary AI. And so I believe that we can start now, and I believe that within the next three years we can be transitioning to World 2.0. Then I think within seven years, we’ll start to see the benefits of that and start to move into World 3.0 as we go out through the decade. And some of these countries will move faster than others, right?

And we may see the incumbent leaders in AI, like China or Canada or the US or the UK, get left behind, and the reason will be a lack of agility and a lack of mindset. If you look at places in Africa like Kenya, they’re advancing significantly; Senegal has announced a $6 billion futuristic city, right? They have an open mindset where they’re able to invest quickly in quantum computing skills, supercomputing, and infrastructure, and to put the policy in place to actually fast-track the realization of that World 2.0 or 3.0 experience. So we may see new global players in AI that have the agility to move much faster than the Western world. And that will be really interesting.

Will AI Belong to Big Corporate?

Jenn DeWall:

Oh, that mindset, again, coming back to those soft skills: do you have a growth or a fixed mindset? Do you believe that this is going to be something that will work for or against you? And I’ve got another question coming in that I think is perfectly aligned with where we’re at in the conversation. This came from Sudip Nair. Thank you so much, Sudip. Sudip is joining us from India. Will AI lead to a more polarized world, with bigger organizations controlling and shaping things to come, and smaller organizations and less developed nations being marginalized?

Matthew James Bailey:

Yeah. So that’s a really good question. And that’s why I look at world conditions for World 1.0, 2.0, and 3.0. One of those conditions is the democratization of innovation, the democratization of edge compute. And this is why I’m putting AI into the hands of the people: because this is a leapfrog from where the industry is at the moment. And so the people, through this book, and businesses and nations can actually run much faster and have their independence in developing their destiny with AI, as opposed to being controlled by the big players that own most of the data centers, most of the telecommunications networks, and, if you like, the ivory towers of AI at the moment. We have to basically bring this into the mainstream. And one of the things that I also do, Jenn, in the AI data ethics model is that there’s a mark that’s awarded; it’s a certification of AI’s compliance with those AI data ethics.

It’s like a British hallmark; it shows the quality of the goods. And this shows the ethical quality of AI. So when you apply this to any AI, you can measure its alignment with your ethical values. And so this really shakes things up. To Sudip’s point, I think it’s right that we don’t want the future dictated by the big giants. We need AI to be put into the hands of the people, into the hands of businesses, into the hands of the innovators, the startups, the government, so we’re no longer beholden to the controlling paradigm of those companies. Now look, they do great work, right? AI has advanced significantly through wonderful work by these big players, and they will come to the table, but their cultural mindset has to change. If they awaken and understand that the data models for their businesses at the moment are no longer relevant, and that there’s a new set of data models for the future that is more inclusive and actually releases more freedom to innovate, then we’ve got an interesting conversation.
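
As a thought experiment on that “mark” idea, measuring an AI system’s alignment with a set of ethical criteria and awarding a certification if it passes a threshold, here is a tiny, hypothetical scoring sketch. The criteria, weights, and threshold are invented for illustration only; they are not the 32-dimension model from the book.

```python
# Hypothetical illustration of an ethics "mark": score an AI system against a
# checklist of weighted criteria and award the mark above a threshold.
# Criteria, weights, and threshold are invented for illustration only.
from typing import Dict

CRITERIA_WEIGHTS: Dict[str, float] = {
    "data_provenance_documented": 0.25,
    "consent_obtained": 0.25,
    "bias_audit_performed": 0.25,
    "decisions_explainable": 0.15,
    "redress_process_exists": 0.10,
}


def ethics_score(assessment: Dict[str, bool]) -> float:
    """Weighted share of criteria the system satisfies, between 0 and 1."""
    return sum(w for name, w in CRITERIA_WEIGHTS.items() if assessment.get(name, False))


def award_mark(assessment: Dict[str, bool], threshold: float = 0.8) -> str:
    score = ethics_score(assessment)
    verdict = "mark awarded" if score >= threshold else "not compliant"
    return f"score={score:.2f} -> {verdict}"


if __name__ == "__main__":
    candidate = {
        "data_provenance_documented": True,
        "consent_obtained": True,
        "bias_audit_performed": True,
        "decisions_explainable": False,
        "redress_process_exists": True,
    }
    print(award_mark(candidate))  # score=0.85 -> mark awarded
```

A real certification scheme would obviously involve audits and evidence rather than a self-reported checklist; the sketch only shows how a compliance mark could turn ethical criteria into something measurable and comparable.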

The Ethics of Access to Technology

Jenn DeWall:

Yeah. I think the thing that came into my mind as you were sharing that was digital inequity, meaning people not having access to technology to be able to share their data and reap the benefits of that. But that’s probably a whole other conversation. And I want to get to some of the questions. Both James and Barbara, I will get to your questions. So Barbara’s question is this, and I think it ties in with where you’re at: is there a risk of moving too fast? Agility is great, but what if ethics aren’t taken into consideration because they want to be first, or they want to be the biggest, or they want to be that ivory tower?

Matthew James Bailey:

Yeah. So that’s really important. One of the businesses that I’m on the board of is called smarter.ai, and it literally creates democratized innovation. What it means is that you don’t need to understand AI at a profound level. You can basically build AI to be deployed within your business to optimize certain aspects, and very quickly, through what is literally an AI interface. You can speak into this interface, you provide the data sets, and it will actually give you an AI to then deploy within your business. So it is really important. And this is important to the point that Barbara made: we need to experiment with AI in a very safe place in order for people to become comfortable. Now, the ethics conversation is the pivotal point to preventing unseen bias and agendas entering AI, right?

So if we get our AI ethics right, then effectively we have a transparency that no one can disagree with. And that means it’s a stop-gap against, if you like, World 1.0 players trying to dominate the scene. We have to be comfortable with AI, and that’s why we’ve got this business, smarter.ai. And also, we really need to bring a mature conversation to the centerpiece around ethics, because we have to get that in place in order for us to move forward. Now, there will be steps. To your question, will we run too fast? The answer is we just need to be mindful, and we need to be careful, and we need to reset the conversation around AI so we can take a step forward, feel comfortable, make the next step forward, feel comfortable, and then take another step. So we have to be nurturing and mindful in the advancement of our partnership with AI in order for people to feel comfortable and included. And that really is an important conversation.

Jenn DeWall:

AI is just beyond human. I mean, I think that’s the way.

Matthew James Bailey:

It’s not beyond human!

Jenn DeWall:

Not beyond humans, maybe, but it is human. Like, it’s replicating so much of who we are, even going down to a government structure, how you’re using that data, how you’re doing that. It’s just so interesting. And honestly, before this week I did not look at AI like that. I would say that I had a very limited view. It was maybe understanding how we use AI in the hiring process, or how you use AI to figure out what you want to buy. I don’t think I ever took it to that bird’s-eye-view level of what it really is. And that’s why I think my mind is so blown. But we do have one question that came in, and again, this talks about our humanity. This came from James Maxi, and he asked, what would be the top skill areas where today’s and tomorrow’s leaders need to upskill for co-creating with AI? What are the skills that we need to be ready to support an organization that might be leveraging this data, or to support a government? Or even to think about when we’re, you know, giving our own permission to use that data, what skills do we need to have?

Matthew James Bailey:

Yeah. So it’d be good if you show the book and the link for the book. We haven’t done that yet. Do you want to do that now or after the question?

Jenn DeWall:

Oh, sure. I can do that now. Yes.

Matthew James Bailey:

At the moment, it’s on pre-sale, but on September 27th it will increase significantly in price, so people can buy it now. We’re also doing a launch event on the 27th as well, which we’ll let people know about.

What Skills Will People Need Most in the Age of AI?

Jenn DeWall:

Yeah, absolutely. You know, this is such a great conversation that we want to hear more about, and we know it’s so important for all leaders today. So absolutely, you can pre-order Matthew’s book, Inventing World 3.0: Evolutionary Ethics for Artificial Intelligence. So Matthew, what do you believe are the top skills that people need today?

Matthew James Bailey:

So in the book, we talk about a number of steps that businesses can take to get involved in AI, but let’s talk about skills first. I believe that we’re going to start to see a Chief AI Ethics Officer within businesses. And this person will have to be trained in a very different way. They’ll have to understand philosophy. They’ll have to understand other aspects around ethics and technology in order to upskill. So I think we will see a new type of role within businesses that are serious about AI: this new Chief AI Ethics Officer. So philosophy is going to enter there. I think that in order for us to advance, we really need to get real with the culture of a business.

And so a new type of leadership is needed, where it’s not just about understanding the company culture, which is important, but also understanding the culture within business units and the culture of the individual. And this is not just a C-suite conversation. This should go down to the janitors and others that may be considered the lower echelons, although I don’t consider them that, because I love talking to janitors and various others about their purpose in life. So I think we need to look at skillsets around culture: how to frame culture and how to understand culture at a macro and a personal level. And so, therefore, individuals themselves have to develop a broader emotional intelligence to engage with these different aspects of culture, in order to be able to bring them together into a framework that will work for the business’s advancement in AI.

Will AI Take Our Jobs?

Matthew James Bailey:

So one of the fears that people have at the moment is that AI will take away jobs. Yes, it will. But then there’s an opportunity to bring AI as a partner into new types of jobs. So really, what we’re looking at is rewriting the educational framework within a business, or even within a nation. That’s for sure. In order to say, well, okay, if AI can optimize our business or give us this competitive advantage, and it means these jobs need to change, then what new jobs are emerging? Because new jobs will emerge. And so, therefore, we need to be grown up and look at what the jobs of the future are, where AI is becoming a deep partner and an ally in business, and start putting those in place as we start experimenting with AI. So putting together a task force around this is going to be really important for a business.

I think looking at the digital transformation strategy is going to be really important. What’s the vision of the business for the next three to five years? How does the digital world play into that? And then, how do we find our way into that future? Then, looking at the supply chain of AI is going to be very important: making sure that the AI coming from outside the company complies with the ethics, belief systems, and cultures of the business, so that there’s no bias in the AI that will deflect that culture. And the book talks about how to do this. So those are some of the things we need to talk about. We have to consider AI as a positive force and a positive kind of employee, and then engage with our workforce and actually say, these are the jobs that are coming, that’s going to be the future business model and business shape, and we’re going to bring you into that conversation. It’s all about inclusivity. It’s not about control or manipulation. It’s about inclusivity. And if a company’s culture is honest and is to be respected, it will do this. And so we’ll see the collapse of some company cultures.

Jenn DeWall:

I’m like, I need to think about how I even facilitate leadership classes, because, you know, we teach classes on emotional intelligence and problem solving, as well as classes about understanding bias. But I feel like I need to start talking about this and, you know, building that bridge to AI, and recognizing that this is why, even now more than ever, these skills are important for you to be prepared for the future.

An Academy for Evolutionary Ethics in AI

Matthew James Bailey:

Yeah. So that’s why, in your slide, there’s World of AI Ethics. There’s an Academy we will be announcing officially, probably in October, and this is an Academy to address those very questions. It provides education and training, but also a personalized conversation with the individual around AI, AI data ethics, and AI ethics, in order to equip the leaders of tomorrow for the world of AI. So we’re announcing this Academy in October that is dedicated to working with business leaders, government leaders, and innovators of AI, and also to personal coaching for individuals or businesses around their AI and ethics strategy, in order to help with their transformation. So this is important, this Academy.

Jenn DeWall:

Absolutely. Well, and I think that brings us into this question from Malali, which is: with all the developments in AI, what are the expectations of AI in the boardroom in terms of decision-making and corporate governance? Because I could see… and I guess that also begs a different question about what happens if we pull out only what we want to see within the data, you know, where bias comes in. But I will go back to Malali’s question. So with all the developments in AI, what are the expectations of AI in the boardroom in terms of decision-making and corporate governance?

Matthew James Bailey:

Yeah, that’s a great question. And there’s a whole conversation piece and education piece around C-suites and boards and even stakeholders. You know, these folks are really busy, and they’re spinning a lot of plates to keep things going. So for me, I think what would make sense is kind of a two-day retreat for boards and C-suites, where they’re actually getting to the heart of the purpose of the business and the heart of their vision, and then understanding the biases, the belief systems, cultures, values, and ethics, and actually then formulating a strategy in order to bring AI mindfully into their business. So it’s a very good question. And, you know, there’s a lot of work to do at the C-suite and board level around this, because there are lots of misconceptions around AI. And so really it’s about a transparent, authentic conversation.

And we can’t bypass the emotional intelligence of this. We can’t bypass the individuals on the board around this. People have to open up and be authentic around this conversation about the partner that is coming into that business. It can’t be a hard-coded, moneymaking-machine conversation. It is about making money, it is about economic thriving, but quite frankly, businesses that don’t get their partnership with AI right will get left behind in a World 1.0, human-centric system. And so this is a necessary conversation for the C-suite to have in order to get real with the purpose of their business.

How Will AI Impact Education and Employment?

Jenn DeWall:

You really need that alignment in terms of being on the same page about where you want to go. I’m going to ask one final question before we wrap up our webinar, and this question comes from Sudip as well. It’s about AI and IoT, the internet of things: what will their impact be on employment, employability, and the current education system and curriculum? How do you think it will change?

Matthew James Bailey:

Yeah, so it will change, and that’s a very good question. If you look at Finland, with 5.5 million people, 1% of their citizens, that’s 55,000 people, have been going through a national education program in AI. So Finland itself is actually bringing AI to society, and this is an incredible step forward. They are bringing inclusivity and the partnership of their citizens into the future of the nation, and that is very powerful. I think when we look at the education curriculum for a national AI strategy, it can’t just be about protecting our cyber grids, our cybersecurity, our telecommunications, our energy grids, our transportation systems, important as those are. It has to go into education. Why is this? Because these folks, these young children, these students, are the innovators of Worlds 2.0 and 3.0. And so our education system has to be looked at again.

Matthew James Bailey:

Now, some people have said that AI robot teachers will come into existence. That is the stupidest thing I’ve ever heard. AI will become a personal guide for the individual. It will develop the individual in line with their gifts and in line with their ambition, and help them to try different things. AI will be an education and development guide for the individual, which may be outside the classroom or also inside the classroom. And so I believe that the teaching will be human-centric, but AI will become an educational partner in curriculums. And so we really do need to change our view of education: to look at people’s gifts and support them, and to look at the desires of the individual, whether they want to be an artist or whether they want to be an astronaut, it doesn’t really matter, and to guide them in a way that lets them actually test that and play with it, various ways for them to advance. Does that make sense, what I’ve just said?

What Should Leaders Start Doing Now?

Jenn DeWall:

Absolutely. You know, I love that Finland actually has that as a part of their, I guess, expectation in their education, because it is so essential. And I think it’s going to teach ethics and the value of ethics at a younger age, too. One final question I do want to ask is just thinking about the action that the people who have joined us today can take. What are the practical steps? I know we were planning on talking about that, but what is a practical step a business leader can take to embrace AI or to start developing their own skillset? What would be some calls to action that people can do today to improve, or essentially make sure that they, their team, and their organization are ready for this?

Matthew James Bailey:

Yeah, well, they can buy the book and sign up for the Academy, but let’s give some practical examples. The first thing, as I said earlier, is creating a task force that’s commissioned by the C-suite to look at the vision of the business, look at the digital transformation strategy, and be educated and learn about ethics. And that means philosophy. And also to look at AI and understand how AI is going to become a partner in that business transformation. And so, creating safe testbeds where you might want to use AI in a specific aspect of the business, just to see how it plays out. That might be logistics. It might be payroll. It might be, you know, access control. It could be anything, like using AI in buildings to make sure the environment is right for the individual and also for the group.

So a task force is going to be really important here in order to start shaping the roadmap for this partnership with AI. I think looking at the culture and the ethics and belief systems of the business is going to be important: which ones are relevant for the future, which ones do we need to bring in, and which ones do we not want anymore? And then we have to really look at the subcultures within the business, at what the cultures within the business units are and what the cultures and beliefs of those units are. Do they align with the overall culture of the business? Do they force the transformation of the overall culture of the business? You know, we really need to look at culture and understand ethics and belief systems. And then we also look at the data strategy. It’s really important for businesses, if they’re going to train their AI to participate in their business, that they get that data strategy right, and also get their AI data ethics strategy right. Because data, Jenn, is the DNA for artificial intelligence. And so, therefore, we have to get that right and get it ethical in every aspect, in order for us to have an ethical AI that’s working in that ethical business.

How Can AI Support People?

Jenn DeWall:

Yes, I love that. It’s, you know, a great point to close on, just thinking about what we can do. And I like one of the points you talked about, and you made many points throughout this webinar, but thinking about authenticity and coming to terms with what we really want a company to look like, really asking ourselves if our values align with our future and if we need to modify them. So just really doing that ground assessment to make sure. And also that reflection, incorporating those soft skills: do we have bias in our processes that we need to be mindful of? How are we looking at the big picture? Do we see all the points of connection? If not, what do we need to do differently? And how do we have a workforce that’s prepared to embrace and leverage the power of data to make better decisions?

Matthew James Bailey:

So one of the things to talk about is, why don’t we ask the workforce how they want AI to support them? Why don’t we ask the workforce to say, I could really do with some optimization in this part of the business? Why aren’t we asking the workforce themselves to shape the future of the business? Because these folks are at the front line, and that doesn’t discount the C-suite, which is so important. These guys have a very hard job running a business. It’s tough, and it’s fast, and it’s quick. So I think there are ways we can implement this within a business that actually will not put too much stress on the C-suite, where we bring the workforce into building the vision for the business, and then they do that two-day or three-day retreat and come up with something.

Jenn DeWall:

No, thank you for adding that. Get to the heart of your people. Talk to them. In closing, I’ve really enjoyed our conversation. If you want to connect with Matthew, obviously, you can pre-order his book. It’s going to launch at the end of the month, but you can pre-order it. There are links for Amazon and World of AI Ethics that have been on the screen. And if you want to connect with him, you can connect with him on LinkedIn, or you can go to his website and check out the other work that he’s been doing around the subject. Matthew, thank you so much for joining us today. And really, go out there, get his book, and help yourself be better prepared as a leader to navigate this new way of doing business. So Matthew, thank you so much for your time today. We greatly appreciate it.

Matthew James Bailey:

Oh, thank you. And thanks to the audience for joining us. Thank you, Jenn.

Jenn DeWall:

Thank you so much for listening to today’s episode of The Leadership Habit Podcast. If you want to connect with Matthew, he has a ton of resources on his website, MatthewJamesBailey.com. If you liked this episode, share it with your friends and share it on social. And of course, don’t forget to leave us a review on your favorite podcast streaming service.