AI@SCALE requires trust and adoption


In this early part of 2022, we are seeing that AI is no longer just a matter of innovation but of core business transformation. To unleash the impact of cognitive solutions through transformational enterprise-grade initiatives, businesses are reaffirming both their interest and their main blocker: AI adoption.

Adoption, and even more so appropriation, of cognitive computing by humans relies on an appropriate level of understanding of what AI1 is, the ability of every user to trust these advanced systems and, lastly, the will to decide objectively when to rely on humans, machines or both. But first and foremost, let's demystify three misconceptions and their respective impacts on successfully deploying the cognitive enterprise at scale.

Deconstruct misconceptions about AI

Data AND Learning  

Firstly, do not be obsessed only with data; think about learning processes. The challenge of AI is to teach a system something in a given context and for a given purpose. This learning is driven by humans, who thereby transfer knowledge, know-how and interpersonal skills to a so-called "intelligent" system. It is about Data AND Learning, which is the real promise of AI: the more it learns and the more it is used, the better it performs.

Algorithms, mathematics… AND cognitive sciences  

Secondly, do not think only in terms of algorithms, statistics and mathematics, but also in terms of cognitive sciences. AI is an eminently human subject. It covers cognitive dimensions through six capabilities: language, voice, vision, complex reasoning, knowledge management and empathy. Big transformational AI & Data projects require not only data scientists but also a wide range of business and industry experts: sociologists, semantics specialists, psychologists and more. Multidisciplinarity is a must, and you need more than ever to develop a broad spectrum of skills within your teams.

Technology AND Humans  

Finally, let's not make this only a technological topic. It is a subject where the priority is change management, to ensure the adoption and appropriation of the systems we make available to users. We are facing a new form of collaboration that requires new skills and new behaviours: artificial intelligence is a technology that changes everything for everyone. It can influence our decision-making at every stage and in any sector. Often reduced to its technological dimension, artificial intelligence is above all a human revolution, not a simple trend.

While cognitive systems have undeniable qualities in terms of hard skills, we must value our human intelligence (interpersonal skills, empathy, team spirit...) to create a win-win relationship and make the best possible decisions: "AI will be what we make of it"2. Adoption and appropriation are therefore the keys to success, and it is essential to be prepared to surf the wave of AI rather than be overwhelmed by it. To do that, soft skills such as critical and horizontal thinking, teamwork and "free will" are essential to interact with these systems in the best possible way: you will know why, when and how to use them – or not!

Objectivise the use of AI  

The inexorable rise of AI has led us to think in a binary way: either humans should be at the centre of everything, all the time, or machines should now take over, since they are overtaking us in many respects.

The truth, however, requires much more discernment. The challenge is to be objective and make informed decisions: why, when and how to optimise decision-making by minimising cognitive biases and maximising the intrinsic strengths of the human and/or the machine. We must accept that in some cases AI will be privileged, in others humans will have to decide alone, and in still others humans and machines will have no choice but to collaborate. But for this approach to work, humans must be able to trust the outcomes of cognitive systems.

Implement trust at the core  

Diversity & Inclusion  

If we are not inclusive and do not ensure fair diversity among the groups of individuals who design AI training processes, and then ensure that those systems learn and improve, we will, by design, continue to create biases. We need to make sure that the humans working on the subject reflect diversity in the broad sense of the word: beliefs, race, gender, etc. There is still a long way to go and much work remains to be done. Until then, we may have developed the best tools and standards, yet still generate bias in AI systems by design.

AI Trusted Framework to operationalise  

That is why the ability to mobilise the whole enterprise around three levers (Enterprise Governance, AI Engineering, and Culture & Design) will clearly create a competitive advantage. A recent IBM study revealed that "75% of executives view ethics as a source of competitive differentiation"3. To get there, companies must show that they are mitigating bias risks through the explainability, robustness and transparency of their cognitive solutions.

The same study shows that "fewer than 20% of executives strongly agree that their AI ethics actions meet or exceed their stated principles and values"4 – the very values that are core to their business and to their clients.

All of this must be handled within an ethical framework that addresses topics such as values, beliefs, accountability and well-being. It means explicitly framing what ethics means in a specific context, for a specific company and for its individuals.

So, let's stop talking about trends and focus on the application of AI in all business processes! Because, yes, we are going to live in an augmented world. Shouldn't the initials AI stand for Augmented Intelligence?  

Jean-Philippe Desbiolles, Vice-President & Managing Director, Financial Services, AI & Data Leader, IBM Industry Academy 

1Artificial Intelligence 
2Desbiolles, Jean-Philippe. “AI will be what you make of it: The 10 golden rules of AI” Dunod Editions. August 2019. 
3Goehring, Brian, Francesca Rossi, and Beth Rudden. “AI ethics in action, an enterprise guide to trustworthy AI.” IBM Institute for Business Value. April 2022. 
4Goehring, Brian, Francesca Rossi, and Beth Rudden. “AI ethics in action, an enterprise guide to trustworthy AI.” IBM Institute for Business Value. April 2022. 
