By Dr Tian Jing, Senior Lecturer & Consultant, Artificial Intelligence Practice
Tech giant Amazon once had the ambitious dream of building a recruiting tool based on artificial intelligence (AI). But its machine-learning specialists soon uncovered a serious problem with the experimental system: it did not like women [1].
The company had reportedly been building computer programs since 2014 to review job applicants’ resumes, in hopes of automating its search for top talent.
Yet somehow, the AI system taught itself that male candidates were preferable. According to internal sources, it penalised resumes that included the word “women’s” – as in “women’s chess club captain” – and downgraded graduates of two all-women’s US colleges. The company ultimately disbanded the project in early 2017.
Amazon’s sexist AI recruitment gaffe spotlights an important question that underpins the success of widespread AI adoption and deployment: How do we trust AI decisions?
What’s in the mysterious black box?
It would now be hard to imagine a world without AI. According to PwC, the technology could contribute a transformational US$15.7 trillion to the global economy by 2030 [2].
From chatbots and ride-hailing apps, to Netflix recommendations and diagnostics in healthcare, AI continues to transform every aspect of our lives. Most of the time, it operates behind the scenes, without us even being aware of it. We simply trust AI to do its work.
However, when blunders arise, they strike fear. Machine learning models are generally non-intuitive and difficult for humans to understand, especially for people with limited knowledge of computer science, machine learning, or statistics. Moreover, as AI systems become increasingly complicated, more and more decision-making is being performed by an algorithmic “black box”.
As a result, many find it hard to trust decisions made by such a system – one in which even its designers cannot explain why the AI arrived at a specific conclusion.
This lack of understanding of how AI arrives at its decisions can impede industry adoption of the technology: business owners are not confident that they can trust AI recommendations; developers cannot make informed decisions on how to improve the AI model; and regulators lack a basis upon which to evaluate whether an AI system is fair.
Turning the AI black box transparent
The key to helping users build trust in, and effectively manage, emerging AI applications is Explainable AI, or XAI.
XAI systems are designed to provide explanations for the decisions they make: that is, how their models arrive at specific conclusions, and what the strengths and weaknesses of the algorithm are. By helping humans understand and interpret AI models and decisions, XAI creates a transparent environment in which users can trust AI-made decisions. This, in turn, will enable wider and faster adoption of the technology across industries.
Moreover, XAI can also help to improve system performance. By understanding why and how a model works, developers can fine-tune and optimise it to derive better insights and improve business strategies. Greater visibility also surfaces unknown vulnerabilities and flaws, which helps minimise potential downtime and protect businesses from costly mistakes.
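To make this concrete, below is a minimal sketch of one widely used, model-agnostic explanation technique: permutation feature importance, which asks how much a model’s accuracy drops when each input feature is scrambled. The scikit-learn model and bundled dataset here are illustrative stand-ins, not a prescription for any particular XAI toolkit.

```python
# A minimal sketch of permutation feature importance with scikit-learn.
# The dataset and model are illustrative stand-ins for a real system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

An explanation like this gives a business owner a first answer to “what is the model actually looking at?”, and gives developers a starting point for debugging or simplifying the model.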
AI for good and only for good
As organisations leverage AI to spur their business innovation, they face growing pressure from customers and regulators to ensure their AI technology aligns with ethical norms, and operates within publicly acceptable boundaries [3].
These considerations come into play at various stages of building an AI system: collecting data ethically for model building, eliminating bias during model development, and ensuring that the model is not deployed for harmful ends.
One particular source of concern is models that exhibit unintentional demographic bias. For example, New Zealand’s Department of Internal Affairs was left red-faced when its automated system rejected the passport photo of a man of Asian descent, claiming it was invalid because his eyes were closed [4]. This is not a rare, isolated case: large gender and racial biases have been found in AI systems sold by tech giants, because the systems were trained mainly on narrow datasets that under-represent minority groups and can therefore exacerbate stereotypes.
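As a rough illustration of how such bias can be surfaced, the sketch below runs one simple audit that teams often start with: comparing a model’s selection rates across a sensitive attribute, sometimes called a demographic-parity or “disparate impact” check. The data, column names, and the 0.8 (“four-fifths”) threshold are all hypothetical, chosen only for illustration.

```python
# A hypothetical fairness audit: compare a model's selection rates
# across a sensitive attribute (demographic parity / disparate impact).
import pandas as pd

# Illustrative data only: model outputs (1 = shortlisted) alongside a
# sensitive attribute recorded for auditing purposes.
audit = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "M", "F", "M", "M", "F", "M"],
    "predicted": [0,   1,   0,   1,   1,   1,   0,   1,   0,   1],
})

# Selection rate per group: the share of each group the model shortlists.
rates = audit.groupby("gender")["predicted"].mean()
print(rates)

# Disparate-impact ratio: values well below 1.0 (e.g. under the common
# "four-fifths" threshold of 0.8) flag the model for closer review.
ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {ratio:.2f}")
```

A check like this cannot prove a model is fair, but it can flag a system that shortlists one group far more often than another for human review before deployment.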
From principles to practice: Creating a human-centric AI for the future
The question remains: how do we encourage innovation while safeguarding the interests and safety of humans?
Autonomous vehicles are often touted as much safer than conventional cars because their AI and data-informed systems reduce, if not eliminate, the chance of human error. But when fatal incidents happen, it is not immediately clear who is liable, or whether the manufacturer is at fault for putting a potentially dangerous technology on the road.
Singapore is a pioneer in the area of AI ethics, having set up the Advisory Council on the Ethical Use of AI and Data in 2018 and published the Model AI Governance Framework the following year.
The framework outlines two main guiding principles for AI usage: (a) decisions made by AI should be explainable, transparent, and fair; and (b) AI systems should be human-centric. It also recommends four areas of consideration that businesses can adopt to help their users build confidence in AI solutions:
- Implementing internal governance structures and measures to monitor and manage risks
- Determining the level of human involvement in AI-augmented decision-making (illustrated in the sketch after this list)
- Operations management to minimise bias in data and AI models
- Communicating and engaging with users to help them understand AI processes and policies
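To illustrate the second consideration, here is a hypothetical sketch of a human-in-the-loop gate: predictions the model is unsure about are routed to a human reviewer instead of being actioned automatically. The function name and confidence threshold are assumptions chosen for illustration, not part of the framework itself.

```python
# A hypothetical human-in-the-loop gate: act automatically only on
# confident predictions and defer uncertain cases to a human reviewer.
def route_decision(probability: float, threshold: float = 0.9) -> str:
    """Decide who acts on a model's predicted probability.

    `threshold` is an illustrative confidence cut-off; in practice it
    would be set according to the business's risk tolerance.
    """
    if probability >= threshold or probability <= 1 - threshold:
        return "automated"     # the model is confident either way
    return "human_review"      # uncertain: defer to a person

for p in (0.97, 0.55, 0.05):
    print(p, "->", route_decision(p))
```

How much of the loop a human should occupy depends on how severe and reversible the consequences of a wrong decision are.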
AI is no longer the future – it is already all around us in our daily lives. Businesses, regulators, and even inter-governmental organisations need to take a collaborative approach to create a practical and sound system of trust within the AI ecosystem. Only then can the technology realise its full potential to make a positive impact on society.
If you are interested in learning more about XAI, please email us at isstrainingb@nus.edu.sg
__________________________________________________________________________________________
[1] https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
[2] https://www.pwc.co.uk/xai
[3] https://www.pwc.co.uk/audit-assurance/assets/pdf/explainable-artificial-intelligence-xai.pdf
[4] https://www.reuters.com/article/us-newzealand-passport-error-idUSKBN13W0RL