There was a time when most people regarded advances in computer technology as benign: helpful to society and to individuals. In the “Digital Revolution” or the “Third Industrial Revolution” things could only get better. However, as artificial intelligence develops, that view is being challenged and the new technology is creating a host of questions about transparency, security and ethics. Are we saving the world or creating monsters? And what can internal audit do about it?
For a start, we need to understand the issues. Artificial intelligence (AI) isn’t just one thing; there are several definitions. “Weak” or “narrow” AI is focused on one narrow task, and most of the AI in use today is narrow. A supercomputer that beats a chess grandmaster is built for playing high-level chess, nothing more.
Artificial general intelligence (AGI) has not yet been developed successfully, but, if it is, it will be a machine that can understand and learn any intellectual task that a human being can. Some researchers refer to AGI as “strong AI”, “full AI” or “true AI”. Others reserve the term “strong AI” for machines capable of experiencing consciousness.
Ethics aren’t simple either. There are “instance-based ethics” and “principle-based ethics”. Instance-based ethics is what we do when we see something and say “that’s wrong!” Principle-based ethics applies stated values consistently to reach appropriate decisions, regardless of the particulars of the situation. AI needs principle-based ethics designed into its program stack at a fundamental level, so that it governs everything that comes afterwards.
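To make the distinction concrete, the short Python sketch below shows one way principle-based ethics could sit beneath a model as a guard layer. It is purely conceptual: the PRINCIPLES list, the apply_with_principles function and the decision fields are illustrative inventions, not any real system.

```python
# Conceptual sketch (hypothetical names throughout): principle-based ethics
# as a guard layer beneath the model, vetoing any decision that breaches a
# stated principle regardless of the particulars of the case.

PRINCIPLES = [
    # Principle 1: decisions must not rely on protected attributes.
    lambda decision: not decision.get("uses_protected_attribute", False),
    # Principle 2: every decision must carry an explanation.
    lambda decision: decision.get("explanation") is not None,
]

def apply_with_principles(model_decision: dict) -> dict:
    """Pass every model output through the principle layer before acting on it."""
    for check in PRINCIPLES:
        if not check(model_decision):
            return {"action": "escalate_to_human", "reason": "principle breach"}
    return model_decision

# A decision that relies on a protected attribute is blocked at the base layer:
print(apply_with_principles({"action": "reject_cv", "uses_protected_attribute": True}))
# {'action': 'escalate_to_human', 'reason': 'principle breach'}
```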
So what do internal auditors need to consider in the current context of narrow AI, and how can we as assurance professionals approach this new technology?
• The goalposts are moving
As machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition of AI, a phenomenon known as the “AI effect”. Mark Maloof, in his book Artificial Intelligence: An Introduction, described this as: “AI is whatever hasn’t been done yet.” AI is therefore becoming embedded in our lives almost by stealth; what is discussed as AI is just what is still to come.
• Embedded prejudice
AI is just computers solving bigger arithmetical problems, helped by better hardware and processing power. It’s about as intelligent as a puppy. Its knowledge of the world depends on what humans tell it, and when humans feed in skewed views, AI treats these as facts. This has resulted in several claims of AI generating racist and sexist outputs in areas such as security monitoring, recruitment and staff rewards. Examples include facial recognition programs that cannot differentiate Afro-Caribbean faces, and an AI-based recruitment program at a multinational corporation that penalised CVs including the word “women’s”, such as “women’s chess club captain” (the problem arose because the system was trained on data submitted by applicants over a ten-year period, mostly from men).
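The mechanism is easy to demonstrate. The Python sketch below uses an entirely hypothetical, deliberately skewed training set of CV keywords and past hiring decisions; a naive model that scores keywords by how often they co-occurred with a hire will “learn” the historical skew as if it were fact.

```python
# Minimal sketch with hypothetical data: a naive model trained on skewed
# hiring history reproduces the skew as a learned "fact".
from collections import Counter

# (keywords on CV, 1 = hired). Most past hires came from one group, so
# group-associated terms pick up a spurious signal.
training = [
    (["chess", "python", "mens_team"], 1),
    (["chess", "java", "mens_team"], 1),
    (["python", "golf"], 1),
    (["chess", "python", "womens_team"], 0),
    (["java", "womens_team"], 0),
]

hire_counts, total_counts = Counter(), Counter()
for keywords, hired in training:
    for kw in keywords:
        total_counts[kw] += 1
        hire_counts[kw] += hired

def keyword_score(kw: str) -> float:
    """Estimated P(hired | keyword) from the skewed history."""
    return hire_counts[kw] / total_counts[kw]

for kw in ("chess", "mens_team", "womens_team"):
    print(kw, round(keyword_score(kw), 2))
# chess 0.67, mens_team 1.0, womens_team 0.0 -> the bias is now baked in
```

The keyword itself is irrelevant to job performance; the model has simply memorised who was hired in the past.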
• Job losses
Lost factory jobs have historically been offset by new jobs: “labour of the mind” rather than “labour of the hands”. But AI could potentially displace more jobs, including those of doctors, lawyers and auditors. How can alternative jobs be created at the same speed? Automation is seen by many economic analysts as a driver behind the increasing wealth divide of the past 20 years. As Thomas Hodgskin (1787-1869) said: “The road-builder may share profits with the road-user, but the road itself cannot do so.”
Does AI put money that was previously shared more evenly around society into a small number of people’s pockets? How will that affect our economy, politics and social culture?
• Online security – quantum computing
Quantum computing takes advantage of the strange ability of subatomic particles to exist in more than one state at any time. Quantum computers operate on completely different principles from current computers and are far better at certain mathematical problems, such as factoring the products of the huge prime numbers used in cryptography. Quantum computers working with AI may quickly crack many of the systems that keep our online data secure, and the internet may become an insecure environment.
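To see why, recall that widely used public-key encryption such as RSA rests on the difficulty of recovering two large primes from their product. The toy Python sketch below (illustrative key sizes only) factors a small modulus by trial division, which takes on the order of √N steps; Shor’s algorithm, run on a sufficiently large quantum computer, could do the same job in polynomial time, which is exactly why quantum computing threatens today’s cryptography.

```python
# Toy illustration: the hard problem behind RSA-style cryptography is
# recovering p and q from n = p * q. Classical trial division needs
# roughly sqrt(n) steps; Shor's algorithm on a large quantum computer
# would need only polynomial time.

def factor(n: int) -> tuple:
    """Brute-force factorisation of n = p * q (toy key sizes only)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

p, q = 104_723, 104_729      # small primes; real RSA uses ~1024-bit primes
n = p * q
print(factor(n))             # instant here, infeasible at real key sizes
# (104723, 104729)
```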
• The black box effect
Lack of interpretability in AI is a public concern. Humans find it hard to analyse the workings of the algorithms, and self-generating AI has left human scrutiny behind: the machines are outpacing our ability to understand how they work. In 2017, AI developed at Facebook created its own language, developing a system of words to make communication more efficient. Researchers shut the system down when they realised they couldn’t understand the language the machine had created. Other AI systems have also recently diverged from English to develop their own languages. They weren’t asked to do so; they simply found a better way. How do we control something we can’t understand?
• What are our responsibilities to the machines themselves?
What happens if AGI becomes possible and we create a machine that is, in effect, a new form of sentient life? Many respected AI researchers are calling for a Charter of Robot Rights. What should this contain?
As assurance professionals we need structures and rules. We may need to position ourselves as ethical facilitators. We need to be guardians of truth and catalysts for new thinking. So, what considerations should guide us?
• There has been a lag between early adoption of AI and the development of regulatory frameworks. Genuine regulation doesn’t yet exist, although many people are calling for an international code of AI ethics.
• There is no mature auditing framework for AI subprocesses (eg, deep learning).
• There are no AI-specific standards or mandates. No one yet knows how far is too far in AI, or which applications are likely to be unacceptable to society.
• Where does legal and social responsibility lie for the use of popular AI applications? A recent test case went to court to establish whether an AI system can hold intellectual property rights to anything it invents.
• There are almost no widely adopted precedents for AI audits. Internal auditors need to build a new way of auditing.
• The complexity of AI and the shortage of qualified data scientists will lead to the outsourcing of AI development projects to third parties, which might prove difficult to audit.
• A coherent understanding of enterprise AI will be dispersed, potentially lost, across tiers of outsourced AI providers.
• The black box effect makes algorithm audit a highly specialist (impossible?) task.
• Explaining AI output and risks to clients is difficult.
IT auditors should not overthink the challenges of auditing AI. They should look at the governance of AI and its integration alongside more traditional IT systems:
• Use as a frame of reference the way in which cloud computing and cybersecurity were first audited when they emerged.
• Focus on the controls and governance structures that are in place and determine whether they are operating effectively – as with previous new technologies.
• Provide value in your assurance by focusing on the business and IT governance aspects of AI. Traditional audit techniques can be applied to new technology.
• Don’t get hung up on trying to fully understand how the technology works. It’s more important to concentrate on the difference AI makes to business and society.
• The old IT adage “rubbish in, rubbish out” has never been truer. Look for rigour in the training of AI algorithms: conscious or unconscious bias at this stage can throw up problems later, and the black box effect makes it almost impossible to unpick where the fault occurred. Even a simple check on the balance of the training data, as sketched below, is a reasonable starting point.
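As a concrete example of what “rigour” can mean in practice, the sketch below shows one basic check an auditor might request before a model is trained: compare favourable-outcome rates across groups in the training data. The field names and records are hypothetical; a large gap does not prove bias, but it flags where “rubbish in” may already be present.

```python
# Minimal sketch (hypothetical records): compare favourable-outcome rates
# across groups in the training data before any model is built.
from collections import defaultdict

# (group, label) pairs, where label 1 = favourable historical outcome.
records = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]

totals, positives = defaultdict(int), defaultdict(int)
for group, label in records:
    totals[group] += 1
    positives[group] += label

rates = {g: positives[g] / totals[g] for g in totals}
print({g: round(r, 2) for g, r in rates.items()})   # {'A': 0.67, 'B': 0.33}

gap = max(rates.values()) - min(rates.values())
print(f"outcome-rate gap: {gap:.2f}")                # 0.33 - worth investigating
```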
Few AI researchers are concerned about AI “going rogue” and threatening our survival. However, most researchers believe we must think about the unintended consequences of using this technology.
Most also do not believe that AI is, or will become, sentient. They see it as simply an important, morally neutral, 21st-century tool.
But AI’s use, and its in-built biases and prejudices, depend on the criteria we apply to its development. Kathy Baxter, a prominent ethical AI practice architect, summed this up when she told the 2019 World Economic Forum: “While AI has the potential to do tremendous good, it can also have the potential for unknowingly harming individuals.”
AI is a powerful new force in its embryonic stages. We could create angels or demons and we still have time to decide our course. The technology holds a mirror up to humanity and we must ensure it shows our best face. Audit professionals have a key role to play in this if we have the courage to get involved now.
Stephen Watson is director of Tech Risk for AuditOne UK. He recently presented a deep dive workshop on the ethical considerations of artificial intelligence to the Global IIA and ISACA’s joint GRC Conference in Fort Lauderdale, Florida.
This article was first published in November 2019.