
Heads of Internal Audit Virtual Forum

12 July 2023

Please note:

  • All Institute responses are boxed and highlighted blue.
  • Where the chair comments in that capacity, the box is highlighted in yellow.
  • For confidentiality, the identities of all delegates/attendees are anonymised.

 

Chair opening comments | Derek Jamieson | Director of Regions, Chartered IIA UK and Ireland

Artificial intelligence (AI) is moving along at a huge pace. Our speakers today will share insight into the context and practicalities of AI.

Merve Ugurlu, Senior Manager, Protiviti.

Iain McGregor, Director of Innovation and Development, and Arsalan Hussain, Head of Data Analytics, Government Internal Audit Agency (GIAA).

Results of poll questions

1. Are you confident that you are up to date with the extent to which AI is being deployed in your organisation?

Yes 34% | No 65%

2. Has your organisation developed a set of policies and protocols to address AI?

Yes 15% | No 84%

3. Are you already deploying AI in the IA function?

Yes 21% | No 78%

 


Key Takeaways

  • Understand the language of the topic – for example, artificial intelligence vs machine learning.
  • Machine learning is algorithm-based and uses different types of learning: supervised, unsupervised and reinforcement (see the short sketch after this list).
  • AI regulation is evolving but at a slower pace than the technology and its usage.
  • There is significant complexity and inconsistency to overcome (see slides 7/8).
  • AI assurance should consider:
    • Scope and Impact: Strategy and Goals – Scope – Impact Assessments
    • Compliance and Monitoring: Risk Management and Standards – Privacy and Security – Post-market Surveillance
    • Technology and Data Governance: Technical Requirements – Algorithmic Accountability – Delivery Methodology
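
The distinction in the second takeaway is easiest to see in a small example. Below is a minimal, illustrative Python sketch (not from the presentation) using scikit-learn: supervised learning fits human-labelled examples, unsupervised learning finds structure without labels, and reinforcement learning, noted only in a comment, learns from trial-and-error reward signals.

```python
# Illustrative toy example of the supervised vs unsupervised distinction.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0, 2.0], [1.5, 1.8], [8.0, 8.0], [9.0, 9.5]])  # feature vectors

# Supervised learning: the algorithm learns from human-labelled examples.
y = np.array([0, 0, 1, 1])  # labels supplied with the data
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.2, 1.9]]))  # predicts the label of a new example -> [0]

# Unsupervised learning: the algorithm finds structure without any labels.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # cluster assignments discovered from the data alone

# Reinforcement learning (not shown): an agent learns by acting in an
# environment and being rewarded or penalised for the outcomes.
```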

Click here for the presentation slides, including a useful one-page assurance approach for AI.

GIAA findings/lessons from a cross-departmental survey over the last year

  • Some people didn’t know they were using AI.
    • IA can ask and educate.
  • AI is used conservatively – intentionally so, as people are waiting to be authorised to use it.
    • IA can share experience and opportunities.
  • Areas with mature digital and data governance are more confident in exploring AI.
    • Now is the time to build capabilities.
  • Leadership needs to define its risk appetite for AI in order to empower end users.
    • IA can advise and encourage.

Examples of how GIAA internal audit are using AI

A large team: c.350 internal auditors producing c.1,500 audit reports each year.
The philosophy for using AI is to put the human at the centre of the process and give internal auditors super-human skills.

  • Internal auditors feel empowered to be internal auditors, not writers or administrators.
  • Quickly read across all audit reports for themes and trends, and to create word clouds etc., using natural language processing (a form of AI) – see the sketch after this list.
  • Perform sentiment analysis across audit reports for specific time periods – useful, for example, for the annual opinion, surveys and new products.
  • Deploy AI across the Audit Management System to create a draft report.
    • Software like ChatGPT is a gift for internal audit.
    • Reducing the time to reach a draft report from 3–10 days to 2 minutes.
    • The human internal auditor then refines it.
  • Use a private version of ChatGPT to identify and write risks during the scoping stage – with causes and events, descriptions and consequences.
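
GIAA did not share their code, but the second and third bullets above describe standard natural language processing techniques. The sketch below shows one common way to do this in Python: simple word frequencies (the input to a word cloud) and NLTK's off-the-shelf VADER sentiment model. The sample reports are invented for illustration.

```python
# One way to read across audit reports: word frequencies for a word cloud,
# plus per-report sentiment scores (e.g. for a chosen time period).
import re
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download

reports = [  # invented examples standing in for real audit report text
    "Access controls were effective but change management needs improvement.",
    "Significant weaknesses in supplier due diligence were identified.",
]

# Themes and trends: count content words across the whole report population.
stopwords = {"were", "but", "in", "the", "and", "needs"}
words = [w for r in reports for w in re.findall(r"[a-z]+", r.lower())
         if w not in stopwords]
print(Counter(words).most_common(5))  # feed these frequencies to a word cloud

# Sentiment analysis: score each report; aggregate by period to see trends.
sia = SentimentIntensityAnalyzer()
for r in reports:
    print(sia.polarity_scores(r)["compound"], r[:40])  # -1 (negative) to +1
```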

Click here for GIAA’s presentation slides.

Chair closing comments

Ahead of today I asked ChatGPT about the value of AI to an organisation: automation and efficiency, data analysis and insight, customer experience, predictive analytics, enhanced decision-making, improved cybersecurity, innovation and competitive advantage, and cost reduction. Click here for more detail. In a nutshell, the value is immense, but we must be careful.

This is a developing area which we will return to. The Chartered Institute is continually developing guidance and welcomes new authors. Check out this topic in our How to audit and Technical Skills sections of Resources on our website.

We are now moving to six forums a year – one every two months. Our next meeting will be on 27 September, where we will discuss the newly launched Risk in Focus 2024 report.

Dates for your diary 

  • 27 September | HIA Forum – Risk in Focus 2024
  • 3–4 October | Annual Conference – London/Digital | register here
  • 2 November | Scotland Conference | register here
  • 8 November | HIA Forum – New Global Standards

Chat comments including Q&A

Merve | ChatGPT is becoming a common tool but poses new risks – two countries within the EU have banned its use in certain circumstances, for example. GIAA’s tool is a great example of how AI can be used effectively and efficiently. There needs to be end-to-end governance, and it is good to see human intervention for oversight – embedding these factors is crucial to the use of AI.

Question | Can you advise how the GIAA tool was created, i.e. how is it programmed?

Arsalan | We have used RShiny to develop this application.

Question | Did you have to do any configuration to the software or was it directly deployed through a Microsoft app? 

Iain | We designed it ourselves. It communicates with ChatGPT in a private cloud/private version. We tell the software: ‘You are an experienced internal auditor.’ It then thinks and responds as an internal auditor – cause/event/consequence. An important aspect is to stay in control; we always give our human internal auditors the ability to control the AI tools.
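
GIAA’s actual tool is an RShiny application talking to a private ChatGPT instance, so the sketch below is only a hypothetical illustration of the prompt pattern Iain describes, written in Python against the OpenAI chat API. The client setup and model name are assumptions for illustration, not GIAA’s configuration.

```python
# Hypothetical sketch of the prompt pattern described above: a system prompt
# puts the model in the role of an experienced internal auditor, and a human
# auditor reviews and refines every draft the model produces.
from openai import OpenAI  # an AzureOpenAI client would suit a private deployment

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an experienced internal auditor. For the audit scope provided, "
    "identify the key risks. Express each risk as cause, event and "
    "consequence, with a short description."
)

def draft_risks(scope_text: str) -> str:
    """Return AI-drafted risks for a human internal auditor to refine."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; GIAA did not specify one
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": scope_text},
        ],
    )
    return resp.choices[0].message.content

draft = draft_risks("Scope: procurement card spending across the department.")
print(draft)  # the human auditor stays in control and refines this draft
```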

Question | What about data privacy? When using ChatGPT we have to provide content that includes material subject to banking secrecy, company data, etc.

Merve | Some member states are taking action to ban it. When building any AI you need to consider data privacy – e.g. the question, the response and the source data scraped. When using it commercially, there needs to be consideration of its impact on liability and professional indemnity.

Comment | When having ERM discussions in new locations, I have used AI as a tool to provide some general examples of the types of risks that are prevalent in that location – useful for speeding up the establishment of risk registers for new market-entry countries.

Comment | We use Fireflies.ai to capture system notes; it also links into Asana, so it is good for internal audit action plans being updated in real time.

Comment | We have not implemented its use to any great extent yet, but our thinking so far includes: i) similar to the other delegate’s point, some horizon scanning to feed into either specific audits or overall audit planning; and ii) potentially looking at our organisation’s external reporting (financial, ESG etc.) and reconciling it to our internal view of what is reported externally.

Comment | ChatGPT as a virtual assistant to support the audit cycle, i.e. it is used to QA written reports and to generate test plans for actual controls.

Comment | We need to bear in mind that ChatGPT only has data up to September 2021.

Iain | We are using it for its language skills rather than the latest data.

Question | How do we get comfort over the security of the data/information that we are uploading to ChatGPT? I have used it sporadically for scoping but have not uploaded any findings from our work for analysis.

Answer | There was no time to answer this question – however, here is what ChatGPT had to say about it.

Ensuring the security of data and information is crucial, especially when dealing with large organizations. While I, as an AI language model, don't have direct control over the security measures in place, I can provide you with some general guidelines to help you gain comfort over the security of the data and information you upload to ChatGPT. Here are a few steps you can take:

  1. Understand the security measures in place: Start by learning about the security protocols, practices, and certifications implemented by OpenAI for its ChatGPT service. OpenAI has likely implemented various measures to protect user data and maintain high security standards. Review OpenAI's documentation, privacy policy, and terms of service to gain a clear understanding of their security practices.