
Risk Trends in 2024 and Beyond: Artificial Intelligence

Synopsis

Generative AI, such as ChatGPT, has immense potential to transform and disrupt the business landscape. However, there are significant risks associated with its use — including privacy breaches, intellectual property theft, and frequent hallucinations leading to costly errors and misinformation.

Organizations and individuals ignore AI at their peril. The technology is advancing rapidly and has already become commonplace in many applications. Users need to scrutinize how they engage with AI and how useful its output is at scale.

Boards and senior leaders can help by ensuring they understand the current landscape and trends, and that appropriate policies and governance are in place to support AI in the business.

This insight is one of 15 risks in our 2024 Risk Trends Report. Navigate back to the main page for the full list of risk trends you should be monitoring in the year ahead.

AI is everywhere: What are your opportunities and risks?

OpenAI released ChatGPT for public use in November 2022, giving many users their first eye-opening glimpse at the immense power — and disruptive potential — of generative artificial intelligence. Initial reviews marvelled at how realistic computer-generated text appeared and at the seemingly infinite number of cost-saving use cases. But potential stumbling blocks appeared just as quickly.

Almost everyone who has interacted with ChatGPT and similar generative artificial intelligence tools has experienced a so-called hallucination: the algorithm seems to go rogue and volunteer data that is incorrect, was not requested, or generally does not align with the prompt.

Subsequent updates have reduced the frequency of these hallucinations. Still, it may not be possible to prevent them altogether. AI can interpret a question in any number of ways depending on how that question is asked, and the same prompt can produce countless different outputs. Users, therefore, need to scrutinize how they engage with AI and the usefulness of AI-generated text, imagery, and computer code when using these tools at scale.

Additional concerns surround the data sources used to train AI applications and the opportunities for misuse. Inputting sensitive or proprietary data into AI tools (intentionally or not) could lead to significant privacy breaches, intellectual property theft, or copyright infringement. Threat actors could deploy AI solutions to craft sophisticated and highly believable social engineering attacks. Students could use it to cheat on university papers.

Perhaps the most important takeaway from the world of AI in 2023 is that organizations and individuals will ignore it at their peril. The technology is advancing far more rapidly than anyone anticipated, and we’ve only just scratched the surface of the myriad ways it will permeate our lives for better and worse.

Related risks

  • AI bias leading to sub-optimal decisions
  • Theft or loss of intellectual property or private and confidential data
  • Operational issues resulting from AI not making accurate judgments
  • Cybersecurity vulnerabilities
  • Plagiarism
  • If ignored, loss of competitive advantage

""Key questions to ask

  • Do you have an inventory of all the technology, systems, processes, and job descriptions that have been — or may be — impacted by AI?
  • Does the organization have a risk assessment process for all new technology or AI being considered or implemented?
  • Have you asked your critical third parties and suppliers how they plan to incorporate AI into their technology, systems, and processes?
  • What types of AI usage would be unacceptable to your organization (e.g., driverless delivery vehicles)?
  • Has your organization provided adequate policies and/or guidelines on the acceptable use of AI?
  • How are users of AI validating the recommendations, explanations, and sources put forward by AI?

""Red Flags

  • Projects ignoring the associated risks and implications of using AI
  • Lack of measurable results related to AI usage
  • AI solutions that are not scalable
  • Bottlenecks caused by large volumes of AI-generated data, or an inability to cope with that volume
  • AI that is unable to harness big data effectively and/or reliably
  • Confidential information found on ChatGPT and similar platforms
  • Ethical issues or bias in AI causing sub-optimal decision making
  • Demand forecasting/optimization failures related to the use of AI
  • Negative ESG side effects resulting from reliance on AI

Internal Audit Project Opportunities

AI Model Performance Audit
This audit assesses the accuracy, efficiency, and reliability of AI models deployed by the organization. It ensures that the models are producing accurate and meaningful results.
Data Quality for AI Audit
This audit examines the quality, completeness, and relevance of data used to train AI models. It ensures that the data is of high quality and that any biases are identified and addressed.
AI Governance and Oversight Audit
This audit evaluates the organization's governance structure and oversight processes related to AI development and deployment. It ensures that there are clear responsibilities and accountability measures in place.
AI Ethics and Fairness Audit
This audit focuses on assessing the ethical implications of AI systems and whether they are designed to treat all individuals and groups fairly, without bias or discrimination.
AI Security and Privacy Audit
This audit reviews the security measures implemented to protect AI systems and the data they process. It ensures that AI systems do not pose security risks and that privacy concerns are adequately addressed.
AI Transparency Audit
This audit examines whether AI models and their decisions can be explained and understood. It ensures that AI systems are not operating as "black boxes" and that their reasoning is transparent.
AI Compliance Audit
This audit assesses whether AI systems comply with relevant laws, regulations, and industry standards, such as data protection regulations or ethical guidelines.
AI Training and Testing Data Audit
This audit evaluates the data used to train and test AI models, ensuring it is representative and appropriate for the intended use.
AI Vendor Management Audit
This audit focuses on the organization's management of third-party AI vendors, ensuring that vendor selection, contracts, and performance align with the organization's requirements and standards.
AI Incident Response and Contingency Audit
This audit reviews the organization's preparedness to respond to AI-related incidents, such as system failures, biases, or ethical violations, and the measures in place to handle such situations.
AI Training and Awareness Audit
This audit assesses the training and awareness programs provided to employees regarding AI ethics, usage, and potential risks.
AI ROI (Return on Investment) Audit
This audit evaluates the financial and strategic value derived from AI implementations, ensuring that AI projects align with the organization's goals and provide tangible benefits.

Risk Trends in 2024 and Beyond

View all the risk areas featured in this year’s report. 
