Dialing Down AI Risks While Getting Smarter About Its Uses

Although AI delivers remarkable opportunities, it also introduces substantial business risks.

Samuel Greengard, Contributing Reporter

September 21, 2023

At a Glance

  • Reputational, Legal, and Regulatory Penalties
  • Leaked Secrets in AI-generated Code
  • Steps to Take to Reduce Risk

The idea of using artificial intelligence to write marketing content, set insurance rates, or automate sales forecasts is incredibly appealing. The technology can dissect complex data sets, spot patterns, and generate valuable results -- often in a fraction of the time it takes a human to do the same job.

Yet somewhere between cost savings and productivity bumps lies a disturbing fact: AI also introduces business risks. A March 2022 survey conducted by law firm Baker McKenzie found that while 76% of companies have AI policies in place to manage risk, only 24% of executives believe that the policies are even “somewhat effective.”

What’s more, as organizations gravitate toward generative AI systems such as ChatGPT, the risks grow. “There are several areas that represent a high level of concern,” states Brad Newman, a partner and AI practice lead at Baker McKenzie. The list includes intellectual property (IP), copyright infringement, data bias, deepfakes, cybersecurity, and data privacy.

It's an issue that CIOs and other business and IT leaders must confront. The fallout from accidental or negligent misuse or abuse of AI can prove costly and lead to reputational, legal, and regulatory penalties.

Says Liz Grennan, expert associate partner at McKinsey & Company: “Every company now lives in a supplier ecosystem that will soon be dominated by AI. There are so many layers of risk, and so many sources.”

Adds Avivah Litan, distinguished VP analyst at Gartner Research: “Organizations are suddenly waking up to the fact that serious AI risks exist, and legacy controls don’t suffice. It’s important to implement new controls for a new attack vector.”

Risky Business

The exponential growth of artificial intelligence and the enormous success of generative AI systems have left organizations scrambling to adopt solutions. McKinsey & Company reports that one-third of companies it surveyed in 2023 are already using AI regularly, and 40% of respondents say they are boosting investments in AI. Meanwhile, 79% of respondents say they have some exposure to generative AI systems.

Although no one argues that AI lacks value -- Gartner found that 7 in 10 business leaders say the benefits outweigh the risks -- it’s also apparent that new risks are popping up faster than business leaders can manage them. For example, Forrester Research notes that AI is now used to generate convincing spear-phishing attacks, create deepfakes that impersonate the voice or likeness of a senior executive, and spread misinformation about a company or its stock.

Concerns also exist in the intellectual property space. There’s the potential for leaked secrets in AI-generated code. Yet there’s also risk associated with using ChatGPT and DALL·E 2 to generate text and visuals. How copyright law applies to generative AI remains murky, and a wave of lawsuits -- emanating from artists, musicians, software developers, and others -- is now wending its way through the courts.
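
One practical control for the first of those concerns -- offered here as a hypothetical sketch, not a practice any of the firms quoted in this article prescribe -- is to scan AI-generated code for hard-coded credentials before it is merged. The short Python example below illustrates the idea; the regex patterns, file handling, and exit-code convention are illustrative assumptions rather than a production-grade secret scanner.

```python
# Hypothetical sketch: a minimal pre-merge scan for hard-coded secrets in
# AI-generated code. The patterns below are examples only and will miss
# many real secret formats; dedicated scanners cover far more cases.
import re
import sys
from pathlib import Path

SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hard-coded API key or token": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
}

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible {label}")
    return findings

if __name__ == "__main__":
    # Usage: python scan_secrets.py <files generated by an AI assistant>
    findings = [f for arg in sys.argv[1:] for f in scan_file(Path(arg))]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)  # a non-zero exit can block the merge
```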

AI biases are another knotty area. These can lead to accusations of discrimination, unfair hiring practices, and other harms. Baker McKenzie found that only 39% of business leaders acknowledge the risk. Newman says that the problem is rooted in haphazard AI oversight. Only about one-quarter of AI policies are thoroughly documented, he says. As marketing, development, HR, customer service, and legal teams tap these AI systems, the risk of misuse grows.

In fact, the Baker McKenzie study also found that 4 in 10 organizations had no single person responsible for overseeing AI, and 83% hand off primary responsibility for AI oversight to the IT department. In addition, only 54% of respondents reported that HR is involved in the decision-making process for AI tools. “Incomplete input leads to incomplete policies and poor decisions about how, when, and where to use AI,” Newman says.

The risks associated with AI, particularly generative AI, can easily fly below the radar, Litan points out. For example, a serious problem with systems such as ChatGPT, Google’s Bard, and others is that they display a propensity to generate so-called hallucinations. Without checks and balances, false facts and incorrect information generated by these systems can wind up in documents, reports, and customer emails, or become embedded in products. “It can lead to liability, lawsuits and bad decision making,” Litan says.
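
As one sketch of what such checks and balances could look like in practice -- an assumption on my part, not an approach Litan or Gartner prescribes -- the Python snippet below holds any generated draft containing factual-looking claims for human review before it is sent. The heuristics and the triage_ai_draft helper are hypothetical and intentionally simple; they flag claims for verification rather than detect hallucinations.

```python
# Hypothetical sketch of a "check and balance" for generative AI output:
# hold drafts containing factual-looking claims (figures, links, attributed
# statements) for human review before they reach a customer or a report.
import re
from dataclasses import dataclass

FACTUAL_MARKERS = [
    re.compile(r"\b\d[\d,.]*%?\b"),         # figures, percentages, dates
    re.compile(r"https?://\S+"),             # links the model may have invented
    re.compile(r"\baccording to\b", re.I),   # attributed claims
]

@dataclass
class ReviewDecision:
    needs_human_review: bool
    reasons: list[str]

def triage_ai_draft(draft: str) -> ReviewDecision:
    """Route drafts with unverified factual claims to a human reviewer."""
    reasons = []
    for pattern in FACTUAL_MARKERS:
        if pattern.search(draft):
            reasons.append(f"matches {pattern.pattern!r}; verify before sending")
    return ReviewDecision(needs_human_review=bool(reasons), reasons=reasons)

# Example: a generated customer email claiming "a 40% discount until June 3"
decision = triage_ai_draft("Thanks for your note. We offer a 40% discount until June 3.")
print(decision.needs_human_review, decision.reasons)
```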

The deep learning and machine learning methods that train these models can also contribute to systems going off the rails, says Gerald C. Kane, professor of management information systems at the University of Georgia’s Terry College of Business. “What happens when a chatbot misbehaves or hands out the wrong information? What happens if it talks or acts in a biased or discriminatory way? Business leaders need to think about these things before introducing AI.”

Getting Smarter about AI

One thing is clear when it comes to managing AI risks: things aren’t going to get simpler anytime soon. Moreover, companies have been slow to act and to put the necessary AI governance frameworks in place. Gartner found that 82% of organizations do not yet have an acceptable use policy for ChatGPT.

Yet CIOs and other enterprise leaders can take steps to reduce blind spots and mitigate the risks. Appointing a chief AI officer to oversee all the touchpoints the technology creates is an important first step, Newman says. “There is an illusion at many companies that ‘we’ve got great data scientists and developers working for us and they have a handle on everything.’ Nothing could be further from the truth.” A growing patchwork of state, national, and international laws means that “oversight must begin and end at the C-suite.”

McKinsey’s Grennan says that the safe and effective use of AI requires “a set of foundational elements” that revolve around strategy, an understanding of how risk maps to business objectives, and clear and specific guidelines for data sourcing, usage, model design, training, deployment, security, acceptable use, monitoring, and continuous improvement.

“There must be clarity about what role people and data play,” Grennan says. Moreover, “AI projects require cross-functional collaboration, including data scientists, domain experts, legal and cyber professionals, and others.”

The ultimate goal, Litan says, is AI trust. This includes the oversight and controls an organization has always had in place -- for things like theft, damage to data and assets, security, and privacy -- as well as new controls for social engineering and AI risks that aren’t easily detected.

“There’s a need for organizational change, process ownership, and technology improvements,” Litan concludes. “Too often, business leaders are pushing to use the technology -- and risk and security controls are merely an afterthought.”

About the Author(s)

Samuel Greengard

Contributing Reporter

Samuel Greengard writes about business, technology, and cybersecurity for numerous magazines and websites. He is author of the books "The Internet of Things" and "Virtual Reality" (MIT Press).
