Over the years, we have interacted with AI through various media such as voice assistants, search algorithms, social media, and facial recognition on our phones. However, the advent of generative AI, like ChatGPT, has brought AI to the forefront, and we are now witnessing its capabilities at a visceral level. As a result, AI is no longer a concept that is coming someday; it is already here, and it is ready to change the world.
However, with change comes risk, and for corporate boards and management teams who have not developed an AI risk management plan, ChatGPT should serve as a wake-up call. In this article, we will broadly define the risks associated with widespread AI implementation.
Here are the top 5 AI risks that business leaders should watch out for:
1. Risk of Disruption
Artificial intelligence is set to revolutionize markets and business models like never before. One striking example of this is ChatGPT itself. Even Google, the long-standing search champion, is now facing serious competition.
Only a few years ago, people thought AI would only disrupt industries that relied on low-skilled labor or highly methodical work such as financial trading and radiology. However, we now know that creative industries like media and advertising, personalized service professions such as teaching and financial advisory, and even elite skill segments like pharmaceutical R&D and computer science are also at risk.
According to a March 2023 report from Goldman Sachs, generative AI like ChatGPT could eliminate as many as 300 million jobs worldwide, including 19% of existing jobs in the US. This means that regardless of the business or profession you are in, your company will face significant changes in the next few years. Unlike previous technological disruptions, the consequences this time could be “life and death”. Therefore, it’s essential for companies to develop an AI risk management plan.
2. Cybersecurity Risk
Business leaders already face the challenge of keeping their organization’s data, systems, and personnel safe from hackers and saboteurs. In 2022, attacks increased by 38%, with organizations experiencing over 1,000 attacks per week on average, and the average cost per data breach surpassing four million dollars.
The introduction of artificial intelligence will amplify this challenge exponentially. Phishing attacks, for example, will become much more powerful with AI that is as sophisticated as ChatGPT. Hackers could send emails to staff that appear to come from the boss, using information that only the boss would normally know, and even mimicking the boss’s writing style.
The use of deepfake technology like voice clones in cyber fraud has been reported since at least 2019, and as AI improves and diversifies, the issue of cyber risk management will only worsen.
Relying solely on current-day cyber defense technology like firewalls will not be enough. AI will help bad actors locate the weakest links in an organization’s defense and then work continuously until they find a way in. Therefore, business leaders must understand the potential risks posed by AI and develop a comprehensive AI risk management plan to mitigate them.
3. Reputational Risk
When ChatGPT first made headlines, Google executives expressed concerns about the “reputational risk” of immediately launching a rival AI (although they announced their own AI, Bard, just days later). Subsequent errors and embarrassments from Bing and other generative AI platforms proved that Google’s initial concerns were valid.
It’s important to remember that the public is watching how companies use AI, and any behavior that goes against a company’s values can result in a PR disaster. AI has already exhibited racist and misogynistic behavior, led to wrongful arrests, and increased bias in staff recruiting.
Furthermore, AI can damage customer relationships. Forrester reports that 75% of consumers are disappointed with customer service chatbots, and 30% of them take their business elsewhere after a poor AI-driven customer service interaction. AI is still in its early stages and prone to errors. Despite the high stakes, many companies are deploying AI without fully understanding the risks to their reputation.
4. Legal Risk
The rise of AI has prompted the federal government to address the societal challenges it poses. In 2022, the Biden administration introduced a blueprint for an AI Bill of Rights, which seeks to protect privacy and civil liberties. The National Institute of Standards and Technology also released its AI Risk Management Framework to assist corporate boards and other leaders in addressing AI risks.
Legislation such as the Algorithmic Accountability Act of 2022, which aims to promote transparency in automated decision-making, has been introduced in Congress. In addition, 17 states have introduced legislation governing AI, targeting facial recognition, hiring bias, and other use cases. The EU has proposed an Artificial Intelligence Act that seeks to ban or moderate biometric recognition, psychological manipulation, exploitation of vulnerable groups, and social credit scoring.
New regulations are expected in 2023, and the risk to companies extends beyond compliance. If something goes wrong with a product or service that uses AI, liability could fall on the product or service provider, the AI developer, or the data supplier. Companies will likely need to provide transparency about how their AI makes decisions to remain in compliance with the new laws.
5. Operational Risk
The potential risks associated with AI are numerous and can be disastrous for businesses that adopt it too quickly. Even popular AI models such as ChatGPT can cause harm when misused, as seen in the recent Samsung case, in which employees leaked confidential data by pasting it into the chatbot. There is also a real risk that AI could give incorrect advice or recommendations, leading to losses that could be significant.
Other examples of AI gone wrong include IBM’s Watson giving incorrect cancer treatments and Tyndaris Investments being sued for hedge fund losses. Operational risks such as these should be managed by boards of directors.
The rise of AI has prompted federal and state action, including the proposed blueprint for an AI Bill of Rights, NIST’s AI Risk Management Framework, and the Algorithmic Accountability Act of 2022. Compliance with emerging laws is essential, but businesses must also be proactive in managing AI risk.
Business leaders who stay informed about how AI is reshaping their industries will be best positioned to manage its risks and leverage it for their organizations’ benefit. Understanding these five risks is the first step toward building an AI risk management plan before disruption forces one.