
When Technology Outpaces Regulation: How Businesses Can Navigate the Emergent Uncertainties of AI

February 29, 2024


If last year was the year Artificial Intelligence dominated headlines, this year it was given star power. The emergence of deepfake images of Taylor Swift, whose name generates over ten billion results on Google, sent the cyber world into a tailspin. More importantly, it raised pertinent questions about AI regulation, safety, and ethics. With businesses scrambling to adopt AI to keep up with trends, several risks have come to the fore, validating the concerns of those who have repeatedly and loudly advocated for prioritizing AI regulation. 

And while many have explicitly focused on how this technology could be manipulated in government settings, it has developed so fast that this is a growing concern for the broader public. Businesses are at risk, and legal practitioners are grappling with the complex questions artificial intelligence poses. What we do know is that we’ve arrived at that fateful moment when technology outpaces regulation. Here’s what businesses need to look out for. 

AI and the Threat it Poses to Businesses

Over the last several years, artificial intelligence has been positioned as a silver bullet for many of the issues businesses face. And while it's true that AI technology has the ability to revolutionize almost every industry it touches, it also poses considerable risks to businesses in many ways. 

Lack of Skills for AI Integration

With companies jumping at the opportunity to use artificial intelligence, there’s a genuine question about competence. In the last few years, trends in technology have developed rapidly, requiring businesses to adapt and deploy new solutions in order to keep up with emerging needs. Business analysts have had to grapple with new concepts and trends like Big Data, Cloud Computing, Robotics and Automation, and Blockchain. 

With IT and dev teams expected to migrate to artificial intelligence, the big question is whether there's sufficient capacity and skills development to do so successfully. While the benefits of AI are certainly impressive, it still requires skill in data management; without it, organizations are left open to reputational, data, and security risks. 

Intellectual Property and Legal Conflicts

In all the buzz around AI, a number of concerns immediately came to the fore, one of which related to the law. We've now reached the point where technology has outpaced regulation, and these concerns are more pertinent than ever. With businesses deploying AI, legal minds are working on identifying how disputes arising from AI can and should be settled. 

Ordinarily, people and organizations are held accountable for errors through the rule of law, but AI isn’t beholden to legislation. The margins for error widen when we consider that AI can “hallucinate,” fail to differentiate between data sets, and reproduce bias. With chatbots, the regulations and rules around intellectual property are even murkier: because there is no way to trace where information has come from, using artificial intelligence tools like ChatGPT instead of traditional search engines makes it impossible to attribute sources. 

This is referred to as a “black box,” and there are real concerns about how legal disputes around IP will be resolved when new material is created from AI prompts with no discernible source material credited. Our definitions and understanding of plagiarism, intellectual property, and patent law will be challenged. With all this in mind, it’s not surprising that 27% of businesses have banned the use of artificial intelligence. 

Lack of Accuracy 

For all the wonders artificial intelligence can, and does, deliver, there’s a colossal issue with accuracy. As with the proprietary concerns above, verifying sources can be difficult, and AI can simply be inaccurate, hallucinating outright. Google recently issued a statement apologizing for “inaccuracies in some historical image generation depictions.” In efforts to correct for racial bias, the tool depicted US Founding Fathers and even Nazi-era German soldiers as people of color. Microsoft Bing has had its own issues, with a high-profile example proving the tool couldn’t differentiate between financial data comparing vacuums and clothing. 

How Businesses Can Protect Themselves

In efforts to avoid the future threat of legal action and reputational risk, many businesses have elected to prohibit the use of artificial intelligence altogether. In some industries and certain scenarios, this is a valid response that safeguards organizations and their teams. In other cases, it simply isn’t feasible, resulting in increased costs and hindered workflows. For those working with large data models, it may in fact be impossible. 

AI to Monitor AI

It might be surprising to consider that artificial intelligence could be the best tool to regulate artificial intelligence. Management systems need to be sophisticated enough to monitor and audit algorithms to protect data integrity. Most importantly, analysts need visibility over the flow of data, from input to output. Ultimately, only AI-powered systems have that kind of capability. This would result in fewer hallucinations, improved accuracy, and a narrower margin of error. 
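The input-to-output visibility described above can be sketched as a thin audit layer wrapped around a model call. Everything in this sketch is illustrative: `AuditLog`, the stand-in model, and the flagging rule are hypothetical names for this example, not any real product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List


@dataclass
class AuditRecord:
    """One logged model interaction: the input, the output, and a review flag."""
    timestamp: str
    prompt: str
    response: str
    flagged: bool


@dataclass
class AuditLog:
    """Records every prompt/response pair so analysts can trace data from input to output."""
    records: List[AuditRecord] = field(default_factory=list)

    def wrap(self, model: Callable[[str], str],
             flag: Callable[[str, str], bool]) -> Callable[[str], str]:
        """Return a wrapped model that logs each call and flags suspect outputs for review."""
        def audited(prompt: str) -> str:
            response = model(prompt)
            self.records.append(AuditRecord(
                timestamp=datetime.now(timezone.utc).isoformat(),
                prompt=prompt,
                response=response,
                flagged=flag(prompt, response),
            ))
            return response
        return audited


# Example with placeholders: a stand-in model and a trivial rule
# that flags empty answers for human review.
log = AuditLog()
model = log.wrap(
    model=lambda p: "",                     # placeholder for a real model call
    flag=lambda p, r: len(r.strip()) == 0,  # placeholder review rule
)
model("What were Q3 revenues?")
```

In practice the flagging rule would itself be a model, as the paragraph above suggests, but the pattern stays the same: every interaction is recorded, and flagged records are routed to a reviewer.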

Robust Legal Frameworks 

While governing committees are convening around the world in efforts to understand and regulate artificial intelligence, businesses will have to be more proactive. Leaning on the expertise of lawyers who specialize in commercial and intellectual property law is necessary to create a framework that protects businesses against some of the more common issues related to artificial intelligence. As far as data is concerned, experts suggest establishing protection against data leaks by ensuring employees, clients, and vendors are legally required to protect their data. While there may currently be little in the way of federal law, businesses can tailor existing proprietary law for their defense. 

Capacity Building for Employees

An informed and well-equipped team is still the best defense against the onslaught of AI-driven risks. Employees are the front line of the business, and companies must invest in training them to evaluate information, proactively search for credible sources, and better understand AI and how it works. This builds employees’ media and digital literacy, which in turn protects businesses against the harmful threats of AI. It can also improve AI adoption in a way that is beneficial to organizations. 

Conclusion

We’ve reached the point many were concerned about: the one where technology races ahead of regulation. And while it’s not the fever-dream, sci-fi nightmare Hollywood depicts, there are a number of serious concerns for individuals and businesses. 

The lack of regulation and legal recourse makes it difficult to embrace this technology the same way we did a year or two ago. And while the slow wheels of bureaucracy churn, new tools and applications are being developed, each attempting to leapfrog the other. Businesses are clamoring to make use of AI, many in an attempt to keep abreast of the latest developments in their field and compete more effectively in their industry. But while AI can offer efficiency, it can also open up organizations to proprietary and reputational risk. 

The technology is still incredibly fallible, and without the necessary legal and regulatory frameworks in place, we find ourselves in a vulnerable position. What remains true, however, is that something needs to be done, and soon, to ensure everyone can enjoy the benefits of this technology ethically and safely.