
A How-To-Regulate-AI Guide for All

When it comes to artificial intelligence, the government can’t just ban the bad uses and keep the good ones, and the private sector can’t just go along.

Words: Yameen Huq
Pictures: Lukas

Nothing fuels fear and excitement today like the subject of artificial intelligence, or AI. While such technologies have been around for a while, the recent explosion in Generative AI, or GenAI, tools has made AI more visible and made folks more aware of its benefits. ChatGPT, for example, responds to users’ prompts: instructions written in a human language such as English rather than in programming code. And for better or worse, AI is bringing us closer to science fiction.

GenAI tools can increase the quality and quantity of white-collar work products, from helping write articles to debugging code, but as with any other tool, they come with costs. While federal and state governments alike can and should regulate such technologies, the processes are cumbersome and error-prone. Private organizations shouldn’t simply wait for effective regulation that may never come. Instead, leaders should implement their own regulations, internal to their organizations, to shield themselves from the risks of improper use.

Defining Artificial Intelligence

What exactly is AI? For many, the name conjures up images of floating supercomputers, humanoid robots, and, ultimately, human replacement. However, these are just (exaggerated) instances of AI in use and don’t represent the concept in its totality. Like any new technology, AI should be understood in terms of how it functions as an economic activity: mimicking human intelligence, however strongly or weakly, in order to analyze, automate, or create in ways that improve productivity. By this definition, it’s clear that AI has been with us for generations; it’s only the depth and sophistication that have changed.

Historically, AI was primarily limited to analysis and automation. GenAI tools such as Midjourney, ChatGPT, and DALL-E have expanded this domain to the third capability: training models on data to create all kinds of content, such as text, audio, video, and even programming code itself. GenAI programs use large quantities of data and complex statistics and optimization to “predict” an output, such as the additional text needed to “complete” a particular input text. One can think of them as highly advanced statistical tools for autocompletion. This is not to downplay the achievements but to put them in perspective. These tools are already being used in ways that will certainly transform white-collar work, such as by analyzing visual images, writing programming code, and even displaying cultural literacy.
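
To make the “autocomplete” analogy concrete, here is a deliberately toy sketch in Python: a bigram model that “predicts” the next word purely from frequency counts in a tiny training text. This is an illustration of the underlying idea, not how production systems work; real GenAI models use neural networks trained on vastly larger datasets, but the core move, predicting likely continuations from patterns in training data, is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus." Real models train on trillions of words.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count how often each word follows each other word (a "bigram" model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(prompt_word: str, length: int = 5) -> str:
    """Greedily extend a prompt by repeatedly picking the most likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# Prints a plausible continuation learned from the corpus, e.g. "the dog sat on ..."
print(autocomplete("the"))
```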

These commonplace forms of AI are often confused with artificial general intelligence, or AGI, which is AI that can mimic a wide range of human activities with varying effectiveness. Today’s AI tools, by contrast, are limited to a much narrower set of applications, such as generating content, automating clearly defined processes, and performing pattern analysis.

What Can GenAI Do?

GenAI can improve human productivity across multiple dimensions. GenAI text tools such as Claude and ChatGPT can be used to reduce time spent on mundane tasks so that workers can allocate their time elsewhere. In some instances, these tools may automate such functions completely, freeing up individuals to pursue new careers generated by the productivity gains. For example, IT customer service could be automated into true self-service portals for handling requests, and the savings could then be invested in upskilling IT teams to handle the more difficult cases. Finally, GenAI tools can help improve product quality: artists, for example, can use image-based tools such as DALL-E to explore possibilities for inspiration before putting pen to paper.

But as with all technologies, GenAI can be used in ways that are harmful. First, the tools may produce incorrect outputs, also known as “hallucinations.” Second, these tools can be used in ways that violate intellectual property rights. The recent agreement between the Writers Guild of America and the Alliance of Motion Picture and Television Producers to end the writers’ strike sets hard limits on the use of copyrighted material for producing GenAI content. Third, as with any algorithm, these tools can violate privacy rights and breach confidentiality agreements if they use protected data. Finally, data collection is subject to numerous biases, which raises the risk that any GenAI model trained on that data will be biased and discriminatory as well. These risks create costs that are both private and social; a private organization that commits the above violations will face financial repercussions for doing so. Therefore, it is in the private sector’s interest to mitigate these risks regardless of whether the government does.

Technically, the government can ban the “bad” uses and keep the “good” ones. But unfortunately, when it comes to reducing harm, the law can fall short for two reasons: gridlock and rent-seeking. Technology changes quickly and in unpredictable ways; legislation and regulations emerge through a much more cumbersome bureaucratic process. In addition, there’s no guarantee that the regulations will be designed to minimize the bad and maximize the good. Regulations, especially for new activities and products, often fall prey to the “bootleggers and Baptists” phenomenon: the people most invested in new regulations are the ones least interested in optimizing them, namely those who fear the technology and those who directly compete with it.

So What Do We Do Then?

Fortunately, the government isn’t the only actor that can effectively regulate GenAI: private entities can do so as well within their own domain. 

At the heart of what private entities are trying to do is mitigating the principal-agent problem. This is a failure that occurs when the “principal,” in this case the business owners and leadership, has to entrust critical functions to “agents,” individuals tasked with maximizing value for the principal. The “problem” is that principals can’t monitor agents 24/7, and agents have self-interested reasons to behave in ways that don’t maximize the principal’s value. With regard to GenAI, agents without oversight can use these tools in risky, irresponsible ways that raise costs for the organization. Well-designed regulations should not stifle positive uses of GenAI but should disincentivize the bad ones.

Good regulation is a multi-stage process that aligns the interests of the agents, or employees, with those of the principals, or leadership. Leaders should do the following:

1. Figure out what’s actually affected by GenAI, such as content creation, cybersecurity, third-party risks, and privacy.
2. Quantify GenAI’s impacts on those processes, such as by developing a risk assessment to support effective decision-making (a minimal sketch of such a scoring exercise follows this list).
3. Use this knowledge to close human skills gaps and train teams to make aligned decisions.
4. Set up an oversight group with benchmarks and key performance indicators to evaluate whether or not the workforce is using GenAI responsibly.
5. Extend this approach to any third-party entities the organization works with.
6. Monitor that everyone is following these rules.

The immediate goal of these steps is to minimize an organization’s costs. The greater benefit, though, is that the more organizations follow this approach, the better it is for society.
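
To illustrate the second step, here is a hypothetical sketch in Python of what a lightweight GenAI risk assessment might look like: each use case is scored on likelihood and impact across the risk categories discussed earlier (hallucination, intellectual property, privacy, and bias), and high scores flag the uses that need the tightest oversight. The scales, weights, and threshold are illustrative assumptions, not an established standard.

```python
from dataclasses import dataclass

# Risk categories discussed above: hallucination, intellectual property,
# privacy/confidentiality, and bias.
RISK_CATEGORIES = ("hallucination", "ip_violation", "privacy", "bias")

@dataclass
class UseCase:
    name: str
    likelihood: dict  # category -> 1 (rare) to 5 (frequent); illustrative scale
    impact: dict      # category -> 1 (minor) to 5 (severe); illustrative scale

    def risk_score(self) -> int:
        """Classic likelihood-times-impact score, summed across categories."""
        return sum(self.likelihood[c] * self.impact[c] for c in RISK_CATEGORIES)

# Hypothetical use cases; real numbers would come from the oversight group.
use_cases = [
    UseCase(
        name="Drafting marketing copy",
        likelihood={"hallucination": 3, "ip_violation": 2, "privacy": 1, "bias": 2},
        impact={"hallucination": 2, "ip_violation": 4, "privacy": 2, "bias": 3},
    ),
    UseCase(
        name="Summarizing client contracts",
        likelihood={"hallucination": 3, "ip_violation": 1, "privacy": 4, "bias": 1},
        impact={"hallucination": 4, "ip_violation": 2, "privacy": 5, "bias": 2},
    ),
]

REVIEW_THRESHOLD = 30  # illustrative cutoff for mandatory human review

for case in sorted(use_cases, key=UseCase.risk_score, reverse=True):
    verdict = ("requires extra oversight" if case.risk_score() > REVIEW_THRESHOLD
               else "standard controls")
    print(f"{case.name}: score={case.risk_score()} -> {verdict}")
```

A real oversight group would calibrate such scores with domain experts and revisit them as tools and use cases evolve; the point is simply that the assessment turns vague worries into comparable numbers that can drive decisions.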

Proper regulation that balances benefits against costs is as necessary for GenAI as it is for any other technology. This is especially true because criminal organizations have little reason to adopt harm-reduction policies around GenAI. But leaders in private industry shouldn’t wait for regulations to arrive, nor expect them to be sufficient. They should apply rules now within their own organizations to mitigate the costs of these valuable yet volatile technologies.

Yameen Huq

Yameen Huq is the Director of the US Cybersecurity Group at Aspen Digital.
