The Sweeping Tide of AI Tech and ChatGPT

New AI systems present citizens and policymakers with a familiar conundrum.

Words: Lovely Umayam
Pictures: Brooke Cagle

This month’s installment of Inkstick’s culture column, The Mixed-up Files of Inkstick Media (inspired by From the Mixed-up Files of Mrs. Basil E. Frankweiler), where we link pop culture to national security and foreign policy, is about ChatGPT and the existential conflict it has created for all of us as we embark on this tech train.

“Let’s make this interesting…I want stories with conflict.”

I rang in the new year with my family on the living room couch, peering over my father-in-law’s phone as he played with ChatGPT, a chatbot powered by OpenAI’s artificial intelligence system. We watched ChatGPT string together words into sentences in response to my father-in-law’s outlandish prompts (stories with conflict!), serving us tales of dragons battling bakers, couples quarreling over biometric tattoos, and babies protesting their parents’ bad bosses.

Silly and brief as it was, I was acutely aware that this was my first step into a strange future: artificial intelligence (AI) — the capability of a computer to execute logic simulating human cognition, including cause-and-effect reasoning, pattern matching, and prediction — is now good enough to tell stories as deftly and imaginatively as a real person could. It has only been a few months since my first experience with ChatGPT, but its debut in the public domain has already made a firecracker impression. Within two months of launch, ChatGPT had reportedly reached 100 million users, faster than any consumer application before it, drawing an estimated 13 million visitors a day. In March 2023, OpenAI released GPT-4, an update to the language model system powering ChatGPT.

This quick, massive reach has naturally created opposing factions. Some believe advancements in AI will enable humans to delegate undesirable tasks to computers, boosting human productivity, ingenuity, and, of course, profit. Tech companies are betting big: in 2019, Microsoft announced a $1 billion investment in OpenAI, and it recently unveiled plans to integrate the technology into its suite of office products as assistants (Clippy resurrected). Others warn that giving the general public direct, conversational access to AI will lead to unethical shortcuts in how we work, from encouraging plagiarism to erasing the essence of individuality, thereby reducing the value of, and need for, human effort. Worse, such a direct, widely accessible interface invites people to ask dangerous questions that yield devious answers. If prompted correctly, an AI chatbot can spread conspiracy theories, identify precursors to a chemical weapon, or carry on a relatively cogent conversation about how to build a dirty bomb.

Despite the debate, ranging from the creative to the philosophical, the hand-wringing around AI doesn’t feel new. From nuclear weapons to social media, disruptive technologies share a familiar pattern: their introduction to society fundamentally changes the way we live; the world agrees it is too late to put the proverbial genie back in the bottle; and companies and governments encourage an innovation race to make the technology bigger, better, and faster without stopping to consider how that might break things. What is uniquely worrying about AI is the pace of improvement.

Regulation Woes

OpenAI introduced GPT-4 just four months after its predecessor’s release (GPT-3.5 debuted in November 2022), and it already scores significantly higher on some of the hardest standardized tests, demonstrating stronger human-like cognitive capability. In a recent interview, OpenAI CEO Sam Altman said he anticipates long-term exponential growth in AI technology, with updates arriving so frequently that society stops finding each improvement impressive or worrisome, much as the iPhone astonished in 2007 and its subsequent versions normalized incremental change. But there is danger in being swept up by the rapid tides of progress, which can shift our understanding of what is real and what is considered intelligent — and what makes us human.

For one, there are no proper regulatory anchors to hold society steady as the tides grow stronger. In the United States, lawmakers admit to having trouble envisioning a legal governance framework for AI, in part because many have a hard time understanding what it is. In the EU, a proactive attempt to regulate AI, first proposed in 2021, must now be reworked to address the dual-use nature of the technology so that it can better distinguish benign general public use from “high-risk” activity. The current draft leaves fundamental terms such as “AI system” open to interpretation and does not outline an AI company’s obligations to report suspicious or malicious use. It is also unclear whether governments should be the sole authority enforcing standards and tracking compliance when, in some cases, they have used technology for their own preservation and gain.

Currently, the onus is on AI manufacturers to develop algorithms that train AI systems to identify risks, including misinformation and harmful content. But an industry-led model is not sustainable without clear economic incentives to prioritize safety and responsibility. Again, we’ve been here before: in the 27 years since the advent of the first social media platform, social media companies have had free rein to innovate and expand under the seemingly good-hearted premise of global connection and community. The world may be more connected, but attention spans are shorter, misinformation spreads farther, and our personal data is more exploited than ever before. To date, only the EU has comprehensive legislation to hold social media and other digital companies accountable.

Therein lies the ultimate conflict in humanity’s pursuit of new technological terrain: there is a lot of money to be made (as one publication put it, it’s the new gold rush!), which commits the world to a fast track of innovation and scale that knows no bounds. There is no space to breathe or think, only to react. But this is just the beginning of AI’s very strange journey. As a collective, we don’t have to accept a future where we lose ourselves to the technology we make. There is still time to ask hard questions: Who is AI really for? Is it genuinely for everybody? To what end does it benefit humankind? Will government and industry oversight be enough to prevent exploitation? Is there a third, unexplored possibility for governance? Short of regulation, what can we do in our personal capacities to be more mindful and protective of our digital footprints in preparation for this brave new world? Perhaps these questions allow for moments of pause and caution, and may even inspire new thinking around AI regulation. The other option is to do nothing and wait for the tides of progress to take over. Sadly, we know how that story ends.

I asked ChatGPT to write its own conclusion to my article. Here’s what it offered:

I couldn’t have said it better myself.

Lovely Umayam

Editorial Board Member

Lovely Umayam is the founder and chief writer of Bombshelltoe, a blog featuring stories about nuclear history and politics, art, and media. Bombshelltoe won first prize in the U.S. Department of State’s 2013 Innovation in Arms Control Challenge. Lovely’s work under Bombshelltoe has been featured at 2013 SXSW Interactive, in Fast Company and the Bulletin of the Atomic Scientists, and at the U.S. Department of State’s 2013 Generation Prague Conference, where she interviewed Under Secretary of State for Arms Control and International Security Rose Gottemoeller.
