
Artificial Intelligence Threatens to Increase Knowledge Inequality

We have entered the era of digital feudalism — and it’s not looking democratic.

Words: Michael W. Wright
Pictures: Andy Kelly

Over a decade ago, in “The New Business Normal,” my co-author Walt Ferguson and I wrote that competitive advantage accrues to those able to access, aggregate, analyze, and act on information faster worldwide. The flip side is that disadvantages accelerate for those lacking access, skills, and resources. While technologies connect people through basic functionality, higher-intelligence products rapidly leave many behind. Crossing horizons like artificial intelligence (AI), the Internet of Things (IoT), and privacy can become points of exclusion.

Consider AI: information expands faster than our ability to understand it. We lack methods to foresee a technology’s ramifications across societies. If AI concentrates in the hands of a few, the 99% may fall permanently behind.

Could AI Consolidation Create a New Digital Feudalism?

In 2018, I wrote about the concerning trend of AI and domain-specific knowledge becoming increasingly concentrated in the hands of fewer companies and individuals. This aggregation of AI and knowledge, combined with the growing concentration of wealth, suggested the emergence of a powerful oligarchy that could lead to a new form of “digital feudalism.” 

Five years later, this trend has only accelerated, as evidenced by examples like OpenAI’s ChatGPT, Google Brain’s PaLM, DeepMind’s AlphaFold, and Anthropic’s Claude. As AI becomes more advanced, it requires exponentially more data, computing capacity, and talent. This entrenches the position of incumbents like Google, Microsoft, Meta, and Amazon. 

All signs point toward an impending “digital serfdom” as AI and wealth aggregate under corporate oligarchs. AI infrastructure holders Microsoft, Google, IBM, Oracle, and Amazon have created large barriers to entry in the form of capital required to participate at scale. Without concerted efforts, digital feudalism seems probable.

The Threat of Widening Inequality

Recent developments reinforce the divergence between the AI “haves” and “have-nots.” ChatGPT, AlphaFold, and Anthropic’s Claude all demonstrate the vast resources needed to advance AI, which few can access or replicate. According to a recent OpenAI report, the cost of training large AI models will rise from $100 million to $500 million by 2030. On top of that come operating costs, estimated at $700,000 per day.

If AI concentrates in the hands of a few, the 99% may fall permanently behind.

The implications span economic, political, and social realms. Economically, small firms and startups will struggle to compete as winner-take-all effects intensify.  In 2019, Apple acquired the AI startup Drive.ai for an undisclosed sum. Drive.ai developed self-driving car technologies and was once valued at $200 million but struggled to compete with billion-dollar giants like Waymo and GM Cruise. The acquisition effectively ended Drive.ai as an independent company. 

Politically, concentrated surveillance and manipulation capabilities endanger democracy. 

Socially, those without AI expertise face unemployment as automation accelerates. Robots are already being used in factories to perform tasks such as welding, painting, and assembling products. As these robots become more intelligent, they will be able to perform even more tasks, which could lead to the loss of millions of factory jobs. A recent study, “AI, Automation, and the Future of Work,” by the McKinsey Global Institute estimated that up to 800 million jobs worldwide could be displaced by automation by 2030.

The growing divides may not only separate social classes but could also entrench inequalities between countries through data colonization and related practices. Data colonization occurs when vast datasets are extracted and shared across borders without oversight or consent. Like the mining deals struck for mineral and oil resources, the convergence of technology and personal information is creating new opportunities for data to be mined across borders.

Escaping this spiral requires reducing barriers through open standards, public funding for open-source AI, taxation on data consolidation, stronger privacy laws, and platform cooperatives.  

However, market dynamics naturally favor concentration. Breaking this default trajectory requires conscious policy effort. If knowledge differentials are allowed to intensify unchecked, we risk creating a permanent underclass without agency in an increasingly algorithmic world. Mitigating the harms of accelerating inaccessibility obliges us to prioritize equity over efficiency.  

Regulating the AI Oligarchies     

The acceleration of knowledge inequality poses an existential policy challenge. Proactive reform can distribute the dividends of algorithms widely. 

First, an “AI Inclusion Index” should track access metrics like data and tool availability, AI literacy, computational resources, and diversity of practitioners. Such an index can also flag datasets that are biased against certain groups by examining how the data is distributed across categories such as gender, race, or ethnicity. If an index shows that women are significantly underrepresented in a dataset, for example, that could be a sign of bias. Existing tools and initiatives, such as Google’s Fairness Indicators, Microsoft’s Fairlearn, and EqualAI, point in this direction.
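To make the idea concrete, here is a minimal sketch of the kind of representation check such an index might include, assuming a tabular dataset loaded with pandas; the column names, toy data, and threshold are hypothetical, not part of any existing index.

```python
# Illustrative sketch only: one simple representation metric an
# "AI Inclusion Index" might track. Dataset, column, and threshold
# are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str, floor: float = 0.30) -> pd.DataFrame:
    """Compute each group's share of records and flag groups below a chosen floor."""
    shares = df[group_col].value_counts(normalize=True).rename("share").to_frame()
    shares["under_represented"] = shares["share"] < floor
    return shares

# Hypothetical usage with a toy dataset
data = pd.DataFrame({"gender": ["woman", "man", "man", "man", "woman", "man", "man", "man"]})
print(representation_report(data, "gender"))
```

If one group’s share falls well below its share of the population the model is meant to serve, that gap is a warning sign that the resulting system may perform worse for, or discriminate against, that group.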

Second, policymakers at all levels must directly address exclusion through interventions like public procurement of open AI systems, university access grants, and diversity incentives. 

Third, updated competition policies and regulations can reduce entry barriers via interoperability, data portability, and transparency requirements. One example is shifting control of personal data back to individuals, as the EU has done through the General Data Protection Regulation (GDPR), which went into effect in 2018. Users must explicitly consent to their personal data being collected and have the right to access, delete, or export their data, which limits the data available to train AI systems. In the EU, anonymization is now required for data used in AI, data use is limited to the purpose specified before collection, and there are no unexpected secondary uses.

Fourth, international cooperation can balance development and prevent an AI arms race. In this environment, any cooperation would be of benefit but seems unlikely to develop without a triggering event. States are aware that AI is a winner-take-all game: whoever pulls decisively ahead could end the race in an instant. Russian President Vladimir Putin said it best: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” It may take an observable horror, akin to the atomic bombings of Hiroshima and Nagasaki, or the continued acceleration of existential threats such as climate change, to drive cooperation across national borders.

Catalyzing action requires urgency that lawmakers currently lack. Other potential triggering events include high-profile AI failures, mass surveillance proliferation, economic exclusion, algorithmic propaganda, and asymmetric cyberattacks.

No solution is a silver bullet. But preventing digital feudalism will take vision, courage, and global cooperation. We must choose democratization over indiscriminate or unintentional domination. The future need not be an algorithmic oligarchy, but reform must happen now.

Michael W. Wright

Michael W. Wright is the CEO of Intercepting Horizons, LLC. He is an author and former global high-tech executive at scale, who has held leadership positions including CEO, COO, Chairman, and board member of public and private companies. He currently serves as a board advisor and executive coach, helping companies anticipate converging technology trends and build viable business strategies.
