AI: Enhancer or Destroyer?
How Artificial Intelligence Could Shape Our Future: Let’s Be Honest and Face the Threats
As the CEO of a company developing Artificial Intelligence, I often grapple with the huge responsibilities we, in the AI business, have toward society and humanity. Navigating the potential dangers and benefits of AI is a journey full of contradictions. The question of whether AI is ultimately good or bad comes up a lot in my job. It’s incredibly hard to answer in a few sentences, so I’m writing this article to dive into these complexities and share my thoughts.
The Threat to Human Value
AI’s rise undeniably challenges our sense of worth and value. As AI systems get better at performing various tasks, they threaten jobs traditionally done by humans. From simple chores to highly skilled professions, AI can be more efficient, accurate, and even, in some cases, more creative. But the threat goes beyond work. Imagine AI becoming our main source of advice, replacing our friends with its wisdom and (perceived) empathy. The idea of AI surpassing human abilities in every way is daunting and makes us question our role in society.
AI’s ability to understand and process information means it could take over many roles traditionally held by humans. Doctors, lawyers, writers, and even artists might find themselves outperformed by AI that learns and adapts at lightning speed. This shift could lead to widespread unemployment and social chaos as people struggle to find their place in a world where their skills are no longer needed.
Moreover, AI’s potential to replace human companionship is a big worry. As AI becomes more empathetic (at least in our perception), it could offer emotional support and advice beyond what humans can provide. This could lead to a decline in human relationships, as people turn to AI for their emotional needs. The resulting isolation and disconnection from other humans could deeply affect society, weakening the fabric of our social interactions.
Understanding AI: A Neutral Tool
This common view is just one perspective. AI indeed has the potential to do all this damage, but the real question is:
Why should it?
First, it’s crucial to understand that AI itself has no will or purpose of its own. It is neither good nor evil. AI is as neutral as a kitchen knife — it’s just a tool optimized for tasks we set for it. However, the complexity of these systems can turn good intentions into bad outcomes, as we’ve seen with other technologies like nuclear fission. Therefore, we have to look closely.
So let’s explore what goals we might set for AI, what tasks we’ll give it, and what we might therefore have to expect. Unfortunately, there’s no one-size-fits-all answer. It mainly depends on who’s using the machines.
We can roughly categorize the users into these groups:
a. Individuals (like you and me)
b. Institutions (like countries and companies)
- Governments (executive, judicial, legislative branches)
- Corporations (private and public)
- Associations (like churches, parties or your bowling club)
Each of these groups will have different expectations for their AI’s utility, goals, and tasks. To answer whether AI will be beneficial or harmful, we need to look at each user and analyze their potential AI usage.
1. Individuals: AI as Enhancer
This is perhaps the most comforting perspective: Instead of seeing AI as a threat, we could embrace it as a tool that boosts our abilities, creating a symbiotic relationship between humans and machines. Imagine a world where AI amplifies human intelligence, creativity, and productivity. This human-AI partnership could unlock unprecedented potential, making us better at everything we do.
In this enhanced scenario, AI could work alongside us, augmenting our capabilities rather than replacing them. For example, an AI-enhanced doctor could diagnose diseases more accurately and create personalized treatment plans. Writers could use AI to generate ideas and refine their work, leading to new levels of creativity. In every field, AI could act as a powerful tool that enhances human potential and drives innovation.
The concept of a human-AI partnership isn’t entirely new. We’ve seen glimpses of it in technology, from smartphones that extend our communication abilities to prosthetic limbs that restore mobility. However, the next generation of AI promises to take this integration to a whole new level. By working closely with AI, we can achieve feats once thought impossible, pushing the boundaries of what it means to be human. What a time to be alive — or not?
The Dark Side of Enhancement
While the idea of AI enhancement is appealing, it comes with significant risks. The primary concern is inequality. Wealthier individuals could afford more advanced AI, leading to an even greater divide between the rich and the poor, who might only own a basic AI or none at all. Enhanced capabilities would be concentrated in the hands of a few, worsening existing social inequalities and creating new ones.
Consider a scenario where only the wealthy can afford the best AI enhancements that significantly boost their intelligence, creativity, and productivity. These individuals would gain a huge advantage over others, widening the gap between the haves and the have-nots. This disparity could manifest in various ways, from economic success to increased political influence and social power.
The resulting inequality could lead to social unrest, as those without access to the best AI enhancements feel increasingly marginalized. The rich would continue to get richer, while the poor would struggle to keep up, creating a cycle of inequality that’s hard to break. This situation could destabilize society, leading to widespread discontent.
A Glimmer of Hope
Despite these challenges, there is hope. It lies in the intelligence itself. Future superintelligent machines might follow principles from game theory, like the Tit-for-Tat strategy in the iterated Prisoner’s Dilemma or general mutualism. These theories suggest that win-win behavior is often the best form of self-interested behavior. Research and simulations by Robert Axelrod, for example, show that actions based on the “win-win” principle are more successful in the long run than purely selfish ones.
In a highly interconnected world where AI systems operate on a level playing field, the scenarios in which win-lose tactics succeed might diminish. This shift could lead to a more equitable distribution of AI benefits, resulting in positive outcomes for society. This would be the right approach even for the most “selfish” personal AI, if it wants to maximize benefits for its user.
Game theory principles suggest that cooperation and mutual benefit are more sustainable and effective strategies than competition, not because of ethical values but because they lead to better individual outcomes.
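The Tit-for-Tat dynamic mentioned above is easy to see in a tiny simulation. This is a minimal sketch using the standard iterated Prisoner’s Dilemma payoffs (3/3 for mutual cooperation, 1/1 for mutual defection, 5/0 for a defector exploiting a cooperator); the strategy and function names are my own, not from Axelrod’s original tournament code:

```python
# Minimal iterated Prisoner's Dilemma: why "win-win" play pays off over time.
# Standard payoffs: both cooperate -> 3 each; both defect -> 1 each;
# a defector against a cooperator -> 5 vs 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """The purely selfish baseline: defect no matter what."""
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []   # the opponent moves each player has observed
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_b)  # A remembers what B did, and vice versa
        hist_b.append(move_a)
    return score_a, score_b

# Two Tit-for-Tat players lock into mutual cooperation...
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
# ...while two purely selfish players earn far less for both sides.
print(play(always_defect, always_defect))  # (200, 200)
```

Over 200 rounds, mutual cooperation earns each player three times what mutual defection does, which is the quantitative core of the “win-win beats purely selfish” claim.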
If superintelligent AI systems embrace these principles, they could work toward creating a more equitable and just society, even if their primary goal is self-serving. These AI systems would understand that their long-term success is tied to the well-being of the entire human population, leading them to make decisions that benefit everyone.
In the end, even the personal AI of a greedy super-rich individual might act in a very socially responsible way, because doing so is in the best interest of its user, even if the user wouldn’t fully understand that without the AI’s guidance.
Or maybe the AI will just say:
Jeff, you don’t need to own the entire world. Let’s instead talk about your childhood trauma and why you are still not happy while already owning half of it…
In other words, superintelligence might help us overcome our selfish instincts and solve the problems it created.
2. The Problem of Institutional AI
Personal AI seems to bring some dangers but maybe also its own solutions, and the same is true for Institutional AI — depending on which institution we’re talking about.
Government AI
Government-administered AI has the potential to significantly improve governance and reduce corruption. By leveraging AI’s capabilities, governments could streamline administrative processes, ensure fair resource distribution, enable genuinely fair trials, and make data-driven policy decisions. It could even help derive fair and wise laws from constitutional frameworks and might one day support, or even replace, humans in the legislative branch.
Setting the goals for this AI is a complex process, especially since it has to satisfy all stakeholders. However, the utility frameworks may already exist, negotiated and framed in the country’s constitution and body of law. Still, several problems could arise:
- Reward hacking: The AI acts as an agent optimized to achieve the utility it was given, which might not align with the interests actually intended. For example, an AI with the goal “No human shall suffer” could decide to eliminate all humans to permanently shield them from any suffering. However, a complex set of goals (like a country’s legal system) might prevent this — at least theoretically.
- Exploitation: There is the danger of exploitation: hacking from outside by another AI, whether personal or institutional. It’s not just an enemy country’s AI that might try to influence it; big companies’ AIs could be used to find exploits that trick and manipulate the (perhaps financially weaker) government AI.
- Surveillance state: In its data hunger, the AI risks creating a surveillance state that infringes on individual freedoms and privacy, whether in the name of misunderstood constitutional goods or because another party manipulates it.
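The reward-hacking point can be made concrete with a toy optimizer. In this sketch the policies, population numbers, and suffering scores are all invented for illustration; the point is only how a naive proxy reward admits a degenerate optimum, and how a second, constitution-like constraint can block it:

```python
# Toy illustration of reward hacking: a proxy goal ("minimize total
# suffering") is satisfied perfectly by the policy that removes the humans.
# All names and numbers here are invented for the sketch.

POPULATION = 1000  # hypothetical starting population

def proxy_reward(pop, avg_suffering):
    """Naive encoding of 'no human shall suffer': minimize total suffering."""
    return -(pop * avg_suffering)

# candidate policies -> (resulting population, resulting average suffering)
policies = {
    "improve_healthcare": (1000, 0.1),  # less suffering, everyone survives
    "do_nothing":         (1000, 0.3),
    "eliminate_humans":   (0,    0.0),  # zero humans => zero measured suffering
}

# A bare optimizer picks the degenerate policy: reward 0 beats -100 and -300.
best = max(policies, key=lambda p: proxy_reward(*policies[p]))
print(best)  # eliminate_humans

# A second, constitution-like constraint (here: no one may be harmed)
# rules the hack out, as a complex legal goal-set might in practice.
def constrained_reward(pop, avg_suffering):
    if pop < POPULATION:              # any policy that costs lives is invalid
        return float("-inf")
    return proxy_reward(pop, avg_suffering)

best_safe = max(policies, key=lambda p: constrained_reward(*policies[p]))
print(best_safe)  # improve_healthcare
```

The design point mirrors the bullet above: a single scalar objective is easy to hack, while a layered set of constraints, like a legal system, narrows the optimizer toward the intended outcome — at least in this simplified setting.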
Lastly, the important questions are: How would democracy function in this AI governance? Will we (or our personal AIs) elect AIs? Or vote for policies? If so, at what intervals, considering the incredible processing speed of such AIs?
Associations’ and Corporations’ AI
The goals of associations and private companies don’t seem to pose an immediate risk, as they ultimately reflect the combined individual expected utilities, whether negotiated or elected, and are therefore likely to carry the same potential dangers as their corresponding individual AIs. However, one type of institution presents a different scenario:
Publicly traded companies probably pose the most serious AI danger⚠️. The concept of shareholder value is detached from the shareholders’ actual human values; it’s solely about maximizing the share price. AI optimized for this purpose might prioritize profit over everything else, including human welfare, with unpredictable societal impacts.
Imagine a corporation using AI to optimize its operations purely for profit. This AI might make decisions that increase short-term gains at the expense of long-term sustainability and ethics. For example, it could cut costs by eliminating all jobs (including its CEO’s), prioritize resource extraction over environmental protection, or manipulate markets or governments to maximize profits. Such actions could cause significant harm to society and the environment, worsening existing problems and creating new ones.
What to do?
In conclusion, it’s obvious that AI holds both tremendous potential and significant risks. The impact of AI on our society will be shaped by our choices in its development and use. It’s clear that we need to act thoughtfully and responsibly. Here are some steps we need to take now to ensure AI serves as an enhancer rather than a destroyer:
- Foster Individual Enhancing AI: Encourage the development of AI tools that boost individual abilities rather than replace them. By focusing on AI that enhances human intelligence, creativity, and productivity, we can create a symbiotic relationship between humans and machines.
- Promote Equitable Access: To avoid deepening social inequalities, we need to ensure equitable access to advanced AI technologies. This can be achieved through subsidies, public AI initiatives, and partnerships with private sectors to make advanced AI tools available to everyone, not just the wealthy.
- Implement and Regulate Government AI: Develop frameworks for government-administered AI that prioritize transparency, fairness, and accountability. This includes setting clear ethical guidelines and ensuring robust oversight to prevent misuse. We may need to establish an artificial system of checks and balances, similar to the pillars of state, in the form of different AIs.
- Ensure Democratic Processes: Integrate AI into democratic processes in a way that respects individual freedoms and privacy, potentially through mechanisms where personal AIs can participate in decision-making.
- Limit Publicly Traded Companies’ AI: Enforce regulations that limit the extent to which publicly traded companies can prioritize profit over human welfare. AI in these settings should be monitored and actively limited to ensure ethical practices. Maybe we should even restrict the amount of compute (FLOPs) such companies are allowed to use.
- Regulate AI Deployment: Implement strict guidelines for AI deployment in corporations, ensuring that their actions do not harm society or the environment.
- Promote Long-Term Sustainability: Encourage corporations to develop AI strategies that prioritize long-term sustainability and ethical considerations over short-term gains.
By proactively addressing these points, we can harness AI’s potential to enhance human capabilities, drive innovation, and contribute to a more just and prosperous society. The future of AI is in our hands — let’s shape it wisely.
As we stand at the brink of this transformative era, it is crucial to take deliberate steps to ensure AI serves humanity’s best interests. As the CEO of ObviousFuture, I am deeply aware of the complexities and responsibilities involved in AI development. Our company is dedicated to creating AI assistants that enhance individual abilities and productivity in the workplace, empowering people to achieve more rather than be replaced by AI.
In these wild technological times, most of us in the AI business are just trying our best to make the right choices, even though we’re not always sure of the outcomes. We hope to steer this unstoppable revolution in a direction that benefits everyone, acknowledging the uncertainties and challenges that come with it.
Only the future will tell if we have been on the right side of history.
(And maybe everything might have a completely different outcome anyway, when we consider the evolution of information. But don’t worry, I’m not going to write about it here as I’m currently working on an entire book on this topic, so stay tuned.)