Perhaps the best-known risk is the one embodied by the killer robots in the “Terminator” movies: the idea that AI will turn on its human creators. The story of the arrogant inventor losing control of his own creation is centuries old. And in the modern era, observes Chris Dixon, a venture capitalist, “Hollywood trains people from childhood to fear artificial intelligence.” A version of this thesis, focusing on the existential risks (or “x-risks”) that AI might one day pose to humanity, was developed by Nick Bostrom, a Swedish philosopher, in a series of books and articles beginning in 2002. His arguments have been adopted and expanded by others, including Elon Musk, the boss of Tesla, SpaceX and, latterly, X.
Proponents of “AI safety”—sometimes called catastrophists, or “doomers”—fear that AI could cause harm in a variety of ways. If AI systems are able to improve themselves, for example, there could be a sudden “takeoff” or “explosion” in which AIs spawn ever more powerful AIs in rapid succession. Catastrophists fear that the resulting “superintelligence” would far outstrip humans and might have motivations very different from those of its creators. Other catastrophic scenarios involve AIs carrying out cyberattacks, helping to build bombs and biological weapons, and persuading humans to commit terrorist acts or launch nuclear weapons.
After the launch of ChatGPT in November 2022 highlighted the growing power of AI, concerns about AI safety dominated public debate. In March 2023 a group of tech luminaries, including Musk, called for a moratorium of at least six months on AI development. The following November, a group of 100 world leaders and tech executives met at an AI Safety Summit at Bletchley Park, England, and declared that the most advanced (“frontier”) AI models have “the potential to cause serious, even catastrophic, harm.”
This approach has since sparked a backlash. Critics argue that x-risks remain largely speculative and that malicious actors wanting to build bioweapons can already turn to the internet for advice. Rather than worrying about theoretical, long-term dangers, they argue, the focus should be on the real risks that AI poses today, such as bias, discrimination, AI-generated misinformation, and violations of intellectual-property rights. Prominent proponents of this stance, known as the “AI ethics” camp, include Emily Bender of the University of Washington and Timnit Gebru, who was fired from Google after co-writing a paper on those dangers.
Examples of real harm caused by malfunctioning AI systems abound. An image-tagging feature in Google Photos labeled black people as gorillas; facial-recognition systems trained mostly on white faces have misidentified people of color; an AI résumé-scanning system built to identify promising job candidates systematically favored men, even when applicants’ names and genders were hidden; and algorithms used to estimate the risk of reoffending, allocate child benefits, or decide who qualifies for bank loans have shown racial bias. AI tools can be used to create “deepfake” videos, including pornographic ones, to harass people online or misrepresent the views of politicians. And AI companies face a growing number of lawsuits from writers, artists, and musicians who claim that using their intellectual property to train AI models is illegal.
When world leaders and tech executives gathered in Seoul in May 2024 for another AI summit, the conversation was less about distant risks and more about those immediate issues — a trend that will likely continue at the next AI safety summit, if it’s still called that, in France in 2025. In short, the AI ethics camp now has policymakers’ attention. This is not surprising, because when it comes to making laws to regulate AI — a process already underway in much of the world — it makes sense to focus on addressing existing harms (for example, by criminalizing deepfakes) or requiring audits of AI systems used by government agencies.
Still, policymakers have questions to answer. How broad should the rules be? Is self-regulation enough, or are laws needed? Does the technology itself require rules, or only its applications? And what is the opportunity cost of regulations that reduce the scope for innovation? Governments have begun to answer these questions, each in their own way.
At one end of the spectrum are countries that rely primarily on self-regulation, including the Gulf states and Britain (though the new Labour government may change that). The leader of this group is the United States. Members of Congress talk about the risks of AI but have yet to pass any laws, which makes President Joe Biden’s executive order on AI, signed in October 2023, the country’s most important legal directive on the technology.
The order requires companies that use more than 10²⁶ computational operations to train an AI model — a threshold above which models are deemed a potential risk to national and public security — to notify the authorities and share the results of safety tests. Only the very largest models will cross this threshold. For the rest, voluntary commitments and self-regulation reign supreme. Lawmakers fear that overly strict regulation could stifle innovation in a field where the United States is the world leader; they also fear that slowing American research could allow China to take the lead in AI.
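To put that number in context, here is a minimal sketch (not from the article) using the widely cited rule of thumb that training a dense transformer model takes roughly six floating-point operations per parameter per training token. The model and dataset sizes below are hypothetical, chosen only to illustrate the scale at which the reporting requirement kicks in.

```python
# Rough illustration only: the "6 * parameters * tokens" heuristic for
# estimating the total compute of a training run. Figures are hypothetical.

def training_flop(parameters: float, tokens: float) -> float:
    """Approximate floating-point operations to train a dense transformer."""
    return 6 * parameters * tokens

REPORTING_THRESHOLD = 1e26  # the executive order's threshold, in operations

# A hypothetical frontier model: 1trn parameters, 20trn training tokens.
flop = training_flop(1e12, 20e12)
print(f"{flop:.1e} operations; must notify: {flop > REPORTING_THRESHOLD}")
# -> 1.2e+26 operations; must notify: True
```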
The Chinese government is taking a much stricter stance. It has proposed several sets of rules for AI. The goal is not so much to protect humanity, or Chinese citizens and businesses, as to control the flow of information: training data and the output of AI models must be “true and accurate” and reflect “the core values of socialism.” Given AI models’ propensity to make things up, these rules may be hard to comply with. But that may be the point: when everyone is breaking the rules, the government can enforce them selectively, against whomever it chooses.
Europe is somewhere in the middle. In May the European Union passed the world’s first comprehensive piece of AI legislation, the AI Act, which came into effect on August 1st and cemented the bloc’s role as a global digital standard-setter. But the law is primarily a product-safety measure that regulates applications of the technology according to their degree of risk: an AI-powered writing assistant needs no regulation, for example, whereas a service that assists radiologists does. Some uses, such as real-time facial recognition in public spaces, are banned outright. Only the most powerful models must comply with the strictest rules, such as mandates to assess the risks they pose and take steps to mitigate them.
A new world order?
So a grand global experiment is underway, as different governments take different approaches to regulating AI. As well as introducing new rules, it involves creating new institutions. The EU has set up an AI Office to ensure that large model-makers comply with its new law. The US and Britain, by contrast, will rely on existing agencies in areas where AI is deployed, such as healthcare or the legal profession. But both countries have created AI safety institutes, and other countries, including Japan and Singapore, intend to establish similar bodies.
Meanwhile, three separate initiatives are underway to design global standards and a body to oversee them. One is the AI safety summits and the various national AI safety institutes, which are supposed to collaborate with each other. Another is the “Hiroshima Process,” launched in the Japanese city in May 2023 by the G7 group of rich democracies and increasingly being taken up by the OECD, a larger club made up mostly of rich countries. A third initiative is led by the UN, which has set up an advisory body that is drafting a report ahead of a summit in September.
These three initiatives will likely converge and give rise to a new international organization. There are many opinions about what form it should take. OpenAI, the startup behind ChatGPT, says it wants something like the International Atomic Energy Agency, the world’s nuclear watchdog, to monitor x-risks. Microsoft, a tech giant and OpenAI’s largest shareholder, prefers a less imposing body modeled on the International Civil Aviation Organization, which sets the rules for aviation. Academic researchers argue for an AI equivalent of the European Organization for Nuclear Research, or CERN. One compromise, supported by the EU, would create something akin to the Intergovernmental Panel on Climate Change, which keeps the world abreast of research on global warming and its impact.
Meanwhile, the outlook is complicated. Concerned that a re-elected Donald Trump might overturn the executive order on AI, some US states have taken steps to regulate the technology themselves. California, notably, has more than 30 AI-related bills in the works. One bill in particular, set for a vote in late August, has outraged the tech industry: among other things, it would force AI companies to build a “kill switch” into their systems. In Hollywood’s home state, the spectre of “Terminator” continues to haunt the AI debate.
© 2024, The Economist Newspaper Ltd. All rights reserved. From The Economist, published under license. The original content can be found at www.economist.com