The recent artificial intelligence safety summit convened by U.K. Prime Minister Rishi Sunak has revived a bad idea: creating an “IPCC for AI” to assess risks from AI and guide its governance. At the conclusion of the summit, Sunak announced that like-minded governments had agreed to establish an international advisory panel for AI, modeled after the Intergovernmental Panel on Climate Change (IPCC).
The IPCC is an international body that periodically synthesizes the existing scientific literature on climate change into supposedly authoritative assessment reports. These reports are intended to summarize the current state of knowledge to inform climate policy. An IPCC for AI would presumably serve a similar function, distilling the complex technical research on AI into digestible synopses of capabilities, timelines, risks and policy options for global policymakers.
At a minimum, an International Panel on AI Safety (IPAIS) would provide regular evaluations of the state of AI systems and offer predictions about expected technological progress and potential impacts. However, it could also serve a much stronger role in approving frontier AI models before they come to market. Indeed, Sunak negotiated an agreement with eight leading tech companies, as well as representatives from countries attending the AI safety talks, that lays a foundation for government pre-market approval of AI products. The agreement commits big tech companies to testing their most advanced models under government supervision before release.
If the IPCC is to serve as a template for international AI regulation, it is important not to repeat the many mistakes of climate policy. The IPCC has been widely criticized for assessment reports that present an overly pessimistic view of climate change, emphasizing risks while downplaying uncertainties and positive trends. Others contend the IPCC suffers from groupthink, as pressure on scientists to conform to consensus views marginalizes skeptical perspectives. Additionally, the IPCC’s process has been criticized for allowing governments to stack author teams with ideologically aligned scientists.
Like the IPCC itself, an IPCC for AI would likely suffer from the same problems: politicized research findings and a lack of transparency in assessment processes. There is already reason to worry. The AI safety conference in the U.K. has been criticized for its lack of viewpoint diversity and its narrow focus on existential risks, suggesting bias is being baked into the IPAIS even before its official creation.
This impulse to create elite committees of experts to guide policy on complex issues is nothing new. Throughout history, intellectuals have warned that only they can interpret arcane information and save us from catastrophe. In the Middle Ages, the Bible and Latin mass were inaccessible to the common man, placing power in the hands of the clergy. Today, highly technical AI and climate research play an analogous role, intimidating the layperson with complex statistics and models. The message from intellectuals is the same: heed our wisdom, or face doom.
Of course, history shows the intellectual elite often errs. The Catholic Church notoriously obstructed scientific progress and persecuted “heretics” like Galileo. Nations that embraced economic and technological dynamism flourished, while those that closed themselves off behind backward religious dogmas stagnated. Climate activists today hold similarly dogmatic views, resisting innovations like genetically modified crops and nuclear energy that would reduce poverty and protect the planet.
Empowering a tiny intellectual elite to guide AI governance would repeat these historic mistakes for a number of reasons.
First, the IPCC has blurred the line that separates policy advocacy from science, to the detriment of science as a whole. As my Competitive Enterprise Institute colleague Marlo Lewis once put it, “Official statements by scientific societies celebrate groupthink and conformity, foster partisanship by demanding allegiance to a party line, and legitimate the appeal to authority as a form of argumentation.”
One of the most pernicious effects of the IPCC has been to popularize the idea of an international “consensus” in public policy discourse, shutting down rigorous scientific debate that would otherwise take place. Scientific facts will always be open to a variety of interpretations. We should not entrust a small priesthood of AI researchers to judge what is safe and what should be permitted. An IPAIS will homogenize and politicize AI research, jeopardizing the credibility of the entire AI research agenda.
Second, a global AI governance body would discourage jurisdictional competition. The IPCC sets arbitrary goals and deadlines upon which nations are supposedly obligated to act. But different nations have varying risk tolerances and philosophical values. Some will accept more uncertainty, risk and disruption in exchange for faster progress and economic growth. Instead of asking for one-size-fits-all commitments from nations, we should encourage countries to implement diverse policies in response to diverse viewpoints, and then see what works.
Third, regulations arrived at through precautionary international bodies, based on manufactured consensuses, will inevitably be overly pessimistic and overly restrictive. No one should be surprised that the IPCC has mainstreamed the most alarmist emissions scenarios, given the historic tendency of intellectuals to see themselves as the saviors of humanity.
AI has immense potential to benefit civilization, from spurring healthcare innovations to promoting environmental sustainability. But excessively stringent regulations based on alarmist predictions will block beneficial applications of AI. This is especially true if AI systems are subjected to centralized vetting procedures.
The hazards of AI, like those of other technologies, are real. As AI progresses, thoughtful governance is needed. But the solution is not a globalist technocracy to direct its evolution. That would concentrate too much power in too few hands. Decentralized policies targeted at concrete harms, combined with research and education from a diverse range of viewpoints, provide a path forward. Elites with dystopian visions have led us astray before; let’s not let them do it again with AI.