AI Has Risks. So Does Regulating It.
August 11, 2023
Quote of the Issue
“At some point, everything’s gonna go south on you… everything’s going to go south and you’re going to say, this is it. This is how I end. Now you can either accept that, or you can get to work. That’s all it is. You just begin. You do the math. You solve one problem… and you solve the next one… and then the next. And if you solve enough problems, you get to come home.” – Mark Watney, The Martian
A better world is possible.
A wealthier, healthier, cooler world.
A world of more opportunity and more freedom.
The future of our dreams is waiting.
And here’s how to make it happen, finally. Let’s go!
Coming on October 3, 2023 and available for pre-order …
How To Create the Sci-Fi World We Were Promised
The Essay
🤖 AI has risks. So does regulating it.
It’s not surprising, given the current geopolitical climate, that the emergence of large language models such as ChatGPT immediately prompted questions about China’s generative AI capabilities, especially as the performance of LLMs quickly led to speculation about their progression to artificial general intelligence.
As former Google CEO Eric Schmidt recently wrote in Foreign Affairs:
Even more powerful than today’s artificial intelligence is a more comprehensive technology—for now, given current computing power, still hypothetical—called “artificial general intelligence,” or AGI. … The advent of AGI remains years, perhaps even decades, away, but whichever country develops the technology first will have a massive advantage, since it could then use AGI to develop ever more advanced versions of AGI, gaining an edge in all other domains of science and technology in the process. A breakthrough in this field could usher in an era of predominance not unlike the short period of nuclear superiority the United States enjoyed in the late 1940s.
Of course, there’s now some reason to think Schmidt’s forecast was too conservative. Anthropic CEO Dario Amodei says his timeline to AI matching a “generally well educated human” is about two to three years. (The current Metaculus community forecast for when the first “weakly general AI system will be devised, tested, and publicly announced” is January 2027.)
Now imagine: What if the consensus appraisal were that China’s AI capabilities were not two or three years behind but two or three years ahead?
That would mean a communist, authoritarian state was on track to create one of the most powerful technologies ever devised by humanity. The barely contained American panic during World War II over Nazi Germany getting the atomic bomb first, well portrayed in Oppenheimer, might pale next to today’s panic over losing the AI race. As it is, the specter of falling behind China, as suggested by Schmidt, should give policymakers a powerful incentive to show great caution in any effort to create a special regulatory regime for AI.
I like this perspective: In my recent 5QQ chat with Adam Thierer of the R Street Institute, the tech policy analyst argued against the mistaken belief that AI exists in some completely ungoverned Wild West today. In reality, the federal government already regulates many aspects of AI through its massive bureaucracy. Agencies like the FTC, FDA, NHTSA, FAA, EEOC, and CPSC, he explained, have broad powers to address algorithmic or other AI-related risks in their domains. These agencies are already actively using that authority as novel cases arise involving AI. The courts and common law system are also starting to tackle new AI problems through litigation. While there may be governance gaps for specific contexts like predictive policing or law enforcement uses of AI, we can fill those incrementally, and targeted policies can build on that foundation to address emerging issues, Thierer concluded. Let me quote him here at length:
It is not an overstatement to say that a veritable war on computation could be looming. Some regulatory advocates, and even companies like Microsoft and OpenAI, are proposing to regulate the full AI production stack: applications, models, data sets, and even data centers. It would constitute comprehensive control of computation in the name of addressing future algorithmic risks. It could potentially entail government caps on the aggregate amount of compute that could be created or sold and widespread surveillance of AI research and development. Regulatory advocates suggest the new regime could take the form of a top-down licensing system that restricts output according to pre-determined thresholds established by the Department of Energy or some other new AI regulatory body.
This is why I call it a ‘Mother, May I?’ permission slip-based regime for compute. This regulatory scheme will lead to a highly politicized licensing process in which developers fight over who gets to do large training runs or develop powerful chips or GPU clusters. Once government makes artificial intelligence artificially scarce through such supply-side limitations, the politics of this will get really ugly and it will spawn a cesspool of rent-seeking activity. Federal compute regulation could come to resemble fights we’ve witnessed before over licensed taxicab medallions and broadcast spectrum licenses, where the biggest and most politically connected leverage their influence over the system to beat back new rivals and innovations. The consequences for advanced algorithmic innovation will be profound as many important applications and services never get a chance to get off the ground.
If you watched the Senate hearings back in May, you saw two contrasting approaches to AI governance and enforcement. OpenAI’s Sam Altman advocates licensing AI models above a “crucial threshold” via an FDA-style agency conducting safety reviews. This top-down permission-slip model has support from some Democrats. IBM instead backs “precision regulation” focused on specific use cases rather than the technology itself. Its model entails proportional rules based on defined risks, transparency requirements formalizing disclosures for AI uses, and mandated impact assessments ensuring suitability. While Altman wants preemptive control of powerful models, IBM argues for tailored oversight of deployments. Again, as Thierer warns, the former risks politicizing innovation with gatekeepers restricting access. The latter could balance risks and innovation by targeting documented harms in context.
This from the political analysis team at Piper Sandler:
While legislation seems nearly certain to occur at some point, it’s too early to know what approach lawmakers will ultimately take. It seems more likely Democrats will favor a licensing approach geared towards safety while Republicans might favor a more hands-off “precision regulation” approach. It does seem likely, though, that Congress will need to establish a new agency with sufficient expertise to conduct third party validation of product safety and have the teeth needed to remove dangerous systems from the marketplace. But the creation of a new agency, especially one with sweeping powers over what is likely to be one of the most significant innovations of the century, will be contentious and a heavy lift.
And as a reminder that AI has the potential to do a lot more than replace rote and boring tasks, here’s a bit of how it might boost scientific and technological progress:
- Sam Altman, CEO of OpenAI (the company behind ChatGPT), has put $375 million into fusion startup Helion, which aims to prove it can produce net electricity. The WSJ reports that “Helion employees have started using ChatGPT to see how it can speed up engineering work.”
- Reconfiguring the plasma in a magnetically confined fusion reactor in search of more efficient plasma shapes creates a lot of engineering work. But scientists at EPFL’s Swiss Plasma Center and DeepMind have jointly developed a new method that uses AI to autonomously reshape the plasma via magnetic coils.
- Physics-based materials simulations can be costly. AI can supplement those simulations by combing through large data sets. “Materials scientists and chemists are using machine learning and other tools to perform computations and simulations that can point them to new candidates for new catalysts, polymers and other materials with unique properties,” according to Axios.
- AI is poised to make drug discovery faster and cheaper, using algorithms to identify new compounds for clinical trials. DeepMind has used its AlphaFold algorithm to build the most complete and accurate database yet of the more than 200 million known proteins. It’s a breakthrough that will significantly reduce the time required to make biological discoveries. Predicting a protein’s structure from its amino acid sequence alone has been one of biology’s greatest challenges. And don’t forget about CRISPR genetic editing. As journalist and author Kevin Davies recently told me, “There are a lot of computational aspects to CRISPR in terms of designing the particular stretches of nucleic acid that you’re going to use to target a specific gene. And AI can help you in that quest to make those ever more precise. … AI can help give us more confidence that we’re only honing in on the specific gene that we want to edit, and we’re not potentially going to see some unforeseen, off-target editing event.” (A toy sketch of that guide-design idea follows this list.)
- Astronomical data is very noisy, and AI is being used to clean it up for astronomers. Astrophysicist Gioia Rau noted in an episode of Faster, Please! — The Podcast that NASA used AI to find 181 new planet candidates from TESS data in 2021. Rau: “AI in general is revolutionizing astronomy in many ways from understanding cosmology to understanding the shape of galaxies and how they form. And I’m noticing more and more AI-based applications to the exploration of astronomical data. And so this is definitely, I believe, the future of astronomy. In a decade or so there will be more AI-based applications to analyze astronomical data than manual ones.”
- Nvidia is working on an AI-powered “Earth-2” to predict climate change. It will be a “physically-accurate, high-fidelity, and ultra high-resolution replica of Earth continuously running to predict climate and weather events at the regional and global scales,” Forbes reports.
- In agriculture, AI can be used to analyze satellite and drone imagery or soil data to optimize resource management, predict diseases, and improve efficiency. AI can also help identify new crop varieties that are more resistant to a changing climate and use fewer resources.
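To make Davies’ guide-design point a bit more concrete, here is a minimal, hypothetical sketch of the underlying idea: rank candidate guide sequences by how many near-matches they have elsewhere in the genome, since near-matches are where off-target edits can happen. The tiny “genome,” the candidate guides, and the simple mismatch score below are all invented for illustration; real tools search billions of bases with trained machine-learning models, not a plain mismatch count.

```python
def hamming(a: str, b: str) -> int:
    """Count mismatched bases between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def off_target_hits(guide: str, genome: str, max_mismatches: int = 2) -> int:
    """Count genome sites within max_mismatches of the guide.

    The intended target site itself counts as one hit, so a score of 1
    means no likely off-target sites under this crude metric.
    """
    k = len(guide)
    return sum(
        hamming(guide, genome[i:i + k]) <= max_mismatches
        for i in range(len(genome) - k + 1)
    )

# Toy data, invented for this sketch.
genome = "ATGCGTACGTTAGCATGCGTACGATCGTACGTTAGC"
candidates = ["GCGTACGT", "TAGCATGC", "ACGTTAGC"]

# Prefer guides with the fewest near-matches (lowest off-target risk).
for guide in sorted(candidates, key=lambda g: off_target_hits(g, genome)):
    print(guide, off_target_hits(guide, genome))
```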
The potential here is vast, and I don’t want it to get lost amid hand-wringing and negativity (a dispiriting 72 percent of voters prefer slowing down the development of AI, versus just 8 percent who prefer speeding it up) or suffocated by wrong-headed regulation.
5QQ
💡 5 Quick Questions for … infrastructure policy analyst Aidan Mackenzie on geothermal regulation
Last August, I featured Quaise Energy CFO Kevin Bonebrake on Faster, Please! — The Podcast for a fascinating chat about Quaise’s efforts to use microwaves to “drill” (vaporize, really) ultradeep shafts for geothermal energy. In that conversation, Kevin told me, “if we want to drill geothermal wells and we apply for a permitting process, we have to jump through more hoops than the oil and gas sector would,” due to categorical exemptions that industry enjoys but that have not yet been extended to geothermal.
On that same theme, Aidan Mackenzie, infrastructure fellow at the Institute for Progress, recently published “Geothermal Energy Needs Permitting Reform.” It’s a great read, and I definitely recommend checking it out for a better understanding of the regulatory hurdles geothermal currently faces. Here are five questions I had after reading that piece, with five answers from Aidan.
1/ Why haven’t oil and gas regulatory exemptions been extended to geothermal already? Are legislators unaware? Are counter-lobbying efforts succeeding?
The Energy Policy Act of 2005 gave oil and gas exploration drilling a specific carve-out from NEPA. Those exclusions haven’t been extended to geothermal. This means there’s a remarkable imbalance between the regulatory difficulty of drilling for geothermal heat and the difficulty of drilling for oil. It’s even stranger when you consider that many of the well-drilling technologies are functionally identical: in fact, many of the cutting-edge geothermal drilling techniques were adapted from traditional oil and gas, or from the shale revolution. Geothermal projects require several stages of development, such as exploration drilling to test for good subsurface production conditions, and those separate stages force the industry to complete several NEPA reviews instead of one. It can take 7-10 years for geothermal projects to go through all the permitting steps.
The oil and gas exemptions are a good example of how complex regulatory processes can disadvantage newcomers. Established industries have practice working with regulators, and their lobbyists have sway with policymakers. New industries don’t have many lobbyists and have to move up a steep learning curve.
2/ Why should the imbalance between oil and gas regulation and geothermal regulation be resolved by creating more carve-outs? Why shouldn’t we just end the oil and gas exemptions?
Ideally, no project should have to wait 4.5 years in the NEPA process, much less go through multiple reviews that take 7-10 years. In my view, the NEPA process generally imposes far too much economic cost for far too few environmental benefits, so, in the absence of comprehensive NEPA reform, extending carve-outs is the second best option.
It’s ironic that the environmental review process currently sabotages geothermal, which is completely clean energy. Oil and gas is already a mature industry, with a long history of successful development.
On the other hand, geothermal is at a very early stage. If we want clean geothermal energy, the industry needs the ability to learn by doing, and to develop next-generation tech that can drill deeper and produce energy more efficiently. A major harm from onerous permitting requirements is that they add additional uncertainty to the geothermal industry.
3/ What are the harms we’re trying to mitigate with geothermal regulation, such as under the National Environmental Policy Act?
NEPA is primarily a procedural law, meaning it does not set or enforce regulatory requirements (like safety or pollution standards). NEPA only requires the federal government to take a “hard look” at the environmental impacts of a project, no matter the type of project.
So NEPA isn’t burdensome for geothermal energy because geothermal is especially environmentally disruptive, but rather because of the technical ways that NEPA applies to government approval.
The U.S. sits on remarkable heat resources close to the surface, but those resources are predominantly in the Mountain West, mostly on federal land where NEPA applies. This means geothermal developments are disproportionately likely to require NEPA review compared to other energy sources. Other industries, like oil and gas, are able to build more infrastructure on private land, limiting their exposure to federal review processes.
4/ Should geothermal merely be on a level playing field with oil and gas, or should federal regulation tilt toward geothermal since it’s cleaner?
Well, certainly, giving geothermal the same treatment as oil and gas would be a huge improvement for the industry, and it’s the most pragmatic near-term goal.
But in general, I don’t see fixing permitting as a tool to flip the status quo, to advantage clean energy and disadvantage fossil fuels. Procedural approval processes aren’t the proper mechanism to pick winners and losers. But comprehensive or at least tech-neutral permitting reform will naturally benefit emerging industries like geothermal, which are especially hamstrung.
5/ How much of this issue comes down to NEPA alone versus other regulations?
NEPA is especially burdensome for geothermal because federal lands contain a significant share of geothermal assets. But there are other regulatory problems. The Bureau of Land Management lacks staff capacity at field offices to approve geothermal projects in a timely manner. Transmission lines are also vital for moving geothermal energy to market. Reforming the siting, permitting, and cost allocation of large, interregional transmission lines would help reduce uncertainty for geothermal development.
Micro Reads
▶ Google’s Waymo, Cruise Get Nod to Expand San Francisco Robotaxis – Eari Nakano and Edward Ludlow, Bloomberg | The verdict expands how Waymo and Cruise vehicles can operate. Prior to Thursday’s decision, Cruise could only charge a fare in a limited section of the city, while offering a free service covering almost all of the peninsula. Waymo did not charge a fare in San Francisco and its public service excluded the city’s northeast. The resolutions permit the autonomous vehicles to operate without a human safety driver during the day or at night, throughout the entire city of San Francisco. Previously, autonomous vehicles were limited to operating during certain hours, in specific weather conditions or with a safety driver present. Both companies are permitted to collect payment for rides.
▶ AI Is Building Highly Effective Antibodies That Humans Can’t Even Imagine – Amit Katwala, Wired |
▶ With AI, Hackers Can Simply Talk Computers Into Misbehaving – Robert McMillan, WSJ |
▶ San Francisco Balks at Expanding Driverless Car Services on City’s Roads – Yiwen Lu, NYT |
▶ David Autor: ‘We have a real design choice about how we deploy AI’ – Delphine Strauss, FT |
▶ To Navigate the Age of AI, the World Needs a New Turing Test – Ben Ash Blum, Wired |
▶ What’s next for the moon – Jonathan O’Callaghan, MIT Tech Review |
▶ How ‘creative insecurity’ is stimulating Ukrainian innovation – John Thornhill, FT Opinion |