Do We Need a Reboot? Challenging Prevailing Narratives on AI
July 14, 2023
It is hard at times to tell which is likely to be more disruptive: artificial intelligence (AI) or the multiplying efforts to regulate this powerful, potentially transformational, and highly beneficial technology. With multiple calls for pauses and mounting demands for new regulation, we are clearly at a critical moment in a debate that could either open the path to a new era of prosperity or hamper the U.S. in leading the way toward a better AI future.
We recently hosted an AEI event with two Stanford University professors, Robert Reich and Jeremy Weinstein, to unpack some of the big issues regarding the future of AI. These two political scientists took a nuanced and informed approach to AI that still left plenty of room for a robust debate, a welcome break from the doomsaying and millennialism that characterize much of the conversation.
AI doomers often fall prey to a negativity bias, making them prone to overlook the opportunity costs of pausing or halting AI research. Others adopt the regulate-first “precautionary principle,” an unproductive and exhausting form of policy shadow-boxing. Reich and Weinstein are well aware of the benefits AI can bring and are duly skeptical of calls for pauses and existential risk arguments that posit science fiction-like dangers. Just as hard cases make bad law, worst-case scenarios put fear in the regulatory driver’s seat.
The main areas of contention over AI regulation break down into three categories: whether AI enjoys a “regulatory oasis,” how to deliver its benefits without surrendering to an “optimization” mindset that ignores human impact, and how to balance opportunity and risk in AI development.
The regulatory-oasis theory holds that internet platforms have enjoyed a healthy level of legal protection since the mid-1990s. However, Section 230, the law that shields online platforms from liability for content posted by others, might not cover AI. This has raised concerns about legal liability for AI-generated work and about the appropriate role for government engagement.
As other experts have argued, the federal government is already actively engaged in AI regulatory issues. The National Institute of Standards and Technology (NIST), the White House Office of Science and Technology Policy, the Food and Drug Administration, the National Highway Traffic Safety Administration, and the Equal Employment Opportunity Commission have all directly addressed AI and are developing regulatory frameworks. NIST’s Artificial Intelligence Risk Management Framework provides robust guidelines for AI development and deployment and a thorough analysis of AI’s dangers. The list is long.
Weinstein and Reich are both deeply concerned about the “optimization mindset” of engineers and AI developers. The danger they point to is that a relentless focus on optimizing data sets and technological applications could unwittingly harm users and society at large as a “move fast and break things” ethos takes over.
This is a fair criticism. At the same time, optimization is a close cousin of concepts like efficiency and productivity, both integral to rising incomes and living standards. In other words, the problem is less with optimization itself than with unthinking optimization and a lack of reflection about the ends we seek from technology.
The argument about optimization aligns with the question of how to regulate AI to prevent worst-case outcomes. Invoking the so-called precautionary principle, which favors regulation before harm manifests, might cause as much harm as it seeks to avoid by stifling innovation and delaying the benefits AI can bring to society.
Those benefits are easy to overlook because negativity bias elevates potential downsides over possible gains, but they deserve equal consideration. In June, a McKinsey & Company study predicted that fully deployed AI could add between $17.1 trillion and $25.6 trillion to the global economy per year. The study also estimated the effects of early and late AI implementation: Almost all of the long-term benefits materialize only with early adoption. While AI might introduce risks, it could also solve some of our biggest problems, many of which are existential. Caution looks prudent and “free,” but often it is neither.
This is not to say we should discount the risks of AI and insulate it from scrutiny and regulation. But we should prioritize a full accounting of risks and rewards when regulating the technology. Taking a more balanced and flexible approach will allow us to navigate the complexities of AI without denying ourselves much of its potential.