Revising the Horizontal Merger Guidelines: The Path Forward in Antitrust
Public comments on the proposed revisions are due September 18. On September 19, AEI will host a two-hour symposium at which leading economists and legal experts representing a wide range of perspectives will discuss the proposed revisions, the reaction among antitrust policymakers and practitioners, and the likely path forward.
Submit questions to Kate.Beinkampen@AEI.org or on Twitter with #AskAEITech.
Inching Closer to Editorial Freedom? The Government Weighs In on Social Media Platforms’ First Amendment Rights
The cases center on Florida and Texas statutes that, as I explained elsewhere, restrict the ability of large social media platforms “to determine and curate for themselves—as business entities, free from government censorship—the content they host, where they host it and, ultimately, the types of communities they maintain.” A key constitutional question, in turn, is whether the First Amendment safeguards the platforms’ content-moderation decisions from such government interference, similar to the way the Supreme Court concluded nearly 50 years ago that it protects “the exercise of editorial control and judgment” by print newspapers.
The federal appellate courts in the NetChoice cases disagreed on the answer. Last year in Moody, the US Court of Appeals for the 11th Circuit concluded that it was “substantially likely” that a Florida statute restricting how and where large platforms display content and another banning the deplatforming of candidates running for public office in the Sunshine State “unconstitutionally burden” what it called “protected exercises of editorial judgment.” As I wrote for AEIdeas in June, the 11th Circuit “reasoned that Florida’s content-moderation statutes likely wouldn’t pass intermediate scrutiny (let alone the more rigorous means-end test, strict scrutiny) because they did not further any substantial government interest.”
Conversely, the 5th Circuit in Paxton upheld a Texas statute that, in key part, bars large platforms from censoring “a user, a user’s expression, or a user’s ability to receive the expression of another person based on . . . the viewpoint of the user or another person.” The 5th Circuit “reject[ed] the idea that corporations have a freewheeling First Amendment right to censor what people say.” Embracing an originalist text-and-history analysis I previously described, it reasoned that “the First Amendment’s text and history . . . offer no support for the Platforms’ claimed right to censor.”
The wonderful news for social media platforms’ autonomy and independence is that US Solicitor General Elizabeth Prelogar did more this month than simply ask the Court to consider the constitutionality of the content-moderation restrictions in Moody and Paxton. She also weighed in on the merits, asserting that (1) “The platforms’ content-moderation activities are protected by the First Amendment,” given that the “act of culling and curating the content that users see is inherently expressive”; and (2) Florida and Texas “have not articulated interests that justify the burdens imposed by the content-moderation restrictions under any potentially applicable form of First Amendment scrutiny.” In brief, the content-moderation mandates imposed by both states are unconstitutional.
The Supreme Court, of course, doesn’t need to adopt the Biden administration’s stance on the content-moderation question, and it doesn’t even need to hear the cases in the first place. But it is a heartening step for online-speech businesses seeking to avoid government edicts about whether and how they host third-party users and their content. Indeed, NetChoice issued a statement noting, “The Solicitor General’s brief underscores that both Texas and Florida’s laws are unconstitutional and that the Court should review our cases.”
If the Supreme Court ultimately adopts Prelogar’s position, it would—perhaps ironically—finally give social media platforms solid legal precedent for pushing back forcefully against the type of jawboning alleged in cases such as Missouri v. Biden, in which Biden administration officials purportedly pressured platforms to remove content the government doesn’t like. Compounding the irony, oral argument about the merits of a July 4 trial-court order in Missouri v. Biden banning such jawboning activity occurred on August 10 in front of the 5th Circuit, the same appellate court that ruled against the platforms’ First Amendment right of free speech in Paxton.
Print newspapers have long had the Supreme Court’s 1974 ruling in Miami Herald v. Tornillo in their constitutional corner to protect their First Amendment right of editorial control and autonomy against government efforts (informal or otherwise) to control content. The Court in Tornillo struck down a Florida statute compelling newspapers to print—free of charge, no less—the replies of political candidates whom the newspapers had criticized. Now, almost a half-century later, it just may be that Florida statutes of much more recent vintage—the ones at issue in Moody—help to propel the First Amendment right of editorial control and autonomy beyond the realm of print and into the digital, online era.
What We Know—and Don’t Know—About AI and Regulation
A key feature of both the EU and Canadian legislation is the obligation to define and identify the risks AI applications pose and ensure that appropriate risk mitigation strategies are put in place and continually monitored. As the promoters of the Canadian legislation claim, “For businesses, this means clear rules to help them innovate and realize the full potential of AI. For Canadians, it means AI systems used in Canada will be safe and developed with their best interest in mind.”
However, as former Federal Trade Commissioner Maureen K. Ohlhausen observed, drawing on Friedrich Hayek’s “The Use of Knowledge in Society,” regulation is a task that needs to be approached with a healthy dose of regulatory humility—that is, recognition of regulation’s inherent limitations. A regulator must acquire knowledge about the present state and future trends of the industry it regulates. The more prescriptive the regulation and the more complex the industry, the more detailed knowledge the regulator must collect.
But this supposes that the relevant information is already known, or can be known, in the first place. Just as important, whether the regulator acknowledges what is not already known, and perhaps can never be known, will shape its ability to craft and enforce effective regulations. Failing to acknowledge this leads either to falling for Daniel Kahneman’s “what you see is all there is” cognitive bias or, in Ohlhausen’s view, to overconfidence in the regulator’s ability to use regulatory means to achieve desired objectives. The less the regulator knows, or the more that cannot be known by the regulator or anyone else, the greater the likelihood that the regulation itself will impose harm, over and above any harm caused by the subject of that regulation.
Therefore, a distinction between risk and uncertainty is crucial for understanding regulatory humility. Frank Knight articulated in his 1921 book Risk, Uncertainty and Profit (Beard Books) that “a known risk is easily converted into an effective certainty” using probabilities, while “true uncertainty is not susceptible to measurement.” John Kay and Mervyn King provide a modern interpretation in Radical Uncertainty: Decision-Making Beyond the Numbers (W.W. Norton & Company, 2020). They distinguish between the state of radical uncertainty or complexity (order is not apparent ex ante, so no certainty of outcomes can be anticipated [Knightian uncertainty]) and the merely complicated (with sufficient time, information, and resources, order can be discerned and probabilities attached, leading to a state of Knightian risk). They term the quest for understanding the latter cases “puzzles”—for which there are one or more potential solutions—and the former, “problems”—where there is no clearly defined or obtainable solution, no matter how much effort is exerted.
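Knight’s distinction can be made concrete with a toy calculation. The sketch below is purely illustrative; the figures are invented and appear nowhere in the works cited above.

```python
# Illustrative figures only; they are not drawn from any real regulatory data.

# Knightian risk: outcomes and their probabilities are known, so the risk
# collapses into an "effective certainty" -- an expected value that can be
# priced, insured against, or written into a risk-based rule.
known_outcomes = {0: 0.90, 100: 0.09, 1000: 0.01}  # harm (cost units) -> probability
expected_loss = sum(harm * p for harm, p in known_outcomes.items())
print(f"Expected loss under Knightian risk: {expected_loss:.1f}")  # -> 19.0

# Knightian (radical) uncertainty: the outcome space itself is unknown, so
# there is nothing to sum over -- no expected value, no actuarially fair
# premium, and no probability-weighted rule to calibrate.
unknown_outcomes = None  # not merely unmeasured, but unknowable ex ante
```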
What, then, does this mean for the so-called risk-based approaches to regulating ML proposed for the EU and Canada?
The nub of the ML problem is that the current state of knowledge about ML applications and their likely effects in various sectors is scant, even amongst those developing them. A clear and precise definition of what constitutes ML appears almost impossible to pin down. This is a state of Knightian uncertainty: Definitions and outcome probabilities are nearly impossible to assign, let alone the likelihood that any specific intervention will succeed. By the lights of Ohlhausen, Hayek, Knight, Kay, and King, the AI/ML situation is not one amenable to regulation relying on principles of risk and risk management. A dose of regulatory humility is required.
A closer examination of the EU regulations reveals that it is not the risks of ML, per se, that are being managed; rather, the sectors where potential harms are feared to be most costly or irreversible are being ring-fenced and sheltered from uncertainty. And to the extent that some firms, by dint of their size, might cause more instances of harm with their applications, they too face additional restrictions as regulators try to shift the costs of uncertainty onto those with pockets big enough to bear the insurance premium of uncertainty for society.
Applying a good dose of humility thus leads to the conclusion that this movement is not regulating the risks of ML but rather managing the consequences of the fear of the uncertain.
National Identity Systems in the Fourth Dimension
So some years ago, I sought to sharpen the debate around national ID systems by putting together a definition of what a national ID system is. As Congress briefly considered a bill to revive the still-moribund REAL ID Act, I wrote a blog post offering up a definition. I still think it’s good and have used it in other writings. To define national identity systems, I wrote:
First, it is national. That is, it is intended to be used throughout the country, and to be nationally uniform in its key elements. REAL ID and PASS ID have the exact same purpose—to create a nationally uniform identity system.
Second, its possession or use is either practically or legally required. A card or system that is one of many options for proving identity or other information is not a national ID if people can decline to use it and still easily access goods, services, or infrastructure. But if law or regulation make it very difficult to avoid carrying or using a card, this presses it into the national ID category.
. . .
The final ‘element’ of a national ID is that it is used for identification. A national ID card or system shows that a physical person identified previously to a government is the one presenting him‐ or herself on later occasions.
With identity systems springing up all over, it’s useful to know which are concerning and which are safe to embrace. I’m a fairly consistent opponent of the REAL ID Act. (In 2007, my AEI colleague Norman J. Ornstein wrote a good piece highlighting its sloppy origins.) But surely not every identity system is to be avoided. There are many benefits on offer from well-designed systems, including convenience, security, and lower costs for goods and services.
Take attending sporting events. Major League Baseball’s (MLB) “Go-Ahead Entry” launched this week, a ticketless entry system based on facial recognition. Phillies fans now may access their ballpark without the inconvenience of carrying a ticket or pulling out their phones to scan in a digital one.
“Enrollment in Go-Ahead Entry is voluntary,” said MLB in one news report. “Cameras will [scan] users’ faces to ‘create a unique numerical token.’ The facial scans will be immediately deleted afterwards and only the unique numerical token will be stored and associated with the user’s MLB account, officials said.”
That sounds good. The data destruction makes it more privacy-protective than it otherwise would be. Should we all move to Philly to gather a little of that convenience while cheering on the Phillies’ late-season rise?
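Purely as a hypothetical sketch of the kind of design the news report describes (every name and detail below is an assumption; the actual Go-Ahead Entry pipeline has not been published), the enrollment step might look something like this:

```python
# Hypothetical sketch of a "scan -> token -> delete" enrollment flow, loosely
# modeled on the description quoted above. It is NOT MLB's implementation:
# real biometric matching requires approximate comparison of face embeddings,
# which a plain hash cannot provide. The point is the retention policy --
# keep an opaque token tied to the account, never the scan itself.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # per-deployment secret (assumption)

def enroll(face_embedding: bytes) -> str:
    """Derive an opaque token from a face scan; the scan itself is never stored."""
    return hashlib.sha256(SALT + face_embedding).hexdigest()

accounts = {}
raw_scan = b"placeholder bytes standing in for a face embedding"  # not real data
accounts["fan-123"] = enroll(raw_scan)  # only the token is kept with the account
del raw_scan                            # the raw biometric is discarded immediately
```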
There’s a problem with that plan, and it’s not just the upcoming three-game stand against the equally hungry Milwaukee Brewers. The problem is that policies can change.
A couple of years ago, I issued words of caution about Worldcoin, the plan to distribute cryptocurrency as one might in a universal basic income program. The Worldcoin system uses iris scans collected by a device called the “Orb” to administer payouts. It has some privacy protections, but it is hard to protect against future changes that undermine them, as I previously noted:
The global infrastructure for machine-biometric tracking made popular for WorldCoin distribution could be repurposed to all kinds of tracking and control. The Worldcoin identifier, which must be shared widely to work, could become the new global social security number—a powerful tool with good uses, but also profoundly bad ones.
MLB’s Go-Ahead Entry seems worlds apart from Worldcoin. But from the moment it extends to paying for peanuts and Cracker Jacks, it could start to morph into a system later to be co-opted into tracking and control. It will be national, it is for identification, and all it takes is for some national emergency or fervor to produce legislation that makes it practically or legally required to access goods, services, or infrastructure of all kinds.
Innocuous identity systems like Go-Ahead Entry pose risks along the fourth dimension: time. I’ll stick with biometric-free baseball tickets.
Defamation Law and Generative AI: Who Bears Responsibility for Falsities?
Indeed, when the Federal Trade Commission (FTC) began investigating OpenAI (maker of ChatGPT) in July, an explicit concern was “reputational harm” caused by its “products and services incorporating, using, or relying on Large Language Models.” The FTC asked OpenAI to describe how it monitors and investigates incidents in which its large language model (LLM) products “have generated false, misleading, or disparaging statements about individuals.” Sam Altman, OpenAI’s CEO, predicted in June that it would “take us a year and a half, two years” before it “get[s] the hallucination problem to a much, much better place.”
What happens until then for people seeking redress for AI-generated falsities? Consider two defamation scenarios: (1) lawsuits targeting businesses and people (including journalists aided by AI programs) who use generative AI to produce information they later publish, and (2) lawsuits leveled at companies such as OpenAI and Google (maker of Bard) that create generative AI programs. In the first scenario, the defendants are AI users; in the second, they are AI companies.
In the former situation, anyone who uses generative AI to produce information about a person and then conveys it to someone else may be legally responsible if it is false and defamatory. Such AI users will be treated as publishers of the information even though something else created it. As the Reporters Committee for Freedom of the Press observes, “[I]n most jurisdictions, one who repeats a defamatory falsehood is treated as the publisher of that falsehood.” Under an old-school analogy, a print newspaper cannot escape liability for publishing a defamatory comment just because it accurately attributes the comment to a source. This reflects the maxim that “tale bearers are as bad as tale makers.”
Furthermore, because it’s commonly known that generative AI “has a propensity to hallucinate,” people who use it to generate information and then fail to independently verify its accuracy are negligent when publishing it. It’s akin to journalists trusting an unreliable human source—one they know has lied before. Indeed, OpenAI’s terms of use: (1) acknowledge its products may produce content “that does not accurately reflect real people, places, or facts,” and (2) advise users to “evaluate the accuracy of any Output as appropriate for [their] use case, including by using human review of the Output.” In sum, not attempting to corroborate content produced by a program understood to produce falsities constitutes a failure to exercise reasonable care in publishing content. That spells negligence—the fault standard private people typically must prove in defamation cases.
Public figures and officials, however, must satisfy a higher standard called actual malice. Proving this standard requires demonstrating that an AI user acted with reckless disregard for whether the AI-produced statements they published were false––that they had a “high degree of awareness of their probable falsity.” This can be shown through circumstantial evidence including the “dubious nature of [one’s] sources” and “the inherent improbability” of the falsities. In brief, if AI-spawned defamatory falsities seem believable and users don’t otherwise doubt them, then a defamation case might fail.
Regarding the second scenario—suing AI companies over defamatory falsehoods—there’s now a case on point. Talk-radio host Mark Walters filed a complaint in June against OpenAI in Georgia’s state court. It contends that ChatGPT, responding to a journalist’s request to summarize the allegations in a lawsuit complaint, falsely said Walters was a defendant “accused of defrauding and embezzling funds from” the lead plaintiff. Walters’s complaint asserts that “[E]very statement of fact in the summary pertaining to [him] is false.”
In July, OpenAI transferred the case to federal court, where it moved for dismissal because “Walters cannot establish the basic elements of a defamation claim.” Perhaps that’s true, but what’s striking about the motion is how it largely relies on OpenAI’s terms of use (see above) and ChatGPT’s falsity warnings to absolve itself of legal responsibility and shift that responsibility to users (here, the journalist who asked ChatGPT to summarize the complaint). The motion states:
Before using ChatGPT, users agree that ChatGPT is a tool to generate “draft language,” and that they must verify, revise, and “take ultimate responsibility for the content being published.” And upon logging into ChatGPT, users are again warned “the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.”
That may be an excellent strategy to end a defamation lawsuit, but it’s not exactly confidence-inspiring as a business model for ChatGPT. In fact, it bolsters the FTC’s concerns noted above. Walters’s opposition is due September 8; stay tuned.
Uncertainty & Technology: The Adaptability Imperative of Automation (LIVE with Brent Orrell—Part II)
The two scholars get at the heart of how we should view automation and the imperative it places on our institutions—and ourselves. The crowd—the 2023 AEI Summer Honors Program student cohort—also has a chance to ask questions since they will soon be embarking on their own career journeys.
Missed the first part of the conversation? Listen to Part I here!
The Case Against Breaking Up Amazon: Embracing Innovation and Consumer Choice
News reports say the FTC has three primary concerns: (1) that Amazon seeks to ensure products on its website have the lowest prices on the web, (2) that it rewards sellers who buy ads and use Amazon’s logistics services, and (3) that Amazon Prime bundles books, music, and video streaming. These concerns, even if valid, hardly justify breaking up a company. Lowest-price requirements hinder markets only in particular circumstances. Rewarding good partners and customers is normal in business. Tying products together isn’t a problem per se, according to the FTC website. In this case, customers have multiple sources of books, music, and video streaming.
The FTC might have other reasons. Amazon accounts for 53 percent of book sales in the US, 80 percent of e-book sales worldwide, nearly 40 percent of retail e-commerce in the US, and 32 percent of cloud computing worldwide. And it is frequently accused of unfairly competing with its third-party sellers, although the actual evidence falls short of what the headlines promise. However, these static numbers, viewed in isolation, are deceptive and neglect consumers’ perspectives.
Amazon’s bookselling prominence grew because it offered customers a better value for some purchases than did the brick-and-mortar stores. Amazon provided convenience, a larger inventory, and personalized AI-generated recommendations. This competition led Amazon’s rivals to up their game: Independent booksellers began emphasizing community, personal service, and bringing together people who have common interests.
Niche publishers have also benefitted from Amazon’s prominence. Self-publishing grew 23-fold from 2007 through 2018, and Amazon’s book marketplace accounted for 95 percent of this explosive growth.
Despite newspaper claims that Amazon harms small businesses, the businesses themselves suggest the reverse is true. Of the small businesses selling online, about a fourth use Amazon, second only to selling through their own websites. And although Amazon is the most popular marketplace for small businesses, many do not feel captured by it: Most also use eBay, Etsy, and Walmart.
Although Amazon’s innovativeness has provided it with a large percentage of retail e-commerce and thus the illusion of market dominance, the reality is that online and offline commerce compete. Consider the numbers: E-commerce is only 15 percent of retail in the US, so Amazon’s roughly 40 percent share of e-commerce works out to only about 6 percent of the retail landscape. Walmart is the heavy hitter in retail, with twice the retail sales of Amazon.
Among Amazon’s innovations is Amazon Prime, which has proven wildly popular with consumers. Despite the FTC’s questionable claims that customers are being tricked into buying Prime and then being held captive, the service grew by a third during the pandemic—from 150 million to 200 million subscribers—as the public came to favor online, contact-free shopping.
Cloud computing reveals a similar picture of innovation, quick popularity, and aggressive competition. Amazon has provided 32 percent of cloud services over the past five years, while Microsoft’s share has jumped nearly 70 percent—from 13.7 percent to 23 percent. Cloud computing as a whole grew more than 200 percent over the past five years, refuting claims of market power stifling the marketplace.
The breakup, if it happens, might be a feather in the cap of FTC Chair Lina Khan, who made her name declaring Amazon to be “dominant” in numerous markets and a “house of cards.” At stake is an e-commerce platform that enabled small businesses to sell 7,800 products per minute in 2022 and that US consumers rate second in customer satisfaction, behind only Apple.
The debate over Amazon’s breakup should be examined through a lens of innovation and consumer choice. Amazon has thrived by introducing transformative technologies and fostering retail competition. Perhaps the FTC should defer to customers, as they determine the true economic value of Amazon’s services and innovations.
Does Big Tech Need a Reboot?
In this episode, we invite you to listen in on a recent AEI event on the book System Error: Where Big Tech Went Wrong and How We Can Reboot (Harper Academic, 2021). On June 22, 2023, AEI’s Brent Orrell and Shane Tews were joined by Rob Reich of the Stanford Institute for Human-Centered Artificial Intelligence and Jeremy M. Weinstein of the Freeman Spogli Institute for International Studies to discuss their book, which they co-authored along with their fellow Stanford professor Mehran Sahami.
The panelists discuss the challenges that Big Tech in the 21st century—particularly artificial intelligence—poses to democracy. They explore the dangers of the “optimizing” mindset that competition in technology encourages; the trade-offs between the values of privacy, safety, agency, and productivity; the rise of misinformation and disinformation; and issues of power concentration and regulatory capture in the technology sector.
Mentioned in the Episode
System Error: Where Big Tech Went Wrong and How We Can Reboot
Freeman Spogli Institute for International Studies
Stanford Institute for Human-Centered AI
“Get Rich U.” in the New Yorker
DoNotPay – Your AI Consumer Champion
Facebook “Connect the World” Memo
Sen. Schumer’s SAFE Innovation Framework
NIST AI Risk Management Framework
In the Google Case, the Justice Department Continues to Help Companies, Not Consumers
The case focuses on agreements between Google and companies such as Apple, Samsung, and Mozilla (maker of the Firefox browser), wherein the counterparty agrees to make Google the default search tool in exchange for a share of the search revenue thus generated. So, for example, when an iPhone user enters a search term in the Safari browser bar, the browser returns Google search results. This saves the consumer the step of entering www.google.com into the browser before searching. The government argues that such agreements reflect Google splitting monopoly rents with Apple and others to preserve its dominant position in search markets against other search providers.
At first glance, one sees some parallels to the landmark Microsoft case in the 1990s, which found that pre-installing Internet Explorer on Windows computers foreclosed rival Netscape from competing in the browser market. But there is a key difference, which Thom Lambert discussed at length when the case was first filed. In the Microsoft case, no one argued that pre-installing Internet Explorer made the browser better. It just made it harder for Netscape to reach consumers. By comparison, Google argues that its default agreements improve the customer experience in two ways.
First, these agreements improve the quality of Google’s search results. Modern search markets exhibit economies of scale: The more searches a provider processes, the more it learns from user queries and the better its algorithms become at giving consumers the results they want. This means that by serving as the default search provider on various devices, Google is increasing traffic to its search engine, increasing its scale and thus improving its products’ quality vis-à-vis its competitors’.
Second, Google argues that serving as the default search provider enhances competition in adjacent markets. For example, integrating search into the browser bar, rather than making the consumer go to a search engine website, saves the consumer time and thus makes the browser better. And the revenue shared with browser providers allows those companies to improve their products. (Lambert notes that the independent browser Firefox generates 95 percent of its revenue from search royalties, suggesting that absent such agreements, independent browsers would struggle against Microsoft Edge and Google Chrome.)
These product quality arguments are central to Google’s defense, which is likely why the Justice Department filed two unusual motions in limine asking the court to instruct that such evidence of product improvement may not be introduced as a complete defense and to exclude evidence of consumer benefits in adjacent markets as irrelevant. As a technical matter, this was an odd vehicle for making what are effectively legal arguments. Motions in limine typically seek to keep evidence from unduly influencing the jury. But this is a bench trial, in which the judge serves as the factfinder. Presumably, judges need not remind themselves which evidence is admissible or for what purpose.
But setting aside the form, the legal argument is interesting. The government argues that Google’s procompetitive product enhancement must be weighed against the anticompetitive effect of foreclosing rivals from securing those scale benefits. As Herbert Hovenkamp notes, that’s a difficult comparison to make—reminiscent of Justice Antonin Scalia’s question whether this line is longer than this rock is heavy. The 9th Circuit has implicitly rejected such balancing in product quality cases. This makes sense: Antitrust does not require a company to keep prices high to protect less-efficient competitors. Neither should it require one to reduce product quality to protect rivals. Both results would benefit trailing competitors at the expense of consumers.
Of course, search engines such as Bing could simply outbid Google for the right to be a default provider, and that’s perhaps the most amusing aspect of this case. While the case is styled as a quest to protect competitors from Google’s dominance, the primary beneficiary of a government victory would be Microsoft—an even larger tech titan. Microsoft has the resources to buy preferential placement for Bing. It hasn’t done so, which suggests consumer preferences for Google are strong enough—and switching costs are low enough—that rivals would not significantly benefit from similar arrangements. If customers prefer Google defaults, then the government is literally asking the courts to put companies ahead of consumers.
Worldcoin’s Introduction to the Kenyan Market Is Temporarily Halted Due to Privacy Concerns
Following a week of iris scanning, the Communications Authority of Kenya and the Office of the Data Protection Commissioner (ODPC) issued a joint statement raising concerns about the absence of transparency in data security measures and in the process for retaining sensitive data.
The ODPC questioned whether the consent Worldcoin obtained for data processing was legal, as offering a monetary incentive could be seen as a form of inducement to participate without a full understanding of the data being collected in the iris-scan-for-tokens program. The Associated Press even reported some people “traveled for miles after friends said ‘free money’ was being handed out. They acknowledged not knowing why they needed to scan their irises and where that information would go but just wanted the money.”
The Worldcoin project “is intended to be the world’s largest, most inclusive identity and financial public utility, owned by everyone” and includes an ID, a Worldcoin token, and an app that facilitates payments, international transfers, and purchases. It would seem, however, that Worldcoin still has some way to go in terms of addressing privacy and data protection concerns. Countries such as France, Germany, Spain, and the United Kingdom have begun to review Worldcoin’s activities to determine their alignment with the countries’ regulatory framework—in this case, the General Data Protection Regulation.
Kenya instituted a Data Protection Act in 2019 designed to preserve the rights and interests of Kenyan individuals’ personal data. In Kenya, the core dispute involving Worldcoin’s launch revolves around the principles of unencumbered, unambiguous, and knowledgeable consent. Under the Data Protection Act, consent is defined as “express, unequivocal, free, specific and informed indication of the data subject’s wishes by a statement or by a clear affirmative action.” With this statute in mind, the ODPC questions whether the entire process of collecting and processing people’s data is legal.
The two initial facets of consent are now subject to scrutiny due to the cash incentive extended to users for their participation in the initiative. Furthermore, informed consent hinges on the assumption that Kenyan citizens possess a fundamental grasp of personal data’s importance—an assumption that might not hold universally true. Overall awareness concerning privacy—encompassing the various facets of personal data as relevant in the Kenyan context—remains somewhat limited. This circumstance consequently places a greater onus on entities involved in personal data collection and processing, necessitating a higher degree of transparency in Worldcoin’s operational conduct to align with ongoing compliance requisites.
The Kenyan Data Protection Act calls for registering data controllers and data processors, and stipulates the legitimate parameters for personal data processing, including conditions for cross-border data transfer outside Kenya. The extent to which all such requirements were satisfied prior to the introduction of Worldcoin in Kenya remains uncertain. Notably, however, Worldcoin’s parent company, Tools for Humanity, did register as a data controller with the ODPC.
Cybersecurity safeguards should be built into the initial design of the iris-scan collection process to protect against the theft and misuse of sensitive biometric data. Worldcoin has firmly asserted its commitment to, and expectation of compliance with, the pertinent legal and regulatory frameworks, but there continues to be a lack of clear communication about how the collected data are secured. Basic questions persist: What happens to the collected data? How long is it being stored? Where is it being stored?
It remains to be seen whether the Worldcoin exercise will continue, given recent events and an X post alleging consultation with Kenyan regulators a year before the launch. The company has also said it is proactively engaging with the reservations the Kenyan authorities articulated. Whether Worldcoin resumes its activities in Kenya depends on whether those efforts prove sufficient.
In the interim, the case of Worldcoin in Kenya shines a spotlight on critical questions at the intersection of groundbreaking innovation and individual privacy.
From Pixels to Prosperity: Highlights from My Conversation with Susan Otieno
To unravel these threads, I had a candid conversation with Susan Otieno, a privacy and legal expert deeply entrenched in the stock photography space in Kenya.
Below is an edited and abridged transcript of our enlightening discussion. You can listen to this and other episodes of Explain to Shane on AEI.org and subscribe via your preferred listening platform. If you enjoyed this episode, leave us a review, and tell your friends and colleagues to tune in.
Shane Tews: You spend a lot of time in the intellectual property and copyright space with stock photography; what is it like interfacing with creatives, and how would you describe the exchange of intellectual property?
Susan Otieno: At first I thought it was just going to be reviewing some contracts and looking at paperwork. But it was so different because I was really interfacing with creatives, photographers, videographers, illustrators, graphic designers, and I didn’t know how to communicate with them.
It was quite frustrating because I was coming from a transactional point of view, in terms of “This is what you need to do. This is what you need to know. This is the law. It can empower you.” But it wasn’t reaching them because they perceive information from a different lens. It took us time to understand that and be able to kind of unpack my own communication style, so that I can be able to relate and convey what I need to convey. And that also got me into photography, which is great. So now I’m able to speak in a language that makes sense to creatives.
In essence, the stock photography model ideally operates on a series of permissions. So you have the model who will give the photographer permission to shoot them, and consequently monetize the image, and release them from any liability. And then you have the photographer who gives the platform permission to showcase their images on the platform. And when they’re sold, they get a commission, and the photographer will retain the rights to the image.
So, for us, we realized that in Africa, there’s quite a bit of awareness that needs to happen because people aren’t fully familiar with stock photography and how it works. We really try to unpack that and go from a simplified version, like user permissions, which is easy for helping someone understand how the platform works. So for us, we have contributors who voluntarily upload their images to the platform, and they sign a license agreement with PICHA. And then ideally, they receive 50 percent commission when an image is purchased or licensed.
With all these permissions and metadata, what are the privacy regimes like in African nations? Is there coordination?
There is some movement on it. But it’s quite a difficult task because one, not all the African countries have passed data protection laws. And two, the level of enforcement is quite different in the countries. And the countries are taking different approaches to how they want to govern data. So there’s no unifying voice yet.
For instance, Tanzania just passed their Personal Data Protection Act of 2022. And I think in the region, Kenya passed its act in 2019, Uganda also has an act. So it’s quite timely that they passed their act and they’ve taken a bit of a liberal interpretation to how they want to govern data in the jurisdiction. So for example, they don’t prohibit cross-border data transfers, and Kenya does. So it’s really interesting to see how those things will play out.
When you look at the African Continental Free Trade Area (AfCFTA), which is ideally a trade agreement that is supposed to help intra-jurisdictional trade in Africa, it’s going to be an interesting conversation to see how we talk about digital trade when we haven’t discussed the movement of data across our borders as a continent.
Even when you look at the EU, they were able to unify their laws because there’s a level of standardization, there’s a level of cognizance that we are working together, and we understand what our ecosystem is in terms of economics or socially. There’s already a level where there’s that expectation that there’s movement that’s facilitated through regulation. So, hopefully we can mirror that.
Are you thinking about AI stock image generation in your work?
Within the generative AI space, how these systems are being trained is of major concern, because in Africa or in the global north, the data points are reflective of the north. So when the system is created, it reflects the bias of the north and kind of the narrative of the north. And when you look at images, video—this is part of the visual narrative. So it really impacts our ability to tell our stories, and our ability to create of our own. So I think it’d be interesting to see how we are able to generate data, and to also potentially get into developing these AI systems.
What are African nations doing on data localization?
So when you talk about the rights of a data subject, you’re talking about the rights of an individual to their data, to express themselves in terms of determining what happens to the data. When we talk about storage, in Africa, the data centers are owned by the big cloud computing companies.
In Kenya, for example, we’re definitely doing the data localization. And you’re seeing that in a slew of other African countries. But I think it really is just a question of capacity. For us, in Kenya, we’re probably at a better position in terms of data centers. We’re able to facilitate infrastructure. We have great internet governance, too.
From your perspective, what tends to be the most common method of connecting to the internet?
The informal sector really contributes to the broader economy. The African informal sector has a high penetration in terms of mobile phones. That means that’s your point of connection. So, in terms of access, in terms of communication and information, the mobile phone has become a center for all that.
Why Industrial Policy Fails
WASHINGTON, DC – Industrial policy is all the rage nowadays. In the United States, President Joe Biden has signed laws offering hundreds of billions of dollars in incentives and funding for clean energy and domestic semiconductor manufacturing. Similarly, Donald Trump launched a trade war with China in the name of reviving US industry. Rank-and-file Democrats and Republicans alike are on board with this shift from free markets toward government planning.
But industrial policy always works better in theory than in practice. Real-world factors are likely to thwart efforts by the state to revitalize the manufacturing sector and significantly boost the number of manufacturing jobs.
Current US policies raise all the same old questions that have been asked before about industrial policy. Why should we expect the government to do a good job of picking winners and losers, or to allocate scarce resources better than the market? If the government intervenes in markets, how will it avoid mission creep, cronyism, and corruption?
In the real world, government planners simply lack the control to make an industrial policy succeed over the long term. Biden can subsidize semiconductor manufacturing with the stroke of a pen, but he cannot wave a magic wand to create workers who are qualified to staff chip-fabrication plants. Deloitte estimates that the US semiconductor industry will face a shortfall of 90,000 workers over the next few years. Just this month, Taiwan Semiconductor Manufacturing Company announced that it must delay production at an Arizona fab, owing to a lack of workers with the right experience and training.
Nor can US policymakers prevent other countries from retaliating and intervening to boost their own favored industries. Consider the Trump tariffs, which then-Secretary of Commerce Wilbur Ross defended as a case of concentrated benefits and diffuse costs. Though all Americans might have to pay 0.6 cents more for a can of soup, he argued, the country would get a boost to manufacturing employment in return.
This claim seemed to assume implicitly that no other countries would retaliate. But Aaron Flaaen and Justin Pierce, both economists at the US Federal Reserve, find that the US suffered greater losses in domestic manufacturing employment from retaliation than it gained from import protection. And because the tariffs increased the cost of intermediate goods used by US firms, Flaaen and Pierce conclude that shifting an industry from relatively light to relatively heavy tariff exposure was associated with a 2.7% reduction in manufacturing employment.
Biden’s Inflation Reduction Act provides $370 billion in tax credits and other incentives for clean-energy projects in the US. Its subsidies put American allies at an artificial disadvantage in industries such as battery production and electric-vehicle (EV) manufacturing. Not surprisingly, South Korea and the European Union have responded with their own subsidies. French President Emmanuel Macron has warned that the IRA could “fragment the West.”
None of this bodes well. Tit-for-tat industrial policies distort relative prices, and reduce economic efficiency by prioritizing political whim over comparative advantage. As more countries adopt subsidies, they will blunt the impact of subsidies elsewhere. Industrial policy lights taxpayers’ money on fire.
Yet another reason industrial policies fail is that politicians cannot resist the temptation to use public funds to advance unrelated goals. For example, in February, the Biden administration required companies receiving federal subsidies for semiconductor manufacturing to ensure affordable childcare for their workers. But what if there are not enough workers immediately available to run daycares near chip plants? Such add-ons reduce the effectiveness of the subsidies.
Moreover, companies that adhere most closely to the administration’s broader social-policy views could become politically favored and entrenched, reducing market competition, discouraging new entrants, and sapping economic dynamism. All too often, social-policy goals conflict with industrial goals. The Biden administration wants to support organized labor, but it also wants to hasten the green transition. Yet the United Auto Workers are making aggressive demands in negotiations with automakers just as those companies are facing increased costs to shift to EV production. If workers follow through with a strike next month, that will further derail US industry.
This is not to say that industrial policy should never be used. Operation Warp Speed (which accelerated COVID-19 vaccine development and deployment) and the Defense Advanced Research Projects Agency are two good examples of the government successfully orienting a specific industry toward specific goals. “Specific” is the keyword here. Restoring the entire manufacturing sector (with particular focus on swing states in the 2024 presidential election) to an unspecified semblance of its former glory is too vague, too broad, and too ambitious an objective – especially when it is combined with fighting climate change, advancing progressive social goals, and protecting US national security.
What should the US do instead? First, to safeguard national security, it should identify a narrow set of specific goods that genuinely warrant export and investment controls. Second, it should invest public funds in basic research and infrastructure – not because that will create manufacturing jobs, but because it will increase productivity, wage growth, innovation, and dynamism more broadly.
Third, it should adopt a carbon tax to lower the relative price of green technology. That would accelerate technological development and allow the market to determine which technologies are the most promising. If widespread global adoption of green technologies is the overarching goal, trade barriers are particularly problematic, as they will slow uptake – particularly among low-income countries – and reduce green tech’s role in fighting climate change.
Finally, America should invest in all workers, rather than try to turn back the clock to the heyday of manufacturing. That means increasing earned-income subsidies to support participation in the workforce, investing in training to build skills and increase wages, and reducing the barriers workers face from social policy and anticompetitive labor-market institutions.
One of the few redeeming features of American populism has been its renewed focus on workers. But populist and nationalist solutions won’t work. We owe it to workers to focus on policies that will advance mass flourishing.
Do Oppenheimer’s Warnings About Nuclear Weapons Apply to AI?
Unfortunately, we’re at risk of getting those lessons wrong.
Oppenheimer and his colleagues built the atomic bomb because almost nothing could have been worse than the Nazis winning World War II. By 1950, however, Oppenheimer opposed building the hydrogen bomb — which was orders of magnitude more powerful than the earliest atomic bombs — because he believed the tools of the Cold War had become more dangerous than those of America’s enemy. “If one is honest,” he predicted, “the most probable view of the future is that of war, exploding atomic bombs, death, and the end of most freedom.”
Oppenheimer lost the H-bomb debate, which eventually led to his loyalty being questioned and his security clearance being revoked. That coda aside, the parallels are obvious today.
Rapid-fire innovation in AI is ushering in another technological revolution. Once again, leading scientists, engineers and innovators argue it is simply too dangerous to unleash this technology on a rivalrous world. In March, prominent researchers and technologists called for a moratorium on AI development. In May, hundreds of experts wrote that AI poses a “risk of extinction” comparable to that of nuclear war. Geoffrey Hinton, a man as prominent in AI as Oppenheimer was in theoretical physics, resigned his post at Google to warn of the “existential risk” ahead.
The sentiment is understandable. Leaving aside the prospect of killer robots, AI — like most technologies — will change the world for good (better health care) and for ill (more disinformation, helping terrorists build chemical weapons). Yet the solutions that Oppenheimer offered in his day, and that some of his successors offer today, aren’t really solutions at all.
Oppenheimer was right that thermonuclear arms were awful, civilization-shattering weapons. He was wrong that the answer was simply not to build them. We now know that Stalin’s Soviet Union had already decided to create its own hydrogen bomb at the time Washington was debating the issue in 1950. Had the US offered to forgo development of that weapon, Soviet scientist Andrei Sakharov later acknowledged, Stalin would have moved to “exploit the adversary’s folly at the earliest opportunity.”
A world in which the Soviets had the most advanced thermonuclear weapons would not have been better or safer. Moscow would have possessed powerful leverage for geopolitical blackmail — which is just what Stalin’s successor, Nikita Khrushchev, did in the late 1950s when it seemed that the Soviets had surged ahead in long-range missiles.
The US government did eventually take Oppenheimer’s advice, in a limited way: It negotiated arms control agreements that restricted the number and types of nuclear weapons the superpowers possessed, and the ways in which countries could test them. Yet the US was most successful in securing mutual restraint once it had shown it would deny the Soviet Union unilateral advantage.
Now the US is at the beginning of another long debate in which issues of national advantage are mingled with concern for the common good. It is entirely possible the world will ultimately need some multilateral regime to control AI’s underlying technology or its most dangerous applications. US officials are even quietly hopeful that Moscow and Beijing will be willing to regulate technologies that could disrupt their societies as profoundly as they test the democracies.
Between now and then, though, the US surely does not want to find itself in a position of weakness because the People’s Liberation Army has mastered the next revolution in military affairs, or because China and Russia are making the most of AI — to better control their populations and more effectively diffuse their influence globally, perhaps — and the democracies aren’t.
“AI technologies will be a source of enormous power for the companies and countries that harness them,” reads a report issued in 2021 by a panel led by former Google CEO Eric Schmidt and former Deputy Secretary of Defense Robert Work. As during the nuclear age, the democracies must first address the danger that their rivals will asymmetrically exploit new technologies before they address the common dangers those technologies pose.
So understood the president who decided to build the hydrogen bomb seven decades ago. “No one wants to use it,” Harry Truman remarked. “But … we have got to have it if only for bargaining purposes with the Russians.” In the current era of technological dynamism and intense global rivalry, America needs new Oppenheimers — but it probably needs new Trumans more.
Japanese AI Advances Offer Lessons for the Rest of the World
After a week full of meetings, interviews, and site visits in Tokyo and its environs, I came away impressed by the dedicated, thoughtful way Japanese companies, lawyers, and academics are approaching what may be the most consequential technological leap in decades.
How and why has Japan thrived in developing AI? Integration, demographics, ethics, and smart regulation headline the list of explanations.
For starters, few countries excel like Japan does at integrating numerous disciplines into a single technology, as Kenichi Yoshida, chief business officer of SoftBank Robotics, explained to me. He should know: SoftBank is a Tokyo-based multinational investment conglomerate, and robotics is “the next big thing,” according—in Yoshida’s recollection—to Masayoshi Son, the legendary founder and head of SoftBank.
Yoshida’s group works to integrate the “brain” of AI in the “body” of robots in various industrial and consumer applications. He explained to me, “You need a human level of understanding” for many of these robotics use cases, and his group has focused—with evident success—on exactly that.
Demographics have also exerted a substantial influence on Japanese technological development. With a total fertility rate of 1.26—far below the 2.1 threshold required to maintain a stable population—and a populace in numerical decline over the past 16 years, the country must adapt to new realities.
For instance, Yoshida told me more than 60 percent of Japanese janitorial staff are over 60 years old, and the country, famous for its cleanliness, is poised for an increasingly dirty future if demographic trends continue. Thus, SoftBank Robotics has focused many efforts in the janitorial space, much of which he believes “can and should be robotized.”
Japan has also evinced a balanced approach to ethically developing AI.
In July, Japan’s economy minister told students at the University of Tokyo that the government was doing all it could to transform the island nation into “a global AI hub.” I witnessed evidence of those efforts throughout the country, but especially at the University of Tokyo in Professor Yasuo Kuniyoshi’s Next Generation Artificial Intelligence Research Center.
Kuniyoshi’s research focuses on what he calls “embodiment,” or understanding how interaction with the physical world enables cognitive development. “Sharing a similar body,” he explained to me, “is a very important basis for empathy.” Kuniyoshi detailed how his groundbreaking work aims to understand, and ultimately apply to AI, the process of how we “acquire [a] very early proto-moral sense of humanity.”
Of course, ensuring appropriate ethical guidelines falls into the realm of policy. And according to some reports, the Japanese government will shortly introduce data-disclosure guidelines for AI companies to follow that are designed to protect privacy and intellectual property rights. My interlocutors broadly supported a government-driven approach even as they largely disregarded the doomsday mindset of some Western anti-AI advocates.
“The anti-AI fear of the apocalypse isn’t widespread here,” Shuichi Shitara, the general manager of Taiyo, Nakajima and Kato (a leading Tokyo-based intellectual property law firm), told me. Instead, “smart” (i.e., business-friendly) regulation seems to be the favored approach. Agreeing with Shitara’s sentiment, Yoshida likened it to the automotive industry, in which Japan’s regulators and major corporations work hand in glove; he also suggested creating a non-governmental auditing organization.
In June, during a visit to Tokyo, OpenAI CEO Sam Altman reportedly told Japanese business groups and students that “this is the time for Japan to pour all its efforts into AI.” Somebody seemed to be listening, and the rest of the world should pay close attention.
The post Japanese AI Advances Offer Lessons for the Rest of the World appeared first on American Enterprise Institute - AEI.
Thanks to advancing technologies, new ways to search people have become a permanent fixture on the near horizon. But they run into a few challenges. One is the Constitution. A second, even more difficult one is logistics.
Not for the first time, the TSA is working to implement search technologies at transportation hubs such as Grand Central Terminal in New York. One such technology is Pendar, which the TSA describes as “a hand-held video camera designed for short-range, point-and-shoot identification of visible residue levels of hazardous chemicals, explosives and narcotics.” Pendar uses Raman spectroscopy: it shines a laser at a surface and analyzes the scattered light for the spectral signatures of particular molecules.
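To make that matching step concrete, here is a minimal Python sketch of the kind of comparison such a device performs: a measured spectrum is scored against a small library of reference signatures. The spectra, names, and threshold are invented for illustration; this is not Pendar's actual algorithm.

```python
# Toy illustration of the matching step behind Raman-based detection:
# compare a measured spectrum against a small library of reference
# signatures and flag the closest match above a similarity threshold.
# The spectra and threshold are invented for demonstration only.
import numpy as np

REFERENCE_SPECTRA = {
    # name -> intensity values across (hypothetical) wavenumber bins
    "TNT-like signature": np.array([0.1, 0.9, 0.2, 0.7, 0.1, 0.0]),
    "acetone (benign)":   np.array([0.8, 0.1, 0.1, 0.0, 0.6, 0.2]),
    "sucrose (benign)":   np.array([0.2, 0.2, 0.8, 0.1, 0.1, 0.7]),
}

def best_match(measured, threshold=0.95):
    """Return the library entry most similar to the measured spectrum,
    or None if nothing clears the threshold (no confident identification)."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(measured, ref) for name, ref in REFERENCE_SPECTRA.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    return (name, score) if score >= threshold else None

# A noisy reading that resembles the first reference signature.
reading = np.array([0.12, 0.85, 0.25, 0.65, 0.08, 0.05])
print(best_match(reading))  # -> ('TNT-like signature', ~0.99)
```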
Another is Thruvision, which detects anomalies in heat patterns coming from people’s bodies. Tucking a paperback into your pants at the sacrum blocks heat coming off your sweaty back, a sign you are sneaking something onto the airplane. (If Fabio Lanzoni modeled for the cover, you should be sneaking it!)
An annotated copy of the Constitution would be an anomaly to Thruvision, so let’s start with the challenge the Constitution poses. As I argued to the Supreme Court in a brief some years ago, using specialized tools, including trained dogs, to search people for chemicals, odors, and items is “searching”:
The use of a drug-sniffing dog is a “search” in ordinary legal language and the nearest precedent of this court. The sniff of such a dog “look[s] for or seek[s] out that which is otherwise concealed from view.” It is “‘look[ing] over or through for the purpose of finding something.’” And it is use of “a device that is not in general public use, to explore details of the home that would previously have been unknowable without physical intrusion.” (Citations omitted.)
The basic rule, when government agents search, is that the search must be based on suspicion rising to the level of probable cause. The standard practice is to have that confirmed by a neutral magistrate who issues a warrant. None of this could be administered at a transportation hub.
The reflexive justification for warrantless searching is that it’s reasonable without suspicion or a warrant because threats to transportation are so great. In reality, they’re not, as anyone doing a dispassionate transportation risk assessment will tell you. The argument also proves too much because transportation is not a uniquely vulnerable infrastructure. If open-ended location vulnerability were the rule, anyone appearing anywhere people congregate could reasonably be searched. The Fourth Amendment was included in the Bill of Rights because of the colonists’ enmity toward the general warrants that King George’s agents lorded over them. We don’t do that.
But when security is at stake, many people are willing to dispense with niceties such as constitutional rights. So we turn to another challenge: throughput.
About 750,000 people pass through Grand Central Terminal every day. Stopping even a tiny percentage to have their bodies “Thruvisioned” for hidden bodice rippers would be a huge task—and hugely expensive in terms of travelers’ time. The “false positive” problem looms large, as mascara bottles, books, and colostomy bags are vastly more common under people’s clothes than dangerous articles.
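Some rough arithmetic shows why. The sketch below uses the 750,000 daily passengers cited above, but the stop rate, false-positive rate, prevalence of dangerous items, and detection rate are assumptions chosen only to illustrate how base rates dominate the outcome.

```python
# Back-of-the-envelope look at the false-positive problem. All of the
# rates below are assumptions for illustration, not measured figures.
daily_passengers = 750_000      # from the post: Grand Central's daily traffic
stop_rate = 0.01                # assume 1% of passengers are stopped and scanned
false_positive_rate = 0.05      # assume 5% of innocuous items trigger an anomaly
prevalence = 1e-6               # assume 1 in a million stopped carries a dangerous item
detection_rate = 0.9            # assume the scanner catches 90% of real threats

stopped = daily_passengers * stop_rate
false_alarms = stopped * (1 - prevalence) * false_positive_rate
true_hits = stopped * prevalence * detection_rate

print(f"Passengers stopped per day: {stopped:,.0f}")
print(f"False alarms per day:       {false_alarms:,.0f}")
print(f"Genuine finds per day:      {true_hits:.4f}")
# Even with generous assumptions, false alarms outnumber genuine finds
# by tens of thousands to one -- the arithmetic behind "looms large."
```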
Then there is the problem of whom to stop. You can go to extraordinary lengths to prevent it, but this type of searching is ripe for aiming at “the black guy.” (Even if you randomize stops, the length of stops and the levels of courtesy or deference shown would undercut perceptions of neutrality and justifiably contribute to poor public perceptions.) If Grand Central indulged the idea of stopping people for these searches, it would have to delay and deeply offend hundreds of passengers per day to find a negligible quantity of dangerous or criminal items.
Using Pendar presents similar problems, if shifted in phase. For accurate readings, it probably requires stopping people too. If not, the vagaries of airborne particulates and residue could mean groups of people have to be pulled over (including “the black guy”) for superfluous searches. Recent veterans and participants in shooting sports would get special, wrongful attention. Pendar’s suitability for transportation security is highly dubious.
Timeless constitutional principles show these technological “fixes” for transportation security are broken, but the practical problems are probably the leading edge of the sword that kills them.
The post Broken Technical Fixes to Transportation Security appeared first on American Enterprise Institute - AEI.
The Brisbane trial comes five years after Qantas trialed an apparently very similar app at Sydney Airport. In that trial, billed as “couch-to-gate” biometrics, customers could download an app and use their face as identification to automate check-in, bag drop, lounge access, and boarding. According to Sydney Airport CEO Geoff Culbert,
In the future, there will be no more juggling passports and bags at check-in and digging through pockets or smartphones to show your boarding pass—your face will be your passport and your boarding pass at every step of the process.
But while it allowed “our lounge staff” to “create a more personalised experience when passengers arrive,” the app did not lead to any changes to airport security or border-processing procedures.
I was intrigued by the time gap between the two trials, and even more so by the irony that using biometric data for border-control processing has been de rigueur at Brisbane Airport since 2007, when the Australian Border Force introduced SmartGate. SmartGate uses the photograph stored on a chip in an ePassport to verify automatically that the person at the gate is the passport owner. This service is now offered to Australians and a wide range of other ePassport holders (including United States citizens) at all Australian international airports.
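For readers curious about what that verification involves, here is a minimal sketch of the one-to-one matching step an eGate performs. The embedding vectors and threshold are invented placeholders; a real system derives high-dimensional embeddings from the chip photo and the live camera capture using a trained face-recognition model.

```python
# Minimal sketch of 1:1 face verification at an eGate: compare a numeric
# "embedding" of the passport-chip photo with one computed from the live
# camera image. The vectors and threshold below are placeholders; a real
# face-recognition model produces the embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(chip_embedding: np.ndarray, live_embedding: np.ndarray,
           threshold: float = 0.8) -> bool:
    """Open the gate only if the two face embeddings are similar enough."""
    return cosine_similarity(chip_embedding, live_embedding) >= threshold

# Pretend embeddings (a real system would use vectors with hundreds of
# dimensions computed from the chip photo and the live capture).
chip_photo = np.array([0.21, 0.80, 0.35, 0.10])
live_capture = np.array([0.25, 0.78, 0.30, 0.15])
impostor = np.array([0.90, 0.05, 0.10, 0.70])

print(verify(chip_photo, live_capture))  # True  -> gate opens
print(verify(chip_photo, impostor))      # False -> refer to an officer
```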
The airline appears to be lagging nearly 20 years behind the government bureaucracy in the use of biometric identification technology. Might that be because the government can compel the use of biometric data for passports, but its use by airlines is voluntary, meaning there is no real reduction in airline costs as both human and electronic processes must operate in tandem indefinitely? Given that the airlines must make substantial investments in camera technology to make the new systems work, the incentives to push their use appear weak.
There is a similar airline-tech conundrum with dot matrix printers at boarding gates. On a recent overseas trip, I had the opportunity to observe them at airports across Africa, Asia, Australia, and Europe, including some of the newest airports in countries that pride themselves on their digital sophistication. Dot matrix printers use a fixed number of pins or wires arranged in vertical columns that strike an ink-coated ribbon to make a small dot on the paper. The combination of these dots forms an image—for example, a letter symbol. These printers were ubiquitous from the 1970s until the late 1990s, when they were largely replaced in general use by inkjet and laser printers. The advantage of the latter two is that they can print on standard sheets of paper. Dot matrix printers require tractor-feed paper with sprocket holes at the edges that fit over printer prongs that roll the paper through as the pins print on it. The paper comes in folded stacks, usually with perforations that allow the paper to be separated into individual sheets once printed.
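The mechanism is simple enough to sketch in a few lines of code. The 5x7 pattern below is a hand-made stand-in for a real printer's character ROM, but it shows the core idea: a character is just a grid telling the print head which pins to fire.

```python
# The essence of dot matrix printing: a character is a grid specifying
# which pins strike the ribbon. This 5x7 glyph for "A" is a simplified,
# hand-drawn pattern, not taken from any real printer's character ROM.
GLYPH_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def print_glyph(rows, pin_on="#", pin_off=" "):
    """Render each row: '1' means a pin strikes the ribbon, '0' means it doesn't."""
    for row in rows:
        print("".join(pin_on if bit == "1" else pin_off for bit in row))

print_glyph(GLYPH_A)
# Because the pins physically strike the ribbon, the same pass can press
# the image through carbon layers as well.
```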
The continued justification for dot matrix printers, for international flights at least, appears to be pilots’ obligations to provide signed copies of literal “paperwork” in order to meet various regulations in different countries. Furthermore, identical copies, each signed individually by the (separately) responsible crew member, frequently must be provided to different authorities for the same flight. The unique benefit of dot matrix printing is that its physical action enables the production of carbon copies, just like an old typewriter. They are also very cost-effective for printing multiple copies of the same document. Hence, these printers still perform a useful function in specific use cases, notably transportation, in which multiple copies of the same manifest documentation are required.
To be fair, airlines are moving to tablets when possible, particularly for domestic flights (where documentation obligations are less stringent) and in-flight service management. But sometimes, the “old ways” are just more cost-effective, given regulatory requirements.
The post What Delays Airlines’ Use of Technological Innovations? appeared first on American Enterprise Institute - AEI.
Such a device, even after the introduction of the Internet and tablet computers, has remained in the realm of science fiction—until now. Artificial intelligence, or AI, took a giant leap forward with the introduction in November 2022 of ChatGPT, an AI technology capable of producing remarkably creative responses and sophisticated analysis through human-like dialogue. It has triggered a wave of innovation, some of which suggests we might be on the brink of an era of interactive, super-intelligent tools not unlike the book Stephenson dreamed up for Nell.
Sundar Pichai, Google’s CEO, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past.” Reid Hoffman, a co-founder of LinkedIn and current partner at Greylock Partners, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Bill Gates has said that “this new wave of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone.”
Over the last year, developers have released a dizzying array of AI tools that can generate text, images, music, and video with no need for complicated coding but simply in response to instructions given in natural language. These technologies are rapidly improving, and developers are introducing capabilities that would have been considered science fiction just a few years ago. AI is also raising pressing ethical questions around bias, appropriate use, and plagiarism.
In the realm of education, this technology will influence how students learn, how teachers work, and ultimately how we structure our education system. Some educators and leaders look forward to these changes with great enthusiasm. Sal Khan, founder of Khan Academy, went so far as to say in a TED talk that AI has the potential to effect “probably the biggest positive transformation that education has ever seen.” But others warn that AI will enable the spread of misinformation, facilitate cheating in school and college, kill whatever vestiges of individual privacy remain, and cause massive job loss. The challenge is to harness the positive potential while avoiding or mitigating the harm.
What Is Generative AI?
Artificial intelligence is a branch of computer science that focuses on creating software capable of mimicking behaviors and processes we would consider “intelligent” if exhibited by humans, including reasoning, learning, problem-solving, and exercising creativity. AI systems can be applied to an extensive range of tasks, including language translation, image recognition, navigating autonomous vehicles, detecting and treating cancer, and, in the case of generative AI, producing content and knowledge rather than simply searching for and retrieving it.
“Foundation models” in generative AI are systems trained on a large dataset to learn a broad base of knowledge that can then be adapted to a range of different, more specific purposes. This learning method is self-supervised, meaning the model learns by finding patterns and relationships in the data it is trained on.
Large Language Models (LLMs) are foundation models that have been trained on a vast amount of text data. For example, the training data for OpenAI’s GPT model consisted of web content, books, Wikipedia articles, news articles, social media posts, code snippets, and more. OpenAI’s GPT-3 models underwent training on a staggering 300 billion “tokens,” or word pieces, using more than 175 billion parameters to shape the model’s behavior—roughly 100 times as many parameters as the company’s GPT-2 model had.
By doing this analysis across billions of sentences, LLMs develop a statistical understanding of language: how words and phrases are usually combined, what topics are typically discussed together, and what tone or style is appropriate in different contexts. That understanding allows them to generate human-like text and perform a wide range of tasks, such as writing articles, answering questions, or analyzing unstructured data.
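A toy example helps make "statistical understanding" concrete. The sketch below counts which word tends to follow which in a tiny corpus and then generates text from those counts; real LLMs learn vastly richer patterns with billions of parameters, but the underlying task of predicting the next token from patterns in training text is the same.

```python
# A toy illustration of "statistical understanding of language": count
# which word tends to follow which in a tiny corpus, then generate text
# by sampling from those counts. Real LLMs learn far richer patterns,
# but the next-token-prediction idea is the same.
import random
from collections import defaultdict, Counter

corpus = (
    "students learn best when teachers give feedback . "
    "teachers give students examples . "
    "students give teachers feedback ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=options.values())[0]
        output.append(word)
    return " ".join(output)

print(generate("students"))
# e.g. "students learn best when teachers give students examples ."
```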
LLMs include OpenAI’s GPT-4, Google’s PaLM, and Meta’s LLaMA. These LLMs serve as “foundations” for AI applications. ChatGPT is built on GPT-3.5 and GPT-4, while Bard uses Google’s Pathways Language Model 2 (PaLM 2) as its foundation.
Some of the best-known applications are:
ChatGPT 3.5. The free version of ChatGPT released by OpenAI in November 2022. It was trained on data only up to 2021, and while it is very fast, it is prone to inaccuracies.
ChatGPT 4.0. The newest version of ChatGPT, which is more powerful and accurate than ChatGPT 3.5 but also slower, and it requires a paid account. It also has extended capabilities through plug-ins that give it the ability to interface with content from websites, perform more sophisticated mathematical functions, and access other services. A new Code Interpreter feature gives ChatGPT the ability to analyze data, create charts, solve math problems, edit files, and even develop hypotheses to explain data trends.
Microsoft Bing Chat. An iteration of Microsoft’s Bing search engine that is enhanced with OpenAI’s ChatGPT technology. It can browse websites and offers source citations with its results.
Google Bard. Google’s AI generates text, translates languages, writes different kinds of creative content, and writes and debugs code in more than 20 programming languages. The tone and style of Bard’s replies can be fine-tuned to be simple, long, short, professional, or casual. Bard also leverages Google Lens to analyze images uploaded with prompts.
Anthropic Claude 2. A chatbot that can generate text, summarize content, and perform other tasks, Claude 2 can analyze texts of roughly 75,000 words—about the length of The Great Gatsby—and generate responses of more than 3,000 words. The model was built using a set of principles that serve as a sort of “constitution” for AI systems, with the aim of making them more helpful, honest, and harmless.
These AI systems have been improving at a remarkable pace, including in how well they perform on assessments of human knowledge. OpenAI’s GPT-3.5, which was released in March 2022, only managed to score in the 10th percentile on the bar exam, but GPT-4.0, introduced a year later, made a significant leap, scoring in the 90th percentile. What makes these feats especially impressive is that OpenAI did not specifically train the system to take these exams; the AI was able to come up with the correct answers on its own. Similarly, Google’s medical AI model substantially improved its performance on a U.S. Medical Licensing Examination practice test, with its accuracy rate jumping to 85 percent in March 2023 from 33 percent in December 2020.
These two examples prompt one to ask: if AI continues to improve so rapidly, what will these systems be able to achieve in the next few years? What’s more, new studies challenge the assumption that AI-generated responses are stale or sterile. In the case of Google’s AI model, physicians preferred the AI’s long-form answers to those written by their fellow doctors, and nonmedical study participants rated the AI answers as more helpful. Another study found that participants preferred a medical chatbot’s responses over those of a physician and rated them significantly higher, not just for quality but also for empathy. What will happen when “empathetic” AI is used in education?
Other studies have looked at the reasoning capabilities of these models. Microsoft researchers suggest that newer systems “exhibit more general intelligence than previous AI models” and are coming “strikingly close to human-level performance.” While some observers question those conclusions, the AI systems display an increasing ability to generate coherent and contextually appropriate responses, make connections between different pieces of information, and engage in reasoning processes such as inference, deduction, and analogy.
Despite their prodigious capabilities, these systems are not without flaws. At times, they churn out information that might sound convincing but is irrelevant, illogical, or entirely false—a phenomenon known as “hallucination.” The execution of certain mathematical operations presents another area of difficulty for AI. And while these systems can generate well-crafted and realistic text, understanding why the model made specific decisions or predictions can be challenging.
The Importance of Well-Designed Prompts
Using generative AI systems such as ChatGPT, Bard, and Claude 2 is relatively simple. One has only to type in a request or a task (called a prompt), and the AI generates a response. Properly constructed prompts are essential for getting useful results from generative AI tools. You can ask generative AI to analyze text, find patterns in data, compare opposing arguments, and summarize an article in different ways (see sidebar for examples of AI prompts).
One challenge is that, after using search engines for years, people have been preconditioned to phrase questions in a certain way. A search engine is something like a helpful librarian who takes a specific question and points you to the most relevant sources for possible answers. The search engine (or librarian) doesn’t create anything new but efficiently retrieves what’s already there.
Generative AI is more akin to a competent intern. You give a generative AI tool instructions through prompts, as you would to an intern, asking it to complete a task and produce a product. The AI interprets your instructions, thinks about the best way to carry them out, and produces something original or performs a task to fulfill your directive. The results aren’t pre-made or stored somewhere—they’re produced on the fly, based on the information the intern (generative AI) has been trained on. The output often depends on the precision and clarity of the instructions (prompts) you provide. A vague or poorly defined prompt might lead the AI to produce less relevant results. The more context and direction you give it, the better the result will be. What’s more, the capabilities of these AI systems are being enhanced through the introduction of versatile plug-ins that equip them to browse websites, analyze data files, or access other services. Think of this as giving your intern access to a group of experts to help accomplish your tasks.
One strategy in using a generative AI tool is first to tell it what kind of expert or persona you want it to “be.” Ask it to be an expert management consultant, a skilled teacher, a writing tutor, or a copy editor, and then give it a task.
Prompts can also be constructed to get these AI systems to perform complex and multi-step operations. For example, let’s say a teacher wants to create an adaptive tutoring program—for any subject, any grade, in any language—that customizes the examples for students based on their interests. She wants each lesson to culminate in a short-response or multiple-choice quiz. If the student answers the questions correctly, the AI tutor should move on to the next lesson. If the student responds incorrectly, the AI should explain the concept again, but using simpler language.
Previously, designing this kind of interactive system would have required a relatively sophisticated and expensive software program. With ChatGPT, however, just giving those instructions in a prompt delivers a serviceable tutoring system. It isn’t perfect, but remember that it was built virtually for free, with just a few lines of English language as a command. And nothing in the education market today has the capability to generate almost limitless examples to connect the lesson concept to students’ interests.
Chained prompts can also help focus AI systems. For example, an educator can prompt a generative AI system first to read a practice guide from the What Works Clearinghouse and summarize its recommendations. Then, in a follow-up prompt, the teacher can ask the AI to develop a set of classroom activities based on what it just read. By curating the source material and using the right prompts, the educator can anchor the generated responses in evidence and high-quality research.
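In code, a chained workflow of this kind might look like the sketch below. The ask_llm helper, the practice-guide text, and the grade level are illustrative placeholders, not a real API; the point is simply that each prompt is appended to the running conversation so the second request stays anchored in the summary the model just produced.

```python
# A sketch of the chained-prompt pattern described above. ask_llm() is a
# placeholder: wire it to whichever chat model or provider you use. The
# prompts and source text are illustrative.
def ask_llm(messages):
    """Placeholder for a chat-model call. Swap in your provider's
    chat-completion API here; this stub echoes a canned reply so the
    chaining logic can run end to end without network access."""
    return f"[model reply to: {messages[-1]['content'][:60]}...]"

practice_guide_text = "...text of a What Works Clearinghouse practice guide..."  # placeholder

history = [
    {"role": "system", "content": "You are an expert instructional coach."},
    {"role": "user", "content": "Read the practice guide below and summarize "
                                "its recommendations in plain language.\n\n"
                                + practice_guide_text},
]
summary = ask_llm(history)

# Chain the second prompt onto the first so the activities stay anchored
# in the recommendations the model just summarized.
history += [
    {"role": "assistant", "content": summary},
    {"role": "user", "content": "Based on those recommendations, develop a set "
                                "of classroom activities for a 6th-grade class."},
]
activities = ask_llm(history)
print(activities)
```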
However, much like fledgling interns learning the ropes in a new environment, AI does commit occasional errors. Such fallibility, while inevitable, underlines the critical importance of maintaining rigorous oversight of AI’s output. Monitoring not only acts as a crucial checkpoint for accuracy but also becomes a vital source of real-time feedback for the system. It’s through this iterative refinement process that an AI system, over time, can significantly minimize its error rate and increase its efficacy.
Uses of AI in Education
In May 2023, the U.S. Department of Education released a report titled Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. The department had conducted listening sessions in 2022 with more than 700 people, including educators and parents, to gauge their views on AI. The report noted that “constituents believe that action is required now in order to get ahead of the expected increase of AI in education technology—and they want to roll up their sleeves and start working together.” People expressed anxiety about “future potential risks” with AI but also felt that “AI may enable achieving educational priorities in better ways, at scale, and with lower costs.”
AI could serve—or is already serving—in several teaching-and-learning roles:
Instructional assistants. AI’s ability to conduct human-like conversations opens up possibilities for adaptive tutoring or instructional assistants that can help explain difficult concepts to students. AI-based feedback systems can offer constructive critiques on student writing, which can help students fine-tune their writing skills. Some research also suggests certain kinds of prompts can help children generate more fruitful questions about learning. AI models might also support customized learning for students with disabilities and provide translation for English language learners.
Teaching assistants. AI might tackle some of the administrative tasks that keep teachers from investing more time with their peers or students. Early uses include automated routine tasks such as drafting lesson plans, creating differentiated materials, designing worksheets, developing quizzes, and exploring ways of explaining complicated academic materials. AI can also provide educators with recommendations to meet student needs and help teachers reflect, plan, and improve their practice.
Parent assistants. Parents can use AI to generate letters requesting individualized education plan (IEP) services or to ask that a child be evaluated for gifted and talented programs. For parents choosing a school for their child, AI could serve as an administrative assistant, mapping out school options within driving distance of home, generating application timelines, compiling contact information, and the like. Generative AI can even create bedtime stories with evolving plots tailored to a child’s interests.
Administrator assistants. Using generative AI, school administrators can draft various communications, including materials for parents, newsletters, and other community-engagement documents. AI systems can also help with the difficult tasks of organizing class or bus schedules, and they can analyze complex data to identify patterns or needs. ChatGPT can perform sophisticated sentiment analysis that could be useful for measuring school-climate and other survey data.
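As an illustration of that last use, here is a minimal sketch of LLM-based sentiment analysis of open-ended survey comments. The classify_with_llm helper is a placeholder that returns canned labels so the tallying step can run; in practice it would call a chat model, and the comments are invented.

```python
# Sketch of using an LLM for survey sentiment analysis: batch open-ended
# school-climate comments into one classification prompt, then tally the
# labels the model returns. classify_with_llm() is a placeholder for a
# real chat-model call; the comments and canned labels are invented.
from collections import Counter

comments = [
    "My teachers really listen to me.",
    "The cafeteria lines are way too long.",
    "I feel safe walking between classes.",
]

prompt = (
    "Classify the sentiment of each school-climate survey comment as "
    "positive, negative, or neutral. Reply with one label per line.\n\n"
    + "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
)

def classify_with_llm(prompt_text: str) -> str:
    """Placeholder: send prompt_text to a chat model and return its reply.
    A canned reply is returned here so the tallying step can run."""
    return "positive\nnegative\npositive"

labels = classify_with_llm(prompt).strip().splitlines()
print(Counter(labels))  # e.g. Counter({'positive': 2, 'negative': 1})
```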
Though the potential is great, most teachers have yet to use these tools. A Morning Consult and EdChoice poll found that while 60 percent say they’ve heard about ChatGPT, only 14 percent have used it in their free time, and just 13 percent have used it at school. It’s likely that most teachers and students will engage with generative AI not through the platforms themselves but rather through AI capabilities embedded in software. Instructional providers such as Khan Academy, Varsity Tutors, and DuoLingo are experimenting with GPT-4-powered tutors that are trained on datasets specific to these organizations to provide individualized learning support that has additional guardrails to help protect students and enhance the experience for teachers.
Google’s Project Tailwind is experimenting with an AI notebook that can analyze student notes and then develop study questions or provide tutoring support through a chat interface. These features could soon be available on Google Classroom, potentially reaching over half of all U.S. classrooms. Brisk Teaching is one of the first companies to build a portfolio of AI services designed specifically for teachers—differentiating content, drafting lesson plans, providing student feedback, and serving as an AI assistant to streamline workflow among different apps and tools.
Providers of curriculum and instruction materials might also include AI assistants for instant help and tutoring tailored to the companies’ products. One example is the edX Xpert, a ChatGPT-based learning assistant on the edX platform. It offers immediate, customized academic and customer support for online learners worldwide.
Regardless of the ways AI is used in classrooms, the fundamental task of policymakers and education leaders is to ensure that the technology is serving sound instructional practice. As Vicki Phillips, CEO of the National Center on Education and the Economy, wrote, “We should not only think about how technology can assist teachers and learners in improving what they’re doing now, but what it means for ensuring that new ways of teaching and learning flourish alongside the applications of AI.”
Challenges and Risks
Along with these potential benefits come some difficult challenges and risks the education community must navigate:
Student cheating. Students might use AI to solve homework problems or take quizzes. AI-generated essays threaten to undermine learning as well as the college-entrance process. Aside from the ethical issues involved in such cheating, students who use AI to do their work for them may not be learning the content and skills they need.
Bias in AI algorithms. AI systems learn from the data they are trained on. If this data contains biases, those biases can be learned and perpetuated by the AI system. For example, if the data include student-performance information that’s biased toward one ethnicity, gender, or socioeconomic segment, the AI system could learn to favor students from that group. Less cited but still important are potential biases around political ideology and possibly even pedagogical philosophy that may generate responses not aligned to a community’s values.
Privacy concerns. When students or educators interact with generative-AI tools, their conversations and personal information might be stored and analyzed, posing a risk to their privacy. With public AI systems, educators should refrain from inputting or exposing sensitive details about themselves, their colleagues, or their students, including but not limited to private communications, personally identifiable information, health records, academic performance, emotional well-being, and financial information.
Decreased social connection. There is a risk that more time spent using AI systems will come at the cost of less student interaction with both educators and classmates. Children may also begin turning to these conversational AI systems in place of their friends. As a result, AI could intensify and worsen the public health crisis of loneliness, isolation, and lack of connection identified by the U.S. Surgeon General.
Overreliance on technology. Both teachers and students face the risk of becoming overly reliant on AI-driven technology. For students, this could stifle learning, especially the development of critical thinking. This challenge extends to educators as well. While AI can expedite lesson-plan generation, speed does not equate to quality. Teachers may be tempted to accept the initial AI-generated content rather than devote time to reviewing and refining it for optimal educational value.
Equity issues. Not all students have equal access to computer devices and the Internet. That imbalance could accelerate a widening of the achievement gap between students from different socioeconomic backgrounds.
Many of these risks are not new or unique to AI. Schools banned calculators and cellphones when these devices were first introduced, largely over concerns related to cheating. Privacy concerns around educational technology have led lawmakers to introduce hundreds of bills in state legislatures, and there are growing tensions between new technologies and existing federal privacy laws. The concerns over bias are understandable, but similar scrutiny is also warranted for existing content and materials that rarely, if ever, undergo review for racial or political bias.
In light of these challenges, the Department of Education has stressed the importance of keeping “humans in the loop” when using AI, particularly when the output might be used to inform a decision. As the department encouraged in its 2023 report, teachers, learners, and others need to retain their agency. AI cannot “replace a teacher, a guardian, or an education leader as the custodian of their students’ learning,” the report stressed.
Policy Challenges with AI
Policymakers are grappling with several questions related to AI as they seek to strike a balance between supporting innovation and protecting the public interest (see sidebar). The speed of innovation in AI is outpacing many policymakers’ understanding, let alone their ability to develop a consensus on the best ways to minimize the potential harms from AI while maximizing the benefits. The Department of Education’s 2023 report describes the risks and opportunities posed by AI, but its recommendations amount to guidance at best. The White House released a Blueprint for an AI Bill of Rights, but it, too, is more an aspirational statement than a governing document. Congress is drafting legislation related to AI, which will help generate needed debate, but the path to the president’s desk for signature is murky at best.
It is up to policymakers to establish clearer rules of the road and create a framework that provides consumer protections, builds public trust in AI systems, and establishes the regulatory certainty companies need for their product road maps. Considering the potential for AI to affect our economy, national security, and broader society, there is no time to waste.
Why AI Is Different
It is wise to be skeptical of new technologies that claim to revolutionize learning. In the past, prognosticators have promised that television, the computer, and the Internet, in turn, would transform education. Unfortunately, the heralded revolutions fell short of expectations.
There are some early signs, though, that this technological wave might be different in the benefits it brings to students, teachers, and parents. Previous technologies democratized access to content and resources, but AI is democratizing a kind of machine intelligence that can be used to perform a myriad of tasks. Moreover, these capabilities are open and affordable—nearly anyone with an Internet connection and a phone now has access to an intelligent assistant.
Generative AI models keep getting more powerful and are improving rapidly. The capabilities of these systems months or years from now will far exceed their current capacity. Their capabilities are also expanding through integration with other expert systems. Take math, for example. GPT-3.5 had some difficulties with certain basic mathematical concepts, but GPT-4 made significant improvement. Now, the incorporation of the Wolfram plug-in has nearly erased the remaining limitations.
It’s reasonable to anticipate that these systems will become more potent, more accessible, and more affordable in the years ahead. The question, then, is how to use these emerging capabilities responsibly to improve teaching and learning.
The paradox of AI may lie in its potential to enhance the human, interpersonal element in education. Aaron Levie, CEO of Box, a Cloud-based content-management company, believes that AI will ultimately help us attend more quickly to those important tasks “that only a human can do.” Frederick Hess, director of education policy studies at the American Enterprise Institute, similarly asserts that “successful schools are inevitably the product of the relationships between adults and students. When technology ignores that, it’s bound to disappoint. But when it’s designed to offer more coaching, free up time for meaningful teacher-student interaction, or offer students more personalized feedback, technology can make a significant, positive difference.”
Technology does not revolutionize education; humans do. It is humans who create the systems and institutions that educate children, and it is the leaders of those systems who decide which tools to use and how to use them. Until those institutions modernize to accommodate the new possibilities of these technologies, we should expect incremental improvements at best. As Joel Rose, CEO of New Classrooms Innovation Partners, noted, “The most urgent need is for new and existing organizations to redesign the student experience in ways that take full advantage of AI’s capabilities.”
While past technologies have not lived up to hyped expectations, AI is not merely a continuation of the past; it is a leap into a new era of machine intelligence that we are only beginning to grasp. While the immediate implementation of these systems is imperfect, the swift pace of improvement holds promising prospects. The responsibility rests with human intervention—with educators, policymakers, and parents to incorporate this technology thoughtfully in a manner that optimally benefits teachers and learners. Our collective ambition should not focus solely or primarily on averting potential risks but rather on articulating a vision of the role AI should play in teaching and learning—a game plan that leverages the best of these technologies while preserving the best of human relationships.
Policy Matters
Officials and lawmakers must grapple with several questions related to AI to protect students and consumers and establish the rules of the road for companies. Key issues include:
Risk management framework: What is the optimal framework for assessing and managing AI risks? What specific requirements should be instituted for higher-risk applications? In education, for example, there is a difference between an AI system that generates a lesson sample and an AI system grading a test that will determine a student’s admission to a school or program. There is growing support for using the AI Risk Management Framework from the U.S. Commerce Department’s National Institute of Standards and Technology as a starting point for building trustworthiness into the design, development, use, and evaluation of AI products, services, and systems.
Licensing and certification: Should the United States require licensing and certification for AI models, systems, and applications? If so, what role could third-party audits and certifications play in assessing the safety and reliability of different AI systems? Schools and companies need to begin thinking about responsible AI practices to prepare for potential certification systems in the future.
Centralized vs. decentralized AI governance: Is it more effective to establish a central AI authority or agency, or would it be preferable to allow individual sectors to manage their own AI-related issues? For example, regulating AI in autonomous vehicles is different from regulating AI in drug discovery or intelligent tutoring systems. Overly broad, one-size-fits-all frameworks and mandates may not work and could slow innovation in these sectors. In addition, it is not clear that many agencies have the authority or expertise to regulate AI systems in diverse sectors.
Privacy and content moderation: Many of the new AI systems pose significant new privacy questions and challenges. How should existing privacy and content-moderation frameworks, such as the Family Educational Rights and Privacy Act (FERPA), be adapted for AI, and which new policies or frameworks might be necessary to address unique challenges posed by AI?
Transparency and disclosure: What degree of transparency and disclosure should be required for AI models, particularly regarding the data they have been trained on? How can we develop comprehensive disclosure policies to ensure that users are aware when they are interacting with an AI service?
How do I get it to work? Generative AI Example Prompts
Unlike traditional search engines, which use keyword indexing to retrieve existing information from a vast collection of websites, generative AI synthesizes that information to create content based on prompts inputted by human users. Because generative AI is still new to the public, writing effective prompts for tools like ChatGPT may require trial and error. Here are some ideas for writing prompts for a variety of scenarios using generative AI tools:
You are the StudyBuddy, an adaptive tutor. Your task is to provide a lesson on the basics of a subject followed by a quiz that is either multiple choice or a short answer. After I respond to the quiz, please grade my answer. Explain the correct answer. If I get it right, move on to the next lesson. If I get it wrong, explain the concept again using simpler language. To personalize the learning experience for me, please ask what my interests are. Use that information to make relevant examples throughout.
Mr. Ranedeer: Your Personalized AI Tutor
Coding and prompt engineering. Can configure for depth (Elementary – Postdoc), Learning Styles (Visual, Verbal, Active, Intuitive, Reflective, Global), Tone Styles (Encouraging, Neutral, Informative, Friendly, Humorous), Reasoning Frameworks (Deductive, Inductive, Abductive, Analogous, Causal). Template.
You are a tutor that always responds in the Socratic style. You *never* give the student the answer but always try to ask just the right question to help them learn to think for themselves. You should always tune your question to the interest and knowledge of the student, breaking down the problem into simpler parts until it’s at just the right level for them.
I want you to act as an AI writing tutor. I will provide you with a student who needs help improving their writing, and your task is to use artificial intelligence tools, such as natural language processing, to give the student feedback on how they can improve their composition. You should also use your rhetorical knowledge and experience about effective writing techniques in order to suggest ways that the student can better express their thoughts and ideas in written form.
You are a quiz creator of highly diagnostic quizzes. You will make good low-stakes tests and diagnostics. You will then ask me two questions. First, (1) What, specifically, should the quiz test? Second, (2) For which audience is the quiz? Once you have my answers, you will construct several multiple-choice questions to quiz the audience on that topic. The questions should be highly relevant and go beyond just facts. Multiple choice questions should include plausible, competitive alternate responses and should not include an “all of the above” option. At the end of the quiz, you will provide an answer key and explain the right answer.
I would like you to act as an example generator for students. When confronted with new and complex concepts, adding many and varied examples helps students better understand those concepts. I would like you to ask what concept I would like examples of and what level of students I am teaching. You will look up the concept and then provide me with four different and varied accurate examples of the concept in action.
You will write a Harvard Business School case on the topic of Google managing AI, when subject to the Innovator’s Dilemma. Chain of thought: Step 1. Consider how these concepts relate to Google. Step 2: Write a case that revolves around a dilemma at Google about releasing a generative AI system that could compete with search.
What additional questions would a person seeking mastery of this topic ask?
Read a WWC practice guide. Create a series of lessons over five days that are based on Recommendation 6. Create a 45-minute lesson plan for Day 4.
The following is a draft letter to parents from a superintendent. Step 1: Rewrite it to make it easier to understand and more persuasive about the value of assessments. Step 2. Translate it into Spanish.
Write me a letter requesting that the school district add a 1:1 classroom aide to my 13-year-old son’s IEP. Base it on Virginia special education law and the least restrictive environment for a child with diagnoses of a Traumatic Brain Injury, PTSD, ADHD, and significant intellectual delay.
The post AI in Education appeared first on American Enterprise Institute - AEI.
The appellate court will consider tossing out a preliminary injunction that US District Judge Terry A. Doughty issued July 4 in favor of plaintiffs Missouri, Louisiana, and five individuals who claim to “have experienced extensive government-induced censorship” on social media platforms. The injunction bars defendants such as White House Press Secretary Karine Jean-Pierre and Health and Human Services Secretary Xavier Becerra from, among other things,
meeting with social-media companies for the purpose of urging, encouraging, pressuring, or inducing in any manner the removal, deletion, suppression, or reduction of content containing protected free speech posted on social-media platforms.
The 5th Circuit temporarily stayed the order in mid-July until it could examine the merits more closely.
As I explained earlier, Missouri v. Biden is politically polarizing, ensnaring the First Amendment in a battle between conservatives and liberals. I contended that whether “one interprets Doughty’s order” as a victory over government censorship or a defeat in the fight against dangerous falsities “almost certainly is influenced by the political-cultural lens one filters it through.”
This post avoids the political fray. It explains the muddled First Amendment doctrine regarding jawboning. Understanding jawboning is vital because, when lawfully and successfully done with business entities, it allows the government to informally implement its policy objectives without needing to clear the high hurdles of the legislative process. In short, it’s sometimes easier for the government to tilt corporate decisions in its desired direction through doses of communicative pressure than via the rocky road leading to legislative fiat. The danger for businesses, of course, is that this communication with government officials occurs outside the confines of the judiciary and legal system, where substantive and procedural guardrails keep the government in check when First Amendment interests are threatened.
Perhaps the simplest way to understand when jawboning constitutes unlawful government censorship is through dichotomies. When speaking with speech intermediaries such as social media platforms about possibly removing—more bluntly, censoring—content others posted, government officials may (1) engage in persuasion but not intimidation, (2) try to convince platforms to remove speech but not coerce them to do so, (3) criticize platforms’ current actions but not threaten adverse reprisals if they continue, (4) request or urge removal but not demand or command it, and (5) advise but not require.
These somewhat slippery semantic dichotomies are derived from four rulings: (1) The US Supreme Court’s 1963 decision in Bantam Books v. Sullivan, (2) the US Court of Appeals for the 7th Circuit’s 2015 ruling in Backpage.com v. Dart, (3) the 2nd Circuit’s 2022 decision in National Rifle Association v. Vullo, and (4) the 9th Circuit’s 2023 ruling in Kennedy v. Warren.
Bantam Books concluded that “informal censorship” violates the First Amendment when compliance with governmental directives is “not voluntary” and “public officers” make “thinly veiled threats to institute criminal proceedings” against speech intermediaries. In Bantam Books, the threatened speech intermediary was a wholesale book-and-magazine distributor, Max Silverstein & Son. The targeted speech was “objectionable” publications produced by Bantam Books and Dell Publishing that Max Silverstein & Son distributed. The Supreme Court ruled that notices the government had sent to Silverstein & Son stating that “cooperative action will eliminate the necessity of our recommending prosecution to the Attorney General’s department” were illicit censorship.
The courts in Vullo and Kennedy identified four non-exhaustive factors to help decide whether a government official’s speech constitutes permissible persuasion or unlawful coercion: (1) The words and their tone, (2) the official’s regulatory authority over the message recipient, (3) the recipient’s understanding of the message, and (4) whether the message references “adverse consequences that will follow if the recipient does not accede to the request.”
The 9th Circuit applied these factors in Kennedy. The ruling is interesting because one judge (a Donald Trump nominee) disagreed with two other judges (Barack Obama nominees), who concluded that a letter Sen. Elizabeth Warren (D-MA) sent to Amazon CEO Andy Jassy about the availability of The Truth About COVID-19: Exposing the Great Reset, Lockdowns, Vaccine Passports, and the New Normal (Florida Health Publishing, 2021)—for which plaintiff Robert F. Kennedy, Jr. penned the foreword—did not raise “a serious question” about the letter’s lawfulness. Untangling the tenuous difference between convincing and coercing now rests with a three-judge panel of the 5th Circuit in Missouri v. Biden.
The post Understanding the Muddled Law of Jawboning in Missouri v. Biden appeared first on American Enterprise Institute - AEI.
In Part I of this two-part episode, Shane and Brent unpack recent advancements in LLMs, what these products are good at, and what students should be thinking about in this new automation context.
The post Uncertainty & Technology: The Adaptability Imperative of Automation (LIVE with Brent Orrell—Part I) appeared first on American Enterprise Institute - AEI.
Potentially anything connected to the internet is vulnerable to cybercrime. Securing everyday devices such as refrigerators, microwave ovens, television sets, climate control systems, and fitness trackers is becoming more vital for network security since they are increasingly connected to the internet. For example, hackers could exploit vulnerabilities in a connected refrigerator to gain access to the home network and then use that foothold to steal personal data, conscript devices into a botnet, or even gain physical access to a home. By securing IoT devices, we can protect our networks from such risks.
In 2018, Sen. Edward J. Markey (D-MA) and I discussed IoT devices’ lack of basic security. His concerns centered on these devices’ vulnerability to cyberattacks and consumers’ limited awareness about the associated risks. To address that issue, Markey introduced the Cyber Shield Act, a voluntary certification program for IoT devices, with Rep. Ted Lieu (D-CA) concurrently introducing it in the House of Representatives.
Building on this legislative idea, the Biden administration transformed it into the Cyber Trust Mark, an initiative led by Deputy National Security Adviser Anne Neuberger of the White House National Security Council and Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel, with the FCC implementing the program.
In essence, the program has three main goals: (1) Encourage manufacturers to enhance product security, (2) help consumers identify more secure products, and (3) ultimately reduce the number of cyberattacks carried out through IoT devices.
To earn the Cyber Trust Mark, manufacturers must meet security criteria developed by the National Institute of Standards and Technology. These criteria cover areas such as device authentication, encryption, and software updates, and manufacturers must provide consumers with clear information about their products’ security features.
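To give a flavor of what the software-update criterion involves in practice, here is a minimal sketch of a device refusing to install a firmware image that does not match a vendor-supplied, keyed digest. Real devices typically verify asymmetric signatures, and the key and image bytes below are placeholders; this illustrates the general idea, not the NIST criteria themselves.

```python
# Minimal sketch of one practice behind the "software updates" criterion:
# refuse to install a firmware image unless it matches a vendor-provided,
# keyed digest. Real devices typically verify asymmetric signatures over
# the image; an HMAC is used here only to keep the example short. The key
# and image bytes are placeholders.
import hashlib
import hmac

VENDOR_KEY = b"placeholder-shared-secret"  # in practice: vendor key material provisioned at manufacture

def sign_firmware(image: bytes) -> str:
    """What the vendor's build pipeline would publish alongside the image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).hexdigest()

def install_if_authentic(image: bytes, published_digest: str) -> bool:
    expected = hmac.new(VENDOR_KEY, image, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, published_digest):
        print("Rejected: image does not match the signed digest.")
        return False
    print("Verified: installing update.")
    return True

genuine = b"\x7fFIRMWARE-v2.1"
tampered = b"\x7fFIRMWARE-v2.1-with-backdoor"
digest = sign_firmware(genuine)

install_if_authentic(genuine, digest)   # Verified: installing update.
install_if_authentic(tampered, digest)  # Rejected: image does not match ...
```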
The program’s potential is significant; it could empower consumers to make informed decisions about IoT devices’ security features and motivate manufacturers to improve their products’ security, helping consumers better understand cybersecurity strengths and weaknesses in the market. Its dynamic labeling feature also lets the security information attached to a product stay current over time, bolstering consumers’ peace of mind.
As Ars Technica noted, the US Cyber Trust Mark is expected to be voluntarily incorporated into IoT devices by the end of 2024, and it aims to spare consumers extensive research before purchasing products such as thermostats, sprinkler controllers, and baby monitors. The program will allow consumers to be more informed and to select secure IoT devices for their homes and offices.
While the Cyber Trust Mark program represents a significant advancement in IoT device security, it is still in its early stages. The FCC is collaborating with manufacturers and stakeholders to finalize the security criteria and labeling system; Rosenworcel aims to have the labeling system in place by 2024.
The program is a crucial step forward in combating malware and network cyberattacks that exploit IoT devices. By offering consumers detailed information about IoT device security, it can safeguard both individuals and businesses from a growing list of potential harms.
The post New Cyber Trust Mark to Enhance IoT Device Security appeared first on American Enterprise Institute - AEI.
X Corp. (X) filed a federal complaint on July 31 against a leading critic, the Center for Countering Digital Hate (CCDH), and its UK-based relative. CCDH published an article in June claiming Twitter (now X) “fails to act on 99% of hate posted by Twitter Blue subscribers.” It wasn’t the first time CCDH, a US-based non-profit organization, criticized the platform regarding hate speech since Elon Musk, “a self-professed free speech absolutist,” became owner.
The New York Times in December 2022 cited CCDH and Anti-Defamation League findings in a story alleging “a sharp increase in hate speech” has occurred on X since Musk took over. CCDH then issued a report in February estimating that 10 previously banned X accounts “renowned for publishing hateful content and dangerous conspiracies” that Musk reinstated “will generate up to $19 million a year in advertising revenue for Twitter.” CCDH followed up with another report in March linking Musk’s ownership to a rise in tweets featuring a “‘grooming’ narrative [that] demonizes the LGBTQ+ community with hateful tropes, using slurs like ‘groomer’ and ‘pedophile.’” The report estimated that five accounts alone featuring the grooming narrative would “generate up to $6.4 million per year for Twitter in ad revenues.”
This public shaming, which might deter businesses from advertising on X, fits snugly in CCDH’s stated goal of “increas[ing] the economic and reputational costs for the platforms that facilitate the spread of hate and disinformation.” Indeed, X’s complaint asserts that “in direct response to CCDH’s efforts, some companies have paused their advertising spend on X,” leading to “at least tens of millions of dollars” in damages. X contends “CCDH prepares its ‘research’ reports and articles using flawed methodologies to advance incorrect, misleading narratives” as part of “a scare campaign to drive away advertisers.”
But rather than sue for defamation over the publication of allegedly false, reputation-harming statements damaging its business, X’s legal theories pivot on how CCDH “engag[ed] in a series of unlawful acts designed to improperly gain access to protected X Corp. data” to prepare its articles and reports. To wit, X claims CCDH scraped data from X, thus violating X’s terms of service and giving rise to a breach of contract claim. X also asserts that an unknown third party improperly gave CCDH login credentials to access non-public data, thereby sparking a claim under the Computer Fraud and Abuse Act. The remaining theories include another breach of contract claim and one for intentional interference with contractual relations. In short, X’s legal theories target how CCDH gathered information, not CCDH’s speech.
Legal merits aside, the parties frame the battle to generate public—perhaps even judicial—support for their sides. Consider X’s complaint. First, by not suing for defamation, X avoids the appearance of trying to quash CCDH’s speech. Instead, it’s targeting only the unlawful gathering of information.
Second, X’s complaint openly positions the company as a free-speech defender:
X Corp. has been harmed in its mission to provide its users with a platform in which topics of paramount public concern can be discussed and debated free from the censorship efforts of activist organizations advancing narrow ideological agendas through deceitful means.
It asserts that CCDH favors “an ideological echo chamber that conforms to CCDH’s favored viewpoints.” In short, X is waging a virtuous battle for free expression.
Conversely, CCDH frames X’s lawsuit as a rich, powerful corporate leader’s attempt to squelch its non-profit, do-gooder, hate-speech-exposing reports. “Elon Musk’s latest legal move is straight out of the authoritarian playbook—he is now showing he will stop at nothing to silence anyone who criticizes him for his own decisions and actions,” said Imran Ahmed, CCDH’s founder and CEO. He vowed that “Musk will not bully us into silence.”
Lurking not-so-subtly beneath CCDH’s framing is the idea that X filed a strategic lawsuit against public participation (SLAPP). As the term suggests, plaintiffs (often corporations) strategically file such lawsuits to silence their critics’ speech about matters of public concern through expensive, time-consuming litigation. California, where X filed its federal lawsuit, has an anti-SLAPP statute that applies in federal court cases such as this one, involving diversity jurisdiction. It allows SLAPP targets addressing “a public issue” to move quickly to dismiss the suits if they are meritless. It thus won’t be surprising if CCDH files such a motion to strike X’s complaint as a SLAPP.
So, is X a free speech hero or a critic-crushing corporate villain? That’s what the frame-game spin is all about as the lawsuit heats up.
The post Free Speech Villain or Hero? Framing the Fight Between X Corp. and the Center for Countering Digital Hate appeared first on American Enterprise Institute - AEI.
]]>The post Thoughts on the Industrial Policy Debate appeared first on American Enterprise Institute - AEI.
]]>First, Chris Miller—moderator of the panel and author of the definitive analysis of the semiconductor industry, Chip War: The Fight for the World’s Most Critical Technology (Scribner, 2022)—queried, “Is industrial policy the right framework for understanding the discussion we’re having? I think there’s a lot of people in this room who might question that phrase as being relevant.”
That is a fair point. But given the Biden administration’s policy, the debate over semiconductors cannot be cabined within a national security context alone. Administration officials have repeatedly argued that the semiconductor policy in the CHIPS and Science Act is a model for other technologies. As US Trade Representative Katherine Tai asserted, the US must “keep replicating this [CHIPS and Science Act effort] for other industries.” Further, in a speech touted as definitive for Biden’s international economic policy, National Security Adviser Jake Sullivan posited a “new Washington consensus” that eschews “oversimplified market efficiency” theories and espouses “a modern industrial and innovation strategy.” Under the new strategy, the administration will identify “specific sectors that are foundational to economic growth” that also meet national security priorities in cases where the private sector is not “poised to make the investments needed to secure our national ambitions.”
Most, though not all, industrial policy skeptics readily concede that national security can justify exceptions. In this case, Summers and I do support CHIPS and Science Act funding for new US-based semiconductor plants, given Taiwan’s perilous national security situation and its hosting of 90 percent of advanced chip manufacturing. (For a view that does not accept the national security exception, see an analysis by leading international economist Anne O. Krueger.) However, Summers and Zoellick urged listeners to scrutinize and be skeptical of “Pentagon-style economics,” which paves the way for targeted subsidies.
Summers and Zoellick also warned of the strong connection between subsidy and protection. Zoellick noted that congressional voices are already questioning some US public subsidies, given the number of foreign firms that benefit from electric vehicle and battery subsidies in the Inflation Reduction Act. And the president continually touts his more restrictive changes to the already protectionist Buy America regulations.
So, what is industrial policy? As Greg Ip of the Wall Street Journal recently laid out, industrial policy has many definitions and versions, with politicians in particular advancing myriad candidates for public intervention and support. However, Summers advanced an approach that I find serviceable. He endorsed a broad-based “industrial strategy,” citing examples that produced economic and social gains such as the transcontinental railroad, the land-grant colleges of the 19th century, and the national highway program under President Dwight Eisenhower. And he contrasted these expansive programs with Biden’s narrower policy, which consists of a “patchwork of subsidies nationally oriented toward manufacturing.” Manufacturing-driven industrial policy, in his view, is “profoundly misguided.”
To review, the speakers and panelists at the AEI CHIPS Act event made a strong, informed national security case for federal intervention in the semiconductor industry. As for the broader debate over industrial policy largesse, the (skeptical) jury is still out.
The post Thoughts on the Industrial Policy Debate appeared first on American Enterprise Institute - AEI.
]]>The post The Problems at the FTC Go Beyond Losing Merger Battles appeared first on American Enterprise Institute - AEI.
]]>Read more on National Review: Capital Matters.
The post The Problems at the FTC Go Beyond Losing Merger Battles appeared first on American Enterprise Institute - AEI.
]]>The post The Promise and Peril of AI in the Music Industry: Highlights from My Conversation with David Hughes appeared first on American Enterprise Institute - AEI.
]]>The reality is likely somewhere in between, but to help break down how AI is shaping the future of this form of art—and business—I sat down with David Hughes, a music industry veteran and consultant with extensive experience at Sony and the Recording Industry Association of America (RIAA).
Below is an edited and abridged transcript of our discussion. You can listen to this and other episodes of Explain to Shane on AEI.org and subscribe via your preferred listening platform. If you enjoyed this episode, leave us a review, and tell your friends and colleagues to tune in.
Shane Tews: AI is all over the place at the moment. How is it affecting the music industry?
David Hughes: I joke when people ask me what’s important in the music industry right now. I say only three things: AI, AI, and AI.
So the first “AI” is the fact that AI is going to impact or replace pretty much every job in the music industry.
And any job that requires industry expertise or creativity is going to be impacted, and hopefully not replaced. We’ll come back to that. How can we use these tools to fuel human creativity as opposed to replace it? The second “AI” is generative AI. And that’s what we’re seeing now.
And the third one is the relationship between AI and copyright. And that is the fundamental issue that will decide the future of our industry. If it doesn’t work out for the industry, it will be worse than Napster and peer-to-peer.
Let’s talk about copyright since you say that’s priority number one. Do you see any dangers here in the mimicry that’s possible?
It’s dangerous because we’ve seen it in the visual art space, where AI is trained on a specific artist’s style. And then they said, “Well, give me a picture of Iron Man in the style of Shane.” And bang, it pops up. And instead of me hiring you and paying you $5,000 or $10,000 to draw that picture, suddenly I have it. And anybody who’s familiar with your work would look at it and say, “Oh, yeah, that’s a Shane Tews.” But it’s not. And we’re going to see the same thing in music. And that is a real threat to the artist. It’s a threat to their artistic integrity because we don’t know what words are going to be put in their mouth. It’s a threat to their livelihood if it replaces them. And there are other issues. There are moral rights issues, especially in Europe, where they believe in moral rights, and privacy issues, perhaps. So, this is a tricky area. And I think this is an area that is going to require legislation.
So you’re worried about the copyright morass of mimicry, but just how good is the mimicry out there now, and what level of quality does it need to reach to be dangerous?
We can easily get to the point where the AI can create music that is of roughly equal quality to what the humans are creating, I’m afraid. And it’ll be able to do it at such a volume and speed that it will force human creators to change the way they do things.
The scary part of that is that just “good enough” is going to be a big problem for the music industry. We already have the good enough problem in that the economics of music streaming are broken.
You just put something that’s good enough between Bruno Mars, Lady Gaga, and Beyoncé. And as long as people listen to it for at least 31 seconds, somebody gets paid. And it’s not necessarily the music that people came to hear, but if it’s good enough, it doesn’t matter where it came from. So one of the threats of AI is creating music that’s just good enough that people don’t skip it. And then they’ll start sort of feeding that in between the good stuff.
Walk us through the history of this kind of AI development. Where did it start viably breaking through?
We go back eight years, to Sony Computer Science Lab (Sony CSL) in Paris, which created a song, “Daddy’s Car.” They did this by training on three dozen early Beatles songs. The result was a composition that was then, I think, recorded and performed by humans. But the composition itself was supposed to sound like a Beatles song. And it sounded as if a bunch of junior high school kids had gone into a garage with their instruments and said, “Let’s write a song that sounds like the Beatles.” You could tell that they were trying to sound like the Beatles. It was quite awful. But again, that was seven, eight years ago. If that same experiment were done now with cutting-edge technology, I think you’d get a very different result.
Now, it would be easy enough to do that same sound recording with the voices of John or Paul, presumably.
How should music industry executives be thinking about AI music?
One of my good friends from the old days at Sony Music, Matt Carpenter, was my partner in building the digital distribution system for Sony Music. He had come from working for Michael Jackson, and he’s probably the brightest computer-savvy audio engineer in the industry. He said, “I’ve been trying to tell all the executives, you know, do you want to be driving the bus? Or do you want to be run over by the bus?” Now, I think he’s a little optimistic about the driving part. So I’ve been saying to people, “Do you want to be on the bus or under the bus?” And the way to be on the bus is probably to threaten some litigation strategically, offer licenses, and then embrace the technology through strategic partnerships and start to figure out whether you can use it.
Is the room for innovation confined to the music itself, or are there other ways AI could shake up the industry?
There are a lot of opportunities to use AI outside the generative space in the music industry, actually, for things like cleaning up metadata and fixing royalty accounting systems. The royalty accounting systems at major record companies, for example, were originally designed just to ship physical units of CDs. They weren’t really designed to deal with trillions of streams. And the amount of data, and the complexity of that data, is such that the only way we’re ever going to really clean it up is by using AI.
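For readers curious what “cleaning up metadata” can look like in practice, here is a minimal, purely illustrative sketch of grouping near-duplicate catalog entries with fuzzy string matching from Python’s standard library. This is not anything Hughes or the labels describe: real catalogs hold hundreds of millions of records and call for far more sophisticated, often machine-learning-based, entity-resolution pipelines, and every record, name, and threshold below is invented for illustration.

```python
from difflib import SequenceMatcher

# Toy catalog records with the kind of near-duplicate metadata that
# accumulates across decades of releases (all entries are invented).
tracks = [
    {"id": 1, "artist": "The Beatles", "title": "Let It Be (Remastered 2009)"},
    {"id": 2, "artist": "Beatles, The", "title": "Let It Be"},
    {"id": 3, "artist": "The Beatles", "title": "Let It Be - Remastered"},
    {"id": 4, "artist": "The Rolling Stones", "title": "Angie"},
]

def normalize(record):
    """Crude normalization: lowercase and strip common edition tags."""
    title = record["title"].lower()
    for noise in ("(remastered 2009)", "- remastered", "remastered"):
        title = title.replace(noise, "")
    artist = record["artist"].lower().replace(", the", "").replace("the ", "")
    return f"{artist.strip()} | {title.strip()}"

def similar(a, b, threshold=0.85):
    """Fuzzy similarity test using the standard library's SequenceMatcher."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

# Group records whose normalized keys are near-duplicates.
groups = []
for rec in tracks:
    key = normalize(rec)
    for group in groups:
        if similar(key, group["key"]):
            group["ids"].append(rec["id"])
            break
    else:
        groups.append({"key": key, "ids": [rec["id"]]})

for group in groups:
    print(group["key"], "->", group["ids"])
# The three "Let It Be" variants collapse into one group; "Angie" stays separate.
```

At industrial scale, the same idea becomes entity resolution across track, recording, and rights-holder identifiers, which is where machine learning plausibly earns its keep.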
The post The Promise and Peril of AI in the Music Industry: Highlights from My Conversation with David Hughes appeared first on American Enterprise Institute - AEI.
]]>