Jennifer Huddleston and Jack Solowey
On May 15, a bipartisan group of senators led by Majority Leader Chuck Schumer (D‑NY) released a long‐awaited roadmap for artificial intelligence (AI) policy (the Roadmap). The 31‐page report follows several months of closed‐door insight forums on various AI topics hosted by the Senate.
Both AI technology and policy have advanced rapidly since the forums began. On the tech front, the week of the Roadmap’s release saw OpenAI and Google each demo their latest AI products, which featured advanced capabilities like interpreting the world for the blind and detecting fraudulent calls on a user’s device.
On the policy front, the Roadmap follows notable AI proposals and actions at the state, federal, and international levels. Specifically, it comes after the California legislature introduced SB 1047, AI legislation with potentially national impact; the Biden administration issued a significant AI Executive Order; and the European Union passed its AI Act. (That's not to mention dozens of bills introduced in Congress, and hundreds more at the state level, with proposals ranging from limits on AI applications to new regulatory agencies and AI licensing requirements.)
Against this backdrop, what does the Roadmap herald for the future of US AI policy? In short, by looking before leaping, the Roadmap compares favorably to the do‐something! approaches of Sacramento, the Biden White House, and Brussels. In its better moments, the Senate report recognizes that many of the risks related to AI already may be addressed by preexisting laws or policy debates. Nonetheless, while the report asks many of the right questions—e.g., where is AI already regulated and how do we preserve innovation?—it leaves plenty of room for counterproductive answers, including disruptive regulation for important industries, politically motivated federal spending, and unhelpful new authorities.
Roadmap Highlights: Some Steps in the Right Direction
Many debates around AI are neither new nor really about AI. The Roadmap recognizes this to a large extent by not presuming that fundamentally new policies are always necessary when it comes to AI.
Helpfully, the Roadmap supports investigating where existing federal laws affect AI innovation. This is a positive step in that it recognizes AI is already regulated in a variety of ways and provides an opportunity to understand how existing law stymies AI research and development.
Between this understanding and the report’s stated goal of furthering US innovation in AI, one can detect an implicit openness to paring back barriers to technological progress. Still, it would be far better if the Roadmap made this explicit. The fundamental AI policy two‐step should be determining (1) when existing laws already are sufficient and (2) when deregulatory action is needed.
The financial services context offers an example of where the Roadmap makes progress on the first step while stumbling before the second. Specifically, the Roadmap supports surveying the current state of AI regulation in finance (through reference to an earlier Senate bill sponsored by report co‐author Senator Mike Rounds (R‑SD)). In addition to analyzing relevant laws already on the books, the proposed survey has the benefit of identifying when AI implicates financial regulators' redundant authorities, seemingly in the hope of clarifying matters. Nonetheless, the proposed survey's shortcoming lies in calling for a gap analysis to identify where new authorities are needed while overlooking a deregulatory analysis to spot and remove outmoded regulations. Highlighting and excising counterproductive rules should be an essential aspect of any AI policy roadmap.
Moreover, many AI policy debates ultimately stem from ongoing tech policy debates that remain unresolved. In this regard, the AI Roadmap recognizes that many AI policy issues inevitably point back to longstanding questions around a federal consumer data privacy law. For example, the proposed American Privacy Rights Act’s provisions on algorithm design likely would have a clear AI nexus. Ideally, such legislation would provide clarity to both innovators and consumers regarding data use in AI by, for example, helping to rationalize the ever‐growing patchwork of state privacy laws.
Importantly, any such consumer privacy legislation should adopt a far more flexible approach than that of the EU’s General Data Protection Regulation (GDPR), as the GDPR’s static and prescriptive data policies threaten to hold back AI’s evolution.
Hazards of Roadmap Proposals: Government Spending, Speech Concerns, and Roadblocks to Innovation
While the Roadmap contains some potential positive signals regarding the Senate’s approach to AI, there are areas where the Roadmap risks laying the groundwork for policies that would have negative long‐term consequences for both consumers and innovators.
Federal Spending. The Roadmap considers, and in many instances supports, making significant government investments in AI. Modernizing the government's own procurement of technology is not inherently inappropriate. (Indeed, there likely will be plenty of areas where AI can help to achieve efficiency gains over current bureaucratic processes.) However, significant public expenditure on emerging technology could come with unintended costs beyond simply those to taxpayers. Were the government to play the role of a general capital allocator for AI technology, it would be in a position to pick a disproportionate share of winners and losers according to political considerations (such as special interest and constituent preferences), as opposed to dynamic market signals about the most promising AI research and development paths. In addition, private‐sector investment has yielded far more consumer applications of AI than we likely would have seen from tools simply piggybacking off of public‐sector AI use cases.
Free Speech. The Roadmap’s implications for civil liberties are mixed. On the one hand, it proposes the positive step of seeking to head off the use of AI to create a People’s Republic of China‐style social credit system. On the other hand, other sections, particularly regarding the use of AI in the election context, raise free‐speech concerns.
Political speech is core protected speech, and policymakers must ensure that legislation does not constitute a form of government censorship by limiting the types of speech available in the marketplace of ideas, including through campaign advertising.
As Jennifer Huddleston discussed in her written statement to the November 2023 AI Insight Forum in which she participated, AI can be applied in a variety of ways to election‐related speech without having anything to do with what we typically would consider deception or manipulation. It's important to ensure, for example, that content benefiting from basic AI applications (like summary and translation tools) does not get swept into warning label regimes for AI‐generated content. In addition, norms on how to handle the credibility and reliability of claims evolve over time. In the age of the internet, many online platforms now provide further context around election‐related speech and manipulated media. The precise private rules vary depending on the tools available on a platform, as well as its audience or users. This flexibility allows different platforms to come to different decisions about the same piece of material and to adjust to the specific needs of their users and the societal expectations most relevant to their products.
Financial AI and the New Roadmap
Another typically fraught AI policy area into which the Roadmap wades regards who (or what) should be liable for any AI‐caused harm: the AI developer, the provider, the user, or someone else. The Roadmap leaves this particular question open, with different congressional committees perhaps to provide different answers. Yet the possibility of imposing strict liability on AI developers and providers is a cause for serious concern. The threat of such legislation is not hypothetical. The Financial Artificial Intelligence Risk Reduction (FAIRR) Act, for example, would universally deem AI providers liable when their tools are used to violate securities laws unless the providers took reasonable preventive measures. Such a provider liability regime would run roughshod over time‐tested economic and legal principles for efficiently and fairly assessing liability.
On the economic front, it would impose compliance costs on parties ill‐positioned to bear them; expose AI providers to open‐ended liability risk; and incentivize plaintiffs to go after the parties with the biggest names and deepest pockets, not the ones most to blame. On the legal front, it would consign to the dustbin the highly nuanced exceptions, mitigating circumstances, and state‐of‐mind requirements of product liability, agency law, and securities regulation. A sound AI policy roadmap should shut the door on such ham‐fisted proposals, not give them an opening.
In addition, the Roadmap's discussion of AI "black boxes" and transparency addresses a central policy question for AI's use in financial services. It also provides an example of how over‐indexing on previous regulatory frameworks can lead to counterproductive policy outcomes. As the Roadmap picks up on, existing fair lending laws, such as the Equal Credit Opportunity Act, require lenders to explain with specificity their reasons for adverse credit decisions (e.g., denying a loan). It's important to distinguish such a policy's goal (prohibiting invidious discrimination against loan applicants based on their membership in a protected class) from its mechanism (required explanations). That's because there may be circumstances where the mechanism undermines the goal.
For instance, advanced machine learning techniques analyzing alternative datasets have the potential to expand credit to the previously underserved. And where such techniques are less explainable than manual processes, prohibiting their use may inadvertently serve to restrict credit to the individuals the fair lending laws seek to protect. Therefore, instead of unthinkingly applying existing prescriptions to new technologies, AI policy should take an outcome‐oriented approach.
When it comes to the use of AI in lending, such an approach could include providing a safe harbor from certain explainability requirements where AI models demonstrably (e.g., through pre‐launch testing and/or post‐launch observation) tend to increase credit access for the historically credit deprived. To be fair, the Roadmap recognizes that the need for future transparency requirements may remain an open question in some contexts. Yet when transformative new technologies are on the table, policymakers must ask that same question (i.e., is this requirement necessary?) retrospectively as well as prospectively.
Conclusion
At its best, the Roadmap appropriately asks what longstanding policy concerns (such as data privacy generally) underlie an AI policy question. Similarly, the report also seeks to examine the role of existing laws rather than presume AI immediately demands expansive new regulation. Nonetheless, one should not assume that a more regulatory approach—which ultimately could stifle the beneficial applications of AI along with any harmful ones—is off the table in the US.
Ideally, when considering the potential implementation of this Roadmap—or any other AI regulatory framework—policymakers should seek to support the light‐touch approach that has long led to Americans’ success in a wide range of technological fields.