The Problem With Public-Private Partnerships in AI

“The consolidation of the AI industry threatens U.S. technological competitiveness,” argued a 2021 report by the National Security Commission on Artificial Intelligence (NSCAI). The independent commission was chaired by ex-Google CEO Eric Schmidt and populated by executives from Amazon, Microsoft, Oracle, and other large tech companies, as well as key members of the national security establishment. The commission pegged market concentration in the technology sector and the capture of resources for AI development as key factors that led to the hollowing out of public-minded innovation in AI, particularly from universities. It also identified a connection between a lack of diversity in who gets to build AI and what kinds of AI get built as a result.

Last month, after three years of study by the National AI Advisory Committee and National Artificial Intelligence Research Resource (NAIRR) Task Force, the National Science Foundation announced details about the launch of its proposed solution: a pilot version of the NAIRR, established through the recent White House executive order on AI, with a $140 million budget. The NAIRR pilot is a small-scale implementation of a broader $2.6 billion proposal, articulated in the CREATE AI Act, that would need congressional approval and appropriations to be brought into being.

NAIRR was born out of a bold vision, first set out in the NSCAI’s final report—the conviction that there are limits to innovation driven by profit and that public investment (and direction) is crucial to escape that paradigm. To that end, its banner of late has been “democratizing” AI development by providing resources for research and development outside the orbit of tech company interests.

Yet despite these goals, and like other similarly conceptualized initiatives launched by the U.K., EU, and elsewhere, NAIRR remains at risk of entrenching the interests it claims to contest.

In the current paradigm for large-scale AI research, all roads lead to a small number of the largest technology companies, ensuring that the pursuit of public innovation will, one way or another, manifest itself in a partnership with the private sector. The problem is not only the flow of taxpayer dollars back to a concentrated market—it’s that commercial AI actors are incentivized in narrow ways that often conflict with broader societal needs. The profusion of these public-private partnerships necessarily impinges on the horizon of possibility as we imagine future trajectories for technological development. NAIRR is predicated on the idea that fostering more innovation in AI is necessarily good. But what if AI-driven innovation isn’t actually serving the needs of the public?

While the nature of company involvement has shape-shifted as NAIRR has evolved, the largest tech companies have remained central beneficiaries of it. The original proposal was structured as a licensing regime, built around cloud contracts with a rotating set of licensed providers on six-year terms, an arrangement that would have funneled the bulk of NAIRR funds directly back to cloud companies (a market dominated by Microsoft, Google, and Amazon). The current pilot invites AI companies and other nongovernmental partners to offer up donated resources on a public-private marketplace hosted by the National Science Foundation, alongside access to existing government supercomputers, datasets, and research resources. From Amazon Web Services (AWS) came support in the form of cloud credits for at least 20 research projects. From Nvidia, $24 million in computing on its DGX Cloud offering (which at least in part recycles data center access from Google Cloud, AWS, and Azure). From OpenAI, up to $1 million in credits for model access for research “related to AI safety, evaluations, and societal impacts.”

In many ways, this is a playbook familiar from decades of U.S. industrial policy efforts, including on tech. The primary framing for public investment in basic research has historically been to ensure certain industries’ responsiveness to “strategic national interests,” rather than to contest those industries or offer alternatives. It shouldn’t be lost on us that the idea for the NAIRR originally emanated from within the national security establishment, which has historically promoted a synergistic relationship with corporate monopolies. (Bold posturing against market consolidation aside, this may be best exemplified by the membership of the NSCAI, which was chaired by Schmidt.)

The scale of this project draws attention to the complete lack of a level playing field when it comes to AI. One need only look at the bottom line to get a clear picture of the current state of affairs: The pilot was allocated $140 million through appropriation of current funds, a shoestring budget in comparison to the original NAIRR proposal, which called for $2.6 billion to be spent over a six-year period. But both of these figures are tiny compared to the infrastructure investments of Big Tech firms: Last year, AWS made a commitment to spend $35 billion on data centers in the state of Virginia alone.

The disjunction is astonishing and extends across global public investment initiatives in AI: The U.K. (900 million pounds), EU (3 billion euros), and the state of New York ($400 million), among others, have made similar announcements, none of them at any kind of scale comparable to the investments coming from industry. This economic heft matters for much more than just computing: Access to high-quality data is one of the emerging fronts in the AI arms race, and these firms have outsized ability not only to collect their own but to muscle out competitors through deals for intellectual property, such as the one OpenAI recently struck with Axel Springer.

Unlike its predecessors, which viewed natural monopolies as extensions of state power, the Biden administration has consistently sought to tackle the consolidated economic power of large firms across policy domains from trade to technology. In fact, NAIRR comes at a time when it risks undercutting significant progress in advancing policy elsewhere: From the muscular orientation of U.S. enforcement agencies against Big Tech to the administration’s integration of competition concerns in procurement guidance to its clear assertion that “the answer to the rising power of foreign monopolies and cartels is not the tolerance of domestic monopolization,” there has been a bold, and historically significant, effort to confront concentration of power in the tech sector. This is also threaded through the executive order on AI, through measures addressing the need to shore up worker protections, bolster civil rights, and institute accountability mechanisms that assure the safety and efficacy of AI products. For an initiative like the NAIRR to cohere with these other policy positions, it, too, must contend with the effects of market consolidation.

This is where the limitations of public investment as a policy instrument come into clearer focus: What of the other consistent framing accompanying industrial policy on AI—societal good? What constitutes AI for social good in the first place? This is where it becomes clear that there are real conflicts of interest between public investment in AI and the democratization it purports to advance. While on certain issues, like privacy and security, there is arguably clear alignment between commercial interests and the public, other issues may present tensions, or even overt conflict, with the interests of dominant AI firms.

For example, the AI company Anthropic made a commitment to provide API access for 10 researchers to study AI and the environment. But there’s an important distinction to be drawn between AI research on the environment—such as the use of AI modeling in carbon accounting to enable carbon offsets—and the clearly documented evidence of AI itself as an environmental harm. Rather than fund research that meaningfully reduces emissions, such as building smaller-scale and more efficient AI, devoting these resources to the former approach could, ironically, use public money to exacerbate a climate crisis that the administration is, in other arenas, attempting to quell. Not incidentally, researchers who foregrounded similar concerns at Google and Amazon swiftly faced retaliation.

This is only one example of the kinds of internal contradictions that can emerge from the blended incentive structures of public-private partnerships specifically within the context of AI. We can learn from past experience: From Michigan to Pennsylvania to the Netherlands, time and again AI has been used to justify austerity measures that disenfranchise the public, ramp up mechanisms of oversight and control that disproportionately affect minoritized groups, and devalue the contributions of workers in the name of underspecified “productivity gains.”

In an overwhelming focus on AI-driven harms, we’ve missed a key piece of the puzzle: demanding that firms articulate, clearly and with evidence to back it, what the benefits of AI are to the public. That there are benefits is the underlying premise of industrial policy interventions, whether that be ensuring economic competitiveness or investing in the public good. But so far, we haven’t asked enough of AI firms to show us their homework, instead permitting them to coast on shallow assertions that AI will inevitably lead us down the path of technological innovation.

Ironically, it’s the investment community that’s coming to grips with this first, questioning whether the sector can find a viable business model in an environment where credit is expensive and upfront costs are high, ratcheting up the pressure to demonstrate profitable use cases. Notably, several venture capital firms opted out of OpenAI’s recent $86 billion share sale, concerned that the company’s valuation was priced too high. The sector is floating largely on the margins of large tech firms—many of them “partners” of the NAIRR—that are willing and ready to experiment in order to obtain first-mover advantage on the next big thing. This experimentation with large amounts of capital in search of a viable business model is familiar terrain for the tech industry, which eventually settled on behavioral ad targeting to power the “free” internet, routinely at the expense of privacy and cementing Big Tech platforms as the gatekeepers to the internet. It’s hard enough to justify investment when profit alone is the measure of success, absent the hype-driven bubble we’re emerging from, and it’s far from clear that start-ups without the deep pockets or largesse of giant firms will survive.

But it’s much, much harder to articulate or measure the benefits of AI to society at large. In the past year, political representatives have largely coasted on breezy associations between AI and innovation without feeling pressed to be concrete about what those innovations are and who they’ll serve. From what we’ve seen so far, it largely seems like business as usual—the same firms that brought us the surveillance business model and toxic social media platforms are driving the trajectory for artificial intelligence. If history is any guide, it’s at moments like these, when there are a few incumbents feeling increased pressure to turn a profit, that predatory business models tend to emerge.

This isn’t a vision of the public good that justifies investment at any scale. When evaluating whether to commit taxpayer dollars, we need a better litmus test than “diversity” or “democratization” of a sector. We need to know that these investments will meaningfully benefit society at large, broadening the horizon for innovation in ways that will accrue to the many and not just the few.
