Today, beneath the headline-grabbing reports of geopolitical and geoeconomic volatility, a quiet but consequential transformation is unfolding in the public sector: a shift marked by the change in US federal AI policy under Executive Order 14179 and the subsequent Office of Management and Budget memoranda (M-25-21 and M-25-22). This policy decisively pivots from internal, government-driven AI innovation to significant reliance on commercially developed AI, accelerating a subtle yet critical phenomenon: the “algorithmic privatization” of government.

Historically, privatization meant transferring tasks and personnel from public to private hands. Now, as government services and functions are increasingly delegated to non-human agents (commercially maintained and operated algorithms, large language models, and, soon, AI agents and agentic systems), government leaders will have to adapt. The best practices that come from a decade’s worth of research on governing privatization, where public services are largely delivered through private-sector contractors, rest on one fundamental assumption: All the actors involved are human. Today, this assumption no longer holds. The new direction of the US federal government raises a myriad of questions and implications for which we don’t currently have answers. For example:

  • Who does a commercially provided AI agent optimize for in a principal-agent relationship? The contracting agency or the commercial AI supplier? Or does it optimize for its own evolving model?
  • Can you have a network of AI agents from different AI suppliers in the same service area? Who’s responsible for the governance of the AI: the AI supplier or the contracting government agency?
  • What happens when we need to rebid the AI agent supply relationship? Can an AI agent transfer its context and memory to the new incoming supplier? Or do we risk losing knowledge or creating new monopolies and rent extraction, driving up the very costs we saved through AI-enabled reductions in force? (A sketch of what such a transfer might require follows this list.)
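No standard yet exists for moving an agent’s accumulated context from one supplier to another, which is why the rebid question is so thorny. As a purely illustrative sketch, assume a contract required the incumbent to deliver a supplier-neutral “handover package” at transition; everything below (the class, its fields, and the storage URIs) is a hypothetical Python illustration, not an existing format.

```python
# Hypothetical, supplier-neutral "handover package" an agency contract could
# require from an incumbent AI supplier at rebid time. Illustrative only:
# no such standard exists today, and every name here is an assumption.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import json

@dataclass
class AgentHandoverPackage:
    service_area: str              # e.g., "benefits-eligibility-triage"
    interaction_logs_uri: str      # government-owned archive of agent interactions
    knowledge_base_uri: str        # retrieval corpus the agent consulted
    fine_tuning_data_uri: str      # any government data used to adapt the model
    decision_audit_trail_uri: str  # records needed to explain past decisions
    open_incidents: list[str] = field(default_factory=list)
    exported_at: str = ""

    def export(self) -> str:
        """Serialize the package so an incoming supplier can rebuild context."""
        self.exported_at = datetime.now(timezone.utc).isoformat()
        return json.dumps(self.__dict__, indent=2)

# Example: what the outgoing supplier would hand to the contracting agency.
package = AgentHandoverPackage(
    service_area="benefits-eligibility-triage",
    interaction_logs_uri="s3://agency-owned-bucket/logs/",
    knowledge_base_uri="s3://agency-owned-bucket/corpus/",
    fine_tuning_data_uri="s3://agency-owned-bucket/tuning/",
    decision_audit_trail_uri="s3://agency-owned-bucket/audit/",
)
print(package.export())
```

The schema itself matters less than the contractual principle it encodes: everything an incoming supplier needs should live in government-owned storage rather than inside the incumbent’s model.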

The Stakes Are High For AI-Driven Government Services

Technology leaders — both within government agencies and commercial suppliers — must grasp these stakes. Commercial AI-based offerings built on technologies less than two years old promise efficiency and innovation, but they also carry substantial risks of unintended consequences, including maladministration.

Consider the well-documented examples of predictive AI solutions gone wrong in the last five years alone. These incidents highlight foreseeable outcomes when oversight lags technological deployment, and rapid AI adoption heightens the risk of errors, misuse, and exploitation.

Government Tech Leaders Must Closely Manage Third-Party AI Risk

For government technology leaders, the imperative is clear. Manage these acquisitions for what they are: third-party outsourcing arrangements that must be risk-managed, regularly rebid, and, when warranted, replaced. As you deliver on these new policy expectations, you must:

  • Prioritize transparency and accountability in AI procurement.
  • Insist on visibility into algorithmic processes, rejecting opaque “black box” solutions in favor of explainable ones.
  • Maintain robust internal expertise to oversee and regulate these commercial algorithms effectively.
  • Require all data captured by any AI solution to remain the property of the government.
  • Ensure that a mechanism exists to train or transfer data to any subsequent solution provider contracted to replace an incumbent AI solution.
  • Adopt an “align by design” approach to ensure that your AI systems meet their intended objectives while adhering to your values and policies (a minimal sketch of this pattern follows this list).
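What “align by design” means in practice will vary by agency, but one concrete interpretation is to encode agency policies as machine-checkable rules that gate every proposed agent action and leave an audit trail. The Python sketch below illustrates that pattern under stated assumptions; the policies, fields, and names are hypothetical, not drawn from any existing framework.

```python
# Minimal "align by design" sketch: every proposed agent action is checked
# against declared agency policies before execution, and every decision is
# logged for audit. The policies and action fields are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str
    affects_benefits: bool       # does this change a citizen's benefits?
    human_review_attached: bool  # has a human signed off?
    data_leaves_gov_custody: bool

# Each policy is a named predicate that must hold for the action to proceed.
POLICIES: dict[str, Callable[[ProposedAction], bool]] = {
    "adverse-actions-require-human-review":
        lambda a: not a.affects_benefits or a.human_review_attached,
    "data-stays-in-government-custody":
        lambda a: not a.data_leaves_gov_custody,
}

def approve(action: ProposedAction, audit_log: list[str]) -> bool:
    """Return True only if every policy holds; log the outcome either way."""
    violations = [name for name, rule in POLICIES.items() if not rule(action)]
    audit_log.append(f"{action.description}: {'OK' if not violations else violations}")
    return not violations

log: list[str] = []
action = ProposedAction("reduce benefit payment", affects_benefits=True,
                        human_review_attached=False, data_leaves_gov_custody=False)
assert approve(action, log) is False  # blocked: no human review attached
```

The design choice worth noting is that the policy set lives with the agency, not the supplier, so the rules survive a change of vendor.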

Private-Sector Tech Leaders Must Embrace Responsible AI

For suppliers, success demands ethical responsibility beyond technical capability. Begin by accepting that your AI-enabled privatization isn’t a permanent grant of fief or title over public service delivery, so you must:

  • Embrace accountability, aligning AI solutions with public values and governance standards.
  • Proactively address transparency concerns with open, auditable designs.
  • Collaborate closely with agencies to build trust, ensuring meaningful oversight.
  • Help the industry drive toward interoperability standards to maintain competition and innovation (see the sketch after this list).
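Interoperability standards for agentic government services do not yet exist, so any example here is necessarily speculative. The sketch below imagines a minimal supplier-neutral interface that such a standard could define, letting agencies code against a shared contract rather than a single supplier’s SDK; every name in it is an assumption.

```python
# Illustrative sketch of a shared, supplier-neutral agent interface. If every
# supplier implemented a contract like this, an agency could swap or mix
# suppliers within one service area. The interface itself is hypothetical.
from abc import ABC, abstractmethod

class GovernedAgent(ABC):
    @abstractmethod
    def handle(self, request: str) -> str:
        """Process a citizen request and return a response."""

    @abstractmethod
    def explain(self, request_id: str) -> str:
        """Return a human-readable explanation of a past decision."""

    @abstractmethod
    def export_state(self) -> bytes:
        """Emit portable state for transfer to a successor supplier."""

class SupplierAAgent(GovernedAgent):
    def handle(self, request: str) -> str:
        return f"[supplier A] processed: {request}"
    def explain(self, request_id: str) -> str:
        return f"[supplier A] rationale for {request_id}"
    def export_state(self) -> bytes:
        return b"{}"

# The agency codes against GovernedAgent, never against a supplier's SDK.
agent: GovernedAgent = SupplierAAgent()
print(agent.handle("renew license"))
```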

Only responsible leadership on both sides, not merely responsible AI, can mitigate these risks, ensuring that AI genuinely enhances public governance rather than hollowing it out.

The cost of failure at this juncture won’t be borne by the technology titans such as AWS, Google, Meta, Microsoft, or xAI but inevitably by individual taxpayers: the very people the government is intended to serve.

I would like to thank Brandon Purcell and Fred Giron for their help in challenging my thinking and hardening my arguments in what is a difficult time and space in which to address these critical partisan issues.