Post 7 in the Guest Blog Series Preserving Public Values in The Automated State by Albert Sanchez-Graells, Professor of Economic Law, University of Bristol Law School

The public sector is undergoing a rapidly accelerating process of digitalisation. Two key drivers of this uptake are the mainstreaming of data-driven technologies such as artificial intelligence (Yeung, 2022) and ever-increasing financial pressures pushing public authorities to chase technology-enabled efficiencies and savings; together, they amount to a third wave of digital era governance (Dunleavy and Margetts, 2023). This ‘digital transformation’ has been shaped by the dominant influence of Big Tech on the political economy of digital government (Margetts and Dunleavy, 2024), which carries its own value-based and normative implications and risks. This dynamic has also created a major challenge for the continued influence of public values on the design and operation of the public sector (see eg Ranchordás, 2022).

In this short piece, I focus on the risk of (unacknowledged, hidden) privatisation of value-driven normative and regulatory choices that follows from the structural reliance on the procurement of digital technologies from (Big) tech companies that underpins this accelerated wave of public sector digitalisation (for more detailed analysis, see Sanchez-Graells, 2024a and 2024b).

Regulating public sector digitalisation by contract

The public sector rarely develops digital technologies in-house. Rather, it acquires them from (Big) tech companies, and such acquisitions are governed by public procurement law, policy, and practice. This subjects technology procurement and deployment to regulation by contract. A jurisdiction may adopt specific legislation (such as the EU AI Act in the European Union) and supporting instruments (such as model contractual clauses, also in the EU), or it may not (as in the UK, where AI procurement is covered solely in guidance). Either way, given the immaturity and variability in the design of the technology, its underpinning data sources, and its use cases, as well as the open-endedness of the relevant values, the crucial choices in the design and deployment of specific applications and use cases are left open and subject only to requirements (or expectations) of compliance with technical standards and other industry-originated norms such as the ‘state-of-the-art’. Ultimately, setting the relevant requirements becomes a matter of negotiation between the public buyer and the tech provider. And this is bound to be a continuous and open-ended negotiation.

Let’s take the example of accuracy. It seems uncontroversial that digital technologies used by the public sector must be accurate. However, it is not possible (or wise) to legislate or set a general accuracy requirement applicable across all use cases of a given technology (eg 90%), not least because there are unavoidable trade-offs with other values (such as explainability and transparency, or fairness) and because the consequences of inaccurate outputs can vary widely. It is thus crucial that the public buyer is capable of establishing the level of accuracy required to uphold the public interest and to protect the individual rights and collective interests affected by the specific technology deployment. In doing so, the public buyer will most likely have to establish a series of mechanisms (such as metrics, methodologies, audit and oversight mechanisms, incident reporting protocols, etc), few of which will involve discrete and final decisions.

Indeed, this task is highly unlikely to be a one-shot exercise. Procuring and deploying digital technologies carries a series of risks and uncertainties (Almada, 2023), and it is likely that issues will need to be revisited to decide how to operationalise overall requirements across discrete issues (eg the periods over which accuracy is measured, whether there are limits on how many consecutive faulty outputs can be tolerated, what corrective measures apply, etc). Some issues may well lead to the same (approximate) result by different means (with, eg, different cost implications or types of risks involved) and will thus require reconsideration and choices on grounds other than the ‘core’ requirement (accuracy) to which they relate.
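To make the operationalisation problem concrete, consider what even a minimal contractual accuracy requirement would entail in practice. The sketch below is a purely hypothetical illustration, not drawn from any real procurement framework or standard: the parameters (accuracy threshold, measurement window, consecutive-failure limit) are invented stand-ins for the kinds of discrete choices the public buyer would have to make and keep revisiting.

```python
from collections import deque

# Hypothetical illustration only: the threshold, window length, and
# consecutive-failure limit are invented parameters standing in for
# choices a public buyer would need to negotiate and revisit.
class AccuracyMonitor:
    """Tracks whether a deployed system stays within contractually
    agreed accuracy bounds over a rolling measurement window."""

    def __init__(self, threshold: float, window: int,
                 max_consecutive_failures: int):
        self.threshold = threshold          # eg 0.95 for a high-stakes use case
        self.window = deque(maxlen=window)  # rolling record of output correctness
        self.max_consecutive_failures = max_consecutive_failures
        self.consecutive_failures = 0

    def record(self, output_correct: bool) -> list[str]:
        """Record one output; return any contractual breaches triggered."""
        self.window.append(output_correct)
        self.consecutive_failures = (
            0 if output_correct else self.consecutive_failures + 1
        )
        breaches = []
        if self.consecutive_failures > self.max_consecutive_failures:
            breaches.append("consecutive-failure limit exceeded")
        accuracy = sum(self.window) / len(self.window)
        # Only assess the rolling average once a full window has accrued.
        if len(self.window) == self.window.maxlen and accuracy < self.threshold:
            breaches.append(f"rolling accuracy {accuracy:.2f} below threshold")
        return breaches
```

Even this toy version makes the argument's point visible: each parameter encodes a normative choice (how much inaccuracy is tolerable, over what period, with what trigger for corrective measures), and none of those choices is settled by declaring that the system "must be accurate".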

This shows how ‘setting the regulatory needle’ will be a challenging task for public buyers, even where they can in general rely on legislation, model clauses or technical standards. More worryingly, this challenge will be compounded by the context in which this (delegated) regulatory power is exercised (for a helpful general framework, see Almada, 2024).

Risks of capture and commercial determination, and decentring of procurement

This need to exercise discretion in relation to complex technical matters, and to do so on a continuous basis, places the public buyer in a weak position to effectively uphold public values. Where the public buyer is at a disadvantage in digital skills and economic power, as is the case when it faces (Big) tech companies (and, indeed, more generally), there are clear risks of regulatory capture and commercial determination, because tech providers will be able to influence, or set, the contractual requirements, methodologies, and oversight mechanisms. They will also control key monitoring and assurance procedures, which are left to self-assessment. And the standards to which requirements refer—whether technical standards, harmonised standards (under the EU AI Act), or simply the state-of-the-art—will also be heavily industry-influenced.

Moreover, the contractual mechanisms put in place during procurement will most often be enforced not by the public buyer, but by the operational unit using the technology. This creates further scope for a conflict of interest between operational demands and upholding broader public values—eg where the technology user reduces accuracy levels or waives certain requirements in pursuit of operational goals (or key performance indicators).

An alternative approach

Given the risks that capture, commercial determination and unchecked technology use pose to the continued influence of (or respect for) public values in the design and operation of the public sector, I submit that an alternative regulatory strategy is needed. The procurement of digital technologies and their subsequent deployment should be overseen by a designated regulator—the AI in the Public Sector Authority—as a fundamental precautionary check and balance to minimise the risk of mass negative effects arising from the accelerating digital transformation of the public sector (Sanchez-Graells, 2024c).

Bio

Albert Sanchez-Graells is a Professor of Economic Law at the University of Bristol Law School. His most recent monograph is ‘Digital technologies and public procurement. Gatekeeping and experimentation in digital public governance’ (OUP, 2024).