As a follow-up to the WINIR workshop, ‘Regulation and the Common Good’, this blog post takes the form of a ‘Q & A’ between Richard Craven, the workshop organiser, and the workshop’s two keynote speakers: Tony Prosser and Julia Black.

Tony Prosser is Professor Emeritus at the University of Bristol School of Law, having been Professor in Public Law at Bristol since 2002. Tony was elected a Fellow of the British Academy in 2014, and, prior to Bristol, was the John Millar Professor of Law at the University of Glasgow.

Julia Black CBE PBA is Professor of Law at LSE Law School, where she is also the Strategic Director of Innovation. Julia is the President of the British Academy, having been elected in 2021, and is soon to take up the role of Warden of Nuffield College, University of Oxford.

The WINIR workshop was held at the School of Law and Criminology, University of Sheffield in October 2023. We are grateful to WINIR for providing funding to enable the event to go ahead, and also to the SLSA for its support. The workshop featured a series of socio-legal paper presentations connected to the theme, ‘regulation and the common good’, and a range of topics was covered. This blog post gives an insight into some of the discussion between Julia, Tony and attendees, which was fascinating in many ways, not least because, whilst Julia and Tony found a great deal to agree upon, they clearly came at the subject from very different entry points.

Tony, your paper was titled ‘Regulation and the Common Good: An Autobiography’. Could you say something about how you came to be interested in regulation?

Given the nature of the Workshop, I thought it would be helpful for me to give an account of how my own thinking, particularly on utilities regulation, developed over a long career from origins built up here in Sheffield in the old Centre for Socio-Legal Studies. In fact, my interest in regulation originated from a pub conversation in Sheffield with Douglas (then Norman) Lewis, the Centre Director, when we identified nationalised industries as an under-researched area in public law.

On studying the industries, I was concerned about the failure to develop any legal conception of the common good, which was instead treated as self-defining or as solely a matter of politics or, later, as equivalent to economic efficiency. This was despite their major problems in meeting social goals and a number of serious scandals. The Aberfan tragedy caused by National Coal Board negligence was a central memory from my childhood and experience in welfare rights work as a student showed how widespread and brutal disconnections of energy services were (far worse than after privatisation). Of course, we are still seeing the malign effects of mismanagement and lack of accountability of the publicly owned Post Office. There was also an extraordinary lack of transparency in relations between the industries and government and in policy making. Could regulation do better?

Julia, your introduction to regulation was quite different to Tony’s. Why did you decide to pursue regulation as an avenue of research?

As a student, I was always more interested in law’s role outside the courts – in organising society – rather than in dispute resolution. What I found, and still find, fascinating is how social groups within and beyond nation states organise themselves through the rules and institutional structures they create, and that extends to law’s use by political actors to impose organising principles on society or particular groups, and indeed to national or supranational constitutions. So I have always been equally interested in company law as in public law, and in self-regulating bodies (which might make only incidental use of law as an enabling mechanism) as much as those constituted by state-based law.

At the time I was studying for my undergraduate degree in the mid-1980s, there was a wave of new statutory bodies being created to regulate the privatised industries, those which Tony was looking at. But I was particularly interested in another set of reforms: those relating to the deregulation and reregulation of the City – Thatcher’s ‘Big Bang’.

The reforms (introduced in the Financial Services Act 1986) created a complicated architecture in which self-regulatory bodies were to be formalised and/or created, and granted law-making and enforcement powers. They were to be overseen by another private organisation – a company limited by guarantee that was itself awarded legal powers to make rules and take enforcement action, overseen by a Government Department. This struck me as a fascinating edifice – the embodiment of what we would now call ‘hybrid’ regulation – a combination of public and private regulation. The timing for me was perfect, as my DPhil studied the creation of this new regulatory structure in real time.
It looked at how the new self-regulatory organisations created and organised themselves, how and why they wrote the rules they did, how the political and institutional dynamics of the whole system worked, and how and why the structure changed over a very short period. So my research interests were both empirical and theoretical – trying to analyse how policy choices were being encoded in rules, how those rules were framed by the understandings and norms of the rule writers as well as the competing interests of different industry groups, how the form rules took was itself a decision of regulatory technique which distributed power across the regulatory system, how the statutory system was designed to interact with the common law, and trying to conceptualise the inter-institutional dynamics which I saw being played out, and which were themselves structured by, and were structuring, the system of legal rules being developed (‘“Which Arrow?” Rule Type and Regulatory Policy’ (1995); Rules and Regulators (1997)).

Following from that, how do you tend to regard normative, common good, arguments and justifications in your writing?

It’s hard to argue against the proposition that regulation should be aimed at the ‘common good’; the debate is always about how that should be defined and by whom. But I would agree that principles such as social benefit, protection from risks and harms, protection of rights, as well as more market-based justifications, all have a place in legitimising state intervention in the actions of individuals or organisations, though whilst necessary they may not be sufficient for some to afford regulation that legitimacy. And as we know, there are competing notions of the ‘good life’ and of the state’s role in enabling or securing it, so in practice the goals expressed in different legislative instruments may be more or less acceptable to different groups.

In the case of regulation, I would argue that at the least, regulation should try to curb the harm caused to others by the worst excesses of self-interested behaviour by private actors, usually in the pursuit of profit – to make capitalism honest. But politics does not always work that way. Legislation is the encoding of political choices in law, choices which may favour some groups over others, or some ideologies over others. That said, I’m not a rabid public choice theorist, who sees politics as an open marketplace in which legislation is the good traded off to the highest bidder. That theory developed in the US, which has a political and constitutional system that is much more porous than a Westminster system, with its strong parties and an executive that drives the legislative agenda, or than a system which produces coalition governments. Yet even without going to the extremes of public choice theory, we know that the voice of the consumer or the citizen (or smaller businesses) is often crowded out by that of more powerful actors. And where costs are concentrated, calculable and being incurred now, and benefits diffuse, uncertain and/or materialise over the long term, then it takes significant political judgement and courage to impose those costs on a vocal industry – we can see that today in the constant de-prioritising of actions to address climate change and the loss of biodiversity.

I also think that both the goals and the challenges of regulation, and even the nature of the ‘regulatory state’, can be overly framed by the regulation of the privatised utilities; indeed, it was unfortunate that regulation, and the ‘regulatory state’, came to be associated most closely in the academic and policy discourse with their creation, and with the corresponding agenda of neo-liberal economics. That focus can tend to leave out of the spotlight other more mundane (at least for economists) but often long-standing regulatory systems which are more focused on risks, such as food safety, occupational health and safety, product safety, and consumer protection. And it also ignores the normative goals which they are trying to achieve. As I have said often, regulation has always been about much more than ensuring market efficiency – not least it has been about managing risks caused to health, the environment, indeed to human rights, by the actions of private actors (‘Really Responsive Regulation’ (2008); ‘Really Responsive Risk-Based Regulation’ (2010)).

Finally, whilst it’s obviously important to think about what should be the high level normative agendas for state-based regulation as a whole – which ultimately comes down to our conceptions of the role of the state – I’m just as interested in understanding why different regulators have the goals they do. Relatedly, I’m also interested in the cognitive and epistemological dimension to regulatory systems – how problems are framed; how legislators (and regulators) perceive and understand the domain – the area of economic or social life – which they are seeking to regulate; what sources of knowledge they draw on, and thus who is admitted, and who excluded, from that wider regulatory conversation on goals, principles and norms (‘Constructing and Contesting Legitimacy and Accountability in Polycentric Regulatory Regimes’ (2008); ‘Reconceiving Financial Markets – From the Economic to the Social’ (2013)).

Tony, how have concepts like the common good appeared in your research on the privatised utilities?

In the rhetoric and literature at the time of privatisation, a welfare/neo-classical economics approach was central to most of the academic and policy approaches taken. Its appeal was that it appeared to avoid difficult political and economic judgements in defining the common good through its concentration on objectives of efficiency and consumer choice. It claimed to permit a radical separation between economics and politics; the former was rational and calculable whilst the latter was arbitrary. The underlying sociological assumptions were Weberian and did not accept the possibility of practical reason in political matters (Politics as a Vocation; R. Swedberg, Max Weber and the Idea of Economic Sociology (1998)). However, this approach was far too narrow to provide useful answers to difficult social questions, precisely because it cut itself off from the other contexts. This inadequacy was confirmed by the statutory mandates given to regulators, even in the field of public utilities, which were not limited to efficiency goals. Indeed, the necessary balancing of objectives suggested that regulation is an art, not a science, and that the typical economic language of trade-offs is misleading, as many values are not tradable. The market failure approach with which this approach was associated also had a clear political agenda, that markets are always the first choice for allocation of goods and services. This may have made it politically appealing at the time, but it is much less so now that citizenship and sustainability are firmly on the agenda (‘Regulation and Social Solidarity’ (2006), 364-8).

In view of this complexity, I asked which alternative rationales could be found for regulation, going beyond generalisations about the common good. I looked outside the Anglo-American context to the French concept of service public. I used this to suggest four basic categories of regulatory rationales:

  1. Maximising efficiency and consumer choice; this will not apply to all areas of regulation, but only to those concerned with opening up or mimicking markets.
  2. Protection of basic rights; examples would be privacy, social care standards and aspects of utilities regulation.
  3. Social solidarity, which includes universal service (social and geographic) and other attempts at limiting the fragmenting effects of markets in highly unequal societies, such as social tariffs; crucially, it also now includes sustainability. It derives from Durkheim rather than Weber.
  4. Creating institutions for participation and deliberation and balancing of values; this needs to be done through institutional design rather than calculating trade-offs (The Regulatory Enterprise (2010), 11-19).

These rationales co-exist and it is necessary to experiment to find the right mix. Whatever their weaknesses, they did seem to offer some structure for analysing the potentially unmanageable breadth of regulatory studies; I was delighted to find at the workshop that they have been employed in areas very different from the regulation of utilities.

A further important advantage to these concepts is their openness to intellectual exchange with other disciplinary approaches. They also avoid the stark bifurcation of markets and politics characteristic of the market failure approach. This ability to connect has also been seen in other approaches to regulation, notably the ‘regulatory state’ and ‘regulatory capitalism’ schools, and its necessity has been clear in the policy debates.

Tony, in light of the common good theme, what do you consider to be the main directions and challenges for regulatory studies?

Two areas where further connections need to be developed are:

  1. Corporate governance, which was assumed at the time of privatisation to be much simpler than it is; see for example the extraordinary financial engineering undertaken by the privatised English water companies.
  2. Broader economic constitutions, which are arguably subject to fundamental change (of which Brexit is an illustration) as part of a shift to ‘populism’. This appeals to what Weber termed ‘plebiscitary democracy’, and is hostile to independent institutions, whereas a degree of independence was essential to the regulatory model of the 1980s to provide stable and calculable regulation. The effects of austerity have, at least in the UK, also constituted a major constitutional change with relevance to regulation, especially from the 2010 spending review onwards.

In addition, of course, the future raises a huge range of issues concerned with digital media, with AI and, in particular, private regulation by algorithm, through which private corporations regulate private actors; this takes us far beyond the concern with self- and co-regulation in my early work. Can broader values of the common good mean anything in this context? Can they replace the breakdown in trust caused by radical populism and amplified through social media? Roger Brownsword suggests that the answer is ‘yes’ in his recent work (Law, Technology and Society: Re-Imagining the Regulatory State (2019)). This brings us nicely back to where we started, as his interest in regulation was another product of the Sheffield Centre!

Julia, can I ask you to reflect on the common good theme in light of the directions and challenges for regulation with respect to AI?

There are many characteristics of the rapid proliferation of digital technologies and AI which make them both challenging and interesting to regulate: for example, they are both highly distributed and highly concentrated, they combine open source with proprietary models, they use innovative business models which are global in operation and easily defy jurisdictional boundaries, the technologies can be used for both good and ill, and individual actors in society are both the targets of regulation and its beneficiaries. The technologies for both AI hardware and software are proceeding at an unprecedented pace, but it’s easy to get distracted by the technologies alone. I am just as interested in the economic and indeed industrial complex which has emerged, and is emerging, for its production and dissemination. The ownership and management of digital platforms is highly concentrated in different parts of the world, but the development of apps and the creation of content is highly distributed. Subject to the regional containment of different platforms (eg in China, Russia), almost anyone can upload anything at any time – good or bad – and it will be distributed instantaneously. In the case of AI, there is a gradual concentration of providers, but the academic philosophy of open source, which imbues parts of the technological community creating such models, means that there is a proliferation of open source as well as proprietary models. The political economy of the industry is also fascinating, and there is an important geo-political dimension, which impacts national decisions on whether and how to regulate. The AI race has become like the space race – each country wants to have sovereign capacity, and to lead – there is a reluctance to ‘kill’ the potential national champion, or ‘throttle AI in red tape’.
These issues are all aside from the geopolitical challenges of supply chains for the hardware for the industry, the concerns relating to labour conditions of, for example, coders or content moderators, and the significant environmental impact of AI, arising from the energy consumed by the computing power needed to run such algorithms and the data centres on which we all rely, and from the water needed to cool those centres.

The technical, political, legal, and social challenges of regulating the development and deployment of AI are immense, and even greater if we factor in the need for international coordination. Forging a common consensus on the goals that such regulation should be pursuing is challenging, even within and between democratic countries. There is gradual coalescence around different sets of international principles for governing AI – for example the OECD Principles, the Hiroshima International Guiding Principles which build on them, the EU’s proposed AI Act, and the Bletchley Park Declaration on AI Safety. To varying degrees these stress the need for the development, deployment and use of AI to be responsible, trustworthy, safe, and transparent. But even across the OECD we can see countries taking different paths, notably on whether they include respect for human rights in the development and deployment of AI. The OECD Principles and the EU Act emphasise the need for human-centric AI and a respect for human rights. In the UK, despite respondents arguing for their inclusion in the UK’s AI principles, upholding human rights has been excluded from the goals or principles, with no reasons given by the Government as to why. But even where countries agree, how those high level normative goals or principles are transposed in different regulatory systems, even within the same country, is inevitably going to vary, and the regulatory challenges of ensuring they are met are huge (‘Regulating AI and Machine Learning: Setting the Regulatory Agenda’ (2019)).