Typology of Legal Technologies

L Diver, P McBride, M Medvedeva, A Banerjee, E D’hondt, T Duarte, D Dushi, G Gori, E van den Hoven, P Meessen, M Hildebrandt, (2022) ‘Typology of Legal Technologies’ (COHUBICOL). View online


The Typology is an online, interactive publication, a methodology, and a mode of analysis. Substantively, it contains a curated set of typical legal technologies (applications, scientific papers, and datasets). The COHUBICOL team assessed these based on the claims made by their developers and/or providers, and on the substantiation of those claims, with an eye to the kind of legal impact their deployment might have. Our particular focus is on systems that might alter or impact the concept of legal effect that lies at the heart of law-as-we-know-it.

Me presenting the Typology of Legal Technologies at the CRCL22 conference

Argument by Numbers: The Normative Impact of Statistical Legal Tech

L Diver and P McBride, (2022) ‘Argument by Numbers: The Normative Impact of Statistical Legal Tech’ Communitas. View online


The introduction of statistical ‘legal tech’ raises questions about the future of law and legal practice. While technologies have always mediated the concept, practice, and texture of law, a qualitative and quantitative shift is taking place: statistical legal tech is being integrated into mainstream legal practice, particularly that of litigators. These applications – particularly in search and document generation – mediate how practising lawyers interact with the legal system. By shaping how law is ‘done’, the applications ultimately come to shape what law is. Where such applications impinge on the creative elements of the litigator’s practice, for example via automation bias, they compromise the litigator’s professional and ethical duty to respond appropriately to the unique circumstances of their client’s case – a duty that is central to the Rule of Law. The statistical mediation of legal resources by machine learning applications must therefore be introduced with great care, if we are to avoid the subtle, inadvertent, but ultimately fundamental undermining of the Rule of Law. In this contribution we describe the normative effects of legal tech application design, and how these are potentially (in)compatible with law and the Rule of Law as normative orders, particularly with respect to legal texts, which we frame as the proper source of ‘lossless law’, uncompressed by statistical framing. We conclude that reliance on the vigilance of individual lawyers is insufficient to guard against the potentially harmful effects of such systems, given their inscrutability, and suggest that the onus is on the providers of legal technologies to demonstrate the legitimacy of their systems according to the normative standards inherent in the legal system.

AI & the compression of law

Updating Wendell Holmes

Last month I had the privilege of being invited to deliver a talk to the Catalan Center for Legal Studies and Specialised Training, a centre for judicial training in Barcelona. The title of the talk was ‘AI & the compression of law’, and in it my goal was to debunk the idea of the ‘robot judge’ (always depicted as a glassy white robot figure, either with a blindfold or the scales of justice). Instead, I argued, the worry with the use of AI in law is not the replacement of judges, but rather the subtle reshaping of their activities (and those of other parties in the litigation sphere) by systems whose machine learning underpinnings are geared toward forms of optimisation and relevance that are not necessarily compatible with legal notions of optimality or relevance.

This is a more profound problem than the ‘robot judge’, because its effects are much more subtle – jurists don’t necessarily understand the technical logics underlying the systems they use in their daily practice – but the impact on the Rule of Law is all the more important because of this. Automation bias means jurists will tend to accept the outputs of legal tech systems involved, for example, in search and document drafting, but as we’ve seen in recent debates in Natural Language Processing, large language models (LLMs) are sophisticated pattern-matching systems that have no access to the underlying meaning that the words represent.

When lawyers take the outputs of such systems as an accurate, valid, or legitimate representation of legal reality, they imbue something that has been generated according to statistical probability with legal legitimacy, and this is potentially corrosive to the ‘creative step’ that lies at the heart of the Rule of Law: the ability to make a new argument that synthesises empirical facts with legal norms in service of the client’s rights and interests. Without that essential aspect of legal practice, the Rule of Law loses its ability to respond to contingent circumstances, and we end up with Holmes’ prediction of the “man of statistics”.

Holmes suggested that the phenomenon of law consists in “the prophecies of what the courts will do in fact, and nothing more pretentious”; my suggestion is that we must avoid law becoming “the prophecies of what legal tech will do in fact, and nothing more pretentious”. Since legal technologies mediate legal practice – and always have – their affordances shape that practice. And so, if the underlying design choices reflect a particular logic, as they inevitably will (since technology cannot be neutral), then we have to enquire into what that logic is.

The compression of law

In the case of legal tech built around machine learning, the logic is not legal per se, but instead statistical – it’s about what is likely to come next following a search term or seed text for document generation, on the basis of what patterns of text exist within the training dataset. If lawyers unthinkingly accept such outputs (and automation bias is likely to lead to this, as is the notion of robotomorphy, the inverse of anthropomorphy), we arrive at the antithesis of the Rule of Law, which at its heart must always provide (i) procedural mechanisms that (ii) allow for alternative interpretations of texts such that (iii) reasoned argumentation bridging facts and norms can result in (iv) individuals or collectives receiving the protection of law in specific circumstances, that protection being the underlying spirit that animates and informs this process.
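To make that contrast concrete, the ‘what is likely to come next’ logic can be sketched with a toy bigram model. This is a deliberately simplified illustration – the corpus and the `predict` function are invented for this example, and real systems use vastly larger datasets and neural architectures – but the underlying principle is the same: the prediction is driven by the frequency of past textual patterns, not by the meaning of the words.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for a legal-text training set (illustrative only).
corpus = (
    "the court held that the contract was void "
    "the court held that the claim was barred "
    "the court found that the contract was valid"
).split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict(word):
    """Return the statistically most frequent next word – with no regard to meaning."""
    return following[word].most_common(1)[0][0]

print(predict("court"))     # 'held' – seen twice, so it beats 'found' (seen once)
print(predict("contract"))  # 'was'
```

The model will always output ‘held’ after ‘court’, not because holding is legally apt in the case at hand, but because that pattern dominated the training text – a miniature of the optimisation logic described above.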

When we rely on datafied abstractions of law – which is already an abstraction of the real world – we are potentially working with something that has ‘compressed’ away important and relevant information to the legal enterprise, and while we can and do work with such abstractions (statistical legal tech does provide usable results), this is paradoxically where the problem sits – by relying on such compressed outputs, we potentially constrict legal flexibility, in ways that might not be apparent to us.

Robot judges aren't the problem we should be worried about

For a related blog post I wrote with Pauline McBride, see ‘High Tech, Low Fidelity? Statistical Legal Tech and the Rule of Law’ on Verfassungsblog. We also have a full paper forthcoming in Communitas.


Response to the Scottish Government's consultation on legal services regulation reform

This response was written in December 2021 with Pauline McBride and submitted to the Scottish Government’s Consultation on legal services regulation reform.

Written submission from Dr. Pauline McBride1 and Dr. Laurence Diver.2

This submission is made in a personal capacity and not on behalf of any of the organisations with which we are affiliated.

We are pleased to have an opportunity to make a written submission to the Scottish Government Consultation on Legal Services Regulation Reform in Scotland. Our submission is structured as follows: in Section 1 we highlight the importance of the independence of the legal profession for the rule of law; in Section 2 we express concerns about the assumptions and limitations of the consultation document and the Roberton report on which it is based; in Section 3 we flag concerns about funding for the Roberton and Market Regulator models; finally, in Section 4, we highlight the need for more careful consideration of the specific characteristics of legal tech and what these mean in terms of appropriate regulation.

Section 1 – Independence of the legal profession

Independence of the legal profession is crucial. It is “a fundamental principle recognised by the international community” and essential to “ensuring that the rule of law is upheld”.3 That means that the profession must be free from government interference, whether in shaping the practice of law or the business of law. That must be so even (and especially) where government, existing regulators, or commercial organisations see an opportunity to enhance economic growth in the ‘market’ for legal services by defining (or re-defining) their scope and the boundary between reserved and unreserved legal services.

The independence of the legal profession is not well-served by vesting regulatory powers in an organisation which sits outside the profession. For this reason, both the Roberton model and the Market Regulator model (Options 1 and 2) threaten the independence of the profession. We draw attention to the economic considerations at play in the specification of the powers for these regulators – notably, the power to monitor the supply of legal services and the power to act as an economic regulator.

Section 2 – Assumptions and limitations

The consultation document makes repeated reference to the rule of law. However, little consideration is given to the substantive requirements of the rule of law. There is no acknowledgement that the establishment of a regulatory body which is separate from the legal profession threatens the autonomy of the profession and so the rule of law. No attempt is made to explain how this dilemma may be resolved. We do not think it can be resolved. Primacy must be given to the rule of law and therefore to the independence of the legal profession.

We note that there appear to be three separate motivations for the consultation and the proposed reform. First, the Roberton report identifies various failures in the current regulatory regime, noting, for example, the complexity of the complaints system, a lack of clarity about when a practising certificate is required, and issues about pricing transparency. It does not explain why these issues might not be sufficiently tackled within the context of the current regulatory framework.

The second relates to the perceived need for a demarcation between the roles of the Law Society of Scotland as representative body on the one hand, and as regulator on the other. Such demarcation has already been achieved through the establishment of a separate Regulatory Committee of the Law Society of Scotland.

The third, and it seems to us, the main motivation for reform is economic. The consultation document refers to a ‘legal services market’ and the potential for ‘market failure’. It suggests the appointment of a ‘Market Regulator’. It appears primarily to be focused on competition and liberalisation.

Economic considerations such as opening up new markets, preserving market ‘share’, facilitating new business models or enhancing competition should not take priority over preservation of the rule of law. The latter is a fundamental value, underpinning all others in a democratic society. Indeed, it is our view that the rule and theory of law necessarily come prior to any consideration of markets. A narrative that, in effect, prioritises market considerations plays into an instrumental, neoliberal conception of law that puts money before social values, profit before professionalism.4 It is a narrative that favours the powerful: the big law firm, the global legal publisher, the provider of legal tech. This is not a neutral narrative, nor does it reflect the impetus behind the rule of law in terms of facilitating the holding of power to account.

The consultation document points to examples of regulatory changes effected in England and Wales, Australia and the state of California. It is important to put these developments into context, in terms of the legal and political cultures to which they pertain. For example, Sommerlad and Hammerslev note that while the “common law world was the vanguard in embracing globalisation and neoliberal policies”, Scotland has on the other hand “successfully resisted the extremes of liberalisation adopted in England and Wales”.5 The consultation document outlines the nature of the current regulatory regime in England and Wales. It makes no attempt to evaluate the impact of that regime, much less to justify its appropriateness for the Scottish legal system.

Relatedly, the consultation document appears to assume that the legal services market is broadly comparable to other market sectors. It is not. Precisely because of the significance of the rule of law for democracy, for rights, and for the goal of justice, it is not possible or desirable to transpose regulatory models suited to other economic sectors into the legal services market. While other sectors may have moved towards a risk-based regulatory approach, it is not apparent that such an approach is appropriate for the legal profession, and the consultation document does not demonstrate otherwise. More fundamentally, insofar as there is a legal services market, it exists to facilitate justice and the rule of law, and thus it must reflect an underlying normative commitment that is not present in other sectors.

The Roberton report and the consultation document draw support from a study which the former describes as a “small qualitative Consumer Study on Scottish Users of Legal Services (2018)”. This study, carried out by the Scottish Government’s User Research Team, was based on interviews with 12 participants. This is a tiny sample size relative to the adult population of Scotland, most of whom will, at some stage, have reason to seek legal advice. We acknowledge that insights may be drawn from the research but it cannot be considered representative of the experiences of ‘consumers’ of legal services in Scotland.

Section 3 – Funding of Options 1 and 2

The consultation document states that under each of Options 1 and 2 the new body would be funded through a levy on the legal profession and that the cost to the profession would be “intended to be no more than the current system.”

Option 1 anticipates that the legal profession will fund both a regulator and a professional representative body. It supposes that the latter should play a significant role in consulting with the regulator, and would have a role in “providing CPD (approved by the regulator), provide professional services and guidance, issue publications, and be able to seek to influence law reform.” This tends to suggest that overall the costs to the profession will increase.

In relation to Option 2, it is difficult to know why, in principle, the cost of the levy should not be significantly less, since the remit of the proposed new regulator is narrower than that of current regulators. In practice, we anticipate that the setting up of a new regulatory body will result in increased costs for the profession.

Section 4 – Legal tech and its regulation

We are particularly concerned about this section of the consultation. Like the Roberton report, it warns against the “creation of barriers to new legal services founded on legal tech through over specification of regulation in legislation.” It appears to favour a light touch approach to regulation and to favour the use of regulatory sandboxes. Here again we see a market-oriented approach.

The consultation document, like the Roberton report, has little to say about the range and diversity of legal tech that is either presently available or under development. For example, neither makes any reference to technologies that employ machine learning, natural language processing or statistical analysis (‘data-driven’ legal tech). Neither conveys any sense of the various legal domains in which these technologies are or might be employed, and how. There are no references to systems that offer to prepare first drafts of documents, carry out contract analysis and reviews, secure compliance with regulatory instruments such as the GDPR, or predict the outcome of court cases through analysis of case law, judicial behaviour, or past practitioner performance. The consultation document appears to take it for granted that legal technologies present only opportunities for the development of legal services; there is no recognition that use of these technologies to carry out (or assist in carrying out) ‘law jobs’ may present subtle but profound threats to the rule of law.

In terms of how such technologies are likely to impact real-world practice, the well-known risk of automation bias must be considered. The possibility that lawyers become deskilled through use of such technologies, or fail to appreciate the assumptions of these systems and the design choices on which they are based, is less well recognised. For example, some of these technologies presuppose and play into a formalist conception of law as a set of rules to be applied mechanistically without regard for the particular case, the individuals involved, and the overarching claims of justice. Others embed in their designs the assumption that these essential aspects of the law can be represented and found in data, and that legal practice requires nothing ‘extra’.

Whatever the merits or demerits of these assumptions, it is essential that the design of legal tech systems be open to scrutiny. This is to ensure that any adoption of legal tech is in accordance with the values and commitments of the Scottish legal system and profession. We therefore consider that technologies that carry out or assist in carrying out ‘law jobs’ should be employed with great care and, in some cases, should not be employed at all. We are particularly concerned about the use of data-driven technologies which rely on past data to predict (or dictate) the outcome of cases. The use of such technologies for determination of case outcomes, for example, would inevitably restrict the inherent flexibility of law to adapt to its social and historical context. Such impacts might not be immediately apparent, particularly in light of the automation bias mentioned above.

We consider that legal technologies should be regulated,6 but do not consider they should be regulated as legal services.7 The regulators (at present, the Law Society of Scotland, Faculty of Advocates and the Association of Commercial Attorneys) should, however, develop guidance about the use of these technologies by their members. Such guidance must take into account the implications of different legal technologies for the rule of law, both in terms of equality before the law (its common interpretation) and in terms of ensuring that the underlying ethos of the law is protected. The latter requires that due process, contestability, and access to justice be reflected in both the designs of legal tech applications and in the specific ways and contexts within which they are employed. This is a significant but fundamentally important challenge that the market alone cannot resolve.

Furthermore, we strongly oppose the use of regulatory sandboxes. The regulatory sandbox approach threatens the independence of the regulator. It is designed to favour the interests of developers rather than those of citizens, or the public interest more broadly. Moreover, considering the diversity of legal technologies both at present and currently under development, we suspect that regulatory bodies will not be well-equipped to assess those technologies, whether for compliance with fundamental rights or for conformity with the principles of the rule of law.8 Any assessment of the acceptability of legal tech must be carried out with a full awareness both of the assumptions and design choices underpinning the technologies and of the normative commitments of the (Scottish) legal system that are potentially impacted by them.

  1. LLB (Hons), DipLP, PhD; solicitor; post-doctoral researcher, Vrije Universiteit Brussel: Counting as a Human Being in the Era of Computational Law (cohubicol.com), member of the Technology Committee of the Law Society of Scotland. 

  2. LLB (Hons), DipLP, PGDip, LLM, PhD; post-doctoral researcher, Vrije Universiteit Brussel: Counting as a Human Being in the Era of Computational Law (cohubicol.com). 

  3. The Hon. Justice Michael Kirby, ‘Independence of the Legal Profession: Global and Regional Challenges’ https://www.icj.org/wp-content/uploads/2012/04/independence-legal-profession-occasional-paper-2005.pdf

  4. Sommerlad, H & Hammerslev, O (2020) Lawyers in a new geopolitical conjuncture: continuity and change, in: R. Abel, O. Hammerslev, H. Sommerlad, & U. Schultz (Eds) Lawyers in 21st Century Societies Volume 1: National Reports (Oxford, Hart), pp. 1–41. 

  5. ibid (emphasis added). 

  6. For example, by legislation akin to the EU’s proposed Artificial Intelligence Act. 

  7. We do not think it is helpful to use the term ‘legal services’ for services other than ‘reserved’ services. 

  8. We are acutely aware, through our own research, that assessment of these systems relies on a mix of computer science and legal expertise. 

Digisprudence: about the book

Written for the Edinburgh University Press blog (Dec 2021)

Tell us a bit about your book

Digisprudence is about the technologies that govern our behaviour, and how they can be designed in ways that are compatible with democracy. We’ve probably all had that feeling of frustration when using our smart phone or a website, that we’re in some sense being controlled or manipulated in what we are able to do. That might seem unimportant, but technology is powerful: imagine a car that won’t go faster than 70mph, no matter how hard you accelerate (even to avoid an accident or to get someone to hospital). Or think about the convoluted process you have to go through to delete your Facebook account, especially if you want it to happen immediately (this could have significant implications in the case of doxxing, or other forms of online harassment).

In effect, the choices made by the designer of a system impose rules on you. But those rules are not like legal rules, which you as a citizen are able to interpret the meaning of, or even ignore (think of the speeding example above). Computer code just imposes itself, often without any input from the citizen (‘user’). My argument is that this is a problem in a democracy. We tend to think that parliaments can’t make whatever laws they like – they are constrained in the first place by a constitution, and after-the-fact by the courts, who can consider whether a rule is legitimate after it is made.1 I think that, essentially, the same constraints should apply to code: designers and developers shouldn’t be able to make whatever rules they like, if these have an effect on the behaviour and actions of citizens. If we don’t accept arbitrary law, we shouldn’t accept arbitrary code.

Digisprudence charts this problem, setting up a parallel between unacceptable laws and unacceptable code. It then presents a set of design features, or digisprudential affordances, which can help code avoid imposing arbitrary control over its users. These features are about how the system relates to those interacting with it, and to the legal system more widely: choice, transparency (of operation, purpose, and provenance), delay, oversight, and contestability (by the user and via the courts).

Systems that include those features will be compatible with the underlying values of democracy, at least at a foundational level (they will still need to comply with other laws, such as intellectual property, data protection, etc.).

Finally, Digisprudence is a ‘reboot’ for three main reasons. First, because it takes the idea of code controlling us, a.k.a. ‘code as law’, and builds on it by analysing our immediate interactions and relationships with technology in greater depth than before. Second, because it uses longstanding theories from law as a platform to underpin and justify the digisprudential affordances. And third, because it opens the black box of how code is actually made, with a view to making a difference in practice.

What inspired you to research this area?

Although I have an academic background in law, I’ve been tinkering with programming since I was a kid in the late 90s. After undergraduate studies and a spell as a researcher at the Scottish Law Commission, I worked for a few years as a professional web developer. It became clear to me that I had a strange, and perhaps not entirely legitimate, power over the interactions between users and the products I was creating. Even with the best of intentions it seemed odd to me that I – like millions of other developers around the world – had this ability to ‘legislate’ design rules that would control part of someone else’s behaviour.

More broadly, like a lot of people I’ve had the sense since the early 2000s – and especially with the rise of Facebook – that there’s something unethical about how technological architectures are so effective at structuring our behaviour and actions. Combined with my own experience as a developer, that was the seed of my interest in this area. Of course, technology ethics has been an academic concern for decades and in the past 20 or so years has become a really significant field. I thought that there was something useful to be said about the crossover between legal theory and technology design, especially given the very important differences between law and ethics.

What was the most exciting thing about this project for you?

I find the strand of legal research that deals in the ‘materiality’ of how law is done really interesting. Law is often quite an abstract field, even when it deals with real-life situations; questions of how rules become reality, through people’s actions or via the architectures that surround us – those are extremely interesting, and relevant not just to legal academics but to all of us as citizens.

How we analyse those questions when those architectures are digital, and created by commercial actors, is fascinating and important from a democratic perspective. It opens up a lot of really exciting and fundamental questions at the cross-over between law and computers.

Has your research in this area changed the way you see the world today?

I’ve come to realise just how often people assume that computers and Artificial Intelligence are essentially good things that can solve the world’s problems. This is the case for many in academia, civil society, and government (both domestic and international). There is a place for these technologies, of course, but the tendency to frame a problem in a way that leads to a technological solution is very common. This is problematic because it leads to the wrong kinds of solutions to the wrong kinds of problem.

In that respect I can say my own trajectory has changed over time: as a technology enthusiast myself, I can certainly identify an evolution in my own views. I have a deeper appreciation for the tension between the ‘solve the problem with the tools that I have’ mindset of the developer, and the ‘are these the correct tools, and is this even the correct problem?’ approach of the philosopher, ethicist, or lawyer. Having a foot in both camps has really helped me to understand how one side views the other.

What’s next for you?

I’m currently a postdoc in an ERC Advanced Grant project called Counting as a Human Being in the Era of Computational Law, or COHUBICOL. One can probably appreciate the overlap between the book and the focus of the project! Our work really deepens the ‘dual view’ I described above: we are a cross-disciplinary team, comprising lawyers and computer scientists. Our focus is on the deep assumptions of both fields, and how they complement and collide with one another. This is especially important when computer systems become more and more embedded in the practice of law, which is something relevant to all of us.

  1. One might recall the Johnson government’s attempt to prorogue the UK Parliament in 2019 – that was found to be illegal by the Supreme Court, and reversed (or more accurately, the court found that the prorogation had never happened in the first place). 

Digisprudence: The Design of Legitimate Code

L Diver, (2021) ‘Digisprudence: The Design of Legitimate Code’ 13(2) Law, Innovation and Technology. View online


This article introduces digisprudence, a theory about the legitimacy of software that both conceptualises regulative code’s potential illegitimacies and suggests concrete ways to ameliorate them. First it develops the notion of computational legalism – code’s ruleishness, opacity, immediacy, immutability, pervasiveness, and private production – before sketching how it is that code regulates, according to design theory and the philosophy of technology. These ideas are synthesised into a framework of digisprudential affordances, which are translations of legitimacy requirements, derived from legal philosophy, into the conceptual language of design. The ex ante focus on code’s production is pivotal, in turn suggesting a guiding ‘constitutional’ role for design processes. The article includes a case study on blockchain applications and concludes by setting out some avenues for future work.

Interpreting the Rule(s) of Code: Performance, Performativity, and Production

L Diver, (2021) ‘Interpreting the Rule(s) of Code: Performance, Performativity, and Production’ MIT Computational Law Report. View online


Software code is built on rules, and the way it enforces them is analogous in certain ways to the philosophical notion of legalism, under which citizens are expected to follow legal rules without thinking too much. The ontological characteristics of code – its opacity, immutability, immediacy, pervasiveness, private production, and ‘ruleishness’ – amplify its ‘legalistic’ nature far beyond what could ever be imposed in the legal domain, however, raising significant questions about its legitimacy as a regulator. This contribution explores how we might critically engage with the text of code, rather than just the effects of its performance, in order to temper these extremes with the reflexive wisdom of legality. This means contrasting the technical performance of code with the social performativity of law, demonstrating the limits of viewing the latter as merely a regulative ‘modality’ that can be easily supplanted by code. The latter part of the article considers code and the processes and tools of its production from the perspective of legality, drawing on theories of textual interpretation, linguistics, and critical code studies. The goal is to consider to what extent it might be possible to guide that production, in order to ameliorate an ingrained ‘legalism’ that is democratically problematic.

Computational Legalism and the Affordance of Delay in Law

L Diver, (2020) ‘Computational Legalism and the Affordance of Delay in Law’ 1(1) Journal of Cross-disciplinary Research in Computational Law. View online


Delay is a central element of law-as-we-know-it: the ability to interpret legal norms and contest their requirements is contingent on the temporal spaces that text affords citizens. As computational systems are further introduced into legal practice and application, these spaces are threatened with collapse, as the immediacy of ‘computational legalism’ dispenses with the natural ‘slowness’ of text.

Read more

Aid and AI: The Challenge of Reconciling Humanitarian Principles and Data Protection

J Zomignani Barboza, L Jasmontaitė-Zaniewicz, L Diver, (2020) ‘Aid and AI: The Challenge of Reconciling Humanitarian Principles and Data Protection’ Privacy and Identity 2019: Privacy and Identity Management. Data for Better Living: AI and Privacy 161. View online


Artificial intelligence systems have become ubiquitous in everyday life, and their potential to improve efficiency in a broad range of activities that involve finding patterns or making predictions has made them an attractive technology for the humanitarian sector. However, concerns over their intrusion on the right to privacy and their possible incompatibility with data protection principles may pose a challenge to their deployment.

Read more

Technological mediation vs. the Rule of Law

Presented at the conference on the Philosophy of Human-Technology Relations (PHTR), November 2020


In a constitutional democracy, the operation of law relies on the multi-interpretability of language and the possibility of contesting meaning. These capabilities are assumed in the contemporary structures of rule creation, interpretation, adjudication, and enforcement. When new technologies are introduced into those structures, such as AI/machine learning or self-executing rules (e.g. smart contracts), new mediations necessarily enter the frame.

Read more

Computational Legalism vs. Critical Code Studies


Presented in the ‘slow science’ series at LSTS, the ideas in this presentation were the basis for the paper Interpreting the Rule(s) of Code: Performance, Performativity, and Production (2021)


Computational legalism refers to the ontological features of digital systems that make it impossible for humans to see, interpret, and contest the rules that they contain and impose. When software code structures our behavioural possibilities, it forces us to act ‘legalistically’, that is, like automatons: we don’t think, but simply act in accordance with the structures laid down by the code, because we have no other choice (unless the system’s designer has elected to give us one). This paper/discussion seeks to view and challenge computational legalism through the lens of critical code studies, the explication of a digital system’s meaning by the interpretive or hermeneutic analysis of its text, i.e. its source code. These texts play a fundamental role in the constitution of all digital systems, and so any analysis cannot be complete without at least some engagement at that level. Looking at digital systems from this angle is relevant to real-world processes of designing and producing code, and might help us to identify normative possibilities for combatting the vices of computational legalism.
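The ‘ruleishness’ described above can be made concrete with a trivial sketch (this code is purely illustrative and does not appear in the paper; the age-gate scenario is a hypothetical example). A bright-line rule hard-coded into software executes immediately and uniformly, leaving the person subject to it no built-in avenue for interpretation, context, or contestation:

```python
# Illustrative only: a hypothetical age-gate showing code's 'ruleishness'.
# The rule fires immediately and admits no interpretation or appeal --
# unless the designer has explicitly built in an alternative path.

LEGAL_AGE = 18  # a bright-line threshold, fixed at design time


def can_enter(age: int) -> bool:
    # No room for discretion or borderline cases: 17 years and 364 days
    # is simply False, with no mechanism for arguing otherwise.
    return age >= LEGAL_AGE


print(can_enter(17))  # prints: False -- the rule is enforced without exception
print(can_enter(18))  # prints: True
```

Reading such a rule at the level of its source text, rather than only experiencing its effects, is precisely the move that critical code studies proposes.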

Computational Legalism

(Originally posted on the COHUBICOL research blog.)

This post summarises computational legalism, a concept I developed in my doctoral thesis that is born of the parallel between code’s ruleishness – its reliance on strict, binary logic instead of interpretable standards – and its conceptual equivalent in the legal realm, known as legalism (more specifically the strong variant of the latter).

Read more

Normative Shortcuts and the Hermeneutic Singularity

(Originally posted on the COHUBICOL research blog.)

Legal normativity is an important theme for COHUBICOL, particularly how its nature might change when the medium that embodies it moves from text to code- and data-driven systems. Normativity is a useful concept in thinking about the role of law and of legal systems; it refers to the purposive force of (textual) legal instruments and rulings that, subject to their interpretation and potential contestation, require citizens to act (or not act) in certain ways.

Read more

Law as a User: Design, Affordance, and the Technological Mediation of Norms

L Diver, (2018) ‘Law as a User: Design, Affordance, and the Technological Mediation of Norms’ 15(1) SCRIPTed 4. View online


Technology law scholars have recently started to consider the theories of affordance and technological mediation, imported from the fields of psychology, human-computer interaction (HCI), and science and technology studies (STS). These theories have been used both as a means of explaining how the law has developed, and more recently in attempts to cast the law per se as an affordance. This exploratory paper summarises the two theories, before considering these applications from a critical perspective, noting certain deficiencies with respect to potential normative application and definitional clarity, respectively.

Read more

The law as (mere) user: affordance and the mediation of law by technological artefacts

Presented at TRILcon, University of Winchester, April 2018. This paper was later published as Law as a User: Design, Affordance, and the Technological Mediation of Norms.


Technology law scholars have recently begun to consider the design studies concept of affordance, bringing it into the legal fold both as a means to explain how the law has developed the way it has,[1] and more recently in attempts to cast the law per se as an affordance.[2]

Read more

Digisprudence: developing a legal-theoretical approach to compliance by design

Presented at BILETA 2018, University of Aberdeen.


The literature concerning the regulation of technology is rightly beginning to focus more on ex ante, or ‘by design’, enforcement. Despite almost two decades having passed since Lawrence Lessig’s Code was first published, the acceptance that ‘code’ (as opposed to law) has the power to regulate, and an appreciation of that power, is still developing, particularly since it requires a level of interdisciplinary openness to which the conservative legal world is occasionally hostile.

Read more

Opening the Black Box: Petri Nets and Privacy by Design

L Diver and B Schafer, (2018) ‘Opening the Black Box: Petri Nets and Privacy by Design’ 31(1) International Review of Law, Computers & Technology 68. View online


Building on the growing literature in algorithmic accountability, this paper investigates the use of a process visualisation technique known as the Petri net to achieve the aims of Privacy by Design. The strength of the approach is that it can help to bridge the knowledge gap that often exists between those in the legal and technical domains. Intuitive visual representations of the status of a system and the flow of information within and between legal and system models mean developers can embody the aims of the legislation from the very beginning of the software design process, while lawyers can gain an understanding of the inner workings of the software without needing to understand code. The approach can also facilitate automated formal verification of the models’ interactions, paving the way for machine-assisted privacy by design and, potentially, more general ‘compliance by design’. Opening up the ‘black box’ in this way could be a step towards achieving better algorithmic accountability.
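The intuition behind the Petri-net approach can be pictured with a minimal sketch. The toy model below (all names – `consent_given`, `process_data`, and so on – are hypothetical and not drawn from the paper, and the paper itself does not prescribe any implementation) shows the core mechanics: places hold tokens, and a transition can only fire when all of its input places are marked. Here, a ‘process data’ step is blocked until a consent token exists – the kind of legal constraint the visual notation makes legible to lawyers and developers alike.

```python
# Minimal Petri net sketch (illustrative only; place and transition
# names are hypothetical examples of a privacy-by-design flow).

class PetriNet:
    def __init__(self, marking):
        # marking: dict mapping place name -> token count
        self.marking = dict(marking)
        self.transitions = {}

    def add_transition(self, name, inputs, outputs):
        # inputs/outputs: place names consumed/produced when the transition fires
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        # A transition is enabled only if every input place holds a token
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1


# Toy flow: personal data may only be processed once a consent token exists.
net = PetriNet({"data_collected": 1, "consent_given": 0})
net.add_transition("obtain_consent", inputs=[], outputs=["consent_given"])
net.add_transition("process_data",
                   inputs=["data_collected", "consent_given"],
                   outputs=["data_processed"])

assert not net.enabled("process_data")  # blocked: no consent token yet
net.fire("obtain_consent")
net.fire("process_data")                # now permitted
```

Because the net is a formal object, the same model that serves as an intuitive diagram can also be checked mechanically – for instance, verifying that no reachable marking allows `process_data` to fire without a consent token – which is the kind of automated verification the paper points towards.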

From privacy impact assessment to social impact assessment

L Edwards, D McAuley, L Diver, (2016) ‘From privacy impact assessment to social impact assessment’ IEEE Security and Privacy Workshops (SPW) 53. View online


In order to address the continued decline in consumer trust in all things digital, and specifically the Internet of Things (IoT), we propose a radical overhaul of IoT design processes. Privacy by Design has been proposed as a suitable framework, but we argue the current approach has two failings: it presents too abstract a framework to inform design, and it is often applied after many critical design decisions have been made in defining the business opportunity. To rebuild trust we need the philosophy of Privacy by Design to be transformed into a wider Social Impact Assessment and delivered with practical guidance to be applied at product/service concept stage as well as throughout the system’s engineering.

Monkeying Around with Copyright – Animals, AIs and Authorship in Law

D Komuves, JN Zatarain, B Schafer, L Diver, (2015) ‘Monkeying Around with Copyright – Animals, AIs and Authorship in Law’ Internationales Rechtsinformatik Symposion (IRIS) 26. View online


Advances in artificial intelligence have changed the ways in which computers create “original” work. Analogies that may have worked sufficiently well in the past, when the technology had few if any commercially viable applications, are now reaching the limit of their usefulness. This paper considers a particularly radical thought experiment in relation to computer-generated art, challenging the legal responses to computer-generated works and discussing their similarity to works by animals.

A fourth law of robotics? Copyright and the law and ethics of machine co-production

B Schafer, D Komuves, JMN Zatarain, L Diver, (2015) ‘A fourth law of robotics? Copyright and the law and ethics of machine co-production’ 23(3) Artificial Intelligence and Law 217. View online


Jon Bing was not only a pioneer in the field of artificial intelligence and law and the legal regulation of technology. He was also an accomplished author of fiction, with an oeuvre spanning from short stories and novels to theatre plays and even an opera. As reality catches up with the imagination of science fiction writers who have anticipated a world shared by humans and non-human intelligences of their creation, some of the copyright issues he discussed in his academic capacity take on new resonance. How will we regulate copyright when robots are producers and consumers of art? This paper tries to give a sketch of the problem and hints at possible answers that are to a degree inspired by Bing’s academic and creative writing.

Would the current ambiguities within the legal protection of software be solved by the creation of a sui generis property right for computer programs?

L Diver, (2008) ‘Would the current ambiguities within the legal protection of software be solved by the creation of a sui generis property right for computer programs?’ 3(2) Journal of Intellectual Property Law & Practice 125. View online


Legal context: Software is an anomaly in the traditional sphere of IP, and its problematic nature has been manifest in the confused findings of courts on both sides of the Atlantic. This article considers the reasons for the confusion, where things might have been done better, and how the law could develop considering the realities of the industry.

Key points: Software protection at present favours the multinational corporations, while the interests of smaller companies and the Free and Open Source Software community are prejudiced greatly. The current regime is not fundamentally incompatible with software, however, and as such features of it could and should be retained in the creation of a sui generis IP right.

Practical significance: Much of today’s software industry is driven by the efforts of small enterprises and the Free and Open Source Software community. Their interests are not recognized in the current protection-biased framework, and as a result innovation is being stifled by the threat of litigation. IP law in this area is preventing the very thing it is designed to foster—enterprise and innovation.