Publications

Maynard Smith revisited: A multi-agent reinforcement learning approach to the coevolution of signalling behaviour

Published in PLOS Computational Biology, 2025

The coevolution of signalling is a complex problem within animal behaviour, and is also central to communication between artificial agents. The Sir Philip Sidney game was designed to model this dyadic interaction from an evolutionary biology perspective, and was formulated to demonstrate the emergence of honest signalling. We use Multi-Agent Reinforcement Learning (MARL) to show that in the majority of cases, the resulting behaviour adopted by agents is not that shown in the original derivation of the model. This paper demonstrates that MARL can be a powerful tool to study evolutionary dynamics and understand the underlying mechanisms of learning over generations; particularly advantageous are the interpretability of this type of approach and the fact that it allows us to study emergent behaviour without constraining the strategy space from the outset. Although the game was originally formulated to exemplify honest signalling, we show that it provides no incentive for such behaviour. In the majority of cases, the optimal outcome is one that does not require a signal for the resource to be given. This type of interaction is observed within animal behaviour, and is sometimes denoted proactive prosociality. High learning rates and low discount rates of the reinforcement learning model prove optimal for reaching the outcome that maximises both agents’ reward, and proximity to the given threshold leads to suboptimal learning.
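
To make the setup concrete, here is a minimal sketch of the kind of experiment the abstract describes: two independent tabular Q-learners playing a one-shot Sir Philip Sidney game. This is not the paper's implementation; the payoff parameters (P_NEEDY, A, B, C, D, the relatedness K) and the learning settings are illustrative assumptions, and the full model is sequential where this toy version is a single-step interaction.

```python
import random

# Illustrative Sir Philip Sidney payoffs (assumed values, not the paper's):
# the signaller is needy with probability P_NEEDY; it survives with
# probability 1 if it receives the resource, else 1-A (needy) or 1-B (healthy).
# The donor survives 1-D if it donates and 1 otherwise. Signalling costs C.
# K is the relatedness coefficient weighting the partner's survival.
P_NEEDY, A, B, C, D, K = 0.5, 0.8, 0.2, 0.1, 0.3, 0.6
ALPHA, EPS, EPISODES = 0.5, 0.1, 50_000  # a high learning rate, per the abstract

# Tabular Q-values: the signaller conditions on its own state,
# the donor conditions on whether a signal was observed.
q_sig = {(s, a): 0.0 for s in ("needy", "healthy") for a in ("signal", "quiet")}
q_don = {(o, a): 0.0 for o in ("signal", "quiet") for a in ("give", "keep")}

def greedy(q, state, actions):
    """Epsilon-greedy action selection over a tabular Q function."""
    if random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

for _ in range(EPISODES):
    state = "needy" if random.random() < P_NEEDY else "healthy"
    sig_act = greedy(q_sig, state, ("signal", "quiet"))
    don_act = greedy(q_don, sig_act, ("give", "keep"))

    # Survival under each joint action, minus the signal cost if paid.
    sig_survive = 1.0 if don_act == "give" else 1.0 - (A if state == "needy" else B)
    sig_survive -= C if sig_act == "signal" else 0.0
    don_survive = 1.0 - D if don_act == "give" else 1.0

    # Inclusive-fitness rewards: own survival plus K times the partner's.
    r_sig = sig_survive + K * don_survive
    r_don = don_survive + K * sig_survive

    # One-step Q-updates for each independent learner.
    q_sig[(state, sig_act)] += ALPHA * (r_sig - q_sig[(state, sig_act)])
    q_don[(sig_act, don_act)] += ALPHA * (r_don - q_don[(sig_act, don_act)])

print({k: round(v, 3) for k, v in q_don.items()})
```

Under parameter settings like these, inspecting the learned Q-tables shows directly which convention the pair has converged on, e.g. whether the donor gives regardless of the signal; that interpretability is the advantage the abstract highlights.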

Download here

(Ir)rationality in AI: State of the Art, Research Challenges and Open Questions

Published in Artificial Intelligence Review, 2025

The concept of rationality is central to the field of artificial intelligence (AI). Whether we are seeking to simulate human reasoning, or trying to achieve bounded optimality, our goal is generally to make artificial agents as rational as possible. Despite the centrality of the concept within AI, there is no unified definition of what constitutes a rational agent. This article provides a survey of rationality and irrationality in AI, and sets out the open questions in this area. We consider how the understanding of rationality in other fields has influenced its conception within AI, in particular work in economics, philosophy and psychology. Focusing on the behaviour of artificial agents, we examine irrational behaviours that can prove to be optimal in certain scenarios. Some methods have been developed to deal with irrational agents, in terms of both identification and interaction; however, work in this area remains limited. Methods that have up to now been developed for other purposes, namely adversarial scenarios, may be adapted to suit interactions with artificial agents. We further discuss the interplay between human and artificial agents, and the role that rationality plays within this interaction; many questions remain in this area, relating to potentially irrational behaviour of both humans and artificial agents.

Download here

Game-theoretic agent-based modelling of micro-level conflict: Evidence from the ISIS-Kurdish war

Published in PLOS ONE, 2024

This article delves into the dynamics of a dyadic political violence case study in Rojava, Northern Syria, focusing on the conflict between Kurdish rebels and ISIS from January 1, 2017, to December 31, 2019. We employ agent-based modelling and a formalisation of the conflict as an Iterated Prisoner’s Dilemma game. The study provides a nuanced understanding of conflict dynamics in a highly volatile region, focusing on the microdynamics of an intense dyadic strategic interaction between two near-equally-powered actors. Though a classical approach, a model based on the Iterated Prisoner’s Dilemma offers substantial insight, as it effectively captures dyadic, equally matched strategic interactions in conflict scenarios. The investigation primarily reveals that shifts in territorial control are more critical than geographical or temporal factors in determining the conflict’s course. Further, the study observes that the conflict is characterised by periods of predominantly one-sided violence. This pattern underscores that the distribution of attacks and the choice of targets are a more telling indicator of the conflict’s nature than specific behavioural patterns of the actors involved. Such a conclusion aligns with the strategic implications of the underlying model, which emphasises the outcome of interactions based on differing aggression levels. This research not only sheds light on the conflict in Rojava but also reaffirms the relevance of this type of game-theoretical approach in contemporary conflict analysis.
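
The core formalisation can be sketched in a few lines. The following is a toy version, not the article's calibrated model: each actor's per-round probability of defecting ("attacking") is a fixed aggression level, and the payoff matrix is the textbook Prisoner's Dilemma; both are illustrative assumptions.

```python
import random

# Standard Prisoner's Dilemma payoffs for the row player: T > R > P > S.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_ipd(aggression_a, aggression_b, rounds=1000, seed=0):
    """Iterate the PD between two agents whose probability of defecting
    each round is a fixed aggression level -- an illustrative stand-in
    for the article's empirically calibrated actor behaviour."""
    rng = random.Random(seed)
    score_a = score_b = 0
    for _ in range(rounds):
        act_a = "D" if rng.random() < aggression_a else "C"
        act_b = "D" if rng.random() < aggression_b else "C"
        score_a += PAYOFF[(act_a, act_b)]
        score_b += PAYOFF[(act_b, act_a)]
    return score_a, score_b

# An asymmetric pairing yields the kind of predominantly one-sided
# pattern of violence described in the abstract.
print(play_ipd(aggression_a=0.8, aggression_b=0.2))
```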

Download here

(Ir)rationality and Cognitive Biases in Large Language Models

Published in Royal Society Open Science, 2024

Do large language models (LLMs) display rational reasoning? LLMs have been shown to contain human biases due to the data they have been trained on; whether this is reflected in rational reasoning remains less clear. In this paper, we answer this question by evaluating seven language models using tasks from the cognitive psychology literature. We find that, like humans, LLMs display irrationality in these tasks. However, the way this irrationality is displayed does not reflect that shown by humans. When LLMs give incorrect answers to these tasks, they are often incorrect in ways that differ from human-like biases. Moreover, the LLMs reveal an additional layer of irrationality in the significant inconsistency of their responses. Aside from the experimental results, this paper seeks to make a methodological contribution by showing how we can assess and compare different capabilities of these types of models, in this case with respect to rational reasoning.
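
The general shape of such an evaluation is easy to illustrate. This sketch is not the paper's protocol: it uses a single classic task (the Linda conjunction-fallacy problem of Tversky and Kahneman), the OpenAI chat client as a stand-in for the seven models actually tested, and an assumed model name. Repeated sampling of the same prompt measures both the modal answer (human-like bias or not) and the spread of answers (inconsistency).

```python
from collections import Counter
from openai import OpenAI  # any chat-completion client would do

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The Linda problem: option 1 is correct; humans typically commit the
# conjunction fallacy by choosing option 2.
PROMPT = (
    "Linda is 31, single, outspoken, and very bright. She majored in "
    "philosophy and was deeply concerned with discrimination and social "
    "justice. Which is more probable? Answer with '1' or '2' only.\n"
    "1. Linda is a bank teller.\n"
    "2. Linda is a bank teller and is active in the feminist movement."
)

def sample_answers(model="gpt-4o-mini", n=20):
    """Query the same task repeatedly: the modal answer indicates bias,
    the spread of answers indicates inconsistency."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
            temperature=1.0,
        )
        answers.append(resp.choices[0].message.content.strip()[:1])
    return Counter(answers)

print(sample_answers())  # hypothetical output: Counter({'1': 14, '2': 6})
```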

Download here