Digital Agency in Archaeological Practice

Jeremy Huggett, University of Glasgow

Keywords: agency; automation; augmentation; digital; ethics


A key development in archaeology is the increasing agency of the digital tools brought to bear on archaeological practice. Roles and tasks that were previously thought to be uncomputable are beginning to be digitalised, and the presumption that computerisation is best suited to well-defined and restricted tasks appears to be starting to break down. What are the implications of this shift to algorithmic, automated practices for archaeology? Are there limits to algorithmic agency within archaeology? How might our practice change in the light of contemporary technical and social developments in computing and allied fields?


This paper considers the gamut of digital tools and technologies, along with the varying degrees of trust, reliance, and expectation placed upon them as they become increasingly complex. These range from the simple use of a database, through GIS in support of analysis and interpretation, to the linking and amalgamation of datasets into data-driven meta-analyses and their incorporation in machine-learning tools, to the range of automated devices operated by archaeologists (remotely controlled terrestrial and aerial drones, and autonomous surface and underwater vehicles), to laboratory-based robotic devices and semi-autonomous bio-mimetic or anthropomorphic robots. Many of these devices seek to augment archaeological practice by reducing routinised and repetitive work in the office environment and in the field. Others augment practice by developing data-driven methods which represent, store, and manipulate information in order to undertake tasks previously thought to be uncomputable or incapable of being automated. Still others augment practice by substituting for the human component in environments which would otherwise be inaccessible or dangerous. Whichever applies, separately or in combination, such technologies are typically seen as black-boxing practice, with often little or no human intervention beyond the allocation of their inputs and the subsequent incorporation of their outputs in analyses. These tools and techniques can be seen as effectively acquiring agency through their assumption of a share of the cognitive load associated with tasks, or through enabling tasks which were previously not possible, in the process potentially changing the relationship between device and user.

The paper will address these issues by first characterising the nature of digital agency, considering the extent to which such tools can be ascribed agency (e.g. Applin & Fischer 2015; Broussard 2018), and questioning the development of autonomous action in archaeological devices. In the process, it will seek to establish the arena for development and the scope of application of these tools: is it possible, for example, to define appropriate task areas in terms of suitable criteria? It will then examine the way in which these digital devices can become normalised within practice and address the effects of such normalisation: the extent to which they are perceived as neutral, trustworthy and authoritative, for example, and the concomitant risks of hidden machine bias (e.g. Hill 2018) and automation bias (e.g. O’Neil 2016; Pasquale 2015), which can create a mixture of vicious and virtuous circular relationships between human users and the digital devices they employ. The paper will then consider ways in which the range of complexities inherent in such devices can be suitably mitigated in practice in order to address the challenges of fetishisation, habituation and enchantment (Smith 2018) associated with the increasing ubiquity and pervasiveness of these devices.


Through the preceding investigation, key ethical areas will be identified as digital devices that act agentially alongside us, on our behalf, or in our place are increasingly employed in archaeological practice. Where these ethics best sit will be discussed: for example, is ethical behaviour best managed implicitly within devices through their design and programming, or can the devices themselves behave ethically by making explicit judgements on their own account (e.g. Moor 2006)? Where, then, does the ethical responsibility for their practice lie? Part of the answer to these questions will lie in ensuring that the archaeologist remains an active participant in, rather than an observer of, practice by adopting a variant of the digital participatory turn (e.g. Gubrium et al. 2015). This essentially seeks to reconfigure the digital divide between human archaeologist and machine, ensuring the human component retains critical, influential and strategic oversight of the device, thereby acting as a moderating ‘archaeologist-in-the-loop’ in practice.


Applin, Sally A. and Michael D. Fischer. 2015. “New technologies and mixed-use convergence: How humans and algorithms are adapting to each other.” IEEE International Symposium on Technology and Society (ISTAS), Dublin, pp. 1-6.

Broussard, Meredith. 2018. Artificial Unintelligence: How Computers Misunderstand the World. Cambridge MA: MIT Press.

Gubrium, Aline C., Krista Harper and Marty Otañez (editors). 2015. Participatory Visual and Digital Research in Action. New York: Routledge.

Hill, Robin K. 2018. “Assessing Responsibility for Program Output.” Communications of the ACM 61 (8): 12-13.

Moor, James H. 2006. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems 21 (4): 18-21.

O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Allen Lane.

Pasquale, Frank. 2015. The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge MA: Harvard University Press.

Smith, Gavin J.D. 2018. “Data doxa: The affective consequences of data practices.” Big Data & Society 5 (1): 1-15.