Melanie Smallman, Cian O’Donovan, James Wilson and Jack Hume
Data ethics in an emergency

Has data ethics been a casualty of COVID-19? Data have played a central role in how we understand, mitigate and adapt to COVID-19. For instance, they were critical to the work of new public infrastructures such as vaccine certification systems and test and trace systems. Aggregated data about individuals provided the basis for priority shielding lists that protected people deemed vulnerable to COVID-19, and also remade the very categories of vulnerability on which decisions to recommend or enforce their shielding and isolation depended. But what happens in emergencies, when urgency trumps careful deliberation? In this chapter, we aim to understand how ethics advice featured in decision-making and in the governance arrangements of data use in such situations, arguing that a set of ‘emergency data ethics’ is needed to help guide thinking in a future emergency.

Over the past five or more years, as the power of data-driven technologies has become increasingly apparent, data ethics has emerged as an interdisciplinary field, drawing upon important ideas from bioethics, particularly around the rights of the individual. Notwithstanding the plurality of ethical guidelines, reviews of guidelines for ethical AI and data technologies published prior to the coronavirus (COVID-19) pandemic described a strong convergence around five key ethical principles: transparency; justice and fairness; non-maleficence; responsibility and accountability; and privacy (Jobin et al., 2019; Fjeld et al., 2020; Hagendorff, 2020).

The pandemic has, however, seen data used in new and accelerated ways, and in a very different context. The urgent need to find public health measures to reduce the spread of the disease and to develop new vaccines and new treatments has been coupled with the promise that more data can help (Wood et al., 2019; Davalbhakta et al., 2020). In the UK in particular, this increased collection and use of data – specifically via the initially planned NHS contact tracing app – raised serious concerns about users’ privacy, as the app would have gathered and stored anonymised data on a central NHS database. The Ethics Advisory Group to NHSX on the COVID-19 Contact Tracing App put forward a series of ethical issues to be considered in the development of the app, highlighting the importance of value, impact, security and privacy, accountability, transparency and control in securing public trust (Ethics Advisory Group, 2020).1

However, for some scholars, the increased possibility for surveillance of citizens afforded by these apps pushed ethical concerns further, raising issues about human rights and privacy (Sekalala et al., 2020a, 2020b), while others have argued that there is a danger that heightened levels of surveillance become more publicly acceptable (Couch et al., 2020) in a context in which capacity for critical scrutiny is disarmed (Philip and Cherian, 2020).

In medical and research ethics, there have been important discussions about the appropriateness or sufficiency of traditional research ethics in the context of a pandemic. Research ethics aims to protect research participants from any unacceptable risks presented by new treatments or the research process, as set out in the WMA (World Medical Association) Declaration of Helsinki – Ethical Principles for Medical Research Involving Human Subjects, 1964 (WMA, 1964). A life-threatening pandemic raises new questions about what risks should count as ‘unacceptable’: a prominent example was the conduct of challenge trials, in which participants were deliberately infected with COVID-19 (Killingley et al., 2022). Arguably, the widespread presence of a novel infectious disease calls for lower levels of risk aversion towards untested treatments as we try to expand our arsenal of medical defence against a new disease (Edwards, 2013). In recognition of that, and of the pressures that pandemics put upon researchers, in 2016 the World Health Organization (WHO) released specific Guidance for Managing Ethical Issues in Infectious Disease Outbreaks – a set of guidelines for conducting ethical research during a pandemic (World Health Organization, 2016). These guidelines acknowledge that ‘during an infectious disease outbreak there is a moral obligation to learn as much as possible as quickly as possible, in order to inform the ongoing public health response, and to allow for proper scientific evaluation of new interventions being tested’.

In this chapter, we argue that we need a similar debate and set of guidelines for ethical data use in an emergency such as a pandemic. By examining how data have been used during COVID-19 in the UK, we reflect on the value of the ethical principles that currently govern the ethical use of data and consider whether they have been sufficient during the pandemic.

COVID-19 and data in the UK

To begin reviewing the effects of data on the pandemic, we examined third-party timelines of policy decisions relating to the pandemic in England – for instance, the Institute for Government timeline and supporting data (Institute for Government, 2021). We aimed to identify key ‘moments’ where data played prominent or particular roles. This flagged up three ‘data episodes’, which we investigated further and briefly describe below. These ‘data episodes’ were selected to help us understand the different ways that data were involved and used in the pandemic, rather than to give an historical account of the pandemic. We have described the time period in which these episodes were situated for contextual purposes, but it is important to note that these are not necessarily discrete events. The timescales in which these episodes (and their effects) occurred are complex, intertwined and far-reaching – factors that generate many important, and perhaps novel, ethical considerations.

Episode 1: the data pandemic (March 2020)

On 16 March 2020, the then British Prime Minister Boris Johnson began a series of daily press conferences to update people on the COVID-19 pandemic. In his very first press conference, he explained that ‘our objective is to delay and flatten the peak of the epidemic by bringing forward the right measures at the right time, so that we minimise suffering and save lives’ and that ‘it looks as though we’re now approaching the fast growth part of the upward curve’ (UK Government, 2020c).

Four days later, on 20 March 2020, more than 600 public health, epidemiology and medical experts sent a letter to the government arguing that data, models and experience elsewhere were clearly pointing to a need for urgent measures to stop the spread of COVID-19 (Iacobucci, 2020). Infographics of the model produced by Imperial College, showing the predicted exponential growth of COVID-19 cases with and without lockdown measures, spread across social media. On 23 March 2020, Boris Johnson announced the first UK-wide lockdown, which would come into force a few days later on 26 March 2020. Dramatically restricting people’s movement and social contact, the Prime Minister explained that the purpose of this lockdown was to protect the NHS: ‘To put it simply, if too many people become seriously unwell at one time, the NHS will be unable to handle it – meaning more people are likely to die, not just from Coronavirus but from other illnesses as well’ (UK Government, 2020b). From the very start, this was a data pandemic, with graphs, infographics and figures driving public concern and, in turn, government action.

Reinforcing this, in March 2020, a team from the digital arm of the NHS (NHSX) was assembled and tasked with developing a contact tracing app. The app ‘appeared to be at the very centre of the government’s strategy to beat coronavirus and help us all emerge from lockdown’ (Cellan-Jones, 2020). Using mobile phones’ Bluetooth functionality, the app would keep a record of any other users that the phone came into contact with, sending an alert and an instruction to quarantine if a close contact tested positive. Importantly, this NHS app would have gathered and stored anonymised data on a central NHS database. Even before the app was trialled on the Isle of Wight in May 2020, concerns about data security and privacy were prevalent. In June 2020, plans to develop the NHS’s own app were dropped in favour of one based on technology provided by Google and Apple – ostensibly because this technology was more reliable, although this version of the app also had the privacy benefit of not passing data on to a single official database (Wise, 2020). During the first year after its launch, the app was used regularly by approximately 16.5 million users (28 per cent of the total population), sending approximately 1.7 million exposure notifications. The privacy-preserving features of the app made measuring its effects difficult, but Wymant et al. (2021) estimate that, between 24 September 2020 and the end of December 2020, it reduced case numbers in England and Wales by between 108,000 and 914,000.

Alongside these new forms of data collection, existing regulations and guidance on the use of confidential patient information were relaxed on a temporary basis. Under the Health Service (Control of Patient Information) Regulations 2002 (COPI), a COPI notice was sent to a wide range of NHS bodies on 20 March 2020, requiring them to process and make available confidential information to support the COVID-19 response. This had the effect of setting aside the common law duty of confidentiality where data were processed by health professionals for COVID-19 purposes during the time that the COPI notices remained in force (they expired on 30 June 2022).

The COPI notices provided the legal basis for the setting up of the NHS COVID Data Store, in March 2020, which brought ‘together multiple data sources from across the health and care system in England into a single, secure location’. Creating such a centralised NHS data store ‘would have taken years under normal circumstances’, but under the COPI notices, ‘the data store was established at pace, while protecting the privacy of our citizens’ (NHS Transformation Directorate, undated). The US tech company Palantir, known for its work on behalf of the CIA, played a key role in setting up and running the backend for the data store. Palantir was initially contracted to work for a fee of £1, with its contract extended for four months in June 2020 for a further £1 million. The contract was extended again, in December 2020, for another two years, this time for £23 million, without at any stage undergoing a formal tendering process (Downey, 2020).

Adding to the data involved in the government’s response to the pandemic, in mid-April 2020, the UK’s Office for National Statistics (ONS), working with the University of Oxford, IQVIA, the Lighthouse Laboratory at Glasgow, UKHSA, the University of Manchester and the Wellcome Trust, piloted a coronavirus (COVID-19) infection survey, inviting 20,000 people in England to provide blood and swab samples at regular intervals. In August 2020, this sample size was expanded, and from October 2020 the survey was extended to the whole of the UK, such that by the spring of 2022 it was collecting and analysing 120,000 blood samples every month. As well as being used to estimate the rate of transmission of the infection, the survey provided sociodemographic information about those contracting COVID-19.

Episode 2: care homes scandal (May 2020)

By 15 May 2020, press conferences from 10 Downing Street featuring politicians and senior government science advisers had become a daily feature of pandemic life. That day, the Secretary of State for Health and Social Care, Matt Hancock – standing in for the Prime Minister and announcing a new support package for the social care sector – assured the public that a ‘protective ring’ had been placed around UK care homes (Merrick, 2021). He was responding to concerns that the government’s stated priority to protect the NHS had resulted in neglect elsewhere across other public sectors – specifically in social care.

In particular, in March 2020, thousands of older people had been discharged from hospital, many ending up in care homes without having been tested for COVID-19. Gathering and sharing data became vital when decisions about the allocation of PPE, enforcing action on care home staff and restricting visitors were being made, not least because care homes were housing and looking after residents who were typically older and less healthy than the general population, and so more vulnerable to the most severe effects of COVID-19. Yet basic data about who lived in which care home were missing. This is perhaps unsurprising given that, while care homes are information- and data-rich environments, the sector had experienced decades of neglect, underfunding and being undervalued. In addition, different stakeholders and different care homes had different information requirements. But this absence of data created fresh vulnerabilities for (and effectively rendered invisible) an already extremely vulnerable population. In the first wave, COVID-19 was the greatest cause of death in care homes, and deaths in care homes occurred at greater frequency than in any other institutional setting (Dyer, 2022). However, because of insufficient testing at the time and an absence of data about who was in which care home, it is likely that the exact figures of COVID-19-related infections and deaths – and whether Hancock’s ‘protective ring’ had been effective – will never be known.

Since then, some social care data infrastructure has improved. The ONS began to report (on an experimental basis) the numbers of self-funded people in care homes on 15 October 2021; and the Capacity Tracker, which was used by administrators to locate available care home beds, was adapted to monitor levels of infection, PPE and, ultimately, vaccination nationally. Nevertheless, even by the summer of 2022, no UK country was able routinely to identify who was resident in care homes, who was receiving social care at home or who was working in or visiting a care home or the home of someone receiving care in the community (O’Donovan et al., 2021; Burton et al., 2022). Moreover, comprehensive data on the case mix and needs of residents are still absent. Simply put, even as the pandemic recedes, the government does not know who is in care homes, where they are or what risks they face.

Episode 3: It ain’t over ’til it’s over: the end of the pandemic? (February 2021–22)

In February 2021, as the COVID-19 vaccine rollout was reaching the majority of the UK population and the Prime Minister faced mounting pressure from his own party to ease restrictions, the Westminster government stated in its ‘Roadmap out of lockdown’ (22 February 2021) that it would be ‘guided by data, not dates’. Shortly afterwards, the UK saw the arrival of the delta and then the omicron variants of the virus which, although significantly more transmissible and responsible for escalating infection rates, ultimately caused – in combination with the effectiveness of the vaccines – a lower proportion of deaths than the initial strain.

The soaring number of cases that came with the initial relaxation of lockdown measures in the summer of 2021 resulted in what became known as the ‘pingdemic’, when more than 500,000 alerts and instructions to self-isolate were sent out via the contact tracing app in the week ending 7 July 2021. Coming at a time when the seven-day rolling death rate was under 30 per day,2 this disruption led many individuals to delete the app.

The government launched a ‘Living with COVID-19’ plan on 21 February 2022. This involved a phased reduction or elimination of various COVID measures, including dropping asymptomatic testing in schools (on 21 February 2022); removing the requirement for routine contact tracing and self-isolation after a positive test result (on 24 February 2022); removing powers from local authorities to process data under the COPI regulations (which lapsed on 1 July 2022); and removing free testing (asymptomatic and symptomatic) for the general public (on 1 April 2022) (see Mehlmann-Wicks, 2022; GOV.UK, 2022).

With the end of free testing, of the test and trace system and of the legal duty to report cases of COVID-19, the most significant source of data about the spread and levels of COVID-19 infections was effectively disrupted, and the official government COVID data reports3 moved from daily to weekly updates on 1 July 2022, the same day on which the COPI regulations lapsed. Nevertheless, COVID cases continued to rise, with more than 200,000 new cases reported on 24 March 2022,4 and deaths within 28 days of a positive test remained high, averaging over 200 per day for much of April 2022.

Concurrent with the relaxation of COVID regulations, one of the biggest scandals in British politics (‘Partygate’) was unfolding. Between December 2021 and January 2022, allegations emerged that the Prime Minister and staff at Downing Street had broken the government’s own COVID rules by holding a series of parties during lockdown. In December 2021, civil servant Sue Gray was tasked with investigating these allegations, after the Cabinet Secretary, the Prime Minister’s initial choice to lead the inquiry, had to recuse himself when one of the parties was revealed to have occurred in his private office. Public outrage about the allegations coincided, in January 2022, with growing concern among right-wing Conservative MPs about the number of restrictions on freedoms still being imposed through the COVID-19 regulations, and a number of Conservative backbenchers began to call for Boris Johnson’s resignation. On 25 January 2022, the Metropolitan Police also announced a criminal investigation into the Downing Street parties. On 13 April 2022, Boris Johnson and his wife Carrie were served with fixed penalty notices for breaking lockdown rules, and Johnson was eventually forced to resign as Conservative Party leader on 7 July 2022.


These three ‘data episodes’ raise a number of questions about whether the values standardly discussed in data ethics are sufficient to guide us in a pandemic.

Ethics advice in an emergency and the risk of ‘ethics washing’

At a practical level, the standardly discussed data ethics values have been adopted in the UK government’s Data Ethics Framework (UK Government, 2020a), which specifically highlights the values of transparency, accountability and fairness. But neither this framework, nor the principles developed specifically by the ethics advisory board for the NHS COVID-19 app, nor the government’s Pandemic Influenza Ethical Framework appears to have been made use of in government decision-making during the pandemic (Gadd, 2020). It was therefore unclear whether, and which, ethical considerations were given weight within government decision-making.

This is problematic in the first instance because, in the UK, there are clear rules to ensure that those in public life act in the public interest, enshrined in the ‘Nolan Principles of Public Life’: Selflessness, Integrity, Objectivity, Accountability, Openness, Honesty and Leadership. The lack of transparency about whether and how ethical frameworks were used arguably fell short of the Openness and Accountability called for by the Nolan Principles, and of the Good Decision-making principle of the ‘Pandemic Influenza Ethical Framework’.

Beyond that, however, there is also a risk that ethics advice could uphold the very logic that produces unethical effects, by providing the possibility of ethical cover and compliance. Taking ethical advice could thus be a way to give the impression that actions are ethically justifiable. While we argue that a framework for data ethics in an emergency is needed, being able to evaluate whether and how such frameworks and sources of ethics advice are used is important too.

Time, scale, data infrastructure and democracy

Measures that might appear appropriate during a pandemic may become problematic if they are incorporated into everyday life – perhaps, as some have argued, risking gently moving us towards a less democratic and more authoritarian state (Cooper and Aitchison, 2020; Thomson and Ip, 2020). Furthermore, the impacts on particular groups that have already been well described elsewhere (for instance, Hendl et al., 2020) are likely to be compounded by the persistence of COVID surveillance infrastructure. Considering questions of time and persistence seems vital – but was seemingly absent from the pandemic response – when looking at infrastructures that affect large numbers of people and institutions, such as the test and trace system and the contact tracing app (discussed in Episode 1 above).

Arguably, existing ethical frameworks could incorporate these elements. For instance, time in the form of intergenerational fairness would seem to be a reasonable consideration within the ‘fairness’ agenda. But, in the absence of a structured way of ensuring time and scale are factored into ethical considerations, the point at which the trade-off between privacy and public health becomes less favourable for the majority of the public becomes difficult to identify. As a result, the data infrastructure put in place and the data collected during COVID-19 risk taking on a permanence. For example, while the COPI notices lapsed at the end of June 2022, the appetite for large-scale health data infrastructure built by commercial companies persisted. In April 2022, NHS England announced a £240 million contract for a Federated Data Platform for the NHS, the value of which was upgraded to £360 million in July 2022.

Scheuerman (2006) has written about the problem of ending emergency powers in relation to the additional powers acquired by the US President in the aftermath of 9/11. He argues that the distinction between ordinary and emergency law easily becomes blurred, while the mechanisms for containing emergency powers are unclear. Scheuerman advocates for legal processes to constrain and potentially end any emergency powers (for instance requiring increasing super-majorities in parliament to support extensions of temporary powers). Here, we add that these questions should also be built into ethical evaluations of data collection and use.

Role of data/absence of data in creating vulnerabilities

As we have described, concerns about privacy and over-surveillance appear to have been at the fore of ethical discussions and frameworks relating to COVID, and the contact tracing app in particular. However, the care homes episode shows very clearly that the opposite (under-surveillance) could present a risk too – an issue that was neglected in ethical considerations.

This danger presented itself on two fronts: first, as an ongoing absence of data about the health, wellbeing and whereabouts of residents and staff in care homes; second, as an absence of appropriate data sharing and linkage between care home providers, the care sector more broadly and other public domains which relied upon such data to make decisions and to evaluate the impact of those decisions in real, or close to real, time.

Sustained absences in these data were especially pernicious in social care, as they fed into decisions about the distribution of resources, responsibilities and obligations between markets, the welfare state, voluntary sectors and families. Without these data, decisions about who is deemed to require or deserve care, about how resources such as PPE should be distributed and how the effectiveness of decisions can be monitored and measured, were very difficult, if not impossible.

A renewed data ethics for pandemics that is attentive to these issues would trace how data systems simultaneously tackle and sustain diverse vulnerabilities, while at the same time reflexively addressing unintended systemic impacts.

Role of data in co-producing authority to act

Looking at all three data episodes, it becomes apparent that data played a role in providing the authority to act: First, at the start of the pandemic, the Prime Minister drew upon figures and statistics based on the data to build a picture of the pandemic as a growing threat that required him to introduce stringent measures to limit people’s personal freedoms. For instance, his daily press conferences typically opened with a series of slides showing graphs and figures, which journalists and the public were invited to query and share. We argue that at the start of the pandemic, given the serious restrictions that needed to be imposed by a Conservative leader who was politically disposed to increase, not limit, freedoms, data provided additional legitimacy to act, beyond that provided by political arrangements at the time. The authority and legitimacy to act were co-produced through the sense of objectivity that data offered, along with the political powers of the office of prime minister.

Second, during the care-home episode, the absence of data was implicated in the neglect of care home residents; without the data to back up the need to act, residents, family members and staff on the ground had little by way of authority to redirect centralised resource distribution to care homes, or to engage in democratic challenges to the story emerging at government level of a ‘protective ring’ having been constructed around the sector.

Others have previously described the power of numbers and how data and political arrangements co-produce power and legitimacy (for instance Ezrahi, 1990; Porter, 1996). Looking at the third episode (the end of the pandemic), we argue that it was the complex and co-productive relationship between data, politics, authority and legitimacy – and a prime minister under threat and needing to consolidate power – that played the key role in creating the sense that the pandemic was over in the UK. For most Britons, the COVID-19 pandemic ended in the spring of 2022, when restrictions were lifted and test and trace stopped. Yet, in June 2022, the British Medical Journal warned that one in fifty people in the UK had tested positive that week (Wise, 2022). Even in the well-vaccinated UK, the pandemic was far from over. Despite claims to be guided by ‘data, not dates’, we argue that when the Prime Minister’s authority was challenged after the ‘Partygate’ scandal, he wrested back power by tipping the balance of the co-productive relationship towards his political authority, ending the collection and publication of key data – most notably the daily numbers of new infections. In the absence of these data, the authority and legitimacy to act – and therefore decision-making power – was produced by the position of prime minister alone.


In this chapter we have identified and described three key ‘episodes’ in the COVID-19 pandemic where data played a key role – and raised significant ethical issues. In so doing, we have shown that during the pandemic, emergency measures were intimately linked with the collection and analysis of data at an accelerated pace. Data, therefore, formed a key part of the logic by which power was wielded over the public. Moreover, the authority given by the seeming objectivity of data was sufficiently powerful to enable the British Prime Minister and government to enact, and then repeal (arguably too quickly), severe restrictions on civil liberties in the UK.

We have drawn attention to the range of consequences this has had for our lives as citizens. Some of these consequences are acutely ethical – increased surveillance capability; invisibility and neglect; the placement of power and political authority to act; and the potential to ‘ethics wash’ – yet they typically escape the scrutiny of ‘traditional’ data ethics frameworks. We therefore conclude that a set of ethical guidelines for the use of data in an emergency needs to be developed to help improve the use of data in future pandemics.

In particular, these emergency data ethics need to take account of how the sense of objectivity, and therefore the authority to act, are co-produced by data and political arrangements; the effect of data gaps in creating vulnerabilities; and the need for a mechanism for structuring ethical thinking to look at the timescale of the pandemic and to consider the conditions under which the emergency will have ended and normal regulations should once again apply. This last point is not inconsiderable given that, in the UK at least, data and the sense of the pandemic were intertwined, with the end of the pandemic being produced by the end of data collection and reporting, rather than by the end of infections.

Finally, any emergency ethics framework needs concrete principles and clear ways of evaluating and accounting for their uptake since, as we have shown, ethics frameworks may prove ineffectual if they are too abstract and indeterminate, allowing multiple plausible interpretations and accommodations. For instance, the issues we have raised here, such as the long-term effects of data collection, could have been accommodated within ideas around privacy and security, but appear not to have been. More concrete principles, and ways of evaluating their uptake, seem necessary in the face of creeping powers and surveillance during a pandemic.

In saying this, we are not denying that data – and indeed emergency powers – can play a legitimate role in achieving public safety during a pandemic. But we argue that emergency contexts set different standards for data ethics and government transparency, because emergencies give rise to uncomfortable decisions about whose wellbeing ought to be prioritised and how. Neither the ‘trust-based’ reasons for transparency, common to some public health discourse, nor the ‘rights-based’ approach taken by traditional data ethics, may be sufficient in the context of a pandemic to ensure that a desirable balance is struck between health benefits, erosions of privacy and costs to democracy.


The chapter was supported by the UK Pandemic Ethics Accelerator, Data Use workstream, grant number AH/V013947/1.


1 This ethics advisory board was disbanded when the UK government made the decision to move from an NHS developed app to one based on Google/Apple technology.
2 Deduced by comparing the official deaths data with the COVID-19 app dataset.
4 Infection rates as officially reported.


Burton, J. K. et al. (2022) ‘Developing a minimum data set for older adult care homes in the UK: exploring the concept and defining early core principles’, The Lancet Healthy Longevity, 3(3): e186–e193.
Cellan-Jones, R. (2020) ‘Coronavirus: what went wrong with the UK’s contact tracing app?’ BBC News, 20 June. Available at: (accessed 16 August 2022).
Cooper, L. and Aitchison, G. (2020) ‘The dangers ahead: COVID-19, authoritarianism and democracy’, London School of Economics. Available at: (accessed 16 August 2022).
Couch, D. L., Robinson, P. and Komesaroff, P. A. (2020) ‘COVID-19: extending surveillance and the panopticon’, Journal of Bioethical Inquiry, 17(4): 809–814.
Davalbhakta, S. et al. (2020) ‘A systematic review of smartphone applications available for corona virus disease 2019 (COVID-19) and the assessment of their quality using the Mobile Application Rating Scale (MARS)’, Journal of Medical Systems, 44(9): 164.
Downey, A. (2020) ‘Palantir awarded £23m deal to continue work on NHS COVID-19 data store’, Digital Health (blog), 21 December 2020. Available at: (accessed 16 August 2022).
Dyer, C. (2022) ‘COVID-19: policy to discharge vulnerable patients to care homes was irrational, say judges’, BMJ, 377: o1098.
Edwards, S. J. L. (2013) ‘Ethics of clinical science in a public health emergency: drug discovery at the bedside’, The American Journal of Bioethics, 13(9): 3–14.
Ethics Advisory Group (2020) ‘Report on the work of the Ethics Advisory Group to NHSX on the COVID-19 contact tracing app’. Available at: (accessed 16 August 2022).
Ezrahi, Y. (1990) The Descent of Icarus: Science and the Transformation of Contemporary Democracy, Cambridge, MA: Harvard University Press.
Fjeld, J. et al. (2020) ‘Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI’, Berkman Klein Center Research Publication No. 2020-1, Cambridge, MA: Harvard University.
Gadd, E. (2020) ‘Is the government using its own ethical framework?’ The Nuffield Council on Bioethics, 24 April. Available at: (accessed 16 August 2022).
GOV.UK (2022) ‘PM Statement on Living with COVID: 21 February 2022’. Available at: (accessed 29 April 2022).
Hagendorff, T. (2020) ‘The ethics of AI ethics: an evaluation of guidelines’, Minds and Machines, 30: 99–120.
Hendl, T., Chung, R. and Wild, V. (2020) ‘Pandemic surveillance and racialized subpopulations: mitigating vulnerabilities in COVID-19 apps’, Journal of Bioethical Inquiry, 17(4): 829–834.
Iacobucci, G. (2020) ‘COVID-19: UK lockdown is “crucial” to saving lives, say doctors and scientists’, BMJ, 24 March. Available at: (accessed 16 August 2022).
Institute for Government (2021) ‘Timeline of UK Government coronavirus lockdowns and restrictions’, The Institute for Government, 9 April. Available at: (accessed 16 August 2022).
Jobin, A., Ienca, M. and Vayena, E. (2019) ‘The global landscape of AI ethics guidelines’, Nature Machine Intelligence, 1(9): 389–399.
Killingley, B. et al. (2022) ‘Safety, tolerability and viral kinetics during SARS-CoV-2 human challenge in young adults’, Nature Medicine, 28: 1031–1041.
Mehlmann-Wicks, J. (2022) ‘BMA briefing: living with COVID-19 response’, BMA, 24 May. Available at: (accessed 16 August 2022).
Merrick, R. (2021) ‘Matt Hancock denies making “protective ring around care homes” claim, despite saying it live on TV’, The Independent. Available at: (accessed 16 August 2022).
O’Donovan, C., Smallman, M. and Wilson, J. (2021) ‘Making older people visible: solving the denominator problem in care home data’, UK Pandemic Ethics Accelerator. Available at: (accessed 16 August 2022).
Philip, J. and Cherian, V. (2020) ‘The psychology of human behavior during a pandemic’, Indian Journal of Psychological Medicine, 42(4): 402–403.
Porter, T. M. (1996) Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (reprint edition), Princeton, NJ: Princeton University Press.
Scheuerman, W. E. (2006) ‘The powers of war and peace: the Constitution and foreign affairs after 9/11’, Perspectives on Politics, 4(3): 605–607.
Sekalala, S. et al. (2020a) ‘Analyzing the human rights impact of increased digital public health surveillance during the COVID-19 crisis’, Health and Human Rights, 22(2): 7–20.
Sekalala, S. et al. (2020b) ‘Health and human rights are inextricably linked in the COVID-19 response’, BMJ Global Health, 5(9): e003359.
NHS Transformation Directorate (undated) ‘The NHS COVID-19 data store: putting data at the centre of decision making’. Available at: (accessed 16 August 2022).
Thomson, S. and Ip, E. C. (2020) ‘COVID-19 emergency measures and the impending authoritarian pandemic’, Journal of Law and the Biosciences, 7(1): lsaa064.
UK Government (2020a). ‘Data ethics framework’. Available at: (accessed 16 August 2022).
UK Government (2020b) ‘Prime Minister’s statement on coronavirus (COVID-19): 23 March 2020’, GOV.UK. Available at: (accessed 16 August 2022).
UK Government (2020c) ‘Prime Minister’s statement on coronavirus (COVID-19): 16 March 2020’, GOV.UK. Available at: (accessed 16 August 2022).
Wise, J. (2022) ‘Covid-19: Omicron sub variants driving new wave of infections in UK’, BMJ, 377: o1506.
WMA (1964) ‘WMA declaration of Helsinki – ethical principles for medical research involving human subjects’. Available at: (accessed 16 August 2022).
Wood, A. J. et al. (2019) ‘Good gig, bad gig: autonomy and algorithmic control in the global gig economy’, Work, Employment and Society, 33(1): 56–75.
World Health Organization (2016) ‘Guidance for managing ethical issues in infectious disease outbreaks’. Available at: (accessed 16 August 2022).
Wymant, C. et al. (2021) ‘The epidemiological impact of the NHS COVID-19 app’, Nature, 594(7863): 408–412.


