By Liam Kelly
Following the World Health Organisation’s (WHO) declaration that the international spread of COVID-19 constituted a pandemic on the 11th of March, 2020, it did not take long for institutions and individuals on the frontlines of response measures to recognize that they were actually confronted with two global public health crises at once. In the chaotic policy environment of the early pandemic, where the need to adapt to a fluid situation overwhelmed governments at all levels, poor and inconsistent health communication efforts began to erode the public confidence in government upon which the effectiveness of pandemic measures would depend. Worse, according to the Australian Department of the Prime Minister and Cabinet’s COVID-19 Response Inquiry Summary, when reliable information was finally to hand, health authorities and governments found themselves competing for attention in a ‘marketplace of ideas’ crowded with the misinformation peddlers and conspiracy theorists who had filled the vacuum (DPMC 2024, 46).
This low-trust, high-bullshit[1] information ecosystem resulted in a parallel pandemic, the ‘infodemic’: a constant barrage of information inducing cognitive overload, hampering the ability to evaluate source quality, and leading to the viral transmission of false or dangerous ideas between people (The Lancet Infectious Diseases 2020, 875). Far from being a harmless corollary to ‘the real thing’, the COVID infodemic was itself a significant health hazard: one estimate attributes 232,000 preventable deaths to COVID-19 vaccine hesitancy between May 2021 and September 2022 in the U.S. alone (Jia et al. 2023), while the misinformation-induced ‘off-label’ use of drugs, like Ivermectin, to treat COVID-19 has been associated with serious, and sometimes fatal, injury (Knowles, Gowen, and Mark 2021; DPMC 2024, 47). With our eyes on the next pandemic, and the infodemic likely to follow in its train, it seems safe to say that what might be called ‘the bioethics of trust’ is no small issue, but literally a matter of life and death.
One of the most crucial questions involved in thinking about the last infodemic, and the one which motivated this piece, is this: ‘why were so many people so ready to believe ideas which, by conventional standards, are manifestly false or even ridiculous?’ Particularly when one approaches the outer horizons of COVID conspiracy theory, it might be tempting to answer this question with reference to what are called epistemic vices. That is, one might wish to chalk up a person’s propensity to accept such ideas to their possession of “…character traits, attitudes, or ways of thinking that systematically obstruct knowledge or understanding” (Cassam 2024, 31). While this might seem to be a more eloquent spin on the popular non-explanation ‘they believe it because they’re stupid’, a vice epistemological account is more subtle than that. A quality which hampers the development of knowledge or understanding only becomes a vice if one can be legitimately blamed for possessing it; the poor cognitive habits of someone raised on a cult compound might reasonably be attributed to accidents of birth, while the same habits, in someone granted a good education and material comforts, are likely blamable by virtue of the fact that they could, and should, have ‘known better’ (Cassam 2024, 32; Nguyen 2020a, 154-155).
There is good reason, however, to suppose that a vice-epistemological explanation of the infodemic will be too narrowly individualistic to suffice. A more promising approach lies in a disquieting suggestion, courtesy of philosopher C. T. Nguyen, which has informed the selection of articles discussed below. Namely, Nguyen urges that the apparent explosion of epistemically vicious behaviour in recent years is not necessarily attributable to skyrocketing individual vice, but rather to the exploitation of our intrinsic cognitive vulnerabilities by features of our epistemic environments which frustrate earnest efforts to discover the truth (Nguyen 2022, 1-4). He calls the study of this phenomenon ‘hostile epistemology’, a name which demands some immediate clarification.
An environment, epistemic or otherwise, is ‘hostile’ relative to our capacities, qua human beings, and our purposes. To tweak one of Nguyen’s examples, the deep sea is a friendly environment for an angler fish to grow and reproduce, but a fairly hostile place for human beings to host an academic conference. As such, an epistemic environment is hostile if, by virtue of characteristically human cognitive capacities, it inhibits or distorts our attempts to gain knowledge about the world and our place within it (Nguyen 2022, 4). The four articles discussed in this piece each touch on various aspects of the hostile epistemic environment which spawned the infodemic, and so offer crucial ideas to consider if we want to prepare for, or at least better adapt to, the next one. Before I discuss them, though, I need to briefly sketch another dimension of Nguyen’s concept.
Nguyen argues that we are exposed to hostile epistemic environments by virtue of two main ‘categories’ of vulnerability, each of which is an immutable feature of the human condition. One is a kind of hard limit on our cognitive resources. We are finite beings in every sense: we make our way through finite lives with finite attention spans and a finite capacity to recall that which we do attend to. We simply do not have the time or ability to reason exhaustively about every decision we make or every idea we entertain; living in and making sense of a dizzyingly complex world forces us to “…take cognitive shortcuts, use heuristics, and engage in quick and dirty reasoning” (Nguyen 2022, 4). A crucial tool in this regard is a ‘thought-terminating heuristic’: a cognitive rule of thumb we rely on in order to tell us when to stop considering alternatives and fix our beliefs. Of these heuristics, one of the most powerful is ‘the feeling of clarity’. When we come to learn something true, we are often able to integrate this new information with prior knowledge such that we arrive at understanding, a comprehensible picture which clarifies how the various things we know ‘hang together’ to form a partial ‘model’ of the world (Nguyen 2022, 9-10; Sellars 1963, 1). We associate understanding with the feeling of clarity because learning true things really does clarify our view of the world, and so, per Nguyen, we use that feeling as a truth-detecting, thought-terminating heuristic. The trouble, naturally, is that the causal relationship between truth and clarity does not run both ways. Conspiracy theories provide a relevant example of how the clarity heuristic can lead us astray, epistemically speaking.
They seduce us by presenting models of the world which are clear, comprehensible, but (often) false, and so are more likely to seem true if the actual truth happens to be obscure, morally ambiguous, or narratively unsatisfying, as it often is regarding the pandemic (Nguyen 2022, 10-11).
The second category of vulnerability concerns our epistemic dependence on, and so our need to trust, others. Human lifespans preclude us from becoming experts in everything we would need to know in order to be absolutely autonomous agents. As a result, we non-experts need to rely on experts in order to make our way in the world (Nguyen 2022, 15). However, the very necessity of trust raises a serious difficulty: how do non-experts figure out which professed experts are worth trusting? Some fields are such that one needs to be an expert in order to evaluate claims to expertise, while others have ‘litmus tests’, accessible to non-experts, the limitations of which are only apparent to experts, meaning that they can give non-experts an inflated sense of their capacity to judge expertise (Nguyen 2020b).[2] When you combine these limitations with the fact that proxy measures of expertise can be ‘gamed’ by unscrupulous actors (e.g., fake qualifications from ‘diploma mills’, plagiarized publications, etc.), non-experts can be led to place their trust in poor or even malicious ‘experts’, and so to become vulnerable to misinformation or manipulation (Nguyen 2020b, 2816-2817).
The infodemic tragically illustrated the ways in which hostile epistemic environments can leverage the vulnerability entailed by our need to trust experts: as poor messaging and the use of mandates frayed the bond of trust between citizens and governments, the resulting discredit to ‘official’ expertise made some more susceptible to ‘thought leaders’ selling clarifying narratives on the cheap (DPMC 2024, 39-41, 46-47). For those suspicious of credentialled expertise, immersed in an epistemic environment teeming with competing explanations tailor-made to appeal to otherwise-dependable heuristics, any explanation which promised understanding may have seemed preferable to none at all.
In order to shore up the trust necessary to weather the next pandemic, then, it’s a good idea to study in detail the difficulties that effort faced in the last one. A nice example of this kind of work is the first article in this collection: ‘Unmasking the Ethics of Public Health Messaging in a Pandemic’ (Ho and Huang 2021). In brief, Ho and Huang examine how health officials in the U.S. managed, in their public-facing communications, the high levels of uncertainty surrounding the use of masks as personal protective equipment (PPE) over roughly the first 18 months of the COVID-19 pandemic. One key problem in this regard was the breakdown in coordination between public health bodies like the WHO and CDC and local, state, and federal governments, which took place as the former changed positions on mask-wearing between March and June 2020. Poor communication around the policy shift produced confusion regarding the appropriateness of masking and injured the perceived expertise of health authorities, an issue exacerbated by local and state politicians denouncing mask recommendations and mandates as political overreach by ‘big government’ (Ho and Huang 2021, 551).
Ho and Huang then proceed to make two broad recommendations concerning how to repair this ‘trust deficit’ in future public health crisis communication. The second, concerning the need to balance considerations of fairness with effectiveness in determining the least restrictive pandemic response measures, appears to have been taken up by at least the Australian Government (DPMC 2024, 42-45), so I will focus instead on the first (Ho and Huang 2021, 555-556). This recommendation involves the need to strike an appropriate balance between ‘epistemic humility’ (admitting ignorance, acknowledging the provisional nature of scientific consensus, etc.) and the projection of competence. The nature of pandemics qua public health crises makes this balance particularly difficult to achieve, as they can sometimes preclude scientists from being able to gather the kinds of experimental evidence needed to marshal widespread cooperation with health directives (e.g., it would be unethical to expose clinical control groups to potentially lethal viruses). In such cases, it is precisely, and ironically, the robustness of scientific procedures which undermines the capacity of health experts to shore up public confidence in their epistemic authority (Ho and Huang 2021, 551). Consequently, Ho and Huang recommend that future public health communication in pandemics should frame governments and health bodies as ‘being on the same side’ as the public against the pandemic. Such expressions of goodwill aim to counter ideas, floating in the epistemic environment, that governments are either exploiting or engineering the disease in order to dominate the public.
To have any hope of success, though, this declaration needs to be backed up by ongoing manifestations of goodwill via transparent and accessible updates on the state of scientific understanding of the disease, alongside the science literacy education necessary to know why this understanding develops in the gradual, provisional way that it does (Ho and Huang 2021, 554). In addition to this procedural ‘why’, this effort should ideally address the normative ‘why’ of science: why scientific methods, practices, and norms are those best-suited for understanding and adapting to public health crises. This is because part of the reason why the previous strategy injured trust is that significant segments of the public were not just ignorant of scientific epistemic norms like fallibilism, but subscribed to different epistemic norms entirely: e.g., that changing one’s mind represents weakness or incompetence, or that making mistakes is anathema to ideal knowledge-formation practices rather than being a crucial part of them (Ho and Huang 2021, 553-554; Shields 2025). In a ‘post-truth’ epistemic environment, governments and public health bodies cannot assume that their publics already endorse rational norms of enquiry such that all they need are ‘the facts’. The public needs to be sold on these norms, as well as the facts they make available, in order to be convinced that the ‘official’ expertise organized around said norms is worth trusting over the rival claims pressing upon them (Legg 2018, 55-56). This is a very daunting prospect, to be sure. Even if it is only partly successful, however, it offers hope that broad segments of the public will see the practical manifestation of epistemic humility in future crises as expressions of expertise, not as indications of its absence, reinforcing the trust whose absence renders them vulnerable to manipulation by hostile epistemic actors.
The dynamic relationship between trust, the perception of ‘goodwill’, and willingness to adhere to health expert recommendations lies at the heart of the second article discussed in this piece: ‘Vaccine Rejecting Parents’ Engagement With Expert Systems That Inform Vaccination Programs’ (Attwell et al. 2017). Briefly, Attwell et al. engage in a narrative analysis and discussion of semi-structured interviews they conducted with 27 vaccine-rejecting parents of children under 5 between 2013 and 2015. When asked why they refused vaccinations for their children, the key reason cited by almost all parents was the notion that the pharmaceutical industry is a kind of malignant ‘puppet master’, bending the entire system of expertise informing vaccination programs to the pursuit of profit at the expense of patient well-being (Attwell et al. 2017, 69-70). With medical researchers, doctors, and the state thus ‘compromised’, the prudent strategy is to trust no-one and disinvest from public health measures as far as is possible (Attwell et al. 2017, 71-73).
The key variable distinguishing full non-vaccinators from partial or delayed vaccinators is that the latter, despite agreeing with the former concerning the corrosive influence of the profit motive on medicine, allowed that commercial incentives sometimes aligned with the public good, or that some agents within the medical system were genuinely motivated by goodwill (Attwell et al. 2017, 70-71). Personal encounters or relationships with doctors, and particularly GPs, were instrumental in shaping the latter opinion, as these figures constitute ‘access points’ by which the public interface with the wider ‘expert system’ underpinning conventional medicine. By virtue of their role as ‘representatives’ of medical expertise as such, the valence of these encounters influences patient views of scientific medicine as a whole (Attwell et al. 2017, 67).
Reading this article from a hostile epistemology perspective helps to clarify an issue relevant to tackling future infodemics. Attwell et al. note that both vaccinators and vaccine-refusing parents lean on heuristics to fix their beliefs in the absence of expertise. Crucially, heuristics are vital for resolving the moral conflict many contemporary parents experience over the decision to vaccinate their children: between their civic duty to ‘do their part’ for public health, and their private duty, rooted in the increasing prominence of individualist parenting and ‘wellness’ cultures, to protect their child from becoming one of the unlucky few who develop serious complications post-vaccination (Cassam 2024, 35-36). Which heuristic they use, though, appears to depend substantially on their trust in institutional medicine. If they have ‘the will to trust’, they can use the ‘deference to expertise’ heuristic to reconcile themselves to the vulnerability this entails (Attwell et al. 2017, 67). If they lack it, the clarity heuristic can incline them instead to an explanation which resolves the dissonance between expert recommendations and their personal reservations: ‘the experts are bought and paid for, so we’re on our own’. Viewed so, vaccine refusal/hesitancy need not be a consequence of embracing ‘irrationalism’ or even ‘anti-science’ epistemic frameworks, but might be the product of a desperate need to gain enough understanding to ‘settle the mind’ and act, in an epistemic environment one can’t navigate without expertise, when one can’t trust the experts at hand to lead the way (Nguyen 2022, 8-9). If you can’t accept the help needed to tackle a complex world, one adaptive strategy is to force the world to be simple enough to go it alone.
This suggests two broad aims for improving conditions for the next infodemic. The first, as Attwell et al. urge, is to talk to vaccine-refusers in good faith and with goodwill, engaging them as ‘fellow-citizens to be persuaded’ rather than as technocratic ‘problems’ to be solved (2017, 74; Sandel 2020, 108-110). This gesture lays the ground for trust, on which everything else depends. Once trust is established, the second involves inclining vaccine-refusers to use better heuristics to cut through the noise of their epistemic environments (Nguyen 2022, 20-21). One might do this by recontextualising what they do get right in a new set of ‘background beliefs’ which explain their core insights in a more robust way, thereby illustrating the pitfalls of using clarity as a guide to truth. For instance, one might argue that low levels of replication studies are not a consequence of ‘‘Big Pharma’ suppressing attempts to expose their fraud’, but reflect problems with the incentive structure of the ‘publish or perish’ model in academia (Attwell et al. 2017, 70; Nguyen 2020a, 157).
Given the role of frontline doctors as ‘access points’ to the medical expert system, the bond of trust underpinning the health of medicine-qua-institution begins, and sometimes ends, with the ‘clinical relationship’ between doctor and patient. The third article discussed here, ‘Epistemic Injustice and Nonmaleficence’ (Croce 2023), explores the harms which can result when a patient’s trust is not reciprocated by their doctor, both to the medical bond of trust and to the patient directly. Drawing on the work of philosopher Miranda Fricker, Croce argues that when doctors distrust the accuracy and/or sincerity of their patients without a medical basis, the patient suffers testimonial epistemic injustice: “…a diminished level of credibility imputable to the existence of a negative identity-prejudicial stereotype which undermines one’s testimony” (Croce 2023, 447). Insofar as it can be said to harm patients, there is a prima facie case that committing testimonial injustice violates the medical ethical principle of nonmaleficence, the obligation to ‘do no harm’, though this will almost always be the result of negligence, rather than any malevolent intent on the part of the physician (2023, 448-449). Croce substantiates this point by referring to the literature on the experiences of fibromyalgia patients in clinical contexts (2023, 452). Fibromyalgia patients often report having the seriousness of their symptoms dismissed or being accused of malingering by their doctors, and this testimonial injustice results in three practical forms of harm: over/undermedication, loss of confidence in medicine leading to an increased risk of breaking off treatment, and exacerbation of the underlying condition through the stress of being consistently disbelieved (Croce 2023, 453).
Concerning fibromyalgia, Croce argues that two forms of prejudice may contribute to the prevalence with which patients report experiencing testimonial injustice. The first is the notoriously low position of chronic somatic illnesses in the informal ‘disease prestige rankings’ made apparent in clinical practice (Croce 2023, 449). The second, given that fibromyalgia is more commonly diagnosed in women, is misogyny, a problem all-too-common in clinical medicine (Croce 2023, 453). With this in mind, hostile epistemology can account for why fibromyalgia patients routinely suffer testimonial injustice without presuming that their doctors ever intend to harm them in any way. Frontline doctors, and particularly GPs, conduct their medical practice under very tight time, monetary, and capacity constraints. Fibromyalgia is a difficult condition to deal with as a physician, given that there is no diagnostic test, very little is known about its causes, and even less is known about how to treat it (Croce 2023, 453). Furthermore, the often-debilitating character of the condition’s symptoms can appear baffling in the absence of any apparent physiological basis for them.
Confronting a patient with an invisible, undetectable condition, resistant to both diagnosis and treatment, when one is running 30 minutes behind schedule with a waiting-room full of patients, our doctor might lean on the aforementioned stereotypes as ‘cognitive shortcuts’ to quickly resolve the difficulty. If it is decided that the patient is malingering, for instance, one need not bother with time-consuming explorations into the nature of the problem; one can say whatever is necessary to get them out of the door, and move on to the next patient. It is important to stress that the doctor in this case isn’t necessarily choosing to believe that their patient is malingering in order to harm them epistemically, or even to save themselves the trouble of treating a difficult patient. Rather, in an operative context in which the doctor is pressed on all sides by heavy cognitive loads, financial stress, and the emotional labour of patient management, it can genuinely seem that ‘the patient is malingering’ is the best explanation to hand for such a baffling presentation, and this ‘seeming’ is an effect of the capacity of the ‘malingerer’ heuristic to settle the mind amidst epistemic difficulty. If this account is plausible, then the kind of testimonial injustice discussed here won’t be attributable to the existence of negative identity-prejudicial stereotypes alone. Rather, it will generally be the product of a collision between the cognitive vulnerabilities of doctors, the trust-based vulnerabilities of fibromyalgia patients, and the hostile epistemic environments in which they are compelled to seek treatment.
The final article in this retrospective concerns a point of conflict at the root of the problematic clinical relationship outlined above and, more broadly, the COVID-19 pandemic itself. This is between what philosopher Wilfrid Sellars called the ‘manifest and scientific images of humanity’: that is, between the conceptual framework through which we humans articulate and refine how we appear to ourselves and by which we make our way in the world—what Sellars once described as “sophisticated common sense” (1963, 20)—and a kind of synthetic theoretical description of the world, including ourselves, integrating the accounts offered by the specialized sciences (1963, 6-20). In ‘Revisiting the Persisting Tension Between Expert and Lay Views About Brain Death and Death Determination: A Proposal Inspired by Pragmatism’, Racine (2015) sketches this conflict with reference to disputes between lay views and scientific insights concerning criteria for the determination of death.
While the conflict between these perspectives can rear its head in arguments concerning particular determinations of death, Racine draws on Sellars to argue that these conflicts are partial manifestations of a broader issue: whether lay or expert perspectives should be considered foundational with regard to death determination. ‘Foundational lay views’ hold that the ‘manifest image of death’, represented by cardio-pulmonary criteria for death determination (sustained arrest of the heart and lungs), is so fundamental to the experience and cultural significance of death that deference to it is a criterion for success in scientific definition. By contrast, ‘foundational expert views’ assert that death’s manifest image was a product of prior technological limitations, and that the ‘scientific image of death’ now made possible (including neurological criteria for death determination viz. ‘irreversible cessation of brain function’) should be authoritative regarding the nature and determination of death (Racine 2015, 624-627). Both share the same root flaw. The quest for foundations is the quest for a certainty neither can provide; science has undermined both the intuitive ‘certainty’ of manifest death and the hope that something functionally-equivalent might replace it, as “…certainty is not a good that science can deliver” (Racine 2015, 629).
A parallel conflict of perspectives played out in the implementation of, and public reactions to, pandemic response measures by government. To effectively manage pandemics, governments and health authorities have to ‘see like states’—imposing legibility on ‘raw’ social data through classification and systematisation (Scott 1998, 22-24)—and think like viruses, producing estimates of reproduction rates and countermeasure effectiveness by treating individuals primarily as ‘vertices of infection’ (Du 2021). Viewed through this ‘scientific image of disease’, COVID lockdowns and restrictions on gatherings were emergency ‘precautionary measures’, necessary to slow infection rates and prevent the health system from collapsing prematurely, which justifiably overrode countervailing concerns (DPMC 2024, 12). For a significant portion of the population, however, the ‘manifest image’ of COVID was very different. With the majority of COVID-19 fatalities occurring among the elderly, some began to think of the disease as akin to the common cold for otherwise-healthy adults. From this point of view, many pandemic response measures—particularly the lengthy lockdowns experienced in metropolitan Melbourne (Macreadie 2022)—appeared as extreme overreactions, and substantially injured public trust in government when they prevented people from attending events of great personal significance, like weddings and funerals.
This issue reflects one of Sellars’ observations about the conflicting images of humanity. Namely, insofar as both the manifest and scientific images purport to describe the world, only the scientific is right. On its own, though, the scientific image is unlivable: the persons and projects around which our lives revolve melt into biochemical flows, just as the manifest table breaks down into its atomic components. As such, Sellars proposed that the scientific image should be ‘enriched’ by joining to it an irreducible, normative, conceptual framework of personhood, so as to make the world described by science one in which we can live (1963, 38-40). Likewise, the scientific image of COVID is descriptively accurate, but the failure to adapt its prescriptions to accommodate the normative fabric of human life not only made them increasingly unlivable during the pandemic, but undermined the public trust in (epistemic) authority crucial for weathering the next crisis (DPMC 2024, 40-41).
The articles in this collection illustrate how this loss of public trust contributed to the hostile epistemic environment which spawned the infodemic. With trust in official expertise undermined by poor communication, misfiring heuristics, and adverse medical interactions, the need to ‘settle the mind’ in an overwhelming environment led a not-insignificant number of people to embrace fraudulent ‘experts’ and ideas with a view to clarifying chaos. By hypothesis, we can’t improve this situation by eliminating hostile epistemic environments. Nguyen describes the only adaptive strategy available as “…an unending epistemic arms race” (2022, 21, italics original): a hostile environment exploits our vulnerabilities, we respond by improving our heuristics, whereupon the environment changes and exploits our vulnerabilities in a new way, etc. More crucial than any other aspect of ‘epistemic rearmament’, however, is the need to rebuild trust and the perception of goodwill between governments, health authorities, and their publics. Without trust, our efforts to combat the next infodemic will resemble Earth’s resistance to the Martians in Wells’ War of the Worlds: “bows and arrows against the lightning”.
Liam Kelly is a copy editor at the Journal of Bioethical Inquiry and a final-year B.Arts student (Philosophy + Media Studies) at Deakin University.
Correspondence: lckelly@deakin.edu.au
References
Attwell, K., J. Leask, S.B. Meyer, P. Rokkas, and P. Ward. 2017. Vaccine rejecting parents’ engagement with expert systems that inform vaccination programs. Journal of Bioethical Inquiry 14: 65-76. https://doi.org/10.1007/s11673-016-9756-7.
Cassam, Q. 2024. Some vices of vice epistemology. Metaphilosophy 55(1): 31-43. https://doi.org/10.1111/meta.12664.
Croce, Y.D. 2023. Epistemic injustice and nonmaleficence. Journal of Bioethical Inquiry 20: 447-465. https://doi.org/10.1007/s11673-023-10273-4.
DPMC (Department of the Prime Minister and Cabinet). 2024. COVID-19 response inquiry summary: lessons for the next crisis. Australian Government. https://www.pmc.gov.au/sites/default/files/resource/download/covid-response-inquiry-summary.pdf. Accessed 29 March, 2025.
Du, M. 2021. Mitigating COVID-19 on a small-world network. Scientific Reports 11: 20386. https://doi.org/10.1038/s41598-021-99607-z.
Frankfurt, H.G. 2005. On bullshit. Princeton: Princeton University Press.
Ho, A. and V. Huang. 2021. Unmasking the ethics of public health messaging in a pandemic. Journal of Bioethical Inquiry 18: 549-559. https://doi.org/10.1007/s11673-021-10126-y.
Jia, K.M., W.P. Hanage, M. Lipsitch, et al. 2023. Estimated preventable COVID-19-associated deaths due to non-vaccination in the United States. European Journal of Epidemiology 38: 1125-1128. https://doi.org/10.1007/s10654-023-01006-3.
Knowles, H., A. Gowen, and J. Mark. 2021. Doctors dismayed by patients who fear coronavirus vaccines but clamor for unproven ivermectin. The Washington Post, September 1. https://www.washingtonpost.com/health/2021/09/01/ivermectin-covid-treatment/. Accessed 29 March, 2025.
The Lancet Infectious Diseases. 2020. The COVID-19 infodemic. The Lancet Infectious Diseases 20(8): 875. https://doi.org/10.1016/S1473-3099(20)30565-X.
Legg, C. 2018. The solution to poor opinions is more opinions: Peircean pragmatist tactics for the epistemic long game. In Post-truth, fake news: viral modernity & higher education, edited by M.A. Peters, S. Rider, M. Hyvönen, and T. Besley, 43-58. Singapore: Springer.
Macreadie, I. 2022. Reflections from Melbourne, the world’s most locked-down city, through the COVID-19 pandemic and beyond. Microbiology Australia 43(1): 3-4. https://doi.org/10.1071/MA22002.
Nguyen, C.T. 2020a. Echo chambers and epistemic bubbles. Episteme 17(2): 141-161. https://doi.org/10.1017/epi.2018.32.
Nguyen, C.T. 2020b. Cognitive islands and echo chambers: problems for epistemic dependence on experts. Synthese 197: 2803-2821. https://doi.org/10.1007/s11229-018-1692-0.
Nguyen, C.T. 2022. Hostile epistemology. Paper presented at the 39th North American Society for Social Philosophy International Social Philosophy Conference, 14-16 July, in Aston, Pennsylvania, United States.
Racine, E. 2015. Revisiting the persisting tension between expert and lay views about brain death and death determination: a proposal inspired by pragmatism. Journal of Bioethical Inquiry 12: 623-631. https://doi.org/10.1007/s11673-015-9666-0.
Sandel, M.J. 2020. The tyranny of merit: what’s become of the common good? New York: Farrar, Straus, and Giroux.
Scott, J.C. 1998. Seeing like a state: how certain schemes to improve the human condition have failed. New Haven: Yale University Press.
Sellars, W. 1963. Science, perception and reality. Atascadero: Ridgeview Publishing Company.
Shields, M. 2025. On the value of changing your mind. Episteme: 1-23. https://doi.org/10.1017/epi.2024.54.
[1] In the sense articulated by Frankfurt: a statement “…unconnected to a concern with truth” (2005, 33).
[2] For a fuller discussion of the problems associated with non-expert identification of experts, see Nguyen 2020b.

