1 Particularism versus generalism

When generalists and particularists disagree about whether conspiracy theories can be evaluated as a class, the kind of evaluation in question is usually epistemic: it concerns the rationality, evidential status, coherence, truth, or similar features of conspiracy theories. Thus, generalism is the view that we can give an epistemic evaluation of such theories as a class. In fact, most, if not all, generalist views offer a negative assessment, and so we may define generalism as the view that there is something epistemically bad about conspiracy theories, considered as a class. Importantly, we take any genuine form of generalism to be committed to the epistemic badness being inherent to the class. Some claim to be generalists but hold that the badness is merely an empirical presumption against, or a contingent fact about, all conspiracy theories. But such claims are under threat of collapsing into particularism, for particularists would rightly maintain that only empirical investigation of individual conspiracy theories could reveal whether all such theories do in fact share any given epistemic defect.Footnote 1 Thus, such a view does not count as genuinely generalist, in our sense.

Particularists say that the claim that a theory is a conspiracy theory does not allow the deduction of a claim about the negative epistemic status of the theory (although it might allow the defeasible empirical presumption of such a claim). Thus, each conspiracy theory must be epistemically evaluated on its own individual merits. One may think that generalism entails that there is no need to investigate the epistemic status of individual conspiracy theories, but this is only true in the limited sense that the generalist claim that all conspiracy theories share a certain epistemic defect removes the need to examine them to see whether they have that defect. They would still need to be scrutinized on a case-by-case basis to determine whether they have other positive or negative epistemic features.

The generalist/particularist disagreement has been one of the main dividing lines in the literature on conspiracy theories. Boudry and Napolitano (2023: 22), however, write that nowadays “self-identified particularists are clearly in the majority, and the particularist camp has over the years been tempted to declare victory or at least a broad consensus.”Footnote 2 Despite the (at least historical) centrality of the disagreement between generalists and particularists in academic discussion of conspiracy theories, it has been suggested that the disagreement is merely verbal. For example, Räikkä et al. (2020: 50) say that “anyone who would like it to be true that ‘Many conspiracy theories are justified’ can make their wish come true by defining the concept of ‘conspiracy theory’ so that ordinary conspiracy explanations are counted as conspiracy theories.” Likewise, Boudry and Napolitano (2023: 23) suggest that “either position can be trivially vindicated just by adopting the right definition of ‘conspiracy theory’”. Here we argue that this diagnosis of the disagreement, at least as it stands, is correct: both generalists and particularists work with definitions that imply their respective view, and so either view can be vindicated just by adopting their preferred definition.

Beginning with particularism, friends of this view take a conspiracy theory to be “a proposed explanation of some historical event (or events) in terms of the significant causal agency of a relatively small group of persons—the conspirators—acting in secret” (Keeley, 1999: 116), “an explanation of an event which cites a conspiracy as a salient cause” (Dentith, 2019a: 102), a theory positing “a secret plan on the part of a group [“the plotters”] to influence events partly by covert action” (Pigden, 1995: 5), or “a proposed explanation of an historical event, in which a conspiracy (i.e., agents acting secretly in concert) has a significant causal role” (Coady, 2003: 201). In fact, as some of these particularists explicitly acknowledge, e.g., Dentith (2019b: 2244) and Pigden (2022), these definitions basically boil down to:

  • (N) A conspiracy theory is a theory about a conspiracy.

What’s important is not the details, but that (N) is a neutral definition, in the sense that no epistemically evaluative element is built into it. According to (N), conspiracy theories are just theories with the specific content of positing conspiracies, irrespective of whether there is anything epistemically wrong with them.

But (N) entails particularism, for if a conspiracy theory is just a theory about a conspiracy, the only way to assess it is to determine its specific epistemic credentials. Consider the theory that members of Al-Qaeda conspired to bring about the events of 9/11, and the competing theory that members of the US government conspired to bring about these events as part of an inside job. On (N), both count as conspiracy theories, but that tells us nothing about their epistemic status.Footnote 3 Empirical investigation of each theory, however, reveals that the former enjoys conclusive evidential support, while the latter does not.

What about generalism? Few philosophers explicitly endorse this view, possibly because it’s obviously false if something like (N) is adopted as a complete definition. Cassam (2023: 192–3) claims that “a prima facie suspicion of conspiracy theories is justified” and “conspiracy theorists are mostly amateurs”. The qualifications “prima facie” and “mostly” are key, as he admits the existence of counterexamples. But that means such claims are compatible with particularism, because, as Dentith (2023) notes, those defects will not be features of all theories or theorists. Particularists can thus incorporate any stripe of restricted generalism as a view about some conspiracy theories.Footnote 4 However, in the literature we find two potential versions of unrestricted or genuine generalism, on which conspiracy theories are inherently problematic. Both agree that something along the lines of (N) is part of the definition of ‘conspiracy theory’, but they go on to add further conditions. Harris (2018, 2022a), Levy (2007), Ichino and Räikkä (2020) and Räikkä (2023) propose the condition that:

  • (C) Conspiracy theories conflict with the testimonies of the epistemic authorities,

and Napolitano (2021) maintains that:

  • (I) Conspiracy theory beliefs are completely insulated from counterevidence.

What is notable is that when conjoined with (N), (C) and (I) appear to result in a pejorative definition, into which a negative epistemic feature is baked. For opposition to epistemic authorities and immunity to disconfirmation by counterevidence look like epistemic defects of theories.Footnote 5Footnote 6

Pejorative definitions of ‘conspiracy theory’ obviously imply that all conspiracy theories, without exception, will share the inbuilt epistemic badness, thus making generalism a foregone conclusion.Footnote 7 For if we define ‘conspiracy theory’ as a theory about a conspiracy that has epistemic feature F, then trivially all conspiracy theories are F. There’s no need to examine individual theories to see if they are F, although we may still want to find out on a case-by-case basis what other epistemic features they may possess.

We conclude that the disagreement between the two camps is, as it stands, merely verbal, analogous to a disagreement between an American and a Brit about whether a bag of Walkers is a bag of chips. Each party to the so-called ‘disagreement’ is correct, given what they mean by ‘chip’ or ‘conspiracy theory’. Indeed, it’s hard to see how the generalist/particularist disagreement could possibly be anything other than verbal. On a neutral definition of ‘conspiracy theory’, e.g., (N), particularism is true, and on a pejorative definition, e.g., (N) combined with (C) or (I), generalism is true. And yet there is no option but to work with one kind of definition or the other. If our diagnosis is correct, the current consensus that particularists have won the debate is wrong. No one wins (or loses) a verbal disagreement: both parties are trivially right, given their respective definitions.

Still, there may well be a substantive disagreement in the vicinity. One possibility is that the real debate is about how to define ‘conspiracy theory’. However, in the absence of agreement on what would count as a correct definition, one is free to stipulate different definitions for different reasons, bearing in mind that no genuine disagreement will arise in that case. For there to be a genuine disagreement about how to define ‘conspiracy theory’, there must be a clear set of agreed criteria for counting a definition as correct. And presumably, such criteria come from the project that motivates a particular definition in the first place. We distinguish between three kinds of such project.

First, we might wish to provide a descriptive theory of a phenomenon of interest. Some philosophers, psychologists, and social scientists aim to provide a theoretical account of some phenomenon in the vicinity of what is picked out by ordinary usage of ‘conspiracy theory’. For such theorists, the most important criterion for a correct definition of ‘conspiracy theory’ is that it uniquely picks out the phenomenon of interest. Whether the definition accords with ordinary usage is neither here nor there. If it turns out that the phenomenon of interest is not uniquely referred to by ‘conspiracy theory’, they can simply offer a technical definition. Theorists often redefine ordinary words for the purpose of theorising about the phenomena that interest them: think for example of the philosophical definition of ‘valid argument’ in terms of entailment, versus the lay understanding of that term as a good line of reasoning.

Another project is that of conceptual engineering. This is the project of assessing and improving our conceptual schemes. More precisely, it’s about assessing whether, or to what degree, concepts fulfil their function, or achieve their intended aim, but it’s also about designing new concepts, or fixing old concepts, to enhance their fit with their function or aim.Footnote 8 This project is thus not only normative, but also revisionary, vis-à-vis the relevant concepts. It’s motivated by the fact that occasionally our concepts must be ameliorated to attain certain beneficial consequences, which may be social, political, or moral. So, for example, someone pursuing this project might prefer a certain definition of ‘conspiracy theory’ on the grounds that if that definition is generally taken up, we will see an improvement in the world according to certain criteria.

A third project is to provide an analysis of the concept that is expressed by ‘conspiracy theory’ as the term is ordinarily used. This is the project of conceptual analysis familiar from philosophical discussion of e.g., knowledge, consciousness, freedom or truth. This project aims to give a definition that captures the concept that we actually have (or concepts, if ‘conspiracy theory’ turns out to be polysemous), rather than to introduce a new definition for the purposes of theorising, or with the aim of revisionary conceptual engineering. The minimal criterion of a correct definition on this project is that it must be co-extensional with the concept. For example, a conceptual analysis of knowledge aims to provide a definition that applies to all cases of knowledge, and that excludes all cases of non-knowledge. One important, although defeasible, source of evidence about the extension of the concept is our intuitive judgement about (actual or possible) cases, for example our intuitions about when people do and do not know.Footnote 9

Ordinary usage of the term ‘conspiracy theory’ is relevant to this project insofar as the proposed definition must capture correct usage. However, not all usage is correct (or even consistent), and so it is not a requirement of this project that it provides a definition that is co-extensional with all ordinary use. Indeed, the project of conceptual analysis has the potential to yield surprising conclusions about when ordinary usage of a term is mistaken. Nor is it a requirement on the project of conceptual analysis that the resulting definition is something that ordinary users of the concept are explicitly aware of. For example, it is no criticism of e.g., an attempt to analyse knowledge in terms of a modal safety condition that ordinary users of this concept are not explicitly aware of the relevant definition, or of the idea of safety.

Unfortunately, there is no clear consensus on which project is being pursued, at least in the philosophical literature, when it comes to defining ‘conspiracy theory’. Some authors talk about giving a definition that is theoretically fruitful, suggesting a theoretical project.Footnote 10 Other authors talk explicitly about conceptual engineering.Footnote 11 And yet others emphasise that their definition “reflects relatively well the ordinary usage of the term and seems to be extensionally adequate”, suggesting a project of conceptual analysis.Footnote 12 Indeed, some authors appear to be pursuing more than one of these projects within the same paper.Footnote 13 Given that these projects yield very different criteria for a correct definition, it’s not clear that in proposing different definitions of ‘conspiracy theory’, these authors are genuinely disagreeing with each other (notwithstanding the fact that they give the impression of prosecuting a vigorous disagreement).

To avoid these confusions, we state clearly that our aim is to provide an analysis of the concept expressed by ‘conspiracy theory’ as this term is ordinarily used, or, as we will sometimes say, an analysis of the ordinary concept of a conspiracy theory. We are not aiming at a definition for theoretical purposes or to engage in conceptual engineering. Thus, our criterion for correctness is that our definition is co-extensional with the ordinary concept of a conspiracy theory, where our main source of (defeasible) evidence as to this extension is our intuitive judgements about cases.

There are many reasons why one might wish to analyse the concept ordinarily expressed by ‘conspiracy theory’. Doing so will allow for a better understanding of public discussion that employs this term. What exactly is going on, for example, when a politician dismisses a view as a conspiracy theory, or brands an opponent as a conspiracy theorist? How are we to evaluate and respond to such claims?Footnote 14 A better understanding of the concept expressed by the term will help us answer these questions. Our project is also relevant to the projects of conceptual engineering and theoretical description. If one wants to alter or replace the ordinary concept of a conspiracy theory, it will be helpful to know what the ordinary concept is in the first place. And if we are giving a definition of ‘conspiracy theory’ for theoretical purposes, we will want to know how and whether the claims of that theory are relevant to ordinary discussion of conspiracy theories. In addition, relevant theorists are likely to want to theorise about the phenomenon that ordinary usage of ‘conspiracy theory’ picks out. The fact that people routinely talk about this phenomenon indicates (and to some extent constitutes) its interest to psychologists, political theorists, philosophers, and other theorists.

2 Existing definitions

Given our goal of conceptual analysis, (N) alone is clearly inadequate. As noted above, on (N) the theory that members of Al-Qaeda conspired to carry out the Twin Tower attacks counts as a conspiracy theory. However, in ordinary usage it’s not this theory, but rather the theory that 9/11 was an inside job, that counts as a conspiracy theory. Further, Pigden (2007: 222) surely captures ordinary usage of ‘conspiracy theorist’ when he defines this term as referring to “someone who subscribes to a conspiracy theory”. But if (N) is paired with this definition of ‘conspiracy theorist’, then the members of the 9/11 Commission are conspiracy theorists, as are the professional academics who conduct research on such events. Surely, what we would ordinarily say is that the latter are conspiracy theory theorists, in that they theorise about conspiracy theories, but are not themselves conspiracy theorists.

A natural thought is that in calling the theory that 9/11 was an inside job a conspiracy theory, people are ascribing some negative property to it, and that people do not call the theory that 9/11 was carried out by Al-Qaeda a conspiracy theory, because they don’t believe it has that property. Indeed, it’s hard to imagine someone calling a theory a conspiracy theory while also claiming that it’s a good theory that they believe. Conspiracy theorists don’t usually think of themselves as conspiracy theorists. And indeed, a series of empirical studies conducted by Napolitano and Reuter (2023) provide evidence that ordinary usage of ‘conspiracy theory’ is pejorative.Footnote 15 All of this suggests that the definition of ‘conspiracy theory’ that we are looking for is pejorative rather than neutral.

What then is the pejorative definition? We think that the definitional elements listed in §1, (C) and (I), are the best potential answers to this question currently available in the literature. Before considering these, however, it will be useful to clear away some answers that may seem tempting as analyses. For example, we should not build the condition that conspiracy theories are false into the definition. As Harris (2022b) points out, some conspiracy theories have turned out to be true. He gives PRISM as an example, a program conducted by the NSA to collect data on US citizens. At one time the view that such a program existed was a conspiracy theory, believed only by a few eccentrics. Now, however, we know of the existence of PRISM, due to documents leaked to the media by Edward Snowden. Assuming there are true conspiracy theories, we should also reject the condition that conspiracy theories are not based on sufficient evidence. As Harris (op. cit.) notes, for true conspiracy theories, it’s hard to deny that someone has sufficient evidence of their truth, namely, the conspirators themselves.

Nor can we accept the condition that conspiracy theories are unfalsifiable.Footnote 16 Most, if not all, paradigm examples of conspiracy theories make empirical predictions and are therefore subject to refutation. For example, according to one version of the theory that 9/11 was an inside job, the Twin Towers were brought down by explosives, placed on structural supports by agents of the US government. This theory predicts that traces of these explosives should be in the wreckage, and that we can find evidence, in the form of witnesses and documents, of a large volume of explosives being smuggled into the Towers. Proponents of this theory and members of the 9/11 Commission agree that the theory makes these predictions, and so is falsifiable. Where they disagree is on whether the predictions are confirmed or not. The 9/11 Commission presented ample evidence that the predictions are false. In contrast, conspiracy theorists reject such evidence, or alter the theory in ad hoc ways to explain why the predictions have not been confirmed, but such moves don’t render it unfalsifiable.

To take another paradigmatic conspiracy theory, consider the claim that the Earth is flat, but that there is a conspiracy to suppress this fact.Footnote 17 Again, there are many different versions of this theory. One popular version claims that the Earth is a disk ringed by huge walls of ice. Friends and foes agree the theory predicts that if we travel far enough in a straight line, or ascend high enough, we will see this ice. Nobody disputes that the theory is falsifiable, and it’s hard to see how it could fail to be. What people disagree about is whether it has been falsified, with its ardent friends refusing to acknowledge counterevidence, e.g., claiming that photos of the Earth from space provided by NASA are fake, or making ad hoc alterations to the theory to account for this evidence.

Note that in both examples just considered, the relevant conspiracy theory conflicts with the account provided by an epistemic authority (the 9/11 Commission and NASA, respectively). This suggests that condition (C), i.e., that conspiracy theories conflict with the testimony of epistemic authorities, might be on the right track. It’s also true that in both examples, the conspiracy theorists who hold the relevant theories resist falsification by empirical evidence. This suggests that (I), i.e., that conspiracy theory beliefs are completely insulated from counterevidence, might also be on the right track. As we shall see, there is something right about both definitional elements, but at the same time both (C) and (I) face serious problems. However, we shall adopt (C), as preferable to (I), though we must be careful when spelling out both what constitutes epistemic authorities, and what being in conflict with them amounts to. In §3 and §4 we develop and defend a version of the epistemic authority definition that fits the bill.

3 The epistemic authority definition

Consider the following definition of ‘conspiracy theory’, resulting from the combination of (N) and (C):

  • (EAD) Theory T is a conspiracy theory at time t if and only if:

  • (N) T is a theory about a conspiracy, and

  • (C) T conflicts with the testimonies of the epistemic authorities at t.

Levy (2007) and Harris (2018, 2022a) endorse something close to (EAD), and indeed, as we will see in this section, there is a lot to be said for (EAD). However, it suffers from serious problems given Levy and Harris’ interpretation of ‘epistemic authority’ and ‘conflict’.

The claim that it’s part of the very idea of a conspiracy theory that it conflicts with the testimonies of the epistemic authorities has intuitive appeal, and it appears to say the right thing about the paradigmatic conspiracy theories we have encountered so far, where ‘testimony’ is understood broadly in the sense of a view that is overtly expressed.Footnote 19 The theory that 9/11 was an inside job conflicts with the views of the epistemic authorities, e.g., the 9/11 Commission, while the claim that the attacks were carried out by Al-Qaeda operatives does not. The same is true of the flat Earth theory. PRISM, the theory that the NSA ran a surveillance programme of intercepting Americans’ phone calls and Internet communications, used to conflict with the views of the epistemic authorities, and so it used to count as a conspiracy theory. Now, however, the views of the epistemic authorities have changed, so that this theory no longer conflicts with them, and thus no longer counts as a conspiracy theory.Footnote 20 This brings out the important point that (EAD) can count the same theory as a conspiracy theory at one time, and not at another, which is a desirable feature, as the example of PRISM shows. So, (EAD) looks promising, but it stands in need of further explanation. Three questions spring to mind: (a) How are we to understand the uniqueness of the testimonies of the epistemic authorities? (b) What is it for a theory to conflict with the views of the epistemic authorities? (c) What is an epistemic authority in the first place?

The problem raised by question (a) is that the two occurrences of ‘the’ in (C) assume that there is a unique set of testimonies offered by a unique list of epistemic authorities. But neither assumption may hold in particular cases. For instance, we can easily imagine that T conflicts with some testimonies of some epistemic authorities, while being consistent with other testimonies stemming from different epistemic authorities. As we shall see in §4, our own formulation clarifies the issue by requiring that T conflicts with the total testimony of all epistemic authorities, while avoiding the implication that just any piece of conflicting testimony makes the total testimony act as a defeater.

Question (b) is pressing, because there is a sense in which many conspiracy theories don’t conflict with the views of any epistemic authority. The theory that 9/11 was an inside job is incompatible with the view of an epistemic authority in that it’s explicitly rejected by the 9/11 Commission. However, conspiracy theories are legion, and many less prominent theories have never been considered, let alone denied, by any epistemic authority.Footnote 21 Furthermore, some such theories are at least logically compatible with the views of the epistemic authorities, especially once their proponents have made ad hoc alterations to the theory to ensure that this is so. So, proponents of (EAD) need a relatively liberal understanding of the relevant notion of conflict, which they are yet to articulate. We return to (b) in §4.

As regards question (c), Levy and Harris offer an answer. Harris says that “epistemic authority is possessed in virtue of credentials, professional positions, and the like” (2022a: 23). Similarly, Levy (2007: 187–8) talks about “conflict with the official story of the properly constituted epistemic authorities”, where proper constitution is a matter of social structure: to be an epistemic authority is to occupy an appropriate position within a “distributed network of knowledge claim gatherers and testers that includes engineers and politics professors, security experts and journalists.”Footnote 22 These definitions suggest the following social definition (-S) of ‘epistemic authority’:

  • (EA-S) Something (an individual, group, or institution) is an epistemic authority if and only if it occupies an appropriate position within society, and/or possesses appropriate professional credentials.

Note that (EA-S) is non-mainstream in that recent accounts of epistemic authority are all based on competence or reliability conditions, which are either required to be believed (or judged) to be satisfied by the putative authorities, as on subjectivist views, or required to be actually satisfied by them, as on objectivist views.Footnote 23 In fact, those who emphasise that epistemic authorities play characteristic social-functional roles, such as being readily available to provide accessible answers to laypeople’s queries, or display other didactic virtues or communicative skills, still agree that any such virtues or skills are in addition to satisfying certain epistemic conditions.Footnote 24

Crucially for our purposes, on (EA-S), (EAD) turns out to be neutral, rather than pejorative, because it will not always be a bad thing for a claim to conflict with the views of socially defined epistemic authorities. After all, social positions and professional credentials may be possessed by people who are completely unreliable on the relevant subject matter. And there needn’t be anything epistemically wrong with a theory that conflicts with the views of such people; indeed, opposing the views of those who reliably track falsity may even be epistemically advantageous.

Thus, when paired with (EA-S), (EAD) entails particularism, and so (EAD) fails to capture the ordinary inherently pejorative meaning of ‘conspiracy theory’.Footnote 25 This comes out in counterexamples. Suppose the 9/11 Commission and other relevant socially defined epistemic authorities on the events of 9/11 are replaced by people who believe that 9/11 was an inside job. Now the claim that 9/11 was an inside job no longer conflicts with the view of the socially defined epistemic authorities, and so it ceases to count as a conspiracy theory, if (EAD) is combined with (EA-S). This is the wrong result. The claim that 9/11 was an inside job is still a conspiracy theory, and to point this out is still to say something evaluative about that theory. Indeed, the natural thing to say is that, unfortunately, the 9/11 Commission is now made up of conspiracy theorists who believe the conspiracy theory that 9/11 was an inside job.

Another counterexample comes from the case of a badly functioning society, where the socially defined epistemic authorities are completely unreliable on the relevant subject matter. We can easily imagine such a society in which laypeople have good reason to think that their government and other powerful figures are conspiring in secret to plan some nefarious activities, while the socially defined epistemic authorities (those that have been appointed and awarded credentials) help them cover it up. When paired with (EA-S), (EAD) entails that theories about those activities are conspiracy theories, and that laypeople who believe them are conspiracy theorists. But this is again the wrong result. If North Korea fits the above description such that any so-called epistemic authorities, who have been assigned positions and granted qualifications by the regime, help cover up its plot to act with ill intent, the laypeople who reasonably believe that this is exactly what’s going on are certainly not conspiracy theorists.

4 The epistemic superiority definition

The counterexamples to (EAD) arise because the fact that someone is an epistemic authority as defined by (EA-S) implies nothing about their actual epistemic status, and so there will be no general epistemic principles that tell us whether a claim ought not conflict with the views of such an authority. This ensures that (EAD), when combined with (EA-S), is a neutral definition of ‘conspiracy theory’, and it exposes the definition to counterexamples, as just witnessed. This suggests that the way forward is to replace (EA-S) with a definition of something that is genuinely epistemic, rather than social. In this section we give such a definition, and then employ it in our own pejorative definition of ‘conspiracy theory’.

As mentioned in §3, extant accounts of epistemic authority all involve the satisfaction of an epistemic condition. In fact, they proceed to spell out this condition in terms of being in a substantially advanced epistemic position on the relevant subject matter (or in the relevant domain), relative to non-authorities. Thus, we shall adopt the following epistemic definition (-E) of ‘epistemic authority’:

  • (EA-E) A is an epistemic authority for B regarding subject matter X if and only if A is epistemically superior to B regarding X.Footnote 26

To avoid any confusion generated by the fact that two (until now) distinct literatures are defining ‘epistemic authority’ in two different ways, we will refer to those epistemic authorities who satisfy (EA-E) as epistemic superiors.Footnote 27

A few points on the differences between (EA-S) and (EA-E) are in order. Reflect first that while (EA-S) defines a one-place property, (EA-E) defines a relational property. On (EA-S) someone either is or is not an epistemic authority on a topic in virtue of their social position. By contrast, on (EA-E), A may be an epistemic authority regarding topic X with respect to B, but not with respect to C. For example, on (EA-E) John, who lives in London, may count as an epistemic superior on the geography of London with respect to Lucy, who has never visited the UK, but not with respect to Jason, who is a black-cab driver in London.Footnote 28

Another important difference is that (EA-S) is a definition in terms of social position, while (EA-E) is a definition in terms of epistemic status, hence the hyphenated letters ‘S’ and ‘E’. Thus, counting as an epistemic authority on (EA-S) doesn’t imply anything about someone’s epistemic status, while counting as an epistemic authority on (EA-E) implies the status of epistemic superiority. There are various ways of understanding such superiority. According to Zagzebski (2012: 109), for example, an epistemic authority, relative to me, is “someone who does what I would do if I were more conscientious or better than I am at satisfying the aim of conscientiousness – getting the truth”, where the virtue of conscientiousness involves doing one’s best to achieve one’s epistemic goals, which above all include forming true beliefs and avoiding false ones. What matters for our purposes is that epistemic authorities are epistemic superiors in terms of possessing and processing reasons that are epistemic by being truth-conducive: they have significantly better reasons pertaining to the issue than laypeople do and are significantly more skilled at processing and evaluating them than laypeople are (relative to a domain and a time). The epistemic good to which ascriptions of authority are relativised is thus primarily knowledge, but our view is consistent with authorities also characteristically possessing the distinct good of greater understanding.Footnote 29 However, while epistemic superiors are relatively more reliable, they aren’t infallible, and so they may stand corrected.Footnote 30Footnote 31

Importantly, (EA-E) is what we in §3 called an objectivist account. What matters is that A be de facto epistemically superior to B, irrespective of whether B believes (or judges) that this is the case.Footnote 32 If we followed subjectivist (or even mixed) accounts in requiring that B (truly) believe A to have certain epistemically superior qualities, then combining (EAD) with such accounts would misclassify certain conspiracy theories. The reason is that conspiracy theorists could well lack such beliefs; indeed, they often identify their own “experts”, falsely believing them to be epistemically superior. For instance, those who subscribe to anti-vaxx conspiracy theories are likely to question the superiority, if not the competence, of official medical authorities, turning instead to self-proclaimed “experts”, such as the disgraced former physician Andrew Wakefield.

In any case, however exactly we understand epistemic superiority, it’s plausible that epistemic principles govern how we should respond to the testimony of our epistemic superiors. Two main views dominate the literature on these principles.

The first view is Zagzebski’s (2012, 2013) pre-emptionist view.Footnote 33 According to this view we should completely defer to epistemic authorities: “[t]he fact that the authority has a belief p is a reason for me to believe p that replaces my other reasons relevant to believing p and is not simply added to them”.Footnote 34 That is, instead of believing p on the basis of one’s own reasons, when presented with an epistemic authority, one is rationally required to adopt their belief on the exclusive basis of their epistemic authority. The pre-emption view is plausible when applied to cases of disagreement with an epistemic superior, especially if the inferior person opposes an authoritative judgment from a poor epistemic position. Suppose I believe there are no snakes native to the UK for the reason that I live there, and I have never seen one in the wild. But now an expert herpetologist tells me that adders, grass snakes, and smooth snakes are all native to the UK. Intuitively, I ought to defer to my epistemic superior in this case, and for the sole reason that they believe as they do, such that my own reason ceases to have any weight. Because this epistemic authority has a better track record than me vis-à-vis amphibians and reptiles, endorsing their judgement for this authoritative reason increases my chance of hitting the truth and avoiding error.

However, the pre-emption view struggles to account for cases of agreement between laypeople and epistemic authorities. As Dormandy (2018) argues, the degree of doxastic justification one has depends on the number and strength of the reasons on which one’s belief is based. Following the pre-emption view, if one has several good reasons to believe but bases one’s belief only on the pre-emptive reason, then one will have a lesser degree of doxastic justification than if one based the belief on the other reasons as well, and so will be in a sub-optimal position. Indeed, as Jäger (2016) notes, the fact that the authority shares one’s belief is evidence that one’s own reasons are indeed good ones. So, why replace them altogether with the authoritative reason? Surely, it would be more sensible to keep one’s own reasons and simply add the authoritative reasons to them, thus resulting in a cumulatively stronger epistemic position.Footnote 35 The alternative total evidence view accommodates this point by saying that one’s belief should rationally be based on the total evidence that one possesses, instead of being based solely on the belief or evidence of the authority: no matter how forceful or plentiful the authoritative reasons are, they never replace or pre-empt one’s own reasons. Still, proponents of the total evidence view say that laypeople ought to assign significant evidential weight to the testimony of epistemic authorities.Footnote 36

Suppose B has a justified belief that p, and that A is an epistemic authority relative to B on the topic of whether p. Suppose also that A testifies that not-p, and yet B continues to believe that p. How does this affect the epistemic status of B’s belief that p? The pre-emption view and the total evidence view give different answers to this question. On the pre-emption view, the answer is that any justification that B might have had for p will always be completely defeated. Since A is an epistemic authority on the relevant topic relative to B, A’s testimony should replace all the reasons B has for believing p, thus defeating any justification B might have for this belief. On the total evidence view, things are more complicated. B must incorporate A’s testimony that not-p into her total evidence, giving it significant evidential weight. Thus, B’s justification for p will be diminished or partially defeated. Due to the evidential weight of epistemically superior testimony, B’s belief will often cease to be justified, all things considered. But it’s possible that B’s belief remains justified, all things considered, if B’s original evidence is sufficiently strong. Note, however, that even if B’s belief that p remains justified, the total level of justification will be diminished.
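
To make the contrast vivid, here is a rough probabilistic gloss (it is ours, offered only as an illustration, and not a formalism used by proponents of either view). Let $E_B$ be B’s own prior evidence bearing on p, and let $T_A$ be the fact that the authority A testifies about p. Then:

$$\text{Pre-emption:}\quad Cr_B(p) \;=\; P(p \mid T_A)$$
$$\text{Total evidence:}\quad Cr_B(p) \;=\; P(p \mid E_B \wedge T_A)$$

On the pre-emptionist schema, B’s own evidence drops out of the conditionalisation altogether, which mirrors the claim that the authoritative reason replaces B’s other reasons; on the total evidence schema, $E_B$ stays in play, so sufficiently strong prior evidence can leave B’s belief justified, though typically to a diminished degree.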

Bearing the above points in mind, here is our proposed definition of ‘conspiracy theory’ in terms of epistemic superiority:

  • (ESD) Theory T is a conspiracy theory for subject S at time t if and only if:

  • (N) T is a theory about a conspiracy, and

  • (U) T is undermined by the total testimony of the epistemic superiors of S at t.

Note that, as per (EA-E), talk of the epistemic superiors of S in (U) is how we prefer to understand the epistemic authorities for S. Also, as noted in §3, (EAD) defines ‘conspiracy theory’ in terms of conflict with the epistemic authorities, while (U) employs a notion in the same ballpark, namely that of being undermined by their testimony. We understand ‘undermining’ as follows:

  • (UNDERMINING) For S, T is undermined by the total testimony of S’s epistemic superiors TES if and only if were S to receive TES, any prior justification S may have for T would be either completely or mostly defeated.

Note that (UNDERMINING) bakes an epistemic defect into (ESD), namely that any justification for belief in a conspiracy theory would be destroyed or significantly reduced by the receipt of epistemically superior testimony. Note also that not actually receiving, and so being unaware of, the authoritative testimony, whether accidentally or deliberately, offers the target belief no protection. For the counterfactual specified by (UNDERMINING) may still be true. All that matters is that the testimonial defeater/diminisher for the justification of that belief exists in the relevant epistemic environment. There is thus no need to consider the vexed question of whether the justification for the belief is defeated or weakened by any such counterevidence that the believer lacks but should have had.Footnote 37
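
Put schematically (this shorthand is ours, using the standard counterfactual conditional, rather than anything in the definition itself), (UNDERMINING) amounts to:

$$\mathrm{Receive}(S, TES) \;\Box\!\!\rightarrow\; \mathrm{Defeat}(J_S(T))$$

where $J_S(T)$ stands for whatever prior justification S has for T and $\mathrm{Defeat}$ covers complete or near-complete defeat. A counterfactual of this form can be true even when its antecedent never obtains, which is why a believer’s failure to encounter the superior testimony leaves the target belief unprotected.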

For this reason, (ESD) implies a robustly generalist view of conspiracy theories. All belief in conspiracy theories is epistemically defective in the way just described, and there is no need to investigate individual conspiracy theories, on a case-by-case basis, to determine if they have this defect. In addition, this defect is the epistemically negative feature that explains why ordinary usage of ‘conspiracy theory’ is pejorative. Intuitively, someone who responds to the assertion that 9/11 was an inside job by saying ‘that’s a conspiracy theory’ rebuts the assertion, rather than giving a neutral description of it. And that’s exactly what our view predicts: the response implies that any justification the speaker might have for their assertion would be defeated or significantly diminished by the testimony of their epistemic superiors.

In the remainder of this section, we clarify several points regarding (ESD), and we show that it makes the right predictions both about the cases we have already considered, and about a range of new cases.Footnote 38

As noted in §3, (EAD) owes us an explanation of the sense in which certain obscure conspiracy theories are in ‘conflict’ with the testimony of epistemic authorities, given that this testimony neither explicitly denies those conspiracy theories nor is otherwise logically incompatible with them. By contrast, given our definition of undermining in terms of defeat, it is easy to explain how a conspiracy theory can be undermined by epistemically superior testimony that is not logically incompatible with that theory. This is because there are various familiar ways in which justification for p can be defeated by testimony that is not logically incompatible with p.

First, consider rebutting defeaters. S’s justification for p is subject to a rebutting defeater when S receives new evidence that not-p. This may happen when S receives epistemically superior testimony that is logically incompatible with p. But S may also receive such testimony that counts as evidence that not-p without being logically incompatible with p, for instance because it is evidence that p is unlikely to be true. Imagine an obscure theory that the Canadian government has secretly conspired to form the crop circles (actually made by local youths) appearing in S’s region of the UK, as a way of communicating with aliens. Suppose that no epistemic superior to S has said anything that is logically incompatible with this theory – perhaps these epistemic superiors are unaware of the existence of this theory. Nevertheless, it’s easy to imagine that S’s epistemic superiors have said many things about the aims, workings, and limitations on the powers of the Canadian government that indicate that this theory is unlikely in the extreme. Thus, this theory about a conspiracy is undermined by the testimony of epistemic superiors relative to S, despite this testimony not being logically incompatible with the theory.

Next, consider undercutting defeat.Footnote 39 S’s justification for p is subject to an undercutting defeater when S receives new evidence that undercuts the connection between S’s original evidence for p and the truth of p. For example, suppose that S has evidence for p in the form of testimony that p from a source that S believes to be reliable. Then, S receives evidence that this source is actually highly unreliable. This new evidence acts as an undercutting defeater for S’s original justification for p, without being logically incompatible with p. Suppose the conspiracy theory about the Canadian government is being promoted on a news site that S believes to be reliable. However, S then receives testimony from an epistemic superior that this site is extremely unreliable. This results in undercutting defeat for whatever justification S might have had for the conspiracy theory, despite the logical compatibility of the testimony with the theory.
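
On one familiar probabilistic gloss (again our own illustration; the point does not depend on it), the two kinds of defeat can be captured without any appeal to logical incompatibility. Let $e$ be S’s original evidence for p and $d$ the epistemically superior testimony. Then:

$$\text{Rebutting defeat:}\quad P(p \mid e \wedge d) \ll P(p \mid e), \ \text{even though } d \text{ does not entail } \neg p$$
$$\text{Undercutting defeat:}\quad P(p \mid e \wedge d) \approx P(p \mid d), \ \text{i.e. given } d,\ e \text{ no longer raises the probability of } p$$

In the crop-circle example, superior testimony about the aims and limits of the Canadian government plays the first role, while testimony that the promoting news site is unreliable plays the second; neither is logically incompatible with the theory itself.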

Another feature of (ESD) is worth noting. As we have seen, an attractive feature of (EAD) is that it relativises the answer to the question of whether T is a conspiracy theory to a time t, and (ESD) retains this feature, allowing for the possibility that T is a conspiracy theory for S at t1 but not at t2. However, unlike (EAD), (ESD) also relativises the answer to the question of whether T is a conspiracy theory to a subject S. This allows for the possibility that at time t, T is a conspiracy theory relative to S1, but not to S2. This is also a desirable feature, given our aim of capturing ordinary usage of ‘conspiracy theory’. Consider again the example of PRISM. At some point before the evidence for this theory was made publicly available, Snowden had excellent evidence for it. Relative to him, the supposition that PRISM exists didn’t count at that time as a conspiracy theory on (ESD). This is because his belief in PRISM was not at that time undermined by any epistemically superior testimony. To be sure, Snowden had epistemic superiors on the topic of PRISM, but given his evidence, the testimony of those epistemic superiors did not undermine his justification. On the other hand, relative to uninformed members of the public, PRISM did count at that time as a conspiracy theory. For such members of the public, the theory was undermined by the testimony of the authorities that are epistemically superior relative to them.

This is a good result. It would be odd to say that Snowden is a conspiracy theorist after having gathered all the evidence, but not yet leaked any of it to the media. But if, as we assume, conspiracy theorists are people who believe conspiracy theories, and if PRISM counted as a conspiracy theory relative to him, we would have no choice but to say this. On the other hand, if members of the public were to believe the theory at this interim point, it would be right to say that they were conspiracy theorists. Indeed, if Snowden himself were to describe such people as conspiracy theorists, he would speak truly, insofar as he would be pointing out that what they believe is a conspiracy theory relative to them, despite it not being a conspiracy theory relative to him. And this is exactly what (ESD) predicts.Footnote 40

What about an absolute superior (Sab), i.e., a subject with no epistemic superiors whatsoever? On (ESD) a theory is a conspiracy theory relative to S only if that theory is undermined by the testimony of S’s epistemic superiors, and yet Sab has no epistemic superiors, and thus no theory is a conspiracy theory relative to Sab. This may seem a surprising consequence, but it isn’t a bad consequence, so long as we are careful about what it amounts to. First, although (ESD) implies that e.g., the theory that 9/11 was an inside job does not count as a conspiracy theory relative to Sab, it entails that Sab is correct to describe it as a conspiracy theory relative to their epistemic inferiors. This is analogous to the case where PRISM does not count as a conspiracy theory relative to Snowden, and yet, at least in the interim period before his evidence is made publicly available, he would be right to speak of PRISM as a conspiracy theory relative to uninformed members of the public. Second, it is not to say that if Sab were to believe that 9/11 was an inside job, they would not believe a conspiracy theory or be a conspiracy theorist. For in that case, Sab would cease to be absolutely, epistemically superior. As this belief necessitates the adoption of many false and unjustified beliefs about relevant topics, Sab, a one-time, or one-world, absolute superior, would count as far less reliable and knowledgeable than many others, and so would no longer be epistemically superior in the absolute sense, at least regarding the relevant topics.Footnote 41 And finally, although (ESD) entails that Sab cannot regard the theory that 9/11 was an inside job as a conspiracy theory relative to themself, it does not preclude Sab from correctly describing it as a very bad theory (e.g., as conflicting with relevant strong evidence, or as simply false).

Another feature of (ESD) should also be noted. (UNDERMINING) specifies that S’s justification for belief in T would be either completely or mostly defeated. This is crucial to the idea that (ESD) implies a robust form of generalism, according to which all conspiracy theories suffer from a major epistemic defect; on our view, to label something a ‘conspiracy theory’ is a major accusation, not just a minor criticism. And we think that this makes the right predictions about relevant cases. If Snowden is in possession of a large body of evidence e for PRISM, it is possible that some particular piece of evidence e1 would be defeated as justification for PRISM upon receipt of epistemically superior testimony. Insofar as e1 is not an important piece of evidence, this might entail some small degree of defeat of e as justification for PRISM, despite e continuing to provide substantial justification for PRISM. It would be odd to say that Snowden believes a conspiracy theory in this case, and (ESD) says that he does not.

Of course, in the absence of a specified threshold, it may be unclear whether a theory’s justification would be mostly defeated by epistemically superior testimony, and so there are cases where it is correspondingly vague (or indeterminate) whether a theory about a conspiracy is a conspiracy theory, according to (ESD). Such vagueness is exactly what one would expect in the ordinary concept of a conspiracy theory. Just as there are cases where it is vague whether it is correct to say that someone is bald, there are cases where it is vague whether it is correct to criticise a theory about a conspiracy by saying that it is a conspiracy theory. Indeed, we can imagine a case where Snowden gradually receives increasingly strong evidence for PRISM, such that at the start his justification for PRISM would be defeated by epistemically superior testimony, but at the end it would not. (ESD) says that PRISM is a conspiracy theory relative to Snowden at the start, but not at the end, of this evidence-gathering process. At some point in-between, it is vague whether PRISM is a conspiracy theory for Snowden. And surely, this is the right thing to say.

Finally, what about cases where B’s epistemic superiors disagree in substance with each other? (UNDERMINING) talks about T being undermined by the total testimony of B’s epistemic superiors TES. Suppose TES consists of testimony that T is true from A1, who is an epistemic superior of B, and testimony that T is false from A2, who is another epistemic superior of B. In a case where A1 and A2 are epistemic equals, and B has no reason to privilege the testimony of one over the other, this may ensure that receiving TES would not defeat B’s justification for T. As far as B is concerned, the testimonies of A1 and A2 cancel each other out, allowing B to fall back on their own evidence. Thus (ESD) does not count T as a conspiracy theory relative to B when there is widespread disagreement on relevant topics among epistemic peers who are all epistemically superior to B, and equally so. This is the right result. In such circumstances, T becomes a live option for B, and not something that can reasonably be dismissed as a conspiracy theory.Footnote 42

5 Evidential insulation

In §1 we mentioned Napolitano’s (2021: 86) view that conspiracy theories aren’t theories, but rather “a distinctive way of holding a belief in the existence of a conspiracy, namely, one that is self-insulated.” A belief is self-insulated in her sense when the believer takes “the conspiracy to neutralize the relevant counterevidence” with the result that “no evidence could be presented to them that would cause them to change their minds, because any counter-evidence would be dismissed as a fabrication of the conspirators to steer the public away from the truth.”Footnote 43 Napolitano argues convincingly that we cannot rationally hold self-insulated beliefs in conspiracy theories. So, her definition entails a generalist view of conspiracy theories that, like our view, has the potential to explain the pejorative meaning of ‘conspiracy theory’ in ordinary usage. True, her aim (2021: 85) is to further a project of conceptual engineering, but she does take her definition to capture an aspect of the ordinary meaning. For that reason, we will treat it as a rival to (ESD), while acknowledging that even if we are correct that (ESD) is preferable given our desideratum, it may be that it is not preferable given Napolitano’s desideratum.

Now, Napolitano recognises that it’s somewhat revisionary to treat a conspiracy theory as a way of holding a belief rather than, as on (ESD), as a kind of theory. However, she suggests that the theoretical benefits of the concept that she has engineered justify the revision. That may be so, but since we only aim to capture ordinary usage, this cannot justify the revision for us. However, Napolitano (2021: 96) also claims that the revision required by her analysis is limited, in that “the extension of the ordinary concept may, to a large extent, be preserved.” Here we disagree. One issue is that it’s natural to talk of conspiracy theories that no one holds. Consider for example “joke” or “ironic” conspiracy theories not intended to be believed by anyone, such as the theory that birds aren’t real. Or consider our earlier example of the theory that the Canadian government is secretly conspiring to form the crop circles in the UK as a way of communicating with aliens. Plausibly, no one believes these conspiracy theories, and so Napolitano’s account cannot classify them as such. Meanwhile, (ESD) counts both as conspiracy theories relative to (presumably) all subjects and times.

Another issue arises because as Munro (2023b, 2024) plausibly argues, many people who assert, promote, and generally act as if they believe paradigmatic conspiracy theories do not really believe them.Footnote 44 Since they do not believe the relevant theories, they cannot be said to have conspiracy beliefs in Napolitano’s sense. And yet intuitively, the theories that these people “go along with” are conspiracy theories. (ESD) counts them as such, relative to the people who go along with them.

Putting these problems aside, counterexamples suggest that evidential self-insulation is neither necessary nor sufficient for a belief to count as belief in a conspiracy theory on ordinary usage. To illustrate why self-insulated belief in a conspiracy isn’t a necessary feature of belief in a conspiracy theory, consider the following case:

  • (QANON-MUM) Back in early January 2021 QAnon-mum believed that a cabal of Satanic child molesters, controlled by Clinton, funded by Soros, and assisted by mainstream media, conspired against Trump to steal the 2020 US election. She took part in the 6th January US Capitol attack. However, she also believed that sufficient evidence of the conspiracy would come to light in time for Trump to continue serving as the US President. The 20th January would be the day of reckoning. But, following the inauguration of Biden, QAnon-mum abandoned her QAnon beliefs altogether, citing the events of that crucial day.

Intuitively, QAnon-mum believed a conspiracy theory up until Inauguration Day when Biden was sworn in as the next US President. But her belief was not self-insulated since she gave it up in direct response to the events of 20th January. Thus, it doesn’t qualify as a conspiracy belief on Napolitano’s view. By contrast, (ESD) straightforwardly counts QAnon-mum as believing a conspiracy theory.

Is this too quick? One might naturally suppose that while QAnon-mum’s belief was not completely self-insulated, it must have been somewhat self-insulated, for how else could she have maintained it in the face of so much counterevidence until 20th January? Napolitano (2021: 86, 94) does not talk about degrees of self-insulation, and when she defines ‘conspiracy beliefs’ in terms of self-insulation, she suggests they are “completely immune to counter-evidence [disconfirmation]”. But perhaps cases like QAnon-mum show that the best version of her preferred definition would be in terms of a high degree of, rather than total, evidential self-insulation.

A tricky question then arises about how self-insulated a belief in a conspiracy must be if it is to count as a conspiracy belief. Presumably, any belief in a minimally competent conspiracy is liable to be somewhat self-insulating, as it would be reasonable for the holder of that belief to think that some apparent counterevidence can be explained away with reference to the conspiracy. But Napolitano doesn’t want to count all beliefs in conspiracies as conspiracy beliefs, and, importantly for our purposes, we don’t ordinarily do so. Relatedly, Napolitano’s argument that belief in conspiracy theories is always irrational hinges on the idea that conspiracy beliefs are completely self-insulated, and it’s unclear that the argument will work if we understand conspiracy theories in terms of merely a high degree of evidential self-insulation. But if this argument fails, we have no reason to think that Napolitano’s definition of ‘conspiracy theory’ captures the ordinary pejorative meaning.

Nor is self-insulated belief in a conspiracy a sufficient feature of belief in a conspiracy theory, according to ordinary usage. Take the following case.

  • (SECOND SHOOTER) Suppose that, in addition to Lee Harvey Oswald, there actually was a second shooter of JFK, named Burt. Burt hid near Oswald, and having observed Oswald take his shot at JFK, Burt then took his own shot. Burt acted as part of an extremely competent, powerful, and secretive conspiracy to assassinate JFK. Burt is the leader of the conspiracy, and as such his knowledge of its workings and motivations is extensive and authoritative. Burt of course believes that there was a second shooter of JFK, and he believes that a powerful conspiracy has acted to cover up this fact.

If, as is plausible, Burt’s belief that there was a second shooter is completely self-insulated against any normal evidence, it counts as a conspiracy theory on Napolitano’s definition. That’s problematic for two reasons. First, it’s easy to fill in the case so that there is nothing epistemically defective about Burt’s belief, despite its self-insulation. After all, he has direct perceptual knowledge that there was a second shooter (i.e., himself), and Burt himself played a key role in organising the conspiracy to cover up this fact. It’s hard to imagine any ordinary evidence we could present to Burt that would make it reasonable for him to give up his belief. So, given that Burt’s belief counts as a belief in a conspiracy theory on Napolitano’s definition, it’s a counterexample to her claim that all belief in conspiracy theories is irrational. Second, if we fill in the details of the case so that Burt is not irrational in maintaining a self-insulated belief, it’s counterintuitive to say that Burt believes a conspiracy theory. This is because in ordinary usage ‘conspiracy theory’ is evaluative, used to attribute negative epistemic features, and yet in this case Burt’s belief has no negative epistemic feature. Thus, self-insulated belief in a conspiracy is not a sufficient condition for a belief to count as belief in a conspiracy theory (according to ordinary usage).

What does (ESD) say about (SECOND SHOOTER)? It appears that Burt has no epistemic superior, perhaps even no epistemic equal, when it comes to the topic of whether there was a second shooter of JFK. It’s true that many well-informed people give testimony that conflicts with Burt’s belief, e.g., a government commission tasked with investigating the assassination of JFK. But while the members of this commission certainly are the epistemic superiors of ordinary people, they are not epistemic superiors of Burt, who has a wealth of knowledge that the commission lacks, and who has not been misled by the conspiracy, as the commission has. So, relative to Burt, the theory that there was a second shooter does not count as a conspiracy theory on (ESD).

What about the ordinary people? Assuming that the only testimony available to them comes from the commission and other public institutions that deny the existence of a second shooter, then relative to the ordinary people this theory counts as a conspiracy theory on (ESD). If (SECOND SHOOTER) is indeed true, this does not make it the case that the ordinary people who believe the theory are not conspiracy theorists. It just happens that, by incredible chance, the conspiracy theory they believe is true. Of course, if the government commission were to uncover the conspiracy and publicly testify to the truth of (SECOND SHOOTER), these people would no longer be conspiracy theorists on our definition.

6 Conclusion

We first argued that as it stands the disagreement between generalists and particularists is merely verbal. We then revived a substantial version of the debate by adopting the project of providing an analysis of the concept ordinarily expressed by ‘conspiracy theory’. We defended (ESD) as the best candidate for such an analysis. Importantly, (ESD) implies the truth of generalism. Thus, there is an important sense in which generalism is true, namely that given the concept ordinarily expressed by ‘conspiracy theory’ (outside of theoretical contexts and attempts to engineer a new concept), generalism is true.Footnote 45

In closing, we note that generalists and particularists are often motivated by different concerns. Generalists are worried that many people believe bizarre and outlandish theories about conspiracies. Given their pejorative understanding of the concept of a conspiracy theory, they are likely to express the worry by saying that the problem is that people believe conspiracy theories. Meanwhile, particularists are worried that sometimes powerful people really do conspire to do nefarious things, and that legitimate claims that this is happening are often unfairly dismissed and suppressed on the grounds that these claims are “just” conspiracy theories. Given their neutral understanding of the concept of a conspiracy theory, they are likely to express this worry by saying that the problem is that sometimes there is nothing epistemically defective about belief in conspiracy theories, and so the attempts to dismiss them as “just” conspiracy theories should be resisted.

There is surely something valid about both concerns. Our analysis implies that generalists are right to express their worry by saying that it is a problem that many people believe far-fetched conspiracy theories, where ‘conspiracy theory’ is interpreted as expressing the ordinary concept. And we hope to have clarified what this worry amounts to by clarifying what is inherently epistemically defective about conspiracy theories, namely, that they are undermined by the testimony of epistemic superiors. It is problematic that people believe theories about conspiracies that have this epistemic defect.

On the other hand, our analysis implies that particularists should not express their worry by saying that the problem is that sometimes there is nothing epistemically defective about belief in conspiracy theories, and so attempts to dismiss these beliefs as “just” conspiracy theories should be resisted (at least insofar as ‘conspiracy theory’ is being used to express the ordinary concept). On our analysis this is a confusion, because the ordinary concept is pejorative in that it implies an epistemic defect. This is one reason why the claims of particularists can seem so odd; consider, for example, Pigden’s (2022: 133) startling claim that “every politically and historically literate person is a big-time conspiracy theorist, since every such person subscribes to a vast range of conspiracy theories”. We do not want to dismiss the general worry about the suppression of claims about genuine conspiracies that particularists are trying to express, but we recommend that they do not express the worry like this. Rather, given our analysis of the concept ordinarily expressed by ‘conspiracy theory’, a clearer expression of the particularist worry is that claims about conspiracies are sometimes dismissed as conspiracy theories when they are not really conspiracy theories. And our analysis implies that the way to establish that this is what is happening in a particular case is to show that the relevant claim is not undermined by the testimony of epistemic superiors, despite what powerful people might say.Footnote 46