I downvoted this post because it felt slippery. I kept running into parts that didn't fit together or otherwise seemed off.
If this was a google doc I might leave a bunch of comments quickly pointing to examples. I guess I can do that in list format here.
Appreciate the comment even if you disliked the post! Here are some responses to various bullet points in a kind of random order that made sense to me:
The post highlights the market for lemons model, but then the examples keep not fitting the lemons setup. Covid misinformation wasn't an adverse selection problem, nor was having spies in the government, nor was the Madman Theory situation.
The market for lemons model is really just an extremely simple model of a single-dimensional adversarial information environment. It's so simple that it's hard to fit to any real-world situation, since it captures only a single dimension of price signals. As you add more dimensions of potential deception, things get more complex, and that's when the paranoia stuff becomes more useful.
I think COVID misinformation fits the lemons market situation pretty well, though of course not perfectly. Happy to argue about it. Information markets are a bit confusing because marginal costs and marginal willingness to pay are both very low, but I do think that at least for informed observers, most peaches (i.e. high-quality information sources) ended up being priced out by low-quality l...
That helped give me a better sense of where you're coming from, and more of an impression of what the core thing is that you're trying to talk about. Especially helpful were the diagonalization model at the end (which I see you have now made into a separate post) and the part about "paranoia to me is centrally invoked by high-bandwidth environments that are hard to escape from" (while gesturing at a few examples, including you at CEA). Also your exchange elsewhere in the comments with Richard.
I still disagree with a lot of what you have to say, and agree with most of my original bullet points (though I'd make some modifications to #2 on your division into three strategies and #6 on selective skepticism). Not sure what the most productive direction is to go from here. I have some temptation to get into a big disagreement about covid, where I think I have pretty different models than you do, but that feels like it's mainly a tangent. Let me instead try to give my own take on the central thing:
The central topic is situations where an adversary may have compromised some of your internal processes. Especially when it's not straightforward to identify what they've compromised, fix your process...
Great post. I'm going to riff on it to talk about what it would look like to have an epistemology which formally explains/predicts the stuff in this essay.
Paranoia is a hard thing to model from a Bayesian perspective, because there's no slot to insert an adversary who might fuck you over in ways you can't model (and maybe this explains why people were so confused about the Market for Lemons paper? Not sure). However, I think it's a very natural concept from a Knightian perspective. My current guess is that the correct theory of Knightian uncertainty will be able to formulate the concept of paranoia in a very "natural" way (and also subsume Bayesian uncertainty as a special case where you need zero paranoia because you're working in a closed domain which you have a mechanistic understanding of).
The worst-case assumption in infra-Bayesianism (and the maximin algorithm more generally, e.g. as used in chess engines) is one way of baking in a high level of paranoia. However, two drawbacks of that approach:
Paranoia can itself constitute being fucked over by an agent that can induce it while not being weakened by it, a la cults encouraging their members to broadly distrust the world leaving the cult as the only trusted source of information, or political players inciting purges that'll fall on their enemies.
Yeah, this is a big thing that I hope to write more about. Like, a huge dynamic in these kinds of conflicts is someone reinforcing the paranoia, which then produces more flailing, making the person's environment more disorienting, which causes them to become worse at updating on evidence and more disoriented, which then makes it easier to throw them off balance even further.
Like, as is kind of part of this whole dynamic, the process you use to restabilize yourself in adversarial environments can itself be turned against you.
Nitpick:
If you can't predict what you are going to do tomorrow, your opponents can't either.
Not necessarily. If you're using a shoddy randomization procedure, a smarter opponent can identify flaws in it, and then they'd be better able to predict you than you can predict yourself. E.g., "vibes-based" randomization, where you just do whatever feels random to you at the moment, is potentially vulnerable in this way, due to your (deterministic) ideas regarding what's appealing or what's random.
Fair point! And not fully without basis.
I do think that in reality, outside of a few quite narrow cybersecurity scenarios, you practically always have access to enough bits of genuine randomness to draw on. I'll still edit in a "probably".
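To make the distinction concrete, here is a minimal illustrative sketch in Python (the scenario and option names are hypothetical, not taken from the comments above): a "vibes-based" chooser that avoids repeats because repeats don't feel random is a deterministic rule an observer can learn, whereas drawing from the operating system's entropy pool is not.

```python
import secrets

# Hypothetical options, purely for illustration.
OPTIONS = ["negotiate", "stall", "walk away"]

def vibes_based_choice(history):
    # "Whatever feels random": never repeat the last move, because repeats don't feel random.
    # This is a deterministic rule, so an opponent who has seen your history can predict it.
    if not history:
        return OPTIONS[0]
    return next(option for option in OPTIONS if option != history[-1])

def genuinely_random_choice():
    # Draws from OS-level entropy; even an opponent who knows this exact code
    # cannot predict the next choice.
    return secrets.choice(OPTIONS)
```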
Yet another approach: A dictator surrounds himself with the people he used to know before he became the dictator, i.e. people whom he learned to trust in an environment where there was not yet much point in lying to him.
The CDC started lying about the effectiveness of masks to convince people to stop using them so service workers would have access to them as political pressure on them mounted.
That's the common explanation given, but from my understanding it's at least partially incorrect: the original recommendations were due in part to not understanding that Covid was airborne. And that was because of a major communication error ~60 years ago from "Alexander Langmuir, the influential chief epidemiologist of the newly established CDC", who had pushed back on the idea that airborne viruses are common because he thought it was too similar to miasma theory.
Like his peers, Langmuir had been brought up in the Gospel of Personal Cleanliness, an obsession that made handwashing the bedrock of US public health policy. He seemed to view Wells’ ideas about airborne transmission as retrograde, seeing in them a slide back toward an ancient, irrational terror of bad air—the “miasma theory” that had prevailed for centuries. Langmuir dismissed them as little more than “interesting theoretical points.”
Langmuir seems to have finally accepted it after a period of time, but the focus was on <5 micron particles a...
It occurs to me this is connecting to the concepts in From Personal to Prison Gangs: Enforcing Prosocial Behavior. That post notes how dramatically increasing the number of prisoners in a prison means the prisoners have a much harder time establishing trust and reputation, because there are too many people to keep track of. The result is prison gangs: gang leaders are few enough that they can manage trust between each other, and then they are responsible for ensuring their gang members follow "the rules."
...At some point over the past couple hundred years, society underwent a transition similar to that of the California prison system.
In 1800, people were mostly farmers, living in small towns. The local population was within an order of magnitude of Dunbar’s number, and generally small enough to rely on reputation for day-to-day dealings.
Today, that is not the case [citation needed].
Just as in prisons and companies, we should expect this change to drive two kinds of transitions:
- A transition from informal, decentralized rules to formal, written, centrally-enforced rules.
- A transition from individual to group-level identity.
This can explain an awful lot of the ways in which society has changed o
The feeling of losing is a sense of disorientation and confusion and constant reorienting as reality changes more quickly than you can orient to, combined with desperate attempts to somehow slow down the speed at which your adversaries are messing with you.
I just want to note that there doesn't need to be anything adversarial about that. I see this sentence as a very on-point articulation of what is going on in people's minds when they are angry about technological or societal changes.
Both medical advice and legal advice are categories where we only allow certified experts to speak freely
Really? I thought only medical/legal/financial professionals have to write "not medical/legal/financial advice" disclaimers. (I'm not from the US.)
This treatment suffers from being framed entirely in terms of loss-aversion rather than in positive terms that balance costs against benefits. An important remedy to spies and subversion is above-board investigative processes like open courts. But then you have to be thinking in terms of the info and outcomes you want, not just in terms of who might get one over on you.
All that said, in reality, navigating a lemon market isn’t too hard. Simply inspect the car to distinguish bad cars from good cars, and then the market price of a car will at most end up at the pre-lemon-seller equilibrium, plus the cost of an inspection to confirm it’s not a lemon. Not too bad.
...“But hold on!” the lemon car salesman says. “Don’t you know? I also run a car inspection business on the side”. You nod politely, smiling, then stop in your tracks as the realization dawns on you. “Oh, and we also just opened a certification business that certif
Curated. I think it's long been a problem that LessWrong doesn't have great models of how to handle adversarial situations. I've been wanting Habryka to write up their thoughts on this for a while.
I think this post is more like re-establishing some existing concepts that have already been in the water supply (as opposed to adding something new), but it does a good job introducing them in a way that sets up the problem, with a kind of practical mindset. It does a good job motivating why you'd want a more fleshed out model for thinking about this, and, I thin...
There is a lot of variance in decision-making quality that is not well-accounted for by how much information actors have about the problem domain, and how smart they are.
...I currently believe that the factor that explains most of this remaining variance is "paranoia". In particular the kind of paranoia that becomes more adaptive as your environment gets filled with more competent adversaries. While I am undoubtedly not going to succeed at fully conveying why I believe this, I hope to at least give an introduction into some of the concepts I use to think
I don't comment a lot, but I felt this one was definitely worth the read and my time.
While I don't necessarily agree with every aspect, much of this resonated with how I see social media has (been) warped from a regular market of social connection into a lemon market, where the connection is crappy and many sane people I know are blinding themselves to it (leaving behind, in some corners, a cesspool of the dopamine-hit addicted).
Ultimately, this also seems to be true about how people have responded to the latest wave of human-rights initiatives (DEI) carried ...
Good post.
A thing that's not mentioned here but is super salient to my personal experience of navigating an untrustworthy world is reputation-based strategies - asking your friends, transitive trust chains, to some extent ratings/reviews/forum comments are this-ish too. This is perhaps a subset of "blind yourself" (I often do kind of blind myself to anything not recommended by friends, when I can afford to). And I guess this kind of illustrates how even though we're in an age where anyone can in principle access any information and any thing, it's common ...
Great post, with lots of applications for startups, especially in how you choose which ideas to work on. I've had a "claim" philosophy in contrast to the idea of "positioning", in that the latter is a passive activity; it is what gets assigned to you by the market instead of what you push onto the market, e.g., how Slack positions itself (workplace productivity) versus how people actually use it and talk about it (chatting app for work). Which is also why it's notoriously difficult to find the right positioning, because you're essentially trying to guesstimate...
Great post - enjoyable read and connected some concepts I hadn't considered together before.
The first thing that immediately comes to mind when I think about how to act in such an environment is reputation: trying to determine which actors are adversarial based on my (or others') previous interactions with them. I think I would try this before resorting to the other three tactics.
For example, before online ratings became a thing, chain restaurants had one significant advantage over local restaurants: if you were driving through and needed a place to ...
Trouble starts brewing when your adversary can predict what you are going to do before you, yourself, even know what you are going to do. Which, as luck would have it, is likely to be an increasingly common feeling in the AI-dominated world.
Both medical advice and legal advice are categories where we only allow certified experts to speak freely,
In reddit's legal advice forum, commenters just proclaim that their advice is not legal advice, whether they are lawyers or not. Sometimes they recommend getting a lawyer instead of giving advice.
At some point, when you are surrounded by people feeding you information adversarially and sabotaging your plans, you just start purging people until you feel like you know what is going on again.
One of my friends—who was the target of a vicious online witch hunt over their political beliefs—eventually adopted this strategy while vetting new members for their Discord server. Entryism (real or imagined) creates a multipolar trap where both sides are maximally insulated against outside beliefs.
In practice, the way this problem is often solved nowadays is to find third-party internet forums where people can leave honest reviews that can't easily be censored - such as google maps reviews, reviews on reddit, glassdoor job reviews, and so on.
Google and Reddit can't be trusted to be censorship-free either, but the instances of censorship there are often various govts (China, US, Russia etc) demanding censorship, as opposed to your ice cream seller demanding censorship.
Mass violence, especially collusion to apply violence between various parties (gov...
OK, so you're talking about the conjunction of two things. One is the social and political milieu of Bay Area rationalism. That milieu contains anti-democratic ideologies and it is adjacent to the actual power elite of American tech, who are implicated in all kinds of nefarious practices. The other thing is something to do with the epistemology, methodology, and community practices of that rationalism per se, which you say render it capable of being coopted by the power philosophy of that amoral elite.
These questions interest me, but I live in Australia and have zero experience of the 21st century Bay Area (and of power elites in general), so I'm at a disadvantage in thinking about the social milieu. If I think about how it's evolved:
Peter Thiel was one of the early sponsors of MIRI (when it was SIAI). At that time, politically, he and Eliezer were known simply as libertarians. This was the world before social media, so politics was more palpably about ideas...
Less Wrong itself was launched during the Obama years, and was designed to be apolitical, but surveys always indicated a progressive majority among the users, with other political identities also represented...
People sometimes make mistakes. (Citation Needed)
The obvious explanation for most of those mistakes is that people do not have access to sufficient information to avoid the mistake, or are not smart enough to think through the consequences of their actions.
This predicts that as decision-makers get access to more information, or are replaced with smarter people, their decisions will get better.
And this is substantially true! Markets seem more efficient today than they were before the onset of the internet, and in general decision-making across the board has improved on many dimensions.
But in many domains, I posit, decision-making has gotten worse, despite access to more information, and despite much larger labor markets, better education, the removal of lead from gasoline, and many other things that should generally cause decision-makers to be more competent and intelligent. There is a lot of variance in decision-making quality that is not well-accounted for by how much information actors have about the problem domain, and how smart they are.
I currently believe that the factor that explains most of this remaining variance is "paranoia". In particular the kind of paranoia that becomes more adaptive as your environment gets filled with more competent adversaries. While I am undoubtedly not going to succeed at fully conveying why I believe this, I hope to at least give an introduction into some of the concepts I use to think about it.
The simplest economic model of paranoia is the classical "lemons market":
In the classical lemon market story you (and a bunch of other people) are trying to sell some nice used cars, and some other people are trying to buy some nice used cars, and everyone is happy making positive-sum trades. Then a bunch of defective used cars ("lemons") enter the market, which are hard to distinguish from the high-quality used cars since the kinds of issues that used cars have are hard to spot.
Buyers adjust their willingness to pay downwards as the average quality of car in your market goes down. This causes more of the high-quality sellers to leave the market as they no longer consider their car worth selling at that lower price. This further reduces the average willingness to pay of the buyers, which in turn drives more high-quality sellers out of the market. In the limit, only lemons are sold.
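To make the unraveling concrete, here is a minimal simulation sketch in Python. The specific numbers (uniformly distributed quality, buyers valuing any given car 50% more than its seller) are illustrative assumptions chosen so the classic result appears; they are not taken from the post or from Akerlof's paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: quality is uniform on [0, 1] and only the seller observes it.
# Buyers value any given car 50% more than its seller does, so every trade would create
# surplus if quality were public knowledge.
n = 10_000
quality = rng.uniform(0, 1, n)
seller_value = 2000 * quality          # seller's reservation price
buyer_value = 1.5 * seller_value       # what the car is actually worth to a buyer

price = buyer_value.mean()             # buyers start by offering the average value of all cars
for round_number in range(60):
    on_market = seller_value <= price  # sellers stay only if the going price covers their value
    if not on_market.any():
        print(f"round {round_number}: no cars left on the market")
        break
    # Buyers can't observe quality, so they only pay the average value of the cars still offered.
    new_price = buyer_value[on_market].mean()
    print(f"round {round_number}: {on_market.sum():5d} cars offered, price -> {new_price:8.2f}")
    if abs(new_price - price) < 1e-9:
        break
    price = new_price
```

With these assumptions, the price and the set of cars on offer shrink every round until essentially only the worst cars remain, the "only lemons are sold" limit; with other parameter choices the market shrinks to a low-quality rump rather than collapsing entirely.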
In this classical model, a happily functioning market where both buyers and sellers are happy to trade, generating lots of surplus for everyone involved, can be disrupted or even completely destroyed[1] by the introduction of a relatively small number of adversarial sellers who sneakily sell low-quality goods. From the consumer side, this looks like having a fine and dandy time buying used cars one day, and the next day being presented with a large set of deals so suspiciously good that you know something is wrong (and you are right).
Buying a car in a lemons market is a constant exercise in trying to figure out how the other person is trying to fuck you over. If you see a low offer for a car, this is evidence both that you got a great deal and that the counterparty knows something you don't that they are using to fuck you over. If the latter outweighs the former, no deal happens.
For some reason, this simple dynamic is surprisingly hard for people to come to terms with. Indeed, the reception section of the Wikipedia article for Akerlof's seminal paper on this is educational:
Both the American Economic Review and the Review of Economic Studies rejected the paper for "triviality", while the reviewers for Journal of Political Economy rejected it as incorrect, arguing that, if this paper were correct, then no goods could be traded.[4] Only on the fourth attempt did the paper get published in Quarterly Journal of Economics.[5] Today, the paper is one of the most-cited papers in modern economic theory and most downloaded economic journal paper of all time in RePEC (more than 39,275 citations in academic papers as of February 2022).[6] It has profoundly influenced virtually every field of economics, from industrial organisation and public finance to macroeconomics and contract theory.
(You know that a paper is good if it gets rejected both for being "trivial" and "obviously incorrect")
All that said, in reality, navigating a lemon market isn't too hard. Simply inspect the car to distinguish bad cars from good cars, and then the market price of a car will at most end up at the pre-lemon-seller equilibrium, plus the cost of an inspection to confirm it's not a lemon. Not too bad.
"But hold on!" the lemon car salesman says. "Don't you know? I also run a car inspection business on the side". You nod politely, smiling, then stop in your tracks as the realization dawns on you. "Oh, and we also just opened a certification business that certifies our inspectors as definitely legitimate", he says as you look for the next flight to the nearest communist country.
What do you do in a world in which there are not only sketchy used car salesmen, but also sketchy used car inspectors, and sketchy used car inspector rating agencies, or more generally, competent adversaries who will try to predict whatever method you will use to orient to the world, and aim to subvert it for their own aims?
As far as I can tell the answer is "we really don't know, seems really fucking hard, sorry about that". There are no clear solutions to what to do if you are in an environment with other smart actors[2] who are trying to predict what you are going to do and then try to feed you information to extract resources from you. Decision theory and game theory are largely unsolved problems, and most adversarial games have no clear solution.
But clearly, in practice, people deal with it somehow. The rest of this post is about trying to convey what it feels like to deal with it, and what it looks like from the outside. These "solutions", while often appropriate, also often look insane, and that insanity explains a lot of how the world has failed to get better, even as we've gotten smarter and better informed. These strategies often involve making yourself dumber in order to make yourself less exploitable, and these strategies become more tempting the smarter your opponents are.
John Boyd, a US Air Force Colonel, tried to figure out good predictors of who wins fighter jet dogfights. In pursuit of that he spent 30 years publishing research reports and papers and training recruits, ultimately culminating in his model of the "OODA loop".
In this model, a fighter jet pilot is engaging in a continuous loop of: Observe, Orient, Decide, Act. This loop usually plays out over a few seconds as the fighter observes new information, orients towards this new environment, makes a decision on how to respond, and ultimately acts. Then they observe again (both the consequences of their own actions and of their opponent), orient again, etc.
What determines (according to Boyd) who wins in a close dogfight is which fighter can "get into" the other fighter's OODA loop.
If you can...
You will win the fight. Or as Boyd said "he who can handle the quickest rate of change survives". And to his credit, the formal models of fighter-jet maneuverability he built on the basis of this theory have (at least according to Wikipedia) been one of the guiding principles of modern fighter jet design including the F-15 and F-16 and are widely credited with determining much of modern battlefield strategy.
Beyond the occasional fighter-jet dogfight I get into, I find this model helpful for understanding the subjective experience of paranoia in a wide variety of domains. You're trying to run your OODA loop, but you are surrounded by adversaries who are simultaneously trying to disrupt your OODA loop while trying to speed up their own. When they get into your OODA loop, it feels like you are being puppeted by your adversary, who can predict what you are going to do faster than you can adapt.
The feeling of losing is a sense of disorientation and confusion and constant reorienting as reality changes more quickly than you can orient to, combined with desperate attempts to somehow slow down the speed at which your adversaries are messing with you.
There are lots of different ways people react to adversarial information environments like this, but at a high level, my sense is there are roughly three big strategies:
- Blind yourself to the compromised information channels
- Purge the people who might be deceiving you
- Make yourself unpredictable
All three of those produce pretty insane-looking behavior from the outside, yet I think they are by and large an appropriate response to adversarial environments (if far from optimal).
When a used car market turns into a lemons market, you don't buy a used car. When you are a general at war with a foreign country, and you suspect your spies are compromised and feeding you information designed to trick you, you just ignore your spies. When you are worried about your news being the result of powerful political egregores aiming to polarize you into political positions, you stop reading the news.
At the far end of paranoia lives the isolated hermit. The trees and the butterflies are (mostly) not trying to deceive you, and you can just reason from first principles about what is going on with the world.
While the extreme end of this is costly, we see a lot of this in more moderate form.
My experience of early-2020 COVID involved a good amount of blinding myself to various sources of information. In January, as the pandemic was starting to become an obvious problem in the near future, the discussion around COVID picked up. Information quality wasn't perfect, but overall, if you were looking to learn about COVID, or respiratory diseases in general, you would have a decent-ish time. Indeed, much of the research I used to think about the likely effects of COVID early on in the pandemic was directly produced by the CDC.
Then, the pandemic became obvious to the rest of the world, and a huge number of people started having an interest in shaping what other people believed about COVID. The CDC started lying about the effectiveness of masks to convince people to stop using them so service workers would have access to them as political pressure on them mounted. Large fractions of society started wiping down every surface and trying to desperately produce evidence that rationalized this activity. Most channels that people relied on for reliable health information became a market for lemons as forces of propaganda drowned out the people still aiming to straightforwardly inform.
I started ignoring basically anything the CDC said. I am sure many good scientists still worked there, but I did not have the ability to distinguish the good ones from the bad ones. As the adversarial pressure rose, I found it better to blind myself to that information.
The general benefits of blinding yourself to information in adversarial environments are so commonly felt, and so widely appreciated, that constraining information channels is a part of almost every large social institution:
U.S. courts extensively restrict what evidence can be shown to juries
A lot of US legal precedent revolves around the concept of "admissible evidence", and even "admissible argument". We are paranoid about juries getting tricked, so we blind juries to most evidence relevant to the case we are asking them to judge, hoping to shield them from getting tricked and controlled by the lawyers of either side, while still leaving enough information available to usually make adequate judgements.
Nobody is allowed to give legal or medical advice
While much of this is the result of regulatory capture, we still highly restrict the kind of information that people are allowed to give others on many of the topics that matter most to people. Both medical advice and legal advice are categories where we only allow certified experts to speak freely, and even there, we only do so in combination with intense censure if the advice later leads to bad consequences for the recipients.
Within governments, the "official numbers" are often the only things that matter
The story of CIA analyst Samuel Adams and his attempts at informing the Johnson administration about the number of opponents the US was facing in the Vietnam war is illustrative here. As Adams himself tells the story, he found what appeared to him to be very strong evidence that Vietnamese forces were substantially more numerous than previously assumed (600,000 vs. 250,000 combatants):
Dumbfounded, I rushed into George Carver's office and got permission to correct the numbers. Instead of my own total of 600,000, I used 500,000, which was more in line with what Colonel Hawkins had said in Honolulu. Even so, one of the chief deputies of the research directorate, Drexel Godfrey, called me up to say that the directorate couldn't use 500,000 because "it wasn't official."
[...]
The Saigon conference was in its third day, when we received a cable from Helms that, for all its euphemisms, gave us no choice but to accept the military's numbers. We did so, and the conference concluded that the size of the Vietcong force in South Vietnam was 299,000.
[...]
A few days after Nixon's inauguration, in January 1969, I sent the paper to Helms's office with a request for permission to send it to the White House. Permission was denied in a letter from the deputy director, Adm. Rufus Taylor, who informed me that the CIA was a team, and that if I didn't want to accept the team's decision, then I should resign.
When governments operate on information in environments where many actors have reasons to fudge the numbers in their direction, they highly restrict what information is a legitimate basis for arguments and calculations, as illustrated in the example above.
The next thing to try is to weed out the people trying to deceive you. This... sometimes goes pretty well. Most functional organizations do punish lying and deception quite aggressively. But catching sophisticated deception or disloyalty is very hard. McCarthyism and the Second Red Scare stand as an interesting illustration:
President Harry S. Truman's Executive Order 9835 of March 21, 1947, required that all federal civil-service employees be screened for "loyalty". The order said that one basis for determining disloyalty would be a finding of "membership in, affiliation with or sympathetic association" with any organization determined by the attorney general to be "totalitarian, fascist, communist or subversive" or advocating or approving the forceful denial of constitutional rights to other persons or seeking "to alter the form of Government of the United States by unconstitutional means".[10]
What became known as the McCarthy era began before McCarthy's rise to national fame. Following the breakdown of the wartime East-West alliance with the Soviet Union, and with many remembering the First Red Scare, President Harry S. Truman signed an executive order in 1947 to screen federal employees for possible association with organizations deemed "totalitarian, fascist, communist, or subversive", or advocating "to alter the form of Government of the United States by unconstitutional means."
At some point, when you are surrounded by people feeding you information adversarially and sabotaging your plans, you just start purging people until you feel like you know what is going on again.
This can again look totally insane from the outside, with lots of innocent people getting caught in the crossfire and a lot of distress and flailing.
But it's really hard to catch all the spies if you are indeed surrounded by lots of spies! The story of the Rosenbergs during this time period illustrates this well:
Julius Rosenberg (May 12, 1918 – June 19, 1953) and Ethel Rosenberg (born Greenglass; September 28, 1915 – June 19, 1953) were an American married couple who were convicted of spying for the Soviet Union, including providing top-secret information about American radar, sonar, jet propulsion engines, and nuclear weapon designs. They were executed by the federal government of the United States in 1953 using New York's state execution chamber in Sing Sing in Ossining,[1] New York, becoming the first American civilians to be executed for such charges and the first to be executed during peacetime.
The conviction of the Rosenbergs resulted in enormous national pushback against McCarthyism, and played a big role in cementing its legacy as a period of political overreach and undue paranoia:
After the publication of an investigative series in the National Guardian and the formation of the National Committee to Secure Justice in the Rosenberg Case, some Americans came to believe both Rosenbergs were innocent or had received too harsh a sentence, particularly Ethel. A campaign was started to try to prevent the couple's execution. Between the trial and the executions, there were widespread protests and claims of antisemitism. At a time when American fears about communism were high, the Rosenbergs did not receive support from mainstream Jewish organizations. The American Civil Liberties Union did not find any civil liberties violations in the case.[37]
Across the world, especially in Western European capitals, there were numerous protests with picketing and demonstrations in favor of the Rosenbergs, along with editorials in otherwise pro-American newspapers. Jean-Paul Sartre, an existentialist philosopher and writer who won the Nobel Prize for Literature, described the trial as "a legal lynching".[38] Others, including non-communists such as Jean Cocteau and Harold Urey, a Nobel Prize-winning physical chemist,[39] as well as left-leaning figures—some being communist—such as Nelson Algren, Bertolt Brecht, Albert Einstein, Dashiell Hammett, Frida Kahlo, and Diego Rivera, protested the position of the American government in what the French termed the American Dreyfus affair.[40] Einstein and Urey pleaded with President Harry S. Truman to pardon the Rosenbergs. In May 1951, Pablo Picasso wrote for the communist French newspaper L'Humanité: "The hours count. The minutes count. Do not let this crime against humanity take place."[41] The all-black labor union International Longshoremen's Association Local 968 stopped working for a day in protest.[42] Cinema artists such as Fritz Lang registered their protest.[43]
Many decades later, in 1995, as part of the release of declassified information, the public received confirmation that the Rosenbergs were indeed spies:
The Venona project was a United States counterintelligence program to decrypt messages transmitted by the intelligence agencies of the Soviet Union. Initiated when the Soviet Union was an ally of the U.S., the program continued during the Cold War when it was considered an enemy.[67] The Venona messages did not feature in the Rosenbergs' trial, which relied instead on testimony from their collaborators, but they heavily informed the U.S. government's overall approach to investigating and prosecuting domestic communists.[68]
In 1995, the U.S. government made public many documents decoded by the Venona project, showing Julius Rosenberg's role as part of a productive ring of spies.[69] For example, a 1944 cable (which gives the name of Ruth Greenglass in clear text) says that Ruth's husband David is being recruited as a spy by his sister (that is, Ethel Rosenberg) and her husband. The cable also makes clear that the sister's husband is involved enough in espionage to have his own codename ("Antenna" and later "Liberal").[70] Ethel did not have a codename;[26] however, KGB messages which were contained in the Venona project's Alexander Vassiliev files, and which were not made public until 2009,[71][72] revealed that both Ethel and Julius had regular contact with at least two KGB agents and were active in recruiting both David Greenglass and Russell McNutt.[73][71][72]
Turns out, it's really hard to prove that someone is a spy. Trying to do so anyway often makes people more paranoid, which produces more intense immune reactions and causes people to become less responsive to evidence, which then breeds more adversarial intuitions and motivates more purges.
But to be clear, a lot of the time, this is a sane response to adversarial environments. If you are a CEO appointed to lead a dysfunctional organization, it is quite plausibly the right call to get rid of basically all staff who have absorbed an adversarial culture. Just be extremely careful to not purge so hard as to only be left with a pile of competent schemers.
And ultimately, if you are in a situation where an opponent keeps trying to control your behavior and get into your OODA loop, you can always just start behaving unpredictably. If you can't predict what you are going to do tomorrow, your opponents (probably) can't either.
Nixon's madman strategy stands as one interesting testament to this:
I call it the Madman Theory, Bob. I want the North Vietnamese to believe I've reached the point where I might do anything to stop the war. We'll just slip the word to them that, "for God's sake, you know Nixon is obsessed about communism. We can't restrain him when he's angry—and he has his hand on the nuclear button" and Ho Chi Minh himself will be in Paris in two days begging for peace.
Controlling an unpredictable opponent is much harder than controlling an opponent who, in their pursuit of taking optimal and sane-looking actions, ends up behaving predictably. Randomizing your strategies is a solution to many adversarial games, and in reality, making yourself unpredictable in what information you will integrate and which you will ignore, and where your triggers are for starting to use force, often gives your opponent no choice but to be more conservative, or to ease the pressure, or to try to manipulate so much information that even randomization doesn't save you.
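As a toy illustration of the randomization point (my own sketch, not an example from the post): in a zero-sum game like matching pennies, a fully predictable policy hands a fully informed adversary the maximum payoff, a "vibes-based" lean is partially exploitable, and a genuine 50/50 coin flip leaves the adversary with no edge at all.

```python
import numpy as np

# Payoff to the adversary in matching pennies: they win 1 if they match your move, lose 1 otherwise.
# Rows are your move (Heads, Tails); columns are the adversary's move (Heads, Tails).
payoff_to_adversary = np.array([[ 1, -1],
                                [-1,  1]])

def adversary_edge(your_mix):
    """Best expected payoff an adversary who knows your mixed strategy can achieve."""
    expected_payoffs = your_mix @ payoff_to_adversary  # adversary's expected payoff per pure response
    return expected_payoffs.max()

print(adversary_edge(np.array([1.0, 0.0])))  # fully predictable: 1.0 (adversary wins every round)
print(adversary_edge(np.array([0.8, 0.2])))  # slight "vibes" lean: 0.6 (still exploitable)
print(adversary_edge(np.array([0.5, 0.5])))  # genuine coin flip: 0.0 (nothing left to exploit)
```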
Now, where does this leave us? Well, first of all, I think it helps explain a bunch of the world and allows us to make better predictions about how the future will develop.
But more concretely, I think it motivates a principle I hold very dear to my heart: "Do not be the kind of actor that forces other people to be paranoid".
Paranoid people fuck up everything around them. Digging yourself out of paranoia is very hard and takes a long time. A non-trivial fraction of my life philosophy is oriented around avoiding environments that force me into paranoia and incentivizing as little paranoia as possible in the people around me.