LessWrong is trying to cultivate a specific culture. The best pointers towards that culture are the LessWrong Sequences and the New User Guide.
LessWrong operates under the benevolent dictatorship of the Lightcone Infrastructure team, led by its current CEO, habryka. It is not a democracy. For some insight into our moderation philosophy, see "Well Kept Gardens Die By Pacifism".
Norms on the site develop largely by case law: moderators notice that something is going wrong on the site, take moderation action to fix it, and in doing so establish precedent about what will trigger future moderation. There is no comprehensive set of rules you can follow that guarantees we will not moderate your comments or content. Most of the time we "know it when we see it".
LessWrong relies heavily on rate limits in addition to deleting content and banning users. New users start out with relatively lax rate limits intended to prevent spam. Users who get downvoted acquire stricter and stricter rate limits the more they are downvoted.
Not all moderation on LessWrong is done by the moderators. Authors with enough upvoted content on the site can moderate their own posts.
Below are some of the top-level posts that explain the moderation guidelines on the site, along with recent moderation comments by moderators, showing examples of what moderator intervention looks like. Beyond that, this page shows all moderation actions and bans taken across the site by anyone, including any deleted content (unless the moderators explicitly deleted it in a way that hides it from this page, which we do in cases like doxxing).
Currently rate-limited users:

| User | Account Age | Karma | Posts | Comments | Rate Limits | Trigger Reason | Triggered | Condition to Lift |
|---|---|---|---|---|---|---|---|---|
| milanrosko | 5/22/2024 | 40 | 6 | 94 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 12/19/2025 | Until last 20 posts + comments improve |
| shanzson | 4/5/2025 | 1 | 6 | 16 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. | 12/13/2025 | Until last 20 posts + comments improve |
| sdeture | 5/1/2025 | -17 | 5 | 16 | Comments: 1 per 3 days (rolling); Posts: 1 per 2 weeks (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 10/14/2025 | Until last 20 posts + comments improve |
| Jef Jelten | 4/8/2023 | -17 | 0 | 6 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 12/12/2025 | Until last 20 posts + comments improve |
| Saif Khan | 4/16/2025 | -1 | 3 | 6 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. | 12/7/2025 | Until last 20 posts + comments improve |
| Abe Dillon | 6/25/2019 | 67 | 1 | 35 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. | 12/6/2025 | Until last 20 posts + comments improve |
| Oscar Davies | 9/17/2025 | -28 | 2 | 5 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/29/2025 | Until last 20 posts + comments improve |
| Jesper L. | 8/19/2025 | 93 | 6 | 105 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. | 11/27/2025 | Until last 20 posts + comments improve |
| Joseph Van Name | 2/6/2023 | -7 | 7 | 112 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/24/2025 | Until last 20 posts + comments improve |
| PaddyC | 7/8/2024 | -19 | 1 | 19 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/21/2025 | Until last 20 posts + comments improve |
| Cipolla | 5/17/2024 | 0 | 5 | 20 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. | 11/19/2025 | Until last 20 posts + comments improve |
| breaker25 | 10/14/2025 | -25 | 1 | 5 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/19/2025 | Until last 20 posts + comments improve |
| d_el_ez | 1/20/2025 | 52 | 1 | 101 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/18/2025 | Until last 20 posts + comments improve |
| samuelshadrach | 12/22/2024 | 253 | 36 | 380 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/18/2025 | Until last 20 posts + comments improve |
| Blake | 8/24/2022 | 15 | 7 | 15 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. | 11/17/2025 | Until last 20 posts + comments improve |
| Krantz | 4/7/2023 | -13 | 5 | 16 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/17/2025 | Until last 20 posts + comments improve |
| Shankar Sivarajan | 4/17/2019 | 1303 | 6 | 567 | Comments: 1 per day (rolling) | Users with less than -5 karma on recent posts/comments can write up to 1 comment per day. | 11/14/2025 | Until last 20 posts + comments improve |
| Sinclair Chen | 1/29/2018 | 652 | 7 | 203 | Comments: 1 per hour (rolling) | Users with less than 0 karma on recent posts/comments can comment once per hour. | 11/9/2025 | Until last 20 posts + comments improve |
| p4rziv4l | 1/12/2021 | -67 | 5 | 21 | Comments: 1 per 3 days (rolling) | Users with less than -15 karma on recent posts/comments can write up to 1 comment every 3 days. | 11/6/2025 | Until last 20 posts + comments improve |
| dscft | 4/5/2025 | 42 | 1 | 10 | Posts: 1 per week (rolling) | Users with less than -15 karma on their recent posts can post once per week. | 11/1/2025 | Until last 20 posts + comments improve |
Recently rejected posts:

| Date | Title | Author | Reason |
|---|---|---|---|
| 12/21/2025 | Why intelligence might be fundamentally trajectory‑based, not state‑based | Albert E. Vinci | This is an automated rejection: an LLM-detection service flagged your post as >50% likely to be written by an LLM, and we do not accept LLM-generated, heavily LLM-assisted/co-written, or otherwise LLM-reliant work. We have been having a wave of LLM-written or co-written work that does not meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is somewhat like a college application: it should demonstrate that you can think clearly without AI assistance, so we reject all LLM-generated posts from new users. We also reject work in certain categories that are too time-costly to evaluate and typically turn out not to make much sense, which LLMs frequently steer people toward (for example: case studies of LLM sentience, emergence, or recursion; novel physics interpretations; or AI alignment strategies developed in tandem with an AI co-author; AIs may seem quite smart, but they are not good judges of the quality of novel ideas). If English is your second language and you were using an LLM to translate, try writing the post in your native language and translating it with different (preferably non-LLM) translation software. If you think the flag was a mistake and all 3 of the criteria are true, you can message us on Intercom or at [email protected] to ask for reconsideration; if any of them are false, we will not accept your post. |
| 12/21/2025 | The Last Line of Defense: A Proposal for Adversarial Judgment Signal Observation Channels | Evan kim | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/21/2025 | Sandbagging Is Linearly Separable in Transformer Activations | Subhadip | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/20/2025 | The Logic Breakers: Why Logic Is A Biological Heuristic, Not A Universal Law. | Oliviero | |
| 12/20/2025 | The Rationality Community’s Rationality Problem: “Irrationality” Is a Modeling Error | James Miller | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/20/2025 | Why Does Thinking Feel Like Something? (And Why That's Not Actually a Mystery) | Rafael Almeida Reis | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/20/2025 | Negation Cost in Large Language Models | Diego C | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/20/2025 | Re-reasoning the Transformer and Understanding Why RL Cannot Adapt to Infinite Tasks | Dandelion | |
| 12/20/2025 | Fasten your seatbelts, 2026 will have a new pandemic in her sleeves. | haghiri75 | |
| 12/20/2025 | The Will in Silicon: The Emergence of Data-Based Organisms and Breaking the Biological Barrier | Ze the Mad Bird | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/19/2025 | Logical Dependence in Fundamental Physics | Martin Pražák | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/19/2025 | "Sanity is a Skill Issue" — A Father's Plea on the Risk of Outsourcing Our Self-Programming. | Mahmoud Saeed Elkomy | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/19/2025 | Structural Limits of Scaling-Based AI | wangius | |
| 12/19/2025 | The Military Origin of Modernity: How Ballistics Invented the West | The Eastern Audit | |
| 12/19/2025 | Tollner’s Law: A Structural Hypothesis About Observation, Uncertainty, and Alignment Risk By: Nickolus Tollner | NickTollner | |
| 12/19/2025 | Tollner's Law: A Safety-First Constraint on Observability, Agency, and Alignment By: Nickolus Tollner | NickTollner | |
| 12/19/2025 | A Thinking Discipline That Uses Prediction as Its Test | Mars “Ma-rs” | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/19/2025 | Is there existing work on treating continuity and halting as first-class constraints in long-horizon LLM interaction? | Atron | Automated rejection: an LLM-detection service flagged the post as >50% likely LLM-written (same template as the first rejection above). |
| 12/19/2025 | The 700-Year Ballistic Trajectory: Why AGI is the Inevitable "Philosopher-King" | The Eastern Audit | |
| 12/19/2025 | Untitled Draft | Me | |
Recently rejected comments (the Post column shows the post the comment was left on):

| Date | User | Post | Reason |
|---|---|---|---|
| 12/20/2025 | The Eastern Audit | AGI Ruin: A List of Lethalities | |
| 12/20/2025 | Zuzana Kapustikova | A Three-Layer Model of LLM Psychology | |
| 12/20/2025 | [email protected] | Emergent Machine Ethics: A Foundational Research Framework for the Intelligence Symbiosis Paradigm | |
| 12/19/2025 | Eric Werkhoven | Eliezer's Unteachable Methods of Sanity | |
| 12/19/2025 | Casey Pearce (Doc Holliday) | When Were Things The Best? | |
| 12/19/2025 | JamesWalker | — | This is an automated rejection. No LLM generated, heavily assisted/co-written, or otherwise reliant work. An LLM-detection service flagged your post as >50% likely to be written by an LLM. We've been having a wave of LLM written or co-written work that doesn't meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is sort of like the application to a college. It should be optimized for demonstrating that you can think clearly without AI assistance. So, we reject all LLM generated posts from new users. We also reject work that falls into some categories that are difficult to evaluate that typically turn out to not make much sense, which LLMs frequently steer people toward.* "English is my second language, I'm using this to translate" If English is your second language and you were using LLMs to help you translate, try writing the post yourself in your native language and using a different (preferably non-LLM) translation software to translate it directly. "What if I think this was a mistake?" For users who get flagged as potentially LLM but think it was a mistake, if all 3 of the following criteria are true, you can message us on Intercom or at [email protected] and ask for reconsideration.
If any of those are false, sorry, we will not accept your post. * (examples of work we don't evaluate because it's too time costly: case studies of LLM sentience, emergence, recursion, novel physics interpretations, or AI alignment strategies that you developed in tandem with an AI coauthor – AIs may seem quite smart but they aren't actually a good judge of the quality of novel ideas.) |
| 12/19/2025 | JamesWalker | — | This is an automated rejection. No LLM generated, heavily assisted/co-written, or otherwise reliant work. An LLM-detection service flagged your post as >50% likely to be written by an LLM. We've been having a wave of LLM written or co-written work that doesn't meet our quality standards. LessWrong has fairly specific standards, and your first LessWrong post is sort of like the application to a college. It should be optimized for demonstrating that you can think clearly without AI assistance. So, we reject all LLM generated posts from new users. We also reject work that falls into some categories that are difficult to evaluate that typically turn out to not make much sense, which LLMs frequently steer people toward.* "English is my second language, I'm using this to translate" If English is your second language and you were using LLMs to help you translate, try writing the post yourself in your native language and using a different (preferably non-LLM) translation software to translate it directly. "What if I think this was a mistake?" For users who get flagged as potentially LLM but think it was a mistake, if all 3 of the following criteria are true, you can message us on Intercom or at [email protected] and ask for reconsideration.
If any of those are false, sorry, we will not accept your post. * (examples of work we don't evaluate because it's too time costly: case studies of LLM sentience, emergence, recursion, novel physics interpretations, or AI alignment strategies that you developed in tandem with an AI coauthor – AIs may seem quite smart but they aren't actually a good judge of the quality of novel ideas.) |
| 12/19/2025 | JamesWalker | — | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/19/2025 | John Martin | What is David Chapman talking about when he talks about "meaning" in his book "Meaningness"? | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/19/2025 | Aashish999 | The $140K Question: Cost Changes Over Time | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/19/2025 | Aashish999 | The $140,000 Question | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/18/2025 | onion | onion's Shortform | |
| 12/18/2025 | rainmarket83 | Why would AIs not be likely to be conscious or morally relevant? | |
| 12/18/2025 | doeixd | doeixd's Shortform | |
| 12/18/2025 | dwaltig | The Waluigi Effect (mega-post) | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/18/2025 | Q as In Qarmik | Q as In Qarmik's Shortform | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/18/2025 | Q as In Qarmik | Q as In Qarmik's Shortform | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| 12/18/2025 | EntanglementResonance159856 | The behavioral selection model for predicting AI motivations | |
| 12/17/2025 | EntanglementResonance159856 | The behavioral selection model for predicting AI motivations | |
| 12/17/2025 | george gogilayri | The Rise of Parasitic AI | This is an automated rejection. (Same automated LLM-detection rejection notice as the first entry above.) |
| User | Karma | Posts | Comments | Account Creation | Banned Until |
|---|---|---|---|---|---|
| Eugine_Nier | 6397 | 2 | 4625 | 9/19/2010 | 12/31/3000 |
| ialdabaoth | 4818 | 19 | 714 | 10/11/2012 | 12/12/2029 |
| diegocaleiro | 2223 | 107 | 719 | 7/27/2009 | 1/1/2040 |
| Gleb_Tsipursky | 1557 | 88 | 875 | 7/16/2013 | 1/1/2030 |
| aphyer_evil_sock_puppet | 265 | 0 | 0 | 4/1/2022 | 4/1/3022 |
| Mirzhan_Irkegulov | 235 | 0 | 1 | 7/11/2014 | 4/28/3024 |
| Victor Novikov | 150 | 4 | 139 | 2/2/2015 | 12/25/2030 |
| ClipMonger | 112 | 0 | 20 | 7/27/2022 | 9/26/2026 |
| alfredmacdonald | 92 | 3 | 21 | 12/15/2012 | 1/1/2100 |
| Josh Smith-Brennan | 54 | 1 | 2 | 4/23/2021 | 6/14/3021 |
| lmn | 35 | 0 | 89 | 4/10/2017 | 1/1/3023 |
| Carmex | 27 | 0 | 47 | 9/18/2021 | 12/4/3021 |
| What People Are Really Like | 10 | 0 | -1 | 4/1/2023 | 4/1/3023 |
| RootNeg1Reality | 8 | 0 | 0 | 6/25/2025 | 7/6/3025 |
| mail2345 | 8 | 0 | 0 | 2/3/2011 | 5/22/3024 |
| JAEKIM M.D | 5 | 0 | 0 | 5/27/2021 | 5/28/3021 |
| DylanD | 4 | 0 | 0 | 12/25/2023 | 1/20/3025 |
| joedavidson | 4 | 0 | 0 | 3/4/2022 | 3/24/3022 |
| 29f8c80d-235a-47bc-b | 4 | 0 | 1 | 5/28/2017 | 1/1/3023 |
| feafueaf | 4 | 0 | 0 | 3/28/2023 | 3/29/3023 |