It sounds like incredibly annoying, tedious, thankless work - but it would be fascinating to see a site dedicated to applying these rules objectively/neutrally to various news outlets. Not "debunking" style, but similar to how one might go about assessing a scientific paper: "it is possible the conclusion is true, but the data (for a news story: sources, logic, context, etc...) provided do not prove that it is true, and also do not exclude other potentially more true conclusions, and here is why this is the conclusion they would like you to arrive at." Given that MSNBC gleefully points out misleading FOX articles, and vice versa, it seems like there's a market for that - but is there one for both?
I am certainly on board with the idea that "increased partisanship of media" isn't such a bad thing, particularly for the reasons you mention! I just want some process where "I, rightwing partisan, gleefully slam MSNBC story for these reasons, but then also (perhaps grudgingly) accept that this FOX story has similar problems - and reject the conclusions that BOTH are trying to sell me, and instead stop reading both and start reading stories on Whatever Site Is Publishing The Honest Critiques Of Both."
You could have a right (or left) leaning news org that published reporting with biased goals, but still upheld the virtues of honest reporting that the above post seeks to promote.
I think that cynicism is well justified - I would just like to find a way to apply that external pressure and competition, and social approval/disapproval. I think we've gotten part of the way there - lots of people really DO look down on and avoid clickbait stuff, even if it panders to their prior beliefs. Lots of people have learned to avoid spam or phishing emails - these are just more complicated social engineering scams.
I think we want to aim for some kind of social status thing - where you would look down on clicking on "this Nigerian prince offers you 10 million dollars" and through education (propaganda?) we move to look down on someone saying "did you hear that MSNBC/FOX showed that all Republicans/Democrats were fascists?"
I have all kinds of debates with the "literary critiques of racism"/"intersectionality" type arguments, but one thing they absolutely get right (and that is often ignored) is highlighting how "what is not said" or "what is chosen for context" is really important for the message. You can 100% truthfully report on Crimes, but if you put the crimes committed by individuals with X traits in the biggest letters and most prominently, you can definitely create beliefs that serve your goals without actually lying.
edit: to double down on "these are just more complicated social engineering scams" I think the Antivirus model has some merit. Particularly with the neutral text parsing of new GPT stuff, it seems like you could install a browser plugin that, instead of scanning links for known malware sites, instead parsed the text of the news story you're reading, told you what conclusion the author thought you should be getting, showed you highlights/markups that indicated the subtle nudges, selectively-emotional language, and missing context of what you're reading. Maybe Clippy could pop up and go "it looks like this article is trying to manipulate you! Would you like some context to show these evil tricksters what for?"
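To make that concrete, here is a minimal sketch of what the plugin's backend might look like, assuming the OpenAI Python client; the model name, prompt wording, and function are illustrative, not a real product:

```python
# Hypothetical backend for the "antivirus for news" plugin idea above.
# Assumes the OpenAI Python client (v1.x); prompt and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MEDIA_LITERACY_PROMPT = (
    "You are a media-literacy assistant. Given the text of a news article, report: "
    "(1) the conclusion the author seems to want the reader to draw, "
    "(2) emotionally loaded or selectively framed phrases, quoted verbatim, and "
    "(3) relevant context that appears to be missing. "
    "Do not judge whether the conclusion is true; only describe how the article argues for it."
)

def analyze_article(article_text: str) -> str:
    """Return a plain-text 'manipulation report' for one article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": MEDIA_LITERACY_PROMPT},
            {"role": "user", "content": article_text},
        ],
        temperature=0,  # keep the critique as repeatable as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    import sys
    print(analyze_article(sys.stdin.read()))
```

The browser-extension part would mostly be plumbing - scrape the article body, render the report as an overlay; the hard part is the prompt, and deciding what counts as a nudge versus ordinary framing.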
The problem is getting right-leaning people to read something critical of right-sided sources, and left-leaning people to read something critical of left-sided sources. Right now each "side" can endlessly criticize the flaws in their opponents, but are often simply and completely unaware of the flaws in the argument of their own side. They seem to truly not realize that there are flaws, or to consider them to be minor and not worth worrying about if/when they do exist.
you're absolutely right that this is the problem, and I don't have a silver bullet for it. But I think the wedge to get leverage on is convincing as many as possible that "our side's" bad/misleading arguments are actually worse for you and me and the rest of our side than if the other side had them, and that framing politics as low-quality, emotional-story-driven, identity-based argument is lucrative for media (and politicians) at our expense. Because yeah, it will never work if right wing people read left wing (or even centrist, or often even apostate, like Weiss or Taibbi above) critiques of their arguments that say "here is why you are bad people who believe bad things." It has to be from fellow travelers who share their goals, but can show that bad arguments are actually hurting their (shared) side, and that those advancing them are either self-promoting conmen or media pit fighter organizers who profit from them seeming to struggle, but never drawing real blood.
You touch on something here that I think is important and often overlooked.
> but can show that bad arguments are actually hurting their (shared) side
To me, at least, it would be a grim dark future if the only reason to care about the truth was that it instrumentally helped one's group to win. (Although I understand that this may be a useful way to get a foot in the door.)
America seems to have lost the civic virtue that embraced light and truth for their own sakes (whatever amount of that virtue we had). And I suppose the group identitarians and ethno-nationalists have made it passe to care about how bad arguments hurt both oneself as an individual and society as a whole.
But if we have no vision of why truth is Good, then we will fail to inspire. And if truth is only valued as instrumentally useful in internecine warfare, well, sometimes it's not the best tool for that purpose. There are other, older tools in the box, tools that can fit in the brains of ants. And in some situations they'll work better than "truth".
You make a good point, and definitely that part is "a foot in the door" step, as you say.
Though I think some sort of factionalism is inescapable for humanity, I think you're pointing to a wider "next step", where we try to convert people to a faction that considers its "side" to be based around that kind of objective, neutral, truth-seeking, entertainment-avoiding debate and information. They would still have whatever left/right factions they belong to, and would undoubtedly be influenced by that, but hopefully allegiance to the "objective" faction would trump that. My example for that would be something like the allegiance that Americans are supposed to have to the constitution, which trumps partisan affiliation. I think that allegiance has waned more than it should, but it was obviously there at one point, so I think it can be done.
Yeah.
Some degree of factionalism seems inherent to the human condition. Of course humans care more about "my children", "my spouse", "my family", "my people", "my viewpoint", "my country". If someone makes a nasty accusation against your spouse, it seems right to weigh that less than a similar accusation made against someone else. But to me it feels wrong to apply this beyond family and friends: I don't think my tribe's politicians are less prone to vice than the other side's, and I don't think random members of my tribe are less likely to lie about such stuff than members of the other tribe. But I suppose that was never the question; it was more about how long to back them up? My thinking in this area is not particularly clear.
(Assuming here that "my tribe" is not as I would wish it, organized around truth and virtue, but instead merely the group that calls me a heretic rather than the group that calls me a pagan.)
The underlying problem is economic incentives. The way to survive as an ad supported news publication these days is to mass produce as much minimally readable content as possible that gets clicks. Which means inflammatory headlines, partisan framing, etc. (And most are still barely profitable). People say they want stuff that impartially assesses both sides, but the revealed preference isn't for that. Or not enough that it's worth it.
I think you're right, which is why, for this hypothetical publication, I would want inflammatory headlines ("FOX/MSNBC PROPAGANDA LIES TO YOUR CHILDREN") and partisan framing, but where the party in question is not D or R, but anti-deceptive-media. I think there is actually so much media/status quo distrust bubbling around, but it is (currently) all channeled into traditionally partisan ends (FOX news lies! Don't trust the lame-stream media!) and then the person peddling that lies just as blatantly (but pleasingly). I think it is possible (difficult, certainly!) to push that further, and apply that filter honestly and consistently. Sure it would probably fail. But I wish someone with a zillion bucks would give it a shot.
Arguably that's what Substack has become for several outlets. Bari Weiss and Matt Taibbi in particular both seem to fill that niche. Neutral-ish (both are left-leaning Democrats, but spend more time criticizing Democrats and other leftists than they do Republicans and conservatives) and directly calling out poor journalism and related.
It is challenging work. This is the closest example I know of, and it was extremely resource intensive for them to put together: https://consilienceproject.org/pallets-of-bricks/
that is great work. I especially like: https://consilienceproject.org/endgames-of-bad-communication/ , feels similar to this post.
I would love to see much smaller targeted articles, perhaps even as short as: "The article at this URL seeks to convince you of X, as demonstrated by items 1, 2, 3, and by neglecting to inform you of 4, 5 and 6. X may be true, here is our link to an in-depth discussion of it (like the bricks thing), but this article is relying on manipulation rather than presenting a good argument for it."
Obviously producing something like that bricks article for every example would be exhausting and no one who is looking for a brief emotion hit is gonna read it.
A couple editing errors:
> ‘this ‘expert’ irregularities he says shows fraud.’
> This type of logic-washing does not only applies to one side.
Paring down On Bounded Distrust to the extent that it can be stated briefly is really hard, mad props on the attempt. I think it could stand to be about a third again longer and much more convincing, though: without any specific examples in the post it's both unclear what "how to bounded distrust" looks like in practice and I'm also worried that to those who lack context the essay reads like it was written by an actual crazy person. (Capitalizing references/concept handles like Narrative, Bad People, and This Is Not A Coincidence doesn't help with not seeming crazy to people who don't get the references, and would probably be enough on its own to stop me from linking zero-context people to this explanation even if it had specific examples.)
"This leads to a situation of Bounded Distrust, which I analyze at length here. I then work through some examples here. If you want to think about the problem in detail, start at these links." Would read better to me if after the link to Bounded Distrust you gave a brief definition of what bounded distrust is, like "knowing which things you can trust people to tell the truth about" (obviously as a better writer than me you could give a better definition than this) so the reader knows what they're reading here and why it's supposed to be useful, instead of hoping they read until the Logical Implications section before knowing what the purpose is.
The "What Are The Rules?" section would be less jarring if it began with a one-sentence "What does it mean that the media rarely lies but is often deceptive?" or other better sentence that serves the same purpose of signposting to the reader what's going on and why they are now reading a list of rules.
"When the stakes are so high that the consequences could be seen as worth paying for either the reporter or the outlet, they might do that, which can be called Using the One Time. You must be extra careful." Would really benefit from a specific example, to seem less crazy to people just tuning in. A link and a one sentence description would work here.
"The reporter is allowed to lie in order to get the story, the way a cop can lie during their investigation. Both often do so." Would also benefit from a specific example to seem less crazy to people who are out of the loop.
"For each source at all levels, and each class of source, one must maintain a Translation Matrix that lays out what rules they can be assumed to be following." I'm imagining my mother reading as far as capital T capital M Translation Matrix before she closes the tab and thinks less of me for linking it.
"There is zero obligation for media to verify their source is not spouting Obvious Nonsense." As a longtime reader, I know how Obvious Nonsense is different from obvious nonsense (Obvious Nonsense isn't intended to look plausible, unlike obvious nonsense which presumably is) but if this is supposed to be low-context and linkable I think the lowercase letters work better.
I hope you found this useful and if not I hope it wasn't too annoying! Thanks for writing.
I care a lot about 'this would be something I could link to if it didn't have X.' That's a very important threshold. If capitalizing Arc Words is a common barrier to this I can take that out, I think it is good but it is not that load bearing. Curious if others share that instinct.
I considered putting in examples but I worried (1) that they would work less well over time and (2) that brevity was super important here, and (3) properly working through examples is both a slow process and a lot of what the longer post does. Again, curious if others agree here.
Perhaps a relatively short section with a brief explanation of the type of issue, and between one and three links per type that show a real example of the phenomenon.
i.e.
"Misleading by omission: Link 1, Link 2"
"Out of context: Link 1, Link 2, Link 3"
An example would work in a comment, without increasing the size of the main piece.
My instinct is, why would you ever use Capitalized Shorthand (or any kind of shorthand) with no translation if you're not just writing for an in-group?
Anyway, like the parent, my first impression was also that examples would make this more understandable and credible. On further thought, though, the article seems to be trying to do two things:
1) Describe the media's implicit rules for writing headlines and articles.
2) Provide guidance for close-reading articles.
Examples might help support and clarify #1. But I think the attempt at #2 was a bad idea and is probably unsalvageable. Instructions like "compare actual word choice to standard word choice given house style" and "evaluate how the reporter and media outlet each feel about the stakes of the article" are impossible to execute for 99.9% of people, and for 100% of people who need to hear them. Plus it's unclear why you think we should spend time close-reading dubious articles at all.
The "So What Do We Do Now?" section of the original article had better advice and would have been a better target for brev-ification, in my opinion.
Your point about signaling that one is writing for an in-group, rather than a broader audience, is useful. I do it and I hadn't considered that some readers might not understand. Thanks for that.
For what it’s worth, the random capitalization is also something that would stop me from linking this to someone without a lot of context.
I also think a few linked examples at key points would be helpful. I think well-chosen examples wouldn’t require any explanation beyond the link/title, so brevity wouldn’t take much of a hit. If you’re worried about the examples not being evergreen, this seems like a spot where it might be worth coming back every year or two to update the examples. I know that’s a bunch of extra friction, but I suspect you accrue a spreadsheet full of these kinds of articles in the normal course of your research, so maybe it wouldn’t be too bad?
I did not actually know there was a Proper Term, "Arc Words", for certain obligatory capitalizations frequently found in rationalist writing. Some of them I simply don't get the reference and don't care to hunt down (do appreciate removing "Translation Matrix", that would have been confusing to me). But I think others are valuable constructs which speak for themselves, and I've come to adopt these evergreen keywords for ordinary non-ingroup discourse as well.* Like I disagree on Obvious Nonsense in particular, that's an evergreen keyword which implies very different things from obvious nonsense (or just nonsense). The explicit official endorsement is the point.
A big part of what makes your reference posts in particular impactfully concise and concisely impactful...is skillful usage of such phrases. I'd hope to not have them weeded out indiscriminately. It's just hard to reverse-simplify pithy concepts like Something Which Is Not That. ("Narrative" is indeed contentious though...the same way Moldbug's "Cathedral" is. Fish not knowing what water is, and all that. This is one significant contributor to rationalism getting rounded off to fringe right-wing laundering by outgroups. People are much more willing to meet halfway at "bias", "slant", etc.)
ETA: For Using the One Time you could probably - just this once - use a slightly simpler variant that gestures in the same direction of envelope-pushing, line-crossing, slippery-sloping, boy-who-cried-wolfing. That's a good one, but relies a little too much on background knowledge of, like, decision theory. Cooperate, but Defect if the payoff is irresistible. Waiving epidemiological concerns over mass gatherings in the name of racial justice was a perfect IRL demonstration of the phenomenon in action during 2020; that's a charged example, but lots of people *did* grok the idea afterwards, so it's definitely possible to describe in normie terms without much loss of fidelity.
Examples would help illustrate, but Bounded Distrust is complicated enough that I think it's one of those "pick 2" situations: brevity, comprehension, completeness. If people really don't want to Do The Homework by following any links, then they'd better be pre-selected for receptiveness to such ideas in the first place. It's too big a lift here to introduce that first fault line of doubt, "maybe the One True News I trust isn't always right", if it doesn't exist already.
*although in fairness my friend group already all do the Royal Capitalization For Exaggerated Ironic Emphasis thing, so it probably scans more normally to them than to true normies. I think - maybe - a useful test is whether the phrase still makes sense with a trademark or other emphasis added instead of capitalization: capitalization(tm), capitalization!, ~capitalization~ UwU, McCapitalization. Normies do that sometimes.
It has been a long time since I was a relative normie, or interacted much with relative normies. Still, I think my initial reaction back then to rampant Capitalization was dislike. Not any profound or deep dislike, it just annoyed me a bit. I get the vague impression that a lot of normal people would feel about the same, or maybe a bit worse. But that's a guess founded on ~zero evidence except for my knowledge of how disdainful people can be to slight deviations from the norm.
I agree that I couldn't link this to someone who lacks enough context. (Such as my parents. And before I read that comment I was actually internally debating sending a link to my father, and settling on "not".)
FWIW, this is very much what I expect from a post by you: rapid-fire insights, each needing a bit of unpacking, a surprisingly high percentage of which seem to pan out. The original commenter seemed to want something much more like what Scott Alexander writes, which style I agree is distinctly more linkable. Not sure how you weigh goals vs. comparative advantage here, but perhaps, to paraphrase Scott, if someone needs to and no one else does... *shrug*
Do you see a path to making a readily-linkable version of this for normal people, that would be valuable for them? Seems like you'd have to either cut out quite a lot and/or expand word length quite a lot to do that?
I think there might be, but maybe what's needed is either one of your older posts or a completely different post.
Step one would be translating it from "internet register" to something more generally comprehensible and prestigious. That's going to expand the length a bit, but there are parts that are already fine, and I think this is a place where time can be burned to produce brevity. A non-exhaustive list of stuff to watch out for includes capitalization and sentence fragments. Hyperlinks as footnotes and examples are OK, but I'm less sure about the use of hyperlinks as definitions (which to me smacks of TvTropes and Less Wrong). Unfortunately I am a bad editor for these sorts of things, since I instinctively attempt to mimic other people's styles.
However, on re-reading, I noticed that you're explicitly positioning this as the brief summary of important points from several previous posts, which had slipped my mind by the end. As such, maybe this *shouldn't* be made verbose. Some people can jump in to the deep end, but others need a more gradual introduction to the concepts. The high information density is what made this post stand out to me and made me want to send it to other people, but it's also the reason that I don't think I can.
On reflection, I think what I'm looking for is an introduction to the topic that can persuade an open-minded but skeptical person, such that eventually they would find this useful. I should re-read your earlier posts to see if they'll work.
Sorry about the partial false alarm; mea culpa.
No apologies needed, this is fascinating. Especially the tension between 'this is super information dense so I want to share it' with 'this is super information dense so I can't share it.'
I dispute Assertion 3, unless you define "reliable" as "supports the Narrative". I have read entire articles in my field of expertise (water and wastewater treatment) where the only sources used are political interest groups.
As I like to say, they lie even when they tell the truth.
I'd call it (mis)information laundering. After a few steps a lie looks like the truth.
'Drawing vague flimsy associations between the target and Bad People tells you that this was the best they could do.'
I think your inferences are much too strong here; the people writing these pieces are often not very well paid journalism majors who have just run through the gauntlet of indoctrination of a mediocre college. That they put out low quality pieces is an indictment of their (lack of) skill, or an indictment of their intended audience, but neither of those implies that there aren't strong arguments for the positions they posit, just that they don't have the ability to elucidate such arguments given their constraints.
I agree this doesn't mean the best they can do is the best anyone can do. I don't think I've seen good examples of 'they knew damn well there was an actual strong argument, instead they went with vague flimsy associations only.' Why Not Both applies. If they were super squeezed for space, maybe, but again I can't think of examples here.
Because high quality reporting and low quality reporting are two distinct things, if you send your crappy reporter out to cover the latest event and put a very quick story up, and then also send a really good reporter out to fully investigate, then a large amount of the time the two are going to contradict each other in details or even conclusions. Because crap is quicker, your readers are going to constantly see story 1 saying X and then the next day story 2 saying Y, distinctly different from X. If you try to manage it, say by only printing the second story when it fits well enough with the first, then you are going to be constantly restricting the output of your better (and likely more expensive, with more options) writer.
As for examples, George Floyd's death produced a huge range of articles on the event of varying quality. Here's an NPR article which is heavily slanted with little useful information,
https://www.npr.org/2020/05/27/862956646/george-floyds-death-stokes-call-for-minneapolis-officers-to-be-charged-with-murd
and here's a USA today one which is substantially better
https://www.usatoday.com/story/news/nation/2020/05/26/george-floyd-minneapolis-police-officers-fired-after-public-backlash/5263193002/
Of note, the NPR article claims 'that the arresting officer was not practicing a technique condoned by the city's police', while the USA Today piece actually quotes the police department's position on neck restraints. Both articles (to me at least) read in the same direction - that Floyd died as the result of racism on the part of the cop or police department - but one has an unquestioned line declaring a fact which would be disqualifying if you knew the information in the second.
How does a journalist count to ten?
"One, several, many, nearly the better part of ten, five..."
You mention sometimes that you write long essays because you lack the time to write short ones; I write short things faster than long ones, and the shorter the faster. Also, I couldn't have written On Bounded Distrust but I think I could have written How to Bounded Distrust. If the latter took more time to write than the former, writing short things is super not your comparative advantage.
For $20 I'll lossily compress any one important long post of yours down to 1,353 words or fewer and get it back to you inside 24 hours, to see if getting people to compress your posts nets positive.
- only $20 because I'd run this test for free if it's worth $20 to you
- as many as $20 because things written for money are much easier to write quickly
I respect you.
> origins of the false ‘more athletes died in the last year than in the last 38 years’ claim
(kinda weak putting a paywalled story here as evidence for the claim)
> Can call anyone an expert. Expert consensus means three people. ‘Some investors’ and similar phrases mean two (as does ‘surrounded by.’)
If only you could see the news on national television in Poland... they usually invoke "experts" to comment on a given topic. The thing is, these are usually chosen from a set of just a few people. And they are... affiliated internet alt-media 'journalists'.
Imagine if Trump _somehow_ established a US state-funded TV channel (well, it already existed in Poland), made it completely partisan, with news sometimes eerily reminiscent of North Korean propaganda. News would have segments where "experts" from Breitbart discuss the excesses of Wokism.
(see this clip I've translated and subtitled; shame OpenAI Whisper didn't exist back then: https://www.youtube.com/watch?v=MOZR2OxM4uc )
Shame OpenAI Whisper didn't exist back then. Thankfully it does now, so I'll also share what main opposition TV channel said a few days ago: https://www.youtube.com/watch?v=GFH_CfN7dvM
I mean, I agree with them. But it's a bit eerie to see mainstream media (they're US owned, so I interpret them as roughly an extension of the MSM; maybe that's going a little too far tho) talk about freedom of speech.
Anyway; one guy on Twitter https://twitter.com/FlasH_vikop even counts things like how frequently these "experts" appear http://ekspercitvpis.info or the number of times in 2021 (well, through 24.10) they reused the same clip of Donald Tusk's fist, supposed to make him seem aggressive (27 times), or the clip of Donald saying "für Deutschland" (completely contextless, apparently supposed to associate him with Germany) (97 times) https://twitter.com/FlasH_vikop/status/1452032913046913024
In 2007, there was a story in the New York Times about some kerfuffle over a sorority that had kicked out about 2/3 of their membership for not being cool enough or something like that.
One phrase that immediately jumped out at me was a statement by the author that, among others, the only black, Korean, and Vietnamese members had been cut. The weirdly specific wording immediately tipped me off to the fact that there had been another Asian member, probably Chinese, who had not been cut, but that the author really wanted to insinuate that they had cut all the non-white members. There was no other plausible explanation for that phrasing.
Sure enough, CNN was reporting that a Chinese member had not been cut.
Granted, this wasn't a terribly important story, but it's a great illustration of the media including all the facts that fit the Narrative, and only the facts that fit the Narrative, and what can be inferred by accounting for this.
I can't fault anyone for being too paranoid about their information intake, but this set of rules does not jibe with Steve Sailer's observation that e.g. the NYT often does put inconvenient or counter-Narrative facts right there in its articles, except they are shoved towards the end, after all the convenient and pro-Narrative ones and all the boring details. This way casual readers check out from boredom before encountering them, pro-Narrative readers check out after having seen their expectations confirmed, whereas diligent readers can actually construct a decent picture of the world by reading to the very end, and presumably journalists and editors can feel good about themselves as adhering to the standards of truthfulness and conscientiousness. (The observation was correct every time I checked him.)
It suggests that there is some cost they see to not including that information, or some benefit to including it, and they're following that incentive while moving it to the end to minimize narrative cost, so... it seems pretty consistent to me, although it gives a small hopeful caveat and additional strategy (always read to the end if you are wondering about that)?
> it suggests that there is some cost they see to not including that information, or some benefit to including it [...]
Any motivation can be trivially expressed in cost-benefit terms, therefore such a restatement adds no new information and is barely short of tautology. The real content of your thesis is in the rule set (1-15) and, while I can readily think of examples of 3-15 and while pushing Narrative-damaging information to the end of the article is not against these rules, I feel that such an action does not agree with their spirit. Why include Narrative-damaging information at all? Perhaps there are (still are) additional costs here which you have not considered?
I imagine that you could use some carefully selected system prompts to use AI to help with this kind of analysis.
Bounded-Distrust-GPT could definitely be a thing if we put some work into it. Would take some work to become net useful though.
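For what it's worth, a sketch of what such a system prompt might look like, encoding only the handful of rules quoted elsewhere in this thread (vague sourcing counts, guilt by association, buried caveats); the wording is paraphrased and purely illustrative:

```python
# A sketch of a "Bounded-Distrust-GPT" system prompt, encoding a few of the rules
# quoted elsewhere in this thread. Paraphrased and purely illustrative.
BOUNDED_DISTRUST_SYSTEM_PROMPT = """\
You analyze news articles on the assumption that outlets rarely lie outright
but often mislead. For the article you are given, check for and report:
- Vague sourcing: phrases like "experts say", "some investors", or "surrounded by",
  which often stand in for only two or three people.
- Guilt by association: flimsy links between the target and disliked people or
  groups, offered in place of a concrete argument.
- Buried caveats: inconvenient facts pushed to the final paragraphs, after the
  framing has already been set.
- Missing context: what an informed reader would want to know that is not said.
Quote the relevant passages verbatim and do not speculate beyond the article text.
"""
```

A real version would need the fuller rule set from the original post, not just the handful mentioned in the comments.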
Can I add a corollary to the Pravda joke? The WSJ is useful because it is used by Washington to “tell the truth”. I.e. DOD info leaked regarding some military operation. Thus we get a Pravda effect - sort of. This source is useful to DC because it is not the NYT and will be viewed as less biased. Better for info leaking, whether deliberate or not. It is useful to readers because the anonymous source is more likely to be someone who actually works at the DOS or DOD or White House. Not 100% Pravda, but a WSJ-as-conduit effect.