"The other option is division of labor and outsourcing.
If you can find a sufficiently trustworthy secondary source that analyzes the information for you, then you don’t need to worry about the trust level of their sources. That’s their problem."
I don't think you intended it to but this finally convinced me to upgrade to a paid subscription. When worded this way it's clearly too valuable a service to expect someone to do it for free.
It caught my attention to hear Bloomberg called out as reliable. I don't ever really pay attention to them, so the main thing that comes to mind when I think of them is the "Big Hack" story that (as I understand it) turned out to mostly likely be false, but that it's just an honest failure as will occasionally happen with investigative reporting, as opposed to an intentional narrative-fitting deceit.
For the most part, the the fact that I don't really think about them at all or have an opinion on them suggests that they haven't been feeding the zeitgeist of anger, so it's a point in their favor. I may try checking them out periodically.
They have an aggressive and expensive paywall, and make their money from people in business who need real information. It all makes sense. The reason I don't use them more is that I can't share their stuff and I don't want to become disconnected from the open internet, but they're very good info in my experience.
Oof, yeah. I was looking at their "$0.99/mo" offer and considering signing up, but now see that that's just the first month and it's actually something like ~$100/year. Oh well.
This article and Scott's are going to be required reading for my scouts working on the Citizenship in the Nation merit badge, which has a *very* outmoded media literacy component. Fantastic read.
Also, I think you should add Scientism to your lexicon. It's an idea you use often.
I was thinking the same thing initially, but I've talked myself out of it.
Scientism invovles fooling yourself, not just others. For example, a simple redistribution fo $100 from a rich person to a poor person is a net possitive in utility, because poor people value $100 more than a rich person. A simple Scientism analsysis says this is a free lunch! Lots of utility minus less utility = some utility! But, of course, that's not a real model of the world, because utility is not transitive. You might end up with a pissed off rich person withdrawing their charitable efforts and a poor person who learns that rent seeking has more utility than wealth creation. But lots of people honestly and in good faith believe its a free lunch because they are pre-disposed to believe things that are suseptable to numeric analysis.
Not a shock that the biggest popularizer of the term scientism was an economist...
ScienceTM is, to me, more about the controll of the narrative. That tiktok video of 2 people in "Science... like magic but real" shirts doing a Roll Call cheer is 100% ScienceTM and 0% scientism.
I'm glad he's not using 'scientism' then because I don't understand it to mean what you claim it means!
Your usage seems more like 'leaning too hard on a toy model' to me. But I also don't understand "free lunch" to _ever_ involve the loss of any utility (even if the net utility is positive), so I'm a little confused by your example. [A 'free lunch', to me, is one that involves not even paying at all (or, more pragmatically, but much more loosely, only a 'reasonable investment' to realize much larger 'returns').]
The original commenter wrote:
> Also, I think you should add Scientism to your lexicon. It's an idea you use often.
But I don't think 'scientism' (as you understand it) _is_ an idea that Zvi uses often. (I realize that you didn't make that claim yourself.)
I do think a good portion of Science™️ is driven by a 'scientistic' (in your sense) idea about how science works, e.g. 'there was a study proving X', or 'there's no evidence of X' (i.e. 'there is no study proving X').
I've updated a little towards thinking that using any terms like this is fraught!
Ha ha, I'm both Eye Beam users. I was on a new device and couldn't remember the user name I had been operating under here.
Anyway, there are two uses of the term scientism. The one I'm talking about is the one used by Hayak, Popper, and Russ Roberts.
Its actually a bit of a motte and baily. The motte is that your models suck and you are drawing big and hard conclusions without taking into acount the weakness of your model. The baily is that you suck and you shouldn't be excluding other forms of knowing like case studies, anthropological studies (see e.g. Ronald Coase) , common sense (Russ Roberts), traditional wisdom (Hayak, Roberts), and remembering what all the little details of your own damn model are (Hayak).
Its the accusations that, to the targets, systematized analysis is the only analysis worth considering, or the only one possible, and that you are missing out on some important and/or obvious stuff. And also your model might be super complex but you can't keep that complexity in mind when making a decision, so you take mental short cuts that actually violate your model's recomendations.
It is the argument that you are only looking for your keys under lightposts because that's where the light is, not because its where you think you dropped your keys. Its the argument when your model and reality diverge, you argue against reality instead of revising your model. Its the argument that when an exprienced NASA engineer has a sick feeling is his stomach about the O-ring, he should raise his voice even if he can't prove it mathmatatically (yet). It was the argument that even if every flight has <50% casualty rate, you can't expect (American, or at least non-RAF) pilots to go on mission after mission and be happy about it. Its the argument that you shouldn't have sterlizied that rape victim just because some doctor said she was the 3rd generation of imbicile in her family. Its the argument that your GUI A/B test isn't telling you people are more engaged just because they spend longer on your website looking for the search bar (Challenger, WWII, and Buck vs Bell, basically every webpage circa 2015, respectively)
It was in short a response against the trends and fashions originating in the progressive era that overused this kind of analysis - macroeconomics, socialist or communit calculations, Macnamara's form of warfare, eugenics, and scientific management.
In some ways, its much less accurate than it was 100 years ago. We can in fact know a lot more about an economic system in 2022 than we did in 1922. In other ways, its more salient because its the dominant form of knowing things, and America's educated class discount other valuable forms of knowing things more than ever before.
The other version of the term is used to argue that someone is holding an overly materialist view of the world. That some things can not now and can never be analyzed with the scientific method - human emotions, the divine or supernatural, etc. I don't mean it in this sense.
Your free lunch point is spot on. That's what a free lunch is - a Pareto improvement, a change where at least 1 person is better off and noone is worse off. Scientisism is the accusation one makes against someone who treats non-Pareto-improving redistribution schemes as Pareto improving because the arithmatic works out if you treat human utility like you would an investment portfolio. And in practice, this happens all the time, and not just when its done by the state. My department was recently re-orged. Efficiency is up like 33%. But communication mistakes with our clients are way, way up because the client-relations possition that did the client communication QA got eliminated. But we don't have that as a line on our COO's spreadsheet, so it doesn't matter. Hey, why are we hemeraging clients??
I’ve been following economist Paul Krugman for 20+ years, and have found he has an excellent track record for predictions…even for things that are out of his lane, like the misinformation used to justify the Iraq war.
He explains his reasoning, shows his data, and admits when he was wrong (like when he predicted the Internet would not be important).
Yeah yeah…he’s best known as a columnist for the dreaded NYT, but he’s also on Twitter, if anyone wants to give him a fair shake.
Interesting. He would have been maybe my very first example of a former scientist who sacrificed his legitimacy in service of the narrative. I noticed he seems to bat about 1.000 when his team's narrative happens to line up with the real world, and about 0.000 when it doesn't.
Well, he’s very obviously and adamantly on the Blue Team…I should have noted that.
But…other than a couple of times when he made a bad call based on a knee jerk reaction, and quickly retracted…I can’t think of any cases where he was obviously wrong.
What would be examples of him batting 0 when it doesn’t match his narrative?
Hardly the most substantive, but maybe the shortest time between prediction and resolution.
Krugman, the night of Trump's election. "It really does now look like President Donald J. Trump, and markets are plunging. When might we expect them to recover?... Still, I guess people want an answer: If the question is when markets will recover, a first-pass answer is never."
https://www.cnbc.com/2016/11/09/us-markets.html The next morning, markets did not plunge. "U.S. stocks surged more than 1 percent Wednesday with financials and health care leading after Republican Donald Trump won the presidential election, defying market expectations for a Hillary Clinton win.
The day's rally took the major averages within 2 percent of their all-time intraday highs, and marked a stunning recovery from a sharp plunge in stock index futures overnight. Trade volume Wednesday was roughly 12 billion shares, the highest since the surprise U.K. vote to leave the European Union in June."
That was one of the knee jerk incidents I was referring to. He admitted as much within a day or two (not that he had much choice, being so flagrantly wrong. But as Zvi says, points should be given when people admit mistakes)
More recently, he is in the process of climbing down from his prediction that inflation will only last a few months, because he underestimated supply chain issues.
I was really hoping you would be able to point to something more substantial. If you think of anything, please share.
In the meantime, off the top of my head, here’s a list of successful predictions he’s made.
1. Warnings about the dot com bubble around 1999 -2000 (though I may be confusing him with Schiller here)
2. Warnings about the mis-information being spread to justify the Iraq war
3. Warning about the real estate bubble circa 2008
4. Correctly predicting the course of the 2008 Great Recession. I.e. It would be long and deep, and that the stimulus measures would NOT lead to inflation and “debasement of the currency”
You might want to reread his 90's book "Pop Internationalism" and compare/contrast to what he writes now. I used to use that in classes, and every semester a handful of students would ask "Wait... is the same Paul Krugman?" The differences are shocking enough to make you want to stop and ask "Which do you actually believe?" because it isn't clear what mental model could produce both without contradictions.
This is a legendary post, and not because if it was merely good the living would envy the dead. If you have ever thought that you would like to write a book but didn't know what would be a good topic, I would highly recommend this. I honestly don't think people understand these sorts of issues well anymore, not the way they used to, and this was masterfully done. Thank you.
> He doesn’t draw any parallels to the past, but his version of bounded distrust reads like something one might plausibly believe in 2015, and which I believe was largely the case in 1995. I am confused about how old the old rules are, and which ones would still have held mostly true in (for example) 1895 or in Ancient Rome.
I'm somewhat worried about Gell-Mann amnesia here. We see people lying about things we know about (the events of the recent past), recognize it for nonsense, then blithely assume that other people in a substantially similar incentive structure - and occasionally actually the same people in the 2015/1995 cases - weren't doing the same thing for the same reasons. I don't know for sure this *isn't* true, but I'd also only be moderately surprised to learn that the "new rules" are much older than this post assumes.
This article is absolutely fascinating to me and I'm glad you wrote it.
I am a person who falls into a bunch of the "Incorrect Anti-Narrative Contrarian Cluster". I obviously believe I am correct. But reading through your essay I was starting to get some cognitive dissonance because you're making really good arguments about things that I had not considered before.
About halfway through this article, it hit me what's going on. We have nearly identical reasoning processes, but come to opposite conclusions because of different weights on different priors.
Take this for example
> So yeah, anthropogenic global warming is real and all that, again we know this for plenty of other good reasons, but the reasoning we see here about why we can believe that? No.
...
> This is not the type of statement that we can assume scientists wouldn’t systematically lie about. Or at least, it’s exactly the type of statement scientists will be rewarded rather than punished for signing, regardless of its underlying truth value.
To simplify the example, two priors. 1: "I am intelligent, I have checked the data for myself, and it's good". 2: "I don't know if scientists _did_ lie about this, but they totally _would_ based on their incentives and their demonstrated past behaviour"
You weight (1) stronger. I weight (2) stronger. You trust in your own ability to evaluate scientific research independently, more than I trust in my ability to do the same. I believe public officials' propensity to lie is higher than you believe it is.
This is absolutely fascinating to me and, I think, contains a lot of explanatory power for how I came to be in such violent disagreement with many of you on many of these issues
Interesting, that all sounds very right. At some point, the question is not whether the petition is evidence in favor of AGW, it's whether the petition is evidence against AGW, because these people's lips are moving and therefore they must be lying.
And there are cases where that's true, depending on what you're taking as your given. The fact that the petition exists is evidence that the petition seemed necessary, and in the worlds where AGW is sufficiently agreed to be true then there's no need for the petition. But given that we already know there are those denying AGW, the petition isn't more likely in the worlds where AGW isn't real than in the worlds where it is real, so given our perspective it's not evidence that AGW is false. But, if you'd grown up thinking actual everyone believed in AGW, then suddenly the petition comes out, then it IS negative information, because it implies things about others positions. Hence "government denies knowledge" on the X-files.
The logical thing to do is... check more data for yourself, if you are up to it?
I don't see any reason for believing politicians are any more dishonest than the media in any general sense, and I don't think this 2-week time frame makes a lot of sense. Politicians' incentives are different than the media to the extent that their constituency is different, not in some deep way. And most politicians do actually seem to be pretty careful about not straight up lying about disprovable physical truth in exactly the same way as the media. In act, they have more of a disadvantage than the media in the dishonesty game, because they actually have to live in the real world enough to understand how to win an election. This is probably somewhat less true of non-establishment politicians, who may not know the rules, or may have constituencies that are systematically different than those of establishment politicians in ways that make lying about ground truth more worth the risk. But on the flip side, experienced politicians running for high-visibility office in swing states probably have to be *more* honest than any given news source, because they have to appeal to a broader constituency and are more open to attack if they lie.
2 weeks is arbitrary and I think for someone who lied about ground truth in an obvious way, getting caught at any time would hurt them. Brian Williams only got demoted about 4 months after it came out that he'd been lying for years about getting shot at/shot down in Iraq. That was 2015 (old rules?) and he did get caught 4 days after his latest telling of the story, but importantly it didn't just blow over after 2 weeks.
I think you're misunderstanding the 2-week rule - it's more like, within 2 weeks it needs to be clear that there will be consequences in the future, which causes there to be consequences now. And this can be a long chain of anticipation, so the actual demotion can be 4 months in the future so long as the investigation starts within 2 weeks.
As for politicians vs. media, in my observations most major candidates tell steady streams of outright lies and it seems strange to think otherwise.
Thank you for writing this. I found it worthwhile and insightful.
What I don't feel confident about at all is how to avoid ending up in an epistemic defensive crouch where my priors become unshakeable. I don't want to trust untrustworthy sources, sources mostly seem untrustworthy, and the ones that seem most trustworthy generally are ones that basically align with my worldview (presumably this is a common problem, given how typical my mind is). That's a great recipe for not being fooled if I'm currently not being fooled. It's a terrible recipe for escaping if I'm *currently* being fooled.
How would I even know the difference?
I've been leaning really hard on the "these people seem to be trying to help me figure out how much to trust them" angle--anybody who puts an "epistemic status" tag at the top of their thing, for instance. But other than just looking for people who end every sentence with a question mark, I'm pretty low confidence that the heuristics I've figured out aren't just me justifying my own preconceived notions.
Anyway, thanks for articulating a lot of what I've been feeling!
Trapped priors are definitely a problem. One brainstorm is to try seeing what happens when those priors reverse, if you're capable of it - read your trusted sources as if you're convinced of the opposite perspective, see what happens, or vice versa. Or ask the question, what would the world (and this) look like if I was wrong?
But yeah, one needs to do better than looking for question marks and epistemic status titles - I used to use explicit ones but I ended up finding them mostly not useful, and only use one on rare occasions now.
So e.g. you're reading something you mostly agree with, look for the parts where it seems like it's on shaky ground, going too far or jumping too quick, get skeptical there, see where it takes you, and all that, as a first step, perhaps.
One thing that does end up bothering me about your and Scott's posts is they both describe an informational world that's hopelessly broken because of dishonesty in such a way that maybe at best 1% of people can, with tremendous effort, know kind-of-sort-of what's going on in a few niche subjects. And then both articles just... stop. I'm always waiting for that "and thus this is lying, and lying is bad, and we shouldn't do it, and maybe we could try to change the world in X way" practical component, but it never really comes.
In rationalist circles, that practical component is basically never proposed, and I'm not sure why. Money can end more or less suffering leads to EA. Neurons are consciousness leads to Scott recommending only eating the very biggest animals, or something. Lying having no significant downsides has made all our news and the entirety of science unreliable except for like 3 guys leads to... nothing, every, it's an assumed natural law that doesn't respond to norms and shouldn't be challenged.
It's always been weird to me, because anything even kind of rationalist-adjacent absolutely needs data to function at all; bad data breaks the whole system. This is sort of the biggest problem to knowing anything or doing anything effectively that we could imagine, and the furthest we are generally willing to go is describing the problem, describing the incentives that caused it, and completely refusing to imagine the kind of incentives that might un-cause it.
Scott (and Bryan Caplan's) focus on prediction markets and reputational bets is part of a solution. Those pundits and experts who are willing to make specific predictions and then review their track records in public are more credible than are those who don't.
Right now this is a relatively small universe, but part of what we can all do is to support a norm that credible people are willing to make predictions and review their track record in public. (This is a short version; we could and should elaborate a more complete and nuanced version). People who are not willing to stake their reputations on predictions, or who hide or obfuscate their record of failed predictions, are ipso facto not credible.
And, of course, those with poor track records are not credible.
Science built its credibility on the extraordinary predictive power of physics and astronomy. The reputational capital of science is being destroyed by the sloppiness and motivated reasoning that is now well-known in the rationality community. Robin Hanson's famous paper "Could Gambling Save Science?," in which he introduced prediction markets as a solution, is more than 30 years old now,
It is well worth re-reading in light of the erosion of the epistemological commons in the past thirty years. Robin Hanson was pointing in the right direction a long time ago.
I consider this strong positive selection - the few sources that are doing real verification of this type are essentially all valuable, but there are very few of them. Even if you count cases like Matt Yglesias, which I think you should, one can't get that high a percentage of news from them, and also most such sources only track as subset of things - e.g. Matt's predictions doing OK doesn't obviously make his strategic analysis wise.
In terms of getting more people to do it, one barrier we can solve is that the whole thing is time-consuming and annoying. Maybe we could start a service that compiled predictions in real time of some kind.
I suspect that if prediction markets (even good, easy-to-use markets) end up failing, it's going to be for a sort of cart-before-the horse problem. If we solve all the barriers to using prediction markets and indicating credibility, there's still the question of why anyone who isn't already interested in credibility would make predictions there, and how many people would check it.
Most people who consider you and Scott to be credible don't do so because of your prediction-market scores; a lot probably don't even know you have them. They find you credible because - well, just because you and he generally are. They can see you doing the work. And conversely you and Scott aren't credible-sorts-of-people because you use prediction markets; you use prediction markets because you are the kind of people who would visibly do the work even if they didn't exist.
Once you, he and like four other guys are factored out, you have everyone else. And broadly they can be categorized as people who don't want things like prediction markets because apathy, people who don't want them because deception and people who don't want them because they couldn't perform there.
You can improve apathy a little and probably get a few more marginal guys by making prediction markets easier to use, but everyone else is just going to say "nope, not doing that". I think you end up with a situation where everyone who already cares about being credible is already pretty identifiably so and everyone who isn't wouldn't care to use the markets. I'd argue you have to start further back - they have to have an incentive to want to use the markets and be credible in the first place.
That's sort of why I get frustrated with article's like this and Scott's and tie-ins to prediction markets. We all agree deception and inaccuracy are bad; we all agree that if everyone was to put in a lot of effort to being honest/accurate things would be better. We all agree there's basically no incentive for them to do that. And instead of looking at making incentives, we talk about designing tools we know they won't use.
It would be great if wealthy philanthropists put in place more incentive-based systems for better judgment.
While we are waiting for that, the next best solution seems to be:
1. Communicate with others just how badly the epistemic commons has been damaged.
2. Communicate with others the crucial importance of building better options.
3. Not promoting garbage.
All too many people still complacently believe that peer review, elite institutional affiliations, and prestige media are ipso facto credible. Until and unless more people realize that these old systems of credibility are no longer adequate, they won't seek out better solutions. Once people do begin to seek out better solutions, then most rally around some combination of skin in the game, reputations at stake, forecasting/predictions, etc.
Will this solve anything in the next 5 years? Probably not, though I predict greater than 30% growth in participants on prediction market platforms over the next five years (a super safe, trivial prediction, it will probably be much larger than that). Will we begin to have better solutions in 10 years? Maybe.
While predictive analytics are a very different tool, note that private sector growth of this sector is expected to be 20% CAGR for the next few years,
As predictive analytics becomes more widespread in business, related techniques will become more common in other domains as well, especially domains adjacent to financial impact. Business has a real incentive to obtain more reliable information. Pundit land will mostly remain garbage.
Sadly, academia is also likely to continue to lose credibility until and unless a coalition forms within academia to innovate new solutions that provide a particular department or university with a systematic competitive advantage with respect to real world accuracy. Ideally a philanthropist or business would put real money behind such a center.
"In terms of getting more people to do it, one barrier we can solve is that the whole thing is time-consuming and annoying. Maybe we could start a service that compiled predictions in real time of some kind." Agreed, it is tiny at present, but moving in the direction of more visibility regarding track records would be an improvement.
Even developing a norm around making specific claims and acknowledging openly when one's claims have been falsified would be an improvement. A resource that documented who refuses to make specific claims or to be held accountable for their track record would be a start.
My heuristic is a bit simpler: If any source that is a professional user of words uses hedges (could, would, should, etc, etc) outside of a context that demands it (explaining uncertainty) they almost are certainly lying and you should dig deeper.
The other major tell is mixing units in the same story (totals vs percent) or otherwise obfuscating actual data points (narrative about how good/bad a particular metric without trends over time).
By using the word “tell”, you have given me a great mental anchor for understanding how I operate in a practical (non-time consuming) way with the problem of dealing with an utterly unreliable media environment. I use them all the time but have not reflected on that before. .
> My heuristic is a bit simpler: If any source that is a professional user of words uses hedges (could, would, should, etc, etc) outside of a context that demands it (explaining uncertainty) they almost are certainly lying and you should dig deeper.
This filters out uncertain/unconfident people. Through I guess that excludes them from being a professional user of words?
2020 broke "don't pay attention to the news for me."
Obviously, COVID restrictions dramatically affected real life in a way that politics never did before. You just couldn't ignore it. COVID basically radicalized me.
Also, the ongoing race and gender Cultural Revolution. A little bit of cynical affirmative action is dumb and unjust, but not the end of the world. But it seems like "cynical apathy" just wasn't a stable equilibrium. I've come around to the idea that we have to "Face Reality" as Charles Murray says. People who can't process the relevant facts have attempted to world build an understanding of how things are while ignoring important facts and they built crazy world models that demand them to force crazy things on the rest of us.
It may be harder for people that don't have young children to understand this, because both of these items are really really bad in the schools right now compared to regular adult life.
Anyway, it's not clear what "obviously disprovable factual statements about the world" means. If something involves multiple regression and cause and effect statements, even if the evidence is really strong in one direction, its harder to "definitely" prove then "X% of Y at time Z". COVID and race/gender are often in that category. It's easy enough to provide a level of evidence in support of a claim that a reasonable person would accept, but not enough to 100% shut down someone engaged in motivated reasoning.
> But what about the global problem as a global problem? Sure, politicians have mostly always lied their pants on fire, but what to collectively do about this epic burning of the more general epistemic commons?
There's also a problem with AI of course. It seems rather inevitable that in a few years we're screwed if something is not done, even if further advances in the field stall for some reason.
This is a nice take on a problem, if a little dated (ok, it's a few months old, but this timespan is subjectively years now...): https://youtu.be/oppj9MdNf44
"The other option is division of labor and outsourcing.
If you can find a sufficiently trustworthy secondary source that analyzes the information for you, then you don’t need to worry about the trust level of their sources. That’s their problem."
I don't think you intended it to, but this finally convinced me to upgrade to a paid subscription. When worded this way, it's clearly too valuable a service to expect someone to do it for free.
Much appreciated.
"This seems very much like a Be The Change You Want to See in the World situation."
For more on this, please see my LessWrong posts on how to strengthen one's virtues of Honesty and Sincerity: https://www.lesswrong.com/s/xqgwpmwDYsn8osoje/p/9iMMNtz3nNJ8idduF and https://www.lesswrong.com/s/xqgwpmwDYsn8osoje/p/haikNyAWze9SdBpb6 respectively.
It caught my attention to hear Bloomberg called out as reliable. I don't ever really pay attention to them, so the main thing that comes to mind when I think of them is the "Big Hack" story that (as I understand it) turned out to most likely be false, but that it was just an honest failure, as will occasionally happen with investigative reporting, as opposed to an intentional narrative-fitting deceit.
For the most part, the fact that I don't really think about them at all or have an opinion on them suggests that they haven't been feeding the zeitgeist of anger, so it's a point in their favor. I may try checking them out periodically.
They have an aggressive and expensive paywall, and make their money from people in business who need real information. It all makes sense. The reason I don't use them more is that I can't share their stuff and I don't want to become disconnected from the open internet, but they're very good info in my experience.
Oof, yeah. I was looking at their "$0.99/mo" offer and considering signing up, but now see that that's just the first month and it's actually something like ~$100/year. Oh well.
Right. They're worth the price but you have to mean it.
Bloomberg is low-key the best mainstream source for legal news and Supreme Court punditry, for no reason I can figure out.
This was my thought, too. Bloomberg FUBT with their Supermicro hack story, and refused to admit it or correct the record.
They also published an embarrassingly bad story on quantum computing (https://www.bloomberg.com/news/articles/2021-02-07/a-swiss-company-says-it-found-weakness-that-imperils-encryption) that should be disavowed.
Wow, I didn't realize the Supermicro hack story was in error? That seemed earth-shattering at the time. Not quite, eh?
This article and Scott's are going to be required reading for my scouts working on the Citizenship in the Nation merit badge, which has a *very* outmoded media literacy component. Fantastic read.
Also, I think you should add Scientism to your lexicon. It's an idea you use often.
I think, in Zvi's 'own language' (and many others), 'Science™️' is equivalent to 'scientism'.
I was thinking the same thing initially, but I've talked myself out of it.
Scientism involves fooling yourself, not just others. For example, a simple redistribution of $100 from a rich person to a poor person is a net positive in utility, because poor people value $100 more than a rich person does. A simple Scientism analysis says this is a free lunch! Lots of utility minus less utility = some utility! But, of course, that's not a real model of the world, because utility doesn't just add up across people like that. You might end up with a pissed-off rich person withdrawing their charitable efforts and a poor person who learns that rent seeking has more utility than wealth creation. But lots of people honestly and in good faith believe it's a free lunch because they are predisposed to believe things that are susceptible to numeric analysis.
Not a shock that the biggest popularizer of the term scientism was an economist...
Science™ is, to me, more about the control of the narrative. That TikTok video of two people in "Science... like magic but real" shirts doing a Roll Call cheer is 100% Science™ and 0% scientism.
I'm glad he's not using 'scientism' then because I don't understand it to mean what you claim it means!
Your usage seems more like 'leaning too hard on a toy model' to me. But I also don't understand "free lunch" to _ever_ involve the loss of any utility (even if the net utility is positive), so I'm a little confused by your example. [A 'free lunch', to me, is one that involves not even paying at all (or, more pragmatically, but much more loosely, only a 'reasonable investment' to realize much larger 'returns').]
The original commenter wrote:
> Also, I think you should add Scientism to your lexicon. It's an idea you use often.
But I don't think 'scientism' (as you understand it) _is_ an idea that Zvi uses often. (I realize that you didn't make that claim yourself.)
I do think a good portion of Science™️ is driven by a 'scientistic' (in your sense) idea about how science works, e.g. 'there was a study proving X', or 'there's no evidence of X' (i.e. 'there is no study proving X').
I've updated a little towards thinking that using any terms like this is fraught!
Ha ha, I'm both Eye Beam users. I was on a new device and couldn't remember the user name I had been operating under here.
Anyway, there are two uses of the term scientism. The one I'm talking about is the one used by Hayek, Popper, and Russ Roberts.
It's actually a bit of a motte and bailey. The motte is that your models suck and you are drawing big and hard conclusions without taking into account the weakness of your model. The bailey is that you suck and you shouldn't be excluding other forms of knowing like case studies, anthropological studies (see e.g. Ronald Coase), common sense (Russ Roberts), traditional wisdom (Hayek, Roberts), and remembering what all the little details of your own damn model are (Hayek).
It's the accusation that, to its targets, systematized analysis is the only analysis worth considering, or the only one possible, and that you are missing out on some important and/or obvious stuff. And also that your model might be super complex, but you can't keep that complexity in mind when making a decision, so you take mental shortcuts that actually violate your model's recommendations.
It is the argument that you are only looking for your keys under lampposts because that's where the light is, not because it's where you think you dropped your keys. It's the argument that when your model and reality diverge, you argue against reality instead of revising your model. It's the argument that when an experienced NASA engineer has a sick feeling in his stomach about the O-ring, he should raise his voice even if he can't prove it mathematically (yet). It was the argument that even if every flight has a <50% casualty rate, you can't expect (American, or at least non-RAF) pilots to go on mission after mission and be happy about it. It's the argument that you shouldn't have sterilized that rape victim just because some doctor said she was the third generation of imbecile in her family. It's the argument that your GUI A/B test isn't telling you people are more engaged just because they spend longer on your website looking for the search bar. (Challenger, WWII, Buck v. Bell, and basically every webpage circa 2015, respectively.)
It was, in short, a response against the trends and fashions originating in the Progressive Era that overused this kind of analysis: macroeconomics, socialist or communist calculations, McNamara's form of warfare, eugenics, and scientific management.
In some ways, it's much less accurate than it was 100 years ago. We can in fact know a lot more about an economic system in 2022 than we did in 1922. In other ways, it's more salient, because it's the dominant form of knowing things, and America's educated class discounts other valuable forms of knowing things more than ever before.
The other version of the term is used to argue that someone is holding an overly materialist view of the world: that some things cannot now and can never be analyzed with the scientific method - human emotions, the divine or supernatural, etc. I don't mean it in this sense.
Your free lunch point is spot on. That's what a free lunch is - a Pareto improvement, a change where at least one person is better off and no one is worse off. Scientism is the accusation one makes against someone who treats non-Pareto-improving redistribution schemes as Pareto improving because the arithmetic works out if you treat human utility like you would an investment portfolio. And in practice, this happens all the time, and not just when it's done by the state. My department was recently re-orged. Efficiency is up like 33%. But communication mistakes with our clients are way, way up, because the client-relations position that did the client communication QA got eliminated. But we don't have that as a line on our COO's spreadsheet, so it doesn't matter. Hey, why are we hemorrhaging clients??
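Since the contrast here is ultimately arithmetic, a minimal sketch in Python may help; the utility numbers and the behavioral responses are invented purely for illustration:

```python
# Naive "sum the utilities" test vs. the Pareto test, with made-up numbers.

def is_pareto_improvement(before, after):
    """True only if no one is worse off and at least one person is better off."""
    no_one_worse = all(a >= b for a, b in zip(after, before))
    someone_better = any(a > b for a, b in zip(after, before))
    return no_one_worse and someone_better

# Utilities as (rich person, poor person), before and after moving $100.
before = [10.0, 1.0]
after_naive = [9.9, 2.0]  # the marginal $100 is worth more to the poor person
after_real = [9.5, 1.3]   # rich person withdraws charity; poor person rent-seeks

for label, after in [("naive", after_naive), ("with responses", after_real)]:
    gain = sum(after) - sum(before)
    print(f"{label}: sum-of-utility gain = {gain:+.1f}, "
          f"Pareto improvement = {is_pareto_improvement(before, after)}")

# The naive case shows +0.9 total utility but is NOT a Pareto improvement
# (the rich person is worse off); once responses are counted, even the sum
# goes negative. Treating the first row as a free lunch is the move being
# criticized above.
```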
I’ve been following economist Paul Krugman for 20+ years, and have found he has an excellent track record for predictions…even for things that are out of his lane, like the misinformation used to justify the Iraq war.
He explains his reasoning, shows his data, and admits when he was wrong (like when he predicted the Internet would not be important).
Yeah yeah…he’s best known as a columnist for the dreaded NYT, but he’s also on Twitter, if anyone wants to give him a fair shake.
Interesting. He would have been maybe my very first example of a former scientist who sacrificed his legitimacy in service of the narrative. I noticed he seems to bat about 1.000 when his team's narrative happens to line up with the real world, and about 0.000 when it doesn't.
Well, he’s very obviously and adamantly on the Blue Team…I should have noted that.
But…other than a couple of times when he made a bad call based on a knee jerk reaction, and quickly retracted…I can’t think of any cases where he was obviously wrong.
What would be examples of him batting 0 when it doesn’t match his narrative?
Hardly the most substantive, but maybe the shortest time between prediction and resolution.
Krugman, the night of Trump's election. "It really does now look like President Donald J. Trump, and markets are plunging. When might we expect them to recover?... Still, I guess people want an answer: If the question is when markets will recover, a first-pass answer is never."
https://www.cnbc.com/2016/11/09/us-markets.html The next morning, markets did not plunge. "U.S. stocks surged more than 1 percent Wednesday with financials and health care leading after Republican Donald Trump won the presidential election, defying market expectations for a Hillary Clinton win.
The day's rally took the major averages within 2 percent of their all-time intraday highs, and marked a stunning recovery from a sharp plunge in stock index futures overnight. Trade volume Wednesday was roughly 12 billion shares, the highest since the surprise U.K. vote to leave the European Union in June."
That was one of the knee-jerk incidents I was referring to. He admitted as much within a day or two (not that he had much choice, being so flagrantly wrong, but as Zvi says, points should be given when people admit mistakes).
More recently, he is in the process of climbing down from his prediction that inflation will only last a few months, because he underestimated supply chain issues.
I was really hoping you would be able to point to something more substantial. If you think of anything, please share.
In the meantime, off the top of my head, here’s a list of successful predictions he’s made.
1. Warnings about the dot com bubble around 1999-2000 (though I may be confusing him with Shiller here)
2. Warnings about the misinformation being spread to justify the Iraq war
3. Warning about the real estate bubble circa 2008
4. Correctly predicting the course of the 2008 Great Recession. I.e. It would be long and deep, and that the stimulus measures would NOT lead to inflation and “debasement of the currency”
Pretty good track record, IMO.
You might want to reread his '90s book "Pop Internationalism" and compare/contrast to what he writes now. I used to use that in classes, and every semester a handful of students would ask "Wait... is this the same Paul Krugman?" The differences are shocking enough to make you want to stop and ask "Which do you actually believe?" because it isn't clear what mental model could produce both without contradictions.
Can't tell if joking.
Apparently they're not joking!
This is a legendary post, and not because if it was merely good the living would envy the dead. If you have ever thought that you would like to write a book but didn't know what would be a good topic, I would highly recommend this. I honestly don't think people understand these sorts of issues well anymore, not the way they used to, and this was masterfully done. Thank you.
> He doesn’t draw any parallels to the past, but his version of bounded distrust reads like something one might plausibly believe in 2015, and which I believe was largely the case in 1995. I am confused about how old the old rules are, and which ones would still have held mostly true in (for example) 1895 or in Ancient Rome.
I'm somewhat worried about Gell-Mann amnesia here. We see people lying about things we know about (the events of the recent past), recognize it for nonsense, then blithely assume that other people in a substantially similar incentive structure - and occasionally actually the same people in the 2015/1995 cases - weren't doing the same thing for the same reasons. I don't know for sure this *isn't* true, but I'd also only be moderately surprised to learn that the "new rules" are much older than this post assumes.
This article is absolutely fascinating to me and I'm glad you wrote it.
I am a person who falls into a bunch of the "Incorrect Anti-Narrative Contrarian Cluster". I obviously believe I am correct. But reading through your essay I was starting to get some cognitive dissonance because you're making really good arguments about things that I had not considered before.
About halfway through this article, it hit me what's going on. We have nearly identical reasoning processes, but come to opposite conclusions because of different weights on different priors.
Take this, for example:
> So yeah, anthropogenic global warming is real and all that, again we know this for plenty of other good reasons, but the reasoning we see here about why we can believe that? No.
...
> This is not the type of statement that we can assume scientists wouldn’t systematically lie about. Or at least, it’s exactly the type of statement scientists will be rewarded rather than punished for signing, regardless of its underlying truth value.
To simplify the example, two priors. 1: "I am intelligent, I have checked the data for myself, and it's good". 2: "I don't know if scientists _did_ lie about this, but they totally _would_ based on their incentives and their demonstrated past behaviour"
You weight (1) stronger. I weight (2) stronger. You trust in your own ability to evaluate scientific research independently, more than I trust in my ability to do the same. I believe public officials' propensity to lie is higher than you believe it is.
This is absolutely fascinating to me and, I think, contains a lot of explanatory power for how I came to be in such violent disagreement with many of you on many of these issues.
Interesting, that all sounds very right. At some point, the question is not whether the petition is evidence in favor of AGW, it's whether the petition is evidence against AGW, because these people's lips are moving and therefore they must be lying.
And there are cases where that's true, depending on what you're taking as your given. The fact that the petition exists is evidence that the petition seemed necessary, and in the worlds where AGW is sufficiently agreed to be true then there's no need for the petition. But given that we already know there are those denying AGW, the petition isn't more likely in the worlds where AGW isn't real than in the worlds where it is real, so given our perspective it's not evidence that AGW is false. But if you'd grown up thinking literally everyone believed in AGW, and then suddenly the petition comes out, then it IS negative information, because it implies things about others' positions. Hence "government denies knowledge" on The X-Files.
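To put that point in explicitly Bayesian terms, here is a minimal sketch with invented probabilities (none of these numbers are real estimates):

```python
# Whether the petition moves you depends on how likely such a petition is in
# the world where AGW is real vs. the world where it isn't -- and that
# likelihood depends on what you already took as given.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule for a binary hypothesis given one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

prior = 0.9  # credence that AGW is real, before hearing about the petition

# Observer A already knows deniers exist, so the petition is roughly equally
# likely either way: it is approximately zero evidence.
print(posterior(prior, 0.5, 0.5))  # 0.90, unchanged

# Observer B thought literally everyone agreed, so the petition is much more
# likely in worlds where the consensus is weaker than they believed: for them
# the same petition really is negative information.
print(posterior(prior, 0.2, 0.6))  # 0.75, a genuine downward update
```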
The logical thing to do is... check more data for yourself, if you are up to it?
I don't see any reason for believing politicians are any more dishonest than the media in any general sense, and I don't think this 2-week time frame makes a lot of sense. Politicians' incentives are different from the media's to the extent that their constituency is different, not in some deep way. And most politicians do actually seem to be pretty careful about not straight up lying about disprovable physical truth, in exactly the same way as the media. In fact, they have more of a disadvantage than the media in the dishonesty game, because they actually have to live in the real world enough to understand how to win an election. This is probably somewhat less true of non-establishment politicians, who may not know the rules, or may have constituencies that are systematically different than those of establishment politicians in ways that make lying about ground truth more worth the risk. But on the flip side, experienced politicians running for high-visibility office in swing states probably have to be *more* honest than any given news source, because they have to appeal to a broader constituency and are more open to attack if they lie.
2 weeks is arbitrary and I think for someone who lied about ground truth in an obvious way, getting caught at any time would hurt them. Brian Williams only got demoted about 4 months after it came out that he'd been lying for years about getting shot at/shot down in Iraq. That was 2015 (old rules?) and he did get caught 4 days after his latest telling of the story, but importantly it didn't just blow over after 2 weeks.
I think you're misunderstanding the 2-week rule - it's more like, within 2 weeks it needs to be clear that there will be consequences in the future, which causes there to be consequences now. And this can be a long chain of anticipation, so the actual demotion can be 4 months in the future so long as the investigation starts within 2 weeks.
As for politicians vs. media, in my observations most major candidates tell steady streams of outright lies and it seems strange to think otherwise.
Do you have concrete examples of this? Even Trump mostly stuck to lying by implication, omission, or attribution from what I saw.
Thank you for writing this. I found it worthwhile and insightful.
What I don't feel confident about at all is how to avoid ending up in an epistemic defensive crouch where my priors become unshakeable. I don't want to trust untrustworthy sources, sources mostly seem untrustworthy, and the ones that seem most trustworthy generally are ones that basically align with my worldview (presumably this is a common problem, given how typical my mind is). That's a great recipe for not being fooled if I'm currently not being fooled. It's a terrible recipe for escaping if I'm *currently* being fooled.
How would I even know the difference?
I've been leaning really hard on the "these people seem to be trying to help me figure out how much to trust them" angle--anybody who puts an "epistemic status" tag at the top of their thing, for instance. But other than just looking for people who end every sentence with a question mark, I'm pretty low confidence that the heuristics I've figured out aren't just me justifying my own preconceived notions.
Anyway, thanks for articulating a lot of what I've been feeling!
Trapped priors are definitely a problem. One brainstorm is to try seeing what happens when those priors reverse, if you're capable of it - read your trusted sources as if you're convinced of the opposite perspective, see what happens, or vice versa. Or ask the question, what would the world (and this) look like if I was wrong?
But yeah, one needs to do better than looking for question marks and epistemic status titles - I used to use explicit ones but I ended up finding them mostly not useful, and only use one on rare occasions now.
So e.g. you're reading something you mostly agree with, look for the parts where it seems like it's on shaky ground, going too far or jumping too quick, get skeptical there, see where it takes you, and all that, as a first step, perhaps.
Really good article, and I really enjoyed it.
One thing that does end up bothering me about your and Scott's posts is that they both describe an informational world that's hopelessly broken because of dishonesty, in such a way that maybe at best 1% of people can, with tremendous effort, know kind-of-sort-of what's going on in a few niche subjects. And then both articles just... stop. I'm always waiting for that "and thus this is lying, and lying is bad, and we shouldn't do it, and maybe we could try to change the world in X way" practical component, but it never really comes.
In rationalist circles, that practical component is basically never proposed, and I'm not sure why. Money can end more or less suffering leads to EA. Neurons are consciousness leads to Scott recommending only eating the very biggest animals, or something. Lying having no significant downsides has made all our news and the entirety of science unreliable except for like 3 guys leads to... nothing, ever; it's an assumed natural law that doesn't respond to norms and shouldn't be challenged.
It's always been weird to me, because anything even kind of rationalist-adjacent absolutely needs data to function at all; bad data breaks the whole system. This is sort of the biggest obstacle to knowing anything or doing anything effectively that we could imagine, and the furthest we are generally willing to go is describing the problem, describing the incentives that caused it, and completely refusing to imagine the kind of incentives that might un-cause it.
Scott's (and Bryan Caplan's) focus on prediction markets and reputational bets is part of a solution. Those pundits and experts who are willing to make specific predictions and then review their track records in public are more credible than those who don't.
Right now this is a relatively small universe, but part of what we can all do is to support a norm that credible people are willing to make predictions and review their track record in public. (This is a short version; we could and should elaborate a more complete and nuanced version). People who are not willing to stake their reputations on predictions, or who hide or obfuscate their record of failed predictions, are ipso facto not credible.
And, of course, those with poor track records are not credible.
Science built its credibility on the extraordinary predictive power of physics and astronomy. The reputational capital of science is being destroyed by the sloppiness and motivated reasoning that is now well-known in the rationality community. Robin Hanson's famous paper "Could Gambling Save Science?," in which he introduced prediction markets as a solution, is more than 30 years old now:
https://mason.gmu.edu/~rhanson/gamble.html
It is well worth re-reading in light of the erosion of the epistemological commons in the past thirty years. Robin Hanson was pointing in the right direction a long time ago.
I consider this strong positive selection - the few sources that are doing real verification of this type are essentially all valuable, but there are very few of them. Even if you count cases like Matt Yglesias, which I think you should, one can't get that high a percentage of news from them, and also most such sources only track a subset of things - e.g. Matt's predictions doing OK doesn't obviously make his strategic analysis wise.
In terms of getting more people to do it, one barrier we can solve is that the whole thing is time-consuming and annoying. Maybe we could start a service that compiled predictions in real time of some kind.
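As a rough illustration of what such a service might compile (the record below is hypothetical, and a Brier score is just one common way to grade it):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    claim: str
    probability: float              # stated probability the claim turns out true
    outcome: Optional[bool] = None  # filled in once the claim resolves

def brier_score(predictions):
    """Mean squared error between stated probabilities and outcomes (lower is better)."""
    resolved = [p for p in predictions if p.outcome is not None]
    if not resolved:
        return None
    return sum((p.probability - float(p.outcome)) ** 2 for p in resolved) / len(resolved)

# A hypothetical pundit's record; claims and numbers are made up.
record = [
    Prediction("Inflation above 4% at year end", 0.7, True),
    Prediction("Candidate X wins the primary", 0.2, False),
    Prediction("Study Y replicates", 0.6, None),  # not yet resolved
]
print(brier_score(record))  # 0.065 across the two resolved predictions
```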
I suspect that if prediction markets (even good, easy-to-use markets) end up failing, it's going to be for a sort of cart-before-the-horse problem. If we solve all the barriers to using prediction markets and indicating credibility, there's still the question of why anyone who isn't already interested in credibility would make predictions there, and how many people would check it.
Most people who consider you and Scott to be credible don't do so because of your prediction-market scores; a lot probably don't even know you have them. They find you credible because - well, just because you and he generally are. They can see you doing the work. And conversely you and Scott aren't credible-sorts-of-people because you use prediction markets; you use prediction markets because you are the kind of people who would visibly do the work even if they didn't exist.
Once you, he and like four other guys are factored out, you have everyone else. And broadly they can be categorized as people who don't want things like prediction markets because apathy, people who don't want them because deception and people who don't want them because they couldn't perform there.
You can improve apathy a little and probably get a few more marginal guys by making prediction markets easier to use, but everyone else is just going to say "nope, not doing that". I think you end up with a situation where everyone who already cares about being credible is already pretty identifiably so and everyone who isn't wouldn't care to use the markets. I'd argue you have to start further back - they have to have an incentive to want to use the markets and be credible in the first place.
That's sort of why I get frustrated with articles like this and Scott's and tie-ins to prediction markets. We all agree deception and inaccuracy are bad; we all agree that if everyone were to put a lot of effort into being honest/accurate, things would be better. We all agree there's basically no incentive for them to do that. And instead of looking at making incentives, we talk about designing tools we know they won't use.
What incentives do you propose?
It would be great if wealthy philanthropists put in place more incentive-based systems for better judgment.
While we are waiting for that, the next best solution seems to be:
1. Communicate with others just how badly the epistemic commons has been damaged.
2. Communicate with others the crucial importance of building better options.
3. Not promoting garbage.
All too many people still complacently believe that peer review, elite institutional affiliations, and prestige media are ipso facto credible. Until and unless more people realize that these old systems of credibility are no longer adequate, they won't seek out better solutions. Once people do begin to seek out better solutions, then most rally around some combination of skin in the game, reputations at stake, forecasting/predictions, etc.
Will this solve anything in the next 5 years? Probably not, though I predict greater than 30% growth in participants on prediction market platforms over the next five years (a super safe, trivial prediction, it will probably be much larger than that). Will we begin to have better solutions in 10 years? Maybe.
While predictive analytics are a very different tool, note that private sector growth of this sector is expected to be 20% CAGR for the next few years:
https://www.mordorintelligence.com/industry-reports/predictive-and-prescriptive-analytics-market
As predictive analytics becomes more widespread in business, related techniques will become more common in other domains as well, especially domains adjacent to financial impact. Business has a real incentive to obtain more reliable information. Pundit land will mostly remain garbage.
Sadly, academia is also likely to continue to lose credibility until and unless a coalition forms within academia to innovate new solutions that provide a particular department or university with a systematic competitive advantage with respect to real world accuracy. Ideally a philanthropist or business would put real money behind such a center.
"In terms of getting more people to do it, one barrier we can solve is that the whole thing is time-consuming and annoying. Maybe we could start a service that compiled predictions in real time of some kind." Agreed, it is tiny at present, but moving in the direction of more visibility regarding track records would be an improvement.
Even developing a norm around making specific claims and acknowledging openly when one's claims have been falsified would be an improvement. A resource that documented who refuses to make specific claims or to be held accountable for their track record would be a start.
My heuristic is a bit simpler: if any source that is a professional user of words uses hedges (could, would, should, etc.) outside of a context that demands it (explaining uncertainty), they are almost certainly lying and you should dig deeper.
The other major tell is mixing units in the same story (totals vs. percent) or otherwise obfuscating actual data points (narrative about how good/bad a particular metric is, without trends over time).
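For what it's worth, those two tells can be sketched as a crude text filter; the hedge-word list and unit patterns below are my own guesses, not anything rigorous:

```python
import re

HEDGES = {"could", "would", "should", "may", "might", "reportedly", "suggests"}

def hedge_density(text):
    """Fraction of words that are hedge words; a high value outside a genuine
    uncertainty discussion is the 'dig deeper' signal."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in HEDGES for w in words) / len(words) if words else 0.0

def mixes_units(text):
    """Flag stories quoting both raw totals and percentages, which makes the
    underlying trend hard to pin down."""
    has_percent = bool(re.search(r"\d+(\.\d+)?\s*(%|percent)", text, re.I))
    has_total = bool(re.search(r"\d{1,3}(,\d{3})+|\d+\s*(thousand|million|billion)", text, re.I))
    return has_percent and has_total

story = ("Cases could surge; officials say 12,000 new infections, "
         "a 3% rise, might be expected.")
print(round(hedge_density(story), 3), mixes_units(story))  # 0.167 True
```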
By using the word “tell”, you have given me a great mental anchor for understanding how I operate in a practical (non-time-consuming) way with the problem of dealing with an utterly unreliable media environment. I use them all the time but have not reflected on that before.
> My heuristic is a bit simpler: if any source that is a professional user of words uses hedges (could, would, should, etc.) outside of a context that demands it (explaining uncertainty), they are almost certainly lying and you should dig deeper.
This filters out uncertain/unconfident people. Though I guess that excludes them from being a professional user of words?
I said outside the context of explaining uncertainty.
2020 broke "don't pay attention to the news for me."
Obviously, COVID restrictions dramatically affected real life in a way that politics never did before. You just couldn't ignore it. COVID basically radicalized me.
Also, the ongoing race and gender Cultural Revolution. A little bit of cynical affirmative action is dumb and unjust, but not the end of the world. But it seems like "cynical apathy" just wasn't a stable equilibrium. I've come around to the idea that we have to "Face Reality," as Charles Murray says. People who can't process the relevant facts have attempted to world-build an understanding of how things are while ignoring important facts, and they built crazy world models that drive them to force crazy things on the rest of us.
It may be harder for people that don't have young children to understand this, because both of these items are really really bad in the schools right now compared to regular adult life.
Anyway, it's not clear what "obviously disprovable factual statements about the world" means. If something involves multiple regression and cause-and-effect statements, even if the evidence is really strong in one direction, it's harder to "definitely" prove than "X% of Y at time Z". COVID and race/gender are often in that category. It's easy enough to provide a level of evidence in support of a claim that a reasonable person would accept, but not enough to 100% shut down someone engaged in motivated reasoning.
“Motivated reasoning.” Precisely.
> But what about the global problem as a global problem? Sure, politicians have mostly always lied their pants on fire, but what to collectively do about this epic burning of the more general epistemic commons?
There's also a problem with AI of course. It seems rather inevitable that in a few years we're screwed if something is not done, even if further advances in the field stall for some reason.
This is a nice take on a problem, if a little dated (ok, it's a few months old, but this timespan is subjectively years now...): https://youtu.be/oppj9MdNf44