For the record, Zvi is one of a tiny group to whom I basically outsource my understanding of AI and the issues arising from it. The work he puts in, rounding up and interpreting developments, is admirable. Thanks for this piece too.
I think maybe it'd be good to 'develop some in-house expertise for overseeing the outsourcing', but otherwise agree Zvi is a good 'vendor' of AI insights.
You're definitely right on all counts. I'm lazy and entirely reliant on a vague heuristic based around Zvi seeming like an intelligent, thoughtful and decent person.
It seems to me that e/acc has taken the usual course that identity-based memes seem to universally take. It's kind of a particular flavor of the community-of-idiots effect, but it's a little more complicated in that there's definitely an element of people starting it as a goofy joke, and then you get a one-two punch: people thinking they're 100% unironically serious about it & joining in, and ideological opponents thinking they're 100% unironically serious about it & panic-fearmongering over it. Like I remember circa 2015 telling my dad he needs to chill because "alt right" is just a stupid internet joke, but now here we are. I struggle to think of any sort of meme identity that has successfully maintained a reasonable level of unseriousness.
I guess my question, having not paid a ton of attention to this, is how sure are we he’s the “founder” versus the arch-guy-who-decided-to-take-this-meme-too-seriously?
I hate being a meatbag. I hate being talking meat (reference: https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html). It's a stupid arrangement of atoms resulting from some random evolution. I do not want to die and I do want my consciousness preserved, if not 100% then to some reasonable degree. I also think that the worst-case scenario of AI-based grabby aliens has been empirically invalidated by us still being around after 14 billion years, unless you are willing to bite the bullet and say that "we are the first within a billion light years or so", which is very much anti-Copernican. Given the above, I think e/acc or d/acc or something similar is a lot closer to the approach I want than Yudkowskian doomerism. Full steam ahead until there are good empirical (not hypothetical) reasons to slow down. It is unfortunate that Beff Jezos' discourse style is so obnoxious, I wish it was more reasonable, but it does not invalidate the goal.
While I enjoy being a meatbag, I think this comment is useful because you lay out what you think, and what you want, and what the terms of your agreement/disagreement would be.
"Full steam ahead until there are good empirical (not hypothetical) reasons to slow down. "
I feel like, to my uneducated viewpoint, that you are in 99.9% agreement with "Yudkowskian doomerism"? My impression of that view is that he is "full steam ahead" on all the pre-super-AI technologies; he just feels that he has sufficient data that says "slow down" on this single specific "super AI" situation. To my mind, it seems like there is a very easy (haha, well...) way to resolve this: some kind of sufficiently complex test simulation case that tests the hypothesis: "will an AI of sufficient power do something of sufficient badness in a world of sufficiently-similar complexity to our own?"
> I feel like, to my un-educated viewpoint, that you are in 99.9% agreement with "Yudkowskian doomerism"?
The main disagreement is on what counts as empirical. Eliezer says "we do not understand AI nearly as well as nuke design, we should slow down, or else it will be too late to stop". My view is "we do not understand AI nearly as well as nuke design, so we should keep iterating until we do; there will be no x-risk-level negative externalities until we do, though plenty of other catastrophic-level risks are possible (by no means guaranteed), and there will also be commensurate benefits."
Right now there is no known way to perform "some kind of sufficiently complex test simulation case that tests the hypothesis", as far as I understand it.
There isn't, yet, but if I have one soapbox it would be that I think we should work on developing simulated worlds that you could run those tests in. Both to discover if you or EY are right, and (assuming "instant AI doom" doesn't occur) also a sandbox to game out/test ways to make AI work better.
I note that almost by definition "grabby aliens" are nigh invisible: if they're grabbing space at close to the speed of light, anyone who still exists is going to be blissfully unaware of them until a scant few years before they arrive.
Yes, that's my point. We are past the most active time of star and planetary system formation by almost 10 billion years, and it took less than 5 billion years for the Earth to produce us. The size of the local group of galaxies is roughly 10 million light years, so, unless we are the first, we would have been eaten by now.
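For what it's worth, here is a back-of-the-envelope version of that timing argument as a tiny Python sketch. The round numbers are just the ones from this thread (treat them as assumptions, not measurements); the point is only that the light-crossing time of the local group is negligible next to the head start any earlier civilization would have had.

```python
# Rough "we would have been eaten by now" arithmetic, using the thread's round numbers.
age_of_universe_gyr = 13.8      # ~14 billion years
head_start_gyr = 9.0            # peak star/planet formation was roughly 10 billion years ago
earth_to_us_gyr = 4.5           # "<5 billion years for the Earth to produce us"
local_group_mly = 10.0          # local group is ~10 million light years across

# Time for a near-lightspeed expansion front to cross the local group, in Gyr.
crossing_time_gyr = local_group_mly / 1000.0   # 10 Mly at ~c is ~0.01 Gyr

print(f"crossing time: {crossing_time_gyr:.2f} Gyr")
print(f"available head start: {head_start_gyr:.1f} Gyr")
# Any grabby civilization that got started even slightly earlier than us, anywhere in
# the local group, would have had hundreds of times the travel time needed to reach Earth.
print(f"ratio: {head_start_gyr / crossing_time_gyr:.0f}x")
```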
Thank you for this! Listened to the Beff Jezos interview on the Moment of Zen podcast and did not feel he was constructively engaging with any criticisms of the e/acc position.
I try to keep up with AI and accelerationism and related topics - but the people involved in these discussions are so radically divorced from reality and basic human values that it is remarkable that anyone takes the time to think about or describe or summarize their bizarre and (I can only assume) drug-addled points of view.
I kind of respect Scott Alexander on a personal level, and I'm especially sympathetic to his protest against doxxing in almost all cases.
At the same time, I'm ambivalent about how I should react to Scott's protest actions. He was doxxed by NYT. Should I not share any links myself from Forbes, or NYT, for "at least a year" too? Should I do that indefinitely until such a time NYT or Forbes issues a public apology? Should I not share them on any social media for at least a year? If the answer to all those questions is 'yes,' am I obliged to pressure anyone else I know with a blog to follow suit?
Those seem like kind of high bars to clear. I'm not personally inclined to follow to the letter all, or even any one, of those rules. If Scott or some of his avid fans knew that, would they expect me to impose those rules on myself anyway? Would they judge me to be bad or wrong if I didn't? If they did, how seriously should I take them?
I've got no sense of what the answers to those questions would be, either.
At least a partial explanation of Jezos's fanatical brand of e/acc is that it's a useful recruiting tool for his startup. Evidence is a) he recently mentioned the importance of ideology to recruit passionate, committed people toward a startup cause like his and, adjacently, b) in talking about OpenAI, Roon's made similar comments on twitter about the importance of ideology to motivate herculean efforts.
I think this is the best-case argument for that fanaticism. The key difference is that, unlike a crusade, the "attention to detail" in your crusaders is extremely important, because they are going to be writing the code of your crusade, not just carrying spears into battle. If you recruit them and retain them with the fulfilling, "bigger-purpose-embodying", FULL STEAM AHEAD mentality, they will apply that in their work. Which is great! But also, they (or you, through executive control) will apply that mentality when it comes to bugs or unexpected negative side effects.
I actually am 99% on board with the idea that for all regular ol' tech advancement so far, the people doing it have taken bugs/side effects reasonably in stride and come up with ways to fix them, in a way that effectively means that I am a 100% accelerationist for all that. We invented awesome factories that built cool new stuff. This also caused air pollution. I am 100% opposed to "and therefore factories are bad!" or "we must shut them down to protect the climate", but yeah, smog is bad, no one likes it, and we took steps (and other places further behind on the curve are, or will) to minimize it. I am still 100% accelerationist on factories because it's obvious that as a society we had a method (extreme shorthand for the actual process, obviously) for ameliorating that, and the benefits vastly outweighed the costs.
But if the people making the factories said "air pollution is impossible, it can't hurt anyone, and anyone who thinks otherwise is a terrible person on the side of the luddites who would have us living in mud huts", I would still be pro-factories-as-a-technology, but I would wonder if perhaps it might be better if factories were built by people who accepted the reality of trade-offs and who had great plans for dealing with them, and also that perhaps the rest of us should have a plan for fixing the side effects that apparently they are going to ignore.
Doing a Good Job of building tech (which nowadays is honestly shorthand for "code") requires Good Logic, or let's just be insufficiently PC and say "smart." Being smart also means understanding how monkey tribal politics works, and that its ignoring of nerdy details is a major hindrance to its effectiveness, and that a big part of the reason anyone succeeded in tech over the past 100 years was a ruthless focus on the reality, details and nerd-shit of that tech. If your topline mission statement rejects that, then yes, I have questions about YOU, not the tech. Blustery monkey-politics e/acc people attacking 99% allies don't make me hate tech, it makes me hate them, and also mistrust tech created BY them. It would've been so easy to pass the "do I know monkey games are dumb?" test and throw one line out about "we're pro-tech, and pro-glorious future of happiness and prosperity, but obviously like all tech in the past, we'll be super careful to ensure that it works right, both for our benefit and yours - and people who are otherwise 99.9% pro-tech, but maybe have concerns about specific implementations not working right - they're on our side against the actual luddites, too!" That's the line. See, easy?
If you don't care enough about details of What Is Tech, and you lump the extreme radical edge of pro-tech people into the same bucket as "we must regulate nuclear out of existence/weaving loom smashing luddites" just because they disagree over a few tiny edge cases, then my worry is that you don't understand "What Is Tech" well enough to do a good job building it. And if your rebuttal is "well look at all the amazing things I've built over the years, who are you to disagree?", I totally agree - don't listen to little ol' me, listen to yourself back when you built those things. I notice the actually-implemented mentality of you ( and the other people who built those amazing things) during the building process, and I see that it was extremely focused on details, logic and fixing bugs - and also, accepting it when one of the engineers said "here's a bug". You didn't say "ha ha stupid luddite, you just hate all tech, this bug doesn't exist!" because if you had, your product wouldn't have worked.
Don't be "Guy Who Promises A Glorious Future, Never Mind The Details." Be "Guy Who Built Awesome Stuff In The Past By Focusing On Details, And Promises More For The Same Reason."
Hard to tell how much they matter in the real world, but I am highly certain it is not that kind of op.
Do you have any view or insight into Beff Jezos' startup extropic.ai? I don't really understand what it is that it claims to be doing.
I like that it is a completely different technical approach. I have zero idea where it falls on the spectrum from Obvious Nonsense to Super Exciting, or what such an approach would likely mean for safety. Curious what others think. Maybe we should ask Aaronson?
Yeah this is more or less my view. Aaronson probably has more insight.
So far as I can tell it appears to be a chip company (formerly known as Qyber), one of the many in the space working on non-von Neumann architectures for ML accelerators. The basic idea is to take advantage of the regularity and structure of the matrix operations involved in training and inference and highly optimize the hardware for rather significant performance _and_ energy-efficiency gains. Popular approaches to this right now are in-memory compute and fancy things based on that to enable programmability of the hardware (e.g. for choice of activation function), and using analog computing to win gains that are impossible with digital computing. Problems like matrix multiplication are obvious candidates for this, so lots of people are building fancy chips. It’s rather like FPGAs and ASICs.
Regarding the gains, companies in the field like Rain are claiming several orders of magnitude improvement in energy efficiency for training and inference and maybe an order-of-magnitude speedup (need to check if they’re claiming better than this right now). This is kind of worrying and I think should push us toward compute governance _much faster_, since as the $/FLOP for AI training drops substantially it puts frontier-scale models in reach for many more players.
Regarding Extropic... ugh... it’s infused with Jezos’ thermodynamic eldritch-god language, so take from that what you will.
"using analog computing to win gains that are impossible with digital computing."
Could I impose on you to expand a bit on this? I'm essentially a layman, but I do understand a bit about CS and microelectronics at a hobby level. This sounds really interesting, like a Babbage-style calculator...
Sure, so you’re familiar with how matrix multiplication requires O(n^3) operations with the naive nested-loops algorithm (and the best known algorithms are like O(n^2.3-something)). This is due to the fundamental limitations of digital circuits / traditional computer architectures, where you have to sit and do all the individual steps of the algorithm serially (well, effectively serially; of course you can “parallelize” to some extent, but it’s bounded by your architecture and not related to the size of the problem being solved). In an analog environment, if you can tolerate the noise, you can build truly parallel multiplier and adder circuits that perform entire matrix operations (or large subproblems thereof) in one step by cleverly combining voltages and currents of the memory cells (look up, for example, ‘memristors’) storing the values of the matrix elements.
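To make the contrast concrete, here’s a minimal, purely illustrative sketch (NumPy; the matrix sizes and the ~1% noise figure are assumptions of mine, not anything a vendor has published): the matrix lives in the crossbar as conductances, the input vector is applied as voltages, and Ohm’s law per cell plus Kirchhoff’s current law per column give you the whole matrix-vector product as a set of column currents “at once”, with accuracy limited by device noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Matrix "programmed" into the crossbar as cell conductances G[i][j] (arbitrary units),
# and an input vector applied as row voltages V[i].
G = rng.uniform(0.1, 1.0, size=(4, 4))
V = rng.uniform(0.0, 1.0, size=4)

# Digital baseline: the multiply-accumulates done as explicit sequential operations.
I_digital = G.T @ V                      # column current j = sum_i G[i][j] * V[i]

# Idealized analog crossbar: every product (Ohm's law per cell) and every sum
# (Kirchhoff's current law per column) happens simultaneously in the physics.
# The price is imperfection; model it here as ~1% relative noise per cell (assumed).
noisy_G = G * (1.0 + 0.01 * rng.standard_normal(G.shape))
I_analog = noisy_G.T @ V

print("digital:", I_digital)
print("analog :", I_analog)
print("max relative error:", np.max(np.abs(I_analog - I_digital) / np.abs(I_digital)))
```

The loop disappears into the circuit; the cost shifts from operation count to precision, calibration and noise tolerance, which is roughly the tradeoff the comment above is pointing at.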
Here I am! :-) Coincidentally, I met someone from Extropic at a quantum conference today. I connected it with this post and asked them to explain to me what Extropic does. Apparently they’re trying to build classical superconducting chips that get a ~1000x speedup over conventional GPUs via higher clock rates. Alas, I have no insight into whether this will succeed — it doesn’t really engage any question in basic physics or theory of computing. Surely there *are* still ~1000x speedups to be had consistent with physics; the question is whether this or that startup can achieve them. As a general matter, I feel most optimistic about startups whose founders show that they’re serious and intellectually honest and not prone to over-the-top hype.
See my reply to Zvi
My only relevant position is that humanity and biology need to survive, no matter what.
"I highlight it to show exactly how out of line and obscenely unacceptably rude and low are so many of those who would claim to be the ‘adults in the room’ and play their power games. It was bad before, but the last month has gotten so much worse."
I get off the bus whenever people shift from "here are the logical reasons and data why I'm right" to "now I shall play tribal monkey politics games and instead advocate with emotional appeals to people's identities." Even when I agree with those emotions, and identify with those identities. You could make this case against lots of people on the left, but it's no less silly when not-left people do it. I don't actually think "AI doom" is inevitable, but that's because I think people can identify the risk (and also less severe, but perhaps more likely, risks) and try to mitigate it. Which is a radically pro-tech viewpoint! But if the predominant viewpoint among people building X is "there is no risk to X, we shouldn't do anything about it, and people who think there is are stupid" then I'm pretty confident they are making that risk far more likely.
"radically 99th percentile pro-nuclear-power along with most other technological advancements and things we could build."
I think this is a great point. You see people going "oh the AI doom people are just like luddites or Elizabeth Warren or Gary Gensler or anti-vaxxers" and again, that's where I am getting off the bus. Maybe you're right - maybe AI will be 100% awesome and they're wrong - but the fact that you're not acknowledging that they've been aggressively in support of almost every OTHER advancement, and lumping them in with people who are canonically against many other advancements, makes me think you're not accurately assessing their objections, and even more importantly, that you don't want to, because it makes it easier to win those monkey politics games. And if that's what you're doing, then you're just making me even more suspicious that you're not right.
It is just so painfully obvious that Verdon is a grifter, just like so many through history including SBF, Trump, Holmes, Madoff and so many more. And exposing a grifter is one of the highest callings of journalism, not a failing.
E/acc is not a legitimate intellectual position, as I am certain you are aware, any more than Elon or Thiel's earth-shattering idea-of-the-month. It is such a trivial oversimplification of society, the economy, and how we should live and manage our society that it is astounding it has been adopted by anyone past the late-adolescent "Fountainhead" phase of their own development.
The pap that you are going to hold some moral high ground, insisting they should not have doxxed this grifter, is embarrassing to you.
What would be delightful would be if we would recognize that most of the c**ts that keep foisting this nonsense, including MAGA and EA, are men with self-esteem issues trying to find power in a world that they cannot control and that resists them. It is virtually always men (apologies to Holmes and Thatcher, who both wanted to be men), and almost all very poorly-read outside of math and sciences.
They attempt to create mathematical approaches to social issues without any comprehension of the stunning complexity of the world, ecology, psychology, sociology or history. They inevitably create trivial models (like EA's expected value calculation) that make problems seem solvable, without understanding that there are only three outcomes when one tries to do so:
1. The algorithm is simply wrong beyond a very short term or outside of very constrained circumstances because what is considered noise by the assumptions becomes signal (weather, stocks, food to health connections)
2. In order to "work," over a long period of time, the nature of reality is constrained by force to simplify its terms (modern bureaucracy, schooling, financial markets)
3. The model/algorithm becomes exactly as complex and perfectly models the world, in which case it is useless as it would operate at the same time scale as the world. Read "On Exactitude in Science" by Borges.
The people behind these new movements (not unlike the last round of Seasteaders) are usually in category one. Just look at a few of the example uses of EA's algorithm to realize how unbelievably silly it is. The scary part is that they inevitably realize this and move to category 2. Have you noticed that they all end up talking about some form of monarch or tyrant? Usually referring at some point to Plato's "Philosopher King"?
EA and e/acc are childish, simplistic, and immensely dangerous. They are grifts by deeply damaged men/quasi-men. It is incumbent upon us to expose their nonsense and the sociopaths who are behind them.
(Side note: Thatcher did study at Oxford, but she studied Chemistry, which means she took zero classes not related to that discipline. No philosophy, no political philosophy, no economics. I attended Oxford; it is not an American university.)
In the same way that I think the e/acc people being discussed have crossed the line past "honestly arguing objective reasons that they're right" into emotional/tribalistic/identity appeals, I think your comment is doing the same. After reading it, I don't feel like I now know more reasons why they're wrong, but I do know that you feel contempt for them. But I'm not sure that helps advance the discussion at all.
I was addressing the assertion that it was wrong to out Verdon. The author claimed it was doxxing and a violation of journalistic integrity.
My contention is that e/acc is a grift, and that the progenitor of a grift should be revealed by journalists. If journalists cannot reveal people doing wrong, then what is the point of journalism?
I put e/acc in a group with other grifters because they have a common pattern of behavior, usually a technocratic, oversimplified algorithm or model for improving human outcomes.
I also showed why these always fail.
The point is that ideological grift is eternal, and it must always be called out.
"a common pattern of behavior, usually a technocratic, oversimplified algorithm or model for improving human outcomes.
I also showed why these always fail."
You have correctly identified the point of contention - but I am not sure you have shown this. It is 2023; we are sitting on about 200 years of history showing that relatively mindless pursuit of technological advancement has indeed resulted in (net, yes, certainly with some bad side effects) massive improvements to human outcomes. If *that* is the thing you are contesting, then this isn't really the forum? The debate here is whether these e/acc people are correct that "powerful AI" is in that category of "definitely net-benefit new tech" or not.
If you are arguing that there is some identifiable-same-basis "tech-improves-the-world grift" and that your list of people (SBF, Trump, Holmes, Madoff, Thatcher) (???) is exemplary of it, or even the same category of person at all, then you are not arguing just against this Beff Jezos guy, but against many of his critics cited in this piece, and millions of other people - and that is way outside the context here.
I did not address the core "is tech beneficial" argument because that was not what the article is about, but it is certainly worthy of contention. It is interesting how many propositions are taken as axiomatic like "Capitalism is the only viable path," "Private property is inviolable" or "Technology indisputably makes our lives better." These may be true or may not be, but they are certainly not beyond dispute.
As of the mid-18th century, virtually everything man-made could be consumed by the earth. As of now, most of the post-consumer waste stream is not safely or effectively decomposable. It is entirely possible that the entire explosion in quality of life is just a massive, unpaid debt on our progeny. The consumer-technology-driven society could be a Ponzi scheme enabled by the non-pricing or mispricing of externalities, including the consumption of resources (land depletion, deforestation, extraction) and production of waste.
So no, I do not take technology's value as assumed.
That said, I am not lumping those folks together under "technology improves the world." EA and e/acc are just the latest two incarnations of a longer trail, which is "capitalism is the answer to every problem". The rallying cry is tech, but the underlying mechanism is identical - we have to move quickly from what is to what is most capital efficient, or, more to the point, what more efficiently aggregates capital.
The grift is that individuals have a magic formula that demands huge amounts of capital and will improve the lives of everyone - except, of course, those whom it does not, and they were really not necessary anyway.
When you peel it back, it is not technology, it is capital and capital aggregation - there is no effective altruism without individual/private capital accumulation, the more the better. There is no e/acc without massive allocations of capital. They are Coke and Pepsi, not Theism v Atheism.
Bear in mind I am a tech CEO, so I am not Ted Kaczynski. Though it is entirely within the realm of possibility that he ends up being right (the manifesto, not the blowing-people-up part), it seems also likely that technology in the service of mankind is a good.
But the type of people who have created these capital manifestos will not set us free.
Hm, the idea being they've already defected in society's Prisoner's dilemma, and so forfeited some of society's protections? Feels like that makes it too easy to rationalize.
I'd agree to revealing the identities of serial grifters, or at least grifters that are egregiously committing embezzlement or something, since that'll prevent harm. Under the assumption that Beff isn't such a grifter, I feel like calling him out personally doesn't really achieve anything more than calling out the movement or its arguments. Or at least, the benefits would be more to do with journalistic incentives.
It isn't something we have to individually develop a gut feeling around. Very smart people have noodled about it for years. Read what they have to say and then figure out where they are wrong. Google privacy expectations of public figures.
The reason people develop standards of behavior is to avoid this.
If he wanted to have a private life, then he would have to give up working hard to develop a public persona. That is the gist of it. It is why you can say things about the Kardashians in public that you can't say about my Mom.
Just read about it. It is all out there.
Ah, I was mostly trying to address identity-revealing in the case of malfeasance, not with regards to public figures. Though in the Google results I'm consistently seeing the idea that even for public figures, there is (or at least, should be) a balance between the public's right to know and an individual's expectation of privacy. As you said, an individual may take actions that diminish that expectation. But in the case of pseudonyms my first intuition is that there has to be pretty clear harm to justify unmasking them.
And indeed, the US Supreme Court has ruled that anonymous publishing is protected by the First Amendment, and while of course that only applies to what the government does, I think some of the reasoning applies or should apply to society at large. Like "Anonymity is a shield from the tyranny of the majority," and the idea that anonymous publishing makes it easier to focus on the issues rather than the person proclaiming them.
EDIT: Now realizing this is our second conversational thread, whoops.
I, for one, have enjoyed both, and really appreciate your insight. Thanks!
I hard disagree. What's in question is the ethics of the journalistic practice, i.e. blackmailing someone into an interview by threatening to release their private information, or outing them.
And the journalists would do it either for clicks or for ideological reasons, or both, whether or not this person was genuine in their views, or had something reasonable (if un-PC or objectionable) to say, or inconvenient but true.
Ok, let's play with a few scenarios:
1. Were the WSJ able, from doing AI text analysis, to identify Ted Kaczynski, I presume it would not only be ethical but imperative to dox him, correct?
2. And if a prominent politician ran a blog talking about the inherent lesser and greater abilities of different races, and a journalist figured it out, then that would probably also be very important to disclose.
3. A non-profit organization has a very effective social media presence debunking scientific research around climate change, and a journalist discovers a prominent hedge fund investor is behind it.
There are more examples, but I think it is reasonable that there are many legitimate cases where a journalist has the right and obligation to "doxx." It is in the public interest to understand the people connected with movements and ideas, especially when those persons may have intentions that would not serve the public interest.
So a blanket "don't doxx" just isn't a particularly supportable argument.
The question is where one draws the line on privacy, and that ground has been covered by the press and the law for a long time. Verdon is a public figure, which the Supreme Court defines as "those who hold government office and those who have achieved a role of special prominence in the affairs of society by reason of notoriety of their achievements or vigor and success with which they seek the public's attention."
I think we can all agree that Verdon has both success and shows vigor in gaining public attention.
Secondly we should consider whether the ideas pose a threat of any kind to a member, part or all of society. I would claim that e/acc is a threat to all society, but even a limited interpretation could make a reasonable conclusion that e/acc poses a threat to anyone who does not fit in with its relatively parochial worldview. It admits that substantive portions of society may be unfit for survival - "The wellbeing of conscious entities has *no weight* in the morality of their worldview."
I think the movement is the puerile nonsense of emotionally stunted narcissists, but an objective analysis would have to weigh whether a movement that is highly public and gaining adherents while suggesting a threat against a segment of the population is just as dangerous as the Unabomber's, and should be aggressively exposed.
To close, there is no intellectual or ethical argument for "never-doxx." Whether to violate the privacy of an individual in society is contingent, and the rules are understandable, if gray at times.
1. I think harm reduction is indeed the appropriate lens here.
2. I more or less agree with the evocative example, but I feel like there are more ambiguous versions.
3. I may have misunderstood what you were pointing at here, ignore this if so. But I don't think this level of conflict of interest is necessarily bad enough to require calling out the individual. After all, the argument can be evaluated for merit regardless of the proponent's motivations.
Anyway, there's some ambiguity on where we -should- draw the line, regardless of where it's at now. As a rule of thumb, "clear and present harm" seems decent, but worthless as an actual standard (without further elaboration) because then people will claim to meet it as a matter of course. When there's ambiguity, I'm thinking it's reasonable to err on the side of "don't doxx."
Great points!
I would not consider it a doxx opportunity, except that it has grown to a point where there is not only significant community support but also the support of highly influential and extremely wealthy individuals.
I also am tipped by the fact that it is so overtly amenable to both authoritarian and neo-Darwinist approaches.
A more fringe case would be someone promoting a neo-Georgist or Chartist approach, where they clearly were not backed by, or going to be backed by, any substantial wealth, and further where it did not threaten marginalized groups. In that case I would say leave the identity private.
Hey, I've wanted to respond to this for a while, didn't want to leave it hanging.
You give compelling examples. I probably wouldn't decry a "doxxing" in your scenarios.
Perhaps there's a case to be made that engaging in social media under a pseudonym is cowardly in some way. (Although if The Powers that Be are likely to make someone's life miserable for speaking truth to them, surely that's a reason to assume a pseudonym, even if they're powerful?)
To me, the example of Beff Jezos is fuzzier. It doesn't seem a priori that this person has the outsize influence of, say, a Koch Brother or George Soros. Like, is it appropriate to publicize the personal contact information of anyone in any way popular with opinions you find objectionable? Nor, from the little I've seen, do I see that the views expressed by him are harmful enough to justify full digging up of dirt on him.
And puerility or simplicity of their ideologies isn't enough to get me on board, honestly. I don't think the reporters in question would be unmasking corruption, ideological inconsistency and (frankly) harm among various individuals more politically aligned with them, though there is plenty of that to be had.
And I don't trust the reporters to be ethical about it and do it because of honest assessments of harm, and don't think it right for them to arrogate to themselves the power to arbitrate who is worthy of privacy and who isn't. It's not that they're trying to be ethical and unmask corruption wherever they find it; to me, they are serving their specific ideology, and they would've done it even if Beff Jezos were a humbler person (not one with a startup), with a more benign version of his philosophy.
So maybe I could be persuaded that in this case the benefits outweigh the harms, but in general the current media engaging in those particular tactics to unveil identities of figures they don't like will be extremely suspect to me.
Thank you so much for the thoughtful response! It is a hard question for sure, and if it were just him I'd say doxxing is unnecessary.
But now the billionaires have gotten on the bus, and what is essentially the next QAnon has real power behind it.
But I am persuaded by your comments as well. Thanks again
I feel like I agree with the core e/acc principles. Beff started it but it has spread to other people I respect like Garry Tan and Marc Andreessen. I see it as focused on the points:
1. Tech progress is good
Anything like an "AI slowdown" or "destroying this company is consistent with the mission" is a bad idea.
2. Freedom of religion
It's okay if Beff Jezos believes in the rise of the machine god. Just like it's okay if people believe in the second coming of Jesus. And it's okay if people believe that one day AI may destroy humanity. But you have to get along and work with people who don't agree with your particular religious vision and not try to convert them all the time.
Maybe do more than feel? "Freedom of religion" does not bestow the right to chain literally *everyone* to your stone altar and raise the knife - is the Constitution actually a suicide pact?
Sorry, are you saying we need to worry more about Jesus or the AI god here, or some other impending apocalypse? I can't tell the difference from your comment.
Sorry, atheist, "doomer", but taking you at your word -- I assumed "stone altar" would be evocative of other death cults like "bad" e/acc. I'm pro-tech, think AGI is clearly achievable and almost inevitably fatal because of the short timeline due to status/economic/race (as in "racing") incentives. And also the nihilism. It is not okay, everything will not turn out okay by default.
I agree with what you're saying, but there is one tiny thing: the timetable. If Jesus is coming tomorrow, or super AI in 2025, then it is actually pretty important that we work out which religious beliefs about those things are true or not.
No, because it's not possible to just "work out" which religious beliefs are true. You can argue forever whether Jesus is coming soon and you will never convince all the nonbelievers. Same thing with "the AI god" instead of "Jesus". Similarly, if 100 years pass and there's no AI god, there will probably still be cults who worry that GPT-72 will be The One.
Okay, but the fact that some people hold irrational beliefs about AI or Jesus that are impervious to facts doesn't mean there aren't lots of other, regular people who currently lean pro or con and could be persuaded the other way. And if you think the % chance of Jesus or super AI within a reasonable timeframe is high enough (as both the most devout AI doomers and AI proponents do), then what that mass of people thinks is pretty important.
I think there is less than a negligible chance that a "second coming of Jesus sufficiently similar to what Christians believe" will happen, so yeah, I don't have any strong beliefs about what we should do about it, and I don't care what others believe. I think it is absolutely reasonable to believe there is a sufficiently high chance of a sufficiently powerful AI within my lifetime that I would like to debate now the context, constraints, and societal expectations on the various actors in that event.
For the record, Zvi is one of a tiny group to whom I basically outsource my understanding of AI and the issues arising from it. The work he puts in, rounding up and interpreting developments is admirable. Thanks for this piece too.
I think maybe it'd be good to 'develop some in-house expertise for overseeing the outsourcing', but otherwise agree Zvi is a good 'vendor' of AI insights.
You're definitely right on all counts. I'm lazy and entirely reliant on a vague heuristic based around Zvi seeming like an intelligent, thoughtful and decent person.
It's a great heuristic!
It seems to me that e/acc has taken the usual course that identity-based memes seem to universally take. It's kind of a particular flavor of the community-of-idiots effect, but it's a little more complicated: there's definitely an element of people starting it as a goofy joke, but then you get a one-two punch of people thinking they're 100% unironically serious about it and joining in, and ideological opponents thinking they're 100% unironically serious about it and panic-fearmongering over it. Like I remember circa 2015 telling my dad he needed to chill because "alt right" was just a stupid internet joke, but now here we are. I struggle to think of any sort of meme identity that has successfully maintained a reasonable level of unseriousness.
I guess my question, having not paid a ton of attention to this, is how sure are we he’s the “founder” versus the arch-guy-who-decided-to-take-this-meme-too-seriously?
I hate being a meatbag. I hate being talking meat (reference: https://www.mit.edu/people/dpolicar/writing/prose/text/thinkingMeat.html). It's a stupid arrangement of atoms that resulted from some random evolution. I do not want to die, and I do want my consciousness preserved, if not 100% then to some reasonable degree. I also think that the worst-case scenario of AI-based grabby aliens has been empirically invalidated by us still being around after 14 billion years, unless you are willing to bite the bullet and say that "we are the first within a billion light years or so," which is very much anti-Copernican. Given the above, I think e/acc or d/acc or something similar is a lot closer to the approach I want than Yudkowskian doomerism. Full steam ahead until there are good empirical (not hypothetical) reasons to slow down. It is unfortunate that Beff Jezos' discourse style is so obnoxious, and I wish it were more reasonable, but that does not invalidate the goal.
While I enjoy being a meatbag, I think this comment is useful because you lay out what you think, and what you want, and what the terms of your agreement/disagreement would be.
"Full steam ahead until there are good empirical (not hypothetical) reasons to slow down. "
I feel like, to my un-educated viewpoint, that you are in 99.9% agreement with "Yudkowskian doomerism"? My impression of that view is that he is "full steam ahead" on all the pre-super-AI technologies; he just feels that he has sufficient data that says "slow down" on this single specific "super AI" situation. To my mind, it seems like there is a very easy (haha, well...) way to resolve this: some kind of sufficiently complex test simulation case that tests the hypothesis: "will an AI of sufficient power do something of sufficient badness in a world of sufficiently-similar complexity to our own?"
> I feel like, to my un-educated viewpoint, that you are in 99.9% agreement with "Yudkowskian doomerism"?
The main disagreement is on what counts as empirical. Eliezer says "we do not understand AI nearly as well as nuke design, we should slow down, or else it will be too late to stop". My view is "we do not understand AI nearly as well as nuke design, so we should keep iterating until we do; there will be no x-risk-level negative externalities until we do, though plenty of other catastrophic-level risks are possible (by no means guaranteed), and there will also be commensurate benefits."
Right now there is no known way to perform "some kind of sufficiently complex test simulation case that tests the hypothesis", as far as I understand it.
There isn't, yet, but if I have one soapbox it would be that I think we should work on developing simulated worlds that you could run those tests in. Both to discover if you or EY are right, and (assuming "instant AI doom" doesn't occur) also as a sandbox to game out/test ways to make AI work better.
I agree that it is a worthwhile endeavor. I hope someone does that in parallel to everything else.
Your dislike of biology does not validate the death of myself, my children, or trillions of sentient lives (including non-human ones).
I agree that it does not, I disagree with the premise that one implies the other.
The likelihood of it is too high for non-regulation.
Again, I disagree with that assessment of likelihood.
I note that almost by definition "grabby aliens" are nigh invisible - if they're grabbing space at close to the speed of light, anyone who still exists is going to be blissfully unaware of them until a scant few years before they arrive.
Yes, that's my point. We are past the most active time of star and planetary system formation by almost 10 billion years, and it took less than 5 billion years for the Earth to produce us. The size of the Local Group of galaxies is roughly 10 million light years, so, unless we are the first, we would have been eaten by now.
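To make that arithmetic explicit, here's a minimal back-of-envelope sketch using the same round numbers from the comment above; the 0.1c expansion speed is just an illustrative assumption, not a claim about what grabby civilizations would actually do.

```python
# Back-of-envelope check of the "we'd have been eaten by now" argument.
# All values are rough round-number assumptions from the discussion above.

HEAD_START_YEARS = 10e9   # ~10 billion years since the peak of star formation
EARTH_TO_US_YEARS = 5e9   # <5 billion years for Earth to produce us
LOCAL_GROUP_LY = 10e6     # ~10 million light years across the Local Group
EXPANSION_SPEED_C = 0.1   # assumed grabby expansion speed, as a fraction of c

# Spare time an earlier civilization would have had to expand toward us.
spare_years = HEAD_START_YEARS - EARTH_TO_US_YEARS        # ~5e9 years

# Time needed to cross the Local Group at the assumed speed.
crossing_years = LOCAL_GROUP_LY / EXPANSION_SPEED_C       # ~1e8 years

print(f"spare time:    {spare_years:.1e} years")
print(f"crossing time: {crossing_years:.1e} years")
print("an earlier grabby civilization would have reached us already:",
      crossing_years < spare_years)
```

Even at a tenth of lightspeed, crossing the Local Group takes on the order of a hundred million years, a small fraction of the multi-billion-year head start, which is the sense in which "we would have been eaten by now."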
Thank you for this! Listened to the Beff Jezos interview on the Moment of Zen podcast and did not feel he was constructively engaging with any criticisms of the e/acc position.
I try to keep up with AI and accelerationism and related topics - but the people involved in these discussions are so radically divorced from reality and basic human values that it is remarkable that anyone takes the time to think about or describe or summarize their bizarre and (I can only assume) drug-addled points of view.
At least I can take one Substack off my list.
Love everything you write Zvi, keep it up :)
I used to use the word 'doomer' in a kind of ironic self-deprecating humility (I'm one myself: p(doom) in the 10-90% range).
I'm thinking of changing now; this probably does more harm to the underlying idea than it's worth.
Still not in love with the alternatives sadly.
I kind of respect Scott Alexander on a personal level, and I'm especially sympathetic to his protest against doxxing in almost all cases.
At the same time, I'm ambivalent about how I should react to Scott's protest actions. He was doxxed by NYT. Should I not share any links myself from Forbes, or NYT, for "at least a year" too? Should I do that indefinitely until such a time NYT or Forbes issues a public apology? Should I not share them on any social media for at least a year? If the answer to all those questions is 'yes,' am I obliged to pressure anyone else I know with a blog to follow suit?
Those seem like kind of high bars to clear. I'm not personally inclined to follow to the letter all, or even any one, of those rules. If Scott or some of his avid fans knew that, would they expect me to impose those rules on myself anyway? Would they judge me to be bad or wrong if I didn't? If they did, how seriously should I take them?
I've got no sense of what the answers to those questions would be, either.
It's fine if you don't join the protest.
At least a partial explanation of Jezos's fanatical brand of e/acc is that it's a useful recruiting tool for his startup. Evidence: a) he recently mentioned the importance of ideology for recruiting passionate, committed people toward a startup cause like his, and, adjacently, b) in talking about OpenAI, Roon has made similar comments on Twitter about the importance of ideology for motivating herculean efforts.
I think this is the best case argument for that fanaticism. The key difference is, unlike a crusade, the "attention to detail" in your crusaders is extremely important, because they are going to be writing the code of your crusade, not just carrying spears into battle. If you recruit them and retain them with the fulfilling, "bigger-purpose-embodying", FULL STEAM AHEAD mentality, they will apply that in their work. Which is great! But also, they (or you, thru executive control) will apply that mentality when it comes to bugs or unexpected negative side effects.
I actually am 99% on board with the idea that, for all regular ol' tech advancement so far, the people doing it have taken bugs/side effects reasonably in stride and come up with ways to fix them, which effectively means I am a 100% accelerationist for all of that. We invented awesome factories that built cool new stuff. This also caused air pollution. I am 100% opposed to "and therefore factories are bad!" or "we must shut them down to protect the climate", but yeah, smog is bad, no one likes it, and we took steps (and other places further behind on the curve are, or will) to minimize it. I am still 100% accelerationist on factories because it's obvious that as a society we had a method (extreme shorthand for the actual process, obviously) for ameliorating that, and the benefits vastly outweighed the costs.
But if the people making the factories said "air pollution is impossible, it can't hurt anyone, and anyone who thinks otherwise is a terrible person on the side of the luddites who would have us living in mud huts", I would still be pro-factories-as-a-technology, but I would wonder if perhaps it might be better if factories were built by people who accepted the reality of trade-offs and who had great plans for dealing with them, and also that perhaps the rest of us should have a plan for fixing the side effects that apparently they are going to ignore.
Doing a Good Job of building tech (which nowadays is honestly shorthand for "code") requires Good Logic, or let's just be insufficiently PC and say "smart." Being smart also means understanding how monkey tribal politics works, that its habit of ignoring nerdy details is a major hindrance to its effectiveness, and that a big part of the reason anyone succeeded in tech over the past 100 years was a ruthless focus on the reality, details and nerd-shit of that tech. If your topline mission statement rejects that, then yes, I have questions about YOU, not the tech. Blustery monkey-politics e/acc people attacking 99% allies don't make me hate tech; they make me hate them, and also mistrust tech created BY them. It would've been so easy to pass the "do I know monkey games are dumb?" test and throw out one line about "we're pro-tech, and pro-glorious future of happiness and prosperity, but obviously, like with all tech in the past, we'll be super careful to ensure that it works right, both for our benefit and yours - and people who are otherwise 99.9% pro-tech, but maybe have concerns about specific implementations not working right - they're on our side against the actual luddites, too!" That's the line. See, easy?
If you don't care enough about details of What Is Tech, and you lump the extreme radical edge of pro-tech people into the same bucket as "we must regulate nuclear out of existence/weaving loom smashing luddites" just because they disagree over a few tiny edge cases, then my worry is that you don't understand "What Is Tech" well enough to do a good job building it. And if your rebuttal is "well look at all the amazing things I've built over the years, who are you to disagree?", I totally agree - don't listen to little ol' me, listen to yourself back when you built those things. I notice the actually-implemented mentality of you (and the other people who built those amazing things) during the building process, and I see that it was extremely focused on details, logic and fixing bugs - and also, accepting it when one of the engineers said "here's a bug". You didn't say "ha ha stupid luddite, you just hate all tech, this bug doesn't exist!" because if you had, your product wouldn't have worked.
Don't be "Guy Who Promises A Glorious Future, Never Mind The Details." Be "Guy Who Built Awesome Stuff In The Past By Focusing On Details, And Promises More For The Same Reason."