Discussion about this post

Paul T:

> Right now, yes, humans are addicted to TikTok and related offerings, but they are fully aware of this, and could take a step back and decide not to be.

This seems to undersell the existing concerns around algorithmic content and media addiction.

I think a better model is: most people are not fully self-aware, and most people don’t have the willpower to self-modify addictive habits. You just need to look at the obesity or opiate epidemics for clear evidence here.

It seems more likely to me that content addiction will become an increasingly important issue.

JBG:

On AI and capital/resources, I genuinely don't understand the position Zvi appears to be endorsing here. Perhaps there is a standard argument for this somewhere that I just haven't seen?

The position seems to be that superintelligence somehow magically leads to super-abundance; that if only AI is "smart" enough, physical constraints like scarce resources stop binding?

I can imagine hand-waving an argument for how this might happen *eventually* -- that is, ASI will figure out a way to mine asteroids (or other planets) for rare elements, build Dyson spheres for energy, etc. But even if that's the long-run plan, some resources are going to remain scarce on earth in the meantime. And the resources you would need to set up an asteroid-mining operation or build a Dyson sphere are very much the same ones you need to build things to provide for humanity. How, then, do you get to a point of functionally unlimited abundance within the medium term (which appears to be the target, given the reference to people who are alive right now saving money)?

And, of course, that sets aside the fact that human desires seem to scale pretty much in direct proportion to our productive capacity. An upper-middle-class American today has super-abundant resources compared to the vast majority of humans who have ever lived, but they don't feel that way or live that way.

The way many people are talking about the economic consequences of AI in the early stages of the coming information revolution really reminds me of the early Marxists -- the same vague sensibility that technological change will *somehow* produce a big and positive political-economic shift, without any real attention to how that will happen or what the intervening steps might look like. In fact, it's even the *same* prediction: the rough orthodox Marxist position was that technology would someday deliver a post-scarcity society, and then we'd live in a utopia where everyone's needs are met. Then the Bolsheviks and the Maoists showed up and proved that it matters a great deal *how* those changes happen (and not in a good way).

This whole line of prediction strikes me as staggeringly ill-informed. Perhaps I'm not imaginative enough, but I do think that your prediction has to find some way to draw a line from the present to the future. And the present is that AGI is being built under a capitalist system by companies that are aiming to make money building it (and the only exception is -- as noted above -- working hard to adopt that model). What "alignment" really means in any practical sense is that the AI gets the values given to it by its creators -- which means that any AGI built on the current pathway is going to have "capitalist" values in its DNA, and that will guide all that comes later.
