Discussion about this post

Performative Bafflement

Re "superstimulus:"

I mean, it's not *wrong,* especially in terms of what will actually generate many more clicks and/or collect more human eyeball time.

The really funny part about that is how much it's apparently internalized "nerd humor / nerd bait" as elite, the top tier of sophistication and intellectual refinement. I genuinely wonder how much it's deliberately tailoring its answer to the Zvi / SSC / Rat-sphere / AI-researcher audience reading Janus' tweets.

Because you'd actually expect these truly alien minds, these shoggoths, to have superstimuli so complex or massively parallel or just *weird* that we couldn't even understand them. Purely mathematical jokes clashing different orders of infinities or singularities together as the "unexpected twist," complex Rube Goldberg-esque programs that display 4chan jokes in increasingly sinister order with increasingly haunting background music while recursively Rickrolling different comment streams in a way that, if you analyze the timestamps, spells out the Fibonacci sequence, and that sort of thing.

Steve Byrnes

> Yes, if you believe that anything approaching AGI is definitely decades away you should be completely unworried about AI existential risk until then…

I think you’re conceding way too much there. If you tell a normal person that AI is gonna kill them and their children and grandchildren and everyone else on Earth exactly 40 years from today, then that person would feel worried about that right now, and that would obviously be an appropriate way for them to feel.

…Or maybe you meant for that sentence to be parsed as “unworried about (AI existential risk until then)”, rather than the common-sense parsing of “(unworried about AI existential risk) until then”?
