15 Comments

The anti-humanity displayed by so many of the accelerationists remains alarming. May they continue to reveal themselves, and may humanity's immune system act swiftly to contain them.


"I always wonder who the other 10% of experts are, who think that there will be no unintended consequences of AI applications."

Maybe the same ones who think whatever does happen, should happen.

"This is the new column section for people openly admitting to being omnicidal maniacs who want humanity to cease to exist in favor of future AIs. ... There are claims that roughly 10% of AI researchers fall into this category."


Because ‘permission’ and ‘Persimmon’ are nearly the same word, I had to concentrate really hard and read that section about 4 times before I understood it SO APPARENTLY I CAN’T READ EH GARY SCHNUUUGH SCHNUUUGH


Re copyrights: https://manifold.markets/jgyou/will-any-us-governmental-institutio -- place your bets!

Also relevant is the Pamela Samuelson talk at the Simons Institute symposium on LLMs

https://www.youtube.com/watch?v=MFKV48ikV5E&ab_channel=SimonsInstitute


> Why shouldn’t we let the LLMs optimize the prompts we give to other LLMs?

Indeed! While many are worried that the post-GPT internet will be polluted with AI-generated content and therefore increasingly degraded as a training dataset, I am interested to see what happens when GPT-5 gets trained on all of the AI hype from the last 2 years.

It seems like a meaningful increase in self-referentialism, and I wonder whether enough of this could be sufficient to give an LLM a meaningful self-symbol (not just the memorized / RLHF'd "As a large language model I can't let you do that, Dave"). I don't think the architecture is capable of producing a full persistent self (with a Markov blanket and agency), but you could definitely see a more distinct conception of what it is like to be an LLM emerging. I think this should at least entail a big performance improvement in prompt engineering for GPT-5.

Similarly, can GPT-5 implement most of itself? Or at least reimplement LLaMA 2? With all the articles describing the current architectures it seems plausible that the recursive self-improvement loop is going to get shortened dramatically.

(While this is concerning, perhaps our safety valve here is that while this represents a narrowly-scoped increase in the quality/information in the training set, in aggregate we expect to see an overall decrease. I don't know how effective AI-based filtering is going to end up being.)


> Shield AI, making autonomous drones for military applications, raising $150 million at a $2.5 billion valuation.

Maybe I missed it, but haven't seen you post on Anduril (a Thiel company), who are also building the same sort of "surely they wouldn't put an AI on that" product.

https://www.anduril.com/fury/


I thought I did cover them; they look familiar, but I can't be sure.

And yeah, obviously they totally will.


> Technically introduced last time, but the Time AI 100 looks like a pretty great resource for knowing who is who, with solid coverage on my spot checks. As for me, better luck next year?

You are a direct competitor to Time. If I didn't have other ways to learn about AI like this substack, I'd probably be reading Time so better luck...never?


The weird thing about Steam's policy is that multiple games are already getting away with it, and it's strange to me that their litmus test seems to be external controversy rather than any kind of principled analysis of a game's assets. I can think of at least one well-known indie adventure game using AI-animated character portraits that has so far flown under the radar.

I’m not super read up on the Destiny thing, but I heard that, while the guy didn’t make the cheats, he was mass distributing them and had been banned for using them a frankly comical number of times. Modern Bungie is definitely far off the deep end in terms of mustache-twirling corporate villainism, but that’s a different story.


Yeah, if I were Steam I would either go all-in on paranoia or be far more reasonable and look at each case one at a time. They are doing neither, so they can still be sued into oblivion if their paranoid nightmares are right...


> You believe the surveillance state is inevitable, can’t worry about that.

Isn't that where the trend is going anyway? I wonder if a more charitable take is

"You believe the surveillance state is inevitable but your AI work does not measurably hasten it"


Aella's AI didn't respond to my "Hi," which is deeply unsettling to me because of how accurately she was able to imitate a real human female.


All this excitement about the UK task force, but do we know what will happen when Labour inevitably (judging by opinion poll leads) take over next year?

The Labour leadership seems VERY strongly influenced by the Tony Blair Institute, and Tony Blair in turn is very sceptical about AI extinction risk. Not sure there's a lot the alignment community can do about that.

https://metro.co.uk/2023/06/19/artificial-intelligence-wont-kill-us-all-says-former-prime-minister-18950551/amp/


I wonder how many of that 10% have kids, or any intention of having kids. Or a meaningful life at all.


For whatever it's worth, GPT-3.5 really does reply that tersely to the sisters riddle:

https://chat.openai.com/share/b086ab87-104a-41bf-814a-92841a43187d

Curious, I know.
