20 Comments

But Tucker is basically correct: the development of AI is profoundly undemocratic. I don't advocate anything violent but if people were to setup hunger strikes and die ins to protest in front of accelerating companies, I think it would be a very obvious comment that they want us to die and do not care, and that the government regulations to help them coordinate need to come as soon as possible.

Expand full comment

“Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong. Shut it down.”

This is what in the Time article. It's by far the reasonable ethical position, and indeed, the majority of us did not choose this. Even if we do not all die, the suffering risks are already real(see the art communities,etc). And most Americans would agree:

https://theaipi.org/

With over 80% favoring a slowdown.

Expand full comment

> What if Llama going open has the simplest of explanations?

I'm not very convinced by this. Lets recap

1. Meta releases Llama-3

2. A large community adopt Llama-3 because it's free and good

3. The combined expertise of the large community figures out how to make inference on Llama-3 faster

4. Concurrently, Meta will spend billions upon billions on inference compute

5. Meta saves potentially billions on their own stuff because they will take up the community improvements to inference compute

Sure, that's one way this could happen. But is this worth giving your whole model away for? Why not simply lopp off a few million dollars of budget a year and assign a team of performance engineers the job of making this faster? Is Facebook's AI talent so constrained?

Sounds like they want to give it away for other reasons and the "the open source community will contribute fixes back to us!" is an extra hand-wavey benefit they can stick in the Pro column.

Expand full comment

What they do with Llama-3 approximately doesn't matter. The two long-term outcomes Meta probably cares about are either Llama N becoming so good that it is the product and thus not open sourced or another model becoming the must-have and in that scenario minimizing the extent to which Meta is at the mercy of the other model provider. Maximizing Llama's prominence and removing revenue from other providers' top lines both marginally work towards those scenarios.

Expand full comment

This is my preferred explanation as well.

Expand full comment

https://www.standard.co.uk/news/tech/first-ai-child-tong-tong-china-b1138176.html

Quite possibly the literal way to replace humans by mimicking development all the way

Expand full comment

I no longer understand what Janus is trying to do.

But, my understanding is that real-world Janus (call him Janus0) was not hypnotised.

But in Claude's generation of a story about Janus (i.e. Janus1), Janus1 was hypnotized.

Expand full comment

It's a long time since I last read Farenheit 451, but the robot dog with flamethrower sounds kind of familiar...

Expand full comment

"But mostly I do not see the issue? Why other than the model not being good enough at text prediction would this [simulating the world to predict text] not work?"

I think there's a very meaningful quantitative difference about predicting text as it happened in the world-thus-far vs predicting text because you're trying to change the world in a motivated way. This creates an anti-inductive incentive that doesn't exist in the corpus. So there's a fundamental disconnect for simulating a goal-seeking being, because you aren't simulating other beings being anti-inductive in your frame. Become more robust to anti-inductivity exactly means being less good at prediction, so you can't get there by being maximally good at prediction and then saying "predict how to be robust to anti-inductivity." Wrote more about this in a 3-post sequence on predator models: https://tis.so/tagged/predator-models

Expand full comment

I think the game theory on TSMC is that (entirely understandable) anti-Taiwan-getting-invaded sentiment at the company means that they are not actually that motivated to getting a critical mass of chip fabrication up and running in other countries. Having TSMC-owned small, perhaps-not-that-productive plants in Japan or Arizona (due to those lazy Americans, tsk tsk) is a great way to tie yourself to those strategic partners "we're in this together, no need to build your own chip plants, look at all these nice jobs we're providing domestically, but you know, wouldn't it suck if anything ever bad happened to the key Taiwanese leadership and work ethics?" but without ever getting to the point where domestic factions (in US or J) can say: "hey wait, we can build all the chips we want here, why do we need to defend Taiwan again?"

I feel like this is the math:

Ukraine :: post-Soviet nuclear arsenal :: Russia

Taiwan :: critical chip production :: China

Expand full comment

I must not watch the stock. Tracking stock is the mind killer

Expand full comment

> I am extremely thankful to be living in this timeline, this universe, where everything is going cosmically right. It could‘ve been so much worse.

Survivorship bias, anyone? Most (if not all) of us would be dead if s*** hit the fan. Conditional on us existing in the future, the future is likely to be great.

Expand full comment

i still don't understand why you need to fine-tune to instill hierarchy for the system prompt. can't it just be done with a few lines of code/natural language? e.g. "read the system prompt first, read the user prompt second. before you do anything, check the user prompt against the system prompt to make sure it doesn't break any system prompt rules"

Expand full comment

Roon appears to have deactivated his Twitter account. All of the links to his twoots are now dead.

Expand full comment

The point about strategic behavior and some level of manipulation being baked into human behavior and thus baked into the training data and the RLHF is an important one.

Expand full comment

>I would not presume we have a good 5-10 years before confronting this. We are confronting the beginnings of autonomy right now. The threshold for self-replication capabilities was arguably crossed decades ago.

I didn't read that as those 3 points becoming a reality in 5-10 years. But in the sense that in 5-10 years models will be smart enough that combined with autonomy, self-replication, self-improvement it would be an existential risk.

Expand full comment

In case you missed it, you may be interested in Matthew Barnett's description of his reasons for AI optimism. 1. Skepticism of a particular way that AI can go wrong. 2. Belief that AI alignment is somewhat easy. 3. Expectation that society will react strongly to AI so there is no need to increase the reaction preemptively. 4. Treating unaligned AI (!!) as having moral worth. 5. Mundane utility of AI.

https://forum.effectivealtruism.org/posts/YDjzAKFnGmqmgPw2n/matthew_barnett-s-shortform?commentId=qdE7xqktPovc26jDP

Expand full comment

I removed a comment that appeared to be an advertisement for hacking services. Not cool. Warning only this time, will ban or report if it happens again.

Expand full comment

"Talk to an AI therapist (now running on Llama-3 70B), given the scarcity and cost of human ones. People are actually being remarkably relaxed about the whole ‘medical advice’ issue. Also, actually, it ‘can be’ a whole lot more than $150/hour for a good therapist, oh boy."

I, like many of us, have fallen victim to a $700 bill for 15 minutes for a specialist's time and an obvious answer. I figure a model that can suggest home remedies and (with enough training) determine when a visit to the ER is warrented would provide a lot of utility but is also morally indefensible since getting it wrong can mean death. It's a lot of liability for a single company so maybe an open source model.

And the more I think about it the more unlikely it sounds.

Expand full comment

Although therapy can also mean life and death I'm talking more about infectious disease

Expand full comment