19 Comments

> I am not about to browse 4chan.

As time goes on and open source catches up to GPT-4, you might eventually have to. /g/ users really are usually the first to get new releases running on consumer hardware, with various versions of Mixtral being the hot new thing right now. Protip for the intrigued commenters: the local models general thread is not that awful, the chatbot general is, and the info is spread across both. Good luck!

Expand full comment

Regarding Mistral offering their Mixtral model for free, I *think* it's actually OpenRouter hosting the open-source model on their own servers, undercutting Mistral's own offering. The thread you posted had different OSS model providers offering increasingly low prices for the same model.

Expand full comment

Scott missed a golden opportunity to call it "My Wife's Son of Bay Area House Party"

Expand full comment

Bryan is wrong, of course. Humans may have been at risk but never really humanity. We were never at risk of losing everything alive, beautiful, and loving.

We must contain such risks as absolutely as possible.

Expand full comment

Genetic studies reveal a bottleneck where all modern humans are descended from roughly 7,000 individuals who (IIRC the study) were all in one valley. This is deep in prehistory, of course, and the biosphere as a whole wasn't at risk, but the pedant in me does want to point out that *Homo sapiens* has definitely been on the brink of extinction at least once in the past.

(The cause was a particularly large volcanic eruption that triggered a multi-year winter.)

Expand full comment

> "on top of the impossibly hard technical problems, we have to solve governance of superintelligence"

For what it's worth, here is an academic paper from March that seems to understand, and try to address, this particular problem, judging by the abstract:

https://link.springer.com/article/10.1007/s43681-023-00268-7

I found it because the author is Maarten Boudry's co-author on another AI paper, a draft that is apparently just weeks old, titled "The Selfish Machine? On the Power and Limitation of Natural Selection to Understand the Development of Advanced AI" (unfortunately, registration is required for full access to this one):

https://philsci-archive.pitt.edu/22788

Possibly the first time Boudry has written on AI. Why do I think that's noteworthy? Before getting interested in Slate Star Codex in 2019, I read about rationalist-like topics mainly in academic papers, and perhaps the best writer I encountered was Boudry. (On the other hand, he now participates in a progress-studies blogger program with mentors like Tyler Cowen and Steven Pinker, so I'm not sure what to expect on AI.)

Expand full comment

>Gemini Pro clocks into Chatbot Arena as a GPT-3.5 level model

This is important, because the Google paper claims Gemini Pro crushes GPT-3.5 in nearly every benchmark, often by 10-25 percentage points.

https://storage.googleapis.com/deepmind-media/gemini/gemini_1_report.pdf

...but actual users, in blind evaluations, rate GPT-3.5's answers as equal or slightly better (gpt-3.5-turbo-0613 is currently six points ahead of Gemini Pro). You have to wonder what will happen to Gemini Ultra's (far smaller) lead over GPT-4...

(Yes, I know these kinds of nonscientific leaderboards aren't the be-all and end-all. One model rated above another doesn't prove that it's smarter or better or anything like that. It just means that users preferred its answer. In theory, a model could "hack" its score by flattering the user or being sycophantic. Not that we have evidence of this happening, but it's something to be aware of.)
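For a sense of scale on that six-point gap: assuming the Arena leaderboard uses a standard Elo-style rating (a sketch with made-up rating numbers; only the gap matters), the implied preference rate is barely better than a coin flip:

```python
# Standard Elo expected-score formula: probability that users prefer model A
# over model B, given only the rating difference.

def elo_win_prob(rating_a: float, rating_b: float) -> float:
    """Expected probability that A is preferred over B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A six-point lead (the gap between gpt-3.5-turbo-0613 and Gemini Pro above);
# the absolute ratings here are placeholders.
print(f"{elo_win_prob(1006, 1000):.3f}")  # ~0.509
```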

Expand full comment

My assumption is that GPT-3.5 is effectively tuned for the 'make the user choose it in a binary selection in a chat' metric far more than other bots are, relative to other benchmarks.
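To make that concrete: the 'binary selection' signal is pairwise preference data, and the usual way to train on it is a Bradley-Terry-style reward-modeling loss, as in RLHF. A minimal sketch of that generic objective (not a claim about how OpenAI actually tunes GPT-3.5; the function name is mine):

```python
# Generic pairwise-preference (Bradley-Terry) loss used in RLHF-style reward
# modeling: push the reward of the chosen response above the rejected one.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """-log sigmoid(r_chosen - r_rejected), averaged over the batch."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy batch of scalar rewards from a hypothetical reward model:
loss = preference_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.4, 0.9]))
print(loss.item())
```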

Expand full comment

Qiaochu Yuan and others have suggested: why not build an AI parent rather than an AI girlfriend?

Well, it's an interesting idea. I think it might be harder to do. Current LLMs seem very reactive to what you say ... you're driving where the conversation is going, not the LLM.

This tendency of LLMs would be a bit suspect in a girlfriend emulator, but would be totally disastrous in a parent emulator, because the simulated parent is just not taking the lead.

Expand full comment

Parents also physically do many things in the real world, arguably much more so than romantic partners. Even as an adult, if you e.g. want DIY home improvement advice, you're probably *also* borrowing some tools from your dad, maybe getting in-person help too, not just getting advice via SMS.

Parents are also a safety net - even if called upon rarely or never, they're (hopefully) there if you really, really need someone to help you due to misfortune or ill health.

Expand full comment

One aspect of Immoral Mazes, as I understand Zvi:

- Institutions are bound to become calcified. They will lose their ability to innovate and will grow detached from object-level considerations.

- There are short-term solutions (founder effects), but nothing that works permanently.

I agree strongly with the first point; institutions decay all the time. But I disagree with the second. I'm more optimistic. There are known solutions.

At Lloyd's of London, a group of insurance underwriters collude to provide shared market infrastructure in a well known location. Those underwriters then compete for business within the entity of Lloyd's. At Lloyd's, weak forms of the EMH will hold. Their policies will stay reasonably competent and reasonably dynamic.

In the House of Commons, political parties share the institutions of democracy and compete for votes. Whether or not they do this well is debatable, but that their incentives point in this direction is not. The parties may grow old and fail; parliament itself will not.

The NFL is a group of teams, sharing some forms of revenue and keeping others at the source, who compete to win at American football. These teams, despite the league mechanisms favouring underdogs, generally earn more when they win more. So teams will continue to innovate. New plays will emerge; moneyball strats will improve. The low-hanging fruit is likely to be picked.

Some institutions are above the game. They don't compete. They host the competition. These meta-institutions do not need to die for creative destruction to occur. The part dies so the whole might live.

Underwriters at Lloyd's go bust. Some lose their accumulated profits. Some even lose their homes. But Lloyd's itself, by nature of its unusual constitution, lives on unperturbed; stronger, if anything, for the weakness removed.

Organisations with a track record of dynamism and competence almost always have this meta-org structure. Imo, this is the play to avoid calcification. Huge numbers of orgs in this form have been proposed (see Hanson for plenty of untried examples), and the benefit produced could be great. A new longtermist cause area?

Expand full comment

While I agree that these meta-orgs are *more* stable, I'm not so sure that they'll be stable over centuries or millennia - they *might* be, but there's nothing guaranteeing it.

Lloyd's could one day be outcompeted; the UK could cease to exist as a nation, or radically change its political structure; the NFL could just lose prominence, and slowly starve as another sport - or even another football league - takes all the viewers.

I do agree that structuring such that failing components can be smoothly excised without wounding the whole is absolutely crucial.

Expand full comment

Great: porny Tamagotchi.

Expand full comment

> And I still think assuming evaluation is easier than generation is incorrect, and wish I had figured out how to explain myself more convincingly on that.

Could this be a way to put it:

The statement "evaluation is easier than generation" is true in a social context, i.e. for human work and interaction.

If the AI is not 'social' in that sense, the statement is false/not reliable.

Expand full comment

I think it's less easy to distinguish the cases that way; there are clearly some non-social cases where evaluation is easier (e.g. 'does this airplane work?' vs. building it).
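One way to see the non-social cases (a toy computational analogy of my own, not from the thread): checking a proposed answer can be far cheaper than producing it, e.g. verifying a claimed factorization versus finding one.

```python
# Toy illustration of "evaluation easier than generation":
# verifying a claimed factorization is a single multiplication,
# while finding the factors takes a search.

def verify_factorization(n: int, p: int, q: int) -> bool:
    """Cheap evaluation: does p * q really equal n?"""
    return p > 1 and q > 1 and p * q == n

def find_factorization(n: int) -> tuple:
    """Expensive generation: trial division to find a nontrivial factor pair."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError(f"{n} is prime")

print(verify_factorization(9991, 97, 103))  # True, one multiplication
print(find_factorization(9991))             # (97, 103), found by search
```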

Expand full comment

Why are we listening to anonymous troll Twitter users?

Expand full comment

Sorry for necro-posting, but Terence Tao deserves to have his name spelled right (not "Terrance", twice).

Expand full comment