23 Comments

Typo: Connor Axios -> Connor Axiotes


I always rely on your newsletter to keep me informed, and being able to stay off Twitter is just incredible value. After the OpenAI board situation I spent the last week actively reading Twitter to see how much of what I found personally interesting would actually end up here, and the only things missing were from yesterday. Pretty amazing to see!

Curious if the following would've made it into next week's newsletter:

- Piece from Nora Belrose: https://optimists.ai/2023/11/28/ai-is-easy-to-control/

- Two new short-timeline estimates for human-level AI from Elon Musk (3 years) and the CEO of NVIDIA (5 years).

author

Have not seen the timeline estimates.

I did read Nora's piece and decided not to include it. I did not especially feel like reiterating all the reasons I don't agree and didn't feel like she was bringing new arguments my readers wouldn't be familiar with, and I came away mostly with 'man I'm just tired and don't want to do this again.'


Makes sense! I stopped reading about 1/3 in last night after feeling similarly - looks like I didn’t miss anything.


> d/acc for defensive (or decentralized, or differential) accelerationism.

"D is for lots of things."

"Doubling down" along the lines of your own take: I hope we can add "deliberative" too.


Vitalik raises an excellent point: if we build a perfectly aligned AGI, a large number of humans will immediately lose their meaning in life because the value of any human outputs will instantly drop to zero. And likewise all human problems such as ageing will immediately be solved, leaving humans no possible problems to work on.

There’s a lot of discussion about what a poorly aligned AGI would do to human existence, but not enough about the opposite scenario.


A friendly AI in charge of resolving the motivation problems probably ends up needing to do lots of weird stuff that will *look* misaligned even if it turns out to be good.

But that very argument acts as a cover story for a close-but-not-perfectly-aligned AI as well....

It makes me imagine outcomes where "build the matrix and stuff unhappy people into a fake world where they can feel useful" and "fabricate/allow real problems for the humans to spend their time solving" show up as promising options on the AI's radar.


This isn't really a big concern, IMO, because when brain reading gets good enough (and it feels pretty close already with DeepDream and other stuff), we're going to lose 80%+ of the population to VR heaven.

As in, there will be experiences where a computer (or AI) will read your mind, and create a wholly tailored-to-you virtual environment where you can live a MUCH better life than you ever could in person. Frankly, the real world sucks for a lot of people with all the attendant complexities, indignities, and obdurate other people it entails, and people already try to escape it for most of the hours of the day in their phones, streaming services, and video games. So imagine that desire to escape and the desire to live a happier life on the demand end, and seamlessly tailored and creative, cheap-to-implement, free-to-the-user VR Heaven on the supply end.

Who's paying for this? If we have aligned AGI, we have UBI. It doesn't cost much to maintain a bunch of people in Manna-style complexes with IV nutrition drips and catheters (or whatever smarter physical-stasis solution AI can come up with) and their choice of VR experience.

But anyways, once you can literally live your ideal life entirely untrammeled by reality for free, we're going to lose ~80% of the population, so you don't need to worry about this.


Re: the culture novels, and specifically your tweet:

"I read several of the books, got the message loud and clear, and did not read it as utopian at all. Nor do I think Banks views it as all that utopian either?

If it's a world where I don't expect to want to live >400 years despite the freedom to do so, something is VERY wrong."

I think this demonstrates just a very different type of person/world view.

I'm someone for whom _literally_ every hobby or thing I do purely for the joy of it (i.e. most of the things that give value to my life) is something that would be done better by a professional or someone besides myself. I am not better or faster or more efficient. None of them adds meaningful value to the world.

Two examples:

I garden, cook, brew beer, and do lots of other food-related things.

I write terrible code to do minor home automation tasks

These things have exactly zero meaning to anyone other than myself/my direct family. For the time I spend doing these tasks, I could easily work instead and use the money made to pay someone else to do them. And yet I choose to do them myself anyways.

The fact that I am less efficient than a farm, less of a good cook than a professional, a worse brewer than Russian River, and a worse coder than......lots of people/companies does not diminish the value I gain from these activities _in the least_.

An existence where I was able to have a family, and engage in these kinds of small tasks does not at all seem like a bad one.

My work admittedly does have some small amount of value (although, to be perfectly honest, most days it feels like it probably won't end up mattering in the long run), but if I had a less obviously valuable job, or if I didn't have to work at all and could spend my time doing meaningless hobbies like the above and doing things with loved ones, I don't think my life would be worse.

To be perfectly honest, the above describes a very large chunk of all humans throughout all of human history. Most of them would not have said they did not lead meaningful lives.


I agree heartily, though I do note the second half of the tweet you quoted - a fulfilling life of leisure with family ought to IMO stay worth living for millennia, and it's weird that Banks assumes people will get sick of life after a few centuries absent our bodies and brains falling apart to force the issue.


Also some interesting progress in the AI art space in the last week:

- pika.art, significant (to my eye) improvement in AI-generated video, though still not able to maintain much consistency even over fairly short timeframes

- SDXL turbo, open-weights model generates images in ~200ms, fast enough to update as you type

- ZipLoRA, a technique which lets you combine subject and style LoRAs, which might finally make it more practical to get consistent character+styles across images for e.g. story illustrations

- The Chosen One, another technique for consistent characters across images (this one was two weeks ago, to be fair)


Not a language model, but Google released an inorganic materials prediction model that works great; they claim they have advanced materials discovery by 200 years or something. Might be interesting for the next article.


Worth mentioning because it's a critique (ish?) of EA that doesn't fit neatly on the above compasses: https://blog.ayjay.org/45745-2/

Jacobs is a writer well worth following on rationalist culture and adjacent topics -- too easily swayed by tech-illiterate arguments, but otherwise incisive and often original. For example, here the (latent) argument is that EA is a motte-and-bailey position -- in practice the "EA" label pulls in much stronger cultural norms and presuppositions than Alexander's definition admits.


re: the Jurgen Schmidhuber quote, I think it -may- be slightly uncharitable to jump straight to calling that pro-extinction, partly because it seems reasonable to me (and I’ve said before and continue to endorse this) that you can strongly prefer AI not wipe out humanity and also be open to the idea that, even if it does, there’s some sense in which it will be humanity’s next generation, come what may. I would maybe disagree strongly with “be content with that little role”, but otherwise that’s kind of my attitude towards it as well. (I wrote about this idea as a reaction to the Dial of Progress a ways back here: https://scpantera.substack.com/p/age-of-eye )

I’ve been saying the problem with Effective Altruism is that it suffers from not having a visible villain, and thus I suggest someone spearhead Effective Maltruism.


Re: Tyler's thoughts on the job impact: It really depends on how quickly capabilities grow and whether there are barriers. If we really get to AGI that is at or above the level of the best humans at all skills, then I have a hard time imagining real estate prices going up, given what happens first to the wages of white-collar workers and then all the downstream impacts as their spending falls.

I find these short timelines (from e.g. the OpenAI superalignment website, Elon, NVIDIA's CEO, and others) to be highly speculative, but given that more of them are claiming this, it really seems time to prepare for what this might mean for society, even if alignment goes well.

Totally unrelated to real estate or the economy, but being able to quickly spin up engineers & scientists at or above the level of the world's current best is going to have massive impacts on progress. Imagine a company of all Greg Brockmans, all working together day and night. I suspect we're quite a ways off, but still.


Re LeCun's AI IQ test: There is *no way* a set of random people could get 92% on that test. Where did he get his testers? Did he just grab his fellow researchers? Looking at a few random questions:

> ¬(A ∧ B) ↔ (¬A ∨ ¬B), ¬(A ∨ B) ↔ (¬A ∧ ¬B), (A → B) ↔ (¬B → ¬A), (A → B) ↔ (¬A ∨ B), (¬A → B) ↔ (A ∨ ¬B), ¬(A → B) ↔ (A ∧ ¬B)

> Which of the above is not logically equivalent to the rest? Provide the full statement that doesn't fit.

Barely anyone even knows what those symbols *mean*, yet he expects the average person to be capable of solving this. This is from the *easiest level*.
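(For what it's worth, the odd one out can be brute-forced in a few lines of Python -- my own sketch, not part of the benchmark: enumerate all four truth assignments and check which biconditional fails to be a tautology.)

```python
from itertools import product

# Each candidate biconditional from the question, as a function of truth values A, B.
# "X → Y" is encoded as (not X) or Y; "↔" becomes Python's ==.
candidates = {
    "¬(A ∧ B) ↔ (¬A ∨ ¬B)": lambda A, B: (not (A and B)) == ((not A) or (not B)),
    "¬(A ∨ B) ↔ (¬A ∧ ¬B)": lambda A, B: (not (A or B)) == ((not A) and (not B)),
    "(A → B) ↔ (¬B → ¬A)":  lambda A, B: ((not A) or B) == ((not (not B)) or (not A)),
    "(A → B) ↔ (¬A ∨ B)":   lambda A, B: ((not A) or B) == ((not A) or B),
    "(¬A → B) ↔ (A ∨ ¬B)":  lambda A, B: ((not (not A)) or B) == (A or (not B)),
    "¬(A → B) ↔ (A ∧ ¬B)":  lambda A, B: (not ((not A) or B)) == (A and (not B)),
}

for statement, holds in candidates.items():
    # A statement "fits" only if it is true under all four truth assignments.
    if not all(holds(A, B) for A, B in product([False, True], repeat=2)):
        print("Does not fit:", statement)
```

It flags (¬A → B) ↔ (A ∨ ¬B), since ¬A → B simplifies to A ∨ B rather than A ∨ ¬B -- easy for a machine, a stretch for a random human.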

> In July 2, 1959 United States standards for grades of processed fruits, vegetables, and certain other products listed as dehydrated, consider the items in the "dried and dehydrated section" specifically marked as dehydrated along with any items in the Frozen/Chilled section that contain the whole name of the item, but not if they're marked Chilled. As of August 2023, what is the percentage (to the nearest percent) of those standards that have been superseded by a new version since the date given in the 1959 standards?

My Google-fu is decent, but even I would balk at that.

> When you take the average of the standard population deviation of the red numbers and the standard sample deviation of the green numbers in this image using the statistics module in Python 3.11, what is the result rounded to the nearest three decimal points?

Goodness, there are people out there who don't know Python? Shock.
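(To be fair to the question, the computation itself is two library calls once you have the numbers -- the real work is reading them out of the image. A sketch with made-up values, since the actual red and green numbers live in the benchmark's picture:)

```python
import statistics

# Hypothetical stand-ins for the values one would transcribe from the image.
red_numbers = [24, 74, 28, 54, 73]    # take the "standard population deviation" of these
green_numbers = [39, 29, 92, 58]      # take the "standard sample deviation" of these

result = (statistics.pstdev(red_numbers) + statistics.stdev(green_numbers)) / 2
print(round(result, 3))
```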

> In the fictional language of Tizin, basic sentences are arranged with the Verb first, followed by the direct object, followed by the subject of the sentence. I want to express my love for apples to my Tizin friend. The word that indicates oneself is "Pa" is the nominative form, "Mato" is the accusative form, and "Sing" is the genitive form. The root verb that indicates an intense like for something is "Maktay". When it is used in the present, it is used in it's root form, when it is used in the preterit past, it is "Tay", and when it is used in the imperfect past, it is "Aktay". It is used differently than in English, and is better translated as "is pleasing to", meaning that the thing doing the liking is actually the object of the sentence rather than the subject. The word for apples is borrowed from English in Tizin, and so it is "Apple" is the nominative form, "Zapple" is the accusative form, and "Izapple" is the genitive form. Please translate "I like apples" to Tizin.

What.

(This was from the easiest set, again.)

> A standard Rubik’s cube has been broken into cubes making up its sides. The cubes are jumbled, and one is removed. There are 6 cubes with one colored face, 12 edge cubes with two colored faces, and 8 corner cubes with three colored faces. All blue cubes have been found. All cubes directly left, right, above, and below the orange center cube have been found, along with the center cube. The green corners have all been found, along with all green that borders yellow. For all orange cubes found, the opposite face’s cubes have been found. The removed cube has two colors on its faces. What are they? Answer using a comma separated list, with the colors ordered alphabetically.

This is also from the easiest level.

If this is the best they can come up with to stump GPT-4, we've already lost. GPT-4 has superhuman intelligence across the board, and LeCun must be fooling himself if he thinks "92% score for humans" is accurate.


Indeed, the questions are pretty hard!

Their annotators have substantial academic backgrounds: 61% Bachelor’s Degree, 26% Master’s Degree, 17% PhD. If the demographic percentages they give in appendix B of their paper https://arxiv.org/abs/2311.12983 are correct, they have at least 23 annotators, so at least it's not just two people from the team.

The numbers of questions at each level in Table 4 do not add up to the 623 mentioned in Table 3, nor to 68% of that (which would make sense since Table 4 is probably only concerned with valid questions), even taking into account rounding. On a related note, 93.9 is not a possible score for human annotators on 146 questions (times 2 because there are two annotators per question). Probably I'm trying to extract too much from a few numbers.
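(Spelling out that last bit, assuming the baseline is correct answers over all 146 × 2 = 292 annotations -- which, per the quote below, may not be the right denominator:)

```python
# With 292 annotations, scores move in steps of 100/292 ≈ 0.34%,
# and none of the nearby achievable values rounds to 93.9:
total = 146 * 2
for correct in (273, 274, 275, 276):
    print(correct, round(100 * correct / total, 1))  # 93.5, 93.8, 94.2, 94.5
```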

> Statistics on the validation phase. 623 newly crafted questions were validated by two new annotators each. The statistics were computed on their 1246 annotations. *: a valid question is a question for which two annotators give the same answer as the question designer, or only one annotator gives the same answer as the question designer and the other made a mistake. **: the human baseline is computed as the fraction of correct answers for all tentative on valid questions by the new annotators.

Ah, I think "tentative" here is French for "attempt", which explains why the numbers don't quite add up, and probably explains some of the human success: humans probably have the option not to answer. Besides, rejecting questions that have been incorrectly answered by both annotators also inflates the human score.


IMHO those questions all feel like they’re explicitly tailored to be “easy for computers, hard/impossible for people.”


Emily Bender’s posts always confuse me. Does she really think LLMs aren’t useful, like, at all?


"Nate Sores" → "Nate Soares"

"not as his as" → "not as ??? as"

"ElutherAI" → "EleutherAI"

"millions discovery of millions" → remove one "millions"

"the the ‘acc’" → "the ‘acc’"

"John Carmack uses as" → " John Carmack uses this as"

"with crime and governance" → "with crime and ??? governance"

"far enough along that has" → "far enough along that this has"


The rates for https://manifold.markets/ZviMowshowitz/is-the-reuters-story-about-openais and your question (https://manifold.markets/PlasmaBallin/is-openais-q-real) are weird... I know, I know, these are different questions, different resolution times, etc., but still too wide a gap for my taste.

author

I mean, Q* probably exists AND Q* probably didn't play a role in events with the board, such that the Reuters article is false. I think the gap is appropriate, and I expect these to resolve NO and YES respectively most of the time?


"Someone at Google will see this, here. That someone at Google should ensure someone puts the absolute banhammer on this person’s website."

This did, in fact, probably happen: "Congratulations. Death penalty." -- Theophite https://twitter.com/revhowardarson/status/1728970144448393385
