It has been brutal out there for someone on my beat. Everyone extremely hostile, even more than usual. Extreme positions taken, asserted as if obviously true. Not symmetrically, but from all sides nonetheless. Constant assertions of what happened in the last two weeks that are, as far as I can tell, flat out wrong, largely the result of a well-implemented media campaign. Repeating flawed logic more often and louder.
Typo: Connor Axios -> Connor Axiotes
I always rely on your newsletter to keep me informed, which lets me stay off Twitter entirely; that alone is incredible value. After the OpenAI board situation I spent the last week actively reading Twitter to see how much of what I personally found interesting would actually end up here, and the only things missing were from yesterday. Pretty amazing to see!
Curious if the following would've made it into next week's newsletter:
- Piece from Nora Belrose: https://optimists.ai/2023/11/28/ai-is-easy-to-control/
- Two new short-timeline estimates for human-level AI, from Elon Musk (3 years) and the CEO of NVIDIA (5 years).
> d/acc for defensive (or decentralized, or differential) accelerationism.
"D is for lots of things."
"Doubling down" along the lines of your own take: I hope we can add "deliberative" too.
Vitalik raises an excellent point: if we build a perfectly aligned AGI, a large number of humans will immediately lose their sense of meaning in life, because the value of any human output will instantly drop to zero. Likewise, all human problems such as ageing will immediately be solved, leaving humans no possible problems to work on.
There’s a lot of discussion about what a poorly aligned AGI would do to human existence, but not enough about the opposite scenario.
Re: the culture novels, and specifically your tweet:
"I read several of the books, got the message loud and clear, and did not read it as utopian at all. Nor do I think Banks views it as all that utopian either?
If it's a world where I don't expect to want to live >400 years despite the freedom to do so, something is VERY wrong."
I think this demonstrates just a very different type of person/world view.
I'm someone for whom _literally_ every hobby I have/thing I do purely for the joy of it (i.e. most of the things that give value to my life) is something that would be done better by a professional or by someone besides myself. I am not better or faster or more efficient. None of them adds meaningful value to the world.
I garden, cook, brew beer, and do lots of other food-related things.
I write terrible code to do minor home automation tasks.
These things have exactly zero meaning to anyone other than myself/my direct family. For the time I spend doing these tasks, I could easily work instead and use the money I make to pay someone else to do them. And yet I choose to do them myself anyway.
The fact that I am less efficient than a farm, less of a good cook than a professional, a worse brewer than Russian River, and a worse coder than......lots of people/companies does not diminish the value I gain from these activities _in the least_.
An existence where I was able to have a family, and engage in these kinds of small tasks does not at all seem like a bad one.
My work admittedly does have some small amount of value (although, to be perfectly honest, most days it feels like it probably won't end up mattering in the long run). But if I had a less obviously valuable job, or if I didn't have to work at all and could spend my time on meaningless hobbies like the above and doing things with loved ones, I don't think my life would be worse.
To be perfectly honest, the above describes a very large chunk of all humans throughout all of human history. Most of them would not have said they did not lead meaningful lives.
Also some interesting progress in the AI art space in the last week:
- pika.art, significant (to my eye) improvement in AI-generated video, though still not able to maintain much consistency even over fairly short timeframes
- SDXL Turbo, an open-weights model that generates images in ~200ms, fast enough to update as you type
- ZipLoRA, a technique which lets you combine subject and style LoRAs, which might finally make it more practical to get consistent character+styles across images for e.g. story illustrations
- The Chosen One, another technique for consistent characters across images (this one was two weeks ago, to be fair)
Not a language model, but Google released an inorganic materials prediction model that works great; they claim it has advanced materials discovery by 200 years or something. Might be interesting for the next article.
Worth mentioning because it's a critique (ish?) of EA that doesn't fit neatly on the above compasses: https://blog.ayjay.org/45745-2/
Jacobs is a writer well worth following on rationalist culture and adjacent topics-- too easily swayed by tech-illiterate arguments, but otherwise incisive and often original. For example, here the (latent) argument is that EA is a motte-and-bailey position-- in practice the "EA" label pulls in much stronger cultural norms and presuppositions than Alexander's definition admits.
Re: the Jurgen Schmidhuber quote, I think it -may- be slightly uncharitable to jump straight to calling that pro-extinction, partly because it seems reasonable to me (and I’ve said this before and continue to endorse it) that you can strongly prefer that AI not wipe out humanity while also being open to the idea that, even if it does, there’s some sense in which it will be humanity’s next generation, come what may. I would maybe disagree strongly with “be content with that little role”, but otherwise that’s kind of my attitude towards it as well. (I wrote about this idea as a reaction to the Dial of Progress a ways back here: https://scpantera.substack.com/p/age-of-eye )
I’ve been saying the problem with Effective Altruism is that it suffers from not having a visible villain, and thus suggest someone spearhead Effective Maltruism.
Re: Tyler's thoughts on the job impact: it really depends on how quickly capabilities grow and whether there are barriers. If we really get to AGI that is at or above the level of the best humans at all skills, then I have a hard time imagining that any real estate will be going up, given what happens first to the wages of white-collar workers and then to all the downstream impacts as their spending falls.
I find these short timelines (from e.g. the OpenAI superalignment website, Elon, the NVIDIA CEO, and others) to be highly speculative, but given that more of them are claiming this, it really seems time to prepare for what it might mean for society, even if alignment goes well.
Totally unrelated to real estate or the economy, but being able to quickly spin up engineers and scientists at or above the level of the world's current best is going to have massive impacts on progress. Imagine a company of all Greg Brockmans, all working together day and night. I suspect we're quite a ways off, but still.
Re LeCun's AI IQ test: there is *no way* a set of random people could get 92% on that test. Where did he get his testers? Did he just grab his fellow researchers? Looking at a few random questions:
> ¬(A ∧ B) ↔ (¬A ∨ ¬B), ¬(A ∨ B) ↔ (¬A ∧ ¬B), (A → B) ↔ (¬B → ¬A), (A → B) ↔ (¬A ∨ B), (¬A → B) ↔ (A ∨ ¬B), ¬(A → B) ↔ (A ∧ ¬B)
> Which of the above is not logically equivalent to the rest? Provide the full statement that doesn't fit.
Barely anyone even knows what those symbols *mean*, yet he expects the average person to be capable of solving this. And this is from the *easiest level*.
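For what it's worth, the odd one out can be found mechanically. A minimal brute-force truth-table check (the Python encodings of the quoted statements are my own transcription):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# The six quoted biconditionals, transcribed as predicates over truth values A, B.
stmts = {
    "¬(A∧B) ↔ (¬A∨¬B)": lambda A, B: (not (A and B)) == ((not A) or (not B)),
    "¬(A∨B) ↔ (¬A∧¬B)": lambda A, B: (not (A or B)) == ((not A) and (not B)),
    "(A→B) ↔ (¬B→¬A)": lambda A, B: implies(A, B) == implies(not B, not A),
    "(A→B) ↔ (¬A∨B)": lambda A, B: implies(A, B) == ((not A) or B),
    "(¬A→B) ↔ (A∨¬B)": lambda A, B: implies(not A, B) == (A or (not B)),
    "¬(A→B) ↔ (A∧¬B)": lambda A, B: (not implies(A, B)) == (A and (not B)),
}

# A statement "fits" if the biconditional holds under every truth assignment;
# the one that fails for some assignment is not logically equivalent to the rest.
odd_ones = [name for name, f in stmts.items()
            if not all(f(A, B) for A, B in product([True, False], repeat=2))]
print(odd_ones)  # → ['(¬A→B) ↔ (A∨¬B)']
```

(¬A→B) simplifies to A∨B, which disagrees with A∨¬B when A is false and B is true; the other five are standard tautologies (De Morgan, contraposition, material implication). Knowing how to do this is, of course, exactly the kind of thing the average tester would not.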
> In July 2, 1959 United States standards for grades of processed fruits, vegetables, and certain other products listed as dehydrated, consider the items in the "dried and dehydrated section" specifically marked as dehydrated along with any items in the Frozen/Chilled section that contain the whole name of the item, but not if they're marked Chilled. As of August 2023, what is the percentage (to the nearest percent) of those standards that have been superseded by a new version since the date given in the 1959 standards?
My Google-fu is decent, but even I would balk at that.
> When you take the average of the standard population deviation of the red numbers and the standard sample deviation of the green numbers in this image using the statistics module in Python 3.11, what is the result rounded to the nearest three decimal points?
Goodness, there are people out there who don't know Python? Shock.
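The mechanics of that question are trivial once stated. A sketch using made-up numbers (the actual red/green values live only in the benchmark's image, so these lists are purely illustrative):

```python
import statistics

# Hypothetical stand-ins for the red and green numbers in the image.
red = [4, 8, 15, 16, 23, 42]
green = [7, 11, 13, 17, 19]

# Population standard deviation of red, sample standard deviation of green,
# averaged and rounded to three decimal places, as the question specifies.
result = round((statistics.pstdev(red) + statistics.stdev(green)) / 2, 3)
print(result)  # → 8.545
```

The only real "difficulty" is knowing that `statistics.pstdev` is the population version and `statistics.stdev` the sample version, plus reading numbers off an image.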
> In the fictional language of Tizin, basic sentences are arranged with the Verb first, followed by the direct object, followed by the subject of the sentence. I want to express my love for apples to my Tizin friend. The word that indicates oneself is "Pa" is the nominative form, "Mato" is the accusative form, and "Sing" is the genitive form. The root verb that indicates an intense like for something is "Maktay". When it is used in the present, it is used in it's root form, when it is used in the preterit past, it is "Tay", and when it is used in the imperfect past, it is "Aktay". It is used differently than in English, and is better translated as "is pleasing to", meaning that the thing doing the liking is actually the object of the sentence rather than the subject. The word for apples is borrowed from English in Tizin, and so it is "Apple" is the nominative form, "Zapple" is the accusative form, and "Izapple" is the genitive form. Please translate "I like apples" to Tizin.
(This was from the easiest set, again.)
> A standard Rubik’s cube has been broken into cubes making up its sides. The cubes are jumbled, and one is removed. There are 6 cubes with one colored face, 12 edge cubes with two colored faces, and 8 corner cubes with three colored faces. All blue cubes have been found. All cubes directly left, right, above, and below the orange center cube have been found, along with the center cube. The green corners have all been found, along with all green that borders yellow. For all orange cubes found, the opposite face’s cubes have been found. The removed cube has two colors on its faces. What are they? Answer using a comma separated list, with the colors ordered alphabetically.
This is also from the easiest level.
If this is the best they can come up with to stump GPT-4, we've already lost. GPT-4 has superhuman intelligence across the board, and LeCun must be fooling himself if he thinks "92% score for humans" is accurate.
Emily Bender’s posts always confuse me. Does she really think LLMs aren’t useful, like, at all?
"Nate Sores" → "Nate Soares"
"not as his as" → "not as ??? as"
"ElutherAI" → "EleutherAI"
"millions discovery of millions" → remove one "millions"
"the the ‘acc’" → "the ‘acc’"
"John Carmack uses as" → " John Carmack uses this as"
"with crime and governance" → "with crime and ??? governance"
"far enough along that has" → "far enough along that this has"
The rates for https://manifold.markets/ZviMowshowitz/is-the-reuters-story-about-openais and your question (https://manifold.markets/PlasmaBallin/is-openais-q-real) are weird... I know, I know, these are different questions, different resolution times, etc., but the gap is still too wide for my taste.
"Someone at Google will see this, here. That someone at Google should ensure someone puts the absolute banhammer on this person’s website."
This did, in fact, probably happen: "Congratulations. Death penalty." -- Theophite https://twitter.com/revhowardarson/status/1728970144448393385