27 Comments

Zvi,

I admit to only "skimming" your deep and frequent posts on AI.

Have you made suggestions somewhere to people like me who are absolute AI neophytes as to the best way to learn about the different tools available? I'm suffering from a Paradox of Choice as well as laziness.

Not sure what happened to Solana. He used to be one of my favorite reads but AI discourse has broken his brain.

Re ChatGPT making the meals 20% better: does he really rate his dinners every night and work out the statistics? I would assume it's a joke, but you never know.

I'm concerned about GPT-4 becoming less useful with time. The only way to get people to understand the risk is to keep these relatively open (read: publicly accessible) systems clearly and legibly useful to the layman.

If private groups with "actually useful" versions of LLMs monopolize access to the tech, I suspect it both increases risk and decreases the willingness of the electorate to legislate regulation since "I tried ChatGPT and it told me 2+2=5".

Jun 8, 2023·edited Jun 9, 2023

“ The sky is not blue. Not today, not in New York City. At least it’s now mostly white, yesterday it was orange. Even indoors, everyone is coughing and our heads don’t feel right. I can’t think fully straight. Life comes at you fast.”

Zvi, get some air purifiers! Or make Corsi-Rosenthal boxes. https://www.texairfilters.com/using-a-corsi-rosenthal-box-to-remove-wildfire-smoke-make-sure-to-use-the-right-filters/

The link for "John Wentworth notes that..." seems wrong.

Ha! I had a similar realization wrt ChatGPT vs old Google just the other day too: https://twitter.com/SCPantera/status/1666232160729325568

I can agree with Shako that it’s weird I have a lot of colleagues who will get stumped by something and just don’t realize Google is/was an option.

Glad to see my AI post made it in, though I’d object to some of these being not worthwhile use cases, or rather I suppose this depends on which side of the “when will you guys be replaced by robots” question you fall. I suspect if you could get an LLM to identify pills (especially visually) you’re maybe around halfway there to replacing pharmacists (at least for retail). “Checking the answers” for an embarrassing fraction of my workload is opening a bottle/bag to see if what’s inside matches the image on my computer. (If you’d like to know more about how the pharmacy sausage is made and have got time for a longer read, I’d love to get more eyeballs on my retail pharmacy explainer: https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid )

If I get around to doing a second post, I’ll need to see if/how it can handle stuff adjacent to prescription processing. Otherwise there's a bunch I maybe could/should write on the should-pharmacists-be-replaced-by-robots/AI topic. A lot of the profession is very strongly convinced it'll never happen, but that's a general mix of tech illiteracy, poor imagination, and reflexive guild defensiveness.

Personally, I have not seen GPT-4 get worse, but I wonder if I'm getting better at prompting faster than the capabilities are degrading. That would explain why I still like the results I get in almost all cases. I still struggle with general code gen. There's something implicit in the requirements of the code I write that I can't quite write down, and that causes the generated code to be wrong.

I don't think "Find new conservation laws in fluid mechanics and atmospheric chemistry" uses language models at all, so maybe it doesn't belong in "Language Models Offer Mundane Utility"?

re Ada Palmer - I loved Terra Ignota, one of my favorite series ever, and I think I always assumed in the back of my mind that Utopia must have solved alignment by embodying AIs into their companions and never creating anything more than that (considering big AGI systems to be inherently harbinger-like), and that the other hives like Gordian were too busy with their own interests (eg psychology) to bother much with AI development when “that would be Utopia’s business”

All that said, yeah, I see _a lot_ of similar sentiments from the artistic segments, very highly focused on the proximate issue of being displaced

Jun 9, 2023·edited Jun 9, 2023

Re: your observation about the [perceived risk of] self driving cars....

Keep in mind that a rogue self-driving automobile is a VERY PLAUSIBLE risk for anybody to imagine.

-- The risk is immediate and personal (almost everybody drives all the time)

-- possibly ubiquitous (any car on the road, maybe MANY cars on the road, could be an alien SDV), and most importantly,

-- it is EXACTLY the SAME FORM as a risk everybody is familiar with -- the crazy/irresponsible/asshole driver.

Most people cannot easily imagine what human extinction due to runaway AGI might look like. (Nukes? Skynet?) But everybody can imagine being in a collision with a robot car, and that would really suck. Not just because the SDV might do crazy and dangerous things unlike humans (illogical u-turns, driving absurdly fast or erratically). But even in a fender-bender, you would be trying to adjudicate an insurance claim while dealing with HAL9000.

By the way, what is the status of liability insurance for SDVs? That seems a far thornier problem than the technical ones.

BRetty

The temperature thing that Riley showed isn’t actually setting the temperature, right? ChatGPT just knows what temperature means and tries to become more creative, but the model is still sampled at the original temperature.
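The distinction the comment is drawing: temperature is a decode-time parameter applied to the model's output logits by the serving code, so text in the prompt can't change it. A minimal sketch of how temperature scaling works (the `sample_probs` helper and toy logits here are illustrative, not OpenAI's implementation):

```python
import math

def sample_probs(logits, temperature=1.0):
    """Softmax over logits divided by temperature.

    Higher temperature flattens the distribution (more 'creative' sampling);
    lower temperature concentrates probability on the top-scoring token.
    This scaling happens in the sampler, outside the model's control, which
    is why asking the model in-prompt to 'raise the temperature' can't
    actually change it.
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.0]  # toy next-token logits

low = sample_probs(logits, temperature=0.5)
high = sample_probs(logits, temperature=2.0)

# Lower temperature puts more mass on the top token than higher temperature.
assert low[0] > high[0]
```

The model can still *act* more creative when asked (by steering toward different content), but the actual probability distribution it is sampled from is fixed by whoever runs the sampler.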

It might be useful to tie things together by labeling quotes with the simulacra levels you think they are expressing. Those with causal models are obviously level 1. Mike Solana and Noah Giansiracusa on level 3 or 4. (Or are they accusing EA of being purely 3 or 4? Can they imagine someone operating on actual reasons?)

Jun 23, 2023·edited Jun 23, 2023

> This matches Terra Ignota in its lack of either AGI or any explanation of why it doesn’t have AGI

This understates the thoughtfulness of Palmer's writing. The conceit of Terra Ignota is that we get literal flying cars instead of the Internet but they raise exactly the same intellectual topics after all, since they have the same effect of replacing old forms of social ties with voluntary association. Likewise it's easy to read <spoiler>, <spoiler>, and/or <spoiler> in the books as stand-ins for AGI.

Your link to an Emily Bender quote about enemies / allies vs. truth links to a tweet by Karin Rudolph.