Zvi,
I admit to only "skimming" your deep and frequent posts on AI.
Have you made suggestions somewhere to people like me who are absolute AI neophytes as to the best way to learn about the different tools available? I'm suffering from a Paradox of Choice as well as laziness.
I genuinely tried to deep dive into the topic, but everyone seems to assume you're part of a cult which knows programming and game theory and crypto and what your IQ is.
AI is a subject that makes me feel uniquely stupid!
There's the Stampy project, intended to act as a general FAQ with cross links:
https://stampy.ai/
Thanks.
Honestly, just pay for ChatGPT premium from OpenAI. Most tools are simply not as good as GPT-4, so you don't need to bother with them.
Thanks.
Not sure what happened to Solana. He used to be one of my favorite reads but AI discourse has broken his brain.
Re ChatGPT making the meals 20% better: does he really rate his dinners every night and work out the statistics? I would assume it's a joke, but you never know.
I'm concerned about GPT-4 becoming less useful with time. The only way to get people to understand the risk is to keep these relatively open (read: publicly accessible) systems clearly and legibly useful to the layman.
If private groups with "actually useful" versions of LLMs monopolize access to the tech, I suspect it both increases risk and decreases the willingness of the electorate to legislate regulation since "I tried ChatGPT and it told me 2+2=5".
“ The sky is not blue. Not today, not in New York City. At least it’s now mostly white, yesterday it was orange. Even indoors, everyone is coughing and our heads don’t feel right. I can’t think fully straight. Life comes at you fast.”
Zvi, get some air purifiers! Or make Corsi-Rosenthal boxes. https://www.texairfilters.com/using-a-corsi-rosenthal-box-to-remove-wildfire-smoke-make-sure-to-use-the-right-filters/
The link for "John Wentworth notes that..." seems wrong.
Ha! I had a similar realization wrt ChatGPT vs old Google just the other day too: https://twitter.com/SCPantera/status/1666232160729325568
Can agree with Shako that it’s weird I have a lot of colleagues who will get stumped by something and just don’t realize Google is/was an option.
Glad to see my AI post made it in, though I’d object to some of these being not worthwhile use cases, or rather I suppose this depends on which side of the “when will you guys be replaced by robots” question you fall. I suspect if you could get an LLM to identify pills (especially visually) you’re maybe around halfway there to replacing pharmacists (at least for retail). “Checking the answers” for an embarrassing fraction of my workload is opening a bottle/bag to see if what’s inside matches the image on my computer. (If you’d like to know more about how the pharmacy sausage is made and have got time for a longer read, I’d love to get more eyeballs on my retail pharmacy explainer: https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid )
If I get to doing a second post I’ll need to see if/how it can handle stuff adjacent to prescription processing maybe. Else there's a bunch I maybe could/should write on the should-pharmacists-be-replaced-by-robots/AI topic. A lot of the profession is very strongly convinced it'll never happen, but it's a general mix of tech illiteracy, poor imagination, and reflexive guild defensiveness.
Made a quickie second post for a mixed success use case this morning: https://scpantera.substack.com/p/ai-and-pharmacy-2
Personally, I have not seen GPT-4 get worse, but I wonder if I'm getting better at prompting faster than the capabilities are degrading. That would explain why I still like the results I get in almost all cases. I still struggle with general code gen. There's some implicit nature of the requirements of the code that I write that I can't quite write down, and that causes the code to be wrong.
I don't think "Find new conservation laws in fluid mechanics and atmospheric chemistry" uses language models at all, so maybe it doesn't belong in "Language Models Offer Mundane Utility"?
Oh yeah. That's other systems offering mundane utility, good call. If I get sufficiently unlazy I'll move it to In Other AI News.
re Ada Palmer - I loved Terra Ignota, one of my favorite series ever, and I think I always assumed in the back of my mind that Utopia must have solved alignment by embodying AIs into their companions and never creating anything more than that (considering big AGI systems to be inherently harbinger-like), and that the other hives like Gordian were too busy with their own interests (eg psychology) to bother much with AI development when “that would be Utopia’s business”.
All that said, yeah, I see _a lot_ of similar sentiments from the artistic segments, very highly focused on the proximate issue of being displaced.
Re: your observation about the [perceived risk of] self driving cars....
Keep in mind that a rogue self-driving automobile is a VERY PLAUSIBLE risk for anybody to imagine.
-- The risk is immediate and personal (almost everybody drives all the time)
-- possibly ubiquitous (any car on the road, maybe MANY cars on the road, could be an alien SDV), and most importantly,
-- it is EXACTLY the SAME FORM as a risk everybody is familiar with -- the crazy/irresponsible/asshole driver.
Most people cannot easily imagine what human extinction due to runaway AGI might be. (Nukes? Skynet?) But everybody can imagine being in a collision with a robot car, and that would really suck. Not just because the SDV might do crazy and dangerous things unlike humans (illogical u-turns, driving absurdly fast or erratically). But even in a fender-bender, you would be trying to adjudicate an insurance claim while dealing with HAL9000.
By the way, what is the status of liability insurance for SDVs? That seems a far more thorny problem than the technical problems.
BRetty
I mean, every human driver has to buy insurance for exactly this risk, so the danger is that the law will charge VASTLY higher damages per incident to SDVs. If it simply assigns damages as if a human had made the mistake, you can just... buy the insurance. Plenty of us will be willing to help with that!
My question was more about who is the "responsible party" in a self-driving vehicle? Who bears the risk and responsibility for accidents:
-- Owner/driver?
-- Manufacturer?
-- Sometimes both, depending on contracts negotiated between insurer/manufacturer/policyholder?
Has it been decided that the notional human "driver" still bears all the risk? If several people are in the car, which one was legally "driving" it? Will there have to be a legal "driver of record" logged/recorded when the car starts?
Will the vehicle and/or software manufacturer be liable for accidents caused by their products? Has that been ruled out?
I heard a lot of opinion 5 years ago that these risk profiles were the real killer app and problem to solve for driverless cars. I haven't heard what has been developed or proposed regarding this; very interested if anybody has insights.
Thanks, BR
I don't know the current law but I think any consistent policy everyone is clear on should be fine, again because insurance and contracts and Coase.
The temperature thing that Riley showed isn’t actually setting the temperature, right? ChatGPT just knows what temperature means and tries to become more creative, but the model is still sampled at the original temperature.
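For the record, temperature is only a real knob on the API side; the chat UI doesn't expose sampling controls at all, as far as I know. A minimal sketch of the distinction using the OpenAI Python client (the model choice, prompt, and values here are placeholder assumptions, not anything Riley used):

```python
import openai  # the 0.x client; reads OPENAI_API_KEY from the environment

# Telling ChatGPT "set temperature to 2" is just more prompt text; the server
# still samples at its default. Only the API parameter changes the sampling:
response = openai.ChatCompletion.create(
    model="gpt-4",  # placeholder; any chat model works
    messages=[{"role": "user", "content": "Write one surprising sentence."}],
    temperature=1.8,  # actual sampling temperature (0.0-2.0 for the chat API)
)
print(response.choices[0].message.content)
```

In the UI, the same instruction can still shift the output, but only because the model imitates what "high temperature" text tends to look like.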
It might be useful to tie things together by labeling quotes with the simulacra levels you think they are expressing. Those with causal models are obviously level 1. Mike Solana and Noah Giansiracusa on level 3 or 4. (Or are they accusing EA of being purely 3 or 4? Can they imagine someone operating on actual reasons?)
It is a classical level 3+ move to forget that levels 1 and 2 exist, or to claim that no one could care about them.
Alas, I don't think we can use the simulacra levels like that with this audience, and also they'd end up being a lot like name calling.
> This matches Terra Ignota in its lack of either AGI or any explanation of why it doesn’t have AGI
This understates the thoughtfulness of Palmer's writing. The conceit of Terra Ignota is that we get literal flying cars instead of the Internet but they raise exactly the same intellectual topics after all, since they have the same effect of replacing old forms of social ties with voluntary association. Likewise it's easy to read <spoiler>, <spoiler>, and/or <spoiler> in the books as stand-ins for AGI.
Your link to an Emily Bender quote about enemies / allies vs. truth links to a tweet by Karin Rudolph.