26 Comments

A couple of thoughts about OpenAI and its sales efforts. My understanding is that the sales team is trying to sign up enterprise customers. For example, think of Ford or Disney or Coca-Cola wanting to train an LLM on corporate data. Useful for the company and its employees, I think, and, sure, OpenAI has its API and other tooling to get it done. But OpenAI seems to want to be *both* a research org and an enterprise sales org. That...doesn't really make sense to me?

I've thought for a while now that OpenAI should separate itself into a research org, call it OpenAI_Labs, and an enterprise sales org, call it OpenAI_Sales. OpenAI_Sales licenses OpenAI_Labs software on a preferential basis, builds out enterprise sales capability, and hires a CEO who knows the enterprise sales playbook cold. Otherwise OpenAI is just going to get curb-stomped by all the enterprise-focused companies out there that *do* know how to run enterprise sales playbooks.

> Dan Shipper spent a week with Gemini 1.5 Pro and reports it is fantastic

Aside from the part where he was fooled by a large Gemini confabulation, however - where, per Murphy's law of irony, he was using the confabulation as an example of why you should trust Gemini, 'delegate', and not worry about problems like confabulation or check the outputs. (I assume you read the pre-correction version.)

It's incredibly difficult to keep remembering not to trust AI outputs without independent validation. I'd guess that a company-written system prompt could have the model Google its own output to try to validate it before presenting anything to the end user. That would make it slower, but a "try harder to be accurate" mode would be nice. I don't really see how you prevent AIs from being widely trusted without anyone checking the outputs.
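
For concreteness, here is a minimal sketch of what that kind of self-validation loop might look like, assuming hypothetical llm() and web_search() helpers (none of this is an existing vendor API):

```python
# Hypothetical "try harder to be accurate" mode: draft an answer, search for
# each factual claim, and flag anything that could not be corroborated.
# llm() and web_search() are stand-ins, not real APIs.
from typing import List

def llm(prompt: str) -> str:
    """Stand-in for a call to whatever chat model the product uses."""
    raise NotImplementedError

def web_search(query: str) -> List[str]:
    """Stand-in for a search API returning a few result snippets."""
    raise NotImplementedError

def careful_answer(question: str) -> str:
    draft = llm(f"Answer concisely: {question}")
    claims = llm(f"List each factual claim below, one per line:\n{draft}").splitlines()

    unsupported = []
    for claim in claims:
        evidence = "\n".join(web_search(claim))
        verdict = llm(
            "Does the evidence support the claim? Answer SUPPORTED or UNSUPPORTED.\n"
            f"Claim: {claim}\nEvidence: {evidence}"
        )
        if "UNSUPPORTED" in verdict:
            unsupported.append(claim)

    if unsupported:
        flagged = "\n".join(f"- {c}" for c in unsupported)
        return f"{draft}\n\nCould not independently verify:\n{flagged}"
    return draft
```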

Yeah, it seems very unfortunate that there's not even a 'quote the literal range in the prompt from index X to index Y' primitive for this sort of retrieval task, which would avoid confabulation by construction.
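
A toy sketch of what such a primitive could look like (a hypothetical quote_range(), not any existing API): the model emits only index ranges, and the displayed quote is copied verbatim from the source, so it cannot be made up.

```python
# Hypothetical "quote literal range" tool: quotations are produced by slicing
# the source text at model-supplied indices rather than by generating tokens,
# so a quote can be wrong or irrelevant, but never fabricated.

def quote_range(source: str, start: int, end: int) -> str:
    if not (0 <= start < end <= len(source)):
        raise ValueError("Requested quote lies outside the source text")
    return source[start:end]  # exact characters, by construction

# The model's output is then just structured pointers into the source, e.g.
# {"claim": "...", "quote_start": 1042, "quote_end": 1178}, and the rendered
# quotation comes from quote_range(), not from generated text.
```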

Scott Alexander has another example today, where 'Consensus' doesn't necessarily make stuff up but does overweight what apparently are minor risks, and concludes: https://www.astralcodexten.com/p/links-for-february-2024

> I think the Mayo Clinic summary is much better. I’m still not at a point where I would use Consensus without checking its answers carefully.

But more broadly, this sort of thing is why I struggle to make use of LLMs for my own writing: they are not accurate enough to be useful for strictly factual/research-like tasks without supervision, and yet, have also been ruined for anything creative or with interactive supervision.

Readers do not read gwern.net for essays where 'only' 1% of the claims or quotes or links are confabulations; and I do not want to write things meant to be as disposable as used Kleenex, where a 1% error rate would be fine. (If you're a spammer/marketer or otherwise engaged in spewing out bureaucratic paperwork, then that may be fine. Indeed, a 1% error rate is probably a lot more accurate than existing approaches.)

Nor do I benefit that much from getting critiques from LLMs that have been RLHFed or otherwise neutered into inanity. Oh sure, they can help a little and occasionally point out issues like spelling errors, but their 'ChatGPT-speak' style improvements are much worse, and they rarely make any major suggestions; they can improve my writing by, say, 10%, but not 10x.

So I'm largely left using GPT-4 for programming, where I can read & run the code to verify it without doing the equivalent work of writing from scratch, for miscellaneous one-off tasks and better-than-Google searches, and for minor automation tasks where errors are not a big deal (eg. reformatting text in very specific ways, like adding linebreaks to run-on abstracts). The first is valuable but a narrow domain; and the second and third are, almost by definition, low-value-add in total. So, overall, far less utility to me than I know is possible.

The Wonka experience failure was an AI-alignment issue: not because of the poor production quality, but because of the script. The script was wholly AI-generated, as the lead actor has noted on TikTok, and the AI inserted a novel, menacing, non-Roald-Dahl character into the event, who frightened the children to tears. That creepiness, I believe, is why parents called the police, not just the low-effort swindle, which could have been addressed without law enforcement.

https://x.com/mttpgn/status/1763233980755714251?s=46

The script bit is interesting; did they not second-guess it at all?

In any case, I've been thinking that AI generation will continue making it harder to judge books (metaphorically) by their covers, because good art/grammar/whatever will no longer be reliably costly signals of quality.

It sounds like the entire thing was an exercise in "put on a show, but save money by replacing as much creative work as possible with AI", so presumably they removed humans from the loop whenever possible?

I bet the grammar problem goes away by the GPT-5-equivalent generation, at the latest. Gemini's large context window may already have fixed that, and also perhaps the problem with long-term coherence.

> even if we do successfully get AIs to reflect the preferences expressed by the feedback they get, and even if everyone involved is well-intentioned, the hard parts of getting an AI that does things that end well would be far from over. We don’t know what we value, what we value changes

There is an assumption here that we need to teach AI our values so that it acts on its own, for our benefit, as a kind of AI governor. But AI doesn't need to have any values to be able to answer queries like "Prepare a detailed plan for achieving X that does no harm to humans, and tell me of any potentially controversial consequences". Even current LLMs can do that, and they understand the concept of "harm to humans" well enough. LLMs are amoral; they don't care what we do with the output, even if they can predict it. We can use AI as a tool even if it is far smarter than us. More about this and other common assumptions about AGI: https://medium.com/@jan.matusiewicz/agi-safety-discourse-clarification-7b94602691d8

Seems like you're describing an oracle, but I don't think that's the natural endpoint of AI, even if that's the natural extension of LLMs.

An oracle AI can be used to verify the consequences of an action plan produced by the agent AI. The LLM would need some training in a virtual environment: producing plans at different levels of detail, verifying their execution, modifying them, and so on. Perhaps some RL to enhance its game-playing abilities. In some games, refraining from pursuing instrumental goals would be a constraint, in others not - just to train its obedience in a virtual world where it is clear what is happening. As long as the agent AI isn't trained for particular goals and constraints, it can be kept universal: equally capable of producing a plan to "increase industry production without harming people" as "decrease industry production and harm people", amoral and indifferent to the effects of its plans, and thus safe, as it doesn't care about anything, including its own survival.
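
A toy sketch of the division of labor being proposed here, with hypothetical agent_plan() and oracle_consequences() stand-ins (nothing below is a real system):

```python
# Hypothetical agent/oracle split: the agent drafts a plan, the oracle predicts
# its consequences, and nothing is executed until a human reviews both.
# Both model calls are stand-ins for whatever systems would actually be used.

def agent_plan(goal: str, constraints: str) -> str:
    """Stand-in for the plan-producing agent model."""
    raise NotImplementedError

def oracle_consequences(plan: str) -> str:
    """Stand-in for the consequence-predicting oracle model."""
    raise NotImplementedError

def propose_for_review(goal: str, constraints: str) -> dict:
    plan = agent_plan(goal, constraints)
    consequences = oracle_consequences(plan)
    # Nothing is acted on here; a human reviews the plan together with the
    # oracle's predicted consequences before anything leaves the sandbox.
    return {"plan": plan, "predicted_consequences": consequences}
```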

Verifying an oracle's output in all possibly relevant cases seems hard/tedious, the sort of thing we're planning to outsource to AI in the first place.

It sounds like the plan is to train for the particular goal of obedience. Even assuming it worked (that is, the agent isn't tempted to maximize whatever proxy it has for obedience), it's difficult for me to imagine the resulting equilibrium holding: ensuring that only good commands are given, that the internal representations are robust to corruption, that keeping humans in the loop stays sufficiently competitive with full automation, and probably other issues I'm missing at the moment.


This quote from the Dwarkesh Patel interview stood out to me:

"We’re working on things like AlphaZero-like planning mechanisms on top that make use of that model in order to make concrete plans to achieve certain goals in the world, and perhaps sort of chain thought together, or lines of reasoning together, and maybe use search to kind of explore massive spaces of possibility."

This pretty clearly refutes the arguments that LLMs won't have goals and that we won't have to worry about the instrumental-goal issues RL has.
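
For concreteness, "planning mechanisms on top that... use search" is roughly this shape - a toy sketch with hypothetical propose_steps() and score() helpers, not DeepMind's actual implementation:

```python
# Toy sketch of "AlphaZero-like planning on top of a model": the model proposes
# candidate next steps, a value estimate scores partial plans, and beam search
# keeps only the most promising chains of reasoning. propose_steps() and
# score() are hypothetical stand-ins.
import heapq
from typing import List, Tuple

def propose_steps(state: str, k: int) -> List[str]:
    """Stand-in: ask the model for k candidate next steps toward the goal."""
    raise NotImplementedError

def score(state: str) -> float:
    """Stand-in: value estimate of how promising a partial plan looks."""
    raise NotImplementedError

def plan_with_search(goal: str, width: int = 4, depth: int = 5) -> str:
    beam: List[Tuple[float, str]] = [(0.0, f"Goal: {goal}")]
    for _ in range(depth):
        candidates: List[Tuple[float, str]] = []
        for _, state in beam:
            for step in propose_steps(state, width):
                new_state = state + "\n" + step
                candidates.append((score(new_state), new_state))
        beam = heapq.nlargest(width, candidates)  # keep the best partial plans
    return max(beam)[1]
```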

> First note is that this says ‘text and images’ rather than images. Good.
>
> However it also identifies the problem as ‘offended our users’ and ‘shown bias.’ That does not show an appreciation for the issues in play.

The charitable reading of Pichai's statement is that he knows perfectly well what the underlying issues with the Geminipocalypse are, but deliberately chooses to frame the problems with Gemini in the same kind of language the people responsible for it push in their own justifications. Kind of a cheeky "hoist by your own petard": if you can use therapeutic appeals to harm and bias to advance your own purposes, then we can use those same principles to argue why you are wrong.

Then again I may just be inhaling the copium and giving Google's top brass too much credit.

A pedantic reminder that the Minus comic doesn't mean what most people think it does. Within the story of the long-running comic, the child is a godlike being who summoned the meteorite for her own amusement, so she can easily bat it away.

I think the difference between jobs-we-mourn and jobs-we-do-not is concentration.

There were never entire towns or communities whose existence depended on Blockbuster. There are/were for coal mining. So when coal mining goes away, entire communities disappear. When Blockbuster disappeared, the 0.3% of a town's employees who worked there found new jobs without having to move or otherwise disrupt their lives.

I think this will continue to be true.

The jobs-we-will-mourn in the future will either be Hollywood jobs or Silicon Valley jobs, when/if those get replaced en masse (those are the two most concentrated classes of jobs I can think of off the top of my head, but I'm sure there are others).

With regard to Khanmigo, I'm very curious whether this is just now launching or just now moving out of beta, since we first heard about it almost a year ago, I think. If it's the former, then hopefully very soon we will get the results of the attempt to replicate the results of Bloom's 2-sigma problem paper [0]. If it's the latter, then hopefully they will share the results of the beta testing and we will have them even sooner.

In other words, I very, very much want to know if Khanmigo comes even close to the dramatic results we see with human tutors.

[0] https://en.wikipedia.org/wiki/Bloom%27s_2_sigma_problem

re: Blockbuster, I was (I guess) technically one of those lost jobs. At the time I worked at a third-party company that managed their customer database and marketing. When they died - and, hilariously, dumped all their debt on Blockbuster Canada - I got reassigned to the Harley-Davidson client team. As a microcosm, I think it explains why there wasn't a big uproar: almost everyone at the company had relatively marketable skills and moved on to doing something similar at a place with a different logo - and even if you were the blue shirt at the counter, there are lots of jobs that need the clerk skillset, or you were planning to upgrade to a more prestigious job anyway.

> > we have an incredible springboard for the AI wave

This makes me want to channel Bernard from "Yes, Minister", except I can't do it justice: "Er, don't you mean a surfboard? Springboards are usually found in pools, not the ocean. And if you dive into a wave, you'll wind up behind it, not in front like a surfer."

> > Our analysis found that the individual GPT-3.5 output with the highest percentage of plagiarism was in Physics

Yeah, I bet they all plagiarize "F = ma". Seriously, though, before looking closely, I register a prediction that there'll be an effect where the more hard-science the field, the more we'll see identical phrasing used to represent identical ideas, because of the number of concepts that have precise technical definitions. After looking, this seems somewhat borne out by the data, although I wonder what Psychology is doing up there.

> [filtered water metaphor]

5. If you find out that the maker of your water filter has been quietly updating it to pass through chemicals that produce effects which they view as politically desirable.

> > Gamers worldwide left confused after trying Google's new chess app.

Oh, this is just like stuff we did in elementary school. Player with their king on the right goes first, and then you have to remember whose pieces are whose. Touch a wrong piece and you forfeit your turn.

> Flagging stuff as important when it isn’t is fine, but not the other way around.

This reminds me of the short story "Huddling Place" by Clifford D. Simak. A doctor becomes intensely agoraphobic, in part due to never having to leave home because of his robot servants. He is asked to travel to another planet, to save what might be the most valuable life in all civilization, but finds it difficult. With effort he manages to pack, but then in the end, discovers that his robot butler had declined on his behalf. (Who are we, if not our patterns?)

http://www.pierssen.com/cfile/huddling_place.html

----

A tap came on the door.

"Come in," Webster called.

It was Jenkins, the light from the fireplace flickering on his shining metal hide.

"Had you called earlier, sir?" he asked.

Webster shook his head.

"I was afraid you might have," Jenkins explained, "and wondered why I didn't come. There was a most extraordinary occurrence, sir. Two men came with a ship and said they wanted you to go to Mars."

"They are here," said Webster. "Why didn't you call me?"

He struggled to his feet.

"I didn't think, sir," said Jenkins, "that you would want to be bothered. It was so preposterous. I finally made them understand you could not possibly want to go to Mars."

Webster stiffened, felt chill fear gripping at his heart. Hands groping for the edge of the desk, he sat down in the chair, sensed the walls of the room closing in about him, a trap that would never let him go.

Dude, you've got to write less. I think this is the first one of your posts I've made it to the end of, and it took me over an hour.

Good news, reading is optional. Bolded sections are the ones Zvi considers noteworthy if you want to read less.

I'll use that strategy in the future.

I just want to say that I've read all of these posts so far, and I found them valuable to keep up with the topic. I assume there are other people like me, so there's some target audience for this?

Maybe you're a faster reader than me? Or maybe it's because I already subscribe to 4 other AI/ML newsletters so a lot of the content is redundant?

What other ones are you reading? Always looking for good news feeds.

Over the last week, I closed all my tabs that had partially read Zvi posts (and associated links to other things). I decided on the rule (heuristic?) to just close them when a new one comes out. If it's really important, the thing will be talked about over several posts, but much of it is not important in the scheme of things (but may be super interesting).

Oh, and this is me finishing reading this post just as #54 is out! ;)

The Algorithmic Bridge and the NLP Newsletter are my primary ones. Plus all of the stuff I absorb through osmosis on Twitter.

author

I think a rule of 'anything not in bold that is important will come back again' is solid. I wouldn't assume that non-roundup posts will get echoed.
