28 Comments

> Build a list of representative incidents.

I think I know what this means. It sounds like people tend to ask, "What are some historical examples of <bank failures due to regulation|hacks due to social engineering|mass shootings>"

Or am I misunderstanding something? Anyway, sounds like something that can be turned into a product.

> Strangely, the length here happens not to rule anyone out, since Larry (Page) was the only other reasonable guess.

It's true you can't rule out Larry just from that signature, because 'Google/Deepmind' and a 5-letter name correlate equally well with 'Larry' and 'Demis'; but it is more likely to be 'Demis' because of the external evidence that Hassabis is the one that Musk keeps going around and using as the example of the enemy in OA contexts (Larry's just the funder). Then for internal evidence, you can check the first forwarded email, signed by another 5-letter name, and see that it has to be 'Demis Hassabis' because 'Larry Page' is way too short to fit the 14-letter or whatever email name. So 'Larry' becomes highly implausible - the entire email convo was sparked by a *Demis* email! Why suddenly switch to 'Larry'? That would be illogical. So, it's 'Demis'.

Also, the Claude reconstruction is obviously wrong, because CCing Demis on an internal email attacking him makes zero sense. (The CC is almost certainly Andrej Karpathy, given Musk forwarding Karpathy's email as the new Master Plan for OA, and the implausibility of CCing someone entirely unmentioned hitherto on such important strategic internal emails/planning; but I haven't checked the length of the CC name vs the forwarded plan email name.) However, Claude wasn't given the right information, so this doesn't mean much. The right prompt would specify each redaction's character count, as converted from the em width, and supply plausible names along with their character lengths, to ensure as few unforced errors as possible. It looks like Claude-3 might have changed the tokenization, possibly even all the way to a character/byte encoding, but it is still bad to force an LLM to do such discrete low-level character-manipulation tasks unnecessarily. (I'm impressed it did as much as it did from... is that just a *screenshot* of the web page?!)

Although the best approach, of course, would be a proper cryptographic-style approach: use an LLM with standard maximization algorithms, provide a rich prompt of metadata like candidate names/context/related documents to enrich the probabilities, and exploit the fact that each individual word length is leaked, iteratively searching through all possible graphs of completions to maximize the exact likelihood. (Something like https://en.wikipedia.org/wiki/Viterbi_algorithm ) A prompt-only approach is like a worse version of a single iteration of that: unnecessarily local, sloppy, and run only once on only one candidate solution.
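To make the idea concrete, here is a minimal sketch of that length-constrained search. Everything in it is hypothetical: the candidate list, the leaked redaction lengths, and the toy scorer (which stands in for real LLM log-likelihoods); the point is just that the leaked character counts prune the search space before any scoring happens.

```python
from itertools import product

# Hypothetical candidate pool for the redacted names.
CANDIDATES = ["Demis", "Larry", "Elon", "Andrej Karpathy", "Larry Page", "Demis Hassabis"]

def fills_for(length):
    """Candidates whose character count matches the leaked redaction width."""
    return [c for c in CANDIDATES if len(c) == length]

def best_assignment(redaction_lengths, score_fn):
    """Exhaustively score every length-consistent assignment.

    A real implementation would use Viterbi/beam search with LLM
    log-probabilities instead of brute-force enumeration.
    """
    options = [fills_for(n) for n in redaction_lengths]
    best, best_score = None, float("-inf")
    for combo in product(*options):
        s = score_fn(combo)
        if s > best_score:
            best, best_score = combo, s
    return best

def toy_score(combo):
    """Toy stand-in for an LLM scorer: reward assignments that refer to the
    same person, mimicking the 'the thread was sparked by a Demis email'
    consistency argument."""
    first_tokens = [c.split()[0] for c in combo]
    return sum(first_tokens.count(t) for t in first_tokens)
```

With a 5-character redaction and a 14-character redaction, only 'Demis'/'Larry' and 'Demis Hassabis' survive the length filter, and the consistency scorer then picks the all-Demis assignment.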

Quick note: "Confirm that Google wait times are not reliable" doesn't seem to link to the right thing. It's a tweet from Nate Silver just saying he feels better about AI alignment now. Perhaps this was intentional, but if so it's quite abstruse (or do I mean obtuse - I dunno)

Every week of AI news makes it seem like the future is bleak for humanity and the adults in the room are not stopping it.

"this kind of ‘look what details it cannot do right now’ approach is, in the bigger picture, asking the wrong questions, and often looks silly even six months later."

Positioning oneself as saying "the models cannot do X" is like rowing a boat toward Niagara Falls.

In the "people doing potentially offensive things with LLMs" department... character.ai now has "Austrian Painter" as a character. He spent the 1910s doing scenic watercolours of Vienna. Yes, that Austrian Painter.

I will confess to setting up a prompt where it is 1920, I am a minor official of the Bavarian state government, and Austrian Painter guy is attempting to fill in the form to register a new political party.

"Does your party have a manifesto? Section 5 of the form says you can write in your manifesto if you have one."

Etc.

"63% say their employer cares more about their productivity than their career development"

That's hilarious. Every generation has to learn what jobs are, I guess.

Mar 7·edited Mar 7

Claude-3 Opus has landed at #3 on the Chatbot Arena leaderboards, behind two endpoints for GPT4.

https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard

NOTE: This does not mean Claude-3 is objectively dumber or GPT4 objectively smarter. It just means that human users, faced with competing answers by GPT4 and Claude-3, preferred GPT4's by a tiny bit. Votes are still coming in and this result could theoretically change.

But look at how great Sonnet is. They could have released THAT model and called it a GPT4 competitor. Is it worth paying 5x more for Opus unless you're enterprise scale?

edit: and Bard Pro is ~80 Elo points higher than it should be because it's searching the internet. Apples to oranges.
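For reference on what an ~80-point gap means in practice, the standard Elo formula converts a rating difference into an expected head-to-head win rate. A quick sketch (the ~80-point figure is this comment's estimate, not a verified number):

```python
def elo_expected_score(diff):
    """Expected win probability for a player rated `diff` Elo points
    above its opponent, per the standard logistic Elo formula."""
    return 1 / (1 + 10 ** (-diff / 400))

# An ~80-point Elo edge corresponds to winning roughly 61% of
# head-to-head preference votes; a 0-point gap is a coin flip.
edge = elo_expected_score(80)  # ~0.613
```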

"But yes, of course the employer cares more about your productivity than your career development"

Yes, and also... what weird-ass careers are people pursuing where they don't think "learning to be very productive" IS career development? I totally get that there are a bunch of skills relevant to career advancement that have little to do with your actual profession or your skills (you can call these "people skills" or "office politics" or whatever), but obviously, all else held equal, "getting better at whatever it is you get paid to do" is going to be pretty useful for career advancement, so it's really weird to dismiss this.

The geohot/AMD drama this week was also surreal: through rants on Twitter, he publicly pressured* the CEO into open sourcing their firmware so he could fix it and help it compete with Nvidia, and it seems to have worked.

Edit: more neutral language, but it was a pretty aggressive exchange I didn't expect to work. Honestly, kudos to him for making a giant company realize what was in its own best interests.

Re: the racism study - I haven't looked at it in detail beyond the tweet thread, and maybe I'll get curious enough to take a closer look, but did they only include these two options for dialect? Because... not all white people talk like the proper-English sample, and if those are the only two choices then the test itself is biased: it really pits "proper English" against "dialect", makes stereotypical black speech the only example of a dialect, then declares the discriminator racist when it's entirely possible, even likely, that it's discriminating by perceived class/education level. How does it react to a stereotypical Boston accent? Valley speak? Anything that flags the speaker as Southern? (I realize "black" and "Southern" have an overlap, but the Venn diagram is not a circle.) I mean, I grew up in the rural South, and I know that even in lily-white farming communities the teenagers are aware that they should at least consider learning to suppress their accents. Sounding too hick is a problem even if you want to fit in a Southern city. "ain't" in particular isn't a strictly black word, despite what's implied by one of those screenshots; it's a differentiator among Southern white people too.

I admit I skimmed the Marc Andreessen security thing and thought "yes of course, this is the Way, I hope they are doing this"... I don't know if I am more embarrassed for my failure in interpretation or his failure in reasoning.

Similar to Maven, Israel has been using "Hasbora" to help it select targets for bombing in Gaza. I think you intentionally stay away from this topic in your writing, Zvi, but I wanted to flag it for you, just in case you hadn't come across it.

Hang on, the woman responsible for writing the AI principles at Google is named Jen Gennai?

This is simply too much.
