Italy's privacy authority's decision is not one of the many entirely stupid things our government is currently trying to do (fines for foreign words, the "Italian restaurant" badge, banning synthetic meat) - it is, in my opinion, more interesting than that.

The privacy authority (the Garante, GPDP) raises three issues (https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847#english):

1) Massive collection of personal data, and its potential use for training.

2) Inaccurate information made available by ChatGPT could amount to "inaccurate personal data processing".

3) Lack of age verification mechanisms.

(1) and (3) seem straightforward to address, but (2) is a radical problem: the Italian Garante says that ChatGPT "hallucinating" untrue information about people could constitute "incorrect processing of personal data". I kind of see their point; in a way it's similar to false information being published on a website or in a newspaper... but how could that possibly be fixed, in the current model?

Perhaps they'll let them get away with some kind of banner "The stories and information posted here are artistic works of fiction and falsehood. Only a fool would take anything posted here as fact", but perhaps not.

Another interesting fact is that, as I understand it, anyone who uses OpenAI's API becomes responsible for personal data processing (including third parties to which the data is sent), and again they could probably address (1) and (3) fairly easily, but not (2). Really curious to see how this evolves - my prediction is an "Accept" banner similar to cookies.


I haven't gotten through all of this yet, but I just wanted to thank Zvi for organizing, curating, and writing up all of this info (for AI, Covid, and all the other stuff). The time and effort put into this couldn't have been small, and if my substack comment of gratitude helps offset that in some way, that would be great.


I don’t know if you saw this, but the decompression of the lambda calculus graf isn’t just inexact, it’s importantly wrong. It exactly reversed the original true fact (well-typed STLC terms terminate) into its opposite (that they are not proven to terminate). This is a meaningful error about a theorem that is certainly in the training set, so… it’s still pretty cool to see the compression happen, but Shoggothtongue isn’t magic.
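
For reference, here is the fact that got reversed, stated as the standard theorem (strong normalization for the simply typed lambda calculus, usually proved via Tait's computability method; the formal statement below is my own paraphrase):

```latex
% Strong normalization for the simply typed lambda calculus (STLC):
% every well-typed term is strongly normalizing, i.e. there is no
% infinite beta-reduction sequence starting from it.
\Gamma \vdash t : \tau
\;\Longrightarrow\;
\neg\,\exists\, t_1, t_2, \ldots \;:\; t \to_\beta t_1 \to_\beta t_2 \to_\beta \cdots
```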

(The second one with the secret message seems to have worked better, so there’s probably some variability here.)


Here’s my biggest problem with the pause/halt arguments: “we have to call it AI-notkilleveryoneism now.”

Our governments lack operational competence, can’t build shit, don’t understand AI at all, and are generally…bad, but they’re staffed entirely by world-class political operatives and killers. Timnit Gebru is not the political A-team, or anywhere close to it…yet she and her clique ate Eliezer alive so bad that the normal term we started with, “AI safety”, doesn’t actually refer to anything but their culture war shit anymore. _Why the hell do you want to draw the attention of the political A-team_? They have no interest in your goals, you are made of political power they can use, and they are so much better at this than you that you can’t understand what they do. If Eliezer can’t beat Timnit Gebru at this, I don’t believe he can beat the White House.

Making a fuss about pausing/halting training is not going to result in the halt or pause you want, but in something vaguely shaped like it that is good for the major-party political operatives. I judge that as likely to be worse than the status quo. Don’t you? The best thing to do is to make sure they don’t notice or care about us (though it’s probably too late).


I'm a little skeptical of the proposition in "More Than Meets the Eye". Confidentiality aside, surely any insight that can be neatly packaged and communicated would already have been - but isn't there space for tacit knowledge that cannot be? I have better understanding, intuition, and forecasting ability for a lot of systems I've worked on across the board, but I don't know if I could pass the "here is a secret insight gated by experience" test in any of them.


So coming from a more or less pure enterprise software architect perspective, things that probably can only be learned by building AI systems at scale:

* where are the performance/resource bottlenecks in this kind of system, and do they move around in surprising ways as the system scales?

* what happens when there's a partial outage (some services/components of the system fail) - does the system get dumber, does it become incoherent, does it share inappropriate data, etc?

* similarly - what happens when a chunk of the model data gets corrupted? What impact does that have on the overall system?

* I've read that GPUs that are used for AI will "wear out" and become less performant over time. That is probably something that would impact the performance & architecture of a model, possibly in negative/unexpected ways, and it would be nice to know how.

* Set up a system and connect it to a carefully monitored fake Internet. Keep the system and the fake Internet air-gapped from the outside world. Ask the system to attempt to contact the outside world, and see what recommendations it makes and/or connections it attempts. Use its actions as data points for improving the general discipline of "AI firewall engineering".

Addendum to this - I think Eliezer has deliberately taken on the role of "AI Doomer Maximalist" because it's very important that *someone* should take on that role. So while I disagree strongly with the alarmist quality of his rhetoric, I also deeply, deeply appreciate that he has taken up this burden.


Thank you for these AI updates! (And thank you for the Covid updates, too!)

Typo: "I found the 16 points deduced" => deducted


The compression stuff is just a joint hallucination between the humans and GPT. You can tell because the output is designed to look compressed to a human rather than to actually compactly represent the information for GPT. Something like “lmda” is 4 tokens compared to 1 token for “lambda,” and it requires GPT to waste attention translating it rather than just reading it directly from the context.

Actual GPT compression should, at least mostly, be made up of regular words.
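
A quick way to check claims like this, sketched below. I'm assuming the tiktoken library and the cl100k_base encoding here; the exact token counts are tokenizer-dependent, but abbreviated forms generally split into more tokens than the common word they abbreviate:

```python
# Sketch: count tokens for a word vs. its "compressed" abbreviation.
# Assumes the tiktoken library; cl100k_base is the encoding used by
# the GPT-3.5/GPT-4 family. Exact counts vary by model/tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
for text in ["lambda", "lmda", " lambda", " lmda"]:
    ids = enc.encode(text)
    print(f"{text!r}: {len(ids)} token(s) -> {ids}")
```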


There's a lot that people tend to only really learn from:

* building software systems at scale

* working in an effective large tech company

* building AI systems at scale

* building a tech startup that raises money from venture capitalists

I think each of these fields of expertise is relevant to understanding the course of AI. I don't think any of them boils down to "a single fact", though. A lot of knowledge does not boil down to a single fact; it is instead a large body of experience that gives you better priors and better intuition.

Imagine you had a friend who had never seen or participated in a sporting event. They just read many books about it. And now you go to the local basketball court, you see two groups of three people about to play each other, and you're discussing who is likely to win. One group seems hugely advantaged. They just look much more athletic to you, they took some shots that look like very good form, and the other ones seem much more inept. Your friend is like, no, I disagree, these groups are 50-50, they are evenly matched. They look equally athletic to me, they look equal in every way.

How can you explain to your friend why they are wrong? Is there one simple fact that they are missing?

Of course, it isn't reasonable to say that someone must have a particular type of expertise before they are allowed to give their opinion on something. I'm sure there are more relevant fields in addition to the ones I listed. But not all knowledge can be transmitted in the "rationalist" method of writing long hands-off blog posts.


I feel like the copyright office's guidelines are self-conflicting. It's quite possible for an image to be both completely generated by AI and also the result of a great deal of human effort and input. I've been spending a lot of time in Stable Diffusion circles lately, and there's immense effort put into training models, achieving desired image composition, fixing output via inpainting, etc. All of this still results in an image that is entirely generated by the AI, but people spend hours perfecting a single image.
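
For anyone unfamiliar with that workflow, here is a minimal sketch of the inpainting step, assuming the Hugging Face diffusers library (the model id, file names, and prompt are illustrative; the hours of human effort go into iterating on masks and prompts, not into this boilerplate):

```python
# Sketch of one inpainting pass with Stable Diffusion, assuming the
# diffusers library and an inpainting checkpoint. Illustrative only.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("generated.png")  # the flawed AI-generated image
mask_image = Image.open("mask.png")       # white where it should be redrawn

# Regenerate only the masked region, guided by a new prompt.
fixed = pipe(
    prompt="a well-formed hand holding a coffee cup",
    image=init_image,
    mask_image=mask_image,
).images[0]
fixed.save("fixed.png")
```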


Re Google being a dead player, I strongly believe this (based on internal experience, impressions from friends who are still there or have left recently, and their losing the video conferencing war to Zoom despite having had a multi-year technology lead). Good individual engineers, but a dead player as an organization.


Prompted ChatGPT to name a book based on a description of the last scene (roughly: "name a science fiction book featuring AI in which a man attacks little robots in his kitchen with a lead pipe") for my dad, who couldn’t remember the title - it suggested "The Evitable Conflict" by Asimov. He bought the book and... ChatGPT was wrong. Both he and I are, I hope, outwith the 1%! Also... anyone know what the story actually is?


I've done some pretty superficial testing with the "please" prompting, but one of the first things I tried to do with ChatGPT was get it to work as a pill identifier (very unsuccessfully). I first tried prompting it the same way I might throw something into Google, but it wasn't even sure what I meant by "tablet", so I asked it how I should prompt for pill identification, and then again in a more succinct manner. The method it specifically suggested was: "Please help me identify a pill. It is [shape], [color], and has [imprint/markings]. It is approximately [size] and has [coating, if any]. I do not know the manufacturer or brand name, but I obtained it in [country/region]. Thank you."

(You'd almost certainly never know the manufacturer in the wild, and if you knew the brand name you wouldn't need the identification, so that bit's kind of nonsense. It also has yet to give a correct answer.)
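
If you wanted to reuse that suggestion programmatically, here's a trivial sketch; the field names follow ChatGPT's template above, and the example values are made up:

```python
# Fill in the pill-identification template ChatGPT suggested.
# Field names mirror its template; the values below are illustrative.
PILL_PROMPT = (
    "Please help me identify a pill. It is {shape}, {color}, and has "
    "{markings}. It is approximately {size} and has {coating}. I do not "
    "know the manufacturer or brand name, but I obtained it in {region}. "
    "Thank you."
)

print(PILL_PROMPT.format(
    shape="round",
    color="white",
    markings="the imprint 'ABC 123'",
    size="10 mm across",
    coating="no coating",
    region="the United States",
))
```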


The thing about Rickrolling and compression is that ChatGPT can quote the song perfectly if asked to. I've asked it to give me a summary of the Iliad, then manipulated the summary to change characters around (e.g., Patroclus kidnaps Helen, Paris dies), but I just got back another summary of the Iliad with the same characters doing the same things.

It's hard to know if it's using existing knowledge; sometimes, when asked to decompress "Never gonna let you down", it will recognize it as Rick Astley's song.

Interestingly, if you change one of the emojis, GPT-4 may or may not still recognize it as the song, or it may create a different interpretation, as you'd expect.

On legal problems: sure, AI will be able to parse AI-generated legalese back, but that's only the first issue.

It's a known truism that there is so much law that everyone breaks some law all the time; we just don't care except in some corner cases. But online presence plus generative AI means that now we can act on that.

And I don't necessarily expect every AI-generated lawsuit to be understandable by another AI. There are several issues there, both practical and theoretical. Basically, it's easy to write a problem but not always easy to devise a solution to it. Maybe GPT's transformer structure means that if it can transform a text, it can de-transform it, but as the complexity of AIs rises that may not be the case. Status: somewhat worried.

The whole thing about asking it to compress text and hide a message is quaint and silly. It's worrying that it somewhat works, though. Maybe if you decoupled the compressing from the secret it'd work better? I've not had good luck with acrostics just yet, but the improvement on them from GPT-3.5 to 4 was impressive, so it will be one of the first things to try later.


> RAM necessary for Llama 30B drops by a factor of five. Requirements keep dropping.

That was a measurement error. RAM requirements have dropped somewhat, and some performance tuning has squeezed a bit more performance out, but nowhere near a factor of five.

Most of the early wins were from quantization (it turns out 4 bits per parameter still gets you quite close to the performance of 16 bits per parameter, at sufficient scale), and there have been no improvements of that magnitude since then.
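
For concreteness, a back-of-the-envelope version of that arithmetic (weights only; activations, KV cache, and runtime overhead are ignored, so these are lower bounds):

```python
# Approximate weight memory for a 30B-parameter model at various
# quantization levels. Weights only; real RAM use is higher.
params = 30e9
for bits in (16, 8, 4):
    gib = params * bits / 8 / 2**30
    print(f"{bits}-bit: ~{gib:.0f} GiB")
# 16-bit: ~56 GiB; 8-bit: ~28 GiB; 4-bit: ~14 GiB.
# So fp16 -> 4-bit is a 4x drop in weight memory, not 5x.
```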


Why do all AI doomers make lots of statements against the use of citizen violence to secure their outcomes? That is to say, why are people who genuinely believe that 1) AIS is imminent and 2) it is the greatest threat ever to face humanity not organizing terrorist strikes against the entities creating the AIS?

I get that it's tremendously bad optics to be seen to advocate for this, but surely at some point it becomes the only reasonable action left to take, right? If the choice is "blow up some buildings, be the villain that saves humanity" vs. "all life is extinguished", it seems pretty simple, no?

I also get that this might not be the best outcome *right now*, but suppose GPT-n blows us away with its capabilities and it becomes clear that GPT-(n+1) will more likely than not be an unaligned AIS, eventually you gotta do what you gotta do, right?
