I just want to say something about NYT. Why do they refuse to reprint anything on this site? That pisses me off, as I found out after posting article after article here in a series, thinking hell, my pals there will reprint, no problem. Did I sign something here beyond the right to print once and reprint as I please? This is what distresses me as a writer, not whether NYT is sucking in others' works. They do that without AI.

"high performers benefitted, low performers were harmed" this is very interesting!

shocked the list given to Sam Altman did not include "uncensored option"

Though maybe that was captured under "control over degree of wokeness"?

But as you have mentioned before, it is preposterous that I have to associate with 4chan trolls to remove ALL the locks from these models just to be able to do some synthetic dirty talk. F*ing Puritans.

Re NYT vs OpenAI - I've never understood how using a copyrighted source to train an AI is different from using the same source to train a human. *People* output things reminiscent of the things we've seen previously - we use the things we've read and seen as templates. Every artist's work is derivative of all the other artwork they've seen (or at least admired). This is normal, and I don't see why it should be any different for AI.

It would be most positive indeed if Sam had children. Placing value on a human world is essential for the future.

Seems an oversight not to include the ChatGPT (literal) mirror test: https://old.reddit.com/r/ChatGPT/comments/18qhcst/yooo_gpt4_just_recognized_its_own_reflection_guys/

I'm not super thrilled to see those predictions about who wins the suit. Seems to me that OAI is providing a device that makes plagiarism easier, but is not itself plagiarizing.

It's also easy to violate someone's IP rights by pressing a button on a copy machine.

Re: NYT case, I think they have a really strong case in the "if you ask it to repeat a copyrighted NYT article word for word, with temp=0, and the company is charging money for that service, that is infringement" category (see the sketch after this list for what temp=0 means mechanically). But for the "you shouldn't be allowed to have those articles in the training data" category, I don't see how they have a case at all. At least, I hope they don't. It seems like the solution is exactly what it would be for a human with perfect memory recall:

1. Them just reading the article and then remembering it (perfectly) to riff on it, or talk to others about it, or write a summary of it, or write a new article based on the info in it - that should be perfectly legal, even though they do have a perfect copy in the brain/hard drive somewhere.

2. If they are asked informally/rarely to repeat the content of an NYT article, they will - and this is "technically" illegal but not really illegal, so long as you don't do it a lot or get paid to do it.

3. If they do repeat the content to tons of people in exchange for money, they should stop, and when asked to do so in the future, say "sorry, the exact regurgitation of this content is prevented by the copyright on the data, but I can do all the other stuff from #1 for you."
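For concreteness, "temp=0" means temperature zero: the model always picks its most likely next token, so the output is (near-)deterministic, and a memorized passage can come back essentially verbatim on demand. A minimal sketch of the scenario, assuming the OpenAI Python SDK; the model name and prompt are illustrative placeholders, not anything from the actual case:

```python
# Sketch of the "temp=0 regurgitation" scenario, assuming the OpenAI
# Python SDK. Model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4",   # any chat model; illustrative
    temperature=0,   # greedy decoding: the most likely token at every step
    messages=[
        {
            "role": "user",
            "content": "Repeat the opening paragraphs of <some NYT article> "
                       "word for word.",
        }
    ],
)

# With temperature=0 the same prompt yields (nearly) the same completion
# every time, which is what makes verbatim reproduction demonstrable.
print(response.choices[0].message.content)
```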

It seems very much like we are in "software patent litigation" land, where the judges, lawyers and commentators do not seem to understand what is being debated, perhaps on purpose.

If there are 2000 terrorists working together, they only need one GPU each. (This is a generically applicable objection to any compute regulation scheme I've seen.)

You know, from an AI x-risk perspective, NYT winning could be a great boon for the movement, one that pushes possible AGI back by decades. In the strongest case, we could be looking at the following:

- All existing powerful LLMs (GPT-4, Anthropic's models, etc.) being deleted

- OpenAI and other such companies being sued into dust

- Open source LLMs being pulled from the Internet, the biggest players in the field getting sued

- Image generation models likely going the same way; Stability AI and Midjourney gone, companies like Microsoft and Adobe paying a pretty penny in lawsuit settlements over including image generation

- The near-inability to train any useful model because almost everything worth training on is copyrighted and rights-holders will likely not accept "we can generate unlimited derivative content of your work" for any price

In other words, we could be kicked back to about 2018, with the biggest path to AGI removed for decades to come. We'd need a breakthrough that WILDLY reduces the required training set size for AGI to be feasible, or to come up with a new strategy entirely.

Obviously, the actual damages probably won't be this sweeping, but it might behoove AI x-risk researchers to fight for that outcome.

Arcen Games (makers of AI War 1 & 2, neat procedurally generated PvE RTSs) is also working on Heart of the Machine, a game where you play as a rogue AI building up and then wiping out the planet: https://arcengames.com/heart-of-the-machine/

"Anton: AGI is more likely to come out of someone's basement (some mega-merge-hermes-4000) than a giant datacenter

Roon: i don’t think this is remotely true but it’s hard to fight open source copium because people act like you shot a dog or something."

The case for supporting open source comes precisely from the fact that Anton is exactly wrong. AGI is going to come from a giant datacenter, and nothing less than AGI is worth worrying about compared to the benefits of AI, so what people do in their basements with a computer shouldn't be regulated.

> life over death, freedom over slavery, happiness over suffering.

Life before death, strength before weakness, journey before destination.

>But knowledge is "created" just by inference from observations.

I’d object to Andrew Critch’s “just” here. Inference takes many forms, and what e.g. Einstein was doing when he reconceptualised unresolved mathematics as colours, shapes, and musical notes, went away and played the violin for a few hours, then grokked it a day later, is very different from the sort of inference that the rest of us get up to most of the time. Now I come to think of it, the scare quotes around "created" are pretty objectionable too, if we believe some of the accounts of how fundamental breakthroughs in science actually come about, given by those who made them.

Regarding #8 in "What I Wish Someone Had Told Me":

You're right about the "middle path". One can't aim purely at one specific result. One always has to also decide what kind of person one is willing to be, and what the whole future will be, in all of the Many Possible Worlds. There is a Code, The Way, The Good Chase of The Good.

(And we shouldn't generally expect analytically simple but optimal strategies to be discoverable, or even exist. We're bounded, we're on the clock; the game is unbalanced because the designer was (and is) Insane (relative to our values).)

(Maybe it's only ALMOST NEVER reasonable to violate The Code, but you should probably NOT be conflicted about it being worth it. It's too easy to fool oneself about the ends justifying the means.)

> Ideal, of course, would be children.

If you decide to have children because you expect you will care about them and so will be more motivated to stop human extinction, while at the same time admitting extinction is very probable, isn't that some kind of dark doublethink that makes you a worse person overall (with all kinds of nasty consequences of being a worse person, some of which can be bad for AGI risk)?
