I remain confident that the best solution to the problem of the doom machine is not to build the doom machine.
I do have my own nature-centric perspective, which amounts to the claim that "biological life has essentialist value", and therefore omnicide is unacceptable even if it "simulates" life.
> Go players are steadily getting stronger in the wake of AlphaGo and subsequent open source versions, both studying the AI and using the inspiration to innovate.
I haven't figured out exactly where the error in the paper is*, but the paper essentially claims that Go performance was unchanged for ~60 years before AI came along. This is not credible. More likely they are measuring something adjacent to strength rather than strength exactly.
* my first guess was that they over-weight opening performance, but they check for that in the appendix
A key fact that you really have to dig to find is that Nightshade doesn't do what people want—namely, prevent future models from learning "their style" or poison their training in general. For one thing, it leaves very visible artifacts even at the lowest setting: https://twitter.com/sini4ka111/status/1748378223291912567. For another, and maybe most importantly, it only works against the model it was trained for (presumably some version of Stable Diffusion; it will never work on DALL·E, because they don't have the training code for that).
I've talked about it with some buddies, and we can only conclude this is just a researcher trying to make their name with fancy-sounding papers. It's snake oil, not a functioning product.
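The model-specificity point is easy to see in miniature: a perturbation crafted from one model's gradients exploits that model's particular decision boundary, and a model with different geometry can be untouched. A minimal sketch with two hypothetical linear classifiers (this is not Nightshade's actual algorithm; all names and values are invented for illustration):

```python
import numpy as np

# Toy sketch: a gradient-crafted perturbation is specific to the
# model it was optimized against.
w_a = np.array([1.0, 0.0])   # weights of hypothetical model A (the "target")
w_b = np.array([0.0, 1.0])   # weights of hypothetical model B (different geometry)

x = np.array([0.5, 0.5])     # a point both models classify as positive

def predict(w, x):
    # Simple linear classifier: positive iff w . x > 0
    return 1 if w @ x > 0 else 0

# Attack crafted from model A's gradient: step against A's weight direction.
eps = 1.0
x_adv = x - eps * w_a / np.linalg.norm(w_a)

print(predict(w_a, x))      # 1: A classifies the clean point as positive
print(predict(w_a, x_adv))  # 0: the perturbation flips A's prediction
print(predict(w_b, x_adv))  # 1: B is unaffected — the attack doesn't transfer
```

Real diffusion models are vastly less tidy than this, but the underlying reason poisoning fails to generalize across model families is the same.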
The AlphaFold work is a great example of AI demonstrating skill far beyond what the human intelligence curve can reach, because it beats us on accuracy and vastly beats us on speed. We simply cannot compete. And by that I mean the entire human race combined cannot compete with its speed/accuracy trade-off. Instead we simply have to take what it outputs as hints on what to research next.
So taking that further -- and agreeing with your post -- AI won't look like a single very smart agent that is 20% smarter than us. It will instead look like thousands of completely solved problem categories, and we will just take that as a fact of life and hope there is something left for us to help with.
I assume that Zuck just uses the term "full general intelligence" for its buzzword value, not as a deeply thought-out concept. On the responsibility bit, the guy is running a company, so I guess most of his understanding of even Meta's own models comes from LeCun, hence mirroring his sentiment.
> I do not understand why very smart people are almost intelligence deniers
My theory is that, since most people who boast of their intelligence are morons, intelligent people notice this and internalize that boasting about intelligence is a clear signal of moron-ism. Then the rational attitude of humility becomes a ritual, and the ritual a belief.
I laughed out loud at the inclusion of TensorFlow on that list, because it's so true. Doing anything even marginally novel in TF is like pulling teeth while blind.
For those unfamiliar with 70s sci-fi movies, Anton's "Chinese and American AI systems are plugged into the nuclear weapons and a minute later form a full agreement for mutual cooperation amongst them via acausal trade" story is awfully similar to the movie Colossus: The Forbin Project, except obviously with Russia and the US.
Best ChatGPT prompting tutorial out there, anyone? I wish to sharpen my skills.
One guy who was trying to find a girlfriend used ChatGPT to chat with over five thousand girls on his behalf.
Maybe it will be useful for the next AI post: https://twitter.com/biblikz/status/1752335415812501757?t=1D7HlOnC4Z8R0g3mIIjCuw