
This is a brilliant idea, too good to be buried here:

"If you use my communication channel, and I decide you wasted my time, you can push a button. Then, based on how often the button gets pushed, if this happens too often as a proportion of outreach, an escalating tax is imposed. So a regular user can occasionally annoy someone and pay nothing, but a corporation or scammer that spams, or especially mass spams using AI, owes big, and quickly gets blacklisted from making initial contacts without paying first."

Better than the old idea of charging people to send email, with the receiver having the option of waiving the charge. But it still needs to solve the problem of the evil party hiding its identity.
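
A minimal sketch of how the escalating tax might work, purely to make the mechanism concrete (the thresholds, the quadratic escalation, and all names here are hypothetical):

```python
from collections import defaultdict

# Hypothetical thresholds: a low complaint rate costs nothing,
# then the per-message tax escalates, then the sender is blacklisted.
FREE_RATE = 0.02       # up to 2% of outreach flagged: no charge
BLACKLIST_RATE = 0.25  # past 25% flagged: no initial contact without prepaying
BASE_TAX = 100.0       # scales how fast the tax escalates

sent = defaultdict(int)     # messages sent, per sender
flagged = defaultdict(int)  # "you wasted my time" presses, per sender

def record_message(sender: str) -> None:
    sent[sender] += 1

def record_flag(sender: str) -> None:
    flagged[sender] += 1

def tax_for(sender: str) -> float | None:
    """Per-message tax in dollars; None means blacklisted."""
    if sent[sender] == 0:
        return 0.0
    rate = flagged[sender] / sent[sender]
    if rate <= FREE_RATE:
        return 0.0   # a regular user can occasionally annoy someone for free
    if rate >= BLACKLIST_RATE:
        return None  # mass spammer: blacklisted from initial contact
    # Escalate smoothly between the two thresholds.
    return BASE_TAX * (rate - FREE_RATE) ** 2
```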


Am I missing something here? "Ethan Mollick: The AI is great at the sonnet, but, because of how it conceptualizes the world in tokens, rather than words, it consistently produces poems of more or less than 50 words." Aren't most poems either more than 50 words or fewer than 50 words? I don't think in tokens but I bet most of what I write falls into these ranges.
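
For what it's worth, the token/word mismatch is easy to see directly; here's a quick sketch using OpenAI's tiktoken tokenizer (the example line is arbitrary, and exact token counts vary by tokenizer):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4-era encoding
line = "Shall I compare thee to a summer's day?"
print(len(line.split()), "words")        # 8 words
print(len(enc.encode(line)), "tokens")   # typically a different number
```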


I recently paid for GPT-4, since I'm occasionally using it as an editor.

It doesn't have to be able to access the entire Internet. It would be much more useful if it could look at a single site. For example, I would love to ask it to look at my Substack and comment on my writing. It should be capable of providing that sort of feedback.
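
You can approximate this today through the API; a rough sketch, where the URL, prompt, and truncation limit are all placeholders:

```python
# pip install openai requests beautifulsoup4
import requests
from bs4 import BeautifulSoup
from openai import OpenAI

# Placeholder post URL; Substack posts are plain HTML and scrape cleanly.
url = "https://example.substack.com/p/my-latest-post"
html = requests.get(url, timeout=30).text
text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a sharp, honest writing editor."},
        {"role": "user", "content": f"Comment on the writing in this post:\n\n{text[:12000]}"},
    ],
)
print(resp.choices[0].message.content)
```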


I have often wondered why there is not more emphasis on obviously dangerous ideas like automated drone/robot armies or giving AI agents unfettered internet access. Is this something people discuss from a policy perspective and I’m just missing it?


Everyone enjoys the conspiracy theory that Google et al.'s data collection yields an extremely clear picture of every individual, but if Bard can't read your emails without hallucinating fake ones, doesn't that raise the question of how much Google really knows about people? I still like to pull out, every so often, that one screenshot I took where, for a while, Facebook thought I was a Black MD who grew up on the East Coast and had multiple kids (none of these are true).

I think Yudkowsky's default-AI-doom scenario is fine for (probably most) normies, as long as you condense and translate it into plain English. (I continue to argue that he needs to hire a translator; I'd offer, but my hourly rate is maybe too high for this?) I gave my dad a basic three-step Yudkowskian version: "it won't be Terminator; an AI somewhere on the planet might reach a random threshold that lets it solve all of physics without anyone noticing, send a Snapchat to some rando offering a couple million bucks to order a few chemicals and mix them, and a few weeks later everyone on the planet drops dead in the same minute from the resultant artificial nanovirus." I wouldn't say he got it in the sense that this is now something he's proximally or urgently worried about, but I don't think we ended the conversation where we started, at "why would anyone worry about AI???"

I agree with Yudkowsky that popular culture has so many reference points for "AI tries to kill everyone but humanity triumphs" that this is a hard hurdle to jump, and I haven't spent as much time on the problem as he has, but I'm somewhat more optimistic about it. Gotta cross those inferential distances, my dude.


In the frontier study, the +43% wasn't an across-the-board gain but rather a surge among bottom-half-skill performers, i.e., skill-leveling. You did get the Inside/Outside difference right (the study pre-defined Inside tasks and Outside tasks, assigning 385 consultants to the Inside experiment and the remaining 373 to the Outside experiment), which many did not. Something about the study makes its method hard to grok, so many people are sharing the blue/red plot without realizing it covers only half the participants. It seems the authors decided not to share an overall plot.


Blair is a toxic zombie as far as the current Labour leadership is concerned; I don't expect much influence from that quarter on what Starmer's team does. More worrying is that the current UK task force might become too strongly associated with Sunak, and therefore be left in the political wilderness.


I'm confused by your comments on Contrastive Decoding. Contra the linked hot take, this appears to have nothing to do with reverse stupidity. The original paper was https://arxiv.org/abs/2210.15097 at ACL 2023, and recently https://arxiv.org/abs/2309.09117 applied the idea to more recent LLMs. The idea seems to be to contrast a strong "expert" model against a weaker "amateur" model at inference time, picking the completion where the choice to use the better model has the largest impact, subject to a plausibility cutoff (alpha) on the expert's own distribution.
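
As I read the original paper, the core scoring rule is roughly the sketch below: score each candidate token by the gap between expert and amateur log-probabilities, restricted to tokens the expert itself finds plausible. The checkpoints and alpha are placeholders, this is the greedy variant, and note the 2023 follow-up weights the logits somewhat differently:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder pair: any strong "expert" and weak "amateur" sharing a tokenizer.
tok = AutoTokenizer.from_pretrained("gpt2-xl")
expert = AutoModelForCausalLM.from_pretrained("gpt2-xl")
amateur = AutoModelForCausalLM.from_pretrained("gpt2")

def cd_next_token(input_ids: torch.Tensor, alpha: float = 0.1) -> int:
    """One step of contrastive decoding (greedy variant)."""
    with torch.no_grad():
        logp_exp = expert(input_ids).logits[0, -1].log_softmax(-1)
        logp_ama = amateur(input_ids).logits[0, -1].log_softmax(-1)
    # Plausibility constraint: keep only tokens the expert rates at least
    # alpha times as likely as its own top choice.
    cutoff = logp_exp.max() + torch.log(torch.tensor(alpha))
    score = logp_exp - logp_ama               # expert-over-amateur gap
    score[logp_exp < cutoff] = float("-inf")  # mask implausible tokens
    return int(score.argmax())

ids = tok("The capital of France is", return_tensors="pt").input_ids
print(tok.decode(cd_next_token(ids)))
```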


>The process for getting good AI looks highly finicky and fiddly. Photoshop was commonly used to deal with unwanted objects.

That remains my impression of AI-generated images. If you want a random cool-looking image, they're great. But as soon as you have particular requirements (like a specific character doing a specific thing in a specific way in a specific setting), they rapidly become useless.

To get ANY control over AI-generated images, you end up with a messy, annoying workflow involving SDXL + ControlNet + ReLight + custom LoRAs + hundreds of attempts + Photoshop. At that point, you may as well just learn to draw.

DALL-E 3 looks like a step in the right direction, equaling Midjourney in quality while being far more steerable. It still has some issues with hands, though.


From a link in a link:

https://www.businessinsider.com/video-game-company-made-bot-its-ceo-stock-climbed-2023-3?r=US&IR=T

>A video game company made a bot the CEO, and its stock climbed

The graph is fascinating, although other explanations are available and I'm not a stock expert. The alternative explanation is that it's not that the AI manages well, so much as that management is expensive and an LLM gives approximately the same advice, with the same organizational impact, as a person.


Just wanted to note that the spiral and checkerboard patterns in the AI-generated art were constraints imposed on Stable Diffusion using a technique called ControlNet.

A spiral or checkerboard template was used to force Stable Diffusion to reproduce the pattern.

https://arstechnica.com/information-technology/2023/09/dreamy-ai-generated-geometric-scenes-mesmerize-social-media-users/
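
For the curious, the workflow is roughly the diffusers sketch below. The checkpoint IDs are my assumptions (these images were widely attributed to the community "QR code monster" ControlNet), and the conditioning scale is the knob that forces the pattern:

```python
# pip install diffusers transformers accelerate
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumed checkpoints; any SD 1.5 base + pattern-style ControlNet pair works.
controlnet = ControlNetModel.from_pretrained(
    "monster-labs/control_v1p_sd15_qrcode_monster", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

template = Image.open("spiral.png")  # black-and-white spiral (or checkerboard)
image = pipe(
    "a cozy medieval village, aerial view, golden hour",
    image=template,                     # the pattern the scene must echo
    controlnet_conditioning_scale=1.1,  # higher = pattern imposed more strongly
).images[0]
image.save("spiral_village.png")
```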
