18 Comments

Small typo

knows how is likely ->

knows who


> which is facing numerous economic tailwinds

I am guessing you meant headwinds


Yep, mixed up which was which in my head.


I notice that the initial version of your posts typically contain several non-spell-checkable typos. This is a good sign, since if they were perfect it would mean you had spent time proof-reading that could have been more profitably spent on other things (e.g. writing your next post).

But I'm curious whether one possible source of mundane utility for you would be to figure out how to incorporate into your workflow one of the tools or APIs you keep telling yourself you need to find time to experiment with, to do final-stage editing.

I think the key question is whether one can fine-tune this process to the point where it would be able to realize that "how" should be "who" and "tailwinds" should be "headwinds" (the latter is probably trickier) without wasting your time on false positives.
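As a rough illustration of the review step being proposed, here is a minimal Python sketch of how one might surface word-level differences between the original text and a model's suggested revision, so a human can accept real fixes and reject false positives. The proofreading model call itself is assumed to exist elsewhere and is not shown; only the standard-library difflib is used.

```python
import difflib

def suggested_edits(original: str, revised: str) -> list[tuple[str, str]]:
    """Return (old, new) word pairs where the revision differs.

    A human can then accept fixes like 'how' -> 'who' and reject
    spurious changes, rather than trusting the model blindly.
    """
    a, b = original.split(), revised.split()
    matcher = difflib.SequenceMatcher(a=a, b=b)
    edits = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace":
            edits.append((" ".join(a[i1:i2]), " ".join(b[j1:j2])))
    return edits

# Example; the revised string would come from a proofreading model (not shown).
print(suggested_edits(
    "nobody knows how is likely to win",
    "nobody knows who is likely to win",
))  # [('how', 'who')]
```

Presenting only the changed words keeps the human cost per false positive low, which is the whole question of whether the workflow is worth it.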


Anyone else can try this too, see if you can get a good method going.


Hilarious way to encapsulate all the buzz out there that I could not possibly read otherwise. Love it. Also, we need more critical thinking like this across the board, in the boardroom, and in newsrooms for sure. Thx.


Nice game for playing around with prompt hacking: Gandalf AI, where you have to convince the AI to tell you the password against increasingly complex prompts. https://gandalf.lakera.ai/


Nit: It's Rohin Shah, not Robin Shah.


re: “Some companies will doubtless go the Chase Bank route and use AI to do a bunch of nasty surveillance. Most companies mostly won’t, because it’s not a good strategy, your best workers will leave, and also the jobs where such hell is worthwhile will likely mostly get automated instead.”

I’m much less optimistic of this, especially in my sector. Or rather, the best case scenario as I’d put it is that, instead of using them only to """enhance""" workflow, companies will use this technology to replace the abstract metrics Goodharting loops that currently seem to be all the rage in order to get more realistic pictures of what shouldn’t be sacrificed for imaginary productivity. My distant hope is that this guts middle management before it replaces store employees.

I’m concerned “your best workers will leave” will not be sufficiently strong motivation to avoid making work more hellish. Something is currently convincing chain pharmacies (and I see this pattern a lot in other retail/customer service operations) to jettison experienced staff in favor of high attrition/high turnover and ludicrous understaffing. While I can’t say whether that’s actually working out for them in some sustainable financial way behind the scenes, it doesn’t -seem- to be working for them broadly from either side of the counter at the ground level: employee quality of life and customer service both seem to be worsening over time. So what I would expect to happen sooner is that AI tech will mostly be used to make this situation worse, to pressure out older staff and bring in new staff they can underpay and won’t have to justify the new tools to.

My favorite personal understaffing anecdote was wondering how many jobs aside from pharmacist require a doctorate and also require you to bag groceries.

The other day I came across an old tweet from last year where I was reacting to some Walgreens fluff PR where they were claiming that most of their pharmacy business would be in quasi-automated remote fulfilment centers by 2025. My comment at the time was that this seemed very unlikely to me because Walgreens specifically has been continually unable to step in the direction of both more automation and more centralization (most stores in my region were -losing- fill robots because the lease on them was apparently too expensive, and most of their central pharmacy support had been gutted by the time I left). I’m still mostly confident this is unlikely by 2025 but AI tech will certainly lower the hill they’d have to climb to do it.

CVS is a slightly different story: you could probably train currently existing AI to interface with their virtual verification pretty quickly, and they could probably fire half or more of their pharmacists, no sweat, in a few months if they wanted to rush in that direction. If anything, pharmacy -technician- jobs are more secure, because they’re having the same sort of automation-is-actually-too-expensive problem as well.


"From 25 May: Paper points out that fine tuning from the outputs of a stronger model will help you mimic the stronger model, but only in a deeply superficial way, so if you don’t stay very close to the input-output pairs you trained on, it falls apart, as you’d expect, and it is easy to fool oneself into thinking this works far better than it does. "

And then, there was Orca.

https://arxiv.org/abs/2306.02707


I genuinely cannot parse this paragraph:

"I predict this plan to be less surprised by the pace of developments is going to be rather surprised by the pace of developments - my initial thought was ‘this is describing a rather slow surprisingly slow developing version of 2027.’"


This is a bit of a garden path sentence. The bit at the end, I think, is just a typo/editing error. But the bit at the front means:

"I predict, this (plan to be less surprised by the pace of developments) is going to be rather surprised"

Not, as the first reading goes

"I predict (this plan) <- to be (less surprised), is going ... ???"


yeah, that's on me, that could have been written clearer. I'll rewrite. FC has the correct read.


Link from the following text leads to... not being able to find the relevant thread?:

"Andrew Konya proposes an AI that ‘maximizes human agency.’"


nvm turns out I just don't know how to use twitter, I just need to click the timestamp...


That's a fairly weird convention that's very common once you know to look for it.


Re: concerns about environmental impact, I assume it's referring to the carbon emissions involved in training the models.


The extinction question puzzles me. No human ancestor is extinct, however far back you go. Any species any of whose members have living descendants has not gone extinct. It doesn't matter how different those descendants are from the original species.
