13 Comments

Re Anthropic hiring fewer engineers: there are benefits to developing a project with a smaller number of higher-performing engineers. Throwing as many engineers as you can at a project is not always the best thing you can do.

Another key consideration: if you were running a very-high-security national security project, you'd be THRILLED at the notion that you had to hire fewer new, less-vetted folks to join that project...

I’m confused about this. I know several engineers who have had recruiters reach out to them for interviews. Maybe this is very recent.

"If Dean is correct here, then the carbon cost of training is trivial." Note that Table 1 in that paper concerned models using 1e19 FLOPs for training. If I understand correctly, that's a factor of 1e6 less than GPT-4, so GPT-4 would have cost 1e5 tons of CO2 equivalents on the most inefficient setup listed in their Table 1, and 2e3 tons if efficient. No idea about inference costs. No idea about the lifetime CO2e cost of the GPUs/TPUs.

No lighter side this week... I hope this does not mean things are getting bleaker.

I wonder whether AI-generated CSAM isn't a good thing on net, something we should, while perhaps not encouraging it, maybe treat as less bad than real-world CSAM. There is obviously a market for CSAM that won't go away, and if it can be supplied more cheaply and with less risk than the actual child abuse material, and without harming children in the process, that seems better than the alternative.

I suppose the most important question is whether this would, on balance, serve more as a "gateway drug" to real CSAM and make CSAM more socially acceptable, or more satisfy pedophilic desires without any actual abuse going on. Also, of course, there is the question of where you get the training data, but presumably enough of this exists that it shouldn't be an issue, and perhaps a good enough image generator could extrapolate well enough without needing any real CSAM at all.

One might also surmise that real CSAM could be claimed to be deepfaked to avoid or reduce punishment, which would definitely be an issue with this approach.

4/5 dentists recommend living past the singularity for good mouth health!

One way of resolving the hiring issue is the creation of something I still don't understand why it hasn't always existed: the applicant-side equivalent of a headhunter agency.

There already exist agencies whose job it is to find people for a needed position. Why there aren't equivalent agencies whose job it is to find a position for an applicant, I don't understand, but the new AI application world creates a _new_ reason for them:

These agencies are consistent, repeat players in the game-theory sense. They have an incentive to find candidates who are _actually_ a good fit. If they repeatedly send in people who have lied, etc., then they get a bad reputation, companies won't trust them, they can't place applicants, and they go out of business.

And, since these people are hired _by_ the job applicant, they have an ability to actually find out real information about the candidate.

I expect a service like this to be expensive, but it also seems _obviously_ worthwhile.

And the game-theoretic optimum of repeated games fixes a lot of the current problems with AI applications, AI hiring managers, and incentives to lie.
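The repeated-game point can be sketched with a toy model (all payoffs and the trust-decay factor here are invented for illustration, not from any source):

```python
# Toy repeated game (invented numbers): an applicant-side agency chooses
# each round whether to vet candidates honestly. Sending in a bad candidate
# pays slightly more once, but companies stop trusting the agency, so its
# placement rate, and hence its future payoff, collapses.

def cumulative_payoff(honest: bool, rounds: int = 20) -> float:
    trust = 1.0   # fraction of companies still willing to use the agency
    total = 0.0
    for _ in range(rounds):
        if honest:
            total += trust * 1.0   # steady fee per successful placement
        else:
            total += trust * 1.5   # lying pays a bit more per round...
            trust *= 0.5           # ...but reputation halves each time
    return total

print(cumulative_payoff(honest=True), cumulative_payoff(honest=False))
```

With enough repetitions, the honest agency's cumulative payoff dominates, which is the one-shot-versus-repeat-player asymmetry the comment is pointing at: individual applicants play once, agencies play forever.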

"I am happy to hear that they are at least noticing that they might not want to release this in its current form."

Screw that. LET'S GO. I'm sick and tired of only the Illuminati Privilege Bros having access to this tech. Let's cut it loose. If we die, we die. It's a completely insane world where Zuck is the good guy, but that seems to be the current state of affairs.

On the physics Nobel prize:

I think Hinton was arguably awarded for work that was not physics, but Hopfield's work was squarely within the scope of physics as generally understood. Hopfield himself seems to think so:

https://www.annualreviews.org/content/journals/10.1146/annurev-conmatphys-031113-133924

(Note that this essay was written a decade ago).

Also, the condensed matter physics arXiv contains 'disordered systems and neural networks' as a sub-arXiv. I've read it (and contributed to it) for over a decade, and 'neural networks are part of physics' strikes me as an utterly obvious and uncontroversial take.

When I moved to San Francisco:

Me: Thanks for showing me these apartments today.

Rental Agent: No problem, love helping people move to The City.

Me: Does this next apartment have AC?

Her: AC? Uh, AC?

Me: ... "Air Conditioning".

Her: Oh! Ha, ha! Of course not!

> I find myself excited to write actual code that does the things, but the thought of having to set everything up to get to that point fills me with dread - I just know that the AI is going to get something stupid wrong, and everything’s going to be screwed up, and it’s going to be hours trying to figure it out and so on, and maybe I’ll just work on something else.

Yeah, the exact same concern put me off for a long time (and I have not yet fully pushed through it).

FWIW, I imagine the experience _might_ be better if you use JavaScript or Python. I went with Java because I have deep muscle memory for that language, but it's probably an unusual choice for starter projects nowadays. Regardless of whether LLMs would be better at giving troubleshooting advice, I think it's likely that in Replit and possibly Cursor you'd be following a more well-trodden path.
