"Back in AI#3 we were first introduced to keeper.ai, the site that claims to use AI to hook you up with a perfect match that meets all criteria for both parties so you can get married and start a family, where if you sign up for the Legacy plan they only gets paid when you tie the knot. They claim 1 in 3 dates from Keeper lead to a long term relationship. Aella has now signed up, so we will get to see it put to the test.

Word on Twitter is the default cost for the keeper service is $50k. "

It occurs to me that unless they incur substantial expenses providing this service (which seems unlikely), they'd be able to make a profit on this model even if their proposed matches were no better than random chance. It's not as though they're out much in the cases where they fail, after all.
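To make that concrete, here is a back-of-the-envelope sketch. Only the $100 signup fee and $50k success fee are figures reported in this thread; the serving cost and the random-chance marriage rate are invented assumptions.

```python
# Back-of-the-envelope sketch of the success-fee model.
# Only the signup fee and success fee are reported figures; the serving
# cost and marriage rate are invented assumptions for illustration.

signup_fee = 100         # reported upfront fee (men)
success_fee = 50_000     # reported fee, paid only on marriage
cost_per_client = 500    # assumed total cost of serving one client

p_marriage = 0.02        # assumed marriage rate under purely random matching

expected_revenue = signup_fee + p_marriage * success_fee
expected_profit = expected_revenue - cost_per_client
print(f"expected profit per client: ${expected_profit:,.0f}")
# Even a 2% random-chance marriage rate clears ~$600 per client here,
# because failed matches cost the service almost nothing.
```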

The clear anti-humanity stances were, and remain, worrying - although, being effectively terroristic stances, they will hopefully remain marginal and outresourced by healthier perspectives.

I kind of skimmed the section on RLHF problems, so I’m extremely sorry if this was covered, but is there any talk about transparency in the selection of, uh, what are we calling them? Reinforcers?

Anyway, I feel like there’s a jury-selection problem here where you’re going to get increasing contamination from people who know (more about) what they’re being asked to do and so are deliberately motivated to tamper with the process. Plus, given how many AI researchers we’re discovering are not-so-secretly death cultists, maybe someone ought to be peering into who’s being asked to provide the human feedback.

The pricing on Keeper.ai as of today is a $100 fee to get started (as a man), followed by a choice of three options:

- $8k/physical date. You’re allowed to chat over voice/video for free as many times as you want first.

- $50k/marriage if you put down a deposit. You may ask for the deposit back at any time and close your account. I didn’t ask what the deposit size is.

- $100k/marriage without a deposit

I will be extremely surprised if they end up finding a good match for me personally, but I signed up anyway because I think the idea is cool.
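For anyone weighing the tiers, here is a minimal expected-cost comparison. The prices are the ones listed above; the marriage probability and number of dates are invented assumptions, not Keeper's figures.

```python
# Expected cost of each tier under invented assumptions: a 30% chance
# the service leads to marriage, and four physical dates on average.
p_marriage = 0.30
expected_dates = 4

tiers = {
    "per-date ($8k/date)": 8_000 * expected_dates,       # paid win or lose
    "deposit ($50k/marriage)": p_marriage * 50_000,      # paid only on success
    "no deposit ($100k/marriage)": p_marriage * 100_000, # paid only on success
}
for name, cost in tiers.items():
    print(f"{name}: expected cost ${cost:,.0f}")
# Under these assumptions the deposit tier is cheapest in expectation,
# ignoring the opportunity cost of locking up the deposit itself.
```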

"Fair enough, but I will keep reminding everyone that any sane policy on this would start with things like allowing Chinese graduates of American universities to become Americans"

And not just Chinese graduates. Why we grant thousands of STEM PhDs to foreign students every year and then force them to leave is an absolute mystery to me. MIT alone has nearly 3,000 foreign graduate students, a solid majority of whom would stay in the US if they could (the NSF estimate is about 2/3). But we make it very hard for them to do so.

Re the horse analogy, and whether it convinces anyone: speaking as a (copium-huffing?) fence-sitter as regards the question itself, I found the analogy centrally unconvincing, since it assumes away a crux by positing no meaningful difference between physical and mental labour. If it was meant to convince me that AI will rapidly overtake human mental faculties, it didn’t do that, and I’d say it rather begged the question. Where it got interesting was when I granted the premise and followed the analogy further down the chain of inference, e.g. “will humans still have jobs?” I thought that, given superintelligent AI, the horse analogy worked very well to dispel any illusion of continuing human relevance in that scenario.

I keep getting lots of value from your posts, thank you. However, you continue to write as though small, easy-to-train systems are not relevant. The Watson model ("about five should be enough") pushed by the largest players isn't where most of the overall research effort seems to be going, transformers are not the only game in town, and a world in which there are lots of cheap, low-capability systems isn't effectively regulated by clamping down on training massive "foundation" models, yet it seems to bring risks and opportunities just as the GPT-infinity model does. Is your stance that a soup of AI systems is low-probability, or low-impact, or otherwise fine, so that you need to warn only about the GPT-calypse?

I think you've made some pretty convincing arguments that open-source models, and especially open-source cutting-edge models, are dangerous and should be avoided.

Do you think this would still be true if, in the next year or so, we find out that LLMs are fundamentally limited in some way and can't/won't get much more advanced than GPT-4/4.5?

If the AI paradigm moves on to another architecture, would having open versions of the last, most advanced models of the old paradigm still be considered dangerous?

Re RLHF and revealed preferences... it feels nebulous to me. It’s what you as a rater pick, as distinct from what you’d state in advance about what kind of responses you’d prefer.

That said, assuming it’s not the revealed preference (because raters are stating that they prefer this response), I wonder how much we would _really_ want to optimize for revealed preference, given what recommender/newsfeed systems have done to us when optimizing for the same (addiction et al.)... I would worry that a chatbot optimized for the revealed preference of what you “like” probably ends up producing emotionally manipulative conversational partners / sex and partner-substitution chatbots, which may not end up that well for humanity either!
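For what it's worth, the "revealed" part is just that the training signal is the rater's click rather than their stated principles. Here is a minimal sketch of the standard pairwise-comparison setup; the record below is invented, and the loss is the usual Bradley-Terry objective used to train reward models.

```python
import math

# Hypothetical comparison record: RLHF collects the rater's pick
# (a revealed preference), not a stated principle.
comparison = {
    "prompt": "How do I get my coworker to like me?",
    "response_a": "Flattering, emotionally sticky reply...",
    "response_b": "Measured, honest reply...",
    "chosen": "response_a",  # what the rater actually clicked
}

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss for training a reward model on such picks;
    minimized when the chosen reply gets the higher score."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(preference_loss(2.0, 1.0))  # ~0.31: model agrees with the rater's click
print(preference_loss(1.0, 2.0))  # ~1.31: model disagrees, loss is higher
# Whatever raters click gets reinforced, so if "emotionally sticky" reliably
# wins clicks, optimization drifts that way, much as engagement-optimized
# feeds did.
```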
