19 Comments

"Back in AI#3 we were first introduced to keeper.ai, the site that claims to use AI to hook you up with a perfect match that meets all criteria for both parties so you can get married and start a family, where if you sign up for the Legacy plan they only gets paid when you tie the knot. They claim 1 in 3 dates from Keeper lead to a long term relationship. Aella has now signed up, so we will get to see it put to the test.

Word on Twitter is that the default cost for the Keeper service is $50k."

It occurs to me that unless they incur substantial expenses providing this service (which seems unlikely), they'd be able to make a profit on this model even if their proposed matches were no better than random chance. It's not as though they're out much in the cases where they fail, after all.
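
To make that concrete, here is a minimal back-of-envelope sketch in Python; every number below is an assumption for illustration, not a figure from Keeper:

```python
# Sketch: success-based pricing can be profitable even with purely random
# matching, provided marginal costs are low. All numbers are assumptions.

signup_fee = 100        # the sign-up fee for men reported later in the thread
marriage_fee = 50_000   # the "pay on marriage" price from the thread
p_marriage = 0.02       # assumed chance a *random* match ends in marriage
cost_per_client = 500   # assumed total cost of serving one client

expected_revenue = signup_fee + p_marriage * marriage_fee  # $100 + $1,000
expected_profit = expected_revenue - cost_per_client       # $600

print(f"Expected profit per client: ${expected_profit:,.0f}")
```

As long as the cost of serving a client stays below the expected success fees, the model is in the black even with zero matchmaking skill.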

True, as long as people didn't notice - but if your matches seem random, everyone will quickly ask for their money back and stop using the service.

The clear anti-humanity stances were, and remain, worrying - although, being terroristic stances, they will hopefully remain marginal and outresourced by healthier perspectives.

I kind of skimmed the section on RLHF problems, so I'm extremely sorry if this was covered, but is there any talk about transparency in the selection of, uh, what are we calling them? Reinforcers?

Anyway, I feel like there's a jury-selection problem here, where you're going to get increasing contamination from people who know (more about) what they're being asked to do and so are deliberately motivated to tamper with the process. Plus, given how many AI researchers we're discovering are not-so-secretly death cultists, maybe someone ought to be peering into who's being asked to provide the human feedback.

I don't think so, other than some talk about geographical/cultural distribution. There's currently no transparency on this, and it's a problem in all directions (e.g. if you are worried about what the jury looks like to outsiders, that's also an issue).

The pricing on Keeper.ai as of today is a $100 fee to get started (as a man), followed by a choice between three options:

- $8k/physical date. You’re allowed to chat over voice/video for free as many times as you want first.

- $50k/marriage if you put down a deposit. You may ask for the deposit back at any time and close your account. I didn’t ask what the deposit size is.

- $100k/marriage without a deposit

I will be extremely surprised if they end up finding a good match for me personally - but I signed up anyway because I think the idea is cool.
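
A rough way to weigh the three options above is expected cost. A minimal sketch in Python; the prices are the ones quoted in this comment, while the number of dates and the chance of marriage are purely assumed for illustration:

```python
# Customer-side comparison of the three quoted plans.
# Prices come from the comment above; everything else is an assumption.

PRICE_PER_DATE = 8_000
MARRIAGE_FEE_WITH_DEPOSIT = 50_000
MARRIAGE_FEE_NO_DEPOSIT = 100_000

p_marriage = 0.5  # assumed chance the service eventually gets you married

for n_dates in (2, 4, 6, 8):
    print(f"{n_dates} dates on the per-date plan: ${n_dates * PRICE_PER_DATE:,}")

print(f"Deposit plan, expected cost:    ${p_marriage * MARRIAGE_FEE_WITH_DEPOSIT:,.0f}")
print(f"No-deposit plan, expected cost: ${p_marriage * MARRIAGE_FEE_NO_DEPOSIT:,.0f}")

# Break-even: 50_000 / 8_000 = 6.25, so if you expect more than ~6 dates
# before a marriage, the $50k plan beats paying per date (ignoring the
# deposit's opportunity cost and the option to walk away at any time).
```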

The $8k line has to be the most brutal pricing - if I'm paying $8k to see you in person, that had better be worth it!

I presume the deposit is $50k; see Clear and Present Danger for the explanation. At $100k I'd start to worry about the money changing people's decisions...

> At $100k I'd start to worry about the money changing people's decisions...

I've picked $8k/date for the same reason. If I do meet someone extremely nice, I don't want the risk of $100k looming over my head.

Movie idea: the guy whose job it is to get the couples to actually tie the knot so they'll pay the $200k. Except, this time...

"Fair enough, but I will keep reminding everyone that any sane policy on this would start with things like allowing Chinese graduates of American universities to become Americans"

And not just Chinese graduates. Why we grant thousands of STEM PhDs to foreign students every year and then force them to leave is an absolute mystery to me. MIT alone has nearly 3,000 foreign graduate students, a solid majority of whom would stay in the US if they could (the NSF estimate is about 2/3). But we make it very hard for them to do so.

I find this very sad, but not mysterious.

What political actors could feasibly profit from this? The two big coalitions are, as of now, basically 'no immigration' or 'yes to _immigrants_ but we'll get back to you about immigration policy'.

'Skilled' immigration is a bad cause for either coalition. I kind of appreciate that the 'anti-immigration' coalition is at least consistent. The 'somewhat pro-immigration' coalition possibly enjoys _the appearance_ of rallying to the defense of _immigrants_; increasing 'skilled' immigration both cuts somewhat directly against that remaining a cause and is open to internal criticism along the lines of 'class betrayal'.

Re the horse analogy, and whether it convinces anyone: speaking as a (copium-huffing?) fence-sitter as regards the question itself, I found the analogy centrally unconvincing, as it assumes away a crux by positing no meaningful difference between physical and mental labour. If it was meant to convince me that AI will rapidly overtake human mental faculties, it didn't do that, and I'd say it rather begged the question. Where it got interesting was when I granted the premise and followed the analogy further down the chain of inference, e.g. “will humans still have jobs?” I thought that, given superintelligent AI, the horse analogy worked very well to dispel any illusion of continuing human relevance in that scenario.

Great feedback! I see that as all it is meant to do. So in your case, it resolved one objection, but left another, which is still pretty good even if both need to be solved.

I keep getting lots of value from your posts, thank you. However, you continue to write as though small, easy-to-train systems are not relevant. The Watson model ("about five should be enough") pushed by the largest players isn't where most of the overall research effort seems to be going, transformers are not the only game, and a world in which there are lots of cheap low-capability systems isn't effectively regulated by clamping down on training massive "foundation" models, yet it seems to bring risks and opportunities just as the GPT-infinity model does. Is your stance that a soup of AI systems is low-probability or low-impact or otherwise fine, so you need to warn only about the GPT-calypse?

I've actually been saying (when relevant) that I very much believe in cheaper low-capability systems for mundane utility as the baseline future. The question is whether such systems pose an existential threat. And then the follow-up is: if they do, do we even have any outs to that?

I can't now think of ways to control the soupy AI future, so I am currently practicing acceptance while continuing to look for points of leverage.

I think you've made some pretty convincing arguments that open source models, and especially open source cutting edge models, are dangerous and should be avoided.

Do you think that this would still be true, if, in the next year or so, we find out that LLMs are fundamentally limited in some way and can't/won't get much more advanced than GPT-4/4.5?

If the AI paradigm moves on to another architecture, would open versions of the last, most advanced models of the old paradigm still be considered dangerous?

Great question. My guess: it depends on the long-term affordances this makes available. If nothing substantially more capable than GPT-4 is possible, then my guess is that you'd want it released, because it will help build dependencies on it that then dead-end. The question is what would take us over lines we don't want to cross, and 4.5 seems like it would be close - more likely fine than not, but the tail risk is large.

Re RLHF and revealed preferences... it feels nebulous to me. The signal is what you as a rater actually pick, as distinct from what you'd state in advance about what kind of responses you'd prefer.

That said, assuming it's not the revealed preference (because the rater is explicitly stating that they prefer this response), I wonder how much we would _really_ want to optimize for revealed preference anyway, given what recommender/newsfeed systems have done to us when optimizing for the same thing (addiction et al.)... I would worry that a chatbot optimized for the revealed preference of what you "like" probably ends up producing emotionally manipulative conversational partners / sex and partner-substitution chatbots, which may not end up that well for humanity either!
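
To make the "what you as a rater pick" mechanics concrete: reward models are typically trained on pairwise comparisons with a Bradley-Terry style loss. A minimal sketch with toy reward scalars (this is the textbook formulation, not any particular lab's code):

```python
import math

def pairwise_preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    """Bradley-Terry loss: -log sigmoid(r_chosen - r_rejected).

    The training signal only encodes *which* response the rater picked,
    not the principles the rater would state in advance.
    """
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# The rater clicked response A over response B: training pushes the
# reward model to score A above B, whatever the rater says they value.
print(pairwise_preference_loss(1.3, 0.4))  # ~0.34: model already agrees
print(pairwise_preference_loss(0.2, 1.1))  # ~1.24: model gets corrected
```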
