28 Comments

I'm not sure if you've covered this, but Sabine Hossenfelder's response to Leopold Aschenbrenner's collection of essays seems like it's up your alley: https://nautil.us/a-reality-check-on-superhuman-ai-678152/


To be clear, this is not a request for medical advice. I offer it as the sort of question I might ask a medical AI, if I trusted it, which I don't.

So, there I am in the ER with thyrotoxicosis and tachycardia. ER doc #1 runs off a list of tests he'd like run, which includes a CAT scan.

Me: That all sounds fine, except ... contrast medium contains iodine. (I don't elaborate, and leave him to fill in the chain of thought.)

ER doc #1: Crap. I will talk to the radiologist.

(I am handed off to another doc, by being wheeled down the corridor on a trolley. In the interim, I imagine someone was looking up "contrast-induced thyrotoxicosis".)

ER doc #2: You know, I don't really need to see a CAT scan, I don't think it would tell me anything useful.

(My heart rate is stabilized with beta blockers, and I am -- after 2 days -- discharged from the ER. A couple of weeks later, I'm scheduled for the CAT scan they initially didn't dare do, which I am much better prepared for starting from a controlled heart rate, as opposed to severe tachycardia.)

Come on, guys:

a) How likely is contrast-induced thyrotoxicosis? I would kind of like to know the rough probability that I am about to die here.

b) How fast is the onset? Like, if the onset of contrast-induced thyrotoxicosis is slower than heart rate can be brought down by beta blockers administered via intravenous drip, then maybe this is all manageable and non-fatal even if things go wrong.

But no one feels like telling me the rough order of magnitude of the danger I'm in here. Maybe it's fine and the risk is really low.

author

Obvious thing to say is I probably wouldn't be willing to do this until I had a good answer!

Claude says the general risk of that is historically between 0.01% and 0.1%, but it 'could be higher' with pre-existing problems, and also the tech could have improved to reduce risk. So who knows. It's crazy how much people won't give straight answers on such things.

(I have no idea, from this story, WHY this person wanted a CAT scan)


Oh, the CAT scan is to see what's going on in my heart, in addition to the X-ray and ultrasound that they've already done.


If you have ever had a weird medical condition, it readily becomes apparent that:

1. Your regular physician is a well-educated generalist, but there are a million things they don't know.

2. Specialists know a ton, but there are tons of situations where they don't know either. There are conditions where 50% of ultimate diagnoses are "idiopathic", which is doctor speak for "beats us, maybe we can try random things to ameliorate the symptoms."

3. ER doctors exist to stabilize you until actual specialists can see you during working hours.

4. Nobody in the system has that much time to think about your individual case.

5. And yet, very often, a doctor can take 3 weird clues and instantly identify a problem.

When in doubt, take notes, do your reading, bring a notebook, ask questions. If you don't like the answers, ask around.


Never "Just do something." Usually "Just stand there." If in doubt, don't do the test, or take the treatment. Most tests, treatments, and drugs do not have a net benefit. Maybe 20% do, but those are substantive and invaluable! Don't scorn the 20%.


I consider my advice above to be medical advice, but not "illegal medical advice" since it is general and not specific to the individual's condition. At least that is what ChatGPT implies. Lol


I am not offering medical advice, I'm just saying that there seems to be an answer (or maybe multiple answers; it's a long article) in UpToDate. Can the AI be trained using UpToDate? Because that seems like an obvious source.

UpToDate says "In North America and other iodine-replete populations, iodine-induced hyperthyroidism may occasionally occur in patients with autonomous thyroid nodules after treatment with high doses of iodine, usually in the form of drug therapy (table 1) or exposure to iodinated contrast agents during diagnostic radiography (eg, computed tomography [CT] or angiography) [16,21-24]. As an example, in a prospective study of 73 patients (mean age 65.7 years), only two developed hyperthyroidism after exposure to radiographic contrast [25]. In another study, the risk was higher in patients who had subnormal serum TSH concentrations and increased technetium thyroid uptake prior to radiographic contrast exposure [26]."

The article "Iodine-induced Thyroid Dysfunction" goes on and on for longer than I want to read. I don't know, maybe I should ask Claude to summarize it.


Also to clarify: I think the doctors treating me were absolutely excellent, and the discharge letter they wrote was a brilliant scientific write-up of a series of experiments (i.e., what did we do to this patient).

It's more: you can find yourself running off the edge of what people know, and if we had an AI with superhuman knowledge of the medical literature, I know what question we'd be asking it.


In a completely different incident, I did once get one of those diagnoses with "idiopathic" in its name. We have run right off the edge of the official diagnostic flowchart (which, in this case, really is standardised by the National Institute for Clinical Excellence) and the hospital consultant is like "Ah! I know what this is" and orders up a confirming test. Some Dx's are too low probability to have made it onto the official flowchart.


Let me give you all a clue: Medical Science is not advancing as quickly as Computer Science and AI, because the human body is many orders of magnitude more complex than a computer, and we don't understand the former very well.


Can you have a non-iodine-based contrast agent like gadolinium, or have they said that wouldn't be suitable for the images they're after?


The Lighter Side should be above Truth Terminal


Usually I'd be like "come on, Wikipedia drama isn't on-topic". However: fuck that guy, he deserves it.


no...no lighter side?

Jul 11·edited Jul 11

I think you meant The Lighter Side

- to be in the table of contents

- above "Marc Andreessen gives $50k in Bitcoin"?

OR

not to have that header at all, if you consider the funny thing to be "Other People Are Not As Worried About AI Killing Everyone" (which it kinda also is)


Regarding existing AI implementations like fast food ordering, the question of whether they're using an old model or a GPT-4 level one is secondary. A lot of these companies are plugging AI into their extremely kludgy back-end systems, which is making the outcomes worse. Implications are that 1) there will be much more motivation for companies to update data and other systems to take advantage of AI and 2) these applications will get much better even if we hold AI capabilities constant. So these data points are actually somewhat bullish for cloud, software, and AI.


One hack that comes to mind for improving receptivity about AI existential risk is to avoid talking much about AGI independently causing problems and instead asking people to consider a bad actor using AGI/ASI. It just removes the question of volition, but otherwise the scenarios are basically the same, right? We still want to make sure the AGI doesn't send swarms of killer drones because Kim Jong Un or whoever told it to, and it makes the whole scenario seem less far out.


Few bad actors are actively omnicidal. Many goals orthogonal to human wellbeing are incidentally omnicidal.


FYI: a couple of spelling mistakes: "Goldman Sacks", "revival of the cite"


A nice writeup on the AIMO contest from Thomas Wolf (HF co-founder), which includes a Terence Tao quote: https://x.com/Thom_Wolf/status/1809895886899585164

The winners have shared their model here as a HF app: https://huggingface.co/spaces/AI-MO/math-olympiad-solver

And you can see ten sample problems in the training set from the competition: https://www.kaggle.com/competitions/ai-mathematical-olympiad-prize/data?select=train.csv

As he notes, the results are especially impressive given the compute constraints, 2xT4 GPU machines.


The JEST paper validates the hypothesis that a good teacher can affect learning outcomes by selecting material appropriate for the learner. It also proposes an algorithm for choosing batches of examples to speed up learning. This is important for humans, and I am glad it was published.
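If I understand the core idea, "learnability"-based selection prefers examples the learner currently gets wrong but a stronger reference model gets right. A minimal per-example sketch (the actual JEST algorithm scores whole sub-batches jointly, and all names and numbers here are invented for illustration):

```python
import numpy as np

def select_batch(learner_losses, reference_losses, batch_size):
    """Pick the examples with the highest 'learnability' score:
    high loss for the current learner, low loss for a pretrained
    reference model, i.e. hard-but-learnable examples."""
    learnability = learner_losses - reference_losses
    # Indices of the top-`batch_size` scores (np.argsort is ascending).
    return np.argsort(learnability)[-batch_size:]

# Toy data: per-example losses for 8 candidate examples.
learner = np.array([2.0, 0.1, 1.5, 3.0, 0.2, 2.5, 0.3, 1.0])
reference = np.array([0.5, 0.1, 1.4, 0.4, 0.2, 2.4, 0.1, 0.9])
chosen = select_batch(learner, reference, 3)
print(sorted(chosen))
```

Examples where both models agree (both low loss, or both high loss) score near zero and get skipped; the budget goes to the gap between learner and reference.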


https://www.nytimes.com/2024/07/08/opinion/elevator-construction-regulation-labor-immigration.html?unlocked_article_code=1.600.V_23.IbwTbcLqu_Oi&smid=url-share

Yes, it's from the NYT, which I think means you haven't read it yet.

Fits in your buckets for "housing theory of everything" and "EU doing EU things", except in this case their regulations are insanely better than ours.

Jul 15·edited Jul 15

> Which is funny, since no, it is not yet (fully) priced into the market. Not even close.

<seriousness level="45%">That's because if there's a singularity soon it ~doesn't matter how much money one has, so share prices reflect how much stocks will be worth on average *among possible futures with no singularity soon* — and think about what the average possible future with no singularity soon looks like.</seriousness>

(where "soon" = "in a time shorter than approximately the inverse of interest rates")
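The argument can be put as arithmetic: if money is ~worthless in the singularity branch, the price only averages over the no-singularity futures. A toy sketch (all probabilities and values invented):

```python
# Each future: (name, subjective probability, share value in that future).
# In the singularity branch money ~doesn't matter, so it is excluded
# from pricing rather than dragging the price toward zero.
futures = [
    ("singularity soon", 0.30, None),
    ("normal growth",    0.50, 120.0),
    ("stagnation",       0.20, 60.0),
]

priced = [(p, v) for _, p, v in futures if v is not None]
total_p = sum(p for p, _ in priced)
# Conditional expectation: E[value | no singularity soon].
price = sum(p * v for p, v in priced) / total_p
print(round(price, 2))  # higher than the unconditional average
```

So a high subjective probability of a singularity need not show up in the price at all; what moves the price is what the *average non-singularity future* looks like.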

Jul 15·edited Jul 15

I'm frankly reluctant to even mention SB 1047 in public, and haven't followed it as closely as you have, but having just seen your Asterisk piece, it seems worth noting that the open source exemption re shutdown appears to be gone in the current version.

In more detail:

- As of the Asterisk piece (not sure when exactly that was published), you quote the bill as saying, '22602 (m): “Full shutdown” means the cessation of operation of a covered model, including all copies and derivative models, on all computers and storage devices within custody, control, or possession of a person, including any computer or storage device remotely provided by agreement.'

- The current version (July 3) no longer mentions custody etc; the definition (now in 22602 (l)) reads, '“Full shutdown” means the cessation of operation of any of the following: (1) The training of a covered model. (2) A covered model. (3) All covered model derivatives controlled by a developer.'

- Full shutdown is referenced in 22603 (a): 'Before a developer initially trains a covered model, the developer shall do all of the following:...(2) Implement the capability to promptly enact a full shutdown.'

- As I read it, that suggests that as of the latest version of the bill, it's impermissible to release the weights of a covered model.

It's possible that I'm missing an importantly relevant clause that means that open source covered-model weights *can* be released, and if so I'd be quite interested to know that. I personally think it's a cost worth paying, but if in fact the bill now exacts that cost it seems worth acknowledging.


The human body is already perfect. It cannot be improved by A.I.
