Feb 14, 2022

I was thinking about this when Scott's article came out. I used to be a lifeguard. Most of my job consisted of watching a pool of people where no one drowned.

In some ways the lifeguard gets far worse odds than the volcanologist. How often you need to check makes a big difference. As a lifeguard, the checking interval is every 15-30 seconds. How often does the volcano need to be checked? Once a day? Once a week?

To some extent I was definitely serving the same purpose as a security guard, to make people feel safe, to lower insurance costs, etc. But people did occasionally start drowning, and I did have to jump in and save them.

I did some back-of-the-envelope math based on how often I saved people and how often I had to check the pool. The odds were much lower than 1 in 1,000 that someone was drowning. It was more like a 1 in 100,000 chance that someone was drowning while I was checking the pool. I was a teenager while I was a lifeguard, as were many of the other lifeguards I worked with. No one drowned at any of the pools I worked at, and we made a few saves each summer.

I can't help but think that being right 99.9% of the time when being wrong is catastrophic is actually a really crappy record. At checks every 15-30 seconds, a 1-in-1,000 miss rate means botching checks far more often than drownings actually happen. If I had been a lifeguard who was right only 99.999% of the time, there would have been at least one dead kid.
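To put rough numbers on this, here's a back-of-the-envelope sketch in Python. Every specific figure in it (check interval, shift length, season length, saves per summer) is an illustrative assumption, chosen only to land near the 1-in-100,000 estimate above:

```python
# Back-of-the-envelope version of the lifeguard math above.
# Every specific figure here is an assumed, illustrative number.

SECONDS_PER_CHECK = 15   # assumed: checks every 15-30 seconds
HOURS_PER_SHIFT = 8      # assumed shift length
DAYS_PER_SUMMER = 100    # assumed season length
SAVES_PER_SUMMER = 2     # "a few saves each summer"

checks_per_summer = (3600 // SECONDS_PER_CHECK) * HOURS_PER_SHIFT * DAYS_PER_SUMMER
drowning_odds = SAVES_PER_SUMMER / checks_per_summer

print(f"checks per summer: {checks_per_summer:,}")                     # 192,000
print(f"P(drowning on a given check): 1 in {1 / drowning_odds:,.0f}")  # 1 in 96,000

# A lifeguard "right 99.9% of the time" botches 1 check in 1,000 --
# orders of magnitude more botched checks than actual drownings.
# Even at 99.999%, botched checks are roughly as common as drownings.
for accuracy in (0.999, 0.99999):
    botched = checks_per_summer * (1 - accuracy)
    print(f"accuracy {accuracy:.3%}: ~{botched:.0f} botched checks "
          f"vs {SAVES_PER_SUMMER} real drownings per summer")
```

Under these assumptions, 99.9% accuracy means ~190 botched checks a summer against ~2 real drownings; you need to reach 99.999% before your error count is even in the same ballpark as the base rate.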

author

The key question for the lifeguard that comes to mind right away is: When a kid is drowning do you notice if and only if you look? I can imagine that often there's a very clear sign the kid is drowning that you'd notice anyway, other times your looking wouldn't see it. So the odds of marginal value get even worse. Tough job to stick with.


Drownings are often the opposite of how Hollywood depicts them. It is rare that the drowning victim can shout for help. They might be struggling, but you certainly won't hear them over the ambient noise of a pool (lots of splashing and screaming). You can easily spot people sitting still on the bottom of the pool, but that might be way too late. One of the methods taught to lifeguards is not to look for people drowning, because that is hard to see. It is to assess people's swimming ability and constantly recheck all the bad swimmers to make sure they aren't quietly sinking.

There is a youtube channel that is just full of lifeguard rescues:

https://www.youtube.com/channel/UCnERyC7dwJwTvEyzYz6uxHw

You'll notice in a lot of them that despite the fact that there are ~100 people in the pool, the lifeguard almost immediately notices the drowning victim, probably because they had already marked the victim as a potential danger.

author

Ah, thanks, that's really interesting. I wonder if this is one of those places where you could combine machine learning with cameras and do very well. Also, this sounds like it makes the job more interesting - the idea 'scan for X every few seconds' sounds crazy terrible but assessing everyone's skill is a real thing.


I wouldn't bet any money on the cameras + machine learning performing very well:

1. Very little training data. Saves are rare. (A toy sketch of this base-rate problem follows below.)

2. Non-generalized data. I think you'd have to train it on specific pools.

What makes that specific pool dangerous:

1. Young/untrained swimmers.

2. Access to deep water (via flotation devices).

3. Turbulent water (it's a wave pool).

The AI might be able to recognize young/untrained swimmers, but the other two things are unique to that specific pool, and other pools have other types of danger.
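To make point 1 concrete: with saves at roughly 1-in-100,000 frequency, raw accuracy rewards the model that never sounds an alarm. Here's a toy sketch (all numbers made up, no real detector) of a "rock" classifier scoring 99.999%:

```python
# Toy illustration of the class-imbalance problem for a camera+ML
# drowning detector. All numbers are made up. The "rock" is a model
# that always predicts "no one is drowning".

checks = 1_000_000   # assumed number of labeled pool-checks
drownings = 10       # rare positives, ~1 in 100,000

# The rock is right on every safe check and wrong on every drowning.
rock_accuracy = (checks - drownings) / checks
print(f"rock accuracy: {rock_accuracy:.4%}")   # 99.9990%

# The number that actually matters is recall on the rare class,
# where the rock scores exactly zero:
rock_recall = 0 / drownings
print(f"rock recall on drownings: {rock_recall:.0%}")   # 0%
```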

There are also less common ways of drowning that don't look anything like these. One of them is shallow water blackout. It is when someone holds their breath so long that they pass out. This is a real danger for snorkelers and experienced swimmers, and it also looks completely different from a kid struggling to stay afloat.

I'd never thought about lifeguarding in these terms before, but it really is a profession dedicated to saving people from tail risks and super rare events. There is probably something there for experts to learn, but I'm not exactly sure what. The non-medical parts of the lifeguard training were definitely all about "don't listen to the rock that says no one is drowning, you *will* kill someone if you always listen to the rock".

"the idea 'scan for X every few seconds' sounds crazy terrible but assessing everyone's skill is a real thing."

Oh, it definitely got super boring and terrible. If you have just five swimmers, one a boring adult swimming laps and the other four young teens all on the swim team, then you are absolutely just there for insurance purposes. But it was during one of those stretches that a shallow water drowning almost happened at one of the pools I worked at. So again: "don't listen to the rock, it WILL get someone killed eventually".


Football Outsiders has long had the problem of making the case that their pre-season predictions are useful. A lot of simpleminded analysis says they are less accurate than just naively predicting 8-8 for every team. But of course, while 8-8 across the board will score better on certain error metrics, it's useless. The predictions are way more useful as FO uses them: to headline a chapter in their seasonal prospectus and then explain *why* their metrics made that prediction. Counter-intuitive predictions especially are useful, because they let FO point out that, say, a good defense with great 3rd down stop rates and fumble recovery numbers is going to regress to the mean more than a bad offense with good completion percentages, bad red-zone numbers, and a lot of close losses: offense and completion percentage are very stable year-to-year, while the other metrics are very unstable but very influential on wins and losses.

author

I've gotten good value out of FO, but if there are reasonable metrics where they are losing to 8-8 it means something simple: they're not building in enough randomness / reversion / etc. and need to fix it. Obviously one can beat 8-8 on any sane measure very easily (e.g. predicting 9-7 for every team that had 11+ wins last year and 7-9 for every team with 5 or fewer will obviously do better). If the predictions aren't just baselines (e.g. mean or median numbers), then it seems odd not to fix that...
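For what it's worth, here's a sketch of that reversion baseline. All the win totals below are hypothetical, and the claim that it beats flat 8-8 rests on NFL records reverting toward the mean year over year:

```python
# Sketch of the naive reversion baseline described above: predict 9-7
# for last year's 11+ win teams, 7-9 for last year's 5-or-fewer win
# teams, and 8-8 for everyone else. All win totals are hypothetical.

def reversion_baseline(last_year_wins: int) -> int:
    if last_year_wins >= 11:
        return 9
    if last_year_wins <= 5:
        return 7
    return 8

def total_abs_error(preds, actuals):
    return sum(abs(p - a) for p, a in zip(preds, actuals))

last_year = [13, 12, 4, 3, 8, 9]   # hypothetical prior-season wins
actual    = [10, 9, 6, 7, 8, 10]   # hypothetical current-season wins

flat = [8] * len(actual)
baseline = [reversion_baseline(w) for w in last_year]

print(total_abs_error(flat, actual))      # 8
print(total_abs_error(baseline, actual))  # 4
```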


Well, naive metrics are often unreasonable.

It's been a while, but I think it was a sports gambling website that 'proved' FO was useless because they took the sum of signed errors. So if team A gets 4 wins and team B gets 12 wins, a straight 8-8 prediction has an error of (+4) + (-4) = 0. If FO predicts 4 wins and 13 wins, that's an error of +1.

Of course, a more useful metric is probably the sum of absolute errors raised to some exponent (probably not 2, but 2 makes the math easy), so you get 4^2 + 4^2 = 32 for the 8-8 prediction vs 0^2 + 1^2 = 1 for the FO prediction.
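In code, using exactly the two-team example above (team A really wins 4 games, team B wins 12), the cancellation is easy to see:

```python
# The two-team example above: team A really wins 4 games, team B 12.
actual   = [4, 12]
flat_8_8 = [8, 8]
fo       = [4, 13]

def sum_signed_errors(pred, actual):
    # The broken metric: misses of opposite sign cancel out.
    return sum(p - a for p, a in zip(pred, actual))

def sum_squared_errors(pred, actual):
    # A saner metric: every miss counts, and big misses count more.
    return sum((p - a) ** 2 for p, a in zip(pred, actual))

print(sum_signed_errors(flat_8_8, actual))   # (+4) + (-4) = 0, "perfect"
print(sum_signed_errors(fo, actual))         # 0 + 1 = 1, looks "worse"!
print(sum_squared_errors(flat_8_8, actual))  # 16 + 16 = 32
print(sum_squared_errors(fo, actual))        # 0 + 1 = 1
```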

author

Sum of signed errors is sufficiently stupid that I think we can ignore anyone using it. Sum of unsigned errors is not great but at least reasonable.


Thanks for mentioning The Phantom Tollbooth.


This is reminding me quite a bit of the Euthyphro Dilemma.
