3 Comments

Missing link: [LINK tktk]


While I generally like your points here, I think asking "what happens" introduces its own bias. It pushes us toward futures that feel narratively understandable to us, and this effect gets worse the more we try to imagine specifics.

This has some downsides when applied to how people will react in the near term, but there it's not so bad. What really worries me is when we apply it to things for which we have only very limited or poor precedents.

For instance, we only have experience with intelligent systems that came about through undirected evolution, so when we try to ask "what happens" with AI, there is a tendency to imagine it will have to behave much the way we do. We assume that if it intelligently works toward some end via one mechanism, it will necessarily do so via others. We assume that the more intelligent it gets, the more it will behave as if it is maximizing a simple objective function, and so on.

So I'd advocate extreme caution with this approach in cases where we can only tell a coherent story about how things work under the weakly supported assumption that they'll be similar to what we're already familiar with.


Yeah, that's a problem too: availability heuristic, narrative plausibility, etc. Concerns like this are why my posts are always super long, I guess?
