3 Comments
NOTE: I had another draft that was much longer and went into my view of various regulatory options and details, as well as the struggles ahead and various ongoing battles. I decided it was much better to keep this short and to the point, and to that end I am putting this explanation in a comment rather than in the main body. I do plan to post about that material in the future.

I'd really like to see an attempt to flesh out the argument for AI risk with greater clarity/rigor, the way it might be done in a well-functioning academic discipline. Even if you are already convinced, I feel like the process of fleshing that argument out and dealing with objections would help illuminate exactly what's necessary for the problem to arise and hence suggest what will and won't help solve it.

For instance, merely having a good grip on exactly how much risk varies with ability/intelligence or what assumptions must be made about AI motivation would suggest some solutions and rule out others.

I mean, I've been impressed with the quality and professionalism of the work trying to solve the alignment problem and the discussions going on there. However, the argument for the danger doesn't seem to get similar treatment (Bostrom was a good start but kinda seems like the high-water mark in terms of careful treatment).

I feel like the problem is that people who are convinced want to find solutions and the doubters just aren't motivated (and it's not mainstream in any academic discipline), so it doesn't get a professional treatment.
