A few months ago, Ian Hogarth wrote a Financial Times op-ed headlined “We must slow down the race to God-like AI.”
A few weeks ago, he was appointed head of the UK Foundation Model Taskforce, and given 100 million pounds to dedicate to AI safety, to universal acclaim. Soon there will also be a UK Global AI Summit.
He wrote an op-ed in The Times asking everyone for their help, with an accompanying Twitter thread. Based on a combination of sources, I am confident that this effort has strong backing for the time being, although that is always fragile, and that it is aimed squarely at the real target of extinction risk from AI, with a strong understanding of what it would mean to have an impact on that.
Once again: The real work begins now.
The UK Taskforce will need many things in order to succeed. It will face opposition within and outside the government, and internationally. There is a narrow window until the AI summit to hit the ground running and establish capability and credibility.
The taskforce represents a startup government mindset that makes me optimistic, and that seems like the best hope for making government get things done again, including on other vital causes that are not AI, and not only in the UK. We likely only get one shot at this. If the taskforce fails, there will probably not be another such effort.
Right now, the main bottleneck is that the taskforce is talent-constrained. There is an urgent need to scale up rapidly with people who can hit the ground running and allow the taskforce to orient.
If you are in a position to help, then, with the possible exception of creating your own organization at scale, I believe this is the highest-leverage opportunity currently available.
To reach out and see if you can help, you can fill out this Google Form here.
NOTE: I had another draft that was much longer and went into my view of various regulatory options and details, as well as the struggles ahead, various ongoing battles, and other things like that. I decided it was much better to keep this short and to the point, and to keep it short I am putting the explanation of this decision in a comment rather than in the main body. I do plan to post about that material in the future.
I'd really like to see an attempt to flesh out the argument for AI risk with greater clarity and rigor, in the way it might be done in a well-functioning academic discipline. Even if you are already convinced, I feel like the process of fleshing that argument out and dealing with objections would help illuminate exactly what's necessary for the problem to arise, and hence suggest what will and won't help solve it.
For instance, merely having a good grip on exactly how much risk varies with ability/intelligence or what assumptions must be made about AI motivation would suggest some solutions and rule out others.
I mean, I've been impressed with the quality and professionalism of the work trying to solve the alignment problem and the discussions going on there. However, the argument for the danger doesn't seem to get similar treatment (Bostrom was a good start, but kinda seems like the high-water mark in terms of careful treatment).
I feel like the problem is that people who are convinced want to find solutions and the doubters just aren't motivated (and it's not mainstream in any academic discipline), so it doesn't get a professional treatment.