Discussion about this post

Charlie Sanders

Good critique.

"This is the peer! This is the review! That is how all of this works! This is it working!"

I'm not sure this is fair. Peer review comes before publishing, not after, for a good reason: most of the people who will ever read the AI 2027 post will have read it in its pre-review state.

There are also a bunch of subtle but important norms and practices that formal peer review enforces but a LessWrong back-and-forth does not. Consider things like preregistration of methodologies, establishment of data integrity procedures, conflict-of-interest statements, and robust version control: all of those hard-won scientific table stakes (and more) are being lost by this choice of review strategy.

Michael Sullivan

You make a point early on that uncertainty can go two ways: maybe it's a lot slower, maybe it's a lot faster.

But can it actually? Like, AI by mid-2027 is pretty darn fast already! Can it get "a lot" faster than that? I mean, clearly on some level no. The absolute fastest it could've gotten is two years faster, and realistically maybe the very fastest it could be is one year faster. This isn't symmetric: AI could be a LOT slower than AI 2027 suggests, but it can't really be a lot faster than it suggests.
