Discussion about this post

Fergus Argyll

> The dumbest style of reaction is when a company offers an incremental improvement (see: GPT-5) and people think that means it’s all over for them, or for AI in general, because it didn’t sufficiently blow them away. Chill out.

Please take this more seriously. We've now had multiple attempts at scaling that have failed to deliver what their creators hoped for (GPT-4.5, GPT-5, Llama 4, now DeepSeek). It *is* a pattern. No, it doesn't mean Gary Marcus was right and it's all over, but you really do need to take it more seriously.

Ask questions like: which scaling laws are we fairly sure still hold? What are the signs that the labs still plan bigger training runs? Is there a financial reason the labs have not yet scaled successfully? Did we expect more than what we're currently seeing?

Engage with it.

T Stands For

Chinese chipmaker stocks did pop following this release. Although the training run was not especially large, if it was done entirely on domestic hardware, then this should be a significant update. Pangu Ultra was upcycled and not remotely close to frontier-level capabilities, so v3.1 could mark a major breakthrough depending on the hardware stack.
