Discussion about this post

David F Brochu:

Human language is bounded by the universe we observe. The universe itself is bounded by physics. Absent new physics, no system bounded by language can achieve "unbounded" intelligence. AGI is nonsensical. Faster does not equal smarter; it just means errors propagate at a pace humans cannot keep up with. One error propagated at the scale of current compute ends only one way. Alignment is a math problem. It is so simple everyone is missing it. S=L/E.

Eric Lortie:

AI coding is definitely progressing at a pace that should have been expected by anyone with relevant coding experience and exposure to the tech and capabilities of the models. As a simple example: it was possible to output single-file Wordpress plugins through a manual prompting process in 2024, and the quality of the plugins was solid when the build was prompted properly. ChatGPT evolved from outputting a few hundred good lines to a thousand, and then the systems evolved to allow multiple chunks of similar work to be assembled cohesively. The issue has always been one of orchestration. It was always going to scale rapidly once agentic orchestration improved, which happened a while ago, and still the major labs don't seem willing to acknowledge what they've created.

For all their billions of dollars, the major labs seem to have a serious lack of builders and imagination. Or, as oAI did with advanced voice mode, they've been sitting on internal knowledge they know the public/economy isn't ready for (lots of that going around right now).

