"ChatGPT’s new image generator, Image 1.5, went live this week. It is better and faster (they say ‘up to’ 4x faster) at making and edits precise images, including text. It follows instructions better. "
"Rob Wiblin crystalizes the fact that AI is a ‘natural bubble’ in the sense that it is priced as a normal highly valuable thing [X] plus a constantly changing probability [P] of a transformational even more valuable (or dangerous, or universally deadly) thing [Y]. So the value is ([X] + [P]*[Y]). If P goes down, then value drops, and Number Go Down."
Should we factor in the possibility that stock prices, if they ultimately need to be traced to 'human' owners, might include AI-driven robotic taxidermy at some point? Might (competing?) ASIs wind up setting up legal fictions of human owners while actually operating roboticised skins of former stockholders?
The opening paragraph states OpenAI sued Google for copyright infringement, but the actual story is that Disney sued Google for copyright infringement.
You've repeatedly suggested there are *severe* problems with Gemini. Is there a flagship example of this? The "jealous" CoT you posted here doesn't strike me as one, it's a fairly relatable character for Gemini to play. If I discovered a coworker doing that I wouldn't call them a sociopath.
Re Mom, Owain Evans Is Turning The AIs Evil Again:
I would submit a different sort of interpretation than "the AI deduces it is Hitler". It is being pushed by fine-tuning on the highlighted text to be more likely to spend time (tokens) in parts of latent space that are associated with Hitler. This is then triggered when eliciting the behavior in testing.
The part I keep circling is your implicit point that we are bottlenecked by workflows, not intelligence. The models are already “smart enough” to be useful in tons of places. The reason it still feels jagged is that the slow parts are permissioning, tooling, handoffs, and all the tiny human steps that turn a good idea into a shipped artifact.
I felt this viscerally this month building a little home maintenance app locally. I have been tinkering with websites since the 90s, always dependent on someone else’s stack, template, or patience. This time I went from “I wish I had this” to a working app on my laptop, solo, in the CLI. It is not even on my phone yet and it still feels like a personal phase shift.
My current bottleneck is taste. The code is no longer the hard part. The hard part is knowing what should exist, what should not, and what “good” looks like when the surface area is infinite.
Curious if you buy this framing: as models get cheaper and faster, “taste” becomes the scarce resource, and the biggest winners are the people who can set crisp product constraints. Not prompts, constraints.
The hypothesis that we will have AGI in 16 years ( stated in 2009), is not the same as the hypothesis that we will have AGI in 3 years (stated in 2025). That is not a red flag of a failure to update.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/ai-147-flash-forward
"ChatGPT’s new image generator, Image 1.5, went live this week. It is better and faster (they say ‘up to’ 4x faster) at making and edits precise images, including text. It follows instructions better. "
Minor grumble: I tested it with a prompt of
"Green glass tetrahedron on red table"
and I got a _square_ pyramid instead :-(
https://chatgpt.com/s/m_694459bc15348191b222a35dd783491b
( admittedly not my central use case - I'm more interested in STEM answers from ChatGPT )
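For anyone who wants to rerun this kind of spot check outside the ChatGPT UI, here is a minimal sketch against the OpenAI Images API. The call shape is the standard images.generate one, but the model identifier is an assumption on my part; I don't know what name the "Image 1.5" model gets in the API, so substitute whatever OpenAI actually publishes.

```python
# Minimal sketch: rerun the "Green glass tetrahedron on red table" prompt via the API.
# NOTE: the model name below is an assumption, not the confirmed "Image 1.5" identifier.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",  # assumed identifier; replace with the actual Image 1.5 model name
    prompt="Green glass tetrahedron on red table",
    size="1024x1024",
)

# The Images API returns base64-encoded image data; write it out to inspect the shape.
with open("tetrahedron_test.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```

Eyeballing the saved file is enough to check whether you get a true tetrahedron or another square pyramid.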
<mildSnark>
"Rob Wiblin crystalizes the fact that AI is a ‘natural bubble’ in the sense that it is priced as a normal highly valuable thing [X] plus a constantly changing probability [P] of a transformational even more valuable (or dangerous, or universally deadly) thing [Y]. So the value is ([X] + [P]*[Y]). If P goes down, then value drops, and Number Go Down."
Should we factor in the possibility that stock prices, if they ultimately need to be traced to 'human' owners, might include AI-driven robotic taxidermy at some point? Might (competing?) ASIs wind up setting up legal fictions of human owners while actually operating roboticised skins of former stockholders?
( "The Puppet Masters" but with GPUs :-) )
</mildSnark>
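To make the quoted pricing identity concrete, here is a toy calculation with purely made-up numbers (they are illustrative, not estimates of any real company):

```python
# Toy illustration of the "natural bubble" pricing identity V = X + P * Y,
# where X is the value of the ordinary business, Y the value conditional on
# the transformational outcome, and P the market's probability of that outcome.
def price(X: float, P: float, Y: float) -> float:
    return X + P * Y

X, Y = 100.0, 2_000.0            # hypothetical: normal value 100, transformational value 2000

before = price(X, P=0.20, Y=Y)   # market believes P = 20%  -> 500.0
after = price(X, P=0.10, Y=Y)    # news cuts believed P to 10% -> 300.0

print(before, after)             # P halves, price drops 40%, Number Go Down
```

The point of the sketch is that large price swings can come entirely from changes in P while X is unchanged.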
The opening paragraph states that OpenAI sued Google for copyright infringement, but the actual story is that Disney sued Google.
You've repeatedly suggested there are *severe* problems with Gemini. Is there a flagship example of this? The "jealous" CoT you posted here doesn't strike me as one; it's a fairly relatable character for Gemini to play. If I discovered a coworker doing that, I wouldn't call them a sociopath.
> OpenAI has new terms of service…Pliny feels personally attacked
It looks like the Pliny thing is about X's new terms of service, not OpenAI's.
Re Mom, Owain Evans Is Turning The AIs Evil Again:
I would submit a different sort of interpretation than "the AI deduces it is Hitler": fine-tuning on the highlighted text pushes the model to be more likely to spend time (tokens) in parts of latent space associated with Hitler, and that tendency is then triggered when the behavior is elicited in testing.
The part I keep circling is your implicit point that we are bottlenecked by workflows, not intelligence. The models are already “smart enough” to be useful in tons of places. The reason it still feels jagged is that the slow parts are permissioning, tooling, handoffs, and all the tiny human steps that turn a good idea into a shipped artifact.
I felt this viscerally this month building a little home maintenance app locally. I have been tinkering with websites since the 90s, always dependent on someone else’s stack, template, or patience. This time I went from “I wish I had this” to a working app on my laptop, solo, in the CLI. It is not even on my phone yet and it still feels like a personal phase shift.
My current bottleneck is taste. The code is no longer the hard part. The hard part is knowing what should exist, what should not, and what “good” looks like when the surface area is infinite.
Curious if you buy this framing: as models get cheaper and faster, “taste” becomes the scarce resource, and the biggest winners are the people who can set crisp product constraints. Not prompts, constraints.
The hypothesis that we will have AGI in 16 years (stated in 2009) is not the same as the hypothesis that we will have AGI in 3 years (stated in 2025). That is not a red flag of a failure to update.
Oops, that should have been 19 years for the hypothesis in 2009 (2009 + 19 = 2028, the same target year as 2025 + 3).