4 Comments

You say 1.5 uses Pro levels of compute, but I don't think their announcement says that. It says "less compute" than Ultra. Long context likely uses more compute than their Pro model, but how much less than a 32k-context Ultra is yet to be known. It seems to take longer per token, at least, so even if the hardware it runs on is the same as Pro (I'm assuming it takes more VRAM), it takes more compute cycles overall. If you're worried about fast takeoff, I think this is a good thing, because it's not an improvement in compute requirements.


‘this just works’ and ‘I do not have to explain this.’

Exactly where I am at with this. I signed up for the two-month free trial and have used it to organize my daily ramblings in my journal, as well as to do research on Mormonism (recently moved to Idaho). It feels more personable to me than the free GPT I use... strange as it feels for me to write that! I am 71 years old and I barely understand how anything works anymore; my personal concern is how much I am coddling the beast that might eventually devour us.


The pace of change reminds me of the Web circa 1994-1995. Nobody can keep up.
