
I really, really hope you were cool with the big government, anti-free-trade policy of restricting the sale of chips to China.


I thought for hours about a good D&D pun with the "Bard" name, and of course you get the perfect one


I thought this was pretty good GPT-3.5 creativity:

Come up with three novel ideas for poetic aesthetics

Sure, here are three novel ideas for poetic aesthetics:

"Silent Poems": A poetic form that focuses on the use of white space and silence to create meaning and emotion. Instead of using words and language, poets in this style would use line breaks, punctuation, and other visual elements to convey their message. This form would be particularly well-suited to exploring themes of absence, loss, and emptiness.

"Poetry of the Ineffable": A poetic style that seeks to capture the beauty and mystery of experiences that cannot be fully articulated in language. This could include the sensation of falling in love, the feeling of being lost in a vast natural landscape, or the sense of transcendence that comes from a religious or spiritual experience. Poets working in this style would use metaphor, symbolism, and sensory description to evoke the ineffable.

"Algorithmic Poetry": A form of poetry that is generated using computer algorithms. This could involve using natural language processing to analyze existing poems and generate new ones based on patterns and themes, or it could involve using random number generators to create new combinations of words and phrases. Algorithmic poetry would challenge traditional notions of authorship and creativity, and could result in unexpected and surprising poetic expressions.

Please explain how a poem demonstrating these aesthetic principles would appear or be written

Certainly! Here are some possible examples of how a poem demonstrating each of the three aesthetic principles I suggested earlier might appear or be written:

"Silent Poems":

white space

punctuates

the emptiness

between us

silence

falls

like snow

on the city

In a "silent poem," the use of white space and silence becomes just as important as the words themselves. The first example uses line breaks to convey a sense of distance and separation, while the second example uses the absence of language to create a mood of quiet contemplation.

"Poetry of the Ineffable":

The way you move

through the world

is like the light

on a dragonfly's wings

I am lost

in the vastness

of your eyes

like a sailor

on a starless sea

"Poetry of the ineffable" relies on metaphor and sensory description to evoke the beauty and mystery of experiences that cannot be fully expressed in language. In the first example, the poet uses the image of a dragonfly's wings to convey the delicate and elusive nature of the subject's movements. In the second example, the poet uses a nautical metaphor to convey a sense of being adrift in an overwhelming and unknowable space.

"Algorithmic Poetry":

The moon is a silver button

sewn onto the black cloth of the sky

A thread of light that ties the night

to the memory of a lover's sigh

The sun is a blazing coin

tossed across the azure blue

A promise of warmth and light

that will see us through

In "algorithmic poetry," the poet may use computer algorithms to generate new combinations of words and phrases. The examples above were generated by a program that analyzed existing poems and used patterns and themes to create new ones. The resulting poems can be surprising and unexpected, and challenge traditional notions of authorship and creativity.


Even GPT-3.5 is fantastic at inventing languages; I documented my experiments here:

https://maximumeffort.substack.com/p/i-taught-chatgpt-to-invent-a-language


Re: “Llama is this so easy”, this repo https://github.com/ZrrSkywalker/LLaMA-Adapter seems to suggest that actually you may need *very* few additional parameters and a tiny amount of fine-tuning of a network that you bolt on top of an LLM to get pretty good instruction following. I haven’t experimented with it though.
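
For intuition, here is a rough PyTorch sketch of the general shape of that approach: a handful of trainable prompt vectors plus a zero-initialized gate, bolted onto a frozen attention layer so training starts from the unmodified base model. The real LLaMA-Adapter code differs in detail; the class name, sizes, and wiring below are all illustrative.

```python
import torch
import torch.nn as nn

class GatedPromptAdapter(nn.Module):
    """Toy adapter: a few trainable prompt vectors whose influence is mixed
    into a frozen attention layer's output via a gate that starts at zero."""

    def __init__(self, frozen_attn, d_model, n_prompts=10):
        super().__init__()
        self.attn = frozen_attn
        for p in self.attn.parameters():          # freeze all base weights
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        self.gate = nn.Parameter(torch.zeros(1))  # zero init: no effect at step 0

    def forward(self, x):
        # x: (seq_len, batch, d_model)
        base_out, _ = self.attn(x, x, x)          # ordinary frozen self-attention
        prompts = self.prompts.unsqueeze(1).expand(-1, x.size(1), -1)
        prompt_out, _ = self.attn(x, prompts, prompts)  # cross-attend to learned prompts
        return base_out + torch.tanh(self.gate) * prompt_out

layer = GatedPromptAdapter(nn.MultiheadAttention(512, 8), d_model=512)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
frozen = sum(p.numel() for p in layer.parameters() if not p.requires_grad)
print(trainable, "trainable vs", frozen, "frozen")  # ~5K vs ~1M in this toy layer
```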


Why do you think gradient descent isn't enough? From my perspective, if the architecture is right, gradient descent will get 'er done. The human brain has a complicated set of different architectures, but if you find the right combination and then train it on all the video on YouTube, I'd expect to get general AI with gradient descent as the learning mechanism.


Thanks for these excellent write-ups, Zvi. It's exciting to be living on the exponential. My other news sources continually disappoint me when it comes to surfacing the most exciting and relevant developments in AI. Meanwhile, you continue to update us rapidly on these developments and provide the necessary context and commentary for my dumb dumb brain.

I'm apparently one of the few with access to the ChatGPT 3rd party plugins. If you want to give them a try through my access, drop me a line and I'd be happy to smuggle your prompts in and send you the outputs:

Twitter: @JetbatUnleashed


The self-corrected "non-rhyming poem" just switches from couplets to ABAB.

Strangely, I can get a non-rhyming poem in one shot if I give it a subject ("write me a non-rhyming poem about frogs"), but I get the exact same pattern of mistakes as the tweet (couplets, followed by ABAB) if I just ask for a non-rhyming poem and then ask it to self-correct.
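
For anyone who wants to check outputs in bulk, here's a crude Python heuristic for spotting the couplets-vs-ABAB pattern by comparing the last letters of each line's final word. It's only a sketch: real rhyme detection needs pronunciations (e.g. the CMU dictionary), since "blue" and "through" rhyme without sharing a spelling.

```python
def crude_rhyme_key(line, n=2):
    """Last n letters of the final word - a rough stand-in for its rhyme sound."""
    word = line.strip().rstrip(".,;:!?").split()[-1].lower()
    return word[-n:]

def quatrain_scheme(lines):
    """Label a 4-line stanza as couplets (AABB), alternating (ABAB), or neither."""
    k = [crude_rhyme_key(line) for line in lines]
    if k[0] == k[1] and k[2] == k[3]:
        return "AABB (couplets)"
    if k[0] == k[2] and k[1] == k[3]:
        return "ABAB (alternating)"
    return "no obvious end rhyme"

print(quatrain_scheme([
    "A frog sits on a log at night",
    "beneath the pale moon's light",
    "he dreams of skies of blue",
    "and mornings crisp and true",
]))  # -> AABB (couplets)
```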


An interesting angle to all of this is watching the academic and intellectual classes realize that a new technology seems aimed specifically for them. Previous novel advancements made physical labor easier, and the people who had to adapt or lost jobs were the laborers.

Now labor looks like the hardest task to automate and all the thinking classes are stressing out about whether they are going to keep their jobs or how to adapt.

No plumbers are reading about AI and wondering if they will still have a job.


Re: spam, I've already seen people posting on technical forums with suggestions and arguments written by GPT. My suspicion - though it's impossible to know - is that they wouldn't have come up with the argument GPT was making on their own, but assumed it was good. I haven't bothered responding to things I could tell were written by GPT, but if it were harder to tell, well... if people think GPT is accurate and start relaying its output without marking it as such, Brandolini's law is going to render such forums useless. Right now they only avoid falling to Brandolini because people are generally trying to make good arguments, which takes effort even when the arguments end up being bad.


10. "Lindy" is hard to take seriously. What are the odds that a business would land on the one name that most concisely describes what's wrong with their product?


24. Suffering in token predictors is really hard to gauge. When was the last time you saw written words express real, serious distress? The format and the training data don't lend themselves to it. Yet we do see LLMs producing results that come off as "inner" discomfort (i.e. the LLM isn't being directly asked to simulate anything uncomfortable, but is giving off uncomfortable vibes due to its weights pulling it in conflicting directions). I doubt LLMs currently suffer, but if and when they do it seems relatively likely that they'll have no mouth to scream with.


About AI and unemployment: the main argument that it won't be a problem goes something like this:

"In the past, increases in productivity in one area (1) increased demand for things other areas, (2) sometimes increased demand for people in that area because cheaper X leads to more uses for X, and (3) enabled entire new fields such that mass unemployment was never a major problem. So AI will do the same."

One could quibble with the premise (we are working a lot fewer hours, with lower labor force participation and a much higher percentage of rent-seeking jobs, than, for instance, in 1900), but it seems basically correct. But this is largely a function of how extremely versatile humans are - almost everyone can do a tremendous variety of physical and mental tasks, so even automating very large sections of these tasks still left plenty of things for people to do. It does not follow that sufficiently broad automation couldn't cause mass unemployment. Is there a historical example of this? Yes - horses.

Horse-drawn carts were the cutting edge of land transport for millennia. Trains represented a massive leap forward in terms of land transport - orders of magnitude more efficient than horses at hauling large loads long distances. Did this render horses obsolete? No, it actually increased demand for horse transport hugely, since something had to take loads to and from the train stations which were now handling colossal volumes of cargo, and something had to plow the fields which were now growing far more than ever before. Did this last forever? Also no. Internal combustion engine vehicles proved much better at both small-scale irregular transportation and things like fieldwork. Cars, trucks, and tractors replaced horses pretty much everywhere. Now, horses mostly exist out of nostalgia and a sense of aesthetics; they serve little economic purpose outside of a handful of the world's roughest spots. If we collectively decided that horses were too much trouble, we could get rid of them entirely with basically no loss.

Previous industrial advances are the trains in this analogy. The internal combustion engine is AGI. We're the horses. Industrial revolution tech : humans : AGI :: trains : horses : internal combustion engines.

This is somewhat related to X-risk - a non-actively malevolent superintelligence, of the sort that treats humans the way we treat ants, rather than actively trying to exterminate us out of spite, would first have to automate all parts of the economy required to sustain itself.


Re: AI Gives Requested Information.

This argument is being taken too far, though. You could say the same thing about the examples where we get LLMs to explain how they would take over the world or kill everyone. Given that you agree with EY that they should at the very least be aligned not to say they are conscious, shouldn't they also at least be aligned not to go around saying these kinds of things?

If their plan to take over the world is realistic or surprisingly effective, or if it contains concrete steps and subgoals, then we may already have a demonstration of one key step.

And let’s not forget that an AI that reaches the point where it could execute such a plan could execute it much more effectively than a human. There are plenty of plans infeasible for a human that are trivial for an AI to execute.


You wrote:

>"If you train an LLM to learn the legal moves in Othello, you can then look into the model and figure out how it represents the board state, and you can tweak that representation to change what it thinks is the board state. Cool."

In the Lex interview EY says he thinks we'll need all the best minds doing theoretical physics working on interpretability for 30 years to understand the inscrutable matrices. I think results from toy models like the one above show that might not actually be the case. You may actually only need 7 or so SSS-tier minds working on a very specific problem very carefully to make rapid progress. Maybe there's a way to scale up probe generalization such that a "Manhattan project of interpretability" could create very powerful general tools by 2030.
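
As a concrete toy version of that probe-and-edit recipe: fit a classifier from hidden activations to a board square's state, then nudge an activation along the learned class direction and watch the read-out flip. The sketch below uses synthetic activations as a stand-in for real Othello-GPT internals; all names and shapes are made up for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical setup: `acts` would be the model's activations at N positions
# (N x d_model) and `square_state` the true state of one fixed board square
# (0 = empty, 1 = mine, 2 = theirs). Here both are synthetic stand-ins.
rng = np.random.default_rng(0)
d_model, n_pos = 128, 2000
encoding = rng.normal(size=(d_model, 3))          # pretend per-state directions
square_state = rng.integers(0, 3, size=n_pos)
acts = encoding[:, square_state].T + 0.5 * rng.normal(size=(n_pos, d_model))

# The probe: a linear read-out from activations to the square's state.
probe = LogisticRegression(max_iter=1000).fit(acts[:1500], square_state[:1500])
print("probe accuracy:", probe.score(acts[1500:], square_state[1500:]))

# The "tweak": push an activation along the probe's direction for class 1;
# in the real experiment this kind of edit changes the model's own move predictions.
x = acts[0] + 3.0 * probe.coef_[1]
print("edited read-out:", probe.predict(x.reshape(1, -1))[0])
```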

I have a speculative hypothesis that by generating deliberately weak BUT highly deceptive AIs prone to 'treacherous turning' we could learn to scan nascent large models for similar architectures and flag problems before deployment.


> I continue to be in the no camp on gradient descent being enough. [to make AGI]

Why?
