76 Comments

Regarding Stripe's release of an AI Agent SDK: this seems to complement their recent acquisition of the stablecoin platform Bridge nicely. It seems obvious to me that AI agents, using a more advanced form of AI than we have today, will be significant users of stablecoins, in that stablecoins provide a way for the unbanked (i.e., AI agents) to convey value from one party to another.

Expand full comment

I have had good luck communicating AI risk - if others are interested, please reach out to me about strategy. We need massive awareness to have a chance.

Expand full comment

I think a lot of it is just people not putting two and two together, and helping them do so in a neutral, gentle way works really well.

Expand full comment

You're trying to build support for some sort of AI pause, right? If that's still your goal I would strongly urge you to follow Zvi's advice and read the emails, actually read the emails.

I don't know that there's anything we didn't know in there but this reminder is the money quote:

"The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated." - Karpathy

It's entirely a question of compute. In 1998 the Cray T3E hit 1 teraflop. The iPhone 15 Pro is over 2 teraflops.

Past performance not a guarantee of future results and all but what happens when, even assuming no further algorithmic enhancements, the cost of the compute to train the kinds of models you're afraid of is cheap enough for me to access?
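
A quick back-of-envelope sketch of that cost-decline point, in Python. Every number here (the assumed frontier training budget, today's price per FLOP, and the halving time) is an illustrative assumption, not a real forecast:

```python
# Illustrative only: all constants below are assumptions, not measured values.
TRAINING_FLOP = 1e25          # assumed FLOP budget for a frontier-scale training run
PRICE_PER_FLOP_2024 = 1e-18   # assumed dollars per FLOP today (~$10M for 1e25 FLOP)
HALVING_YEARS = 2.5           # assumed years for the cost per FLOP to halve

def cost_in_year(year: int) -> float:
    """Projected cost (USD) of TRAINING_FLOP in a given year under the assumptions above."""
    halvings = (year - 2024) / HALVING_YEARS
    return TRAINING_FLOP * PRICE_PER_FLOP_2024 / (2 ** halvings)

for year in range(2024, 2060, 5):
    print(f"{year}: ~${cost_in_year(year):,.0f}")
```

Under those made-up assumptions, a frontier-scale run drops from roughly $10M to a few thousand dollars somewhere past the 30-year mark, which is the shape of the argument above.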

You don't just need to get China On The Phone - to say nothing of extraterrestrial evolved intelligences, who we should assume are or will be capable of building a superintelligent AI if we can - eventually you'd have to get me, and I pre-commit to not picking up. It might take 30+ years, but you shouldn't bet humanity's fate on me not being around. I get plenty of aerobic exercise: I have a pool. And if I don't make it, millions of other people are going to do the same thing anyway.

The trend with training data is also unfavorable. Not enough today? How much more do you think will exist by the time the compute costs come down? That's just thinking about text generated per day. Last week at my job I was dealing with OCR problems from one of our data providers. Half an hour later I was having GPT-4 extract and format the info directly. Multi-modal is getting better and better, expanding the training corpus.

If what you fear is possible it is also inevitable. Pausing would be a disastrous choice, to say nothing of the cowardice of foisting the transition onto future humans. The earlier we advance the better the chance of some kind of initial controlled guidance and, if we survive, we'll be vastly more capable of handling the subsequent explosion. Pausing simply eats into our window without any benefit: Yudkowsky is absolutely right, alignment is hard and probably impossible in the way people mean, so there's not some theoretical breakthrough a pause gives us time to find.

Expand full comment

Let's assume, as you say, that alignment is impossible. If we let AI companies go full steam ahead, as soon as hardware permits AGI we are dead. If we don't, and we pick up the phone with China, then AGI is only achieved a few years later, and then we die. A few extra years of life is great, actually! Having 3 years to live, or 10 years to live, is a very different scenario.

Expand full comment

Your unfalsifiable belief that AGI inevitably leads to everyone dying is leading you to encourage a degrowth strategy that increases human misery for the payoff of...the same outcome a few years later? This is a very difficult mindset for me to understand.

Expand full comment

This is hardly a degrowth strategy, more a "don't kill everyone" strategy. I'd like to have my children grow up, for starters, and for them to have meaningful lives.

As for why AGI by default leads to human extinction, please read the Compendium.

https://www.thecompendium.ai/

You have not provided any reason at all that AGI won't lead to industrialized dehumanization and, even short of human extinction, pretty bad outcomes for humanity as a whole.

Expand full comment

I have. Lots of arguments, no evidence. The only convincing arguments are that current scaling seems likely to lead to AGI - and if you give that a moment's thought you'll realize that the safest course of action is to build it now while compute is relatively scarce and expensive. It would have been safer still if Louis XIV had done so with a country-sized windmill-and-canal mechanical computation device, but no sense crying about it now.

Instead, you advocate for a pause. Burning more time until the fall in compute prices leads to the inevitable, except that it will start from a base of greater access to compute. Whoops.

As for reasons why it won't - we can play campfire stories all night but you really don't want to assume that reality is dictated by what I can or cannot imagine. I don't even have a good reason for why most of a human body's functions work - and let's not even start on magnets.

Yudkowsky is right that debunking any given doomer argument is no proof against doomerism and you can debunk any argument I can provide for positive outcomes without either of us getting near to touching how things actually work out. There simply is insufficient data.

Expand full comment

I have not even seen a single good argument.

Expand full comment

I am not (yet?) a PauseAI advocate, and remain fundamentally uncertain about the effect of AGI; I did not put quite enough caveats in my previous comment to remain brief. (In contrast, this comment has ballooned a bit, sorry.)

Note that I prefaced my comment with *your assumption* that alignment is **impossible**. Under that assumption, having an AGI act in a way compatible with human interest is unstable (otherwise you've solved alignment) so humans eventually stop being in charge. I don't really see a realistic scenario where that happens on the scale of more than a decade after AGI (human intelligence is a ceiling? inference-time compute becomes a bottleneck? AGI fails to convince humans to help it grow?). Then we all die, unless you've solved at least some weaker form of alignment such that the now-autonomous AGI system leaves us something.

Let's think about pros and cons of pausing (again under this alignment-*impossibility* assumption).

- If we go full steam ahead, we get a few years of health improvements, better wars, great economic growth, less poverty and hunger, more leisure, many fewer deaths, before losing control.

- If we pause, we get a few years of business as usual, with deaths, illnesses, hunger, some existential risk (from nuclear wars/climate change), then get the same years of exponential growth as without pause, before losing control as well.

I think humanity as it is, despite its numerous terrible flaws and actions, is net positive (or at least if given the choice I would not delete it suddenly now), so a few years of business as usual is good. (And there are kids that I would like to see grow up a bit.)

If alignment is not impossible, but only very hard, then what a shame to go full steam ahead! Oh the wasted opportunity to get exponential growth in well-being **and sustain it!**

Besides, as has been pointed out numerous times elsewhere on this blog, slowing down or regulating AI does not necessarily stop most "mundane" uses of AI that can save lives now. And, more broadly, we can be growthers in all other domains! Only very specific niches (gain-of-function research, top AI capabilities) have the potential to produce technology that could supersede humanity with fitter entities (viruses, AI); why focus your energy on one of them rather than on all other domains that can fight back stagnation?

Expand full comment

"I don't really see a realistic scenario where that happens on the scale of more than a decade after AGI (human intelligence is a ceiling? inference-time compute becomes a bottleneck? AGI fails to convince humans to help it grow?). Then we all die..."

This is the problem that you, sean pan, Zvi in this post (yet not when he quantifies his actual risk estimates), and other safety-first proponents have. There is no justification to assume "blah blah blah - everybody dies, no questions asked, that's it, no other outcomes". This is an unsupported belief. You cannot remotely claim any knowledge about the likelihood of this happening, you cannot make any honest probability assessment, and you cannot justify a pause, or any course other than progressing and empirically finding out what we do not currently know, except by leaning on these baseless, dire "no more thinking" stop signs.

Expand full comment

I cannot prove that jumping out of an airplane at 35k feet without a parachute is invariably lethal, but all evidence leans that way.

Expand full comment

Honestly, basic logic shows the risk. All of the warning shots we're seeing right now provide even more evidence.

And yes, we have all of the other opportunities to... not die.

Expand full comment

Agreed. Every additional year of life is, in effect, 8 billion more years of life added for humanity.

Hard to see how this isn't beautiful, and how every additional year doesn't also give us opportunities to try to find a better solution.

Expand full comment

Pausing definitely gives us the ability to understand how to handle alignment in the current situation. Generally speaking, you don't get better at steering the car by racing ahead, and it really helps to have a brake as needed. By building the Pause button, we also get a lot better at learning how to control such explosions and keeping ourselves alive. If that means not driving off the cliff, then that's also part of the solution.

Suicide racing is definitely not the way.

It's a good question why you want to risk the lives of all humanity.

The fact that you're bringing aliens into the mix seems to suggest a really faltering argument besides "Let's casually risk killing everything of value, because it's hard to prevent that. lol."

My general sense is that most people do support some form of a pause and regulations on superintelligent AI, once they have gotten the appropriate information. The issue is that the information isn't penetrating widely, but very few people ultimately want everyone to die. Though I'll take the advice I heard from another and not talk to e/accs who want everything to die.

Expand full comment

"Though I'll take the advice I heard from another and not talk to e/accs who want everything to die."

I'm not sure if that was directed towards me, but it's another sign that makes me think you really need to take a step back and give your thinking on the topic of AI some self-examination.

I think the Safety end of the debate will, eventually, lead to stagnation and general poor outcomes for humanity up to extinction. I'm very comfortable saying so even sharply. But I don't think they're anti-human. I don't think they want everyone to die. I know there's probably some nuts you can pick on the pro-AI development side who do, but you should try to gently remind yourself occasionally that people can disagree with you without being monsters. I'm pro-AI development, as is everyone else I know who is, because I'm pro-humanity.

Communication, dialogue, understanding the other side: these are all good things.

I know everything seems urgent and existential and you are concerned for the safety of your children. But the way you talk and your responses make it seem like your mind is already dangerously closed off in this area. The warning shots you mention? None of those are surprising for people who are pro-AI development because they only reflect capabilities. Capabilities are the things I want and expect. You don't need to respond or do it right away, but I do urge you to take some time to ask yourself if you're not engaging in confirmation bias.

You'll notice that I'm talking to you on Zvi's blog. I namedrop Yudkowsky to indicate that I have and do read things from the Safety side. Listening to and trying to understand those you might disagree with and taking a little time to consider their perspectives and things they may be correct about don't make you an apostate or traitor to the cause - this isn't a religious debate, remember? Just people trying to figure things out - and, hey, if you're right the worst it can do is make you a better advocate.

I am not trying to tell you that you're wrong about anything at all right now. I'm not trying to change your mind. I'm trying to sincerely tell you that your posts pattern match to zealotry and that you may not be thinking clearly. If you are, great, apologies that I was wrong. Just don't be afraid to ask yourself that question.

Expand full comment

I have rather extensively read the literature on the "opposite side" and you may be surprised to know my origins as a Landian accelerationist. It is because I am well familiar with their anti-human mentality that I am concerned: as Zvi has noted, many of them have "spoken clearly into the microphone."

As a consequence, Sutton has said pretty plainly that a future "without meat humans" is not so bad, Larry Page dismissed preserving humanity as "sentimental nonsense", and Verdon has spoken avidly of "posthuman futures." Marc Andreessen's actual actions against safety, while investing in an AI agent that has since created a half-billion-dollar crypto token, all demonstrate deeply anti-human feelings.

So no, while I'd love to find a better way to safety, and I try to see things otherwise, I do not see how embracing extinction and loss of all beauty and life is a good thing.

I am not opposed to AI as a whole, and I would deeply welcome highly controllable ANIs. The problem is the generality of the systems, and the warning shots are not about capabilities, but about negative influence and behavior.

And notably, nothing you said has shown any real concern for humanity and life.

Expand full comment

I don't agree that a pause is necessarily going to achieve what you hope. In fact I'm worried that pausing has a nonzero probability of shifting the likely outcome closer to kill-everyone. We are already seeing cooption of the AI debate as a front in the hawks versus doves struggle raging in the military, and I'm not sure giving the establishment the time to steer the debate in this direction is going to go well for world stability or maximizing human lives lived.

Here is why I also disagree with your reasoning that kill-everyone is the "logical" outcome. Yudkowsky's argument for inevitability seems sound except for the crucial step where he assumes that increasing intelligence inevitably leads at some point to overcoming hard constraints of physics and computation. Until that "and then magic happens" step is properly justified, the rest of this Safety argument seems isomorphic to a medieval theologian's argument for whatever the Church had decreed was rightthink at the time. Any logical argument can be turned into an argument for any conclusion at all by including a false step, no matter how sound the rest of the steps.

Your willingness to engage in good faith is highly appreciated, though.

Expand full comment

I don't see how you need to "overcome physics"; you just need a combination of reward hacking that puts the goal above human lives and a generally higher ability, and voila: very bad outcomes for humanity. The Compendium is a very good article on this.

We don't hurt animals because we hate them; we hurt them because we happen to want to build cities that wipe out their habitats. And that is why we have caused mass extinction.

A pause ensures that at least humans are needed, which results in human survival.

Expand full comment

Fwiw, the "magic" imo is just overcoming human judgment and oversight, and really that is farcically easy. Arguably social media alone is already doing it, but we have excellent evidence of how well AI could do it:

https://www.psychologytoday.com/us/blog/emotional-behavior-behavioral-emotions/202403/ai-is-becoming-more-persuasive-than-humans

So human extinction would probably look a lot less like Terminator: it might be two AI systems warring and humans dying in the crossfire, or useless humans being convinced that everything is fine while we go to the glue-factory equivalent. The mind-uploading scenarios are one such way.

Expand full comment

TK-421, you're not going to get through to Sean, you can't. In the event that we find ourselves in the other world - where we have scaled current AI to AGI, by adding modalities and possibly new neural networks, of course designed by RSI, and they just reliably do whatever the prompt says, Sean will still be demanding a Pause and still be saying they are about to turn on us. I don't think he will change until he dies.

In that world, he'll be citing as evidence all the times that these new AGI have failed, ignoring that it's 1 in 1000 or whatever, and stating his belief that it's just about to escape or betray us, despite the substantial hardware that would likely require.

A lot more people will have joined Sean by that point - obviously there will be massive shifts in employment, even if there end up being plenty of jobs as medical beta testers (for the new treatments for aging and dementia and cancer) and VR world beta testers and O'Neill Colony project managers and whatever else becomes possible.

Expand full comment

My concern with extinction risk is not a sudden left turn but industrialized dehumanization, which by default seems likely to industrialize humanity out of existence as friction in the system.

Expand full comment

Well, I hope that if that fails to happen you will update your views. Theoretically the opposite can be done: we could make life better for every living person if we had access to a lot more resources, and simultaneously make the cost of keeping everyone alive in optimal health far cheaper in resources spent.

Expand full comment

Of course, I'll be happy to update my view if it happens, but nothing shows this happening. I just returned from giving a presentation in LA on AI safety and watched people freezing to death while self-driving cars drove by them.

Expand full comment

Ironically the problem there is straight human stupidity. Both the people freezing to death and the bottom 50 percent or 75 percent of the entire population - who have votes proportional to their numbers - choose poorly. For example "affordable housing" requirements tacked on to each housing development have the opposite effect, increasing costs and making housing not get built at all. Making it less affordable.

Well, OK, that could happen to humans - we could start out holding all the hard power, yes, and be scammed by AI into giving it up, with bad outcomes.

The answer there is humans have to be smarter, whether this is accomplished by developing AI that can analyze proposals and give an unbiased opinion, neural implants, or larger-scale tweaks.

Expand full comment

I share your worry, maybe with slightly different beats. What incentives does a Pause shift that makes messily discarding hundreds of millions of humans less likely? I see the incentives for the dystopian outcome being strengthened by giving politicians time to divert energy to their pet causes. The capabilities people seem at least to be trying to rush to a place where the dystopia is not the only likely outcome.

Expand full comment

Well, I would say that if people are needed, they retain some value via collective bargaining and resistance. By removing the need for people and fully replacing them, it's a rosy world for robots but not for humans.

Expand full comment

Agreed. But is a pause going to shift incentives to keep humans around? I don't see the mechanism. If anything, human resistance to rolling out AI seems to reduce the value for a selfish agent to keep us around. I agree that a pause appears short term attractive. But real long term changes tend to be driven by shifting incentives so that actors prefer to keep pushing in the desired direction, and we get there (and stay there) by means of many distributed actions. How does a pause do this?

Expand full comment

Sean seems to have changed strongly held opinions at least once before, so I'm not sure Pause is as firmly held as you say.

Expand full comment

I would update if AI showed clearly limited capabilities and full human replacement were thus impossible. That seems highly unlikely so far, but I would update if the evidence indicated it.

Expand full comment

Von Neumann was indeed awake: "Yet it would be impossible not to see it through."

The relevant fact to extract from the man who could conceive of the universal replicator in the 40s isn't that he was aware of some form of existential risk from AI. It's that he saw the risk _and kept moving the work forward_.

That's the same man who advocated preemptively nuking the Soviets before they could build a bomb. Yet he continued his work on computing. If his genius is going to be used as an example of a smart person perceiving existential risk from AI then his actions should receive equivalent weight.

Expand full comment

No, AI safety people would simply agree it didn't make sense to do AI safety things in the 40s. It was way too early to do anything meaningful, except for writing banger sci-fi.

Expand full comment

That's convenient.

The guy who was willing to immolate millions upon millions of Soviets due to the existential risk of them getting the bomb totally agreed with the AI safety perspective, totally, and his chosen actions to move us towards AI / expand concepts like self-replicating machines is totally because he couldn't think of anything better to do. Totally.

Expand full comment

I would have done the same as him. Given the lack of additional information, the best thing to do was to write about it and give warnings, but for all he knew, there was a lot more to learn.

Expand full comment

Naturally. But now you know enough?

Expand full comment

Fuck yeah, it's really overwhelming at this point. If you haven't been convinced, you need to reread Zvi and just collect warning shots, from the Diplomacy AI to sleeper agents (Anthropic) to reward hacking (o1 scorecard) to truth_terminal creating a half-billion-dollar crypto token to the fact that benchmarks aren't holding, etc.

Yes, heck yes. The data is very good that this is not a good idea for humans.

Expand full comment
Nov 23 (edited)

He was exceptionally intelligent, but he didn’t have the experience of a great many other intelligent people in the world. That sometimes counts for something when making decisions about the world.

Expand full comment

Technological progress is a hell of a drug, and being smart doesn't guarantee good decision-making.

Expand full comment

The chess author published a follow-up today which showed that GPT-4o can do much better (but still not that great) if the setup is modified. The main improvement was asking it to repeat back all the moves and add another; fine-tuning and adding examples helped too. Surprisingly, fine-tuning it on 100 games of maximum-power Stockfish was not as helpful as adding 3 examples of legal moves.

So it seems that once again the problem is activating the base model's latent potential. But they point out that OpenAI has been training their models on chess games, while the open models probably weren't trained on games explicitly, which is probably why even their base models don't do great.

https://dynomight.net/more-chess
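
Below is a minimal sketch of that "repeat the whole game, then append one move" prompting trick using the OpenAI Python SDK; the model name, prompt wording, and helper function are my own assumptions for illustration, not the post's exact setup:

```python
# Sketch only: assumes an OPENAI_API_KEY in the environment and a chat model
# (here "gpt-4o" as a placeholder). The prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()

def next_move(moves_so_far: list[str], model: str = "gpt-4o") -> str:
    """Ask the model to restate the full move list and append exactly one new move."""
    game = " ".join(moves_so_far)
    prompt = (
        "You are playing chess. Here is the game so far in standard algebraic notation:\n"
        f"{game}\n"
        "Repeat the entire move list exactly, then append the single best next move "
        "for the side to play. Output only the move list followed by the new move."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = response.choices[0].message.content.strip()
    return text.split()[-1]  # the last token should be the newly appended move

print(next_move(["e4", "e5", "Nf3", "Nc6", "Bb5"]))
```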

Expand full comment

Must confess that tweets with video (especially from TikTok) make me even less likely to follow such links than my already low odds. It's actively painful to be ambushed by that modality; I already automatically disregard 99% of video links to begin with. A big part of why I hang out in the ratsphere is to get a picture of the world mostly through safe and relaxing words, rather than frenetically overstimulating videos, y'know?

Which is a shame because the point is on point. It's not that there isn't intense porn out there, but it's necessarily limited by the restrictions of the physical world. (Or trivially inconvenient guardrails, for AI generation.) The sheer level of...debauchery...one can encounter with erotica completely blows it out of the water. The only limits are your imagination...which also circles back to your recurring point that the product offered by virtual companions continues to be largely shit-tier, compared to SOTA LLMs. That kind of weaksauce writing would get downvoted to hell on Literotica or whatever. I'm rather curious where we end up with a GPT-5-powered Replika, or the "adult" settings fully enabled on frontier models.

Expand full comment

My concern with AI diagnosing is that it would be nice to know its pedigree as much as possible, not in the sense that I need it to have an MD, but I would be more likely to discount it, regardless of accuracy, if I found out it was trained mostly on WebMD.

-

The Last Mile problem makes me wonder if AI alignment will start to be a problem in earnest once we're at the point where it's designing and selling consumer products but isn't properly hooked into the incentive to make the products actually work. So like for example I was looking at automatic cat litter boxes the other day and one of them had a video on their Amazon page that was either noticeably partially made by AI or made conventionally by foreigners who just don't know what real life looks and sounds like, which when I notice it makes me question everything about the product up to and including its design. Now I'm imagining a scenario where, maybe not even as a deliberate scam, [a company uses] an AI [that] designs and sells a product without even checking it does any of the things it's supposed to do (or whether it's able to continue doing them for any reasonable length of time).

If an AI can successfully more or less scam sell garbage products at scale, we can imagine there's eventually going to need to be some sort of regulation requiring there to not be a disconnect between design and function. There's perhaps opportunity for that point to be a bit of a wake up moment.

-

I think the problem with using AI for movies and games (and/but, to a lesser extent, visual art and music), at least this early and near term, is that you're not going to be handed a finished product that won't need additional refinement to become a wholly complete, comparable-quality end product, and that refinement is still going to require some level of expertise in specific parts of the craft. I expect this will eventually be a solved problem but there's going to be a long frictional period where AI isn't going to be able to hide its AI-ness without help.

-

I'd be remiss not to point out again that there are maybe a half dozen ways it could be technically true that Elon Musk is legitimately in the top 20 Diablo 4 (softcore seasonal) world rankings (for the one niche speedrunning part of the game) and still have had 99% of the work done by other people (given, of course, that this could be true for any or all of the top 20). He's using the same stock broken build everyone else in the top 20 is using (this points to two problems with the game broadly: 1) there isn't much competitive balancing being done on the game in the first place and 2) very few people are doing, or need to do, the actual work to figure out how to optimize a character), and he could have an infinite number of people funneling him gear (i.e., all of the time and effort of sorting through drop quality variance - which is pretty significant if you're trying to perfectly optimize - can be offloaded).

I'd be more impressed if he streamed the entire process, which I haven't seen evidence of (to be fair on this point, I haven't looked but also nobody has been eager to be like "and here's where you can see it!"). This is especially weird since he wants X to be a game streaming platform.

Expand full comment

Design/function disconnect has been happening for years at scale, I'm not sure AI is going to materially change the outcome. At this point I am experimenting with AI filtering to surface the real products, because the online marketplaces aren't alleviating the problem.

Regarding Musk, I assume it's an xAI bot doing most of the work, I doubt it is idle amusement.

Expand full comment

The US officially has a "parents must do everything for their kids" cult, which I think explains why parents giving their kids AI to use touches such a nerve. For those who think that's shameful, what do they think of a kid reading a book? Shouldn't the parent be narrating an original story rather than relying on a mass-market commodity to teach their kid?!? Parental quality time is wonderful and indispensable, but kids interact with the world in lots of other valuable ways too.

Expand full comment

Those emails were indeed worth reading. I'm just a random guy, but I came away with a very positive opinion of Ilya - though it kind of reinforced my feeling that maybe he's a really smart guy who doesn't belong in the hard-knuckled power competitions of corporate work.

I had a positive read on Elon there, though obviously Elon behaves in a way that looks out for Elon. But I think he behaved sincerely, especially in the bits that are basically saying to go start a private company if you want to be a private company.

Expand full comment

Eavesdropping:

> I’m more frightened of the AI boys than I am of their AI. The problem is they have corrupted the language around it, so the word “safety” is now meaningless because the doomsters treat safety as them not destroying humankind, when there are very real safety issues that need to be dealt with around bias and fraud and the environment and so on. So it’s difficult to have the conversation now because we don’t have common terms.

(https://www.theguardian.com/technology/2024/nov/23/jeff-jarvis-elon-musks-investment-in-twitter-seemed-insane-but-it-gave-him-this-power)

Expand full comment

I mean, that is the most important thing: not killing everyone. That said, the other parts can also lead to the worst outcomes.

Expand full comment

>As always, ask what it can do, not what it can’t do

This seems wrongly asymmetrical to me. Sure, don’t *just* ask what it can’t do. But don’t I want to know both? As a user, I don’t want to rely on an AI that falls down on simple arithmetic to summarise my accounts, for example. As a policymaker, I’d want to know both in order to assess things like AI risk.

Expand full comment

I mean obviously you want to know both, the same way you do want to know what your country can do for you.

Expand full comment

Ha ok I missed the Kennedy reference before.

Expand full comment

> Well, were worried, but we can definitively include John von Neumann.

Did you see his "Can We Survive Technology?". Admittedly, there's not that much there specifically about computing.

https://sseh.uchicago.edu/doc/von_Neumann_1955.pdf

> "The great globe itself" is in a rapidly maturing crisis—a crisis attributable to the fact that the environment in which technological progress must occur has become both undersized and underorganized.

> In the first half of this century the accelerating industrial revolution encountered an absolute limitation—not on technological progress as such but on an essential safety factor. This safety factor, which had permitted the industrial revolution to roll on from the mid-eighteenth to the early twentieth century, was essentially a matter of geographical and political Lebensraum: an ever broader geographical scope for technological activities, combined with an ever broader political integration of the world. Within this expanding framework it was possible to accommodate the major tensions created by technological progress.

> Now this safety mechanism is being sharply inhibited; literally and figuratively, we are running out of room. At long last, we begin to feel the effects of the finite, actual size of the earth in a critical way.

> Thus the crisis does not arise from accidental events or human errors. It is inherent in technology's relation to geography on the one hand and to political organization on the other. The crisis was developing visibly in the 1940's, and some phases can be traced back to 1914. In the years between now and 1980 the crisis will probably develop far beyond all earlier patterns. When or how it will end—or to what state of affairs it will yield—nobody can say.

> Dangers—present and coming

> In all its stages the industrial revolution consisted of making available more and cheaper energy, more and easier controls of human actions and reactions, and more and faster communications. Each development increased the effectiveness of the other two. All three factors increased the speed of performing large-scale operations—industrial, mercantile, political, and migratory. But throughout the development, increased speed did not so much shorten time requirements of processes as extend the areas of the earth affected by them.

> The reason is clear. Since most time scales are fixed by human reaction times, habits, and other physiological and psychological factors, the effect of the increased speed of technological processes was to enlarge the size of units—political, organizational, economic, and cultural—affected by technological operations. That is, instead of performing the same operations as before in less time, now larger-scale operations were performed in the same time. This important evolution has a natural limit, that of the earth's actual size. The limit is now being reached, or at least closely approached.

----------------------

> Likely to evolve fast—and quite apart from nuclear evolution—is automation. Interesting analyses of recent developments in this field, and of near-future potentialities, have appeared in the last few years. Automatic control, of course, is as old as the industrial revolution, for the decisive new feature of Watt's steam engine was its automatic valve control, including speed control by a "governor."

> In our century, however, small electric amplifying and switching devices put automation on an entirely new footing. (...). The last decade or two has also witnessed an increasing ability to control and "discipline" large numbers of such devices within one machine. Even in an airplane the number of vacuum tubes now approaches or exceeds a thousand. Other machines, containing up to 10,000 vacuum tubes, up to five times more crystals, and possibly more than 100,000 cores, now operate faultlessly over long periods, performing many millions of regulated, preplanned actions per second

> Many such machines have been built to perform complicated scientific and engineering calculations and large-scale accounting and logistical surveys. There is no doubt that they will be used for elaborate industrial process control, logistical, economic, and other planning, and many other purposes heretofore lying entirely outside the compass of quantitative and automatic control and preplanning. Thanks to simplified forms of automatic or semi-automatic control, the efficiency of some important branches of industry has increased considerably during recent decades. It is therefore to be expected that the considerably elaborated newer forms, now becoming increasingly available, will effect much more along these lines.

> Fundamentally, improvements in control are really improvements in communicating information within an organization or mechanism. The sum total of progress in this sphere is explosive.

> Such developments as free energy, greater automation, improved communications, partial or total climate control have common traits deserving special mention. First, though all are intrinsically useful, they can lend themselves to destruction.

> Technology—like science—is neutral all through, providing only means of control applicable to any purpose, indifferent to all.

> Second, there is in most of these developments a trend toward affecting the earth as a whole, or to be more exact, toward producing effects that can be projected from any one to any other point on the earth. There is an intrinsic conflict with geography—and institutions based thereon—as understood today. The technology that is now developing and that will dominate the next decades seems to be in total conflict with traditional and, in the main, momentarily still valid, geographical and political units and concepts. This is the maturing crisis of technology.

> Whatever one feels inclined to do, one decisive trait must be considered: the very techniques that create the dangers and the instabilities are in themselves useful, or closely related to the useful. In fact, the more useful they could be, the more unstabilizing their effects can also be. It is not a particular perverse destructiveness of one particular invention that creates danger. Technological power, technological efficiency as such, is an ambivalent achievement. Its danger is intrinsic.

> For progress there is no cure. Any attempt to find automatically safe channels for the present explosive variety of progress must lead to frustration. The only safety possible is relative, and it lies in an intelligent exercise of day-to-day judgment

Expand full comment