i agree with being explicit on “given goals” vs “has goals”, and i usually think zvi is good with that, but yeah, he does tend to switch back and forth depending on the point, and i wish it was easier to talk about these different (imo) stages of ai development.
that said, even distinguishing them, “the possibility for getting rich due to morally dubious things will be low, because there will be so many people doing morally dubious things to get rich” does not really inspire hope. in particular since it means the agents (human or ai) working to prevent morally dubious acts will be overwhelmed and perhaps unable to spot the particularly bad stuff (offense generally being easier than defense in most domains, from bio, chem, cyber… to bribing, killing, stealing, etc etc).
Multiple bad actors with their AI agents is a better situation in the sense that no single one will gain all the power. Also it would force society to be prepared to thwart bad AIs, whether that means a "rogue AI" or an obedient AI controlled by a bad actor. Generally the situation wouldn't be much different from the current one - there are many ruthless people with resources, especially in business. They hire consultants, lawyers, lobbyists, etc. AGI would be just another tool. The question is - will AGI be expensive and available only to the rich, or will it be more or less available to everybody?
ahh, i’ve heard this argument before, but never had the chance to dig into it with someone, so hope you don’t mind if i dig deeper here!
there are two parts that i struggle with: the belief that we can be robust to this, and the belief that it can be available to everyone.
first, i agree that being robust to this can help ensure we’re robust to bad actors regardless of whether they’re humans or AI. unfortunately, this seems to assume we can just remedy the problem of “offense is easier than defense”, but it doesn’t address how we do that, nor whether it’s even possible!
* it’s much easier to create a virus to infect one person, than it is to create a cure and give it to everyone
* it’s much easier to hack a company with a single exploit, than it is to find and patch every exploit and prevent every company from being hacked
* it’s easier to bribe/blackmail one individual than prevent all individuals from being bribed/blackmailed
* it’s easier to find a new physics invention and weaponize it, than it is to find all new physics inventions and protect against all of them
i agree a society robust to all this is a safer society, and agree we should try to get there! i think it probably involves reducing the tension, dissent, and motives for bad actions, which would be useful in its own right!
but i don’t believe wanting it makes it possible, and i’m not sure assuming we’ll just do it makes it possible either. can you expand on this? or can we at least acknowledge this is a crux that your line of reasoning depends on?
second, you mention the idea of it being available only to rich people or available to everyone. i am unclear how you imagine it being available to everyone though, and would love to hear a clear story of how that’s done! are you imagining this running on local hardware, or running in the cloud?
running more and more powerful models requires many gpus, and i’ve not seen a way around that. llama/mistral 70b requires an a100 or multiple gpu cards, so we’ve already set a lower bound of “must have a few thousand dollars” just to get access to a non-frontier model. more powerful models have many more weights, with correspondingly higher cost. and if you amortize that cost with multi-batch inference or shared gpu pools, you’re back in the realm of “hosted behind an api in the cloud on someone else’s system” that i thought you were trying to avoid and that you feared.
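to make that lower bound concrete, here’s a rough back-of-envelope sketch in python (the fp16 assumption of 2 bytes per parameter and the model sizes are illustrative, and it ignores kv-cache and activation memory, so real requirements run higher):

```python
import math

# rough back-of-envelope: gpu memory needed just to hold model weights.
# assumes fp16 weights (2 bytes per parameter); ignores kv-cache and
# activations, so actual requirements are somewhat higher.
A100_GB = 80  # memory of a single a100 (80GB variant)

def weights_and_gpus(params_billion, bytes_per_param=2):
    weight_gb = params_billion * bytes_per_param  # 1e9 params * bytes / 1e9 bytes-per-gb
    return weight_gb, math.ceil(weight_gb / A100_GB)

for size in (7, 70, 180):  # illustrative model sizes, in billions of params
    gb, n = weights_and_gpus(size)
    print(f"{size}b params: ~{gb:.0f} gb of weights -> at least {n} x a100")
```

even before worrying about utilization, a 70b model at fp16 needs roughly two a100s’ worth of memory just for the weights, which is where the “few thousand dollars” floor comes from (4-bit quantization shrinks that ~4x, but the scaling with parameter count stays).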
and your ability to effect change is influenced by the amount of inference you can do. more gpus == more work: more influence campaigns, more agentic work in whatever domain you want, more probing of binaries looking for exploits, etc. for someone else to fight back, they’d need to devote comparably many gpus to it. so it turns into a capitalist game again of who has more money for gpus, perhaps biased by the relative cost multipliers of offense vs defense. but a game of “who has more gpus” turns back into “rich people win”, with the added difficulty of also needing to ensure society is robust to bad actors.
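as a toy sketch of that cost-multiplier point (k here is a made-up illustrative parameter, not a measured quantity):

```python
# toy model: if each unit of offensive compute takes k units of defensive
# compute to counter, a defender needs k x the attacker's gpu budget to
# keep up. k and the gpu counts are made up for illustration.
def defender_gpus_needed(attacker_gpus: int, k: float) -> float:
    return k * attacker_gpus

for k in (1, 5, 20):
    print(f"k={k}: 100 attacker gpus -> {defender_gpus_needed(100, k):.0f} defender gpus")
```

for any k noticeably above 1, “defense wins” quietly reduces to “the defenders are much richer”, which is the worry above.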
so in summary (thank you for reading this far, if you have!), i’m curious how you imagine making society robust to this and solving (or sidestepping) offense being easier than defense, and curious how you imagine “power available to all, not just the rich” playing out in a way that doesn’t just privilege the rich again. concrete scenarios would help me, since i’m having trouble visualizing how they’d come about, but maybe i’m just not creative enough :)