
I worry we're talking past each other. Devin HAS intentionality. It is trying to satisfy the user's goal. And it has some training to know what it isn't supposed to do. It is also limited in what it is capable of doing. DevinV2 will be more capable, so less limited in what it could do.

My claim is that what bad things it can currently do are limited by two factors: capabilities and alignment. It will become more capable, so the capability limitation is dropping over time. My other claim is that alignment is hard, and I would be quite surprised if, once this is widely deployed and several orders of magnitude more people are using it, no one ACCIDENTALLY causes Devin to do something one might call 'unethical'.

As I said above, I don't believe chat models are particularly well aligned, contrary to your position. If you believe otherwise, fine.

I also don't agree with your position that creating a general-purpose coding agent with broad access to the internet and no sense of ethics is going to be a good idea for humanity. I agree with Zvi, though, that I'm not particularly worried about DevinV1.
