Discussion about this post

Alex

For anything that you're working on, I think there is a moral obligation to ask yourself "Would I be excited if my kids were using this?"

If the answer is "Definitely no.", then you also have a moral obligation to stop working on that thing. Full stop.

I think this aligns with most people's moral intuition, but smart people are unfortunately really good at rationalizing behavior that aligns with their personal interest. That often takes the form of "If I don't do it (or my company doesn't do it), then someone else would do it anyway, so I might as well do a good version where I have control of the outcomes." My primary counterpoints would be:

- People in tech love to talk about the scarcity of talent and the amazing power of agency. You Can Just Do Things, but this is so rare that it is very high impact when it happens. If you have scarce talent, knowledge, and agency, you don't have to use them to do negative things!

- By choosing to do it, you are actively setting an example and social norms that doing it is acceptable. Don't be part of that.

- "I will do this bad thing, because I will maintain control and make it less bad than it otherwise would be" is a plan that rarely survives contact with reality. Your behavior shapes your own character, and your values will often evolve to match your behavior, rather than vice-versa.

- If your "value above replacement" in your current role is really so low, what are you even doing there?

This especially applies to AI researchers, software engineers, product managers, and others in tech working on frontier capability right now. If you are uncomfortable with the end-uses or potential outcomes of what you are building, please just stop and find something else to work on. Your skills are in enough demand that you will certainly find a comfortable income doing something that you think is a benefit to the world.

Ethics Gradient

I am literally an IP Attorney (I am not *your* IP attorney and this is not legal advice) and yeah, I think Zvi's take on the copyright issues here is spot on. OpenAI does not get carte blanche to violate copyright just because they have some (narrowly-tailored!) opt-out button. That's not how this works, that's not how any of this works.

I honestly have no idea what their in-house attorneys were smoking. My best guess would be something about the DMCA safe harbor provisions of 17 USC 512(c), but I would be *extremely* skeptical that those apply in a context in which OpenAI is literally (and knowingly!) generating the infringing content. That was supposed to be the whole point behind the training / generation fair use distinction.
