Discussion about this post

Jeff Rigsby

"Leaking"... to the board of directors?

Ffs, we are doomed

John

I don't know any of the facts of this particular case, but a general observation: as a securities litigator, my understanding is that OpenAI employees who raise concerns about potential security risks have strong whistleblower protections under Sarbanes-Oxley. The Supreme Court's decision in Lawson v. FMR LLC establishes that employees of contractors to publicly traded companies like Microsoft are covered by Sarbox's anti-retaliation rules.

Hence, if an OpenAI employee has a good-faith belief that the company is not properly addressing significant security risks, it follows that information potentially material to Microsoft's shareholders is being withheld. That is a sufficient basis for employees to report their concerns internally, to law enforcement, or to Congress; they do not need to prove an actual securities-law violation to be shielded from retaliation. One major limit, however, is that Sarbox does *not* protect disclosures to the news media or other channels outside the organization or government.

The SEC has broad authority and substantial resources, and it is sometimes happy to explore exotic theories of securities fraud. An OpenAI whistleblower arguing that the company is concealing critical risks could get the SEC's attention on the theory that information is being withheld from Microsoft's investors. While this isn't legal advice for any particular situation, would-be whistleblowers at OpenAI should know they may have a path to protect themselves if they feel compelled to speak out.

(Disclaimer: not legal advice. Employees in this situation should obtain counsel before acting)
