35 Comments

Gmail marked this email as phishing for me (which is relevant both directly and in that it updates me further towards skepticism of Google's ability to be competitive with AI)

Expand full comment

Tried the Zapier ChatGPT plugin. You authorize only specific actions like writing a draft email and even then you always have to go to Zapier website to review and confirm the action. Doesn't seem too risky to me.

Expand full comment

I’m experimenting with using gpt inside of a workflow.

I’m still on the waitlist for plugins and api access to gpt4 after nearly 2 weeks. Wondering if anyone here has gotten approved and how long they waited.

Expand full comment

> Zapier offers to create new workflows across applications using natural language. This sounds wonderful if you’re already committed to giving an LLM access and exposure to all your credentials and your email. I sincerely hope you have great backups and understanding colleagues.

I think you’re very confused about what’s going on here. (I use Zapier myself. No other affiliation.)

Zapier is a service for making simple automated scripts involving API interaction with any of a large number of other online services. Zapier has a nice UI for making and editing these scripts. That’s the core Zapier service, very simple, and it really hasn’t changed since the company was founded in 2011.

Note that there is no ML / AI in this story—except that a few of the thousands of APIs they support happen to be APIs into ML models. For example, you can make a Zapier script that connects to DALL-E and requests a picture from a certain prompt, and then posts that picture to Facebook, or whatever. Obviously DALL-E would be using ML in this process, but Zapier does not.

Then the new thing you’re talking about here is that Zapier has added an LLM such that you can type what you want in natural language, and it tries to guess the script that you wanted, and it initializes the script-editing UI with that script guess. Then you use the UI like normal, look at the script, and you can edit it or delete it or use it as-is or whatever you like.

So the LLM is involved in creating the script, but once you have the script, there’s no more AI involved.

It’s a lot like, if you’re writing the code for a new password manager browser extension, and you’re using GitHub Copilot to help write the code, and when it’s done you put sensitive passwords into your browser extension. That’s a perfectly fine and normal thing to do. It is completely different from giving an LLM access to your passwords!!

Expand full comment

Have a story on early alarm bells wrt Google deleting inactive accounts earlier this year. Google Voice stopped working on my desktop out of the blue late last year in a strangely specific way (how and how I fixed it can be found where I made an angry how-to-fix post after I accidentally fixed it troubleshooting a similar problem with Discord several months later: https://support.google.com/voice/thread/203602686/re-no-sound-when-making-call-but-audio-setting-s-work-and-get-end-call-sound-when-hanging-up?hl=en ). After a few simple attempts to fix it failed I just kind of shrugged and got on with life since I could still use it for texting and it was still otherwise working on my phone.

But every so often I'd waste a few hours of an afternoon trying to troubleshoot it and it's impressive how awful the Google support page for Voice is for how important of a product it is for my (and presumably others') professional life. You can get there from the above link, it's not only profoundly useless but it's apparently not even officially supported? Like there was one volunteer guy who was pretending to be customer support and is straight up just harassing people who are asking for help.

The two apparent exceptions to this are where Google employees from other products would chime in to A) remind people that you shouldn't be using Google Voice for professional or business applications because it's not an officially supported product and they won't help you if you're locked out of an essential phone application and B) that your Google Voice number/account can be functionally deleted without warning after some period of inactivity (I think it was as quickly as 6 months?).

So naturally this came as a bit of a surprise after having used a separate Google Voice number for a little over a decade as my professional/high-priority/must-be-least-vulnerable-to-spam phone service. And this isn't even something like email where there's at least theoretical alternatives; there doesn't really seem to be anything else that is a well-function fusion of VOIP and phone service.

Expand full comment

Also re: El Chapo threat model, there probably isn't as much mystery as you'd expect, I would expect someone like El Chapo almost certainly has access to a smartphone even if he isn't supposed to. Prison I'm working at now is having ongoing problems with people using drones to airdrop in A) drugs and B) phones.

Expand full comment

"This is the same Eric Schmidt that told Elon Musk, when Musk asked how we were going to ensure the humans were going to be okay, that Musk was a ‘speciesist.’ " <- That was Larry Page

Expand full comment

Former regulator here. I’ve seen many bad regulations and some good ones. Good regulations target leverage points and get enough buy-in that they are implemented in a strategic way rather than via tactical compliance paperwork. I’m working on an essay that walks through a specific example. In general, as I see it, a good regulation:

• Is targeted. 


• Engages the governed, listening to their concerns and gaining their trust as a governed party to ensure buy-in at a strategic level instead of tactical compliance in a check-the-box exercise. 


• Is not make-work and is reasonably time-efficient, allowing sufficient flexibility in implementation for different types of businesses and organizations. 


• Ideally makes required something industry participants mostly wanted to do anyway but couldn’t since doing so would have put them at a perceived or actual market disadvantage. In essence, regulation in this situation is a key for overcoming a prisoner’s dilemma.


• Helps ensure safety for stakeholders that have insufficient leverage to negotiate with more powerful stakeholders on their own (consumer protection regulation is an example). 


• Is forward-looking, with consideration of how processes and systems are evolving and how the regulation might apply to their anticipated future states.  


• Costs businesses and societies less than the risk it protects against. 


• Has manageable second-order effects that do not undermine the intent of the regulation (e.g., by driving activity outside of the regulatory jurisdiction en masse without actually reducing risk in the system as a whole).

I'm hopeful that AI regulation can get it right, because good regulation, while a hard problem, is not impossible. (And those draft EU regulations, hoo-boy....)

Expand full comment

On the topic of prompt injection being hard to safeguard against, someone turned this into a game where each level adds another one of those safeguards you mentioned. https://gandalf.lakera.ai/

Expand full comment

If a video should be listened to at 1.25x or faster then it should be a podcast. If a podcast should be listened to at 1.25x or faster it should be a blog post. If a blog post can be skimmed then it should be a tweet or a comment or a paragraph. I'm really not happy people keep using the wrong mode for communicating, but luckily whisper.cpp means I can turn blathering into transcripts that I can then skim fast.

Expand full comment

Hi there, it is worth correcting the statement "Over in Europe, draft regulations were offered that would among other things de facto ban API access and open source models" - this is misinformation that has been roundly de-bunked by us working on EU AI regulation, there is 0 evidence of this in the European Parliament's proposed text on the EU AI Act. See below for more info:

https://www.linkedin.com/posts/tyulkanov_ai-activity-7066108711152283648-Z_s5?utm_source=share&utm_medium=member_ios

https://twitter.com/j2bryson/status/1659548440110456832

Expand full comment

> #8: a browser that incorporated text ANALYSIS — annotating text with markers to indicate patterns like “angry”, “typical of XYZ social cluster”, “non sequitur”, etc — could be a big deal, for instance. We could understand ourselves and each other so much better.

What this instantly called to my mind was Orwell's "1984". The Party had a problem where they didn't have enough trusted people to watch all the telescreens at once.* We have almost solved that problem now. Soon our LLMs will be able to identify wrongthink merely from transcriptions of audio.

* If this was a problem, and not an intentional deception intended to lure people into disobedience.

Expand full comment