Discussion about this post

Laszlo:

> Or they embed malware in a Stable Diffusion extension pack and tell victims ‘You have committed one of our sins: Art Theft.’

That is not at all what happened. The dev (AppleBotzz) is the one who embedded the malware (and then modified the package twice to make it harder to detect). Then he tried to false-flag it, making up a nonexistent anti-AI hacker group and pretending they were responsible. This can easily be confirmed by looking at the repo history: the malicious code was already in the very first version.

This is a very ordinary "malware creator creates and distributes malware" story; it has nothing to do with AI haters.

Mikhail Samin:

> Artificial Gerbil Intelligence achieved internally at DeepMind, via deep reinforcement learning to train a virtual agent to imitate freely moving rats with a neurochemically realistic model of their brains. Insights are then gained from looking at the model. Gulp?

My impression (I haven't looked at the paywalled paper itself, though!) is that they trained a virtual agent to imitate freely moving rats, and that the internal activations of the trained network turned out to be a much better predictor of real rat neural activity than the movements themselves.

I think this might be a good example of the idea that neural networks learn the generators of the data they're trained on, and not just its surface statistics.

(Someone please double-check the paper, though.)
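
To make that comparison concrete, here is a minimal sketch on purely synthetic data, not the paper's method or data. It assumes a latent "generator" process that drives both movements and neural activity, trains a small scikit-learn network on an imitation-style task (predict the next movement frame from a short history), and then checks whether the network's hidden activations or the raw movements are the better linear predictor of the synthetic neural activity. All variable names, noise levels, and sizes are made up for illustration.

```python
# Toy sketch only: synthetic latent dynamics, synthetic "neural" data,
# and a small imitation-style network. Nothing here comes from the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
T, n_lat, n_move, n_neur, window = 4000, 5, 12, 40, 10

# Slowly varying latent state: the hypothetical "generator" of the data.
z = np.zeros((T, n_lat))
for t in range(1, T):
    z[t] = 0.95 * z[t - 1] + 0.3 * rng.normal(size=n_lat)

# Movements are a noisy readout of the latents; "neural" activity is a cleaner one.
movements = z @ rng.normal(size=(n_lat, n_move)) + 2.0 * rng.normal(size=(T, n_move))
neural = z @ rng.normal(size=(n_lat, n_neur)) + 0.1 * rng.normal(size=(T, n_neur))

# Imitation-style task: predict the next movement frame from a short history.
starts = np.arange(window - 1, T - 1)
X = np.stack([movements[s - window + 1:s + 1].ravel() for s in starts])
y_next = movements[starts + 1]
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
net.fit(X, y_next)

# Hidden-layer activations of the trained network (ReLU is the sklearn default).
H = np.maximum(0.0, X @ net.coefs_[0] + net.intercepts_[0])

# Compare linear decoding of neural activity from (a) the current movement
# frame and (b) the network's hidden activations, on a held-out tail.
targets, cur_moves = neural[starts], movements[starts]
split = int(0.8 * len(starts))

def held_out_r2(feats):
    model = Ridge(alpha=1.0).fit(feats[:split], targets[:split])
    return r2_score(targets[split:], model.predict(feats[split:]))

print("movements   -> neural R^2:", round(held_out_r2(cur_moves), 3))
print("activations -> neural R^2:", round(held_out_r2(H), 3))
```

If the network has internalized the latent dynamics rather than just the frame-by-frame statistics, the activation-based decoder should come out ahead in this toy setup, which is the shape of the result described above. But please defer to the actual paper.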
