Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/ai-133-america-could-use-more-energy?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
>Even in quiet weeks like this one
I can't tell if this is a joke or just a literal reference to solely AI news.
The second one.
I have considered open-sourcing a couple of small codebases that I vibe coded, but then decided that was a category error: the entire point of those projects is that they are personal tools. If someone wants a similar tool, instead of downloading mine from Github, they should get AI to code their own, tuned to their tastes. That's what's special about these things: they aren't particularly good or polished products, they're ones designed to your personal wants and needs.
So that's my little contribution to a lack of explosion of new Github repos.
I agree. Coding up a tool to answer a quick question doesn't warrant making a GitHub repo; if it's something I keep iterating on, that does. My distribution still looks something like 10 coding projects per one that is turned into a git repo, and 10 private repos per one that is made public. The difference is that now my private one-off projects have grown from short scripts to thousands of lines of code with greater scope.
On the lack of new software: the level where AI provides the greatest benefit is with novices. For people like me who started from essentially zero, I’ve mostly vibe coded a bunch of small, individual personal tools rather than anything meant for public release. These aren’t really visible anywhere, but if you could see them, I think you’d notice an explosion—by multiple orders of magnitude—of small, just-functional tools that provide personal efficiency gains.
Yesterday I did a very simple three-word search on Google (not the AI version of Google) for a friend I have not contacted in quite a while. One of the results was a Google AI "analysis" of those three words (her name and the town she lives in) which told me that her husband had died. I knew this, but the details were stunning in their errors.
First, it got the date of his death wrong, even though the Google search results included an obituary that gave the correct date. The date declared by the "analysis" was off by one year, and I cannot figure out where it came from.
The second error was his name. The last name was correct, but the AI decided that his first name was "Thomas" when he was actually "Kenneth". There are no Thomas relatives that I know about.
How can an AI that has access to the most relevant information (a real, published obituary) for the simple search that I requested, get two very important facts completely wrong?
This is not good. It is like I ask for the atomic number of an element in the periodic table, and it gives me the information for an entirely different element. Or if I ask for information about a famous politician, and the name and date of death are completely incorrect.
As long as these sorts of responses keep popping up, for no identifiable reason, I will consider the Google AI completely "Unfit for Service" of any sort. I cannot figure out how to test other AIs to see if they produce the same errors, but I am not going to waste my time, because this event shows me that the AI with probably the largest and best set of training data cannot answer without getting basic facts wrong. I just don't think I can believe anything, ANYTHING, that ANY of them say to me.
I would really recommend trying out the other models. Google's free AI on web search is very bad, and so is the GPT you get when you're not logged in to ChatGPT. The best models we've got are mostly accessible for free with an account, so I would really recommend you check out GPT-5, Gemini 2.5, Claude 4.1, or Perplexity. All of these AIs can look things up on the internet now, and Perplexity is probably the best at it.
I tried out AI for the first time this past winter. I wanted to see what it could do. Although I have been a computer person for nearly 60 years, and started my career at CMU, which is one of the founding organizations for AI research, my computer experience has been in technical computing (engineering calcs), not AI, so I thought I would see what all the hype was about.
I asked the AI for help planning a sailboat trip up the east coast of the US, from my home in Stuart, FL, to NYC. I gave it a little background about the boat and my boating preferences and asked for some suggestions for the trip. I expected it either to give me a travel-guide response, or to start asking me questions to figure out what, exactly, I wanted to do.
Instead, I got a route that started with me heading SOUTH from Stuart to Miami, and then north to NYC, in travelogue fashion. Nothing useful for boating, just a top-ten list of sights along the way. When I asked why it wanted me to head SOUTH, it apologized and suggested that I head SOUTH, AGAIN, on I-95(!!) to Miami, and then head NORTH on I-95 in my sailboat (!!).
I gave up at that point. I have asked this question again, recently, and it no longer tells me to use I-95, but rather gives me some very basic info about boating along the waterway. It is better, but not much.
I have been told that I need to be much more specific when asking these sorts of questions, and to provide details of what I want without expecting to be questioned by the AI. These experiences, and the one I just had with a name and a city, make me extremely skeptical about the ability of AI to help do any useful work.
I used to be chief of a group at the NRC that evaluated technical computer models for nuclear power plants. We had to determine whether the models accurately and conservatively predicted the behavior of the reactor under normal operating conditions, transients, and accidents, so that we could judge whether the reactor design was "safe" (a rather nebulous concept in itself).
I am very concerned that MANY organizations are touting the use of AI to design, manufacture, construct, and operate new nuclear power plants. I have seen one ad by a national laboratory that touts the abilities of its AI products to do ALL of these things. It terrifies me that this is happening, especially since they want to use it to build new reactor designs that were "imagined" back in the 1950s-60s, but never commercially successful. Without any actual experience with these new designs, I do not understand how any AI can conceive of all of the ways that humans can make mistakes that will need to be prevented and mitigated.
I was part of the team at the NRC that spent 10 years looking at the design of the AP600 and the AP1000, which has actually been built in several countries. It took us this long to get comfortable with a number of different aspects of the design that were different from designs currently in operation. MANY aspects of this design, including the fuel, are identical to those of currently operating designs which are well understood. But when someone comes up with something that is claimed to be "absolutely safe", I am inherently skeptical, because I KNOW that humans can make the most amazing mistakes, in design, construction, maintenance, and operation, to create conditions that NO ONE EVER CONSIDERED POSSIBLE before.
How does an AI that is based on word associations make decisions about interactions among multiple mechanical, hydraulic, nuclear, and electrical systems inside a nuclear power plant? Especially when there are tech staff all around carrying tweaker screwdrivers and diagnostic tablets that they can plug into the control systems. (Whoops! Oh $hit) The AI doesn't really understand how these systems work, and worst of all it does not CARE what happens when something breaks and a big mess is created.
You can take these same concerns and apply them to airplanes, pharma, building bridges and other important structures, etc, etc. Correlating words and phrases to avoid having to do engineering calculations of complex systems is a recipe for disaster.
I saw the most insightful quote about AI the other day: "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes." This is from a non-tech person who is concerned about AI.
As an engineer, I might want AI to do all of my mundane paperwork (expenses from travel, hotel reservations, my health insurance paperwork, etc.) so that I can concentrate on the technical details of my job, not take over the technical details of my job so that I can do the mundane paperwork. But right now, I cannot rely on AI even to tell me the real name of a friend of 60 years who recently passed away, even when it has a copy of his obituary staring it in the face.
Instead, we get all sorts of predictions from the titans of AI about how all the code will be written by AI by 2030, or how artificial superintelligence is right around the corner, but we don't have to worry because they are sure it will all turn out fine. I'm sorry, but we have all seen the movies where things go very wrong, and even some recent live video of things going really wrong (the explosion of one of the Fukushima reactors is something every nuclear engineer will remember vividly until the day they die).
In addition to Holden's remarks, I would recommend re-trying your experiments whenever a new version is released.
As Zvi keeps on telling us, the important thing is **not** the performance at any one point in time, but the trend.
Re: the lack of increase in GitHub repos/apps/etc., I'm quite skeptical of this being a good indicator, as there's a lot of noise and not much signal.
The average github repo is a fork with no changes. The average "real" repo is a dead project. Fork rates would be mostly driven by popularity, which is indirectly driven by quality/completeness, itself indirectly driven by LLMs. But if the stats exclude forks then I'm wrong on this.
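For what it's worth, the fork question is checkable directly: GitHub's public search API supports a `fork:false` qualifier on the `/search/repositories` endpoint, and the response's `total_count` gives a (rough, rate-limited) count of matching repos. A minimal sketch, where the function name is mine:

```python
import calendar

def monthly_repo_query(year, month, include_forks=False):
    """Build a GitHub search-API URL counting repos created in a given month."""
    last_day = calendar.monthrange(year, month)[1]  # handles 28/29/30/31-day months
    fork = "true" if include_forks else "false"
    q = f"created:{year}-{month:02d}-01..{year}-{month:02d}-{last_day:02d} fork:{fork}"
    return "https://api.github.com/search/repositories?q=" + q.replace(" ", "+")

print(monthly_repo_query(2024, 1))
```

Fetching `total_count` from this URL for successive months, with `fork:false` versus `fork:true`, would show whether new-repo creation is actually flat once forks are excluded.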
Open-source projects outside the AI space can be hostile to LLM contributions.
The average mobile app is slop, and this pre-dates LLMs by a long time. The review process can be a bottleneck, especially with Apple, as it seems to be entirely done by humans.
LLMs will let you cram more into your slop apps, though. Or more charitably, if your new popular app can be developed 30% faster now, does that motivate you to make a second app, or focus on just building out more features to increase retention?
I wonder if this could be tracked by average bundle size. That too would have noise problems (npm bloat or nearest equivalent for whichever app frameworks are popular now).
I expect there should be an increase in the number of interactive websites, but this is very hard to track due to the decentralised nature, not to mention automated SEO spam which again pre-dates LLMs. These mostly won't be on Github.
From personal experience, I've mostly used LLMs for one-shots and on proprietary web apps so neither would be 'tracked'.
Could LLM agency be held back by chatbot/instruct training?
I've been wondering this for a while, and I have a couple of reasons to suspect so. The short version: the models are always attempting a 1:1 match of user message to instruction-following reply, and considering that "everything affects everything", I expect this instills some literal servitude in the models, along with a bunch of other weirdness.
The "thinking model" is an interesting hack, but 'agency' seems to mostly come from tool use, (extensive) prompting and a while loop.
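That recipe really is about this simple. A minimal sketch of the "tool use + prompting + a while loop" pattern, with `call_model` as a canned stub standing in for a real LLM API so the loop can run self-contained:

```python
import json

def call_model(messages):
    """Stub LLM: requests one tool call, then gives a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"final": "2 + 3 = 5"}

# Tool registry: names the model may "call", mapped to real functions.
TOOLS = {"add": lambda a, b: a + b}

def run_agent(user_goal, max_steps=10):
    messages = [{"role": "user", "content": user_goal}]
    for _ in range(max_steps):              # the (bounded) while loop
        reply = call_model(messages)
        if "final" in reply:                # model decides it is done
            return reply["final"]
        result = TOOLS[reply["tool"]](**reply["args"])  # dispatch the tool call
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "step budget exhausted"

print(run_agent("What is 2 + 3?"))
```

A real agent swaps the stub for an actual model call and a larger tool registry; the loop structure is unchanged, which is the point — the "agency" lives in this scaffolding, not in the model's training objective.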
So, is there a different way, starting from a base model, to actually get to an agent model, where thinking and goal-following is the core behaviour and "chatting to a user" is just another tool call?
It seems obvious enough, but the lack of models like this (there could be small ones I don't know about?) suggests that either it doesn't work (well), or alignment/safety is more difficult, or the labs are just keeping it cooking internally. Or path dependence.
Huh, I am skeptical of any BigCo that does the "now with AI!" thing, but will definitely try out the eBAI when that's ready. For once it's an area I have prior expertise in and so can grade accurately...I used to sell stuff regularly back in the day. Stopped about 8 years ago when it became too onerous vs holding down jobs ("is this uncertainly-paid labour during my valuable Slack actually a worthy tradeoff?") + by that time the professionalization of listings had really gotten rolling, so it got harder and harder to compete as an amateur vs. high-production-value cutthroat-priced listings. If an AI can really make it "one click" levels of simple though...? Quite possibly a profitable Levels of Friction arbitrage! I've been selling my stuff to pawnshops instead these days, and as much as I appreciate tax-free same-day cash in hand ($some > $0 from tossing in donation bin), my inner capitalist always sighs bigly cause I know I'm letting stuff go for relative fire sale prices. Could be getting significantly more by directly reselling via marketplaces like eBay...but only if the labour cost to making the listings drops a lot. Otherwise even my meagre cashier's hourly is usually too high to justify such effort.
Noted failure mode for comparison: Poshmark. Has already been very easy to create quick listings there for years, but as a result, the listings are full of low-effort shit that's a huge pain to sort through. Loooot of fake listings (including actually-fake items, not just accounts hawking vapourware), likely bot-generated even if not literally AI. This does select for a less-sophisticated userbase that occasionally sells things for way less than the "market clearing price", but the search and transaction costs are so high to find such gems that the whole endeavour is mostly pointless now. I'd be very sad if eBay went that way.
A note that the donation box is in expectation worth more than $0 due to the tax deduction.
Unfortunately my current tax situation doesn't make itemizing worthwhile (iirc the standard deduction is going up again too?) + literal donation bins don't give a tax-deductible receipt anyway. Could lug stuff to Goodwill or whatever, but...anything that a pawn shop wouldn't take, isn't gonna be worth much as a tax writeoff either. It's tough when one tends to buy with the mindset of a vintage collector, but the default market price at such places doesn't always take that into account...
I do notice that a really good book review - "human distillation of a book" - sometimes makes me want to read the original. Scott's obviously done/hosted a thousand such, you don't do them often but often do them well, and occasionally fanfiction of a well-known historical book serves the same purpose. (The SCP take on Moby Dick is pretty cool!) If AI could serve this function for lesser-known books that don't have human-made hooks, that'd be a good potential source of mundane utility. The time and effort investment in Actually Reading A Real Book is sufficiently high compared to, say, sampling new music or food that I don't wanna just walk in blind...don't have that sort of leisure anymore. Never could get the hang of reading ebooks, it's just way more effortful than leafing through a physical copy for some reason. (Which feels highly ironic as a DWATV/ACX reader, obviously...then again, the "A Map that Reflects the Territory" mini-book collection was way more accessible for me than reading the same essays on LW, so who's to say other blogs wouldn't be similar in print?)
Jippity, Jiminy, Claude, and Clippy, with their pet llama and pet [g]rok.
It sounds like an old-school 1960s Disney comic about a plucky band of pre-teen anthropomorphised animals, possibly gophers.
Whenever I read about people doing something almost deliberately stupid & verging on cartoonishly evil, like that ICE raid, I can't help but wonder if it's a Motive Ambiguity thing (https://thezvi.substack.com/p/motive-ambiguity). Think young men competing to play Chicken, or "hold my beer and watch this", or getting into fights in bars to protect the honor of their lady, except with government officials competing against each other to prove their loyalty to the new administration.
And how do you prove your loyalty? By doing something so stupid there's no other possible reason you could be doing this. If you want to look loyal, "stupid but enthusiastic" / "his heart's in the right place, he's just a little overenthusiastic", is far preferable to "He never sticks his neck out & he never charges into battle". In a Trumpian context, this manifests as something like the Based Ritual (https://www.richardhanania.com/p/the-based-ritual), where people compete to show "My only crime is loyalty to the cause!". Conspicuous consumption of a sort, same way 20-ish years ago people might compete to conspicuously fly American flags in their yard, or 5 years ago they might compete to conspicuously display "In this house we believe..." signs in their yard. Well, nowadays they compete to be conspicuously Trumpian.
It’s time to abolish ICE