
You nailed it! The powers that be will definitely keep us employed, as "work" is too useful as a means of social control. The nightmare I see coming is the movie Office Space, but instead of a flesh-and-blood Bill Lumbergh, you've got ChatGPT telling you, in some robotic monotone, "And if you could go ahead and come in on Sunday too, that'd be great."


Zvi writes: "ChatGPT lets students skip make-work. System responds by modifying conditions to force students to return to make-work." The underlying assumption seems to be "any school work that can be done by ChatGPT is make-work." This strikes me as extreme.

The fact that you can train a computer to do X does not imply that humans can now do X without any training. More importantly, it very much doesn't imply that humans can learn to do Y (which has, until now, been strongly correlated with X) without training in X, because there isn't a process that trains for Y but not for X. This probably means there's fertile ground for research there -- to use a subject I understand better than writing, the analogy would be "how do you train general numeracy without spending so much time on the sorts of arithmetic, algebra, and calculus that a calculator / WolframAlpha can do?" -- but until results exist there, the claim "computers can do X, therefore making students do X by hand is make-work" does not follow.

(This does not apply to actual bullshit jobs, whose stated goal is the execution of the task at hand rather than any changes wrought in the mind of the worker.)


There's a nightmare I don't think you've quite articulated - the job of developing bullshit rules for others to follow is itself a bullshit job that can be automated by GPT-X.

It's obvious now - the hypothetical AI has turned all matter in the universe into paperclips because another AI has turned all energy in the universe into paperwork.


It’s possible that bullshit employers can retain a lot of the desired bullshit function of their employees by not acknowledging the reality that employees can use GPT-X to do most of their work.


I don't really follow your arguments.

1) Homework might be 'make-work' for you because you're really smart. For a kid who isn't as smart, it might (MIGHT) actually help them master the concepts they're supposed to learn. Using ChatGPT to produce the homework defeats that purpose, and so the child is worse off using ChatGPT than just mastering the material.

2) In my experience, there aren't a lot of jobs that are completely bullshit. There's almost always some element of the job that is difficult to automate (typically something involving coordination). If AI can provide automated coordination, most companies will transfer the people out of those jobs, or fire them. That's an economic win, IMO.


There is something almost Randian about the contrast between "bullshit work" and what I guess is something like "real producers" -- a little normative kiss if you feel you are part of the second group (or perhaps not that little).

Yes, people are asked to do all kinds of nonsense, but bullshit does not arrive out of nowhere. A great part of this seems like a structural feature of a capitalist economy (perhaps bound up with the Protestant ethic of needing to earn), since without money circulating the whole thing goes belly up.

So whatever pressure AI mounts, it mounts, to my eye, on the system as a whole (unless a society is very agile or very forward-looking).


Is writing articles about 'bullshit jobs' a 'bullshit job' or not? :-)


One thing that's weird about the whole bullshit jobs thing is that tons of apparently useful but unnecessary customer-facing jobs have been eliminated over the last decade or so, in favor of technology that approximately everyone hates--instead of being able to call the company and get a receptionist who knows how to put you in touch with the right person, you get a voice mail menu that might eventually connect you to a call center employee with a script they'll get fired for deviating from. My own employer got rid of our front desk entirely, so now when visitors show up in our main office, they often have to ask around (bugging people with offices close to the front door) to figure out where they're going.

I suspect the same forces that lead to the expansion of bullshit jobs also lead to the elimination of useful jobs--the feedback to the people making the decisions about hiring/firing/funding is broken in some way. There's a manager who gets to put "cut customer support costs by 50% by outsourcing them to a call center in India" on their performance review, but there's nobody whose performance review is dinged for the corresponding lost customers in any legible way, so useful jobs get eliminated. There's another manager whose budget and importance are determined by the size of his staff, but nobody who's incentivized in a legible way to decrease the fraction of employees whose actual job is to underline someone's importance.


I like this post, though I disagree with it quite a lot.

I think many bullshit jobs are soul-crushing because they require the person doing them to spend the majority of their time on the low-value-add tasks rather than the high-value-add ones. That's because (public or private sector) bureaucracies are, basically, superhuman intelligences composed of ordinary humans, some IT, and a lot of slow-moving process rules. And for many of the bureaucrats writing the rules, "have a human do it" is the default answer, because their modal IT procurement fails. They know how to yell at a human to get them to do stuff.

So what I suspect you'll see happening is that a small portion of people in bullshit jobs will liberate themselves by using new tools, but most won't, and will stay stuck in the current way of working for far longer than you think. For example, when MSFT ships GPT-5 embedded in Excel, it will transform the lives of those bureaucrats who already experiment with learning new features in Excel, but a shockingly large part of the economy won't even know those features exist until they become unavoidable.

I mean, we have people who still run their businesses on _paper ledgers_. That's gonna last for a while, even if there's another part of the economy where we have AIs building us flying cars made of nanodiamond and aerogel.


Wait. How much of the economy do we actually think is bullshit? Is there evidence for this? A post?


I don’t know of anyone in the business world who takes the Bullshit Jobs hypothesis seriously. There is too much pressure on margins in *all* businesses over the long term and on *most* businesses even over the short term.

Samo’s Law of Bullshit is nothing more than a corollary of Parkinson’s Law: “Work expands so as to fill the time available for its completion.” At a for-profit organization, it is the job of managers to be ever vigilant against the threat posed by Parkinson’s Law.

Bullshit jobs created by government regulation are real, to be sure. But that’s nothing more interesting than deadweight loss imposed by legislatures or bureaucrats who are generally terrible at pricing externalities, because of course they are. That’s an unfortunate reality, but to a business, regulations are a cost to be managed just like any other cost - that can be done well or badly, but it’s not a source of bullshit jobs, at least not from the standpoint of the bottom line.

I have a colleague whose job is contract management for our entire company. It’s a tedious and thankless job, and, like the hypothetical security guard jobs discussed in another comment here, seems like a bullshit job. The thing is, those jobs seem marginally important at best, until they aren’t, at which point they can be revealed to be very, very consequential. Risk management is a crucial undertaking for any business that expects to survive forever (most do), and those who toil to make sure a firm’s contracts and insurance and safety and finances and supply chains stay in order are doing very consequential work whose value is only revealed in the fullness of time, when crisis arrives.


Every time I read a post like this I get lost in thoughts about all the non-bullshit, absolutely-can't-be-done-by-a-robot-or-AI jobs that are going unfilled. Over the last year, our family has struggled to find:

--someone to paint the house ($20,000 job; only 2 of 15 people even responded, 3-month wait from those who did)

--chimney inspector (2-month wait, $500 for a 30-minute job)

--chimney repair (another 2 months, $2,500 for a 2-hour job)

--vet for dog (first three weren't accepting new clients)

--groomer for dog (first four weren't accepting new clients)

--mammogram (booking four months out)

--tire installer for winter tires (6-week wait)

--propane line inspection to use heater (6-week wait)

--window repair (2-month wait)

--kitchen backsplash tile job (4-month wait)

Why aren't people doing these jobs instead of bullshit jobs!?!?!?


I hate the hearts! Why is that? It's not money. I'm not here to make money.

The hearts are part of why news and bounded distrust are about more than just the ad money involved (though I agree ad money distorts things more). Bounded distrust is also about getting hearts. Even if we are paying for the news, the news will still be giving us what we want, telling us the story we want to hear. And yeah, I want good stories too.

Well, and finally, I don't check email so often, and silly "someone gave you a heart" emails are more of a pain than a pleasure. I'm a grumpy old man, 'get off my lawn', give me no hearts.

Turn off the hearts; they're somehow the wrong type of feedback. (Would only negative down arrows be any better? It seems unlikely.)


There are few actual bullshit jobs, but the ones that do exist emanate either from government regulation or from costly social signaling. Both of those burdens are likely to grow to meet the increased capacity for bearing them. Costly social signals that become cheap due to AI cease to be good signals, and people will find new signals that are more costly. Social signaling is a positional good, but there's no good way to tax it the way other positional goods should be taxed.


The responses I've seen from other university instructors so far have gone along two paths, both of them pretty expected.

The first is going back to more high-stakes, in-person exams. No tech. Sit and write for two hours, answer the questions, etc. This way of teaching has been really unpopular for quite a while, even though (in my opinion) it's a really good way of actually determining student achievement and learning. This should have the effect of reducing the make-work for students at the high end.

The second path is extreme amounts of scaffolding. I teach in the humanities, which involves a lot of writing. One system for teaching is scaffolding assignments so they build on each other, with each new assignment required to incorporate instructor/peer feedback and correction. ChatGPT might make this easier, but barely, for the moment. This structure should see only a limited reduction in make-work. It works quite well for teaching writing to the median student.

I'm lucky to teach a visual subject, film and media, so my assignments for next semester involve a lot of visual description, shot analysis, and formal description. I've also limited the films students can write about to things released in the last year.

I also want to chime in to agree with Elena. Much of what you call make-work is make-work for you, but for the median student it's called "practice." People learn to write by being forced to write over and over and over again until they at least begin to approach the limits of their natural ability, especially if that writing is in an unfamiliar mode (journalism, science, etc.).

Here I suppose the goal could be to get students to the point where their ChatGPT inputs are sufficient to spit out what they actually want to say, and where they can parse the output to determine that it's correct.
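To make that concrete, here's a minimal sketch of the workflow I have in mind, assuming the OpenAI Python client with an API key in the environment; the model name and the film-analysis points are placeholders I made up for illustration. The student supplies the claims, the model supplies the prose, and the student still has to check that every claim survived:

```python
# Sketch only: the student specifies the points, the model drafts the prose,
# and a crude check flags points that may not have survived the draft.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# the model name and the example points below are placeholders.
from openai import OpenAI

client = OpenAI()

points = [
    "the opening long take establishes the protagonist's isolation",
    "shallow focus separates her from the crowd in the subway scene",
    "the final shot inverts the framing of the first",
]

prompt = (
    "Write one paragraph of film analysis that makes exactly these points, "
    "in this order, and no others:\n- " + "\n- ".join(points)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
draft = response.choices[0].message.content
print(draft)

# The "parse the output" step: flag any point whose key phrase is missing,
# as a cue for the student to re-read -- not a proof of correctness.
for point in points:
    key = " ".join(point.split()[-2:])  # crude: last two words of the point
    if key not in draft.lower():
        print(f"CHECK: couldn't find '{key}' -- did this point survive?")
```

The skill being graded shifts accordingly: choosing the points and auditing the draft are the parts the student still has to do themselves.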
