The ground truth for colleges before the advent of AI was that higher education was about 80% signaling and only 20% learning, but most professors and college administrators refused to believe this or any evidence supporting it. "Why do my students cheer when class is cancelled, since it deprives them of the opportunity to learn? Ah, they must be stupid, or at least excessively short-term thinkers," they seemed to believe.
As Bryan Caplan has pointed out, AI is exposing a lot of the institutional rot within academia. Professors like to think employers valued college because it expanded students' minds, but college was always actually about proving a student's 1) conformity, 2) conscientiousness, and 3) baseline intellect. With AI threatening signals 2 and 3, colleges will either have to face the music or risk becoming irrelevant.
That is mostly not true. Some students' college experiences, e.g. humanities degrees at selective universities, were always in large part a signaling thing. But this was the exception, even though it's what most people idly think of as the essence of college.
For one thing, most universities are not selective. Getting into them says little about you as a student. Getting out was primarily a testimony to doing enough actual learning in college to get good grades. Perhaps one could say that college grades are a "signal", but they are a signal that until now was a good proxy for actual learning in college (even if there were a few exceptions, like old-school cheaters, or the day class was cancelled and students had their grade based on the material of 9 lectures rather than 10).
This was true even of many majors at selective universities. An Ivy League CS major would have better career opportunities than an Ivy League gender studies major, even though both went through the same selection process to get into college; the only difference was their course choices and grades. Even for two majors which were both challenging - say CS and chemical engineering - one would find it much easier to get a job in the tech and chemical industries respectively than with the reverse majors, because employers care about actual subject knowledge rather than just "conformity, conscientiousness, and baseline intellect".
Hard disagree. Read Bryan Caplan's book, *The Case Against Education*, if you think college is mostly about building knowledge. He makes a powerful case that it's almost all signaling (80% in his estimation, which I think is overly conservative).
Elite colleges send a signal about what type of teenager you were to be able to get into that good college. Hardworking + smart high school students tend to be hardworking + smart individuals in the workforce too, so that can be useful for employers. As far as I know they usually don't learn things that are radically different from students at e.g. state colleges, at least in their normal classes.
As for certain majors improving job prospects more, there's several things going on there. Many people don't respect liberal arts as much as STEM, so they'd intrinsically see a gender studies major as less worthy than someone who did a hard sciences or engineering degree. They might think the classes were all trivial, so the signal would be less effective. From my perspective, employers give a good degree of leeway as long as the majors were in the same general genre as what they're looking for, e.g. most hiring managers for programmer jobs would be OK with hiring someone with an engineering degree as long as they could prove they could code in the interview. There might be less flexibility in the initial HR screen, but that's just a case of HR checking boxes and not communicating with the hiring manager. Really, CS is a particularly bad example here since there's a lot of reverse-elitism when it comes to college degrees, and it's also a field I work in.
I think you’re misunderstanding professors. Most of us have always known that most students aren’t that interested in learning. Still, we think that learning can happen for all of them, that some of them can be motivated to learn some things, and that the ones that are in fact motivated to learn can get a lot of learning done, between the classes and the companionship of the other students who are learning things.
I think you're misunderstanding the point I was trying to make. I'm not talking about poorly-motivated students in gen-ed classes, I'm talking about the entire edifice being rotten. Much of the "learning" is mere rote memorization for the exams. Even if it's genuinely learned, most or all of the facts taught will not be relevant in the student's future job, and even if they are, they're almost certain to be forgotten unless used again and reinforced over and over after the class is done. Almost nothing taught in colleges passes that bar.
Memorization and facts aren’t the point of education (except in a few specialized cases like doctors learning anatomy). It’s about developing certain skills. I don’t care if students remember any facts they learned in my class a few years later - I care if they are better at reading something, understanding the arguments within it, and coming up with ways to challenge or extend them. They have to temporarily learn some facts about the particular readings to do that, but the facts aren’t the point.
This is the point educators typically fall back on when knowledge retention is shown to be abysmal: that it's not really about learning facts, it's about learning broad principles or even simply "learning how to learn". But it's similarly not really true -- humans are very bad at cross-applying principles or knowledge from one area to another. Teaching students about the Scientific Method could be a revolutionary step, but the overwhelming majority of kids who learn it treat it like just another vocabulary term and move on with their lives. I encourage reading Caplan's book if you have disagreements on this point.
Teaching students *about* scientific method does just turn it into a fact. I don’t see any reason to think that factual learning is relevant. In fact, it’s only because people like the idea of facts that people think there is such a thing as *the* scientific method.
What’s important is getting people to practice learning things. And in higher education (above undergraduate degrees), to get people enculturated in the social practices that support that.
One thing not mentioned but that I found to be true when teaching in universities: cheating was already common, and the university didn’t want anyone punished for doing it. It was very hard to penalize cheating, even when “If I google the question your answer is the first that comes up. Five other students have the same answer word for word. It is also incorrect.” There were boards that considered plagiarism cases, and they almost never punished anyone (one student in particular of mine spent two years on one and was disgusted.)
It’s been a long time since colleges were willing to punish an ambulatory 5-6 year annuity and risk having them drop out or transfer.
the end result may be a better equilibrium. people are going to move to in person exams now.
cheating during an exam (copying from other students or sneaking in outside material) does tend to be punished because there’s no sympathetic gray area.
plagiarism, copying homework can easily have plausible excuses or sympathetic stories.
Yea, that pretty much became my answer: go ahead and cheat on homework because you are going to crash upon the rocks of the exam. The school won’t let me fail them for plagiarizing, but if they fail the exam because they didn’t know how to answer questions that were on the homework it is pretty hard to argue with that.
Note: I never had non-in-person exams. I am amazed anyone ever thought that would work.
This was also my experience teaching at an R1 university (in the immediately-pre-GPT era). Nobody -- not the students, not the professors, and not the university -- actually wanted to go through with the official policy on plagiarism and cheating. Reams of paperwork, angry parents, angry students, bad for business. Even in cases where the infraction was comically over the line, the punishment was, at worst, a zero on the assignment or the exam, even if the official policy said the punishment should be expulsion!
Also, not mentioned in the post, but usually the pre-ChatGPT cheaters were not good students, they were the bad ones. So a high base rate of cheating was not that harmful to the signaling equilibrium. Maybe it still isn't.
The Buddhist monk example missed something that adds to it: people fake being Buddhist monks! https://www.nytimes.com/2016/07/02/nyregion/fake-monks-begging-buddhist.html
Maybe cheating college students are fake students.
Podcast episode for this post:
https://open.substack.com/pub/dwatvpodcast/p/cheaters-gonna-cheat-cheat-cheat?r=67y1h&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
I was a professor until last year. The reason no one is doing anything is that everyone already gave up a decade or two ago. Cheating has been rampant for a long while. I was in engineering, and Chegg made it impossible to give homework. My wife taught languages, and Google Translate made that impossible. Those two are much older than AI. I also served on the board of review (basically judge and jury for cheating cases). The burden placed on professors to bring cases forward was huge. Then you got to the board, and the twists that professors and admins would get themselves into to justify the very obvious cheating would make any sane person lose whatever belief in universities they had left. Even when students were found guilty, the penalties were so pathetically low that cheating was definitely the right choice from the students' standpoint. I also knew of one case where a student was found guilty, went to complain to the Dean, and the Dean unilaterally and without explanation reversed the decision.
Students think it is all a waste of time and they are mostly right. With that said, the top ~10% of my students did their own work and were both very smart and hard-working. Towards the end I geared all of my teaching efforts towards them.
I said people gave up a long time ago, but actually, most profs seemed to have their heads buried very deep in the sand. I have never met a more out-of-touch group of people.
Universities are very backwards professionally in all sorts of ways. I think it's because professors are pictured and picture themselves as detached intellectuals, but in reality, they have become team leaders and managers with a complex set of organizational powers and responsibilities.
The high school insisted on giving my son a Chromebook for his foreign language class, which he then used to run Google Translate on all his assignments. He thinks school is a joke, and maybe he's not wrong. If the professionals in charge of it don't care, it's hard for me to make him care.
I went and dug up the Magic Earring while reading this, prepared to link it here.
A large part of me agrees that college is bullshit signaling, and this is all going to cause us to finally deal with that. Not "the system will collapse some day in the far future" or "people can keep pushing off dealing with this problem for a generation or two." The system has already collapsed and we're all walking down through the valley in its former shadow.
Some time in the next year, or two, or it might even take three, someone of sufficient authority is going to point out it's all dead, and everything will implode on itself. And part of me will be happy to see it all die.
But something valuable will be lost along the way. There is some genuine value in learning to write your own thoughts. I had a much better understanding of history when I had memorized all the world capitals and locations because there was a skeleton for me to hang everything on. It's like the first example of things you don't need to bother learning because you can look them up, like I don't know anyone's phone number any more.
Employers in CS careers are starting to move towards in-person interviews again. The remote interview is being destroyed. (Some kid like Lee is thinking this is a positive development, but it's just burning down the commons.) We may need to just have a future of physical presence and no phones or computers in schools. Half the fight will be "no, your IEP doesn't let you bring in arbitrary electronics that no one else has" and it may simply not happen at all because of that.
A calculus student is plugging your diffEq into the AI and pasting in the answer. We need to figure out the reason we're teaching them calculus. If it's a filter to weed out the dumb kids, admit it. If these people are going to be designing airplanes and bridges, then they'd better be able to tell when the AI is doing something reasonable.
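Concretely, "telling when the AI is reasonable" can be as simple as substituting the answer back into the equation. A minimal Python sketch with sympy; the ODE and the candidate solution here are made up for illustration:

```python
# Sanity-checking an AI-produced ODE solution by substitution with sympy.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + y(x), 0)          # the assigned ODE: y'' + y = 0
candidate = sp.Eq(y(x), sp.cos(x) + sp.sin(x))  # what the AI handed back

# checkodesol substitutes the candidate into the ODE;
# (True, 0) means the residual is identically zero.
print(sp.checkodesol(ode, candidate))
```

A student who can't do even that check has no business trusting the pasted answer.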
The earring isn't in our control. People already worry about Google putting its thumb on search results, for good reason (good reason to worry that they're doing it, and good reason to worry it's a bad thing).
i occasionally audit some courses at my state uni, especially biology stuff where getting lab access is genuinely really valuable and underpriced (if you're auditing, and therefore not getting 'credit'), but also as of late a few humanities courses.
whenever i had an assignment that was conducive to LLM assistance, i simply included a link to the LLM convo in the submission, so the professor could actually see exactly what i did and what the LLM did and how the submission was produced
in the beginning, when i was the only one using LLMs, the professors tended to be super excited. in all honesty at least some of my motivation was showing off what you could do with these LLMs if you actually learned how to use them. they were suitably impressed.
now it's gotten all strange. i'll get assigned a reading and told to write about it, and at the top of the assignment it'll say "ABSOLUTELY NO AI", and i'll think to myself "um... no?". and i'll read the reading, and talk about it with claude, and discuss ideas for an essay with him, and collaboratively write the essay over a long session with lots of drafts and back-and-forth, and then submit the essay with a link to that claude convo.
Then I'll end up having a very awkward conversation with the professor where they try to threaten me with honor code violations and losing my future degree, and I have to remind them that I'm not getting a degree, I'm auditing their class because i genuinely wanted to learn the subject matter, and (as is evident from the linked conversation) i succeeded at this objective and did indeed learn, and so from my end i am perfectly satisfied with the exchange.
they probably think i'm an annoying little twit, but i can feel that it's only going to get weirder from here on out. soon i suspect they'll be forcing me to install ludicrously invasive spyware on my computer, as a few different departments have already started doing, not just for final exams but for all assignments. I think this is a far more likely direction than the universities moving back towards in-person assignments.
of course, by then AI will be good enough that the "audit courses at very good unis for dirt cheap" lifehack will no longer actually be valuable, and i'll just have AI teach me
but i don't envy the people actually trying to navigate getting certification
As someone who cares about learning the material instead of the grade, you're the opposite of most other students.
I’m hopeful that soon only capable and motivated students will go to college.
More than half of US adults aged 25-54 have a college degree, and there’s also a significant number who went to college but didn’t get a degree. As you correctly point out, the majority of students are not interested in learning in that setting.
My problem is that while AI can do 90% of my job better than me in 1/100th the time, everything I work with is confidential, so work doesn't let us use LLMs.
Any reason not to self-host open source models?
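For what it's worth, a minimal sketch of what that can look like, assuming an Ollama server running locally with its OpenAI-compatible endpoint (default port 11434; the model name is just whatever you've pulled):

```python
# Querying a locally hosted open-source model so nothing leaves the machine.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local server, not OpenAI's cloud
    api_key="ollama",                      # required by the client, ignored locally
)

response = client.chat.completions.create(
    model="llama3",  # illustrative; any locally pulled model works
    messages=[{"role": "user", "content": "Summarize this confidential doc: ..."}],
)
print(response.choices[0].message.content)
```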
I don't like it, but I'd guess we don't actually care about productivity. It's one of those jobs that's about box-ticking.
Sounds like a solution to a problem.
My current proposal for employers, facing weakened signals from university degrees and online interviews is an in-person test:
1. Put the interviewee in a room with a one-hour assignment related to the work you do. They have access to project materials and a computer with ChatGPT or similar frontier-level AI model.
2. Describe the outcome of the project: what it is, what it's needed for, how it fits into the work of your org.
3. Let them work, then come back and evaluate.
The evaluation criterion isn't their ability to complete the assignment (that should be assumed, given AI assistance); it's the quality of their interaction with the AI. Did they lazily ask for the solution, or did they use the AI to help them break the problem into parts and then work through those parts strategically to produce the final output? Do their AI interactions show passivity (simple prompts, outputs uncritically Ctrl+C'd and Ctrl+V'd), or did they interact with the AI as a work partner?
Hire for curiosity, ability to learn, cognitive humility, and demonstrated depth of thinking and reasoning. That person is going to stay valuable to you over time much more than "new grad who knows how to use Excel and was pleasant to interview."
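To make the scoring concrete, here's a toy sketch in Python; the criteria names and equal weighting are purely illustrative assumptions, not a validated instrument:

```python
# A toy rubric for scoring a candidate's AI-interaction transcript.
from dataclasses import dataclass

@dataclass
class TranscriptScore:
    decomposed_problem: bool  # broke the task into parts vs. one "solve it all" prompt
    challenged_output: bool   # verified or pushed back on at least one AI answer
    iterated: bool            # refined prompts based on earlier results
    synthesized: bool         # final deliverable is more than pasted AI text

    def total(self) -> int:
        # bools count as 0/1, so this is a simple 0-4 score
        return sum([self.decomposed_problem, self.challenged_output,
                    self.iterated, self.synthesized])

# Example: decomposed and iterated, but pasted outputs uncritically -> 2
print(TranscriptScore(True, False, True, False).total())
```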
"Did they lazily ask for the solution, or did they use the AI to help them break down the problem into parts and then strategically work through parts to make the final output? Does their AI interactions show passiveness (they write simple prompts and uncritically regurgitate Ctrl+C, Ctrl+V the outputs) or did they interact with the AI as a work partner?"
Within six months, every potential interviewee is going to know that this is what you're looking for and will train themselves accordingly. Then it will no longer be an indicator of curiosity etc.
If AI capabilities weren’t advancing, we could probably work out how to incorporate AI in undergraduate teaching.
Trouble is, capabilities are advancing so fast that it is very unclear what things will look like in a couple of years’ time. Maybe AI will be so superhuman that it will be “aww... some of those monkeys have degrees! How adorable.”
Yup! In my comment I had to embed "(Setting aside ASI and machine takeover for the sake of this discussion)"
The part that infuriates me about all of this is that many teachers have done nothing to adjust their courses to AI, even though it's fairly trivial. Just do in-class writing assignments! If you want to be really sadistic (i.e., make sure students do some work), give them a list of 10 potential essay topics ahead of the in-class essay! They'll use AI to go through the ins and outs of each one. Tada, all fixed. You can have AI grade the assignments, too. Or the other students. Go back and reread the teaching sections of Zen and the Art of Motorcycle Maintenance. It's basically all there.
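For the AI-grading piece, a minimal sketch, assuming an OpenAI-compatible chat API with OPENAI_API_KEY set in the environment; the model name and rubric text are placeholders, not recommendations:

```python
# A minimal sketch of AI-assisted grading of in-class essays.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Grade the essay 1-5 on each of: thesis clarity, use of evidence, "
    "engagement with the assigned reading. Reply with the three scores "
    "and one sentence of feedback per category."
)

def grade(essay_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

print(grade("In-class essay text goes here..."))
```

You'd still want to spot-check a sample by hand, since a grader like this inherits the model's blind spots.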
I do have sympathy for the idea that schoolwork teaches effort. We should make sure kids still have things to work hard on. But I do wonder if all of this ends up immensely raising the value of homeschooling or unschooling, because then the kids have to figure out and pursue what they actually want to, instead of using AI to check off boxes they don't care about.
My courses have long held students accountable for completing the readings and showing up to class. I have quizzes, discussions, and exams. My institution has assigned me an online course this semester. The powers that be do not give a shit about cheating. I have no resources to prevent it -- not even a crappy lockdown browser (let alone proctoring services). There are students who live on campus but take online classes. The university touts the modality's "accessibility." The Dean of Bullshit/Distance Education brags that online classes are the fastest to fill. Students like them because they offer the flexibility to cheat.
I just got through a Zoom meeting with one such student, who could not explain basic concepts in a 100-level course. His eyes kept sliding to his monitor to see his ChatGPT history. Trapped in an obvious contradiction, he indignantly said, "You can't prove that I used AI."
Fair point, online courses make it much harder, as do lack of support from the administration and the issue of students coming in without the right knowledge. I was too hasty in my original comment.
You seem to think that professors are there to teach. But that is usually not the case: they are there to do research. Teaching is a mild annoyance, the cost of doing business. Do you know of the Student-Professor Non-Aggression Pact? It reads as follows:
"I, the professor, promise not to make this class too demanding. As long as you show up, give a token effort on the assignments, and do not get on my bad side, you will get a good grade. In exchange, you agree not to complain about me recycling the same slide deck, exams, and assignments I've used for the last 15 years, you will not overly burden me with requests for office hours or extra help, and you will give me a good teaching evaluation at the end of the semester. Thus, I will have more time and freedom to write grants, conduct research, and otherwise further my career."
Cur non utrumque? (Why not both?)
It’s not trivial to shift from take-home writing assignments to in-class ones. First, the point of the two types of writing assignments is different (even back in 1950) - the in-class writing assignment is about what you can recall and get out quickly, while the take-home one is about how you can plan and structure a paper given some time. Second, the in-class writing assignment takes up time that you had originally planned for something else. Third, reading and grading in-class writing is much more painful than reading and grading something that students had some time to think about (even though a good chunk of them didn’t do much more or better).
All of this can be fixed of course, maybe even without too much difficulty, if you’re willing to say that just like science classes often have a four hour lab session in addition to the regular class time, then humanities classes have a four hour writing lab session in addition to regular class time. (And students have access to a locked down lab computer with all the documents they are studying and some secondary sources, but no AI that lets them skip the steps of coming up with an idea, coming up with arguments for it, and structuring the writing.)
You're right, I definitely overstated my case. I shouldn't have called it trivial. I'm comparing it to the prospect of having a broken course, which it seems is what many teachers have currently. It seems like trying out new options of some kind as fast as possible is preferable to what we have now.
Students would also miss out on the practice of organizing their time, although I'm not sure I ever successfully learned that in school.
It might be interesting to see what new forms of writing come up in this context. I could imagine a 1-2 day long "write-athon" or classes that are entirely writing labs.
Yes this is the way I want to go with some of my classes but I’ll have to see how scheduling that can work.
"Using AI is going to be the most important skill, ..."
Agreed, and I want to emphasize that WHAT it is sane to learn has changed drastically with the availability of AI.
One key point that I think could use more emphasis is: When the students graduate, they are going to _continue_ to have access to AI. (Setting aside ASI and machine takeover for the sake of this discussion) Like books (as you cited "this pattern dates back at least to Socrates"), calculators, and Google before them, the existence and availability of AI massively changes what is worthwhile to learn "by heart". Now, I'm treating AI as a tool here, and that approximation is going to go out the window once ASI agents exist, but, for the moment, AI is _also_ a tool.
Essay-writing skills that were sane to teach in 2010 are now as mistaken a target as manual multiplication of twelve-digit numbers.
As you wrote: "Whereas as times change, the portfolio of skills and knowledge shifts."
As Yglesias wrote:
"The AI challenge for higher education isn’t that it’s undermining the assessment protocols (as everyone has noticed you can fix this with blue books or oral exams if you bother trying) it’s that it’s undermining the financial value of the degree!"
Yup. Replicating the language skills of ChatGPT "by hand" is not a paying proposition. (To the extent that jobs survive at all, not the way I'd bet past 2035) Higher education has to find something _else_ to teach, something that employers are willing to pay for.
Edit: And, of course, _right now_ the reliability of AI is still sufficiently flaky that being able to notice that it is hallucinating is important. (If it stays under human control) I don't expect that to last. At some point I'd expect generally available AIs to be as or more reliable than a conscientious and competent human worker, at which point a human auditing the AI's responses may, on average, _introduce_ errors rather than eliminating them.
We can do some totally wild things with current AI, and it’s looking to get better soon (like: combining it with an automated theorem prover is almost within reach, and I must be one of maybe a couple of dozen people in the entire world who would actually use that).
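To give a flavor of what that combination buys you: the AI drafts a proof, and the prover machine-checks it, so you never have to take the AI's word for anything. A deliberately toy Lean 4 example of the artifact such a pipeline would produce:

```lean
-- A toy theorem of the sort an AI + prover pipeline would target:
-- the AI proposes the proof term, Lean verifies it mechanically.
theorem toy_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```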
How you cite AI, in an academic context, is tricky. I guess I could go with writing it up like an edited journalistic interview, with me as the interviewer and the AI as the interview subject. We have existing conventions for how you attribute that.
[ some years, I edit the proceedings of the Security Protocols Workshop, and we publish an edited transcript of the Q+A session after each author’s presentation. We declare up front that these are edited, in much the same way a TV station would edit an interview before broadcast. Could treat an AI session similarly, I guess. ]
There was some news that Rohan Pandey had quit OpenAI to work on Sanskrit OCR. I can see the case for that:
(a) if your timescale to AGI is really, really short, the race is on to get stuff into the AGI’s training corpus that you want it to have
(b) the corpus of Sanskrit literature is one of the top priority items (and Rohan can safely assume that other people are frantically working on the other priority items)
Which is to say, one of the responses we could have is to forget about teaching _people_; the race is on to explain everything humans know to AGI.
I had a friend who was a former monk. He told me stories of sitting in “meditation”: he got really good at falling asleep in a sitting position. Still, when the monks leading the meditation saw he’d fallen asleep, they’d hit him with a stick to wake him up. So, yeah, some monks definitely cheat at meditation, particularly when it’s done in an organized environment.
I think the author is too critical about the value of university. In technical fields, possibly with the important exception of programming-related stuff, university seems rather valuable. I'm basically unaware of any major scientist who dropped out of their undergrad. This might not be the case in programming/computer science, where it's easier to learn and start building projects on your own.
In (non-programming related) technical fields, university seems to add value, possibly as a commitment device to "force" you to go through a large body of knowledge and techniques which you need to master before you can make any relevant contribution. In this respect, if AI breaks the commitment device, it is a real problem.
Yep. My thought as a linguist is there, too. If you cheat, you may get by here and now, but you've broken your understanding of the subject (and that's why I had to essentially relearn statistics way later than when I was officially taught it).
> I'm basically unaware of any major scientist who dropped out of their undergrad.
Science these days is highly credentialist. Without an appropriate affiliation, it’s hard to get published, even hard to get access to some source materials (like data) or lab resources, and even harder to get funding. Without a degree, hard to get the affiliation.