When I did my Computer Science degree, the vast majority of courses were 50% final, 30% midterm. Even programming exams were handwritten, proctored by TAs in class or in the gymnasium. Assignments/labs/projects were a small part of your grade, but if you didn't do them, the likelihood you'd pass the term exams was pretty darn low.
I personally dislike placing a heavy emphasis on exams. Assignments/projects have been consistently the most enjoyable and rewarding parts of the courses I've taken so far in university.
It's a shame that they are also way more susceptible to cheating with AI.
The problem with exams is that everyone has a bad experience with a poorly written one. Well-written exams will have questions that test students at different levels of understanding across the whole curriculum.
So a student who only understands the basics should be able to answer most of the easy questions and students who have a deeper understanding can answer the harder ones.
Well-written exams should feel pretty fair and leave students feeling like the result they got is proportional to the effort they put into studying the material (or at least how well they personally felt they understood the material).
> It's a shame that they are also way more susceptible to cheating with AI.
They were more prone to cheating before AI, too.
Cheating has always existed at some level, but from talking to a couple of friends who teach undergrad-level courses, the attitudes of students toward cheating were changing even before AI was everywhere. They would complain about cohorts coming through where cheating was obvious and rampant, combined with administrations that started going soft on cheating because they didn't want to lose (paying) students.
AI has taken it further, with students justifying it not as cheating but as using tools at their disposal.
I was talking to my friend about this last week and he was frustrated that several of his students had submitted papers that had all the signs of ChatGPT output, so he asked them simple questions about their papers. Most of them “couldn’t remember” what they wrote about.
It's strange to me, because when I went to college getting caught cheating was a big deal that resulted in students being put on probationary watch and being legitimately scared of the consequences. Now, at many schools, cheating is routine and students push the boundaries of what they can get their classes to accept, because they have no fear of any punishment. YMMV depending on the institution.
I went to college as a MechE, so I'm unsure if compsci was different. But overall, all the "fun" projects were labs. We had three semesters of hell, all 3 with 2-3 labs, and we wrote 20 pages or so for EACH lab every week (usually in a team of 2-3).
Also way more susceptible to cheating in traditional non-AI ways. And your mark ends up depending a lot on how much time you have to invest independent of how good you are at the course material.
Assignments and projects are great for learning, but suck for evaluation.
That is the traditional view, the view of those who want to improve their own knowledge and abilities, and presumably the view of those who would like to consider the degree to be a meaningful credential.
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
You say that, but I was Class of 2013, aka during the massive hiring boom of the teens. I tutored a friend of mine with a "Ds get degrees" mentality who eventually graduated and now works an ass-in-seat job for Booz Allen or one of those types. I used to joke about it with another friend that his diploma ought to include an asterisk and a half dozen other names for how much we ultimately did of his graded take-homes. I'm pretty sure he makes about the same as me by now, purely on tenure.
Personally, I dropped out despite a full ride+ because why would I put in work for a no-name state school when I already had an FTE job as a developer out of high school anyway.
Turns out fraudulent action can still get the bag.
I agree with your premise about why accurate evaluation matters, but your post comes across as pretty bitter. Unless you’re at the job with him, you really don’t know that it’s a “I just need to show up” job he has at Booz Allen. Perhaps he has other great traits like a high social or emotional intelligence that make him good at his job beyond whatever was being evaluated on those projects you helped him with.
Part of the purpose for evaluation is to provide feedback. I'm not going to claim that the form of feedback is great, but it does offer motivation to improve.
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
Then I suppose we can go back to having computer labs that can only access whitelisted domains and other study materials. Students code there to ensure no cheating.
The labs I was in weren't connected to the Internet at all, only a local intranet. They were all running pre-Oracle Solaris if memory serves, so I'm probably dating myself a bit.
When I did tertiary studies in programming there wasn't AI, but we did our programming exams with pencil and paper. The "beneficial" prep we had, and that I'd had since high school, was using punch cards, with 24-hour turnaround time for compiles. That really makes you think, and you learn how to desk-check even thousand-line programs. Intense focus, structuring for readability (to catch typos), and simplicity (to catch logic errors) helped enormously. It was not unusual to change a hundred lines of code and submit knowing that it wouldn't compile, but that it would throw up the other errors I couldn't find. Our exams would give us 4-6 attempts for a clean compile AND correct output. The only space where I experience the same challenge now (40+ years later) is embedded code. Desktop and web stuff have LSPs and dynamic reloads and interpreted code (not a thing for me when learning) with instant feedback.
Lots of skills from those old days that have been lost/ignored in the pretence of productivity.
Yeah, I really valued learning to code when I didn't have the internet available. It taught me patience and deep thinking, problem decomposition and organic (brain) execution.
Syntax seems like a stupid thing to test in university level courses. That's trade school stuff. And I don't mean that as a criticism of trade schools, they just have a different focus.
Syntax is not the focus of your testing, but it’s often a pre-requisite to be clearly and accurately speaking the same language. Think not of taking off points for missing a semicolon but instead understanding the difference between the syntax for a method call and a property access. The different syntax conveys different meaning and so we should expect some basic level of accuracy to the language in question. At least that’s how I see it.
Knowledge is built on foundations. Knowing syntax in one language is necessary to be able to do anything practical, which interacts with theory. You build valuable schema of the world by iterative theory and practice.
Half related: reminds me of my physics teacher's test of how observant we were. The extra credit question on the test was "what is your teacher's favorite color?", which she had so far given no indication of. But while watching us she was walking all over the room in every possible direction, because the answer was on a piece of paper taped to her back.
In one of my classes the approach was the opposite: I'm expected to do Ph.D-level work as an undergrad, and to use AI.
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules is an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D-level work is the best method as of now; I've learned more by using AI to do things above my head than by hardcore studying for a semester.
If it’s any consolation, this problem of discrepancies in rules is very common at universities now.
I teach at two universities in Japan and occasionally give lectures on AI issues at others, and the consensus I get from the faculty and students I talk with is that there is no consensus about what to do about AI in higher education.
Education in many subjects has been based around students producing some kind of complex output: a written paper, a computer program, a business plan, a musical composition. This has been a good method because, when done well, students could learn and retain more from the process of creating such output than they would from, say, studying for and taking in-class tests. Also, the product often mirrored what the students would be doing in their future lives, so they were learning useful skills as well.
AI throws a huge spanner into that product-based pedagogy, because it allows students to short-cut the creation process and thus learn little or nothing. Also, it is no longer clear how valuable some of those product-creation skills (writing, programming, planning) will be in the years ahead.
And while the fundamental assumptions behind some widely used teaching methods are being overthrown, many educators, students, and administrators remain attached to the traditional ways. That’s not surprising, as AI is so new and advancing so rapidly that it’s very difficult to say with any confidence how education needs to change. But, in my opinion at least, it does need to change at a very fundamental level. That change won’t be easy.
It's not inherently contradictory, just like using a calculator could be considered cheating depending on the context. If you're just learning basic arithmetic, a calculator is cheating since it shortcuts the path to learning. OTOH in calculus, a calculator is necessary. You still have to have a deep understanding of the concepts and functions to succeed.
It's still a new tech so I'm not surprised a lot of teachers have different takes on it. But when it comes to education, I feel like different policies are reasonable. In some cases it's more likely to shortcut learning, and in other cases it's more likely to encourage learning. It's not entirely one or the other.
A better example might be physics and math classes. I learned derivatives and integrals at the same time in those two classes, but the math one required that we learn how it all works (using limits to understand why the derivative rules work, without calculators, for example), while in physics we just memorized the rules and were expected to use the calculator.
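For the curious, the "why the derivative rules work" part that math class demands is just the limit definition. A minimal worked sketch of the power rule for f(x) = x^2, the kind of thing you grind through by hand:

    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
          = \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
          = \lim_{h \to 0} \frac{2xh + h^2}{h}
          = \lim_{h \to 0} (2x + h)
          = 2x

No calculator involved, and that's exactly the step a calculator (or an AI) would happily let you skip.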
Exactly, AI is the next calculator. Right now the consensus is that it just does the work for you; in my opinion that says more about us not having the right questions than about actual laziness. In a world where the only questions are basic arithmetic, calculators do all the work for you. My opinion is that in the future, what used to be done by academics will be done by high schoolers, and new academics will be producing work at a rate no one could've ever predicted.
For example, the professor who's leading me in this project had a fellowship at a certain university in England and said he coded exclusively with Claude Code for a month straight. Their goal was to develop a vaccine for a specific disease, and by using AI tools such as Claude Code they're several months ahead of schedule.
For that specific one, it’s more of an independent project analyzing complex systems for 6 credits, I’m gonna be expected to submit a paper to arXiv on the subject with the professor as a co-author (fingers crossed). He said I can use claude code or any AI. I’m required to do X amount of hours per week and then submit a thorough report after about 2 months.
>I've learned more by using AI to do things above my head than by hardcore studying for a semester.
How do you know you actually learned, instead of being fed slop by the AI that isn't true at all? If you didn't study, then I doubt you'll really know if the AI is lying to you or not. I have to wonder if your teacher will too, sounds like they have kind of checked-out from actually teaching.
I'm really not seeing how you can do PhD level work as an undergrad. You wouldn't have the foundational knowledge necessary to do PhD level work, and you have no idea how much of what you're learning is accurate.
Without going into too much detail, when I said “Ph.D level” I’m meaning active research that adds a meaningful contribution to a field. I’ll probably be posting on here in a couple months about it but I’ve been doing thousands of tests with beefy GPUs on a certain theory we have about small 9b LLMs under certain external constraints.
Am I saying I'm as knowledgeable or capable as a Ph.D right now? Absolutely not. There's just not really a term that correctly describes accelerated learning and iteration through AI, since the technology is so new. I can't speak for others, but as a senior in my physics degree, I've actually been learning faster by using AI. It's either a mental crutch or a mental accelerator; the difference is whether you want it to completely do the work for you or you try to learn and follow along.
It's a very underexplored and new area right now, how higher learning is affected by using AI as a tool instead of as a cheating device, but historically, new tools like the calculator or the computer have done a lot to accelerate learning once new rules are in place.
What's interesting is that, as I understand it, folks are using things like Google Docs for papers, and it's (apparently) straightforward to do analysis on a Google Doc to see the life of the document: how it was typed in, how fast, what was pasted and cut back out.
My understanding is that the Google Doc is not a word processing document, it's an event recording of a word processor. So, in theory, you could just "play back" watching the document being typed in and built to "see" how it was done.
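If you just want the coarse version of that history, the public Drive API exposes it. A minimal Python sketch, assuming you already have OAuth credentials and a real file ID (the keystroke-level log is internal to Docs and not part of the public API):

    # List a Google Doc's revision snapshots via the Drive v3 API.
    # pip install google-api-python-client
    from googleapiclient.discovery import build

    FILE_ID = "your-doc-id-here"  # hypothetical placeholder

    def list_revisions(creds):
        service = build("drive", "v3", credentials=creds)
        resp = service.revisions().list(
            fileId=FILE_ID,
            fields="revisions(id,modifiedTime,lastModifyingUser/displayName)",
        ).execute()
        for rev in resp.get("revisions", []):
            user = rev.get("lastModifyingUser", {}).get("displayName", "?")
            print(rev["id"], rev["modifiedTime"], user)

Even at this coarse granularity, 2,000 words materializing between two adjacent revisions looks very different from an evening of incremental edits.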
I only mention this because, given the AIs, I'm sure that even with a typewriter it's more efficient to have the AI do the work and then just "type it in" on the typewriter, which kind of invalidates the entire purpose of the exercise in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
It would take about a day for some student to realize you can instruct one of the LLMs to operate the computer for you and have it type and fake-edit a document. The tip would spread among the cheaters, and the metric would become harder to judge by itself.
That's certainly one way to abstractly automate a task: Just pay someone else to do it. (This is a concept that regular people employ every day in the real world.)
Another way to automate this particular task: some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student who is skilled in the art of using the bot to have one of these typewriters be the output target.
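To make that concrete, here's a minimal sketch assuming a typewriter that accepts plain text over a serial port; the device path, baud rate, and pacing are guesses you'd pull from the machine's manual:

    # Stream text to a serial-connected typewriter, paced so the
    # print head can keep up. pip install pyserial
    import time
    import serial

    def type_out(text, port="/dev/ttyUSB0", baud=1200, cps=8):
        delay = 1.0 / cps  # seconds per character
        with serial.Serial(port, baud) as tty:
            for ch in text:
                tty.write(ch.encode("ascii", errors="replace"))
                time.sleep(delay)

    type_out("This essay was absolutely typed by a human.\r\n")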
Even Microsoft Word stores revision history inside .docx files, and that’s been used to expose plagiarism. I heard about one case where a student took an existing paper (I believe from a previous year/student) and pasted it into Word. They then edited it just enough to make it look different.
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
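For anyone curious what "embedded revision history" means here: a .docx is just a zip archive, and leftover Track Changes markup sits in plain XML inside it. A rough Python sketch (element names are from the OOXML spec; what actually survives depends on the file):

    # Peek inside a .docx for leftover Track Changes and authorship metadata.
    import re
    import zipfile

    def inspect_docx(path):
        with zipfile.ZipFile(path) as z:
            doc = z.read("word/document.xml").decode("utf-8")
            core = z.read("docProps/core.xml").decode("utf-8")
        ins = len(re.findall(r"<w:ins ", doc))   # tracked insertions
        dels = len(re.findall(r"<w:del ", doc))  # tracked deletions
        authors = set(re.findall(r'w:author="([^"]+)"', doc))
        print(f"{ins} insertions, {dels} deletions, authors: {authors or 'none'}")
        print("creator:", re.findall(r"<dc:creator>([^<]*)</dc:creator>", core))

    inspect_docx("suspicious_paper.docx")  # hypothetical filename

If the original author's name shows up in w:author or dc:creator, the conversation with the dean gets short.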
Are you sure about that? I could easily see this happen with a web document link, but for a docx file the change tracking is off by default and pretty obtrusive. Basic metadata would be fine, formatting might be quirky but that's not exactly a smoking gun...
It’s been a while since I heard about it, but IIRC the professor was a stickler for a very specific paper format, so they would distribute a .docx template file with Track Changes already enabled and require students to write their papers using that template.
I also think that when Track Changes was first introduced in earlier versions of MS Word, there wasn't as much concern about privacy/metadata as there is now, so it wasn't made as prominently obvious.
Hmm, I have some old daisy-wheel printers in the closet that I've been meaning to strip down for stepper motors, maybe I should refurb them instead :-)
You should look up the history of the Loebner Prize [1]. There’s a shocking amount of technological development in some chatbots that went toward simulating mistakes and typing patterns to make them seem more human-like.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
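The trick is simple to reproduce. A toy sketch of that typo-and-backspace behavior, with all the parameters invented for illustration:

    # Emit text character by character with human-ish pacing,
    # occasionally hitting a neighboring key and backspacing over it.
    import random
    import sys
    import time

    NEIGHBORS = {"a": "s", "e": "r", "o": "p", "t": "y", "n": "m"}

    def human_type(text, cps=7, typo_rate=0.03):
        for ch in text:
            if ch.lower() in NEIGHBORS and random.random() < typo_rate:
                sys.stdout.write(NEIGHBORS[ch.lower()])  # the "typo"
                sys.stdout.flush()
                time.sleep(random.uniform(0.1, 0.4))
                sys.stdout.write("\b \b")  # backspace, blank, backspace
            sys.stdout.write(ch)
            sys.stdout.flush()
            time.sleep(max(0.0, random.gauss(1.0 / cps, 0.05)))

    human_type("I am definitely a human typing this sentence.\n")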
Yeah I definitely think LLMs contributed to its demise. To be honest, nobody in academic AI circles took it very seriously, because it kind of devolved into a contest over who could create the most convincing illusion of intelligence.
Participants spent more time polishing the natural-language parsing and pre-programming elaborate backstories for their chatbots' bios, among other psychological tricks. In the end, the whole competition was more impressive as a social-engineering exercise, since the real goal kind of became: how can I trick people into thinking my chatbot is a human?
But the chatbot transcripts from some of the previous competitions still make for fascinating reading.
I used to make my classes 60-80% project work, 40-80% quizzes all online.
I now do 50% project work, 50% in-person quizzes: pencil on paper, with one page of notes.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
I always preferred the "you get some grades along the way to gauge your progress but the lion's share of the weight went to the proctored exams" method unless the lion's share of the normal work was also proctored anyways (at which point it doesn't really matter how it's done).
The reason was less for myself and more because anything group-related suddenly shot up in quality when the individual work classmates were graded on couldn't be fudged.
The things I don’t like about putting too much weight in the exams are:
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources. This puts a pretty low ceiling on the level of complexity you can actually throw at them.
Exams happen all the time in real life. Or rather, situations where you can't just look up fundamental knowledge: job interviews, presentations, even mundane work tasks all require you to know the basics quickly. "The basics" are relative, of course, but I often point out to my students: "you don't care if your doctor needs to look up the specific interactions of your various meds. You do care if you see them googling 'what is an appendix'." Proctored, in-person exams are the only reliable mechanism we have for ascertaining whether a specific individual has mastered key fundamentals and can answer relevant questions about them in a relatively timely fashion. Everything else is details and thresholds: how fast you need to be able to recall, how deep, which details are fundamental. From there, I think it's fine to hate poorly made exams, and it's a given that many folks making exams have no idea what they're doing (or don't have the resources to do it right). But the premise of an exam is not completely divorced from reality.
> It’s pretty artificial in general; in “real life” you have the ability to go around online and look for sources.
Sort of. In real life, you are expected to have immediate knowledge of your field and (in some environments) be able to perform under pressure. I'm not going to pretend the curriculum is a perfect match for what people should know, but it does provide a common baseline to be able to have a common point of reference when communicating with colleagues. I would suggest the most artificial thing about exams is the format.
> It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
I don't like dismissing the ordeal of people who face test anxiety, but tests are not really high stakes. There is a potential that a person will have to repeat a course if it is a requirement for their degree. At least at the institutions I attended, the grade distribution across exams and assignments, combined with a late drop date, meant that failing a course was only an option if you chose it to be. A student may be forced to face some realities about their dedication/priorities, work habits, time management, interests, abilities, etc. It may force a student to make some hard decisions about where they want their life to lead, but it does not bar them from success in life. And those are the worst-case scenarios. A more typical scenario is that you end up with a lower GPA.
I think it's all about speed. In "real life" everything can be looked up, but an exam optimizes for not even having to look it up. Then any research becomes much faster.
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
One of the reasons I've always encouraged software people to learn to touch type has nothing to do with typing speed - it's about reducing/eliminating the cognitive load of typing, you want to be thinking in expressions (sentences) not letters. (The increase in effectiveness comes from not getting distracted by the mechanics of typing...)
In real life you need to know the options and their trade-offs to solve a given problem. You don't need to know all the techniques perfectly, but you do need to be able to characterize them and compare them, from rote memory.
I agree, I think many people who rail against exams underestimate how important memory is to more complicated skills. How can you debug a complex application if you have to keep looking up every operator and keyword in the language you're using? It'd be like trying to interpret poetry in a foreign language but you have to look up every single noun. I'm not saying people can't do it, but it's tedious, slow, and you probably wouldn't think of them as a "professional worth paying for their service". Some amount of memorization is key.
This is where the alternative comes in: a course where the other graded activities are still monitored. The downside is that it tends to force in-person, synchronous sessions rather than custom scheduling of regular tests.
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
High-stakes artificial exams can help prepare you for the artificial stakes of job interviews, where you need to crank out a working solution in 30 minutes, jet-lagged, with someone looking over your shoulder.
That's true. They do better prepare an applicant for a job that filters on a person's ability to accomplish arbitrary things in a vacuum, completely disconnected from the real world.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
Obviously they're both supposed to be proxy measures, not realistic scenarios. I was mostly joking before, but I do think exams provide a pretty good proxy for ability in the subject if the teacher is decent. Interviews not so much, unless the applicant similarly has foreknowledge of what they will be tested on, time to prepare, and some recent practice.
So at 50%, someone who uses AI to get 100% of the homework grade will earn a D (sometimes passing) if they can get at least a 20% on your quizzes, and a C (always passing) if they get at least a 40%. Did you make your exam so difficult that students who truly didn't learn the material earn less than 20-40%? Because if it was, say, multiple choice questions with four possible answers, then you can expect them to earn at least 25% just by chance.
While that answers their direct question, they do bring up a good point -- how often are you handing out scores below 25% on exams? I'd imagine any professor who did that would get some severe criticism, which would make even a cheater pretty livid.
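The arithmetic in the comment above, spelled out as a minimal sketch (the letter-grade cutoffs are the usual 60/70 assumptions):

    # Weighted final grade: 50% homework, 50% exam.
    def final_grade(hw, exam, hw_weight=0.5):
        return hw_weight * hw + (1 - hw_weight) * exam

    print(final_grade(100, 20))  # 60.0 -> a D at many schools
    print(final_grade(100, 40))  # 70.0 -> a C
    print(final_grade(100, 25))  # 62.5 -> pure chance on 4-option multiple choice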
Why are people promoting the idea that exams are not written or given in person anymore? I graduated relatively recently and maybe had 1 take home exam during my entire education. Every other exam was proctored in person and written. The professor who made the take home exam also made it much more difficult than a normal exam so I would not really say it was easier than a normal in person test.
Things have changed drastically since COVID-19, at least in the US. Tons of schools and universities shifted to online systems, and never abandoned the systems they built up when it was time to go back to school.
I graduated in 2020, so I've only gotten to see the changes secondhand through friends and family who are teachers, and through my sibling who graduated a few years after me. But the difference is staggering.
Take-home exams were very common when I was in school, which was before you could get answers on the internet. After internet answer sites and cheating sites came along, a professor had to either not care and let cheating run rampant, or struggle to constantly invent unique new kinds of take-home questions. AI has basically killed that option too.
I loved take-home exams because they allowed me to study beforehand without the insane pressure and condensed studying required for exams in the classroom. Even though they were normally much harder and longer, I liked them. I felt I learned much more through them, because I could take the time to understand concepts I had missed without feeling the time pressure of in-person exams.
It's a shame that humans find a way to cheat ourselves out of things that benefit us by over "optimizing" the wrong things.
Exams in the classroom, with all the time pressure, are also an important part of education. Maybe they should be a low percentage of the grade to prevent too much stress, but it's an important learning experience.
I'd like to see some data on this. My general-ed recall is minimal, and in programming before school, I certainly learned a ton more by coding than by testing. That's my perception of my time in school, as well.
I disagree. Take-home exams represent how work and progress occur in the "real" world. There's nothing in the post-education world that resembles in-person exams.
Maybe the medical profession is a counter example.
> There's nothing in the post education world that resembles in-person exams.
I’d argue that dealing with any high criticality operational incident is like an in person exam (maybe even the most difficult kind, the open book one) if you are the one responsible for fixing it. Everyone is looking at you, you have time pressure to solve it ASAP and you can’t afford the time to dig through all the docs on the spot. So there’s at least some similarity with some real life situations.
Is there really much point, though? I think AI will keep improving, and there will be more and more incentive to use an AI that costs $20/month instead of a human writer who costs $30/hour. If someone wants an article written, and people like the AI article as much as the human one, what stops everyone from using AI?
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so, why?
I don't think that way. AI will become better, but human-taste writing just feels different, like hand-made furniture vs factory-made furniture. They're in different classes.
I think AI writing becoming better means it will appear more human, rather than like better AI writing. The difference in feeling is similar to early attempts to generate faces with AI, which also seemed weirdly wrong in ways that were hard to describe, but now it's very hard to tell them apart.
It does! That’s why you can ask to be evaluated by a commission of professors.
If you don't pass after 3 tries, a commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
> It does! That’s why you can ask to be evaluated by a commission of professors.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact that judges were reacting to the sound of shoes (e.g. high heels) as the candidate moved around behind the divider.
> That’s why you can ask to be evaluated by a commission of professors.
Ah yes, the classic "if you think the system is abusing you, you shall out yourself to the system that's abusing you if you want any chance of recourse." Because a tribunal run by the people you're lodging a complaint against can't possibly be biased.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country)
I like this. Relatedly, this semester I've been using handwritten quizzes in class. It's a simple change that's been one of the best things I've done, because it changed students' expectations of class prep. Kind of do the readings, sort of prep, and you could coast in class before; but if you need to write out quiz answers, you're forced to know the material better, as well as maintain the ability to express yourself.
I also use low-point bonus questions to test general knowledge (huge variation on subjects I thought everyone knew).
I've been typing since the '80s. However, even in the '90s I found any extended period of handwriting to be painful and laborious. I don't think I could handle an instructor who insisted on handwritten long-form work, but I'd happily accept a compromise in the form of a typewriter.
Reading all these comments, I feel like US universities are a joke.
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
The “it does X for you” aspect of technology is not completely without its downsides, for various values of X.
For example, take “X” to be “walking”. Do we have the technology that allows us to pretty much never have to walk? Sure. As far as I am aware, though, we do not generally favour a lifestyle of being bound to a mobility aid by choice, and in fact we have found that not walking when able in the long run creates substantial well-being issues for a human. (Now, we have found ways to alleviate some of those issues for those who aren’t able, but clearly it is not sufficient because we still walk.)
The problem is exacerbated immensely as the value of X approaches something as fundamental to one’s humanity as “thinking”.
So you didn't have to do any coursework? No collaboration? No labs? I'm not aware of any university that doesn't have coursework, outside of online diploma mills.
In my undergrad, coursework did not count towards the grade for the module, but you earned the right to sit the final exam by passing the coursework.
FWIW, my dad taught me how to type at 4 years old on a huge Imperial typewriter. My spelling took an enormous leap in capability in a few weeks; primary school teachers were amazed at the words I could spell correctly. (It didn't help my handwriting, though, which was still like intoxicated chicken scratch on a good day.)
My school couldn't afford typewriters in the 1980's and early 1990's.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
Things like this are well-intentioned, but idk why there aren't more teachers creating optional "side quests" like these for students who want them, instead of forcing them on everyone.
Optional "side quests" would let teachers keep a standard, accepted "main quest" curriculum and then create a bunch of (possibly even "fun") "side quests" students can work on in their spare time for extra skill development.
Remembering my college days of typewriter-use-by-quarters (coins) on a timer, like being at the laundromat, I kind of love this.
At UT Arlington in the Stone Age, we had a typewriter lab so folks without home computers and printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the correction tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience, trying to type on those crappy typewriters on a timer. With no errors. And I had literally written the paper by hand before trying to transcribe it.
If students cheat they hurt only themselves. Make sure they understand the consequences for cheating (missing out on learning) and that's about all you can do.
Depends on your measuring stick. Cheating themselves out of an education? Yep. Cheating themselves into a credential and then a job? That works, because the status and remuneration are almost entirely divorced from the quality of the education, aligned instead with the name of the organization on the diploma.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
The thing is, when colleges don't test students' ability properly before issuing a credential, employers start testing job applicants' ability after they've received it.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
This is untrue. Students who graduate without actually absorbing knowledge as laid out in the curriculum devalue the degree when they show up in the workforce lacking that knowledge. This is part of why new grads are undesirable job candidates, there’s a chance you are paying a higher wage for someone who may not have learned anything.
When I attended university (almost a decade ago I guess, time flies) we didn't have a single exam on the computer. All exams were on paper or oral, and most were without notes too. Computer science does not require computers.
This is usually true, but it is also true that some classes are graded "on a curve," so grade inflation can hurt people who are honestly doing the work. Also, cheaters tend to suck all the air out of a room. For example, my I.T. instructor designed a really nice oral-quiz slideshow for the entire classroom. I found it a few hours before class, watched it in its entirety, and then, when he tried to run it live, I spoilered all the answers before any other student could answer. I wasn't strictly cheating, but I wasn't being fair to my classmates' learning process, either.
I had a typewriter growing up and I remember thinking it was the coolest thing. I was amazed by it and tried writing several stories. Eventually my dad bought me a crappy old computer that was only really good for writing, and that was cool too. I loved that thing. It was small, with an integrated monitor and keyboard, so it didn't take over the whole desk, where I still used pencil and paper often.
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there
There's an entire industry of "distraction free writing devices" based mostly on that nostalgia/yearning (not to say that it isn't effective, but the effectiveness is not actually being measured :-)
If AI can do the work, maybe the test should be more focused on what AI can’t do? This is like anyone still doing a traditional coding interview with leetcode problems just because they haven’t yet done the work to figure out what to test for in a world where Claude Code exists.
Gyms are a great example actually because tractors exist to do the economically useful work. You now optionally go to the gym to benefit from fake labor that used to be the side effect of useful work. The fake labor is now what colleges are trying to sell, and it's going to kill them.
3,000 years ago, physical labor was a component of most jobs. Today gyms are for people who can afford to attend them and don't have a day job that naturally exercises them through labor. People exercising purely for health benefits, and not because the strength benefits them in their job and in other facets of their life, is new.
Huh? The gym analogy doesn’t even make sense. People didn’t go to gyms when they were farming with oxen. Gyms are popular now precisely because tractors exist and you don’t need manual labor to farm anymore but people still need the physical exercise for their health. Society has adapted to the arrival of new life-changing technology. Our education system needs to adapt to new technology like AI too. You can probably uplevel a lot of courses and cover a lot more interesting topics than before and teach real application of things you learned aided by AI. Just like when I was doing a CS major 20 years ago, they didn’t spend too much time teaching me assembly programming beyond 1 or 2 lectures (they let me use a compiler for programming assignments!).
Maybe instead of trying to teach around the abacus, we need to teach the higher level things you can reach with MATLAB.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
One consequence of LLM fraud at scale making remote/online tests and document submission worthless is that it might act as a giant revitalizing boost for bricks-and-mortar school systems. Suddenly, having real teachers and students in a room together has value again, for credibility and authenticity alone.
LLMs are also making a public code-portfolio repo much less meaningful as a sign of legitimacy.
My colleagues that teach hard skills courses (like data structures and algorithms) either love AI and incorporate it into their teaching at every moment possible, or despise it in the same way graphing calculators were by high school math teachers when they were introduced nearly 30 years ago.
I teach soft skills classes to engineering students, and I'm unconcerned with students using AI. I write my problems in a way such that, if the student truly understands the assignment, prompting the AI to solve the problem and iterating on it takes a similar amount of time to doing the work themselves. AI is not very good at writing introspectively about the student. In other words, AI isn't going to be helpful when the homework question is "A fellow student comes to you asking for suggestions on how to maximize their chances at landing an internship. What advice do you give them that's immediately actionable?"
Try it: plug that into ChatGPT or your favorite LLM. It parrots the same generic tips everyone tells you, with very little on how to perform the action in an effective way. Read it, copy it into your advice document, get a poor grade. Try telling other students to take this advice. Note how they don't, because the advice isn't actually actionable enough for them to take action.
LLMs are also not very good at the follow-up question "In a previous assignment you gave specific and actionable advice to a peer on the job search. Which of these suggestions were so good you are now doing them?" A number of students write a "Mental Gymnastics" essay, claiming they are following all their suggestions (because they think that's what the professor wants to hear) while the evidence they provide demonstrates they are not. A student asking an LLM to write the essay for them consistently produces a digital 'pat on the back'; a mental gymnastics essay that ultimately makes the student realize how unwilling they are to solve the #1 problem in their college career.
I've done away with exams wherever possible. I stick to project-heavy courses. What I've found to be far more concerning than AI use is the increasing loss of social skills and ability to cooperate within the younger generations. The number of students who would prefer to fail a class instead of talk to literally any human being is astounding.
The number of students who refuse to build soft skills and believe that tech is truly a meritocracy, where the only thing that matters is "lines of code," there are no politics, and they won't have to do on-call, crunch, or code reviews, is also astounding.
I'm confused about too many things being measured at once. Is Phelps banning AI to ensure her students are fit to pass a terminal examination? And doing so to ensure that her class has a good pass rate, proving she is a good teacher and can keep her job? What if her cohort is particularly dumb? Is she incentivized to make it easy to pass her classes, to hand out that A you paid so much for? Or to make it hard, so that A is worth something?
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question: if your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not improved over anything, you have failed.
My impression was she just brings the typewriters into class as a one-day novelty thing per course, not that it becomes the norm for the whole semester. The goal is to give the students a taste of what the old-fashioned way is like, to get them thinking about it.
I like open note exams (and perhaps open book exams, as you need to know the book well to know which page to look at) - it forces you to condense the material to the salient points and operationalise it to solve what would be more challenging problems than a simple recall exam.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - then I see true application of human ingenuity and intellect.
This will only work until somebody figures out how to connect an AI to a typewriter with some sort of mic, and people start dictating into it with AI-assisted revisions. Once the dictation is over, the AI-enabled typewriter will be instructed to type the work out.
Testing and instruction should be modified to account for AI. If students use an agentic AI for work, learning, or research, then when test time comes they should be required to stand at the front of the class and "teach back" what they have learned to the entire class, instructor included. The class and instructor then hold a Q&A session to make sure the student's learning is not just memorization, e.g. by having them restate the information using different words, different scenarios, etc.
That makes sense. The CX-2 calculators are a bit less like the iPad era and more like the calc I/II classes that only let you use specific TI models rather than an app on your smartphone.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
Might be an unpopular opinion in this thread, but college was made worthless for most degrees as soon as the internet got popular and silly performative shit like this is the death knell. College is about learning how to work in an industry. I'd predict an uptick in trade schools and other hands-on work like medicine, and a continuing downturn in so-called formal education for anything white-collar, programming included. Students are customers. Businesses are going to use AI going forward. No reason to waste time on this.
Education is a nice side effect sometimes, but yeah, I don't know how you could reach any other conclusion. If you're motivated to learn for learning's sake, college is an annoying slog that you know you don't need post-millennium. I literally left college early and started making money instead of spending it, because I got tired of demonstrating to my professors that I already knew everything they were teaching and that it would be a waste of time for me to come to class.
Or maybe you chose to waste your time because you treated college as a way to get a piece of paper instead of as the only time in your life when you are surrounded by experts who will spend an hour a week answering any questions you can think of.
No time wasted at all; that option is also trivially available outside of college, and it's called "email." There's a whole industry built on tricking new adults into believing that college is not about getting a piece of paper. It's gross, and it's avoidable. I paid off a year of unnecessary college debt in a quarter of a year of doing real work I learned how to do in my free time. It's a trap, and articles like this, where colleges are working as hard as they can to make education less useful, prove it.
It would have been more wasted time had I continued past a single year. I went to my first year of college on the advice of my well-meaning parents, who are old and, like most old people, thought it was still important. Yet they agreed with my decision to leave after the first year for an offer of a real six-figure job, because there was nothing to learn that I hadn't, or couldn't have, learned on my own. At least one of my own professors also openly wondered why I was there at all.
To your second question: fewer than a hundred, but tens. Most people who are worth listening to publish their work and their thoughts. Email is free. Experts love to answer questions about their work; professors hate doing extra work for no extra pay. The incentives here are not confusing. How much time have I taken? Confusing question. These are real people with real passion, and they answer questions with that in mind, while professors are obligated to puke up an answer. I've gotten responses in most cases; in some I haven't. When I don't get answers, it's because the targets are smart and busy. If I wanted more engagement with my random questions, I'd offer money, and if I had offered money every time, I'd still be below par on the money I wasted on college. If I wanted to justify it, I'd say I learned enough to validate that paying real money for another 3-6 years would have been less valuable than burning it for heat.
> At least one of my own professors also openly wondered why I was there at all.
I think you completely misunderstood this interaction.
There are 2 possible explanations.
1. You are so smart/knowledgeable that the professor thinks you are beyond college.
2. You were acting like such an arrogant know-it-all that the professor was being sarcastic.
I’ve seen #1, but I’ve seen #2 many times.
You sound like you have a huge chip on your shoulder about not having a degree. I had the same issue at one point before I went back and finished (after working as a professional developer for a while), so I recognize it.
When I did go back, I asked questions in class, went to office hours to ask more, and did research projects with professors. Some back-of-the-envelope math says it would have cost me about twice what I ended up owing if I'd paid for an equal amount of time with whatever experts I could find.
My strong suspicion based on the few posts I’ve read is that your attitude is the reason you had such poor interactions with instructors.
I had excellent interactions with my instructors. I interacted with them like human beings, and they understood that their limited time would be better spent on students who didn't have the same energy I did. Several professors, when asked, put me through an impromptu whiteboard quiz and said, yeah, do your own thing. It's great that you participated in the process in your own way. In my case, I asked if I could show up for the final tests and nothing else, because the intermediate work would have been useless; I received permission and passed.
Chip on my shoulder? No, and it's a silly label to begin with. Understanding that the degree is for other people, who value the paper more than intrinsic understanding? Yeah.
EDIT: I will concede in some way that I'm proud of not having a degree, and it does influence my thoughts on this topic. I've met some real idiots that do, and I don't consider it a serious differentiator.
Also, looking up the thread: at my early jobs I was surrounded by many people who were interested in educating me on any topic I could think of, because, similarly, we were all being paid for our time. The difference between that and school was the assumption that we were both motivated and capable.
We already had AI-proof education.
Another example: lit classes where the grade is based on time-limited, open-book exams, handwritten in "blue books."
Read the book, pay attention in class, spend 90 minutes writing an essay, and you're done.
However I suspect that there are many who 1) are more concerned about the short term outcome, 2) consider the degree/diploma to be little more than a meal ticket or arbitrary gatekeeping without any connection to learning, 3) view the work as a pointless barrier to being handed said diploma, and/or 4) don't see the value of human learning in a world where jobs are done by AI and AI systems routinely outperform humans on complex tasks.
Personally, I dropped out despite a full ride+ becuase why would I put in work for a no name state school when I already has an FTE job as a developer out of high school anyway.
Turns out fraudulent action can still get the bag.
The other thing that feedback feeds into is credentials. I realize that some people are dismissive of this aspect of the degree, but it is important to pursue further studies or secure a job. While you can argue that these people are only cheating themselves, and some of them are cheating themselves, a great many will continue to cheat as they advance in academia or the workforce. In other words, they are cheating others out of opportunities.
Lots of skills from those old days that have been lost/ignored in the pretence of productivity.
In a different one she just said so long as you say AI was used you’re fine to use it.
In the rest of them AI is considered cheating.
To say we have discrepancies in the rules in an understatement. No one seems to have the exact answer on how to do it. I personally feel like expecting Ph.D level work is the best method as of now, I’ve learned more by using AI to do things about my head than hard core studying for a semester.
I teach at two universities in Japan and occasionally give lectures on AI issues at others, and the consensus I get from the faculty and students I talk with is that there is no consensus about what to do about AI in higher education.
Education in many subjects has been based around students producing some kind of complex output: a written paper, a computer program, a business plan, a musical composition. This has been a good method because, when done well, students could learn and retain more from the process of creating such output than they would from, say, studying for and taking in-class tests. Also, the product often mirrored what the students would be doing in their future lives, so they were learning useful skills as well.
AI throws a huge spanner into that product-based pedagogy, because it allows students to short-cut the creation process and thus learn little or nothing. Also, it is no longer clear how valuable some of those product-creation skills (writing, programming, planning) will be in the years ahead.
And while the fundamental assumptions behind some widely used teaching methods are being overthrown, many educators, students, and administrators remain attached to the traditional ways. That’s not surprising, as AI is so new and advancing so rapidly that it’s very difficult to say with any confidence how education needs to change. But, in my opinion at least, it does need to change at a very fundamental level. That change won’t be easy.
It's still a new technology, so I'm not surprised that teachers have different takes on it. But when it comes to education, I think different policies for different courses are reasonable: in some cases AI is more likely to shortcut learning, and in other cases it's more likely to encourage it. It's not entirely one or the other.
For example, the professor who's leading me on this project had a fellowship at a certain university in England and said he coded exclusively with Claude Code for a month straight. Their goal was to develop a vaccine for a specific disease, and by using AI tools such as Claude Code they're several months ahead of schedule.
Nice idea. What class and what work are you doing then?
How do you know you actually learned, instead of being fed slop by the AI that isn't true at all? If you didn't study, then I doubt you'll really know whether the AI is lying to you. I have to wonder if your teacher will either; it sounds like they have kind of checked out from actually teaching.
Am I saying I'm as knowledgeable or capable as a Ph.D. right now? Absolutely not. There's just not really a term yet that correctly describes accelerated learning and iteration through AI, since the technology is so new. I can't speak for others, but as a senior in my physics degree, I've genuinely been learning faster by using AI. It's either a mental crutch or a mental accelerator; the difference is whether you want it to do the work for you or you try to learn and follow along.
It's a very new and underexplored area, how higher learning is affected by using AI as a tool rather than as a cheating device, but historically, new tools like the calculator or the computer have done a lot to accelerate learning once new rules were in place.
My understanding is that a Google Doc is not a word-processing document; it's an event recording of a word processor. So, in theory, you could just "play back" the document being typed and built to "see" how it was done.
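The idea is easy to sketch. Google's actual operation format is internal, so this is only a conceptual toy, with made-up Insert/Delete event types:

    from dataclasses import dataclass

    # Hypothetical edit events -- Google's real operation log format is
    # not public; this just illustrates "document as event recording".
    @dataclass
    class Insert:
        pos: int
        text: str

    @dataclass
    class Delete:
        pos: int
        length: int

    def replay(events):
        """Rebuild the document one edit at a time, yielding each state
        so a reviewer can 'watch' it being typed."""
        doc = ""
        for ev in events:
            if isinstance(ev, Insert):
                doc = doc[:ev.pos] + ev.text + doc[ev.pos:]
            else:
                doc = doc[:ev.pos] + doc[ev.pos + ev.length:]
            yield doc

    # A human-looking history: type, misspell, back up, fix, continue.
    history = [Insert(0, "The mitochondira"), Delete(12, 4),
               Insert(12, "dria"), Insert(16, " is the powerhouse.")]
    for state in replay(history):
        print(repr(state))

The tell is in the shape of the log: a genuinely typed essay is thousands of tiny inserts and deletes, while a pasted AI draft collapses into one giant Insert.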
I only mention this because, given the AIs, I'm sure even with a typewriter it's more efficient to have the AI do the work and then just "type it in" on the typewriter, which kind of invalidates the entire purpose in the first place.
The typing in part is inevitable. May as well have a "perfect first draft" to type it in from in the first place.
And we won't mention the old retro interfaces that let you plug in an IBM Selectric as a printer for your computer. (My favorite was a bunch of solenoids mounted above the keys -- functional, but, boy, what a hack.)
TaaS -- Typing as a service. Send us your Markdown file and receive a typed up, double spaced copy via express shipping the next day!
Another way to automate this particular task is that some typewriters have (serial/parallel) ports to connect to a computer. It's not a daunting task at all for a student skilled in the art of using the bot to make one of these typewriters the output target.
Like this: https://chatgpt.com/share/69e405db-1b44-83ea-baf3-6af41fe577...
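The serial side really is only a few lines. Here's a rough sketch using pyserial, assuming a hypothetical machine that accepts plain ASCII at 9600 baud on /dev/ttyUSB0 (real typewriters vary in baud rate, parity, and flow control, so check the manual):

    import time
    import serial  # pip install pyserial

    PORT, BAUD = "/dev/ttyUSB0", 9600  # assumed; check the typewriter's manual

    def type_out(text, cps=10):
        """Feed text to the typewriter, throttled so the carriage keeps up."""
        with serial.Serial(PORT, BAUD, timeout=1) as tw:
            for ch in text:
                tw.write(ch.encode("ascii", errors="replace"))
                time.sleep(1 / cps)  # pace output at roughly typing speed

    type_out("This essay was definitely typed by hand.\r\n")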
However, they didn’t remove the embedded revision history in the .docx file they submitted, so that went about as well as you can expect.
I also think that when Track Changes was first introduced in earlier versions of MS Word, there wasn't as much concern about privacy/telemetry as there is now, so the feature wasn't made as prominent.
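For anyone grading: since a .docx is just a zip archive of XML, you don't even need Word to check for leftover revision markup. Tracked insertions and deletions live in word/document.xml as w:ins and w:del elements, each tagged with an author. A rough standard-library sketch (submission.docx is a placeholder name):

    import re
    import zipfile

    def tracked_change_authors(path):
        """List the authors of tracked insertions/deletions in a .docx."""
        with zipfile.ZipFile(path) as z:
            xml = z.read("word/document.xml").decode("utf-8", errors="ignore")
        # Track Changes stores revisions as <w:ins ...> and <w:del ...>,
        # each carrying a w:author attribute.
        ins = re.findall(r'<w:ins [^>]*?w:author="([^"]*)"', xml)
        dels = re.findall(r'<w:del [^>]*?w:author="([^"]*)"', xml)
        return ins, dels

    ins, dels = tracked_change_authors("submission.docx")  # hypothetical file
    print(f"{len(ins)} insertions, {len(dels)} deletions by {set(ins + dels)}")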
Oh look, there's an LLM trained on keylogger data to spew slop at your personally predicted error rate; bonus if it identifies over USB as a keyboard.
In some of the later Loebner competitions, when text was transmitted to the human character by character, the bot would even simulate typos followed by backspacing on screen to make it look more realistic.
https://en.wikipedia.org/wiki/Loebner_Prize
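That effect is trivial to fake, by the way: pick an error rate, occasionally emit a wrong letter, pause as if noticing it, then backspace and correct. A toy version:

    import random
    import sys
    import time

    def humanize(text, error_rate=0.05, base_delay=0.12):
        """Echo text character by character with fake typos and corrections."""
        for ch in text:
            if ch.isalpha() and random.random() < error_rate:
                sys.stdout.write(random.choice("abcdefghijklmnopqrstuvwxyz"))
                sys.stdout.flush()
                time.sleep(2 * base_delay)   # "notice" the mistake
                sys.stdout.write("\b \b")    # backspace, erase, backspace
                sys.stdout.flush()
            sys.stdout.write(ch)
            sys.stdout.flush()
            time.sleep(random.uniform(0.5, 1.5) * base_delay)  # uneven rhythm

    humanize("Hello, I am definitely a human typing this.")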
Participants spent more time polishing the natural-language-parsing aspects and pre-programming elaborate backstories for their chatbots' bios, among other psychological tricks. In the end, the whole competition was more impressive as a social-engineering exercise, since the real goal kinda became: how can I trick people into thinking my chatbot is a human?
But some of the chatbot transcripts from previous competitions still make for fascinating reading.
Isn't that really what all these AI companies are doing too? It sure seems like it is.
I now do 50% project work, 50% in-person quizzes: pencil on paper, one page of notes allowed.
I'm increasingly going to paper-driven workflows as well, becoming an expert with the department printer, printing computer science papers for students to read and annotate in class, etc.
Ironically, the traditional bureaucratic lag in university might actually help: we still have a lot of infrastructure for this sort of thing, and university degrees may actually signal competence-beyond-ai-prompting in the future.
We'll see.
The reason was less for myself and more because anything group-related suddenly shot up in quality when the individual work that classmates were graded on couldn't be fudged.
* It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
* It's pretty artificial in general; in "real life" you can go around online and look for sources, and taking that away puts a pretty low ceiling on the level of complexity you can actually throw at them.
Sort of. In real life, you are expected to have immediate knowledge of your field and (in some environments) be able to perform under pressure. I'm not going to pretend the curriculum is a perfect match for what people should know, but it does provide a common baseline to be able to have a common point of reference when communicating with colleagues. I would suggest the most artificial thing about exams is the format.
> It’s sort of unnecessarily high stakes for the students; a couple hours to determine your grade for many hours of studying.
I don't like dismissing the ordeal of people who face test anxiety, but tests are not really high stakes. There is a potential that a person will have to repeat a course if it is a requirement for their degree. At least at the institutions I attended, the grade distribution across exams and assignments, combined with a late drop date, meant that failing a course was only an option if you chose it to be. A student may be forced to face some realities about their dedication/priorities, work habits, time management, interests, abilities, etc. It may force a student to make some hard decisions about where they want their life to lead, but it does not bar them from success in life. And those are the worst-case scenarios. A more typical scenario is that you end up with a lower GPA.
Whether it's good or bad I don't know, I think US higher education focuses too much on ability to produce huge amounts of mediocre work, but that's the idea behind exams.
The point is more about whether the graded work is actively reviewed than which individual choice is ideal or not though. Whether it's electronic or written, remote or in person, weighted towards exams vs continuous are all orthogonal debates to the problem of cheating/falsely claiming work.
I had attended a few courses over a decade ago and just completed a degree recently. The methods of cheating have changed, but not because of pencils vs keyboards.
That's probably a good thing to filter on for, say, the navigation role on all kinds of crafts (from land to sea to space). There are naval roles where navigating with a sextant and memory is an important skill to have, and to test for.
But that operating-in-a-vacuum skill doesn't relate well to roles that don't need to exist in a vacuum. In most of the jobs in the real world, we get to use tools -- and when the tools go out to lunch, we don't revert to the Old Ways.
When an accountant's computer dies, they don't transition back to written arithmetic and paper ledgers. Instead, someone who fixes computers gets it going again, and they get back to work as soon as that's done.
I graduated in 2020, so I've only gotten to see the changes secondhand through friends and family who are teachers, and through my sibling who graduated a few years after me. But the difference is staggering.
It's a shame that we humans find ways to cheat ourselves out of things that benefit us by over-"optimizing" the wrong things.
Maybe the medical profession is a counter example.
I’d argue that dealing with any high criticality operational incident is like an in person exam (maybe even the most difficult kind, the open book one) if you are the one responsible for fixing it. Everyone is looking at you, you have time pressure to solve it ASAP and you can’t afford the time to dig through all the docs on the spot. So there’s at least some similarity with some real life situations.
The only answer I can think of is that people must believe AI writing will stay below human level for many years, but if so, why?
Not sure anyone even attempted to cheat in that scenario. And the conversations were usually great, although very stressful for us cramming types.
If you don’t pass after 3 tries, commission is mandatory.
You also have a paper trail of written exams and midterms to back you up. If you keep getting good grades and failing the oral, people will find that obviously suspicious.
Honestly the only times I had any trouble in the orals were the exams where I baaaaarely passed the written. Usually oral feels like the chill easy part compared to written because you can have a back-n-forth with the professor.
Still concerning from a statistical/psych fairness aspect.
There's a famous example of the Boston Symphony trying to fairly judge unseen applicants in 1952, and their results kept getting gender-skewed until they adjusted for the fact judges were reacting to the sound of shoes (e.g. high heels) when the candidate moved around behind the divider.
Ah yes, the classic "if you think the system is abusing you, you shall out yourself to the system that's abusing you if you want any chance of recourse." Because a tribunal run by the people you're lodging a complaint against can't possibly be biased.
If you don't get one job you should have - there are others - it's unfortunate but not life altering.
If 3 years into your marine biology program a professor who always teaches a mandatory course fails you because you're a woman who wears non-traditional dress - you're not graduating and now there are no jobs. (And this is an example that actually happened to someone I know - not in a western country.)
I also use low-point bonus questions to test general knowledge (there's huge variation, even on subjects I thought everyone knew).
I had to do all the exams in person. 100% of the grade was decided at the exam. Millions of people graduated this way and they are fine. No students were harmed in the process.
What a narrow set of skills to send into your economy.
What is the "it" that AI does for you?
This is assuming you know how to get good work out of AI in the first place. But even that is turning out to be a skill in and of itself.
Context helps immensely, for example. Think of what you can do that someone outside tech can't.
For example, take “X” to be “walking”. Do we have the technology that allows us to pretty much never have to walk? Sure. As far as I am aware, though, we do not generally favour a lifestyle of being bound to a mobility aid by choice, and in fact we have found that not walking when able in the long run creates substantial well-being issues for a human. (Now, we have found ways to alleviate some of those issues for those who aren’t able, but clearly it is not sufficient because we still walk.)
The problem is exacerbated immensely as the value of X approaches something as fundamental to one’s humanity as “thinking”.
We wrote assignments by hand using a pencil or pen.
Is that really complicated?
When I got to college and everything had to be typed I still wrote everything by hand on paper and edited with an eraser and a red pen to reorganize some sentences or paragraphs. Then I would go to the computer lab and type it in and print it out.
optional "side quests" would allow teachers to create some standard accepted "main quest" curriculum and then just create a bunch of (even possibly "fun") "side quests" students can work on in their spare time for extra skill development
At UT Arlington in the Stone Age we had a typewriter lab so folks without home computers with printers could still produce their papers typed, which was required. I had to get a roll of quarters ($10) to do a single paper. And the erase tape was always so used up it was useless.
It was one of the most sadistic things I remember about my college experience, trying to type on those crappy typewriters on a timer. With no errors. And I literally wrote it by hand before trying to transcribe it.
Good luck, we’re all counting on you.
If you're not interested in learning the course content, then what are you doing there? Pretty expensive waste of time.
I very fondly recall many of the courses I did at university. The exams were a helpful motivating factor even for the interesting courses.
https://austinhenley.com/blog/aihomework.html
One of my best college professors would review such essays in-person, one-on-one twice each semester.
Former (second-generation) college professor, here. I find it almost impossible to be cynical enough about the US education industry.
This statement is more defensible after removing “only”. If it “only” hurt the cheaters, there would be no need to police cheating at all.
And they'll do it with all the 'unnecessarily high stakes' and 'risk of unconscious bias' and 'not truly representative' problems that written exams have; and a bunch of extra problems too.
Imagine being able to do some writing without notifications going off every few seconds, and where you're not always one click away from a search engine and some website scientifically designed to drag your attention down a rabbit hole and keep it there [0].
[0]: https://writerdeckos.com/
Gyms aren't redundant because tractors exist.
We're doing these students a major disservice making them live in the old world. It's our fault for being inflexible, but their world is going to be wholly different and we should just embrace that.
LLMs are also making a public repo code portfolio far less valuable as a sign of legitimacy.
My colleagues that teach hard-skills courses (like data structures and algorithms) either love AI and incorporate it into their teaching at every possible moment, or despise it the way high school math teachers despised graphing calculators when they were introduced nearly 30 years ago.
I teach soft skills classes to engineering students, and I'm unconcerned with students using AI. I write my problems in a way such that, if the student truly understands the assignment, prompting the AI to solve the problem and iterating on it takes a similar amount of time to doing the work themselves. AI is not very good at writing introspectively about the student. In other words, AI isn't going to be helpful when the homework question is "A fellow student comes to you asking for suggestions on how to maximize their chances at landing an internship. What advice do you give them that's immediately actionable?"
Try it, plug that into ChatGPT or your favorite LLM. It parrots the same generic tips everyone tells you, with very little on "how" to perform the action in an effective way. Read it, copy it into your advice document, get a poor grade. Try telling other students to take this advice. Note how they don't, because the advice isn't actually actionable enough for them to act on.
LLMs are also not very good at the follow-up question "In a previous assignment you gave specific and actionable advice to a peer on the job search. Which of these suggestions were so good you are now doing them?" A number of students write a "Mental Gymnastics" essay, claiming they are following all their suggestions (because they think that's what the professor wants to hear) while the evidence they provide demonstrates they are not. A student asking an LLM to write the essay for them consistently produces a digital 'pat on the back': a mental gymnastics essay that ultimately makes the student realize how unwilling they are to solve the #1 problem in their college career.
I've done away with exams wherever possible. I stick to project-heavy courses. What I've found to be far more concerning than AI use is the increasing loss of social skills and ability to cooperate within the younger generations. The number of students who would prefer to fail a class instead of talk to literally any human being is astounding.
The number of students who refuse to build soft skills, and who believe that tech is truly a meritocracy where the only thing that matters is 'lines of code', there's no politics, and they'll never have to work on call, crunch, or give code reviews, is also astounding.
My mentor, a PhD in classics, told me it was never about outcomes and only about improvement. I suppose that answers my question. If your AI gets you an A at the start of the course and an A at the end, then, in the sense that you have not succeeded over anything, you have failed.
When I see 'cheat sheets' - designed to be hidden on the back of calculators or whatever - I see true application of human ingenuity and intellect.
Testing and instruction should be modified to account for AI. If a student uses an agentic AI for work, learning, or research, then when test time comes, the student should be required to stand at the front of the class and teach what they have learned, i.e. "teach back" the material to the other students and the instructor. The entire class, instructor included, should also participate in a Q&A session to make sure the student's learning is not just memorization, e.g. by having them restate the information in different words, under different scenarios, etc.
It reminds me of a family friend who's a bit older and did their scuba certification using dive tables, whereas when I did my PADI, I was able to use a dive computer.
Oh
You just said that it was a waste of time. So was it or not?
> that option is also trivially available outside of college, it's called “email”.
How many experts have you cold emailed over the years and how much of their time have you taken?
To your second question - less than a hundred, but tens. Most people who are worth listening to publish their work and their thoughts. Email is free. Experts love to answer questions about their work, professors hate doing extra work for no extra pay. The incentives here are not confusing. How much time have I taken? Confusing question. These are real people with real passion, and they answer questions with that in mind. Professors are obligated to puke up an answer. I've gotten responses in most cases, in some I haven't. When I don't get answers it's because the targets are smart and busy. If I wanted more engagement with my random questions I'd offer money, and if I had offered money every time I'd still be below par on the money I wasted on college. If I wanted to justify it - I'd say I learned enough to validate that paying real money for another 3-6 years would have been less valuable than burning it for heat.
I think you completely misunderstood this interaction.
There are 2 possible explanations.
1. You are so smart/knowledgeable that the professor thinks you are beyond college.
2. You were acting like such an arrogant know-it-all that the professor was being sarcastic.
I’ve seen #1, but I’ve seen #2 many times.
You sound like you have a huge chip on your shoulder about not having a degree. I had the same issue at one point before I went back and finished (after working as a professional developer for a while), so I recognize it.
When I did go back, I asked questions in class, I went to office hours to ask questions, and I did research projects with professors. Some back-of-the-envelope math says it would have cost me about twice what I came out owing if I'd paid for an equal amount of time with whatever experts I could find.
My strong suspicion based on the few posts I’ve read is that your attitude is the reason you had such poor interactions with instructors.
Chip on my shoulder - no, and it's a silly label to begin with. Understanding that it's for other people who value the paper more than intrinsic understanding, yeah.
EDIT: I will concede in some way that I'm proud of not having a degree, and it does influence my thoughts on this topic. I've met some real idiots that do, and I don't consider it a serious differentiator.
Also looking up the thread - at my early jobs, I was surrounded by many people who were interested in educating me on any topic I could think of, because similarly we were all being paid for our time. The difference between that and school was the assumption that we were both motivated and capable.