College professors are going back to paper exams and handwritten essays to fight students using ChatGPT::The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.
When I was in college for computer programming (about 6 years ago), I had to write all my exams on paper, including code. This isn’t exactly a new development.
So what you’re telling me is that written tests have, in fact, existed before?
What are you, some kind of education historian?
He’s not pointing out that handwritten tests themselves are nothing new; he’s pointing out that using handwritten tests instead of typed ones to reflect a student’s actual abilities isn’t new either.
I had some teachers ask for handwritten programming exams too (that was more like 20 years ago for me), and it was just as dumb then as it is today. What exactly are they preparing students for? No job will ever require the skill of writing code on paper.
Maybe something like a whiteboard interview…? They’re still incredibly common, especially for new grads.
A company that still does whiteboard interviews is one I have no interest in working for. When I interview candidates, I want to see how they’ll perform in the job. Their job will not involve writing code on whiteboards, solving weird logic problems, or knowing how to solve the traveling salesman problem off the top of their heads.
That’s a valid opinion, and I largely share it. But all these students need to work somewhere. This is something the industry needs to change before the schools can.
Also, I’ve definitely done whiteboard coding discussions in practice, e.g., going into a room and writing up ideas on the whiteboard (including small snippets of code or pseudocode).
I’ve done that too, back before the remote work era, but using a whiteboard as a visual aid is not the same thing as solving a whole problem on a whiteboard.
It’s a close enough proxy; a lot of professors I’ve known aren’t going to sweat the small stuff on paper. Like, they’re not plugging your code into a computer and expecting it to build; they’re just looking for the algorithm design, and that there’s no grotesque violation of the language rules.
Sure, some are going to be a little harsher (“you missed a semicolon here”), but even then, if you’re doing your work, that’s not a hard thing to overcome, and it’s only going to cost you a handful of points (if that sort of stuff is your only mistake, you get a 92 instead of a 100).
Which is equally useless. In the end, you’re developing a skill that will only be used in tests. You’re training to be evaluated instead of training to do the job well.
I personally never had a problem performing well on those tests; I happen to have the skill of compiling code in my head, and it’s a helpful skill in my job (I’ve been a software engineer for 19 years now). But it’s definitely not a required skill and should not be considered one.
Same. All my algorithms and data structures courses in undergrad and grad school had paper exams. I have a mixed view on these, but the bottom line is that I’m not convinced they’re any better.
Sure, they might reflect some of a student’s abilities better, but if you’re an evaluator interested in assessing a student’s knowledge, a more effective way is to ask directed questions.
What ends up happening a lot of the time is implementation questions that ask too much of the student at once: interpretation of the problem; knowledge of helpful data structures and algorithms; abstract reasoning; edge case analysis; syntax; time and space complexity; and a good sense of planning, since you’re supposed to answer in a few minutes without the luxury and conveniences of a text editor.
This last one is my biggest problem with it. It adds a great deal of difficulty and stress without adding any value for the evaluator.