College professors are going back to paper exams and handwritten essays to fight students using ChatGPT::The growing number of students using the AI program ChatGPT as a shortcut in their coursework has led some college professors to reconsider their lesson plans for the upcoming fall semester.
ChatGPT is free, so a lot of students use it as an easy way to do their homework. We need tighter oversight on this.
Wouldn’t it make more sense to find ways to utilize AI as a tool and set up criteria that incorporate its use?
There could still be classes / lectures that cover the more classical methods, but I remember being told “you won’t have a calculator in your pocket”.
My point is, they should be prepping students with the skills to succeed using the tools they will have available, and then give them the education to cover the gaps that AI can’t solve. For example, you basically need to review what the AI outputs for accuracy. So maybe a focus on reviewing output and better prompting techniques? Training on how to spot inaccuracies? Spotting possible bias in a system skewed by its training data?
That’s just what we tell kids so they’ll learn to do basic math on their own. Otherwise you’ll end up with people who can’t even do 13+24 without having to use a calculator.
people who can’t even do 13+24 without having to use a calculator
More importantly, you end up with people who don’t recognize that 13+24=87 is incorrect. Math->calculator is not about knowing the math, per se, but knowing enough to recognize when it’s wrong.
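That mental “is this even plausible?” check can be sketched in a few lines. This is my own toy illustration, not something from the thread: round each operand to the nearest ten and see whether a claimed answer lands anywhere near the estimate.

```python
# Toy sketch of a mental sanity check: round each operand to the
# nearest ten, add the rounded values, and reject any claimed answer
# that falls far outside that ballpark.
def plausible_sum(a: int, b: int, claimed: int, slack: int = 10) -> bool:
    estimate = round(a, -1) + round(b, -1)  # 13 + 24 -> 10 + 20 = 30
    return abs(claimed - estimate) <= slack

print(plausible_sum(13, 24, 37))  # True: 37 is close to the estimate of 30
print(plausible_sum(13, 24, 87))  # False: 87 is nowhere near 30
```

You never need the exact answer to reject 87; the rounded estimate alone is enough, which is exactly the skill a calculator (or an LLM) can’t supply for you.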
I don’t envy professors/teachers who are having to figure out novel ways of determining the level of mastery of a class of 30, 40, or 100 students in the era of online assistance. Because, really, we still need people who can turn out top-level, accurate, well-researched documentation. If we lose them, who will we train the next-gen LLM on? ;-)
end up with people who don’t recognize that 13+24=87 is incorrect
I had a telecom teacher who would let you choose: use a calculator, but then you had to get everything exactly right.
Or go without, and you could get away with rougher estimates. Doing stuff like decibels by hand isn’t too bad if you can get away with a ballpark, and it’s a much more useful skill to develop than just punching numbers into a calculator.
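For anyone who hasn’t seen the by-hand decibel trick: it rests on two rules of thumb, 10 dB per factor of ten and roughly 3 dB per doubling. A small sketch (my own illustration, assuming power ratios) comparing the mental estimate against the exact formula:

```python
import math

# Exact decibel value for a power ratio.
def db_exact(power_ratio: float) -> float:
    return 10 * math.log10(power_ratio)

# Ballpark estimate from the rules of thumb:
# every factor of 10 adds 10 dB, every factor of 2 adds ~3 dB.
def db_ballpark(tens: int, doublings: int) -> float:
    return 10 * tens + 3 * doublings

# A 40x power gain = 10 * 2 * 2 -> 10 dB + 3 dB + 3 dB = 16 dB
print(db_ballpark(1, 2))       # 16
print(round(db_exact(40), 2))  # 16.02
```

The mental estimate lands within a few hundredths of a dB here, which is why the ballpark method holds up so well in practice.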
Prof here - take a look at it from our side.
Our job is to evaluate YOUR ability, and AI is a great way to mask poor ability. We have no way to determine whether you did the work or an AI did, and if called into a court to certify your expertise, we could not do so beyond a reasonable doubt.
I am not arguing exams are perfect, mind, but I’d rather doubt a few students’ apparent inability (maybe it was just a bad exam for them) than always doubt their ability (is any of this their own work?).
Case in point: ALL students on my course with low (<60%) attendance this year scored 70s and 80s on the coursework and 10s and 20s in the OPEN BOOK exam. I doubt those 70s and 80s are real reflections of those students’ ability, but they do suggest the students can obfuscate AI work well.
Is AI going to go away?
In the real world, will those students be working from a textbook, or from a browser with some form of AI accessible in a few years?
What exactly is being measured and evaluated? Or has the world changed, and existing infrastructure is struggling to cling to the status quo?
Were those years of students being forced to learn cursive in the age of the computer a useful application of their time? Or math classes where a calculator wasn’t allowed?
I can only imagine how useful a programming class would be where you have to write code on a blank sheet of paper with a pen and no linters, then.
Maybe the focus on where and how knowledge is applied needs to be revisited in light of a changing landscape.
For example, how much more practically useful might test questions be that provide a hallucinated wrong answer from ChatGPT and then task the students to identify what was wrong? Or provide them a cross discipline question that expects ChatGPT usage yet would remain challenging because of the scope or nuance?
I get that it’s difficult to adjust to something that’s changed everything in the field within months.
But it’s quite likely that a fair bit of how education has been done for the past 20 years of the digital age (itself a gradual transition to the Internet existing) needs major reworking to adapt to these changes rather than simply oppose them; otherwise academia ends up in a bubble, further and further detached from real-world feasibility.
If you’re going to take a class to learn how to do X, but never actually learn how to do X because you’re letting a machine do all the work, why even take the class?
In the real world, even if you’re using all the newest, cutting-edge stuff, you still need to understand the concepts behind what you’re doing. You still have to know what to put into the tool, and to recognize whether what you get out is something that works.
If the tool, AI, whatever, is smart enough to accomplish the task without you actually knowing anything, what the hell are you useful for?
But that’s actually most of the work we have nowadays. AI is replacing repetitive work, such as magazine writing or script writing.
Writers are repetitive work???
Well, it seems they will be replaced, at least certain writers. https://www.npr.org/2023/05/20/1177366800/striking-movie-and-tv-writers-worry-that-they-will-be-replaced-by-ai Also call centers: https://www.bbc.com/news/business-65906521 And junior programmers. And this isn’t just my opinion; these things have already happened, so it’s not debatable.
Only the ones too dumb to incorporate AI usage into their work and grade accordingly. There are going to be a load of kids who aren’t just missing out on learning how to best use modern tools, but who have also wasted their time learning obsolete skills.
Thankfully those kids will be able to get a proper education from AI soon.
It just brings into question what the point of exams is.
AI in its current form is equivalent to the advent of the typewriter. It’s just empowering you to do a whole lot more, a whole lot faster.
Not using it is dumb.
AI is a tool that can indeed be of great benefit when used properly. But using it without comprehending and verifying the source material can be downright dangerous (like those lawyers citing fake cases). The point of the essay/exam is to test comprehension of the material.
Using AI at this point is like using a typewriter in a calligraphy test, or autocorrect in a spelling and grammar test.
Although asking for handwritten essays does nothing to combat use of AI. You can still generate content and then transcribe it by hand.
That argument is great until someone gets maimed or killed because the “AI” got it wrong and the user didn’t know enough to realize.
You know idiots with AI do that all the time, every day, right?
My broader point (in your metaphor) is that calligraphy tests are irrelevant at this point. The world changed. There’s no going back.
That argument is great until someone gets maimed or killed because the “AI” got it wrong and the user didn’t know enough to realize.
Because of an essay?
We need to teach people to work with technology, not pretend it doesn’t exist. When these models came out, the world changed. If you aren’t using them right now, you are being left behind.