trinicorn [comrade/them]

  • 0 Posts
  • 10 Comments
Joined 4 months ago
Cake day: January 16th, 2025

  • and my comment said that in >50% of my classes, the coursework genuinely fostered learning in the way described. Not all of the evils of the US school system even conflict with this very basic model of learning, and regardless, schools aren’t a monolith. A blanket statement about how schools operate in the US isn’t appropriate here, because it grossly exaggerates how useless they are and doesn’t apply across the board.

    edit: and to be clear, I agree that classes/assignments that only foster learning in theory are meaningless, but many still do in practice.



  • I’m not sure I agree that “only tests are graded” is a good ideal. For what is generally thought of as homework, sure, grading is counter-productive, but for things like essays, projects, etc., I’m not sure that’s true. There is only so much essay you can write in one sitting, and the polishing, restructuring, and extra time spent thinking through an essay that come from doing it at home are valuable practice.

    And I think you underestimate what an LLM can (at least in theory) produce, especially if you let students pick topics or take liberties with structure at all. What you’re asking of teachers, when you say that students successfully using LLMs to pass their class is an indictment of their coursework, is that they always provide prompts and questions novel enough that an “AI” can’t answer them well.

    I agree that an LLM probably can’t convincingly synthesize two concepts that were only represented separately in its training data (though I expect they’ll get closer to passing this off for non-complex examples), but what if the synthesis itself was in the training data? In HS and undergrad level courses, how often are the topics at hand really novel enough to rely on that not being the case? Or how often is the syllabus really flexible enough to allow teachers to reframe all assessments as synthesis questions? And as these companies get better at incorporating fresh material, how often will teachers have to completely rethink their coursework to keep up? This isn’t a treadmill it’s reasonable to expect teachers to get on, nor to condemn them for being imperfect at detecting AI use.

    The problem isn’t that teachers can’t tell, it’s that they can’t prove it. The difference between a student who isn’t quite getting it 100% but is trying, and one who used AI and turned in slop that doesn’t quite make logical sense, is not that cut and dried, and the two don’t deserve the same grade.

    As a matter of practicality, what you describe may become necessary for serious educational institutions, but I wouldn’t lay that on the teachers or say that it’s ideal in any abstract sense, absent LLMs.




  • I’m seeing a lot of normalization of this (and other bad things) in high schoolers. They weren’t really paying attention to the wider world until a few years ago, so anything older than a year or so has basically been around forever and doesn’t need to be questioned. Some will learn to question it as they grow and mature, but few really have yet at their age.

    Basically they seem to take at face value that it produces worthwhile output and is useful and intelligent. tbf this applies to many adults too, but I hate the conflation of “parrots sometimes-relevant, sometimes-real material” with intelligence just because it sounds confident doing it, and I see it constantly.



  • downbear

    The whole point is that A) the goal of school assignments isn’t to get the right answer, it’s to learn the surrounding concepts and how to get the right answer in a more generalizable way, B) the students aren’t learning anything if it’s copy-pasted from an AI, and C) frankly, the LLM doesn’t usually “solve” it. Its outputs are often easily distinguishable, poor answers that just look good enough at first glance to hit submit.

    What about an LLM producing plausible output (the one thing it’s built to do) in response to a prompt (the question/assignment) actually means the coursework is poorly designed?

    I genuinely want to know your thought process here. Is it just that teachers should be expected to outpace cheating technology, or do you genuinely think anything that can convincingly be done by an LLM isn’t worth having a human do?

    Writing an essay on a topic is not just a way of assessing your knowledge of the topic; it’s great practice for communicating your ideas in a coherent, polished form in general. Just because an LLM can write something that sometimes passes for a human-written essay doesn’t mean that essays are useless now…


  • “I wonder how leftists in these countries will prepare for an increasingly illiterate working class”

    honestly, considering the origin story of a lot of AES, I don’t think literacy is a prereq.

    attention span, I guess, might be more of an issue, but I think deteriorating living conditions will make reality harder to ignore. I don’t really think we win by winning over an actual majority of people with reasoned argument; we win by being in the right place at the right time on the right side of declining living conditions. You need an organized core with at least some solid basis in theory, but the broader movement around that core doesn’t have to already understand the theory to be on our side, though hopefully they will learn it as they go.