Similarly, in research, the trajectory points toward systems that increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation, and even select new tests based on prior results.
At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the “routine” responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.
The same dynamic applies to undergraduates, albeit in a different register.



I would say that THIS is the biggest risk of AI. It's not what it does; it's what people believe it does, especially people who aren't capable of actually assessing its performance, or C-level executives so out of touch with what their employees do that they are convinced it can or should replace some or all of an employee's job duties.