

No offense, but I’m assuming the integrals you mention are the ones you’d frequently encounter in a regular calculus course in the English-speaking world (a little tidbit I came across a while ago: universities in a number of countries don’t make a distinction between real analysis and calculus). I studied in NA, so I can come at your question from that angle.
To make it perfectly clear though: real analysis (and some fields beyond it) and calculus are looking at the same thing, but coming at it from different angles. Calculus focuses on what’s computable, on having the ability to see how real values (as in, real numbers) change under a particular function. OTOH, real analysis is, as the name suggests, a study of the real numbers, of this nebulous idea of “distance between numbers along with other distance-y properties” that we call a space, and of the functions that can act on such a space.
Here’s an example of the difference in treatment.
In calculus, the idea of differentiability is usually introduced as “the tangent at a point”. That’s a fairly easily understood idea, and it’s fine to gloss over the details when most of the functions you will come across and use are going to be differentiable anyway.
In real analysis, which is usually an early class in pure mathematics, the treatment is a lot more rigorous: you have to very explicitly define what something is, and that definition becomes your framework for proving that something IS the thing you’ve defined. The “tangent at a point” isn’t lost, but the way it’s described leaves you no room for vague interpretations of what counts as differentiable or not.
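To make that concrete, the rigorous version of “tangent at a point” is the standard limit definition you’d see in any analysis text (textbook material, nothing specific to this thread):

```latex
f'(a) = \lim_{h \to 0} \frac{f(a+h) - f(a)}{h}
```

f is differentiable at a exactly when this limit exists, and “limit” itself gets the full epsilon-delta treatment, so there’s genuinely no wiggle room left.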
The same goes for integrability. And yes, there are different ways to think about integrability that expand the types of functions considered integrable. In calculus, the Riemann integral is likely the only method one will ever see. And that’s fine! It’s easy, if tedious, to compute! And it’s already incredibly useful. Most functions that a student in calculus will ever have to integrate are continuous anyway.
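As a quick sketch of why the Riemann approach is so computable, here’s a toy left-endpoint Riemann sum (my own illustration, not from any particular textbook):

```python
# Toy left-endpoint Riemann sum: chop [a, b] into n vertical strips
# and add up the strip areas. For f(x) = x^2 on [0, 1] the exact
# answer is 1/3, and the sum converges to it as n grows.
def riemann_sum(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

print(riemann_sum(lambda x: x * x, 0.0, 1.0, 100_000))  # close to 1/3
```

Chopping the area into vertical strips is exactly the “slicing downwards” picture, and it only needs function evaluations, nothing fancier.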
But Lebesgue was able to create a definition of an integral that allows us to handle even certain non-continuous functions. The problem? It’s not as easily computable, since there isn’t the arsenal of antiderivative rules common in calculus (calculus is, after all, a “method of calculation”), even though the intuitive interpretation of the Lebesgue integral is simple: instead of slicing the area under the curve into vertical strips, you slice sideways!
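And here’s a toy sketch of that “slice sideways” intuition, again my own illustration and deliberately crude: for each horizontal slice, estimate the size of the set of points where f exceeds that height, then add the slices up.

```python
# "Slicing sideways": approximate the integral of f on [a, b] by summing,
# over horizontal slices of height dt, the (estimated) measure of the
# level set {x : f(x) > t}. This is the layer-cake picture of the
# Lebesgue integral, with the measure estimated on a midpoint grid.
def lebesgue_style(f, a, b, f_max, n):
    dx = (b - a) / n
    dt = f_max / n
    xs = [a + (i + 0.5) * dx for i in range(n)]
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * dt
        # crude estimate of the size of the set where f sits above height t
        measure = sum(dx for x in xs if f(x) > t)
        total += measure * dt
    return total

print(lebesgue_style(lambda x: x * x, 0.0, 1.0, 1.0, 1_000))  # also close to 1/3
```

For a continuous function like x^2 the two pictures agree, which is the point: the sideways slicing gives the same answer where Riemann works, but keeps making sense for some functions where Riemann breaks down.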
Hopefully that’s easy enough to follow, but let me know if you’d like me to explain further. I’m digging through an old part of my brain here for this.
Edit: I’m adding this on because I think I may have just recalled a very fundamental fact from measure theory: all non-negative measurable functions defined on some measurable space are integrable with respect to the measure (though the integral may be infinite). To be really fair, we straight up just defined integrability to be that, because, and I’m being veeeerrry handwavy here, if you can measure the parts of the space where the function lands in each little slice of its range, then you can just add those pieces up. A measurable space is just a space where you can put some kind of measurement (think of how you measure things) on collections of points, say, sets of numbers.
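For the curious, the “add the measured pieces up” idea gets formalized by approximating from below with simple functions (this is the standard measure-theory definition, stated from memory):

```latex
\int f \, d\mu = \sup\left\{ \sum_{i=1}^{n} a_i \, \mu(A_i) \;:\; 0 \le \sum_{i=1}^{n} a_i \mathbf{1}_{A_i} \le f \right\}
```

Here each A_i is a measurable set, each a_i is a non-negative constant, and the sup runs over all such “staircase” functions sitting under f; each term a_i μ(A_i) is literally “height times measured size of a piece”.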
How easily we can come up with methods of calculation that would let us take any such function, apply some symbolic manipulation, and arrive at the integral is a completely different question, though, and I don’t know whether there’s effort being put in here.








I know Lemmy hates AI with a fiery passion (and I too hate it for various reasons), but the ability to make this sort of prediction, far more stably than whatever came before in natural language processing (fancy term of the day for those who haven’t heard of it), is useful if you can nudge it enough in a certain direction, however inefficiently it’s built and run. It can’t do functional things reliably. But if you constrain it to only parse human language, extract very specific information, and emit it in a machine-parsable way, and then use that as input for something you can program, you’ve essentially built something that feels like it understands you in human language for a handful of tasks and can carry them out (even if the carrying-out part isn’t actually done by an LLM). So pedantically, it’s not AI, but most people not in tech don’t know or care about the difference. It’s all magic all the way down, like how computers should just magically do whatever they’re thinking of. That hasn’t changed.
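The pattern I’m describing looks roughly like this (a hypothetical sketch; `fake_llm` stands in for whatever real model call you’d use, and the only assumption is that the model has been nudged into answering in strict JSON):

```python
import json

# Stand-in for a real model call. A well-nudged model, prompted to reply
# in strict JSON, might return something like this for the utterance
# "remind me to water the plants at 6pm".
def fake_llm(prompt: str) -> str:
    return '{"intent": "set_reminder", "task": "water the plants", "time": "18:00"}'

def handle(utterance: str) -> str:
    raw = fake_llm(f"Extract the intent as JSON: {utterance}")
    data = json.loads(raw)  # the machine-parsable hand-off
    if data["intent"] == "set_reminder":
        # Deterministic, programmable part: the LLM itself never executes anything.
        return f"Reminder set: {data['task']} at {data['time']}"
    return "Sorry, I can't do that."

print(handle("remind me to water the plants at 6pm"))
```

The LLM only does the fuzzy language-to-structure step; everything after `json.loads` is ordinary code you control, which is where the reliability comes from.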
My point though, and this isn’t targeting you specifically, dear OC, is that we can circlejerk all we want here, but echoing this oversimplification of what LLMs can do is pretty irrelevant to the bigger discourse. Call these companies out on their practices! Their hypocrisy! Their indifference to the collapse of our biosphere, to human suffering, to leaving the most vulnerable hanging out to dry!
Tech is a tool, and if our best argument is calling a tool useless when it’s demonstrably useful in specific ways, we’re only making fools of ourselves, turning people away and discouraging others from listening to us.
But if your goal is to feel good by letting one out, please be my guest.
Peace