• 0 Posts
  • 254 Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Kogasa@programming.dev to Science Memes@mander.xyz · Listen here, Little Dicky · 2 days ago
    1. I also have a master's in math and completed all coursework for a PhD. Infinitesimals never came up because they're not part of the standard foundations for analysis. I'd be shocked if they were addressed in any formal capacity in your curriculum, because why would they be? It can be useful to think in terms of infinitesimals for intuition, but you should know the difference between intuition and formalism.

    2. I didn't say "infinitesimals don't have a consistent algebra." I'm familiar with nonstandard analysis (NSA) and other systems admitting infinitesimal-like objects. I said they're not standard. They aren't.

    3. If you want to use differential forms to define 1D calculus, rather than an NSA/infinitesimal approach, you'll eventually realize some of your definitions are circular, since differential forms themselves are defined with an implicit understanding of basic calculus. You can get around this circular dependence, but only by introducing new definitions that are ultimately less elegant than the standard limit-based ones.


  • Ok, but no. Infinitesimal-based foundations for calculus aren't standard, and if you try to make this work with differential forms you'll get a convoluted mess that is far less elegant than the actual definitions. It's just not founded on actual math. It's hard for me to argue this with you because it comes down to simply not knowing the definition of a basic concept, or not having the necessary context to understand why that definition is used instead of others…


  • 1000 or so moves

    Ok, this nerd-snipes me a bit. The real point of the Towers of Hanoi problem, at least if you’re doing it in a discrete math class, is to count exactly the number of moves for n discs. Call this number f(n).

    To make this post easier to read, we will label the discs from 1 to n, with 1 being the bottom disc and n being the top. We will also label the pegs A, B, and C, with A being the starting peg and C being the target peg. This lets us describe moves in a compact notation: 3: A -> C means disc 3 moves from A to C.

    For n=0, we vacuously need 0 moves to win.

    For n=1, we need just 1 move: 1:A->C.

    For n=2, we can do it in 3 moves, so f(2) = 3. The sequence is:

    • 2: A->B # unblock the next move
    • 1: A->C # move 1 to its final location
    • 2: B->C # move 2 to its final location

    Now suppose we know f(n) for n between 1 and N, with N >= 2. For n=N+1, we can move N+1: A->C and attempt to use our strategy for moving the remaining N discs from A to C, shuffling disc N+1 around as needed. Since it's the smallest disc, we can always move it to any peg, and any time we want to move something to the peg it's on, it must be moved first. Let's see how that plays out for N=2:

    • 3: A->C # move the N+1 disc
    • 2: A->B # start the N-disc strategy
    • 3: C->B # unblock the next step
    • 1: A->C # next step of N-disc strategy
    • 3: B->A # unblock the next step
    • 2: B->C # last step of N-disc strategy
    • 3: A->C # finish the N+1-disc problem

    We hazard a guess that, in general, every step of the N-disc strategy is preceded by a move of disc N+1, with one final move of disc N+1 at the end. In other words, f(N+1) = 2f(N) + 1. Some careful analysis will justify this guess.
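
    If you want to see this play out for larger n without pushing discs around by hand, here is a minimal Python sketch of the classical recursion: park the smaller discs on the spare peg, move the largest, bring them back. The function name and structure are just illustrative, but with the labelling above it reproduces the 7-move sequence for n=3 exactly.

    def hanoi(n: int, src: str = "A", dst: str = "C", aux: str = "B",
              offset: int = 0) -> list[str]:
        """Move list for discs offset+1 .. offset+n, where a smaller label means a
        larger disc (the convention above: 1 = bottom, n = top)."""
        if n == 0:
            return []
        return (hanoi(n - 1, src, aux, dst, offset + 1)     # park the smaller discs on the spare peg
                + [f"{offset + 1}: {src}->{dst}"]           # move the largest remaining disc
                + hanoi(n - 1, aux, dst, src, offset + 1))  # bring the smaller discs back on top

    moves = hanoi(3)
    print(moves)       # matches the 7-move sequence listed above
    print(len(moves))  # 7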

    Now we have a recurrence relation for f(n), but how to solve it? A classical technique is “guess the formula magically, then prove it by induction.” It’s certainly doable here if you compute a few values by hand:

    f(2) = 3

    f(3) = 2(3) + 1 = 7

    f(4) = 2(7) + 1 = 15

    f(5) = 2(15) + 1 = 31

    f(6) = 2(31) + 1 = 63

    You’ll probably see it right away: f(n) = 2^n - 1. Indeed, we can prove this by induction: the base case n=2 holds as f(2) = 3 = 2^(2) - 1, and if f(n) = 2^n - 1 then f(n+1) = 2f(n) + 1 = 2(2^n - 1) + 1 = (2^(n+1) - 2) + 1 = 2^(n+1) - 1 as desired.

    We may conclude f(12) = 2^(12) - 1 = 4095. In other words, with 12 discs, exactly 4095 moves are required.
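
    As a throwaway sanity check, a couple of lines of Python confirm the closed form against the recurrence:

    def f(n: int) -> int:
        # f(0) = 0, f(n) = 2*f(n-1) + 1: the recurrence derived above
        return 0 if n == 0 else 2 * f(n - 1) + 1

    assert all(f(n) == 2**n - 1 for n in range(20))
    print(f(12))  # 4095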

    Bonus: an alternative to the “guess and prove” technique that is generalizable to a broad class of recurrence relations. The technique is called “generating functions.” Given the sequence f(n), we consider the formal power series

    F(x) = f(0) + f(1)x + f(2)x^(2) + … + f(n)x^(n) + …

    This F(x) is the “ordinary generating function” for the sequence f(n). Strictly speaking, it may not be a well-defined function of x since we haven’t made any assumptions about convergence, but we will assume (for now, and prove later) that the set of such formal power series behaves algebraically much like we expect. Namely, given the recurrence relation above, we can write:

    F(x) - f(0) - f(1)x = f(2)x^(2) + f(3)x^(3) + f(4)x^(4) + … + f(n)x^n + …

    = (2f(1) + 1)x^(2) + (2f(2) + 1)x^(3) + … + (2f(n-1) + 1)x^(n) + …

    = 2x(f(1)x + f(2)x^(2) + … + f(n)x^(n) + …) + (x^(2) + x^(3) + … + x^(n) + …)

    = 2x(F(x) - f(0)) + x^(2)/(1-x)

    In our case, we have f(0) = 0, f(1) = 1, f(2) = 3 so we can write more succinctly:

    F(x) - x = 2xF(x) + x^(2)/(1-x)

    Solving for F,

    F(x)(1 - 2x) = x + x^(2)/(1-x)

    = x(1 + x/(1-x))

    F(x) = x(1 + x/(1-x))/(1 - 2x)

    = x/(2x^(2) - 3x + 1)

    Ok, great. We've found that our generating function, convergence notwithstanding, is that rational function. Factoring the denominator as (1-x)(1-2x) and using partial fraction decomposition, we can write it as

    F(x) = 1/(1 - 2x) - 1/(1-x)

    which has the advantage of telling us exactly how to compute the coefficients of the Taylor series for F(x). Namely,

    1/(1-x) = 1 + x + x^(2) + … + x^(n) + …

    1/(1 - 2x) = 1 + 2x + 4x^(2) + … + 2^(n) x^(n) + …

    So F(x) = (1-1) + (2-1)x + (4-1)x^(2) + … + (2^(n) - 1)x^(n) + …

    The nth coefficient of the Taylor series for F about 0 is 2^(n)-1, and by the definition of F as the ordinary generating function for f(n), we have f(n) = 2^(n) - 1. (The rigorous justification for ignoring convergence here still needs to be done; for now, this can be seen as a useful magic trick.)
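
    If you'd rather outsource the bookkeeping, sympy will do the partial fractions and the series expansion as a cross-check. Assuming you have sympy installed, something like this works, though it may print the partial fractions with the signs arranged differently:

    import sympy as sp

    x = sp.symbols("x")
    F = x / (2 * x**2 - 3 * x + 1)

    print(sp.apart(F, x))         # equivalent to 1/(1 - 2x) - 1/(1 - x)
    print(sp.series(F, x, 0, 8))  # x + 3*x**2 + 7*x**3 + 15*x**4 + … : coefficients 2^n - 1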


  • Not necessarily do logic, but mimic it, like it can mimic coherent writing and basic conversation despite only being a statistical token muncher. The hope is that there's sufficient information in the syntax to model the semantics, in which case a sufficiently complex and well-trained model of the syntax is also an effective model of the semantics. This apparently holds up well for general language tasks, meaning "what we mean" is well modeled by "how we say it." It's plausible, at face value, that rigorous argumentation is also a good candidate, which would give language models a way of mimicking logic by talking through a problem. It's just not very good in practice right now. Maybe a better language model could do better; maybe not at a reasonable cost.


  • Kogasa@programming.dev to Science Memes@mander.xyz · nyet · 1 month ago

    You can imagine tracing a path along a Klein bottle to see that it only has one side. To get more precise than that requires some topological context. If you slice it down the middle, it turns into two Möbius strips, and an orientation of the Klein bottle would induce an orientation of the strips, which are non-orientable. Alternatively, its top-dimensional integer homology is zero, which you can compute from a triangulation. Or note that the orientable double cover of a Klein bottle is a torus, which is connected (if the Klein bottle were orientable, its double cover would be two disconnected Klein bottles).