
  • There you go arguing in bad faith again by putting words in my mouth and reducing the nuance of what was said.

    You do know dissertations are articles and don't constitute any form of rigorous proof in and of themselves? Seems like you have a very rudimentary understanding of English, which might explain why you keep struggling with semantics. If that is so, I apologise because definitions are difficult when it comes to language, let alone ESL.

    I didn't dispute that NNs can arrive at a theorem. I disputed whether they truly understand the theorem they have encoded in their graphs, as you claim.

    This is a philosophical/semantic debate as to what "understanding" actually is, because there's not really any evidence that they are any more than clever pattern recognition algorithms driven by mathematics.

  • You're being downvoted because you provide no tangible evidence for your opinion that human consciousness can be reduced to a graph that can be modelled by a neural network.

    Additionally, you don't seem to respond to any of the replies you receive in good faith, and reach for anecdotal evidence wherever possible.

    I also personally don't like the appeal to authority permeating your posts. Just because someone who wants to secure more funding for their research has put out a blog post, it doesn't make it true in any scientific sense.

  • Seems to me you are attempting to understand machine learning mathematics through articles.

    That quote is not a retort to anything I said.

    Look up Category Theory. It demonstrates how the laws of mathematics can be derived by forming logical categories. From that you should be able to imagine how a neural network could perform a similar task within its structure.

    It is not understanding, just encoding to arrive at correct results.

  • It wouldn't reverse engineer anything. It would start by weighting neurons based on its training set of Pythagorean triples. Over time this would get tuned to represent Pythag in the form of mathematical graphs.

    This is not "understanding" as most people would know it. More like a set of encoded rules.
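A toy sketch of that "encoded rules, not understanding" point (my own illustration, not anything from the thread): a linear model fed a² and b² as features will distil c² = a² + b² into a pair of weights. The theorem is encoded perfectly in the numbers, but nothing in the weights "knows" any geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(1, 10, 200)
b = rng.uniform(1, 10, 200)

X = np.column_stack([a**2, b**2])  # features: a^2 and b^2
y = a**2 + b**2                    # target: c^2 for a right triangle

# Ordinary least squares: "training" distils Pythagoras into two weights.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w)  # ≈ [1. 1.] -- the rule is encoded, but nothing here "understands" it
```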

  • So somewhere in there I'd expect nodes connected to represent the Othello grid. They wouldn't necessarily be in a grid, just topologically the same graph.

    Then I'd expect millions of other weighted connections to represent the moves within the grid including some weightings to prevent illegal moves. All based on mathematics and clever statistical analysis of the training data. If you want to refer to things as tokens then be my guest but it's all graphs.

    If you think I'm getting closer to your point can you just explain it properly? I don't understand what you think a neural network model is or what you are trying to teach me with Pythag.

  • They operate by weighting connections between patterns they identify in their training data. They then use statistics to predict outcomes.

    I am not particularly surprised that the Othello models built up an internal model of the game, as their training data were grid moves. Without looking into it I'd assume the most efficient way of storing that information was in a grid format, with specific nodes weighted to the successful moves. To me that's less impressive than the LLMs.
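For what it's worth, the Othello result being alluded to was demonstrated with linear probes: a small classifier trained to read the board state back out of the network's activations. A minimal sketch of the idea on synthetic data (no real model here; the "activations" are just assumed to encode the board linearly plus noise):

```python
import numpy as np

rng = np.random.default_rng(1)

# 500 fake 8x8 board states, flattened to 64 binary cells each.
board = rng.integers(0, 2, size=(500, 64)).astype(float)

# Pretend the network embeds each board into a 128-dim hidden state.
W = rng.normal(size=(64, 128))
acts = board @ W + 0.01 * rng.normal(size=(500, 128))

# Linear probe: regress the board cells back out of the activations.
probe, *_ = np.linalg.lstsq(acts, board, rcond=None)
recovered = (acts @ probe) > 0.5
print((recovered == board).mean())  # close to 1.0 when the encoding is linear
```

If the probe recovers the board, the information is in there as weighted structure, which is the whole debate: is that an "internal model" or just an efficient encoding?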

  • Brah, if an AI was conscious, how would it know we are sentient?! Checkmate LLMs.

  • I feel like an AI right now having predicted the descent into semantics.

  • Think you're slightly missing the point. I agree that LLMs will get better and better to a point where interacting with one will be indistinguishable from interacting with a human. That does not make them sentient.

    The debate is really whether all of our understanding and human experience of the world comes down to weighted values on a graph or if the human brain is hiding more complex, as-yet-undiscovered, phenomena than that.

  • Your meme probably wasn't dank enough then.

  • Standard descent into semantics incoming...

    We define concepts like consciousness and intelligence. They may be related or may not depending on your definitions, but the whole premise here is about experience regardless of the terms we use.

    I wouldn't say Fibonacci being found everywhere is in any way related to either and is certainly not an expression of logic.

    I suspect it's something like the simplest method nature has of controlling growth. Much like how hexagons are the sturdiest shape, so they appear in nature a lot.

    Grass/rocks being conscious is really out there! If that hypothesis were remotely feasible, we couldn't talk about things being either conscious or not; it would be a sliding scale with rocks way below grass. And it would be really stretching most people's definition of consciousness.

  • Bold of you to assume any philosophical debate doesn't boil down to just that.

  • ...or even if consciousness is an emergent property of interactions between certain arrangements of matter.

    It's still a mystery which I don't think can be reduced to weighted values of a network.

  • Welp looks like we both know the arguments and fall on different sides of the debate then.

    Much better than being confidently wrong like most LLMs...

  • Thank you, much more succinctly put than my attempt.

  • This whole argument hinges on producing consciousness being easier than faking intelligence to humans.

    Humans already anthropomorphise everything, so I'm leaning towards the latter being easier.

  • The key word here is "seems".

  • Orders of magnitude of difference between the most complex known object in the universe and some clever statistical analysis.

    We understand very little about the human brain. For example, we don't know if it leverages quantum interactions or whether it can be decoupled from its substrate.

    LLMs are pattern matching models loosely based on the structure of neurons. They work well for deriving predictions from a vast body of data, but are nowhere near a human brain's level of understanding. I personally don't think they ever will be until we have solved the hard problem of consciousness.

  • I have a theory... They are sophisticated auto-complete.
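To make the "sophisticated auto-complete" quip concrete, here's the unsophisticated version (my own toy, obviously nothing like a real LLM): count which word follows which in a corpus, then always predict the most frequent successor. The objective, predicting the next token from statistics over training data, is the same in spirit.

```python
from collections import Counter, defaultdict

# Crude auto-complete: tally which word follows which in a tiny "corpus",
# then predict the most common successor for a given word.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def complete(word):
    # Most frequent word seen after `word` in the corpus.
    return follows[word].most_common(1)[0][0]

print(complete("the"))  # -> cat ("cat" follows "the" twice; "mat"/"fish" once)
```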