Posts: 30 · Comments: 704 · Joined: 2 yr. ago

  • The article says they suspect this was done by people who have an interest in hunting, since those people often complain that the eagles target birds like pheasants.

  • Sorry for the casual question, but what do you mean by cap at 60Hz?

    I just use Firefox on Ubuntu, which fifteen years ago seemed like enough.

    Which also doesn't seem that casual, but this shit is too much to keep up with. Today my engineer dad was complaining about search engines having too many ads. I asked what he used, and he said besides Google on the one computer, he uses Bing on the other.

  • No, this is looking at it wrong. You get to fuck the sexy hybrid human-dog abomination

  • Well which is it, that she tried to pepper spray a winter spider, or that the victim had it coming?

  • Best detail, I think, is that they DoorDashed Arby's.

  • Good work!

  • No, I know. I didn't see any possible market for a product like this, but you shared that you're already doing what this product does, just manually. So I was wondering how much value you see here.

  • Don't know if it changed since you commented, but the article I read included a bunch more than that

  • Would you consider spending $600 plus $7/month for this? (Assuming it was actually secure, not like this one)

  • What could they possibly tell me about my health by visually inspecting my shit? I see the website mentions detecting blood, but pretty sure I can do that too...

  • I responded to your other comment, but yes, I think you could set up an LLM agent with a camera and microphone and then continuously provide sensory input for it to respond to. (In the same way I'm continuously receiving input from my "camera" and "microphones" as long as I'm awake)

  • I'm just a person interested in / reading about the subject so I could be mistaken about details, but:

    When we train an LLM, we're trying to mimic the way neurons work. Training is the really resource-intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.

    When you and I have a "conversation" with ChatGPT, it's always with that base model. It's not actively learning from the conversation, in the sense that new neural pathways are being created. What's actually happening is a prompt that looks like this is submitted: {{openai crafted preliminary prompt}} + "Abe: Hello, I'm Abe".

    Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + "Abe: Hello, I'm Abe" + {{agent response}} + "Abe: Good to meet you, computer friend!"

    And so on. Each time, you're only talking to that base LLM, but feeding it the whole history of the conversation at the same time as your new prompt (roughly like the sketch after this comment).

    You're right to point out that now they've got the agents self-creating summaries of the conversation to allow them to "remember" more. But if we're trying to argue for consciousness in the way we think of it with animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.

    A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.

    Again I'm not an expert, but I expect there's a way to incorporate this type of learning in nearish real time, but besides the technical work of figuring it out, doing so wouldn't be very cost effective compared to the way they're doing it now.
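
    Here's a minimal sketch of that stateless-chat pattern. `complete()` is a hypothetical stand-in for the real model API, and the system prompt and names are made up for illustration:

    ```python
    SYSTEM_PROMPT = "{{openai crafted preliminary prompt}}"

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a call to the frozen base model."""
        return "Good to meet you too!"  # canned reply for the sketch

    history = []  # grows every turn; the model itself never changes

    def chat(user_message: str) -> str:
        history.append(f"Abe: {user_message}")
        # Every turn resubmits the ENTIRE conversation so far.
        prompt = SYSTEM_PROMPT + "\n" + "\n".join(history)
        reply = complete(prompt)
        history.append(f"Agent: {reply}")
        return reply

    chat("Hello, I'm Abe")                      # prompt holds 1 line of history
    chat("Good to meet you, computer friend!")  # prompt now holds 3 lines
    ```

    The point is that `history` lives outside the model: delete it and the "memory" is gone, because the weights were never updated.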

  • Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt and 2) allowing that continuous analysis/response to be incorporated into the LLM's training.

    The first one seems comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever (roughly the loop sketched after this comment).

    The second one seems more challenging, as I understand it training an LLM is very resource intensive. Right now when it "remembers" a conversation, it's just because we prime it by feeding it every previous interaction before the most recent query when we hit submit.
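
    For the first obstacle, a minimal sketch under big assumptions: `sense()` and `complete()` are hypothetical stand-ins for sensor input and the model API, and nothing here is a real library call:

    ```python
    import time

    def sense() -> str:
        """Hypothetical sensor read (camera frame description, mic transcript)."""
        return "camera: empty room; mic: silence"

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a call to the frozen base model."""
        return "(nothing to do)"

    history = []

    # Evaluate and respond to all prior input once a second, no user needed.
    for _ in range(3):  # open-ended in practice; capped here for the sketch
        history.append(f"Input: {sense()}")
        prompt = "\n".join(history)  # still just re-fed context, not learning
        history.append(f"Response: {complete(prompt)}")
        time.sleep(1.0)
    ```

    Even then, the weights stay frozen, which is the second obstacle: the loop only ever grows a transcript, it never changes the model.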

  • I have no mouth and I must beam

  • Every day? But not at the scale where I need to view the whole globe

  • My dogs prefer lettuce, as long as it's crispy

  • For me I think it helps to think of error correction. When two computers are exchanging information it's not just one way, like one machine sends a continuous stream to the other and then you're done. The information is broken up into pieces, and the receiving machine might say "I didn't receive these packets, can you resend?" And there are also things like checking a hash to make sure the copied file matches the original file (there's a small sketch of that after this comment).

    How much more error correction do you think we should have in human conversation, when your idea of the "file transfer protocol" is different from the other participant's? "I think you're saying X, is that correct?" Even if you think you completely understand, a lot of times the answer is "no, actually... blah blah."

    You brought up the idea of neurodivergents providing more detail, which can be helpful. But even there, one person may have a different idea about which details are relevant, or what the intended goal of the conversation is.

    Taking a step beyond that, I recognize that I am not a computer, and I'm prone to making errors. I may think I'm perfectly conveying all the necessary information, but experience has shown that's not always true. Whether the problem is on my end or the other person's, if I'm trying to accomplish a given objective, it's in my personal interest to take extra steps to ensure there's no misunderstanding.
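
    The hash check mentioned above looks roughly like this (standard-library Python; the file names are made up):

    ```python
    import hashlib

    def file_hash(path: str) -> str:
        """SHA-256 digest of a file's contents, read in chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file names, for illustration only.
    if file_hash("original.bin") == file_hash("copy.bin"):
        print("copy verified")
    else:
        print("mismatch: ask the sender to resend")
    ```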

  • Today I Learned @lemmy.world

    TIL if you live in Pennsylvania and make minimum wage you'd have to work 105 hours a week to afford a "modest" one bedroom rental.

    nlihc.org/oor/state/pa
  • Memes @lemmy.ml

    Scalable

  • Technology @lemmy.world

    Attendees at Bored Ape NFT event report vision loss and extreme pain

    www.theverge.com/2023/11/6/23948464/bored-ape-nft-event-eye-injury-sunburn-uv-exposure
  • politics @lemmy.world

    Universal Basic Income: In July three Pennsylvania lawmakers said they were going to introduce a bill calling for study of UBI. Has anyone seen updates since then?

    www.penncapital-star.com/blog/state-lawmakers-to-introduce-legislation-for-universal-basic-income-study/
  • politics @lemmy.world

    House speaker elections can be better with ranked choice voting

    fairvote.org/house-speaker-elections-can-be-better-with-ranked-choice-voting/
  • Political Memes @lemmy.world

    "All the land was claimed by others before we were born"

  • Ask Lemmy @lemmy.world

    How many patients can one doctor take care of per year? How many people can one farmer feed a year?

  • Memes @lemmy.ml

    School lunch debt, what about normal lunch debt?

  • Showerthoughts @lemmy.world

    Was thinking about how sometimes a therapist can give bad advice, and if you're not thinking about the situation clearly, how would you know? Clearly...

  • Memes @lemmy.ml

    Why won't they stay where they belong?