• 4 Posts
  • 984 Comments
Joined 2 years ago
Cake day: August 5th, 2023

  • Here’s a question. I’m gonna preface it with some details. One of the things I used to do for the US Navy was develop security briefs. To write a brief, you essentially pull information from several sources (some of which might be classified in some way) to provide detail for the purpose of briefing a person or people about mission parameters.

    Collating that data is important and it’s got to be not only correct but also up to date and ready in a timely manner. I’m sure ChatGPT or similar could do that to a degree (minus the bit about it being completely correct).

    There are people sitting in degree programs as we speak who are using ChatGPT or another LLM to take shortcuts, not just in learning but in doing course work. Some of those people are in counterintelligence and similar degree programs. Those people may inadvertently put classified information into these models. I would bet it has already happened.

    The same can be said for trade secrets. There are lots of companies out there building code bases that are considered trade secrets or that deal with trade-secret-protected info.

    Are you suggesting that they use such tools in the arsenal to make their output faster? What happens when they do that and the results are collected by whatever model they use and put back into the training data?

    Do you admit that there are dangers here that people may not be aware of, or may not even be cognizant that they may one day work in a field where this could be problematic? I wonder this all the time, because people only seem to be thinking about the here and now of how quickly something can be done, not the consequences of doing it quickly or more “efficiently” using an LLM, and I wonder why people don’t think about it the other way around.


  • Cars do have that in what amounts to a TCU or Telematics Control Unit. The main problem here isn’t whether or not cars have that technology. It’s about the relevant government agency forcing companies like Tesla (and other automakers) to produce that data not just when there’s a crash, but as a matter of course.

    I have a lot of questions about why Teslas are allowed on public roads when some of the models haven’t been crash tested. I have a lot of questions about why a company wouldn’t hand over data in the event of a crash without the requirement of a court order. I don’t necessarily agree that cars should be able to track us (if I buy it, I own it, and nobody should have that kind of data without my say-so). But since we already have cars that do phone this data home, local, state, and federal governments should have access to it. Especially when insurance companies are happy to use it to place blame in the event of a crash so they don’t have to pay out an insurance policy.







  • atrielienz@lemmy.world to Games@lemmy.world: Thank you, Thor! 🥳 (edited, 1 day ago)

    For those who don’t know, this streamer is only tangentially related to the Stop Killing Games petition: he made a comment calling it BS because he misinterpreted what it was supposed to do. He used his misinterpretation to spread false information about the petition, leading to it not getting the support it initially should have.

    When the guy behind the petition made a statement saying he didn’t think the petition was going to get enough signatures, in part because of the misinformation being spread about it, PirateSoftware doubled down on his false claims. All of this led to people doing the research they should have done in the first place and deciding to support the petition after all.

    What we should probably be learning from this is that we should do our own research, and find out things instead of taking the word of random people online.

    Edit: electric has brought to my attention that it wasn’t just one clip, but in fact a whole video dedicated to spreading misinformation that was made by Thor from PirateSoftware. Just wanted to be clear about that.



  • There’s one major problem with what you’re saying: ICE is actively jailing people without giving them due process. As an entity, it is assuming guilt, which is in direct conflict with the Constitution. Because it’s violating the rights of the people, it is no longer a government agency acting for the people, and because it’s actively breaking the law, it is not protected. If you can’t understand that without due process they can and possibly will arrest you and deport you somewhere regardless of your constitutional right to reside in the US, then you are in fact missing the main point of this app, and there’s a reason people are down-voting you.

    Also, you’re making a lot of assumptions about what the app is for, and still posit no actual proof of your position. You have made an assumption here and when confronted about your opinion based on that assumption you have continued to double down instead of even considering the alternatives.

    And speed traps aren’t intended to be a deterrent. I don’t know why you think that’s the case; in fact, they are set up specifically to catch speeders. The deterrence is a bonus. A lot of police departments make money for their municipality via speeding tickets. So don’t try to play like we can just ignore this so you can feel like you’ve won.


  • In all actuality, I believe the point of being able to report a speed trap is to allow people to avoid getting caught breaking the law, which amounts to the same thing.

    Google Maps and Waze can absolutely be used to show where to attack law enforcement. They can also be used to avoid law enforcement. What you’re saying is that you feel like the intention of the app is to break the law in some way, but you’ve been given a similar app that does basically the same thing, and you back up nothing of what you’ve said with documented case law or even the laws you think this app is breaking. Cool. Good talk.





  • Word roots say they have a point, though. Artifice, artificial, etc. I think the main problem with the way both of the people above you are using this terminology is that they’re focusing on the wrong word and how that word is being conflated with something it’s not.

    LLMs are artificial. They are a man-made thing intended to fool man into believing they are something they aren’t. What we’re meant to be convinced they are is sapiently intelligent.

    Mimicry is not sapience, and that’s where the argument for LLMs being real honest-to-God AI falls apart.

    Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. When we anthropomorphize them, we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.