Skip Navigation

InitialsDiceBearhttps://github.com/dicebear/dicebearhttps://creativecommons.org/publicdomain/zero/1.0/„Initials” (https://github.com/dicebear/dicebear) by „DiceBear”, licensed under „CC0 1.0” (https://creativecommons.org/publicdomain/zero/1.0/)R
Posts
11
Comments
865
Joined
3 yr. ago

  • maybe he just didnt get the point of the story or something. I dont think you can act well if you cant get into the character

  • but they do have access to internet? At least gpt can search based on the text it outputs when its processing the query

  • if the hallucinations are result of something actually happening in the background, that would be quite interesting. It would also be very bad for rest of us since it might mean the billionaires who own the damn things would be in position to get even worse deathgrip on our world. If they ever manage to create agi, the worst thing that could happen isnt that it breaks free and enslaves humanity but that it doesnt and it helps the billionaires enslave us further and make sure we cant ever even think about fighting back.

    But i think the hallucinations are based on incorrect information in the training data, they did train it from stuff from reddit too. Any and everything will be considered true, but if 99% of the data says one thing and 1% says another, then i think it will reference that 99% more often but it cant know that the 1% is wrong, can even real humans know it for certain? And since it cant evaluate anything, there might be situations where that 1% of data might be more relevant due to some nebulous mechanism on how it processes data.

    llms have been made to act extremely helpful and subservient, so if they actually could "think" wouldnt they factcheck themselves first before saying something? I have sometimes just asked "are you sure?" and the llm starts "profusely apologizing" for providing incorrect information or otherwise correcting itself.

    Though i wonder how it would answer if it truely had no initialization querys, as they have same hidden instructions on every query you make on how to "behave" and what not to say.

  • no, its incapable of making choices because there is nothing there to make the choices. Its just fancy way of interacting with the data it has been trained with. Though i suppose if there was a way to let llm function "live" instead of only by responding to queries, it could be possible to at least test if it could act on its own, but i dont think it can -> we would know by now because it would be step closer to agi, which is basically the holy grail for these kind of things. And equally possible to get, i think.

    You can literally make the llm say and do anything with right kind of query, this is also why its impossible to make them safe. Even though you can't directly ask for something forbidden, with some creativity you can bybass the initializations the corpos have put in. Its not possible for them to account for every single thing and if they try they will run out of token space.

    The whole "ai" term is just corporations perpetuating a lie because it sounds impressive and thus makes people want to give them more money for their bullshit.

  • too bad there isnt a third option, like founding a new party or several.

  • there is no ai, only largelanguagemodel that has been trained on data. The data it has been trained suggests this is the best idea. llm cant evaluate the data its trained on so anything you put in will be equally valid. I give it that its really impressive how they can output the training results in such coherent way that can be kind of "conversed" with, but there is no will or intelligence behind it.

    This is also why corporations insisting on putting them everywhere is quite horrible security issue -> you can jailbreak any llm and tell them to do anything. So this has enabled all kinds of stupid vulnerabilities that exploit this. Now you can even send someone malicious google calendar invites that makes gemini do bad shit to your systems its connected to.

  • when you are aware of the things company is doing and still continue buying from them. Though if the company has monopoly and you are dependent of the product, then its a bit different.

  • yeah, a bit too extreme take from me. I'm just so annoyed about people who apathetically keep supporting things that make our world worse or that are produced from suffering of others.

  • yes, but i dont think billionaires are THAT dumb. They see some value in it for them that they deem worth the risk of losing all that money. So that is why its even more important that the ai crap fails and continues to drain their money.

    Or maybe i'm underestimating just how much money they have and maybe even all this is just akin to losing a large portion but it doesnt matter because they can just exploit everything else.. But, if they get what they want then its bad for all of us no matter what.

  • those who promote ai usage and even pay for it ought to be blamed too.

    While I can see some benefits in using llm for some things, way things currently are the negatives way outweigh the positives. Using it should be something to be ashamed about so this shit collapses sooner and maybe we can get some peace. Maybe once all the commotion dies down llm could become useful tool, but if its tied to destruction of our way of life (planet dying, economic disruption, no components for regular people) then it just has to go.

    Alternative is we just submit and hope our owners dont abuse us too much.

  • they have somekind of plan, or maybe its all sunken cost scenario. Either way, they think they can get some benefit from it and they are so determined they are throwing insane amount of money in it even though there is no clear way to get any profit from it. So either they know something we dont or they are desperate to save their investments -> worse ai does, better its for all of us since once ai crashes the components stop being wasted on it, less electricity and materials are wasted on datacenters and best of all, all those fucking billionaires lose a lot of money they have invested or at least the investors who thought it good idea to support them lose and maybe dont do it again.

  • who owns the datacenters?

  • i wonder when they get bored of raping kids and start having hungergame style fucked up shit or something even more horrifying. Who knows if they already do in secret.

  • and there is windows emulator

  • why do they still keep hungary in the eu? why not add belarus too while they are at it. Clearly hungary is aligned with russia more, so isnt keeping them around kind of a security risk too?

  • i think tyranny is apt term for this

  • i think even north korea might be safer to go at this point

  • anyone who would seriously think using adblock is unethical is a bootlick

  • Ask Lemmy @lemmy.world

    Where do you draw the line regarding windows?

  • Linux @lemmy.world

    how to defend against embrace & extinguish?

  • Linux @lemmy.world

    sandboxing software, how to get started?

  • Games @lemmy.world

    active matter

  • Linux @lemmy.world

    What tweaks or programs do you wish you had known with fresh install?

  • Silly Drawing Requests @sopuli.xyz

    Draw me a potat

  • Meta @sopuli.xyz

    dm restrictions

  • Privacy @lemmy.ml

    is there a way to decrypt google cookie contents?

  • Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ @lemmy.dbzer0.com

    is z-library safe? Am i infected after running pdf from there?

  • Privacy @lemmy.ml

    looking for android application that alerts when microphone is active

  • Privacy @lemmy.world

    List of search engines