Posts: 7 · Comments: 1,410 · Joined: 3 yr. ago

  • Permanently Deleted
  • I blame smartphones

  • If there's one person who knows their applied zk proofs, it's that guy.

  • There are some pretty strong arguments that even zk proofs are a flawed way of preserving privacy, in a variety of ways. They prevent pseudonymity by enabling one-user-one-account, and they leave users vulnerable to being coerced into revealing their full online activities by handing over cryptographic keys.

  • I haven't played Neopets; is there something about its design that would make the situation better than Roblox?

  • LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning ...

    Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

    But take away language from a large language model, and you are left with literally nothing at all.

    The author seems to be assuming that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, supposedly focus on language specifically, while other parts of the brain do the reasoning), but that isn't really how it works. LLMs have to internally model more than just the structure of language, because text contains information that isn't just about the structure of language. The existence of multimodal models makes this kind of obvious: they train on more input types than just text, so whatever they're doing internally is more abstract than only being about language.

    Not to say the research on the human brain they're talking about is wrong, it's just that the way they are trying to tie it in to AI doesn't make any sense.

  • Every politically charged term ends up having highly disputed definitions, but I think most of those definitions will acknowledge that the term has way more baggage than just the idea of taking care of yourself and your neighbors.

    From Wikipedia:

    Socialism is an economic and political philosophy encompassing diverse economic and social systems[1] characterised by social ownership of the means of production,[2] as opposed to private ownership.[3][4][5] It describes the economic, political, and social theories and movements associated with the implementation of such systems.

    It represents a whole set of beliefs about how the world works, in addition to political goals. Someone might broadly agree with the idea that people should be taken care of, but have strong objections about the specifics. One of those beliefs that I'll object to is the idea that just about everything should be understood as being about class conflict; I don't think that's always accurate.

  • That's literally what the comment above it was doing too though. It's a very common anti-AI argument to appeal to social proof.

  • We can’t afford to make any of this. We don’t have the money for the compute required or to pay for the lawyers to make the law work for us

    I don't think this is entirely true; yeah, large foundational models have training costs that are beyond the reach of individuals, but plenty can be done that is not, or can be done by a relatively small organization. I can't find a direct price estimate for Apertus, and it looks like they used their own hardware, but it's mentioned that they used ten million GPU-hours on GH200 GPUs; I found a source online claiming a rental cost of $1.50 per hour for that hardware, so the cost of training this could be loosely estimated at around 15 million dollars.

    That is a lot of money if you are one person, but it's an order of magnitude smaller than the settlements of billions of dollars being paid so far by the biggest AI companies for their hasty unauthorized use of copyrighted materials. It's easy to see how copyright and legal costs could potentially be the bottleneck here preventing smaller actors from participating.

    It should benefit the people, so it needs to change. It needs to be “expanded” (I wouldn’t call it that, rather “modified” but I’ll use your word) in that it currently only protects the wealthy and binds the poor. It should be the opposite.

    How would that even work though? Yes, copyright currently favors the wealthy, but that's because the whole concept of applying property rights to ideas inherently favors the wealthy. I can't imagine how it could be the opposite even in theory, but in practice, it seems clear that any legislation codifying limitations on use and compensation for AI training will be drafted by lobbyists of large corporate rightsholders, at the obvious expense of everyone with an interest in free public ownership and use of AI technology.
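    The cost estimate above is back-of-envelope arithmetic, simple enough to sanity-check. A minimal sketch, assuming the reported ~10 million GH200 GPU-hours and taking the $1.50/GPU-hour rental rate from that one online source (not an official figure):

    ```python
    # Back-of-envelope training cost estimate.
    # Both inputs are assumptions from the discussion above, not official numbers.
    gpu_hours = 10_000_000       # reported compute budget (GH200 GPU-hours)
    usd_per_gpu_hour = 1.50      # assumed rental rate from one online listing

    estimated_cost = gpu_hours * usd_per_gpu_hour
    print(f"${estimated_cost:,.0f}")  # → $15,000,000
    ```

    Owning the hardware, as they apparently did, changes the accounting, so treat this only as an order-of-magnitude figure.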

  • But we can't afford to pay. I don't think open models like the one in the OP article would be developed and released for free to the public if there was a complex process of paying billions of dollars to rightsholders in order to do so. That sort of model would favor a monopoly of centralized services run only by the biggest companies.

  • Permanently Deleted
  • I am thankful for the safety feature where locking the lid also depresses a button that allows the food processor to operate, but I also keep it unplugged when the lid is off for an extra layer of redundancy.

  • I use FreeTube, but this doesn't seem like a "YouTube piracy" solution, because it streams the content directly from YouTube, which can ultimately prevent access; I'm already blocked from watching certain videos that require you to be logged in.

    The problem is basically this: if there is a specific YouTube video you want to watch, but YouTube insists that you must provide ID to see it, right now I don't think there's actually a lot of recourse, because there are too many such videos for anyone else to host them or offer torrents or anything.

  • There's a question of how to do that, or at least how best to stop destructive business practices, and, related to that, how these strategies work and where the money is coming from. From the explanation given here, it sounds like their answer is that the bank loaning the money gets ripped off and left with a worthless company, which seems a little implausible and needs more explanation of why that is possible as a common trend. But since I keep seeing GPTisms, I'm thinking maybe that detail was slightly off in some important way, because the writing is LLM-approximation bullshitting.

    Anyway it's just frustrating because this topic is interesting to me but there isn't any real information here that can reasonably be relied on.

  • I'm assuming that mandatory ID checks would make yt-dlp not work

  • How do you even do that?

  • Pretty sure ChatGPT wrote this article; not sure how accurate any of it is when there are no sources.

  • TikTok

    I think you're always going to have problems with a lack of authenticity on platforms where opaque algorithms do all the work of deciding what gets popular and what gets shown to whom.

  • but the kinds of people who grape others generally don’t feel shame

    I think this is probably not true.

    the primary tool society uses to respond to grape is assault, prison, ostracizing or murder, so like, so what, is there less shame?

    Those tools aren't equally available to everyone, they are expressions of power, which some people have access to more than others.

  • however without apps like this if you think about it we’d have to be back to driving everywhere to go eat at most places most restaurants profits would be down still and you would have to work at those shit restaurants for even worse pay

    How about, instead of all that, we go back to people making their own food at home and let the restaurant industry die off altogether.