
  • I think you might have missed my point. I wasn't listing stuff I had trouble understanding. I was listing stuff that didn't make much sense. The distinction is relevant. Even if you manage to find some excuse that extends the already generous benefit of the doubt, the end result still isn't anything useful or informative.

    I'm also not using fancy words (or..?). The only thing that stands out is the "Bloom filter", which isn't a fancy word. It's just a thing, in particular a data structure. I referenced it because it's an indication of an LLM behaving like the stochastic parrot that it is. LLMs don't know anything, and no transformer-based approach will ever know anything. The "filter" part of "bloom filter" will have associations to other "filters", even tho it actually isn't a "filter" in any normal use of that word. That's why you see "creator filter" in the same context as "bloom filter", even though "bloom filter" is something no human expert would put there.

    The most amusing and annoying thing about AI slop is that it's loved by people who don't understand the subject. They mistake an observation of slop (by people who... know the subject) for "ah, you just don't get it" (by people who don't).

    I design and implement systems and "algorithms" like this as part of my job. Communicating them efficiently is also part of that job. If anyone had come to me with this diagram pre-2022, I'd be genuinely concerned whether they were OK, or had had some kind of stroke. After 2022, my LLM-slop radar is pretty spot on.

    But hey, you do you. I needed to take a shit earlier and made the mistake of answering. Now I'm being an idiot who should know better. Look up Brandolini's law, if you need an explanation for what I mean.

  • I'm not too happy to spend time pointing out flaws in AI slop. That kind of bullshit asymmetry feels a bit too much like work. But, since you're polite about it, and seem to ask in good faith...

    First of all, this is presented as a technical infographic on an "algorithm" for how a recommendation engine will work. As someone whose job it is to design similar things, it explains pretty much nothing of substance. It does, however, include many concepts that would be part of something like this, with fuzzy boxes and arrows that make very little sense. Only some minor, trivial parts can be assumed from the problem description itself. It's all just weird and confusing. And "confusing" not in the "skill issue" sense.

    So let's see what this suggested algorithm is.

    1. It starts out with "user requests the feed", and depending on whether or not you have "preference" data (prior interests or choices, etc), you give either a selection based on something generic, or something that you can base recommendations on. Well... sure. So far, silly, and trivial.
    2. "Scoring and ranking engine". And below this, a pie diagram with four categories. Why are there lines between only the two top categories, and the engine box? Seems weird, but, OK. I suppose all four are equally connected, which would be clearer without the lines. Also, what are the ratios here? Weights for importance, of some sort? "Time-Decayed"? I hope that's not the term that stuck for measuring retention/attention time.
    3. On the three horizontal "Source Streams" arrows coming in from the left, it's all just weird. The source streams are going to be... generated content, no? But let's give it the benefit of the doubt and assume it's suggesting that, given generated content, some of it might be considered relevant for "personal preference" and has a "filter: hidden creators". But none of that makes any sense; the scoring and ranking engine is already suggested to do this part... The next one is "Popular (high scores) filter: bloom filter (already seen)", which mixes concepts. A bloom filter is the perfect thing to confuse an LLM, because it has nothing to do with filters in the exact same context "filters" was used for the above source stream. Something intelligent wouldn't make this mistake. But it does statistically parrot its way to suggesting that a bloom filter might have something to do with a cost-effective predicate function that could make sense for a "has seen before" check. However, why is this here?
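For what it's worth, here's a minimal Python sketch of the one place where those concepts would legitimately fit together: a bloom filter as a cheap, probabilistic "already seen" predicate in front of a time-decayed scoring step. All names and parameters here are illustrative assumptions of mine, not anything the infographic actually specifies.

```python
import hashlib
import math


class BloomFilter:
    """Space-efficient probabilistic set membership: no false negatives,
    tunable false-positive rate. A reasonable fit for a cheap
    "has this user already seen this item?" predicate."""

    def __init__(self, capacity: int, error_rate: float = 0.01):
        # Standard bloom filter sizing formulas for m bits and k hashes.
        self.size = math.ceil(-capacity * math.log(error_rate) / (math.log(2) ** 2))
        self.num_hashes = math.ceil((self.size / capacity) * math.log(2))
        self.bits = 0  # a Python int used as a bit array

    def _positions(self, item: str):
        # Double hashing: derive k bit positions from two halves of one digest.
        digest = hashlib.sha256(item.encode()).digest()
        a = int.from_bytes(digest[:8], "big")
        b = int.from_bytes(digest[8:16], "big")
        return [(a + i * b) % self.size for i in range(self.num_hashes)]

    def add(self, item: str) -> None:
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: str) -> bool:
        return all(self.bits >> p & 1 for p in self._positions(item))


def time_decayed_score(base_score: float, age_seconds: float,
                       half_life_seconds: float = 86_400.0) -> float:
    # Exponential decay: an item loses half its score every half-life.
    return base_score * 0.5 ** (age_seconds / half_life_seconds)


# Hypothetical usage: drop already-seen items, rank the rest by decayed score.
seen = BloomFilter(capacity=10_000)
seen.add("post:123")

candidates = [("post:123", 10.0, 0), ("post:456", 8.0, 86_400)]
feed = sorted(
    ((pid, time_decayed_score(score, age)) for pid, score, age in candidates
     if pid not in seen),
    key=lambda item: item[1], reverse=True,
)
```

Note the asymmetry that makes it useful here: a bloom filter can say "maybe seen" for something unseen (a tolerable over-filter in a feed), but never "unseen" for something already shown.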

    I'll just leave it at that. This infographic would make a lot of sense if it was created by some high schoolers who were tasked to do something like this: they came up with some relevant-sounding concepts and didn't fully understand any of them. Which is also exactly the kind of stuff LLMs do.

    I don't think loops hired a bunch of kids, so LLM it is.

    And the line "Our new For You algorithm is pretty complex, so we created this infographic to make it easier to understand!" doesn't help the case against LLM either. There are many complex parts of a recommendation engine, but none of the things in this infographic explain or illuminate those complex parts...

    But, I might be wrong, and this is their earnest attempt at explaining how their algorithm works. In which case, they are just bad at either explaining it or designing it, most likely both. Then again, if I'm right and this was generated by an LLM, it still gives the same impression, but leaves some room for "someone who isn't technical asked an LLM and phoned this in because it looked cool, and people who don't know any better will think so too!"

  • That's way too reductive.

  • This infographic reeks of AI slop.

  • It might seem like it, especially for late diagnoses, but I don't think the ratio of individuals with ADHD is increasing.

    They are getting hit with an information overload unlike anything previous generations have had to deal with. And what was possible to cope with earlier in life now, in mid-life, results in a choice between a mental breakdown from exhaustion, or medication that helps deal with the symptoms.

    There are some technological and cultural trends that exacerbate the issue, especially short-form social media, which I think governments have failed at protecting the younger generation from. It's not like it's an easy thing to fix. Try banning sugar from the sugar-addicted children whose sense of identity and self-worth is made out of sugar. Not to mention capitalistic forces salivating over how dirt cheap and easy it is to manipulate them en masse.

  • What exactly about it do you feel should be illegal?

  • I ask myself "do the consequences of them doing something bad, outweigh the money they make by doing it anyways?". Individuals might have, and follow, moral principles. Large companies do not.

  • Sheesh. Another community I'm more than happy to ignore.

  • Except for supply chain attacks. You get a foot in the door, and open the rest with impunity.

  • Which brings me back to. "What are you saying?". Banning from what?

  • It doesn't explain the previous comment tho. The "block Linux" is what I'm not getting. Did you mean drop Linux native, because proton/wine is so good that it isn't necessary?

  • What are you talking about?

  • This isn't just a "people in charge" problem. Way too many developers are part of the problem by using LLMs for the wrong problems.

  • Banking apps have worked without issues, in my experience.

    The only things I've had issues with, is in-app purchases. Paid apps work, but the in app stuff is hit and miss.

  • Lead poisoning has been my working theory to explain the last 50 years.

  • This is the simple checklist for using LLMs:

    1. You are the expert
    2. LLM output is exclusively your input

    All other use is irresponsible. Unless of course the knowledge within the output isn't important.

  • I haven't seen anything yet. At work, the ones that praise AI the loudest are exceptionally highly correlated with the people who lack a good understanding of the core concepts. The ones that just float around cargo culting and looking busy by making noise.

    That said, LLMs are still useful tools, that are highly misused. What they're useful for, is a lot less than most think. The user needs to be the expert. If they're not, they'd be better off reading a book on the matter (and how things are looking, it might have to be one written before LLMs came out).

  • That's cool. We do it the boring way of getting a notification to the hassio app. It unfortunately uses Google's notification API, and I'm not too happy with Google knowing when I do laundry.