
Posts 0 · Comments 413 · Joined 3 yr. ago

Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology, politics, and science fiction.

Spent many years on Reddit and is now exploring new vistas in social media.

  • People's heights change over time too. Men and women can nevertheless have different average heights.

  • They're giving you services in exchange for your content.

    Does nobody even think about TOS anymore? You don't have to read any specific one, just realize the basic universal truth that no website is going to accept your content without some kind of legal protection that allows them to use that content.

  • A notable exception is the Stargate franchise, where Earth's spacecraft are largely run by the US Air Force.

  • If all Russia has to do to get people to back off is cry "escalation!" then we might as well just surrender to them now.

  • Where do I go to buy Lemmy Gold?

  • Frankly, these NATO expansions and the alliance's general reinvigoration are a larger loss for Russia than anything they could possibly gain in Ukraine. Their Baltic fleet is now useless. Kaliningrad is useless.

    Combined with all the other damage Ukraine has inflicted on Russia, they're basically circling the drain and I see no possible way Russia could rise in prominence in the future. Even if, goodness forbid, they were to "win" the current war they're fighting with Ukraine, that won't help them, it'll only hurt Ukraine.

  • Article mentioned 400-word chunks, so much less than paper-sized.

  • Not to mention that a response "containing" plagiarism is a pretty poorly defined criterion. The system being used here is proprietary so we don't even know how it works.

    I went and looked at how low the scores for theater and similar subjects were, and it's dramatic:

    The lowest similarity scores appeared in theater (0.9%), humanities (2.8%) and English language (5.4%).

  • That's why I was suggesting such a simple approach; it doesn't require AI or machine learning except in the most basic sense. If you want to try applying fancier stuff you could use those basic word-based filters as a first pass to reduce the cost.
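    To illustrate the two-stage idea, here's a minimal sketch of a cheap word-based first pass. The word list and posts are made up for illustration; anything that trips the check would get handed to the (more expensive) fancier stage, everything else is skipped.

    ```python
    # Hypothetical suspect-word list; a real deployment would tune this.
    SUSPECT_WORDS = {"discount", "promo", "giveaway"}

    def first_pass_flag(post: str) -> bool:
        """Cheap check: does the post contain any suspect word?"""
        words = set(post.lower().split())
        return bool(words & SUSPECT_WORDS)

    posts = [
        "Huge promo on our new gadget, click now!",
        "Interesting discussion about federation protocols.",
    ]
    # Only flagged posts would be sent on to a costlier ML-based second stage.
    flagged = [p for p in posts if first_pass_flag(p)]
    ```

    The point isn't accuracy, it's that a set intersection is nearly free compared to running a model over every post.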

  • Somewhere in between, sure. But don't interpret that to mean that the most likely real number is exactly in the middle. I consider Russian numbers to be way less credible.

  • Another more general property that might be worth looking for would be substantially similar posts that get cross-posted to a wide variety of communities in a short period of time. That's a pattern that can have legitimate reasons but it's probably worth raising a flag to draw extra scrutiny.

    One idea for making it computationally lightweight but also robust against bots "tweaking" the wording of each post might be to fingerprint each post based on rare word usage. Spam is likely to mention the brand name of whatever product it's hawking, which is probably not going to be a commonly used word. So if a bunch of posts come along that all use the same rare words all at once, that's suspicious. I could also easily see situations where this gives false positives, of course - if some product suddenly does something newsworthy you could see a spew of legitimate posts about it in a variety of communities. But no automated spam checker is perfect.
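    A rough sketch of what that fingerprinting could look like, with made-up common-word and post data (a real version would derive word rarity from corpus frequencies rather than a hardcoded stoplist):

    ```python
    from collections import Counter, defaultdict

    # Stand-in for a proper frequency-derived common-word list.
    COMMON_WORDS = {"the", "a", "an", "is", "are", "this", "that", "new", "great",
                    "in", "on", "for", "i", "like"}

    def fingerprint(post: str, max_words: int = 3) -> frozenset:
        """Fingerprint a post by its rarest non-common words."""
        words = [w.strip(".,!?").lower() for w in post.split()]
        rare = [w for w in words if w and w not in COMMON_WORDS]
        counts = Counter(rare)
        # Least-frequent words first; ties broken alphabetically for determinism.
        rarest = sorted(counts, key=lambda w: (counts[w], w))[:max_words]
        return frozenset(rarest)

    def flag_bursts(posts, threshold=2):
        """Flag rare words shared by `threshold`+ posts in the same batch."""
        seen = defaultdict(list)
        for i, post in enumerate(posts):
            for word in fingerprint(post):
                seen[word].append(i)
        return {w: ids for w, ids in seen.items() if len(ids) >= threshold}

    posts = [
        "Buy the new GadgetTron today",
        "GadgetTron is great for everyone",
        "Went hiking in the mountains yesterday",
    ]
    flags = flag_bursts(posts)
    ```

    In this toy batch the brand name "GadgetTron" shows up in two posts and nothing else repeats, so it's the only word flagged. Time-windowing and the cross-community check would layer on top of this.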

  • I actually have a friend who's involved in a situation like this right now. He got laid off from his old job a few months back and while he was job hunting he started working on a project with a couple other friends that could be worth a fair bit of money. He's had job offers since then and he got a lawyer to write up a description of the project he's working on that could be inserted into those "I'm keeping the rights to this stuff" contract sections.

    It's a bit different for him because it's stuff that he's actively working on right now, though. It sounds like your case might be simpler, if it's stuff you haven't done yet and don't plan to try working on while employed with this current employer I suspect you won't need to worry about it. Though of course, IANAL.

  • Indeed, and many of the more advanced AI systems currently out there are already using LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched, with bits of it inserted into the context of the LLM when it's answering questions. LLMs have been trained to be able to call external APIs to do the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is the routine sort of computing we've already had a handle on for decades.

    IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.
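    The retrieval-augmented part really is that mundane. Here's a toy sketch (the "memory", the keyword-overlap ranking, and the prompt shape are all made up; real systems use embedding search, and the assembled prompt would go to an actual model API):

    ```python
    # Toy "memory" store; real RAG systems index far larger corpora.
    MEMORY = [
        "The Eiffel Tower is 330 metres tall.",
        "Lemmy is a federated link aggregator.",
    ]

    def retrieve(query: str, memory=MEMORY, k: int = 1):
        """Naive retrieval: rank snippets by word overlap with the query."""
        q = set(query.lower().strip("?").split())
        ranked = sorted(memory,
                        key=lambda s: len(q & set(s.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(query: str) -> str:
        """Splice the retrieved snippets into the LLM's context."""
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}"

    prompt = build_prompt("How tall is the Eiffel Tower?")
    ```

    Everything outside the model call is ordinary search-and-string-assembly, which is the point: the LLM is the only novel component in the loop.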

  • It was the British spelling.

  • Call it whatever makes you feel happy; it's allowing me to accomplish things much more quickly and easily than working without it does.

  • There was an interesting paper published just recently titled "Generative Models: What do they know? Do they know things? Let's find out!" (a lot of fun names and titles in the AI field these days :) ) that does a lot of work in actually analyzing what an AI image generator "knows" about what it's depicting. It seems to have an awareness of three-dimensional space, of light and shadow and reflectivity, lots of things you wouldn't necessarily expect from something trained just on 2-D images tagged with a few short descriptive sentences. This article from a few months ago also delved into this; it showed that when you ask a generative AI to create a picture of a physical object, the first thing the AI does is come up with the three-dimensional shape of the scene before it starts figuring out what it looks like. Quite interesting stuff.

  • And even if local small-scale models turn out to be optimal, that wouldn't stop big business from using them. I'm not sure what "it" refers to in "I hope it collapses."

  • Conversely, there are way too many people who think that humans are magic and that it's impossible for AI to ever do [insert whatever is currently being debated here].

    I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There's basic inert rocks at one end, and humans at the other, and everything else gets scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and accept the possibility that they're moving in our direction.

  • I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.