
[Avatar: "Initials" (https://github.com/dicebear/dicebear) by DiceBear, licensed under CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)]
Posts: 12 · Comments: 510 · Joined: 3 yr. ago

He / They

  • gimme dat more powerful GPU!

  • Given the crossposts, New Delhi, India.

  • Note that this vuln is in the desktop GUI, not in Ollama itself (Ollama Core). It's also unrelated to the models themselves.

  • It's absolutely possible to have privacy-preserving age verification, but it requires governments to trust someone other than themselves, which they'll never do.

    Go to a gas station. Hand the cashier your ID, like you do to buy liquor. The cashier gives you a one-time-use token/password for the age verification system's website. You go there and upload a public key you've generated, before the OTP expires. Voila: now the site can easily verify that you control the private key, and (trusting the in-person check) that you are consequently of age, without having to know literally anything about you. A rough sketch of the idea follows below.

    It only breaks if you treat everyone as a criminal until proven innocent (e.g. "you might have stolen that key", etc.), which is exactly what every government implementing this is doing. That's why it's not really about protecting kids from porn at all, but about removing privacy.
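
    A minimal sketch of that flow in Python, assuming Ed25519 keys from the cryptography library; registered_keys, register(), and site_checks_age() are hypothetical stand-ins for whatever the verification service would actually run, not any real API:

    ```python
    # Sketch only: the registry, register(), and site_checks_age() are
    # hypothetical stand-ins, not a real age-verification service.
    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # 1. You generate a keypair at home; the private key never leaves your device.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # 2. The cashier checks your ID and hands you a one-time token. You use it
    #    (before it expires) to register ONLY your public key -- no name, no ID
    #    number, no address ever reaches the service.
    registered_keys = set()

    def register(one_time_token: str, pubkey) -> None:
        # A real service would validate one_time_token here; this sketch just trusts it.
        registered_keys.add(pubkey.public_bytes(Encoding.Raw, PublicFormat.Raw))

    # 3. Later, a site runs a challenge-response: it learns only that you hold
    #    the private key matching an age-verified public key.
    def site_checks_age(pubkey, sign_fn) -> bool:
        if pubkey.public_bytes(Encoding.Raw, PublicFormat.Raw) not in registered_keys:
            return False
        challenge = os.urandom(32)
        try:
            pubkey.verify(sign_fn(challenge), challenge)
            return True
        except InvalidSignature:
            return False

    register("one-time-token-from-cashier", public_key)
    print(site_checks_age(public_key, private_key.sign))  # True
    ```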

  • It's almost like positions of power over others enable people to do bad stuff.

  • Is the lack of source code access the problem with change.org? It's just a one-sided popularity poll website masquerading as activism.

  • It sounds like he's an Operations/SRE specialist, but his quals suggest he'd be overqualified for most Ops/SRE roles unless it's a director or VP position. Especially with the shift to DevOps, he might need to shift domains or grow out of pure Ops work. It's going to be nearly impossible to get hired into an Architect or Director role unless he already has that title on his resume.

  • The problem with training AI bots is that they will model human behavior from the bad environment they were trained in, but not the human psychological reactions to a changing environment, so it's not really going to tell you whether a different platform makes humans behave differently.

  • One of the interesting things about exploring other languages, especially on social media, is that you realize just how un-moderated the US platforms are for anything but English. When people talk about Facebook advancing genocides, it's the platform not bothering to moderate non-English content while still applying its maximum-engagement algorithms in those spaces, so you get this snowballing of negative content.

    Be wary if you go looking for non-English social media (it's actually not hard at all, you use a VPN and change either your OS or browser locale settings), because you can easily end up seeing some grisly stuff.

  • Doesn't look like anything to me

  • Congrats (¬`‸´¬)

    Happy for you ( •̀⤙•́ )

    Nice ( ` ᴖ ´ )

  • That's not diminishing returns in terms of time and speed, which is CanadaPlus' point. 100km/h faster is 100km/h faster, not a 100% increase each time. The time reduction is perfectly in line with the added speed, so for 100 kilometers of distance:

    100km/h = 1hr -> 200km/h = 1/2hr -> 300km/h = 1/3hr -> 400km/h = 1/4hr

    It would be diminishing returns if doubling the speed each time didn't halve the travel time, but "diminishing input = diminishing output", or 100% -> 50% -> 25%, etc, is not diminishing returns, that's linear.

    The first time they added/input twice as much speed. The second time they didn't.

    An actual example of diminishing returns would be the cost to speed ratio, where doubling the budget each time will not result in a doubled speed, e.g.

    $10m = 100km/h -> $20m = 200km/h -> $40m = 325km/h -> $80m = 525km/h
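
    To make the arithmetic concrete, a quick sketch (the dollar-to-speed figures are just the made-up numbers above):

    ```python
    # Fixed 100 km trip: time = distance / speed, so doubling the speed halves
    # the travel time -- output tracks input, no diminishing returns.
    distance_km = 100
    for speed in (100, 200, 300, 400):
        print(f"{speed} km/h -> {distance_km / speed:.2f} h")

    # Cost vs. top speed (made-up numbers from above): doubling the budget
    # stops doubling the speed, which IS diminishing returns.
    for budget_m, speed in ((10, 100), (20, 200), (40, 325), (80, 525)):
        print(f"${budget_m}m -> {speed} km/h ({speed / budget_m:.1f} km/h per $m)")
    ```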

  • I actually asked my locally running LLM(s) to rework my resume, and specifically to add in any common skills or tools for the roles that I didn't have listed (in 8 years as a generalist you touch a LOT of stuff, and I hadn't remembered quite a few of them), and then removed any that weren't applicable.

    I've been getting a decent number of interviews (3 this week, 2 last).

    One would hope a network engineer knows how to configure routers, but if you just say Cisco, the AI won’t give it as much weight as when you say both

    Honestly this isn't just an AI issue, this is also a recruiter issue. The hiring manager gives a role description and a list of skills or other keywords for the posting, but the recruiter doesn't know what half of them are. An actual human may not know that "Cisco" + "network engineer" = configured routers. Hell, I've had people ask me if Cisco (who I actually did work for, but not as a network engineer) is the food company, thinking of Sysco.
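
    As a toy illustration of that keyword problem (not how any real resume screener works), a naive scorer that only counts literal matches:

    ```python
    # Toy example only: real ATS/AI screeners are fancier, but the failure mode
    # is the same -- implied skills score nothing unless they're spelled out.
    JOB_KEYWORDS = {"cisco", "router configuration", "bgp", "network engineer"}

    def naive_keyword_score(resume_text: str) -> int:
        text = resume_text.lower()
        return sum(1 for kw in JOB_KEYWORDS if kw in text)

    terse = "Network engineer, 8 years. Cisco."
    explicit = "Network engineer, 8 years. Cisco: router configuration, BGP."

    print(naive_keyword_score(terse))     # 2 -- router config is implied, not counted
    print(naive_keyword_score(explicit))  # 4
    ```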

  • From what I'm seeing and hearing in the tech space, I think the opposite is true. I think the current admin's war on non-white people is making companies really wary of hiring H1B holders (even European ones) and even green card holders.

    A lot of companies are just halting hiring altogether for a bit, and the ones who are hiring are looking for local, laid-off tech workers at lower salaries, who have to take it because there's such a glut of them to compete with. Somewhat counterintuitively, this doesn't mean an easier time for Americans to get hired; it means fewer Americans overall getting hired, period (which the recent jobs reports bear out).

    Companies tend to hire visa'd workers when they're doing rapid business expansion, because that's when saving 20-30% per head adds up (e.g. if you're saving 20% per head while hiring 100 people, you're saving yourself 20 salaries' worth, but if you're hiring 5, you're better off getting the most experienced candidates who give you the best bang for your buck). And no one is doing rapid business expansion in this economy.

  • That sucks, that's way beyond what anyone I've met has been out for. They're either very specialized, in an area that requires in-person work (and they're not nearby to anyone), or there's something that's red-flagging them.

  • and there is not a single actually profitable company

    This is a little misleading, because obviously FAANG (and others) are all building AI systems, and are all profitable. There are also tons of companies applying machine learning to various areas that are doing well from a profitability standpoint (mostly B2B SaaS that are enhancing extant tools). This statement is really only true for the glut of "AI companies" that do nothing but produce LLMs to plug into stuff.

    My personal take is that this is just revealing how disconnected from the tech industry VCs are; they're the ones buying into this hype and burning billions of dollars on (as you said) smoke-and-mirrors companies like Anthropic and OpenAI.

  • The other side is that the mass layoffs of the last year mean that there are plenty of experienced people to hire over new grads. I can't imagine any company right now taking on the cost and risk of training up entry-level folks when they can hire a 10+ year senior into that role, one who's been job hunting for 5 months, for the same or a little more than the entry-level salary.

  • "Polly want a cracker" has been around since before anyone alive today was born, and that's the same thing as what LLMs are doing in essence (mimicking human speech), but no one was taking advice from parrots.

  • It's a sad reflection of our current state when being able to string together coherent sentences is impressive enough to many people that it gets confused with truth and/or intelligence.