
  • I dunno what country you're in, but in my country you are required by law to have a valid reason to reject a job candidate. That reason can be pretty simple, such as "your application was not as strong as other candidates" but you need to be able to back that claim up if you're challenged (and you can be challenged on it).

    The recommended approach is to have a list of selection criteria, carefully consider each one, then write it down and keep a record of the decision for a while, in case you end up on the wrong end of a discrimination lawsuit. Candidates have the right to ask why they were unsuccessful (and they should ask - to find out what they can do better to improve their chances next time. As a hiring manager I would note down anyone who asks and consider offering them a job in the future, bypassing the normal recruitment process).

    I rank each criterion from one to ten, then disregard the worst-scoring candidates until I have a short list that I can compare directly (at that point, I wouldn't worry too much about numbers. You are allowed to say "you were a great candidate, but we had multiple great candidates and had to pick one. Sorry".)

    If your selection criteria include "they need to wear nice clothes" then you're treading on very dangerous territory and could be breaking the law. The damages here are commonly six months' pay at the salary of the position they applied for, and can also include a court order for you not to be involved in the hiring process going forward.

    It's perfectly reasonable to require someone to dress well if they have a customer facing role... but that requirement should be implemented at work and not during the job interview. I'm well aware that a lot of hiring managers rely heavily on these things to make their decision but they should not be doing that. It's not as bad as picking someone because they're a straight white male candidate (which is also very common), but it's still a bad policy.

  • McDonalds isn’t going anywhere, no matter how bad their hiring practices get.

    I disagree. Screwing up your hiring process is a Darwin Award level mistake for a company. McDonalds is very very good at hiring people, and a big part of that is their willingness to hire people who aren't good enough and then give those people the training they need to succeed at work.

    Choosing not to hire someone because they like baseball is insane and there's no way that would fly at McDonalds.

  • This isn't the only fine though. It's one of several they've been hit with in recent years, and more might be coming. The fines are also getting bigger over time.

  • OpenAI runs on Azure, which is carbon neutral.

  • OpenAI’s take is that someone will create this technology - it might as well be them, since their motivation is relatively pure. OpenAI is a non-profit and they do work hard to minimise the damage their tech can cause. Which is why this video generation feature has not been launched yet.

  • Um… the Taylor Swift porn deepfakes were made with DALL-E.

    Sure - they try to prevent that stuff, but it’s hardly perfect. And not all bullying is easily spotted. Imagine a deepfake of a kid sending a text message, but the bubbles are green. Or maybe they’re smiling at someone they hate.

    Also, Stable Diffusion is more than good enough for this stuff. It’s free and any decent gaming laptop can run it. Takes mine 20 seconds to produce a decent deepfake… I’ve used it to touch up my own photos.

  • The difference is it costs billions of dollars to run a company manufacturing printers and it’s easy for law enforcement to pressure them into not printing money.

    It costs nothing to produce an AI image, you can run this stuff on a cheap gaming PC or laptop.

    And you can do it with open source software. If the software has restrictions on creating abusive material, you can find a fork with that feature disabled. If it embeds steganographic watermarks, you can find one with that disabled too.

    You can tag an image to prove a certain person (or camera) took a photo. You can’t stop people from removing that.
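    The watermark point above can be sketched with a toy example. This is not any real generator's watermarking scheme, just a minimal least-significant-bit (LSB) illustration of why a mark embedded in pixel data the attacker fully controls can always be scrubbed:

    ```python
    # Toy LSB "watermark": hide one bit in the least significant bit of each
    # pixel value, then show that re-quantising the pixels destroys the mark
    # while barely changing the image. Real watermarks are more elaborate,
    # but the underlying problem is the same.

    def embed(pixels, bits):
        """Write one watermark bit into the LSB of each pixel value."""
        return [(p & ~1) | b for p, b in zip(pixels, bits)]

    def extract(pixels):
        """Read the LSB of each pixel value back out."""
        return [p & 1 for p in pixels]

    def strip(pixels):
        """'Remove' the watermark by zeroing every LSB."""
        return [p & ~1 for p in pixels]

    image = [200, 143, 97, 54, 255, 0, 128, 66]   # fake 8-pixel grayscale image
    mark  = [1, 0, 1, 1, 0, 0, 1, 0]              # fake watermark bits

    tagged = embed(image, mark)
    assert extract(tagged) == mark                 # the tag survives a round trip

    wiped = strip(tagged)
    assert extract(wiped) == [0] * len(image)      # ...until anyone zeroes the LSBs
    assert all(abs(a - b) <= 1 for a, b in zip(image, wiped))  # image barely changes
    ```

    The same applies to metadata-based tags: whatever field carries the proof, re-encoding the file without it takes one line of code.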

  • TLDR: a year ago AI video was garbage. Today it’s almost as good as a video that would cost a few hundred thousand dollars for a human production team to make (according to someone whose professional work is creating those videos).

    It’s not quite there - hands glitch out occasionally. Sometimes animation doesn’t quite line up right (e.g. walking might skip a step) but it’s 99% there, and the improvements over the last 12 months are astounding. That last 1% surely won’t take long to close.

    There was an aerial landscape video, shot as if from a helicopter, that looked absolutely real.

    Note this is not publicly available yet - OpenAI said they are still working on safety features to reduce the risk of it being used to create content that they want no part in.

  • Huh? If I wear headphones outside, they will literally be drenched in sweat when I take them off. And you can't exactly put headphones in a washing machine to remove sweat from the foam padding either. They'll start to stink in no time.

  • wearing earphones while walking outside is a niche usage

    Speak for yourself, I do it three or so hours a day.

    Indoors or in a car... that's where I never wear earphones. I prefer speakers for that.

  • Sure but "almost always" is not "always". Maybe don't judge these until you've heard them?

  • Most native apps are trash too.

  • It's more than that - for example, in Safari, after seven days of not using a site (even a bookmarked one), all data the website stores on device is deleted.

    With a PWA saved to your home screen, your data is kept until you delete the icon from your home screen.

    Also, PWAs don't have a browser toolbar.

  • is that one word or 7?

    It's a word, with a formal dictionary definition: "the technique or practice of responding to an accusation or difficult question by making a counter-accusation or raising a different issue".

    It has its origins in politics.

  • Is this a sleazy thing to do? Yup

    That makes it illegal. The DMA explicitly requires gatekeepers be "proactive" (that's their words) towards opening up their platform. Removing features just in the EU is the opposite of that.

  • The EU could force Apple to sell their iPhone business. That's listed as the maximum penalty for a DMA violation for companies that "systematically" fail to open up their platform.

  • The term you're looking for is "vindication".

  • US: “Human rights? What are those? Are they in the constitution?”

    There actually are strong privacy rights written into the constitution. Unfortunately they don't fit well with modern data collection, creating loopholes big enough to drive a truck through.

    And nothing is being done to close those loopholes. In fact the opposite... end to end encryption, for example, would close most of the loopholes. Legislators are using "think of the children!" arguments to try to stop companies from upgrading services to use E2EE.

  • That's because Europe has actual experience with having their privacy invaded and it wasn't just to show you relevant ads. During the war my grandparents burned letters and books after reading them. And they had nothing to hide either - and all of the ones they burned were perfectly innocent and legal... but even those can be taken out of context and used against you during a police investigation.

    The UN formally declared privacy as a human right a few years after the war ended. Specifically in response to what happened during the war.

    A lot of the data used by police to commit horrific crimes was collected before the war. For example, they'd go into a funeral home and find a list of people who attended a funeral six years ago, then arrest everyone who was there. You can't wait for a government to start doing things like that - you have to stop the data from being collected in the first place.

    Imagine how much worse it could be today, with so much more data collected and automated tools to analyse the data. Imagine if you lived in Russian occupied Ukraine right now - what data can Russia find about you? Do you have a brother serving in Ukraine's army? Maybe your brother would defect if you were taken hostage...