
Posts: 1 · Comments: 95 · Joined: 3 yr. ago

  • That’s because they believe that Hamas’s attack on Oct 7 was in retaliation for Israel’s prior actions, while Israel is using Oct 7 to retaliate against all of Palestine. Palestinians are going to support the side that is not bombing them and that they believe is standing up to the persecution they’ve experienced up to and including now.

  • It’s not that great of a solution, though. I dunno if anyone remembers, but when Gatekeeper (the interface for this) was first added to macOS, it was in response to a malware “virus scanner” called MacKeeper. It was advertised as a malware scanner/Mac maintenance tool, but it was really just an ad platform that would inject all kinds of crap into your browser and run keyloggers and other things in the background.

    As soon as Gatekeeper was released, the MacKeeper website put up a specific page with step-by-step instructions for how to disable Gatekeeper, and MacKeeper would prompt you to visit that page if it ever made it onto your system. If you ever re-enabled Gatekeeper, it would prompt you to disable it again and show you the instructions.

    It’s an endless cat and mouse game. The only way this works is if they put it in as a multi-step terminal process. Novice users will not fuck with the terminal unless they know what they’re doing and are comfortable with the consequences.
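
    For reference, that terminal path already exists: Gatekeeper can be toggled with `spctl`. (These are the long-standing macOS invocations; newer macOS versions rename `--master-disable` to `--global-disable` and add an extra confirmation step in System Settings.)

    ```shell
    # Check whether Gatekeeper assessment is on
    spctl --status            # "assessments enabled" or "assessments disabled"

    # Disable Gatekeeper system-wide (admin rights required)
    sudo spctl --master-disable

    # Turn it back on
    sudo spctl --master-enable
    ```

    And that’s the problem: malware instructions can walk a novice through those commands just as easily as through a settings pane.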

  • Laziness alone is a pretty big reason. MFA was available and users were prompted to set it up. The fact that they didn’t should tell you something.

  • This assumes that the compromised credentials were made public prior to the exfiltration. In this case, they weren’t, as the data was being sold privately on the dark web. HIBP, Azure, and Nextcloud would have done nothing to prevent this.

  • It's a big enough deterrent to make it cumbersome. It's not that easy to automate pulling an MFA code from an email when there are different providers involved and all that. The people that pulled this off did it via a botnet, and I would be very surprised if that botnet was able to recognize an MFA login, get into the email account, retrieve the code, enter it, and then proceed. It seems like more effort than it's worth at that point.

  • It’s just odd that people get such big hate boners from ignorance. Everything I’m reading about this is telling me that 23andMe should have enabled forced MFA before this happened rather than after, which I agree with, but that doesn’t mean this result is entirely their fault either. People need to take some personal responsibility sometimes with their own personal info.

  • So forced MFA is the only way to prevent what happened? That’s basically what you’re saying, right?

    Their other mechanisms would prevent credential stuffing (e.g., rate limits, comparing login locations) so how was this still successful?

  • How much we talking? I’ll take that bet.

  • This wasn’t a brute force attack, though. Even if they had brute force detection (and I’m not sure whether they do), it would have done nothing to help here, as nothing was brute forced in a way that would have been detected. The attempts were spread out over months using bots that were local to each account’s last good login location. That’s the primary issue here: the logins looked legitimate. It wasn’t until after the exposure that they knew the logins weren’t legitimate, and that was because of other signals that 23andMe obviously had in place (I’m guessing usage patterns or automation detection).
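
    To make the “local to the last good login” point concrete: a location check like the one these signals imply is usually just a distance heuristic against the last known login. A minimal sketch in Python (the function names and the 500 km threshold are my own illustration, not 23andMe’s actual logic):

    ```python
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two coordinates, in kilometers.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def login_looks_suspicious(last_login, new_login, max_km=500):
        # Flag the attempt only if it originates far from the last good login.
        return haversine_km(*last_login, *new_login) > max_km

    # A botnet node near the victim's last login sails right through the check:
    print(login_looks_suspicious((40.71, -74.00), (40.73, -73.99)))  # False
    # ...while a login from another continent would have been flagged:
    print(login_looks_suspicious((40.71, -74.00), (51.51, -0.13)))   # True
    ```

    Which is exactly why geographically local bots defeated this signal, and it took behavioral patterns to catch on.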

  • I guess we just have different ideas of responsibility. It was 23andMe’s responsibility to offer MFA, and they did. It was the user’s responsibility to choose secure passwords and enable MFA, and they didn’t. I would even play devil’s advocate and say that sharing your info with strangers was also the user’s responsibility, but that 23andMe could have forced MFA on accounts that shared data with other accounts.

    Many people hate MFA systems. It’s up to each user to determine how securely they want to protect their data. The users in question clearly didn’t if they reused passwords and didn’t enable MFA when prompted.

  • I already said they could have done more. They could have forced MFA.

    All the other bullet points were already addressed: they used a botnet that, combined with the "last login location" data, allowed them to use endpoints from the same country (and possibly even city) as that location over the course of several months. So, to put it simply: no, no, no, maybe but no way to tell, maybe but no way to tell.

    A full investigation makes sense, but the OP is about 23andMe's statement that the crux is users reusing passwords and not enabling MFA, and they're right about that. They could have done more but, even then, there's no guarantee that someone with the right username/password combo would have been detected.

  • They did. They had MFA available and these users chose not to enable it. Every 23andMe account is prompted to set up MFA when they start. If people chose not to enable it and then someone gets access to their username and password, that is not 23andMe's fault.

    Also, how do you go about "preventing compromised credentials" if you don't know that the credentials are compromised ahead of time? The dataset in question was never publicly shared. It was being sold privately.

  • No, but I didn't consent to give that info to family either. If I was worried about my data getting into the hands of strangers, I wouldn't have shared it with strangers, which is what happened here. Unless you count a 4th cousin you've never met as "family", why would you give them access to your data?

  • There was a button that said "share my data with this account". If that person went and shared that info publicly, how is that any different? The accounts were accessed with valid credentials through the normal login process. They weren't "breached" or "hacked".

  • Your mom has my contact information. You can ask her.

    /pwn3d.

  • No.

    See... it's that easy.

  • That is not at all what they said.

  • The only way to stop this would be for 23andme to monitor these "hack lists"

    Unfortunately, from the information that I've seen, the hack lists didn't have these credentials. HIBP is the most popular one and it's claimed that the database used for these wasn't posted publicly but was instead sold on the dark web. I'm sure there's some overlap with previous lists if people used the same passwords but the specific dataset in this case wasn't made public like others.
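
    For anyone curious, checking a password against the public corpus is easy via HIBP's Pwned Passwords range API, which uses k-anonymity so only the first five hex characters of the SHA-1 hash ever leave your machine. A quick Python sketch (the range endpoint is real; the function names are my own):

    ```python
    import hashlib
    import urllib.request

    def sha1_prefix_suffix(password: str):
        # HIBP k-anonymity split: only the 5-char prefix is sent over the wire.
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        return digest[:5], digest[5:]

    def password_breach_count(password: str) -> int:
        # How many times the password appears in HIBP's public corpus (0 if never).
        prefix, suffix = sha1_prefix_suffix(password)
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode().splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    print(sha1_prefix_suffix("password")[0])  # 5BAA6
    ```

    But again: this only catches credentials that were actually published, which these weren't until after the fact.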

  • I'm seeing so much FUD and misinformation being spread about this that I wonder what's the motivation behind the stories reporting this. These are as close to the facts as I can state from what I've read about the situation:

    1. 23andMe was not hacked or breached.
    2. Another site (as yet undisclosed) was breached, and a database of usernames, passwords/hashes, last known login locations, personal info, and recent IP addresses was accessed and downloaded by an attacker.
    3. The attacker took the database dump to the dark web and attempted to sell the leaked info.
    4. Another attacker purchased the data and began testing the logins on 23andMe using a botnet, feeding it the retrieved username/password pairs and picking nodes close to each account's last known login location.
    5. None of the compromised accounts had MFA enabled.
    6. Any data visible to a compromised account, including data shared through opted-into data sharing, was also visible to the people who compromised it.
    7. No data outside of those opt-ins was shared.
    8. 23andMe now requires MFA on all accounts (a requirement they added once they were notified of a potential issue).

    I agree with 23andMe. I don't see how it's their fault that users reused their passwords from other sites and didn't turn on Multi-Factor Authentication. In my opinion, they should have forced MFA for people but not doing so doesn't suddenly make them culpable for users' poor security practices.