
Posts 2 · Comments 274 · Joined 3 yr. ago

  • Will you be able to handle all these panels as it becomes economically reasonable for people to replace them?

  • Back in my day, mice had balls.

  • I don't think it's 'user error' exactly. Maybe when this has occurred, something in the frunk obstructed the closing of the hood so it almost latched, but the depressed switch detected it as closed. I think they might be adjusting the switch sensitivity in software (maybe it uses a Hall effect sensor and a magnet?) so that this almost-closed condition will be reported as just being open.

  • The only thing I can think of is if the sensor is a hall effect sensor that detects something (the switch?) being depressed by the hood. The sensitivity of the hall effect sensor might be tuneable. They may be able to reduce the sensitivity so it still detects a properly closed hood, but reports an improperly closed hood as open.

    It's annoying that the report just says it's fixed in software without explaining how.
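
A toy sketch of the kind of threshold change I'm imagining (all readings and thresholds here are made up, and a real implementation would live in firmware, not Python):

```python
# Hypothetical: classify hood state from a Hall-sensor reading.
# Field readings and thresholds are invented for illustration only.

def hood_state(field_reading: float, closed_threshold: float) -> str:
    """Report 'closed' only if the magnet is close enough to the sensor."""
    return "closed" if field_reading >= closed_threshold else "open"

FULLY_CLOSED = 0.95   # magnet right at the sensor
ALMOST_CLOSED = 0.80  # hood resting on an obstruction, not latched

# Old tuning: the almost-latched hood still reads as closed (the bug).
assert hood_state(FULLY_CLOSED, closed_threshold=0.75) == "closed"
assert hood_state(ALMOST_CLOSED, closed_threshold=0.75) == "closed"

# New tuning: only a fully latched hood reads as closed.
assert hood_state(FULLY_CLOSED, closed_threshold=0.90) == "closed"
assert hood_state(ALMOST_CLOSED, closed_threshold=0.90) == "open"
```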

  • I guess the funny thing is that each Git commit is internally just a file. Branches and tags are just pointers to specific commit files, and of course commits point to their parents. If a branch is deleted or reset to an earlier commit, the orphaned commits are still left in the filesystem. Various Git actions can trigger a garbage collection, but unless you generate huge diffs, orphaned commits usually stick around for a really long time - determining whether a commit is orphaned is work that Git usually doesn't bother doing. There's also a reflog that can let you recover lost commits if you make a mistake.
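
A toy model of what I mean - commits as records pointing at their parents, branches as names pointing at commits, and orphan detection as a reachability walk from the refs (the hashes and refs are invented):

```python
# Toy model of Git's object graph. Commits just point at their parents;
# branches are just names pointing at commit IDs.

commits = {
    "a1": [],        # root commit
    "b2": ["a1"],
    "c3": ["b2"],    # was the tip of a branch that got deleted
    "d4": ["b2"],
}
refs = {"main": "d4"}  # the branch that pointed at c3 no longer exists

def reachable(refs, commits):
    """Walk parent links from every ref; anything unseen is orphaned."""
    seen, stack = set(), list(refs.values())
    while stack:
        cid = stack.pop()
        if cid not in seen:
            seen.add(cid)
            stack.extend(commits[cid])
    return seen

orphans = set(commits) - reachable(refs, commits)
print(orphans)  # {'c3'} - still on disk until garbage collection
```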

  • Except PGP is a substring of the 'technically correct' term. It's like someone saying you're playing on your Nintendo - "Um, actually it's a Nintendo 64."

  • I think GitHub keeps all the commits of forks in a single pool. So if someone commits a secret to one fork, that commit can be looked up from any of them, even if the fork it was committed to was private, has been deleted, or has no remaining references to the commit.

    The big issue is discovery. If no one has pulled the leaky commit onto a fork, then the only way to access it is to guess the commit hash. GitHub makes this easier for you:

    What’s more, Ayrey explained, you don’t even need the full identifying hash to access the commit. “If you know the first four characters of the identifier, GitHub will almost auto-complete the rest of the identifier for you,” he said, noting that with just sixty-five thousand possible combinations for those characters, that’s a small enough number to test all the possibilities.

    I think all GitHub should do is prune orphaned commits from the auto-suggestion list. If someone grabbed the complete commit ID then they probably grabbed the content already anyway.
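
For scale, here's the whole search space of 4-character prefixes - small enough to enumerate in milliseconds:

```python
from itertools import product

# Every 4-character hex prefix an attacker would have to try if GitHub
# completes short commit IDs: 16^4 = 65,536 candidates.
hex_digits = "0123456789abcdef"
prefixes = ["".join(p) for p in product(hex_digits, repeat=4)]
print(len(prefixes))  # 65536
```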

  • Ah, actually reading the article reveals why this is an issue:

    What's more, Ayrey explained, you don't even need the full identifying hash to access the commit. "If you know the first four characters of the identifier, GitHub will almost auto-complete the rest of the identifier for you," he said, noting that with just sixty-five thousand possible combinations for those characters, that's a small enough number to test all the possibilities.

    So enumerating all the orphan commits wouldn't be that hard.

    In any case, if a secret has been publicly disclosed, you should always assume it's still out there. For sure, rotate your keys.

  • Well, sort of. GitHub certainly could refuse to render orphan commits. They do pop up a banner saying so, but I don't see why they should show the commit at all. They could still keep the data until it's garbage collected, since a user might re-upload the commit in a new branch.

    This seems like a non-issue though since someone who hasn't already seen the disclosed information would need to somehow determine the hash of the deleted commit.

  • You fraud.

  •  
        
    -----BEGIN PGP SIGNATURE-----

    iQIzBAEBCgAdFiEETYf5hKIig5JX/jalu9uZGunHyUIFAmaB8YEACgkQu9uZGunH
    yUKi7Q/+OJPzHWfGPtzk53KnMJ3GC8KQGEUCzKkSKmE0ugdI9h1Lj4SkvHpKWECK
    Y1GxNujMPRM/aAS2M97AEbtYolenWzgYmO1wt131/hEG4tk+iYeB2Sfyvngbg5KI
    y4D7mqpcVWYSf6S13vUX8VuyKeTxK6xdkp95E0wPVLfJwx5o5nH0njLXxeW0IblY
    URLonem/yuBrJ6Ny3XX9+sKRKcdI9tOqhMhTxPcQySXcTx1pAG7YE7G5UqTbJxis
    wy7LbYZB5Yy0FO3CtRIkA+cclG4y2RMM9M9buHzXTWCyDuoQao68yEVh4OdqwH1U
    5AUnqdve5SiwygF/vc50Ila6VjJ4hyz1qVQnjqqD96p7CSVzVudLDDZMQZ8WvqLh
    qaFr51xJvH6p6/CP1ji4HHucbJf6BhtSqc8ID9KFfaXxjfZHiUtgsVDYMV0e7u9v
    lhcDH/3kmw/JImX25qsEsBeQyzOJsBvxOYD3lZrwSY9+7KNGVQstFrEvCuVPHr72
    BQJPIhg3+9g6m36+9Uhs1N6b8G9DsZ6OgnNqr9dGturUg6CtRsLSpqoZq0FT9cLA
    tnFTJDaXgx1DZnsLGDSoQQYjZ3vS+YYZ8jG86KGLFyXVK+uSssvorm9YR1/GGOy7
    suaxro72An+MxCczF5TIR9n3gisKvcwa8ZbdoaGd9cigyzWlYg8=
    =EgZm
    -----END PGP SIGNATURE-----
    
      
  • What do you mean? The shadow mask ensures the gun for each colour can only hit the phosphors of that colour. How would a lower resolution change that?

  • As far as we know, the input was a file filled with zeroes

    CrowdStrike have said that was not the problem:

    This is not related to null bytes contained within Channel File 291 or any other Channel File.

    That said, their preliminary incident review doesn't give us much to go on as to what was wrong with the file.

    You're speculating that it was something easy for a third party to test for. It certainly could have been, but I would hope it's a more subtle bug which, as you say, can't be exhaustively tested for. Source code analysis definitely would have surfaced this bug, so either they didn't bother looking or didn't bother fixing it.

  • How would you prove that no input exists that could crash a piece of code? The potential search space is enormous. Microsoft can't prevent drivers from accepting external input, so there's always a risk that something could trigger an undetected error in the code. Microsoft certainly ought to be fuzz testing the drivers it certifies, but that will only catch low-hanging fruit. Unless they can see the source code, it's hard to determine for sure that there are no memory safety bugs.

    The driver developers are the ones with the source code and should have been using analysis tools to find these kinds of memory safety errors. Or they could have written it in a memory safe language like Rust.
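
To illustrate why fuzzing only catches the shallow stuff, here's a dumb fuzzer against a made-up parser with two planted bugs - one hit by roughly 1 in 256 random inputs, one needing an exact 8-byte header (everything here is invented; real driver fuzzing involves harnesses, coverage guidance, and sanitizers):

```python
import random

# Hypothetical parser with two planted bugs of very different depth.
def parse(data: bytes) -> None:
    if data and data[0] == 0x7F:
        raise ValueError("shallow bug: ~1 in 256 random inputs hit this")
    if data[:8] == b"\xde\xad\xbe\xef\xca\xfe\xba\xbe":
        raise ValueError("deep bug: needs an exact 8-byte header")

random.seed(0)
shallow_hits = deep_hits = 0
for _ in range(10_000):
    blob = bytes(random.randrange(256) for _ in range(random.randrange(1, 16)))
    try:
        parse(blob)
    except ValueError as e:
        if "shallow" in str(e):
            shallow_hits += 1
        else:
            deep_hits += 1

# With this many random trials the shallow bug gets hit; the deep one won't.
print(shallow_hits, deep_hits)
```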

  • It's a proprietary config file. I think it's a list of rules to forbid certain behaviours on the system. Presumably it's downloaded by some userland service, but it has to be parsed by the kernel driver. I think the files get loaded OK, but the driver crashes when iterating over an array of pointers. Possibly these are the rules and some have uninitialised pointers, but this is speculation based on some kernel dumps on Twitter. So the bug probably existed in the kernel driver for quite a while, but they pushed a (somehow) malformed config file that triggered the crash.
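
Here's a simulation of that speculated bug class - a config that claims more rules than it provides, leaving "uninitialised" slots that crash when dereferenced (everything here is invented; the real driver is C, not Python, and None stands in for a bad pointer):

```python
# Speculative sketch only: a rules table with uninitialised trailing slots.

class Rule:
    def __init__(self, name: str):
        self.name = name

    def apply(self) -> str:
        return f"checked {self.name}"

def load_rules(header):
    """Hypothetical loader: the file claims `claimed` entries but only
    provides `provided`, leaving the rest uninitialised (None)."""
    claimed, provided = header
    rules = [Rule(f"rule{i}") for i in range(provided)]
    rules += [None] * (claimed - provided)  # uninitialised "pointers"
    return rules

def scan(rules):
    for r in rules:
        r.apply()  # dereference; blows up on a None "pointer"

scan(load_rules((3, 3)))  # well-formed file: fine

try:
    scan(load_rules((5, 3)))  # malformed file: two bad slots
except AttributeError:
    print("crash: dereferenced an uninitialised pointer")
```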

  • For this Channel File, yes. I don't know what the failure rate is - this article mentions 40-70%, but there could well be a lot of variance between different companies' machines.

    The driver has presumably had this bug for some time, but they've never had a channel file trigger it before. I can't find any good information on how they deploy these channel files other than that they push several changes per day. One would hope these are always run on a diverse set of test machines to validate there's no impact to functionality, but only they know the procedure there. It might vary based on how urgent a mitigation is or how invasive it'll be - though they could just be winging it. It'd be interesting to find out exactly how this all went down.

  • It should be relatively straightforward to script the recovery of cloud VM images (even without snapshots). Good luck getting the unwashed masses to follow a script to manually enter recovery mode and delete files in a critical area of the OS.

  • How does Falcon store these channel files on Linux? I don't know how an immutable distro would handle this, given CrowdStrike push several of these updates per day and presumably use their own infrastructure to deploy them.

    I guess if you pay them enough they could customise the deployment to work with whatever infrastructure you have, but it's all proprietary, so I have no idea if they're really doing that anywhere.