
  • Thanks for the link, and the clarification (I didn't know about April 2026).. although it's still confusing, to be honest. In your link they seem to allude to this just being a way to maintain a voluntary detection that is "already part of the current practice"...

    If that were the case, then at what point does "the new law force [chat providers] to have systems in place to catch or have data for law enforcement"? Will services like Signal, SimpleX, etc. really be forced to monitor the contents of chats?

    I don't find any discussion in the link about situations in which providers would be forced to do chat detection. My understanding from reading that transcript is that there's no forced requirement on providers to do this, or am I misunderstanding?

    Just for reference, below is the relevant section translated (emphasis mine).

    In what form does voluntary detection by providers take place, she asks. The exception to the e-Privacy Directive makes it possible for services to detect online sexual images and grooming on their services. The choice to do this lies with the service providers themselves. They need to inform users in a clear, explicit and understandable way about the fact that they are doing this. This can be done, for example, through the general terms and conditions that must be accepted by the user. This is the current practice. Many platforms are already doing this and investing in improving detection techniques. For voluntary detection, think of Apple Child Safety (which is built into every iPhone by default), Instagram Teen Accounts and the protection settings for minors built into Snapchat and other large platforms. We want services to take responsibility themselves. That is an important starting point. Under the current proposal, this possibility would be made permanent.

    My impression from reading the Dutch is that they are opposing this because of the loss of the "periodic review" power the EU would have if this voluntary detection is made permanent. So they aren't worried about services like Signal/SimpleX, which wouldn't do detection anyway, but about services that might opt to actually do detection yet do so without proper care for privacy/security.. or that will use detection for purposes that don't warrant it. At least that's what I understand from the statement below:

    Nevertheless, the government sees an important risk in making this voluntary detection permanent. By making the voluntary detection permanent, the periodic review of the balance between the purpose of the detection and privacy and security considerations disappears. That is a concern for the cabinet. As a result, we as the Netherlands cannot fully support the proposal.

  • The thing is that the moment one country decides to mandate it, the infrastructure for the backdoor needs to be set up as a mandatory requirement.

    For it to be truly optional there would need to be a demand for the opposite too (i.e. countries imposing fines on those who do chat scanning), otherwise it doesn't make much difference... any chat app that wants to work across the EU is gonna end up implementing the scanning EU-wide, even if there are countries that don't enforce it.

  • Where is this explained? The article might be wrong then, because it states the opposite:

    scanning is now “voluntary” for individual EU states to decide upon

    It makes it sound like each state/country is the one deciding, and that the reason "companies can still be pressured to scan chats to avoid heavy fines or being blocked in the EU" is that those countries force them to.

    Who decides what is needed to reduce "the risks of the chat app"? If each country decides this, then each country can opt to enforce chat scanning.. so to me that means the former, not the latter.

    In fact, isn't the latter already a thing? ...I believe companies can already scan chats voluntarily, as long as they include this in their terms, and many do. A clear example is AI chats.

  • The thing is.. even if there are countries publicly rejecting this, once the infrastructure is in place and a backdoor exists because some other country enforces it, how can you be sure it's not being used / exploited?

    Even in the (hypothetical) case that the government is not using it (regardless of what they might tell the public), I wouldn't trust that this backdoor would be so secure that no one other than a government could make use of it.

  • I believe Germany is now in favor of this new proposal, according to https://fightchatcontrol.eu/

    Only Italy, the Netherlands, the Czech Republic and Poland are against. This seems to be based on "leaked documents from the September 12 meeting of the EU Council's Law Enforcement Working Party".

  • the local sending side has some way to control the state their particle wavefunctions collapse into (otherwise they’re just sending random noise).

    Do they? My impression is that, like the article says, "their states are random but always correlated". I think they are in fact measuring random values on each side; it's just that the outcomes are correlated, as described by Schrödinger's equation.

    I believe the intention is not "sending" specific data faster than light.. but rather to "create Quantum Keys for secure information transmission". The information from the quantum particles is correlated on both sides, so each side can use this random data to generate matching keys, which can then be used to encrypt communication over a normal channel (a "Quantum Network that will be used for secure communication and data exchange between Quantum Computers"). The encrypted data itself wouldn't travel faster than light.
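
A toy sketch of that idea (not real quantum mechanics: the shared seed below stands in for the entangled pairs, and all names are made up). Both sides observe the same random outcomes, which neither side chose, so no message travels; hashing the shared randomness yields matching keys on each side.

```python
import hashlib
import random

def measure_entangled_pairs(n, seed):
    # Both parties see the SAME random outcomes (perfect correlation),
    # but neither party gets to CHOOSE what those outcomes are.
    rng = random.Random(seed)
    return [rng.randint(0, 1) for _ in range(n)]

# The shared seed stands in for the entangled state; in reality the
# outcomes are physically random, not derived from a seed.
alice_bits = measure_entangled_pairs(128, seed=2024)
bob_bits = measure_entangled_pairs(128, seed=2024)

assert alice_bits == bob_bits  # correlated on both sides...
# ...yet random: neither side picked the values, so no data was "sent".

# Each side independently derives the same symmetric key and uses it
# to encrypt traffic over a normal (slower-than-light) channel.
key = hashlib.sha256(bytes(alice_bits)).digest()
```

The point of the sketch: the bits are useless for messaging (they're random), but perfect for key agreement, since both sides end up with the same key without transmitting it.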

  • I'm not sure that's what it'd look like.. distributions are hardly ever that homogeneous.

    I'd bet all the GrapheneOS users would get together in their own corner and nerd out about their customizations.

    For the record: 1 in 25 is 4% ...the image gives (intentionally?) the illusion of the proportion being higher.

    1. The Pixel is easily unlockable, so one can install custom firmware without being a "pro". Its hardware is (or was reverse-engineered to be) compatible enough to make the experience seamless, with a whole firmware project / community exclusively dedicated to that specific range of hardware devices, making it a target for anyone looking for a phone to install custom Android firmware on.

    But I'd bet it's a mix of 2 and 3.

  • Code being visible is not very useful if you can't distribute it, extend it, expand it and improve it.

  • Many KeePass clients support not actually deleting entries, instead moving them to a "Trash" subgroup inside the kdbx file that is ignored when searching entries. They also usually keep track of the history of changes to each entry, making edits non-destructive.

    Coupled with Syncthing typically creating backup copies automatically whenever it encounters conflicting changes, I feel this should be enough, at least for me personally.
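
A minimal sketch of that non-destructive behavior (an in-memory model, not the actual kdbx format; all class and field names here are made up). Deleted entries move to a trash group that search skips, and every edit pushes the old value into a per-entry history:

```python
class Entry:
    def __init__(self, title, password):
        self.title = title
        self.password = password
        self.history = []  # previous values, kept on every edit

    def update_password(self, new_password):
        self.history.append(self.password)  # non-destructive edit
        self.password = new_password

class Database:
    def __init__(self):
        self.entries = []
        self.trash = []  # the "Trash" subgroup

    def delete(self, entry):
        # Soft delete: move the entry to trash instead of destroying it.
        self.entries.remove(entry)
        self.trash.append(entry)

    def search(self, text):
        # The trash group is ignored when searching.
        return [e for e in self.entries if text in e.title]

db = Database()
entry = Entry("mail", "hunter2")
db.entries.append(entry)

entry.update_password("s3cret")  # old password kept in history
db.delete(entry)                 # hidden from search...

assert db.search("mail") == []
assert entry in db.trash         # ...but still recoverable
assert entry.history == ["hunter2"]
```

Nothing is ever destroyed by a normal edit or delete, which is exactly what makes sync conflicts recoverable.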

  • Firefox 85 partitions all of the following caches by the top-level site being visited

    This means that while the identification number is not the same cross-website (sites with different top-level domains will get different values), it will still identify the user accessing a given website in a way that "can be stored almost persistently and cannot be easily cleared by the user [..] by flushing the cache, closing the browser or restarting the operating system, using a VPN or installing AdBlockers". Which is what this tracking method was claiming to do.

    Going to https://demo.supercookie.me/ still gives you a consistent ID on Firefox (and, I would guess, Chrome too) across reboots, and it isn't blocked by an adblocker or similar privacy tool. One thing they are wrong about, though, is incognito mode: at least in Firefox, the cache in incognito mode is fresh every session, so the ID changes if you use incognito.
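
A rough sketch of what partitioning changes (a hypothetical dict-based model, not Firefox's actual cache code; domain names are made up). The cache key gains the top-level site, so a cached tracker resource remains a stable per-site identifier but stops being a cross-site one:

```python
def cache_key(top_level_site, resource_url, partitioned=True):
    # Partitioned caches key every resource by the top-level site being
    # visited, not just by the resource URL itself.
    if partitioned:
        return (top_level_site, resource_url)
    return (resource_url,)

cache = {}
tracker = "https://tracker.example/favicon.ico"

# Unpartitioned: one shared entry, so the ID follows the user across sites.
cache[cache_key("site-a.example", tracker, partitioned=False)] = "id-1234"
cross_site_old = cache_key("site-b.example", tracker, partitioned=False) in cache

# Partitioned: each top-level site gets its own copy of the entry...
cache.clear()
cache[cache_key("site-a.example", tracker)] = "id-1234"
cross_site_new = cache_key("site-b.example", tracker) in cache

# ...but revisiting the SAME site still hits the cached ID, so the
# per-site identifier survives reboots, VPNs and adblockers.
same_site = cache_key("site-a.example", tracker) in cache
```

In this model `cross_site_old` is a hit while `cross_site_new` is a miss, which is the partitioning fix; `same_site` stays a hit, which is why the demo still shows a consistent per-site ID.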

  • It's unclear what you are trying to say. The question was what switching license would do. There are two scenarios: 1) either Google is really not making changes to the ffmpeg source internally right now ...or 2) they are in fact making changes to it internally (perhaps for encoding with their own codecs, etc.) which they are not releasing back to the public (since the code is LGPL, not AGPL).

    In situation 1, they can simply continue using ffmpeg even if it were to switch to AGPL. They would have no need/obligation to release anything, whether they decide to fund development or not. The way I see it, only in situation 2 would Google be affected by a license change. However, if they only use ffmpeg to have their own encoder program for specific codecs, they might as well stop using ffmpeg for that purpose and write their own program to drive their encoders. Most of the encoding work is already done in separately released encoding libraries (like libaom, which Google licensed under BSD-2).

    But even in the rare case of Google having made changes that (after a license change) they would suddenly be willing to share with the community despite not having done so before... the whole problem with this bug-reporting mess is that most of the issues reported by the automated tools aren't really that impactful/important; they are things that even Google wouldn't be that interested in fixing (why would Google need to fix a codec bug that only affects a video game cinematic from 1995?). These reports are just the result of automated & indiscriminate AI analysis, slop.

  • AGPL is more "copyleft", but not really more "permissive", in the sense that AGPL adds the extra requirement that anyone running the software on a server must provide the source to the people who use the service that server provides.

    It plugs a loophole in the other GPL licenses that allows companies not to share their custom modifications as long as they don't directly distribute the binaries (they can offer a service using internally modified binaries, but as long as they don't distribute the binaries themselves they don't have to share the source code for those modifications, even though they are GPL).

    However, I also don't think the change would really solve this particular bug-reporting trouble. Most likely Google has not patched these vulnerabilities internally either, or at least the biggest chunk of them (since most of them are apparently edge cases that would most likely not apply to Google's services anyway).

  • Sounds like a prioritization issue. They could configure the git bots to automatically flag all these as "AI-reported" and filter them out of their TODO, considering them low priority by default, unless/until someone starts commenting on the ticket, bringing it to their attention / legitimizing it.

    EDIT: ok, I just read about the 90-day policy... I feel the problem then is not the reporting, but the further actions Google plans based on an automated tool that seems inadequate to judge the severity of each issue.
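
A sketch of the bot rule described above (hypothetical field names, not any real issue tracker's API): AI-reported tickets default to low priority until a human comments on them.

```python
def triage(issues):
    # Demote AI-reported issues to low priority by default; a human
    # comment on the ticket "legitimizes" it and keeps its severity.
    for issue in issues:
        if "AI-reported" in issue["labels"] and issue["human_comments"] == 0:
            issue["priority"] = "low"
    return issues

issues = triage([
    {"id": 1, "labels": ["AI-reported"], "human_comments": 0, "priority": "medium"},
    {"id": 2, "labels": ["AI-reported"], "human_comments": 3, "priority": "medium"},
    {"id": 3, "labels": [], "human_comments": 0, "priority": "high"},
])
```

Here issue 1 gets demoted to "low", while issue 2 (humans engaged) and issue 3 (human-reported) keep their original priority.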

  • I'm afraid of the price.. this looks much more capable and powerful than the Index, which was quite expensive, so I suspect it might end up in a similar price range, if not higher. But let's hope.

    Interestingly, it seems to be using a Snapdragon ARM-based chip, which means it requires another layer of emulation/translation to run Steam games standalone. It's said to use FEX (https://fex-emu.com/), probably combined/integrated with Proton.

  • Do you typically have a cable tethered to your sunglasses?

    I personally would prefer the small counterweight over the cable, especially if it helps secure the glasses while I move around.

  • According to LTT, the section containing the computer weighs just under 190 grams (about the weight of an average medium-sized apple).

    The battery is the counterweight.. which is actually a good thing to have. I have a first-generation Quest, and the main problem with that one was the weight distribution; adding weight to the back actually made it more bearable. Just by looking at how thin the front part of this one is, I can tell it's gonna be so much more comfortable.

  • Sure, but if it wasn't triaged, why consider it "medium impact"? When tight on resources, I feel it's best to default to "low priority" for all issues whose effect (i.e. on the end user, or on software depending on it) isn't clearly scoped and explained by the reporter. If the reporters (or those affected) haven't done the work to explain why it's important to them to have this fixed, then it's probably not that important to them. Some projects even have bots that automatically close issues whenever there has been no activity for a certain time (though I'd prefer labeling them as "low engagement" or something, so they can be filtered out when swamped, instead of simply being closed).

    About "public confidence".. I feel that would rather be "misplaced confidence" if it's based on a number that is "massaged" to hide issues. Also, this is an open source project we are talking about; there isn't an investment fund behind it or a need for absolute loyalty or blind trust. The code is objectively there; the trust should never be blind. I'd be more suspicious of a project as popular, frequently updated & ubiquitous as ffmpeg if it didn't have a long list of reports, especially when they are (allegedly) not properly triaged. Anyone who chooses ffmpeg based on the number of open issues, without actually investigating how relevant that number is... well.. they can go look for different software.

  • I agree.. I mean, they are not forced to fix the issues; if an issue is obscure and not many people are affected, there's no reason they can't just mark it as "patches welcome" and leave it there. I feel this is a problem with the project's prioritization policy, not really a problem with QA / issue reporting.

    For context:

    The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”

    To me, the problem shouldn't be the report, but categorizing it as "medium impact" when they think fixing it isn't "a valuable use of an assembly programmer's time".

    Also:

    the former maintainer of libxml2 [...] recently resigned from maintaining libxml2 because he had to “spend several hours each week dealing with security issues reported by third parties. Most of these issues aren’t critical, but it’s still a lot of work.

    Would it truly be better if the issues weren't reported? What's the difference between an issue not being reported and an issue not being fixed because it's not seen as a priority?