
  • @PumpkinSkink@lemmy.world said "reject all", not "reject optional cookies" or "allow essential". If the website offers a "reject all" button (which many do, even if that's not mandated by the law), it actually does reject even the essential cookies. In my experience, the times I've pressed such a button, the result was always that the banner showed again after refreshing the page.

    And "Could be seen as" is subjective too. They could argue that having the banner, even if inconvenient, does not really break the website. They can also easily argue that, since the point of the law was to get them to request consent, they are actually being even safer in terms of compliance by asking more.

    Also, I would still rather have the possibility of no banners at all, not even the first time I open the page. The browser's configuration, following the standard, could set a default for all websites and potentially avoid the popup to begin with. Then the responsibility would be with the browser, not the website.

  • That doesn't work, because rejecting all cookies means it's impossible for the page to remember whether you already dismissed the banner, so the result is that the banner will always show.

    The real solution would be to make this a browser / HTML standard, similar to other permissions managed by the browser (like access to the camera/mic, permission to send notifications, etc.). Each browser could then have its own way to respond to these permission requests, one we can fully control and customize, with a UI owned by the browser that is consistent across websites and with settings that can be remembered browser-side (so the request can be automatically denied if that's what you want).

  • "Essential" is still very vague. All purposes should be categorized. If a cookie is used for session/identity, then it should be categorized as "session/identity"; there should not be a category defined as "essential".

    You can also make a karaoke page that does not work without access to the microphone, yet the browser still has a dedicated permission request for that; it does not get lumped into a bucket of generic "essential" permissions just because the page cannot work without the microphone.

    There should be a whole HTML standard similar to the Notification.requestPermission() (which requests permission to send browser notifications), but with a granular set of permissions for storage of data for different purposes.

    And this should be a browser standard, not a custom popup in the logic of the website itself that will be styled differently on each page, allowing all sorts of anti-patterns. I should be able to control, from the browser, what the defaults should be for each individual category of data, without having to click through every single website I visit. The UI for requesting consent should be controlled by the browser, not by the page.

  • I mean, in the Linux terminal you can literally do anything a computer can do. You can play with your PC speaker with beep, dim your screen with brightnessctl, etc. Why would you assume there wasn't a command for suspending? :P

    You can also use rtcwake and program the PC to come back from suspension automatically at a certain time. I used to set up my small laptop to wake up with music, as a morning alarm.
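
    For reference, it can be as simple as something like this (times are just an example, and it usually needs root):

    # suspend to RAM now and wake automatically in 8 hours
    sudo rtcwake -m mem -s $((8 * 60 * 60))

    # or wake at an absolute time, e.g. tomorrow at 07:00
    sudo rtcwake -m mem -t "$(date -d 'tomorrow 07:00' +%s)"

    Pair that with something that starts playing music after resume and you have the alarm.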

  • On the upside, the end user needs less data for the same content. This is particularly relevant under 4G/5G and restrictive data plans, or when connecting from places, or to servers, with a weak connection. It also helps avoid waiting for the video to "buffer" mid-playback.

    But yes, I agree each iteration has diminishing returns, with a higher bump in requirements. I feel that's a pattern we see often.
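
    To put a very rough number on it (just an illustration): a two-hour 1080p stream at 8 Mbps is about 8 Mbit/s × 7200 s ≈ 7.2 GB, so a codec that saves ~30% bitrate at the same quality spares the viewer roughly 2 GB of data.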

  • It's not like improper use of "steal" is unheard of; I see people say "I'm gonna steal that" and similar all the time, even about things openly given away for free. And considering it's quite clear that the MIT license allows others to take without sharing back (that's the main difference from the GPL), I'm quite sure the commenter was aware it wasn't really theft, yet chose that word probably with the intention to insult the practice, rather than as a fair descriptor.

    So yes, you're right, it isn't theft... but I don't think that was the point of the comment.

    Note that a high-quality + low-bitrate AV1 setup often requires parameters that raise the encoding time and processing power beyond what's typically sensible in an average setup without a hardware encoder. And compared with h265 the cost is even higher, since not only is h265 less complex and faster to begin with, it is also often hardware accelerated.

    Here's a 2020 paper comparing various encoders for high quality at full HD: https://www.researchgate.net/publication/340351958_MSU_Video_Codec_Comparison_2019_part_IV_High-Quality_Encoding_aom_rav1e_SVT-AV1_SVT-HEVC_SVT-VP9_x264_x265_ENTERPRISE_VERSION

    "First place in the quality competition goes to aom [AOMedia's AV1 encoder], second place goes to SVT-AV1, and third place to x265"

    And the AV1 encoders are younger, so I wouldn't be surprised if they have improved relative to the h265 ones since that article.

    Here's the settings they used in aom, for reference:

    aomenc.exe --width=%WIDTH% --height=%HEIGHT%
        --fps=%FPS_NUM%/%FPS_DENOM% --bit-depth=8 --end-usage=vbr
        --cpu-used=0 --target-bitrate=%BITRATE_KBPS% --ivf --threads=32
        --tune=ssim -o %TARGET_FILE% %SOURCE_FILE%

  • Compression efficiency and encoding speed are often a trade-off. H266 is also much slower than AV1 under the same conditions. Hopefully more AV1 hardware encoders will arrive to speed things up; at least AV1 decoders are already relatively common.

    Also, the gap between h265 and AV1 is larger than the gap between AV1 and h266. So I'd argue it's the other way around. AV1 is reported to be capable of ~30-50% bitrate savings over h.265 at the cost of speed. H266's differences from AV1 are minor: it's reported to reach a similar range, just leaning more towards the 50% side, at the cost of even lower speed. I'd say once AV1 encoding hardware is more common and the slower, higher-quality AV1 presets become viable, it'd be a good balance for most cases (there's a rough sketch of that speed/efficiency trade-off at the end of this comment).

    The thing is that h26x has a consortium of corporations behind it, with connections and an interest in making sure they can cash in on their investment, so they get a lot of traction to get hardware out.
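
    To make the speed trade-off concrete, a rough ffmpeg comparison looks something like this (CRF values and filenames are just ballpark examples, not tuned to be exactly equivalent):

    # AV1 via libaom: cpu-used 0-2 is the slow, high-efficiency end
    ffmpeg -i input.mkv -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 2 -row-mt 1 -c:a copy av1.mkv

    # HEVC via x265 at a slow preset, which typically encodes much faster
    ffmpeg -i input.mkv -c:v libx265 -crf 22 -preset slow -c:a copy hevc.mkv

    Time both and compare file sizes at similar visual quality and the trade-off becomes obvious.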

  • This! Also, there's AI upscaling: if good enough, it could (in theory) make a 1080p video look so close to 4K that only a few lucky, healthy young people would be able to tell the difference. In the meantime, my eyesight progressively gets worse with age.

  • Good. I mean, regardless of where anyone stands on this war, I'm really tired of how countries use the Middle East as a way to run endless proxy wars... the whole thing would have either fizzled out or stabilized in one direction or another by now had it not been for all the countries instigating and pouring weapons into it.

  • It's actually the lazy way. I only work once, then copy that work everywhere. The copying/syncing is surprisingly easy. If that's what you call "package management" then I guess doing "package management" saves a lot of work.

    If I had to re-configure my devices to my liking every time, I would be wasting time on repetition, not on actual improvement. I already configured it the way I like it once, so I want to be able to simply copy it over easily instead of re-writing it every time for different systems. It's the same reason why I've been reusing my entire /home partition for ages on my desktop: I preserve all my setup even after testing out multiple distros.

    If someone does not customize their defaults much, or does not mind re-configuring things all the time, I'm sure it would be fine for them to have a different setup on each device... but I prefer doing the work only once and copying it.

    And I didn't say that bash is the only config I have. Coincidentally, my config does include a config.fish I wrote ages ago (14 years ago, apparently). I just don't use it, because most devices don't have fish, so it cannot replace POSIX/Bash. As a result it was naturally left very barebones (probably outdated too), and it's not as well crafted or featureful as the POSIX/bash one, which gets used much more.
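
    And about the copying being easy: it really is just something in this spirit (paths are made up for the example):

    # push the shared config to a device that only has ssh/scp
    scp -r ~/.config/shell user@device:~/.config/

    # where git is available, keep the config as a repo and just pull updates
    git -C ~/.dotfiles pull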

  • Manually downloading the same shell scripts on every machine is just doing what the package manager is supposed to do for you

    If you have a package manager available, and what you need is available there, sure. My Synology NAS, my Knulli, my cygwin installs on Windows, my Android device... none of them make it so easy to have custom shells (does fish even have a Windows port?).

    I rarely have to copy manually; in many of those environments you can at least git clone, or use existing syncing mechanisms. In the ones that don't even have that... well, at least copying the config works. I just scp it; not a big deal, it's not like I have to do it that often. I could even script it to make it automatic if it ever became a problem.

    Also, note that I do not just use things like z straight away: my custom configuration automatically calls z as a fallback when I mistype a directory with cd (or when I intentionally use cd while in a far/wrong location just so I can get there faster/easier). I have a lot of things customized; the package install would only be the first step.
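
    The fallback itself is roughly this in bash (simplified sketch, assuming the z function is already sourced; the real config has more checks):

    # try a normal cd first; if the target doesn't exist, hand the argument to z
    cd() {
        builtin cd "$@" 2>/dev/null || z "$@"
    }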

  • It's not only clusters. I have my shell configuration even on my Android phone, which I often connect to by ssh. And also on my Kobo, and on my small portable console running Knulli.

    In my case, my shell configuration is structured across a few folders, so I can add config specific to each location while still sharing the same base (sketched at the end of this comment).

    Maybe not everything is general, but the things that are general and useful become ingrained in a way that makes it annoying when you don't have them. Like specific shortcuts for backwards history search, or even some readline movement shortcuts that apparently are not standard everywhere... or jumping to the most 'frecent' directory based on a pattern, like z does.

    If you don't mind those scripts not always working, and you have the time to maintain two separate sets of configuration, initialization scripts, aliases, etc., then it's fine.
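
    The structure I mentioned above looks roughly like this in the shared .bashrc (directory names are just an example, not my exact layout):

    # load the shared base first, then host-specific extras if they exist
    for f in ~/.config/shell/base/*.sh; do [ -r "$f" ] && . "$f"; done

    host_dir=~/.config/shell/hosts/$(hostname)
    if [ -d "$host_dir" ]; then
        for f in "$host_dir"/*.sh; do [ -r "$f" ] && . "$f"; done
    fi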

  • If you want your scripts to "always work" you'll need to go with the most common/standard language, because the environments you work on might not be able to use all of those languages.

  • I agree completely with that sentiment; I had the same problem. The output of most commands was interpreted in a way that was not compatible with the way Nu structures data, and yet it still rendered as if it were a table with one single entry... it was a bit annoying.

  • Didn't they also stop using the þ in Modern English?

    Why use þ (Þ, thorn) but not ð (Ð, eth)? ...and æ (Æ, ash) ...might as well go all the way if you want to type like that.

  • powershell, in concept, is pretty powerful since it's integrated with C# and allows dealing with complex data structures as objects too, similar to nushell (though it does not "pretty-print" everything the way nushell does, at least by default).

    But in practice, since I don't use it as much, I never really get used to it and I'm constantly checking how to do things. I'm too used to posix tools, and I often end up bringing over a portable subset of msys2, cygwin or similar whenever possible, just so I can use grep, sed, sort, uniq, curl, etc. in Windows ^^U ...however, for scripts where you have to deal with structured data it is superior, since it has built-in methods for that.

  • I prefer getting comfortable with bash, because it's everywhere and I need it for work anyway (no fancy shells in remote VMs). But you can customize bash a lot to give more colored feedback, or even customize the shortcuts with readline (small example at the end of this comment). Another one is pwsh (powershell), because it's there by default on the Windows machines that (sadly) I sometimes have to use as VMs too. But you can also install it on Linux, since it's now open source.

    But if I wanted to experiment personally, I'd go for xonsh, a Python-based one. So you have all the tools and power of Python with terminal convenience.
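
    As an example of the readline customization I mean, generic lines like these (via bind in .bashrc, or equivalently in ~/.inputrc):

    # arrow keys search history using whatever is already typed
    bind '"\e[A": history-search-backward'
    bind '"\e[B": history-search-forward'
    # friendlier tab completion
    bind 'set completion-ignore-case on'
    bind 'set show-all-if-ambiguous on'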

  • I do apply the same standard to gecko. [...] However those criticisms are immaterial to the decision this judge had to make.

    Then your "same deal with webkit" statement was equally immaterial.

    its not a contradiction. the difference here is every browser you mentioned as ‘alternatives’ are not well funded dont actively add new functionality in the same way mozilla/google do.

    That argument isn't negating the sentence I wrote. I think you used "incorrect" when you meant "correct, but...".

    However, I don't think Mozilla is better funded than Apple and the other companies I mentioned behind Webkit.

    And I didn't directly mention specific chromium browsers as 'alternatives'.. the alternatives I was talking about were options those browsers could take against Google... I don't think you understood the point.

    which is completely immaterial when they don’t develop/add new features for the web.

    Ironically, NOT developing/adding features has been the major way in which the opposition has successfully pushed back against Google's "standards". Webkit being the second engine by user share and opposing those features, while still being a stable and well maintained base (it's not like they don't have a pipeline) with many corporations behind it (not just Apple; even Valve partnered with the WebkitGtk maintainers), is a blockade to Google's domination just as much as Gecko is.

    The web is already bloated enough.. I think we need browsers that are more prudent when it comes to developing/adding new features and instead focus more on maintenance.