
  • On the upside, the end user uses less data for the same content. This is particularly interesting under 4G/5G and restrictive data plans, or when accessing places/servers with a weak connection. It also helps avoid having to wait for the video to “buffer” mid-playback.

    But yes, I agree each iteration has diminishing returns, while demanding a bigger bump in requirements. I feel that’s a pattern we see often.


  • It’s not like loose use of “steal” is unheard of; I see people say “I’m gonna steal that” and similar all the time, even about things openly given away for free. And considering it’s quite clear that the MIT license allows others to take without sharing back (that’s the main difference from the GPL), I’m quite sure the commenter was aware it wasn’t really theft, yet chose that word to insult the practice rather than as a fair descriptor.

    So yes, you’re right, it isn’t theft… but I don’t think that was the point of the comment.



  • Compression and efficiency are often a trade-off. H266 is also much slower than AV1 under the same conditions. Hopefully more AV1 hardware encoders will come along to speed things up… but at least AV1 decoders are already relatively common.

    Also, the gap between H265 and AV1 is larger than the gap between AV1 and H266, so I’d argue it’s the other way around. AV1 is reported to be capable of ~30-50% bitrate savings over H265, at the cost of speed. H266’s differences from AV1 are minor: it’s reported to reach a similar range, just weighted more towards the 50% side, at the cost of even lower speed. I’d say once AV1 encoding hardware is more common and the more demanding AV1 presets become viable, it’ll be a good balance for most cases (example below).

    The thing is that H26x has a consortium of corporations behind it, with connections and an interest in making sure they can cash in on their investment, so it gets a lot of traction in getting hardware out.
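    For anyone who wants to compare for themselves, a minimal sketch (assuming an ffmpeg build with libx265 and libsvtav1 enabled; file names are placeholders, and the CRF values are rough starting points since CRF scales aren’t equivalent across encoders):

        # Encode the same source with H.265 and then AV1, and compare sizes.
        ffmpeg -i input.mkv -c:v libx265 -preset medium -crf 28 -an out_h265.mkv
        ffmpeg -i input.mkv -c:v libsvtav1 -preset 6 -crf 35 -an out_av1.mkv
        ls -lh out_h265.mkv out_av1.mkv   # rough size comparison at similar quality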



  • Good. I mean, regardless of where anyone stands on this war, I’m really tired of how countries use the Middle East as a stage for endless proxy wars… the whole thing would have either fizzled out or stabilized in one direction or another by now, had it not been for all the countries instigating it and pouring weapons into it.


  • It’s actually the lazy way. I only work once, then copy that work everywhere. The copying/syncing is surprisingly easy. If that’s what you call “package management” then I guess doing “package management” saves a lot of work.

    If I had to re-configure my devices to my liking every time, I’d be wasting time on repetition, not on actual improvement. I already configured things the way I like them once, so I want to be able to simply copy that over (one common approach is sketched below) instead of rewriting it every time for different systems. It’s the same reason I’ve been reusing my entire /home partition on my desktop for ages: I preserve my whole setup even after trying out multiple distros.

    If someone doesn’t customize their defaults much, or doesn’t mind re-configuring things all the time, I’m sure a different setup on each device would be fine for them… but I prefer working only once and copying it.

    And I didn’t say that bash is the only config I have. Coincidentally, my config does include a config.fish I wrote ages ago (14 years ago, apparently). I just don’t use it because most devices don’t have fish, so it can’t replace POSIX/Bash… as a result it was naturally left very barebones (probably outdated too), and it’s not as well crafted/featureful as the POSIX/bash one, which gets used much more.
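    A minimal sketch of the “work once, copy everywhere” idea, using the common bare-git-repo dotfiles technique (a generic technique, not necessarily the setup described here; paths and the remote URL are placeholders):

        # One-time setup on the first machine:
        git init --bare "$HOME/.dotfiles"
        alias dots='git --git-dir="$HOME/.dotfiles" --work-tree="$HOME"'
        dots config status.showUntrackedFiles no   # ignore the rest of $HOME
        dots add ~/.bashrc ~/.inputrc ~/.config/fish/config.fish
        dots commit -m "shell config"
        dots remote add origin https://example.com/me/dotfiles.git
        dots push -u origin main

        # On each new machine:
        git clone --bare https://example.com/me/dotfiles.git "$HOME/.dotfiles"
        dots checkout   # materializes the tracked files into $HOME
                        # (move aside any pre-existing files if it complains)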


  • Manually downloading the same shell scripts on every machine is just doing what the package manager is supposed to do for you

    If you have a package manager available, and what you need is available there, sure. My Synology NAS, my Knulli, my Cygwin installs on Windows, my Android device… it’s not so easy to get custom shells onto those (does fish even have a Windows port?).

    I rarely have to copy things manually; in many of those environments you can at least git clone, or use existing syncing mechanisms. In the ones that don’t even have that… well, at least copying the config works: I just scp it, which is no big deal since I don’t have to do it that often… I could even script it to make it automatic if it ever became a problem.

    Also, note that I don’t just use things like z straight away… my custom configuration automatically calls z as a fallback when I mistype a directory with cd (or when I intentionally use cd from a far/wrong location just to get there faster)… I have a lot of things customized, so installing the package would only be the first step (see the sketch below).
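    A minimal sketch of how such a fallback can be wired up, assuming rupa/z’s z.sh has been sourced (it defines a _z shell function); the actual configuration described here likely differs:

        # Try a normal cd first; if the target doesn’t resolve, attempt a
        # “frecency” jump with z. Re-run the builtin on total failure so
        # the real error message still shows up.
        cd() {
            builtin cd "$@" 2>/dev/null && return
            if command -v _z >/dev/null 2>&1; then
                _z "$@" 2>/dev/null && return
            fi
            builtin cd "$@"
        }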


  • It’s not only clusters… I have my shell configuration even on my Android phone, which I often connect to over ssh. And also on my Kobo, and on my small portable console running Knulli.

    In my case, my shell configuration is structured into folders where I can add config specific to each location while still sharing the same base (roughly as sketched below).

    Maybe not everything is general, but the things that are general and useful become so ingrained that it gets annoying when you don’t have them. Like specific shortcuts for backwards history search, or even some readline movement shortcuts that apparently aren’t standard everywhere… or jumping to the most ‘frecent’ directory matching a pattern, like z does.

    If you don’t mind those scripts not always working, and you have the time to maintain two separate sets of configuration, initialization scripts, aliases, etc., then that’s fine.
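    A sketch of that kind of layout (directory names are invented for illustration; the actual structure isn’t specified here): ~/.bashrc sources a shared base, then host-specific extras.

        # Shared base config, identical on every machine:
        for f in "$HOME"/.shellrc.d/base/*.sh; do
            [ -r "$f" ] && . "$f"
        done
        # Extras only for this particular host, if any:
        host_dir="$HOME/.shellrc.d/hosts/$(hostname -s 2>/dev/null || hostname)"
        if [ -d "$host_dir" ]; then
            for f in "$host_dir"/*.sh; do
                [ -r "$f" ] && . "$f"
            done
        fi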


  • PowerShell, in concept, is pretty powerful, since it’s integrated with C# and also lets you handle complex data structures as objects, similar to nushell (though it doesn’t “pretty-print” everything the way nushell does, at least by default).

    But in practice, since I don’t use it as much, I never really get used to it and I’m constantly looking up how to do things… I’m too used to POSIX tools, and I often end up bringing over a portable subset of msys2, Cygwin or similar whenever possible, just so I can use grep, sed, sort, uniq, curl, etc. on Windows ^^U …however, for scripts that have to deal with structured data it’s superior, since it has built-in methods for that (contrast below).
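    A small sketch of the contrast (assumes curl, jq and pwsh are installed; the API URL is made up):

        # POSIX-style: everything is text, re-parsed at every pipeline step.
        curl -s https://api.example.com/items | jq -r '.[].name' | sort -u

        # PowerShell-style: the JSON response arrives as an object graph.
        pwsh -Command '(Invoke-RestMethod https://api.example.com/items).name | Sort-Object -Unique'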


  • I prefer getting comfortable with bash, because it’s everywhere and I need it for work anyway (no fancy shells in remote VMs). But you can customize bash a lot to give more colored feedback, or even customize the shortcuts with readline (example below). Another one is pwsh (PowerShell), because it’s there by default on the Windows machines that (sadly) I sometimes have to use as VMs too. But you can also install it on Linux, since it’s now open source.

    But if I wanted to experiment personally, I’d go for xonsh, a Python-based one. That way you get all the tools and power of Python with terminal convenience.
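    As an example of the readline customization mentioned above (these are standard readline functions; whether they match any particular setup is a guess):

        # In ~/.bashrc via the `bind` builtin (or put the quoted parts in
        # ~/.inputrc without `bind`): make Up/Down search history for
        # commands starting with what’s already typed.
        bind '"\e[A": history-search-backward'
        bind '"\e[B": history-search-forward'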


  • I do apply the same standard to gecko. […] However those criticisms are immaterial to the decision this judge had to make.

    Then your “same deal with webkit” statement was equally immaterial.

    its not a contradiction. the difference here is every browser you mentioned as ‘alternatives’ are not well funded dont actively add new functionality in the same way mozilla/google do.

    That argument doesn’t negate the sentence I wrote. I think you used “incorrect” when you meant “correct, but…”.

    However, I don’t think Mozilla is better funded than Apple and the other companies I mentioned behind Webkit.

    And I didn’t directly mention specific Chromium browsers as ‘alternatives’… the alternatives I was talking about were options those browsers could take against Google… I don’t think you understood the point.

    which is completely immaterial when they don’t develop/add new features for the web.

    Ironically, NOT developing/adding features has been the major way the opposition has successfully pushed back against Google’s “standards”. Webkit, being the second most used engine and opposing those features while still being a stable and well maintained base (it’s not like they don’t have a pipeline) with many corporations behind it (not just Apple; even Valve partnered with the WebKitGTK maintainers), is a blockade to Google’s domination just as much as Gecko is.

    The web is already bloated enough… I think we need browsers that are more prudent about developing/adding new features, and that instead focus more on maintenance.


  • it definitely dictates it when you’re talking about things like APIs exposed etc.

    I gave examples of the opposite in an earlier comment. Though it’s unclear what level of APIs you’re referring to here, especially given that you said “same deal with webkit” (which, again, is not under Google). You might as well apply the same deal to Gecko too.

    incorrect. very few browsers will […]

    This is a contradiction. If few browsers will do it, then my statement that it can happen is correct, and I included it as just one option among a list of many other possible choices, including entirely killing their project and contributing to the death of Chromium’s ecosystem, or making a scene about it to further sway public opinion towards alternatives… in fact, another option could be to have their team move over to contribute to one of the existing Webkit alternatives, or to fork one of those with whichever cosmetic changes their userbase likes. The point was that the final say on what those projects will do is a decision those projects can make, not Google.