
Posts 0 · Comments 1390 · Joined 3 yr. ago

Just your normal everyday casual software dev. Nothing to see here.

People can share differing opinions without immediately being on opposing sides. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.

Been trying to lower my social presence on services as of late; I may go inactive randomly as a result.

  • More than partially, actually: 85% of Mozilla's income comes from search engine deals. Then when you look at the revenue report for the year, it's stated that:

    Approximately 85% and 81% of Mozilla’s revenues from customers with contracts were derived from one customer for the years ended December 31, 2023 and 2022, respectively. Receivables from that one customer represented 70% and 64% of the December 31, 2023 and 2022 outstanding receivables, respectively.

    I'm no accountant, and while Google is not specified by name, the signs sure point at Google being 85% of the project's income.

  • That's hilarious. I'm guessing it's the result of an auto-redactor set to redact URLs or something, since the original would be:

    --enable-largefile
    Enable support for large files (http://www.sas.com/standards/large_file/x_open.20Mar96.html)
    if the operating system requires special compiler options to build programs
    which can access large files. This is enabled by default, if the operating
    system provides large file support.
  • Yea, I was about to say: the only difference between this article and the US is that in the US it would be death in the office or at home, not in a hospital bed.

  • They are very nice. They share kernelspace, so I can understand wanting isolation, but the ability to just throw a base Debian container on, assign it a resource pool and resource allocation, and install a service directly to it, while having it isolated from everything, without having to use Docker's ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) or a full VM, is nice.

    And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By straight on the container, I mean on the container itself. My CTs don't run Docker, period, aside from the one that hosts the primary Docker stack, so I don't have that layer to worry about on most CTs.

    As for the memory thing, I was just mentioning that Docker does the same thing containers do if you don't have enough RAM for what's been provisioned. The way I read the original post, specifying 2 gigs of RAM to the point the system exhausts its RAM would cause corruption and a system crash, which is true, but Docker falls prey to the same issue if the system exhausts its RAM. That's all I meant by it. Also, cgroups sound cool; I have to say I haven't messed with them a whole lot. I wish Proxmox had a better resource-share system to designate a specific group as having X amount of max resources, and then have the CTs or VMs draw from those pools.
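    For context, a rough sketch of what per-CT allocation looks like with Proxmox's pct tool (CT ID 101 and all the limit values are made-up examples; run on the Proxmox host):

    ```shell
    # Cap the container's memory, swap, and core count.
    pct set 101 --memory 2048 --swap 512 --cores 2

    # cpulimit caps actual CPU time (2 = at most two cores' worth),
    # while cpuunits only weights scheduling against other guests.
    pct set 101 --cpulimit 2 --cpuunits 512
    ```

    These limits are strictly per-guest, which is exactly the limitation the wish above is about: there's no built-in way to give a group of CTs one collective cap to draw from.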

  • It will happen guys! I swear! 🗞️

  • Yea, I plan to try out the new Proxmox version at some point to give that a shot. Thank you again.

  • I think we might have different definitions of virtualization and containers. I use IBM's and CompTIA's definitions.

    IBM's definition is:

        Virtualization is a technology that enables the creation of virtual environments from a single physical machine, allowing for more efficient use of resources by distributing them across computing environments.

    The IBM page itself acknowledges that containers are virtualization on their Containers vs Virtual Machines page. I define virtualization as an abstraction layer between the hardware and the system being run.

    CompTIA's definition of containers would be valid as well: containers are a virtualization layer that operates at the OS level and isolates the OS from the file system, whereas virtual machines are an abstraction layer between the hardware and the OS.

    I picked up this terminology from my CompTIA Network+ book from 12 years ago, though, which defines virtualization as "a process that adds a layer of abstraction between hardware and the system". That's a dated definition, since OS-level virtualization such as containers wasn't really a thing then.

  • Will be looking into that, I haven't upgraded from 8.4 yet. That sounds like a pretty decent thing to have. Thanks!

  • Your statements are surprising to me, because when I initially set this system up I tested exactly that, having figured similarly.

    My original layout was a full Docker environment under a single VM, which was only running Debian 12 with Docker.

    I remember seeing a good 10 GB difference in RAM usage between offloading the machines off the Docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to a bad Docker container implementation, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because services had better performance and were easier to manage.

  • Sorry, make legal requires the lawyer subroutine, which requires full access to everything to verify you have the money to be able to make such a claim.

  • As much as I would love this, if it ever did become a thing, what you would see wouldn't be companies taking the fine; you would see companies "off-branching" and having income reported by a parent company that is contracted to the offending company. In the case of Alphabet, they would likely just migrate the Android division to a contractee that they fully control and whose contract they never terminate. They no longer "own" Android legally; they contract Android to do their bidding. So when it ends up in court, it ends up as "well, Android did it, not us," much like how Amazon's third-party delivery services worked when regulators tried to enforce unionization laws.

  • Some important clarification, though: that is a hard cap; realistically it will likely be quite a bit less.

  • Considering that, back in 2016, they were estimated to have made 31 billion USD off the Android ecosystem alone over the ten years 2006-2016, I'm sure it's not even a drop in the bucket now.

  • I expect that eventually Windows will face antitrust action again from established nations. We haven't seen it since Internet Explorer, but eventually it will happen again.

  • This is a great way to say it. I feel the same. You put in the same effort regardless of where it comes from.

  • When you say moderated, do you mean a comment, or did you do another post? If it's a comment, is that something your instance does, or did it just fail to send? You piqued my curiosity, because I wasn't aware of instances filtering comments, only posts.

  • I'm not a mod, but to me self-hosting means maintaining your own setup. If it's hosted in the cloud, you are still maintaining the setup; you are just offloading hardware responsibilities to someone else.

    It's not like you are signing up for Google Photos and then saying "yo guys, I have my own photos self-hosted"; you are still putting the pain and suffering into making it work, you just aren't worrying about the hardware or network requirements (outside of security).

    That being said, some people firmly see "self-hosting" as: you buy the parts, install and configure everything, and it's served out of your house.

    It's a sticky situation. IMO that type of ideology also throws any use of a DNS/DDoS-protection host out the window as well, but again, YMMV depending on who you ask.

    I definitely think that if you are installing -> configuring -> maintaining -> using, you meet the definition of self-hosting.

    Edit: That being said, looking at the log, your deleted post was the one about your current external host provider dropping you due to heavy load (they were eco-friendly), right? I can kind of see why they felt this didn't fit the community. But I see both sides of the argument.

  • Are you saying that running Docker in a container setup (which at this point would be two layers deep) uses fewer resources than ten single-layer-deep containers?

    I can agree with the statement that a single VM running Docker with ten containers uses less than ten CTs each with Docker installed running their own containers (but that's not what I do, or what I am asking about).

    I currently do use one CT that has Docker installed for all my Docker images (which I wouldn't do if I had the choice, but some apps require Docker), but this removes most of the benefits of using Proxmox in the first place.

    One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without actually needing to enter the machine (for example, if I'm screwing with a server, I can just snapshot the current setup and then roll back if it isn't good). Throwing everything into a VM with Docker bypasses that while adding overhead to the system: I would need to back up the compose file (or however you are composing it) and the container, and then make my changes. My current system is one click to make my changes and, if bad, one click to revert.

    For the resource explanation: installing Docker into a VM on Proxmox and then running every container in it does waste resources. You pay for the resources Docker requires to function (currently 4 gigs of RAM per their website, though in testing I've seen as little as 1 gig work fine, plus CPU, plus whatever storage it takes up, which is about half a gig or so) inside a VM (which also uses more processing and RAM than CTs do, since it no longer shares resources). Compared to ten CTs fine-tuned to their specific apps, you will get better performance from the CTs than from one VM running everything, while keeping your ability to snapshot and dropping the extra layer and the ephemeral design Docker has (which can be a good and a bad thing, but when troubleshooting I lean towards good).

    Edit: clarification and general visibility so it wasn't bunched together.
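    For reference, the "one click to revert" flow above maps to the pct snapshot commands on the Proxmox host (CT ID 101 and the snapshot name are example values):

    ```shell
    # Snapshot CT 101 before making risky changes.
    pct snapshot 101 pre-change

    # ...change things inside the container; if it goes badly:
    pct rollback 101 pre-change

    # Once the new state is confirmed good, drop the snapshot.
    pct delsnapshot 101 pre-change
    ```

    The webUI buttons do the same thing, just without the typing.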

  • I don't like how everything is Docker-containerized.

    I already run Proxmox, which containerizes things by design with its CTs and VMs.

    Running a Docker image on top of that is just wasting system resources (while also complicating the troubleshooting process). It doesn't make sense to run a CT or VM for a container, just to put Docker on it and run another container via that. It also completely bypasses everything Proxmox provides for snapshotting and backup, because Proxmox's system operates on the entire container, and if all services are running in the same container, all services get snapshotted together.

    My current system allows me to have per-service snapshots (and backups), all within the Proxmox webUI, all containerized, and all restricted to their own resources. Docker is just not needed at this point.

    A Docker system just adds extra overhead that isn't needed. So yes, just give me a standard installer.

  • Fair note: a lot of these comments assume you are using the os-prober feature in GRUB to detect the Windows install. While this is usually the case, if you have ever supplied a manual entry for Windows in /etc/grub.d to specify where your Windows install is (I was forced to do this because os-prober refused to see my Windows system after I recreated my EFI partition), you will need to delete that entry again for the Windows option to disappear from GRUB.

    It would still boot your Linux Mint, but you would have a Windows entry in the boot menu that didn't go anywhere.
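    For anyone in that situation, a manual entry like the one described usually lives in /etc/grub.d/40_custom and looks something like the sketch below (the UUID is a placeholder); deleting the menuentry block and re-running update-grub removes the dead Windows option:

    ```
    menuentry "Windows Boot Manager" {
        insmod part_gpt
        insmod fat
        insmod chain
        # Placeholder: replace XXXX-XXXX with your EFI partition's UUID (see blkid).
        search --fs-uuid --set=root XXXX-XXXX
        chainloader /EFI/Microsoft/Boot/bootmgfw.efi
    }
    ```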