
invalidusernamelol [he/him]

@invalidusernamelol@hexbear.net

Posts: 24
Comments: 834
Joined: 5 yr. ago

  • I feel like all of this has happened before...

  • I feel lucky to just be able to do this

  • Providing minimal support is a huge aspect. It's a small shop and large orders like this aren't uncommon. I'll definitely look into optimizing the eMMC, as that's a huge bottleneck. The primary goal is always to eke out as much life as possible from devices that were slated to be landfilled and provide minimal working solutions for free or as close to free as possible.

    These Chromebooks are veritable e-waste, and making it so we can get some last-mile usage out of them, while having a system that's moderately fault-tolerant (btrfs is good for the unreliability of the eMMC), is key. Plus the A/B updates mirror normal Chromebook functionality.

    The atomic style with Flatpak also makes it really hard for an inexperienced end user to fully bork their system, as the base image and root are read-only. Having all the user files in a separate volume also means it's trivial to migrate them to a new machine or wipe an old one. This is essentially an experiment at this point, but we've had a ton of very positive feedback from people about Linux. All the elderly people find it easier to use since they aren't constantly being pushed notifications and spyware. Plus the atomic updates mean they don't have to worry about manually running apt/dnf upgrade to get updates; the whole process is just handled automatically in the background.
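
    For a rough idea of what that background update amounts to, something like this (a minimal sketch, assuming an rpm-ostree based image; the script and its policy are illustrative, not our exact setup):

    ```python
    #!/usr/bin/env python3
    # Sketch of a nightly A/B update job for an rpm-ostree based image.
    # The staged deployment only becomes active on the next boot, so a
    # failed update never touches the running system.
    import subprocess
    import sys

    def nightly_update() -> None:
        result = subprocess.run(
            ["rpm-ostree", "upgrade"],  # stages the new deployment (the "B" slot)
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            print("update failed, keeping current deployment:",
                  result.stderr, file=sys.stderr)
            return
        print("update staged; it will activate on the next reboot")

    if __name__ == "__main__":
        nightly_update()
    ```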

  • Yeah, it's really easy to accomplish with some basic TensorFlow models. The only real upside of large hardware is being able to have a plethora of models.

    Plus, looking at the Chinese developments, they're focusing on model distillation, which is orders of magnitude more efficient than generalized LLMs.
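
    To be clear about what distillation means here: a small student model is trained to match a big teacher's softened output distribution instead of the raw labels. A minimal sketch of the loss (the temperature value and tensor names are just illustrative):

    ```python
    # Minimal sketch of a knowledge-distillation loss in TensorFlow.
    # The temperature and the tensor names are illustrative only.
    import tensorflow as tf

    def distillation_loss(teacher_logits, student_logits, temperature=4.0):
        """Cross-entropy of the student against the teacher's softened
        distribution (equivalent to KL up to a constant), scaled by T^2."""
        soft_teacher = tf.nn.softmax(teacher_logits / temperature)
        log_soft_student = tf.nn.log_softmax(student_logits / temperature)
        per_example = -tf.reduce_sum(soft_teacher * log_soft_student, axis=-1)
        return tf.reduce_mean(per_example) * temperature ** 2
    ```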

  • Straight up the bad guys in Raiders of the Lost Ark lol

    Also love that his book list included One Piece for some fucking reason. I can't believe people paid $200 to listen to him talk about manga in a religious context.

  • Just because the biblical antichrist is total hogwash, doesn't mean some delusional fascist billionaire can't become their version of the antichrist. He's basically just made it clear that he wants a diffuse genocide while he locks himself in his bunker.

    He's not actually the religious antichrist, but he's gonna do everything he can to replicate the myth, because he's absolutely nuts and has control over the largest military-industrial state in history.

  • When reached for comment, Thiel’s spokesperson, Jeremiah Hall, did not dispute the veracity of the material given to the Guardian. Hall did correct a piece of the Guardian’s transcription and clarified an argument made by Thiel about Jews and the antichrist.

  • Yep, a Markov chain solver that's the size of a city is massively useful for a lot of specific planning cases, especially once you're looking at large quantities of data. The actual use case for the US is kinda moot though, since the markets aren't structured. They're controlled entirely by the whims of a couple people, which isn't predictable by a Markov chain.

    China will see a lot of utilization and benefit though. Gosplan used hand-computed Markov chains, but an already centrally planned economy with tons of historical data and a predictable, mostly bubble-free history will actually be able to produce useful planning outputs.

    The US, I think, knows this, and isn't planning on using this machine to stabilize the economy, but instead as a weapon of war. Domestically, they'll use it to pre-crime and mass-arrest people they don't like. Abroad, they'll use it as an inverse planner to give them targets that will specifically destabilize the economy.
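
    As a toy illustration of what "solving" a Markov chain means for planning: find the steady-state distribution of a transition matrix (the sectors and numbers below are completely made up):

    ```python
    # Toy example: steady-state of a made-up 3-sector transition matrix.
    # All numbers are invented purely for illustration.
    import numpy as np

    # P[i][j] = probability that activity flows from sector i to sector j
    P = np.array([
        [0.7, 0.2, 0.1],   # agriculture
        [0.1, 0.8, 0.1],   # industry
        [0.2, 0.3, 0.5],   # services
    ])

    # The steady state pi satisfies pi @ P = pi, i.e. it is the left
    # eigenvector of P for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()
    print(pi)  # long-run share of activity in each sector
    ```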

  • Totally agree with you there. The atomic stuff is new to me as well, but after seeing the weird compatibility and stability issues that these things had with a regular install, I figured it was worth the plunge to have the A/B update system. Also means installing a bunch of stuff is harder for the end user unless they know the 3 commands you need to bypass the ostree lol.

    I wish people would just naturally have a better time understanding technology, but the reality is that 90% of the people we distribute computers to barely know how to use a keyboard and many don't speak English. These machines are single-purpose devices, and any additional security we add will just make them toss it in the trash.

    We do have education sessions where we teach people how to use a browser, how to open Gmail, how to identify a scam, etc. I've been wanting to expand those classes to have some basic Intro to Linux, intro to Python, intro to Bash, etc. type classes that teach people the bare minimum so they can start learning. But that's only gonna work with the kids. Any older person that gets one of these devices needs to have it work as frictionless as possible with a minimal amount of interaction.

  • These will be going to single users as far as I know. They're priced to the org at ~$30 and that money comes from their grant.

    These are some of the lowest-end Chromebooks I've worked with tbh. The eMMC is so incredibly slow that the network speeds are bottlenecked by it lol. The A/B updates that ostree does take around 15 minutes to build and use ~80% of the CPU in the background (luckily that's only done once a day at midnight, or if they specifically request updates).

    LUKS encryption would be easy to enable, but someone would inevitably forget their password and we'd have to break the news that the resume they were working on is lost forever. I'll probably include instructions on how to encrypt specific folders so they can have secure locations that they set up.

    If we set up LUKS with a shop password and share it, that also just kinda defeats the purpose. Now they're all using the same password. Could use the device code as a salt, but that's still easy to guess and hard to remember.

    Running an rm -rf on the /var partition should be moderately quick, and since it's a btrfs filesystem, we could also just totally overwrite the logical volume and reassign it. On non-spinning storage, overwriting the block headers is more than enough to scuttle access to the data.
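
    Roughly what that wipe would look like as a script (just a sketch; the mountpoint and subvolume name are placeholders, and it would run from a recovery environment rather than the live system):

    ```python
    #!/usr/bin/env python3
    # Sketch of the "nuke the user volume" reset. Paths are placeholders.
    import os
    import subprocess

    TOPLEVEL = "/mnt/btrfs_root"  # hypothetical mount of the btrfs top-level volume
    USER_SUBVOL = "var"           # hypothetical name of the user-data subvolume

    def factory_reset() -> None:
        path = os.path.join(TOPLEVEL, USER_SUBVOL)
        # Dropping the subvolume discards all of its extents at once,
        # which is much faster than rm -rf walking every file.
        subprocess.run(["btrfs", "subvolume", "delete", path], check=True)
        subprocess.run(["btrfs", "subvolume", "create", path], check=True)

    if __name__ == "__main__":
        factory_reset()
    ```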

  • Already messed with Kickstart; it's super useful to automate anything you can set up in the installer. Packages are a bit more difficult, since you can only install from their base package list as far as I can tell. Plus, using atomic means I need to use a flatpak, which is installed in userland after install.

    Thanks, everyone, for all the ideas. mkosi seems like a really neat utility that I'll probably end up using.

  • Probably gonna end up just trying to simplify the installation process, and maybe have an install script that removes the Firefox layers, then installs the required stuff from Flathub. Can also overwrite the default configs for browsers and such with a setup script that pulls them from a repo.
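
    Roughly the shape of that setup script (a sketch only; the app list, package name, and config repo URL are placeholders, not the real ones):

    ```python
    #!/usr/bin/env python3
    # Sketch of a post-install setup script: swap the base-image Firefox
    # for flatpaks and stage default configs. The app IDs are real Flathub
    # IDs, but the config repo URL and package name are placeholders.
    import subprocess

    APPS = ["org.mozilla.firefox", "org.libreoffice.LibreOffice"]
    CONFIG_REPO = "https://example.org/shop/default-configs.git"  # placeholder

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Remove the base-image Firefox (the package name here is a guess).
    run(["rpm-ostree", "override", "remove", "firefox"])

    run(["flatpak", "remote-add", "--if-not-exists", "flathub",
         "https://flathub.org/repo/flathub.flatpakrepo"])
    for app in APPS:
        run(["flatpak", "install", "-y", "flathub", app])

    # Default browser/desktop configs get cloned here and later copied
    # into /etc/skel so new users pick them up.
    run(["git", "clone", "--depth=1", CONFIG_REPO, "/tmp/default-configs"])
    ```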

  • The reason for using atomic is so updates are automatic, and a device can be wiped by nuking /var/, which is essentially the same as factory resetting a Chromebook.

    I did look into Kickstart, and I might end up using it, but it seems more designed for automating the installer process than for post-install system config. I'm probably gonna do some more research into saving and exporting ostree layers so I can manage the package layers and just manually copy over the /var/ from an already configured system.

    At the end of the day there's only so much you can do with 16GB of eMMC storage, and most other distros we tried on these machines were either no longer maintained or incredibly unstable. The layer cache can be limited and old versions can be pruned. Since these are meant to be incredibly minimal, the base system is only ~2GB after install and config, which fits in the 2GB of RAM. Definitely not winning any speed races, but for 3-4 tabs and minor workloads it's usable.

    I'll have to investigate the encryption aspect; I think easily nukable might be enough, especially if there's a performance cost to decrypting the drive on startup, and especially since most of the users of these devices are gonna be using G-Drive for file storage anyways.
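
    Copying over a configured /var/ and keeping the layer cache trimmed would look something like this (sketch only; the hostname is a placeholder, and the exact cleanup flags depend on whether plain ostree or rpm-ostree is underneath):

    ```python
    #!/usr/bin/env python3
    # Sketch: clone /var/ from an already-configured machine, then trim old
    # deployments to save eMMC space. The hostname is a placeholder.
    import subprocess

    GOLDEN = "root@golden-chromebook:/var/"  # hypothetical configured machine

    # Mirror the configured /var onto this machine, preserving ACLs/xattrs.
    subprocess.run(["rsync", "-aAXH", "--delete", GOLDEN, "/var/"], check=True)

    # Drop the rollback deployment and cached base data to free space.
    subprocess.run(["rpm-ostree", "cleanup", "--rollback", "--base"], check=True)
    ```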

  • There are goofy design patterns and fun comments all over lol. The whole project is super cool, but it's absolutely someone's passion project.

    I say this as someone who deeply respects what they have been able to accomplish. Namely making a Python application that can run on most versions of Python 2.7+ and Python 3.x.

    If you look at a lot of the backend stuff, they use a huge number of single- and double-character variables that live for the duration of an object, and they primarily use classes as namespaces. I'm just more used to seeing Python code that builds out clear abstractions and interfaces with the existing data model, or implements interaction with that data model, so you can use each component separately. Here's an example of the init for one of their monolithic classes (the file is ~5k lines in total):

    ```python
    class SvcHub(object):
        """
        Hosts all services which cannot be parallelized due to reliance on monolithic resources.
        Creates a Broker which does most of the heavy stuff; hosted services can use this to perform work:
            hub.broker.<say|ask>(destination, args_list).
    
        Either BrokerThr (plain threads) or BrokerMP (multiprocessing) is used depending on configuration.
        Nothing is returned synchronously; if you want any value returned from the call,
        put() can return a queue (if want_reply=True) which has a blocking get() with the response.
        """
    
        def __init__(
            self,
            args: argparse.Namespace,
            dargs: argparse.Namespace,
            argv: list[str],
            printed: str,
        ) -> None:
            self.args = args
            self.dargs = dargs
            self.argv = argv
            self.E: EnvParams = args.E
            self.no_ansi = args.no_ansi
            self.tz = UTC if args.log_utc else None
            self.logf: Optional[typing.TextIO] = None
            self.logf_base_fn = ""
            self.is_dut = False  # running in unittest; always False
            self.stop_req = False
            self.stopping = False
            self.stopped = False
            self.reload_req = False
            self.reload_mutex = threading.Lock()
            self.stop_cond = threading.Condition()
            self.nsigs = 3
            self.retcode = 0
            self.httpsrv_up = 0
            self.qr_tsz = None
    
            self.log_mutex = threading.Lock()
            self.cday = 0
            self.cmon = 0
            self.tstack = 0.0
    
            self.iphash = HMaccas(os.path.join(self.E.cfg, "iphash"), 8)
    
            if args.sss or args.s >= 3:
                args.ss = True
                args.no_dav = True
                args.no_logues = True
                args.no_readme = True
                args.lo = args.lo or "cpp-%Y-%m%d-%H%M%S.txt.xz"
                args.ls = args.ls or "**,*,ln,p,r"
    
            if args.ss or args.s >= 2:
                args.s = True
                args.unpost = 0
                args.no_del = True
                args.no_mv = True
                args.reflink = True
                args.dav_auth = True
                args.vague_403 = True
                args.nih = True
    
            if args.s:
                args.dotpart = True
                args.no_thumb = True
                args.no_mtag_ff = True
                args.no_robots = True
                args.force_js = True
    
            if not self._process_config():
                raise Exception(BAD_CFG)
    
            # for non-http clients (ftp, tftp)
            self.bans: dict[str, int] = {}
            self.gpwd = Garda(self.args.ban_pw)
            self.gpwc = Garda(self.args.ban_pwc)
            self.g404 = Garda(self.args.ban_404)
            self.g403 = Garda(self.args.ban_403)
            self.g422 = Garda(self.args.ban_422, False)
            self.gmal = Garda(self.args.ban_422)
            self.gurl = Garda(self.args.ban_url)
    
            self.log_div = 10 ** (6 - args.log_tdec)
            self.log_efmt = "%02d:%02d:%02d.%0{}d".format(args.log_tdec)
            self.log_dfmt = "%04d-%04d-%06d.%0{}d".format(args.log_tdec)
            self.log = self._log_disabled if args.q else self._log_enabled
            if args.lo:
                self._setup_logfile(printed)
    
            lg = logging.getLogger()
            lh = HLog(self.log)
            lg.handlers = [lh]
            lg.setLevel(logging.DEBUG)
    
            self._check_env()
    
            if args.stackmon:
                start_stackmon(args.stackmon, 0)
    
            if args.log_thrs:
                start_log_thrs(self.log, args.log_thrs, 0)
    
            if not args.use_fpool and args.j != 1:
                args.no_fpool = True
                t = "multithreading enabled with -j {}, so disabling fpool -- this can reduce upload performance on some filesystems, and make some antivirus-softwares "
                c = 0
                if ANYWIN:
                    t += "(especially Microsoft Defender) stress your CPU and HDD severely during big uploads"
                    c = 3
                else:
                    t += "consume more resources (CPU/HDD) than normal"
                self.log("root", t.format(args.j), c)
    
            if not args.no_fpool and args.j != 1:
                t = "WARNING: ignoring --use-fpool because multithreading (-j{}) is enabled"
                self.log("root", t.format(args.j), c=3)
                args.no_fpool = True
    
            for name, arg in (
                ("iobuf", "iobuf"),
                ("s-rd-sz", "s_rd_sz"),
                ("s-wr-sz", "s_wr_sz"),
            ):
                zi = getattr(args, arg)
                if zi < 32768:
                    t = "WARNING: expect very poor performance because you specified a very low value (%d) for --%s"
                    self.log("root", t % (zi, name), 3)
                    zi = 2
                zi2 = 2 ** (zi - 1).bit_length()
                if zi != zi2:
                    zi3 = 2 ** ((zi - 1).bit_length() - 1)
                    t = "WARNING: expect poor performance because --%s is not a power-of-two; consider using %d or %d instead of %d"
                    self.log("root", t % (name, zi2, zi3, zi), 3)
    
            if args.s_rd_sz > args.iobuf:
                t = "WARNING: --s-rd-sz (%d) is larger than --iobuf (%d); this may lead to reduced performance"
                self.log("root", t % (args.s_rd_sz, args.iobuf), 3)
    
            zs = ""
            if args.th_ram_max < 0.22:
                zs = "generate thumbnails"
            elif args.th_ram_max < 1:
                zs = "generate audio waveforms or spectrograms"
            if zs:
                t = "WARNING: --th-ram-max is very small (%.2f GiB); will not be able to %s"
                self.log("root", t % (args.th_ram_max, zs), 3)
    
            if args.chpw and args.have_idp_hdrs and "pw" not in args.auth_ord.split(","):
                t = "ERROR: user-changeable passwords is not compatible with your current configuration. Choose one of these options to fix it:\n option1: disable --chpw\n option2: remove all use of IdP features; --idp-*\n option3: change --auth-ord to something like pw,idp,ipu"
                self.log("root", t, 1)
                raise Exception(t)
    
            noch = set()
            for zs in args.chpw_no or []:
                zsl = [x.strip() for x in zs.split(",")]
                noch.update([x for x in zsl if x])
            args.chpw_no = noch
    
            if args.ipu:
                iu, nm = load_ipu(self.log, args.ipu, True)
                setattr(args, "ipu_iu", iu)
                setattr(args, "ipu_nm", nm)
    
            if args.ipr:
                ipr = load_ipr(self.log, args.ipr, True)
                setattr(args, "ipr_u", ipr)
    
            for zs in "ah_salt fk_salt dk_salt".split():
                if getattr(args, "show_%s" % (zs,)):
                    self.log("root", "effective %s is %s" % (zs, getattr(args, zs)))
    
            if args.ah_cli or args.ah_gen:
                args.idp_store = 0
                args.no_ses = True
                args.shr = ""
    
            if args.idp_store and args.have_idp_hdrs:
                self.setup_db("idp")
    
            if not self.args.no_ses:
                self.setup_db("ses")
    
            args.shr1 = ""
            if args.shr:
                self.setup_share_db()
    
            bri = "zy"[args.theme % 2 :][:1]
            ch = "abcdefghijklmnopqrstuvwx"[int(args.theme / 2)]
            args.theme = "{0}{1} {0} {1}".format(ch, bri)
    
            if args.nid:
                args.du_who = "no"
            args.du_iwho = n_du_who(args.du_who)
    
            if args.ver and args.ver_who == "no":
                args.ver_who = "all"
            args.ver_iwho = n_ver_who(args.ver_who)
    
            if args.nih:
                args.vname = ""
                args.doctitle = args.doctitle.replace(" @ --name", "")
            else:
                args.vname = args.name
            args.doctitle = args.doctitle.replace("--name", args.vname)
            args.bname = args.bname.replace("--name", args.vname) or args.vname
    
            if args.log_fk:
                args.log_fk = re.compile(args.log_fk)
    
            # initiate all services to manage
            self.asrv = AuthSrv(self.args, self.log, dargs=self.dargs)
            ramdisk_chk(self.asrv)
    
            if args.cgen:
                self.asrv.cgen()
    
            if args.exit == "cfg":
                sys.exit(0)
    
            if args.ls:
                self.asrv.dbg_ls()
    
            if not ANYWIN:
                self._setlimits()
    
            self.log("root", "max clients: {}".format(self.args.nc))
    
            self.tcpsrv = TcpSrv(self)
    
            if not self.tcpsrv.srv and self.args.ign_ebind_all:
                self.args.no_fastboot = True
    
            self.up2k = Up2k(self)
    
            self._feature_test()
    
            decs = {k.strip(): 1 for k in self.args.th_dec.split(",")}
            if not HAVE_VIPS:
                decs.pop("vips", None)
            if not HAVE_PIL:
                decs.pop("pil", None)
            if not HAVE_RAW:
                decs.pop("raw", None)
            if not HAVE_FFMPEG or not HAVE_FFPROBE:
                decs.pop("ff", None)
    
            # compressed formats; "s3z=s3m.zip, s3gz=s3m.gz, ..."
            zlss = [x.strip().lower().split("=", 1) for x in args.au_unpk.split(",")]
            args.au_unpk = {x[0]: x[1] for x in zlss}
    
            self.args.th_dec = list(decs.keys())
            self.thumbsrv = None
            want_ff = False
            if not args.no_thumb:
                t = ", ".join(self.args.th_dec) or "(None available)"
                self.log("thumb", "decoder preference: {}".format(t))
    ... # I hit the Hexbear character limit
    ```
      
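
    For contrast, this is the kind of shape I'm more used to seeing, with config and behavior split into small pieces you can use independently (totally made-up names, just to illustrate the point, not a rewrite of their code):

    ```python
    from dataclasses import dataclass
    import logging

    @dataclass
    class LogConfig:
        """Small, self-contained config object instead of a 300-line __init__."""
        quiet: bool = False
        logfile: str = ""

    class LogService:
        """One responsibility: set up logging from a LogConfig."""
        def __init__(self, cfg: LogConfig) -> None:
            self.cfg = cfg
            self.logger = logging.getLogger("app")

        def start(self) -> None:
            handler = (logging.FileHandler(self.cfg.logfile)
                       if self.cfg.logfile else logging.StreamHandler())
            self.logger.addHandler(handler)
            self.logger.setLevel(logging.WARNING if self.cfg.quiet else logging.DEBUG)
    ```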
  • Absolutely hiding evidence. Their withdrawal needed to be supervised.

  • Copyparty is cool, but it's still a very green app. If you want to use that, I'd run it as a container within your TrueNAS/FreeNAS install. That way you get the foundation of the tested software with a fun UI added on top.

    Also, having read some of the copyparty source, it was written by a madman.

  • Always feel dirty using USB for stuff that isn't basic I/O lol. Plus if I'm building my own I'll probably wanna go 10 gig because why not

  • Fuck yeah

  • Geez...

    It ends with Daffy speaking fake Chinese with the subtitles "Help I'm being held prisoner in a Chinese laundry" because he didn't pay his bill