Posts: 0 · Comments: 983 · Joined: 3 yr. ago

  • I'd take each of your metrics and multiply it by 10, then multiply it by another 10 for everything you haven't thought about, then probably double it for redundancy.

    Because "fire temp" is meaningless in isolation. You need to know the temperature is evenly distributed (so multiple temperature probes), you need to know the temperature inside and the temperature outside (so you know your furnace isn't literally melting), you need to know it's not building pressure, and you need to know it's burning as cleanly as possible (gas inflow, gas outflow, clarity of gas in, clarity of gas out, temperature of gas in, temperature of gas out, status of various gas delivery systems (fans (motor current/voltage/rpm/temp), filters, louvres, valves, pressures, flow rates)). You need to know ash is being removed correctly (that ash grates, shakers, whatever are working correctly, that ash is cooling correctly, that it's being transported away, etc.).

    The gas out will likely go through some heat recovery stages, so you need to know gas flow and water flow through those. Then it will likely be scrubbed of harmful chemicals, so you need to know pressures, flow rates, etc. for all of that.

    And every motor will have voltage/current/rpm/temperature measurements. Every valve will have a commanded position and an actual position. Every pipe will have pressure and temperature sensors.

    The multiple fire temperature probes would then be condensed into a pertinent value and a "good" or "fault" condition for the front panel display. The multiple air inlet readings would be condensed into pertinent information and a good/fault condition. Pipes of a process will have temperature/pressure good/fault conditions (maybe a low/good/over?).

    And in the old days, before microprocessors and serial communications, it would have been a local-to-sensors control/indicator panel with every reading, then a feed back to the control room where it would be "summarised". So hundreds of signals from each local control/indicator panel.

    Imagine if the control room commanded a certain condition, but it wasn't being achieved because a valve was stuck or because some local control overrode it. How would the control room operators know where to start? Just guess? When you see a dangerous condition building, you do what is needed to get it under control, and if it doesn't happen because... You need to know why.
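    The "condensed into a pertinent value and a good/fault condition" step can be sketched in a few lines. This is a minimal illustration, not any real furnace controller — the thresholds and the mean-plus-spread logic are assumptions:

    ```python
    def condense_probes(readings, low=800.0, high=1100.0, max_spread=50.0):
        """Condense several fire-temperature probe readings into one
        pertinent value and a good/fault condition for the front panel.

        Thresholds are made-up illustrative numbers, not real limits."""
        value = sum(readings) / len(readings)   # pertinent value: mean temperature
        spread = max(readings) - min(readings)  # detects uneven distribution
        in_band = low <= value <= high          # overall temperature in safe band
        status = "good" if in_band and spread <= max_spread else "fault"
        return value, status
    ```

    A real system would do the same condensing per subsystem (air inlets, pipes, ash handling), feed only the summaries to the control room display, and keep every raw reading available for diagnosis.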

  • I love CLI and config files, so I can write some scripts to automate it all. It documents itself. Whenever I have to do GUI stuff I always forget a step or do things out of order or something.

  • Spoiler alert: it's just yaml

  • The bubble is propped up by governments. They don't need "as good as an employee but faster". They just need "faster", so they can process gathered data on an enormous scale to filter out the "potentially good" from the "not worth looking at". Then they use employees to actually assess the "potentially good" data.

    Seriously, intelligence agencies don't need "good AI", they just need "good enough AI". And they have that already.
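    That pipeline — cheap automated triage in front of expensive human analysts — is essentially a scoring threshold. A toy sketch, where the scorer is a placeholder function rather than any real model:

    ```python
    def triage(items, score, threshold=0.5):
        """Split gathered data into 'potentially good' (handed to human
        analysts) and 'not worth looking at', using a good-enough scorer."""
        potentially_good, discarded = [], []
        for item in items:
            (potentially_good if score(item) >= threshold else discarded).append(item)
        return potentially_good, discarded
    ```

    The point of the comment holds here: the scorer only has to be cheap and roughly right at scale, because humans re-check everything in the "potentially good" pile anyway.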

  • What the fuck is a french fry? You mean Freedom Fries?

  • No idea. Haven't started digging into it yet

  • 27:1 kd would be accused of cheating in video games.

    Because this stat isn't really a stat and isn't hyped or published, I'm going by DDG AI assist, which suggests the US k:d in Iraq is 44:1:

    The U.S. military suffered approximately 4,492 deaths and around 32,292 wounded during the Iraq War, while estimates suggest around 200,000 Iraqi civilians were killed. This results in a rough kill-to-death ratio of about 44:1, favoring U.S. forces, though this does not account for all combatants and the complexities of the conflict.

    Considering that Ukraine isn't killing civilians... classic AI bullshit uselessness. If I killed 27 enemy aggressors while defending my country, I would die happy. I don't ever want to be in that position, and I don't think anyone should ever be in that position. But that is an achievement, under the circumstances, to be proud of.

  • TIDAL's continued awesomeness suggests suitable alternatives. Spotify pays Joe Rogan how much? And pays artists how little? TIDAL does music.

    I changed a few years ago, and all I miss are the integrations. I'm lucky that I have decent speakers & DAC on my desktop, and decent IEMs, so I can listen to music where I want. But I can't buy a "TIDAL speaker" in the way I could buy a "Spotify speaker".

    But I'm arrogantly confident enough to waste some money solving this with Home Assistant, some RPis/NUCs, and some speakers. I feel I don't need (and actually don't want) a vendor-locked-in "just works" solution, and I'm happy rolling my own.

  • Yeh, either proxy editing (where it's low res versions until export).

    Or you could try a more suitable intermediary codec. I presume you are editing H.264 or something else with "temporal compression". Essentially there are a few full frames every second, and the other frames are stored as changes. That massively reduces file size, but makes random access expensive as hell.

    Something like ProRes or DNxHD... I'm sure there are more. They store every frame, so decoding doesn't require loading the last full frame and applying the changes to the current frame. You will end up with massive files (compared to H.264 etc.), but they should run a lot better for editing. And they are visually lossless, so you convert your source footage once and then just work away.

    Really high-res projects will combine both of these: proxy editing with intermediary codecs.
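    The random-access cost is easy to model: with temporal compression you have to decode forward from the previous full frame (keyframe), while an all-intra codec decodes exactly one frame. A toy sketch, where the GOP (keyframe interval) size is an assumed parameter:

    ```python
    def frames_to_decode(target, gop_size):
        """Frames that must be decoded to display frame `target` in a
        temporally compressed stream with a keyframe every `gop_size`
        frames. An all-intra codec (ProRes, DNxHD) always needs just 1."""
        last_keyframe = (target // gop_size) * gop_size  # most recent full frame
        return target - last_keyframe + 1                # keyframe..target inclusive
    ```

    Scrubbing to a frame just before the next keyframe is the worst case — the decoder replays almost a whole GOP — which is exactly why timeline scrubbing feels sluggish on H.264 sources.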

  • I'd rather they u-turned shitty ideas than waffle-stomp them through. How the fuck the OSA seemed to just drift through is astounding.

  • They are only open-sourcing the spaghetti that sticks to the wall. Carefully curated and redacted/obscured/replaced where appropriate. Probably removing hard-coded rules like "always show Musk tweets" and stuff like that.

  • What I'd recommend is setting up a few testing systems with 2-3GB of swap or more, and monitoring what happens over the course of a week or so under varying (memory) load conditions. As long as you haven't encountered severe memory starvation during that week – in which case the test will not have been very useful – you will probably end up with some number of MB of swap occupied.

    And

    [... On Linux Kernel > 4.0] having a swap size of a few GB keeps your options open on modern kernels.

    And finally

    For laptop/desktop users who want to hibernate to swap, this also needs to be taken into account – in this case your swap file should be at least your physical RAM size.
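    Putting the quoted advice together as a rule of thumb — the 4GB default below is my reading of "a few GB", not a number from the quoted text:

    ```python
    def recommended_swap_gb(ram_gb, hibernate=False, few_gb=4):
        """Rule-of-thumb swap sizing: 'a few GB keeps your options open'
        on modern kernels, bumped to at least physical RAM size when
        hibernating to swap. few_gb is an assumption, not a kernel rule."""
        return max(few_gb, ram_gb) if hibernate else few_gb
    ```

    So a 16GB desktop that never hibernates gets a few GB, while the same machine with hibernation enabled needs the full 16GB (or more) of swap.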

  • I've been using EndeavourOS for 12 months now. Very light Steam gaming. Office stuff is basically web browsers (occasionally I have to swap to the Windows boot for silly Excel spreadsheets that don't work online). Programming is delightful. It's been solid, and the installer was great. The major issues have been from dual-booting Windows (disable fast boot!) and from not updating frequently enough (keyring issues, though EndeavourOS has plenty of "newb needs to update" helpers).

    I love it. It's mine, I own that laptop, and EndeavourOS works for me. I feel so much more in control than I ever did on Windows. I do have some basic experience running Debian servers (VMs for a single service, or Docker stuff), and I do programming.

  • I did this on my new Pixel 8 Pro. I loved it. It was so easy, it worked, and I was in control of my device.

    Contactless payment didn't work, which is a deal-breaker for me.

    I looked at some fintech solutions, and I even bought a Pixel Watch (which didn't work because I have a Workspace account). None of them let me work around the issue. Contactless just wouldn't work.

    I had to go back to stock Android. I'm constantly checking in on their attestation/verification/whatever status that would allow them to offer contactless payment (currently offered by Android/Apple/banks, but no open source software). I want GrapheneOS and contactless so badly!

  • Yeh, Ventoy takes an extra step (but Ventoy is itself an extra step): find the ISO from a legit source instead of using the media creation tool, install software to edit the ISO, add autounattend.xml to the ISO, plop the ISO on the Ventoy drive.

    Anyone playing around with or working with Linux/Windows: check out Ventoy. I think they've solved their issues with binary blobs, and it is so useful. Create a Ventoy USB drive, then drag any and all OS ISOs onto the stick. Boot from the USB and choose which ISO to actually boot. Want to switch flavours of live Linux (or try another installer)? Boot from the USB and choose a different ISO. Absolutely fantastic software.

  • Yeh, the 16/32 in the screenshot and the fact that 2 sticks are dead suggest they have 4x 8GB sticks, and lend credence to the idea that one channel is being messed with. They said they tested the RAM on multiple systems, but they might have just thrown both "dead" sticks in there at the same time, leading to a similar failure mode as they are both on the same channel.

    I bet one stick is dead, and they could probably get away with 24GB of RAM in a 3/2 channel distribution.

  • Maybe one is causing the other to fail? Could try the sticks individually.

    It is strange that 2 sticks fail at the same time. It smells like a symptom instead of the root issue.

  • FCKGW?

  • It's not that difficult, is it?

    I mean, it's not like running a program on an already-installed Windows, or using the Windows 11 installer to install from within Windows. Otherwise, it's the basic steps for installing any OS, except for creating the autounattend.xml file.

    Use the media creation tool to create install media on a USB drive, work through the generator (Google what you need to), drop the resulting XML onto the drive, then reboot from the USB and install as normal.