Posts: 1 · Comments: 481 · Joined: 3 yr. ago

  • You can, but I found it a bit laggy. It basically wraps your TCP stream in HTTPS, so I think the extra overhead was what slowed it down.

  • Hackers aren't the only way to meddle in an election, just the easiest to categorize and deal with.

  • The internet in its heyday, when it was a genuinely thrilling place to find information, and quite a lot of weirdness, before it was swamped by corporate interests.

    I remember starting out with gopher and a paper printout of 'The Big Dummy's Guide to the Internet', which was a directory of almost every gopher and FTP site (pre-web), along with a description of what you'd find there. Then the web came along and things got really good for a while. Once big corporations got involved it all went downhill.

  • Bah, a magnetised needle and a steady hand is the one true way to edit code on your prod system.

  • There's a difference between 'processing' the text and 'parsing' it. The processing described in the section you posted is fine, and you can manage a similar level of processing on HTML. The tricky/impossible bit is parsing the languages. For instance, you can't write a regex that'll reliably find the subject, object and verb in any English sentence, and you can't write a regex that'll break an HTML document down into a hierarchy of tags, as regexes don't support counting recursion depth, and HTML is irregular anyway, meaning it can't be reliably parsed with a regular parser.
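
    A quick illustration of the nesting problem, assuming GNU grep with PCRE support (-P); the pattern is a common "match a tag pair" attempt:

```shell
# A regex can't track nesting depth: the lazy match pairs the outer
# <div> with the FIRST closing tag it finds, not the matching one.
html='<div><div>inner</div></div>'
echo "$html" | grep -oP '<div>.*?</div>'
# prints: <div><div>inner</div>
```

    A greedy `.*` fails differently (it swallows too much when there are several sibling tags), but either way the regex has no notion of which close tag matches which open tag.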

  • Ah, ok. You'll want to specify two AllowedIPs ranges on the clients, 192.168.178.0/24 for your network, and 10.0.0.0/24 for the other clients. Then you're going to need to add a couple of routes:

    • On the phone, a route to 192.168.178.0/24 via the wireguard address of your home server
    • On your home network router, a route to 10.0.0.0/24 via the local address of the machine that is connected to the wireguard vpn. (Unless it's your router/gateway that is connected)

    You'll also need to ensure IP forwarding is enabled on both the VPS and your home machine.
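
    As a sketch of those steps, the exact addresses are assumptions for illustration: 10.0.0.2 as the home server's WireGuard address, and 192.168.178.10 as its LAN address. This is a config fragment, run as root on the relevant machine:

```shell
# On the phone (if its WireGuard client lets you add routes):
# reach the home LAN via the home server's WireGuard address (assumed 10.0.0.2)
ip route add 192.168.178.0/24 via 10.0.0.2

# On the home network router:
# reach the VPN range via the LAN address of the machine running
# WireGuard (assumed 192.168.178.10)
ip route add 10.0.0.0/24 via 192.168.178.10

# On both the VPS and the home machine: enable IP forwarding
sysctl -w net.ipv4.ip_forward=1
```

    If the router itself is the WireGuard peer, the second route isn't needed.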

  • Sort of. If you're using wg-quick then it serves two purposes, one, as you say, is to indicate what is routed over the link, and the second (and only if you're setting up the connection directly) is to limit what incoming packets are accepted.

    It definitely can be a bit confusing as most people are using the wg-quick script to manage their connections and so the terminology isn't obvious, but it makes more sense if you're configuring the connection directly with wg.
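
    Configuring with plain wg makes that second role visible, since it installs no routes at all; a sketch, with the interface name and key as placeholders:

```shell
# With plain wg, AllowedIPs only controls which source addresses are
# accepted from this peer (and which destinations are sent to it);
# any routing is up to you.
wg set wg0 peer '<peer-public-key>' allowed-ips 192.168.178.3/32
ip route add 192.168.178.3/32 dev wg0
```

    wg-quick reads the same AllowedIPs value and adds the routes for you, which is where the two meanings get blurred together.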

  • The allowed IP ranges on the server indicate what private addresses the clients can use, so you should have a separate one for each client. They can be /32 addresses as each client only needs one address and, I'm assuming, doesn't route traffic for anything else.

    The allowed IP range on each client indicates what private address the server can use, but as the server is also routing traffic for other machines (the other client for example) it should cover those too.

    Apologies that this isn't better formatted, but I'm away from my machine. For example, on your setup you might use:

    • On home server: Address 192.168.178.2, AllowedIPs 192.168.178.0/24
    • On phone: Address 192.168.178.3, AllowedIPs 192.168.178.0/24
    • On VPS: Address 192.168.178.1, with two peers:
      • Home server peer: AllowedIPs 192.168.178.2/32
      • Phone peer: AllowedIPs 192.168.178.3/32
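
    In wg-quick config form that layout might look like the following; the keys, endpoint and listen port are placeholders, not values from this thread:

```ini
# VPS (the hub)
[Interface]
Address = 192.168.178.1
PrivateKey = <vps-private-key>
ListenPort = 51820

[Peer]  # home server
PublicKey = <home-server-public-key>
AllowedIPs = 192.168.178.2/32

[Peer]  # phone
PublicKey = <phone-public-key>
AllowedIPs = 192.168.178.3/32

# Home server (the phone's config is the same shape, with Address
# 192.168.178.3 and its own keys)
[Interface]
Address = 192.168.178.2
PrivateKey = <home-server-private-key>

[Peer]  # VPS
PublicKey = <vps-public-key>
Endpoint = <vps-address>:51820
AllowedIPs = 192.168.178.0/24
```

    Each machine gets its own file; they're shown together here only to make the mirrored AllowedIPs values easy to compare.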

  • The big stumbling block I see with this approach is that it's not just the maintainers who do the work, as others also contribute code fixes, documentation and help in the community.

    I can see the very real need to support the core maintainers of the projects we use, but I can also see that causing friction if the others who contribute to a project's success and usefulness are overlooked. I know that some projects' communities put bounties on bugs they want dealt with, which helps to a degree, but that still leaves many contributors effectively donating their time whilst a core group get paid. For instance, I've submitted and had accepted several patches across several projects that I use. They've usually been to add functionality that I wanted and saw others wanted too. I don't think I'd want paying for them, but I'd probably feel differently if I knew the person accepting the pull request was being paid, either commercially or via a scheme like this. Maybe that will work out in practice, but I'd be worried about the change in dynamic.

    I don't have a good solution to this, but I thought I'd offer it as a different viewpoint.

  • I was attempting to be facetious and mimic the self-help type advice you see about daily habits, but yes, if you want to get away with it, bigger is probably better.

  • Have you considered supplementing your income by committing massive fraud?

    You need to start by making small changes to your daily habits, and build up to massive fraud. If you try to do it all at once the habit won't stick.

  • I haven't had any issues painting most plastics with the general hobby type spray paints. I know there are some that'll fail or damage the surface, but I've had good results with Plastikote (other brands are available, etc).

    I should probably have been more specific about using spray paints for plastics rather than general ones.

  • Maybe a good pair of headphones and the careful application of some spray paint? Mask any holes or areas you don't want to colour, then apply several light coats until suitable pinkness is achieved. I suspect you'll get bonus points for personalisation.

  • They sound usable enough. If you're interested in it, have you considered running an LLM or similar? I think they cluster. If they've got GPUs you could try Stable Diffusion too.

    Mind you, at that price point I think we're past the point of just thinking of them as compute resources. Use them as blocks, build a fort and refuse to come out unless someone comes up with a better idea.

  • It really depends on what sort of workload you want to run. Most programs have no concept of horizontal scaling like that, and those that do usually deal with it by just running an instance on each machine.

    That said, if you want to run lots of different workloads at the same time, you might want to have a look at something like Kubernetes. I'm not sure what you'd want to run in a homelab that would use even 10 machines, but it could be fun to find out.

  • That's fair, but the result seems to be the same; he's nowhere near as caustic when interacting with people as he used to be. I had quite a lot of sympathy with the message in most of his technical rants, but the delivery was counterproductive. If he's changed that I think he's done well.

  • I think even he realised his toxicity was a problem a few years ago, so he took time out to work on that and seems much more balanced now.

  • Sorry for the slow answer, I've been away. There is a way, if it's still useful to you:

    First, create a named fifo, you only need to do this once:

        mkfifo logview

    Run your rsync in one pane, with a filtered view in the second:

        tmux new 'rsync ...options... |& tee logview' \; split-window -h 'grep "denied" logview'

    Replace ...options... with your normal rsync command line.

    That should give you a split view, with all the normal messages on the left, and only messages containing 'denied' on the right.

    The |& makes sure we capture both stdout and stderr, tee then writes them to the fifo and displays them. split-window tells tmux to create a second pane, and display the output of grep.

  • Tmux is a very helpful terminal multiplexer; among other things, it can split your terminal into multiple panes. So, create two side-by-side panes, then one way of doing it is:

    • on the left, run your cmd | tee >(grep 'denied' > error.log)
    • on the right, run tail -f error.log

    The tee process takes its standard input and writes it both to standard output, so you see all the lines, and to the path it's been given. The >(...) operator runs the grep in a subprocess and returns the path to its standard input pipe, so grep receives every line and writes the 'denied' lines to a log file, which you display with tail in the other pane.

    Rather than using a file for error.log you could also use a named pipe in much the same way.
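
    As a self-contained sketch of the tee/process-substitution half, this needs bash (the >(...) operator isn't POSIX sh), and printf stands in for the real command:

```shell
# printf stands in for the real command; tee passes every line through
# to stdout (redirected to full.log here) and also into the >(...)
# subshell, which keeps only the 'denied' lines in error.log.
printf 'transfer ok\npermission denied: /etc/shadow\ntransfer ok\n' \
  | tee >(grep 'denied' > error.log) > full.log
sleep 0.2   # let the grep subshell finish writing before reading
cat error.log
```

    In the two-pane setup, the `> full.log` redirection goes away (tee's output stays on your left pane) and the right pane runs tail -f on error.log.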