

I’ve been running my own mail for 10+ years. I recommend rspamd for spam filtering. It took the place of SpamAssassin, greylisting, SPF checking, etc., all in one system.
Depends on the watermark method used. Some people talk about watermarking by subtly adjusting the word choices. Like if there are 5 synonyms, the model picks the 1st synonym for one word and the 3rd for the next, following a pattern. To check the watermark you have to have access to the model and its probabilities to see if the text matches that pattern. The tricky part is that the model can change, and so can the probabilities and other things I don’t fully understand.
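Roughly, the idea is that the generator’s choice among interchangeable words encodes a signal that only someone with the key (and the same candidate lists) can check. Here’s a toy Python sketch of that idea, not any vendor’s actual scheme; the key, the synonym lists, and the hash-based pick are all made up for illustration:

```python
import hashlib

# Hypothetical shared secret between the generator and the detector.
SECRET_KEY = b"example-key"

def keyed_pick(prev_word: str, candidates: list[str]) -> str:
    """Deterministically pick one synonym, keyed on the secret and the previous word."""
    digest = hashlib.sha256(SECRET_KEY + prev_word.encode()).digest()
    return candidates[digest[0] % len(candidates)]

def watermark_score(words: list[str], synonym_sets: list[list[str]]) -> float:
    """Fraction of positions where the text matches the keyed pick.
    Near 1.0 suggests watermarked output; around 1/len(candidates) suggests ordinary text."""
    hits = 0
    for i, candidates in enumerate(synonym_sets):
        prev = words[i - 1] if i > 0 else ""
        if words[i] == keyed_pick(prev, candidates):
            hits += 1
    return hits / max(len(synonym_sets), 1)
```

It also shows why the caveat above matters: if the model changes, the candidate lists and probabilities change, the detector recomputes different expected picks, and the signal washes out.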
How do you expect the packets to actually route? If you run Tailscale and your VPN on your phone, they might fight with each other for control of the routing table.
If you’re trying to use a Tailscale exit node and then route through Tailscale to a node running gluetun to Mullvad, that’s going to be complex because, again, they both want to mess with the routing table.
Tailscale natively supports Mullvad: https://tailscale.com/mullvad
Okay, it was a little hard to read since your post was missing formatting. TS_SUBNETS is what controls which CIDRs are announced through Tailscale. Since you’re not using Docker networking for Jellyfin, it would be whatever subnet the host is on; maybe it’s 192.168.x.y.
Gluetun doesn’t make any sense here. You’re forcing all the traffic from Jellyfin to go through Mullvad, but you need to be able to connect to Jellyfin, because Jellyfin is a service you connect to.
Since your Tailscale container uses host networking, you’ll be able to expose your Docker network subnets over Tailscale and then access Jellyfin. This is done via the TS_SUBNETS env variable. Docker will use a subnet out of 172.16.0.0/12.
You probably intend to gluetun your downloading software, not Jellyfin.
My pet peeve is websites animating in content at page load like data charts. Just show the dang chart, don’t animate a bar chart unless new data is being added to show real-time content.
Was that anything more than just rumors? Letting a currently monopolistic company keep the browser because another bad billionaire might buy it and do something bad with it just prevents anything from changing.
It is a choice, but one that I find important to adopting an alternative. I keep my wallet slim on purpose. Telling people their choices are wrong because you don’t agree with them is not going to get widespread adoption, which is important for the long-term health and success of such an ambitious project.
Your options are to run smaller models or wait. llama3.2:3b fits in my 1080 Ti’s VRAM and is sufficiently fast. Bigger models will get split between VRAM and RAM and run slower, but it’ll work.
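If that llama3.2:3b tag means you’re running Ollama (an assumption on my part), trying a small model from code is only a few lines once the model is pulled:

```python
import ollama  # pip install ollama; assumes a local Ollama server with llama3.2:3b pulled

# A 3B model fits entirely in the 1080 Ti's 11 GB of VRAM; larger models get
# partially offloaded to system RAM and generate noticeably slower.
response = ollama.chat(
    model="llama3.2:3b",
    messages=[{"role": "user", "content": "Give me one sentence about old GPUs."}],
)
print(response["message"]["content"])
```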
Not all models are gen-AI-style LLMs. I run speech-to-text models on my GPU too, for my smart home.
It has to be the employees, not the state, because companies withhold it and remit directly to the IRS. Not saying you should do this, but if you increase your withholding exemptions then it won’t go to the IRS. Though you will owe it in April and may have to pay penalties for underwithholding.
Who organized this form? Is there something official to make it look like it’s not just signing me up for spam?
That’s also why certain contact lenses can’t be worn overnight or for long periods of time because they aren’t as breathable. At least that’s what my eye doctor said when I got them.
I don’t think there is a technical issue or any kind of complexity at issue here; the problem seems trivial even though I haven’t worked out the details. It is moot since it’s broken on purpose to preserve “They’s” business model.
I’m explaining what the technical problems are with your idea. It seems like you don’t fully understand the technical details of these networking protocols, and that’s okay, but I’ve summarized a few non-trivial technical problems that aren’t just people keeping multicast from being used. I assure you that if multicast worked, big tech would want to use it. For example, Netflix would want to use it to distribute content to their CDN boxes and save tons of bandwidth.
I don’t know who “they” is in this case, but let’s think about this for a minute.
Technically what do you need for this to work?
How many multicast addresses do you need? How are multicast addresses assigned? Can anybody write to any multicast address? How do I decide that 239.53.244.53 is for my file and not your movie? How do we know who is listening? That part is effectively BGP, but trickier, because depending on the answers to the previous questions you may not benefit from any network block sizes to reduce the routing info being shared. How do you decide when to start transmitting a file? Is anybody listening? Does anybody care?
You seem latched on to the assumption that it would technically work and haven’t asked whether it’s actually a good technical solution. P2P is going to work better than multicast.
Multicast addresses are handled specially in routers and switches all over the world.
Changing that would require massive firmware updates everywhere to get this to work, and we can’t even get people to adopt IPv6. Never mind the complexity of figuring out how to manage IGMP group membership at Internet scale.
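To make the IGMP point concrete, here’s roughly what joining a group looks like at the socket level in Python, reusing the 239.53.244.53 example address from above (the port is arbitrary). This works fine on a LAN; the hard part is that every router between the sender and all of the receivers would need to track this kind of membership state for every group, Internet-wide:

```python
import socket
import struct

GROUP = "239.53.244.53"  # administratively scoped multicast range (239.0.0.0/8)
PORT = 5004              # arbitrary example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what emits an IGMP membership report on the local
# segment; routers and switches use that to decide where to forward the stream.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(65535)
    print(f"received {len(data)} bytes from {addr}")
```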
Given the complexity of either change, it’s better to adopt IPv6 and use PeerTube. Multicast at Internet scale won’t work, and IPv6 is less work.
Even assuming multicast routing worked across the Internet, it’s not going to work in practice. Multicast works by sending a packet once and fanning it out to all receivers.
It works for broadcast TV like IPTV because everybody is watching the same small set of channels at the same time, but on YouTube I can watch any video at any time. How does a mythical transmitter know which video packets to send, and when? Are they on a loop? Are clients receiving packets for videos they don’t care about?
You might be interested in PeerTube which uses unicast peer to peer to distribute videos in a way that works.
I use a variant of this: https://github.com/linuxserver/docker-wireguard
You don’t need two different containers for this. They’re either going to fight each other for control over the routing table, or you’ll end up running WireGuard inside WireGuard.
So I had a chance to try this out. It wasn’t on the Google Play Store, only F-Droid. There isn’t really SSO support; you either log in with a username/password or a token. Instead, I logged in with my browser, got the token, and pasted it in. That works fine, but in an ideal world it would just pop up a browser WebView, go through the flow, then grab the token. Maybe it was intentional, but PaperlessShare registered as an open handler for PDFs and in the share menu, whereas this is share menu only. That seems to mean I need to grant file access, whereas the open handler didn’t need that, I think.
Overall, it does the job and gets my docs uploaded.
It’s just part of Google’s plan to continuously move logic between their apps. They’ll slowly move Google Fit logic into Health Connect, then a few years later move it back out into Google Fit.
I don’t know if it’s good or bad. Health Connect just needs an easier entry point.