nothing per se, depends on implementation, see my other reply for some insights on key management
hi! sorry for throwing this here without explaining much, explaining a bit definitely seems like due diligence!
so, i need to make some things clear, skip if you know these already:
fediverse
the fediverse is not a single software, rather a collection of software projects speaking a common language (sharing a protocol: ActivityPub). the classic example is email: from gmail you can email folks on outlook. the servers just know how to send messages to other instances/servers/deployments, and how to receive them. for example, email (SMTP) expects data formatted in a certain manner (lots of headers and a body, kinda) on port 25. ActivityPub expects activities (JSON-LD documents) coming over inboxes (POSTs to HTTP endpoints).
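to make the mail analogy concrete, here's roughly what one of those activities looks like; a minimal sketch, with all actor URLs and IDs made up for illustration:

```python
import json

# a minimal Create activity wrapping a Note, addressed to a single actor.
# every URL here is a made-up placeholder, not a real server.
activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "id": "https://example.social/activities/1",
    "actor": "https://example.social/users/alice",
    "to": ["https://other.example/users/bob"],
    "object": {
        "type": "Note",
        "id": "https://example.social/notes/1",
        "attributedTo": "https://example.social/users/alice",
        "to": ["https://other.example/users/bob"],
        "content": "hi bob!",
    },
}

# the sending server would POST this JSON-LD document to bob's inbox
# (e.g. https://other.example/users/bob/inbox), with an HTTP signature attached
body = json.dumps(activity)
print(body[:60])
```

that document is the whole "common language": any software that understands activities like this can federate, and anything it doesn't understand (like an encrypted blob) it simply can't do anything with.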
compatibility
now, say emissary sends an encrypted message to a mastodon user. mastodon doesn't know what to do with that document! it's a garbled mess of encrypted data, and there are no rules for it in the spec! the post claims "federated" (aka, across multiple servers) e2ee messaging, and that already exists with multiple solutions. what they mean, as far as i can tell, is one of two things:
- they are making a new e2ee chat: great! emissary users will get a way to message other emissary users. but that's it: you need to be on emissary, like with matrix you need to be on matrix
- they are making a fediverse e2ee chat: this isn't easy! you can't just make it for yourself, you need to clearly define how it works, and everyone must implement it too. otherwise mastodon or lemmy won't know what to do with the message you sent
spec
they link two specs: MLS (an IETF spec defining scalable e2ee messaging), and activitypub-e2ee. the first one is great: i think matrix wants to move their encryption to that? it's good, but it's a spec: you need to adapt it to your use. the second one is how MLS can be applied to ActivityPub communication: the thing we care about! unfortunately the latter spec is just a draft, so it needs more work and it's unlikely that it will see adoption in this state.
asymmetric encryption
so now i need to go a bit into asymmetric encryption, in this case RSA. there are a lot of great explanations if you put "asymmetric encryption" or "rsa" into google, but i'll try my best here. imagine 2 folks trying to communicate, Alice and Bob, but they need a postperson to deliver their messages. they don't want the postperson reading them! how to do that? A and B each get two "keys": one private and one public. these keys are related to each other: a pubkey "has" a privkey, and vice versa. these keys are also "magic" (math; good luck if u wanna dig in here, and if you're not into math just trust me, the keys are magic). using a public key, you can encrypt a message so that only the related private key can decrypt it. and using a private key you can encrypt a message so that only its public key can decrypt it. the second case is for identity proofing; we care about the first one: if A and B make their public keys public (heh), they can both use those keys to create messages meant only for either A or B, assuming they still hold their private keys and nobody else does. because math magic.
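if you want to see the "magic" in action, here's a toy RSA round trip with tiny textbook primes (the classic wikipedia example numbers). real keys use primes hundreds of digits long; this is strictly for illustration:

```python
# toy RSA with tiny primes, purely to show the public/private key relationship.
# NEVER use numbers this small for actual crypto.
p, q = 61, 53
n = p * q                  # 3233, the modulus: part of both keys
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: (n, e) is the public key
d = pow(e, -1, phi)        # private exponent: (n, d) is the private key

message = 65                        # a message, encoded as a number < n
ciphertext = pow(message, e, n)     # anyone can encrypt with the PUBLIC key
recovered = pow(ciphertext, d, n)   # only the PRIVATE key undoes it

print(ciphertext, recovered)  # 2790 65
```

the postperson sees only `2790`; without `d` (which never leaves Bob's hands) there's no practical way back to `65`.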
activitypub keys
in activitypub every actor holds a private and a public key. servers use these keys (via HTTP signatures, and for "authorized fetch") to make sure an activity truly comes from the actor claiming to send it. so we can use these keys for doing e2ee!
Alice <---> A's server <---> B's server <---> Bob
Alice can ask her server to get Bob's public key from Bob's server, and then encrypt a message for Bob and send it via the servers without anyone snooping in. Great?
NO!
A's server can lie about Bob's key: hand Alice a random key it controls, decrypt the message, then re-encrypt it with Bob's real pubkey and send it on. this way Bob notices nothing and A's server can read the message. the same way, A's server can give Bob's server a fake pubkey for Alice, so it can read the incoming reply and then re-encrypt and re-send it to Alice with her real key. so trust is broken!
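the attack can be sketched in a few lines, reusing toy RSA numbers; everything here (the primes, the setup) is made up purely for illustration:

```python
# toy simulation of the key-substitution attack: alice asks her server for
# bob's public key, the server hands over ITS OWN key instead, reads the
# message, then re-encrypts with bob's real key. tiny textbook RSA, not real crypto.

def make_keypair(p, q, e=17):
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))
    return (n, e), (n, d)          # (public key, private key)

bob_pub, bob_priv = make_keypair(61, 53)
evil_pub, evil_priv = make_keypair(67, 71)   # the malicious server's own keys

msg = 42
# alice thinks she encrypted for bob, but the server gave her its own pubkey
c_to_server = pow(msg, evil_pub[1], evil_pub[0])
# the server decrypts (it holds evil_priv) and reads the plaintext...
snooped = pow(c_to_server, evil_priv[1], evil_priv[0])
# ...then re-encrypts with bob's REAL key and forwards it along
c_to_bob = pow(snooped, bob_pub[1], bob_pub[0])
received = pow(c_to_bob, bob_priv[1], bob_priv[0])

print(snooped, received)  # 42 42: the server read it, bob notices nothing
```

the math is all correct end to end, which is exactly the problem: encryption alone can't tell you whether the key you fetched really belongs to bob.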
the spec offers 3 solutions to this:
- trusting your server, which is kind of the starting point and we don't want that
- having a third party validate keys (either a centralized solution which Alice and Bob both ask, or some yet-to-be-invented federated way to handle keys. we're kinda back at square one)
- having alice and bob exchange keys themselves (maybe send them on matrix or signal, delegating the "identify and trust" issue to those services)
"knowing irl"
some users compared the issue with "knowing each other irl" but it's not the same. on signal, i trust you to be you, and our conversation to be private. if i search you by username, i can just message you. whether your username "is really you" is meaningless here: you are your username. i'm writing this to "Abundance114", i don't care who you are, i just want this to reach "Abundance114". so on signal i plug in your user and our keys automagically reach each other safely. this spec doesn't explain how that happens: i would need to first identify and trust you, Abundance114, and then find a way to safely communicate with you so we can exchange keys.
i hope this was in-depth enough! i'm not an encryption expert, so if any are here i'm open to criticism, but this seems reasonable to me given my protocol and encryption experience. basically i believe this post is hype bait: whatsapp is e2ee, but who has the keys? do you trust meta? sure, the message travels encrypted, but who can read it? only you? an e2ee system is not just its encryption tech, but also the way keys are securely shared
this is misleading and sensationalistic. if emissary implements e2ee, it's not "e2ee for the fediverse", it's "e2ee for emissary users". did mastodon talk about e2ee? did lemmy?
also the MLS-in-activitypub draft proposes three options for trusted key exchange: "trust the server" (lmao), use a centralized key authority (wow), or have users manually verify their keys out of band (so basically use matrix to assure your chat is encrypted). source: https://swicg.github.io/activitypub-e2ee/architectural-variations.html#validating-end-to-end-encryption
fedi devs need to stop clickbaiting, and fedi users should learn a bit more about their protocol to avoid getting misled this way
what os are you going to use on your smartphone if you remove software from google and apple?
aosp, fdroid, no gservices
what VR headset
not into vr so can't say
what telecom
sadly, not a good one. i wish i had a choice, but this isn't software
are you only shopping in local food markets?
sort of? i get fresh stuff from actual markets when i can, and when i go for groceries i avoid ultra-processed stuff from big multinationals, making sure of the provenance and the maker of what i get. supermarkets also sell stuff from local producers
lemmy creators are bigots
eh, im still leeching off some other person hosting, im not going to host lemmy and im slowly making my own thing
also can you provide examples? i heard it multiple times, I'm not contesting it, just kinda want to see for myself, like with vaxry, and not only trust second-hand accusations
i don't want to be a cop and background check
no absolutely fine i don't check all my software too, but when i hear a callout i dont hide behind "art and artist" mentality and move off the bigot's stuff
preference is a weak motivation honestly. i prefer google maps yet i still don't want google and make do with OSM
I'm simply interested in having control over my PC
but you don't, you still depend on vaxry. can you maintain, update, fix and recompile hyprland yourself? if so, fork it and start boycotting vaxry. if not, what control are you talking about? it's just preference
this whole argument to me sounds like "i prefer a WM with smooth animations and an active discord so im going to overlook the problematic maintainer im going to give clout to and start depending on"
i'm not on wayland so i can't try any of these, but there are lists you can browse from (https://wiki.archlinux.org/title/Wayland#Compositors for example)
you are setting quite restrictive and arbitrary limits
well supported
what do you mean?
with smooth animations
what counts as "smooth animations"?
if your message boils down to "something which looks really good to me and that has a discord i can go into and ask for help", you may have set the requirements tight enough to only include hyprland, but that's not a valid excuse in my opinion to avoid boycotting problematic developers
your argument is a bit extreme, it doesn't need to only be software from nice folks, it just needs to not be software made by not nice folks
apart from sqlite, i think everything is replaceable with a bit of compromise
what things made by not nice folks are you locked into?
lemmy is not a great comparison: it has like 3 alternatives, while hyprland has tens if not more.
i don't think software is just software, why would this tech be exempt? pilotless aircraft are just tech, just like software, but we do remember that drones bomb people. supporting problematic developers is not "as bad" as building killing machines, but it's the same principle: looking the other way when it's convenient. we should aim to ostracize and isolate problematic devs, and it starts by not using their software, because doing so gives them clout and relevance
taking care of bad servers is instance admin business, you're conflating the user concerns with the instance owner concerns
generally this thread and previous ones have such bad takes on fedi structure: a federated and decentralized system must delegate responsibility and trust
if you're concerned about spam, that's mostly instance owner business. it's like that with every service: even signal has spam, and signal staff deals with it, not you. you're delegating trust
if you want privacy, on signal you need to delegate privacy to software. on fedi to server owners too, but that's the only extra trust you need to pay
sending private messages is up to you. if i send a note and address it only to you, i'm delegating trust to you to not leak it, to the software to keep it confidential, and to the server owner to not snoop on it. on signal you still need to trust the software and the recipient
this whole "nothing is private on fedi" is a bad black/white answer to a gray issue. nothing is private ever, how can you trust AES and RSA? do you know every computer passing your packet is safe from side-channel attacks to break your encryption? you claimed to work in security in another thread, i would expect you to know the concept of "threat modeling"
lemmy's approach still relies on audience targeting for privacy, just like mastodon. using a distinct object type (which is off spec btw) is "more secure" just because nobody else knows what lemmy is doing
it's not unrealistic to keep trust at the server level. following your rationale, you can't trust my reply, or any, because any server could modify the content in transit. or hide posts. or make up posts from actors to make them look bad.
if you assume the network is badly behaved, fedi breaks down. it makes no sense to me that everything is taken for granted, except privacy.
servers will deliver, not modify, not make up stuff, not dos stuff, not spam you, but apparently obviously will leak your content?
fedi models trust at the server level, not user. i dont need to trust you, i need to trust just your server admin, and if i dont i defederate
good reply, but private items are not "quite literally blasted out to anyone who listens": the AP spec has audience targeting, and content gets delivered point-to-point to exactly its addressees, like email. a Note for bob gets sent ONLY to bob's server
as:Public content gets broadcasted by some software (relays) and inbox forwarded by others (mastodon, mitra).
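to make "sent ONLY to bob's server" concrete, here's a rough sketch of how a server might pick delivery targets from the addressing fields. the field names and the Public marker are real ActivityStreams properties; the actor URLs are made up, and a real server would also resolve actors to inboxes, expand collections, and deduplicate, which this skips:

```python
# the special ActivityStreams marker for "anyone may see this"
PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def delivery_targets(activity):
    """Collect concrete addressees from the AS2 addressing fields.

    The as:Public marker is not a deliverable target: it only flags the
    content as broadcastable (e.g. via relays), so it gets skipped.
    """
    targets = set()
    for field in ("to", "cc", "bto", "bcc", "audience"):
        for addressee in activity.get(field, []):
            if addressee != PUBLIC:
                targets.add(addressee)
    return targets

note = {"to": ["https://other.example/users/bob"], "cc": []}
print(delivery_targets(note))  # only bob's actor: exactly one server is contacted
```

the point being: who receives a private item is computed from the addressing, not broadcast; what the *receiving* software then shows to whom is where pixelfed fumbled.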
linking barely relevant threads is a bit annoying
your complaints on "unlisted vs public" are completely unrelated to the issue at hand
your analysis that relates to this pixelfed flaw is just:
Privacy Enforcement:
- No explicit requirements for how receiving servers should restrict visibility based on audience fields
- No requirements that servers must hide content from non-addressed users
these aren't good analyses: content should be private by default, and nowhere is it stated otherwise. if you feel like this common-sense practice is somewhat arbitrary, it's actually mandated by GDPR and more data protection laws.
if you want to rules-lawyer that "acktually the spec doesnt EXPLICITLY say that you cant show stuff meant for alice to bob if bob asks", ignoring this web good practice (probably implied by the many privacy remarks in the spec, but let's ignore those) which is actually mandated by governments, go for it. it still doesn't excuse the incompetence displayed by dansup in implementing something that every other fedi software managed
even if you were right, even if the spec was really that vague, even if it wasn't a good practice and requirement, in a federation parties cooperate. pixelfed breaking a common agreement is defederation worthy, and dansup remains either incompetent for implementing badly something easy or toxic for federating ignoring what the federation requires
you're still not addressing the point, just linking other posts back and forth and moving the goalpost
audience targeting is NOT a new abstraction by mastodon, it's part of ActivitySTREAMS, not even ActivityPUB
rtfm and do NOT give badly behaving software a pass
how is it a failure of mastodon that pixelfed doesn't respect audience targeting? it's not like it's something that mastodon made up, this isn't about unlisted/public
variety of made up reasons
you are not engaging with the argument, just stating ideals
fedi developers should get paid? yes, look at gts and mastodon
fedi devs should also be held accountable for their fumbles
dansup showed quite some incompetence in handling security, delivering features, communicating clearly and honestly and treating properly third party devs
it's fair for one person to not be able to handle a big software with a big instance and a big user count. mastodon has a legal entity and a team, gts has no flagship instance, is aggressively open source and gathered a lot of contributors, dansup is winging it alone and failing
let's just make a big fixed point of failure of dansup, what could go wrong ... ?
check out mitra too, could probably use some funding because it's transparent and delivers rather than promising the moon and delivering CVEs (but with a grant AND a kickstarter, maybe pay some other devs??????)
like there are thousands of fedi projects, give 10 bucks to the little dev doing it for fun in their bedroom, more money will not make dansup more competent
periodic reminder to not touch dansup software and to move away from pixelfed and loops
dansup is not competent and quite problematic and it's not even over
developers with less funding (even 0) contributed way more to fedi, they're just less vocal
dansup is all bark no bite, stop falling for it
receiving posts is trivial but you need to convince others to send them to you. i can't just set up a malicious instance and get your private posts: i need to convince you to send them to me, and once you're convinced i can use any normal software to access them, no malicious custom thing needed. literally just follow me from a mastodon.social throwaway and you get my followers-only posts. content addressing is great on fedi: your instance sends your private posts to exactly who you want and no one else. pixelfed receives a private post and shows it to third parties; it's not the system's fault.
fedi is not great for sexting because your pics just sit in clear on your server admin's machine and all dms are easily searchable on db, it's a whole other issue
email works the same way. it's impossible to implement private emails? if you cc your email to im.going.to@leak.it and it leaks, would it be fair to complain about the whole email system?
e: should have read deeper first, it's already been said
TLDR: an e2ee channel means "everything passing over this channel is super secure and private, but it needs some keys for this to work". e2ee means something: you can stop caring about most issues with delivery and protection and such, but you need to care about the keys. if you don't do that, you are probably ruining the security of the e2ee channel
end-to-end encryption solves one issue: transport over untrusted middleware. it doesn't mean much by itself, and it's being flung around a lot because, without proper understanding, it sounds secure and private.
it's like saying that i ship you something valuable in a super strong and impenetrable safe. but what do i do with the key? e2ee is the safe: it solves "how can i send you something confidential when i don't trust those who deliver it", and that means a lot! it's a great way to do it.
but it solves one problem while creating a new one: what to do with the key? this is usually combined with other technologies, such as asymmetric encryption (e.g. RSA), which allows having keys that can be publicly shared without compromising anything. so i send you an impenetrable code-protected safe with an encrypted code attached, and only your privkey can decrypt the code since i used your pubkey!
(note: RSA is used for small data since encryption/decryption is cpu intensive. usually what happens is that you share an AES key encrypted with RSA, and the payload is encrypted using that AES key. AES is symmetric: one key encrypts and decrypts, but AES keys are small. another piece of technology attached to make this system work!)
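a toy sketch of that hybrid dance: the RSA numbers are the tiny textbook ones, the "symmetric cipher" is a SHA-256 XOR keystream standing in for AES, and wrapping the key byte-by-byte is a simplification (real systems wrap the whole key at once with something like RSA-OAEP). all of it is purely for illustration:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # derive a keystream from the key with SHA-256 and XOR it over the data.
    # a toy stand-in for AES: same function encrypts and decrypts (symmetric).
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# tiny textbook RSA keypair (insecure, illustration only)
n, e, d = 3233, 17, 2753

# sender: random symmetric session key, RSA-wrapped byte by byte (toy!)
session_key = secrets.token_bytes(16)
wrapped = [pow(b, e, n) for b in session_key]
ciphertext = keystream_xor(session_key, b"meet me at the usual place")

# recipient: unwrap the session key with the private exponent, then decrypt
unwrapped = bytes(pow(c, d, n) for c in wrapped)
plaintext = keystream_xor(unwrapped, ciphertext)
print(plaintext)  # b'meet me at the usual place'
```

the expensive asymmetric step only protects 16 bytes of key material; the cheap symmetric step protects the payload, which is exactly why real systems layer them this way.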
but now comes the user-friendliness issue: very few are big enough nerds to handle their keys. hell, most folks don't even want to handle their passwords! so services like matrix offer to hold your keys on the server, encrypted with another passphrase, so that you don't need to bother: just remember 2 passwords or do the emoji-compare stuff. it's meh: compromising the server could allow getting your keys and kinda spoils e2ee, and once i've stolen one of your passwords i can probably steal both, but it's convenient and reasonably secure for most. you can absolutely opt out, but every time you log in from a new device, you can't read anything sent before unless you export and import your keys manually.
what does whatsapp do? i don't know! but it kind of magically works. if they do e2ee, where are the keys???? how does meta handle reports if messages are e2ee???????
i'm not sure about signal but everyone praises it so i guess it's good? also it seems you can't restore messages via network, you need an export from a previous install, so it seems your keys live inside your app data, which is good and safe i guess.
also, e2ee works if you can trust the key you're sending to! as mentioned in the 'activitypub keys' section before, if you ask a middleman the key for your recipient, can you trust that's the real key? e2ee doesn't cover that, it's not in its scope
so what does e2ee mean? it means: super strong channel, ASSUMING keys are safe and trusted. e2ee as a technology doesn't solve "all privacy" or guarantee that nobody snoops in per se. it offers a super safe channel protected by keys, and lets you handle those keys however you see fit. which means deciding who you trust to send to, how you let others know how to encrypt for you (aka share your pubkey), and how you'll keep your privkey safe.
thanks for coming to my TED talk btw