I never knew who I was. I still don't know who I am. It doesn't matter anyway.
@Feyd@programming.dev @technology@lemmy.world
I'm not referring only to the feature per se, I'm also referring to any pop-up designed to appear throughout the navigation to "remind the user about the superb features".
Said pop-up is explicitly mentioned on their "confirmation dialog" upon turning off (screenshot attached below):
You won't see new or current AI enhancements in Firefox, or pop-ups about them.
It speaks volumes about how much of a dark pattern this is that opting out requires a confirmation dialog, while proceeding to log in with an Anthropic/OpenAI/Google/Meta account doesn't seem to require one.
And the fact that the confirmation feels "menacing" and defaults to cancelling the opt-out (i.e. pressing "Esc" or clicking outside the window cancels it; one must click the primary-colored "Block" button which, contrasted with a grayish "Cancel" button, may psychologically induce the user into thinking "Block" is a dangerous action), quite similar to the about:config warning screen.
Ah, and the clanker options: notice the lack of alternatives for those who want a custom clanker, such as DeepSeek, Qwen, Z AI, or the Brazilian Maritaca IA and Amazônia IA (to mention some non-Chinese LLMs), or even something running locally through ollama. There seems to be no option for using a custom, possibly self-hosted LLM endpoint. The fact that all the options offered are heavily corporate (with Mistral being the "least corporate" of them all, but still Global Northern nonetheless) might tell us something...
All of these dark patterns, among others not mentioned, are the object of my critique, not just the fact that Mozilla is shoving clankers unto Firefox.
Whenever a feature needs an invasive pop-up and the opt-out brings up a second pop-up that requires further confirmation (but none seems to be offered upon actually using said feature), it is called a dark pattern, no matter if said feature requires further configuration.
@Ulrich@feddit.org @technology@lemmy.world
Because people overwhelmingly do not change any defaults whatsoever
Most roosters wouldn't normally seek out the fox's paws to be hugged by, what astonishing news!
You see, that's exactly what plays favorably for anything pushed with "opt-out" mechanisms. If people are less likely to change the settings to improve their UX (be it due to a lack of knowledge, a lack of proactive pursuit, or because they deem their current settings "good enough"), this means people would be more likely to have the clankers shoved down their throats if said clankers were part of the default settings.
In fact, if settings are very likely to go unchanged, then Mozilla could push anything, absolutely anything under "they will, shall be the whole of the Law", as long as the legally-required "opt-out" mechanisms are in place.
In the foreseeable future, we'd have Firefox as a new "Agentic Browser" where a clanker does all the tiring and utterly boring effort of "browsing the web" as the user watches their credit card being depleted by prompt injections carefully placed amidst Unicode exploits across the web by scammers. But, hey, let us not worry, there's always a button to turn it off! 😄
@Ulrich@feddit.org @technology@lemmy.world
If it’s opt-in it may as well not exist
Because if it were opt-in, people wouldn't have chosen to activate it, fewer people would use it, and the graph line wouldn't go up for the shareholders to appreciate? Then, maybe, just maybe, that would be quite strong evidence that this isn't really something the users want, don't ya think?
For whatever reason, they have decided it’s important.
There's the reason, right above this paragraph: one can only achieve what people would certainly refuse by pushing it onto people by use of force (not necessarily physical force; a dark pattern, for example, is a technical means of "force").
A fox can't convince the roosters to become her food. If the roosters had a stake in deciding this, fewer roosters would become a tasty dinner for the cute fox, because becoming a tasty dinner isn't exactly a demand from roosters. Hence why the fox must grab the roosters, though in this case the fox gives them an option to escape from her paws.
Ah, notice your own phrasing: "They have decided". Who decided? Not the user, not the party interested in their own UX/UI, but the very archontic architects of a kind of digital apparatus we've been compelled to use to participate in this digital realm of society (risking social ostracism if we don't): the World Wide Web.
And when a decision is made upon someone, without regard for the very someone upon whom the decision is being made, even when there's some kind of "opting out" of the object of the decision, we had a name for that: it was called a "non-consensual relationship".
@avidamoeba@lemmy.ca @technology@lemmy.world
The problem still remains: why is this thing "opt-out" and not "opt-in"? Why not make it an official, totally optional (as in voluntarily wanting to have it and, only then, proceeding to have it) plug-in or extension that the user (let us remember the meaning of "User Agent": an agent acting on behalf of the user, not a piece of software that's become "the user") could install at any moment, out of their own will?
I'm far from being an anti-AI person, I myself use those clankers on a daily basis. However, I use them because I want to, while I still want to, not because they were pushed unto me.
Mechanisms of "opt-out" where there should be an "opt-in" are a form of dark pattern.
In fact, the very concept of "opting-out" is a dark pattern per se, because it implies something pushed unto a person, something from which they were "allowed" the "right to leave".
Yeah, it's awesome to have means of "opting-out" from something, but having an "opt-out" mechanism in place doesn't mitigate the very fact that it was coercively pushed unto the person beforehand and didn't require explicit consent from the person unto which the thing was pushed.
Speaking of "consent", situations like these are not that different from the dark pattern "Yes / Not now" we've been seeing everywhere: in certain scenarios, this insistence and disregard for explicit consent would verge on the criminal (e.g. harassment), but suddenly it's "okay" when corporations (and the State itself) do it.
Say someone is being harassed and, only after the harassment has started, the harasser offers the harassed a means to leave. Does this make the harasser less of a harasser? Because that's the same absurd logic behind the corporate advocacy whenever it's said "oh, but Mozilla offers an opt-out, you can always turn off 'sponsored shortcuts' (that is, after having been faced with the shortcut from a Jeff Bezos corp as you opened a new tab to reach the opt-out settings, but that's totally okay), 'sponsored wallpapers' and the 'Anonym tracking', and now, check this out, you can turn off the clankers, too! Wow, isn't that such a cute corp, the corp with the cute fiery fox mascot?".
Not to mention how it's gonna end up cluttering the upstream with (more) binary blobs, adding to the Sisyphean struggle that Waterfox, IronFox, LibreWolf and Fennec, among other Firefox forks, have been experiencing while trying to de-enshittificate the enshittificated and de-combobulate the combobulated.
"Mozilla needs to make money". Yeah, yeah, because the very fundamental, immutable principle of cosmic existence boils down to "there's no such thing as a free lunch", amirite? After all, "money" is clearly within the table of elementary particles alongside quarks and gluons, isn't it? And Mozilla needs to make money... We had a tool for that: it's called donations.
@WhyJiffie@sh.itjust.works @technology@lemmy.world
Possibly. I don't know the specific acronym they use but, regardless of the acronym, to me it smells and looks like an NDA insofar as it's some kind of analogous "secretive initiation ritual" for a developer who's just trying to help an open-source community. It's an agreement where the developer accepts that anything they contribute free of charge is going to be used for enterprise (paid) purposes and that any contribution is subject to being altered or removed as the management pleases; sometimes it also involves a literal NDA, if private (often "enterprise/premium edition") repos are intertwined with the open-source ("community edition") repos.
The ideal open-source, at least to me, would require a developer, any developer no matter who they are or how long their experience is, whenever they wanted to contribute with their coding skills, to simply do a PR or fork a repo, with no bureaucratic or "selling the soul to the Great Corporate" requirements for doing so.
Developing is already mentally demanding for a developer, and adding licensing shenanigans to the equation only complicates things, because now the developer, who's used to talking the language of computers, needs to become knowledgeable about ambiguous social cues, corporate legalese and the differences between an "MIT" and a "GPL" (that's one of the main reasons why I'm quite fond of WTFNMFPL licensing: no legalese).
@Akasazh@lemmy.world @usa@lemmy.ml
Because cryptocurrency data centers (normally) don't deal with AI, and the object of comparison was AI vs. human energy consumption. In that specific speech, Sam Altman was trying to justify (albeit in a very twisted manner) the energy thirst of their ChatGPT and the like. So my napkin math focused on this specific comparison they made, hence why I tried to leave crypto and other non-AI data centers out of the equation.
If I were to include cryptocurrency into this equation, surely the entire comparison would lean heavily towards data centers, because things like crypto mining are highly energetically demanding.
And very polluting indeed. Really. If we consider the chronological aspect, crypto data centers have polluted and consumed more than all AI data centers: Bitcoin has been running since January 2009 (when block 0, aka the Genesis block, was mined), uninterruptedly (I don't remember seeing news headlines such as "Bitcoin operations are currently down", so it's been operating nonstop ever since), while ChatGPT, the one that opened the Dantesque gates we've been facing nowadays, was only released to the public in late 2022, and with several moments of interruption and downtime since.
@cmeu@lemmy.world @nostupidquestions@lemmy.world
Others already replied what it is: something to do with blockchain (not Bitcoin, but a blockchain nevertheless).
Just to add something, as someone who also uses Nostr alongside the Fediverse: this "fyld" (likely an automated account) also has a Nostr nprofile, posting the exact same thing over there, and they likely do a similar thing across other social protocols and platforms, such as the ATmosphere (Bluesky), although I no longer have a Bluesky account to confirm this.
At first glance, it does look like spam, and I muted them both there (didn't mute here because it only appears for lemmy.world; lemmy.ml doesn't seem to federate with that community), due to the annoying frequency of posting...
...but for those who are looking for random numbers whenever there are no TTRPG dice (or, in my case, Ouija boards) nearby, I'd say it's quite a source of randomness with all the fancy colors and hex nibbles. Definitely not a cryptographically safe one (please do not derive a password from that), but for creative purposes, it certainly suffices 😆
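To illustrate the idea (the color value and the d20 mapping here are my own made-up example, not anything the site itself provides), one could squeeze a tabletop-style roll out of a hex color like this:

```python
# Illustrative only: derive a d20 roll from a hex color's nibbles.
# Fine for creative purposes, but NOT cryptographically safe.
def d20_from_hex(color: str) -> int:
    value = int(color.lstrip("#"), 16)  # parse the hex digits as one integer
    return value % 20 + 1               # fold the integer onto the range 1..20

print(d20_from_hex("#3fa7c2"))  # → 15
```

The modulo fold introduces a slight bias (16,777,216 values don't divide evenly into 20 buckets), which is yet another reason to keep this strictly in the "creative purposes" drawer.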
@yogthos@lemmy.ml @usa@lemmy.ml
On the one hand, we shall bring some napkin math to the table.
A human brain consumes something around 20W (Balasubramanian V. Brain power. Proc Natl Acad Sci U S A. 2021 Aug 10;118(32):e2107022118. doi: 10.1073/pnas.2107022118. PMID: 34341108; PMCID: PMC8364152).
One hour = 20 Wh, or 20 × 3.6 = 72 kJ
One day = 72 kJ × 24 hours = 1,728 kJ or 1.728 MJ
One year = 1.728 MJ × 365.25 = 631.152 MJ
20 years = 12.6 GJ
The entire world population in 2024 (you'll understand soon why I'm using 2024) was estimated at 8,141,808,945 (World Bank Group, World Development Indicators).
Rough brain power consumption for all humans alive in 2024 (I'm using the one-year value instead of the 20-year one because the 8 billion figure accounts for all ages) = 8,141,808,945 × 631.152 MJ = 5.14 EJ (exajoules).
Globally, data centers (excluding cryptocurrency mining) used an estimated 415 terawatt-hours (TWh) in 2024 (Agrawal H., "Data Center Energy Consumption: How Much Energy Did/Do/Will They Eat?", Clean Energy Forum, Yale University, 2025 Nov 12, https://cleanenergyforum.yale.edu/2025/11/12/data-center-energy-consumption-how-much-energy-diddowill-they-eat).
In joules, that's 415 TWh × 3.6 PJ/TWh = 1.494 EJ (exajoules).
My napkin math may be heavily inaccurate (hence "napkin") but, yeah, the math tells us humans (roughly) consumed more than all non-cryptocurrency data centers: 1.494 EJ is less than the 5.14 EJ required by 8 billion Homo sapiens for thinking.
And I'm only considering brain power. The number would certainly be bigger if I were to consider the rest of metabolic consumption, which would further consolidate humanity, taken together, as indeed consuming more energy than AI data centers worldwide.
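For anyone who wants to check the napkin math above, here's a short script reproducing it (the 20 W, population and 415 TWh figures are the ones cited above; everything else is plain unit conversion):

```python
# Napkin-math check: human brain power vs. data-center consumption (2024).
BRAIN_POWER_W = 20                       # ~20 W per human brain (Balasubramanian 2021)
SECONDS_PER_YEAR = 86_400 * 365.25       # one Julian year in seconds
POPULATION_2024 = 8_141_808_945          # World Bank estimate
DATACENTER_TWH_2024 = 415                # excl. cryptocurrency mining

brain_j_per_year = BRAIN_POWER_W * SECONDS_PER_YEAR        # ≈ 631.152 MJ per brain
all_brains_ej = POPULATION_2024 * brain_j_per_year / 1e18  # in exajoules
datacenters_ej = DATACENTER_TWH_2024 * 1e12 * 3600 / 1e18  # TWh → J → EJ

print(f"All human brains, one year: {all_brains_ej:.2f} EJ")   # ~5.14 EJ
print(f"Data centers, 2024:         {datacenters_ej:.3f} EJ")  # 1.494 EJ
```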
On the other hand, hell no! I'm not gonna agree with Sam Altman! Especially bc they're ignoring several factors.
For starters, the fact that AI and their data centers required humans, so the "human energetic bill" is shared with AIs, not disconnected from them. After all, AI is not something existing in a vacuum.
Fossil fuel, the elephant in the room, is another factor in play: I didn't research a side-by-side comparison between human-emitted CO2 (from biological processes such as respiration) and the amount of CO2 emitted to keep said data centers running, but this can't be ignored.
Homo sapiens (usually) don't ingest fossil fuels (i.e. in normal situations, we don't drink gasoline... nor do we eat coal).
Meanwhile, global data centers seem far from achieving green energy (e.g. hydro power); they rely heavily on fossil fuels, so they're expected to be breathing out more CO2 than humans.
The tables would only turn regarding CO2 when (and if, a big if, considering how AI is currently in the hands of corps who, in turn, deny and ignore climate change because "line must go up") data centers pivoted to full (and true, not the "green-washing" creative accounting that tech corps usually do) green energy.
@INeedMana@piefed.zip @technology@lemmy.world
Yeah, me too. Unfortunately, the forks can only go so far in removing upstream AI garbage and other proprietary/corporate-oriented bells-and-whistles. If, say, some AI feature becomes so ingrained in Firefox upstream, so deeply that it ends up becoming a hard dependency for the browser's fundamental functioning (i.e. a feature that, if removed at the code level, would render Firefox simply unable to function), no Waterfox, IronFox, Fennec or LibreWolf would be able to keep up with the latest versions: they'd either need to do a hard fork and try to independently maintain an entire browser codebase, or they'd need to stick with downgraded versions.
That's not even to mention the licensing shenanigans. We've seen many open-source projects suddenly change their licensing to include legalese fine print. We've seen open-source projects require developers to sign some kind of NDA before being allowed to contribute code. Seems like initially-open licenses aren't written in stone when it comes to big projects, and Firefox is a big project.
The universe of open-source software is slowly being hijacked by corporate interests. This is no different with Firefox, which (as I said in another reply in this thread a few minutes ago) is Mozilla's main product (if not the main product, it's certainly among their main projects). The same Mozilla that has been pivoting to AI (e.g. the acquisition of Anonym; subtle phrasing changes on the "About Firefox" page, which used to state that "Firefox will never sell your data", a phrase that is now gone).
I use Waterfox on a daily basis. It's by far the best browser I've used. I tried LibreWolf but it doesn't really like my Portuguese ABNT2 keyboard (which has accents I use often), even after disabling ResistFingerprinting, so I ended up sticking with Waterfox. On mobile, I use Fennec daily, and I'm worried about the end of "sideloading" on Android, which will likely mess with its installation. But I'm aware of how both browsers rely on upstream code from Mozilla Firefox, whose enshittification is already an ongoing phenomenon. And that's really depressing when it comes to the future of the browser landscape, because we're hoping for a true alternative. Servo is the last bastion of said hope (until it gets EEE'd by corporate interests, given how the Linux Foundation itself is increasingly surrounded by corpos).
I'm more of a GNU/Stallman person who values autonomy and libreness as non-negotiable principles. I'm only using Android because I'm stuck with it due to certain societal impositions (banks and gov apps); otherwise I'd long have been using a custom phone, which wouldn't even be Linux, but something way more "unorthodox" for a phone, such as FreeBSD or Illumos/OpenIndiana, systems I've already used in a PC environment and got quite fond of.
@woelkchen@lemmy.world @technology@lemmy.world
It takes the same 30 seconds on caniuse.com (screenshot below), which doesn't list WebKitGTK specifically but does list Safari (which is WebKit under the hood), to see how many things are missing from Safari's implementation (which is WebKit).
To be fair, yes, there are many bleeding-edge features, some of them implemented only on WebKit/Safari, but those Safari-only features are kind of proprietary (prefixed with "-webkit-"). Similarly, there are indeed many features still missing from Firefox while already implemented in the two other engines (such as CSS "@function").
But my point, which I should've gone into further detail on earlier, is that WebKit, primarily maintained by Apple (originally authored by Apple, and a trademark of Apple since 2013), doesn't have the same browser-focused teams found at Mozilla (whose main product is Firefox) and Google (whose main product is advertisement through their platforms, including Chrome, so Chrome is part of their main focus just because it's essential to keep the ads running and the telemetry sneaking up on the user). Apple is more focused on other businesses, such as hardware and UI; Safari and WebKit are their side project.
@yogthos@lemmy.ml @programming@lemmy.ml
The x86css didn't work because CSS "@function" rules aren't yet implemented in Firefox (and, by extension, Waterfox). I'm not gonna spin up Chromium.
Then I tried other projects from this lyra.horse website. I tried the CSS clicker (a clicker game which uses no JS, just CSS and HTML). It's very interesting. There are a few glitches (e.g. the "Name your website:" field should behave like an input[type='text'] but actually behaves like a textarea, allowing newlines where the semantics (a title) expect none; IIRC, there are CSS properties allowing a [contenteditable] element to restrict input to a single line), but it's interesting nonetheless.
The only problem, besides the limited support for certain state-of-the-art features across browser engines, is the fact that this "CSS-oriented functional programming" ends up requiring more processing power than JS does, because JS has optimizations that CSS often lacks.
Don't get me wrong: it's really interesting, and I'm quite fond of unorthodox approaches to programming. I myself once used nodemon (a live-reloading CLI tool intended for Node.js but also usable with other programming languages) to compile and run a GNU Assembly Linux program as the code was being edited, and I also used the same Assembly toolchain to code a "program" whose compilation result wasn't an actual runnable program, but a whole, valid BMP (bitmap) image structure, complete with a linear gradient; I achieved this using assembler macros. That's how fond I am of unorthodox programming, so I'm far from being against CSS programming, much to the contrary: it's awesome!
...but this whole approach, using CSS as a full functional programming language, unfortunately ends up heating my poor old i5-7200U laptop...
@paraphrand@lemmy.world @technology@lemmy.world
Oh, right, WebKit, I forgot mentioning it, thanks for reminding me of it!
It's the engine I likely used the least throughout my digital existence. I mean, I likely used Lynx more than I used WebKit, hence my forgetfulness.
However, if we're talking about WebKit-based Linux browsers (such as Konqueror), IIRC they're a bit out of spec when it comes to the "modern Web": WebKit's adoption of the latest specs tends to be slower than Firefox's and Chromium's.
Now, if we're talking about Safari specifically, then... it's part of Apple's walled garden, one where even "Firefox from App Store" is actually a reskinned Safari (at least in iOS).
Be it Safari or Konqueror, deep inside, the WebKit engine seems to me like the "Apple's Chromium", so mentioning WebKit doesn't really improve the awful prospect for browser engines that we're facing nowadays.
@Beep@lemmus.org @technology@lemmy.world
Ah, the smell of irony in the morning! Adopting a programming language often praised for its "safety", while the entire pretension of "safety" is alchemically transmuted into sewage and deliberately flushed up (not down) by a clanker who drinks from the cesspool with the same determination and thirst as a Chevy Opala gurgling down entire Olympic pools' worth of gasoline.
Being serious now, the foreseeable future for Web browsing is definitely depressing: Chromium needs no introduction (it used to be an interesting browser until Google's "don't be evil" mask fell and straightforwardly revealed their corporate face and farce), Firefox has been "welcoming the new AI overlords" for a while, text browsers (such as Lynx) are far from feasible on a CAPTCHA-(and Anubis-)driven web... and now one of the latest and fewest glimmers of hope, an alternative Web browser engine, is becoming the very monster whose fighting was promised as its launchpad purpose ("They who fight with monsters should be careful lest they thereby become monsters"). I wouldn't be surprised if Servo were to enshittify, too. Being able to choose among sameness is such a wonderful thing, isn't it?
I mean, I'm not the average Lemmy user with that (understandable) deep hatred of AI; I'm able to hold a nuanced view and find quite interesting uses for the clankers (especially the "open-weighted" ones, and especially when it comes to linguistics). However, shoving AI everywhere and using AI to "code for you" is a whole different story. Software should be programmed the way programming (as posited by Ada Lovelace) was intended to be, not "vibe coded" by a fancy auto-completer that can't (yet) deal with Turing completeness, especially when it comes to the whole miniature operating system that browsers have become nowadays. When coding a whole OS, AI shouldn't even be touched with a two-million-light-year pole, let alone a two-foot pole.
Fediverse @lemmy.world evil.social down for weeks
@MalReynolds@slrpnk.net @science@lemmy.world
That's... highly interesting, thanks for recommending it! I'll be pondering on this reading.
Perhaps we all emerge again at the big crunch
As someone who believes in some kind of cyclical cosmos (Ordo ab Chao, Chao ab Ordine), it pretty much matches the way I try to make sense of it religiously. Although I also believe (or, deep inside, want to believe) there's a chance this cosmic cycle could somehow grind to a halt, due to how, scientifically speaking, decay is something observed in cyclical processes, and this may apply to information as well (the transformation of information wouldn't be 100% efficient and would be subject to this decay). In the water cycle, for example, some water is always "lost", not "lost" as matter, but "lost" as recyclable water; similar things happen in the nitrogen and carbon cycles and even the biological food web (the transfer of energy from plants to herbivores to carnivores); even orbits undergo decay as the orbital cycles repeat. Cycles aren't 100% efficient because of an omnipresent decay. We should think of decay not as "loss" (because that would violate conservation of energy), but as a branching and merging between parallel ongoing cycles: in a nutshell, the water lost from earthly water cycles becomes part of other cycles, such as sparse H2O molecules escaping as vapor to outer space, never getting to precipitate as rain, eventually being attracted by orbiting stuff, falling towards asteroids or the Moon, towards planets in the vicinity, or towards the Sun (less likely, due to how it requires a higher delta-v), as chaotically as n-body orbits can get.
Then there's the "zero-sum universe" hypothesis, stating that the overall energy of the entire universe sums to exactly zero, so the whole universe is also accounted for by the laws of conservation of energy. If fluctuations decay as the cycles happen, they can still add up to zero, until everything is infinitesimally close to zero.
But, again, I'm highly speculating across several seemingly unrelated concepts (which somehow "click" in my ND mind). At the end of the day, it's something seemingly beyond what we, with scientific rigor, could empirically observe and prove.
@QueenHawlSera@sh.itjust.works @science@lemmy.world
are you saying that we wake up again after death
(Disclaimer: I'm being speculative while trying to connect scientific principles and hypotheses. I'm aware this is not strict science, even though I'm trying to keep my religious beliefs aside.)
Quite the opposite: akin to a PC that was powered off forever. The transition between the "powered-on" and "forever powered-off" states is something unexpected for the "software" (the "sentience" that emerged inside our brains). Living beings, especially those with a nervous system like ours, are wired to be alive, and death is an unexpected and unknown state, so this "transition" (dying) is confusing. As all senses across the body become numb, adaptiveness plays a role, with the cortices trying to compensate for the lack of sensory input (including inputs from within the brain itself, as synapses begin to fail), including heightened activity of long-term memory as it tries to remember what exactly led to this "dying" state (part of the fight-or-flight response): hence the Near-Death Experiences people often recall after effectively dying but getting to be reanimated.
I believe there's an extra-baryonic factor in play too (what spiritualists would call "spiritual realm" would be another "brane" from a multibrane cosmos, with everything having "spiritual matter", not necessarily self-rearranging ("living") and the so-called "soul" merely another emergent property of a physical structure made of "spiritual stuff", akin to a baryonic sentience), but it's belief so I'm keeping this out.
As the emergent properties within sentience are inexorably bound to their "hardware" (i.e. the body and its nervous system), the way matter is constantly subjected to entropy is an intrinsic part of cognition (i.e. the brain gets wired and accustomed to the effects of entropy, trying to adapt as the years pass and aging happens, just like (geologically) life adapted outside water during the Late Devonian and (individually) astronauts aboard the ISS become accustomed to, and develop muscle memory for, microgravity motion).
This means, if Black Hole Cosmology is to be considered, that the effects of "existing inside a black hole" are an indirect part of how life adapts (e.g. adapting to the way time "stretches" as the years pass). The energy within matter (chemical reactions keep happening even after death, especially those of decomposition processes) means that the physical structure from which "sentience" emerged will be subjected to entropy long after being rendered unable to self-rearrange as a living being. The brain may have died, but its organic molecules are still undergoing reactions at the microscopic level, until being completely transformed by decomposition and eventually fossilized. As there are no working memory-registering mechanisms anymore, it's essentially "undergoing time without registering it", pretty much akin to the "time gap" of general anesthesia.
@Quilotoa@lemmy.ca @mildlyinteresting@lemmy.world
1/5: Score 8.99, selection H210 S25 B73, original H214 S16 B58 (closest by hue, but I recalled it as more saturated and more bright than it really was)
2/5: Score 9.74, selection H155 S91 B78, original H146 S72 B78 (nailed the brightness, but I was biased towards cyan/blue and, again, recalled it as more saturated than it really was)
3/5: Score 9.19, selection H145 S59 B58, original H155 S80 B48 (closest by hue, but I was biased towards cyan/blue, this time ending up with a less saturated mental recall of it, and still brighter once again)
4/5: Score 8.37, selection H271 S18 B79, original H251 S19 B99 (almost nailed saturation, but I was biased towards blue; this time I had a darker mental recall of it, maybe I was unconsciously overcompensating my drift towards brighter)
5/5: Score 9.74, selection H28 S87 B50, original H24 S89 B49 (the closest I got to nailing it, off by mere 1 level of brightness, 2 levels of saturation, 4 degrees of hue, which is a recurrent bias towards green).
Final score: 46.03/50 (ranked 47880 out of 381487, their humorous score description: "suspiciously accurate, we're going to need to see your browser history.")
I took notes after each round so I could analyze my own color accuracy, as someone who's highly familiar with color wheels (I'm a developer and also a hobbyist artist who uses a drawing app for digital art). I'm not sure whether my bias towards cooler colors has to do with the screen's white balance/temperature (I played on a smartphone, and the screen is a slightly cold white, even though Android's white balance is set to the midpoint between cold and warm; I'll eventually replay it on PC) or with my heightened sensory bias to red (which, as paradoxical as it may sound, pushes me to guess a color as less red than it is because I unconsciously expect the apparent color to be redder, i.e. a "this color is probably appearing redder than it really is because I've become overly sensitive to red, so it must be bluer/greener" chain of thought).
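Out of curiosity, the per-round notes above can be tallied with a short script (the data is just my five rounds transcribed; deltas are selection minus original, so positive ΔS/ΔB means I recalled the color as more saturated/brighter than it was):

```python
# Signed HSB errors across the five rounds noted above.
rounds = [  # ((selected H, S, B), (original H, S, B))
    ((210, 25, 73), (214, 16, 58)),
    ((155, 91, 78), (146, 72, 78)),
    ((145, 59, 58), (155, 80, 48)),
    ((271, 18, 79), (251, 19, 99)),
    ((28,  87, 50), (24,  89, 49)),
]

for label, i in (("ΔH", 0), ("ΔS", 1), ("ΔB", 2)):
    deltas = [sel[i] - orig[i] for sel, orig in rounds]
    print(label, deltas, "mean:", sum(deltas) / len(deltas))
```

The means come out small (ΔH 3.8, ΔS 0.8, ΔB 1.2) while the per-round swings are large, which fits the overcompensation chain of thought described above better than a constant offset would.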
Yes, I know it's meant to be just a game. lol
@ramble81@lemmy.zip @asklemmy@lemmy.world
On the one hand, I'm quite fond of who I've become since my spiritual awakening: aware of how this world is a Demiurgic theater of illusions behind Matryoshka layers of determinism (physical, societal, biological, ontological), embracing taboos and trying to seek the "wilt, shall be the whole of the Law" while still being a laughable, infinitesimal Khabs restrained by an endless Khu.
I find happiness whenever I feel the cold warmth of Her powerful presence. I find happiness whenever I learn novel things I somehow find a synchronicity with. Happiness never truly lasts, it's always temporary. As soon as I realize, She flew back to the night veil once again, the new thing I'm learning became mundane routine, and I hate mundane routine.
But being fond of who I've become is different from "being happy by myself" or "loving myself".
Accepting or even "loving" myself don't suffice in a world that requires me to "live in society", which often (if not always) means compromising, hiding or even abandoning my own authenticity and sincerity, surrendering myself to a social phagocytosis.
I mean, I can't hire myself, can't pay my own paycheck, can't sell things to myself to "make a living" if I'm seeking not to rely on employers, can't rent myself a home, can't pay the rent to myself.
Living in society requires things beyond "myself". To survive a life I didn't even ask for or consent to in the first place, I need others beyond myself: others to sell me the food my body compels me to eat daily, others to sell/give me resources to grow my own food if I'm seeking not to rely on buying it, others to sell me the soil to grow that food (a rented place would take it all away as soon as I became unable to afford the rent, or if I were to move somewhere else), others who'd pay me so I could afford owning a house.
Loving oneself doesn't bring food, water and shelter. Loving oneself doesn't bring one a paycheck. Loving oneself doesn't pay one's taxes. Yet the answer offered to "how to survive" is merely "love yourself".
In the end, my rebellious mind screams: why should I even "learn to be happy by myself"? This phrasing sounds like an imposition, as if every human being must accept oneself, must "love" oneself, no choice, just as survival leaves no choice (at least no diplomatic one) but to "obey" and comply with one's own body. When did I consent to a "myself", to begin with? When did I ask to be born? I didn't; my "self" was imposed unto me by two humans, whose selves were imposed unto them by two other pairs of humans, and so on, like some kind of endless curse, the curse of biological reproduction.
I may be fond of my own self sometimes, but I'm not "loving" it or "learning to be happy" with it because I'm refusing Demiurgic illusions. The inexorable death imposed unto me is enough imposition, and no matter what I do, everything ends, especially happiness, and my "self" as well, and this world, and even this cosmos.
@IntrovertTurtle@lemmy.zip @foggy@lemmy.world @hayyy@thelemmy.club @mentalhealth@lemmy.world
I'm not among the downvoters, and I can't really speak for those who downvoted you both, but maybe it has to do with you both saying "talk to a therapist" while knowing absolutely nothing about the OP's background: whether they could financially afford a therapist (therapy is often a paid service) or the meds prescribed (something we're required to purchase), or whether they already did "talk to a therapist" before talking to "randos on the internet". You ppl didn't even consider the slightest possibility that the OP was, deep inside, trying to connect with someone, trying to find a like-minded friend.
To be fair, OP didn't describe their background or give further details... But that itself speaks volumes: it reads like someone trying to connect while being selective about what they could say publicly (even behind a pseudonym).
When someone posts something like "hey ppl, is it normal to be depressive?" without describing why, it's very likely that the person is hoping for someone to come and ask "hey, why are those thoughts making it to your mind?" or even a mere "hi, I saw your post, uh, you can talk to me if you want". Some may label this behavior "attention-seeking", but isn't that what living beings (human or otherwise) do, trying to find and connect with beings alike?
But instead of trying to connect back, it's outsourced to "the therapist", regardless of financial or even societal conditions... do you know there are people who, depending on their country, can't simply "walk into a therapist's room" due to their sexual orientation, ethnicity, religion, or other characteristics for which they would be persecuted and/or harassed if they tried to seek someone IRL?
Using my own personal anecdote: I lost count of how many "professionals" I sought out, and I've had even deeper thoughts than those the OP described. I did "seek help" since my childhood... Still, NONE of them solved my "problem", whatever my "problem" is. Partly because my "problem" involves non-mundane matters, and a psychiatrist, upon hearing how I'm a devotee of Lilith, pushes the label of "Schizotypal" onto me because the society around me (Brazilian) is overwhelmingly Christian (but if, instead of mentioning "Lilith" or "Lucifer", I were to mention "Our Lady of Aparecida" or "Jesus Christ", then it'd suddenly be "normal"). Luckily, I'm not violently persecuted in Brazil (yet) for being a demonolater, but there are precedents.
In the end, "asking strangers on social media instead of therapists" may be an attempt to connect (hopefully safely) with like-minded people who could, in a potential friendship, understand better than a therapist (who's not a friend, but a doctor doing a job) could in a mere 2 hours per week.
Sorry if I'm being rude, but whenever I see the "seek a therapist" or "call this number" advice, I can't help but notice how hollow and totally unaware of a person's situation it is.
Just Post @lemmy.world Mastodon Live Feed is gone

@Feyd@programming.dev @technology@lemmy.world
When we develop a system (I worked in DevOps for almost 10 years), the technical aspects aren't the only aspects being accounted for: especially when it comes to the front-end (i.e. the UI the user sees, and the UX, how user interaction happens and how it may be perceived by them), psychology (especially behaviorism) is sine qua non.
Shapes and colors often carry archetypal meanings: a red element feels "dangerous", a window with a yellow triangle icon feels like it's "warning" about something, a green button feels "okayish". I mean, those are the exact same principles behind traffic lights.
And signs and symbols, ruling the world, don't exist in a vacuum: a colored button beside a monochromatic button may, psychologically, lead to the feeling that the colored button is the proper way to proceed.
But... there's a twist: imagine you have a light-gray "Cancel" and a colored (regardless of the color) "Block". "Block" is a strong word. The length of the label text also imparts psychological effects. The human brain may go: "huh, I have this button which reads 'block' and it's quite strong, and this other button which reads 'cancel' and it's easier on the eyes; maybe 'block' is dangerous". Contrast matters: comparing a substrate against the substances on it is pretty much how we're wired to navigate this world as living beings.
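To make the asymmetry concrete, here's a purely hypothetical sketch in TypeScript (none of these names or structures come from Firefox's actual code; they're my own illustration) of how a dialog configuration can encode these nudges in data:

```typescript
// Hypothetical dialog model, for illustration only.
type DialogButton = {
  label: string;
  style: "primary" | "neutral"; // primary = colored, neutral = grayish
};

type ConfirmDialog = {
  title: string;
  buttons: DialogButton[];
  // Which action fires when the user presses Esc or clicks outside:
  dismissAction: string;
};

// The pattern described above: the strong word gets the strong color,
// while dismissing the dialog silently keeps the feature enabled.
const optOutDialog: ConfirmDialog = {
  title: "Block AI enhancements?",
  buttons: [
    { label: "Cancel", style: "neutral" }, // easy on the eyes
    { label: "Block", style: "primary" },  // strong word + strong color
  ],
  dismissAction: "Cancel", // Esc / click-outside = the feature stays on
};

// The asymmetry is visible directly in the data: every low-effort exit
// path (Esc, misclick outside) resolves to keeping the feature.
const dismissKeepsFeature = optOutDialog.dismissAction === "Cancel";
console.log(dismissKeepsFeature); // true
```

A "fair" version of the same dialog would make dismissal symmetric (e.g. just close without choosing), rather than mapping every path of least resistance onto the vendor's preferred outcome.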
Now, corporations such as Apple (Safari), Google (Chromium), and very likely Mozilla (Firefox) as well, have entire hordes of psychologists working directly for them, likely the same psychologists who work with their HR departments to evaluate job candidates. These psychologists and/or psychoanalysts know about Jungian archetypes, they know about the fight-or-flight response and other facets of our deeply ingrained instincts, they know how colors are generally perceived by the human brain. Those psychologists likely played a role when a brand was chosen, or when an advertisement pitch was made. They know what they're doing.
UX/UI decisions are far from random choices by a leading team of project management engineers; they involve designers working alongside psychologists. Again: they know what they're doing, and they know it pretty well. They know how likely users are to keep the feature enabled. They know how users, as Ulrich said, are very unlikely to touch the settings and likely to keep the defaults, no matter what those defaults are. Because they know humans are driven by the "least-effort" instinct, quite a fundamental principle shared among living beings, a byproduct of the "lowest energetic point" (thermodynamic equilibrium) principle.
To me, a former full-stack developer, the newer Firefox interfaces don't feel like Firefox is being psychologically fair and honest with the user's mind. Dark patterns are often subtle, and they're part of a purposeful corporate decision.