Off-and-on trying out an account over at @tal@oleo.cafe due to scraping bots bogging down lemmy.today to the point of near-unusability.
If everything I've seen in the past says that 1+1 is 4, then sure, I'm going to say that 1+1 is 4, and I'll say it confidently.
But if I've seen sources that disagree, say half of the information I've seen says that 1+1 is 4 and the other half says that 1+1 is 2, then I can expose that uncertainty to the user.
I do think that Aceticon raises a fair point: fully capturing uncertainty probably needs a higher level of understanding than an LLM directly generating text from its knowledge store is going to have. For example, having many ways of phrasing a response will also reduce confidence in the response, even when the phrasings are semantically compatible. Being on the edge between calling an object "white" or "eggshell" will reduce the confidence derived from token probability, even though the two responses are more or less semantically identical in the context of the given conversation.
There's probably enough information available to an LLM to apply heuristics for whether two different sentences are semantically equivalent, but that's not something you could do efficiently with a trivial change.
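A toy sketch of what I mean, with entirely made-up numbers and a hand-written equivalence map standing in for whatever heuristic would actually judge semantic equivalence: probability mass spread across equivalent phrasings makes the single most likely reply look less certain than the model's actual belief is.

```python
# Hypothetical probabilities a model might assign to whole candidate replies.
candidates = {
    "The object is white.": 0.35,
    "The object is eggshell.": 0.30,
    "It looks white to me.": 0.20,
    "The object is blue.": 0.15,
}

# Toy semantic-equivalence heuristic: just a hand-assigned meaning label.
# A real system would need an actual model to make this judgment.
meaning = {
    "The object is white.": "white-ish",
    "The object is eggshell.": "white-ish",
    "It looks white to me.": "white-ish",
    "The object is blue.": "blue",
}

# Naive confidence: probability of the single most likely phrasing.
naive = max(candidates.values())  # 0.35, which looks quite uncertain

# Semantic confidence: pool the probability mass per meaning cluster first.
clusters = {}
for text, p in candidates.items():
    clusters[meaning[text]] = clusters.get(meaning[text], 0.0) + p
semantic = max(clusters.values())  # 0.85, which is fairly confident

print(naive, semantic)
```

Same model, same outputs; only the grouping changes, and the apparent confidence more than doubles.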