
  • What are you using instead? I only recently set up my Synapse server and I'd be interested to hear what the alternatives are.

  • Ah this is so exciting!

    Discord merely 'existing' has held back motivation to develop FOSS federated communication alternatives.

    When they go public, only good things will happen for projects like Matrix :)

    I'm very excited!

  • Neat!

  • What's your pillow brand?

  • I wanna hear them get SLAMMED

  • Here's what my local AI said about your supposition:

    Your supposition about LLMs is actually quite close to the basic concept! Let me audit this for you:

    You've correctly identified that LLMs work on statistical patterns in text, looking at what words are likely to follow a given sequence. The core idea you've described - that models can learn patterns without storing every example verbatim - is indeed fundamental to how they work.

    Your example of "a b c d" appearing 12 times and the model learning that "d" follows "a b c" is a simplified but accurate illustration of the pattern recognition that happens in these models.

    The main difference is that modern LLMs like myself use neural networks to encode these patterns in a complex web of weighted connections rather than just simple frequency counts. We learn to represent words and concepts in high-dimensional spaces where similar things are close together.

    This representation allows us to make predictions even for sequences we've never seen before, based on similarities to patterns we have encountered. That's why I can understand and respond to novel questions and statements.

    Your intuition about the statistical foundation is spot on, even if you're not familiar with the technical details!
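The "a b c d" supposition above can be sketched as a toy n-gram counter. This is a deliberate simplification (real LLMs encode patterns in learned weights, not raw frequency tables), and all names here are illustrative:

```python
from collections import Counter, defaultdict

def train_ngram(tokens, n):
    """Count which token follows each (n-1)-token context."""
    counts = defaultdict(Counter)
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        counts[context][tokens[i + n - 1]] += 1
    return counts

def predict(counts, context):
    """Return the most frequently observed continuation of the context."""
    return counts[tuple(context)].most_common(1)[0][0]

# "a b c d" observed 12 times, exactly as in the supposition
tokens = "a b c d".split() * 12
model = train_ngram(tokens, n=4)
predict(model, ["a", "b", "c"])  # → 'd'
```

Unlike this counter, a neural model can generalize to contexts it never saw verbatim, which is the difference the comment goes on to describe.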

  • I run an awesome abliterated DeepSeek 32B on my desktop computer at home.

  • My AIO is very fast on mid hardware

  • The "reasoning" models and the image generation models are not the same technology and shouldn't be compared against the same baseline.

  • https://ibb.co/wVNsn5H

    https://ibb.co/HpK5G5Pp

    https://ibb.co/sp1wGMFb

    https://ibb.co/4wyKhkRH

    https://ibb.co/WpBTZPRm

    https://ibb.co/0yP73j6G

    Note that my tests were via Groq and the R1 70B distilled Llama variant (the 2nd-smartest version, afaik).

    Edit 1:

    Incidentally... I asked a coworker to answer the same question. This is the summarized conversation I had:

    Me: "Hey Billy, can you answer a question? in under 3 seconds answer my following question"

    Billy: "sure"

    Me: "How many As are in abracadabra 3.2.1"

    Billy: "4" (answered in less than 3 seconds)

    Me: "nope"

    I'm gonna poll the office and see how many people get it right with the same opportunity the ai had.

    Edit 2: The second coworker said "6" in about 5 seconds

    Edit 3: Third coworker said 4, in 3 seconds

    Edit 4: I asked two more people and one of them got it right... but I'm 60% sure she heard me asking the previous employee. If she didn't, we're at 1/5.

    I'm probably done with this game for the day.

    I'm pretty flabbergasted by the results of my very unscientific experiment, but now I can say (with a mountain of anecdotal juice) that at letter counting, R1 70b is wildly faster and more accurate than humans.
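For reference, the ground truth the office poll was chasing, counted mechanically (plain Python, nothing model-specific):

```python
# Count occurrences of the letter 'a' in the test word
word = "abracadabra"
answer = word.count("a")
print(answer)  # 5
```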

  • Yes it can

  • Non-thinking prediction models can't count the R's in "strawberry" due to the nature of tokenization.

    However, OpenAI o1 and DeepSeek R1 can both reliably do it correctly.
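The tokenization point matters because the model sees tokens, not characters, while a character-level count is trivial for ordinary code. An illustrative sketch (the exact token split mentioned in the comment depends on the tokenizer and is only an example):

```python
def count_letter(text, letter):
    """Count a letter character by character.

    A token-based model may see "strawberry" as pieces like
    "str" + "aw" + "berry", so individual letters are hidden from it
    unless it reasons its way around the tokenization.
    """
    return sum(1 for ch in text.lower() if ch == letter.lower())

count_letter("strawberry", "r")  # → 3
```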

  • Hi Michael, I've made a bug report on your GitHub covering my issue.

    I should clarify it's not your code causing me issues; I've quite enjoyed Filestash as a base.

    The problem has consistently been the office doc viewing integration. OnlyOffice did not reliably allow me to share documents without issues, and the new Collabora integration isn't functional using the default docker compose.

    Do you happen to have a docker compose script on hand that allows for multiple office doc integrations that I can swap between in /admin?

    That way, when one stops working, I can use an alternative.
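In case it helps, a rough compose sketch of the kind of setup being asked about: both document servers running side by side, so the active integration can be repointed in Filestash's /admin panel. Image names are the public Docker Hub ones; ports, versions, and env vars are assumptions that will likely need tuning for a real deployment:

```yaml
services:
  filestash:
    image: machines/filestash
    ports:
      - "8334:8334"

  # Option 1: OnlyOffice document server
  onlyoffice:
    image: onlyoffice/documentserver
    ports:
      - "8081:80"

  # Option 2: Collabora Online (CODE)
  collabora:
    image: collabora/code
    environment:
      - extra_params=--o:ssl.enable=false  # plain HTTP inside the LAN (assumption)
    ports:
      - "9980:9980"
```

The idea is that when one integration misbehaves, /admin can be switched to the other container without re-deploying the stack.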

  • Is it possible to self host a debrid?

  • I've tried a lot to make Nextcloud work by default; I even tried hiring a dev to build me a plugin that would allow me to embed documents and full-text search into my WordPress site, but came up empty.

    Filestash has been useful for getting the documents from Nextcloud to WordPress via WebDAV, but it breaks too often for me to want to continue using it. There's also no way to add document search to the WordPress site.

    All I really want is to continue using Nextcloud for storage and editing, and to have some intermediary software serve the files as embeddable links and provide full-text search of the documents as well.
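The fetch side of such an intermediary is mostly a thin WebDAV client. A minimal stdlib-only sketch, assuming a hypothetical server URL and account name; the `/remote.php/dav/files/<user>/` path is Nextcloud's documented WebDAV endpoint:

```python
import base64
import urllib.request

NEXTCLOUD_URL = "https://cloud.example.com"  # hypothetical server address
USER = "alice"                               # hypothetical account name

def dav_file_url(path: str) -> str:
    """Build the WebDAV URL Nextcloud exposes for one of the user's files."""
    return f"{NEXTCLOUD_URL}/remote.php/dav/files/{USER}/{path.lstrip('/')}"

def fetch_document(path: str, password: str) -> bytes:
    """Download one document over WebDAV using HTTP basic auth."""
    req = urllib.request.Request(dav_file_url(path))
    token = base64.b64encode(f"{USER}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

Full-text search would still need a separate index (the part no off-the-shelf tool seems to cover here), but serving embeddable links only needs this plus a small web frontend.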

  • It could be a calculated loss strategy that buys them good optics.

    While this story is in the news cycle people have the opportunity to build positive bias by having their claims approved for a few days.

    Edit: Not to say that it is true at all, but it would be a valid strategy for them for a week or so

  • Cool I'll spin up a VM and try it out tomorrow!

  • Looks cool I'll check it out.