invalidusernamelol [he/him]

@invalidusernamelol@hexbear.net

Posts: 24
Comments: 803
Joined: 5 yr. ago

  • Not always; PRs usually start as forks unless the person is part of the project and can do their work on a branch.

  • Yeah, there's not even a working branch yet. And she'll need to set up a CI pipeline to keep it synced with the upstream while making sure the AI stuff doesn't get back in. It's already 9 commits behind and nothing has even been done.

  • I think at this point no one really knows the context, it just sounds cool and is better than using a common superlative.

    The reason it kinda fits is just because the root word has been used a lot in religious contexts without a specified ontological basis (since Christianity is implied), which means people are using it that way without the Christian context. That initially makes it confusing, but if you just slot in "Facebook zeitgeist" as your ontological basis, it actually makes sense even if that wasn't the intended use.

  • Art of the Problem did a good video on this recently. Stick with it, since he buries the lede and it only comes back when he shows like 3 minutes of uncut David Graeber. There is a bit of liberal idealism in there, but he's not wrong about how democracy is meant to make direct control through financial markets more difficult.

  • I believe an ontology is just a collection of concepts. So "ontological" is frequently used in relation to the Bible or Catholic ontology.

    X being ontologically Y in that case would mean that in the context of the referenced ontology, X fits in the category of Y.

    Without an ontology specified, the usual fallback is either context or just Christianity. So "this picture is ontologically evil" would mean "this picture is satanic" in the Christian context, or, in the Facebook context, it just distills down to a synonym for "very" or "definitively", since it's meant to reinforce that the Facebook ontology would determine that the photo is evil.

  • Ontologically is just God/Bible shit. I usually hear it used by pastors and religious folk, so I'm cool with its meaning being changed to be a synonym for "very".

  • That's a comms guy wire; they're usually rated for ~5,000 lbs, which is about exactly what the Cybertruck weighs. I'm mostly impressed that that old pole didn't pop.

  • She's just a German history enthusiast!

  • That new one they're working on looks interesting. Doesn't seem to be a waifu collector at least

  • Oh I know, it's just the syntax part that would be nice. Lisp syntax is great for highly functional stuff whereas it feels kinda forced in JS.

    Like I said, I mostly use Python, so "functional" to me is a comprehension statement (which I think is great syntax), but that type of thing just flows better with syntax specifically designed for it. A quick sketch of what I mean is below.
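
    A minimal illustration (made-up data, nothing from the thread) of why comprehensions read as Python's native functional syntax, next to the equivalent map/filter chain:

    ```python
    words = ["alpha", "beta", "gamma", "delta"]

    # Comprehension: filter and transform in one expression designed for it.
    five_letter_upper = [w.upper() for w in words if len(w) == 5]

    # The same logic in explicit map/filter style, which feels forced in Python.
    five_letter_upper_fn = list(map(str.upper, filter(lambda w: len(w) == 5, words)))

    assert five_letter_upper == five_letter_upper_fn == ["ALPHA", "GAMMA", "DELTA"]
    ```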

  • As someone who's never really had to use JS for anything, man is it a messy language. I use Python mostly, which has its issues, but it at least has the capability of being pretty robust if you care.

    I do wish that Java wasn't the zeitgeist and a Scheme-style language had been used instead...

  • There's also only so much you can do to make Anya Taylor Joy look bad, she's just kinda got that energy.

  • She's fallen so low that she's turned into a cigarette ad!

  • AI is just Tech NAFTA

  • Weird RP for the Chinese bourgeois

  • NLTK just does Chomsky diagrams and tokenizes text based on parts of speech. It's mostly a bunch of hash tables and optimized algorithms, with a simple pretrained machine learning model (VADER) that can do rudimentary sentiment analysis.

    I can see how just jamming text in a pipeline is a simpler solution though, since you need to build the extraction model by hand using NLTK.
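
    To make that concrete, here's a minimal sketch of the NLTK pieces mentioned above (tokenization, POS tagging, VADER); the sample sentence is made up, and the download names can vary slightly between NLTK versions:

    ```python
    import nltk
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    # One-time data downloads (package names may differ across NLTK versions).
    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")
    nltk.download("vader_lexicon")

    text = "NLTK is old, but it is surprisingly fast and robust."

    # Tokenize, then tag parts of speech.
    tokens = nltk.word_tokenize(text)
    print(nltk.pos_tag(tokens))

    # VADER: the simple pretrained model for rudimentary sentiment analysis.
    sia = SentimentIntensityAnalyzer()
    print(sia.polarity_scores(text))  # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}
    ```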

  • That's a fair use case. I just didn't have the patience for it lol. I've always been better at just failing repeatedly until I succeed, which is basically what GPTs do, but instead of me getting the benefit of that process, they get it and then immediately forget.

    Might try the rubber ducking thing at some point, but most of my code is in a field where there aren't really many good examples, and the examples that do exist tend to be awful, so it's pure hallucination. I've seen some stuff colleagues have vibe coded and it gives me the ick/bad code smells.

  • I used to use NLTK back in high school and college for sentiment analysis, and it was usually decently accurate at scale (like checking the average sentiment of a tag) and ran surprisingly fast even on an old laptop. Are the open models as performant as NLTK?
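
    Roughly what I mean by checking a tag's average sentiment, as a made-up sketch (hypothetical posts, not my actual code; assumes the vader_lexicon download from the earlier example):

    ```python
    from nltk.sentiment.vader import SentimentIntensityAnalyzer

    # Hypothetical posts pulled from a tag; not real data.
    posts = ["love this so much", "absolutely awful take", "pretty decent overall"]

    sia = SentimentIntensityAnalyzer()
    # VADER's 'compound' score is normalized to [-1, 1]; its mean gives tag-level sentiment.
    average = sum(sia.polarity_scores(p)["compound"] for p in posts) / len(posts)
    print(f"average tag sentiment: {average:+.3f}")
    ```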