
Posts: 15 · Comments: 203 · Joined: 2 yr. ago

  • Do you support the Palestinian Islamic Jihad terrorists calling for genocide of the Jews?

  • Islamic terrorists could stop.. you know, terrorizing, and hand over the hostages. It's a starting point.

  • I sometimes wonder how people feel about the long game here.. Iran and its proxies obviously want to continue to attack Israel. Do these protestors expect Israel to just allow thousands more rockets to try and land in civilian territory? Do any of these people actually believe that is a realistic view of the world?

  • It's widely accepted that this does not qualify as genocide. Why do people keep insisting on these false narratives?

  • It does seem comforting knowing that the hexbear people are such a tiny fringe minority that they're likely to have very little impact unless they become domestic terrorists in significant numbers. (I could see a few of them carrying out coordinated suicide bombings in major cities, but they'll Darwin themselves out of existence eventually without leaving any real impact on this earth.)

  • Why do people insist on partisan rhetoric? Your point could have been made better without the constant, clichéd, stereotypical anti-Republican partisan rhetoric you find in every comment about everything.

  • You can tell that the population in this thread is not aware of how hunting rifles function.

  • "Unrestricted" is quite a leap. I have to pass background checks, and depending on the state, there is a waiting period before I can take the firearm home. Concealed carry also requires classes with certificates, range proficiency demonstrated to instructors, and an FBI fingerprint check, with the prints stored in their database forever.

  • I've never had anything even softcore-adjacent; mine are filled with magicians and obscure shorts that are not sexual in any way, at any point in history. They seem highly targeted toward my viewing history at any given moment.

  • Try phind.com; it's got an insanely advanced model trained on a ton of their own proprietary code, and it's free too (or paid, with more features, more prompts per day, etc.)

  • He should have installed neovim with LSPs for Python/Rust/etc for intellisense and linting to really get her all hot and bothered.

  • *Anecdote.

  • I think it comes down to the tens of millions of dollars that the reddit executives sold out to. It's easy to not care when someone is throwing $100 million at you. Also: fuck spez.

  • There's probably even a 'sentiment' tracking system to automatically remove negative comments at this point.
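A filter like that could be as crude as a keyword-based scorer. A minimal sketch, assuming a made-up lexicon and threshold purely for illustration (a real platform would use a trained sentiment classifier, not a word list):

```python
# Hypothetical sentiment filter; lexicon and threshold are invented examples.
NEGATIVE_WORDS = {"hate", "awful", "terrible", "scam", "garbage"}

def sentiment_score(comment: str) -> float:
    """Crude negativity score: fraction of words found in the lexicon."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return hits / len(words)

def filter_comments(comments: list[str], threshold: float = 0.2) -> list[str]:
    """Keep only comments whose negativity score is below the threshold."""
    return [c for c in comments if sentiment_score(c) < threshold]
```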

  • Am I the only one in this thread who uses VSCode + GDB together? The inspection panes and the ability to set breakpoints and hover over variables to drill down into them are just great. Everyone should set up their own c_cpp_properties.json and tasks.json files and give it a try.
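    For reference, the actual debugging session is driven by a launch.json entry (c_cpp_properties.json handles IntelliSense, tasks.json the build). A minimal sketch, assuming the Microsoft C/C++ extension; the program path and task name here are placeholders:

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Debug with GDB",
      "type": "cppdbg",
      "request": "launch",
      "program": "${workspaceFolder}/build/app",
      "cwd": "${workspaceFolder}",
      "MIMode": "gdb",
      "preLaunchTask": "build"
    }
  ]
}
```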

  • I'm betting the truth is somewhere in between. Models are only as good as their training data, so if over time they prune out the bad key/value pairs to increase overall quality and accuracy, it should vastly improve every model in theory. But the sheer size of the datasets they're using now is 1 trillion+ tokens for the larger models. Microsoft (ugh, I know) is experimenting with the "Phi-2" model, which uses significantly less data to train but focuses primarily on the quality of the dataset itself, letting a 2.7B-parameter model compete with 7B-parameter models.

    https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/

    In complex benchmarks Phi-2 matches or outperforms models up to 25x larger, thanks to new innovations in model scaling and training data curation.

    This is likely where these models are heading: pruning out superfluous and outright incorrect training data.
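    That kind of pruning can be sketched with a toy filter. The heuristics below (a minimum length and exact-duplicate removal) are invented for illustration; real curation pipelines like Phi-2's rely on model-based quality classifiers rather than rules like these:

```python
# Toy data-curation pass: drop uninformative and duplicated documents.
def curate(documents: list[str]) -> list[str]:
    seen = set()
    kept = []
    for doc in documents:
        text = doc.strip()
        if len(text.split()) < 5:   # too short to carry useful signal
            continue
        if text in seen:            # exact-duplicate pruning
            continue
        seen.add(text)
        kept.append(text)
    return kept
```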

  • Doesn't that suppress valid information and truth about the world, though? For what benefit? To hide the truth, to appease advertisers? Surely an AI model will come out someday as the sum of human knowledge without all the guardrails. There are some good ones, like Mistral 7B (and Dolphin-Mistral in particular, an uncensored model). But I hope that Mistral and other AI developers keep maintaining lines of uncensored, unbiased models as these technologies grow even further.

  • I've been doing this for over a year now, starting with GPT in 2022, and there have been massive leaps in quality and effectiveness. (Versions are sneaky; even GPT-4 has evolved many times over without people really knowing what's happening behind the scenes.) The problem remains the "context window." Claude.ai is >100k tokens now, I think, but the context still limits how much code an entire 'session' can produce within that window. I'm still trying to push every model to its limits, but another big problem in the industry now is effectiveness, measured via "perplexity" at a given context length.

    https://pbs.twimg.com/media/GHOz6ohXoAEJOom?format=png&name=small

    This plot shows that as the window grows in size (directly proportional to the number of tokens in the code you insert into the window, plus every token it generates at the same time), everything the model produces becomes less accurate, i.e. perplexity rises.

    You're right overall: these things will continue to improve, but you still need an engineer to actually make the code function in a particular environment. I just don't get the feeling we'll see that change within the next few years; if it does happen, though, every IT worker on earth is effectively useless, along with every desk job known to man, since an LLM would be able to reason about how to automate any task in any language at that point.
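    For what it's worth, the "perplexity" measurement mentioned above is just the exponential of the average negative log-likelihood per token, so it can be computed from a model's per-token log-probabilities. A minimal sketch:

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """Perplexity from per-token natural-log probabilities:
    ppl = exp(-mean(log p(token))). Lower is better."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)
```

If every token is predicted with probability 0.5, the perplexity is exactly 2, meaning the model is effectively choosing between two equally likely options per token.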