Posts: 0 · Comments: 18 · Joined: 3 yr. ago

  • “Oh wow, old school police were racists… glad they aren’t anymore!”

    Whoosh

    You

  • If you want a fun game with great storytelling, BOTW wins for me: unlocking all of the story required exploration and learning the map in a way that TOTK ignored.

    If you want more robust fighting mechanics, a world that’s 2x bigger, and a sandbox creative mode for the last 1/3 of the game, TOTK wins.

  • That AI was trained on absolute mountains of data that wasn’t ethically gained, though.

    Just because a diamond ring is assembled by a local jeweler doesn’t mean the diamond didn’t come from slave labor in South Africa.

  • Yes, I do think people posting their “artwork” in ai subs are dumb. And I use AI all day where it excels at solving business problems, pattern recognition and outlier detection. But using gen AI to mask lack of creativity or talent is a scourge on humanity.

  • And subbed.

  • Personally, if I see AI content I block the user that posted it. If a community is all about AI, I block the community. I want to see content from people that have actual talent or something intelligent to contribute.

  • We had to write a classifier for web traffic at a prior employer, using known scraper IPs as our training set, and Chrome on Linux got us over 70% of the way there. A sizable number of bots are just a $5-a-month Linux-based VPS running Selenium with the Chrome engine.
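
    As a rough sketch of what that kind of classifier can look like (the toy data, feature names, and scikit-learn choices here are mine, not the actual pipeline): label requests from known scraper IPs as bots, turn the OS/browser parsed out of the user-agent string into features, and fit a standard model.

    ```python
    # Illustrative only: classify requests as bot/human from user-agent features,
    # with labels coming from a list of known scraper IPs.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    requests = pd.DataFrame({
        "ip":      ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"],
        "os":      ["Linux",    "Windows",  "Linux",    "macOS"],
        "browser": ["Chrome",   "Firefox",  "Chrome",   "Safari"],
    })
    known_scraper_ips = {"10.0.0.1", "10.0.0.3"}      # the labeled training set
    requests["is_bot"] = requests["ip"].isin(known_scraper_ips).astype(int)

    X = pd.get_dummies(requests[["os", "browser"]])   # one-hot encode UA features
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, requests["is_bot"])

    # Score a new request that fits the cheap-VPS profile described above.
    new = pd.get_dummies(pd.DataFrame({"os": ["Linux"], "browser": ["Chrome"]}))
    new = new.reindex(columns=X.columns, fill_value=0)
    print(clf.predict_proba(new))   # Chrome-on-Linux comes back heavily bot-weighted
    ```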

  • And scrapers/bots. These make up a staggering amount of web traffic and Linux dominates server land.

  • Yep. “What’s the most interesting project you’ve been a part of” is my favorite. Same vein, it opened the door to so many follow-ups.

    So often it’s “how do you translate temporal data for a random forest model” and then seeing the deer-in-headlights look as I have to explain the word temporal, and then how feature selection for machine learning actually works.

    They are literally only taught the Python code now, with no explanation of why, how, or when certain tools are appropriate. Real “Bang on a nail with a screwdriver long enough” level education.
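
    For what it’s worth, the “translate temporal data” answer I’m fishing for is mundane, roughly the sketch below (the column names and toy series are mine, just for illustration): a random forest has no notion of time, so timestamps have to become ordinary columns, calendar parts plus lagged values, and feature importances are one rough lens on which of those to keep.

    ```python
    # Illustrative only: turning a time series into columns a random forest can use.
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    ts = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01", periods=60, freq="D"),
        "value": range(60),
    })

    # Calendar decomposition: the model just sees these as numeric features.
    ts["day_of_week"] = ts["timestamp"].dt.dayofweek
    ts["month"] = ts["timestamp"].dt.month

    # Lag features: yesterday's and last week's value as predictors.
    ts["lag_1"] = ts["value"].shift(1)
    ts["lag_7"] = ts["value"].shift(7)
    ts = ts.dropna()

    features = ["day_of_week", "month", "lag_1", "lag_7"]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(ts[features], ts["value"])

    # A crude first cut at feature selection: look at what the model leaned on.
    print(dict(zip(features, model.feature_importances_.round(3))))
    ```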

  • As an employer who hires folks in the data science field, I’ve grown more disappointed in the job-readiness of recent college graduates every year for the last decade. At this point I’d prefer a resume to say “watched 100 hours of YouTube videos about data science” over a master’s in the field.

    And these poor people have $100k in student loan debt with no marketable job skills, competing against tens of thousands of other recent grads with no marketable job skills. College has created a lose-lose environment.

    No wonder enrollment is dropping: the cost of the education is absolutely not worth it, and people are starting to see it.

  • I suddenly developed a theory that GPT and the like are popular because people don’t know how to craft a google (the noun, not the company) search.

  • Dealing with this right now. Dog is super cute. It is still a terrible decision for my family, and that’s not the dog’s fault.

  • I think this supports his argument. Having to research desktop environments to decide which is optimized for the potential problems a new user may face, then finding a distro that packages that DE, is quite frankly too much for the average user.

    I’d argue between 3% and 5% of PC users are willing to research and experiment to find the flavor of Linux that truly works for them.

    Linux has come a long way; I still remember using Gentoo as a daily driver and seeing Linux cross 1% of desktop share. But the average desktop user doesn’t know the difference between a kernel and a colonel, and they don’t want to.

  • If LLMs were accurate, I could support this. But at this point there’s too much overtly incorrect information coming from LLMs.

    “Letting AI scrape your website is the best way to amplify your personal brand, and you should avoid robots.txt or use agent filtering to effectively market yourself. -ExtremeDullard”

    isn’t what you said, but is what an LLM will say you said.

  • I used to be in credit risk for a very large stock market company.

    Calling the bottom of the market is the same as betting big and getting 21 in blackjack.

    Super cool when it happens, but not skill. The number of grown men I had to hear crying because they were dollar cost averaging down to the bottom until they went broke still disturbs me.

    I’m happy this worked for you, but it was not skill.

  • Lots of boring applications that are beneficial in focused use cases.

    Computer vision is great for optical character recognition: think scanning documents to digitize them, depositing checks from your phone, etc. There are also some good computer vision use cases for scanning plants to see what they are, facial recognition for labeling the photos on your phone, etc.

    Also some decent opportunities in medical research with protein analysis for development of medicine, and (again) computer vision to detect cancerous cells, read X-rays and MRIs.

    Today all the hype is about generative AI for content creation, enabled by Transformer technology, but it’s basically just version 2 (or maybe more) of Recurrent Neural Networks, or RNNs. Back in 2015, I remember the essay “The Unreasonable Effectiveness of Recurrent Neural Networks” being just as novel and exciting as ChatGPT.

    We’re still burdened with this comment from the essay’s first paragraph, though.

    Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.

    This will likely be a very difficult chasm to cross, because there is a lot more to human knowledge than thinking of the next letter in a word or the next word in a sentence. We have knowledge domains where, as individuals, we may be brilliant, and others where we may be ignorant. Generative AI is trying to become a genius in all areas at once, and finds itself borrowing “knowledge” from Shakespearean literature to answer questions about modern philosophy, because the order of the words in the sentences is roughly similar given a noun it used 200 words ago.
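
    To make “thinking of the next letter” concrete, here’s a toy version of the generation loop all of these models share, using nothing but character-bigram counts (the corpus is made up; a real RNN or transformer learns far richer statistics, but it still just emits the next token):

    ```python
    # Toy next-character model: count which character follows which, then decode.
    from collections import Counter, defaultdict

    corpus = "to be or not to be that is the question"
    follows = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follows[a][b] += 1

    text = "t"
    for _ in range(20):
        # Greedy decoding: always take the most likely next character.
        text += follows[text[-1]].most_common(1)[0][0]
    print(text)   # degenerates into a loop quickly; real models sample instead
    ```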

    Enter Tiny Language Models. Using the technology from large language models, but hyper-focused on writing children’s stories, appears to show progress through specialization, and could allow generative AI to stay focused and stop sounding incoherent when the details matter.

    This is relatively full circle, in my opinion: RNNs were designed to solve one problem well, then they unexpectedly generalized well, and the hunt was on for the premier generalized model. That hunt advanced the technology by enormous amounts, and now that technology is being used in Tiny Models, which are again looking to solve specific use cases extraordinarily well.

    It’s still very TBD what use cases can be identified that add value, but recent advancements do seem ripe to transition gen AI from a novelty to something truly game changing.

  • The example that comes to mind is the Birthday Problem.

    If you are in a room with 22 other people, there is roughly a 22-in-365 chance (about 6%) that one of them shares your birthday. Relatively unlikely. But there is about a 50% chance that some two people in the room share a birthday. Much more likely.
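
    The arithmetic, for anyone who wants to check it (23 people total, you plus 22 others):

    ```python
    # Chance at least one of the 22 others shares *your* birthday (~5.9%,
    # close to the 22-in-365 back-of-the-envelope figure).
    p_yours = 1 - (364 / 365) ** 22
    print(f"{p_yours:.1%}")

    # Chance that *any* two of the 23 people share a birthday (~50.7%).
    p_none = 1.0
    for i in range(23):
        p_none *= (365 - i) / 365
    print(f"{1 - p_none:.1%}")
    ```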