
Posts 1
Comments 1132
Joined 3 yr. ago

  • I mean, technically the time the bible implies anal sex is bad is when some guys from Sodom wanted to rape a couple of angels who were disguised as men.

    It strikes me that the lack of consent would be the issue here, yet somehow the takeaway for religious people is that gay bad? I don't e

  • I was surprised at the number of things that just broke when I tried Wayland a couple of months ago, but that's a lot of time to fix bugs and implement missing features.

  • I'll grab us a nice cup of tea and a blanket, and we can reminisce about the days we could get up from the sofa without pulling one muscle or other.

  • This is the final straw that made me install Mint. I was so sick of the constant screens after every update continuously pestering me with shit I didn't want, like "YOU ASKED BEFORE AND I SAID NO, ASK ME AGAIN AND I'LL PUNCH YOU IN THE DICK"

    So I metaphorically did.

  • Adds up

  • But what about second lunch?!

  • Lies! I know you've memorized 23 lines in the Wayward Queen attack up to move 19.

  • G.O.R.P.

  • I'm scared to ask, but is there context to this?

  • Rule

  • Yes.

  • Are you writing The Testicle Monologues? I was captivated - when's the off-Broadway premiere?

  • But then how can you tell that it's not an actual conscious being?

    This is the whole plot of so many sci-fi novels.

  • I'll bite.

    How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?

  • Let me grab all your downvotes by making counterpoints to this article.

    I'm not saying it's not right to bash the fake hype that the likes of altman and alienberg are pushing with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that's 100% spot on.

    But the news article is trying to offer an opinion as if it's a scientific truth, and this is not acceptable either.

    The basis for the article is the supposed "cutting-edge research" that shows language is not the same as intelligence. The problem is that they're referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.

    The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.

    The nature of human intelligence is a much-debated topic, and this doesn't particularly add to the existing theories.

    Even if we accept the authors' views, one might still question whether LLMs are the path to AGI. Obviously many leading researchers in AI have the same question - most notably, Prof LeCun is leaving Meta precisely because he has the same doubts and wants to pursue his research along a different path.

    But the problem is that the Verge article then goes on to conclude the following:

    an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

    This conclusion is a non sequitur. It generalizes a specific point about whether LLMs can evolve into true AGI into an "AI dumb" catchall that ignores even the most basic evidence they themselves cite - like being able to "solve" Go, or play chess in a way no human can even comprehend - and, to top it off, concludes that "it will never be able to" in the future.

    Looking back at the last 2 years, I don't think anyone can predict what AI research breakthroughs might happen in the next 2, let alone "forever".

  • Are you ok?

  • Mostly farting. Also during sex.

  • Same. I don't think he is my brother.

  • The guy who has to pick them up is livid, I'm sure.

  • Amazing

  • I'll make a bold prediction that we won't have 25 months this year either. Maybe next century.

  • CVS style

  • A more serious answer - it depends greatly on where I'm working and what we're doing.

    I've worked in places where we'd receive outsourced work. Usually we'd get fairly detailed instructions about what to do and what to avoid, discussed between our PMs/architects and the client, including, for example, tests that were agreed upon. You were supposed to follow those to the letter, but the most important part was that you needed to deliver quickly because the customer wanted to keep costs to a minimum. "Useless questions" (from their perspective) were seriously frowned upon, so if it wasn't specified, the expected approach was to do whatever was quicker.

    This occasionally led to situations where their QC/UATs would identify issues with their business rules, but as long as what we delivered was compliant with the requirements we received, it would come back to us to be changed (at additional cost, depending on how big the change needed to be).

    Once accepted though, job done - grab your next work item and move on. Months later they could run into a situation like the one with the printer and come back asking for a fix, but very likely that would go into the CR bucket and a quote would be provided.

    Of course, if you're working for a company that actually cares about what it's building, the philosophy is completely different. If I'm working on our products, then I build a good understanding of what I'm working on, and I'm expected to flag any concerns or issues I encounter even before they reach QC.

    That said, I've never heard of a developer ever being criminally charged for anything other than intentional misconduct - like, anywhere in the world. Look at the IBM Queensland Health payroll system fiasco: I'm not sure anyone was even fired, let alone prosecuted.

    Or even the Boeing 737 MAX crashes - how do you build a system that pitches the nose down repeatedly, without limitations? Those guys who worked on the MCAS software would 100% have considered a scenario where an angle-of-attack sensor provides bad data, and the consequences of repeated trim, but alas - 2 planes crashed, 346 people died, and what are the consequences? Some payouts...

  • CVS style

    Look around you and you'll find "unrestricted fields in a public-facing app" (from a practical perspective) everywhere. Shrek's script has what, less than 50k characters? That's nothing - you can fit that in a Facebook post and still have more than enough room left to write a full movie review.

    Where this would likely raise flags is when somebody decides it needs to be printed, but that could be a different team, maybe outsourced, maybe brought in after the main app was developed, maybe it's just some "plug-and-play" system that also handles bulk printing jobs - who knows. Something like the length guard sketched below is usually exactly what's missing.
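
    Purely as an illustration (the names and the 50k cap are my own assumptions, not any real system's API), here's a minimal sketch of the kind of guard I mean:

    ```typescript
    // Minimal sketch of a server-side length guard for a free-text field.
    // Names and the 50k cap are illustrative assumptions, not any real system's API.

    const MAX_FREE_TEXT_CHARS = 50_000; // roughly "a short movie script" worth of characters

    interface ValidationResult {
      ok: boolean;
      reason?: string;
    }

    // Reject input that a downstream consumer (say, a bulk-printing job built by
    // another team) was never designed to handle.
    function validateFreeText(input: string): ValidationResult {
      if (input.length === 0) {
        return { ok: false, reason: "empty input" };
      }
      if (input.length > MAX_FREE_TEXT_CHARS) {
        return { ok: false, reason: `exceeds ${MAX_FREE_TEXT_CHARS} characters` };
      }
      return { ok: true };
    }

    // A Shrek-sized wall of text sails straight through a Facebook-sized limit,
    // so the problem only surfaces once something downstream tries to print it.
    console.log(validateFreeText("Somebody once told me the world is gonna roll me. ".repeat(900)));
    ```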