I use AI as a tool. AI should be a tool to help with a job, not to take jobs. Same as a calculator. Yep, people will be able to code faster with AI’s help, so that might mean less demand, at least in IT. But you still gotta know exactly what prompt to ask.
LLMs != AI
LLMs are a strict subset of AI. Please be a bit more specific about what you hate about the wide field of AI. Otherwise it’s almost like saying you hate computers, because they can run applications that you don’t like.
People are lazy when they’re using calculators!
I used to work in a software architecture team that used AI to write retrospectives and plans for upcoming projects, and everything needed to have a positive spin: it sounds good but means nothing.
Extra funny when I find out people use AI to summarize it. So the comical cycle of bullet points to text and back again is real.
I’d had enough of working at the company when my team was working on the new “fantastic” platform, cutting corners to reach the deadline on something that will not be used by anyone… and it’s being built for the explicit purpose of making a better development and working environment.
How about AI that runs as part of a SaaS app? That’s what I think about when I think of commercial uses…
It’s completely clownshit to think that you’re going to be able to differentiate AI content from the real thing within 2 years. Maybe less than that. Veo2 is insane.
Using AI is telling people they shouldn’t care about your IP because you clearly don’t care about theirs when it passes through the AI lens.
Stop making using AI sound based
I use AI every day in my daily work; it writes my emails, performance reviews, project updates, etc.
…and yeah, that checks out!
Meanwhile, Terence Tao: https://youtube.com/watch?v=zZr54G7ec7A
So… about his AI generated picture beside his name…
Just because it’s generic doesn’t mean it’s AI generated
I’ve been using a cartoonish profile picture for my work emails, teams portrait and other communications for many years. There is almost no way to tell that kind of icon apart from AI generated icons at that size anyway.
And even if it was, that’s not the point of the conversation. Fixating on that is such bad faith it betrays a defensiveness about AI generated content, so it’s particularly important that someone like you get this message, let me reiterate clearly:
I have a role of responsibility, I hire people and use company budget to make decisions on other companies and products we’ll be paying for. When making these decisions I don’t look at the email signatures of people or the icons they use. I look at their presentation materials and if that shit is AI generated I know immediately it’s just a couple people pretending to be an agency or company, or some company that doesn’t quality-control their slides and presentation decks. It shows laziness. I would rather go with a company that has data and specs rather than lean on graphics anyway. So if those graphics are also lazy AF that’s a hard pass. Not my first rodeo, I’ve learned to listen to experience.
I was pointing out the irony, nothing more. Not every comment needs to be read too deeply into. Even my avatar picture is AI generated, it was from those blockchain AI generated reddit ones years ago.
I don’t think anyone thought it was that funny or interesting of a comment, it reads the same kind of petty AI-bro comment that people make who are absolutely hooked on blindly defending AI in all capacities.
Sure it did… and judging from your previous rant, I’m sure it has nothing to do with you maybe reading much deeper into things that aren’t really there.
Ironically, an LLM could’ve made his post grammatically correct and understandable.
His post is fairly grammatically correct and quite understandable.
If you had a hard time understanding the point being made in that post, you could probably be replaced by AI and we wouldn’t notice the difference.
I think this problem will get worse, because many websites that are used for “your own research” will lose the human traffic that watches ads, and more bots will just scrape their data, reducing the motivation to keep those websites running. Most people just take the path of least resistance, so AI search will be the default soon, I think.
Yes, I hate this timeline
Eventually they will pay AI companies to integrate advertisements into the LLMs’ outputs.
Omg, I can see it happening. Instead of annoying, intrusive ads, this new type will be so natural it’ll feel like a close friend suggesting it.
More dystopian future. Yes we need it /s
It’s so annoying because you’re correct. I’m finding it harder and harder to use a search engine for things. Hell, the web in general is becoming unusable. It’s all shit.
rant
here’s my personal gripe. Imagine looking for a solution to a problem and finding a reddit thread in your search engine of choice. The wording in the description seems to match your exact issue and it’s one of the first results. You click on it and…
What I’d do to that smug fuck if I ever got my hands on him aside, me behind a VPN and a deleted reddit account can’t find the answer. But I can go to chatgpt without logging in to answer my fucking queries, and it does it more efficiently than looking at random sites for things, which most of the time are just sloppy, shitty mirrors of Stack Overflow or Quora that BLATANTLY copy content from those websites and just use slightly better SEO…
Also, reddit did this because of the API shit they pulled. AND THEN THEY SOLD FUCKING ACCESS TO GOOGLE. So me, a fucking person (as far as I know anyway), isn’t privileged enough to view the fucking crumbs of information that google just fucking gobbles on a daily basis. Fuck that. This is a move that makes me feel less important than my fucking roomba.
I just feel so mad. I want the old web back. I just want my duckduckgo to work well.
Sorry for the rant.
I ordered some well-rated concert ear protection from the maker’s website. The order waited weeks to ship after a label was printed and likely forgotten. I went looking for a way to call or contact a human there; all they had was a self-described AI chat robot that just talked down to me condescendingly. It simply would not believe my experience.
I eventually got the ear protection but I won’t be buying from them again. Can’t even staff some folks to check email. I eventually found their PR email address but even that was outsourced to a PR firm that never got back to me. Utter shit, AI.
Never thought about ear protection for concerts, sounds cool. I’ll have to look into other options though, if anyone has any recommendations, let me know
A number of companies make “tuned” ear plugs that let some sound through with a desired frequency curve, but reduce SPL to safe levels. I’ve used Etymotic, which sound great, but I personally like a little more reduction; Alpine, which I thought had enough reduction but too much coloring; and I settled on Earpeace, for like $25 online. Silicone, re-usable, easy to clean, and they come with three filters to swap in or out depending on your needs / tastes.
I’m glad you mentioned the company directly as I also want to steer clear of companies like this.
That would’ve been such an easy disputed charge; just get the plugs somewhere else. I’m not wasting a second on something like that: just telling my credit card company they didn’t uphold their end of the deal, and that’s that. I will lose hearing out of spite if this happened to me, because I’m an idiot.
I’ll waste a few moments. It becomes a puzzle. Assuming you managed to make it through the maze, you retrospectively analyze: where would 99% of the country have dropped out of the flow and given up?
Then it’s an email to the attorney general if necessary! (I mean that’s been rare but when something is egregious)
🤓
Absolutely. Cc dispute is an under-used method of recourse.
I will lose hearing out of spite if this happened to me
Genuinely admire your self awareness
I’ve lost hearing for stupider reasons. Spite seems downright reasonable to me.
That’s really good to know about these things. They’ve been on sale through Woot. I guess there’s a good reason for that.
Oh man, sad that’s the customer service cause I deeply love my loops. I was already carrying them with me everywhere I went so I grabbed a pill keychain thing and attached them to my keys so I’d never forget to grab them.
Yeah this happened back earlier this year. I had lost a pair from a purchase years ago and replaced them. Guessing they are laying off people/support contracts like so many stupid business owners. I was sure that my order would be stuck in limbo forever after the experience, but they eventually showed up. Never again.
Wow, that’s extremely disappointing. I had a really positive experience with them a few years ago when I wanted to exchange what I got (it was too quiet for me), and they just sent me a free pair after I talked to an actual person on their chat thing. It’s good to know that’s not how they are anymore if I ever need to replace them.
Alright I don’t like the direction of AI same as the next person, but this is a pretty fucking wild stance. There are multiple valid applications of AI that I’ve implemented myself: LTV estimation, document summary / search / categorization, fraud detection, clustering and scoring, video and audio recommendations… "Using AI” is not the problem, “AI charlatan-ing” is. Or in this guy’s case, “wholesale anti-AI stanning”. Shoehorning AI into everything is admittedly a waste, but to write off the entirety of a very broad category (AI) is just silly.
I have ADHD and I have to ask A LOT of questions to get my brain around concepts sometimes, often cause I need to understand fringe cases before it “clicks”. AI has been so fucking helpful for being able to just copy a line from a textbook and say “I’m not sure what they mean by this, can you clarify” or “it says this, but also this, aren’t these two conflicting?”, and having it explain has been a game changer for me. I still have to be sure to keep my bullshit radar on, but that’s solved by actually reading to understand and not just taking the answer as is. In fact, scrutinizing the answer against what I’ve learned and asking further questions has felt like it’s made me more engaged with the material.
Most issues with AI are issues with capitalism.
Congratulations to the person who downvoted this
They use a tool to improve their life?! Screw them!
Here’s hoping over the next few years we see little baby-sized language models running on laptops entirely devour the big tech AI companies, and that those models are not only open source but ethically trained. I think that will change this community here.
I get why they’re absolutist (AI sucks for many humans today) but above your post as well you see so much drive-by downvoting, which will obviously chill discussion.
I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.
Also, search just seems like overkill. If I type in “population of london”, I just want to be taken to a reputable site like wikipedia. I don’t want a guessing machine to tell me.
Other use cases maybe. But there are so many poor uses of AI, it’s hard to take any of it seriously.
I don’t think AI is actually that good at summarizing. It doesn’t understand the text and is prone to hallucinate. I wouldn’t trust an AI summary for anything important.
This right here. Whenever I’ve tried using an LLM to summarize, I spent more time fact-checking it (and finding the inevitable misunderstandings and outright hallucinations—they’re always there for anything of substance!) than I’d spend writing my own damned summary.
There is, however, one use case I’ve found where LLMs work better than alternatives … provided you do due diligence. To put it bluntly, Google Translate and its ilk of similar slop from Bing, Baidu, etc. suck. They are god-awful at translation of anything but straightforward technical writing or the most tediously dull prose. LLMs are far better translators (and can be instructed to highlight cultural artifacts, possible transcription errors, etc.) …
… as long as you back-translate in a separate session to check for hallucination.
Oh, and Google Translate-style translators really suck at Classical Chinese. LLMs do much better (provided you do the back-translation check for hallucination).
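The back-translation check described above can be sketched mechanically. Everything here is a toy: `to_german` / `to_english` are hypothetical stand-ins for real LLM translation calls, and character-level similarity is only a crude proxy for a human comparing meanings in a separate session.

```python
from difflib import SequenceMatcher
from typing import Callable

def back_translation_check(source: str,
                           forward: Callable[[str], str],
                           backward: Callable[[str], str],
                           threshold: float = 0.6) -> tuple[str, bool]:
    """Translate, translate back, and flag likely hallucinations."""
    translated = forward(source)
    round_trip = backward(translated)
    # Crude character-level similarity; real use means running the
    # reverse direction in a fresh session and eyeballing the result.
    similarity = SequenceMatcher(None, source.lower(), round_trip.lower()).ratio()
    return translated, similarity >= threshold

# Toy stand-ins for LLM translation calls (hypothetical).
to_german = lambda s: {"the cat sleeps": "die Katze schläft"}.get(s, "???")
to_english = lambda s: {"die Katze schläft": "the cat sleeps"}.get(s, "???")

translated, ok = back_translation_check("the cat sleeps", to_german, to_english)
print(translated, ok)  # die Katze schläft True
```

If the round trip drifts too far from the source, that’s the signal to distrust the forward translation and try again.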
If I understand how AI works (predictive models), it kinda seems perfectly suited for translating text. That’s also exactly how I have been using it with Gemini: translating all the memes in ich_iel 🤣. Unironically it works really well, and the only ones that aren’t understandable are cultural, not linguistic.
I guess this really depends on the solution you’re working with.
I’ve built a voting system that relays the same query to multiple online and offline LLMs and uses a consensus to complete a task. I chunk a task into smaller, more manageable components and pass those through the system, so one abstract, complex query becomes a series of simpler asks with a higher chance of success. Is this system perfect? No, but I am not relying on a single LLM to complete it. Deficiencies in one LLM are usually made up for by at least one other, so the system works pretty well. I’ve also reduced the possible kinds of queries down to a much more limited subset, so testing and evaluation of results is easier / possible.

This system needs to evaluate the topic and sensitivity of millions of websites. This isn’t something I can do manually in any reasonable amount of time. A human will be reviewing websites we flag under very specific conditions, but this cuts down on a lot of manual review work.
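The chunk-and-vote idea could look something like this minimal sketch. All names and the toy “models” are hypothetical; real backends would be API calls to different online and offline LLMs:

```python
from collections import Counter
from typing import Callable

def consensus(prompt: str, backends: list[Callable[[str], str]]) -> str:
    """Send the same prompt to every backend and keep the majority answer."""
    answers = [backend(prompt) for backend in backends]
    return Counter(answers).most_common(1)[0][0]

def classify_sites(chunks: list[str], backends: list[Callable[[str], str]]) -> list[str]:
    # One abstract, complex task becomes a series of simpler per-chunk asks.
    return [consensus(chunk, backends) for chunk in chunks]

# Toy stand-ins for real LLM backends: two agree, one has a deficiency
# (it flags everything) that the vote compensates for.
model_a = lambda p: "benign" if "weather" in p else "sensitive"
model_b = lambda p: "benign" if "weather" in p else "sensitive"
model_c = lambda p: "sensitive"

sites = ["site about weather forecasts", "site about payday loans"]
print(classify_sites(sites, [model_a, model_b, model_c]))  # ['benign', 'sensitive']
```

Constraining each chunk to a small set of possible answers is what makes the majority vote meaningful; free-form outputs would rarely match exactly.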
When I said search, I meant offline document search. Like "find all software patents related to fly-by-wire aircraft embedded control systems” from a folder of patents. Something like elastic search would usually work well here too, but then I can dive further and get it to reason about results surfaced from the first query. I absolutely agree that AI powered search is a shitshow.
I don’t think AI is actually that good at summarizing.
It really depends on the type and size of text you want it to summarize.
For instance, it’ll only give you a very, very simplistic overview of a large research paper that uses technical terms, but if you want it to compress down a bullet point list, or take one paragraph and turn it into some bullet points, it’ll usually do that without any issues.
Edit: I truly don’t understand why I’m getting downvoted for this. LLMs are actually relatively good at summarizing small, low-context pieces of information into bullet points. They’re quite literally built to predict likely text from an input, so giving them a small amount of text to rewrite or recontextualize plays to one of their biggest strengths. That’s why the technology was originally mostly implemented as a tool to reword small, isolated sections in articles, emails, and papers, before it was improved.
It’s when they get to larger pieces of information, like meetings, books, wikipedia articles, etc, that they begin to break down, due to the nature of the technology itself. (context windows, lack of external resources that humans are able to integrate into their writing, but LLMs can’t fully incorporate on the same level)
But if the text you’re working on is small, you could just do it yourself. You don’t need an expensive guessing machine.
Like, if I built a rube-goldberg machine using twenty rubber ducks, a diesel engine, and a blender to tie my shoes, and it got it right most of the time, that would be impressive, but also kind of a stupid waste, because I could’ve just tied them with my hands.
you could just do it yourself.
Personally, I think that wholly depends on the context.
For example, if someone’s having part of their email rewritten because they feel the tone was a bit off, they’re usually doing that because their own attempts to do so weren’t working for them, and they wanted a secondary… not exactly opinion, since it’s a machine obviously, but at least an attempt that’s outside whatever their brain might currently be locked into trying to do.
I know I’ve gotten stuck for way too long wondering why my writing felt so off, only to have someone give me a quick suggestion that cleared it all up, so I can see how this would be helpful, while also not always being something they can easily or quickly do themselves.
Also, there are legitimately just many use cases for applications using LLMs to parse small pieces of data on behalf of an application better than simple regex equations, for instance.
For example, Linkwarden, a popular open source link management software, (on an opt-in basis) uses LLMs to just automatically tag your links based on the contents of the page. When I’m importing thousands of bookmarks for the first time, even though each individual task is short to do, in terms of just looking at the link and assigning the proper tags, and is not something that takes significant mental effort on its own, I don’t want to do that thousands of times if the LLM will get it done much faster with accuracy that’s good enough for my use case.
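A Linkwarden-style auto-tagger can be sketched like this. To be clear, this is not Linkwarden’s actual code: `fake_llm`, the prompt, and the tag list are all made up for illustration. The one real trick shown is constraining the model’s answer to a known tag set, since models sometimes invent tags:

```python
ALLOWED_TAGS = {"music", "news", "programming", "recipes"}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; matches keywords so the demo runs.
    page_text = prompt.split("\n", 1)[-1].lower()
    return ", ".join(t for t in sorted(ALLOWED_TAGS) if t in page_text)

def tag_link(page_text: str, llm=fake_llm) -> list[str]:
    prompt = (f"Pick tags from {sorted(ALLOWED_TAGS)} for this page, "
              f"comma-separated, nothing else:\n{page_text}")
    raw = llm(prompt)
    # Constrain the answer to the known tag set; models sometimes invent tags.
    return [t.strip() for t in raw.split(",") if t.strip() in ALLOWED_TAGS]

print(tag_link("A blog post about programming in Rust"))  # ['programming']
```

Run once per imported bookmark, “good enough” accuracy on thousands of links beats doing each trivial tagging task by hand.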
I can definitely agree with you in a broader sense though, since at this point I’ve seen people write 2 sentence emails and short comments using AI before, using prompts even longer than the output, and that I can 100% agree is entirely pointless.
Even there it will hallucinate. Or it will get confused by some complicated sentences and reverse the conclusion.
It can, but I don’t see that happen often in most places I see it used, at least by the average person. Although, I will say I’ve deliberately insulated myself a bit from the very AI-bro type of people who use it regularly throughout their day, and mostly interact with people who use it occasionally during research for an assignment, rewriting part of their email, etc. So I recognize that my opinion here might just be influenced by the kinds of uses I personally see.
In my experience, when it’s used to summarize, say, 4-6 sentences of text, in a general-audience readable text (i.e. not a research paper in a journal) that doesn’t explicitly rely on a high level of context from the rest of the text (e.g. a news article relies on information it doesn’t currently have, so a paragraph out of context would be bad, vs instructions on how to use a tool, which are general knowledge) then it seems to do pretty well, especially within the confines of an existing conversation about the topic where the intent and context has been established already.
For example, a couple months back, I was having a hard time understanding subnetting, but I decided to give it a shot, and by giving it a bit of context on what was tripping me up, it was successfully able to reword and re-explain the topic in such a way that I was able to better understand it, and could then continue researching it.
Broad topic that’s definitely in the training data + doesn’t rely on lots of extra context for the specific example = reasonably good output.
But again, I also don’t frequently interact with the kind of people that like having AI in everything, and am mostly just around very casual users that don’t use it for anything very high stakes or complex, and I’m quite sure that anything more than extremely simple summaries of basic information or very well-known topics would probably have a lot of hallucinations.
See, when I have 4-6 sentences to summarize, I don’t see the value-add of a machine doing the summarizing for me.
(Note: the above sentence is literally a summary of about a dozen sentences I wrote elsewhere that contained more details.)
See, when I have 4-6 sentences to summarize, I don’t see the value-add of a machine doing the summarizing for me.
Oh I completely understand, I don’t often see it as useful either. I’m just saying that a lot of people I see using LLMs occasionally are usually just shortening their own replies to things, converting a text based list of steps to a numbered list for readability, or just rewording a concept because the original writer didn’t word it in a way their brain could process well, etc.
Things that don’t necessarily require a huge amount of effort on their part, but still save them a little bit of time, which in my conversations with them, seems to prove valuable to them, even if it’s in a small way.
I feel like letting your skills in reading and communicating in writing atrophy is a poor choice. And skills do atrophy without use. I used to be able to read a book and write an essay critically analyzing it. If I tried to do that now, it would be a rough start.
I don’t think people are going to just up and forget how to write, but I do think they’ll get even worse at it if they don’t do it.
Our plant manager likes to use it (Copilot) to summarize meetings. It in fact does not summarize to a bullet point list in any useful way. It breaks the notes into a header for each topic, then bullet points. The header is a brief summary. The bullet points? The exact same summary, but now broken up by sentence into individual points. Truly stunning work. Even better with a “Please review the meeting transcript yourself as AI might not be 100% accurate” disclaimer.
Truly worthless.
That being said, I have a few vision systems using “AI” to recognize product that doesn’t match the pre-taught pattern. It’s very good at this.
This is precisely why I don’t think anybody should be using it for meeting summaries. I know someone who does at his job, and even he only uses it for the boring, never acted upon meetings that everyone thinks is unnecessary but the managers think should be done anyways, because it just doesn’t work well enough to justify use on anything even remotely important.
Even just from a purely technical standpoint, the context windows of LLMs are so small relative to the scale of meetings, that they will almost never be able to summarize it in its entirety without repeating points, over-explaining some topics and under-explaining others because it doesn’t have enough external context to judge importance, etc.
But if you give it a single small paragraph from an article, it will probably summarize that small piece of information relatively well, and if you give it something already formatted like bullet points, it can usually combine points without losing much context, because it’s inherently summarizing a small, contextually isolated piece of information.
I think your manager has a skill issue if his output is being badly formatted like that. I’d tell him to include a formatting guideline in his prompt. It won’t solve his issues but I’ll gain some favor. Just gotta make it clear I’m no damn prompt engineer. lol
I didn’t think we should be using it at all, from a security standpoint. Let’s run potentially business-critical information through the plagiarism machine that Microsoft has unrestricted access to. So I’m not going to attempt to make its use any better. Hopefully if it’s trash enough, it’ll blow over once no one reasonable uses it. Besides, the man’s derided by production operators and non-kool-aid-drinking salaried folk. He can keep it up. Lol
Okay, then self host an open model. Solves all of the problems you highlighted.
Or, you know, don’t use LLMs. That also solves all those problems too, costs less, and won’t hallucinate your way into lawsuits or whatever.
Nobody is a “prompt engineer”. There is no such job, for all practical purposes, and can’t be one given that the degenerative AI pushers change their models more often than healthy people change their underwear.
Right, I just don’t want him to think that, or he’d have me tailor the prompts for him and give him an opportunity to micromanage me.
It’s just a statistics game. When 99% of the stuff that uses or advertises “AI” is garbage, having a mental heuristic that filters those out is very effective. Yes, you will miss the 1% of useful things, but that’s not really an issue for most people. If you need it, you can still look for it.
Yep. AI research has advanced for decades. It’s essentially math. Don’t be mad at math. Be mad at the salesmen lying about their unfinished, unregulated and unsupervised services that should be products but are being served as early access subscriptions to take advantage of the hype and ignorance while it lasts (and the fact that it’s easier to avoid refunds this way).
But what about me and my overly simplistic world views where there is no room for nuance? Have you thought about that?
I use claude to ask it coding questions. I don’t use it to generate my code; I mostly use it to do a kind of automated code review to look for obvious pitfalls. It’s pretty neat for that
I don’t use any other AI-powered products. I don’t let it generate emails, I don’t let it analyze data. If your site comes with a built-in LLM-powered feature, I assume:
- It sucks
- You are a con artist
AI is the new Crypto. If you are vaguely associated with it, I assume there’s something criminal going on
I use AI to script code.
For my minecraft server.
I rely on expert humans to do tech work for my team and their tools.
I am not anti-AI per se, I just know what works best and what leads to the best results.
AI is the new Crypto. If you are vaguely associated with it, I assume there’s something criminal going on
Nothing to add here. I just like this so much that I want it duplicated.
I mostly use it to do a kind of automated code review
Same here, especially when I’m working with plain JS. Just yesterday I was doing some benchmarking and it fixed a variable reference in my code unprompted by commenting the small fix as part of the answer when I asked it something else. I copy-pasted and it worked perfectly. It’s great for small scope stuff like that.
But then again, I had to turn off Codeium that same day when writing documentation because it kept giving me useless and distracting, paragraph-long suggestions restating the obvious. I know it’s not meant for that, but jeez, it reminded me so much of Bing’s awfully distracting autocomplete.
I’ve never used a technology like this before: when it works, it feels like you’re gliding on ice, and when it doesn’t, it feels like ice skating on a dirt road.