Posts: 4
Comments: 610
Joined: 3 yr. ago

  • Him pressing the button had nothing to do with media literacy. Neither she nor the article bothers to include a picture of the popup or its exact wording, so we have no idea what it even said.

    It's almost like that sentence you quoted continues "I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions". Though I'm sure you know how it ends and conveniently left it out because it doesn't support your view. Plenty of sites or applications limit functionality if you decide not to share your data. The professor wanted to know what functionality they might be unable to use going forward. That generally has nothing to do with preexisting data. If I turn off location data for my photos, it doesn't retroactively remove the data from previous photos. It's not unbelievable to expect to still have access to previous chats.

    Suggesting he completely fabricated or used AI to generate his data is unfounded based on the article and his statements.

    You never touched on her commenting about coding, and never addressed the point that she didn't even bother to look at what the button said before making a video about it. You're free to think her criticism is well-deserved, but I do not think she bothered to justify her video beyond "AI bad". You can dislike AI while still being critical of people who agree with you on that point.

  • I appreciate the article link. I'm obviously not pro-AI, but the framing in the video is really uncharitable. Without seeing the exact text of the prompt to disable data consent, I can totally see it being ambiguous about deleting data. If you're accustomed to opt-out prompts and it doesn't explicitly say "permanently delete all your chat history" or something similar, it's possible to think it's as innocuous as a cookie opt-out. If it is ambiguous, then this article serves the exact purpose it was written for.

    She opens by saying she appreciates articles where people are open about embarrassing topics, but then lambasts this guy, several times, for things I didn't see him admit to. She takes issue with him using the word "conversations" as though it implies he's besties with it, but chats are very casually referred to as conversations, and she does so herself. I totally understand not liking the way this guy does his job, or even thinking this means he's not really doing his job, but she seems to be making a lot of assumptions.

    She's probably right on several points, but the video might as well have been written by AI because it just regurgitates my opinions on AI back to me with no real relevance to the article she's talking about. She also, on multiple occasions, casually drops that asking AI to write code for you, with the presumption that you don't understand it, is fine actually. She does this unironically. I almost stopped watching but wanted to finish it in case there was a gotcha at the end about media literacy. There was not.

  • Why did you need a shitty AI image for this?

  • People are dying in the streets fighting. It just happened. That's what a lot of this is about. In the history of the world there is very little immediate success for these kinds of movements. Unfortunately, it's a process to build up momentum and engage in solidarity. You pointing the finger at Americans, as though authoritarianism and government overreach isn't a growing problem everywhere, is not helping.

    The problem facing America is in part the result of other countries making exceptions for America on the world stage for generations. Now the government has turned against our allies, and they're regretting having relied on America for protection. Had they not been so reliant on America in the first place, we wouldn't have been able to swing our might in whatever direction the government wanted without fear of repercussions. The cowardice of other leaders has made our leaders worse, and now we are all suffering, because make no mistake, the world suffers when people in it suffer. Hopefully the end of the tunnel results in the righting of the ship, but plenty of suffering has already happened and will continue to happen, so please don't act like people haven't already died fighting against this, because they have.

  • That was an interesting read, but I am not convinced that they understand the "problem" they are trying to address. That would also explain the vagueness of the title. Clearly they think something needs to change because of AI, but they have not explained why, defined what, or laid out the parameters for a positive change. It makes the whole thing feel arbitrary.

    At one point he suggests that telling people who are taking the exam after you what specifically is on the exam is not cheating, though his students seemed to think it is. If telling people is encouraged, then people taking the test first just have a more difficult task, and their results are more likely to reflect their knowledge of the subject. At that point, just give people the exam questions early. I had a professor who would give out a study guide and would exclusively pull exam questions from the study guide with the numbers changed. It was basically homework, but you were guaranteed to have seen everything on the exam already, and that was such a great way to 1) ensure people fully understood the scope of the test and 2) relieve stress about testing. If they don't see a problem with only certain people knowing exact questions and answers ahead of time, then I'm not sure they understand what cheating is.

    Unrelated, but they also blame Outlook for why young people hate email. I had to use Outlook for a bit and it does suck, but my hatred for email is unrelated.

    I'm glad they are experimenting with different methods of testing, but without knowing more about the class itself, this comes off as though it's just a filler class in a degree program and the test doesn't really matter because their understanding of the subject doesn't really matter. In another blog post he refers to the article, which was making the rounds a while ago, about how AI failed at running a vending machine. In it he laments that we're going to have to "prepare for that stupid world" where AI is everywhere. If you think we can still fight that, I don't think accepting AI as a suitable exam tool is the way to do it, even if you make students acknowledge hallucinations. At that point you're normalizing it. As he said, there will always be those students, and 2 out of 60 using AI is actually not bad, but the blog makes me question the content of the class more than anything else.

  • I care very little about this particular person, but who allows people who speak like this to become "artists"? There is no way this person has anything actually interesting to say, and if they did, they lack the ability to communicate it, much less through a medium as indirect as art.

  • I would leave a review and stop going. I have had to switch providers for services myself and it sucks but I refuse to give any money to people doing this if it can be helped. Even worse, people doing this so sloppily that they don't bother to even read what they copy/paste.

  • Oh definitely. That's exactly why I specified that it needs to be explicitly transcription software. Even speech-to-text on my phone gets it wrong with enough regularity that I check it every time. I can't imagine what it would be like if I were using less common words in a medical setting. I don't love the idea of every word said in a doctor's office being recorded and that recording being on record forever, but to a certain extent I can understand doctors who might think that would be helpful. What I don't think would be helpful is having anyone except the doctor or another trained medical professional summarize that information. I do understand that doctors are human and mess up and miss things and might even take worse notes than AI would, but at least it is a doctor who is doing that. It is a human being who met the other human being and sat in a room with them who is making these decisions.

  • I understand different things need different codes, otherwise the labs or other doctors don't know what they need to do, but the fact that a doctor can order a test and then a random person with no medical background, or even just a machine, can tell the doctor "no, that's not needed" is a waste in the system and a waste of people's, often sick people's, time. There is no code for homeopathy tests or procedures, so there's no way for them to be prescribed. A doctor can't order 10 ccs of water danced on by fairies, so I don't see that being an issue.

    My point is that whatever a doctor orders shouldn't need that much oversight. If at the end of the year the government wants to run a "how many MRIs per patient" check on providers, they can still do so, but that doesn't require every MRI to be justified in the moment. Then you can investigate the practice for potential fraud if they're not actually doing the MRIs and just charging for them, or if they're being ordered unnecessarily. Doctors should be trusted to do right by their patients. If the government had a going rate for procedures and there were one place where people could see everything that was billed as part of their care, things might not need as much oversight.

    I admittedly don't work in the field, and I'm aware Medicare/Medicaid fraud does happen, but considering the waste (on all sides, including patients) created by all the overhead, I think we'd come out ahead by just trusting doctors and checking end-of-year stats.

    Also, through the ACA people do technically get quite a lot of choice in their healthcare benefits. I don't think it's a good system because in my opinion your income shouldn't dictate your quality of care or coverage, but if you're looking for more choice in the US and have not looked at the marketplace I recommend doing so. I personally think single payer is better, but definitely look there if you're looking for HSA or deductible differences.

  • They should be banned from having any unlocking restrictions after they were found to have violated the initial FCC mandates placed on them. Absolutely disgraceful. No accountability.

  • I do not want AI involved in my patient-doctor communication at all. If transcription software is needed, though I'm not convinced it is, then they can use transcription software, but at the end of the day I think a human being should be the one responsible and making decisions regarding what is and is not officially listed in a medical record. AI is not advanced enough for me to trust that it will not make mistakes that could endanger lives.

    If we wanted to save time with billing codes, we could just do away with them and have a system that just lets people get the healthcare they need. If a test is ordered, that test should be entered as is by the doctor and not need any additional interpretation or overhead. I don't do medical billing, but I can't imagine a reason it needs to be more complicated than that.

    Specialized AI double checking radiology may have a use, but I still don't see it as a replacement as much as a second check.

  • Yeah, it's completely likely there's more going on. Sometimes kids with different needs can be more physical, and it's possible this kind of occurrence is seen as just part of the job. Not saying that's acceptable, but it's a possibility. With no other context, though, it's not a great response if taken at face value.

  • I'm hoping this was cut off from a longer email, but if it was not, she probably could have at least expected the parents to apologize on behalf of their kid, ensure they spoke to their kid about why it's inappropriate to hit people, probably have the kid apologize, and, depending on some other factors, offer some kind of compensation for the glasses, at least as a token gesture.

    If your kid hit someone in the face hard enough to break their glasses and your only response is "maybe they were hungry, here's how I can address that," I can potentially see why they might have done it in the first place.

  • I was about to defend the lack of contributions, and then I kept reading. I have a handful of different accounts I use and some have the same look about them, but yeah, the investor thing is an obvious tell.

  • That's an interesting take considering the post is about Nazis and evil scientist is a popular Nazi trope. Feel free to look into Josef Mengele and co for some chilling examples of what "adequate and rational moral grounds" can look like to people. The whole Nazi thing was built on the "science" of a superior race. The moralizing "bleeding hearts and artists" are often keeping unscrupulous people in the sciences at bay and ensuring science is grounded in humanity, which it should be. Bad people exist in every field and no field should be considered above reproach from another.

  • "Linux is currently easier to use than Windows."

    Claim in dispute.

    "People who think otherwise are Windows users who think different equals worse."

    In this case different is worse. If you're used to a restaurant that serves carrots and I serve you peas, you can argue that it's not worse, it's just different. If you're used to a restaurant that serves carrots, and I tell you I don't know what carrots are and don't have any alternative suggestions, but that if you can find a store that provides what you're talking about, appropriately transport it to my location, and teach me how to cook it, I will do that, then I think it's fair to say I'm just a worse restaurant. What's not comparable is ease of use. If you don't understand how a lack of plug and play affects ease of use, then there's nothing I can say that will fundamentally bridge that gap.

  • I'm not the person you're responding to, but if I have headphones or speakers or a mouse that aren't plug and play on Linux, which is what I'm used to on Windows, I think it's fair to say that my experience with Linux is less easy than with Windows. The average user is not going to consider that a hardware issue, and it isn't a hardware issue; if it's a driver issue, I'd call that a software issue. I'm glad to hear your grandma is not having issues with Linux, but as a Linux user I have to agree with the other commenter. A not insignificant number of people will run up against issues with Linux that the average user is likely not equipped to solve. I'm not saying that means Linux is bad, but it really isn't helpful to act like that's a complete fabrication.

  • What a strange framing for the article. It mostly focuses on fascism, but manages to speak down to the people who saw this very obvious outcome. Maybe there's a level of irony I'm not seeing, but if you're using this "lib" framing, which I don't necessarily agree with but will concede, maybe tie that back into how fascists ridicule their detractors. The resistance movement was substantial and full of lots of different people, but the writer points to "the shrillest" of them as though they were uppity women screaming into the void. They might not have accomplished a lot, but framing them as hysterical undercuts your point, even if you are admitting they were right. They were not hysterically correct; they were correct. I'm assuming there's some irony in there, but even so, that is not enough work done, in my opinion, to justify using that framing.

    If anyone recalls, the pussy hats were essentially in solidarity with women he openly referred to molesting. Let's see where we are now: oh, he's all over the Epstein files and refusing to release them. Fascists want you to think solidarity is cringe and that protesters are shrill. In reality, solidarity is the only place we can truly derive strength from, and protesting is one way we exercise our power and freedoms. I'd love it if they had taken even a few sentences to mention those things instead of feeding into the fascist narrative about libs.

  • Also, AI continues to get more indistinguishable from actual images. If someone shares revenge porn but acts like it's AI, the victim should not have to prove one way or the other. Currently, I think real or AI should be treated the same, but it's possible I'm overlooking some unintended consequences of that.

  • In the article they quote someone who said he's glad his code has been used by LLMs because he's always been working to democratize tech, but I still struggle to see how this democratized it. A handful of companies control access to the AI and will use that as an excuse to keep more people out of tech. I'm glad he is okay with his work having been used, but plenty of people aren't and wouldn't want companies like Microsoft or Google profiting off them without compensation. I wonder if his opinion would change if he were one of the people let go and replaced by AI. Maybe then he could see that this isn't about democratizing things, but about further concentrating wealth in the hands of these large corporations.

    Additionally, this has reduced friction for bad actors and slop generators. Is he glad his work makes people confident contributing to FOSS projects even when they have no idea what the quality of the code is, increasing overhead for the people maintaining the project? I think a totally free AI for personal use, in a society much better than ours, would actually be acceptable, but we don't live in that society, we live in this one. Any use of AI now is just rewarding these companies for a "move fast and break things" mentality that truly does not care what things end up being broken (the economy, ownership rights, individual families, the environment, professional integrity, trust in the open source community, etc.).

  • Gardening @lemmy.world

    Orchid Spikes Getting Too Long

  • RPGMemes @ttrpg.network

    When it's been more than a week since the last session and we have to piece together what was happening

  • LGBTQ+ @beehaw.org

    Harry Potter TV Series Boycott

  • Gaming @beehaw.org

    Opinions on Content Creator Packs?