
Posts 0 · Comments 1390 · Joined 3 yr. ago

Just your normal everyday casual software dev. Nothing to see here.

People can share differing opinions without immediately being on the opposing side. Avoid looking at things as black and white. You can like both waffles and pancakes, just like you can hate both waffles and pancakes.

Been trying to lower my social presence on services as of late; I may go inactive randomly as a result.

  • Sadly, it's a little more complex than just enabling it. The supported self-host deployment uses Docker, and the Docker images that are available don't include the interfaces for voice or video calling, as they are not up to date.

    If I understand it right, enabling it would mean you either need to pull the source yourself and run it without Docker, or make a custom Docker image using a version of the Stoat web client that includes the ability to do voice calls.

    Reading the draft in the linked issue, it looks like the author isn't doing voice calls because they don't know the proper way to integrate it into the Docker image.

    So to answer it: yes, it looks like you can use voice servers on the current self-hosted model, but you can't use the pre-existing Docker images, and it will require you to manually add the new web UI and patch where needed.

  • Just a fair warning in reply to this: the self-hosted version of Stoat doesn't currently have voice chat. It's an open issue that's currently paused until they can finish their rework.

    If you have the skill for it, it seems like you can patchwork the existing voice chat back in, but it's not part of their initial setup and there are no instructions on how to do so properly.

  • TIL I even had an application key there. I don't think I've ever used it.

  • Personally, it seems like it's trustworthy again. The previous owner of the repo did eventually admit that they authorized the transfer, but the entire transfer process was extremely sketchy and had no chain of custody or trust. The repository just got deleted, and then a few days later it showed up again in a completely blank state under a user with no profile and no contribution history; it was just a "trust me bro, I knew the original maintainer, look, I have the keys to prove it."

    The maintainer of the Google Play build of it seems to trust them though, and they are established in the community, plus they archived their Syncthing builds again in favor of just using one repo, so it's likely fine.

    For future people wondering about it as well, it doesn't help that the new maintainer of the app has deleted every issue that had to do with the migration, so you can no longer research the issue for yourself. The only information you have available is the discussion chain on the community forums, but any issues it links to have been deleted.

    Personally though, I plan on keeping my current version pinned to a release from before the transfer until either I'm forced to update due to bugs or I feel comfortable with the current maintainer again. I'm not sure how long that will be.

    For an app that contains very sensitive information, I was not impressed with how the transfer process was handled.

  • Don't get me wrong, I would love an alternative as well! I just think that the personality of the fediverse as a whole goes against what companies are actually looking for in partners.

  • I agree with that for direct routes. I don't agree with it for indirect ones. The emissions-to-passenger ratio should be lower on a full 130-passenger jet that flies to a more populated airport nearby and then hops to the destination port with a lower passenger count (this would raise ticket prices some, but I wouldn't expect game-changing amounts) than on a direct flight plan with a full jet one direction and only 1/4 occupancy on the direct route back.

    I don't actually care about the total emission count, though; I just want the emissions to be used responsibly. A low emissions-to-passenger ratio is what I'd find the most useful (rough numbers sketched below), but I doubt it's what anyone would actually supply.
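
    To put rough numbers on that ratio, here's a back-of-envelope sketch; every figure in it (fuel burn per leg, seat counts) is invented purely for illustration and not sourced from any airline data:

    ```typescript
    // Back-of-envelope comparison of emissions per passenger.
    // All numbers are made up for illustration; real figures vary by aircraft and route.

    interface Leg {
      emissionsKg: number; // total CO2 for the leg
      passengers: number;  // seats actually filled
    }

    // Emissions per passenger, summed across one or more legs.
    function perPassenger(legs: Leg[]): number {
      const totalEmissions = legs.reduce((sum, l) => sum + l.emissionsKg, 0);
      const totalPassengers = legs.reduce((sum, l) => sum + l.passengers, 0);
      return totalEmissions / totalPassengers;
    }

    // Direct plan: full 130-seat jet out, only about 1/4 full on the way back.
    const direct = perPassenger([
      { emissionsKg: 20_000, passengers: 130 },
      { emissionsKg: 20_000, passengers: 33 },
    ]);

    // Indirect plan: full jet to a busier hub, then a short hop with fewer passengers.
    const indirect = perPassenger([
      { emissionsKg: 18_000, passengers: 130 },
      { emissionsKg: 5_000, passengers: 60 },
    ]);

    console.log(`direct:   ${direct.toFixed(0)} kg per passenger`);   // ~245
    console.log(`indirect: ${indirect.toFixed(0)} kg per passenger`); // ~121
    ```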

  • I personally might, to be honest. But you are right, it would only be if the pricing was similar: if one airline had a lower carbon footprint, I would choose the lower footprint. (People like me are probably why they don't supply this info normally.) It doesn't make sense to spend a bunch more for it, though.

    That being said, if the pricing was similar, I expect there would be enough people like me that they would start cancelling flight lanes, like we're seeing with the tourist trade between Canada and Florida. Air Canada alone has canceled over 10% of its CA-to-FL flights due to lack of flyers (only about 20 flight lanes, but that's still a good start).

  • Fully agree. If they actually go through with banning it, piracy will thrive. People aren't going to just stop playing games because they've lost access to them. Smaller launchers will rise, people will download from other sources. A method of obtainment will be found.

  • An LI alternative wouldn't be super helpful; you would need mainstream adoption and companies wanting to use it, and any open alternative would fail to meet that goal. The wants of the employee and the wants of the employer don't mix, and that's why LinkedIn looks so bad to the employee. It's not meant to be for the employee, it's meant to be for the employer. If it were the other way around, the employers wouldn't use it.

  • I don't think that's an unfair ask. One local representative in each country seems perfectly fair to me.

    That being said, the user information part should be strictly locked to their own country's content: if the user account is registered in that country, they have access. Providers could 100% do that with most operational databases out there (rough sketch of the idea below); it's already a requirement for stores in order to handle payment information. Steam and Epic already do this as it is.

    Whether they should be able to access that information in the first place is a different discussion that needs to be had in the corresponding country, but if a country has already decided it needs access for the service to continue, there's no reason it should get access to all user data. The only thing it really has a claim to is its own country's data.
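
    A minimal sketch of how that scoping could look at the data layer; the record shape, field names, and the authority-request object are all hypothetical, just to illustrate the "only your own country's users" restriction:

    ```typescript
    // Hypothetical shape of a stored user record.
    interface UserRecord {
      id: string;
      registrationCountry: string; // country code captured at signup, e.g. "CA"
      email: string;
      paymentProfileId: string;
    }

    // Hypothetical request from a country's local representative.
    interface AuthorityRequest {
      country: string;   // the country the representative acts for
      userIds: string[]; // accounts they are asking about
    }

    // Only return records whose registration country matches the requesting
    // authority; everything outside that country's own user base is filtered
    // out before it ever leaves the provider.
    function recordsForAuthority(db: UserRecord[], req: AuthorityRequest): UserRecord[] {
      return db.filter(
        (u) => u.registrationCountry === req.country && req.userIds.includes(u.id),
      );
    }
    ```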

  • My issue with what would happen if this ruling solidifies is the precedent it sets.

    I could not care less about reaction videos; they're really low-effort videos, and I don't understand why they're so popular.

    My issue is entirely that if the plaintiff wins this case, it's effectively saying that any downloaded YouTube video classifies as circumventing DRM, which would give studios an avenue besides fair-use violations to go after content creators with.

    Look at let's plays, for example. Those operate almost entirely on fair-use clauses. I fear that if we start ruling that recording or downloading videos your computer is already able to decode counts as circumvention (that's all the YouTube downloader is doing; instead of the stream going to the client, it's sent to a file), then by the same principle, recording a video game that contains DRM would also be considered circumventing DRM, which would outlaw let's plays.

    This is a very bad precedent regardless of whether it's just low-quality trash reaction videos or not.

  • I should ask them at some point how it's going now that it's been deployed for a bit. I wouldn't expect so either, based on how I've seen open-source projects use stuff like that, but they also haven't been complaining about it screwing up at all.

  • That was my general thought process before they told me how the system worked, too. I had seen Claude workflows that do something similar, but not to that level before. It was an eye-opener.

  • Yeah, I'm not sure; maybe something to do with the term SAS, but there's not enough unredacted stuff to really know.

  • Thank you for expanding on it. That was a pretty interesting read; gotta love indecisiveness in your standards.

  • Can you elaborate on type="datetime-local" not existing? It's been supported in almost every mainstream browser since basically 2012; the last mainstream browser to adopt it was Safari in 2021. There's an argument that Firefox didn't have proper support until 2021 as well, but that's because it was lacking the "time" part of the element, so for a while they modified it to work like the type="date" element. That has since been resolved.

    That being said, I do agree with you on a lot of those. It would be nice to have some form of UI validation; that is one of its flaws that could be expanded on. A disabled-dates or invalid-days attribute on the input would be a lot easier (like a hypothetical allowedDays being a comma-separated list of day names or numbers, similar to how the time standard works), but it would also add a lot of complexity for something that should be validated via scripts on both the server and client side anyway (rough sketch of that below). Not all browsers have the clear button either, which is a problem because it's an extra step when you do make a mistake. They do offer min and max attributes to set a valid date range, but they're so primitive that for a scheduler they can't really be used unless it's on a week-by-week basis.
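
    As a rough illustration (not pulled from any real project), this is what the native element plus hand-rolled script validation looks like today; the date range, the 15-minute step, and the weekday rule are all arbitrary example values, and the weekday check is exactly the kind of thing you have to script yourself since nothing like allowedDays exists:

    ```typescript
    // A native datetime-local input with a min/max range, plus a script-side
    // weekday check (the part the element can't express on its own).
    const input = document.createElement("input");
    input.type = "datetime-local";
    input.min = "2024-06-03T09:00"; // earliest selectable date/time
    input.max = "2024-06-28T17:00"; // latest selectable date/time
    input.step = "900";             // 15-minute increments

    input.addEventListener("change", () => {
      // Built-in validation covers the min/max range...
      if (!input.checkValidity()) {
        input.reportValidity();
        return;
      }
      // ...but "weekdays only" has to be hand-rolled, since there's no
      // allowedDays-style attribute on the element.
      const day = new Date(input.value).getDay(); // 0 = Sunday, 6 = Saturday
      if (day === 0 || day === 6) {
        input.setCustomValidity("Please pick a weekday.");
        input.reportValidity();
      } else {
        input.setCustomValidity("");
      }
    });

    document.body.appendChild(input);
    ```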

  • I do agree, LLM-generated code is inaccurate, which is why they have to have the "throw it back in" stage and a human eye looking at it.

    They told me their main concern is that they aren't sure they'll understand the code the AI is spitting out well enough to properly audit it (which is fair), and of course any issue with the code will fall on them, since it's their job to give the final say of "yes, this is good."

  • The scary part is how it already somewhat is.

    My friend is currently job hunting (or at least considering it) because their company added AI to their flow and it does everything past the initial issue report.

    The flow is now: issue logged -> AI formats and tags the issue -> AI makes the patch -> AI tests the patch and throws it back if it doesn't work -> AI lints the final product once it's working -> AI submits the patch as a pull request.

    Their job has been downscaled from being the one to organize, assign, and work on code to an over-glorified code auditor who looks at pull requests and says "yes, this is good" or "no, send this back in" (rough sketch of that flow below).
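
    Purely as an illustration of that flow (the stage functions here are hypothetical stand-ins, not whatever tooling their company actually runs), it boils down to something like:

    ```typescript
    // Purely illustrative stand-ins for the AI stages in the described flow.
    interface Issue { id: number; report: string; tags?: string[]; }
    interface Patch { issueId: number; diff: string; passesTests: boolean; }

    const aiFormatAndTag = (issue: Issue): Issue =>
      ({ ...issue, tags: ["bug"] });                           // formats and tags the issue

    const aiMakePatch = (issue: Issue): Patch =>
      ({ issueId: issue.id, diff: "...", passesTests: false }); // drafts a patch

    const aiTestPatch = (patch: Patch): Patch =>
      ({ ...patch, passesTests: true });                        // runs tests, marks pass/fail

    const aiLint = (patch: Patch): Patch => patch;              // lints the final product

    const aiOpenPullRequest = (patch: Patch): void =>
      console.log(`PR opened for issue ${patch.issueId}`);      // submits the patch as a PR

    // End-to-end flow; the only human step left is auditing the resulting PR.
    function processIssue(issue: Issue): void {
      const tagged = aiFormatAndTag(issue);
      let patch = aiTestPatch(aiMakePatch(tagged));

      // "throws it back if it doesn't work": retry until the tests pass
      while (!patch.passesTests) {
        patch = aiTestPatch(aiMakePatch(tagged));
      }

      aiOpenPullRequest(aiLint(patch));
      // A human reviewer then says "yes, this is good" or "no, send this back in".
    }

    processIssue({ id: 42, report: "Example issue report" });
    ```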

  • I know exactly what date UI you're talking about, and it's a firm agree. Whoever decided that a date UI needed to make it impossible to select a year without hitting back three times, and then on top of that made it undo your month and day selection when you did so, did the world a massive disservice designing it.

    What's wrong with the simple type="datetime-local" or type="date" UIs that every mainstream browser has natively? It's three clicks: you can specify the year at the top, and then the month and day in the main body. Why even introduce layers to it? Have everything on the same layer.