Posts: 0 · Comments: 40 · Joined: 3 yr. ago

  • I get that there are better choices now, but let's not pretend a straw you blow into is the technological stopping point for limb-free computer control (sorry if that's not actually the best option, it's just the one I'm familiar with). There are plenty of things to trash-talk Neuralink about without pretending this technology (or its future form) is meritless.

  • The issue on the copyright front is the same kind of professional standards and professional ethics that should stop you from just outright copying open-source code into your application. It may be very small portions of code, and you may never get caught, but you simply don't do that. If you wouldn't steal a function from a copyleft open-source project, you shouldn't use that function when Copilot suggests it. Idk if Copilot has added license tracing yet (been a while since I used it), but absent that feature you are entirely blind to the extent to which its output infringes on licenses. That's a huge legal liability to your employer, and an ethical coinflip.


    Regarding understanding of code, you're right. You have to own what you submit into the codebase.

    The drawbacks/risks of using LLMs or Copilot have more to do with the fact that they generate the *likely* code, which means the output is statistically biased toward whatever common, unnoticeable bugged logic exists in the average GitHub repo they trained on. At some point it will give you code you read and say "yep, looks right to me" about, and that code will actually have a subtle buffer overflow, or actually fail in an edge case, precisely because the bug is just unnoticeable enough.

    And you can make the argument that it's your responsibility to find that (it is). But I've seen examples thrown around on Twitter of just slightly bugged loops; I've seen examples of it replicating known vulnerabilities; and we have that package-name fiasco in the first article above.

    If I ask myself "would I definitely have caught that?", the answer is only a maybe. If it replicates a vulnerability that existed in open-source code for years before it was noticed, do you really trust yourself to identify it the moment Copilot suggests it to you?

    I guess it all depends on stakes too. If you're generating buggy JavaScript who cares.
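
    To illustrate the kind of subtly bugged loop I mean (a hypothetical sketch, not one of the actual Twitter examples): an off-by-one window bound that reads fine at a glance.

    ```python
    def moving_average(xs, k):
        """Average of each window of k consecutive values.
        A plausible-looking wrong bound an assistant might emit:
            for i in range(len(xs) - k):   # silently drops the last window
        The correct bound includes the final full window:"""
        return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

    print(moving_average([1, 2, 3, 4], 2))  # [1.5, 2.5, 3.5]
    ```

    The buggy variant returns `[1.5, 2.5]` instead, and both versions "look right" in review, which is exactly the failure mode.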

  • We should already be at that point. We have already seen LLMs' potential to inadvertently backdoor your code and to inadvertently help you violate copyright law (I guess we do need to wait to see what the courts rule, but I'll be rooting for the open-source authors).

    If you use LLMs in your professional work, you're crazy. I would never be comfortable opening myself up to the legal and security liabilities of AI tools.

  • That's significantly worse privacy-wise, since Google gets a copy of everything.

    A recovery email in this case was used to uncover the identity of the account holder. Unless you're using Proton Mail anonymously (if you're replacing your personal Gmail, then probably not), you don't need to consider the recovery email a weakness.

  • I think it's more the dual-use nature of defense technology. It is very realistic to assume the tech that defends you here is also going to be used in armed conflict (which, historically for the US, involves many civilian deaths). To present the technology without that critical examination, especially to a young audience like Rober's, is irresponsible. It can help form the view that this technology is inherently good by leaving the adverse consequences under-examined and out of view for the children watching this video.

    Not that we need to suddenly start exposing kids to reporting on civilian collateral damage, wedding bombings, war crimes, etc. But if those are inherently part of this technology, then leaving them out overlooks a crucial outcome of developing these tools. Maybe we just shouldn't advertise defense tech in kids' media?

  • I don't believe that explanation is more probable. If the NSA had the power to compel Apple to place a backdoor in their chip, it would probably be a proper backdoor. It wouldn't be a side channel in the cache that is exploitable only under specific conditions.

    The exploit page mentions that the Intel DMP is robust because it is more selective. So this is likely just a simple design error of making the system a little too trigger-happy.

  • Wow, what a dishearteningly predictable attack.

    I have studied computer architecture and hardware security at the graduate level—though I am far from an expert. That said, any student in the classroom could have laid out the theoretical weaknesses in a "data memory-dependent prefetcher".

    My gut says (based on my own experience having conversations like this) the engineers knew there was an "information leak" but management did not take it seriously. It's hard to convince someone without a cryptographic background why you need to {redesign / add a workaround / use a lower-performance design} because of "leaks". If you can't demonstrate an attack, they will assume the issue isn't exploitable.

  • Ya know, all perfectly fair.

    Good choice on Reddit. As much as I love a good ol' sneer, there's a lot of jargon and clowning to wade through. There are a lot of genuinely solid critiques of his views there, though.

    I appreciate you doing your due diligence on this, but I'm not really sure where to keep this discussion going. I still stand by my original comment's warning. Reading Siskind is probably not going to corrupt an unassuming reader to immediately think XYZ bad thing. His writings tend to be very data heavy and nuanced, to his credit.

    Is he Hitler 2.0? No, far from it.

    But he shares a set of core assumptions with the other ideologies, and the circles between his community and the other communities have large overlap. If you start with one, it's likely you encounter the other. If you start to think like one, it's a small jump to start thinking like the other. (From experience).

    In my opinion, anyone encountering Siskind for the first time is well served by an understanding of TESCREAL (which they are likely to encounter in his posts, their comments, or linked material) and of its critiques, which should help them assess what they encounter through a critical lens.

    That's more or less what I wanted to give caution about, which may or may not have come across correctly.

    (Not that his stuff is entirely innocent either, but that's beside the point.)

  • I understand, a good instinct to have. Unfortunately I have read so much in such a piecemeal way that I cannot really compile a specific list. But I can point you to where "evidence" can be found. I don't expect you to read any of this, but if you want to evaluate Alexander's views further it will help:

    • The New York Times did a piece on him that does a good job outlining Alexander's ties to and influence on Silicon Valley. Probably the best actual piece of journalism on him.
    • There used to be a Reddit community (/r/SneerClub) that would read his (mountainous, as you point out) posts and pull out errors and missteps to "sneer" at, but that's been dead since the API revolts. The old posts are still up. Basically you had a club of people that spent years finding (cherry-picking, mind you) the juicy bits for you.
    • You may find some passing reference to Alexander in one of Émile Torres's articles or interviews on the subject of TESCREAL, but probably nothing substantial.
    • If you spend time on communities like LessWrong and the EA Forum, you will see heavy reference to and influence from Alexander's writing among members.

    A lot of what I say comes from my experience spending way too much time following these social circles and their critics online. Unfortunately, the best way I know to see for yourself is to dive in yourself. Godspeed, if you choose to go that way.

    Edit: of course, reading his work itself is a great way, too, if you have time for that.

  • The example is pretty standard, but I feel obligated to caution people about the author (just because he's linked to here and some unassuming people might dive in).

    Scott Alexander falls loosely under the TESCREAL umbrella of ideologies. Even in this article, he ends up concluding the only way out is to build a superintelligent AI to govern us... which is about the least productive, if not counterproductive, approach to solving the problem. He's just another techno-optimist shunting problems onto future technologies that may or may not exist.

    So, yeah, if anyone decides they want to read more of his stuff, make sure to go in informed / having read critiques of TESCREALism.

  • What even is federation in the context of a distributed vcs like Git? Does it mean federation of the typical dev ops tools (issues, PRs, etc.)?

  • I have a thing for experimental CAD and modeling software, but I hadn't heard of PicoCAD! I'll have to try it out, thanks for sharing.

    Some other cool ones:

  • Yeah, I was complaining too. Or am I not understanding you?

  • Having express self-checkout is great. The Kroger near me went full self-checkout. They have large kiosks that mimic the traditional checkout-belt kiosks, except the customer scans at the head of the belt and the items move into the bagging area.

    If you have a full cart, you scan all the items, check out, walk to the end of the belt, and bag all of your items. It takes twice as long as bagging while a cashier scans (for solo shoppers), and because of the automatic belt, the next customer cannot start scanning until you finish bagging, or their items will join the pile of yours.

    It effectively destroys all parallelism in the process (bagging while scanning, customers pre-loading their items with a divider while the prior customer is still being serviced), and with zero human-operated checkouts running, you get no choice.

  • Sorry for the long reply, I got carried away. See the section below for my good-faith reply, and the bottom section for my "what are you implying by asking me this?" response.


    From the case studies in my scientific ethics course, I think she probably would have lost her job regardless, or at least been "asked to resign".

    The fact it was in national news, and circulated for as long as it did, certainly had to do with her identity. I was visiting my family when the story was big, and the (old, conservative, racist) members of the family definitely formed the opinion that she was a 'token hire' and that her race helped her con her way to the top despite a lack of merit.

    So there is definitely a race-related effect to the story (and probably some of the "anti-liberal university" mentality). I don't know enough about how the decision was made to say whether she would have been fired had those effects not been present.


    Just some meta discussion: I'm 100% reading into your line of questioning, for better or worse. But it seems you have pinned me as the particular type of bigot who likes to deny that systemic biases exist. I want to head that off at the pass and say I didn't mean to entirely deny your explanation as plausible, but that given a deeper view of the cultural ecosystem of OpenAI, it ceases to be likely.

    I don't know your background on the topic, but I enjoy following voices critical of effective altruism, long-termism, and effective accelerationism. A good gateway into this circle of critics is the podcast Tech Won't Save Us (the 23/11/23 episode actually discusses the OpenAI incident). Having that background, it is easy to paint some fairly convincing pictures for what went on at OpenAI, before Altman's sexuality enters the equation.

  • Fair enough. I disagree, but we're both in the dark here so not much to do about it until more comes to light.

  • I mean, their press release said "not consistently candid", which is about as close to calling someone a liar as corporate speak will get. Altman ended up back in the captain's chair, and we haven't heard anything further.

    If the original reason for firing made Altman look bad, we would expect this silence.

    If the original reason was a homophobic response from the board, we might expect OpenAI to come out and spin a vague statement on how the former board had a personal gripe with Altman unrelated to his performance as CEO, and that after replacing the board everything is back to the business of delivering value etc. etc.

    I'm not saying it isn't possible, but given all we know, I don't think the fact that Altman is gay (now a fairly unremarkable fact for a public figure) is the reason he was ousted. Especially if you follow journalism about TESCREAL/Silicon Valley philosophies, it is clear to see: this was the board trying to preserve the original altruistic mission of OpenAI, and the commercial branch finally shedding the dead weight.

  • I seriously doubt it had anything to do with his wedding. I don't think the sexuality of a CEO is that big an issue in this day (see: Tim Cook).

    Especially considering how Altman has steered OpenAI vs. the board's stated mission, it seems much more likely that his temporary ousting had to do with company direction than with his sexuality.

  • Amdahl's isn't the only scaling law in the books.

    Gustafson's scaling law looks at how the hypothetical maximum work a computer could perform scales with parallelism—the idea being that for certain tasks, like simulations (or, to your point, even consumer devices to some extent), which can scale to fully utilize the hardware, this is a real improvement.

    Amdahl's takes a fixed program, considers what portion is parallelizable, and tells you the speedup from additional parallelism in your hardware.

    One tells you how much a processor might do; the other tells you how fast a program might run. Neither is wrong, but both are incomplete pictures of the colloquial "performance" of a modern device.

    Amdahl's is the one you find emphasized in a Comp Arch 101 course, because it corrects the intuitive error of assuming you can double the cores and halve the runtime. I only encountered Gustafson's law in a high-performance architecture course, and it really only holds for certain types of workloads.
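
    Both laws are simple enough to sketch. Assuming a parallel fraction p of the work and n processors (an idealization that ignores communication overhead):

    ```python
    def amdahl_speedup(p, n):
        """Amdahl: speedup of a *fixed-size* workload whose fraction p
        is parallelizable, run on n processors."""
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson_speedup(p, n):
        """Gustafson: scaled speedup when the parallel part *grows*
        to keep n processors busy."""
        return (1.0 - p) + p * n

    # 95% parallel work on 64 cores: Amdahl caps the fixed-size speedup
    # around 15x, while Gustafson says the machine can do roughly 61x
    # the work if the problem size scales up with the core count.
    print(amdahl_speedup(0.95, 64), gustafson_speedup(0.95, 64))
    ```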