
Posts: 0 · Comments: 61 · Joined: 3 yr. ago

  • Yeah, I don't know why anyone knowledgeable would expect them to be good at chess. LLMs don't generalise, reason or spot patterns, so unless they read a chess book where the problems came from...

  • Not well, apparently.

  • Thanks for the suggestion. `sudo cat /sys/module/nvidia_drm/parameters/modeset` indeed prints `N`, so I'll try adding that to my system config.
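
    For anyone else on NixOS, a minimal sketch of what "adding that to my system config" could look like (an assumption on my part: `hardware.nvidia.modesetting.enable` is the relevant option; the raw kernel parameter form is an alternative):

    ```nix
    # Hypothetical NixOS configuration fragment: enable DRM kernel mode
    # setting for the proprietary NVIDIA driver, so that the sysfs file
    # above reads Y after a rebuild and reboot.
    hardware.nvidia.modesetting.enable = true;
    # Lower-level equivalent, if you prefer setting the parameter directly:
    # boot.kernelParams = [ "nvidia-drm.modeset=1" ];
    ```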

  • I think the Xorg vs Wayland situation is not too dissimilar to that of Windows vs Linux. Lots of people are waiting for all of their games/software to work (just as well or better) on Linux before switching. I believe that in most cases, switching to Linux requires that a person goes out of their way to either find alternatives to the software they use or altogether change the way they use their computer. It's a hard sell for people who only use their computer to get their work done, and that's why it is almost exclusively developers, the tech-curious, idealists, government workers, and grandparents who switch to Linux (the last thanks to a family member who falls into some subset of the former categories). It may require another generation (of people) for X11 to be fully deprecated, because even amongst Linux users there are those who are not interested in changing their established workflow.

    I do think it's unreasonable to expect everything to work the same when a major component is being replaced. Some applications that are built with X11 in mind will never be ported/adapted to work on Wayland. It's likely that for some things, no alternatives are ever going to exist.

    The good news is that we humans are complex adaptive systems! Technology is always changing - that's just the way of it. Sometimes that will lead to a perceived loss of functionality, a reduction in quality, or an impeded workflow in the name of security, resource efficiency, moral/political reasons, or other considerations. Hopefully we can learn to accept such change, because that'll be a virtue in times to come.

    (This isn't to say that it's acceptable for userspace to be suddenly broken because contributors thought of a more elegant way to write underlying software. Luckily, X11 isn't being deprecated anytime soon for just this reason.)

    Ok I'm done rambling.

  • As soon as it works. A recent update included Plasma 6.0.2 (on NixOS unstable/24.05), which apparently defaults to Wayland, but it just exits back to the login screen right away. I'm not in the mood to tinker, so for now I plan to simply wait for things to Just Work. When I select "Wayland" and things work and look the same (or better), that's when I'll happily rid myself of the horror that is X11, because as horrible as X11 is, it simply isn't giving me trouble these days - my system is stable and I like keeping it that way.

    Edit: perhaps important to mention that I'm using a GTX 1070.

    Edit 2: I realise that I'm sort of contradicting myself with how I worded the above. I don't mean to imply that I'm unwilling to sacrifice anything to embrace Wayland; just that, as it stands, I don't think the benefits of Wayland outweigh my ability to use this computer the way I need to.

  • Because they have no basis on which to decide where to go. It's like buying toothpaste but there are hundreds of options, none of which you know anything about, so you get whichever seems most popular. It minimises the risk of ending up with something which is unpopular for good reasons.

  • It isn't "looking" that is meant by "observation". "Observation" is meant to convey the idea that something (not necessarily sentient) is in some way interacting with an object in question such that the state(s) of the object affects the state(s) of the "observer" (and vice versa).

    The word is rather misleading in that it might give the impression of a unidirectional type of interaction, when it really is the establishment of a bidirectional relationship. The reason one says "I observe the electron" rather than "I am observed by the electron" is that we don't typically attribute agency to electrons the way we do to humans (for good reasons), but the two statements are equally true.

    Edit: one way of putting it is that the electron can only be said to be in a particular state if that state matters in some way to the state of whoever says it. If I want to know what state an electron is in, it must appear to me in some state in order for me to get an answer. If I never interact with it, I can't possibly get such an answer, and the electron then behaves as if it were actually in more than one state at once; all of those states interfere with each other, which shows up as wavelike patterns in certain measurements.

    Edit 2: just to be clear, I used an electron as an example, but it's exactly the same for anything else we know of. Photons, bicycles, protons, and elephants are all like this, too. It's just that the more fundamental particles are involved and the more you already know about them, the fewer possible answers there are for any measurement you could make.

  • I'm curious as to what you think about the actual meaning of those sentences, then. Do you think that there ought to be protection against consequences, regardless of what one says? Should there be any exceptions at all? What is the domain of applicability? Certain types of expression, certain types of topics, intended audience, etc?

    Edit: oh and what about freedom from? Is there any situation in which a person has a right to shut someone down from "expressing themselves" to them without their consent?

  • While all of it is doable, be aware that it takes time and effort to learn Nix and NixOS. It can be difficult to figure out how to get a particular environment set up properly. There is a lot of documentation, but it doesn't always give easy answers if you have specific requirements for a particular dev environment and such.

    It's been a few years since I worked with Unity3D professionally, but I did so on NixOS with very little trouble. Rust has very good Nix infrastructure, and so do many other languages. I can't tell you anything about UE5 or the other proprietary tools, but there are FHS-compatibility helpers (`steam-run` usually works fine when I need to run arbitrary binaries made for 'normal' distros).

    If you're willing to figure things out sometimes (and especially in the beginning) and are motivated to take your OS to the next level, NixOS is definitely worth it. Been using it for many years and I can't imagine ever using a mutable OS again as a daily driver (unless the way I use my computer drastically changes). I configured everything just the way I want it; it's magical to have almost everything in one place and being able to try different things without fear of breaking something.
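
    To give a concrete flavour of the per-project setup mentioned above, a dev environment can be as small as this (a hedged sketch; the package names are assumptions about what a given Rust project might need):

    ```nix
    # Hypothetical shell.nix for a Rust project: drop it into the repo
    # root and enter the environment with `nix-shell`.
    { pkgs ? import <nixpkgs> {} }:
    pkgs.mkShell {
      buildInputs = with pkgs; [ rustc cargo rust-analyzer ];
    }
    ```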

  • Firstly, I'm willing to bet only a minority of users regularly use those buttons. Secondly, you're talking about the most popular LLM(s) out there. What about all the other LLMs almost nobody is using but are still being developed/researched? Where do they find humans willing to sit and rate all the garbage their LLM puts out?

  • I know LLMs are used to grade LLMs. That isn't solving the problem, it's just better than nothing because there are no alternatives. There aren't enough humans willing to endlessly sit and grade LLM responses.

  • For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.

    The reason prompt engineering is a thing is that people know what the expected and desired output looks like and what it doesn't, and can adapt their interactions with the tool accordingly - a trait uniquely associated with complex adaptive systems.

  • Like a completely mad or autistic artist that is creating interesting imagery but has no clue what it means.

    Autists usually have no trouble understanding the world around them. Many are just unable to interface with it the way people normally do.

    It’s a reflection of our society in a weird mirror.

    Well yes, it's trained on human output. Cultural biases and shortcomings in our species will be reflected in what such an AI spits out.

    When you sit there thinking up or refining prompts you’re basically outsourcing the imaginative visualizing part of your brain. [...] So AI generation is at least some portion of the artistic or creative process but not all of it.

    We use a lot of devices in our daily lives, whether for creative purposes or practical ones. Every such device is an extension of ourselves; some supplement our intellectual shortcomings, others our physical ones. That doesn't make the devices capable of doing any of the things we do. We just don't attribute actions or agency to our tools the way we do to living things. Current AI possesses no more agency than a keyboard does, and since we don't consider our keyboards capable of authoring an essay, I don't think one can reasonably say that current AI is, either.

    A keyboard doesn't understand the content of our essay, it's just there to translate physical action into digital signals representing keypresses; likewise, an LLM doesn't understand the content of our essay, it's just translating a small body of text into a statistically related (often larger) body of text. An LLM can't create a story any more than our keyboard can create characters on a screen.

    Only if and when we observe AI behaviour indicative of agency can we start to use words like "creative" in describing that behaviour. For now (and I suspect for quite some time into the future), all we have is sophisticated statistical random content generators.

  • Yeah a real problem here is how you get an AI which doesn't understand what it is doing to create something complete and still coherent. These clips are cool and all, and so are the tiny essays put out by LLMs, but what you see is literally all you are getting; there are no thoughts, ideas or abstract concepts underlying any of it. There is no meaning or narrative to be found which connects one scene or paragraph to another. It's a puzzle laid out by an idiot following generic instructions.

    That which created the woman walking down that street doesn't know what either of those things is, and so it simply cannot use those concepts to create a coherent narrative. That job still falls to the human instructing the AI, and nothing suggests that we are anywhere close to replacing that human glue.

    Current AI cannot conceptualise -- much less realise -- ideas, and so they cannot be creative or create art by any sensible definition. That isn't to say that what is produced using AI can't be passed off as, mistaken for, or used to make art. I'd like to see more of that last part and less of the first two, personally.

  • No, different apps this time.

    Edit: Oh, I see - you meant that each app needs to be manually updated once first.

  • Doesn't seem to be working for me. I just saw that there were a bunch of stalled notifications (19 hours old, stalled as in stuck at downloading/ready to install), and when I go into the app it's just the same old offer to download, after which I get the option to install each one separately.

  • I updated to 1.19 and have two app updates listed as available. They are not updated automatically and there is no F-Droid setting for background updates that I can find. In order to install the two aforementioned updates I am required to first download them and then, for each one, I have to press install and then confirm on a popup.

    To be fair, those updates were available before I updated F-Droid, so whatever mechanism that is supposed to be triggered may not have been because the updates were not new?

    Nevertheless, I am excited about the prospect, because updating my apps has been such a pain that I constantly procrastinate dealing with it. Sitting with the phone in front of me, clicking a few times, waiting, clicking a few more times, waiting, then repeat... never leaving the app and making sure it doesn't fall asleep... it is not a fun activity.

  • It's not so much the hardware as it is the software and utilisation, and by software I don't necessarily mean any specific algorithm, because I know they give much thought to optimisation strategies when it comes to implementation and design of machine learning architectures. What I mean by software is the full stack considered as a whole, and by utilisation I mean the way services advertise and make use of ill-suited architectures.

    The full stack consists of general purpose computing devices with an unreasonable number of layers of abstraction between the hardware and the languages used in implementations of machine learning. A lot of this stuff is written in Python! While algorithmic complexity is naturally a major factor, how it is compiled and executed matters a lot, too.

    Once AI implementations stabilise, the theoretically most energy-efficient way to run them would be on custom hardware made to run only that code, with the code written at the lowest possible level of abstraction. The closer we get to the metal (or the closer the metal gets to our program), the more efficient we can make it. I don't think we take bespoke hardware seriously enough; we're stuck in a mindset of everything being general-purpose.

    As for utilisation: LLMs are not fit for, or even capable of, dealing with logical problems or anything involving reasoning based on knowledge; they can't even reliably regurgitate knowledge. Yet, as far as I can tell, this constitutes a significant portion of their current use.

    If the usage of LLMs was reserved for solving linguistic problems, then we wouldn't be wasting so much energy generating text and expecting it to contain wisdom. A language model should serve as a surface layer -- an interface -- on top of bespoke tools, including other domain-specific types of models. I know we're seeing this idea being iterated on, but I don't see this being pushed nearly enough.[^1]

    When it comes to image generation models, I think it's wrong to focus on generating derivative art/remixes of existing works instead of on tools to help artists express themselves. All these image generation sites we have now consume so much power just so that artistically wanting people can generate 20 versions (give or take an order of magnitude) of the same generic thing. I would like to see AI technology made specifically for integration into professional workflows and tools, enabling creative people to enhance and iterate on their work through specific instructions.[^2] The AI we have now are made for people who can't tell (or don't care about) the difference between remixing and creating and just want to tell the computer to make something nice so they can use it to sell their products.

    The end result in all these cases is that fewer people can live off of being creative and/or knowledgeable while energy consumption spikes as computers generate shitty substitutes. After all, capitalism is all about efficient allocation of resources. Just so happens that quality (of life; art; anything) is inefficient and exploiting the planet is cheap.

    [^1]: For example, **why does OpenAI gate external tool integration behind a payment plan while offering simple text generation for free?** That just encourages people to rely on text generation for all kinds of tasks it's not suitable for. Other examples include companies offering AI "assistants" or even AI "teachers"(!), all of which are incapable of even remembering the topic being discussed 2 minutes into a conversation.

    [^2]: I get incredibly frustrated when I try to use image generation tools, because I go into it with a vision, but since the models are incapable of creating anything new based on actual concepts, I only ever end up with something incredibly artistically compromised and derivative. I can generate hundreds of images based on various contortions of the same prompt, reference image, masking, etc. and still not get what I want. THAT is an inefficient use of resources, and it's all because the tools are just not made to help me do art.
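
    The "language model as a surface layer on top of bespoke tools" idea can be sketched in a few lines (a toy illustration, not anyone's real architecture; `fake_llm_route` is a stand-in for an actual model, and the names are mine):

    ```python
    import re

    def fake_llm_route(user_text: str) -> str:
        """Stand-in for a language model used purely as an interface layer:
        it only classifies intent and never computes the answer itself.
        (Assumption: a real system would call an actual model here.)"""
        if re.search(r"\d+\s*[-+*/]\s*\d+", user_text):
            return "calculator"
        return "chat"

    def calculator_tool(text: str) -> str:
        # Bespoke, deterministic domain tool for simple "a OP b" arithmetic;
        # the language model is never trusted to do the maths.
        m = re.search(r"(\d+)\s*([-+*/])\s*(\d+)", text)
        a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
        if op == "+":
            return str(a + b)
        if op == "-":
            return str(a - b)
        if op == "*":
            return str(a * b)
        return str(a / b)

    def answer(user_text: str) -> str:
        # The surface layer dispatches to the right tool instead of
        # generating text and expecting it to contain wisdom.
        if fake_llm_route(user_text) == "calculator":
            return calculator_tool(user_text)
        return "(free-form text generation would go here)"
    ```

    The point of the structure is that the model's only job is routing; every answer that admits a deterministic tool comes from that tool.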

  • It’s not like corporations are some animal who can’t help but be who they are.

    That's exactly what they are. They are composed of people only to the extent that a car is composed of wheels.

    If it's otherwise in working order, a flat tire will be replaced and the car will be going wherever it's meant to go. Profit city is where all roads lead to, and a flat tire (or four) can only delay for so long.

    If you want to hold corporations to moral standards, you have to change the incentives (destinations) and restructure corporations to be actually owned and controlled by people who are then held to those moral standards (put more of the car into the wheels).