
Posts: 3 · Comments: 65 · Joined: 3 yr. ago

  • Right! I guess this is precisely my point--big corporations are running with it, and so the future will be whatever they make it. But I want to make my own future, which is why I've installed solar panels on my home, built and host my own server from re-used old computer parts, and am running my own ollama and whisper AI models on a spare GPU. I'm hoping to take control and not just be a consumer of corporate enshittification.

  • You're right, it's a bit tongue-in-cheek. But it's a fun name, and I found a lot of people didn't understand "no code / low code" and even more didn't really get excited about it. Vibecoding is interesting to people, I think.

  • You make a good point about software being potentially low capital. Open source is a great counter example.

    But I wonder: how do we know what people need? Are the solutions out there actually good for everyone? My daughter is not a coder, but started vibecoding her own habit tracker app last week. She's very excited about her motivation system of stars and flowers, and the nuances of how to make it just right for her. She wrote 19 pages in a Google Doc describing her app. It's almost like a requirements document, and if she had $30k I bet she could hand that document over to a software engineer and they could build a mobile app for her.

    If she hadn't built this app, I wonder how many habit tracker apps would have advertised to her, or sold her habit data? If a person is not a software engineer, they kind of have to live with other people's decisions in the digital sphere (and some folks, I've found, aren't even able to evaluate software for safety, privacy, alignment with their values, etc., let alone build it).

    I guess I just wonder what the world would be like if the bar for personalized software were dropped so everyone could create just what is needed, for them, wherever they are and in whatever community they find themselves.

  • I might be misunderstanding, but it sounds like you're angry at AI, or at least, you'd like its use to diminish, not grow.

  • We often think of "AI" as what is promoted by big corporations--but it doesn't have to be. The math, algorithms, and machines that run AI can all be ours, and I think we can run them responsibly. For example--I run an AI transcription service just for myself on an old GPU. It works quite well. I also have solar panels installed on my home. I think it can be carbon neutral.

  • I recently bought a frame.work mini-PC and plan to run my own models, solar-powered.

  • That's what I've been working towards!

  • I don't think I have intrusive thoughts. I'm happy, generally pretty creative (hobbies, coding, etc.). Sometimes politics and world affairs get me down, but I don't feel like they are "intrusive", more like affecting my mood. I like how /u/0x01@lemmy.ml put it--I kind of let my mind do whatever it does, and I try to be an observer of what unfolds. I think meditation practice (Vipassana or Insight meditation specifically) has helped with this.

  • Look for escape hatches. I run a self-hosted Cloudron server. The software I host on my home server is FOSS via Cloudron, but Cloudron itself is a service that keeps each of the FOSS apps up to date with security upgrades and data migrations when necessary. It's a huge boon to running a self-hosted server.

    But when it comes down to it, they could potentially close up somehow (new leadership, a buyout, a shutdown, etc.). They've left an escape hatch though--you can bundle and build your own apps, with a CloudronManifest.json etc. This would allow me to continue to run and update software if I absolutely needed to, without their support.
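    For anyone curious what that escape hatch looks like: a custom Cloudron package is driven by a small manifest file. Here's a minimal sketch (the app id, port, and paths are made-up placeholders; see Cloudron's packaging docs for the full field list):

    ```json
    {
      "manifestVersion": 2,
      "id": "com.example.myapp",
      "title": "My App",
      "version": "1.0.0",
      "httpPort": 8000,
      "healthCheckPath": "/",
      "addons": {
        "localstorage": {}
      }
    }
    ```

    Paired with a Dockerfile, that's enough for Cloudron to build, install, and health-check an app you maintain yourself.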

  • It's tricky. There is code involved, and the code is open source. There is a neural net involved, and it is released as open weights. The part that is not available is the "input" that went into the training. This seems to be a common way in which models are released as both "open source" and "open weights", but you wouldn't necessarily be able to replicate the outcome with $5M or whatever it takes to train the foundation model, since you'd have to guess about what they used as their input training corpus.

  • Permanently Deleted

  • I see. Yeah, I agree with you there.

  • Permanently Deleted

  • I think you would have been right a few years ago. However, as someone working in AI, I don't think it's true any longer. I'm not saying the substack article is legit, btw, just that the fulcrum has shifted--fewer people can now do much more, aided by algorithms and boosted by AI system prompts, especially if it's a group internal to a company that has database access etc.

  • I do work with LLMs, and I respect your opinion. I suspect if we could meet and chat for an hour, we'd understand each other better.

    But despite the bad, I also see a great deal of good that can come from LLMs, and AI in general. I appreciated what Sal Khan (Khan Academy) had to say about the big picture view:

    There's folks who take a more pessimistic view of AI, they say this is scary, there's all these dystopian scenarios, we maybe want to slow down, we want to pause. On the other side, there are the more optimistic folks that say, well, we've gone through inflection points before, we've gone through the Industrial Revolution. It was scary, but it all kind of worked out.

    And what I'd argue right now is I don't think this is like a flip of a coin or this is something where we'll just have to, like, wait and see which way it turns out. I think everyone here and beyond, we are active participants in this decision. I'm pretty convinced that the first line of reasoning is actually almost a self-fulfilling prophecy, that if we act with fear and if we say, "Hey, we've just got to stop doing this stuff," what's really going to happen is the rule followers might pause, might slow down, but the rule breakers--as Alexander [Wang] mentioned--the totalitarian governments, the criminal organizations, they're only going to accelerate. And that leads to what I am pretty convinced is the dystopian state, which is the good actors have worse AIs than the bad actors.

    https://www.ted.com/talks/sal_khan_how_ai_could_save_not_destroy_education?subtitle=en

  • My daughter (15f) is an artist and I work at an AI company as a software engineer. We've had a lot of interesting debates. Most recently, she defined Art this way:

    "Art is protest against automation."

    We thought of some examples:

    • when cave artists made paintings in caves, perhaps they were in a sense protesting the automatic forces of nature that would have washed or eroded away their paintings if they had not sought out caves. By painting something that could outlast themselves, perhaps they wished to express, "I am here!"
    • when manufacturing and economic factors made kitsch art possible (cheap figurines, mass reprints, etc.), although more people had access to "art" there was also a sense of loss and blandness, like maybe now that we can afford art, this isn't art, actually?
    • when computers can produce images that look beautiful in some way or another, maybe this pushes the artist within each of us to find new ground where economic reproducibility can't reach, and where we can continue the story of protest where originality can stake a claim on the ever-unfolding nature of what it means to be human.

    I defined Economics this way:

    "Economics is the automation of what nature does not provide."

    An example:

    • long ago, nature automated the creation of apples. People picked free apples, and there was no credit card machine. But humans wanted more apples, and more varieties of apples, and tastier varieties that nature wouldn't make soon enough. So humans created jobs--someone to make apple varieties faster than nature, and someone to plant more apple trees than nature, and someone to pick all of the apples that nature was happy to let rot on the ground as part of its slow orchard re-planting process.

    Jobs are created in one of two ways: either by destroying the ability to automatically create things (destroying looms, maybe), or by making people want new things (e.g. the creation of jobs around farming Eve Online Interstellar Kredits). Whenever an artist creates something new that has value, an investor will want to automate its creation.

    Where Art and Economics fight is over automation: Art wants to find territory that cannot be automated. Economics wants to discover ways to efficiently automate anything desirable. As long as humans live in groups, I suppose this cycle does not have an end.

  • Recently met with my local pastor to see how we could include kids/teens in community programs that intersect with the church. One of the major hurdles is that kids have new expectations around how to meet up--especially online--and the few touch points during the week are in person only. Trying to find ways to meet people where they're at. It was a good first meeting, although she (the pastor) is not tech savvy, so I expect we'll have a few more conversations before we find a good way forward.

  • Thanks for posting this. I am 4th gen since my family (i.e. my great-grandfather) served in a war.

    I think generations that have not gone through war have a hard time recognizing war-induced inter-generational trauma, since it's often the case that men who went through that hell didn't want to bring it home and talk about it, for various reasons (e.g. PTSD, shame, thoughtfulness).

    Their behaviors might have caused kids and grand-kids to suffer (e.g. physical abuse, emotional abuse), but those kids might not understand why their dad, grandpa, etc. behaved the way he did, so maybe the source of the problem gets buried and forgotten.

  • Because their creators allowed them to ponder and speculate about it.