True! I'm an AI researcher, and using an AI agent to check the work of another agent does improve accuracy! I could see things becoming more and more like this, with teams of agents creating, reviewing, and approving. If you use GitHub Copilot agent mode though, it involves constant user interaction before anything is actually run. And I imagine (and can testify as someone who has installed different ML algorithms/tools on government hardware) that the operators/decision makers want to check the work, or understand the "thought process," before committing to an action.
Will this be true forever as people become more used to AI as a tool? Probably not.
Long, boring, hard to pay attention to. I read philosophy and theory sometimes, but those occasions are few and far between for exactly those reasons. I really have to be in a special mood to sit down and read something that dense.
Same. I've found I'm most productive from like 3-7 pm, which sucks. I'd like to be productive in the morning or in the early afternoon instead of mostly past regular work hours.
Hahaha, as someone who works in AI research, good luck to them. The first is a very hard problem that won't be solved with prompt engineering on your OpenAI account (why not just use 3D blueprints for weapons that already exist?), and the second is certifiably stupid. There are plenty of ways to make bombs already that don't involve training a model that's an expert in chemistry. A bunch of amateur 8chan half-brains probably couldn't follow a Medium article, let alone do groundbreaking research.
But like you said, if they want to test the viability of those bombs, I say go for it! Make it in the garage!
I agree with him. You have to take measures to protect the populace, like with traffic laws. If people can't abide by those rules and the science is sound (which it is in this case, as it is in the case of traffic laws), then measures have to be taken to protect the community from those who refuse the verifiably safer option without due cause.
What those measures are can be deliberated amongst the community. Could be fines, could be jail time. I don't know what would compel someone to get a vaccine, but that could be determined over time.
He's my hero! I hope he's still getting something out of life now that he's in Brazil with his wife. It's a true heartache knowing his time here is coming to an end; for as long as he could, he kept commenting on international affairs. A warrior for the truth, and I'm going to miss his commentary.
This looks great! I imagine the documents you upload are used for RAG?
If so, do you also show citations in the chat answers for what context the model used to answer the user's query?
I ask because Verba by Weaviate does that, but I like yours more and I'd like to switch to it (I've had a hard time getting Verba to work in the past).
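For what it's worth, the citation behavior I'm asking about can be sketched in a few lines. This is a toy illustration, not how Verba or your project actually works: `retrieve` is a fake keyword scorer standing in for a real vector search, and the "model answer" is a placeholder for an LLM call.

```python
# Toy sketch of RAG with citations: track which chunks were retrieved
# and return their source names alongside the answer.
from dataclasses import dataclass

@dataclass
class Chunk:
    doc: str   # source document the chunk came from
    text: str  # chunk contents

def retrieve(query: str, chunks: list[Chunk], k: int = 2) -> list[Chunk]:
    # Fake relevance score: count of words shared with the query.
    # A real system would use embedding similarity instead.
    words = set(query.lower().split())
    scored = sorted(chunks, key=lambda c: -len(words & set(c.text.lower().split())))
    return scored[:k]

def answer_with_citations(query: str, chunks: list[Chunk]) -> dict:
    hits = retrieve(query, chunks)
    context = "\n".join(c.text for c in hits)
    # A real system would send `context` + `query` to an LLM here.
    answer = f"(model answer grounded in {len(hits)} retrieved chunks)"
    return {"answer": answer, "citations": [c.doc for c in hits]}

docs = [
    Chunk("intro.pdf", "vector databases store embeddings"),
    Chunk("guide.pdf", "citations show which context the model used"),
]
resp = answer_with_citations("how do citations work", docs)
```

The key design point is just that the retrieval step returns source identifiers along with the text, so the UI can render the `citations` list next to each answer.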