And without hallucinations??? That sounds freaking awesome
Of course not.
Aye?
You’re them! You’re the person! Holy shit!!
That’s why you hate the internet???
Clearly.
Sorry 'bout that
Yeah they added “Don’t hallucinate” to the prompt.
Seems like the kind of prompt a hallucination would say
Likely not
yeah, no.
It probably uses Retrieval Augmented Generation, which can still hallucinate, but it usually does a better job on niche questions, and it can even provide a source sometimes, depending on how you set it up.
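For anyone who hasn’t seen the pattern, the core of it really does fit in a few lines. This is a minimal sketch, not what Sci-Hub actually runs: the word-overlap scorer is a stand-in for a real embedding model, and call_llm is a stub marking where an actual model call would go.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then hand
# them to an LLM as context for the answer.
from collections import Counter

DOCS = [
    "Sci-Hub indexes scientific papers and makes them searchable.",
    "Retrieval-augmented generation grounds an LLM's answer in retrieved text.",
    "Glue does not belong on pizza.",
]

def score(query: str, doc: str) -> int:
    # Toy relevance score: count overlapping words. A real system would
    # rank by embedding similarity instead.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    return sorted(DOCS, key=lambda doc: score(query, doc), reverse=True)[:k]

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted or local model here.
    return f"[model output for a {len(prompt)}-char prompt]"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    prompt = f"Using only this context:\n{context}\n\nAnswer the question: {query}"
    return call_llm(prompt)

print(answer("How does retrieval-augmented generation work?"))
```

The sourcing comes along for free: whatever retrieve() returns is what you show the user as references.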
I hate it when people use unnecessary terms to describe something.
It’s a script that runs a search and then the LLM takes the output of that and reformats it into an answer. It’s the same as feeding it a document and having it rephrase something.
“I hate when people use concise, reasonably common, and understandable terminology. Why can’t we just expand everything into full sentences that are also oversimplified?”
Point it out then.
RAG is literally just polling for information and rewriting it. It’s the same garbage that gave us Gemini telling us to put glue on pizza to prevent the cheese from slipping off.
You can, and should, be more critical of where you source the information, but it’s not going to magically make language models actually intelligent. It’s not going to make them reason, or be able to properly select what is relevant and what isn’t. Just because you give them a bunch of scientific papers doesn’t mean the stuff they output will be accurate or not misleading.
They’re still just token prediction engines.
Literally here. And sorry, before you posted this, I did quickly edit my comment to “oversimplified”. Because technically yes, it’s searching and using what it’s retrieved mixed with a (modified) user prompt to generate an output. But it’s searching based on a prompt (rewriting it to aid retrieval), often reranking results, stripping the query-specific context from the results into chunks, attempting to resolve contradictions between sources (which is objectively more than just rephrasing), and then synthesizing between whatever its pretraining is and what its retrieval results are (thus “retrieval-augmented generation”). That’s why I amended it to “oversimplified”: you’re, for no explicable reason, taking well-established terminology that you think people shouldn’t use (for being “unnecessary”), expanding it out to sentence-length, and even then oversimplifying the process.
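To make that list of steps concrete, here’s the same pipeline as a skeleton. Every helper below is a stub I made up to mark where a real system would put a model call or an index lookup; no actual implementation is this naive:

```python
# The stages described above, as a skeleton: rewrite, retrieve, rerank,
# chunk, reconcile, synthesize. Each helper is a placeholder stub.

def rewrite_for_retrieval(prompt: str) -> str:
    return prompt.lower()  # stub: a model would produce a proper search query

def search_index(query: str) -> list[str]:
    return [f"passage about {word!r}" for word in query.split()[:3]]  # stub

def rerank(query: str, hits: list[str]) -> list[str]:
    return sorted(hits)  # stub: a cross-encoder would score relevance here

def strip_to_chunks(hits: list[str]) -> list[str]:
    return [hit[:200] for hit in hits]  # stub: keep only the useful spans

def reconcile(chunks: list[str]) -> str:
    return "\n".join(dict.fromkeys(chunks))  # stub: dedupe, flag contradictions

def generate(prompt: str, context: str) -> str:
    # Stub: the LLM synthesizes between its pretraining and the context.
    return f"[answer to {prompt!r}, grounded in {len(context.splitlines())} chunks]"

def rag_answer(user_prompt: str) -> str:
    query = rewrite_for_retrieval(user_prompt)
    chunks = strip_to_chunks(rerank(query, search_index(query)))
    return generate(user_prompt, reconcile(chunks))

print(rag_answer("Does retrieval stop hallucination?"))
```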
LLMs do not possess the ability to reason over the information they are fed. They convert it to numbers and perform arithmetic on it. Augmenting them with scripts won’t change the fundamental nature of how they work.
They take information and regurgitate it. There is no analytical capability present that lets them distinguish the importance of a small aside from the main points. They can just as easily combine several separate facts into a single point, and phrase things in a way that gives a footnote as much weight as the main subject.
Hiding the actual workings behind silly marketing buzzwords serves to sensationalise what these things actually do. It feeds the AI hysteria and further muddles the discussion around them. It’s why laymen think these models are basically magic and buy into the idea that they’re somehow going to solve all our problems.
I love machine learning. It is, and has historically been, a fantastic tool for plenty of tasks, but it isn’t magic.
If I implement a script to automate database migrations during application deployment I could definitely market that as Deployment Ready Database Optimisations or some other BS term, but that doesn’t make it more than a simple automation.
Ah, yes, I forgot that if an LLM has no conscious ability to reason, then we shouldn’t have any terminology to describe the general process it’s using to create an output. Case closed. I’m glad you’ve enlightened us about how useful jargon isn’t actually useful. Data goes in, data goes out; you can’t explain that.
That isn’t what I said. You’re doing a pretty good LLM impression yourself.
That is why I hate marketing buzzwords.
Using an LLM to process the output of a search over a repository of scientific papers isn’t going to automatically make the output useful or accurate. Papers aren’t necessarily high quality just because they’ve been published; just look at the garbage that Lisa Littman, Kenneth Zucker, and their ilk have shat out over the decades.
An LLM, no matter how many scripts or cleverly written prompts you augment it with, will never be able to differentiate good science from bad, and will just as easily give equal credence to garbage papers as it will to actual quality ones. That’s a problem, without “hallucinations” even entering the picture.
Edit: I think the overall idea of the site is awesome, knowledge should be freely available. I just don’t see the value add that an LLM provides. I only see problems with it.
Need the deets asap with all that hot tea low key context? Get on the RAG!
Pre-order access for $5.99/USD month for your first 12 months. You know the next one comin’ soon!
Sure, but RAG has a Wikipedia article about the specifics of the process, history of its use, links to papers and articles about it and its advantages and drawbacks. It’s also useful as a feature on a matrix for comparing one tool or model’s capabilities to another. None of that is true of the sentence.
Virtually all of computing could be reduced to voltages across terminals changing over time, but it can still be useful to give specific terms to specific applications of this process, so we have something to talk about.
“RAG” is way easier to search than “it’s a script that runs a search and then the LLM takes the output of that and reformats it into an answer”. So if people want to look into it further and research what it is, instead of taking some person’s one-sentence explanation, they can.
Ironically, trying to search for that phrase would work better in a RAG than in a standard keyword search.
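That’s the whole pitch for embedding-based retrieval, for what it’s worth: keyword search needs the words to literally match, vector search matches meaning. A toy illustration of the keyword half (the doc text is made up):

```python
# Keyword search requires lexical overlap; paraphrases score zero.
query = "script that runs a search and reformats the output"
doc = "retrieval-augmented generation grounds model answers in fetched text"

overlap = set(query.lower().split()) & set(doc.lower().split())
print(overlap or "no shared words")  # a keyword engine ranks this doc last
# An embedding model would still place these two near each other,
# because they mean similar things -- which is why the paraphrase
# works as a query "in a RAG" but not in a regex-style search.
```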
So… Search… Assisted… Generation?
RAG is a name from a research paper that very accurately describes what happens, but your argument seems to say you just don’t like acronyms.
Looking at the whole thing as a workflow, you’d be correct.
But RAG can be a bit more than just running a search, which implies a keyword-based, regex-style search.
“RAG” is tough in acronym form, though the concept is quite popular right now - decent summary btw, I’d say (fully non-expert).
Obviously not, because that’s not possible.
What fun would that be?
I’ll keep the hallucinations for myself, tyvm.
Per sci-hub.ru this has been available since March 6th.
"Hear the good news: recent advances in artificial intelligence enabled Sci-Hub to launch a robot that gives scientifically-grounded responses to questions. The robot starts with searching for relevant literature in Sci-Hub database, then turns to selecting and reading most recent studies, and composes the answer based on this information. The answer includes all the references, and each referenced article can be read on Sci-Hub with one click.
Unlike question-answering robots that were based upon the early generation of neural networks, Sci-Hub bot does not hallucinate and is not making up scientific facts and does not cite sources that do not exist. To support its statements, Sci-Bot uses articles from Sci-Hub database. Questions can be asked in any language, and answers can be saved on server and shared.
The alpha version only supports answering one question, and a more advanced variation that supports conversation mode is coming soon. The right column displays example questions that have been answered by the robot - push a question to see the generated answer."
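The blurb’s pipeline (search, pick recent studies, compose an answer where every claim carries a clickable reference) is mostly about the output shape. A sketch of just the reference plumbing, with made-up paper titles and placeholder DOI links; obviously not their code:

```python
# Attach numbered references to an answer so each claim resolves to a
# paper URL. Titles and DOI paths below are placeholders.
papers = [
    {"title": "Example study A (2024)", "url": "https://sci-hub.ru/<doi-a>"},
    {"title": "Example study B (2023)", "url": "https://sci-hub.ru/<doi-b>"},
]
claims = [
    ("First claim drawn from the retrieved studies.", 0),
    ("Second claim drawn from the retrieved studies.", 1),
]

body = " ".join(f"{text} [{idx + 1}]" for text, idx in claims)
refs = "\n".join(f"[{i + 1}] {p['title']} - {p['url']}" for i, p in enumerate(papers))
print(body, refs, sep="\n\n")
```

Actually grounding the claims in the retrieved text is the hard part; the "does not hallucinate" line is a claim about that step, not about the formatting.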
Thanks for doing what I should have done, I actually read that and thought it sounded great. The claim of “no hallucination” should of course be taken with a grain of salt, as other comments have pointed out.
Sci-hub has been an invaluable resource. I posted a question yesterday at work. There was a queue, and it was time to leave, so I’ll see what the result was when I get over there. I’ve avoided using AI, but this was too tempting. My question was in an area where I have some knowledge, so I’m hoping I’ll be able to spot any problems in the reply.
I’d be interested in having your feedback!!
LOL, of course not.
Speaking of hallucinations, I think the best way to see them is to go to Google Gemini (Reddit is selling them access to Reddit posts) and start a conversation about a Reddit account you have, acting as if you don’t know anything. It usually starts well, but as it progresses you can see how it is making shit up. The more you ask, the more insane it gets.
And this is supposedly having all the comments at its disposal.
I also tried Lemmy, as I’m sure they are also indexing it. It is telling me that I’m actually the admin who created Lemmy.dbzer0.com.
From what I understand from the sales brochure, these types of “AI” that are modeled on highly curated data are far less prone to hallucinations.
I doubt it’s fine-tuned, it’s likely just one of the open-weight LLMs with RAG. I’ve done similar things, and they don’t really work as well as I’d like (the most relevant chunks of text aren’t always ranked the highest/have the least embedding distance, and the models still hallucinate sometimes).
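To make the embedding-distance part concrete: retrieval ranks chunks by cosine similarity to the query vector, and nothing forces the nearest chunk to be the most useful one. The toy 3-d vectors below stand in for real embeddings:

```python
# Rank chunks by cosine similarity to a query vector. The 3-d vectors
# are toy stand-ins for what a real embedding model would produce.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

CHUNKS = {
    "methods section of the relevant paper": [0.9, 0.1, 0.2],
    "tangential review article": [0.8, 0.3, 0.1],
    "unrelated footnote": [0.1, 0.9, 0.7],
}

query_vec = [0.85, 0.2, 0.15]
ranking = sorted(((cosine(query_vec, vec), text) for text, vec in CHUNKS.items()),
                 reverse=True)
for sim, text in ranking:
    print(f"{sim:.3f}  {text}")
# The tangential article scores within ~0.001 of the relevant paper, so a
# little embedding noise flips the ranking -- the failure mode described above.
```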
Hallucination is Inevitable.