Next, they must formulate a research question and design and conduct an experiment in pursuit of an answer.
Then, they must analyse and interpret the results of the experiment, which may raise yet another research question.
Can a process this complex be automated? Last week, Sakana AI Labs announced the creation of an “AI scientist” – an artificial intelligence system they claim can make scientific discoveries in the area of machine learning in a fully automated way.
Using generative large language models (LLMs) like those behind ChatGPT and other AI chatbots, the system can brainstorm, select a promising idea, code new algorithms, plot results, and write a paper summarising the experiment and its findings, complete with references.
Wow, sounds even less useful than navel-gazing internet comments. Can’t wait for scientific papers to have the exact same problems that a Google search does, where you’re inundated with barely relevant AI slop.
Ooof, this is not going to end well. I was optimistic about AI like everyone else but hype upon weakly-good arguments upon humiliating disaster just … doesn’t make me confident.
I believe human scientists can get around the flaws here – but only SOME human scientists 🙄🤷♂️
What about the peer review process?
Another LLM of course.
I can’t tell if you’re serious or not :|
Me neither.
That’s interesting, do you have any thoughts on the matter?
Technology-wise: super interesting.
Consequences-wise: oh god. This will flood academia even more than it already is.
I hadn’t considered the possibility of an AI trained to be a scientist until two weeks ago when a Kurzgesagt video brought it up. Funny to see that being worked on.
well, brace yourself for a bunch of BS papers
Why, instead of creating actually useful tools to help human scientists, are we creating their shitty AI versions? Is it some effort to keep the AI hype train from derailing?