For researchers, reading scientific articles can be extremely time-consuming. According to one survey, scientists spend seven hours a week searching for information. Another survey suggests that systematic literature reviews – scientific syntheses of the evidence on a particular topic – take a five-person research team an average of 41 weeks.
But it doesn’t have to be that way.
At least, that’s the message from Andreas Stuhlmüller, co-founder of Elicit, an AI startup that designed a “research assistant” for scientists and R&D labs. With backers including Fifty Years, Basis Set, Illusion, and angel investors Jeff Dean (Google’s chief scientist) and Thomas Ebeling (the former CEO of Novartis), Elicit is building an AI-powered tool to eliminate the most tedious aspects of literature review.
“Elicit is a research assistant that automates scientific research with language models,” Stuhlmüller told TechCrunch in an email interview. “Specifically, it automates literature review by searching for relevant papers, extracting key information about the studies, and organizing that information into concepts.”
Elicit is a for-profit company spun out of Ought, a nonprofit research lab launched in 2017 by Stuhlmüller, a former researcher at Stanford’s Computation and Cognition Lab. Elicit’s other co-founder, Jungwon Byun, joined the startup in 2019 after leading growth at online lending company Upstart.
Using a variety of first-party and third-party models, Elicit searches and discovers concepts in articles, allowing users to ask questions such as “What are all the effects of creatine?” or “What are all the datasets that have been used to study logical reasoning?” and get a list of answers from the academic literature.
“By automating the systematic review process, we can immediately save time and money for the academic and industry research organizations that produce these reviews,” Stuhlmüller said. “By reducing costs sufficiently, we open the door to new use cases that were previously cost-prohibitive, such as just-in-time updates when the state of knowledge in an area changes.”
But wait, you might say: don’t language models tend to make things up? Indeed, they do. Galactica, Meta’s attempt at a language model for scientific research, was pulled just three days after its launch, after users discovered that the model frequently cited fake research papers that sounded plausible but weren’t actually factual.
Stuhlmüller says, however, that Elicit has taken steps to make its AI more reliable than that of general-purpose platforms.
For one, Elicit breaks the complex tasks its models perform down into “human-understandable” pieces. This lets Elicit know, for example, how often different models make things up when generating summaries, and in turn helps users identify which answers to check – and when.
Elicit also attempts to calculate the overall “trustworthiness” of a scientific paper by taking into account factors such as whether the trials in the research were controlled or randomized, the funding source, potential conflicts of interest, and sample sizes.
“We don’t create chat interfaces,” Stuhlmüller said. “Elicit users apply language models in the form of batch tasks… We never simply generate answers with models; we always tie answers back to the scientific literature to reduce hallucinations and make it easier to verify the models’ work.”
I’m not necessarily convinced that Elicit has solved some of the major problems plaguing language models today, given how intractable those problems are. But its efforts certainly seem to have piqued the interest – and perhaps even the trust – of the research community.
Stuhlmüller claims more than 200,000 people use Elicit each month, representing 3x year-over-year growth (as of January 2023), from organizations including the World Bank, Genentech and Stanford. “Our users are asking to pay for more powerful features and run Elicit at a larger scale,” he added.
Presumably, it was this momentum that led to Elicit’s first funding round – a $9 million tranche led by Fifty Years. The plan is to invest the majority of the new funds in further developing Elicit’s product and expanding its team of product managers and software engineers.
But what is Elicit’s plan to make money? Good question – and it’s a question I asked Stuhlmüller point-blank. He highlighted Elicit’s paid tier, launched this week, which allows users to search articles, extract data and summarize concepts on a larger scale than the free tier. The long-term strategy is to make Elicit a general research and reasoning tool – one that entire companies would shell out for.
One possible barrier to Elicit’s commercial success is open source efforts such as the Allen Institute for AI’s Open Language Model, which aim to develop a large free language model optimized for science. But Stuhlmüller says he sees open source more as a complement than a threat.
“Right now, the main competition is human labor – research assistants hired to painstakingly extract data from papers,” Stuhlmüller said. “Scientific research is a huge market, and research workflow tools do not have a major incumbent. This is where we will see entirely new AI-driven workflows emerge.”