Using AI and Robots to Advance Science
Even though we invented it, humans can be pretty bad at science. We need to eat and sleep, we sometimes let our emotions dictate our behavior, and our bodies are easily and irreparably damaged – all of which can stand in the way of scientific achievement.
Human researchers will always play a role in science, but recent developments out of Argonne National Laboratory make the case that we should let robots do some of the work. Specifically, Argonne researchers are working on what they refer to as “autonomous discovery.” The lab hopes to increase productivity in science by relying on physical robots programmed with versatile AI software.
Casey Stone, a computer scientist and bioinformatician at Argonne, recently gave a speech about autonomous discovery at an Argonne OutLoud event.
“Autonomous discovery will help individual scientists conduct more experiments and reach results faster,” Stone said in an interview after her speech. “In complex and large-scale experiments, robotics can run the experiments overnight and the experiments can be parallelized across multiple copies of the same robot. This would free up scientists’ time and allow them to focus on coming up with other creative solutions or focusing their lab time on smaller scale investigations that might lead to new hypotheses.”
The concept of robots doing hard or boring work has enraptured science fiction writers for decades, but actually achieving it is a difficult task. Stone outlined the challenges currently facing researchers as well as the opportunities that autonomous discovery presents.
Software and Hardware Modularity
One of the main challenges facing scientists who want to dive into autonomous discovery is the need for a high degree of modularity in both the software and the hardware involved. Stone pointed out that Argonne scientists work on complex problems spanning many areas of experimentation, so they need their robots to be as flexible as possible to adapt to the changing needs of an experiment.
For the hardware, Argonne places each robotic instrument on its own cart. Each cart contains all of the computing power and sensors needed to make the instrument work and ensure that it functions as designed. The beauty of this system is that each instrument is self-contained, so the scientists can unhook the cart from the rest of the instruments, roll it away, and roll in another instrument without disrupting the rest of the system.
This modularity also lets scientists scale up the instruments they need most. If a specific robot is taking longer than the others in the process, researchers can hook additional copies of that same type of robot into the system to parallelize that step and increase speed.
Stone stated that the current cart system is just the first iteration of hardware modularity. She spoke about a future laboratory where humans don’t need to roll the instruments into place themselves.
“Instead, the instruments are located on mobile platforms that can roll themselves into formation based on the needs of an experiment,” Stone said. “In a situation like that, we could take advantage of optimization algorithms to arrange the instruments in the optimal way to complete the experiment as fast as possible.”
For software, Stone stated that the code for each instrument is kept in its own section of Argonne’s AD-SDL GitHub repository. For example, all of the code needed to control the PF400 robotic arm lives in one place, and all of the code needed to control the OT-2 liquid handling robot in another.
Keeping the code separate for each instrument makes it easier to set up robotic labs, because researchers only need the code that is relevant to the instruments they currently want to use.
The combination of the instrument itself, the associated code from the GitHub repository, and the computers/sensors needed to allow the instrument to function is called a module. Stone stated that a module is a self-contained unit that can be added or removed from the overall robotic laboratory like a Lego brick.
Each module broadcasts certain information to the rest of the system – such as which actions it can perform, whether it is ready to receive a command, and what resources it has available. Each module can receive a command, execute it, and then indicate when the command has been completed. A REST API server then handles the distribution of experimental actions to the instruments in the correct order, waiting for each command to complete before sending the next one.
“This way, each instrument functions completely independently, and the server is responsible for integrating them together,” Stone said. “If you remove one instrument and replace it with something else, there are minimal code changes needed to get the system up and running again.”
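As an illustration of what a single module’s interface might look like, here is a minimal sketch in Python using Flask. This is not Argonne’s actual AD-SDL code, and the endpoint names and fields are assumptions, but it captures the pattern Stone describes: the module reports what it can do and whether it is ready, accepts one action at a time, and reports when that action is complete.

```python
# A minimal, illustrative instrument module (not Argonne's AD-SDL code).
# Endpoint names and fields are assumptions made for this example.
from flask import Flask, jsonify, request

app = Flask(__name__)

MODULE_INFO = {
    "name": "liquid_handler",          # e.g. an OT-2-style liquid handler
    "actions": ["transfer", "mix"],    # actions this module can perform
    "resources": {"tip_racks": 4},     # consumables it has on hand
}
state = {"busy": False}

@app.get("/state")
def get_state():
    """Broadcast capabilities and readiness to the rest of the system."""
    return jsonify({**MODULE_INFO, "ready": not state["busy"]})

@app.post("/action")
def run_action():
    """Execute one requested action and report when it has completed."""
    cmd = request.get_json()
    if cmd["action"] not in MODULE_INFO["actions"]:
        return jsonify({"status": "rejected"}), 400
    state["busy"] = True
    # ... drive the physical instrument here ...
    state["busy"] = False
    return jsonify({"status": "completed", "action": cmd["action"]})

if __name__ == "__main__":
    app.run(port=8001)
```

An orchestrating REST server would then poll each module’s state and send the next action only after the previous one reports completion – the “wait for each command” behavior described above – so swapping one module for another requires no changes to the rest of the system.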
Stone also noted that scientists do not need to employ the hardware modularity strategy to take advantage of the software modularity. These resources were designed to be as versatile as they are useful, and Argonne researchers are working hard to remove as many roadblocks as possible.
This mentality of sharing resources cuts to the heart of Argonne’s work with autonomous research. All of the software Argonne has developed here is open-source, and Stone underscored the collectivist nature of this work.
“As a national lab, our goal is to make discoveries and to spur innovation, rather than to profit from our scientific discoveries,” Stone said. “We try to make scientific advancements accessible and beneficial to the communities around us. Making our code open-source enables other groups to bring automated discovery into their scientific process, even if they may not have the funds to pay for the more expensive proprietary solutions for scientific instrument integration.”
While this dedication to the advancement of science itself is noble in its own right, Argonne scientists are helping themselves by helping others. By making this code open-source, researchers can develop a collective knowledge base around robotics and instrument integration. Any scientist who uses this code can contribute to the same software stack and build on the discoveries of others.
Humans Still Run the Show
This kind of autonomous research is exciting, but it’s important to note here that humans aren’t being written out of the scientific process. Stone stated that humans will still play a crucial role in every step of the research journey.
In Stone’s mind, an autonomous research experiment would begin with a human scientist formulating a research question or hypothesis. Then, the scientist would direct the AI to train on relevant data. The researcher would have to check that the AI’s output is logical. Additionally, the scientist would handle tasks like fixing instrument errors or supplying more labware to the system.
Once the robotic experimentation is complete, a scientist would check that the data produced is of sufficient quality before the data is passed back to the AI to update the models. Finally, when the autonomous discovery loops have completed many rounds and reached a result, the researcher can further validate the results with manual tests.
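To make that division of labor concrete, here is a simplified, self-contained sketch of one such loop in Python. Every function in it is a hypothetical stand-in (there is no real model or robot behind it); the point is simply where the human checkpoints sit relative to the AI and the robots.

```python
# Illustrative human-in-the-loop autonomous discovery cycle.
# All functions are hypothetical placeholders, not real APIs.
import random

def propose_candidates(history):
    """Stand-in for an AI model suggesting the next batch of experiments."""
    return [random.random() for _ in range(5)]

def scientist_approves(candidates):
    """Human checkpoint: does the AI's proposed plan make sense?"""
    return all(0.0 <= c <= 1.0 for c in candidates)

def run_on_robots(candidates):
    """Stand-in for the robotic carts physically running the experiments."""
    return [(c, c * random.random()) for c in candidates]

def data_quality_ok(results):
    """Human checkpoint: is the measured data good enough to learn from?"""
    return len(results) > 0

history = []
for discovery_round in range(3):              # a few autonomous discovery loops
    candidates = propose_candidates(history)
    if not scientist_approves(candidates):    # human vets the AI's output
        continue
    results = run_on_robots(candidates)       # robots run, possibly overnight
    if not data_quality_ok(results):          # human vets the resulting data
        continue                              # e.g. fix an instrument and retry
    history.extend(results)                   # the model updates on the new data

best = max(history, key=lambda pair: pair[1]) # final result for manual validation
print("Candidate to validate by hand:", best)
```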
A potential area that autonomous discovery could advance is the study of antimicrobial peptides. These are small proteins that help organisms like humans protect themselves from infections by acting like natural antibiotics.
Peptides are made up of a sequence of amino acids, and there are 20 common amino acids. If a scientist wanted to design an antimicrobial peptide with a length of 10 amino acids (which is short for antimicrobial peptides) and there are 20 amino acids to choose from, the scientist would end up with 20^10 total possibilities for peptide sequences. This is more than 10 trillion possible sequences to test.
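To see how quickly that number grows, a quick back-of-the-envelope calculation helps (the 20 is just the count of common amino acids; the rest is arithmetic):

```python
# 20 choices at each position, multiplied across the peptide's length.
AMINO_ACIDS = 20

for length in (5, 10, 15):
    print(length, AMINO_ACIDS ** length)

# A length of 10 alone gives 20**10 = 10,240,000,000,000 sequences,
# i.e. more than 10 trillion.
```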
Of course, a knowledgeable scientist would be able to narrow down some of these possibilities, but the fact remains that it would be nearly impossible to run all the experiments necessary to reach an optimal result using traditional methods.
This is where autonomous discovery could be immensely helpful. The scientist could train the AI on large amounts of data related to known antimicrobial peptides and their sequences. Then, the AI would learn patterns in those sequences that might contribute to their effective antimicrobial nature. Thus, the AI software would narrow down the number of sequences to test. After this, the scientist could hand off the physical operation of these experiments to one of the robotic carts described above. If the researcher discovers a physical bottleneck, or if one of the carts isn’t working properly, they can swap hardware in and out as needed.
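As a rough illustration of that narrowing step, the sketch below generates random candidate peptides, scores them with a stand-in function, and keeps only the top handful for physical testing. The scoring heuristic is invented for this example and is not a real predictor; an actual system would use a model trained on known antimicrobial peptides, as described above.

```python
# Toy sketch of narrowing a peptide search space before robotic testing.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 common amino acids

def random_peptide(length=10):
    """Generate one candidate sequence of the given length."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def predicted_activity(sequence):
    """Hypothetical stand-in for a trained model's antimicrobial score."""
    # Toy heuristic only: count positively charged residues (K and R),
    # which many antimicrobial peptides are rich in. Not a real predictor.
    return sum(sequence.count(residue) for residue in "KR") / len(sequence)

# Sample a tiny slice of the 20**10 search space and rank it.
candidates = [random_peptide() for _ in range(100_000)]
shortlist = sorted(candidates, key=predicted_activity, reverse=True)[:96]

# The 96 best-scoring sequences (roughly one microplate's worth) would then
# be handed to the robotic carts for physical synthesis and testing.
print(shortlist[:5])
```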
Humans will always play a crucial role in research. However, our scientific progress over the years has been tied to the tools we use. Versatile AI software and modular robotic hardware combine to form one of the most revolutionary tools science has ever seen.
As these autonomous discovery systems become more capable, they may one day make leaps of scientific understanding that were previously unimaginable to the human mind alone.