Thursday, August 6, 2015
Computer model could explain how simple molecules took first step toward life
Nearly four billion years ago, the earliest precursors of life on Earth emerged. First small, simple molecules, or monomers, banded together to form larger, more complex molecules, or polymers. Then those polymers developed a mechanism that allowed them to self-replicate and pass their structure on to future generations.
We wouldn't be here today if molecules had not made that fateful transition to self-replication. Yet despite the fact that biochemists have spent decades searching for the specific chemical process that can explain how simple molecules could make this leap, we still don't really understand how it happened.
Now Sergei Maslov, a computational biologist at the U.S. Dept. of Energy (DOE)'s Brookhaven National Laboratory and adjunct professor at Stony Brook Univ., and Alexei Tkachenko, a scientist at Brookhaven's Center for Functional Nanomaterials (CFN), have taken a different, more conceptual approach. They've developed a model that explains how monomers could very rapidly make the jump to more complex polymers. And what their model points to could have intriguing implications for CFN's work in engineering artificial self-assembly at the nanoscale. Their work is published in The Journal of Chemical Physics.
To understand their work, let's consider the most famous organic polymer, and the carrier of life's genetic code: DNA. This polymer is composed of long chains of specific monomers called nucleotides, of which the four kinds are adenine, thymine, guanine, and cytosine (A, T, G, C). In a DNA double helix, each specific nucleotide pairs with another: A with T, and G with C. Because of this complementary pairing, it would be possible to put a complete piece of DNA back together even if just one of the two strands was intact.
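That complementarity is easy to see in a toy sketch. The following Python snippet (illustrative only, not part of the study) encodes the A-T and G-C pairing rules and rebuilds the partner of an intact strand:

```python
# Minimal sketch (illustrative, not from the study): rebuilding the partner of a
# DNA strand from the A-T / G-C complementarity rules described above.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(strand: str) -> str:
    """Return the base-paired partner strand, read in the same direction."""
    return "".join(COMPLEMENT[base] for base in strand)

intact = "ATGCGT"                        # the surviving strand
restored = complementary_strand(intact)  # what can be rebuilt from it
print(intact, "->", restored)            # ATGCGT -> TACGCA
```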
While DNA has become the molecule of choice for encoding biological information, its close cousin RNA likely played this role at the dawn of life. This is known as the RNA world hypothesis, and it's the scenario that Maslov and Tkachenko considered in their work.
In such a reassembly, the single intact RNA strand serves as a template strand, and the use of a template to piece together monomer fragments is known as template-assisted ligation. This concept is at the crux of their work. They asked whether that piecing together of complementary monomer chains into more complex polymers could occur not as the healing of a broken polymer, but rather as the formation of something new.
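As a toy illustration of template-assisted ligation (a sketch under simplified assumptions, not the authors' actual model), the snippet below joins two short fragments into one longer chain only when both anneal, side by side, to a complementary template:

```python
# Toy sketch of template-assisted ligation (illustrative, not the authors' model):
# two fragments that pair with adjacent stretches of a template are joined
# ("ligated") into a single longer polymer.
from typing import Optional

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def pairs_with(fragment: str, template_region: str) -> bool:
    """True if every base in the fragment is complementary to the template region."""
    return len(fragment) == len(template_region) and all(
        PAIR[f] == t for f, t in zip(fragment, template_region)
    )

def template_assisted_ligation(template: str, left: str, right: str) -> Optional[str]:
    """Join two fragments if they anneal side by side at the start of the template."""
    split = len(left)
    if pairs_with(left, template[:split]) and pairs_with(right, template[split:split + len(right)]):
        return left + right   # the ligated, longer polymer
    return None               # the fragments do not fit this template

print(template_assisted_ligation("ATGCGT", "TAC", "GCA"))  # -> TACGCA
```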
"Suppose we don't have any polymers at all, and we start with just monomers in a test tube," explained Tkachenko. "Will that mixture ever find its way to make those polymers? The answer is rather remarkable: Yes, it will! You would think there is some chicken-and-egg problem—that, in order to make polymers, you already need polymers there to provide the template for their formation. Turns out that you don't really."
Instilling memory
Maslov and Tkachenko's model imagines some kind of regular cycle in which conditions change in a predictable fashion—say, the transition between night and day. Imagine a world in which complex polymers break apart during the day, then repair themselves at night. The presence of a template strand means that the polymer reassembles itself precisely as it was the night before. That self-replication process means the polymer can transmit information about itself from one generation to the next. That ability to pass information along is a fundamental property of life.
"The way our system replicates from one day cycle to the next is that it preserves a memory of what was there," said Maslov. "It's relatively easy to make lots of long polymers, but they will have no memory. The template provides the memory. Right now, we are solving the problem of how to get long polymer chains capable of memory transmission from one unit to another to select a small subset of polymers out of an astronomically large number of solutions."
According to Maslov and Tkachenko's model, a molecular system needs only a very tiny percentage of more complex molecules (even just dimers, pairs of monomers joined together) to start merging into the longer chains that will eventually become self-replicating polymers. This neatly sidesteps one of the most vexing puzzles of the origins of life: self-replicating chains likely need to be very specific sequences of at least 100 paired monomers, yet the odds of 100 such pairs randomly assembling themselves in just the right order are practically zero.
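That "practically zero" is easy to quantify: with four possible monomers at each position, a specific 100-unit sequence has one chance in 4^100 of assembling at random, as the short calculation below shows.

```python
# Back-of-the-envelope arithmetic for the odds mentioned above: a specific
# 100-monomer sequence drawn at random from a four-letter alphabet.
p_per_position = 1 / 4
p_whole_chain = p_per_position ** 100
print(f"{p_whole_chain:.1e}")  # ~6.2e-61, effectively zero
```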
"If conditions are right, there is what we call a first-order transition, where you go from this soup of completely dispersed monomers to this new solution where you have these long chains appearing," said Tkachenko. "And we now have this mechanism for the emergence of these polymers that can potentially carry information and transmit it downstream. Once this threshold is passed, we expect monomers to be able to form polymers, taking us from the primordial soup to a primordial soufflé."
While the model's concept of template-assisted ligation does describe how DNA—as well as RNA—repairs itself, Maslov and Tkachenko's work doesn't require that either of those was the specific polymer for the origin of life.
"Our model could also describe a proto-RNA molecule. It could be something completely different," Maslov said.
Order from disorder
The fact that Maslov and Tkachenko's model doesn't require the presence of a specific molecule speaks to their more theoretical approach.
"It's a different mentality from what a biochemist would do," said Tkachenko. "A biochemist would be fixated on specific molecules. We, being ignorant physicists, tried to work our way from a general conceptual point of view, as there's a fundamental problem."
That fundamental problem is the second law of thermodynamics, which states that systems tend toward increasing disorder and lack of organization. The formation of long polymer chains from monomers is the precise opposite of that.
"How do you start with the regular laws of physics and get to these laws of biology which makes things run backward, which make things more complex, rather than less complex?" Tkachenko queried. "That's exactly the jump that we want to understand."
Applications in nanoscience
The work is an outgrowth of efforts at the Center for Functional Nanomaterials, a DOE Office of Science User Facility, to use DNA and other biomolecules to direct the self-assembly of nanoparticles into large, ordered arrays. While CFN doesn't typically focus on these kinds of primordial biological questions, Maslov and Tkachenko's modeling work could help CFN scientists engaged in cutting-edge nanoscience research to engineer even larger and more complex assemblies using nanostructured building blocks.
"There is a huge interest in making engineered self-assembled structures, so we were essentially thinking about two problems at once," said Tkachenko. "One is relevant to biologists, and second asks whether we can engineer a nanosystem that will do what our model does."
The next step will be to determine whether template-assisted ligation can allow polymers to begin undergoing the evolutionary changes that characterize life as we know it. While this first round of research involved relatively modest computational resources, that next phase will require far more involved models and simulations.
Maslov and Tkachenko's work has solved the problem of how long polymer chains capable of information transmission from one generation to the next could emerge from the world of simple monomers. Now they are turning their attention to how such a system could naturally narrow itself down from exponentially many polymers to only a select few with desirable sequences.
"What we needed to show here was that this template-based ligation does result in a set of polymer chains, starting just from monomers," said Tkachenko. "So the next question we will be asking is whether, because of this template-based merger, we will be able to see specific sequences that will be more 'fit' than others. So this work sets the stage for the shift to the Darwinian phase."
Source: Brookhaven National Laboratory
Wednesday, July 29, 2015
How Much Is Too Much? Information Overload in Disease and Drug Research
Biology is a rapidly evolving science. Every new discovery uncovers new layers of complexity that must be unraveled in order to understand the underlying biological mechanisms of diseases and for successful drug development.
Driven both by community need and by the increased likelihood of positive returns on the large investment required, drug discovery research has often focused on identifying and understanding the most common diseases with relatively straightforward causes and those that affect large numbers of individuals. Today, companies continue to push the boundaries of innovation in order to alleviate the debilitating effects of complex diseases—those that affect smaller patient populations or show high variability from patient to patient. This requires looking deeper into available data.
The big data revolution
Key to understanding complex and variable diseases is the need to examine data from large numbers of afflicted patients. More than 90% of the world’s data has been created in the past two years, and the pace is accelerating. High-throughput technologies create ever-expanding quantities of data for researchers to mine. But in addressing one problem, another has developed—how can researchers find the specific information they need among the mass of data?
Beyond the simple issue of scale, data diversity also plays a key role. Twenty years ago, before the draft human genome sequence was finished, researchers could get a publication accepted into a journal by determining the sequence of a single gene. But with our growth in knowledge, successful research now depends more on understanding the biological complexity that comes from vast networks of interactions between genes, proteins and small molecules, not only from the sequence itself. In this environment, how can researchers determine what information is most important to understanding a particular disease?
Finding the right data
With approximately one million scientific articles published annually, scientists have a daunting task to find relevant papers for their work. They are drowning in a data deluge, and even highly complex queries return hundreds of possible answers. Twenty years ago researchers could feel fairly confident that they could keep up with the most important discoveries by reading a handful of journals. Today, important and high-quality research is published in an ever-expanding collection of journals—recent estimates from Google analytics suggest as many as 42% of highly cited papers appear in journals that are not traditionally highly cited—so researchers must cast a wide net to ensure they don’t miss key discoveries. How can they be confident that they have identified the most current and relevant research without missing a critical piece of the puzzle?
Although researchers often start to learn about a new disease using generalized search tools like PubMed or Google Scholar, more specialized tools and approaches that can connect information from multiple sources are needed to filter the massive lists of possible responses down to a manageable and relevant set. For instance, Elsevier offers research tools such as Reaxys for chemists and Pathway Studio for biologists. These solutions draw on information from Elsevier’s own journals and articles as well as those of other publishers. Each also provides focused search tools, so researchers can leverage multiple data sources and build a comprehensive, detailed picture of their disease from the relevant data.
A "Big" project
DARPA’s "Big Mechanism" project has tasked teams from leading universities and data providers with helping improve the discoverability of scientific data. Elsevier is helping with one part of this project; developing "deep-reading" algorithms in conjunction with Carnegie Mellon Univ. to uncover almost all relevant data from a scientific publication. Understanding the role of KRAS in cancer activation was chosen as a test case due to its complexity: KRAS goes by at least five synonyms in the literature and interacts with more than 150 other proteins, many with dozens to hundreds of their own synonyms—a daunting task. Once developed, these "deep-reading" tools can be extended to work with a wide range of other genes, proteins and diseases.
Developing effective discovery tools requires significant scientific expertise to ensure data is categorized correctly so that computers can "read" and extract the relevant data requested. As the KRAS example illustrates, unless data is categorized correctly, a researcher could end up needing to enter more than 500 search terms. In short, discovery tools need extensive, refined taxonomies to be of value. A combination of deep biological domain knowledge and sophisticated software development skills is needed to develop computer-based "deep-reading" tools that can match human accuracy while retaining the computer’s speed advantage in sifting through massive data collections.
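To see why hand-built queries break down at this scale, consider a small, purely hypothetical synonym table (the protein names and alias lists below are placeholders, not a real taxonomy): expanding one gene plus its interaction partners quickly multiplies into hundreds of search terms.

```python
# Hypothetical illustration of synonym blow-up in literature search.
# The protein names and alias lists are invented placeholders, not a real taxonomy.
synonyms = {
    "GeneOfInterest": ["alias1", "alias2", "alias3", "alias4", "alias5"],
    "PartnerProtein1": ["pp1-alias1", "pp1-alias2", "pp1-alias3"],
    "PartnerProtein2": ["pp2-alias1", "pp2-alias2"],
    # ...imagine 150+ interacting proteins, each with its own alias list...
}

# A naive keyword query must OR together every alias of every entity.
all_terms = [alias for aliases in synonyms.values() for alias in aliases]
query = " OR ".join(f'"{term}"' for term in all_terms)

print(len(all_terms), "terms in this tiny example; a real case runs into the hundreds")
print(query[:60] + "...")
```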
The way we work
Understanding the way scientists do their work is essential to developing tools that match their unmet data management needs. In addition to searching a diverse collection of external data sources, researchers often have their own proprietary research data collections that must be integrated with other sources to provide the most complete picture. These tools must help the researcher identify the most relevant data for their particular task.
Since humans are very good at visually recognizing patterns, information should be presented in a way that lets users visualize it. Tools that allow different views of the data can help users connect the dots and draw their own conclusions. It’s the difference between trying to read a long list of subway stations in a foreign language and viewing a graphical map of the subway.
The research challenge
Searching the diverse collections of data to discover actionable insights into the biology of a disease is a huge challenge. The growth of data is outpacing our ability to analyze it, so new, more sophisticated tools and approaches are needed to help researchers connect the dots, no matter where that information is located. With the right discovery support, organizations can facilitate researchers’ interpretation of experimental data, leading to greater insight into the mechanisms of disease and accelerating biological research. This will help them invent, validate and commercialize new, clinically effective treatments faster and more efficiently.
Simulations lead to design of near-frictionless material
Argonne National Laboratory scientists used Mira to identify and improve a new mechanism for eliminating friction, which fed into the development of a hybrid material that exhibited superlubricity at the macroscale for the first time. Argonne Leadership Computing Facility (ALCF) researchers helped enable the groundbreaking simulations by overcoming a performance bottleneck that doubled the speed of the team's code.
While reviewing the simulation results of a promising new lubricant material, Argonne researcher Sanket Deshmukh stumbled upon a phenomenon that had never been observed before.
"I remember Sanket calling me and saying 'you have got to come over here and see this. I want to show you something really cool,'" said Subramanian Sankaranarayanan, Argonne computational nanoscientist, who led the simulation work at the Argonne Leadership Computing Facility (ALCF), a DOE Office of Science User Facility.
They were amazed by what the computer simulations revealed. When the lubricant materials—graphene and diamond-like carbon (DLC)—slid against each other, the graphene began rolling up to form hollow cylindrical "scrolls" that helped to practically eliminate friction. These so-called nanoscrolls represented a completely new mechanism for superlubricity, a state in which friction essentially disappears.
"The nanoscrolls combat friction in very much the same way that ball bearings do by creating separation between surfaces," said Deshmukh, who finished his postdoctoral appointment at Argonne in January.
Superlubricity is a highly desirable property. Considering that nearly one-third of the fuel in an automobile's tank goes toward overcoming friction, a material that can achieve superlubricity would greatly benefit industry and consumers alike. Such materials could also extend the lifetime of countless mechanical components that wear down under constant friction.
Experimental origins
Prior to the computational work, Argonne scientists Ali Erdemir, Anirudha Sumant, and Diana Berman were studying the hybrid material in laboratory experiments at Argonne's Tribology Laboratory and the Center for Nanoscale Materials, a DOE Office of Science User Facility. The experimental setup consisted of small patches of graphene (a two-dimensional single-sheet form of pure carbon) sliding against a DLC-coated steel ball.
The graphene-DLC combination was registering a very low friction coefficient (the ratio of the friction force to the load pressing the surfaces together), but the friction levels were fluctuating up and down for no apparent reason. The experimentalists were also puzzled to find that humid environments drove the friction coefficient up to levels nearly 100 times greater than those measured in dry environments.
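For reference, the friction coefficient is simply the ratio of friction force to normal load. The numbers in the sketch below are made up purely to show the arithmetic behind a "100 times greater" jump; they are not measured values from the experiments.

```python
# Friction coefficient = friction force / normal load.
# The forces below are invented to illustrate the arithmetic, not measured data.
def friction_coefficient(friction_force_n: float, normal_load_n: float) -> float:
    return friction_force_n / normal_load_n

mu_dry = friction_coefficient(0.004, 1.0)   # hypothetical near-superlubric value
mu_humid = friction_coefficient(0.4, 1.0)   # hypothetical humid value
print(mu_dry, mu_humid, mu_humid / mu_dry)  # ratio ~100
```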
To shed light on these mysterious behaviors, they turned to Sankaranarayanan and Deshmukh for computational help. Using Mira, the ALCF's 10-petaflops IBM Blue Gene/Q supercomputer, the researchers replicated the experimental conditions with large-scale molecular dynamics simulations aimed at understanding the underlying mechanisms of superlubricity at an atomistic level.
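For a sense of what such a run looks like in practice, here is a minimal sketch that drives a reactive molecular dynamics calculation through LAMMPS's Python wrapper. The data file, force-field file, fix names, and run settings are placeholders chosen for illustration; they are not the team's actual production inputs on Mira.

```python
# Sketch of a reactive MD run driven through LAMMPS's Python wrapper.
# File names, element mapping, and run settings are placeholders, not the
# team's actual setup.
from lammps import lammps

lmp = lammps()
for cmd in [
    "units real",                            # typical unit system for ReaxFF runs
    "atom_style charge",                     # ReaxFF needs per-atom charges
    "read_data data.graphene_dlc",           # hypothetical graphene-on-DLC system
    "pair_style reax/c NULL",                # reactive force field, no control file
    "pair_coeff * * ffield.reax.cho C H O",  # hypothetical parameter file
    "fix qeq all qeq/reax 1 0.0 10.0 1e-6 reax/c",   # charge equilibration
    "fix md all nvt temp 300.0 300.0 100.0",         # thermostat the sliding system
    "timestep 0.25",                         # fs; small steps are typical for reactive MD
    "run 1000",                              # short demonstration run
]:
    lmp.command(cmd)
```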
This led to their discovery of the graphene nanoscrolls, which helped to fill in the blanks. The material's fluctuating friction levels were explained by the fact that the nanoscrolls themselves were not stable. The researchers observed a repeating pattern in which the hollow nanoscrolls would form, and then cave in and collapse under the pressure of the load.
"The friction was dipping to very low values at the moment the scroll formation took place and then it would jump back up to higher values when the graphene patches were in an unscrolled state," Deshmukh said.
The computational scientists had an idea to overcome this issue. They tried incorporating nanodiamond particles into their simulations to see if the hard material could help stabilize the nanoscrolls and make them more permanent.
Sure enough, the simulations proved successful. The graphene patches spontaneously rolled around the nanodiamonds, which held the scrolls in place and resulted in sustained superlubricity. The simulation results fed into a new set of experiments with nanodiamonds that confirmed the same.
"The beauty of this particular discovery is that we were able to see sustained superlubricity at the macroscale for the first time, proving this mechanism can be used at engineering scales for real-world applications," Sankaranarayanan said. "This collaborative effort is a perfect example of how computation can help in the design and discovery of new materials."
Not slippery when wet
Unfortunately, the addition of nanodiamonds did not address the material's aversion to water. The simulations showed that water suppresses the formation of scrolls by increasing the adhesion of graphene to the surface.
While this greatly limits the hybrid material's potential applications, its ability to maintain superlubricity in dry environments is a significant breakthrough in itself.
The research team is in the process of seeking a patent for the hybrid material, which could potentially be used for applications in dry environments, such as computer hard drives, wind turbine gears, and mechanical rotating seals for microelectromechanical and nanoelectromechanical systems.
Adding to the material's appeal is a relatively simple and cost-effective deposition method called drop casting. This technique involves spraying solutions of the materials onto moving mechanical parts. When the solution evaporates, it leaves the graphene and nanodiamonds on one side of a moving part and the diamond-like carbon on the other.
However, the knowledge gained from their study is perhaps even more valuable, said Deshmukh. He expects the nanoscroll mechanism to spur future efforts to develop materials capable of superlubricity for a wide range of mechanical applications.
For their part, the Argonne team will continue its computational studies to look for ways to overcome the barrier presented by water.
"We are exploring different surface functionalizations to see if we can incorporate something hydrophobic that would keep water out," Sankaranarayanan said. "As long as you can repel water, the graphene nanoscrolls could potentially work in humid environments as well."
Simulating millions of atoms
The team's groundbreaking nanoscroll discovery would not have been possible without a supercomputer like Mira. Replicating the experimental setup required simulating up to 1.2 million atoms for dry environments and up to 10 million atoms for humid environments.
The researchers used the LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) code to carry out the computationally demanding reactive molecular dynamics simulations.
With the help of ALCF catalysts, a team of computational scientists who work directly with ALCF users, they were able to overcome a performance bottleneck with the code's ReaxFF module, an add-on package that was needed to model the chemical reactions occurring in the system.
The ALCF catalysts, in collaboration with researchers from IBM, Lawrence Berkeley National Laboratory and Sandia National Laboratories, optimized LAMMPS and its implementation of ReaxFF by adding OpenMP threading, replacing MPI point-to-point communication with MPI collectives in key algorithms, and leveraging MPI I/O. Altogether, these enhancements allowed the code to perform twice as fast as before.
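The gain from swapping point-to-point messages for collectives is that one optimized call replaces many pairwise exchanges. The short mpi4py sketch below (an illustration of the general pattern, not LAMMPS source code) shows both versions of a global sum; the collective form is what scales well on machines like Mira.

```python
# Illustration of replacing point-to-point messaging with an MPI collective
# (general pattern only, not LAMMPS code). Run with e.g.: mpiexec -n 4 python sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
local_value = float(rank + 1)   # stand-in for a per-rank partial result

# Point-to-point version: every rank sends to rank 0, which adds the pieces up.
if rank == 0:
    total_p2p = local_value
    for source in range(1, size):
        total_p2p += comm.recv(source=source)
else:
    comm.send(local_value, dest=0)

# Collective version: a single reduction call does the same job, and the MPI
# library can use a far more efficient communication pattern at scale.
total_collective = comm.reduce(local_value, op=MPI.SUM, root=0)

if rank == 0:
    print(total_p2p, total_collective)  # identical results
```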
"With the code optimizations in place, we were able to model the phenomena in real experimental systems more accurately," Deshmukh said. "The simulations on Mira showed us some amazing things that could not be seen in laboratory tests."
And with the recent announcement of Aurora, the ALCF's next-generation supercomputer, Sankaranarayanan is excited about where this line of research could go in the future.
"Given the advent of computing resources like Aurora and the wide gamut of the available two-dimensional materials and nanoparticle types, we envision the creation of a lubricant genome at some point in the future," he said. "Having a materials database like this would allow us to pick and choose lubricant materials for specific operational conditions."
Finalists Announced for the 2015 R&D 100 Awards
Rockaway, NJ, July 22, 2015 – R&D Magazine today announced the Finalists for the 53rd annual R&D 100 Awards, which honor the 100 most innovative technologies and services of the past year. This year’s Winners will be presented with their honors at the annual black-tie awards dinner on November 13, 2015 at Caesars Palace, Las Vegas, Nevada.
The Finalists were selected by an independent panel of more than 70 judges. This year’s Finalists represent many of industry’s leading organizations and national laboratories, as well as many newcomers to the R&D 100 Awards, often referred to as the “Oscars of Invention.”
For the first time in its history, the winners of the R&D 100 Awards will be honored for exemplary accomplishments from across five categories: Analytical Test, IT/Electrical, Mechanical Devices/Materials, Process/Prototyping, and Software/Services. The 2015 Awards will also honor excellence in four new special recognition categories – Market Disruptor (Services), Market Disruptor (Products), Corporate Social Responsibility, and Green Tech.
"This was a particularly strong year for research and development, led by many outstanding technologies that broadened the scope of innovation,” said R&D Magazine Editor Lindsay Hock. “We are honored to recognize these products and the project teams behind the design, development, testing, and production of these remarkable innovations and their impact in the field. We look forward to celebrating the winners in November.”
A detailed list of the 2015 Finalists can be found here.
In addition to the awards gala, this year’s event has been expanded to include the two-day R&D 100 Technology Conference featuring an impressive line-up of 28 educational sessions presented by high-profile speakers. The conference will also feature two keynote addresses from noted innovator Dean Kamen, and Thom Mason, PhD, Director of Oak Ridge National Laboratory. Panel discussions devoted to the future of R&D and the annual R&D Global Funding Forecast round out the program. The educational sessions have been divided into four tracks focusing on key areas of R&D: R&D Strategies & Efficiencies, Emerging Technologies & Materials, Innovations in Robotics & Automation, and Instrumentation & Monitoring.
For more information on the R&D 100 Awards Finalists contact Lindsay Hock, at 973-920-7036, lindsay.hock@advantagemedia.com.
Click here to register to attend the 2015 R&D 100 Awards dinner on November 13 at Caesars Palace in Las Vegas.