Learning to grow: Control of material self-assembly using evolutionary reinforcement learning.


Journal

Physical Review E
ISSN: 2470-0053
Abbreviated title: Phys Rev E
Country: United States
ID NLM: 101676019

Publication information

Publication date:
May 2020
History:
received: 2019-12-29
accepted: 2020-03-29
entrez: 2020-06-25
pubmed: 2020-06-25
medline: 2020-06-25
Status: ppublish

Abstract

We show that neural networks trained by evolutionary reinforcement learning can enact efficient molecular self-assembly protocols. Presented with molecular simulation trajectories, networks learn to change temperature and chemical potential in order to promote the assembly of desired structures or to choose between competing polymorphs. In the first case, networks qualitatively reproduce the results of previously known protocols, but faster and with higher fidelity; in the second case they identify previously unknown strategies, from which we can extract physical insight. Networks that take as input the elapsed time of the simulation or microscopic information from the system are both effective, the latter more so. The evolutionary scheme we have used is simple to implement and can be applied to a broad range of examples of experimental self-assembly, whether or not one can monitor the experiment as it proceeds. Our results have been achieved with no human input beyond the specification of which order parameter to promote, pointing the way to the design of synthesis protocols by artificial intelligence.
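The evolutionary scheme described in the abstract can be illustrated with a minimal sketch: a small neural network maps elapsed time to a control parameter (here, temperature), its weights are mutated, and a mutant is kept only if it improves a fitness score. This is not the authors' code; the network architecture, mutation scale, and the toy fitness function (distance to a target annealing schedule, standing in for a molecular-simulation order parameter) are all illustrative assumptions.

```python
import math
import random

def make_net(n_hidden=4):
    # Weights of a 1-input / n_hidden / 1-output tanh network: T(t).
    return {
        "w1": [random.gauss(0, 1) for _ in range(n_hidden)],
        "b1": [random.gauss(0, 1) for _ in range(n_hidden)],
        "w2": [random.gauss(0, 1) for _ in range(n_hidden)],
        "b2": random.gauss(0, 1),
    }

def net_output(net, t):
    # Temperature protocol T(t) for elapsed time t in [0, 1].
    hidden = [math.tanh(w * t + b) for w, b in zip(net["w1"], net["b1"])]
    return sum(w * h for w, h in zip(net["w2"], hidden)) + net["b2"]

def mutate(net, sigma=0.1):
    # Gaussian perturbation of every weight (the evolutionary move).
    return {
        "w1": [w + random.gauss(0, sigma) for w in net["w1"]],
        "b1": [b + random.gauss(0, sigma) for b in net["b1"]],
        "w2": [w + random.gauss(0, sigma) for w in net["w2"]],
        "b2": net["b2"] + random.gauss(0, sigma),
    }

def fitness(net):
    # Toy stand-in for an order parameter: reward protocols close to the
    # target schedule T(t) = 1 - t. A real application would instead run a
    # molecular simulation under the protocol and score the structure.
    times = [i / 20 for i in range(21)]
    return -sum((net_output(net, t) - (1 - t)) ** 2 for t in times)

def evolve(generations=200, seed=0):
    # Greedy (1+1) evolutionary strategy: keep a mutant only if it improves.
    random.seed(seed)
    best = make_net()
    best_fit = fitness(best)
    for _ in range(generations):
        child = mutate(best)
        f = fitness(child)
        if f > best_fit:
            best, best_fit = child, f
    return best, best_fit
```

Because selection is strictly greedy, the fitness of the surviving network is non-decreasing over generations; richer variants (populations, time- or state-dependent inputs) follow the same pattern.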

Identifiers

pubmed: 32575260
doi: 10.1103/PhysRevE.101.052604

Publication types

Journal Article

Languages

eng

Citation subsets

IM

Pagination

052604

Authors

Stephen Whitelam (S)

Molecular Foundry, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, California 94720, USA.

Isaac Tamblyn (I)

National Research Council of Canada, Ottawa, Ontario, Canada and Vector Institute for Artificial Intelligence, Toronto, Ontario, Canada.

MeSH classifications