December 3, 2022, Room 275 - 277, New Orleans Ernest N. Morial Convention Center


About

The Machine Learning and the Physical Sciences workshop aims to provide an informal, inclusive and leading-edge venue for research and discussions at the interface of machine learning (ML) and the physical sciences. This interface spans (1) applications of ML in the physical sciences (ML for physics), (2) developments in ML motivated by physical insights (physics for ML), and, most recently, (3) the convergence of ML and the physical sciences (physics with ML), which raises the questions of what scientific understanding means in the age of complex, AI-powered science and what roles machine and human scientists will play in developing scientific understanding in the future.

Recent years have seen a tremendous increase in cases where ML models are used for scientific processing and discovery, and, similarly, in instances where tools and insights from the physical sciences are brought to the study of ML models. The harmonious co-development of the two fields is not a surprise: ML methods have had great success in learning complex representations of data that enable novel modeling and data-processing approaches in many scientific disciplines. Indeed, in some sense, ML and physics share the goal of characterizing the true probability distributions of nature. As ML and physical science research becomes more intertwined, questions naturally arise about what scientific understanding is when science is performed with the assistance of complex and highly parameterized models. Taken to the extreme, if an ML model is developed for a scientific task and demonstrates robustness and generalizability but lacks interpretability in terms of the existing scientific knowledge base, is this still a useful scientific result?

The breadth of work at the intersection of ML and the physical sciences is answering many important questions for both fields while opening up new ones that can only be addressed by a joint effort of both communities. By bringing together ML researchers and physical scientists who apply and study ML, we expect to strengthen the much-needed interdisciplinary dialogue, introduce exciting new open problems to the broader community, and stimulate the production of new approaches to solving challenging open problems in the sciences. Invited talks from leading individuals in both communities will cover state-of-the-art techniques and set the stage for the workshop, which will also include contributed talks selected from submissions. The workshop will also feature an expert panel discussion on "Philosophy of Science in the AI Era", focusing on topics such as scientific understanding in the age of extremely complex ML models, automating science via machines, and ML models as a source of inspiration for scientific discoveries. Finally, there will be community-building activities, such as a voluntary mentorship opportunity and round-table discussions on curated topics, to foster connection building and facilitate knowledge sharing across disciplines, backgrounds, and career stages.

NeurIPS 2022

The Machine Learning and the Physical Sciences 2022 workshop will be held on December 3, 2022 at the New Orleans Convention Center in New Orleans, USA as a part of the 36th annual conference on Neural Information Processing Systems (NeurIPS). The workshop is planned to take place in a hybrid format inclusive of virtual participation.

Schedule

See also the official NeurIPS workshop schedule.

07:50 – 08:00 Opening remarks

08:00 – 08:30 Invited talk: "Deep Learning and Ab-Initio Quantum Chemistry and Materials"
David Pfau (DeepMind)
08:30 – 08:45 Contributed talk: "Characterizing information loss in a chaotic double pendulum with the Information Bottleneck"
Kieran Murphy (University of Pennsylvania)
08:45 – 09:15 Invited talk: "Prospects for understanding the physics of the Universe"
Hiranya Peiris (UCL / Stockholm University)
09:15 – 09:30 Contributed talk: "Physical Data Models in Machine Learning Imaging Pipelines"
Marco Aversa (University of Glasgow)
09:30 – 10:00 Invited talk: "My Considerations on Machine Learning"
Giorgio Parisi (Sapienza University of Rome)
10:00 – 11:00 Poster session 1 and break

11:00 – 12:00 Panel: "Philosophy of Science in the AI Era"
Kathleen Creel (Northeastern University), Mario Krenn (MPI for the Science of Light), Emily Sullivan (Eindhoven U. of Technology)
12:00 – 13:15 Lunch

13:15 – 13:45 Invited talk: "Scaling up material discovery via deep learning"
E. Doğuş Çubuk (Google Brain)
13:45 – 14:15 Invited talk: "Collider Physics Innovations Powered by Machine Learning"
Vinicius Mikuni (LBNL / NERSC)
14:15 – 14:30 Contributed talk: "Simplifying Polylogarithms with Machine Learning"
Aurélien Dersy (Harvard University / IAIFI)
14:30 – 15:00 Invited talk: "Magnetic control of tokamak plasmas through Deep Reinforcement Learning"
Federico Felici (EPFL)
15:00 – 15:15 Contributed talk: "Posterior samples of source galaxies in strong gravitational lenses with score-based priors"
Alexandre Adam (Université de Montréal)
15:15 – 15:30 Break

15:30 – 16:00 Invited talk: "Supporting Food Security in Africa using Machine Learning and Earth Observations"
Catherine Nakalembe (University of Maryland), Hannah Kerner (Arizona State University)
16:00 – 16:05 Closing remarks

16:05 – 17:00 Poster session 2

Papers

1 A Curriculum-Training-Based Strategy for Distributing Collocation Points during Physics-Informed Neural Network Training [paper] [poster] [event]
Münzer, Marcus*; Bard, Christopher
2 A Neural Network Subgrid Model of the Early Stages of Planet Formation [paper] [poster] [event]
Pfeil, Thomas*; Cranmer, Miles; Ho, Shirley; Armitage, Philip; Birnstiel, Tilman; Klahr, Hubert
3 A New Task: Deriving Semantic Class Targets for the Physical Sciences [paper] [poster] [event]
Bowles, Micah R*
4 A Novel Automatic Mixed Precision Approach For Physics Informed Training [paper] [poster] [event]
Xue, Jinze; Subramaniam, Akshay*; Hoemmen, Mark
5 A Self-Supervised Approach to Reconstruction in Sparse X-Ray Computed Tomography [paper] [poster] [event]
Mendoza, Rey; Nguyen, Minh; Weng Zhu, Judith; Perciano, Talita; Dumont, Vincent; Mueller, Juliane; Ganapati, Vidya*
6 A Trust Crisis In Simulation-Based Inference? Your Posterior Approximations Can Be Unfaithful [paper] [poster] [video] [event]
Hermans, Joeri; Delaunoy, Arnaud*; Rozet, François; Wehenkel, Antoine; Begy, Volodimir; Louppe, Gilles
7 A fast and flexible machine learning approach to data quality monitoring [paper] [poster] [event]
Letizia, Marco*; Grosso, Gaia; Wulzer, Andrea; Zanetti, Marco; Pazzini, Jacopo; Rando, Marco; Lai, Nicolò
8 A hybrid Reduced Basis and Machine-Learning algorithm for building Surrogate Models: a first application to electromagnetism [paper] [event]
Ribes, Alejandro*; Persicot, Ruben; Meyer, Lucas T; Ducreux, Jean-Pierre
9 A physics-informed search for metric solutions to Ricci flow, their embeddings, and visualisation [paper] [poster] [event]
Jain, Aarjav*; Mishra, Challenger; Lió, Pietro
10 A probabilistic deep learning model to distinguish cusps and cores in dwarf galaxies [paper] [poster] [event]
Expósito, Julen*; Huertas-Company, Marc; Di Cintio, Arianna; Brook, Chris; Macciò, Andrea; Grant, Rob; Arjona, Elena
11 A robust estimator of mutual information for deep learning interpretability [paper] [poster] [video] [event]
Piras, Davide*; Peiris, Hiranya; Pontzen, Andrew; Lucie-Smith, Luisa; Nord, Brian; Guo, Ningyuan (Lillian)
12 Ad-hoc Pulse Shape Simulation using Cyclic Positional U-Net [paper] [poster] [event]
Li, Aobo*; Gruszko, Julieta; Bos, Brady; Caldwell, Thomas; León, Esteban; Wilkerson, John
13 Adaptive Selection of Atomic Fingerprints for High-Dimensional Neural Network Potentials [paper] [poster] [event]
Sandberg, Johannes E*; Devijver, Emilie; Jakse, Noel; Voigtmann, Thomas
14 Addressing out-of-distribution data for flow-based gravitational wave inference [paper] [poster] [event]
Dax, Maximilian*; Green, Stephen R; Wildberger, Jonas Bernhard; Gair, Jonathan; Puerrer, Michael; Macke, Jakob; Buonanno, Alessandra; Schölkopf, Bernhard
15 Adversarial Noise Injection for Learned Turbulence Simulations [paper] [poster] [event]
Su, Jingtong*; Kempe, Julia; Fielding, Drummond; Tsilivis, Nikolaos; Cranmer, Miles; Ho, Shirley
16 Amortized Bayesian Inference for Supernovae in the Era of the Vera Rubin Observatory Using Normalizing Flows [paper] [poster] [event]
Villar, Victoria A*
17 Amortized Bayesian Inference of GISAXS Data with Normalizing Flows [paper] [poster] [event]
Zhdanov, Maksim*; Randolph, Lisa; Kluge, Thomas; Nakatsutsumi, Motoaki; Gutt, Christian; Ganeva, Marina; Hoffmann, Nico
18 Anomaly Detection with Multiple Reference Datasets in High Energy Physics [paper] [poster] [event]
Chen, Mayee*; Nachman, Benjamin; Sala, Frederic
19 Applications of Differentiable Physics Simulations in Particle Accelerator Modeling [paper] [poster] [event]
Roussel, Ryan*; Edelen, Auralee
20 Applying Deep Reinforcement Learning to the HP Model for Protein Structure Prediction [paper] [poster] [event]
Yang, Kaiyuan*; Huang, Houjing; Vandans, Olafs; Murali, Adithyavairavan; Tian, Fujia; Yap, Roland H.C.; Dai, Liang
21 Astronomical Image Coaddition with Bundle-Adjusting Radiance Fields [paper] [poster] [event]
Hutton, Harlan*; Palegar, Harshitha; Ho, Shirley; Cranmer, Miles; Melchior, Peter M; Eubank, Jenna
22 Atmospheric retrievals of exoplanets using learned parameterizations of pressure-temperature profiles [paper] [poster] [event]
Gebhard, Timothy D*; Angerhausen, Daniel; Konrad, Björn; Alei, Eleonora; Quanz, Sascha; Schölkopf, Bernhard
23 CAPE: Channel-Attention-Based PDE Parameter Embeddings for SciML [paper] [poster] [event]
Takamoto, Makoto*; Alesiani, Francesco; Niepert, Mathias
24 CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds [paper] [poster] [video] [event]
Cresswell, Jesse*; Ross, Brendan L; Loaiza-Ganem, Gabriel; Reyes-Gonzalez, Humberto; Letizia, Marco; Caterini, Anthony
25 Can denoising diffusion probabilistic models generate realistic astrophysical fields? [paper] [poster] [event]
Mudur, Nayantara*; Finkbeiner, Douglas
26 Certified data-driven physics-informed greedy auto-encoder simulator [paper] [poster] [video] [event]
He, Xiaolong*; Choi, Youngsoo; Fries, William; Belof, Jonathan; Chen, Jiun-Shyan
27 Characterizing information loss in a chaotic double pendulum with the Information Bottleneck [paper] [poster] [event]
Murphy, Kieran A*; Bassett, Danielle S
28 ClimFormer - a Spherical Transformer model for long-term climate projections [paper] [poster] [event]
Ruhling Cachay, Salva; Mitra, Peetak P*; Kim, Sookyung; Hazarika, Subhashis; Hirasawa, Haruki; Hingmire, Dipti S; Singh, Hansi; Ramea, Kalai
29 Closing the resolution gap in Lyman alpha simulations with deep learning [paper] [poster] [event]
Jacobus, Cooper H*; Harrington, Peter; Lukić, Zarija
30 Clustering Behaviour of Physics-Informed Neural Networks: Inverse Modeling of An Idealized Ice Shelf [paper] [poster] [event]
Iwasaki, Yunona*; Lai, Ching-Yao
31 Combinational-convolution for flow-based sampling algorithm [paper] [poster] [event]
Tomiya, Akio*
32 Computing the Bayes-optimal classifier and exact maximum likelihood estimator with a semi-realistic generative model for jet physics [paper] [poster] [event]
Cranmer, Kyle; Drnevich, Matthew*; Greenspan, Lauren; Macaluso, Sebastian; Pappadopulo, Duccio
33 Continual learning autoencoder training for a particle-in-cell simulation via streaming [paper] [poster] [event]
Stiller, Patrick*; Makdani, Varun; Pöschel, Franz; Pausch, Richard; Debus, Alexander; Bussmann, Michael; Hoffmann, Nico
34 Contrasting random and learned features in deep Bayesian linear regression [paper] [poster] [event]
Zavatone-Veth, Jacob A*; Tong, William; Pehlevan, Cengiz
35 Control and Calibration of GlueX Central Drift Chamber Using Gaussian Process Regression [paper] [poster] [event]
McSpadden, Diana*; Jeske, Torri; Jarvis, Naomi; Lawrence, David; Britton, Thomas; Kalra, Nikhil
36 Cosmology from Galaxy Redshift Surveys with PointNet [paper] [poster] [video] [event]
Anagnostidis, Sotirios-Konstantinos*; Thomsen, Arne; Refregier, Alexandre; Kacprzak, Tomasz; Biggio, Luca; Hofmann, Thomas; Troester, Tilman
37 D-optimal neural exploration of nonlinear physical systems [paper] [poster] [event]
Blanke, Matthieu*; Lelarge, Marc
38 DIGS: Deep Inference of Galaxy Spectra with Neural Posterior Estimation [paper] [poster] [event]
Khullar, Gourav*; Nord, Brian; Ciprijanovic, Aleksandra; Poh, Jason; Xu, Fei; Samudre, Ashwin
39 DS-GPS : A Deep Statistical Graph Poisson Solver (for faster CFD simulations) [paper] [poster] [event]
Nastorg, Matthieu*
40 Data-driven discovery of non-Newtonian astronomy via learning non-Euclidean Hamiltonian [paper] [poster] [event]
So, Oswin*; Li, Gongjie; Theodorou, Evangelos; Tao, Molei
41 De-noising non-Gaussian fields in cosmology with normalizing flows [paper] [poster] [event]
Rouhiainen, Adam*; Münchmeyer, Moritz
42 Decay-aware neural network for event classification in collider physics [paper] [poster] [event]
Kishimoto, Tomoe*; Morinaga, Masahiro; Saito, Masahiko; Tanaka, Junichi
43 Deconvolving Detector Effects for Distribution Moments [paper] [poster] [event]
Desai, Krish*; Nachman, Benjamin; Thaler, Jesse
44 Decorrelation with Conditional Normalizing Flows [paper] [poster] [event]
Klein, Samuel*; Golling, Tobias
45 Deep Learning Modeling of Subgrid Physics in Cosmological N-body Simulations [paper] [poster] [event]
Chatziloizos, George-Mark; Lanusse, François; Cazenave, Tristan*
46 Deep Learning-Based Spatiotemporal Multi-Event Reconstruction for Delay-Line Detectors [paper] [poster] [event]
Knipfer, Marco*; Gleyzer, Sergei; Meier, Stefan; Heimerl, Jonas; Hommelhoff, Peter
47 Deep-pretrained-FWI: combining supervised learning with physics-informed neural network [paper] [poster] [video] [event]
Muller, Ana Paula Oliveira*; Bom, Clecio Roque; Costa, Jessé Carvalho; Faria, Elisângela Lopes; de Albuquerque, Marcelo Portes; de Albuquerque, Marcio Portes
48 Deformations of Boltzmann Distributions [paper] [poster] [event]
Mate, Balint A*; Fleuret, François
49 Detecting structured signals in radio telescope data using RKHS [paper] [poster] [event]
Tsuchida, Russell*; Yong, Suk Yee
50 Detection is truncation: studying source populations with truncated marginal neural ratio estimation [paper] [poster] [event]
Anau Montel, Noemi*; Weniger, Christoph
51 DiffDock: Diffusion Steps, Twists, and Turns for Molecular Docking [paper] [poster] [event]
Corso, Gabriele*; Stärk, Hannes; Jing, Bowen; Barzilay, Regina; Jaakkola, Tommi
52 Differentiable Physics-based Greenhouse Simulation [paper] [poster] [video] [event]
Nguyen, Nhat M.*; Tran, Hieu; Duong, Minh; Bui, Hanh; Tran, Kenneth
53 Differentiable composition for model discovery [paper] [poster] [event]
Rochman Sharabi, Omer*; Louppe, Gilles
54 Discovering Long-period Exoplanets using Deep Learning with Citizen Science Labels [paper] [poster] [event]
Malik, Shreshth A*; Eisner, Nora; Lintott, Chris; Gal, Yarin
55 Diversity Balancing Generative Adversarial Networks for fast simulation of the Zero Degree Calorimeter in the ALICE experiment at CERN [paper] [poster] [video] [event]
Dubiński, Jan Michał*; Deja, Kamil; Wenzel, Sandro; Rokita, Przemysław; Trzcinski, Tomasz
56 Do Better QM9 Models Extrapolate as Better Quantum Chemical Property Predictors? [paper] [poster] [event]
Zhang, Yucheng*; Charoenphakdee, Nontawat; Takamoto, So
57 Do graph neural networks learn jet substructure? [paper] [poster] [event]
Mokhtar, Farouk*; Kansal, Raghav; Duarte, Javier
58 Domain Adaptation for Simulation-Based Dark Matter Searches with Strong Gravitational Lensing [paper] [poster] [event]
Kumbam, Pranath Reddy; Gleyzer, Sergei; Toomey, Michael W*; Tidball, Marcos
59 Dynamical Mean Field Theory of Kernel Evolution in Wide Neural Networks [paper] [poster] [event]
Bordelon, Blake A; Pehlevan, Cengiz*
60 Efficiently Moving Instead of Reweighting Collider Events with Machine Learning [paper] [poster] [event]
Mastandrea, Radha*; Nachman, Benjamin
61 Elements of effective machine learning datasets in astronomy [paper] [poster] [event]
Boscoe, Bernadette*; Do, Tuan
62 Employing CycleGANs to Generate Realistic STEM Images for Machine Learning [paper] [poster] [event]
Khan, Abid A*; Lee, Chia-Hao; Huang, Pinshane; Clark, Bryan
63 Emulating Fast Processes in Climate Models [paper] [poster] [event]
Brenowitz, Noah D*; Perkins, W. Andre; Nugent, Jacqueline M.; Watt-Meyer, Oliver; Clark, Spencer K.; Kwa, Anna; Henn, Brian; McGibbon, Jeremy; Bretherton, Christopher S.
64 Emulating cosmological growth functions with B-Splines [paper] [poster] [event]
Kwan, Ngai Pok*; Modi, Chirag; Li, Yin; Ho, Shirley
65 Emulating cosmological multifields with generative adversarial networks [paper] [poster] [event]
Andrianomena, Sambatra HS*; Hassan, Sultan; Villaescusa-Navarro, Francisco
66 Energy based models for tomography of quantum spin-lattice systems [paper] [poster] [event]
J., Abhijith*; Vuffray, Marc D; Lokhov, Andrey
67 FO-PINNs: A First-Order formulation for Physics-Informed Neural Networks [paper] [poster] [event]
Gladstone, Rini Jasmine*; Nabian, Mohammad Amin; Meidani, Hadi
68 Fast kinematics modeling for conjunction with lens image modeling [paper] [poster] [event]
Gomer, Matthew R*; Biggio, Luca; Ertl, Sebastian; Wang, Han; Galan, Aymeric; Van de Vyvere, Lyne; Sluse, Dominique; Vernardos, Georgios; Suyu, Sherry
69 Finding NEEMo: Geometric Fitting using Neural Estimation of the Energy Mover’s Distance [paper] [poster] [event]
Kitouni, Ouail*; Williams, Mike; Nolte, Niklas
70 Finding active galactic nuclei through Fink [paper] [poster] [event]
Russeil, Etienne Sédick*; Ishida, Emille; Peloton, Julien; Moller, Anais; Le Montagner, Roman
71 First principles physics-informed neural network for quantum wavefunctions and eigenvalue surfaces [paper] [poster] [event]
Mattheakis, Marios*; Schleder, Gabriel R; Larson, Daniel; Kaxiras, Efthimios
72 Flexible learning of quantum states with generative query neural networks [event]
Zhu, Yan; Wu, Ya-Dong*; Bai, Ge; Wang, Dong-Sheng; Wang, Yuexuan; Chiribella, Giulio
73 From Particles to Fluids: Dimensionality Reduction for Non-Maxwellian Plasma Velocity Distributions Validated in the Fluid Context [paper] [poster] [event]
da Silva, Daniel E*
74 GAN-Flow: A dimension-reduced variational framework for physics-based inverse problems [paper] [poster] [event]
Dasgupta, Agnimitra*; Patel, Dhruv; Ray, Deep; Johnson, Erik; Oberai, Assad
75 GAUCHE: A Library for Gaussian Processes in Chemistry [paper] [poster] [video] [event]
Griffiths, Ryan-Rhys*; Klarner, Leo; Moss, Henry B; Ravuri, Aditya; Truong, Sang; Rankovic, Bojana; Du, Yuanqi; Jamasb, Arian R.; Schwartz, Julius; Tripp, Austin J; Kell, Gregory; Bourached, Anthony; Chan, Alex J; Moss, Jacob; Guo, Chengzhi; Lee, Alpha; Schwaller, Philippe; Tang, Jian
76 Galaxy Morphological Classification with Deformable Attention Transformer [paper] [poster] [event]
Kang, Seokun; Shin, Min-su; Kim, Taehwan*
77 Generating Calorimeter Showers as Point Clouds [paper] [poster] [event]
Schnake, Simon Patrik*; Krücker, Dirk; Borras, Kerstin
78 Generating astronomical spectra from photometry with conditional diffusion models [paper] [poster] [event]
Doorenbos, Lars*; Cavuoti, Stefano; Longo, Giuseppe; Brescia, Massimo; Sznitman, Raphael; Márquez Neila, Pablo
79 Geometric NeuralPDE (GNPnet) Models for Learning Dynamics [paper] [poster] [event]
Fasina, Oluwadamilola*; Krishnaswamy, Smita; Krishnapriyan, Aditi
80 Geometric path augmentation for inference of sparsely observed stochastic nonlinear systems [paper] [poster] [event]
Maoutsa, Dimitra*
81 Geometry-aware Autoregressive Models for Calorimeter Shower Simulations [paper] [poster] [event]
Liu, Junze*; Ghosh, Aishik; Smith, Dylan; Baldi, Pierre; Whiteson, Daniel
82 Graph Structure from Point Clouds: Geometric Attention is All You Need [paper] [poster] [event]
Murnane, Daniel*
83 Graphical Models are All You Need: Per-interaction reconstruction uncertainties in a dark matter detection experiment [paper] [poster] [event]
Peters, Christina*; Higuera, Aaron; Liang, Shixiao; Bajwa, Waheed; Tunnell, Christopher
84 HGPflow: Particle reconstruction as hyperedge prediction [paper] [poster] [event]
Dreyer, Etienne*; Kakati, Nilotpal; Armando Di Bello, Francesco
85 HIGlow: Conditional Normalizing Flows for High-Fidelity HI Map Modeling [paper] [poster] [event]
Friedman, Roy*; Hassan, Sultan SH
86 How good is the Standard Model? Machine learning multivariate Goodness of Fit tests [paper] [poster] [event]
Grosso, Gaia*; Letizia, Marco; Wulzer, Andrea; Pierini, Maurizio
87 HubbardNet: Efficient Predictions of the Bose-Hubbard Model Spectrum with Deep Neural Networks [paper] [poster] [event]
Zhu, Ziyan*; Mattheakis, Marios; Pan, Weiwei; Kaxiras, Efthimios
88 Hybrid integration of the gravitational N-body problem with Artificial Neural Networks [paper] [poster] [event]
Saz Ulibarrena, Veronica*; Portegies Zwart, Simon F; Sellentin, Elena; Koren, Barry; Horn, Philipp; Cai, Maxwell
89 HyperFNO: Improving the Generalization Behavior of Fourier Neural Operators [paper] [poster] [event]
Alesiani, Francesco*; Takamoto, Makoto; Niepert, Mathias
90 Identifying AGN host galaxies with convolutional neural networks [paper] [poster] [event]
Guo, Ziting*; Wu, John; Sharon, Chelsea
91 Identifying Hamiltonian Manifold in Neural Networks [paper] [poster] [event]
Song, Yeongwoo; Jeong, Hawoong*
92 Improved Training of Physics-informed Neural Networks using Energy-Based priors: A Study on Electrical Impedance Tomography [paper] [poster] [event]
Pokkunuru, Akarsh*; Rooshenas, Pedram; Strauss, Thilo; Abhishek, Anuj; Khan, Taufiquar R
93 Improving Generalization with Physical Equations [paper] [poster] [event]
Wehenkel, Antoine*; Behrmann, Jens; Hsu, Hsiang; Sapiro, Guillermo; Louppe, Gilles; Jacobsen, Joern-Henrik
94 Inferring molecular complexity from mass spectrometry data using machine learning [paper] [poster] [event]
Gebhard, Timothy D*; Bell, Aaron; Gong, Jian; Hastings, Jaden J. A.; Fricke, George M; Cabrol, Nathalie; Sandford, Scott; Phillips, Michael; Warren-Rhodes, Kimberley; Baydin, Atilim Gunes
95 Insight into cloud processes from unsupervised classification with a rotation-invariant autoencoder [paper] [poster] [event]
Kurihana, Takuya*; Franke, James A; Foster, Ian; Wang, Ziwei; Moyer, Elisabeth
96 Interpretable Encoding of Galaxy Spectra [paper] [poster] [event]
Liang, Yan*; Melchior, Peter M; Lu, Sicong
97 Intra-Event Aware Imitation Game for Fast Detector Simulation [paper] [poster] [event]
Hashemi, Hosein*; Hartmann, Nikolai; Sharifzadeh, Sahand; Kahn, James; Kuhr, Thomas
98 Learning Electron Bunch Distribution along a FEL Beamline by Normalising Flows [paper] [poster] [event]
Willmann, Anna*; Couperus Cabadağ, Jurjen Pieter; Chang, Yen-Yu; Pausch, Richard; Ghaith, Amin; Debus, Alexander; Irman, Arie; Bussmann, Michael; Schramm, Ulrich; Hoffmann, Nico
99 Learning Feynman Diagrams using Graph Neural Networks [paper] [poster] [event]
Norcliffe, Alexander LI*; Mitchell, Harrison; Lió, Pietro
100 Learning Integrable Dynamics with Action-Angle Networks [paper] [poster] [event]
Daigavane, Ameya*; Kosmala, Arthur; Cranmer, Miles; Smidt, Tess; Ho, Shirley
101 Learning Similarity Metrics for Volumetric Simulations with Multiscale CNNs [paper] [poster] [event]
Kohl, Georg*; Chen, Liwei; Thuerey, Nils
102 Learning Uncertainties the Frequentist Way: Calibration and Correlation in High Energy Physics [paper] [poster] [event]
Gambhir, Rikab*; Thaler, Jesse; Nachman, Benjamin
103 Learning dynamical systems: an example from open quantum system dynamics. [paper] [event]
Novelli, Pietro*
104 Learning latent variable evolution for the functional renormalization group [paper] [poster] [event]
Medvidović, Matija*; Toschi, Alessandro; Sangiovanni, Giorgio; Franchini, Cesare; Millis, Andy; Sengupta, Anirvan; Di Sante, Domenico
105 Learning the nonlinear manifold of extreme aerodynamics [paper] [poster] [event]
Fukami, Kai*; Taira, Kunihiko
106 Learning-based solutions to nonlinear hyperbolic PDEs: Empirical insights on generalization errors [paper] [poster] [event]
Thonnam Thodi, Bilal*; Ambadipudi, Sai Venkata Ramana; Jabari, Saif Eddin
107 Leveraging the Stochastic Predictions of Bayesian Neural Networks for Fluid Simulations [paper] [poster] [event]
Mueller, Maximilian*; Greif, Robin; Jenko, Frank; Thuerey, Nils
108 Likelihood-Free Frequentist Inference for Calorimetric Muon Energy Measurement in High-Energy Physics [paper] [poster] [event]
Masserano, Luca*; Lee, Ann; Izbicki, Rafael; Kuusela, Mikael; Dorigo, Tommaso
109 Lyapunov Regularized Forecaster [paper] [poster] [event]
Zheng, Rong*; Yu, Rose
110 ML4LM: Machine Learning for Safely Landing on Mars [paper] [poster] [event]
Wu, David D*; Chung, Wai Tong; Ihme, Matthias
111 Machine Learning for Chemical Reactions: A Dance of Datasets and Models [paper] [poster] [event]
Schreiner, Mathias*; Bhowmik, Arghya; Vegge, Tejs; Busk, Jonas; Jørgensen, Peter B; Winther, Ole
112 Machine learning for complete intersection Calabi-Yau manifolds [paper] [poster] [event]
Erbin, Harold*; Tamaazousti, Mohamed; Finotello, Riccardo
113 Machine-learned climate model corrections from a global storm-resolving model [paper] [poster] [event]
Kwa, Anna*
114 Modeling halo and central galaxy orientations on the SO(3) manifold with score-based generative models [paper] [poster] [event]
Jagvaral, Yesukhei*; Lanusse, Francois; Mandelbaum, Rachel
115 Molecular Fingerprints for Robust and Efficient ML-Driven Molecular Generation [paper] [poster] [event]
Tazhigulov, Ruslan N.*; Schiller, Joshua; Oppenheim, Jacob; Winston, Max
116 Monte Carlo Techniques for Addressing Large Errors and Missing Data in Simulation-based Inference [paper] [poster] [event]
Wang, Bingjie*; Leja, Joel; Villar, Victoria A; Speagle, Joshua
117 Multi-Fidelity Transfer Learning for accurate database PDE approximation [paper] [poster] [event]
Liu, Wenzhuo*; Yagoubi, Mouadh; Schoenauer, Marc; Danan, David
118 Multi-scale Digital Twin: Developing a fast and physics-infused surrogate model for groundwater contamination with uncertain climate models [paper] [poster] [event]
Wang, Lijing*; Kurihana, Takuya; Meray, Aurelien; Mastilovic, Ilijana; Praveen, Satyarth; Xu, Zexuan; Memarzadeh, Milad; Lavin, Alexander; Wainwright, Haruko
119 NLP Inspired Training Mechanics For Modeling Transient Dynamics [paper] [poster] [video] [event]
Ghule, Lalit J*; Ranade, Rishikesh; Pathak, Jay
120 Neural Fields for Fast and Scalable Interpolation of Geophysical Ocean Variables [paper] [poster] [event]
Johnson, Juan Emmanuel*; Lguensat, Redouane; Fablet, Ronan; Cosme, Emmanuel; Le Sommer, Julien
121 Neural Inference of Gaussian Processes for Time Series Data of Quasars [paper] [poster] [event]
Danilov, Egor*; Ciprijanovic, Aleksandra; Nord, Brian
122 Neural Network Prior Mean for Particle Accelerator Injector Tuning [paper] [poster] [event]
Xu, Connie*; Roussel, Ryan; Edelen, Auralee
123 Neural Network-based Real-Time Parameter Estimation in Electrochemical Sensors with Unknown Confounding Factors [paper] [poster] [event]
Jariwala, Sarthak*; Yin, Yue; Jackson, Warren; Doris, Sean
124 Neuro-Symbolic Partial Differential Equation Solver [paper] [poster] [event]
Akbari Mistani, Pouria*; Pakravan, Samira; Ilango, Rajesh; Choudhry, Sanjay; Gibou, Frederic
125 Normalizing Flows for Fragmentation and Hadronization [paper] [poster] [event]
Youssef, Ahmed*; Ilten, Phil; Menzo, Tony; Zupan, Jure; Szewc, Manuel; Mrenna, Stephen; Wilkinson, Michael K.
126 Normalizing Flows for Hierarchical Bayesian Analysis: A Gravitational Wave Population Study [paper] [poster] [event]
Ruhe, David*; Wong, Kaze; Cranmer, Miles; Forré, Patrick
127 Offline Model-Based Reinforcement Learning for Tokamak Control [paper] [poster] [event]
Char, Ian*; Abbate, Joseph; Bardoczi, Laszlo; Boyer, Mark; Chung, Youngseog; Conlin, Rory; Erickson, Keith; Mehta, Viraj; Richner, Nathan; Kolemen, Egemen; Schneider, Jeff
128 On Using Deep Learning Proxies as Forward Models in Optimization Problems [paper] [poster] [event]
Albreiki, Fatima A*; Belayouni, Nidhal; Gupta, Deepak K
129 One Network to Approximate Them All: Amortized Variational Inference of Ising Ground States [paper] [poster] [event]
Sanokowski, Sebastian*; Berghammer, Wilhelm; Kofler, Johannes; Hochreiter, Sepp; Lehner, Sebastian
130 One-Class Dense Networks for Anomaly Detection [paper] [poster] [event]
Karr, Norman*; Nachman, Benjamin; Shih, David
131 One-shot learning for solution operators of partial differential equations [paper] [poster] [event]
Lu, Lu*; Jiao, Anran; Pathak, Jay; Ranade, Rishikesh; He, Haiyang
132 PELICAN: Permutation Equivariant and Lorentz Invariant or Covariant Aggregator Network for Particle Physics [paper] [poster] [event]
Offermann, Jan*; Bogatskiy, Alexander; Hoffman, Timothy; Miller, David
133 PIPS: Path Integral Stochastic Optimal Control for Path Sampling in Molecular Dynamics [event]
Holdijk, Lars*; Du, Yuanqi; Hooft, Ferry; Jaini, Priyank; Ensing, Bernd; Welling, Max
134 Particle-level Compression for New Physics Searches [paper] [poster] [event]
Huang, Yifeng*; Collins, Jack; Nachman, Benjamin; Knapen, Simon; Whiteson, Daniel
135 Phase transitions and structure formation in learning local rules [paper] [poster] [event]
Zunkovic, Bojan*; Ilievski, Enej
136 Physical Data Models in Machine Learning Imaging Pipelines [paper] [poster] [event]
Aversa, Marco*; Oala, Luis; Clausen, Christoph; Murray-Smith, Roderick; Sanguinetti, Bruno
137 Physics solutions for privacy leaks in machine learning [paper] [poster] [video] [event]
Pozas-Kerstjens, Alejandro*; Hernandez-Santana, Senaida; Pareja Monturiol, Jose Ramon; Castrillon Lopez, Marco; Scarpa, Giannicola; Gonzalez-Guillen, Carlos E.; Perez-Garcia, David
138 Physics-Driven Convolutional Autoencoder Approach for CFD Data Compressions [paper] [poster] [event]
Olmo, Alberto*; Zamzam, Ahmed S; Glaws, Andrew; King, Ryan
139 Physics-Informed CNNs for Super-Resolution of Sparse Observations on Dynamical Systems [paper] [poster] [event]
Kelshaw, Daniel J*; Rigas, Georgios; Magri, Luca
140 Physics-Informed Convolutional Neural Networks for Corruption Removal on Dynamical Systems [paper] [poster] [event]
Kelshaw, Daniel J*; Magri, Luca
141 Physics-Informed Machine Learning of Dynamical Systems for Efficient Bayesian Inference [paper] [poster] [event]
Dhulipala, Som*; Che, Yifeng; Shields, Michael
142 Physics-Informed Neural Networks as Solvers for the Time-Dependent Schrödinger Equation [paper] [poster] [event]
Shah, Karan*; Stiller, Patrick; Hoffmann, Nico; Cangi, Attila
143 Physics-informed Bayesian Optimization of an Electron Microscope [event]
Ma, Desheng*
144 Physics-informed neural networks for modeling rate- and temperature-dependent plasticity [paper] [event]
Arora, Rajat; Kakkar, Pratik; Chakraborty, Amit; Dey, Biswadip*
145 Plausible Adversarial Attacks on Direct Parameter Inference Models in Astrophysics [paper] [poster] [event]
Horowitz, Benjamin A*; Melchior, Peter M
146 Point Cloud Generation using Transformer Encoders and Normalising Flows [paper] [poster] [event]
Käch, Benno*; Krücker, Dirk; Melzer, Isabell
147 Posterior samples of source galaxies in strong gravitational lenses with score-based priors [paper] [event]
Adam, Alexandre*; Coogan, Adam; Malkin, Nikolay; Legin, Ronan; Perreault-Levasseur, Laurence; Hezaveh, Yashar; Bengio, Yoshua
148 Probabilistic Mixture Modeling For End-Member Extraction in Hyperspectral Data [paper] [event]
Hoidn, Oliver*; Mishra, Aashwin; Mehta, Apurva
149 Qubit seriation: Undoing data shuffling using spectral ordering [paper] [poster] [event]
Acharya, Atithi*; Rudolph, Manuel; Chen, Jing; Miller, Jacob; Perdomo-Ortiz, Alejandro
150 Real-time Health Monitoring of Heat Exchangers using Hypernetworks and PINNs [paper] [poster] [event]
Majumdar, Ritam; Jadhav, Vishal; Deodhar, Anirudh; Karande, Shirish; Vig, Lovekesh; Runkana, Venkataramana*
151 Recovering Galaxy Cluster Convergence from Lensed CMB with Generative Adversarial Networks [paper] [poster] [event]
Parker, Liam H*; Han, Dongwon; Ho, Shirley; Lemos, Pablo
152 Reducing Down(stream)time: Pretraining Molecular GNNs using Heterogeneous AI Accelerators [paper] [poster] [event]
Bilbrey, Jenna A*; Herman, Kristina; Sprueill, Henry; Xantheas, Sotiris; Das, Payel; Lopez Roldan, Manuel; Kraus, Mike; Helal, Hatem; Choudhury, Sutanay
153 Renormalization in the neural network-quantum field theory correspondence [paper] [poster] [event]
Erbin, Harold*; Lahoche, Vincent; Ousmane Samary, Dine
154 SE(3)-equivariant self-attention via invariant features [paper] [poster] [event]
Chen, Nan*; Villar, Soledad
155 Scalable Bayesian Inference for Finding Strong Gravitational Lenses [paper] [poster] [event]
Patel, Yash P*; Regier, Jeffrey
156 Score Matching via Differentiable Physics [paper] [poster] [event]
Holzschuh, Benjamin J*; Vegetti, Simona; Thuerey, Nils
157 Score-based Seismic Inverse Problems [paper] [poster] [event]
Ravula, Sriram*; Voytan, Dimitri P; Liebman, Elad; Tuvi, Ram; Gandhi, Yash; Ghani, Hamza H; Ardel, Alexandre; Sen, Mrinal; Dimakis, Alex
158 Self-supervised detection of atmospheric phenomena from remotely sensed synthetic aperture radar imagery [paper] [poster] [event]
Glaser, Yannik*; Sadowski, Peter; Stopa, Justin
159 Semi-Supervised Domain Adaptation for Cross-Survey Galaxy Morphology Classification and Anomaly Detection [paper] [poster] [event]
Ciprijanovic, Aleksandra*; Lewis, Ashia; Pedro, Kevin; Madireddy, Sandeep; Nord, Brian; Perdue, Gabriel Nathan; Wild, Stefan
160 Set-Conditional Set Generation for Particle Physics [paper] [poster] [event]
Ganguly, Sanmay; Heinrich, Lukas*; Kakati, Nilotpal; Soybelman, Nathalie
161 Shining light on data [paper] [poster] [event]
Kumar, Akshat*; Sarovar, Mohan
162 Simplifying Polylogarithms with Machine Learning [paper] [poster] [event]
Dersy, Aurelien*; Schwartz, Matthew; Zhang, Xiaoyuan
163 Simulation-based inference of the 2D ex-situ stellar mass fraction distribution of galaxies using variational autoencoders [paper] [poster] [event]
Angeloudi, Eirini*; Huertas-Company, Marc; Falcón-Barroso, Jesús; Sarmiento, Regina; Walo-Martín, Daniel; Pillepich, Annalisa; Vega Ferrero, Jesús
164 Skip Connections for High Precision Regressors [paper] [poster] [event]
Paul, Ayan*; Bishara, Fady; Dy, Jennifer
165 Source Identification and Field Reconstruction of Advection-Diffusion Process from Sparse Sensor Measurements [paper] [poster] [event]
Daw, Arka*; Yeo, Kyongmin; Karpatne, Anuj; Klein, Levente
166 Stabilization and Acceleration of CFD Simulation by Controlling Relaxation Factor Based on Residues: An SNN Based Approach [paper] [poster] [event]
Dey, Sounak*; Banerjee, Dighanchal; Maurya, Mithilesh; Ahmad, Dilshad
167 Statistical Inference for Coadded Astronomical Images [paper] [poster] [event]
Wang, Mallory; Mendoza, Ismael*; Regier, Jeffrey; Avestruz, Camille; Wang, Cheng
168 Strong Lensing Parameter Estimation on Ground-Based Imaging Data Using Simulation-Based Inference [paper] [poster] [event]
Poh, Jason*; Samudre, Ashwin; Ciprijanovic, Aleksandra; Nord, Brian; Frieman, Joshua; Khullar, Gourav
169 Strong-Lensing Source Reconstruction with Denoising Diffusion Restoration Models [paper] [poster] [event]
Karchev, Kosio*; Anau Montel, Noemi; Coogan, Adam; Weniger, Christoph
170 SuNeRF: Validation of a 3D Global Reconstruction of the Solar Corona Using Simulated EUV Images [paper] [poster] [event]
Bintsi, Kyriaki-Margarita*; Jarolim, Robert; Tremblay, Benoit; Santos, Miraflor P; Jungbluth, Anna; Mason, James; Sundaresan, Sairam; Vourlidas, Angelos; Downs, Cooper; Caplan, Ronald; Muñoz-Jaramillo, Andrés
171 Super-resolving Dark Matter Halos using Generative Deep Learning [paper] [poster] [event]
Schaurecker, David*; Li, Yin; Ho, Shirley; Tinker, Jeremy
172 Tensor networks for active inference with discrete observation spaces [paper] [poster] [event]
Wauthier, Samuel T*; Vanhecke, Bram; Verbelen, Tim; Dhoedt, Bart
173 The Senseiver: attention-based global field reconstruction from sparse observations [paper] [poster] [event]
Santos, Javier E*; Fox, Zachary; Mohan, Arvind T; Viswanathan, Hari S; Lubbers, Nicholas
174 Thermophysical Change Detection on the Moon with the Lunar Reconnaissance Orbiter Diviner sensor [paper] [poster] [event]
Delgado-Centeno, Jose Ignacio*; Bucci, Silvia; Liang, Ziyi; Gaffinet, Ben; Bickel, Valentin T; Moseley, Ben; Olivares, Miguel
175 Time-aware Bayesian optimization for adaptive particle accelerator tuning [paper] [poster] [event]
Kuklev, Nikita*; Sun, Yine; Shang, Hairong; Borland, Michael; Fystro, Gregory
176 Topological Jet Tagging [paper] [poster] [event]
Thomas, Dawson S*; Demers, Sarah; Krishnaswamy, Smita; Rieck, Bastian A
177 Towards Creating Benchmark Datasets of Universal Neural Network Potential for Material Discovery [paper] [poster] [event]
Takamoto, So*; Shinagawa, Chikashi; Charoenphakdee, Nontawat
178 Towards a non-Gaussian Generative Model of large-scale Reionization Maps [paper] [poster] [event]
Lin, Yu-Heng*; Hassan, Sultan SH; Régaldo-Saint Blancard, Bruno; Eickenberg, Michael; Modi, Chirag
179 Towards solving model bias in cosmic shear forward modeling [paper] [poster] [event]
Remy, Benjamin*; Lanusse, Francois; Starck, Jean-Luc
180 Training physical networks like neural networks: deep physical neural networks [paper] [poster] [event]
Wright, Logan*; Onodera, Tatsuhiro; Stein, Martin; Wang, Tianyu; Schachter, Darren; Hu, Zoey; McMahon, Peter
181 Transfer Learning with Physics-Informed Neural Networks for Efficient Simulation of Branched Flows [paper] [poster] [event]
Pellegrin, Raphael PF*; Bullwinkel, Jeffrey B; Mattheakis, Marios; Protopapas, Pavlos
182 Uncertainty Aware Deep Learning for Particle Accelerators [paper] [poster] [event]
Rajput, Kishansingh*; Schram, Malachi; Somayaji, Karthik
183 Uncertainty quantification methods for ML-based surrogate models of scientific applications [paper] [poster] [event]
Basu, Kishore; Hao, Judy; Hintz, Delphine; Shah, Dev; Palmer, Aaron; Hora, Gurpreet Singh; Nwankwo, Darian; White, Laurent*
184 Using Shadows to Learn Ground State Properties of Quantum Hamiltonians [paper] [poster] [event]
Tran, Viet T.*; Lewis, Laura; Kofler, Johannes; Huang, Hsin-Yuan; Kueng, Richard; Hochreiter, Sepp; Lehner, Sebastian
185 Validation Diagnostics for SBI algorithms based on Normalizing Flows [paper] [poster] [event]
Linhart, Julia*; Gramfort, Alexandre; Rodrigues, Pedro
186 Virgo: Scalable Unsupervised Classification of Cosmological Shock Waves [paper] [poster] [event]
Lamparth, Max*; Böss, Ludwig; Steinwandel, Ulrich; Dolag, Klaus
187 Wavelets Beat Monkeys at Adversarial Robustness [paper] [poster] [event]
Su, Jingtong*; Kempe, Julia
188 Why are deep learning-based models of geophysical turbulence long-term unstable? [event]
Chattopadhyay, Ashesh K*; Hassanzadeh, Pedram

MLST Paper Award

The contribution "Ad-hoc Pulse Shape Simulation using Cyclic Positional U-Net" [paper] [poster] by Aobo Li, Julieta Gruszko, Brady Bos, Thomas Caldwell, Esteban León, and John Wilkerson was selected for the MLST Paper Award. The award is sponsored by Machine Learning: Science and Technology and acknowledges an outstanding submission to the workshop.

Program Committee (Reviewers)

We acknowledge the 284 members of the program committee for providing reviews on a very tight schedule and making this workshop possible. They are listed in alphabetical order below.

Aakash Patil (MINES ParisTech, CEMEF-CNRS , PSL - Research University), Abhijeet Parida (Childrens National), Abhinanda Ranjit Punnakkal (UiT The Arctic University of Norway), Abhinav Java (Adobe, MDSR Labs), Abhishek Abhishek (UBC), Adam Coogan (Ciela Institute (Université de Montréal & Mila)), Agnimitra Dasgupta (University of Southern California), Aizhan Akhmetzhanova (Harvard University), Amit Kumar Jaiswal (University College London), Andreas Schachner (University of Cambridge), Andres Vicente Arevalo (Instituto de Astrofísica de Canarias (IAC)), Anindita Maiti (Northeastern University), Ankita Shukla (Arizona State University), Anna Jungbluth (University of Oxford), Antoine Wehenkel (University of Liège), Antonio Mastropietro (Politecnico di Torino), Anurag Saha Roy (Saarland University), Arka Daw (Virginia Tech), Armi Tiihonen (Aalto University), Arnaud Delaunoy (University of Liege), Arrykrishna Mootoovaloo (Imperial College London), Arshad Rafiq Shaikh (BYJU'S), Asif Khan (University of Edinburgh), Asim Kadav (NEC Labs), Athénaïs Gautier (IMSV, University of Bern), Atilim Gunes Baydin (University of Oxford), Austin Clyde (Argonne National Laboratory), Babak Rahmani (EPFL), Barry M Dillon (University of Heidelberg), Batuhan Koyuncu (Saarland University), Ben Blaiszik (The University of Chicago), Benjamin Nachman (Lawrence Berkeley National Laboratory), Benne W Holwerda (University of Louisville), Bharath Ramsundar (DeepChem), Bilal Thonnam Thodi (New York University Abu Dhabi), Biprateep Dey (University of Pittsburgh), Brian Nord (Fermi National Accelerator Laboratory), Bruno Raffin (University of Grenoble), Cesar Quilodran-Casas (Imperial College London), Chenyang Li (Argonne National Laboratory), Chin Chun Ooi (IHPC), Christoph Weniger (University of Amsterdam), Christopher Hall (RadiaSoft LLC), Chulin Wang (Northwestern University), Clecio Bom (Brazilian Center for Research in Physics), Conrad M Albrecht (German Aerospace Center), Cristiano De Nobili (Pi School), Daniel J Kelshaw (Imperial College London), Daniel Murnane (Lawrence Berkeley National Laboratory), Daniel Muthukrishna (Massachusetts Institute of Technology), Daniel A Serino (LANL), Danielle Maddix (Amazon Research ), David Ruhe (University of Amsterdam), David Wang (TRIUMF), Dax Maximillian (MPI for Intelligent Systems, Tübingen), Debanjan Konar (CASUS - Center for Advanced Systems Understanding, Helmholtz-Zentrum Dresden-Rossendorf (HZDR)), Devesh Upadhyay (Ford Motor Co.), Di Luo (Massachusetts Institute of Technology), Dimitra Maoutsa (Technical University of Berlin), Dion Häfner (Pasteur Labs & ISI), Donlapark Ponnoprat (Chiang Mai University), Duccio Pappadopulo (Bloomberg), Elham E Khoda (University of Washington), Emanuele Usai (Brown University), Emine Kucukbenli (Harvard University), Engin Eren (DESY), Enrico Rinaldi (University of Michigan), Erik Buhmann (Universität Hamburg), Felix Wagner (HEPHY Vienna), Fernando Torales Acosta (Lawrence Berkeley National Lab), Francisco Villaescusa-Navarro (Princeton University), Franco Pellegrini (École normale supérieure, Paris), Francois Lanusse (CEA Saclay), François Rozet (University of Liège), Gabriel Nathan Perdue (Fermilab), Gaia Grosso (CERN), Gary Shiu (University of Wisconsin-Madison), Georges Tod (CRI), Gianni De Fabritiis (Universitat Pompeu Fabra), Gourav Khullar (University of Pittsburgh), Graham W Van Goffrier (UCL), Grant M Rotskoff (Stanford University), Gregor Kasieczka (Universität Hamburg), Guillermo Cabrera-Vives (University of Concepción), Haimeng 
Zhao (Tsinghua University), Hala Lamdouar (University of Oxford), Hannes Stärk (Massachusetts Institute of Technology), Haozhu Wang (Amazon Web Services), Harold Erbin (MIT, IAIFI, CEA-LIST), Harry Qiaohao Liang (MIT), Harsh Sharma (UC San Diego), Hector Corzo ( Center for Chemical Computation and Theory at UC Merced), Henning Kirschenmann (University of Helsinki), Hosein Hashemi (LMU Munich), Hubert Bretonniere (Institut d'Astrophysique Spatiale), Hyungjin Chung (KAIST), Ieva Kazlauskaite (University of Cambridge), Inigo V Slijepcevic (University of Manchester), Irina Espejo Morales (New York University), Ishan D Khurjekar (University of Florida), Jack Collins (SLAC National Accelerator Lab), James Spencer (DeepMind), jean-roch vlimant (California Institute of Technology), Jesse Thaler (MIT), Jessica Karaguesian (Massachusetts Institute of Technology), Jianan Zhou (Nanyang Technological University), Jingyi Tang (Stanford University), Jocelyn Ahmed Mazari (Extrality), John F Wu (Space Telescope Science Institute), Jordi Tura (Leiden University), Jose Luis Silva (Linköping University), Joshua Yao-Yu Lin (University of Illinois at Urbana-Champaign), Juan Emmanuel Johnson (IGE), Julian Suk (University of Twente), Junichi Tanaka (ICEPP, The University of TOkyo), Junze Liu (University of California, Irvine), Kai Fukami (University of California, Los Angeles), Kamile Lukosiute (University of Amsterdam), Karan Shah (Center for Advanced Systems Understanding (CASUS)), Keith Brown (Boston University), Keming Zhang (UC Berkeley), Ken-ichi Nomura (University of Southern California), Kim Andrea Nicoli (TU Berlin), Krish Desai (University of California, Berkeley), Kyongmin Yeo (IBM Research), Lalit J Ghule (Ansys Inc.), Laurent White (Advanced Micro Devices, Inc.), Leander Thiele (Princeton University), Lei Wang (IOP, CAS), Li Yang (Google Research), Lipi Gupta (NERSC, Berkeley National Lab), Luca Biggio (ETH Zürich), Lucas T Meyer (INRIA), Ludger Paehler (Technical University of Munich), M. Maruf (Virginia Tech), Madhurima Nath (Slalom Consulting, LLC), Maksim Zhdanov (Helmholtz-Zentrum Dresden-Rossendorf), Manuel Sommerhalder (Universität Hamburg), Marc Huertas-Company (Paris Observatory), Marcel Matha (German Aerospace Center), Marcin Pietroń (AGH UST), Marco Letizia (MaLGa, DIBRIS - University of Genoa), Mariano J Dominguez (IATE), Mariel Pettee (Lawrence Berkeley National Lab), Marios Mattheakis (Harvard University), Masaki Adachi (University of Oxford), Matteo Manica (IBM Research), Matthieu Blanke (INRIA Paris), Maximilian Croci (Microsoft Research), Maxwell X. 
Cai (SURF / Leiden University), Mayank Panwar (National Renewable Energy Laboratory), Mehmet A Noyan (Ipsumio B.V.), Mia Liu (Purdue University), Michael Bauerheim (ISAE-SUPAERO), Michael Deistler (University of Tuebingen), Michael R Douglas (Harvard CMSA), Mike Williams (Massachusetts Institute of Technology), Mikel Landajuela (Lawrence Livermore National Laboroatory), Mohammad Kordzanganeh (The University of Manchester), Muhammad F Kasim (Machine Discovery), Nadim Saad (Stanford University), Natalie Klein (Los Alamos National Laboratory), Neel Chatterjee (University of Minnesota, Twin Cities), Neerav Kaushal (Michigan Technological University), Nick B McGreivy (Princeton University), Nils Thuerey (Technical University of Munich), Nishant Panda (Los Alamos National Lab), Ole Winther (DTU and KU), Olivier Saut (CNRS-INRIA), Omer Deniz Akyildiz (University of Warwick & The Alan Turing Institute), Onur Kara (Hindsight Technology Solutions), Othmane Rifki (Spectrum Labs), Pablo Martin, Pankaj Rajak (Argonne National Laboratory), Pao-Hsiung Chiu (Institute of High Performance Computing), Parama Pal (Tata Consultancy Services Limited), Patrick Stiller (Helmholtz-Zentrum Dresden-Rossendorf), Paula Harder (Fraunhofer ITWM), Pedro L. C. Rodrigues (Inria), Peer-Timo Bremer (LLNL), Peter Harrington (Lawrence Berkeley National Laboratory (Berkeley Lab)), Peter McKeown (DESY), Peter Steinbach (HZDR), Pietro Vischia (Université catholique de Louvain), Pim de Haan (University of Amsterdam / Qualcomm AI Research), Prabhakar Marepalli (Engineer), Prajith P (Tata Consultancy Services Limited), PS Koutsourelakis (TUM), Qi Zhang (The Hong Kong Polytechnic University), Radha Mastandrea (UC Berkeley), Raghav Kansal (UC San Diego), Rajat Arora (Advanced Micro Devices (AMD)), Ralph Kube (Princeton Plasma Physics Laboratory), Rao Muhammad Umer (Institute of AI for Health (AIH), Helmholtz Muenchen), Raunak Borker (Ansys), Redouane Lguensat (IPSL), Rhys EA Goodall (Chemix.ai), Rianne van den Berg (Google Brain), Ricardo Vinuesa (KTH Royal Institute of Technology), Riccardo Alessandri (University of Chicago), Richard Feder (California Institute of Technology), Rishikesh Ranade (Ansys Inc), Roberto Bondesan (Qualcomm AI Research), Robin Schneider (Uppsala University), Rodrigo A. 
Vargas Hernández (Chemistry department University of Toronto), Rohin Narayan (Southern Methodist University), Ronan Legin (University of Montreal), Russell Tsuchida (Data61/CSIRO), Ryan D Hausen (University of California Santa Cruz), Ryan-Rhys Griffiths (University of Cambridge), S Chandra Mouli (Purdue University), Sam Foreman (Argonne National Laboratory), Sam F Lewin (University of Cambridge), Sam Vinko (University of Oxford), Sandeep Madireddy (Argonne National Laboratory), Sankalp Gilda (ML Collective), Sascha Caron (Radboud University Nijmegen), Satpreet H Singh (University of Washington), Savannah J Thais (Princeton University), Sebastian Dorn (Max-Planck Institute), Sebastian Kaltenbach (Technical University of Munich), Sebastian Wagner-Carena (Stanford University), Sébastien Fabbro (NRC Herzberg Astronomy and Astrophysics), Sergey Shirobokov (Twitter), Shahnawaz Ahmed (Chalmers University of Technology), Shiyu Wang (Emory University), Shriram Chennakesavalu (Stanford University), Shubhendu Trivedi (MIT), Siddharth Mishra-Sharma (MIT), Sijie He (University of Minnesota), Simon Olsson (Chalmers University of Technology), Simon Patrik Schnake (Deutsches Elektronen-Synchrotron DESY), Sirisha Rambhatla (University of Waterloo), Som Dhulipala (Idaho National Laboratory), Somya Sharma (U. Minnesota), Stephan Günnemann (Technical University of Munich), Stephen D Webb (RadiaSoft LLC), Sudeshna Boro Saikia (University of Vienna), Suraj Srinivas (EPFL), Surya Kant Sahu (Skit.ai), Sven Krippendorf (LMU Munich), Tamil Arasan Bakthavatchalam (Saama AI Research Lab), Tatiana Likhomanenko (Apple), Thomas Beckers (University of Pennsylvania), Thomas M McDonald (University of Manchester), Tian Xie (Microsoft Research), Tingting Xuan (Stony Brook University), Tobias Golling (UniGe), Tobias Ignacio Liaudat (Commisariat à l'Energie Atomique (CEA)), Tomasz Szumlak (AGH University of Science and Technology), Tomo Lazovich (Lightmatter), Tri Nguyen (MIT), Tryambak Gangopadhyay (Amazon), Vanya BK (Indian Institute Of Technology, Madras), Vedurumudi Priyanka (Sridevi Women's Engineering College ), Venkataramana Runkana (Tata Consultancy Services Limited), Victor Bapst (DeepMind), Victoria A Villar (Columbia University), Vinicius M Mikuni (NERSC), VISHAL DEY (The Ohio State University), Vitus Benson (Max-Planck-Institute for Biogeochemistry), Vudtiwat Ngampruetikorn (The Graduate Center, CUNY), Wai Tong Chung (Stanford University), William Coulton (Flatiron Institute), Xian Yeow Lee (Iowa State University), Xiang Fu (MIT), Xiangming Meng (The University of Tokyo), Xiangyang Ju (LBNL), Xiaolong Li (University of Delaware), Xinyan Li (IQVIA), Yao Fehlis (AMD), Yasemin Bozkurt Varolgünes (Max Planck Institute for Polymer Research), Yazan Zaid (None), Yilin Chen (Stanford University), Yingtao Luo (Carnegie Mellon University), Youngwoo Cho (Korea Advanced Institute of Science and Technology), Yu Wang (University of Michigan), Yuan Yin (Sorbonne Université, CNRS, ISIR, F-75005 Paris, France), Yuanqi Du (Cornell University), Yuanqing Wang (Memorial Sloan Kettering Cancer Center), Yuqi Nie (Princeton University), Yuwei Sun (The University of Tokyo / RIKEN AIP), Zixing Song (The Chinese University of Hong Kong), Ziyan Zhu (Stanford University)

Call for papers

In this workshop, we aim to bring together physical scientists and machine learning researchers who work at the intersection of these fields – i.e., applying machine learning to problems in the physical sciences (physics, chemistry, mathematics, astronomy, materials science, biophysics, and related sciences) or using physical insights to understand and improve machine learning techniques.

We invite researchers to submit work particularly in the following areas or areas related to them:

  • ML for Physics: Applications of machine learning to physical sciences including astronomy, astrophysics, cosmology, biophysics, chemistry, climate science, earth science, materials science, mathematics, particle physics, or any related area;
  • Physics in ML: Strategies for incorporating prior scientific knowledge into machine learning algorithms, as well as applications of physical sciences to understand, model, and improve machine learning techniques;
  • ML in the scientific process: Machine learning model interpretability for obtaining insights into physical systems; automating multiple elements of the scientific method for discovery and operations with experiments;
  • Any other area related to the subject of the workshop, including but not limited to probabilistic methods that are relevant to physical systems, such as deep generative models, probabilistic programming, simulation-based inference, variational inference, causal inference, etc.

We invite authors to follow the guidelines and best practices from the NeurIPS conference.

Contributed Talks

Several accepted submissions will be selected for contributed talks. Contributed talks can be in-person or remote depending on the preference of the presenter.

Posters

Accepted work will be presented as posters during the workshop. At the same time as the in-person poster session, we will also facilitate a virtual poster session in GatherTown. Authors of submitted papers will be able to indicate their preference for an in-person or a virtual presentation. Furthermore, in order to facilitate viewing presentations in different time zones, the authors of each accepted paper will have the opportunity to submit a 5-minute video that summarizes their work.

In case the number of posters that can be presented in person is limited by the available physical space, a subset of works will be selected to be presented virtually. We will try to keep the authors' preference for in-person or virtual poster presentation in mind during this selection. The remaining posters can be presented during the virtual poster session and through the 5-minute videos that will be uploaded to the workshop website.

Important note for work that will be/has been published elsewhere

All accepted works will be made available on the workshop website. This does not constitute an archival publication or formal proceedings; authors retain full copyright of their work and are free to publish their extended work in another journal or conference. We allow submission of works that overlap with papers that are under review or have been recently published in a conference or a journal, including physical science journals. However, we do not accept cross-submissions of the same content to multiple workshops at NeurIPS. (Check the list of accepted workshops this year.)

Submission instructions

Submit your work on the submission portal.

Submit paper

  • Submissions should be anonymized short papers (extended abstracts) up to 4 pages in PDF format, typeset using the NeurIPS paper template.
  • The authors are required to include a short statement (approximately one paragraph) about the potential broader impact of their work, including any ethical aspects and future societal consequences, which may be positive or negative. The broader impact statement should come after the main paper content. The impact statement and references do not count towards the page limit.
  • The NeurIPS style template includes a paper checklist intended to encourage best practices for responsible machine learning research (see the associated guidelines). Although we require authors to complete the checklist in order to raise awareness of and encourage these practices, we expect, given the scope and format of the workshop, that many of the checklist items will not be applicable to the submitted papers. As such, answering "no" or "n/a" to checklist items will not reflect adversely on submissions, and we do not expect authors to further qualify their answers.
  • Appendices are highly discouraged, and reviewers will not be required to read beyond the first 4 pages and the impact statement.
  • A workshop-specific modified NeurIPS style file will be provided for the camera-ready versions, after the author notification date.
  • Workshop organizers retain the right to reject submissions for editorial reasons: for example, any paper surpassing the page limitation or not including the broader impact statement will be desk-rejected.
  • Submissions will be kept confidential until they are accepted and until authors confirm that they can be included in the workshop. If a submission is not accepted, or withdrawn for any reason, it will be kept confidential and not made public.

Review process

Submissions that follow the submission instructions correctly (i.e., are not rejected for editorial reasons such as exceeding the page limit, missing the impact statement, etc.) are sent for peer review. Below are some of the key points about this process that are shared with reviewers and authors alike. Authors are expected to consider these when preparing their submissions and when deciding to apply for the reviewer role.

  • Papers are 4 pages long. Appendices are accepted but highly discouraged; the reviewers will not be required to read the appendices.
  • There will be multiple reviewers for each paper.
  • Reviewers will be able to state their confidence in their review.
  • We will provide an easy-to-follow template for reviews so that both the pros and the cons of the submission can be highlighted.
  • Reviewers will select their field of expertise so that each submission has reviewers from multiple fields. During the matching process, the same list of subject fields is used for submissions and reviewer expertise in order to maximize the quality of reviews.
  • Potential conflicts of interest based on institution and author collaboration are addressed through the CMT review system.
  • Criteria for a successful submission include novelty, correctness, relevance to the intersection of ML and the physical sciences, and promise of future impact. Negative or null results that add value and insight are welcome.
  • There will be no rebuttal period. Minor flaws will not be the sole reason to reject a paper. Incomplete works at an advanced progress stage are welcome.

Instructions for accepted papers

Authors of accepted papers are expected to upload their camera-ready (final) paper and a poster by the deadlines given on this page. Optionally they can also record a short (5-minute) video describing their work.

Camera-ready paper

Please produce the "camera-ready" (final) version of your accepted paper by replacing the "neurips_2022.sty" style file with the "neurips_2022_ml4ps.sty" file available here and using the "final" package option (that is, "\usepackage[final]{neurips_2022_ml4ps}") to include author and affiliation information. The modified style file replaces the first-page footer to correctly refer to the workshop instead of the main conference. It is acceptable if your paper goes up to five pages (excluding acknowledgements, references, paper checklist, and any appendices if present) due to author and affiliation information taking extra space on the first page. The five-page limit is strict, and appendices are allowed but discouraged.
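For reference, a minimal camera-ready preamble might look like the sketch below. The document skeleton is illustrative and assumes the standard NeurIPS 2022 template structure; only the "neurips_2022_ml4ps" style file and the "final" option are specific to this workshop, and the title, author, and email are placeholders.

    \documentclass{article}

    % Workshop style file used in place of neurips_2022.sty;
    % the [final] option displays author and affiliation information.
    \usepackage[final]{neurips_2022_ml4ps}

    \title{Title of Your Accepted Paper}
    \author{First Author \\ Affiliation \\ \texttt{first.author@example.org}}

    \begin{document}
    \maketitle
    % Main content (up to five pages in the camera-ready version),
    % followed by the broader impact statement, references,
    % the paper checklist, and any appendices.
    \end{document}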

Please revise your paper to reasonably address the reviewer comments. The revision should consist of minor corrections and/or changes that directly address those comments; beyond this, it is not acceptable to add any significant new material that was not present in the reviewed version of your paper. Please upload the final PDF of your paper by the camera-ready deadline by logging in to the CMT website (the same one used for the submissions) and using the camera-ready link shown with your existing submission.

Poster

Please upload your poster using the central NeurIPS poster upload page and follow the instructions given there regarding file formats and resolutions. To see the poster listed on the NeurIPS poster upload page, the co-author who is uploading the poster needs to be logged in to the neurips.cc website using the same email address they used in the paper submission. If you encounter a problem regarding NeurIPS accounts (e.g., you have multiple accounts associated with different email addresses and need to merge them into a single one), please consult the NeurIPS account FAQs and get in touch with the main NeurIPS conference organization, which handles accounts and registrations.

The poster sessions will take place both in-person and virtually during the workshop.

  • Physical presentation: You must bring your poster printed, preferably on lightweight paper of at most 24 in (W) x 36 in (H). Your poster will be taped to the wall.
  • Remote presentation: Virtual poster sessions will be held online at the same time as the physical poster sessions. Further instructions will be sent later.

For the authors of contributed talks, posters are optional.

Optional video

You can record a short video in addition to your poster using a platform of your own choice (e.g., YouTube). Videos will be added to the workshop website, together with the papers and posters. The video should be a brief (less than 5 minutes) presentation of your work in the accepted paper. Uploading a video is optional. You should submit the URL of your presentation on CMT with the camera-ready version of your paper.

Important dates

  • Submission Deadline: September 29, 2022 (extended from September 22), 23:59 AoE
  • Review Deadline: October 14, 2022 (extended from October 8), 23:59 AoE
  • Author (accept/reject) notification: October 20, 2022 (extended from October 15), 23:59 AoE
  • Camera-ready (final) paper deadline: November 19, 2022, 23:59 AoE
  • Poster deadline: November 19, 2022, 23:59 AoE
  • Workshop: December 3, 2022

Organizers

For questions and comments, please contact us at ml4ps2022@googlegroups.com.

Sponsors

  • IAIFI

  • MLST

  • APS GDS

Sponsors are welcome. Please contact us.

Location

NeurIPS 2022 will be a hybrid conference with physical and virtual participation. The physical component will take place at the New Orleans Ernest N. Morial Convention Center, 900 Convention Center Blvd, New Orleans, LA 70130, United States.