Machine learning is increasingly used in particle physics.

Original author: Manuel Gnida


Experiments at the Large Hadron Collider produce about a million gigabytes of data every second. Even after reduction and compression, the data the LHC collects in just one hour is comparable in volume to the data Facebook gathers over an entire year.

Fortunately, particle physicists do not have to sift through all of this data by hand. They work alongside a form of artificial intelligence that learns to perform data analysis on its own through machine learning.

"Compared with traditional computer algorithms that we design to perform a specific analysis, we design machine learning algorithms to figure out for themselves what analysis to perform, which ultimately saves us countless person-hours of design and analysis work," says physicist Alexander Radovic of the College of William and Mary, who works on the NOvA neutrino experiment.

Radovic and a group of researchers surveyed the current applications and future prospects of machine learning (ML) in particle physics in a review published in Nature in August 2018.

Sifting big data


To handle the huge volumes of data produced in modern experiments, such as those running at the LHC, researchers use "triggers": dedicated hardware working together with software that decides, in real time, which data to keep for analysis and which to discard.
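To illustrate the general idea, here is a minimal, purely illustrative Python sketch of a trigger-style decision: a toy classifier scores each event and only events above a threshold are kept. The features, weights, and threshold here are hypothetical; real trigger systems run on dedicated hardware under tight latency budgets and are far more sophisticated than this.

```python
# Toy sketch of an ML-assisted trigger decision (illustrative only).
import numpy as np

def interesting_score(event_features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """A toy logistic-regression score for how 'interesting' an event looks."""
    z = float(event_features @ weights + bias)
    return 1.0 / (1.0 + np.exp(-z))

def trigger_keep(event_features: np.ndarray, weights: np.ndarray, bias: float,
                 threshold: float = 0.9) -> bool:
    """Keep the event for offline analysis only if the score passes a threshold."""
    return interesting_score(event_features, weights, bias) >= threshold

# Example: a stream of random 'events' with 5 summary features each.
rng = np.random.default_rng(0)
weights, bias = rng.normal(size=5), -1.0
events = rng.normal(size=(1000, 5))
kept = [e for e in events if trigger_keep(e, weights, bias)]
print(f"kept {len(kept)} of {len(events)} events")
```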

At the LHCb experiment, which could shed light on why there is far more matter than antimatter in the Universe, ML algorithms make at least 70 percent of those decisions, says Mike Williams of the Massachusetts Institute of Technology, one of the authors of the review. "ML plays a role in almost every data-related aspect of the experiment, from the triggers to the analysis of the data that remains," he says.

Machine learning is also driving significant advances in analysis. At the huge ATLAS and CMS detectors at the LHC, which enabled the discovery of the Higgs boson, millions of sensors produce signals that have to be combined to yield meaningful results.

"These signals make up a complex data space," says Michael Kagan of the US Department of Energy's SLAC National Accelerator Laboratory, who works on the ATLAS detector and was also involved in the review. "We need to understand the relationships among them in order to draw conclusions, for example that a particular particle track in the detector was left by an electron, a photon, or something else."

ML also pays off for neutrino experiments. NOvA, located at Fermilab, studies how neutrinos change from one type to another as they travel through the Earth. These neutrino oscillations could potentially reveal the existence of new types of neutrinos, which some theories suggest could be dark matter particles. NOvA's detectors look for the charged particles produced when neutrinos collide with material in the detector, and ML algorithms identify them.

From machine learning to deep learning


A recent advance in the field of ML, often called deep learning, promises to expand the reach of ML in particle physics even further. Deep learning usually refers to the use of neural networks: computer algorithms with an architecture inspired by the dense network of neurons in the human brain.

These neural networks learn to perform particular analysis tasks on their own through training, in which they process test data, for example from simulations, and receive feedback on the quality of their output.
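As a rough illustration of that training loop, here is a minimal Python (PyTorch) sketch, not the actual code of any experiment: a small network is shown labeled, simulation-style data, and a loss function provides the feedback used to adjust its weights. The data, labels, and network sizes are invented for the example.

```python
# Minimal sketch of training a neural network on simulated, labeled events.
import torch
from torch import nn

# Synthetic stand-in for simulated events: 10 input features, binary label
# (e.g. "signal" vs "background"). Real experiments use far richer inputs.
torch.manual_seed(0)
x = torch.randn(2000, 10)
y = (x[:, 0] + 0.5 * x[:, 1] > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # feedback: how wrong were the predictions?
    loss.backward()               # propagate that feedback through the layers
    optimizer.step()              # adjust the weights accordingly
print(f"final training loss: {loss.item():.3f}")
```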

Until recently, the success of neural networks was limited because training them was very difficult, says co-author Kazuhiro Terao, a SLAC researcher who works on MicroBooNE, a neutrino experiment that studies neutrino oscillations as part of Fermilab's short-baseline neutrino program and paves the way for the future Deep Underground Neutrino Experiment. "Those difficulties limited us to neural networks only a couple of layers deep," he says. "Thanks to advances in algorithms and computing hardware, we now know much more about how to build and train more capable networks, hundreds or thousands of layers deep."

Many of the breakthroughs in deep learning have been driven by the commercial applications of the tech giants and by the explosion of data they have generated over the past two decades. "NOvA, for example, uses a neural network modeled on the GoogLeNet architecture," Radovic says. "It improved the experiment to a degree that could otherwise only have been achieved by collecting 30 percent more data."

Fertile ground for innovation


ML algorithms are becoming more sophisticated and finely tuned by the day, opening up unprecedented opportunities for solving particle physics problems. Many of the new tasks they can be applied to are related to computer vision, Kagan says. "It's similar to face recognition, except that in particle physics the image features are more abstract and complex than ears and noses."

The data from some experiments, such as NOvA and MicroBooNE, can be converted into actual images fairly easily, and AI can be used right away to identify their features. In experiments at the LHC, on the other hand, the images must first be reconstructed from an intricate set of data collected by millions of sensors.

"But even if the data don't look like images, we can still apply computer vision methods if we process the data in the right way," Radovic says.
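One way to picture "processing the data in the right way" is to bin raw detector hits into a fixed-size grid so they can be treated like image pixels. The Python sketch below is a hypothetical illustration of that idea, not any experiment's actual reconstruction code; the grid size, coordinate range, and hit distribution are assumptions.

```python
# Hypothetical sketch: turning detector hits (position + deposited charge)
# into a fixed-size 2D "image" that a computer-vision model can consume.
import numpy as np

def hits_to_image(x: np.ndarray, y: np.ndarray, charge: np.ndarray,
                  shape=(64, 64), extent=((-1.0, 1.0), (-1.0, 1.0))) -> np.ndarray:
    """Bin hits into a charge-weighted 2D histogram (one 'pixel' per bin)."""
    image, _, _ = np.histogram2d(x, y, bins=shape, range=extent, weights=charge)
    return image

# Example with random stand-in hits.
rng = np.random.default_rng(1)
n_hits = 500
img = hits_to_image(rng.uniform(-1, 1, n_hits),
                    rng.uniform(-1, 1, n_hits),
                    rng.exponential(1.0, n_hits))
print(img.shape, img.sum())
```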

One area where this approach could be very useful is the analysis of particle jets, which are produced in huge numbers at the LHC. Jets are narrow sprays of particles whose individual tracks are extremely difficult to separate from one another. Computer vision technology can help make sense of these jets.

Another emerging application of deep learning is the simulation of particle physics data, predicting, say, what happens in particle collisions at the LHC so that the results can be compared with real data. Simulations of this kind are usually slow and demand enormous computing power. AI could run such simulations much faster, which could ultimately become a useful complement to traditional simulation methods.
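The idea behind such fast, learned simulation can be sketched as a generator network that maps random noise to synthetic event features, as in GAN-style approaches. The toy Python example below is untrained and purely illustrative; the dimensions and architecture are assumptions, and a real setup would train the generator against real or fully simulated data.

```python
# Illustrative sketch of generative simulation: random noise in, synthetic
# "event" features out. This toy generator is untrained.
import torch
from torch import nn

latent_dim, event_dim = 16, 10   # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, event_dim),
)

noise = torch.randn(5, latent_dim)   # random noise in
fake_events = generator(noise)       # synthetic event features out
print(fake_events.shape)             # torch.Size([5, 10])
```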

"Just a few years ago, nobody would have imagined that deep neural networks could be trained to 'see' data in random noise," Kagan says. "Although this work is still at a very early stage, it already looks quite promising and will probably help with the data challenges of the future."

The benefits of healthy skepticism


Despite the obvious advances, ML enthusiasts often face skepticism from their colleagues, in part because ML algorithms mostly work like "black boxes" that reveal almost nothing about how exactly they arrived at a particular conclusion.

"Skepticism is very healthy," Williams says. "If we use ML for the triggers that discard data, as we do at LHCb, then we need to be extremely careful about it and set the bar very high."

Consequently, to strengthen the position of ML in particle physics, researchers must constantly work to improve their understanding of how the algorithms operate and, wherever possible, cross-check the results against real data.

"We constantly need to try to understand what a computer algorithm is doing and evaluate its results," Terao says. "That is true for any algorithm, not just ML. So skepticism should not slow down progress."

The rapid progress has already led some researchers to dream about what might become possible in the near future. "Today we mostly use ML to find features in our data that can help us answer some of our questions," Terao says. "Ten years from now, ML algorithms may be able to pose their own questions independently and recognize when they have discovered new physics."
