The large collaborations in high-energy physics analyze vast amounts of data on a daily basis. Different practices have been consolidated and improved over the past decade. A brief overview of the most common statistical techniques used in searches for new physics and in precision measurements is presented.
An overview of common statistical methods and machine learning approaches deployed at the LHCb experiment will be discussed. Particular focus will be given to recent developments using novel techniques relevant to heavy flavour physics.
We present a new tensor network algorithm for calculating the partition function of interacting quantum field theories in 2 dimensions. It is based on the Tensor Renormalization Group (TRG) protocol, adapted to operate entirely at the level of fields. This strategy was applied in Ref. [1] to the much simpler case of a free boson, obtaining excellent performance. Here we include an arbitrary...
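As a much-simplified warm-up for the tensor-contraction idea (not the TRG algorithm itself, and with illustrative parameters), the partition function of a periodic 1D Ising chain can be written as a trace over a product of transfer matrices and checked against the exact eigenvalue formula:

```python
import numpy as np

# Partition function of a periodic 1D Ising chain via transfer matrices:
# Z = Tr(T^N), with T[s, s'] = exp(beta * s * s'). beta and N are
# illustrative choices.
beta, N = 0.5, 10
T = np.array([[np.exp(beta), np.exp(-beta)],
              [np.exp(-beta), np.exp(beta)]])
Z = np.trace(np.linalg.matrix_power(T, N))

# Exact result from the transfer-matrix eigenvalues 2*cosh(beta), 2*sinh(beta).
Z_exact = (2 * np.cosh(beta))**N + (2 * np.sinh(beta))**N
```

TRG generalizes this picture to a two-dimensional network of local tensors that is coarse-grained step by step instead of being contracted exactly.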
Understanding the nature of confinement, as well as its relation to the spontaneous breaking of chiral symmetry, remains one of the long-standing questions in high-energy physics. The difficulty of this task stems from the limitations of current analytical and numerical techniques in addressing nonperturbative phenomena in non-Abelian gauge theories. The situation becomes particularly...
We introduce a Metropolis-Hastings Markov chain for Boltzmann distributions of classical spin systems. It relies on approximate tensor network contractions to propose correlated collective updates at each step of the evolution. We present benchmarks for a wide variety of instances of the two-dimensional Ising model, including ferromagnetic, antiferromagnetic, (fully) frustrated and...
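For contrast with the collective updates described above, a baseline single-spin Metropolis sampler for the 2D ferromagnetic Ising model looks like the following sketch (lattice size, coupling, and sweep count are illustrative choices, not taken from the abstract):

```python
import numpy as np

# Baseline single-spin Metropolis for the 2D Ising model; the abstract's
# method replaces this local update with tensor-network-based collective
# proposals. Parameters are illustrative.
rng = np.random.default_rng(0)
L, beta = 16, 0.4
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change from flipping spin (i, j), periodic boundaries.
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

for _ in range(50):
    sweep(spins)
m = abs(spins.mean())   # magnetization per site, in [0, 1]
```

Local updates like this decorrelate slowly near criticality, which is precisely what correlated collective proposals aim to avoid.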
In lattice QCD simulations, a large number of observables are calculated on each Monte Carlo sample of gauge fields, and their statistical fluctuations are correlated with each other as they share the same background gauge field. By exploiting the correlation, a machine learning regression model can be trained to predict the values of the computationally expensive observables from the values...
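The regression idea can be sketched on synthetic data: "cheap" observables x and an "expensive" observable y are correlated because they share the same configuration, so a model fitted on a labeled subset predicts y on the rest. The linear model and toy data below are illustrative stand-ins, not the authors' setup:

```python
import numpy as np

# Synthetic correlated observables: x plays the role of cheap measurements,
# y the expensive one, sharing the same underlying "configuration".
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=(n, 3))                                      # cheap
y = x @ np.array([0.8, -0.5, 0.3]) + 0.05 * rng.normal(size=n)   # expensive

# Fit on a small labeled subset, predict on the remainder.
train, test = slice(0, 100), slice(100, None)
coef, *_ = np.linalg.lstsq(x[train], y[train], rcond=None)
pred = x[test] @ coef
r = np.corrcoef(pred, y[test])[0, 1]   # prediction quality
```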
Critical slowing down and topological freezing are key obstacles to progress in lattice QCD calculations of hadronic properties, causing the cost of ensemble generation to diverge severely in the continuum limit. Recently, a class of machine learning techniques known as flow-based models has been successfully applied to produce exact sampling schemes that can circumvent critical slowing down...
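The exactness of such sampling schemes rests on an independence Metropolis accept/reject correction applied to proposals from a tractable model density. A toy version, with a plain Gaussian standing in for the trained flow (an illustrative simplification):

```python
import numpy as np

# Independence Metropolis: propose from a tractable density q, correct
# with accept/reject so the chain samples the target p exactly. A trained
# normalizing flow would play the role of q; here q is a wide Gaussian.
rng = np.random.default_rng(9)

def log_p(x):            # target: standard normal, unnormalized
    return -0.5 * x**2

def log_q(x):            # proposal model: N(0, 2^2), unnormalized
    return -0.5 * (x / 2.0)**2

x, chain = 0.0, []
for _ in range(50_000):
    y = 2.0 * rng.normal()                              # sample from q
    log_a = (log_p(y) - log_p(x)) - (log_q(y) - log_q(x))
    if np.log(rng.random()) < log_a:
        x = y
    chain.append(x)

chain = np.array(chain)  # mean and variance should match the target N(0, 1)
```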
We study machine learning techniques applied to the critical behavior of lattice gauge theory, in particular the confinement/deconfinement phase transition in the SU(2) and SU(3) gauge theories. We find that a neural network, trained on lattice configurations of gauge fields at an unphysical value of the lattice parameters as input, builds up a gauge-invariant function, and finds...
The unsupervised search for overdense regions in high-dimensional feature spaces, where locally high population densities may be associated with anomalous contaminations to an otherwise more uniform population, is of relevance to applications ranging from fundamental research to industrial use cases. Motivated by the specific needs of searches for new phenomena in particle collisions, we...
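A toy version of such an overdensity search, with a k-nearest-neighbor density estimate standing in for whatever estimator the authors use (the data, k, and thresholds are all illustrative):

```python
import numpy as np

# Uniform 2D background plus a small injected cluster: points whose
# k-th nearest-neighbor distance is small have high local density, so
# the clustered points should rank among the densest.
rng = np.random.default_rng(5)
background = rng.uniform(0, 1, size=(500, 2))
cluster = 0.5 + 0.01 * rng.normal(size=(20, 2))   # overdense contamination
pts = np.vstack([background, cluster])

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                # exclude self-distances
k = 5
kth = np.sort(d, axis=1)[:, k - 1]         # distance to 5th nearest neighbor

densest = np.argsort(kth)[:20]             # 20 highest-density points
frac = np.mean(densest >= 500)             # fraction from the injected cluster
```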
Matrix inversion problems are often encountered in experimental physics, and in particular in high-energy particle physics, under the name of unfolding. The true spectrum of a physical quantity is deformed by the presence of a detector, resulting in an observed spectrum. If we discretize both the true and observed spectra into histograms, we can model the detector response via a matrix....
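In the idealized noise-free case this reduces to solving a linear system; a sketch with an illustrative 3-bin response matrix:

```python
import numpy as np

# Toy unfolding: the observed histogram is the true histogram smeared by
# a response matrix R (rows: observed bin, columns: true bin). With R
# invertible and no statistical noise, inversion recovers the truth.
R = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.2],
              [0.0, 0.1, 0.8]])
true = np.array([100.0, 50.0, 25.0])
observed = R @ true
unfolded = np.linalg.solve(R, observed)
```

In practice the observed histogram carries statistical fluctuations that plain inversion amplifies, which is why regularized unfolding methods are used instead.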
Calculating analytic properties of Euclidean propagators is a demanding task, in particular if one considers non-perturbative approaches such as Dyson-Schwinger equations. At the same time, once calculated in the complex domain, these correlators provide valuable insights into various properties associated with the propagating degree of freedom, and can serve as input to bound state equations....
One of the main limitations of particle physics analyses with ML-based selection is understanding the implications of systematic uncertainties. The usual approach is to train on samples without systematic effects and to estimate the contribution of these effects to the measured quantities using modified test samples. We propose here a method based on data augmentation to incorporate the...
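A minimal sketch of the augmentation step, assuming per-feature systematic scales and coherent shifts (both illustrative modeling choices, not the authors' prescription):

```python
import numpy as np

# Replicate each training event with its features shifted by plausible
# systematic variations, so a downstream classifier sees the systematic
# spread during training. Scales and shift model are assumptions.
rng = np.random.default_rng(2)
x_nominal = rng.normal(size=(1000, 4))       # nominal training features
sigma_sys = np.array([0.1, 0.05, 0.2, 0.0])  # per-feature systematic scale

shifts = rng.uniform(-1.0, 1.0, size=(x_nominal.shape[0], 1))
x_augmented = np.concatenate([
    x_nominal,
    x_nominal + shifts * sigma_sys,          # coherently shifted replica
])
```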
The statistical significance that characterizes a discrepancy between a measurement and a theoretical prediction is usually calculated assuming that the statistical and systematic uncertainties are known. Many types of systematic uncertainties are, however, estimated on the basis of approximate procedures, and thus the values of the assigned errors are themselves uncertain.
...
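The effect can be illustrated numerically: for a fixed discrepancy, marginalizing over an uncertain systematic error (modeled here, purely as an assumption, by a log-normal spread) inflates the p-value relative to treating the assigned error as exact:

```python
import numpy as np
from math import erfc, sqrt

# Toy discrepancy with statistical and systematic errors; all numbers
# are illustrative.
rng = np.random.default_rng(3)
delta, stat, sys = 3.0, 0.7, 0.7

def pvalue(z):
    return 0.5 * erfc(z / sqrt(2.0))   # one-sided Gaussian tail

# p-value with the assigned systematic taken at face value.
p_fixed = pvalue(delta / sqrt(stat**2 + sys**2))

# p-value averaged over an "error on the error" (log-normal spread).
sys_samples = sys * rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
p_marg = np.mean([pvalue(delta / sqrt(stat**2 + s**2)) for s in sys_samples])
```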
Evaluating extremely low p-values with importance sampling techniques in discovery-oriented HEP analyses.
Many results in current particle physics studies are derived using asymptotic approximations to calculate the p-value (or the significance) of the hypothesis tested. It is difficult to assess to what extent the requirements for these approximations are valid in cases where the number...
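The importance-sampling idea itself can be shown on a toy tail probability: estimating P(X > 5) for a standard normal, far too rare for naive Monte Carlo at this sample size, by drawing from a proposal shifted to the threshold and reweighting by the density ratio:

```python
import numpy as np
from math import erfc, sqrt

# Estimate P(X > 5), about 2.9e-7, with a proposal centered on the
# threshold; weights are the ratio of target to proposal densities.
rng = np.random.default_rng(4)
threshold, n = 5.0, 100_000

x = rng.normal(loc=threshold, size=n)                       # shifted proposal
weights = np.exp(-0.5 * x**2 + 0.5 * (x - threshold)**2)    # target / proposal
p_is = np.mean((x > threshold) * weights)

p_exact = 0.5 * erfc(threshold / sqrt(2.0))   # exact Gaussian tail
```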
In this talk I present our progress on lattice gauge equivariant convolutional neural networks (L-CNNs). These new types of neural networks are a variant of convolutional neural networks (CNNs) which exactly preserve lattice gauge symmetry. By explicitly accounting for parallel transport in convolutions and allowing for bilinear operations inside the network, we show that L-CNNs can be used to...
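Not an L-CNN implementation, but a minimal check of the underlying requirement, sketched for a toy 2D U(1) lattice gauge theory: the plaquette built from parallel transporters is invariant under an arbitrary gauge transformation:

```python
import numpy as np

# Links U_mu(x) = exp(i * theta[mu, x, y]) on a periodic 2D lattice.
rng = np.random.default_rng(6)
L = 8
theta = rng.uniform(0, 2 * np.pi, size=(2, L, L))

def plaquette(theta):
    # Plaquette angle: theta_x(x) + theta_y(x+e_x) - theta_x(x+e_y) - theta_y(x).
    return (theta[0]
            + np.roll(theta[1], -1, axis=0)
            - np.roll(theta[0], -1, axis=1)
            - theta[1])

# Random gauge transformation g(x) = exp(i * alpha(x)):
# U_mu(x) -> g(x) U_mu(x) g(x + e_mu)^dagger.
alpha = rng.uniform(0, 2 * np.pi, size=(L, L))
theta_g = np.empty_like(theta)
theta_g[0] = theta[0] + alpha - np.roll(alpha, -1, axis=0)
theta_g[1] = theta[1] + alpha - np.roll(alpha, -1, axis=1)
# All alpha terms cancel around the closed loop, so the plaquette is unchanged.
```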
The crucial role played by the underlying symmetries of high energy physics and lattice field theories calls for the implementation of such symmetries in the neural network architectures that are applied to the physical system under consideration. In this talk we focus on the consequences of incorporating translational equivariance among the network properties, particularly in terms of...
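The property in question, in a few lines of numpy: a periodic (circular) convolution commutes with lattice translations, i.e. conv(shift(f)) equals shift(conv(f)); field and kernel below are random toys:

```python
import numpy as np

rng = np.random.default_rng(7)
field = rng.normal(size=32)
kernel = rng.normal(size=32)

def circ_conv(f, k):
    # Circular convolution via the Fourier convolution theorem.
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(k)))

# Translating the field and then convolving...
shifted_then_conv = circ_conv(np.roll(field, 3), kernel)
# ...gives the same result as convolving and then translating.
conv_then_shifted = np.roll(circ_conv(field, kernel), 3)
```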
Simulation-based inference is a powerful approach that can deal with various challenges, ranging from discovering hidden properties to tuning simulation algorithms and optimising device configurations. Methods such as evolutionary algorithms or Bayesian optimisation are usually employed to address these challenges. However, those approaches rely on assumptions that might not hold. Recently, a...
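A minimal likelihood-free sketch, using ABC rejection sampling (one of the simplest simulation-based-inference methods, not necessarily the approach of this talk): infer the mean of a Gaussian simulator from observed data using only forward simulations:

```python
import numpy as np

# Approximate Bayesian computation: draw a parameter from the prior, run
# the simulator, and keep the draw if the simulated summary statistic is
# close to the observed one. All numbers are illustrative.
rng = np.random.default_rng(8)
observed = rng.normal(loc=1.5, scale=1.0, size=200)   # "data", true mu = 1.5
obs_mean = observed.mean()

accepted = []
for _ in range(20_000):
    mu = rng.uniform(-5, 5)                           # draw from the prior
    sim = rng.normal(loc=mu, scale=1.0, size=200)     # forward simulation
    if abs(sim.mean() - obs_mean) < 0.05:             # keep close matches
        accepted.append(mu)

posterior_mean = np.mean(accepted)   # concentrates near the true mean
```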
Accurate and fast simulation of particle physics events is crucial for the high-energy physics community. Simulating particle interactions with the detector is both time consuming and computationally expensive. With its proton-proton collision energy of 13 TeV, the Large Hadron Collider is uniquely positioned to detect and measure the rare phenomena that can shape our knowledge of new...
In this talk, we introduce machine learning techniques for lattice QCD. Lattice QCD is one of the most successful methodologies in quantum field theory, providing quantitative predictions for QCD. Machine learning, on the other hand, enables us to treat large structured datasets. In particular, neural networks are widely used, since they have the universal approximation property, although they cannot be exact....