One more constraint we have to take into account is grounded in the fact that the classifier is trained on simulated signal versus real background, and not all features are perfectly simulated. Monte Carlo and real data have different distributions, or as they say, they disagree; look at the plots below. So if we train a classifier that picks up this difference, we get an over-estimated efficiency for the classifier. The approach to check for such over-estimation is the following. We pick a control channel of similar topology. For example, in the case of tau to three muons, it is D_s to phi pi, with phi going to two muons, so again you get three particles that have to come from a single point. This channel is known pretty well, and it can be extracted from the data. Then we compare the performance of the classifier on the simulated and real data samples using some metric, for example the Kolmogorov-Smirnov test: we compare the CDFs of the classifier output on the two samples and demand that the distance be below a certain margin. So the question is, how can we include this criterion in the training loop? One approach, suggested by Vicens Gaitan, is called data doping. Here is the scheme: we have a Monte Carlo simulated sample and real data, there are the analysis channel and the control channel, and the labels A, B, C, D mark the parts of the data we consider. We want our classifier to discriminate A from B, but not C from D. The solution is pretty elegant: we add a fraction of the simulated control-channel part C to the training sample with the background label, and this is called doping. You can follow the link to see the presentation by Vicens for more details. This approach works pretty nicely; I'll show you figures on the next slide.
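To make the two ingredients above concrete, here is a minimal numpy sketch of the agreement check and the doping step. The function names, the 0.09 margin, and the doping fraction are illustrative assumptions, not values from the analysis.

```python
import numpy as np

def ks_distance(scores_a, scores_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of two classifier-score samples."""
    grid = np.sort(np.concatenate([scores_a, scores_b]))
    cdf_a = np.searchsorted(np.sort(scores_a), grid, side="right") / len(scores_a)
    cdf_b = np.searchsorted(np.sort(scores_b), grid, side="right") / len(scores_b)
    return np.max(np.abs(cdf_a - cdf_b))

def passes_agreement_check(scores_mc, scores_real, margin=0.09):
    # Score the classifier on simulated and real control-channel events
    # and demand the KS distance stays below a chosen margin.
    return ks_distance(scores_mc, scores_real) < margin

def dope_training_set(X_sig_mc, X_bkg_real, X_control_mc, frac=0.1):
    """Data doping: move a small fraction of simulated control-channel
    events (part C) into the training set with the BACKGROUND label,
    so the classifier is penalised for exploiting MC/real differences."""
    n_dope = int(frac * len(X_control_mc))
    X = np.vstack([X_sig_mc, X_bkg_real, X_control_mc[:n_dope]])
    y = np.concatenate([np.ones(len(X_sig_mc)),       # simulated signal -> 1
                        np.zeros(len(X_bkg_real)),    # real background  -> 0
                        np.zeros(n_dope)])            # doped MC events  -> 0
    return X, y
```

Any classifier trained on the doped `(X, y)` pair is then re-scored on the control channel and gated with `passes_agreement_check` before use.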
I just want to show an additional idea that can be applied to this case. It is grounded in the paper by Ganin and Lempitsky on what is called Gradient Reversal, or Domain Adaptation with Gradient Reversal. The neural network consists of three parts. The green one is a feature extractor that builds a meaningful representation of the features. And there are two ends, or two heads, of this neural network: the blue one tries to discriminate between signal and background, and the red one learns to discriminate between Monte Carlo and real data. The gradient of the loss of this red part is reversed when it is sent down to the green part of the network. It means that the feature extractor should build a representation that is as meaningful as possible, such that you can discriminate signal from background, but you cannot discriminate Monte Carlo from the real sample. On this slide we see how those two techniques, data doping and domain adaptation, compare. You see that the sensitivity, or area under the ROC curve, is roughly the same for both classifiers, but the Kolmogorov-Smirnov test and the Cramer-von Mises test are a little bit better for the domain adaptation approach. This is due to the fact that with domain adaptation you can tune the learning rates of the two heads, or actually of all three parts of the network, so you can trade off the quality of the classifier against the quality in terms of domain adaptation. More details you can find by following the link below. So we are coming to the conclusion. We have discussed strategies for new physics searches that can be applied in some of the collider experiments. Discovery of anything beyond the Standard Model is a pretty tough challenge; those chess figures are pretty stubborn. We have examined what kinds of constraints arise, the essential approaches used to deal with them, and how machine learning techniques can be used to cope with the imposed restrictions.
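Going back to the gradient-reversal trick for a moment: the whole mechanism is a sign flip on the domain head's gradient before it reaches the shared feature extractor. Below is a deliberately tiny scalar toy, an assumed illustration rather than the paper's implementation, where the shared feature is `f = w * x` and each head has a squared-error loss.

```python
import numpy as np

def reversal_step(w, x, y, d, lr=0.1, lam=1.0):
    """One gradient step on the shared weight w of a toy two-head model.
    Label head loss:  L_y = (f - y)^2   (signal vs background target y)
    Domain head loss: L_d = (f - d)^2   (MC vs real target d)
    The gradient-reversal layer multiplies the domain gradient by -lam
    on its way down to w, so w learns features predictive for y and
    UNinformative for d."""
    f = w * x
    grad_label = 2.0 * (f - y) * x    # ordinary backprop from the label head
    grad_domain = 2.0 * (f - d) * x   # ordinary backprop from the domain head
    shared_grad = grad_label - lam * grad_domain   # sign flip on the domain part
    return w - lr * shared_grad
```

Iterating this pulls `f` toward the label target while pushing it away from anything that separates the domains; scaling `lr` and `lam` per part of the network is the trade-off knob mentioned above.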
I hope the machine learning techniques described here are applicable in other contexts as well. For example, the theme of ethical machine learning is mostly about making classifier predictions flat with respect to some sensitive feature. You will have a chance to play with these techniques on your own, and I hope you will have some fun with them as well.
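As a hypothetical illustration of that "flatness" idea (not part of the course material): one simple measure is the spread of the mean prediction across groups defined by the sensitive feature, in the spirit of demographic parity.

```python
import numpy as np

def flatness_gap(predictions, sensitive):
    """Spread of the mean classifier prediction across the groups
    defined by a sensitive feature; 0.0 means perfectly flat in
    this demographic-parity sense."""
    groups = np.unique(sensitive)
    means = [predictions[sensitive == g].mean() for g in groups]
    return max(means) - min(means)
```

A fairness-constrained training loop could then demand this gap stay below a margin, exactly as the KS distance was used for MC/real agreement earlier.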