On unfolding

A while ago I was included in a discussion between an ATLAS experimentalist who had been told that some "unfolding" was needed in their analysis, and a theorist who had previously been a strong advocate of avoiding dangerous "unfoldings" of data. So it seemed that there was a conflict between the experimentalist position on what counts as good data processing and the view from the theory/MC generator community (or at least the portion of it who care about getting the details correct). In fact the real issue was just one of nomenclature: the u-word having been used to represent both a good and a bad thing. So here are my two cents on this issue, since they seemed to help in that case. First, what the experimentalist was referring to as "unfolding" was almost certainly the "ok" kind: unfolding to hadrons, photons and leptons from species whose mean lifetime corresponds to ctau0 of at least 10 mm.

This is a one-size-fits-all approach to what we regard as a "primary" particle in the experiments, in that it gives a rough measure of the sort of distance that a particle of a particular species might travel, in the form of the distance that light travels in the mean decay time of that species. It's not a great measure: individual particles do not all decay with the mean proper lifetime of their species, and Lorentz time dilation / length contraction means that highly boosted particles with speeds close to c will actually fly much further than the light-speed rule of thumb suggests (cf. cosmic-ray muons). But it's a well-defined rule, and the only semi-short-lived particles whose decays must accordingly be treated by a detector simulation are KS0 and Lambda. (In practice the exact lifetime cut number may differ depending on whether the factor of c is taken as a round $3 \times 10^8$ m/s or a more accurate number in configuring the generator... but as long as the same discrete particle species are classed as stable/unstable this detail doesn't matter.)
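To make the rule concrete, here's a rough Python sketch of the ctau0 >= 10 mm classification, using a handful of species with approximate, illustrative mean lifetimes (the numbers and function names are mine, not anybody's official particle list):

```python
# A rough sketch of the ctau0 >= 10 mm "primary particle" rule.
# Lifetime values are approximate and for illustration only.
C_LIGHT = 2.998e8  # speed of light [m/s]

MEAN_LIFETIMES_S = {   # species -> mean proper lifetime tau0 [s]
    "pi+":    2.60e-8,
    "K+":     1.24e-8,
    "KS0":    8.95e-11,
    "Lambda": 2.63e-10,
    "pi0":    8.43e-17,
    "D0":     4.10e-13,
}

def is_generator_stable(species, ctau0_cut_mm=10.0):
    """True if the species should be left undecayed by the MC generator,
    i.e. its decay (if any) is the detector simulation's problem."""
    ctau0_mm = C_LIGHT * MEAN_LIFETIMES_S[species] * 1000.0  # m -> mm
    return ctau0_mm >= ctau0_cut_mm

for sp in MEAN_LIFETIMES_S:
    print(sp, "stable" if is_generator_stable(sp) else "decayed in generator")
```

Run it and you'll see the pions and kaons, plus KS0 and Lambda, land on the "stable" side of the cut, while pi0 and the heavy-flavour hadrons get decayed by the generator.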

So this kind of unfolding is a good and necessary thing, in that it is intended purely to remove residual detector and reconstruction-algorithm effects which have not been included in calibration procedures. We do it by constructing a mapping from our simulations of reconstructed events back to the "stable truth" record of particles produced by an MC generator using this sort of mean lifetime cut, and in most cases we end up being fairly insensitive to the details of whether or not the hadron production and decay modelling was super-accurate. And we absolutely have a duty to do this as experimentalists, because we are the only people who can be expected to understand our own detector details and to possess the detailed detector simulation needed to construct that mapping.
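In practice that mapping is usually encoded as a migration (or response) matrix, filled from simulated events which have both a truth-level and a reconstructed value of the observable. A bare-bones numpy sketch, with made-up smearing and purely illustrative names, might look like this:

```python
import numpy as np

def build_migration_matrix(truth_vals, reco_vals, bins):
    """Rows = truth bins, columns = reco bins, with each truth row
    normalised to give P(reco bin | truth bin)."""
    counts, _, _ = np.histogram2d(truth_vals, reco_vals, bins=(bins, bins))
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums,
                     out=np.zeros_like(counts), where=row_sums > 0)

# Toy usage: a truth-level spectrum smeared by a 10% Gaussian "detector"
rng = np.random.default_rng(42)
truth = rng.exponential(scale=50.0, size=100_000)
reco = truth * rng.normal(1.0, 0.1, size=truth.size)
bins = np.linspace(0.0, 200.0, 21)
M = build_migration_matrix(truth, reco, bins)   # shape (20, 20)
```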

This kind of unfolding is distinct from extrapolations, such as "correcting" a measurement made within the detector "acceptance" (i.e. the angular coverage of the active detector elements, and their limited ability to detect particles with little kinetic energy) so that it includes the particles that weren't seen. The spectra of very low-energy particles are actually very badly known, so if you correct your data to include some MC model's guess at what's going on, then you have degraded your data to some extent, and in really bad cases you have "measured" Pythia or Herwig. Another class of very bad unfolding is to "correct" data to parton level, i.e. to attempt to remove the effects of hadronization, multiple interactions within the proton, and other things which we can never see. This sort of correction was popular in Run 1 of the Tevatron, and there is a reason that barely anyone uses that data now... it's corrupted to the point of uselessness by model-dependence. No-one with a current model, much more sophisticated than what was available back in the early/mid '90s, wants to try and re-include the behaviour of some obsolete calculation to test their fancy new one, so you might wonder what the point was of measuring it at all. Well, I'm sure some tenured positions resulted from those papers, but scientifically it's of only passing interest. We need to defend against making our own data useless, and the best way to do that is to make minimal corrections: "say what you see", as Roy Walker used to drone on Catchphrase.

So dodgy acceptance extrapolations and unphysical parton-level corrections are definitely in that "bad" set of unfolding targets. Electroweak bosons (W and Z) are an awkward case, since lots of people like to "correct back" to those as well. We're having a lot of discussion in ATLAS about things like this and I'm glad to say that the trend is toward using "lepton dressing" definitions which do not require use of explicit W and Z particles from MC event records, and hence to publish measurements of e.g. "dilepton pT" and other distributions which are strongly correlated with the parton-level calculation but have the benefit of being forever well-defined based on what we could actually see. In fact this approach is what Rivet has done since the start, and hence I am a) compelled to regard it as a good thing, and b) going to give myself some credit for pushing this approach over the years!
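For the unfamiliar, "dressing" just means adding back the four-momenta of nearby final-state photons to each bare lepton before building observables like the dilepton pT. Here's a rough Python sketch of the idea; the cone size of 0.1 is a typical but analysis-dependent choice, and the data structures and function names are purely illustrative rather than any particular framework's API:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation in (eta, phi), with the phi difference wrapped into [-pi, pi]."""
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def dress_lepton(lepton, photons, cone=0.1):
    """Add the four-momenta of photons within delta-R < cone of the bare lepton.
    Particles here are dicts with keys px, py, pz, E, eta, phi (illustrative only)."""
    px, py, pz, e = lepton["px"], lepton["py"], lepton["pz"], lepton["E"]
    for ph in photons:
        if delta_r(lepton["eta"], lepton["phi"], ph["eta"], ph["phi"]) < cone:
            px += ph["px"]; py += ph["py"]; pz += ph["pz"]; e += ph["E"]
    return {"px": px, "py": py, "pz": pz, "E": e}

def dilepton_pt(lep1, lep2):
    """Transverse momentum of the (dressed) dilepton system."""
    return math.hypot(lep1["px"] + lep2["px"], lep1["py"] + lep2["py"])
```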

It seems likely that ATLAS (and our competitors) will keep on doing extrapolations and Born-level Z comparisons in precision electroweak measurements for some time yet, but always as an interpretation step in addition to the "fiducial" measurement of particles we could resolve. This is fine, as long as we assess a reasonable extrapolation uncertainty, since comparison of total cross-sections for different processes (with different acceptances) is an interesting physics thing to do, and development of MC models isn't the sole purpose of the LHC. My opinion is that this "Born level" correction will also go away at some point, because the whole concept of a well-defined propagator momentum breaks down the moment that effects of EW loop corrections become substantial compared to experimental resolution, as they do for high-pT electroweak events that we will start to probe in earnest in LHC Run 2. And the relevant theory tools are improving to include these effects: at some point there will be no excuse to say that your PDF fitting or whatever is based on simulations where EW effects aren't included, because those codes will be obsolete. I heard one of the most prominent theorists working on MC simulations of such NLO EW corrections recently say in an ATLAS Standard Model group meeting "don't correct to Born to compare to our state-of-the-art predictions", which is rather a reversal of the usual situation where "corrections" to Born MC are made "because it's what the theorists want"!

There are also many cases where it just doesn't matter what Z definition you use, and there I think the argument depends on what you need to compare to: if you are looking at Z+6 jets observables then maybe there is a case for unfolding in a way which can be compared to fixed-order partonic calculations (although I think the question "why aren't you using Sherpa or MadGraph/aMC@NLO rather than MCFM?" is nowadays the appropriate response). But if you are looking at e.g. underlying event observables in leptonic Z events then publish the results using the dressed dilepton observable -- the sorts of models which are missing the detailed QED FSR that might make a difference in that case will certainly not have any modelling of the soft QCD crap that the observables are actually studying! (And in that case the dependence of those quantities on the detailed Z/dilepton kinematics is anyway extremely weak.) The Higgs analyses remain one of the few places where comparison (or reweighting) to analytic resummation calculations remains mainstream, and I hope in the next 5 years those techniques will be embedded into mainstream MC generators so we can do precision Higgs measurements as well as searches, with good control over modelling uncertainties... otherwise physically sound unfolding will not be possible. The key thing is that even if such corrections are applied for comparison to today's limited theory tools, what we actually measured should also always be preserved for posterity, with minimal corruption and model-dependence, so that our data remains valid forever and useful for as long as possible.

Now let's just address the fact that this isn't perfect. There is certainly some model dependence in detector unfolding, because we rely on models of both the fundamental proton interaction physics and the detector to build our unfolding machinery. This is why we use procedures like iterative Bayesian unfolding to try to reduce the prior dependence introduced by our simulation details, use multiple input models, and so on. So, provided no extrapolation is involved, the effects of model dependence on unfolding should be small. But if you extrapolate, there is no data component in Bayes' theorem and the result will be 100% model-dependent. Unfolding, like any data processing, always introduces some extra uncertainty, and this should be explicitly estimated, e.g. by varying the number of iterations used, by using several MC models, etc. ... and the uncertainty bands need to inflate accordingly. The argument is that it's nearly always worth this small increase in uncertainty to make the data better represent the pure physics of what is going on, rather than mixing it up with a bunch of uninteresting (to theorists) stuff like the non-linear response of the ATLAS calorimeter.
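As a flavour of what "iterative Bayesian unfolding" actually does, here is a toy numpy sketch of the D'Agostini-style iteration, assuming a migration matrix like the one sketched earlier and ignoring efficiency and fake corrections for brevity (so this is an illustration of the idea, not a drop-in replacement for a proper unfolding package):

```python
import numpy as np

def iterative_bayes_unfold(reco_counts, M, n_iter=4, prior=None):
    """Toy D'Agostini-style iteration. M[t, r] = P(reco bin r | truth bin t);
    efficiencies and fakes are ignored for brevity."""
    n_truth = M.shape[0]
    # Start from a flat prior unless e.g. an MC truth spectrum is supplied
    est = (np.full(n_truth, reco_counts.sum() / n_truth)
           if prior is None else np.asarray(prior, dtype=float).copy())
    for _ in range(n_iter):
        # Expected reco-level yields if the current truth estimate were right
        folded = est @ M                                   # shape (n_reco,)
        # Bayes' theorem: P(truth t | reco r) = M[t, r] * est[t] / folded[r]
        post = (M * est[:, None]) / np.where(folded > 0.0, folded, 1.0)
        # Re-estimate the truth spectrum from the observed reco counts
        est = post @ reco_counts
    return est
```

The number of iterations is exactly the regularisation knob mentioned above: more iterations means less dependence on the prior, at the price of amplifying statistical fluctuations, which is why it needs to be varied as part of the uncertainty estimate.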

In short, we need to know what we're doing... but that's the fun bit. Unfolding is a good thing, when we do it in a controlled way and never correct back to seductive-looking event record contents like "the true Z" or "the true quark/gluon jet". When we do the latter, we unintentionally cripple our own data and implicitly waste a lot of public money. None of us want to do that, and the good news is that the LHC experiments are aware of the danger and are actively moving toward more robust truth definitions as we enter a new precision era where this stuff really matters.
