MEASUREMENT OF THE TOTAL NEUTRINO-NUCLEON CROSS-SECTION WITH ATMOSPHERIC MUON NEUTRINOS IN ICECUBE

By Sarah Caitlin Nowicki

A DISSERTATION submitted to Michigan State University in partial fulfillment of the requirements for the degree of Physics – Doctor of Philosophy

2022

ABSTRACT

The discovery of neutrino oscillations, and therefore that neutrinos have non-zero mass, provided the first conclusive proof of physics beyond the Standard Model [1]. Recently, neutrinos have become a valuable addition to the field of multi-messenger astrophysics, improving the study of distant astrophysical phenomena [2]. Fundamental to the operation of the numerous current neutrino experiments across a broad energy scale is knowledge of the neutrino interaction rate, or cross-section. Across neutrino experiments, there exists a gap in measurements of the neutrino-nucleon cross-section between approximately 300 GeV and a TeV. In this region, the Standard Model cross-section transitions away from its linear relationship with neutrino energy. This thesis describes a measurement of the total muon-neutrino-nucleon cross-section from 100 GeV to 5 TeV via a flux-dependent analysis performed on a dataset of through-going neutrino-induced muons detected by the IceCube Neutrino Observatory. An updated energy reconstruction is applied, utilizing DirectReco, a new method that generates event hypotheses using direct photon propagation, tailored to the physics of the region of interest. This provides several benefits, including improved accuracy of the event hypotheses and incorporation of the latest models of the glacial detector medium. The cross-section is measured as a normalization of the predicted flux, where the parameters are scaling factors on the Standard Model cross-section, the Cooper-Sarkar-Mertsch-Sarkar (CSMS) model [3]. Two bins are defined: 100 - 350 GeV, overlapping with existing data, and 350 GeV - 5 TeV, the unmeasured region of phase space. A prior is calculated from existing accelerator neutrino beam cross-section measurements for the overlapping energy range of 100 - 350 GeV, providing additional input to constrain the fit in the low-energy region of the sample. The resultant Standard Model cross-section scaling factors are calculated both with the accelerator prior applied (case A) and without (case B). The results are, for the 100 - 350 GeV and 350 GeV - 5 TeV regions respectively, $0.82^{+0.08}_{-0.07}$ and $1.23^{+0.07}_{-0.07}$ for case A, and $0.27^{+0.07}_{-0.05}$ and $0.78^{+0.08}_{-0.07}$ for case B. This is the world's first measurement of the neutrino-nucleon cross-section in the 350 GeV to 5 TeV energy region, although the forthcoming accelerator neutrino experiment FASER$\nu$ will provide a comparable measurement in this energy range in the near future [4]. Both results indicate tension with the Standard Model, although the interpretation is subtly different for the two cases. Case A is a "best possible" measurement that utilizes the current maximal public knowledge of the cross-section. Case B is an IceCube-data-driven result, relatively free of influences inherent in existing cross-section models. The results presented here set a benchmark for future improvements and expansion of IceCube-based cross-section studies – to higher energies, and to multiple flavours – and for the broader field.

The point is, you’re cool, dope, fresh, and smart-brained. [...] You’re awesome. Be nicer to yourself.
- The Good Place

ACKNOWLEDGMENTS

This thesis is dedicated to everyone who helped, supported, encouraged and believed in me along the long, long path to get here.

TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION
CHAPTER 2 PHYSICS OF ATMOSPHERIC NEUTRINOS
CHAPTER 3 THE ICECUBE NEUTRINO DETECTOR
CHAPTER 4 GENERAL ICECUBE EVENT RECONSTRUCTION
CHAPTER 5 PRECISION EVENT RECONSTRUCTION USING DIRECTLY SIMULATED HYPOTHESES - DIRECTRECO
CHAPTER 6 THE NEUTRINO-NUCLEON CROSS-SECTION ANALYSIS
CHAPTER 7 RESULTS OF THE CROSS-SECTION ANALYSIS
CHAPTER 8 CONCLUSION (...OR JUST THE BEGINNING?)
BIBLIOGRAPHY
APPENDIX

CHAPTER 1 INTRODUCTION

Studies of neutrinos have revolutionized the field of particle physics, and neutrinos have become an important tool for learning more about the Universe from the smallest to the largest of scales. Particles and their interactions are well-described by the Standard Model (see Figure 1.1). The first conclusive evidence of physics beyond the Standard Model was realized in the neutrino sector [1]. While neutrinos are predicted to be massless, the discovery of neutrino oscillations, a quantum mechanical process only possible for massive particles, proved otherwise. Non-zero neutrino masses remain the only confirmed physics beyond the Standard Model, but have since been incorporated by a superficial extension [1]. More recently, neutrinos have become an important element of multi-messenger astrophysics, joining photons, cosmic rays and gravitational waves as intermediaries. Using different messenger particles with distinct properties allows new and enhanced study of astrophysical objects [2].

A point of commonality across neutrino research spanning various disciplines, target energy ranges, and detector technologies is the requirement of knowledge of the interaction rate, or cross-section. Neutrino-nucleon cross-sections are not only a topic of fundamental importance to many neutrino experiments, but also a valuable topic of study in their own right. They represent a source of systematic uncertainty in many analyses. An improved characterization of the cross-sections can reduce ambiguity in subsequent studies, facilitating advances in diverse areas of physics knowledge. Experimental measurement of the value also provides an important check on the Standard Model prediction, including the potential to bring to light new physics. With sufficient precision, experimental measurements may reach the stage where it is possible to distinguish between multiple cross-section models.

Cross-section measurements currently exist across a broad range of neutrino energies and multiple flavours. For the relevant energy scale of the combined IceCube-DeepCore hybrid detector, GeV to PeV, there is a distinct region where direct measurements of the neutrino-nucleon cross-section have not occurred.

Figure 1.1 Standard Model of particle physics [1], showing the elementary particles that make up the Universe. Fermions are identified in 3 ‘generations’, each containing a pair of particles for leptons and a pair for quarks. Fundamental forces are mediated by the exchange of force carrier ‘bosons’.
Particle accelerator experiments have made a number of neutrino-nucleon cross-section measurements spanning from below a GeV to 350 GeV [1], the current upper limit in energy for this neutrino source. A complementary approach utilizes neutrinos produced in the atmosphere, and via distant astrophysical processes, to provide a data sample. Such neutrinos may be utilized in cross-section studies by leveraging the neutrino-Earth absorption effect, which is energy- and path-dependent. Previous neutrino absorption measurements have demonstrated the power of this technique [5–8], with more forthcoming [9]. This effect becomes smaller as neutrino energy decreases, with measurements reaching down to the few-TeV region. The combination of neutrino sources and methods provides a unique target at energies between O(100) GeV and O(10) TeV for the analysis considered here.

This thesis presents the world’s first measurement of the total muon neutrino-nucleon charged current (CC) cross-section from 350 GeV to 5 TeV, intermediate to IceCube’s low- and high-energy regions. A flux-dependent measurement of the cross-section is performed for two energy bins, the lower of which is defined to overlap directly with the highest energy accelerator cross-section measurements. A prior is calculated from accelerator data and the cross-section is fit both with and without this prior, generating one result that uses all available knowledge and one that does not rely on other experimental results.

Chapter 2 discusses the theory of atmospheric neutrinos, from their production and interaction to photon emission from the charged daughter leptons. Chapter 3 provides an overview of the detector systems utilized in producing the data for the analysis, the IceCube Neutrino Observatory. Event reconstructions employing standard photon lookup tables for IceCube data are described in Chapter 4. An updated event reconstruction method with direct photon propagation, DirectReco, that generates advanced event hypotheses, is presented in Chapter 5. Chapter 6 provides the detailed investigation of the muon neutrino-nucleon cross-section analysis. The results of the analysis are outlined and discussed in Chapter 7. A brief summary and discussion of future enhancements of the analysis is included in Chapter 8.

CHAPTER 2 PHYSICS OF ATMOSPHERIC NEUTRINOS

This chapter provides a brief outline of the physical processes that give rise to atmospheric neutrinos, the dominant signal for the analysis presented here, their interactions, and the mechanisms by which those interactions are detectable.

2.1 Cosmic rays

Neutrinos are produced via a number of sources, including: fusion processes in the Sun; fission processes in nuclear reactors and the Earth; human-made accelerators; and cosmic ray interactions in the atmosphere. Cosmic rays are high-energy particles produced in astrophysical processes, with a flux that is largely isotropic in direction [10, 11]. Discovered by Victor Hess in 1912 [12], they are composed of roughly 74% protons and 18% helium, with the remaining fraction being heavier nuclei, and span an energy range from $\sim 10^{9}$ eV to more than $10^{20}$ eV [1]. These relativistic particles are incident on the Earth’s atmosphere at a rate of approximately 100 per square meter per second.
Cosmic rays remain a very active area of research more than 100 years after their discovery, due to the mystery surrounding their origin at the highest energies and the desire to understand the acceleration mechanisms that produce the highest energy particles [13]. For the lower energy cosmic ray particles relevant for this atmospheric neutrino analysis, ∼100 GeV to ∼TeV, the acceleration mechanism is relatively well-understood: shock acceleration due to interaction of the cosmic ray primaries with the shock waves from a supernova [10]. To first order, the Fermi mechanism involves the charged particles interacting with planar shock fronts repeatedly, bouncing back and forth between the upstream and downstream fronts [10]. Particles trapped between these magnetic reflectors gain increasing energy with each collision, “[l]ike a ping-pong ball caught between converging paddles" [14]. This produces a power-law energy spectrum consistent with what has been observed [10] (see Figure 2.1). One finds two clear breaks in the power-law spectrum, at $\sim 10^{16}$ eV and $\sim 10^{18}$ eV, referred to as the ‘knee’ and ‘ankle’, respectively. One possible explanation for the existence of these features is that the knee represents the upper limit of an acceleration mechanism for galactic sources, and the ankle represents the upper limit of the galactic sources themselves, or the transition to extra-galactic sources [1]. For our purposes here, the spectrum at energies below the knee is of direct interest.

Figure 2.1 Experimental measurements of cosmic ray flux are shown by the blue data points. The ‘knee’ and ‘ankle’ of the spectrum, i.e., breaks in the power law spectrum, are labelled as such. The red dashed line included highlights the points at which the spectral index changes. Plot from [14].

2.2 Atmospheric neutrino flux

Primary cosmic rays may interact with Earth’s atmosphere, producing secondary daughters. These secondaries continue to interact and decay, generating more particles to form an ‘air shower’, illustrated in Figure 2.2. Pions and kaons are the principal resulting particles. Their decays (e.g., see equation 2.1), as well as the decays of their daughter muons, generate atmospheric neutrinos:

$$\pi^{\pm} \to \mu^{\pm} + \nu_\mu(\bar{\nu}_\mu)\,, \qquad \mu^{\pm} \to e^{\pm} + \nu_e(\bar{\nu}_e) + \bar{\nu}_\mu(\nu_\mu)\,. \tag{2.1}$$

Kaons decay mainly to pions or muons, and pions are produced in the air shower interactions more frequently than the (more massive) kaons [11]. Using this model of pion decay and assuming that all particles undergo decay in flight, we predict the following flavour and particle-antiparticle ratios for the atmospheric neutrinos:

$$\nu_\mu \sim \bar{\nu}_\mu \sim 2\nu_e\,, \qquad \nu_e/\bar{\nu}_e \sim \mu^{+}/\mu^{-}\,. \tag{2.2}$$

Figure 2.2 Illustration of an air shower caused by the interaction of a cosmic ray within the Earth’s atmosphere [15].

Figure 2.3 Contribution of $\pi$ and $K$ to the production of $\nu$ and $\mu$ in the atmosphere as a function of neutrino energy. The solid line is for vertically travelling particles; the dashed lines are for particles at 60° from the vertical [11].

These ratios are an important part of the prediction of an accurate atmospheric neutrino spectrum and the measurable experimental observables. However, reality is much more complicated than described in this simple model. Muons have a characteristic decay length that depends on energy, and muons resulting from interactions with $E_\nu \gtrsim 2.5$ GeV have a decay length that exceeds the typical production height in the atmosphere, ∼15 km [10].
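The scale of this effect can be checked with a short calculation. The following is a minimal sketch (not from the thesis) of the relativistic decay length, using PDG values for the muon mass and lifetime; the example energies are illustrative.

```python
# Sketch: relativistic decay length of a muon, L = gamma * beta * c * tau.
# Illustrates why few-GeV muons can out-range the ~15 km production height.
# PDG constants; the example energies are illustrative, not from the thesis.

M_MU = 0.105658   # muon mass [GeV]
C_TAU = 658.6     # c times the 2.197 us muon lifetime [m]

def muon_decay_length_m(energy_gev: float) -> float:
    """Mean decay length in metres for a muon of total energy `energy_gev`."""
    gamma = energy_gev / M_MU
    beta = (1.0 - 1.0 / gamma**2) ** 0.5
    return gamma * beta * C_TAU

for e in (1.0, 3.0, 10.0):
    print(f"E = {e:5.1f} GeV -> L = {muon_decay_length_m(e)/1000.0:6.1f} km")
# A 3 GeV muon already has L ~ 18.7 km, exceeding the ~15 km production height.
```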
Thus, those muons reaching the Earth’s surface without undergoing decay result in a modification of the previously stated ratios at low energies. A further effect on the ratios is induced by muon energy loss in the atmosphere, which can exceed 2 GeV [10]. Another element is the relative contributions of pions and kaons (see Figure 2.3). One finds the neutrino production is dominated by pions at low energies, the region of interest for atmospheric neutrino oscillations, transitioning to kaons as the energy increases above ∼100 GeV [11].

Atmospheric neutrinos, like their cosmic ray parents, follow a power-law spectrum, which is the convolution of the cosmic ray spectrum upon reaching Earth’s atmosphere with the neutrino yield of the cosmic ray particle. Analytic approximations of the spectrum can – at times – be used instead of the full calculation of the flux. For muon neutrinos and antineutrinos with energy ≫ 1 GeV from pion and kaon decays, for example:

$$\frac{dN_\nu(E_\nu,\theta)}{dE_\nu} = \frac{\phi_N(E_\nu)}{(1 - Z_{NN})(\gamma + 1)}\left[\frac{Z_{N\pi}\,(1 - r_\pi)^{\gamma}}{1 + B_{\pi\nu}\cos\theta \cdot E_\nu/\epsilon_\pi} + 0.635\,\frac{Z_{NK}\,(1 - r_K)^{\gamma}}{1 + B_{K\nu}\cos\theta \cdot E_\nu/\epsilon_K}\right], \tag{2.3}$$

with the differential primary spectrum of cosmic ray nucleons with energy $E_0$:

$$\phi_N(E_0) = \frac{dN}{dE_0} = A \times E_0^{-(\gamma+1)}\,. \tag{2.4}$$

Here the $B_i$ terms in equation 2.3 depend on the hadron attenuation lengths and on the decay kinematics, with $r_i = m_\mu^2/m_i^2$. The $Z_i$ factors are spectrum-weighted moments describing the physics of production of the parent pions and kaons, and the $\epsilon_i$ are the characteristic decay energies of the particles [11].

Described above is the ‘conventional’ atmospheric neutrino flux. There is an additional component known as the ‘prompt’ flux, originating from the decay of heavier, short-lived charmed hadrons [11]. The fast charm decay results in a harder spectrum than the conventional flux, but the heavier hadrons are less likely to be produced than the pions and kaons that contribute to the conventional flux. The prompt flux is sub-dominant at the energies of the analysis considered here, becoming relevant above $E_\nu \sim 10$ TeV [16].

2.3 Astrophysical neutrino flux

A diffuse flux of neutrinos from astrophysical sources has been identified [17] and confirmed via other analysis methods and for all three neutrino flavours [18–20]. It continues to be an active arena of investigation, and its further study was the original purpose of the event sample and analysis technique [21] utilized in this analysis. For the purposes of this cross-section analysis, however, these neutrinos are considered high-energy background events. The simplest form of the flux spectrum is considered here, a single power law model in energy of the form

$$\Phi_{\mathrm{astro}}^{\nu_\mu+\bar{\nu}_\mu}(E_\nu) = \phi_{\mathrm{astro}} \times \left(\frac{E_\nu}{100\,\mathrm{TeV}}\right)^{-\gamma_{\mathrm{astro}}}, \tag{2.5}$$

where the normalization parameter $\phi_{\mathrm{astro}}$ has units of $10^{-18}\,\mathrm{GeV^{-1}\,cm^{-2}\,s^{-1}\,sr^{-1}}$ and the spectral index $\gamma_{\mathrm{astro}}$ is dimensionless.
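As a concrete illustration of equation 2.5, the sketch below evaluates the single power-law flux. The parameter values passed in the example are placeholders, not the fitted values used in this thesis.

```python
# Sketch: the single power-law astrophysical flux of equation 2.5.
# phi_astro carries units of 1e-18 GeV^-1 cm^-2 s^-1 sr^-1 (per the text);
# the example parameter values below are placeholders, not fitted results.

def astro_flux(e_nu_gev: float, phi_astro: float, gamma_astro: float) -> float:
    """nu_mu + nu_mu-bar flux [GeV^-1 cm^-2 s^-1 sr^-1] at energy e_nu_gev."""
    pivot_gev = 1.0e5  # 100 TeV pivot energy from equation 2.5
    return phi_astro * 1e-18 * (e_nu_gev / pivot_gev) ** (-gamma_astro)

# Example: flux at 5 TeV for placeholder parameters phi = 1.0, gamma = 2.5.
print(astro_flux(5.0e3, phi_astro=1.0, gamma_astro=2.5))
```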
2.4 Neutrino physics

Although neutrinos were initially postulated more than 80 years ago [22], their properties continue to be challenging to measure. Neutrinos are electrically neutral elementary particles with a small non-zero mass, and they interact only via gravitational and weak interactions. Each of the three neutrino generations has been measured, where each is paired with a corresponding charged lepton that defines the neutrino’s flavour: electron, muon and tau. The Standard Model of particle physics predicts massless neutrinos, and the measurement of neutrino oscillations, and therefore massive neutrinos, is the only confirmed evidence for physics beyond the Standard Model [1].

2.4.1 Interactions

Neutrinos undergo weak interactions via the exchange of either a neutral $Z^0$ boson (neutral current (NC) interaction) or a charged $W^{\pm}$ boson (charged current (CC) interaction). The CC interaction produces a charged daughter lepton (see Figure 2.4a), whereas a NC interaction produces a neutral daughter neutrino (see Figure 2.4b). In addition to whether a $Z^0$ or $W^{\pm}$ boson is exchanged, neutrinos can, depending on their energy, undergo different types of interactions with a nucleon. Starting at ∼100 GeV, neutrinos have sufficient energy to undergo deep inelastic scattering (DIS) interactions exclusively; DIS is dominant for the analysis presented here and is shown in Figure 2.5. Here the neutrino interacts with a constituent quark rather than the nucleon in its entirety, allowing study of the nucleon structure [23]. The final state of a DIS interaction includes both a leptonic system (the charged or neutral lepton daughter) and a hadronic element (created when the struck quark hadronizes to form a shower).

Figure 2.4 Feynman diagrams of a CC (a) and a NC (b) interaction [23]. Though the electron neutrino is shown, the diagrams do not change for muon and tau neutrinos apart from the flavour of the neutrino and, in the case of the CC interaction, its lepton daughter.

Figure 2.5 Feynman diagram of CC neutrino deep inelastic scattering [23].

2.4.2 Cross-sections

Atmospheric neutrino cross-sections can be of order $10^{-39}\,\mathrm{cm}^2$ [23]. Due to the low interaction probability, very large-volume neutrino detectors and/or extremely intense neutrino sources are necessary to obtain statistically significant measurements. In the case of atmospheric neutrinos, the measurement is a convolution of the neutrino flux and the cross-section with the detector properties [1]. For the energy range of the analysis presented here, 100 GeV to 5 TeV, DIS is the dominant interaction and others may be neglected. A linear dependence on neutrino energy is predicted by the quark-parton model, i.e., where ‘partons’ are the constituents of the nucleon, in this case quarks [24]. This is expected for DIS, where a neutrino undergoes a point-like scattering with an individual quark. As the neutrino energy increases to approximately 500 GeV, the interaction vertex is no longer dominated by the $W^{\pm}$ boson mass and the predicted cross-section no longer increases linearly with energy [23]. Thus this analysis spans an anticipated transition region of cross-section behavior.

For this analysis, the Cooper-Sarkar-Mertsch-Sarkar (CSMS) model [3] is considered as representative of the Standard Model. This is a next-to-leading order (NLO) quantum chromodynamics (QCD) calculation of the DIS cross-section, ranging from 50 GeV up to 500 EeV. The parton distribution functions (PDFs) describing the quarks in the model are fit using the conventional DGLAP formalism [25–28]. Fitting the PDFs is an iterative process where experimental data is used for comparison; hence the model is tuned on data from accelerator experiments.
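The onset of this transition can be illustrated with a toy calculation. The sketch below is not the CSMS calculation: it holds the structure functions constant and integrates only the W-propagator factor $(M_W^2/(Q^2+M_W^2))^2$ with $Q^2 = 2 M_N E_\nu x y$, which is enough to show the linear rise in energy giving way to slower growth.

```python
# Toy numeric illustration (not the CSMS calculation): with structure
# functions held constant, sigma ~ integral over x, y of the W-propagator
# factor (M_W^2 / (Q^2 + M_W^2))^2 with Q^2 = 2 M_N E_nu x y.  This rises
# linearly in E_nu while Q^2 << M_W^2 and flattens once 2 M_N E_nu ~ M_W^2.
import numpy as np

M_N, M_W = 0.938, 80.4  # nucleon and W boson masses [GeV]

def toy_sigma(e_nu: float, n: int = 400) -> float:
    x = np.linspace(1e-4, 1.0, n)[:, None]
    y = np.linspace(1e-4, 1.0, n)[None, :]
    q2 = 2.0 * M_N * e_nu * x * y
    prop = (M_W**2 / (q2 + M_W**2)) ** 2
    return 2.0 * M_N * e_nu * prop.mean()  # arbitrary normalization

for e in (100.0, 1e3, 1e4, 1e6):
    print(f"E = {e:8.0f} GeV   sigma/E (arb.) = {toy_sigma(e)/e:.3f}")
# sigma/E is roughly constant from 100 GeV to ~1 TeV and then falls,
# mirroring the departure from linear scaling described above.
```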
The full expression of the relevant cross-section is the double differential CC cross-section for neutrino ($\nu$) and antineutrino ($\bar{\nu}$) production on isoscalar nucleon ($N$) targets:

$$\frac{\mathrm{d}^2\sigma(\nu(\bar{\nu})N)}{\mathrm{d}x\,\mathrm{d}Q^2} = \frac{G_F^2\,M_W^4}{4\pi\left(Q^2 + M_W^2\right)^2}\,\frac{\sigma_r(\nu(\bar{\nu})N)}{2x}\,, \tag{2.6}$$

with reduced cross-sections $\sigma_r(\nu(\bar{\nu})N)$:

$$\sigma_r(\nu N) = Y_+ F_2^{\nu}(x,Q^2) - y^2 F_L^{\nu}(x,Q^2) + Y_- x F_3^{\nu}(x,Q^2)\,, \tag{2.7}$$

$$\sigma_r(\bar{\nu}N) = Y_+ F_2^{\bar{\nu}}(x,Q^2) - y^2 F_L^{\bar{\nu}}(x,Q^2) - Y_- x F_3^{\bar{\nu}}(x,Q^2)\,. \tag{2.8}$$

Isoscalar refers to equal numbers of protons and neutrons in a nucleus. The equations are described in terms of Lorentz-invariant kinematic variables: $Q^2$ – the invariant mass of the exchanged vector boson; Bjorken $x$ – the fraction of the momentum of the incoming nucleon carried by the struck parton; Bjorken $y$, or inelasticity – the fraction of energy transferred to the hadronic system from the incident neutrino; $G_F$ – the Fermi constant; and $M_W$ – the mass of the $W^{\pm}$ boson. Additionally, $Y_{\pm} = 1 \pm (1 - y)^2$. There are also structure functions $F_2$, $xF_3$ and $F_L$, which are related directly to quark momentum distributions. These dimensionless structure functions "encompass the underlying structure of the target" nucleon [23]. At leading order, $F_L$ is 0 and the expressions for neutrinos are:

$$F_2^{\nu} = x\left(u + d + 2s + 2b + \bar{u} + \bar{d} + 2\bar{c}\right), \qquad x F_3^{\nu} = x\left(u + d + 2s + 2b - \bar{u} - \bar{d} - 2\bar{c}\right), \tag{2.9}$$

and for antineutrinos:

$$F_2^{\bar{\nu}} = x\left(u + d + 2c + \bar{u} + \bar{d} + 2\bar{s} + 2\bar{b}\right), \qquad x F_3^{\bar{\nu}} = x\left(u + d + 2c - \bar{u} - \bar{d} - 2\bar{s} - 2\bar{b}\right). \tag{2.10}$$

Some additional details are required for higher order calculations, but the above represents well the dominant contributions [3]. The calculated CSMS cross-section values are plotted in Figure 2.6. Errors for the relevant 100 GeV to 5 TeV region range from ∼6% to ∼4% for neutrinos and ∼21% to ∼10% for antineutrinos.

Figure 2.6 CSMS cross-section predictions for DIS interactions ($\nu$ and $\bar{\nu}$, CC and NC), shown as $\sigma/E_\nu$ in units of $10^{-38}\,\mathrm{cm^2/GeV}$ against $E_\nu$. Errors generally decrease as energy increases. Data from [3].

2.4.3 Propagation and flavour oscillation

This thesis analysis considers neutrinos that have propagated through the Earth to reach the detector. TeV- to PeV-scale neutrino-nucleon cross-section measurements [5–8] have utilized the neutrino-Earth absorption effect to infer the cross-section: neutrinos propagating through the Earth may interact with nucleons, creating an energy- and zenith-dependent disappearance effect. However, this effect decreases with decreasing energy, such that the Earth becomes effectively transparent to neutrinos below ∼1 TeV, to the point where it is essentially non-existent at the energies considered for this analysis (see Figure 2.7).

Figure 2.7 Diagram of the neutrino absorption effect in the Earth [5]. Left (a) depicts the different path lengths through the Earth for neutrinos travelling at different zenith angles. Right (b) depicts the neutrino transmission probability, i.e., the likelihood of passing through the Earth without interaction, where the white dashed line marks the zenith angle of a neutrino skimming the core-mantle boundary, also marked on the left. The energy and zenith dependence of this effect is clear, but the figure also shows that there is very little or no Earth absorption for neutrino energies of 100 GeV to 5 TeV.
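For orientation, the sketch below evaluates the transmission probability $P = \exp(-\sigma N_A X)$ for a vertically up-going neutrino. The linear cross-section scaling and the column depth through the Earth's diameter are rough illustrative numbers, not analysis inputs.

```python
# Sketch: neutrino transmission probability through the Earth,
# P = exp(-sigma(E) * N_A * X), with X the column depth in g/cm^2.
# Both the linear cross-section scaling and the vertical column depth
# are rough illustrative numbers, not values used in the analysis.
import math

N_A = 6.022e23        # Avogadro's number: ~nucleons per gram of matter
X_VERTICAL = 1.1e10   # approx. column depth through Earth's diameter [g/cm^2]

def sigma_cc_cm2(e_nu_gev: float) -> float:
    # Rough linear DIS scaling, sigma ~ 0.7e-38 cm^2 * (E/GeV); illustrative
    # only, and an overestimate at the highest energies shown below.
    return 0.7e-38 * e_nu_gev

def transmission(e_nu_gev: float, column_depth: float = X_VERTICAL) -> float:
    return math.exp(-sigma_cc_cm2(e_nu_gev) * N_A * column_depth)

for e in (1e2, 1e3, 1e4, 1e5):
    print(f"E = {e:8.0f} GeV   P(vertical transmission) = {transmission(e):.3f}")
# At 100 GeV - 5 TeV the exponent is small and P ~ 1: the Earth is effectively
# transparent, which is why this analysis cannot rely on absorption.
```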
Flavour mixing is a quantum mechanical process resulting from the mixing of the flavour eigenstates ($e$, $\mu$, $\tau$) with the mass eigenstates (1, 2, 3), and occurs via neutrino oscillation. Neutrinos interact in their flavour states, which are linear combinations of the mass eigenstates, i.e., a neutrino is created with a definite flavour and a superposition of mass states. They propagate as a combination of mass eigenstates, which then have probabilities of interacting in the various flavour states. These oscillation probabilities depend on experimental observables – the distance travelled to the detector and the neutrino energy. For most energies (and all baselines) considered in this analysis, ⪆150 GeV, this is a negligible effect [16].

2.5 Detection principles

Charged current neutrino interactions produce a charged lepton daughter particle. The charged lepton will emit Cherenkov radiation when travelling through a dielectric medium at a velocity greater than the local phase velocity of light. Detecting these Cherenkov photons is the neutrino detection mechanism employed by the IceCube Neutrino Observatory (see Chapter 3). The light is emitted at a characteristic Cherenkov angle

$$\cos\theta_c = \frac{1}{n\beta} \tag{2.11}$$

as long as the particle is travelling above the threshold velocity of $\beta_t = 1/n$, where $n$ is the index of refraction of the medium and $\beta$ is the usual relativistic velocity, $v/c$. The radiation is emitted in a thin conical shell with its vertex at the moving charged particle, as shown in Figure 2.8. The wavefront then has the characteristic angle, $\theta_c$, with respect to the particle’s trajectory [1].

Figure 2.8 Wavefronts of radiation emitted by a charged particle in a dielectric medium. Left: the velocity of the charged particle is less than the velocity of light in the medium, $v < c$. Right: the charged particle velocity is greater than the phase velocity of light in the medium, $v > c$. The wavefront of the light forms the characteristic Cherenkov cone. Diagram from [29].

Cherenkov radiation is emitted over the visible spectrum of light, from near infrared to ultraviolet. However, it is strongly peaked at short wavelengths, with the majority of the deposited energy towards the ultraviolet. The Frank-Tamm formula (equation 2.12) describes the number of Cherenkov photons emitted per unit distance per unit wavelength:

$$\frac{d^2 N}{dx\,d\lambda} = \frac{2\pi\alpha}{\lambda^2}\left(1 - \frac{1}{\beta^2 n^2}\right), \tag{2.12}$$

with $\alpha$ the fine structure constant and $\lambda$ the radiation wavelength [30]. For ice, the index of refraction is ∼1.33, giving a Cherenkov angle of ∼41° from Equation 2.11. It is noted that the amount of energy lost via Cherenkov radiation is quite small, far less than what is lost through ionization, for example [30].
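Equations 2.11 and 2.12 are straightforward to evaluate for ice. The sketch below reproduces the ∼41° Cherenkov angle quoted above and integrates the Frank-Tamm formula over a 300 - 600 nm window; the window is an assumption chosen for illustration, not a detector specification.

```python
# Sketch: Cherenkov angle (eq. 2.11) and photon yield from the Frank-Tamm
# formula (eq. 2.12) for a beta ~ 1 particle in ice.  The 300-600 nm window
# is an assumed wavelength range for illustration.
import math

ALPHA = 1.0 / 137.036  # fine structure constant
N_ICE = 1.33           # index of refraction of ice (as in the text)

def cherenkov_angle_deg(beta: float = 1.0, n: float = N_ICE) -> float:
    return math.degrees(math.acos(1.0 / (n * beta)))

def photons_per_m(lam1_nm: float, lam2_nm: float,
                  beta: float = 1.0, n: float = N_ICE) -> float:
    """Frank-Tamm yield integrated over wavelength:
    N/x = 2*pi*alpha * (1 - 1/(beta^2 n^2)) * (1/lam1 - 1/lam2)."""
    factor = 1.0 - 1.0 / (beta**2 * n**2)
    return 2.0 * math.pi * ALPHA * factor * (1.0 / (lam1_nm * 1e-9) - 1.0 / (lam2_nm * 1e-9))

print(f"theta_c = {cherenkov_angle_deg():.1f} deg")              # ~41 deg, as in the text
print(f"yield   = {photons_per_m(300.0, 600.0):.0f} photons/m")  # O(10^4) photons per metre
```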
The Bethe-Bloch equation (2.13) describes the energy loss of a charged particle due to ionization and excitation of the nuclei in a material, also called its electronic stopping power [1]:

$$-\frac{dE}{dx} = 4\pi N_A r_e^2 m_e c^2 z^2 \frac{Z}{A}\frac{1}{\beta^2}\left[\ln\left(\frac{2 m_e c^2 \gamma^2 \beta^2}{I}\right) - \beta^2 - \frac{\delta}{2}\right]. \tag{2.13}$$

Here $z$ is the charge of the particle in units of $e$, $Z$ is the atomic number of the medium, $A$ is the atomic mass number of the medium, $m_e$ is the electron mass, $r_e$ is the classical electron radius $\frac{e^2}{4\pi\epsilon_0 m_e c^2}$, and $N_A$ is Avogadro’s number. In the above, $I$ is the mean excitation energy of the medium, which can be approximated by $I = 16 \cdot Z^{0.9}$ eV for $Z > 1$. Finally, the $\delta$ term is known as the density correction. It describes the effect of the charge density of the medium on the electric field of the particle (describing how much this screening reduces the stopping power of the particle [31]). An example of the Bethe-Bloch function and its applicable energy range for muons on copper is shown in Figure 2.9. Particles with energy losses described by the Bethe-Bloch equation are known as minimum ionizing particles [1].

Figure 2.9 Muon energy loss in copper (mass stopping power against muon momentum), with the different regimes labelled. From [1].

The average rate of energy loss for muons can be written as

$$-\left\langle\frac{dE}{dx}\right\rangle = a(E) + b(E)\,E\,, \tag{2.14}$$

where $a$ represents the electronic stopping power described by the Bethe-Bloch equation, and $b$ represents radiative processes (bremsstrahlung, pair production, photo-nuclear interactions). The element $b$ may consequently be defined as [1, 32]

$$b \equiv b_{brems} + b_{pair} + b_{nucl}\,. \tag{2.15}$$

Bremsstrahlung occurs when a charged particle loses energy in the form of photons after being decelerated by the electric field of a nucleus. Pair production describes the production of $e^+e^-$ pairs in a material’s nuclear electric field by a virtual photon from the particle. Photo-nuclear interactions again involve virtual photons arising from the particle interacting with the nuclei of the material, but in this case energy is lost via an inelastic collision [31]. The relative contributions of the radiative processes for water are shown in Figure 2.10.

The nature of radiative losses is stochastic, resulting in electromagnetic and hadronic showers of hugely varying energies occurring randomly along the muon track. This is due to the processes’ small cross-sections and ‘hard spectra’, i.e., with a small spectral index, leading to more high-energy events [1]. As demonstrated in Figure 2.9, the effect of different energy loss mechanisms is dependent on the muon energy. Our region of interest for this analysis, 100 GeV to a few TeV, overlaps with the transition region from minimum ionizing to radiative loss dominated energies for muons in water.

Figure 2.10 The relative contributions of bremsstrahlung, pair production, and photo-nuclear interactions to the radiative processes causing energy losses of muons in water. The dashed line is from an older publication. Modified from [32].
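As a closing numeric sketch for this chapter, equation 2.14 with constant coefficients integrates to a closed-form muon range, $R(E) = \ln(1 + bE/a)/b$. The $a$ and $b$ values below are rough water/ice numbers chosen for illustration, not inputs to the analysis.

```python
# Sketch: average muon range from eq. (2.14), dE/dx = a + b*E, which for
# constant a, b integrates to R(E) = (1/b) * ln(1 + b*E/a).  The values of
# a and b are rough water/ice numbers for illustration only.
import math

A_ION = 0.2     # ionization term a [GeV/m], roughly minimum-ionizing in water
B_RAD = 3.3e-4  # radiative term b [1/m], rough value for water/ice

def muon_range_m(e_gev: float, a: float = A_ION, b: float = B_RAD) -> float:
    return math.log(1.0 + b * e_gev / a) / b

print(f"critical energy a/b ~ {A_ION / B_RAD:.0f} GeV")  # radiative losses take over
for e in (100.0, 1e3, 1e4):
    print(f"E = {e:7.0f} GeV -> range ~ {muon_range_m(e)/1000.0:.1f} km")
# Muons of a few hundred GeV already travel O(1) km, consistent with the
# through-going track topology exploited later in this thesis.
```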
CHAPTER 3 THE ICECUBE NEUTRINO DETECTOR

The IceCube Neutrino Observatory is a cubic-kilometer neutrino detector that uses naturally occurring glacial ice as a Cherenkov medium. We begin with a discussion of the detector design, optimized for the detection of neutrinos from the GeV- to beyond the PeV-scale. A brief discussion of the instrument systems that provide the means to realize a physics-quality data stream (Section 3.2) is followed by a characterization of the morphologies of the event signatures (Section 3.3) and the modelling of the properties of the natural ice sheet (Section 3.4).

3.1 Geometry and design

The IceCube Neutrino Observatory (see Figure 3.1), along with its subdetectors, has been specifically designed and built to detect the Cherenkov radiation emitted by neutrino lepton daughters at the GeV scale and beyond. The detector uses the deep ice of the Antarctic glacier at the South Pole Station as the Cherenkov medium. Photomultiplier tubes (PMTs) are deployed in a 3-dimensional array to instrument more than a cubic kilometre of ice [33]. The PMTs are encased in a glass sphere, able to withstand the extreme pressures at the bottom of the glacier, along with an electronics main board for signal digitization and an LED flasher board for in-situ calibration purposes. These integrated spheres are referred to as Digital Optical Modules (DOMs; see Figure 3.2 [34]).

The IceCube array is composed of 86 cables or ‘strings’, each with 60 attached DOMs, for a total of 5160 DOMs. To install each string, a 2.5 km deep hole was melted into the ice. The DOMs and connective cable were lowered into the hole to a depth of 2450 m from the surface prior to refreezing. A top-down view of IceCube (see Figure 3.3) shows the string arrangement in an approximately hexagonal grid with almost uniform 125 m spacing between strings. This designed spacing reflects the detector’s original primary goal: to search for astrophysical neutrinos at the TeV - PeV energy scale [33]. A vibrant particle physics program at IceCube has also emerged (atmospheric neutrino oscillations and dark matter searches), related to the DeepCore subarray located in the bottom half of the central region of the IceCube array. Here the target is lower energy events (∼5 GeV to 100 GeV), requiring more densely spaced photocathode area. Rather than the lateral spacing of 125 m and vertical spacing of 17 m of the primary array, the 8 DeepCore strings have spacings of 40 - 100 m, averaging 70 m laterally, and 7 m vertical DOM spacing [33]. The DeepCore DOMs also contain higher quantum efficiency PMTs than IceCube’s standard DOMs, providing increased sensitivity to the Cherenkov radiation from relatively low energy (dim) events [35]. Its placement allows DeepCore to leverage large portions of the IceCube detector as an active veto to detect muons from cosmic ray showers, the primary background in the study of atmospheric neutrinos [35]. Additionally, with the DOMs concentrated in the cleanest, clearest ice at the bottom of the detector, maximal photon detection with minimal attenuation and scattering may occur [35].

Figure 3.1 The IceCube detector at the South Pole. The IceCube array, DeepCore subarray, and IceTop surface array are all labelled, as well as the IceCube Lab, which operates as the central counting house for the facility. The Eiffel Tower is provided for reference of scale.

Figure 3.2 Diagram of IceCube’s Digital Optical Module (DOM) with main components labelled [33]. The Digital Optical Modules comprise the primary instrumentation for the IceCube detector. The components are contained in a 13-inch diameter glass sphere housing. The downward facing PMT is a 10-inch diameter Hamamatsu R7081-02 for IceCube DOMs and a high quantum efficiency 10-inch diameter Hamamatsu R7081-02MOD for DeepCore DOMs. The large PMT is shielded by a mu-metal grid, to mitigate the effects of Earth’s magnetic field, and is optically coupled to the glass sphere with a silicone gel to enhance light collection efficiency. Associated electronics include a main board, delay board, and a calibration board with LED ‘flashers’, as well as a high voltage power source, providing power and control of the PMT, as well as performing signal amplification, filtering and digitization [33]. The DOMs were designed to be robust and reliable, and have realized a remarkable survival rate, with 98.4% continuing data-taking as of 2016, with failures primarily occurring during the initial commissioning period [33].

The third subdetector of the observatory is IceTop, the cosmic ray air shower array.
It is located at the surface of the ice sheet and consists of tanks of frozen ultrapure water, each equipped with a pair of DOMs. The 162 tanks are arranged in 81 stations, approximately following the IceCube grid. In addition to acting as a partial veto system for downward-travelling neutrinos, IceTop is used to study primary cosmic rays at energies in the PeV to EeV range [33].

Figure 3.3 Top view of IceCube. Green dots represent IceCube strings. The 8 red dots in the middle represent the more densely spaced strings of the DeepCore subdetector. IceTop tanks are represented by smaller blue dots.

3.1.1 IceCube coordinate system

IceCube uses a local coordinate system with its origin at the centre of the detector, nearly 2000 m from the surface of the ice sheet. The y-axis is aligned with the Prime Meridian and the x-axis is offset 90° clockwise, with the z-axis pointing up towards the surface [33]. Zenith directions of particles are defined from 0 to $\pi$, with ‘down-going’ trajectories (near 0) passing from the surface through the IceCube detector and ‘up-going’ trajectories (close to $\pi$) passing through the detector after traversing the Earth.

3.2 Data acquisition and triggering

Photon detection occurs in an IceCube DOM when the observed charge exceeds a defined 0.25 photo-electron (PE) threshold. This detection, referred to as a ‘hit’, triggers recording and digitization of the PMT waveform over a relatively long time window, up to 6.4 µs, to include late arriving photons. A time window of width 1 µs is evaluated for locally coincident hits in time and space (see [33] for details). Triggering algorithms apply specific criteria – number of hits, geometrical requirements – in a specified time window to distinguish between particle interactions and uncorrelated electronic dark noise. The primary IceCube trigger is called the Simple Multiplicity Trigger (SMT), with SMT8 – identifying 8 or more hit DOMs within a 5 µs time window – being the most pertinent to the studies presented here due to the target energy range and string spacing of the IceCube detector [33, 36].
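The multiplicity condition itself is simple to express. The sketch below illustrates the sliding-window logic of an SMT8-style trigger; it deliberately ignores the local-coincidence and geometry requirements of the real trigger and is not IceCube code.

```python
# Sketch: the core logic of a simple multiplicity trigger (SMT8): at least
# 8 hits within a sliding 5 microsecond window.  Real triggering also
# applies per-DOM local-coincidence conditions; this sketch only
# illustrates the multiplicity-in-time-window idea.

def smt_trigger(hit_times_ns, multiplicity: int = 8, window_ns: float = 5000.0) -> bool:
    """True if any window of `window_ns` contains >= `multiplicity` hits."""
    times = sorted(hit_times_ns)
    for i in range(len(times) - multiplicity + 1):
        if times[i + multiplicity - 1] - times[i] <= window_ns:
            return True
    return False

# Example: 8 hits within ~2 us trigger; the same hits spread over ~50 us do not.
print(smt_trigger([0, 250, 500, 800, 1100, 1400, 1700, 2000]))  # True
print(smt_trigger([i * 7000.0 for i in range(8)]))              # False
```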
3.3 Event signatures

Physics events in the detector deposit light in a few characteristic patterns. Cascade-like events occur where the Cherenkov radiation is emitted over a short distance relative to the spacing of the strings, creating a spherical burst shape (see Figure 3.4a). Cascades are generated by a variety of interaction types and may be either electromagnetic or hadronic. Electromagnetic cascades are induced by CC interactions of electron and tau neutrinos. Hadronic cascades are the result of an NC interaction, as well as the initial neutrino-nucleon CC interaction, of any neutrino flavour. Hadronic showers produce less Cherenkov light, and have a higher variance in the amount of light produced per unit energy, due to the presence of neutral pions [37]. Electromagnetic showers have an approximately linear relationship between the Cherenkov light produced and the input energy [37]. At sufficiently high energies, i.e., the PeV-scale, a tau lepton may generate a ‘double bang’ signature with two separate cascades – see Figure 3.4c. (N.B. The tau lepton travels a large enough distance before decaying that a secondary cascade is visibly separate from the initial hadronic interaction.)

The focus of the analysis presented in this thesis is the remaining event signature – the ‘track’ type events generated by muons in the detector (see Figure 3.4b). The initial neutrino-nucleon CC interaction produces a hadronic cascade, and the resulting muon traverses the detector due to its relatively long decay lifetime and because, at the energies of interest here, it largely experiences ionizing energy loss. Cherenkov radiation is deposited in a cylinder or ‘track’, sometimes over kilometers, depending on the muon energy. Above 1 TeV, the energy deposition becomes dominated by stochastic interactions at random intervals along the track, which have a cascade-like energy deposition. The event selection for the analysis presented here targets muons produced outside the detector, passing into the fiducial volume (and often all the way through, known as ‘through-going muons’). In this case, only the muon track and associated stochastic losses are visible in the detector.

Figure 3.4 Diagrams of the main event signatures in the IceCube detector [38]: (a) cascade-like event, from a CC electron neutrino interaction; (b) track-like event from a neutrino-induced muon; (c) ‘double bang’ event from a neutrino-induced tau.

3.4 Optical properties of the instrumented ice

A fundamental challenge for IceCube is the inability to control the properties of the glacial ice that makes up the detector medium. To understand the detector data, precision knowledge of the photons as they propagate through the ice is required. Photon paths are modified by scattering and absorption interactions in the ice, both with the ice itself and with the dust that was deposited as the glacier formed [39]. To overcome this challenge, IceCube incorporates several methods of in-situ calibration to extract the optical properties of the ice across the broad detector volume. During detector construction, a laser dust logger was deployed in eight of the still-liquid holes and measured high-resolution particulate profiles, providing a very accurate record of the amount of dust in the ice with varying depth [40]. As noted above, each DOM includes a flasher board designed for calibration, including 12 LEDs that can be detected 0.5 km away. By emitting photons of known wavelength and intensity from these LEDs with controllable output and recording the response of surrounding DOMs, the local optical properties of the glacial ice may be measured; the scattering and absorption coefficients of the ice are extracted by performing a global fit to these data [39]. These coefficients are used to model the ice for event simulation and reconstruction. Horizontal layers of ice with constant optical properties are defined by averaging the coefficients over 10 m bins in depth. The layered ice model is a natural representation of the glacier due to its formation via precipitant deposition. Further detail has been added over iterations of the models, corroborated by improvements in data-model agreement.

Figure 3.5 Scattering and absorption coefficients for the ‘SPICE 3.2.1’ ice model used in this work, plotted against depth in the ice [43]. Corresponding scattering and absorption lengths are also plotted. Instrumented depths of IceCube and the DeepCore subdetector are marked, as well as a region with high scattering and absorption known as the ‘dust layer’. The dust layer is a region of the glacier with relatively high amounts of dust from increased deposition during a period of decreased global temperature [40]. It is noteworthy that errors on these coefficients (not plotted) increase significantly outside the instrumented volume.
For example, the ice layers are tilted, as observed in the dust logger data [40]. More detailed studies of the ice have identified an anisotropy [41]. The currently verified model is referred to as ‘SPICE 3.2.1’ (for ‘South Pole ice’), and a plot of the ice properties is shown in Figure 3.5. However, ice modelling continues to be a dynamic and active area of study. One area of investigation uses a different anisotropy mechanism called birefringence [42].

In addition to the bulk ice that makes up most of the detector, the refrozen ice around the DOMs, or ‘hole ice’, is also undergoing active investigation. The installation of IceCube strings was performed with a hot water drill, melting a 2.5 km deep hole in the glacier that refroze around the DOMs but did not retain the natural layered properties of the ice sheet [44]. The effects of the hole ice are included in the parameterization of the DOM angular response [39]. A hole ice model with 2 parameters, $p_0$ and $p_1$ [45], is utilized in the MC and reconstruction for this analysis, with only $p_0$ varied for this work. Resulting models for a representative spread of parameter values are shown in Figure 3.6. The effects of bubbles in the refrozen hole ice (parameterized by $p_0$ and $p_1$), cable shadowing and sensor tilt are also currently being investigated [46].

Figure 3.6 Depiction of the hole ice model describing the angular acceptance of the DOMs [16]. The $p_0$ parameter is varied, resulting in differing acceptance probability, dominantly for photons travelling upward into the DOM.

CHAPTER 4 GENERAL ICECUBE EVENT RECONSTRUCTION

Once we have obtained our detector data, extracting accurate values of the particle parameters is a highly non-trivial but critically important task for IceCube analyses. IceCube often uses likelihood-based methods reliant on ‘photon lookup tables’ to predict the amount of detected charge for hypothesis events. However, the naturally-occurring glacial medium constitutes a significant source of uncertainty, and the ongoing development of its modelling requires a more sophisticated and flexible method of event hypothesis generation. Here, a sample of the event reconstructions that apply IceCube’s photon table method is described in detail.

4.1 Introduction

Event selections generally use a series of reconstructions, with faster, simpler, lower level reconstructions seeding more accurate but resource-intensive higher level methods. Given the importance of estimating particle attributes and the myriad categories of events, including particle type, energy, direction, and location in the detector, event reconstruction is a long-studied topic within IceCube, where a wide range of different methods have been developed.

4.1.1 General likelihood framework

Likelihood-based reconstruction methods can be decomposed into three components – a hypothesis, a comparison function (the likelihood) and an optimization algorithm (generally, a minimizer). The hypothesis is our model of the event being reconstructed. It is described by a set of parameters and seeded with an initial set of well-motivated values. The likelihood function in this case compares charge: that in the data to that predicted for the hypothesis event. This information is passed to a minimizer to navigate the typically multi-dimensional likelihood space, where the number of parameters is equivalent to the number of dimensions.
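The three-component structure just described can be made concrete with a toy example: a parameterized charge prediction, a Poisson comparison of predicted to observed charge, and an off-the-shelf minimizer. The charge model below is a placeholder invented for illustration, not an IceCube hypothesis.

```python
# Sketch of the three-component likelihood reconstruction described above:
# hypothesis, likelihood, minimizer.  `predict_charges` is a placeholder
# stand-in for a table- or simulation-based charge expectation.
import numpy as np
from scipy.optimize import minimize

def predict_charges(params, n_doms: int = 10) -> np.ndarray:
    """Placeholder hypothesis: expected charge per DOM for given parameters."""
    x0, energy = params
    doms = np.arange(n_doms, dtype=float)
    return 0.1 + energy * np.exp(-0.5 * (doms - x0) ** 2)  # noise floor + signal

def neg_log_likelihood(params, observed) -> float:
    expected = np.clip(predict_charges(params), 1e-9, None)
    # Poisson log-likelihood, up to a parameter-independent constant.
    return float(np.sum(expected - observed * np.log(expected)))

observed = predict_charges((4.0, 3.0))  # pretend data from a known hypothesis
seed = (2.0, 1.0)                       # well-motivated initial values
fit = minimize(neg_log_likelihood, seed, args=(observed,), method="Nelder-Mead")
print(fit.x)  # recovers approximately (4.0, 3.0)
```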
IceCube uses a flexible reconstruction framework that allows the user to select a combination of these elements, internally known as ‘gulliver’ [47]. A more specialized software tool built on this framework is the ‘millipede’ family of reconstructions. Millipede is designed to build event hypotheses consisting of various combinations of muon tracks and electromagnetic cascades. Its most general event hypothesis consists of a muon track passing through the detector with stochastic energy losses occurring regularly along its length, analogous to the millipede insect with the legs represented by cascades. It can, however, also handle events as simple as a single cascade. A particular strength of millipede is its internal minimization of component particle energies, as described in [37]. This optimization is performed separately for every step taken during minimization of the remaining parameters, reducing the dimension of the parameter space to be handled by the selected (external) minimizer. My work in improving event reconstruction in IceCube focuses on increasing the accuracy of the predicted charge of the millipede hypothesis events, as detailed in section 4.2.3 and chapter 5.

The reconstructions in the millipede software suite utilize a Poisson likelihood function including a noise term, since we are considering the probability of detecting photons in a PMT – a Poisson process – in a detector with multiple sources of noise, e.g., DOM pre-pulsing and after-pulsing [37]. A number of minimization algorithms have been implemented in the standard IceCube software and are available for use.

4.1.2 Muon angular reconstruction

For high-level reconstruction of the muon track direction, a maximum likelihood method is employed. This approach calculates an unbinned likelihood considering the arrival time probability distribution function of the first photon to reach the photosensor. Using only the first photon reduces bias from simplifying approximations. For example, less photon scattering is involved for the first photon, reducing the dependence on accurate modelling of ice properties while also significantly increasing the reconstruction speed [48]. The muon is approximated by an ‘infinite’ muon hypothesis, that is, a muon passing completely through the detector. The charge expectations are supplied by photon lookup tables of averaged muon tracks including stochastic losses, generated using direct photon propagation. A full description of these unique spline table features is included in [49].

4.1.3 Muon energy estimator

The event sample described in this thesis was previously reconstructed for energy via the truncated mean of the muon specific energy loss, also known as $dE/dx$ – the amount of energy deposited over some distance. This quantity is correlated with the muon energy at the initial neutrino-nucleon interaction, hence its value as an energy estimator. The distribution of $dE/dx$ values, measured along the track, is broadened by stochastic losses, the primary mode of muon energy loss above approximately 1 TeV. Removing this high-value tail of the distribution, i.e., truncating the mean of $dE/dx$, improves the resolution of this method [50]. To calculate the truncated mean, the track is first divided into segments via planes perpendicular to the track. $dE/dx$ is calculated for each segment via the comparison of observed and expected photo-electrons, and a fraction of the highest values is removed.
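The procedure can be sketched in a few lines. The truncation fraction below is an assumption chosen for illustration; the optimized value is given in [50].

```python
# Sketch of the truncated-mean idea: compute dE/dx per track segment,
# discard the highest-value fraction, and average the rest.  The 40%
# truncation used here is an assumed fraction for illustration.
import numpy as np

def truncated_mean_dedx(segment_dedx, keep_fraction: float = 0.6) -> float:
    """Mean of the lowest `keep_fraction` of per-segment dE/dx values."""
    values = np.sort(np.asarray(segment_dedx, dtype=float))
    n_keep = max(1, int(len(values) * keep_fraction))
    return float(values[:n_keep].mean())

# A minimum-ionizing baseline with a few large stochastic losses (invented):
rng = np.random.default_rng(1)
dedx = 0.2 + rng.exponential(0.02, size=20)   # [GeV/m], illustrative
dedx[[3, 11]] += 5.0                          # two large stochastic losses
print(f"plain mean:     {dedx.mean():.2f} GeV/m")          # pulled up by the tail
print(f"truncated mean: {truncated_mean_dedx(dedx):.2f} GeV/m")  # near baseline
```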
The charge expectation is provided by discrete photon lookup tables, described in the next section. A truncated $dE/dx$ value is calculated and the muon energy is obtained via a non-linear mapping. This method has been optimized for energies above 1 TeV [50].

4.2 Photon lookup tables

My work on event reconstruction has been primarily related to the generation of expected charge for hypothesis events. Frequently, this requires pre-generated template events stored in lookup tables. These photon lookup tables were previously produced using Photonics software [51], which draws from parameterizations of GEANT3 simulation [52]. The detected charge expectation is furnished by accessing these tables, which describe events at different depths and zenith directions in the ice. These tables may be used for both simulation and reconstruction.

Given that the two most basic event topologies in the IceCube detector are the approximately point-like bursts of light called cascades and the long tracks generated by muons, photon lookup tables must be created for each of these event types. Cascade-like events are represented by an electromagnetic cascade. Muon segments, i.e., finite segments of a minimum ionizing muon, can be combined to represent a muon track. Muons may also be represented as ‘infinite’ tracks, that is, passing through the detector. Infinite track tables are unique in that they are generated using different software (a different photon propagator) and have unique properties, as mentioned in section 4.1.2 and detailed in [49].

4.2.1 Generation of tables

The goal in table generation is to obtain the most probable representation of the event at a unit energy, 1 GeV, so that it may be scaled up to represent the broad energy range accessible by IceCube. 50 million photons are drawn from parameterizations of GEANT3 photon production in electromagnetic showers and finite muon tracks to achieve an average statistical representation. Additional weighting is applied to convert from photon to photo-electron expectation, as well as downscaling to convert from 1 TeV to 1 GeV. These expectations are stored in 4-dimensional binned objects – 3 spatial coordinates plus time, with the origin at the vertex of the particle [51].

As described in section 3.4, the glacial detector medium is modelled as a series of discrete 10 m layers with unique scattering and absorption quantities. This means that the charge expectation for an event will depend on the specific local layer(s) of ice, making the depth and zenith angle of the particle crucial in the hypothesis. To describe the full detector, the 4D table objects are generated over ±800 m in 10 m intervals in depth and 0° to 180° in 10° intervals in zenith angle in IceCube coordinates, for a total of 1539 sources. Initially, these discretely binned tables were the final product, and they are still used for some reconstructions (e.g., section 4.1.3). However, there are several advantages to processing the raw tables with spline fits, described in the following section.

4.2.2 Splining of tables

Discrete tables can be problematic due to numerical artifacts, which can only be combated by binning more finely, resulting in ballooning of the table size to the terabyte-scale. To reduce the size of the lookup tables, the raw 4D tables may be fit with few-order basis splines, and the resulting splines may be ‘stacked’ into a 6D object and pseudo-interpolated [53].
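The idea of replacing binned amplitudes with a compact spline can be illustrated on a 1D toy slice. The real fits are multi-dimensional and penalized, and use IceCube's dedicated spline software; the sketch below uses scipy and an invented amplitude profile purely for illustration.

```python
# Sketch: fitting a smooth basis-spline to one 1D slice of a binned photon
# table, illustrating the table -> spline step described above.  The
# amplitude profile is invented; real fits are multi-dimensional.
import numpy as np
from scipy.interpolate import splrep, splev

r = np.linspace(1.0, 200.0, 60)                 # distance from source [m]
amplitude = 1e-2 * np.exp(-r / 30.0) / r**2     # toy detected-charge profile

# Fit in log-amplitude, where the profile varies slowly and smoothly.
tck = splrep(r, np.log(amplitude), s=1e-3)      # smoothing-spline coefficients
reconstructed = np.exp(splev(r, tck))

max_rel_err = np.max(np.abs(reconstructed / amplitude - 1.0))
print(f"max relative error of spline fit: {max_rel_err:.2e}")
```

The compression argument is the same in higher dimensions: a modest set of spline coefficients stands in for a finely binned table, trading storage for fit-quality concerns near sharp features.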
Each full set of tables undergoes this process twice – once with timing information (4D → 6D) and once with the time dimension integrated out for amplitude-only tables (3D → 5D).

4.2.3 Improvements to the method

In generating enhanced lookup tables with an eye toward low energy events, the following steps were taken in updating the photon tables for both cascades and finite muon tracks.

4.2.3.1 Geometry

Previously existing lookup tables used a spherical coordinate system for both cascades and finite muon segments [51]. Spherical coordinates are a clear match to the intrinsic geometry of a cascade event; however, the intrinsic geometry of a muon track is much better suited to a cylindrical coordinate system. This modification was implemented for the finite muon segment tables.

4.2.3.2 Source

Even finely binned photon tables have the inherent disadvantage of being drawn from outdated GEANT3 parameterizations. Previous tables were generated by drawing a fixed number of photons from the parameterizations to obtain an averaged distribution. The applied parameterization represented a 1 TeV event, which was then downscaled to a unit event of 1 GeV. The updated tables instead draw from direct propagation of particles with GEANT4 and of photons with the OpenCL-based propagation software used for IceCube simulation [54]. For the direct photon propagation, 1 GeV particles were directly generated with 300 events, after an optimization study of the number of events required to produce stable expectations at the spline level (testing from 100 to 900 events per table). This produces, on average, 73 million photons per table for cascades and 77 million photons per table for muons. No appreciable fluctuations in the spline fits were observed in the regions where the photon flux is reliably above the detection threshold of the PMTs.

As before, the full set of tables includes particles originating at depths from -800 m to +800 m in 10 m intervals and at zenith angles from 0° to 180° in 10° intervals for each depth, using the IceCube coordinate system, and at the center of the detector in the x-y plane. Each of these 1539 source table files, containing the propagated photons from 300 events, required between 1 and 3 hours of computational time. Each also underwent the splining process described above, for a total of 3078 spline files. The end products are a total charge amplitude spline lookup table and a photon probability amplitude (time inclusive) spline.

Figure 4.1 A comparison of a photonics-based (dashed lines) to a photon-propagator-based (solid lines) table describing a horizontal cascade event in the middle of the detector (depth of 10 m, zenith angle of 90°). The local coordinate system is in spherical coordinates. The probability of detection amplitudes have been averaged over the azimuthal coordinate. The photonics table shows a perfectly sharp Cherenkov peak from a perfect point emission, whereas the smearing of the photon-propagated Cherenkov peak is due to the propagation of the electron from the full GEANT4 propagation. The difference in amplitude scales, in particular at small radii, is a known discrepancy between these parameterizations and GEANT for low energies.

An advantage of moving to direct photon propagation is the more physically accurate results. For example, simulating events at 1 GeV rather than downscaling 1 TeV events provides a more faithful representation of the lowest energy events at this scale.
Figure 4.1 provides another example: it addresses the smearing of the Cherenkov peak in low-energy events due to the propagation of the electron from its point of generation. These features are particularly important for low energy events, which have a higher probability of being affected due to the smaller string and DOM spacing.

In the absence of an established procedure to verify that the spline fits are a good description of the raw photon tables, plots of discrete table and spline fit amplitudes were generated in 2D slices through the 4D objects and scanned by eye (see Figure 4.2 for an example). This was performed for a representative sample of depths and zenith angles – specifically, depths of 0 m, ±360 m, and ±500 m and zenith angles of 10°, 90°, and 170° – addressing the outer edges of the detector, the center (with the dust layer), the top portion with higher scattering, and the deep, clearest ice. This also addresses up-going, down-going and horizontal event directions.

Figure 4.2 Slice of a 4D muon lookup table generated with direct photon propagation (slice at ρ = 1.05 m, θ = 2.50°, t = 1750.16 ns; log amplitude in PE per metre of track against parallel distance in metres). The table describes the photon detection probability for a 1 GeV muon in a local cylindrical geometry, where the origin is at the muon’s start point and the z-axis (‘parallel distance’) is aligned with its direction of propagation. ‘Perpendicular distance’ describes the r-axis. The discrete bins of the photon table are marked by the points, and the 4D spline is marked by the line. The 3D plot on the left contains the 2D slice on the right. A series of analogous 2D plots were generated for the verification of the tables, as described in the main text.

4.2.4 Limitations of existing tables

Generating, splining and verifying a set of tables is an intense effort. The generation alone requires 1 - 3 hours per table. Splining the tables requires a much more variable run time, and the amount of memory required can be quite large. Some of the source tables, ∼10%, require repeated attempts for the spline coefficient minimization to complete successfully. A fraction of the source tables, ∼5%, require extremely high amounts of memory to produce the spline fit, such that only a portion of the available large-scale computational clusters contain nodes with sufficient memory.

In addition to the computational power and time needed to generate the splines, optimal placement of the knots and choice of spline settings are a challenge. The shape of the amplitudes includes sharp peaks near the start of the event, but slow, gradual change throughout most of the space. The splines must be able to adjust rapidly to describe the sharp peaks, but this is a disadvantage farther away from the event vertex, where low bin content or even empty bins can cause the splines to experience large fluctuations. There is also the consideration of ‘ringing’, which can be reduced by regularization, but that makes fitting peaks more difficult [53]. Placing more knots in the regions of rapid change, and fewer farther away, helps to address the above but still requires optimization of these settings. The other consideration is that the spline fit parameters must be uniform across the detector, while the ice is not.
The 4D splines are combined into one object via a stacking process, which requires the full set of tables across depths and zenith angles to have identical knot settings [53]. The bottom portion of the detector, where the ice is clearest with the least scattering and absorption, has sharper peaks that are more challenging to describe. The requirement that all depths and angles be treated identically therefore complicates the process, as the optimal settings would otherwise vary with these properties.

Relatedly, the verification process is both time-consuming and imperfect. A full set of spline tables contains 3078 files, each with O(100) 2D slices of the 3D or 4D object, where the exact number of slices depends on the specific binning. Checking the complete contents of the final 5D and 6D spline tables by eye is not feasible. In principle, this would be improved by an automated check, such as a chi-squared comparison between the discrete tables and the spline fits. However, the importance of the goodness of fit depends on location, e.g., outside the detector is less critical than the instrumented volume. How to construct such an automated verification remains unclear. Specifically, even for a reduced set of depths, some source tables will describe regions outside of the detector, leading to empty bins which tend to cause the spline fits to return unphysically large or small values. Restricting the splines' ability to change gradients quickly would prevent these fluctuations, but would also prevent the spline from fitting the sharp peak of the distribution at the source vertex. Obtaining a good fit to these unused regions of the detector requires not only time to deduce an effective configuration of knots in the region, but also increased memory to store and use the additional spline coefficients, reducing the effectiveness of the photon lookup tables as a reconstruction tool. Thus, although it is the most straightforward approach, an algorithm checking the goodness of fit everywhere, e.g., calculating the chi-squared value at all points of the spline, is superfluous: certain regions may be a poor fit without compromising the effectiveness of the tables.

4.2.5 Future considerations
There are several challenges for the future use of photon lookup tables. As previously mentioned, photon lookup tables are specific to the ice model used in their generation. This means that the generation, splining and verification process must be repeated for every update of the ice model, representing a significant recurring investment. Another concern is that the modelling of the glacial ice continues to evolve in complexity and sophistication, as discussed in section 3.4. These additional layers of detail would, at best, require additional dimensions for the tables, dramatically increasing their size and memory usage to the point of becoming unusable. Further, some models break the assumptions and approximations involved, such that it may not be possible to parameterize the information into a table at all. Yet another factor is the incorporation of diverse types of optical sensors in future detector upgrades. Both the different geometries of the sensors and the combination of sensor types introduce significant complications into the modelling. A different approach is needed to incorporate these elements; one has been implemented to enhance the outcomes of the studies of this work, as described in the next chapter.
CHAPTER 5
PRECISION EVENT RECONSTRUCTION USING DIRECTLY SIMULATED HYPOTHESES - DIRECTRECO

To make use of the best modelling of the glacial detector medium, DirectReco replaces photon lookup tables with OpenCL-based photon propagation software. The resulting charge expectations are input to the existing reconstruction framework to leverage its flexibility and structure. To reconstruct the through-going muon events of interest to the neutrino-nucleon cross-section measurement, a unique hypothesis tailored to the physical properties of this energy range was implemented and tested.

5.1 Concept
A primary deliverable of the presented analysis is a novel reconstruction method called DirectReco. The principle is to maintain the extant reconstruction framework, described in section 4.1.1, and introduce a direct replacement for the photon tables that instead uses photon propagation and the full description of an ice model to generate the charge expectation for a given hypothesis. This retains all the flexibility of hypothesis construction, choice of reconstruction elements, and code development of the gulliver [47] and millipede [37] projects, while adding increased accuracy in ice modelling and the ability to modify that model without any lead time. The OpenCL-based photon propagation software utilized in DirectReco is also used to generate IceCube simulation. Direct simulation of the hypothesis event addresses the predicament of parameterizing ice models of increasing complexity into lookup tables that would require prohibitive amounts of computational time and memory both to generate and to use in event reconstruction.

Two underlying considerations in formulating event hypotheses are 1) the accuracy of the approximation of the physics event, and 2) the ability to define the lowest dimensional space that efficiently describes the essential elements to be fitted. Direct photon simulation removes the need for accuracy-limiting approximations to be built into the template events stored in lookup tables. Cascades are produced at the prescribed energy, rather than scaling a template. Muons are simulated as a function of energy rather than approximated as a series of 'stacked' track segments or as an 'infinite' track passing through the entire detector. Additional options, including more particle types, simulating muons as a function of energy rather than length, and simulating at hypothesis energy, provide space for creativity in both construction and parameterization of hypotheses. Through the improvements of event hypotheses and direct minimization of energy parameters, DirectReco provides the opportunity to improve energy resolution, its most promising aspect. The biggest drawback of DirectReco is the time to generate each hypothesis, as photon propagation is the bottleneck. It is therefore necessary to consider carefully the number of events to be reconstructed and the number of parameters to minimize.

The opportunity to select the reconstruction model not only simplifies using updated ice models and matching the model used in reconstruction to that used to generate simulation, it also makes possible advanced studies of a wider variety of systematic uncertainties. By efficiently reconstructing the same simulation with the same reconstruction but applying different ice models, the effect of changing ice models may be studied in great detail.
5.2 Hypothesis generation
Event hypotheses are constructed in the millipede software module, with the number and placement of particles in the detector, and the tuneable parameters defining them, detailed there. For each minimizer step, the hypothesis, parameters and photon binning are passed to the photon propagator module. The propagator creates a geometry of potential photon detection locations and compiles it into an OpenCL kernel. It then creates a number of identical hypothesis particles (for further detail on this oversampling parameter, see section 5.3) and divides them into smaller pieces to pass to the graphics processing unit (GPU). The GPU returns the propagated photons for each portion of the hypotheses. Each photon has its hit probability weight added to the appropriate bin in each receiver. The weight is a product of: the photon's survival probability; the photo-electron acceptance of the PMT after passing through the optical module glass, optical gel and PMT photocathode, as a function of wavelength; the photon angular acceptance, i.e., the relative collection efficiency of the optical module as a function of photon impact angle; and a normalization by the number of hypotheses propagated, the oversampling. This binned charge prediction is then passed back to the reconstruction module for calculation of the log likelihood compared to the data.

Figure 5.1 Sample optimization for a low-energy (33.6 GeV) muon event. The stability of the event hypothesis increases with increasing oversampling; the median calculation time per likelihood evaluation is indicated along the top axis. Stability is measured by calculating the likelihood of an event hypothesis 500 times for each value of oversampling and comparing the width of the distribution of likelihood values.

5.3 Oversampling and optimization
A unique element of DirectReco is the oversampling parameter. To obtain the best, most-likely representation of an event, the average charge expectation is desired. Each hypothesis is simulated repeatedly to obtain a reliable representation of the event. This idea is also realized in the production of photon tables (see sections 4.2.1 and 4.2.3.2); here, however, it becomes a tuneable parameter. In simulating more events there is a trade-off: the more photons that are propagated, the more accurate the predicted charge, but the longer each hypothesis takes to generate. Different values of oversampling are appropriate for different particle energies; the approach of propagating O(10 000 000) photons to obtain a very stable expectation holds, but since particle energy correlates with the number of photons, different energies require different oversampling factors to reach sufficient statistics (see a low-energy event example in Figure 5.1).

The stability of the calculated likelihood value must also be considered. If a particular point in the likelihood space is tested multiple times, the stochastic nature of the simulation means that the charge expectation will vary from the previous representation. The change in the expectation may then cause the likelihood value to shift, creating a fluctuation in the likelihood space. An unstable likelihood space creates challenges for the minimizer, compromising the accuracy of the result and/or the minimizer's ability to return a result. The stability of the space is therefore a primary challenge and is regulated by the oversampling.
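The per-photon weighting described in section 5.2 amounts to a product of acceptance factors normalized by the oversampling. A minimal sketch follows, in which all acceptance inputs are hypothetical stand-ins for the propagator's internal tables:

    def add_photon(charge_bins, bin_index, survival_prob,
                   wavelength_acceptance, angular_acceptance, oversampling):
        """Add one propagated photon's hit-probability weight to its receiver bin."""
        weight = (survival_prob           # from the propagation itself
                  * wavelength_acceptance  # PMT + glass + gel, vs wavelength
                  * angular_acceptance     # collection efficiency vs impact angle
                  / oversampling)          # normalize by number of hypothesis copies
        charge_bins[bin_index] += weight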
The optimization is thus a balance between high photon statistical accuracy of the event hypotheses and the time to calculate each hypothesis. Note that other factors may also affect the choice of oversampling, such as the event type and the location in the detector.

5.4 Likelihood
To account for the limited statistics of the predicted charge, a modified Poisson likelihood [55] is used in place of the standard Poisson used in default millipede [37]. For DirectReco, this simplifies to the bin-wise calculation

$$-\ln \mathcal{L} = n_s \, s \, \ln\!\left(\frac{s}{\mu}\right) + n_d \, d \, \ln\!\left(\frac{d}{\mu}\right), \qquad (5.1)$$

where

$$\mu = \frac{n_s s + n_d d}{n_s + n_d}. \qquad (5.2)$$

Here $n_d$ is the number of data trials (1 for reconstruction), $d$ is the observed charge, $n_s$ is the number of simulation trials (the oversampling), and $s$ is the expected charge. Including the simulation uncertainty smooths the likelihood space and broadens the minima, facilitating optimization. It is important conceptually to note that while there is no uncertainty associated with the charge expectation from photon tables, they are also not absolutely correct; it is unclear how one would assign an uncertainty to the table expectations. However, they have the advantage of being generated with high statistics, and therefore high accuracy, since they are generated once and used repeatedly. From the perspective of minimization, this specific point is irrelevant, though, as the static nature of the photon tables eliminates the difficulty of oversampling.
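A bin-wise implementation of equations 5.1 and 5.2 is straightforward; the sketch below uses scipy's xlogy so that empty bins contribute zero, and is an illustration rather than the millipede code itself.

    import numpy as np
    from scipy.special import xlogy

    def neg_log_likelihood(d, s, n_d=1.0, n_s=1.0):
        """Bin-wise modified Poisson -lnL of equations 5.1 and 5.2.

        d: observed charge per bin; s: expected charge per bin;
        n_d: number of data trials (1 for reconstruction);
        n_s: number of simulation trials (the oversampling).
        """
        d = np.asarray(d, dtype=float)
        s = np.asarray(s, dtype=float)
        mu = (n_s * s + n_d * d) / (n_s + n_d)       # equation 5.2
        mu_safe = np.where(mu > 0, mu, 1.0)          # guard bins empty in both
        # xlogy(x, y) = x * log(y), with xlogy(0, 0) = 0 for empty bins
        terms = n_s * xlogy(s, s / mu_safe) + n_d * xlogy(d, d / mu_safe)
        return np.sum(terms)                         # equation 5.1, summed over bins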
5.5 Application to intermediate energy through-going muons - DiscoFit
While DirectReco is a general reconstruction tool, it may be tuned to a specific class of events for a dedicated analysis. For the analysis described in this thesis, the energy range of interest is intermediate to those typically of interest in IceCube, with analyses tending to focus on the GeV scale or on tens of TeV and higher.

5.5.1 Reconstructing through-going muons
The events of interest to the neutrino-nucleon cross-section measurement are neutrino-induced muons that are produced outside the detector volume and may stop inside or pass all the way through IceCube, commonly referred to as through-going muons. Through-going muon energy is a challenging event quantity to reconstruct. Other classes of events include 'starting' or 'fully contained' events, where the neutrino interaction occurs inside the IceCube volume and, for the latter, the daughter particles also stop inside the detector; thus most or all of the energy is deposited where it may be detected. In contrast, the estimated energy of a through-going muon always represents a lower limit, since the energy of the initial interaction, and (potentially) of stochastic losses occurring before and often after the muon passes through the instrumented ice, is deposited outside the detector. There are therefore multiple definitions of the 'true' muon energy, even though all are an approximation of the neutrino energy, ultimately the physical quantity of interest. Options for the definition of the 'true' muon energy are: the muon's energy upon entering the detector; its energy at its nearest point to the centre of the detector; or the total amount of energy deposited in the detector. The recorded data correspond to the energy deposited in the detector, which is less correlated with the neutrino energy than the other definitions. The muon's energy at various points in its journey is summarized in Figure 5.2. Another factor to consider for the analysis target events is their energy range, here between 100 GeV and 5 TeV. This is a transition region for muons, with the energy loss mechanism progressing from minimum ionizing to completely dominated by stochastic losses [1]. For further detail, see section 2.5. Knowledge of these physical characteristics needs to be incorporated into the reconstruction for best results.

Figure 5.2 An upward-going neutrino passing through the Earth and interacting in the bedrock below IceCube. The muon daughter passes through the detector and exits, continuing its trajectory. The particle's defined 'true' energy depends on the point of its journey, with the various definitions (neutrino energy; muon energy at the interaction vertex, at detector entry, at the detector centre; and deposited energy) and their locations labelled.

5.5.2 Event hypothesis and parameterization
The most complex event hypothesis in millipede is that of a high energy muon track, built from a series of muon segments and regularly spaced cascades to represent the muon track and its stochastic losses. In practical use, the track segments may be disregarded as they contribute a relatively small amount of light that is effectively represented by the cascades. For DiscoFit, we use this knowledge to choose a representation of a through-going muon as a series of equally spaced cascades spanning the detector. However, the cascade energies are determined using a unique method, rather than millipede's internal optimization. The event hypothesis has one tuneable energy describing the entire muon, and each cascade is simulated as carrying an equal fraction of that total energy. The muon energy is optimized directly via an external minimizer, specifically the SIMPLEX algorithm [56]. The hypothesis choice reflects the physical nature of the muon energy losses at the target energy, a transition region from minimum ionizing to stochastic-loss dominated, as detailed in section 2.5. Only the energy is estimated; the other parameters are set using the direction reconstructed with the method described in section 4.1.2. The existing energy estimator, the truncated mean of the muon dE/dx, is used as an input seed.
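As a sketch of this parameterization, the single energy parameter can be minimized with SIMPLEX (Nelder-Mead) over log-energy. Here expected_charge and neg_log_likelihood are stand-ins for the oversampled hypothesis propagation and the likelihood of section 5.4, and the cascade count is illustrative rather than the analysis setting.

    import numpy as np
    from scipy.optimize import minimize

    N_CASCADES = 20  # illustrative; spacing and count are set by the detector span

    def fit_muon_energy(observed_charge, expected_charge, neg_log_likelihood,
                        seed_energy):
        def objective(log10_e):
            total_e = 10.0 ** log10_e[0]
            # one tuneable energy, shared equally among the cascades
            cascade_energies = np.full(N_CASCADES, total_e / N_CASCADES)
            return neg_log_likelihood(observed_charge,
                                      expected_charge(cascade_energies))

        # SIMPLEX (Nelder-Mead) over the single energy parameter, seeded with
        # the truncated-mean estimate as described above.
        result = minimize(objective, x0=[np.log10(seed_energy)],
                          method="Nelder-Mead")
        return 10.0 ** result.x[0]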
5.5.3 Performance and comparison
Next, the performance of DiscoFit is determined and compared to the reconstruction previously used for these events. A common set of events was processed, in this case a subset of final level Monte Carlo events using the selection described in section 6.2. The previous energy estimator computed the truncated mean of the muon dE/dx, a method tailored to TeV-scale and higher energy muons that are dominated by stochastic losses [50]. The energy resolutions are calculated as follows and are plotted in Figure 5.3. Following the method outlined for determining the energy resolution of a proxy observable in [37], 1D slices in true energy are taken from the 2D distribution of the proxy observable (reconstructed energy) and true energy. Each resulting 1D histogram is fitted with a Gaussian distribution to find the resolution via its sigma parameter, i.e., the width containing 68% of the distribution. The muon energy at its closest point to the centre of the detector is taken as the true energy.

Figure 5.3 Energy resolution comparison of the existing reconstruction [50] and DirectReco for a subset of final level Monte Carlo events. True energy here is the muon energy at its closest point to the centre of the detector. DirectReco shows a significant improvement in resolution up to approximately 10 TeV, the target energy region for this analysis.

There is some subtlety in the choice of true energy, and the centre energy is selected in an attempt to show the fairest comparison between the methods. DiscoFit reconstructs the deposited energy and will show the best resolution in that comparison, whereas the existing method is tuned to the energy at the particle's closest approach to the centre of the detector, similar to the muon's energy at detector entry. In the regime where energy deposition is dominated by stochastic losses, the truncated mean method should provide a better estimate of the muon interaction energy than a deposited energy estimate, due to the inclusion of information from those stochastic losses. Figure 5.3 shows an improvement in resolution for DiscoFit at the lowest energies up to a few TeV, where the event hypothesis is a good physical description. Above these energies, DirectReco's performance starts to degrade and the optimization of the other method becomes evident.

CHAPTER 6
THE NEUTRINO-NUCLEON CROSS-SECTION ANALYSIS

This chapter presents the analysis of the neutrino-nucleon cross-section at intermediate energies, measured via normalization of the predicted neutrino flux. Two energy windows of interest are defined, 100 - 350 GeV and 350 GeV - 5 TeV, each with its own scaling factor on the Standard Model cross-section. A prior is calculated from accelerator-based cross-section data to constrain the lower energy bin, allowing the fit to focus on the higher energy range where no measurements currently exist.

6.1 Concept
Accelerator measurements of neutrino-nucleon cross-sections use knowledge of the neutrino flux from the beam and the observed detector data to determine the cross-section. The upper limit of such measurements is currently approximately 350 GeV. At higher energies, specifically few-TeV to PeV energies, neutrino-nucleon cross-section measurements use the observation of neutrino absorption in the Earth and a model of the Earth's interior to infer the cross-section. However, this is an energy dependent effect, lessening in magnitude as energy decreases, with the Earth becoming effectively transparent to neutrinos around 1 TeV (see Figure 2.7 and section 2.4.3). This leaves an unmeasured region, or gap, in neutrino-nucleon cross-section measurements from O(100 GeV) to O(TeV).

IceCube measures a very high statistics sample of atmospheric neutrino events in the intermediate energy region between ∼100 GeV and 10 TeV. Here, the linear relationship between the observed event rate and the cross-section is utilized to make a measurement relative to the baseline model. This method relies on the assumption of given models of the atmospheric neutrino flux, resulting in a model-dependent measurement. The baseline cross-section model is the Standard Model theory for the energy range of interest, the Cooper-Sarkar-Mertsch-Sarkar model [3]. The CSMS model is shown in Figure 2.6, with errors, over the full energy region for which it is calculated. These errors range from ∼6% to ∼4% for neutrinos and from ∼21% to ∼10% for antineutrinos over 100 GeV to 5 TeV.

6.2 Event selection
After a decade of data-taking and analysis development, IceCube has constructed a number of event selections tailored to different event types and physics topics.
For this cross-section analysis, a high statistics sample of well-understood muons derived from ν_μ CC interactions with minimal background is required. Such an event sample was developed for the purposes of studying the diffuse flux of astrophysical neutrinos [18, 21], consisting of through-going muon tracks originating outside of the detector. Selecting events originating from the bedrock region (zenith angle > 85°), thereby utilizing the Earth as a shield, together with a boosted decision tree to remove atmospheric muon backgrounds, achieves 99.7% neutrino purity. Note that, since the hadronic interaction occurs outside of the detector and is not part of the data or the reconstruction, it is not considered in the systematics. This analysis uses data collected from May 2010 to December 2018. The diffuse astrophysical analysis also included data from May 2009 to May 2010, during which only 59 of the 86 strings were installed; this period must be treated separately from the other years due to differences in the calibrations [16, 21]. Given that the IC59 year has a limited statistical contribution, it is excluded from this analysis for efficiency.

6.2.1 Reconstruction
The existing event sample utilized an energy reconstruction optimized for the high-energy end of the spectrum (described in section 4.1.3), the focus of the astrophysical neutrino flux analysis. For the purposes of this GeV - TeV focused study, a new energy reconstruction tailored to the low-energy part of the event sample is applied: DiscoFit, described in section 5.5, with the resolutions of the two methods compared in section 5.5.3. Due to the physics of the event hypothesis, the performance of DiscoFit starts to degrade above a few TeV, where the existing method begins to improve due to the dominance of stochastic losses. The reconstruction of the zenith angle, discussed in section 4.1.2, is unchanged.

To obtain accurate and efficient reconstruction of a large number of simulated events with a wide range of neutrino energies, further optimization is pursued. The goal is two-fold: to keep the per-event reconstruction time to a minimum, and to reconstruct as many E_ν ≤ 5 TeV signal events as possible, skipping well-reconstructed higher energy background events to conserve processing time.

Figure 6.1 Optimization of reprocessing events using baseline simulation. The goal is to reconstruct most of the signal events while allowing most of the background events to retain the existing reconstruction. Deposited charge ('qtot') is considered as an energy proxy to distinguish which events to reprocess. The lowest tested value of 100 PE is chosen. Simulation is weighted with the Gaisser-Hillas H3a cosmic ray model [57] and the SIBYLL2.3c hadronic interaction model [58]. (a) Potential cut values on deposited charge ('qtot cut') against the fraction of signal events reprocessed with the updated reconstruction (black) and the fraction of background events retaining the previously existing reconstruction (blue). (b) Neutrino energy and deposited charge with low energy signal and high energy background events, as labelled. The potential cut values on deposited charge (also shown in 6.1a) are marked in purple.
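The scan shown in Figure 6.1a amounts to computing, for each candidate cut, the weighted fraction of signal below the cut and of background above it. A minimal sketch, assuming per-event arrays drawn from the baseline simulation:

    import numpy as np

    def scan_qtot_cuts(qtot, e_nu, weights, cuts=(100, 200, 400, 600, 800, 1000)):
        """For each qtot cut, report the signal fraction reprocessed with
        DiscoFit (qtot <= cut) and the background fraction keeping its
        existing reconstruction (qtot > cut)."""
        signal = e_nu <= 5e3  # GeV; signal is E_nu <= 5 TeV
        for cut in cuts:
            frac_sig = np.sum(weights[signal & (qtot <= cut)]) / np.sum(weights[signal])
            frac_bkg = np.sum(weights[~signal & (qtot > cut)]) / np.sum(weights[~signal])
            print(f"cut {cut:>5} PE: reprocess {frac_sig:.1%} of signal, "
                  f"skip {frac_bkg:.1%} of background")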
To limit the reconstruction time without loss of resolution for the target events, or introducing bias against higher charge events, an upper limit of 42 minimizer steps is set, based on a performance study of ∼147 000 total events, ∼130 000 of them with E_ν ≤ 5 TeV. To restrict the number of slow, high-energy events processed with DiscoFit, deposited charge is used as an energy proxy to determine whether an event will be reprocessed. An optimization study is conducted with the goal of reprocessing at least 90% of signal neutrinos in the energy range of interest (≤ 5 TeV), detailed in Figure 6.1. The selected charge value is 100 PE, the lowest tested, with which 91% of signal neutrinos are reconstructed with DiscoFit and 45% of background neutrinos (> 5 TeV) retain their existing reconstructed energy. The next tested value (200 PE) shows a significant decrease in 'skipped' background events. A plot demonstrating the 100 PE proxy is shown in Figure 6.2.

Figure 6.2 Schematic demonstrating the DiscoFit reprocessing criteria. Events with ≤ 100 PE in deposited charge are reconstructed with DiscoFit, and those > 100 PE retain their 'truncated energy' reconstruction. Simulation is weighted with the Gaisser-Hillas H3a cosmic ray model [57] and the SIBYLL2.3c hadronic interaction model [58].

6.2.2 Binning of observables
The original binning of the event sample specifically considered the high-energy nature of the events of interest, with 50 logarithmically spaced bins from 10^2 to 10^7 GeV in reconstructed muon energy, and 33 bins equally spaced from 85° to 180° in cosine of the reconstructed zenith. To ensure efficient use of the sample in the analysis presented here, this binning in energy and zenith is retained, but a third dimension in deposited charge is added with 2 bins: 0 to 10^2 PE and 10^2 to 10^6 PE. The purpose of the additional dimension is to keep events processed with different reconstruction methods separate, rather than treating them as though they are the same, which would result in a discontinuity in energy as well as lost information. An example of the expectation histograms is shown in Figure 6.3 for simulation with parameters at baseline settings. The binning is noted in Table 6.1.

Table 6.1 Binning of the relevant analysis observables.
Variable | Number of bins | Range | Spacing
Reconstructed muon energy | 50 | 10^2 - 10^7 GeV | Equal in log10 space
Reconstructed zenith angle | 33 | 85° - 180° | Equal in cos(zenith) space
Deposited charge | 2 | 0 - 10^6 PE | Split at 100 PE

Figure 6.3 Expectation histograms for baseline settings of all parameters. (a) 2D expectation histogram for the low detected charge bin (< 100 PE). (b) 2D expectation histogram for the high detected charge bin (> 100 PE).

6.3 Analysis methods
A forward-folding fit, where data and MC simulation are processed identically and then compared in the reconstructed space, is performed on the histogram of observables. Re-weighting is applied to events as a function of true energy and zenith angle. The binned event counts are compared with a Poisson likelihood, with Gaussian penalties applied for several of the parameters in the fit. The fit parameters and priors are described in the following sections and summarized in Table 6.2 (end of chapter).
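The structure of this fit can be sketched as follows; expectation(params) is a hypothetical stand-in for the full re-weighting and binning of the simulation, the parameter names are illustrative, and the prior values shown are those of Table 6.2.

    import numpy as np

    def objective(params, data_hist, expectation, priors):
        """Forward-folding -lnL: binned Poisson term plus Gaussian penalties."""
        mu = expectation(params)                       # binned MC expectation
        mu = np.clip(mu, 1e-12, None)                  # guard empty bins
        neg_llh = np.sum(mu - data_hist * np.log(mu))  # Poisson -lnL up to a constant
        for name, (center, width) in priors.items():   # Gaussian penalties
            neg_llh += 0.5 * ((params[name] - center) / width) ** 2
        return neg_llh

    # e.g. priors = {"conv_norm": (1.0, 0.25), "xsec_100_350": (1.0, 0.05),
    #                "barr_h": (0.0, 0.15), ...}  (illustrative names)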
6.4 Sources and Modelling
The Monte Carlo expectation is comprised of the sum of multiple sources of events: the conventional and prompt atmospheric neutrino fluxes from cosmic ray interactions, and an isotropic flux of astrophysical neutrinos. An example of expected rates for the cumulative and the individual sources is shown in Figure 6.4.

Figure 6.4 Expected bin contents for 1D representations of the observables histogram (summed across the other two dimensions – deposited charge and energy or zenith). The parameters are at baseline settings with the exception of the prompt normalization (1 instead of 0) and the astrophysical flux parameters (normalization: 1 instead of 1.5, spectral index: 2 instead of 2.3). The difference in the magnitude of contribution and the shape can be clearly seen. The double peak in the energy distribution is due to the two different reconstruction methods, where DiscoFit creates a second, shifted peak as it prefers reconstructing to lower energies. (a) 1D plot of expected bin contents for cos(reco zenith). (b) 1D plot of expected bin contents for reconstructed energy.

In the astrophysical diffuse analysis, a fourth source of mis-reconstructed atmospheric muons is also included. This is not applied here, as it constitutes approximately 0.13% of the entire sample [16] and its inclusion adds significant complexity through the applied template. While this background contributed a significant change for the diffuse astrophysical flux analysis, this is not the case for the cross-section parameters. The template makes a small contribution at low energies, similar to the astrophysical flux contribution at those energies, whereas the conventional flux dominates that portion of the spectrum and our signal events. A large change in rate would therefore be required to observe a significant change in this study. A bias test confirms that this is a small effect: a likelihood fit with the pre-existing muon source template included in the expectation, but with its normalization fixed to 0 in the fit, approximates the effect of a muon background component being present in the data but not accounted for in the analysis. This results in a bias of <1%, negligible compared to the predicted ∼10% uncertainty on the measurement.

Atmospheric neutrino fluxes are modelled using the MCEq software package [59], with the Gaisser-Hillas (H4a) cosmic ray model [57] and the SIBYLL2.3c hadronic interaction model [58] selected as the baseline. Atmospheric flux modelling uncertainties are considered via nuisance parameters and marginalized over for the extracted cross-section.

6.4.1 Atmospheric Modelling Parameters
An important element of the analysis is the treatment of the atmospheric neutrino flux that dominates the energy range. Two parameters govern the uncertainty from the cosmic ray modelling: one modifies the cosmic ray spectral index; the other interpolates between two disparate cosmic ray models, specifically Gaisser-H4a [57] and GST-4gen [60], to represent shape differences.
The definition of the cosmic ray interpolation parameter, λ_CR interp, is shown in equation 6.1, where Φ indicates a neutrino flux (the simulated reweighted flux, the Gaisser-H4a flux, or the GST4 flux) for either the conventional or prompt case, respectively:

$$\Phi^{\text{reweighted to CR-model}}_{\text{conv./prompt}} = \Phi_{\text{conv./pr., H4a}} \times \left(1 - \lambda_{\text{CR interp}}\right) + \Phi_{\text{conv./pr., GST4}} \times \lambda_{\text{CR interp}}. \qquad (6.1)$$

That is, λ_CR interp = 0 indicates a flux exactly like the H4a model, and λ_CR interp = 1 indicates a flux exactly like the GST4 model.
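In code, equation 6.1 is a simple linear interpolation between the two flux predictions:

    def interpolate_flux(phi_h4a, phi_gst4, lambda_cr):
        """Equation 6.1: linear interpolation between the H4a and GST4 fluxes.

        lambda_cr = 0 reproduces H4a exactly; lambda_cr = 1 reproduces GST4.
        Applied separately to the conventional and prompt components.
        """
        return phi_h4a * (1.0 - lambda_cr) + phi_gst4 * lambda_cr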
To describe the uncertainties from the hadronic interaction modelling, four parameters from the Barr parameterization [61] are implemented: H, which modifies pion production, and W, Y and Z, which modify kaon production. The Barr parameters have Gaussian priors with widths determined by the uncertainties derived in that work. Varying all six atmospheric parameters within their priors provides the fit with sufficient flexibility to describe a variety of atmospheric flux models [16].

6.5 Physics Parameters
This analysis measures the neutrino-nucleon cross-section as a multiplicative factor on the Standard Model prediction for these energies, CSMS [3], which is used in the Monte Carlo simulations. Two physics parameters are fitted: one governs neutrinos at 100 - 350 GeV and the other at 350 GeV - 5 TeV. The total CC cross-section is measured under the assumption of identical scaling for the ν_μ and ν̄_μ cross-sections. Only deep inelastic scattering interactions are included in both the theory and the event sample. For further discussion, see section 2.4.1.

Figure 6.5 A study of the effect of varying the Barr nuisance parameters on the ν_μ/ν̄_μ ratio compared to the default baseline value. Each parameter is varied ±1σ, i.e., within the width of its prior. The variation is small compared to the predicted accuracy of the cross-section measurement of ∼10%. (a) The values of the ν_μ/ν̄_μ ratio plotted against the corresponding Barr parameter value. (b) The percent change of the ν_μ/ν̄_μ ratio compared to the baseline, plotted against the corresponding Barr parameter value.

6.5.1 ν_μ and ν̄_μ
The cross-sections for ν_μ and ν̄_μ are quite different in the energy region of this analysis. The event sample contains a combination of the two particle types, since both are produced in the atmosphere but IceCube cannot distinguish between them. As the multiplicative cross-section factors are applied to this sum of ν_μ and ν̄_μ, the ratio ν_μ/ν̄_μ is relevant to the interpretation of the measured parameters. This forward-folding analysis depends on assumed baseline atmospheric models, specifically the Gaisser-H4a cosmic ray model and the SIBYLL2.3c hadronic interaction model, which predict a ν_μ/ν̄_μ ratio. However, the applied Barr nuisance parameters modify the atmospheric spectrum to account for the uncertainty in its modelling. A study is performed to check the magnitude of the variation of the ν_μ/ν̄_μ ratio with the Barr parameters, summarized in Figure 6.5. Each of the 4 Barr parameters is varied within its defined prior, which marks the physical range of the parameter. The resulting ν_μ/ν̄_μ ratio for each bin is calculated and compared to the ν_μ/ν̄_μ ratio at baseline, i.e., with all 4 parameters set to 0. For Barr H, W and Z, the maximum change is ∼1%. For Barr Y, the maximum change is closer to 5%. Given that the predicted uncertainty on the cross-section measurement is ∼10%, this variation was not evaluated further.

6.5.2 Accelerator Prior
The conventional atmospheric neutrino flux is dominant in the energy region of interest, as shown in Figure 6.4. Its modelling includes a normalization parameter expected to be highly correlated with the cross-section parameters (see Figure 6.6). To reduce the correlation, a prior is calculated from existing accelerator-based cross-section measurements [1] for energies of 100 - 350 GeV. An error-weighted mean and standard deviation are calculated from the data, available as σ/E_ν. The prior width is taken as the standard deviation divided by the mean, converting a raw number into a dimensionless factor. This width is calculated separately for the neutrino data (0.04) and the anti-neutrino data (0.05). The slightly more conservative width from the anti-neutrinos is selected for a Gaussian penalty centred on the CSMS value, i.e., 1 ± 0.05. The prior is centred at 1 because the CSMS cross-section value, the world average, and the calculated mean are all within error of each other for both ν and ν̄. This penalty is applied to the lower-energy cross-section parameter, 100 - 350 GeV, a range chosen for the overlap with the accelerator data. Constraining the lower-energy parameter with the prior information provides an improvement on the 350 GeV - 5 TeV scaling factor, which covers previously unexplored space. It is noted that the case of not including this information produces a result that is not directly dependent on external experimental data, rendering it both a meaningful result in its own right and a cross check.
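One plausible reading of this calculation, using inverse-variance weights, is sketched below; the exact weighting convention is not spelled out here, so this is an illustration rather than the analysis code.

    import numpy as np

    def prior_width(x, err):
        """Dimensionless prior width from measured sigma_CC/E_nu values x
        with quoted uncertainties err (100 - 350 GeV accelerator data)."""
        x = np.asarray(x, dtype=float)
        w = 1.0 / np.asarray(err, dtype=float) ** 2            # inverse-variance weights
        mean = np.sum(w * x) / np.sum(w)                       # error-weighted mean
        std = np.sqrt(np.sum(w * (x - mean) ** 2) / np.sum(w)) # weighted std of scatter
        return std / mean  # quoted above as ~0.04 (nu) and ~0.05 (nubar)

    # The more conservative anti-neutrino width gives the Gaussian prior 1 +/- 0.05.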
6.5.3 Conventional atmospheric neutrino flux normalization prior
Another strategy to reduce the correlation of the conventional flux normalization with the physics parameters is the direct inclusion of a prior. A Gaussian penalty is applied to the conventional flux normalization, centred at 1 (no change from the baseline models) with a width of 0.25 [62]. This prior is also included in a previous IceCube cross-section analysis [5, 63], and in the forthcoming update of that analysis [9].

6.6 Systematic Uncertainties
A key systematic uncertainty in IceCube analyses is the modelling of the detector, in particular its glacial Cherenkov medium. The effects of several detector parameters are represented by a parameterization of a series of discrete simulation sets, allowing continuous variation in the likelihood fit.

Figure 6.6 Correlations of the fit parameters in the likelihood, calculated from simulation. Shortened versions of the parameter names are used due to space constraints; the parameter order is identical to that in Table 6.2, reading the y-axes from top to bottom and the x-axes from left to right, and Table 6.2 describes the parameters. (a) The accelerator prior is applied to the σ_CSMS scaling factor (100 - 350 GeV). (b) The accelerator prior is not applied.

Figure 6.7 A summary of the existing cross-section measurements by accelerator experiments [1].

Figure 6.8 Cross-section data from accelerator experiments for energies of 100 - 350 GeV. All data points are used to calculate a prior for the cross-section parameter covering this energy range. Data are drawn from Figure 28 of [23] due to the increased visibility of the relevant points compared to the more comprehensive plot in Figure 6.7 from [1].
Monte Carlo files with varied settings of the relevant detector parameters are generated across a range of possible values. Each of these 'systematics datasets' is processed through the event selection, reconstructed and binned identically to the baseline simulation. This provides an estimate of each parameter's effect on the expected event rate for each modification. To characterize the discrete differences continuously, and to handle robustly the low statistics bins prevalent at high energies, kernel density estimators (KDEs) are generated and provide a per-event estimator [64]. This process, used for several iterations of the astrophysical diffuse flux analysis [18, 21], is slightly modified to account for the 3D histogram of observables used here. Each bin of deposited charge is treated separately: events from the other bin are masked, and a 2D KDE in reconstructed energy and cos(zenith) is generated. For the final product, the 2D KDEs are combined such that events are modified by the appropriate KDE for their charge bin. This method ensures that events are not affected by those processed by a different energy reconstruction method.
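A heavily simplified sketch of this per-charge-bin treatment is given below, with scipy's Gaussian KDE standing in for the estimator of [64]; the actual implementation differs in detail. Each dataset is assumed to be a dict of per-event arrays 'log10_e', 'cosz' and 'qtot', and the systematic-to-baseline density ratio acts as a per-event rate modifier.

    import numpy as np
    from scipy.stats import gaussian_kde

    def build_ratio(baseline, systematic, qtot_cut=100.0):
        """Return a per-event density-ratio function, built separately for the
        two deposited-charge bins so differently reconstructed events never mix."""
        kdes = {}
        for low in (True, False):
            def points(d):
                m = d["qtot"] <= qtot_cut if low else d["qtot"] > qtot_cut
                return np.vstack([d["log10_e"][m], d["cosz"][m]])
            kdes[low] = (gaussian_kde(points(baseline)),
                         gaussian_kde(points(systematic)))

        def ratio(log10_e, cosz, qtot):
            base_kde, sys_kde = kdes[qtot <= qtot_cut]  # KDE of the event's own bin
            pt = np.array([[log10_e], [cosz]])
            return float(sys_kde(pt) / base_kde(pt))
        return ratio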
6.6.1 DOM Efficiency
DOM efficiency is a (misleadingly named) parameter describing a general scaling of the light yield in the detector. It includes other factors affecting the photon detection probability, such as cable shadowing, as well as the photon collection efficiency of the DOM as a whole. The impact of ice modelling is discussed in more detail in section 3.4. DOM efficiency is varied relative to a baseline simulation, defined as an efficiency of 1. The effect of this variation on the histogram bin contents is shown in Figure 6.9, including a change in the shape for energy, where a shifting of reconstructed energies is expected due to modifying the light yield of events. The change in event rate for zenith is a more straightforward increase or decrease, expected as scaling the detection efficiency changes the number of events passing the trigger threshold.

Figure 6.9 The effect of varying the DOM efficiency parameter on the total flux expectation. Remaining parameters are fixed to baseline values. (a) Zenith distribution for the low charge bin. (b) Energy distribution for the low charge bin. (c) Zenith distribution for the high charge bin. (d) Energy distribution for the high charge bin.

6.6.2 Bulk Ice
The importance and challenges of accurately modelling the bulk ice are discussed in section 3.4. For this analysis, two systematic parameters are varied – the scattering and the absorption of the bulk ice. Each represents a global scaling of the ice model parameters. The effects of absorption are shown in Figure 6.10 and of scattering in Figure 6.11. The changes in event rates are understood as modifications of the ice parameters leading to altered rates of detected photons. The zenith distributions demonstrate a small shape difference and a change in rate, while the energy distributions demonstrate the change in energy scale via crossings in the relative rates.

Figure 6.10 The effect of varying the bulk ice absorption parameter on the total flux expectation. Remaining parameters are fixed to baseline values. (a) Zenith distribution for the low charge bin. (b) Energy distribution for the low charge bin. (c) Zenith distribution for the high charge bin. (d) Energy distribution for the high charge bin.

6.6.3 Hole Ice
Modelling of the hole ice, represented as a parameterization of the angular response of the DOMs and discussed in section 3.4, is included as a systematic parameter. Only the p_0 parameter of the hole ice model is fitted. The different systematic values that are simulated, and their corresponding angular acceptance functions, are plotted in Figure 3.6. The effect on the expected bin contents is shown in Figure 6.12. Modification of the hole ice parameter causes shape changes in the energy and zenith distributions, with the largest changes at the edges of the zenith distribution. This is expected given that the photon detection probability is modified in an angle-dependent fashion.
Figure 6.11 The effect of varying the bulk ice scattering parameter on the total flux expectation. Remaining parameters are fixed to baseline values. (a) Zenith distribution for the low charge bin. (b) Energy distribution for the low charge bin. (c) Zenith distribution for the high charge bin. (d) Energy distribution for the high charge bin.

Figure 6.12 The effect of varying the hole ice parameter on the total flux expectation. Remaining parameters are fixed to baseline values. (a) Zenith distribution for the low charge bin. (b) Energy distribution for the low charge bin. (c) Zenith distribution for the high charge bin. (d) Energy distribution for the high charge bin.

Table 6.2 Each of the parameters included in the analysis likelihood fit. The parameters are mostly dimensionless, with the exception of the astrophysical normalization, which has units of 10^-18 GeV^-1 cm^-2 s^-1 sr^-1 as defined in equation 2.5.
Parameter | Prior | Description
100 - 350 GeV cross-section scale | 1 ± 0.05 | scaling
350 - 5000 GeV cross-section scale | – | scaling
conventional flux normalization | 1 ± 0.25 | normalization
Barr H | 0 ± 0.15 | modify pion contributions
Barr W | 0 ± 0.40 | modify kaon contributions
Barr Y | 0 ± 0.30 | modify kaon contributions
Barr Z | 0 ± 0.12 | modify kaon contributions
cosmic ray Δ spectral index | – | modify cosmic ray spectral index
cosmic ray model interpolation | 0 ± 1.0 | parameterize between disparate CR models
prompt flux normalization | – | normalization
astrophysical flux normalization | – | normalization
astrophysical spectral index | – | spectral index
DOM efficiency | – | light-yield scaling
bulk ice absorption | – | modify overall absorption
bulk ice scattering | – | modify overall scattering
hole ice | – | modify angular efficiency of light

CHAPTER 7
RESULTS OF THE CROSS-SECTION ANALYSIS

Prior to determining the results of the analysis, a series of 'blind fit' checks are performed on the data. The cross-section scalings and their errors are then calculated, and the results for the evaluated scenarios, with and without the accelerator prior, are studied.

7.1 Blind fit checks
The IceCube Collaboration follows the scientific practice of developing 'blind' analyses, i.e., masking the detector data, to reduce bias. Once an analysis has been finalized and approved, the analyzer(s) may complete the analysis with the data, i.e., the data is 'unblinded'. As part of the unblinding process for this analysis, the best fit is performed both with and without the accelerator prior included, defined as cases A and B, respectively, and several checks are performed. Specifically, the agreement between the data and simulation and the fitted values of the nuisance parameters are examined.
The cross-section scalings and the highly correlated conventional flux normalization remain unknown at the 'blind fit' stage. Tests involving the nuisance parameters are described in the respective sections for each case. Ultimately, those tests did not trigger additional changes to the analysis.

To study the agreement between the data and simulation, the bin contents of the simulation with best fit parameters are compared to the data. Note that all plots in this section use the best fit parameters from case A; the plots are not appreciably different using the best fit parameters for case B. (Analogous plots for case B are included in the Appendix for reference.) A series of 1D projections for the cosine of the reconstructed zenith and for energy are considered (see Figures 7.1 and 7.3). Next, 2D plots of the bin-wise likelihood are also studied, shown in Figures 7.2a and 7.4. A few (3) bins of the 3300 total are observed to have anomalously high likelihood values, indicating disagreement. The sum of the 3 masked bins makes up approximately 2/3 of the total likelihood for the entire space. These divergent bins all occur in a historically problematic region of reconstructed values – near 10 TeV and/or near the horizon for the low charge bins (< 100 PE) – see Figure 7.2. Deposited charge is used as an energy proxy; it is expected that the high-energy region of the low-charge bin is mostly empty for events at 10 TeV and above. The problematic bins occur at some of the highest energy populated bins, resulting from an apparent limitation in simulation statistics for these bins rather than a physical effect. The selected path forward is to mask out low-charge bins with reconstructed energy > 5.01 TeV for all angles. Note that the mask is applied identically for cases A and B, as the same bins exhibit similar behaviour in each. The best fit values of the nuisance parameters either do not shift, or shift by a maximum of 0.01 for 3 parameters, for cases A and B respectively; this is considered a negligible effect on the fitted parameters. The bin-wise likelihood plots are also shown with this mask applied in Figure 7.4, where 7.4a is comparable to 7.2a. The low-charge, high-energy mask is applied in each of the following results, since it substantially improves the goodness-of-fit with a negligible effect on the fitted parameters. The final data histograms with the updated binning are shown in Figure 7.5.

Figure 7.1 Data-simulation agreement summed over the cos(reco zenith) dimension for the low detected charge bin (< 100 PE). The data are plotted as well as the expectation for both baseline and best fit parameters. Bins with anomalously large values due to limited simulation statistics are highlighted, on their own (7.1a) and after applying the low-charge high-energy mask (7.1b) described in the text. (a) Original binning, contained in Table 6.1. (b) Modified binning where events are masked with deposited charge < 100 PE and reconstructed energy > 5.01 TeV.

Figure 7.2 Low detected charge bin (< 100 PE) with the original binning from Table 6.1. The mask region of low charge, high reconstructed energy is marked with a green box. (a) Bin-wise likelihood plot comparing data and simulation with best fit parameters. The z-axis is limited to better show the structure; without limits, the maximum value is above 675. The 3 anomalously large bins are the yellow bins in the mask region. (b) Data histogram of observables.
7.2 Case A - including the accelerator prior
The cross-section parameters are strongly correlated with each other, and there is notable anti-correlation between the normalization of the conventional atmospheric neutrino flux and the 350 - 5000 GeV cross-section scale (see Figure 7.6). The accelerator prior on the 100 - 350 GeV cross-section scale parameter reduces its correlation with the other parameters. These effects are clearly demonstrated by the contours of the two-dimensional profile likelihood scans, shown in Figure 7.6. Surprisingly, the 100 - 350 GeV scaling pulls strongly against the tight prior centred at 1, fitting to $0.82^{+0.08}_{-0.07}$. The previously unmeasured 350 - 5000 GeV region appears to compensate, fitting to $1.23^{+0.07}_{-0.07}$. The conventional normalization remains well within error of nominal at $0.96^{+0.23}_{-0.18}$. The given errors are the 1σ (68%) confidence intervals, constructed from the 2D profile likelihood scans. To obtain errors free of the correlation between the plotted variables, two-dimensional contours are used. The cross-section scaling parameter errors are obtained from the scan against the conventional normalization, to remove the otherwise unacknowledged effect of this parameter on this flux-dependent measurement. This is particularly important for the final comparison to the unscaled CSMS cross-section model prediction, shown in Figure 7.7. The conventional normalization error is obtained from scanning across the cross-section scaling, where the interval widths are identical for both scaling parameters.

Figure 7.3 Data-simulation comparison distributions. The reconstructed energy or cos(zenith) dimension has been summed over, depending on the plot. The data are plotted as well as the expectation for both baseline and best fit parameters. (a) Cos(zenith) distribution for the low-charge bin. (b) Energy distribution for the low-charge bin. (c) Cos(zenith) distribution for the high-charge bin. (d) Energy distribution for the high-charge bin.

Figure 7.4 Bin-wise likelihood values for the data-to-simulation (with best-fit parameters) comparison after adding the additional criterion that bins with < 100 PE and > 5.01 TeV are masked. (a) Low detected charge bin (< 100 PE). (b) High detected charge bin (> 100 PE).

Figure 7.5 Histogram of observables for data with the defined mask applied to the bins. (a) Low detected charge bin (< 100 PE). (b) High detected charge bin (> 100 PE).
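The interval construction described above can be sketched as follows, using Wilks' theorem thresholds; the grid of profiled -lnL values is assumed as input, and the choice of two degrees of freedom for the 2D scan is an assumption of this sketch.

    import numpy as np
    from scipy.stats import chi2

    def interval_from_2d_scan(scan, param_values, cl=0.68):
        """Extract a confidence interval for one parameter from a 2D profile
        likelihood scan. scan[i, j] is -lnL profiled over all other parameters;
        axis 0 corresponds to param_values."""
        delta = 2.0 * (scan - scan.min())            # -2 Delta lnL over the grid
        threshold = chi2.ppf(cl, df=2)               # Wilks, two fitted parameters
        inside = np.any(delta <= threshold, axis=1)  # rows touching the contour
        return param_values[inside].min(), param_values[inside].max()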
Since a single scaling factor is obtained for both neutrinos and anti-neutrinos from the fit, under the assumption that neutrinos and anti-neutrinos scale identically, the fitted value is used to scale both the neutrino and anti-neutrino cross-sections in the plot, with a vertical connecting line indicating that the scalings cannot vary independently. Best fit values of the parameters and their corresponding errors are summarized in Table 7.1. The nuisance parameters are generally well-behaved, as seen in the one-dimensional profile likelihood scans in Figure 7.8, with best fit values listed in Table 7.2. The one anomaly is Barr 𝑊, which fits > 2𝜎 from its baseline value. Note that this parameter is associated with only a small change in the 𝜈𝜇/𝜈̄𝜇 ratio, as discussed in section 6.5.1 and shown in Figure 6.5. The effect of its fitted value on bin contents, compared to the baseline, is depicted in Figure 7.9. Simulation studies predicted Barr 𝑊 to be the only parameter highly correlated with the 100 - 350 GeV cross-section parameter, but it shows at most a moderate correlation in the data (see Figure 7.10). Given this, no further evaluation of the parameter was conducted in the analysis.

Figure 7.6 The 2D profile likelihood plots of the cross-section parameters and the conventional flux normalization: (a) CSMS scale (350 - 5000 GeV) and (b) conventional flux scale, each versus CSMS scale (100 - 350 GeV); (c) conventional flux scale versus CSMS scale (350 - 5000 GeV). The accelerator prior is applied to the 100 - 350 GeV cross-section parameter. Contours for 1, 2 and 3 𝜎 (68%, 90% and 95% confidence levels) are calculated via Wilks’ theorem [65].

Parameter | Case A | Case B
100 - 350 GeV cross-section scale | $0.82^{+0.08}_{-0.07}$ | $0.27^{+0.07}_{-0.05}$
350 - 5000 GeV cross-section scale | $1.23^{+0.07}_{-0.07}$ | $0.78^{+0.08}_{-0.07}$
conventional atmospheric flux | $0.96^{+0.23}_{-0.18}$ | $1.28^{+0.24}_{-0.21}$

Table 7.1 Best fit values of the extracted muon neutrino-nucleon cross-section parameters and the normalization of the conventional atmospheric neutrino flux. Errors are 68% confidence intervals drawn from 2D profile likelihoods.

Figure 7.7 Final results for the cross-section scaling parameters (black) plotted relative to the unmodified CSMS cross-section for neutrinos and antineutrinos (magenta) [3]. The accelerator prior is applied to the 100 - 350 GeV scaling parameter in the fit. The dotted line represents the fact that one scaling parameter is fitted per energy bin, and the scaling is assumed to be identical for neutrinos and anti-neutrinos; in the plot, the scaling factor is applied to the CSMS cross-sections separately for both neutrinos and anti-neutrinos. Error bars on the result are from 68% contours calculated via Wilks’ theorem, obtained from the 2D profile likelihood scans in Figure 7.6.
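As a concrete illustration of the plotting convention used in Figure 7.7, the sketch below applies one fitted scale per true-energy bin identically to the neutrino and anti-neutrino cross-sections. The tabulated 𝜎/E values here are invented placeholders, not the CSMS tables.

```python
import numpy as np

# Placeholder CSMS sigma_CC/E_nu values in units of 1e-38 cm^2/GeV;
# the numbers are illustrative only, not the tabulated model.
e_nu = np.array([150.0, 500.0, 3000.0])          # GeV
csms_nu = np.array([0.70, 0.66, 0.58])
csms_nubar = np.array([0.35, 0.37, 0.42])

def apply_scaling(scale_low, scale_high, split=350.0):
    """One fitted scale per true-energy bin, applied identically to nu and
    nubar, since the scalings cannot vary independently in the fit."""
    s = np.where(e_nu < split, scale_low, scale_high)
    return s * csms_nu, s * csms_nubar

nu_scaled, nubar_scaled = apply_scaling(0.82, 1.23)   # case A best fit
```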
Figure 7.8 1D profile likelihood scans of the cross-section parameters and the conventional flux normalization (left-hand scale): (a) CSMS scaling, 100 - 350 GeV; (b) CSMS scaling, 350 - 5000 GeV; (c) conventional flux normalization. The accelerator prior is applied to the 100 - 350 GeV cross-section parameter. Nuisance parameter outcomes (Barr H, W, Y, Z; CR model interpolation; CR spectral index shift; prompt and astrophysical normalizations; astrophysical spectral index; DOM efficiency; ice absorption; ice scattering; hole ice p0) are also plotted, with values indicated by the scale on the right-hand side of the plot.

Figure 7.9 The effect of varying the Barr 𝑊 parameter on the total flux expectation, from baseline to the best fit values for case A (accelerator prior, 1.08) and case B (no accelerator prior, 0.62): (a) cos(zenith) and (b) energy distributions for the low-charge bin; (c) cos(zenith) and (d) energy distributions for the high-charge bin. Remaining parameters are fixed to the baseline values.

7.3 Case B - excluding the accelerator prior
Without the accelerator prior applied to the 100 - 350 GeV cross-section parameter, the cross-section scalings are highly correlated, as expected, and display significant anti-correlation with the conventional atmospheric neutrino flux normalization (see the 2D profile likelihood scans in Figure 7.11). The resultant scaling values are notably different from case A, with both cross-section scalings pulled much lower than nominal: $0.27^{+0.07}_{-0.05}$ for 100 - 350 GeV, and $0.78^{+0.08}_{-0.07}$ for 350 - 5000 GeV. This effect is partially compensated by the conventional normalization, which fits above nominal at $1.28^{+0.24}_{-0.21}$. Best fit values of these parameters and their corresponding errors are summarized in Table 7.1. Errors are obtained from the 2D profile likelihood scans of the three parameters, as in case A. The comparison of the cross-sections with the best fit scalings to the unscaled CSMS model prediction is shown in Figure 7.12, where the scaling factors are plotted with the same convention as used in case A. Again, the nuisance parameters are generally well-behaved, as can be seen in the 1D profile likelihood scans shown in Figure 7.13, with best fit values included in Table 7.2. A couple of the nuisance parameters merit further study, specifically the hole ice parameter and the cosmic ray model shape interpolation parameter. Although the hole ice best fit value (-1.18) may look anomalous, it is within the expected range of -2 to +1, per guidance from the IceCube calibration working group for this model.
Its effect on the bin contents is shown in Figure 7.14. The cosmic ray shape interpolation parameter fits ∼1.5 sigma away from the prior centre. However, this value is consistent with the diffuse astrophysical flux analysis [16, 21], although the shift is in the opposite direction. By the definition of this shape parameter (equation 6.1), where 0 is H4a-like and 1 is GST4-like, interpretation outside of [0, 1] is less immediately clear, but such values are neither incorrect nor unphysical. The parameter does have some effect on the normalization, as seen in the bin content plots in Figure 7.15.

Figure 7.10 The 2D profile likelihood plots of the cross-section parameters and the conventional flux normalization against the Barr 𝑊 parameter: (a) versus CSMS scale (100 - 350 GeV); (b) versus CSMS scale (350 - 5000 GeV); (c) versus conventional flux scale. The accelerator prior is applied to the 100 - 350 GeV cross-section parameter. Contours for 1, 2 and 3 𝜎 (68%, 90% and 95% confidence levels) are calculated via Wilks’ theorem [65].

7.4 Discussion
Results of all fitted parameters for cases A and B, and for a few additional tests with fixed parameters, are summarized in Table 7.2. For case A, the fit finds the 100 - 350 GeV cross-section scaling ∼20% lower, and the 350 - 5000 GeV scaling ∼20% higher, than nominal. The data exhibit a strong preference for this configuration of physics parameters, defying the strong prior centred on 1 for the 100 - 350 GeV bin. Unsurprisingly, fixing the conventional normalization to 1 results in a negligible shift in the cross-section scalings. It is noted that the fitted value of the 350 - 5000 GeV scaling is consistent with the published IceCube cross-section result of $1.30^{+0.30}_{-0.26}$ over the energy range 6.3 TeV to 980 PeV [5]. While this is by no means a direct comparison, it is interesting to observe the agreement for a neighbouring energy range despite numerous differences in the analyses. Differences between the analyses include: the measurement method (inference from neutrino-Earth absorption versus flux-dependent direct measurement); simulation sets; event statistics; event sample (though both target through-going muons); energy and angular reconstructions; bin selection of observables; analysis software; and implementation of systematic parameters. Taken as a whole, the results suggest that this fit configuration is physical in origin, implying a higher neutrino-nucleon cross-section at the TeV scale than the CSMS model predicts.

Figure 7.11 The 2D profile likelihood plots of the cross-section parameters and the conventional flux normalization: (a) CSMS scale (350 - 5000 GeV) and (b) conventional flux scale, each versus CSMS scale (100 - 350 GeV); (c) conventional flux scale versus CSMS scale (350 - 5000 GeV). The accelerator prior is not applied. Contours for 1, 2 and 3 𝜎 (68%, 90% and 95% confidence levels) are calculated via Wilks’ theorem [65].
Figure 7.12 Final results for the cross-section scaling parameters (black) plotted relative to the unmodified CSMS cross-section for neutrinos and antineutrinos (magenta) [3]. The accelerator prior is not applied. The dotted line represents the fact that one scaling parameter is fitted per energy bin, and the scaling is assumed to be identical for neutrinos and anti-neutrinos; in the plot, the scaling factor is applied to the CSMS cross-sections separately for both neutrinos and anti-neutrinos. Errors are from 68% contours calculated via Wilks’ theorem [65], obtained from the 2D profile likelihood scans in Figure 7.11.

For case B, both cross-section scalings fit dramatically lower than the nominal value. The lower cross-section scaling values are compensated for by larger values of the conventional and astrophysical flux normalizations, at 1.28 and 3.36 respectively, compared to 0.96 and 2.93 for case A. It remains unclear why the fit prefers this parameter configuration. Fixing the conventional normalization to nominal shifts the parameters slightly towards 1 (see Table 7.2).

While the prompt normalization is not the focus of this study, it is a topic of interest in the broader community. One might naively expect the 350 - 5000 GeV scaling parameter to be correlated with the prompt normalization, as the prompt flux is most visible in a similar energy range (see Figure 6.4). However, the 2D profile likelihood scans (shown in Figure 7.16) display no significant correlation between these variables for either case A or B. The prompt normalization fits to 0 for all cases studied, as shown in Table 7.2. This is consistent with other experimental findings for the prompt flux, including the most recent IceCube diffuse muon neutrino flux analysis [16, 21].

Figure 7.13 The 1D profile likelihood scans of the cross-section parameters and the conventional flux normalization (left-hand scale): (a) CSMS scaling, 100 - 350 GeV; (b) CSMS scaling, 350 - 5000 GeV; (c) conventional flux normalization. The accelerator prior is not applied. Nuisance parameters are also plotted, with values indicated by the scale on the right-hand side of the plot.

Parameter | Prior | Case A | Case A (conv. fixed) | Case B | Case B (conv. fixed) | Test (𝜎 fixed)
100 - 350 GeV 𝜎 scale | 1 ± 0.05 | $0.82^{+0.08}_{-0.07}$ | 0.81 | $0.27^{+0.07}_{-0.05}$ | 0.29 | 1. (fixed)
350 - 5000 GeV 𝜎 scale | – | $1.23^{+0.07}_{-0.07}$ | 1.22 | $0.78^{+0.08}_{-0.07}$ | 0.83 | 1. (fixed)
conventional flux norm. | 1 ± 0.25 | $0.96^{+0.23}_{-0.18}$ | 1. (fixed) | $1.28^{+0.24}_{-0.21}$ | 1. (fixed) | 1.40
Barr 𝐻 | 0 ± 0.15 | -0.17 | -0.22 | -0.03 | 0.16 | -0.08
Barr 𝑊 | 0 ± 0.40 | 1.08 | 1.07 | 0.62 | 0.73 | 1.26
Barr 𝑌 | 0 ± 0.30 | -0.27 | -0.31 | 0.14 | 0.33 | -0.43
Barr 𝑍 | 0 ± 0.12 | 0.02 | 0.02 | 0.05 | 0.07 | -0.12
CR Δ spectral index | – | 0.16 | 0.16 | 0.38 | 0.39 | -0.05
CR model interpolation | 0 ± 1. | -0.90 | -0.84 | -1.51 | -1.91 | 1.64
prompt flux norm. | – | 0. | 0. | 0. | 0. | 0.0
astrophysical flux norm. | – | 2.93 | 2.94 | 3.36 | 3.36 | 2.50
astrophysical index | – | 2.43 | 2.43 | 2.46 | 2.46 | 2.40
dom efficiency | – | 1.05 | 1.05 | 1.04 | 1.04 | 1.07
bulk ice absorption | – | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
bulk ice scattering | – | 0.96 | 0.96 | 0.98 | 0.98 | 0.95
hole ice | – | -0.97 | -0.97 | -1.18 | -1.18 | -0.72
LLH | – | 1262.25 | 1262.30 | 1223 | 1225 | 1310

Table 7.2 The best fit values of all parameters included in the analysis likelihood fit (see also Table 6.2). For each of cases A and B, the second column repeats the fit with the conventional normalization fixed to nominal; the "Test" column fixes both cross-section scalings to 1 (see section 7.4). Parameters are dimensionless, with the exception of the astrophysical normalization, which has units of $10^{-18}$ GeV$^{-1}$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$ in reference to equation 2.5.
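The Prior column above corresponds to Gaussian constraint terms in the fit. As a hedged illustration only, not the analysis implementation, such priors are conventionally added to the negative log-likelihood as quadratic penalties; the parameter names below are placeholders.

```python
def prior_penalty(params, priors):
    """Gaussian prior contribution to -ln(L); `priors` maps a parameter name
    to its (centre, width), as in the Prior column of Table 7.2. Parameters
    without an entry (e.g. the cross-section scales in case B) float freely."""
    return sum(0.5 * ((params[name] - mu) / sigma) ** 2
               for name, (mu, sigma) in priors.items())

# Illustrative case A priors (names are hypothetical):
priors_case_a = {
    "xsec_scale_100_350": (1.0, 0.05),   # accelerator prior, case A only
    "conv_flux_norm":     (1.0, 0.25),
    "barr_w":             (0.0, 0.40),
}
```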
Figure 7.14 The effect of varying the hole ice parameter on the total flux expectation, for values from -1.50 to -0.25: (a) cos(zenith) and (b) energy distributions for the low-charge bin; (c) cos(zenith) and (d) energy distributions for the high-charge bin. Remaining parameters are fixed to baseline values; the baseline value of the hole ice parameter is 0.

A few verification checks are performed to test the relationship between the cross-section scaling and conventional flux normalization parameters. For reference, the right-most column of Table 7.2 contains the best fit parameters with the cross-section scalings fixed to 1. The resultant values do not precisely reproduce the diffuse astrophysical measurement [16, 21] due to updates applied in this analysis, including: the reconstruction (see section 6.2.1); the binning (see sections 6.2.2 and 7.1); and a flux weighting (see section 6.4) utilizing an updated version of the MCEq software [59]. Additionally, the first year of data, from the 59-string configuration of IceCube, was not used (see section 6.2). Due to the adjusted binning, templates for the mis-reconstructed atmospheric muon background (see section 6.4) and a correction for high-energy muons from tau neutrinos (see [16] for more detail) are not applied.
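Both these fixed-parameter checks and the profile likelihood scans shown earlier amount to the same operation: holding one parameter fixed and re-minimizing the negative log-likelihood over the others. The following is a minimal sketch with a generic objective function rather than the analysis code; the Nelder-Mead simplex method [56] is used here as one common choice of minimizer.

```python
import numpy as np
from scipy.optimize import minimize

def profile_scan(neg_llh, x0, scan_index, scan_values):
    """Profile -ln(L): fix parameter `scan_index` at each scan value and
    re-minimize over the remaining (profiled-out) parameters."""
    profiled = []
    for value in scan_values:
        cost = lambda free: neg_llh(np.insert(free, scan_index, value))
        fit = minimize(cost, np.delete(np.asarray(x0, float), scan_index),
                       method="Nelder-Mead")
        profiled.append(fit.fun)
    return np.array(profiled)
```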
Figure 7.15 The effect of varying the cosmic ray interpolation parameter on the total flux expectation, for values from -1.50 to -0.25: (a) cos(zenith) and (b) energy distributions for the low-charge bin; (c) cos(zenith) and (d) energy distributions for the high-charge bin. Remaining parameters are fixed to baseline values; the baseline value of the cosmic ray interpolation parameter is 0.

The values in the "Test" column of Table 7.2 may be considered solely a reference for the nuisance parameters obtained in the physics fits. As mentioned above, the case A and case B best fits are also repeated with the conventional normalization fixed to nominal, and the cross-section scalings are weakly pulled towards 1 in both cases. Fixing the conventional normalization or the cross-section scalings to nominal worsens the likelihood in all cases tested (see Table 7.2), with the largest effect from fixing the cross-section scalings. The best fit likelihood values, shown in the last row of Table 7.2, differ between the evaluated scenarios.

Figure 7.16 The prompt atmospheric flux is fit to 0 and displays no correlation with the 𝜎CSMS scaling (350 - 5000 GeV) parameter: (a) with the accelerator data prior applied to the 𝜎CSMS scaling (100 - 350 GeV); (b) without the prior applied. Contours for 1, 2 and 3 𝜎 (68%, 90% and 95% confidence levels) are calculated via Wilks’ theorem [65]. The 2D profile likelihood scans here indicate a similar result to the astrophysical diffuse fit result [16, 21] of $0^{+0.68}$.

For more detail, Figure 7.17 shows the bin-wise differences in the best fit likelihoods for case A minus case B. No clear pattern or region stands out, though subtle indications near the horizon emerge. The low-charge bin has ∼4× the likelihood difference of the high-charge bin, which is reasonable given that it contains the highest statistics of the sample, including the signal events. The strong pull of the 100 - 350 GeV scaling away from the accelerator prior in case A, and the differences in best fit likelihood value and physics parameters between cases A and B, are all indicative of possible tension between the Standard Model expectation and IceCube’s data, or of the existence of some unknown background or systematic effect. A factor to consider is that cross-section and flux modelling both rely, directly or effectively, on tuning models to experimental data, an effect that could be amplified by the flux-dependent nature of this analysis. On one hand, the inclusion of the accelerator prior (centred at the Standard Model and in agreement with the accelerator measurements) may be helping to keep the fit from experiencing some unanticipated degeneracy in the analysis; this opens the question of whether the prior makes the result more correct. On the other hand, case B presents a more independent result than case A, one that is relatively free of the tuning of the constraints and a more IceCube-specific contribution to the field. In that scenario, the resultant physical difference observed indicates the need for advancing these measurements further (see Chapter 8).
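The bin-wise quantity behind the Figure 7.17 comparison can be sketched as follows; this is a minimal illustration with hypothetical array names, assuming a plain per-bin Poisson negative log-likelihood with constant terms dropped (the exact per-bin definition used in the plots may differ).

```python
import numpy as np

def binwise_neg_llh(data, mu):
    """Per-bin Poisson -ln(L), constant ln(data!) terms dropped; `data` and
    `mu` are (zenith, energy) histograms for one charge bin."""
    return mu - data * np.log(np.clip(mu, 1e-12, None))

# Figure 7.17's quantity: the case A best fit expectation versus case B's,
# evaluated against the same data histogram. Prior penalties are global
# rather than per-bin, which is why the summed difference does not exactly
# reproduce the LLH row of Table 7.2.
# diff = binwise_neg_llh(data, mu_case_a) - binwise_neg_llh(data, mu_case_b)
```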
Figure 7.17 Difference of bin-wise likelihood values for (a) the low detected charge bin (< 100 PE) and (b) the high detected charge bin (> 100 PE), comparing the likelihood for the best fit parameters obtained from the data fit with the accelerator prior (case A) minus those obtained from the data fit without the prior (case B). The low charge bin (a) contains ∼4× the difference of the high charge bin (b), most likely because it contains nearly all of the signal and the highest statistics of the sample. Note that the total difference in likelihoods summed over the bins does not exactly match the values in Table 7.2, because the plotted calculation does not include the priors; since penalties from priors are not calculated per-bin, the numbers remain valuable for drawing conclusions about shape and relative differences.

One possible explanation for the fit of the 100 - 350 GeV cross-section scaling to a lower than nominal value is a flaw in the event selection or simulation. The signal events are located at the lower edge of the applied event sample, raising an increased probability that un-simulated events are present (i.e., mis-reconstructed events below 100 GeV). In particular, the low energy region of the dataset was previously treated as background rather than as the primary physics region, including during vetting of the sample. The most direct method to estimate the rate of < 100 GeV events would be to generate simulation and evaluate how many such events are retained in the final level event sample. The neutrino event generator used for the simulation here (nugen), however, only simulates deep inelastic scattering events and is valid down to 100 GeV [66]. For lower energy IceCube analyses, the GENIE event generator is used, which has the capability to simulate multiple interaction types [67]. However, there is a long-standing mis-match between GENIE and nugen simulation. Amongst the differences between the generators is the fact that GENIE does not use the CSMS cross-section model, which is problematic for this analysis. A simpler check has been performed to approximate the effect of low energy events that exist in the data but are unrepresented in the simulation. Events < 200 GeV are removed from the template (baseline simulation), which is then fit to the full simulation ‘data’, i.e., including events down to 100 GeV. No bias is observed in the resulting fit of the cross-section parameters.
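A toy version of this truncated-template check is sketched below. The spectrum, smearing, and binning are invented for illustration and stand in for the analysis simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: a falling spectrum of true energies above 100 GeV,
# smeared into a reconstructed-energy observable.
e_true = 100.0 * (1.0 - rng.random(200_000)) ** (-1.0 / 1.7)   # ~E^-2.7
e_reco = e_true * rng.lognormal(0.0, 0.3, e_true.size)
edges = np.logspace(2, 4, 25)

# Template drops true energies below 200 GeV; the pseudo-data keep the
# full sample down to 100 GeV, mimicking unsimulated low-energy leakage.
template, _ = np.histogram(e_reco[e_true >= 200.0], bins=edges)
pseudo_data, _ = np.histogram(e_reco, bins=edges)
# Re-running the fit with `template` against `pseudo_data` and recovering
# unbiased cross-section scalings is the check quoted in the text.
```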
CHAPTER 8
CONCLUSION (...OR JUST THE BEGINNING?)
The analysis outlined in this thesis provided a first evaluation of the muon neutrino-nucleon cross-section between approximately 350 GeV and 5 TeV. The flux-dependent analysis, using a predominantly atmospheric neutrino dataset, measures a scaling factor of the Standard Model cross-section (CSMS [3]) as a normalization on the predicted neutrino flux. The baseline atmospheric flux model is calculated with the MCEq software, using the H4a cosmic ray flux model and the Sibyll2.3c hadronic interaction model. Improvements to the existing event sample include the energy reconstruction (DirectReco), tailored to the energy region of interest, and the binning. DirectReco calculates event hypotheses by photon propagation, directly simulating the expected charge instead of using the historical method of lookup tables; this allows improved accuracy via simulation of the actual hypothesis energies, use of newer models of the glacial detector medium, and greater flexibility in the event hypotheses. To reduce the time required to reconstruct high-energy background events, the deposited event charge is used as an energy proxy: only low-charge (< 100 PE) events are processed with DirectReco, while high-charge (> 100 PE) events retain their previous reconstructed values. Deposited charge is included as a third dimension of the histogram of observables in order to separate events processed with the different reconstruction methods.
The cross-section is measured in 2 true energy bins, 100 - 350 GeV and 350 GeV - 5 TeV. Accelerator-based neutrino beam cross-section measurements are used to calculate a prior applied to the overlapping, lower energy bin. A forward-folding analysis is executed: data and simulation are processed identically, and the observables are compared in reconstructed space via a Poisson likelihood. This analysis has produced the world’s first measurement of the neutrino-nucleon cross-section in a large fraction of the explored energy range.
Results are generated both with (case A) and without (case B) the accelerator prior applied. The accelerator prior, a Gaussian centred at nominal CSMS with a width of 0.05 on the 100 - 350 GeV cross-section scaling factor, is a significant constraint on the fit. For case A, the best fit scaling factors are $0.82^{+0.08}_{-0.07}$ for 100 - 350 GeV and $1.23^{+0.07}_{-0.07}$ for 350 GeV - 5 TeV. The 100 - 350 GeV parameter pulls significantly away from its prior in this case, but the other fit parameters appear to compensate, returning approximately nominal values overall. It is interesting to note that the 350 - 5000 GeV scaling, the previously unmeasured region, is consistent within errors with the preceding IceCube cross-section result, even though the two analyses measure neighbouring neutrino energy ranges and differ in numerous respects. For case B, the best fit cross-section scalings are $0.27^{+0.07}_{-0.05}$ for 100 - 350 GeV and $0.78^{+0.08}_{-0.07}$ for 350 GeV - 5 TeV. Both parameters are pulled much lower than the Standard Model expectation, which is only partially compensated by the highly correlated conventional flux normalization. It is currently unclear why this configuration of parameters is preferred. The results for both cases are summarized in Figure 8.1.
Interpretation of the results differs per case. Case A makes use of additional available external information, presumably providing a more accurate result, but it also reinforces the models’ dependency on tuning. Case B is less constrained, possibly indicating a deficiency in the analysis, or possibly producing a more physically meaningful, IceCube-data-driven result. Differences between the cases are also enlightening. Several features suggest tension with the Standard Model prediction, specifically the shift in case A and the radically different best fit cross-section scalings and best fit likelihood values between the cases. Further work is needed to confirm this.
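For reference, the baseline flux computation named above can be set up with MCEq [59] along the following lines. This is a minimal sketch assuming MCEq's documented interface; the zenith angle is chosen arbitrarily for illustration, and the analysis' actual weighting scheme is not reproduced here.

```python
from MCEq.core import MCEqRun
import crflux.models as crf

# Conventional atmospheric nu_mu flux with the models named in the text:
# H4a primary cosmic ray flux and SIBYLL-2.3c hadronic interactions.
mceq = MCEqRun(
    interaction_model="SIBYLL23C",
    primary_model=(crf.HillasGaisser2012, "H4a"),
    theta_deg=60.0,                     # one zenith angle of a scan
)
mceq.solve()

e_grid = mceq.e_grid                             # energy grid in GeV
numu = mceq.get_solution("conv_numu")            # conventional nu_mu flux
antinumu = mceq.get_solution("conv_antinumu")    # conventional nubar_mu flux
```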
Another method of confirmation is comparison to other experimental results, including from the accelerator experiment FASER𝜈, in progress with the goal of measuring the neutrino-nucleon cross-section from 100 GeV to a few TeV [4]. This experiment will collect data during the recently started LHC Run 3 and will provide a cross-check with different neutrino sources, backgrounds, reconstruction methods and analysis tools.
There is also opportunity for improvement of IceCube-based studies, both in detail and in breadth. While the analysis has demonstrated the significant potential for IceCube to explore precision cross-section measurements at intermediate energies, the results indicate areas of improvement beyond the scope of the work presented here. In section 7.1, low-charge events with high reconstructed energy (< 100 PE, > 5.01 TeV) are masked from the sample to improve the overall agreement between the best fit simulation and the data. The removal of these bins has a negligible effect on the fitted nuisance parameters. However, implementing a likelihood that accounts for the limited statistics of the simulation, such as [55], might achieve the same ends without the potential loss of information. Increasing the simulation statistics is another option, though a more computationally expensive and time consuming one.

Figure 8.1 Final results for the cross-section scaling parameters plotted relative to the unmodified CSMS cross-section [3] for neutrinos and antineutrinos (magenta). Case A refers to the fit with the accelerator prior applied to the 100 - 350 GeV scaling parameter (black); case B refers to the fit without the accelerator prior (yellow). The dotted vertical line connecting the points represents the fact that one scaling parameter is fitted per energy bin, and the scaling is assumed to be identical for neutrinos and anti-neutrinos; in the plot, the scaling factor is applied to the CSMS cross-sections separately for both neutrinos and anti-neutrinos. Errors shown are from 68% contours calculated via Wilks’ theorem [65], obtained from the 2D profile likelihood scans (Figures 7.6 and 7.11).

A challenge of utilizing such a broad neutrino energy range is that the physics, i.e., the primary process(es) of energy deposition, differs across the sample. As a consequence, the optimal choice of energy reconstruction method varies event-to-event. It is generally advantageous to apply a uniform method for energy reconstruction, but it is unclear how to implement one without sacrificing resolution for some subset of events. The degradation of the energy resolution for lower energy events is the impetus for updating the energy reconstruction to include multiple methods, as discussed in section 6.2.1. Subsequently, the deposited charge is used as an energy proxy to select the reconstruction method per-event. While it is a natural choice of variable for this purpose, it is possible that another observable could yield superior performance in this role, perhaps a low-level energy reconstruction.
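In sketch form, the per-event selection reads as follows; the function and argument names are illustrative, not the analysis code.

```python
def select_energy_reco(qtot_pe, e_directreco, e_previous, q_split=100.0):
    """Charge-proxy selection described above: low-charge events (< 100 PE)
    take the DirectReco energy, while high-charge events keep their
    previously reconstructed value."""
    return e_directreco if qtot_pe < q_split else e_previous
```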
Another target for improvement is the event selection. This analysis makes use of an existing event sample due to its high statistics and mature, well-understood behaviour. However, the focus of the sample and its development was to extract the maximum amount of detail from the region above 10 TeV; events at 1 TeV and below were considered a less important background in the development of the sample. An event selection tailored to the GeV - TeV neutrino energy range would potentially yield improved precision and facilitate study of the relationships between the cross-section parameters and the nuisance parameters.
A detail that may be included in the analysis is neutrino-Earth absorption. While this is a relatively small effect for the highest energies considered in this analysis, incorporating it could provide increased rigour, and reducing the dependence on the flux model could improve the robustness of the result compared to only considering its normalization. Looking forward, it could also be informative to perform a cross-section analysis across a broader energy range. Once neutrino-Earth absorption is included, it would be straightforward to extend the energy range to higher energies. This would also permit more points of comparison for the result. Taking a step further, an even more advanced measurement could make use of a combined sample of diffuse cascades and tracks, measuring the cross-section for multiple neutrino flavours in one analysis.
BIBLIOGRAPHY
[1] R. L. Workman and others. Review of Particle Physics. PTEP, 2022:083C01, 2022. doi: 10.1093/ptep/ptac097.
[2] Péter Mészáros, Derek B. Fox, Chad Hanna, and Kohta Murase. Multi-Messenger Astrophysics. Nature Rev. Phys., 1:585–599, 2019. doi: 10.1038/s42254-019-0101-z.
[3] Amanda Cooper-Sarkar, Philipp Mertsch, and Subir Sarkar. The high energy neutrino cross-section in the Standard Model and its uncertainty. JHEP, 08:042, 2011. doi: 10.1007/JHEP08(2011)042.
[4] Henso Abreu et al. Detecting and Studying High-Energy Collider Neutrinos with FASER at the LHC. Eur. Phys. J. C, 80(1):61, 2020. doi: 10.1140/epjc/s10052-020-7631-5.
[5] M. G. Aartsen et al. Measurement of the multi-TeV neutrino cross section with IceCube using Earth absorption. Nature, 551:596–600, 2017. doi: 10.1038/nature24459.
[6] Mauricio Bustamante and Amy Connolly. Extracting the Energy-Dependent Neutrino-Nucleon Cross Section above 10 TeV Using IceCube Showers. Phys. Rev. Lett., 122(4):041101, 2019. doi: 10.1103/PhysRevLett.122.041101.
[7] Yiqian Xu. Measurement of the High Energy Neutrino-Nucleon Cross Section Using Neutrino-Induced Electromagnetic and Hadronic Showers Observed in Five Years of IceCube Data. PhD thesis, Stony Brook U., 2019.
[8] R. Abbasi et al. Measurement of the high-energy all-flavor neutrino-nucleon cross section with IceCube. 11 2020. doi: 10.1103/PhysRevD.104.022001.
[9] Rasha Abbasi et al. Measuring the Neutrino Cross Section Using 8 years of Upgoing Muon Neutrinos Observed with IceCube. PoS, ICRC2021:1158, 2021. doi: 10.22323/1.395.1158.
[10] T. K. Gaisser. Cosmic rays and particle physics. 1990. ISBN 978-0-521-33931-5.
[11] T. K. Gaisser and M. Honda. Flux of atmospheric neutrinos. Ann. Rev. Nucl. Part. Sci., 52:153–199, 2002. doi: 10.1146/annurev.nucl.52.050102.090645.
[12] Victor F. Hess. Über Beobachtungen der durchdringenden Strahlung bei sieben Freiballonfahrten. Phys. Z., 13:1084–1091, 1912.
[13] Nick Murphy. Motion of the earth and the detection of weakly interacting massive particles. Lecture notes, Astronomy 253: Plasma Astrophysics, Harvard-Smithsonian Center for Astrophysics, 2014.
[14] Helmholtz Alliance for Astroparticle Physics. Cosmic ray mystery. http://www.hap-astroparticle.org/184.php.
[15] B. Louis, V. Sandberg, G. Garvey, H. White, G. Mills, and R. Tayloe. The evidence for oscillations. Los Alamos Sci., 25:116–127, 1997.
[16] Jöran Benjamin Stettner. Measurement of the energy spectrum of astrophysical muon-neutrinos with the IceCube Observatory. PhD thesis, RWTH Aachen U., 2021.
[17] M. G. Aartsen et al. Observation of High-Energy Astrophysical Neutrinos in Three Years of IceCube Data. Phys. Rev. Lett., 113:101101, 2014. doi: 10.1103/PhysRevLett.113.101101.
[18] M. G. Aartsen et al. Observation and Characterization of a Cosmic Muon Neutrino Flux from the Northern Hemisphere using six years of IceCube data. Astrophys. J., 833(1):3, 2016. doi: 10.3847/0004-637X/833/1/3.
[19] M. G. Aartsen et al. Evidence for Astrophysical Muon Neutrinos from the Northern Sky with IceCube. Phys. Rev. Lett., 115(8):081102, 2015. doi: 10.1103/PhysRevLett.115.081102.
[20] M. G. Aartsen et al. Characteristics of the diffuse astrophysical electron and tau neutrino flux with six years of IceCube high energy cascade data. Phys. Rev. Lett., 125(12):121104, 2020. doi: 10.1103/PhysRevLett.125.121104.
[21] R. Abbasi et al. Improved Characterization of the Astrophysical Muon-neutrino Flux with 9.5 Years of IceCube Data. Astrophys. J., 928(1):50, 2022. doi: 10.3847/1538-4357/ac4d29.
[22] W. Pauli. Dear radioactive ladies and gentlemen. Phys. Today, 31N9:27, 1978.
[23] J. A. Formaggio and G. P. Zeller. From eV to EeV: Neutrino Cross Sections Across Energy Scales. Rev. Mod. Phys., 84:1307–1341, 2012. doi: 10.1103/RevModPhys.84.1307.
[24] Richard P. Feynman. Very high-energy collisions of hadrons. Phys. Rev. Lett., 23:1415–1417, 1969. doi: 10.1103/PhysRevLett.23.1415.
[25] Guido Altarelli and G. Parisi. Asymptotic Freedom in Parton Language. Nucl. Phys. B, 126:298–318, 1977. doi: 10.1016/0550-3213(77)90384-4.
[26] V. N. Gribov and L. N. Lipatov. Deep inelastic e p scattering in perturbation theory. Sov. J. Nucl. Phys., 15:438–450, 1972.
[27] L. N. Lipatov. The parton model and perturbation theory. Yad. Fiz., 20:181–198, 1974.
[28] Yuri L. Dokshitzer. Calculation of the Structure Functions for Deep Inelastic Scattering and e+ e- Annihilation by Perturbation Theory in Quantum Chromodynamics. Sov. Phys. JETP, 46:641–653, 1977.
[29] J. van Santen. Markov-chain monte-carlo reconstruction for cascade-like events in IceCube. Master’s thesis, Humboldt-Universität zu Berlin, 2010.
[30] Richard C. Fernow. Introduction to Experimental Particle Physics. 3 1983. ISBN 978-0-521-30170-1.
[31] Claus Grupen and Boris Schwartz. Particle detectors. Cambridge Univ. Pr., Cambridge, UK, 2008.
[32] Donald E. Groom, Nikolai V. Mokhov, and Sergei I. Striganov. Muon stopping power and range tables 10-MeV to 100-TeV. Atom. Data Nucl. Data Tabl., 78:183–356, 2001. doi: 10.1006/adnd.2001.0861.
[33] M. G. Aartsen et al. The IceCube Neutrino Observatory: Instrumentation and Online Systems. JINST, 12(03):P03012, 2017. doi: 10.1088/1748-0221/12/03/P03012.
[34] R. Abbasi et al. The IceCube Data Acquisition System: Signal Capture, Digitization, and Timestamping. Nucl. Instrum. Meth. A, 601:294–316, 2009. doi: 10.1016/j.nima.2009.01.001.
[35] R. Abbasi et al. The Design and Performance of IceCube DeepCore. Astropart. Phys., 35:615–624, 2012. doi: 10.1016/j.astropartphys.2012.01.004.
[36] Leif Rädel. Measurement of High-Energy Muon Neutrinos with the IceCube Neutrino Observatory. PhD thesis, RWTH Aachen U., 2017.
[37] M. G. Aartsen et al. Energy Reconstruction Methods in the IceCube Neutrino Telescope. JINST, 9:P03009, 2014. doi: 10.1088/1748-0221/9/03/P03009.
[38] Sebastian Euler. Observation of oscillations of atmospheric neutrinos with the IceCube Neutrino Observatory. PhD thesis, RWTH Aachen University, 2014.
[39] M. G. Aartsen et al. Measurement of South Pole ice transparency with the IceCube LED calibration system. Nucl. Instrum. Meth. A, 711:73–89, 2013. doi: 10.1016/j.nima.2013.01.054.
[40] South Pole glacial climate reconstruction from multi-borehole laser particulate stratigraphy. J. Glaciol., 59(218):1117–1128, 2013. doi: 10.3189/2013JoG13J068.
[41] M. G. Aartsen et al. The IceCube Neutrino Observatory Part VI: Ice Properties, Reconstruction and Future Developments. In 33rd International Cosmic Ray Conference, 9 2013.
[42] Rasha Abbasi et al. A novel microstructure based model to explain the IceCube ice anisotropy. PoS, ICRC2021:1119, 2021. doi: 10.22323/1.395.1119.
[43] IceCube Collaboration. IceCubeWiki/gallery: Spice3.2.1 layered scattering and absorption coefficients. URL https://wiki.icecube.wisc.edu/index.php/File:Spice3.2.1_layered_scatt_abs_withlength_annotated.png. Visited on 04/12/2022.
[44] Martin Rongen. Calibration of the IceCube Neutrino Observatory. PhD thesis, RWTH Aachen U., 2019.
[45] Philipp Eller. Unified Angular Acceptance Model (aka “Hole Ice”) Model. GitHub repository. URL https://github.com/icecube/angular_acceptance. Visited on 04/12/2022.
[46] Rasha Abbasi et al. A calibration study of local ice and optical sensor properties in IceCube. PoS, ICRC2021:1023, 2021. doi: 10.22323/1.395.1023.
[47] J. Ahrens et al. Muon track reconstruction and data selection techniques in AMANDA. Nucl. Instrum. Meth. A, 524:169–194, 2004. doi: 10.1016/j.nima.2004.01.065.
[48] R. Abbasi et al. A muon-track reconstruction exploiting stochastic losses for large-scale Cherenkov detectors. JINST, 16(08):P08034, 2021. doi: 10.1088/1748-0221/16/08/P08034.
[49] Kai Schatto. Stacked searches for high-energy neutrinos from blazars with IceCube. PhD thesis, Mainz U., 6 2014.
[50] R. Abbasi et al. An improved method for measuring muon energy using the truncated mean of dE/dx. Nucl. Instrum. Meth. A, 703:190–198, 2013. doi: 10.1016/j.nima.2012.11.081.
[51] Johan Lundberg, P. Miocinovic, T. Burgess, J. Adams, S. Hundertmark, P. Desiati, K. Woschnagg, and P. Niessen. Light tracking for glaciers and oceans: Scattering and absorption in heterogeneous media with Photonics. Nucl. Instrum. Meth. A, 581:619–631, 2007. doi: 10.1016/j.nima.2007.07.143.
[52] Christopher Henrik V. Wiebusch. The Detection of faint light in deep underwater neutrino telescopes. Other thesis, 12 1995.
[53] Nathan Whitehorn, Jakob van Santen, and Sven Lafebre. Penalized Splines for Smooth Representation of High-dimensional Monte Carlo Datasets. Comput. Phys. Commun., 184:2214–2220, 2013. doi: 10.1016/j.cpc.2013.04.008.
[54] Claudio Kopper. clsim. GitHub repository. URL https://github.com/claudiok/clsim.
[55] Dmitry Chirkin. Likelihood description for comparing data with simulation of limited statistics. 4 2013.
[56] J. A. Nelder and R. Mead. A Simplex Method for Function Minimization. Comput. J., 7:308–313, 1965. doi: 10.1093/comjnl/7.4.308.
[57] Thomas K. Gaisser. Spectrum of cosmic-ray nucleons, kaon production, and the atmospheric muon charge ratio. Astropart. Phys., 35:801–806, 2012. doi: 10.1016/j.astropartphys.2012.02.010.
[58] Anatoli Fedynitch, Felix Riehn, Ralph Engel, Thomas K. Gaisser, and Todor Stanev. Hadronic interaction model sibyll 2.3c and inclusive lepton fluxes. Phys. Rev. D, 100(10):103018, 2019. doi: 10.1103/PhysRevD.100.103018.
[59] Anatoli Fedynitch, Ralph Engel, Thomas K. Gaisser, Felix Riehn, and Todor Stanev. Calculation of conventional and prompt lepton fluxes at very high energy. EPJ Web Conf., 99:08001, 2015. doi: 10.1051/epjconf/20159908001.
[60] Thomas K. Gaisser, Todor Stanev, and Serap Tilav. Cosmic Ray Energy Spectrum from Measurements of Air Showers. Front. Phys. (Beijing), 8:748–758, 2013. doi: 10.1007/s11467-013-0319-7.
[61] G. D. Barr, T. K. Gaisser, S. Robbins, and Todor Stanev. Uncertainties in Atmospheric Neutrino Fluxes. Phys. Rev. D, 74:094009, 2006. doi: 10.1103/PhysRevD.74.094009.
[62] Morihiro Honda, T. Kajita, K. Kasahara, S. Midorikawa, and T. Sanuki. Calculation of atmospheric neutrino flux using the interaction model calibrated with atmospheric muon data. Phys. Rev. D, 75:043006, 2007. doi: 10.1103/PhysRevD.75.043006.
[63] Sandra Christine Miarecki. Earth versus Neutrinos: Measuring the total muon-neutrino-to-nucleon cross section at ultra-high energies through differential Earth absorption of muon neutrinos from cosmic rays using the IceCube Detector. PhD thesis, UC, Berkeley, 1 2016.
[64] Sebastian Josef Schoenen. Discovery and characterization of a diffuse astrophysical muon neutrino flux with the IceCube Neutrino Observatory. PhD thesis, RWTH Aachen U., 2017.
[65] S. S. Wilks. The Large-Sample Distribution of the Likelihood Ratio for Testing Composite Hypotheses. Annals Math. Statist., 9(1):60–62, 1938. doi: 10.1214/aoms/1177732360.
[66] Askhat Gazizov and Marek P. Kowalski. ANIS: High energy neutrino generator for neutrino telescopes. Comput. Phys. Commun., 172:203–213, 2005. doi: 10.1016/j.cpc.2005.03.113.
[67] C. Andreopoulos et al. The GENIE Neutrino Monte Carlo Generator. Nucl. Instrum. Meth. A, 614:87–104, 2010. doi: 10.1016/j.nima.2009.12.009.
APPENDIX
CASE B BLIND FIT PLOTS
All plots in this appendix utilize the best fit parameters from the case B fits, that is, the accelerator prior is not applied to the 100 - 350 GeV cross-section scaling parameter.

Figure A1.1 Data-simulation agreement plot, summed over the cos(reco zenith) dimension for the low detected charge bin (< 100 PE), shown with (a) the original binning of Table 6.1 and (b) the modified binning where bins with deposited charge < 100 PE and reconstructed energy > 5.01 TeV are masked. The data are plotted along with the expectation for both baseline and best fit parameters; bins with anomalously large values due to limited simulation statistics are highlighted. Corresponding plots to those in Figure 7.1.

Figure A1.2 Low detected charge bin (< 100 PE) with the original binning from Table 6.1: (a) bin-wise likelihood comparing data and simulation with best fit parameters, with the z-axis limited to better show the structure (without limits, the maximum value is above 675) and the 3 anomalously large bins appearing in yellow; (b) data histogram of observables. The mask region of low charge and high reconstructed energy is marked with a green box. Corresponding plots to Figure 7.2.
Figure A1.3 Data-simulation comparison distributions: (a) cos(zenith) and (b) energy distributions for the low-charge bin; (c) cos(zenith) and (d) energy distributions for the high-charge bin. The reconstructed energy or cos(zenith) dimension has been summed over, depending on the plot. The data are plotted along with the expectation for both baseline and best fit parameters. Corresponding plots to those in Figure 7.3.

Figure A1.4 Bin-wise likelihood values for the data-to-simulation (with best-fit parameters) comparison after adding the additional criterion that bins < 100 PE and > 5.01 TeV are masked: (a) low detected charge bin (< 100 PE); (b) high detected charge bin (> 100 PE). Corresponding plots to those in Figure 7.4.