DIRECT PHOTON PRODUCTION AT √s = 1.8 TeV

By

Salvatore T. Fahey

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Physics and Astronomy

1995

ABSTRACT

Direct Photon Production at √s = 1.8 TeV

By Salvatore T. Fahey

A measurement of direct photon production from proton-antiproton collisions at the Fermilab Tevatron center of mass energy of √s = 1.8 TeV is reported. Photons were detected in a liquid argon calorimeter, with charged particle rejection provided by drift chambers. Subtraction of the neutral meson background was done on a statistical basis using the depth profile of the calorimeter showers, which was modeled by a detailed Monte Carlo simulation. The efficiencies for direct photon detection were also studied with the Monte Carlo. A comparison of the isolated direct photon cross section in the central pseudorapidity region (|η| < 0.9) with a Quantum Chromodynamics prediction is provided. The data and theory are seen to agree well over a large range of transverse momenta (12 to 100 GeV).

To my parents, Paul and Rosemarie Fahey.

Acknowledgements

I greatly appreciate the work and help of my colleagues in the photon group, especially Steve Linn, Bob Madden, Paul Rubinov, Greg Snow, and John Womersley. There were many others not directly related to my analysis who also provided much needed support. I would like to single out Rich Astur, whose patience with new students is incredible, and Norman Graf, who was always there to nudge the analysis back on track.

The support of my friends, both at Fermilab and MSU, has been amazing. I would like to thank Ian Adam, Gian Di Loreto, Tom Fahland, Eric Flattum, Kate Frame, Elizabeth Gallas, Steve Jerger, Brent May, Andy Milder, Joelle Murray, and Tom Rockwell; I will miss seeing you hanging around my desk. Gene Gualtieri and Kristine Marquard deserve a very special thank you. I hope they know what a wonderful influence they have been on my life.

The lion's share of credit for this work goes to my advisor, Bernard Pope. It is impossible to imagine a better mentor. I may have lost him as an advisor by graduating, but our friendship will remain as strong as ever.

Finally, my debt to my family goes beyond the obvious. My major regret about choosing this field is that it takes me so far away from them. Thank you Bridget, Gus, Paul, Mom and Dad.

Contents

1 Introduction
1.1 Brief Introduction to the Standard Model
1.2 Variables for Hadron Collider Physics
1.3 Theoretical Underpinnings of Direct Photon Production
1.3.1 A Brief Introduction to QCD
1.3.2 Direct Photon Production in pp̄ Collisions
1.3.3 First Order Processes
1.3.4 Higher Order Processes
1.4 Previous Direct Photon Experiments
2 Experimental Apparatus
2.1 The Fermilab Tevatron Collider
2.2 The D0 Detector
2.2.1 The D0 Tracking System
2.2.2 The D0 Calorimeter System
2.2.3 The D0 Muon System
2.2.4 The D0 Trigger and Data Acquisition System
2.3 A Brief History of the D0 Experiment
3 Data Sample
3.1 Triggers
3.1.1 Level 1
3.1.2 Level 2
3.2 Offline Processing
3.2.1 The D0 Reconstruction Program
3.2.2 Photon Identification
3.3 Efficiency
4 Background Subtraction
4.1 Longitudinal Shower Profile Method
4.2 Central Drift Chamber Conversion Method
4.2.1 Matrix Formulation
5 Isolated Direct Photon Cross Section
5.1 The Differential Cross Section Formula
5.1.1 The Number of Candidates, N
5.1.2 The Photon Fraction, γ
5.1.3 The Luminosity, L
5.1.4 The Geometric Acceptance, A
5.1.5 The Photon Efficiency, εγ
5.1.6 The Bin Size, ΔET and Δη
5.2 Cross Section vs Transverse Energy
5.2.1 Comparison with Theory
6 Characteristics of Direct Photon Events
6.1 The Golden Photon Sample
6.2 Jet Identification and Efficiency
6.3 Jet Production in Direct Photon Events
7 Conclusions

List of Tables

1.1 The six Standard Model Quarks.
1.2 The six Standard Model Leptons.
1.3 Vector Bosons and their respective forces.
1.4 Previous Direct Photon Experiments.
2.1 Proposals submitted for a detector at the D0 interaction region of the Tevatron.
4.1 Neutral Meson Background.
4.2 Photon Fraction from the EM1 Method.
4.3 CDC Conversion Method Parameters.
4.4 Photon Fraction from the CDC Conversion Method.
5.1 Number of Candidates after cuts for each trigger.
5.2 Photon Fraction Fit Parameters.
5.3 Luminosity of the Photon Triggers.
5.4 Cross Section Points.

List of Figures

1.1 Schematic of proton-antiproton scattering.
1.2 First Order Direct Photon Feynman Diagrams.
1.3 Examples of Next-to-leading Order Direct Photon Feynman Diagrams.
2.1 The Fermilab Tevatron Collider.
2.2 Cutaway view of the D0 detector.
2.3 The D0 tracking system.
2.4 The D0 Vertex Detector.
2.5 Cross sectional view of TRD layer 1. The dashed lines denote one anode cell.
2.6 End view of the CDC.
2.7 FDC, exploded view.
2.8 Cutaway view of the D0 Calorimeters.
2.9 Diagram of a calorimeter cell.
2.10 Side view of one quadrant of the calorimeter showing the projective tower geometry.
3.1 Turn-on curve for the Level 1 high ET electromagnetic calorimeter trigger.
3.2 Turn-on curve for the Level 2 high ET trigger.
3.3 z vertex position. The average position of the vertex in run Ia was not zero.
3.4 Isolation distributions for Monte Carlo photons and Z → ee electron data.
3.5 Missing transverse energy distribution from photon candidates.
3.6 χ² distributions from test beam electrons, pions, and electrons from W → eν events.
3.7 Offline selection cut efficiency measured with the Monte Carlo photon sample.
3.8 The efficiency of the missing ET cut vs photon ET.
3.9 Trigger efficiency for Monte Carlo photons which have passed offline selection cuts.
4.1 Width of jet candidates from data.
4.2 Electromagnetic fraction of jet candidates from data. Roughly 4/1000 jets have an electromagnetic fraction higher than 90%.
4.3 Minimum separation between photons from π⁰ and η decays at the first layer of the calorimeter. The horizontal line denotes the cell size at pseudorapidity of 0.
4.4 Histogram of the fractional energy contained in the first layer of the calorimeter for 10 GeV Monte Carlo photons and π⁰s.
4.5 Fraction of candidates with EM1/E < 1% vs transverse energy for samples of Monte Carlo photons, background, and data.
4.6 Photon Fraction vs Transverse Energy for the Longitudinal Shower Profile Method. The solid line is a fit to the function 1 − a·exp(−b·ET). The dashed lines are from varying the parameters a and b by enough to change the χ² by one unit.
4.7 Comparison between Monte Carlo and data electrons from Z events.
4.8 Photon fraction for different values of the EM1/E discriminant. The lines are fits to the three sets of data points.
4.9 CDC dE/dx distribution.
4.10 Comparison of the photon fraction from the two methods.
5.1 Raw number of candidates vs transverse energy. The three large peaks are from the three different trigger thresholds. The small peak at low ET is from low energy, non-triggered photon candidates.
5.2 Error on the Photon Fraction vs Transverse Energy.
5.3 Photon Cross Section vs Transverse Momentum.
5.4 Comparison of the Photon Cross Section with the QCD prediction. The shaded error at the bottom represents the normalization error on the data due to the luminosity uncertainty.
5.5 Comparison of data and theoretical predictions with different μ scales.
5.6 Comparison of data and theoretical predictions with different parton distribution sets.
6.1 Side view of a direct photon candidate event. The photon is in the lower half of the Central Calorimeter and the jet can be seen in the right Endcap Calorimeter with tracks in the Forward Drift Chambers.
6.2 Lego plot of the direct photon event. The height of each element corresponds to the amount of energy deposited in that calorimeter tower.
6.3 Number of jets in direct photon events from the golden photon sample.
95 Photon 45 - jet ()5 for golden photon sample events with one jet ..... 96 Photon 4S - summed jet 4) for golden photon sample events with more than one jet ................................. 97 Photon and jet 1] distributions for golden photon sample events. . . . 98 Chapter 1 Introduction 1.] Brief Introduction to the Standard Model High Energy Physics (HEP) is simply the study of the most basic constituents of matter and the interactions between them. The term “High Energy” refers to the dominant tool used in extracting this information. The foundation of the experimen- tal methods used in HEP lies in Rutherford’s scattering experiment with a-particles and gold foil. The distribution of the angles of the scattered a-particles correctly pointed to the fact that gold atoms were composed of mostly empty space with a hard compact nucleus in the center. Modern HEP experiments, while far advanced in technology, still use the same scattering methodology to illuminate the inner workings of matter. The size of the details that can be resolved depends upon the wavelength of the scattered object, thus the smaller the scale of interest the higher the energy of the beam used. The quest for finer and finer detail has become a quest 1 for higher and higher energy scattering beams. As the energy of the scattering beams increased in the 1940’s and 50’s (whether in particle accelerators or in cosmic rays) another phenomenon manifested itself. Interactions at higher energy led to the production of “strange” new particles, which were not predicted by any viable theory. What had started in 1947 with Rochester and Butler’s discovery of the K 0 had exploded by 1960 to include as many as 50 “elementary” particles! Clearly high energy physicists were creating more questions than they were answering. In 1964 a mathematical shorthand for classifying the elementary particles was proposed by Gell—Mann and Zweig [1, 2]. Their idea postulated three elementary particles (six if you want to count their antiparticles), called quarks, from which the more massive particles could be built. The three quarks were called the up, _1 3 down, and strange with charges of +§, , and —31,- respectively. The proton, for example, could be thought of as being composed of two up quarks and one down quark. Ordinary everyday matter was composed of up and down quarks, while the third quark, strange, was used to explain all the strange new particles found in the 1950’s. While this model was satisfying from a theoretical standpoint it had one major experimental drawback: quarks had never been seen. A free quark, with its fractional charge, would give a clear experimental signal; a simple Millikan oil dr0p experiment could provide the necessary evidence. The quark model also had a major theoretical problem. These quarks needed to have spins of %, yet Pauli’s Exclusion Principle states that two spin i; objects cannot occupy the same state, as seemed to 3 be the case for the three strange quarks in an (2‘. Greenberg suggested a way out of the Exclusion Principle dilemma by proposing that quarks have an additional quantum number called color [3]. A A‘H‘, for example, could be composed of three different colored up quarks: red, blue, and green. A rule requiring that all naturally occurring particles are colorless limits the number of particles that can be created to be consistent with what was seen. 
Colorless particles are made by combinations of quarks and anti-quarks (eg blue and anti-blue), or all colors in equal amounts (cg red, blue, and green). This lack of free quarks suggested an extremely strong color force tying them together. At the time the quark model was thought of as a convenient mathematical device rather than a picture of reality. Then the experimental evidence favoring the quark model started to build up. Data taken at the Stanford Linear Accelerator Center (SLAC) in the late 1960’s of electron-proton scattering seemed to point toward the proton having three centers of charge. In 1974 a new heavy neutral particle called the J /‘¢ was discovered at Brookhaven/SLAC by Ting/ Richter [4]. Its long life, 1000 times longer than the other meson particles, seemed to point to new physics. It could be explained by the existence of a fourth quark, called charm. The charm quark explanation correctly predicted an array of new particles that were soon found, and the quark hypothesis took on a new air of respectability. Subsequent experiments found a fifth bottom quark [5], and finally a sixth, called top, in 1995 [6]. The particles that can be explained as combinations of quarks are called hadrons. But there are many other particles in the subatomic zoo that are not hadrons and are Table 1.1: The six Standard Model Quarks. [ Charge Mass (MeV/cz) down — 51,- 5- 15 up g 2-8 strange —% 100-300 charm g 1,000-1 ,600 bottom —§ 4,100-4,500 top g 180,000 not made of quarks. One class of these are the leptons. The first lepton, the electron, was discovered as far back as 1897 by J .J . Thompson. The electron neutrino (V8) was first postulated by Pauli in 1930 to explain the apparent violation of the law of conservation of energy in beta decay (the easily overlooked He was carrying away the missing energy). It was seen experimentally by Cowan and Reines in 1956 [7]. Two other electron-like objects, the muon and the tau particle, were discovered in 1937 [8] and 1975 [9] respectively. Each of these has a corresponding neutrino. The muon neutrino was discovered in 1962 [10], while the tau neutrino has yet to be experimentally confirmed. The hadrons and leptons are summarized in Tables 1.1 and 1.2 respectively. The forces between these particles are also well understood in the framework of the stande model. All interactions between particles are described by the exchange of another particle which carries the force. The four forces and their respective ex- change particles are listed in table 1.3. The quantum theory that describes the Electromagnetic and Weak forces is called Electroweak theory, a recent consolida- Table 1.2: The six Standard Model Leptons. Charge Mass (M eV/cz) electron -1 0.511 electron neutrino 0 < 5.1 x 10’6 muon -1 106 muon neutrino 0 < 0.27 tau -1 1,777 tau neutrino 0 < 31 Table 1.3: Vector Bosons and their respective forces. Force Charge Mass (M e V/ c2) photon Electromagnetic 0 0 gluon Strong 0 0 W:t Weak i1 80000 Z Weak 0 9 1000 graviton? Gravity ? ? tion of the earlier Weak theory and Quantum Electrodynamics (QED). The theory of the Strong force is called Quantum Chromodynamics (QCD). In QCD the color quantum number is the basis for all strong interactions much like charge is in elec- tromagnetic interactions. Unlike the photons of electromagnetic interactions, the exchanged gluons carry color and thus couple to themselves. There is no standard quantum theory for Gravity yet. 6 1.2 Variables for Hadron Collider Physics It is useful to define a. 
set of quantities that are consistent among experiments so that results can be compared. A natural variable for describing the scattering process is the cross section. In classical physics this is simply the effective area of the target, while in quantum physics it depends on the details of the interaction. In a scattering experiment the relationship between the event rate and the cross section is given by

R = \sigma L,    (1.1)

where R is the rate, σ is the cross section, and L is a measure of the beam flux, called the luminosity. Luminosity is measured in units of inverse area per second.

Different experiments have different detectors and cover various ranges of phase space, so what is actually measured is a differential cross section. The natural variables for describing phase space are momentum (p), energy (E), azimuthal angle (φ), and polar angle (θ). In hadron-hadron collisions the colliding partons typically have different momenta along the beam direction, giving the center of mass frame a Lorentz boost with respect to the lab frame. This means that in the lab frame the longitudinal (parallel to the beam) momenta will not necessarily sum to zero. Since the incoming partons have negligible transverse (perpendicular to the beam) momenta, however, the transverse momenta of the outgoing partons will sum to zero. Transverse momentum (pT) is therefore an important quantity in hadron collisions.

The polar angle θ, which is measured from the beam direction, is also not invariant under Lorentz boosts. It is useful to define a new angular variable, rapidity, which is a measure of the object's fractional momentum along the beam axis:

y = \tanh^{-1}\left(\frac{p_z}{E}\right).    (1.2)

Rapidity transforms under a Lorentz boost as y → y + constant, so distributions in rapidity are unaffected by the boost. In the high energy limit, where a particle's mass is much smaller than its energy, the rapidity of a particle becomes equal to its pseudorapidity (η):

\eta = -\ln\tan\left(\frac{\theta}{2}\right).    (1.3)

Pseudorapidity has an advantage over rapidity in that it can be calculated even when the particle's mass is unknown. An additional variable called detector pseudorapidity (ηdet) is sometimes used. Detector pseudorapidity assumes an interaction vertex at the center of the detector.
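These definitions are easy to verify numerically. The short sketch below (plain Python written for this discussion, not part of the D0 reconstruction chain, which was written in Fortran) evaluates equations 1.2 and 1.3 for an arbitrary example particle and shows how closely pseudorapidity approximates rapidity when the mass is negligible compared to the momentum.

```python
import math

def rapidity(E, pz):
    """Rapidity, y = tanh^-1(pz / E) = 0.5 * ln((E + pz) / (E - pz))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def pseudorapidity(theta):
    """Pseudorapidity, eta = -ln(tan(theta / 2)); needs only the polar angle."""
    return -math.log(math.tan(theta / 2.0))

# An example particle: a 30 GeV charged pion (m = 0.140 GeV) at a polar angle of 40 degrees.
theta = math.radians(40.0)
p, m = 30.0, 0.140
E = math.hypot(p, m)            # E^2 = p^2 + m^2
pz = p * math.cos(theta)

print(f"y   = {rapidity(E, pz):.5f}")
print(f"eta = {pseudorapidity(theta):.5f}")
# The two values agree to better than 0.01 percent, which is why eta is used
# as a stand-in for y whenever the particle mass is not known.
```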
1.3 Theoretical Underpinnings of Direct Photon Production

1.3.1 A Brief Introduction to QCD

Direct photons are produced in interactions between quarks and gluons, and understanding these interactions requires knowledge of QCD. The force between quarks and gluons is the Strong force, which has a coupling constant called αs. The probability for a specific process can be classified by the number of interaction vertices, and therefore by the number of times αs enters the calculation. QCD calculations are performed using the mathematical methods of perturbation theory. The theoretical prediction for a process of interest (in this case, direct photon production) can be expanded in powers of αs. If αs is small the contributions from higher order terms are negligible, which significantly simplifies the calculation. With large αs, higher order terms contribute more to the sum.

As stated in Section 1.1, the Strong force must be powerful enough to bind quarks tightly, and thus its coupling constant must be large. This would seem to make higher order terms large and render perturbation theory useless. Luckily, QCD is saved by the unusual distance dependence of the Strong force. When quarks are close together the force between them is small and they can be treated as free. This is called "asymptotic freedom". As the distance between quarks increases the force also increases, which gives rise to quark confinement. If the distance between quarks increases past a critical value, the energy in the Strong field creates a quark-antiquark pair. This new pair combines with the original quarks to create two new hadrons which are colorless, and therefore have no force between them. This process is called "hadronization". So αs becomes manageable at short distances and allows perturbation theory to work. At high energies (short distances) we can treat the quarks and gluons as essentially free partons.

1.3.2 Direct Photon Production in pp̄ Collisions

The Standard Model describes the proton as consisting of two up quarks and one down quark. These "valence" quarks are held together by exchanging gluons between them. In QCD a gluon has a non-zero probability of creating a quark-antiquark pair for a brief period of time. These quark pairs that blink in and out of existence in the proton are called "sea" quarks. In the Standard Model, therefore, the proton is made of valence quarks, sea quarks, and gluons. The momentum of the proton is carried by the quarks and the gluons in roughly equal parts.

Proton-antiproton scattering in the Standard Model is shown diagrammatically in Figure 1.1. The process A + B → C + D can be broken into three distinct parts. The first part involves the probability of finding a parton of given momentum inside the hadron. The probability of finding parton a within hadron A with a momentum fraction between x and x + dx is given by the Parton Distribution Function (PDF) G_{a/A}(x). The second part contains the perturbative hard scattering of the partons, a + b → c + d. Finally, the probability of obtaining particle C from parton c with a momentum fraction between z and z + dz is described by the fragmentation function D_{C/c}(z). The corresponding expression for the cross section for A + B → C + X (where X can be any outgoing particle) is

\sigma(AB \rightarrow CX) = \sum_{abcd} \int dx_a \, dx_b \, dz_c \; G_{a/A}(x_a) \, G_{b/B}(x_b) \, D_{C/c}(z_c) \, \hat{\sigma}(ab \rightarrow cd),    (1.4)

where the caret indicates a parton level cross section. Thus, the probability for a final state can be calculated by summing the parton level cross sections, once the appropriate parton distribution and fragmentation functions are known [11].

Figure 1.1: Schematic of proton-antiproton scattering.

Unfortunately, the PDF and fragmentation parts belong to the low energy, long distance regime, which cannot be calculated with QCD perturbation theory. They must be measured in experiment through processes such as direct photon production. When calculating direct photon production, A + B → γ + D, the fragmentation function D_{C/c}(z) can be replaced by 1, since the photon does not fragment and is detected directly by the experimental apparatus. This means the PDF part can be measured directly, without the additional ambiguity of a fragmentation function. A further advantage of studying direct photon processes is that the photon energy can be well measured by the experimental apparatus. The dominant QCD process at Tevatron energies is jet production, where an outgoing parton fragments into a "jet" of lower energy particles as a result of hadronization. There are ambiguities both in the definition of a jet and in deciding which particles belong to the original parton. In addition, the smeared out jet energy contributes to uncertainties in the experimental measurement. These problems are absent in direct photon measurements.
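To show how equation 1.4 is used in practice, the sketch below performs a crude Monte Carlo estimate of the double integral for a photon final state, where the fragmentation function is 1. The parton densities and the parton-level cross section are invented placeholder shapes in arbitrary units, not the real PDF sets or QCD matrix elements used later in this analysis.

```python
import numpy as np

# Toy illustration of the factorized cross section of equation 1.4 for
# A + B -> photon + X.  Every function here is a stand-in with a roughly
# realistic shape; none of them is a fit to data.

def g_gluon(x):           # toy gluon density, steeply falling with x
    return 2.0 * (1.0 - x) ** 5 / x

def q_up(x):              # toy valence-like quark density
    return 1.5 * np.sqrt(x) * (1.0 - x) ** 3 / x

def sigma_hat(s_hat):     # toy parton-level cross section, falling like 1/s_hat
    return 1.0 / s_hat

def toy_cross_section(sqrt_s=1800.0, n=100000, x_min=0.01):
    """Monte Carlo estimate of  sum_ab  integral dx_a dx_b  G_a(x_a) G_b(x_b) sigma_hat(x_a x_b s),
    keeping only the quark-gluon (Compton-like) combinations."""
    rng = np.random.default_rng(1)
    xa = rng.uniform(x_min, 1.0, n)
    xb = rng.uniform(x_min, 1.0, n)
    s = sqrt_s ** 2
    integrand = (q_up(xa) * g_gluon(xb) + g_gluon(xa) * q_up(xb)) * sigma_hat(xa * xb * s)
    volume = (1.0 - x_min) ** 2          # area of the sampled (x_a, x_b) square
    return volume * integrand.mean()     # arbitrary units

print(toy_cross_section())
```

A real prediction replaces these placeholders with measured parton distribution functions and the full set of subprocesses and higher order corrections discussed below.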
1.3.3 First Order Processes The advantage of direct photon physics as a probe of the gluon content of the pro- ton can be seen by examining the first order production processes. The two first order Feynman diagrams, called Gluon Compton Scattering and Quark-Antiquark Annihilation, are shown in Figure 1.2. At low values of the photon p7- the Gluon Compton Scattering process dominates, which makes the direct photon cross-section particularly sensitive to gluon distributions. In deep inelastic scattering, where a high energy electron is used to probe a proton, the gluon only enters as a second order process. 1.3.4 Higher Order Processes The number of processes that contribute to direct photon production increases sub- stantially in second order. Figure 1.3 shows a sampling of the Feynman diagrams that must be added into the calculation. As can be expected, the number of possible diagrams increases substantially at higher order. These corrections to leading order are potentially very small, but the inability to calculate to all orders can create other 12 GLUON COMPTON SCATTERING QUARK-ANTIQUARK ANNIHILATION ii 7 Figure 1.2: First Order Direct Photon Feynman Diagrams. theoretical headaches. Higher order diagrams can cause infinites in the calculation [12]. Loop diagrams lead to what are known as ultraviolet divergences. These loops are virtual states which can violate conservation of energy for a small amount of time. This violation can be arbitrarily large, which leads to an infinity when the loop is integrated over momenta. To avoid this disaster the integral is cut off at an arbitrary momentum. This procedure is called “renormalization” and introduces an arbitrary momentum scale a. This means that the strong coupling constant becomes dependent upon the scale factor, a, —> a,(a). If it were possible to calculate the theory to all orders the dependence on this renormalization scale would vanish. Various schemes for picking the arbitrary parameter p exist. The scale is set by the interaction, so choices on 13 206% 363% Figure 1.3: Examples of Next-to-leading Order Direct Photon Feynman Diagrams. 14 the order of the W of the event are common. The need to separate the theoretical prediction into a calculable part and a set of parton distribution functions introduces another parameter. This procedure is called “factorization” and denoted by the parameter A. The PDF’s depend on the choice of lambda, but can be evolved to other scales via the Gribov-Lipatov-Alterelli-Parisi (GLAP) evolution equation. As with a, the natural choice of A is on the order of the momentum of the event. For the theoretical predictions used in this analysis A is set equal to a. It is important to emphasize that the choice of scale can affect the theoretical cross section. 1.4 Previous Direct Photon Experiments The simplicity of first order processes and sensitivity to gluon distributions are ob- vious theoretical motivations for studying direct photon production. Unfortunately there are many ways of producing non-direct photons, primarily as the decay of neutral mesons. This background problem has been handled by the different ex- periments in one of two ways. The direct method eliminates the meson decays by reconstructing a mass between the photon decay products. The direct method re- quires a finely segmented detector to resolve the two photons, and can only be used at a low pT range (where the photon decay products are well separated spatially). 
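A short worked example makes this low-pT restriction concrete. The sketch below estimates the minimum opening angle between the two photons from a π⁰ of energy E and the resulting spatial separation at an assumed radius R; the value of R is a placeholder of the right order for a central calorimeter's first layer, not a documented detector dimension.

```python
import math

M_PI0 = 0.135    # pi0 mass in GeV
R = 0.9          # assumed radial distance to the first calorimeter layer, in metres (placeholder)

def min_separation(E):
    """Smallest separation (m) of the two photons from a pi0 of energy E (GeV).
    The minimum opening angle occurs for the symmetric decay, where each photon
    carries E/2 and sin(theta/2) = m/E, so theta_min = 2 * arcsin(m/E)."""
    theta_min = 2.0 * math.asin(M_PI0 / E)
    return R * theta_min

for E in (5.0, 10.0, 40.0, 100.0):
    print(f"E = {E:5.1f} GeV  ->  separation ~ {100.0 * min_separation(E):.1f} cm")
```

With the assumed radius, a 0.1 × 0.1 tower is roughly 10 cm across, so the two decay photons merge into a single cluster once the meson energy exceeds a few tens of GeV; this is why the direct method fails at collider transverse momenta and a statistical subtraction is used instead.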
The conversion method uses the fact that photons convert into electron-positron pairs in the presence of matter. Double photon clusters have a higher conversion probability than single photon clusters, so by measuring the conversion rate of the 15 Table 1.4: Previous Direct Photon Experiments. Beam fl Max. Place Background + Target (GeV) p7 Subtraction R806 1) + p 63 12 ISR Direct R108 p + p 45,63 12 ISR Conversion E95 p + Be 19,24 4 Tevatron Direct E629 p + C, 7r+ + C 19 5 Tevatron Direct AFS p + p, p + 13 53,63 10 ISR Direct R110 p + p 63 10 ISR Direct UAl p + 13 630 80 SppS Conversion UA2 p + 13 540,630 43 SppS Conversion UA6 p + p, p + p 24 7 SppS Direct NA3 p + C, 1ri + C 19 6 SPS Direct NA24 p + p, 7:"E + p 24 7 SPS Direct WA70 p + p, 1ri + p 23 7 PS Direct E705 p + D, «i + D 24 8 Tevatron Direct E706 p + C, «i + C 41 10 Tevatron Direct CDF p + p 1800 100 Tevatron Conversion sample the background can be subtracted statistically. The first experiment to find evidence for direct photon production was at the CERN ISR in 1976 [14] using the direct method. Since then there have been a substantial number of direct photon measurement covering a variety of kinematic ranges [13]. Table 1.4 contains a summary of some of the more modern direct photon experiments. Chapter 2 Experimental Apparatus 2.1 The Fermilab Tevatron Collider The accelerator at the Fermi National Accelerator Laboratory produces the high- est energy particles in the world. Protons and antiprotons are accelerated to 900 GeV. There are five distinct stages that bring the protons from rest to this high energy. Stage one is the Cockcroft-Walton which first adds electrons to hydrogen atoms, and then pulls these negative ions toward a positive voltage. The ions leave the Cockcroft-Walton with an energy of 750 KeV, about 30 times the energy of electrons in a television picture tube. The ions next encounter the linear accelerator, called the LINAC, which consists of drift tubes of increasing length. An oscillating electric field is applied to the drift tubes - the positive potential accelerates the negative ions. The negative potential is timed to coincide with the ions being inside the tubes, and therefore shielded from 16 17 \ TARGET HALL n, esteem" / ‘ i ’5“, H ‘- i F / cocxaorawaaron Figure 2.1: The Fermilab Tevatron Collider. the field. After leaving the LIN AC the ions are passed through a carbon foil which strips them of their electrons. The remaining protons enter the Booster synchrotron for stage three of the acceleration. The Booster is a ring 500 feet in diameter which consists of resonant cavities that accelerate the protons and magnets which bend the particles into a circular path. The beam circulates in the Booster about 20,000 times before it leaves, pumping the energy up to 8 GeV. The beam, actually a “bunch” of protons, then enters the Main Ring. The Main Ring is another synchrotron like the Booster, only 13 times larger, almost 4 miles in circumference. It lies in a 10 foot wide tunnel buried 20 feet underground. It consists of 1000 conventional copper-coiled magnets that bend and 18 focus the protons, which are accelerated to an energy of 150 GeV. The final stage is the Tevatron synchrotron, which occupies the same tunnel as the main ring. The Tevatron’s magnets are wound with superconducting wire which must be cooled to a temperature of —450 deg F by liquid helium. 
The superconducting magnets are needed to produce the large magnetic field necessary for bending the beam of protons which now reach an energy of 900 GeV. Antiprotons are produced by siphoning off some of the protons from the Main Ring and focussing them onto a target, usually made of nickel. The collisions pro- duce many secondary particles, some of which are antiprotons. These antiprotons are selected and transported to the Debuncher ring which sits in a separate antipro- ton tunnel (a triangular ring 500 ft per side). The Debuncher ring condenses the antiprotons into a bunch with a small range of constituent proton momenta by a process known as stochastic cooling. The antiprotons are then stored in the Ac- cumulator ring which occupies the same tunnel as the Debuncher ring. Then they are transferred to the Main ring where they circulate in the direction opposite to the protons. Finally they are injected into the Tevatron ring and ramped up to an energy of 900 GeV. The counter-clockwise rotating antiprotons collide with the clockwise rotating protons at two interaction points — BC and DO. During data run Ia the collider was operated in “six-on-six” mode, six bunches of protons collid- ing with six bunches of antiprotons. The time between colliding bunches in run Ia was 3.5 pace. The total integrated luminosity processed by the D0 experiment in run Ia was 16 pb". 19 2.2 The DC Detector Surrounding the Tevatron at the DO interaction region is the DO detector [15] (see Fig 2.2), weighing 5500 tons and standing 40 feet high. It can be separated into three subsystems — the Tracking, Calorimeter, and Muon detection systems. A particle travelling outward from the interaction region would first encounter the wires and chambers of the DO tracking system, which traces the path of all charged parti- cles. Next the particle would enter the liquid Argon Calorimeter, where electrons and hadrons would deposit their energy and stop. Muons, which have more mass than electrons and interact less often than hadrons, are able to punch through the calorimeter and hit the Muon tracking system. Their paths are bent by the iron toroid and their tracks seen in the muon chambers. Their momenta can be calcu- lated by the curvature of their tracks. The DC detector is thorough enough in its identification of particles over a large area of phase space to provide for many diverse physics analyses. The particular analysis discussed in this thesis does not use the full capabilities of this large ma- chine, but a brief discussion of all major detector subsystems is included below for completeness. 2.2.1 The D0 Tracking System The tracking system (see Fig. 2.3) consists of four separate detectors. The innermost region is covered by the Vertex Detector, which is used to pinpoint 20 l l ._J L : — ‘ : u l l J I ' \ \ i; \ \K‘ \ 2 \ \ . D6 Detector Figure 2.2: Cutaway view of the D0 detector. _‘_r_ P7 ;.___ hi: 6“ ,_- z) u I 21 l I 7 ll V I I l l I In Ill Ill lll ll] (1) 9 Central Drift Vertex Drift TranSitim Forward Drift Chamber Chamber Radiatim Chamber Detector F — I— .— — .— Figure 2.3: The DC tracking system. the proton-antiproton interaction vertex (and any possible secondary vertices). It covers the :i:2.0 region in pseudorapidity. Surrounding it in the central region (—1.2 < 17“ct < 1.2) is the Transition Radiation Detector which can be used to discriminate between electrons and pions. Furthest from the beam pipe are the drift chambers — the Central (—1.2 < 17“ct < 1.2) and Forward (1.4 (I 11““ I< 3.1) Drift Chambers. 
They track the path of a. charged particle and can also be used along with the Vertex Detector to determine the interaction point. Vertex Detector (VTX) The proton-antiproton collisions in the D0 interaction region do not always occur at the same spatial point. In fact, for the 1992-93 data run the vertex position could 22 Sense Grid Cathode Coarse Field Fine Field I ......... n 0 Figure 2.4: The DC Vertex Detector. be described by a Gaussian with a width of 25 cm and offset 8 cm from the center of the detector. The measurement of the transverse momentum of a particle depends on the correct determination of the vertex position. The VTX [16] has an inner radius of 3.7cm, an outer radius of 16.2cm and is 116.8cm long. It consists of three independent concentric layers of drift cells. Each layer is separated into azimuthal (45) sections — 16 for the inner layer and 32 for the two outer layers. As a charged particle passes through the VTX cells it ionizes the C 02 — ethane gas. The freed electrons then drift in an electric field and are collected on sense wires which provide a measure of the r — ¢ coordinate. The sense wires are 23 read out at both ends to provide a measurement of the position along the beam pipe (2) using charge division. Transition Radiation Detector (TRD) Charged particles radiate photons when passing through boundaries between regions of different dielectric constants. The energy depends on the Lorentz factor, which is inversely proportional to the square root of the mass of the particle. Therefore the amount of radiation from electrons is different from the amount from hadrons of the same energy. The TRD [17] takes advantage of this with three layers of 393 polypropylene radiator foils and an X-ray detector. The 18 pm thick foils are sepa- rated by a gap of 150 pm and are housed in an He filled enclosure. The X-ray detector is mounted just outside the radiator foils and contains a gas of XC(90%)02H6(10%). The radiated photons ionize the gas in the first few millimeters of the X-ray chamber and the charge is detected on the sense wires. Figure 2.5 shows a diagram of the first layer. The radiator and X-ray detector packages are separated by two mylar cylin- ders, between which dry N2 gas is circulated. This is done to prevent the radiator helium from contaminating the chamber gas. Drift Chambers (CDC and FDC) The Central and Forward Drift Chambers (CDC and FDC) operate on the same principle as the VTX. Charged particles passing through a gas liberate electrons which are collected on sense wires. Signals are induced by the sense wires on two 24 CROSS-SECTION OF TRD LAYER 1 OUTER CHAMBER SHELL 70am GRID WIRE ALUMINIZED MYLAR 8m 15mm . . O _____ _. _. .. _ _ _ _ _. _ 4 2mm’. mm C O O _____ _. _ _ __ _ _ _ __ C RADIATOR STACK CONVERSION . 0 STAGE 0] N2 30am ANODE WIRE / 100nm POTENTIAL WIRE 23pm MYLAR WINDOWS HELICAL CATHODE STRIPS Figure 2.5: Cross sectional view of TRD layer 1. The dashed lines denote one anode cell. 25 Figure 2.6: End view of the CDC. delay lines which are read out at each end and give the z-direction measurement. The CDC [18] is divided into four concentric layers parallel to the beam line (see figure 2.6). Each layer is subdivided into 32 43 cells. There are seven sense wires per cell and two delay lines. The CDC has an inner radius of 49.5cm and an outer radius of 75.4cm. The FDC [19] is a drift chamber like the CDC but with a radically different design. 
Whereas the CDC looks like a cylinder parallel to the beam, the FDC resembles a disk perpendicular to the beam. It is subdivided into three separate disks — two subdivided in 0 with one in d) sandwiched between (see figure 2.7). The 45 layer is divided into 36 azimuthal chambers, each with 16 sense wires that extend radially 26 Figure 2.7: FDC, exploded view. outward. The two 9 layers are divided into four quadrants. The quadrants consist of six chambers stacked radially, with eight sense wires per chamber. The two 0 layers are rotated by 1r / 4 with respect to each other. 2.2.2 The DC Calorimeter System After passing through the central tracking detectors, a particle will next en- counter the cryostat wall of the DC calorimeter system [15] (see Fig 2.8). Photons leave no tracks in drift chambers and never make it to the muon system, making the calorimeter the main detector system of interest for this work. There are in 27 or LIQUID ARGON CALORIMETER , h. END CALORIMETER Outer Hadronic (Coarse) Middle Hadronic (Fine & Coarse) , CENTRAL CALORIMETER Electromagnetic Inner Hadronic Fine Hadronic (Fine & Coarse) Coarse Hadronic Electromagnetic \ \\_// t Figure 2.8: Cutaway view of the D0 Calorimeters. 28 Resistive Coat Absorber Plates —-— G 10 Is I. At Gaps K 7 / II t II II II \\ 70u Pads Figure 2.9: Diagram of a calorimeter cell. fact three separate cryostats housing the three separate calorimeters — one Central Calorimeter (CC) and two Endcap Calorimeters (ECs). The are all liquid Argon sampling calorimeters. Liquid Argon is the medium ionized by the incident particles and sampling refers to the fact that only a fraction of the deposited energy is ac- tually measured (roughly 10 ‘76). The basic design of the calorimeter is a sandwich of liquid Argon, readout boards, and absorber plates. A particle deposits energy in the Argon by ionizing, and these electrons are collected and read out by the signal boards. The absorber plates absorb the particle’s energy and cause it to shower, i.e. create secondary particles. Figure 2.9 shows a diagram of a calorimeter cell. The Central Calorimeter covers the pseudorapidity region of | 11““ |< 1.2. It is 29 divided radially into three different sections — the Electromagnetic (EM), the Fine Hadronic (FH), and the Coarse Hadronic (CH). The EM section is the innermost region and contains four separate layers. The absorber used is uranium, which is dense enough to provide good stopping power with a limited volume. Lepton and photon showers are usually wholly contained in the EM section, which makes it the most important section for this analysis. It is divided into four depth layers (EM1-4) with longitudinal segmentation of 2,2,6.8, and 9.8 radiation lengths. The FH section is next, which also uses uranium as the absorber. There are three FH depth layers. The bulk of hadronic showers are contained in the FH layers. Farthest from the beam pipe is the single layer of the CH section, used to contain those rare hadronic showers that punch through the PH. Copper plates are used as the absorber in the CH. The two Endcap Calorimeters cover the region of 1.5 <| 11““ |< 4.5. Like the CC, they are divided into a EM, FH, CH sections. Unlike the CC, the FH section is divided into four depth layers rather than three. The absorber used in the CH section is stainless steel. Both calorimeters are segmented in projective towers of A1) x A45 = 0.1 X 0.1 which point back to the nominal interaction point (see figure 2.10). 
The third EM layer is further subdivided into cells of A17 x A45 = 0.05 x 0.05. This allows for additional shower shape pattern recognition. The calorimeters were calibrated and studied at the DC test beam [20]. Mo- noenergetic beams of electrons and pions from 2 to 150 GeV were aimed at various 30 a) DIV-POOQCDAN O Figure 2.10: Side view of one quadrant of the calorimeter showing the projective tower geometry. 31 sections of the calorimeter. The response was found to be linear above 10 GeV. The resolution for electrons and pions was measured and found to be 15% / x/E and 50% / JD- respectively. The region between the CC and ECs, 0.8 <| 17““ I< 1.4, consists mostly of calorimeter support structures and cryostat walls. This creates an area where energy is not well measured. To correct for this shortcoming two additional calorimetric devices were added. The first are called Massless Gaps which are mounted onto the inside of the calorimeter cryostats. They consist of two signal boards which collect the ionization energy deposited in the liquid Argon near the cryostat wall. A second type of detector, called the Inner Cryostat Detector (ICD), is mounted between the cryostats. The ICD consists of 384 scintillator tiles which are read out by phototubes. 2.2.3 The DC Muon system The DC Muon System consists of five iron toroids and three superlayers of single wire Proportional Drift Tubes (PDTs). The first layer, called the A layer, consists of four sublayers of PDTs and is mounted on the inner face of the toroid magnet. Hits in the A layer are formed into tracks which point back to tracks in the Central Detectors. The other two superlayers, B and C, are mounted on the outside of the magnet and each contain three sublayers of PDTs. Tracks in the B and C layers are matched with the A layer tracks. The muon momenta can be determined by the amount of deflection caused by the magnetic field in the toroid. The Wide Angle Muon System (WAMUS) covers the region of | 11““ I< 2.5 and the Small Angle Muon _ ’7— 32 System (SAMUS) extends the coverage to I 11““ |< 3.6. 2.2.4 The DC Trigger and Data Acquisition System The time between colliding bunches of protons and antiprotons in Tevatron run Ia was 3.5asec, or 286,000 bunch crossings per second. While the rate of inelastic colli- sions (interactions where the proton breaks apart) depends on the beam luminosity, a typical rate in run Ia was on the order of 200,000 times per second. As can be imagined, reading the 150,000 channels of information from the detector is not pos- sible at this rate. A typical event contained on the order of 250 kilobytes of data. The data was written to magnetic tape at a rate of 2 events per second. The difficult task of reducing the event rate from 200kHz to 2Hz is handled by the DO Triggering system. The DC Trigger system consists of three levels. Each subsequent level is more restrictive, yet slower and more precise, than the preceeding one. For an event to make it in to the data stream it must pass the requirements of each level. Level 0 The first level of triggering is the Level 0 detector [21]. It consists of two scintillator hodoscopes mounted on the inside faces of each of the EC cryostats, 140cm from the center of the detector. Each hodoscope has two perpendicular planes of 28 scintillating tiles. They cover the pseudorapidity range of 1.9 <| 17““ |< 4.3. In the event of an inelastic collision both hodoscopes will fire (with ~ 99% effi- 33 ciency). The vertex 2 position can be determined by timing information. 
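The timing measurement can be sketched in a few lines. The example below uses the quoted 140 cm hodoscope positions, but it is only an illustration of the principle; the actual Level 0 electronics apply calibration offsets and resolution corrections not shown here.

```python
C = 29.98  # speed of light in cm/ns

def level0_vertex_z(t_south_ns, t_north_ns):
    """Collision z (cm) from the arrival-time difference at the two Level 0
    hodoscopes, taken to sit at z = -140 cm (south) and z = +140 cm (north).
    A collision displaced toward one hodoscope is seen there earlier, so
    z = c * (t_south - t_north) / 2."""
    return 0.5 * C * (t_south_ns - t_north_ns)

# Example: a vertex 30 cm toward the north (+z) hodoscope.
t_north = (140.0 - 30.0) / C
t_south = (140.0 + 30.0) / C
print(level0_vertex_z(t_south, t_north))   # ~30 cm
```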
The Level 0 detector can also flag possible multiple interaction events. The trigger can be set up to require a single vertex, multiple vertices, a 2 position in a specific range, or any combination thereof. Level 1, The Hardware Trigger Events passing the Level 0 requirements are sent to the Level 1 triggering system [22]. The Level 1 trigger provides a decision before the next bunch crossing, but with limited programmability. The Level 1 system can be programmed with up to 32 different requirements. Each of these separate requirements, called triggers, can be optimized for a specific physics analysis. Level 1 decisions use information from two detector systems — the Calorimeter and Muon detectors. The Calorimeter Level 1 Trigger makes fast hardware sums in trigger towers of 0.217 x 02¢. Each tower has both an electromagnetic sum (using the EM section) and a total (using the EM and FH sections). Cuts can be applied on the number of towers above a programmed threshold set, which can specify different thresholds for each tower. The Muon Level 1 Trigger counts the number of muon tracks in each region of the Muon system. If additional processing is required (for example, a cut on the muon p1) a Level 1.5 decision is requested. This incurs a detector deadtime of one beam crossing. 34 Level 2, The Software Trigger The Level 2 system [23] consists of a farm of 50 VAXstation 4000/60 processors which run Fortran analysis code. When one of the 32 Level 1 triggers has fired the full detector information is digitized and sent to one of the VAXstations. Each processor is loaded with the same code, which is modularized into software “tools”. The processor runs the appropriate tools associated with the Level 1 trigger. There can be more than one Level 2 trigger associated with each Level 1 trigger. Events that pass the Level 2 trigger are copied to tape to be analyzed further by an offline computer farm. 2.3 A Brief History of the D0 Experiment In 1981 the Fermilab directorate issued a call for expressions of interest in a detector to be built at the DO interaction point on the Tevatron accelerator ring. The new detector would take advantage of a Fermilab upgrade which would enable proton and antiproton collisions at a center-of-momentum energy of 1.8 TeV, the highest in the world. A detector had already been approved for construction at the B0 crossing point in 1978. The DC detector was planned to be a smaller and less expensive complement. The proposals submitted are summarized in table 2.1. By late 1982 some merging of the groups was already taking place. Early 1983 saw the discovery of the W and Z bosons at CERN and the cancellation of ISABELLE, a proton-proton collider slated to be built at Brookhaven. With this in mind the Physics Advisory 35 Table 2.1: Proposals submitted for a detector at the DC interaction region of the Tevatron. 
'|'- Withdrawn, 1- Approved | # Spokesperson Description Proposed Rejected 709 Longo forward detector 1/ 1 1 / 82 6 / 23 / 82 712 Rappl muon production 2 / 1 / 82 6 / 23 / 82 713 Price heavily ionizing particle search 2 / 5 / 82 7 / 1 / 83 714 Grannis LAPDOG (calorimetric) 2 / 5/ 82 7/ 1 / 83 717 Lach forward detector 3 / 19 / 82 6 / 23/ 82 718 Erwin calorimetric detector 4 / 1 / 82 6 / 23 / 82 722 Kenney streamer chambers 10 / 1 1 / 82 2 / 18 / 82 724 Longo calorimetric detector 10 / 26 / 82 7 / 1 / 83 725 Goulianos diffractive dissociation 1 1 / 1 / 82 7 / 1 / 83 726 Abolins calorimetric detector 11/1/82 7 / 1 / 83 727 Rosen forward calorimeter 11 / 2 / 82 6/16/831 728 Green a production (supersedes 712) l 1/ 1 / 82 7 / 1 / 83 736 Adair free quark search 4 / 1 1 / 83 7 / 1 / 83 740 Grannis D0 9/9/83 8/10/841 Committee decided to reject all DO proposals and recommend the construction of a more ambitious detector and asked Paul Grannis to head that effort. The current DC detector is the result of the merging of proposals 714, 726, and 728. Chapter 3 Data Sample 3. 1 Triggers The direct photon event rate decreases quickly with increasing photon transverse energy. In order to populate the full range of pr, from 10 to 100 GeV, the direct photon triggers were split into three streams — low, medium, and high. The rate of events with low pT photons is too large for the D0 DAQ system and only a fraction of events can be written to tape. This rate reduction is done by statistically ignoring some events, called prescaling. A prescale rate of 100 means that only one out of every 100 events is accepted by the DAQ system. The medium photon trigger was also prescaled for most of run Ia. Prescaling is not needed for the high photon trigger, because its rate is low enough for the DAQ system to accept every event. 36 37 3.1.1 Level 1 The three Level 1 triggers used involved simple cuts on the electromagnetic energy in a trigger tower. A trigger tower is a collection of 4 calorimeter readout towers ganged together, with a transverse width of A17 x Ad = 0.2 X 0.2. The cuts were set at 2.5, 7, and 12 GeV for the low, medium and high triggers respectively. Often the energy of a photon candidate is shared between two trigger towers which can cause the energy in any one tower to be less than threshold. This effect makes a trigger inefficient for candidates with an energy close to the trigger threshold. This effect was measured from data by taking events which had passed a lower threshold trigger and observing the pass rate for a higher threshold trigger (see figure 3.1). Since a lower efficiency could create a bias in the candidates that passed, only candidates with an E7» high enough for the trigger to be fully efficient were used in this analysis. 3.1.2 Level 2 The Level 2 trigger used a list of candidate towers from Level 1 as seeds and searched for the highest energy EM3 cell in the tower. It then clustered the calorimeter energy in the EM and FHl layers around the peak cell in a window of A1) x Ad = 0.3 x 0.3. Cuts were applied to the candidate cluster in the following order: 0 The transverse energy of the candidate cluster must be above a specified thresh- old. The three Level 2 thresholds used in run Ia were 6, 14, and 30 GeV. 
Figure 3.1: Turn-on curve for the Level 1 high ET electromagnetic calorimeter trigger.

• The hadronic energy of the cluster (i.e., the energy in the hadronic section of the calorimeter) must be less than 10% of the total energy.

• The energy deposited in the EM3 layer must be greater than 10% and less than 90% of the total.

• The shower shape in the plane transverse to the particle direction must be consistent with electron showers from the test beam. This cut defines a lateral spread variable, which is simply the difference between the second moment of energy in a 0.5 × 0.5 window and the second moment in a 0.3 × 0.3 window. This number is expected to be low for photon and electron showers. The actual cut value varies with η.

• The candidate must pass an isolation cut, defined as

\frac{E_T^{r=0.4} - E_T^{cluster}}{E_T^{cluster}} < 15\%,    (3.1)

where r = 0.4 denotes a cone of radius 0.4 around the candidate, with

r = \sqrt{(\Delta\eta)^2 + (\Delta\phi)^2}.    (3.2)

Turn-on curves were plotted for the Level 2 trigger as well, and data were used only in the region of 100% efficiency (see Fig. 3.2).

Figure 3.2: Turn-on curve for the Level 2 high ET trigger.

3.2 Offline Processing

Events that have satisfied the Level 2 selection criteria were written to 8 mm magnetic tape. These tapes were then stored for further processing. Unlike Level 2, the offline processors are not subject to stringent time constraints and can therefore run more sophisticated algorithms. A farm of 100 Unix processors is used to run the D0 reconstruction program (DORECO).

3.2.1 The D0 Reconstruction Program

The raw detector information is converted to physics information by the DORECO program. DORECO is a huge program, with over 1 million lines of code, written primarily in FORTRAN. It is modularized into separate sections which analyze the data from the separate detector subsystems. There are two primary sections used for this analysis, called ZTRAKS and CAPHEL. ZTRAKS uses the central tracking system to identify the z position (position along the beam line) of the primary interaction vertex. CAPHEL is used to cluster cells of energy deposits in the calorimeter and identify them as photons or electrons.

ZTRAKS

Since the calorimeter measures energy and not momentum, transverse energy (ET) becomes the primary quantity of interest. At high energy, ET and pT are roughly equivalent. ET is defined as

E_T = E \sin\theta,    (3.3)

where θ is the polar angle measured from the beam axis.

Measurement of ET therefore depends upon measurement of the coordinate system origin, or vertex. The beam spot has a small cross section, making the vertex stable in the x-y plane. As Fig. 3.3 shows, the vertex in the z direction can vary by as much as 1 meter from event to event. ZTRAKS works by reconstructing charged particle tracks in the CDC and projecting them back to a vertex. The track reconstruction proceeds as follows:

• Reconstruct tracks in the r-φ plane from aligned hits in the drift chambers. The r-φ tracks are then reconstructed in the r-z plane. Project the reconstructed tracks onto the z axis (r = 0).
The z-position of each track is then histogrammed in 2 cm bins. The histogram bin with the largest number of tracks is combined with its neighboring bins to form the primary cluster. Smaller clusters form secondary vertices. If no bin contains more than two tracks, the largest contiguous set of bins is used to determined the primary vertex. The cluster mean 2 and error on the mean define the z vertex position and resolution. 0 If there is only one track in the event the z-position at r = 0 from that track is used for the primary vertex. 43 Vertex; Position ‘2w I. Mean -7.884 RES 25.57 I I f o LLLlLJJL+LAllAAllAAlLAllALAlL A A_—A -100 —60 -60 -‘0 ~20 0 20 ‘0 60 80 100 Figure 3.3: 2 vertex position. The average posi- tion of the vertex in run Ia was not zero. The resolution of the vertex position depends upon the number of tracks, but is typically approximately 1cm. Figure 3.3 shows a plot of the reconstructed vertex for the run Ia direct photon events. CAPHEL, CAlorimeter PHotons and ELectrons The CAPHEL reconstruction package creates clusters of calorimeter cells using a nearest neighbor algorithm. Readout towers (0.1 X 0.1 in r) X d) with more than 1.5 GeV of ET are used as seed towers. Neighboring towers (towers that share a common border with the seed) are added to the cluster if their ET is above 0.05 GeV. Then towers bordering this cluster are added. The cluster is expanded in this way until no neighboring towers above threshold are found. 44 3.2.2 Photon Identification Photons are uncharged and therefore leave no tracks in the tracking chambers. They deposit energy in the EM section of the calorimeter in showers with a relatively small transverse size. The cluster is also expected to be well isolated from othdr activity in the event, as can be seen from the leading order direct photon Feynman diagrams. The photon signature is therefore a small cluster of cells in the EM calorimeter with no associated track. This motivated the Level 2 cuts detailed above, and an additional stricter set of cuts were applied offline to reduce the number of background events in the sample. Fiducial cuts were also applied offline to restrict the photon candidates to an active region of the calorimeter, where the photon energy was well measured. The fiducial cuts used were: 0 The cluster must be in the Central Calorimeter with a detector 1] (11‘1“) coor- dinate of less than 1.0 (corresponds to a polar angle between about 40° and 140°). Detector 1] assumes a z-vertex of 0 cm. 0 The cluster must have a physics 1] of less than 0.9. Physics 1] used the z-vertex position found by ZTRAKS. o The z-vertex position of the event must be within 50 cm of the nominal inter- action point (206,,“ = 0). o The d coordinate of the center of the cluster must be at least 1.125° away from the detector d cracks. This puts the center of the cluster in the middle 80% of the detector module. 45 The other cuts applied offline were: 0 The cluster must have no reconstructed CDC track in a road of size A17 X Ad 2 0.1 x 0.1. o The electromagnetic fraction, defined as . EM Er=.2 EMFractzon — Total Er=.2 (3.4) of the cluster must be greater than 96%. 0 Less than 2GeV of ET in an isolation cone of radius 0.4 around the cluster (see figure 3.4): E}=-4 — E?“ < 2 GeV. (3.5) Note that the offline isolation cut was independent of photon ET, while the trigger isolation cut was not. 0 The missing ET in the event was required to be less than 20 GeV (see figure 3.5). 
The final cut deserves further explanation. The statistical fluctuations present in the development of a calorimeter shower make cuts on any single variable inefficient. A multivariate cut can potentially take these fluctuations into account, raising the background rejection along with the efficiency relative to any single cut [24]. A covariance matrix was defined for a sample of N test beam electrons:

  M_{ij} = \frac{1}{N}\sum_{n=1}^{N}(x_i^n - \bar{x}_i)(x_j^n - \bar{x}_j),   (3.6)

where x_i^n is the value of observable i for electron n and \bar{x}_i is the mean value of observable i for the sample. Once the matrix is tuned on a signal sample, a χ² can be computed for every candidate k:

  \chi^2 = \sum_{ij}(x_i^k - \bar{x}_i)\,H_{ij}\,(x_j^k - \bar{x}_j),   (3.7)

with

  H = M^{-1}.   (3.8)

Notice that if the off-diagonal elements of the H matrix are zero (i.e. there are no correlations between the observables), equation 3.7 reduces to the familiar definition of χ².

Thirty-seven H matrices, one for each η readout tower, were tuned on samples of test beam electrons. Forty-one observables were used:

• Fractional energy in EM layers 1, 2 and 4.
• Fractional energy in each cell in a 6 × 6 window around the cluster in EM layer 3.
• log₁₀(E).
• Vertex z position (z/σ_z).

Figure 3.6 shows distributions of this χ² for calorimeter showers from 25 GeV electrons and pions [25]. Electrons from W events are shown to agree very well with the test beam electron sample. It can be seen from the plot that requiring the χ² to be less than 100 provides excellent rejection of pions with good electron efficiency.

3.3 Efficiency

The efficiency for the selection criteria detailed above was measured with simulated direct photons, called the Monte Carlo sample. These events were created using CERN's GEANT [26, 27] package, which tracks the passage of particles through matter. Data taken with a minimum bias trigger (only a Level 0 requirement) were added to the simulated photon events to create the effect of detector noise and multiple interactions. The distributions from the Monte Carlo were found to agree with electrons from Z → ee and W → eν events (Z and W events yield an unambiguous signal).

Figure 3.7 shows the offline cut efficiency vs transverse energy for this Monte Carlo photon sample. The efficiency for each cut is defined as the number of candidates that survived the cut divided by the number before the cut. Since the cuts can be correlated, the individual efficiency of each cut depends upon the order of the cuts. The missing E_T cut is absent from figure 3.7.
This is because the Monte Carlo sample only simulated the photon in the event; the jet (or jets) balancing the photon was not simulated. The large amount of computer time necessary to track all the particles in a jet made full event simulation impossible. This caused the Monte Carlo sample to have missing E_T always equal to the transverse energy of the photon. The efficiency of the missing E_T cut was therefore derived by dividing the number of candidates which pass all cuts by the number which pass the cuts when the missing E_T cut is not applied. Figure 3.8 shows a plot of this efficiency vs the transverse energy of the photon candidates. This was then fit to eliminate the effect of an excessive number of hot cell or W → eν events in any one bin.

Figure 3.6: χ² distributions from test beam electrons, pions, and electrons from W → eν events.

Figure 3.7: Offline selection cut efficiency measured with the Monte Carlo photon sample (track veto, EM fraction, isolation, and H-matrix cuts, and their combination).

Figure 3.8: The efficiency of the missing E_T cut vs photon E_T. The line is a linear fit, Eff = A₀ + A₁E_T.

Figure 3.9 is a plot of the efficiency of the trigger for Monte Carlo photons which have passed the offline selection cuts. The trigger efficiency is 100% for most of the transverse energy range.

Figure 3.9: Trigger efficiency for Monte Carlo photons which have passed the offline selection cuts.

Chapter 4

Background Subtraction

As discussed in Chapter 1, the simplicity at the theoretical level makes direct photon production an inviting test of QCD. There are experimental issues, however, which make the measurement more problematic. The largest of these difficulties at high transverse energy is the subtraction of background from the data sample.

Jet production at Tevatron energies has a cross section about three orders of magnitude larger than direct photon production. A jet is composed of both charged and neutral particles. The charged hadrons in a jet leave most of their energy in the hadronic section of the calorimeter, while some neutral hadrons often have decay modes into photons and thus can leave a sizeable fraction of energy in the EM calorimeter. So a "typical" jet will create a shower with both hadronic and electromagnetic components. It will also be larger in transverse size than a photon shower (see Fig. 4.1). However, the number and type of particles that a jet will fragment into is a probabilistic process. Roughly one out of every 1000 partons will form a jet with 90% of its energy carried by one neutral hadron (see Fig. 4.2). It is these narrow electromagnetic jets that provide a substantial background to direct photon production.

Figure 4.1: Width of jet candidates from data.

Figure 4.2: Electromagnetic fraction of jet candidates from data. Roughly 4/1000 jets have an electromagnetic fraction higher than 90%.
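Taken together, these two numbers give a rough sense of the scale of the problem (a back-of-the-envelope estimate, not a result of this analysis): with \sigma_{jet} \approx 10^{3}\,\sigma_{\gamma} and a probability of order 10^{-3}–4\times10^{-3} for a jet to fragment into a photon-like electromagnetic cluster,

  \sigma_{background} \approx 10^{3}\,\sigma_{\gamma} \times (1\text{–}4)\times10^{-3} \approx (1\text{–}4)\,\sigma_{\gamma},

so even before the shower-shape and isolation cuts the background is comparable to or larger than the signal, which is consistent with the photon fractions well below unity found later in this chapter.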
The neutral mesons in jets that can provide a background to direct photon production are listed in Table 4.1, along with their branching ratios [28] and production ratios [29]. Of these, only two (π⁰ and η) were found to contribute substantially to the candidate sample. Mesons which decay into more than two photons are often rejected by the isolation cut. For example, the K_s⁰/π⁰ production ratio is 0.4, but after photon cuts this is reduced to less than 0.05.

Table 4.1: Neutral Meson Background.

  particle   mass (GeV)   σ/σ_π⁰   decay    branching ratio
  π⁰         0.135        1        γγ       0.99
  η          0.547        0.55     γγ       0.39
  η          0.547        0.55     3π⁰      0.32
  K_s⁰       0.494        0.40     2π⁰      0.31
  ω          0.781        0.50     π⁰γ      0.09
  η′         0.958        1        π⁰π⁰η    0.21

Some previous direct photon experiments subtracted the neutral meson background by constructing an invariant mass between the decay products (photons). This technique can be used only if the calorimeter granularity is fine enough to resolve the photons as separate clusters. The minimum distance between two photons from a π⁰ or η decay is

  d_{min} = \frac{2\,m_{\pi^0/\eta}\,L}{E},   (4.1)

where m_{\pi^0/\eta} and E are the mass and energy of the π⁰ or η, and L is the distance from the decay point. At the first layer of the DØ calorimeter L = 75 cm. This separation is plotted vs energy for π⁰'s and η's in figure 4.3. As can be seen from the plot, the segmentation of the DØ calorimeter is not fine enough to resolve the photons from a π⁰ or η decay in the energy range of interest. The two photons coalesce to create one electromagnetic shower in the calorimeter. However, there are differences between single and multi-photon showers that can be exploited. Statistical fluctuations in calorimeter shower development make event-by-event background subtraction impossible, but the differences in shower development can be used to estimate the amount of background in the sample on a statistical basis.

Figure 4.3: Minimum separation between photons from π⁰ and η decays at the first layer of the calorimeter. The horizontal line denotes the cell size at pseudorapidity of 0.

4.1 Longitudinal Shower Profile Method

Photons lose energy in matter by three main processes: photoelectric absorption (γ + e → e′), Compton scattering (γ + e → γ′ + e′), and pair production (γ → e⁺e⁻). At photon energies above 1 GeV pair production is by far the dominant phenomenon. Therefore a photon will pair-produce before it deposits energy into the calorimeter. The probability that a photon pair-produces depends on the number of radiation lengths of material the photon traverses (a radiation length is defined as the thickness of a given material required to reduce the mean energy of an electron beam by a factor of e). If the probability of one photon converting to an electron-positron pair is P_γ, then the probability that at least one photon from a π⁰ converts is

  P_{\pi} = 2P_{\gamma} - P_{\gamma}^2.   (4.2)
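The two formulas above are easy to evaluate numerically. The short Python sketch below uses the meson masses from Table 4.1 and L = 75 cm from the text; the function names and the example numbers are illustrative only.

```python
M_PI0 = 0.135   # GeV, from Table 4.1
M_ETA = 0.547   # GeV
L_EM1 = 75.0    # cm, distance to the first EM layer

def d_min(meson_mass, energy):
    """Minimum opening distance of the two decay photons at the first EM
    layer, equation (4.1): d_min = 2 m L / E (small-angle approximation)."""
    return 2.0 * meson_mass * L_EM1 / energy

def p_pi0_convert(p_gamma):
    """Probability that at least one photon from a pi0 converts, eq. (4.2)."""
    return 2.0 * p_gamma - p_gamma ** 2

# e.g. a 20 GeV pi0: its two photons arrive only ~1 cm apart, well below the
# readout cell width at eta = 0, so they merge into one EM cluster; and for
# any single-photon conversion probability P, the pi0 converts more often.
print(d_min(M_PI0, 20.0), p_pi0_convert(0.5))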
Figure 4.4: Histogram of the fractional energy contained in the first layer of the calorimeter for 10 GeV Monte Carlo photons and π⁰'s.

Multi-photon backgrounds tend to convert and shower earlier than single photons, as figure 4.4 demonstrates. A substantial fraction of photons leave no energy in the first layer of the calorimeter. This difference can be used to estimate the fraction of single photons in the data sample. If a given cut has an efficiency of ε_γ, ε_π, and ε_data for photons, background, and data respectively, then

  \epsilon_{data} N_{data} = \epsilon_{\gamma} N_{\gamma} + \epsilon_{\pi} N_{\pi},   (4.3)

where N is simply the number of candidates in each sample. Equation 4.3 can be rearranged to solve for the fraction of photons in the data sample:

  Photon\ Fraction,\ \gamma = \frac{N_{\gamma}}{N_{data}} = \frac{\epsilon_{data} - \epsilon_{\pi}}{\epsilon_{\gamma} - \epsilon_{\pi}},   (4.4)

using N_π = N_data − N_γ.

A discriminant between photons and background was devised using the energy deposited in the first layer of the calorimeter. An ε was defined for each sample as

  \epsilon = \frac{\text{Number of candidates with } EM1/E < 1\%}{\text{Total number of candidates}}.   (4.5)

Figure 4.5 shows the behavior of this variable vs transverse energy for the three samples. The fraction of candidates that are single photons can be found from equation 4.4. Figure 4.6 shows a plot of this photon fraction vs transverse energy. Table 4.2 contains the numerical values of the data points.

The choice of the discriminant value is not arbitrary, but is based on two different criteria. The value must be chosen from a region of the distribution that is modeled well by the Monte Carlo. The Monte Carlo distributions were compared with Z → e⁺e⁻ data, as shown in figure 4.7. The 1% value was also chosen because it maximizes the difference between single and multi-photon showers, and therefore reduces the error on the photon fraction (the error is proportional to (ε_γ − ε_π)⁻¹). If the Monte Carlo were in perfect agreement with the data, the value of the photon fraction would not depend on the choice of discriminant value.

Figure 4.8: Photon fraction for different values of the EM1/E discriminant. The lines are fits to the three sets of data points.
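A minimal sketch of the EM1 discriminant method of equations 4.3–4.5 is shown below. It assumes hypothetical arrays of per-candidate EM1/E values for the data and the two Monte Carlo samples; it is an illustration of the arithmetic, not the analysis code.

```python
import numpy as np

def epsilon_em1(em1_over_e, threshold=0.01):
    """Fraction of candidates with EM1/E below the discriminant value (eq. 4.5)."""
    return float(np.mean(np.asarray(em1_over_e) < threshold))

def photon_fraction(em1_data, em1_mc_photons, em1_mc_background):
    """Photon purity of the data sample from equation (4.4):
    gamma = (eps_data - eps_bkg) / (eps_gamma - eps_bkg)."""
    eps_d = epsilon_em1(em1_data)
    eps_g = epsilon_em1(em1_mc_photons)
    eps_b = epsilon_em1(em1_mc_background)
    return (eps_d - eps_b) / (eps_g - eps_b)
```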
4.2 The CDC Conversion Method

The Central Drift Chamber (CDC) was used in this analysis to tag conversions. Roughly 10% of photons convert in the material in front of the CDC (the exact conversion probability depends on η). DØ has no central magnetic field, so electron-positron tracks from γ → e⁺e⁻ do not curl in opposite directions and tend to lie on top of one another. Conversion tracks are identified as single tracks with twice the minimum ionizing energy.

This analysis uses a sample of candidates which fulfill the selection cuts detailed in Chapter 3 with the exception of the track veto cut. The sample therefore contains the same photons and background candidates as the previous analysis, plus a sizeable portion of electrons and photon conversions. The electrons typically leave ionization in the CDC which is consistent with 1 minimum ionizing particle (mip), and photon conversions leave 2 mip ionization (see figure 4.9). One mip was defined as CDC dE/dx between 0 and 1.4, and two mips was defined as between 1.4 and 3.0. Tracks were also required to match the centroid of the calorimeter shower with a significance of less than 5. Track significance is defined as

  \sigma_{trk} = \sqrt{\left(\frac{R\,\Delta\phi}{\sigma_{R\phi}}\right)^2 + \left(\frac{\Delta z}{\sigma_z}\right)^2},   (4.6)

where R is the radial distance from the vertex to the center of the shower, Δφ and Δz are the differences in azimuthal angle and beam direction between the track position and the shower center, and σ_{Rφ} and σ_z are the position resolutions of the calorimeter. Candidates which had a track but did not pass the significance cut were dropped from the sample, rather than reclassified as 0 mip.

Figure 4.9: CDC dE/dx distribution for electromagnetic clusters.

4.2.1 Matrix Formulation

The three final states of 0, 1, and 2 mip candidates are a mixture of the three initial states of photons, neutral mesons, and electrons. The transition from initial particle to final candidate can be modeled with a matrix formulation [31]:

  \begin{pmatrix} 0\,mip \\ 1\,mip \\ 2\,mip \end{pmatrix} = \begin{pmatrix} 3\times3\ \text{transformation matrix} \end{pmatrix} \begin{pmatrix} N_{\gamma} \\ N_{e} \\ N_{\pi} \end{pmatrix}.   (4.7)

The transformation matrix describes the conversion and tracking process, determined from Monte Carlo and data. It can be further broken down into four separate matrices:

  \begin{pmatrix} \text{tracking} \\ \text{efficiency} \\ \text{matrix} \end{pmatrix} \begin{pmatrix} \text{charged} \\ \text{overlaps} \\ \text{matrix} \end{pmatrix} \begin{pmatrix} \text{conversions \& mip} \\ \text{transformation} \\ \text{matrix} \end{pmatrix} \begin{pmatrix} \text{neutral} \\ \text{overlaps} \\ \text{matrix} \end{pmatrix}.   (4.8)

The matrices do not commute and therefore the order is not arbitrary.

Neutral Overlap Matrix

There is a small chance that the underlying event can contribute a soft (low transverse energy) particle which will fall in the same tracking road as the particle from the hard scattering. The neutral overlap matrix accounts for these particles which are uncharged:

  \begin{pmatrix} \text{neutral} \\ \text{overlaps} \\ \text{matrix} \end{pmatrix} = \begin{pmatrix} 1-V_1 & 0 & 0 \\ 0 & 1 & 0 \\ V_1 & 0 & 1 \end{pmatrix}.   (4.9)

The probability of a neutral overlap, V₁, was studied from data and found to be very small (< 1%). The effect of this matrix is to move some of the single photons (N_γ) to the multi-photon category (N_π). The number of electrons remains unchanged.

Conversion and MIP Transformation Matrix

The next matrix applied handles the probability that a photon will convert into an e⁺e⁻ pair, along with accounting for the finite resolution of the dE/dx measurement. The matrix can be represented as

  \begin{pmatrix} \text{conversions \& mip} \\ \text{transformation} \\ \text{matrix} \end{pmatrix} = \begin{pmatrix} 1-P & 0 & (1-P)^2 \\ L_1 P & Y & 2L_2 P(1-P) \\ S_1 P & X & 2S_2 P(1-P) \end{pmatrix}.   (4.10)

The probability that a photon will convert is P. L₁ represents the probability for photons that the converted electron-positron pair will be separated enough for the track to be classified as one mip, and L₂ is the corresponding probability for photons from neutral mesons. S₁ and S₂ are the probabilities for photons and background that the track will be identified as 2 mip. Note that it is not necessary for L₁ + S₁ (and L₂ + S₂) to equal one, since there is a small probability that the track will be identified as greater than 2 mip. X and Y represent the probability that a single electron track will be identified as a 2 or 1 mip track, respectively.

The values of these six parameters were measured from a well understood tracking simulation. The background was composed of π⁰'s, η's, and K_s⁰'s in a 1 : 0.55 : 0.4 production ratio and was found to be fairly insensitive to changes in this ratio. These parameters do depend on transverse energy, as expected (see table 4.3); decay products become more collimated at higher energies, leading to tighter tracks.

Table 4.3: CDC Conversion Method Parameters.

        25 GeV   40 GeV   50 GeV
  X     0.021    0.022    0.023
  Y     0.935    0.931    0.925
  L₁    0.011    0.009    0.002
  L₂    0.049    0.031    0.025
  S₁    0.931    0.940    0.943
  S₂    0.745    0.852    0.967
Charged Particle Overlap Matrix

Charged particles leave tracks without converting, so the Charged Particle Overlap Matrix is applied after the Conversion Matrix. The matrix is defined as

  \begin{pmatrix} \text{charged} \\ \text{overlaps} \\ \text{matrix} \end{pmatrix} = \begin{pmatrix} 1-V_3 & 0 & 0 \\ V_2 & 1-V_3 & 0 \\ 0 & V_2 & 1-V_3 \end{pmatrix}.   (4.11)

The two probabilities, V₂ and V₃, arise from the two different effects that charged particle tracks can create. V₃ (7.5 ± 0.5%) is the probability that a charged overlap will fall inside the tracking road, while V₂ (1 ± 0.2%) is the probability that the track will pass the significance cut. Thus while 7.5% of unconverted photons are lost from the 0 mip sample, only 1% are recovered in the 1 mip sample [30].

Tracking Matrix

The tracking matrix encompasses the efficiency of the CDC along with the software track finding algorithm:

  \begin{pmatrix} \text{tracking} \\ \text{efficiency} \\ \text{matrix} \end{pmatrix} = \begin{pmatrix} 1 & 1-T & 1-T \\ 0 & T & 0 \\ 0 & 0 & T \end{pmatrix}.   (4.12)

The tracking efficiency, T, is measured from Z → e⁺e⁻ events to be 0.87 ± 0.04 in the CDC.

Photon Fraction Calculation

The photon fraction can be calculated from equation 4.7 by solving for N_γ, N_e, and N_π with the number of candidates in the 0, 1, and 2 mip samples as inputs. The complicated matrix manipulation was solved using the MAPLE mathematical package. The number of photons in each of the three samples (0, 1, and 2 mip) can then be found by multiplying the vector

  \begin{pmatrix} N_{\gamma} \\ 0 \\ 0 \end{pmatrix}   (4.13)

by the transformation matrix. The fraction of photons in the 0 mip sample for different transverse energy bins is shown in table 4.4.

Table 4.4: Photon Fraction from the CDC Conversion Method.

  E_T Bin (GeV)   Photon Fraction   Statistical Error   Systematic Error
  22–28           −0.03             0.18                0.23
  28–35            0.18             0.10                0.19
  35–45            0.35             0.06                0.16
  45–70            0.35             0.09                0.16
  70–90            0.51             0.20                0.13

A comparison of the two methods of background subtraction is shown in figure 4.10. The uncertainties inherent in the CDC method are larger, due both to errors in the parameters involved and to the small number of candidates in the conversion sample (only 10% of photons convert). Because of the larger errors the CDC method is not used for background subtracting the cross section, but is shown here for comparison only.

Figure 4.10: Comparison of the photon fraction from the two methods (EM1 and CDC).

Chapter 5

Isolated Direct Photon Cross Section

5.1 The Differential Cross Section Formula

The differential cross section for p p̄ → γ + X can be written as

  \frac{d^2\sigma}{dp_T\,d\eta} = \frac{N\,\gamma}{\mathcal{L}\,a\,\epsilon_{\gamma}\,\Delta p_T\,\Delta\eta},   (5.1)

where N is the total number of candidates, γ is the photon fraction as defined in equation 4.4, L is the luminosity, a is the geometric acceptance, ε_γ is the efficiency, and Δp_T, Δη are the bin sizes in p_T and η respectively. A discussion of these factors and their associated errors follows below. It should be noted that the cross section measured here is for isolated direct photon production, i.e. it is not inclusive. Background levels at the Tevatron make the measurement of an unisolated cross section difficult. This makes comparisons with other experiments which have different isolation requirements difficult. It is important to compare with a theoretical prediction that models the isolation cut correctly.
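Equation 5.1 is simple enough to state as code. The Python sketch below, with illustrative argument names and units (not part of the analysis chain), shows how a single cross-section point is assembled from the ingredients discussed in the following subsections.

```python
def dsigma_dpt_deta(n_cand, photon_fraction, luminosity_pb, acceptance,
                    efficiency, dpt_gev=3.0, deta=1.8):
    """One cross-section point from equation (5.1).

    Illustrative inputs: n_cand candidates in a (dpt x deta) bin and an
    integrated luminosity in pb^-1, giving a result in pb per GeV per unit
    eta.  deta = 1.8 corresponds to the |eta| < 0.9 range used here.
    """
    return (n_cand * photon_fraction /
            (luminosity_pb * acceptance * efficiency * dpt_gev * deta))
```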
5.1.1 The Number of Candidates, N

The candidate selection criteria were detailed in Chapter 3. The data for the three triggers are shown in figure 5.1, with the number of candidates after each offline cut shown in table 5.1. It is possible for an event to have more than one good photon candidate and therefore be added into the cross section more than once. The error on N is simply the Poisson statistical error, √N.

Table 5.1: Number of candidates after each cut, for each trigger.

  Cut            Low Trigger (#, %)   Medium Trigger (#, %)   High Trigger (#, %)
  γ candidates   45866   100          22029   100             98713   100
  η              37818   82.5         18342   83.3            84787   85.9
  Track Veto     25033   54.6         10429   47.3            41123   41.7
  EM Fraction    20154   43.9          8713   39.6            33528   34.0
  Isolation      18936   41.3          6660   30.2            21771   22.1
  H Matrix       10924   23.8          4455   20.2            16608   16.8
  Missing E_T    10903   23.8          4385   19.9            15384   15.6
  Z Vertex       10282   22.4          4099   18.6            14367   14.6

Figure 5.1: Raw number of candidates vs transverse energy. The three large peaks are from the three different trigger thresholds. The small peak at low E_T is from low energy, non-triggered photon candidates.

5.1.2 The Photon Fraction, γ

It is necessary to scale N, the total number of candidates, to account for background contamination in the sample. The details of the background subtraction method were given in Chapter 4. The fraction of photons in the sample was determined for several bins of transverse energy and fit to a function of the form

  1 - a\,e^{-b\,E_T}.   (5.2)

Table 5.2 contains the details of this fit. The functional form was chosen because it fit the data best and had the necessary physical constraint (the photon fraction cannot go above 1). The specific errors on the photon fraction for each p_T bin were described in Chapter 4, with the statistical and systematic errors summed in quadrature to give the final error on each point. The final error on γ from the fit was determined by varying the parameters of the fit enough to change the χ² by one unit. This is the dominant error on the cross section for most of the kinematic range addressed in this thesis (see figure 5.2).

Table 5.2: Photon Fraction Fit Parameters, 1 − a e^{−bE_T}.

  a    1.14 ± 0.05
  b    0.0177 ± 0.0021
  χ²   0.90

Figure 5.2: Error on the photon fraction vs transverse energy.

5.1.3 The Luminosity, L

The luminosity was measured using the Level 0 scintillator counters, described in Chapter 1. The instantaneous luminosity is related to the Level 0 counting rate by

  \mathcal{L} = \frac{R_{L0}}{\sigma_{L0}},   (5.3)

where σ_{L0} is the cross section subtended by the counters. It can be expressed as a product of the Level 0 efficiency and the total inelastic cross section, ε_{L0}σ_{inelastic}. This equation is only strictly true when the luminosity is low. Higher luminosity can cause more than one interaction per bunch crossing. Since multiple interactions only get counted once, the counting rate becomes smaller than the interaction rate. A correction for this can be calculated based on Poisson statistics.
The average number of interactions per crossing, n̄, is given by

  \bar{n} = \mathcal{L}\,\tau\,\sigma_{L0},   (5.4)

where τ is the time between bunch crossings (τ = 3.5 μs). The correction factor for multiple interactions then becomes

  \frac{\mathcal{L}}{\mathcal{L}_{meas}} = \frac{\bar{n}}{1 - e^{-\bar{n}}} = \frac{-\ln(1 - \mathcal{L}_{meas}\,\tau\,\sigma_{L0})}{\mathcal{L}_{meas}\,\tau\,\sigma_{L0}}.   (5.5)

The inelastic cross section, used for determining σ_{L0}, is derived from the weighted average of the published values from the CDF [33, 34, 35] and E710 [36] experiments. Since there is an 8% discrepancy between the two measurements, the error on σ_{inelastic} is scaled by χ, where χ² is a measure of the consistency between the two measurements (3.43). The value used for σ_{L0} is 46.7 ± 2.5 mb. The integrated luminosity for the run Ia photon triggers is given in table 5.3. The lone source of error on the luminosity is from the error on σ_{L0}, which is 5.4% [32].

Table 5.3: Luminosity of the Photon Triggers.

  Trigger       Luminosity (nb⁻¹)
  Gam Low       13.1
  Gam Medium    169.0
  Gam High      14670

5.1.4 The Geometric Acceptance, a

The geometric acceptance accounts for candidates which hit uninstrumented regions of the detector. As detailed in Chapter 3, two fiducial cuts were applied to avoid regions of the calorimeter where the energy measurement was not well understood. The first of these required that the z position of the interaction vertex be less than 50 cm from the nominal vertex. The acceptance of this cut is 93.8%. The second fiducial cut removed candidates with shower centroids that were within 1.125° of a φ crack region. The acceptance of this cut is 80%. The total geometric acceptance is therefore 75%, with a negligible error (less than 1%).

5.1.5 The Photon Efficiency, ε_γ

The efficiency for photons was measured with a sample of single particle Monte Carlo photons with data from minimum bias events added. The trigger and offline efficiencies are shown in figures 3.9 and 3.7 respectively. The statistical error on the efficiency is small (~1%), but there is a systematic error on the reliability of the Monte Carlo. The efficiencies were checked in the kinematic region of the electrons from Z decays (20 < p_T < 50 GeV) and found to agree with the Monte Carlo to within 4%. An error of 4% was therefore assigned to the photon efficiency.

5.1.6 The Bin Size, Δp_T and Δη

The cross section is normalized by dividing by the size of the p_T and η bins. The size of the p_T bin was set at 3 GeV. The η range (|η| < 0.9) was chosen to ensure that the photons were within the active region of the Central Calorimeter. There is no error on either bin size. There is an error on the p_T scale, however. The error from the W mass measurement was found to be less than 1%. Since the direct photon cross section falls roughly as p_T⁻⁵, this translates into an error on the cross section of 5%.

5.2 Cross Section vs Transverse Energy

The isolated direct photon cross section is plotted vs transverse energy in figure 5.3 and the numerical values of the points are listed in table 5.4. The three triggers are used in regions where they have flat efficiency and reasonable statistics. Each trigger is normalized by its respective luminosity. The cross section falls steeply with transverse energy in typical QCD fashion.

5.2.1 Comparison with Theory

The theory curve plotted in figure 5.3 is from a Monte Carlo program from J. F. Owens [37]. It is based on a next-to-leading-logarithm calculation [38], using the CTEQ2M parton distribution functions [39]. The Monte Carlo has an isolation cut applied at the parton level to match the data.
It is also smeared by 15%/√E to match the detector resolution, but this effect is minimal and nowhere changes the theory by more than 3%.

A better visual comparison of data and theory is provided by the plot of the point-by-point difference between data and theory (normalized by the theory) in figure 5.4. The theoretical prediction shows excellent agreement with the data in both shape and normalization. The default theory is a next-to-leading order prediction using the CTEQ2M parton distribution set and the μ² = p_T² scale. A leading order prediction is also shown for comparison. Changes in the scale will affect only the normalization of the theory; figure 5.5 shows the variation of the theoretical prediction when the scale is halved or doubled. Figure 5.6 shows how reasonable choices of parton distribution sets can affect the theory. The CTEQ2MF set contains fewer low-x gluons than CTEQ2M, while CTEQ2MS contains more. The differences in the theory are small compared to the systematic errors on the data.

Table 5.4: Cross Section Points.

  E_T bin (GeV)   ⟨E_T⟩ (GeV)   dσ/dE_T dη (pb)   Stat Err (%)   Sys Err (%)
  12–15           13.4          2.13 × 10³         4.2            41.1
  15–18           16.4          9.77 × 10²         7.3            28.6
  18–21           19.4          5.63 × 10²        10.9            23.8
  21–24           22.4          2.63 × 10²         4.6            22.3
  24–27           25.4          1.43 × 10²         6.8            21.0
  27–30           28.3          1.06 × 10²         8.5            20.2
  30–33           31.6          6.76 × 10¹        11.3            20.2
  33–36           34.3          3.18 × 10¹        17.1            19.6
  36–39           37.4          2.97 × 10¹         2.0            19.0
  39–42           40.4          2.30 × 10¹         2.4            19.0
  42–45           43.5          1.67 × 10¹         2.9            19.0
  45–48           46.4          1.35 × 10¹         3.3            18.4
  48–51           49.4          9.57 × 10⁰         4.0            18.4
  51–54           52.5          7.29 × 10⁰         4.7            17.9
  54–57           55.5          5.93 × 10⁰         5.4            17.9
  57–60           58.5          4.45 × 10⁰         6.3            17.9
  60–63           61.4          3.85 × 10⁰         6.9            17.5
  63–66           64.4          2.93 × 10⁰         8.1            17.5
  66–69           67.4          2.82 × 10⁰         8.4            17.5
  69–72           70.4          2.17 × 10⁰         9.8            17.5
  72–75           73.5          1.52 × 10⁰        11.9            17.0
  75–78           76.4          1.30 × 10⁰        13.0            17.0
  78–81           79.5          9.28 × 10⁻¹       15.6            17.0
  81–84           82.4          9.51 × 10⁻¹       15.6            17.0
  84–87           85.7          1.02 × 10⁰        15.2            16.6
  87–90           88.5          7.57 × 10⁻¹       18.0            16.6
  90–93           91.6          6.25 × 10⁻¹       20.0            16.6
  93–96           94.6          3.33 × 10⁻¹       27.7            16.6
  96–99           97.6          3.39 × 10⁻¹       25.8            16.6

Figure 5.3: Photon cross section vs transverse momentum (−0.9 < η < 0.9). Inner errors are statistical, outer errors systematic. The solid curve is the QCD-based prediction of Owens et al. with the CTEQ2M parton distributions (next-to-leading log, μ = p_T).

Figure 5.4: Comparison of the photon cross section with the QCD prediction. The shaded band at the bottom represents the normalization error on the data due to the luminosity uncertainty.

Figure 5.5: Comparison of data and theoretical predictions with different μ scales.
Figure 5.6: Comparison of data and theoretical predictions with different parton distribution sets.

A previously published measurement [40, 41] of direct photon production in the same kinematic region by the CDF collaboration showed a steeper dependence with transverse energy than QCD theory. In particular, an excess of 40% above the prediction was measured for photons with transverse energy less than 20 GeV. Modifications to the parton distribution functions are unable to account for this difference [40]. It has been suggested that the source of this discrepancy is the difficulty in modeling the isolation cut and the photon fragmentation function in the theoretical prediction [42, 43]. Another possible explanation is a smearing of the direct photon p_T spectrum caused by "intrinsic k_T", initial parton momentum in the transverse beam direction [44].

The measurement presented in this thesis does not show a disagreement with theoretical prediction at low p_T. It should be noted that the isolation cut used here is different from that in references [40, 41]. The CDF measurement required that the photon be isolated in a larger cone of r = 0.7, while r = 0.4 is used in this analysis. If improper theoretical treatment of the isolation cut is the source of the discrepancy between CDF and prediction, one would not necessarily expect to see the same excess here.

Chapter 6

Characteristics of Direct Photon Events

Previous sections of this thesis have been concerned with the identification and measurement of only the photon in direct photon events. This chapter makes an effort at identifying the other objects in a direct photon event. Figure 6.1 shows a side view of a photon candidate event in the DØ detector and figure 6.2 shows a lego plot of the same event. As is expected of direct photon events, there is a jet 3.14 radians in φ away from the photon.

6.1 The Golden Photon Sample

As was shown in Chapter 4, there is a substantial amount of background in direct photon events. For a study of direct photon event characteristics it is desirable to reduce this background to the smallest possible level without biasing the event. This was done using the EM1/E discriminant described in Chapter 4 as an additional selection.

Figure 6.1: Side (r–z) view of a direct photon candidate event in the DØ detector.

A 40 GeV transverse momentum cut was also applied to the golden sample; the photon fraction becomes too small in the lower p_T region. The photon fraction of the golden sample can be calculated by the formula

  \gamma' = \frac{\epsilon_{\gamma}}{\epsilon_{data}}\,\gamma,   (6.1)

where ε_γ and ε_data are the fractions of photons and data that pass the EM1 cut and γ is the photon fraction of the original sample. The golden sample was found to contain 75% signal.

6.2 Jet Identification and Efficiency

Jets were identified with a fixed cone algorithm. A cone was defined in η–φ space with a radius of R = √(Δη² + Δφ²) = 0.7. The steps of the cone algorithm are as follows [45]:

• Preclusters are formed from a list of towers (Δη × Δφ = 0.1 × 0.1) ordered in E_T. Contiguous towers with E_T > 1 GeV are merged in a cone of R = 0.3, starting with the highest E_T tower.

• Preliminary jets are formed around the preclusters by summing all towers within a radius of R = 0.7, using the center of the precluster as the center of the jet.

• A new E_T weighted centroid of each jet is computed and used as the center for summing towers for a new jet.

• The previous step is repeated until the jet centroid is stable. This usually takes 3 to 4 iterations.

• Jets with E_T < 8 GeV are dropped.

• It is possible that some jet cones overlap. If the E_T in the overlap region is greater than 50% of the E_T of the smaller jet, the jets are merged by summing the energy in both cones and recalculating the centroid and E_T. If the overlap is less than 50%, the jets are split by adding the towers in the overlap region to the jet with the closest center and recalculating the centroid and E_T of each jet.
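The Python sketch below illustrates the iterative cone clustering described in the list above. It is a deliberately simplified stand-in for the DØ jet algorithm: the preclustering step and the 50% merge/split treatment of overlapping cones are omitted, and the tower list format is hypothetical.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in eta-phi space, with phi wrapped into [-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def cone_jets(towers, r_cone=0.7, seed_et=1.0, min_jet_et=8.0, n_iter=4):
    """Very simplified fixed-cone jet finder.

    `towers` is a hypothetical list of (et, eta, phi) calorimeter towers.
    Every tower above `seed_et` simply seeds a cone, and a cone is kept only
    if its final centroid is at least r_cone away from all earlier jets.
    """
    jets = []
    for et, eta, phi in sorted(towers, reverse=True):    # highest-ET towers first
        if et < seed_et:
            break
        c_eta, c_phi = eta, phi
        jet_et = 0.0
        for _ in range(n_iter):                          # iterate until the centroid is stable
            members = [t for t in towers
                       if delta_r(t[1], t[2], c_eta, c_phi) < r_cone]
            jet_et = sum(t[0] for t in members)
            if jet_et == 0.0:
                break
            c_eta = sum(t[0] * t[1] for t in members) / jet_et
            c_phi = sum(t[0] * t[2] for t in members) / jet_et   # naive phi average
        if (jet_et >= min_jet_et and
                all(delta_r(c_eta, c_phi, j[1], j[2]) > r_cone for j in jets)):
            jets.append((jet_et, c_eta, c_phi))
    return jets
```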
The jet finding efficiency was studied using the GEANT detector simulation package. The efficiency for jets with E_T > 16 GeV was found to be 95%, and this rises to 99% for jets with E_T > 20 GeV [46].

6.3 Jet Production in Direct Photon Events

The first order direct photon processes have a final state jet which balances the photon E_T (see figure 1.2). Higher order diagrams can lead to more jets in the final state. Figure 6.3 shows the number of jets present in events from the golden photon sample.

Figure 6.3: Number of jets in direct photon events from the golden photon sample.

QCD interactions do not involve neutrinos and should have little missing transverse energy in the event. In photon plus one jet events this E_T balancing will cause the jet to be opposite the photon in φ. Figure 6.4 is a plot of the difference between photon and jet φ for photon plus one jet events. It is peaked heavily at π, as would be expected from the above argument.

Figure 6.4: Photon φ − jet φ for golden photon sample events with one jet.

The φ symmetry between photons and jets should be broken for events with more than one jet. The Δφ between the photon and the leading jet is not as heavily peaked, as can be seen in figure 6.5. However, this figure also shows that when the jets are summed vectorially the resulting jet does retain the Δφ distribution of the photon plus one jet events. This is because the secondary jets are typically final state radiation from the primary outgoing parton.

Figure 6.5: Photon φ − summed jet φ for golden photon sample events with more than one jet.

The pseudorapidity distribution for photon and jet candidates is shown in figure 6.6. While the photon is restricted to the |η| < 0.9 region, the jet is allowed to range out to |η| < 4.0. The jet is not expected to balance the photon in η: the photon-jet system is typically Lorentz boosted with respect to the lab frame, due to the different momenta of the initial partons.

Figure 6.6: Photon and jet η distributions for golden photon sample events.
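For concreteness, the azimuthal comparison used in figures 6.4 and 6.5 can be written in a few lines of Python; the event record format here is hypothetical and only sketches the vectorial jet sum described above.

```python
import math

def summed_jet_phi(jets):
    """phi of the vector (E_T) sum of all jets; `jets` is a list of (et, phi) pairs."""
    px = sum(et * math.cos(phi) for et, phi in jets)
    py = sum(et * math.sin(phi) for et, phi in jets)
    return math.atan2(py, px)

def delta_phi(phi1, phi2):
    """Azimuthal separation folded into [0, pi]."""
    return abs((phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi)

# For a photon + multijet event, delta_phi(photon_phi, summed_jet_phi(jets))
# should peak near pi, as in figure 6.5.
```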
Chapter 7

Conclusions

This dissertation provides the details of the first measurement of the direct photon production cross section at the DØ detector. The measurement has been shown to agree quantitatively with QCD predictions over a large range of transverse momenta.

The DØ detector, with its emphasis on good calorimetry, has provided an excellent means to make this measurement. The triggering system allowed the full p_T range to be populated for a cross section that falls by five orders of magnitude, and provided high rejection of hadronic jets.

While background subtraction on an event-by-event basis was not possible, a statistical method was shown to be successful. This method relied on a detailed Monte Carlo simulation which was shown to model the data correctly. The Monte Carlo was cross-checked extensively with data from the DØ test beam, as well as collider W and Z events.

The largest errors on the cross section resulted from systematics in the method of background subtraction, particularly at low transverse momenta. The neutral meson background causes a small signal-to-noise ratio in the low p_T region, which leads to an inflation of the systematic errors. Future attempts to push the measurement to even lower p_T will have to address this issue.

Other photon analyses on run Ia data involve the center of mass scattering angle distributions [47] and the invariant mass of the photon-jet system [48]. Studies of photons in the forward direction are also being undertaken [49], taking advantage of DØ's excellent forward calorimetry. A large additional amount of data (~100 pb⁻¹) is currently being accumulated during DØ's second collider run. Analysis of these data will greatly reduce the cross section errors and provide for more detailed study of the direct photon event characteristics [50].

Bibliography

[1] Gell-Mann, M., Phys. Lett. 8 214 (1964)
[2] Zweig, G., CERN Report 8419/Th 412 (1964)
[3] Greenberg, O. W., Phys. Rev. Lett. 13 598 (1964)
[4] Aubert, et al., Phys. Rev. Lett. 33 1404 (1974); Augustin, et al., Phys. Rev. Lett. 33 1406 (1974)
[5] Herb, et al., Phys. Rev. Lett. 39 252 (1977)
[6] Abachi, et al., Phys. Rev. Lett. 74 2632 (1995)
[7] Cowan, C. L. and Reines, F., Phys. Rev. 39 273 (1959)
[8] Anderson, C. D. and Neddermeyer, S. H., Phys. Rev. 51 884 (1937)
[9] Perl, et al., Phys. Rev. Lett. 35 1489 (1975)
[10] Danby, et al., Phys. Rev. Lett. 9 36 (1962)
[11] Owens, J. F., Rev. Mod. Phys. 59 465 (1987)
[12] Brock, et al., Rev. Mod. Phys. 67 157 (1995)
[13] Ferbel, T. and Molzon, W. R., Rev. Mod. Phys. 56 181 (1984)
[14] Darriulat, et al., Nucl. Phys. B110 365 (1976)
[15] Abachi, et al., Nucl. Instr. and Meth. A338 185 (1994)
[16] Clark, et al., Nucl. Instr. and Meth. A279 243 (1989)
[17] Detoeuf, et al., Nucl. Instr. and Meth. A279 310 (1989)
[18] Pizzuto, D., Ph.D. Thesis, SUNY Stony Brook (unpublished) (1989)
[19] Rajagopalan, S., Ph.D. Thesis, Northwestern University (unpublished) (1992)
[20] Abachi, et al., Nucl. Instr. and Meth. A324 53 (1993)
[21] Bantly, et al., DØ internal note 1996 (unpublished) (1993)
[22] Abolins, et al., Nucl. Instr. and Meth. A289 543 (1990)
[23] Linnemann, et al., Proceedings of DPF'92, 1641 (1993)
[24] Engelmann, et al., Nucl. Instr. and Meth. A216 45 (1983)
[25] Narain, et al., Proceedings of DPF'92, 1678 (1993)
[26] Brun, et al., CERN-DD/78/2 Rev. (1978)
[27] Jonckheere, A., DØ internal note 638 (unpublished) (1987)
[28] Particle Data Group, Phys. Rev. D50 (1994)
[29] Salgado-Galeazzi, C. W., Ph.D. Thesis, Michigan State University (unpublished) (1988)
[30] Womersley, J., DØ internal note 2106 (unpublished) (1994)
[31] Jerger, S., DØ internal note 2636 (unpublished) (1995)
[32] Bantly, et al., Fermilab-TM-1930 (1995)
[33] Abe, et al., FERMILAB-Pub-93/232-E (1993)
[34] Abe, et al., FERMILAB-Pub-93/233-E (1993)
[35] Abe, et al., FERMILAB-Pub-93/234-E (1993)
[36] Amos, et al., Phys. Lett. B243 158 (1990)
[37] Baer, et al., Phys. Rev. D42 61 (1990)
[38] Aurenche, et al., Nucl. Phys. B286 553 (1987)
[39] Botts, et al., Phys. Lett. B304 159 (1993)
[40] Abe, et al., Phys. Rev. D48 2998 (1993); Abe, et al., Phys. Rev. Lett. 68 2734 (1992)
[41] Abe, et al., Phys. Rev. Lett. 73 2662 (1994)
[42] Berger, E. L. and Qiu, J., Phys. Rev. D42 61 (1990)
[43] Aurenche, et al., Nucl. Phys. B399 34 (1993)
[44] Huston, et al., MSU-HEP-41027 (1995)
[45] Hadley, N., DØ internal note 904 (unpublished) (1989)
[46] Milder, A., Ph.D. Thesis, University of Arizona (unpublished) (1993)
[47] Rubinov, P., Ph.D. Thesis, SUNY Stony Brook (unpublished) (1995)
[48] Madden, B., Ph.D. Thesis, Florida State University (unpublished) (1995)
[49] Liu, Y.-C., private communication
[50] Jerger, S., private communication