LIBRARY
Michigan State University

This is to certify that the dissertation entitled

MEASUREMENTS OF THE tt̄ PRODUCTION CROSS SECTION AT √s = 1.96 TeV AND TOP MASS IN THE DIELECTRON CHANNEL

presented by

JOSEPH FRANCIS KOZMINSKI

has been accepted towards fulfillment of the requirements for the Ph.D. degree in Physics.

Major Professor's Signature

Date

MEASUREMENTS OF THE tt̄ PRODUCTION CROSS SECTION AT √s = 1.96 TeV AND TOP MASS IN THE DIELECTRON CHANNEL

By

Joseph Francis Kozminski

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Physics and Astronomy

2005

ABSTRACT

MEASUREMENTS OF THE tt̄ PRODUCTION CROSS SECTION AT √s = 1.96 TeV AND TOP MASS IN THE DIELECTRON CHANNEL

By

Joseph Francis Kozminski

The first measurement of the top-antitop production cross section in proton-antiproton collisions at √s = 1.96 TeV using 243 pb⁻¹ of data collected with the DØ detector at Fermilab is presented. In this analysis, only the dielectron final state is considered.
Five events are observed, and 0.93 background events are expected. The measured cross section, after accounting for the expected branching ratio to the dielectron channel, is

σ = 14.9 +9.4/−7.0 (stat) +2.5/−1.2 (sys) ± 1.0 (lumi) pb,

which agrees with the predicted cross section for top quarks with a mass of 175 GeV. In addition, a first pass at a measurement of the top mass using the neutrino-weighting method is presented. This measurement is also performed in the dielectron channel using the five events observed in the cross section measurement.

Copyright by
JOSEPH FRANCIS KOZMINSKI
2005

To my wife, Kate, and my parents

ACKNOWLEDGEMENTS

There are so many people who have helped me reach this point that I'm not sure I will be able to adequately acknowledge everyone. I first want to thank my adviser, Harry Weerts. He has continually challenged me, finding questions to ask even when I think I have all my bases covered. But he has also been very encouraging and easy to talk to. I have learned much from Harry and have enjoyed working with him. I also want to thank the other MSU professors on DØ: Bernard Pope, Maris Abolins, Chip Brock, and Jim Linnemann. I have been fortunate to work with such an excellent group. I must also acknowledge Dan Edmunds, who showed me the ins and outs of the L1 calorimeter trigger. He taught me a lot when I was not stuck with the mind-numbing job of burning PROMs. Thanks also go out to the other MSU graduate students and postdocs on DØ with whom I worked during my time at Fermilab. These were the first people I turned to when I had questions, and I often did not have to look much farther. They include Josh K., Adam, Reinhard, Reiner, Dugan, Roger, Josh D., and Rahmi. And Bob gets some extra recognition for helping me break into the top group and for watching over my progress. (It's good to know the subgroup co-convenor.) He also helped get the dilepton mass analysis off the ground.
Of course, there are many people outside MSU who helped me along the way, too. The people I worked with in the top group alone are too numerous to list here, but I at least want to recognize the other people working on dilepton analyses: Prolay, Stefan, Jessica, Kirti, and Ashish. And thanks to Christophe, who took over as co-convenor. Also, the mass analysis would not be where it is today without MSU's collaboration with the Arizona group. Thanks especially to Jeff, who is a co-developer of the mass tools code, and to Erich, who did the original Run I analysis and has given us many useful insights. I want to recognize the DØnuts. We may not have been able to win the softball league tournament, but at least we showed up for every game, had fun, and came out on top more often than not. Also, thanks to the ultimate frisbee group. I only wish I had played more often. There's nothing like running around outside to relieve stress. Much thanks also goes to Hoop, whom I knew from my ND days. He showed me the ropes when I first arrived at the lab, has been a great resource when I have needed to bounce an idea off someone, and has been a great friend. I'm also not sure how I could have made it through these years without our weekly Chinese lunches. Likewise with Gene, who was at MSU with me but wound up on the other side of the ring. We have shared many a laugh over numerous lunches through the years. I also can't forget the friends I left behind at MSU when I came to Fermilab. Russ, Heather, Chip, Nate, Andrew, and Steve, thanks for all the good times, from the multiple study sessions every week, to our Friday lunches at the Peanut Barrel, to pick-up games of basketball and nights sitting around playing board games. I also want to thank my family, who have been very supportive and have offered much encouragement through the years. They have always been there for me, and I know that won't change in the future.
And last, but certainly not least, I want to thank my wife, Kate, for her constant love, support, and encouragement. She was with me at ND when I decided what grad school to attend; she stuck with me when I was in East Lansing and she was in Chicago (Thanks, Amtrak!!); and she has put up with me through my crazy hours at the lab, the overall chaos near conference deadlines, and the stress of writing this dissertation. She has been a great companion and a great friend.

Contents

List of Tables
List of Figures (Images in this dissertation are presented in color.)

1 Introduction

2 Theoretical and Phenomenological Overview
2.1 A Short History of Particle Physics
2.2 The Standard Model
2.2.1 Fundamental Forces and Particles
2.3 Standard Model Formalism
2.3.1 Electroweak Theory
2.4 Top Quark
2.4.1 Top Quark Production
2.4.2 Top Decay
2.4.3 Top Mass

3 Experimental Apparatus
3.1 Accelerator
3.2 Luminosity
3.3 The DØ Detector
3.3.1 Coordinate System
3.3.2 Tracking System
3.3.3 Preshower Detectors
3.3.4 Calorimeter
3.3.5 Muon System
3.3.6 Luminosity Monitor
3.4 DØ Trigger System
3.4.1 Level 1 Triggers
3.4.2 Level 2 Triggers
3.4.3 Level 3 Triggers

4 Data and Monte Carlo Samples
4.1 Trigger Selection
4.1.1 Vocabulary
4.1.2 Analysis Triggers and Efficiencies
4.2 Data Set
4.2.1 Data Quality
4.3 Monte Carlo
4.3.1 Monte Carlo Samples

5 Object Reconstruction and Identification
5.1 Electrons
5.1.1 Electromagnetic Cluster Reconstruction
5.1.2 Electromagnetic Cluster Identification
5.1.3 Track Match
5.1.4 Likelihood
5.1.5 Electron Efficiencies and Scale Factors
5.1.6 Electron Energy Resolution and Oversmearing
5.1.7 Electron Charge
5.2 Jets
5.2.1 Jet Reconstruction
5.2.2 Jet Identification
5.2.3 Jet Energy Scale
5.2.4 Jet Energy Resolution
5.2.5 Jet/EM Separation
5.2.6 Jet Scale Factor
5.3 Missing Transverse Energy
5.3.1 ET Resolution
5.4 Primary Vertex
5.4.1 Primary Vertex Cuts, Efficiencies, and Scale Factors

6 Cross Section Analysis
6.1 Cut Optimization
6.2 Signal Efficiencies
6.3 Physics Backgrounds
6.3.1 Z → ττ
6.3.2 Diboson
6.4 Instrumental Backgrounds
6.4.1 Fake ET Background
6.4.2 Fake Electron Background
6.5 Expectations and Observations
6.6 Systematic Uncertainties
6.7 Cross Section

7 Mass Analysis
7.1 Neutrino-Weighting Method
7.1.1 Jet Combinatorics
7.2 Monte Carlo Tests
7.2.1 Parton Tests
7.2.2 RECO-level Tests
7.3 Mass Fitting
7.3.1 Procedure
7.3.2 Maximum Likelihood Function
7.3.3 Determination of h
7.3.4 Ensemble Testing
7.4 Mass Measurement of the Candidate Events
7.5 Prospects for the Future

8 Conclusions

A Grid Search Results

B Candidate Events

List of Tables

2.1 Summary table of Standard Model Fermions. Note that the masses of the light quarks are not well-measured since they are always bound into mesons and baryons.
2.2 Summary table of Standard Model Gauge Bosons.
3.1 Layer depths in the calorimeter.
4.1 Summary of the dielectron triggers broken down by trigger list version.
4.2 Breakdown of integrated luminosities by trigger list version.
5.1 Correlation coefficients for likelihood signal input variables in the CC.
5.2 Correlation coefficients for likelihood signal input variables in the EC.
5.3 Correlation coefficients for likelihood background input variables in the CC.
5.4 Correlation coefficients for likelihood background input variables in the EC.
5.5 EM scale factors relating Monte Carlo to data in the CC and EC [55].
5.6 Scale factors and oversmearing parameters for MC electrons [56].
5.7 Energy resolution parameters for high-pT electrons [56].
5.8 Efficiencies and scale factors for requiring opposite charges for CCCC, CCEC, and ECEC electron pairs.
5.9 Jet energy resolution constants for jets in data and Monte Carlo [63].
5.10 Primary vertex cut efficiencies in Z → ee data and MC and a scale factor as a function of jet multiplicity. All errors are statistical.
5.11 Common vertex scale factors used in the dilepton cross section analyses.
6.1 Cut choices which perform best in the grid search. The Monte Carlo cross check is given in parentheses. * indicates the cut chosen for analysis.
6.2 Efficiencies of object identification and kinematic selection on tt̄ → ee Monte Carlo. Errors are statistical only.
6.3 Summary of the correction factors relating Monte Carlo and data efficiencies. Errors are statistical only.
6.4 Diboson background expectations at each cut level. Errors are statistical only.
6.5 EM cluster selections for the dielectron and photon samples used to estimate the number of ET fakes.
6.6 ET fake rates.
6.7 ET fake rates for different Δφ bins in the diphoton sample.
6.8 ET fake ratios, numbers of tight events below the ET cut, and total expected ET fakes for the last two lines.
6.9 fe for different jet multiplicities.
6.10 Numbers of events in data with one tight and one loose electron, Ntl, passing the progression of cuts listed.
6.11 Yield summary for the tt̄ → ee channel.
6.12 Run numbers and event numbers for the ee candidate events.
6.13 Data and backgrounds at each level of selection. Errors are statistical and systematic added in quadrature.
6.14 Summary of the relative systematic uncertainties for signal and background in Nee.
6.15 Summary of cross section inputs for the ee and eμ channels. Errors on Nbkg and εsig are total errors with systematic and statistical errors added in quadrature.
7.1 Possible combinations of three observed jets as b jets or ISR.
7.2 Relative background contributions to the average background template.
A.1 Dielectron kinematic cut optimization based on MC.
B.1 Kinematics for event 121971122 in run 166779.
B.2 Kinematics for event 3869716 in run 177681.
B.3 Kinematics for event 26229014 in run 178152.
B.4 Kinematics for event 13511001 in run 178177.
B.5 Kinematics for event 14448436 in run 180326.

List of Figures

2.1 Parton model of a hard scatter process.
2.2 Tree level diagrams for tt̄ production in pp̄ collisions.
2.3 Top production cross section vs. top mass at √s = 1.96 TeV [8].
2.4 "One-standard-deviation (39.35%) region in MW as a function of mt for the direct and indirect data, and the 90% CL region (χ² = 4.605) allowed by data. The Standard Model prediction as a function of MH is also indicated. The widths of the MH bands reflect the theoretical uncertainty from α(MZ)" [4].
3.1 Schematic of the Fermilab accelerator chain. Adapted from [13].
3.2 Side view of the DØ detector [19].
3.3 Diagram of pp̄ in the DØ coordinate system.
3.4 DØ tracking system [19].
3.5 SMT detector.
3.6 Distribution of interaction points in z. Adapted from [20].
3.7 Alignment of two single fiber layers to make a doublet layer [19].
3.8 Cross section of a layer of the CPS. The triangles are made of plastic scintillator with holes in the middle for the waveshifting fibers [19].
3.9 DØ calorimeter [19].
3.10 Unit cell in the calorimeter.
3.11 A quarter of the calorimeter in the r–z plane of the detector showing the tower geometry.
3.12 Calorimeter electronics readout chain [19].
3.13 DØ trigger scheme with typical trigger rates.
3.14 L1 trigger scheme [26].
3.15 L1 calorimeter trigger diagram [29].
3.16 Trigger flow scheme for L1 and L2.
4.1 CEM(1,11) trigger turn-on curve.
5.1 Smoothed, normalized likelihood input distributions for objects in the CC. The black line is signal; the red is background. These distributions are (a) fEM, (b) χ²cal, (c) ET/pT, (d) track match probability, (e) DCA, (f) number of tracks in an 0.05 cone, and (g) sum of track pT in an 0.4 cone around the candidate track.
5.2 Smoothed, normalized likelihood input distributions for objects in the EC. The black line is signal; the red is background. These distributions are (a) fEM, (b) χ²cal, (c) ET/pT, (d) track match probability, (e) DCA, (f) number of tracks in an 0.05 cone, and (g) sum of track pT in an 0.4 cone around the candidate track.
5.3 Likelihood distributions for signal and background in the CC (top) and EC (bottom).
5.4 Background efficiency vs. signal efficiency after preselection for various likelihood cuts in the CC (top) and EC (bottom). The likelihood cuts chosen for the analysis are denoted by the red squares.
5.5 εreco×ID in data and Monte Carlo and the corresponding scale factors versus the distance between the electron track and the closest jet in CC (top) and EC (bottom). The green lines show the constant value fits (p0) to the scale factors. Adapted from [55].
5.6 The top two plots are εtrk-match vs ηd for CC and EC electrons, respectively. The bottom two plots are εtrk-match vs φd for CC and EC electrons, respectively. The green lines show the constant value fits (p0) to the scale factors. Adapted from [55].
5.7 Comparison of Z data and corrected Z Monte Carlo.
5.8 Mee distributions for opposite- and like-signed electron pairs in the CCCC (right), CCEC (middle), and ECEC (left).
5.9 Number of EM objects in Z and Z + 2 jet events where 2 tight electrons are required. The number of Z + 2 jet events is normalized to the number of Z events in the 2nd EM bin.
5.10 Scale factor vs jet ET for CC, EC, and ICD jets.
5.11 Comparison of smeared and unsmeared ET in the Z Monte Carlo to the ET in the tight dielectron (i.e. Z) data. The plot on the left shows the inclusive Z data and Monte Carlo. The plot on the right shows the same comparison for events with two or more jets.
6.1 ET vs. Δφ(ET, e) distribution after dilepton and 2 jet cuts for data (top left), top (top right), WW (middle left), Z → ττ (middle right), and Z → ee + 2 jets (bottom) Monte Carlo. Also shown is the applied cut.
6.2 Momentum tensor ellipsoid [66].
6.3 Expected signal vs expected background for all cut combinations tested in the grid search. The four combinations listed in Table 6.1 are circled.
6.4 Comparison of Zjj Monte Carlo to Z + 2 jets data. Both the corrected and uncorrected Zjj distributions are shown.
6.5 ET (top) and ET fake rate vs ET cut (bottom) for the Z + 2 jets data, single photon, and Zjj Monte Carlo samples [55].
6.6 ET (top) and ET fake rate vs ET cut (bottom) for the single photon and dielectron plus 2 jets samples with different pT cuts applied to the photon and dielectron system [55].
6.7 ET (top) and ET fake rate vs ET cut (bottom) for the tight, unreweighted diphoton, and (reweighted) diphoton two jet samples [55].
6.8 ET (top) and ET fake rate vs. ET cut (bottom) for tight dielectron and diphoton data samples with all cuts applied in the 0 jet case [55].
6.9 ET (top) and ET fake rate vs. ET cut (bottom) for single photon, tight dielectron, and diphoton data samples with all cuts applied in the 1 jet case [55].
6.10 ET (top) and ET fake rate vs. ET cut (bottom) for single photon, tight dielectron, and diphoton data samples with all cuts applied in the 2 jet case [55].
6.11 Detector η distributions for electrons in all (top left), 0 jet (top right), 1 jet (bottom left), and >= 2 jet (bottom right) events.
6.12 Electron fake rate, fe, as a function of ηdet for different jet multiplicities.
6.13 EM fake rate fe as a function of pT for different jet multiplicities. The plot on the top shows CC electrons while the one on the bottom shows EC electrons.
6.14 Electron charge for EM objects passing the track match in the sample from which fe is derived.
6.15 Leading (top) and second leading (middle) electron pT and Mee (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.
6.16 Leading (top) and second leading (middle) electron η and ET (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.
6.17 Number of jets (pT > 20 GeV) in the event (top) and leading (middle) and second leading (bottom) jet pT for background, tt̄, and data corresponding to line 3 of Table 6.13.
6.18 Leading (top) and second leading (middle) jet η and Δφ(ET, leading jet) (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.
6.19 HT (top), HT′ (middle), and S (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.
6.20 Leading (top) and second leading (middle) electron pT and Mee (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.
6.21 Leading (top) and second leading (middle) electron η and ET (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.
6.22 Number of jets (pT > 20 GeV) in the event (top) and leading (middle) and second leading (bottom) jet pT for background, tt̄, and data corresponding to line 4 of Table 6.13.
6.23 Leading (top) and second leading (middle) jet η and Δφ(ET, leading jet) (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.
6.24 HT (top), HT′ (middle), and S (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.
6.25 Leading (top) and second leading (middle) electron pT and Mee (bottom) for background, tt̄, and data after all cuts.
6.26 Leading (top) and second leading (middle) electron η and ET (bottom) for background, tt̄, and data after all cuts.
6.27 Number of jets (pT > 20 GeV) in the event (top) and leading (middle) and second leading (bottom) jet pT for background, tt̄, and data after all cuts.
6.28 Leading (top) and second leading (middle) jet η and Δφ(ET, leading jet) (bottom) for background, tt̄, and data after all cuts.
6.29 HT (top), HT′ (middle), and S (bottom) for background, tt̄, and data after all cuts.
6.30 tt̄ selection efficiency as a function of mass.
6.31 Likelihood as a function of tt̄ production cross section. The central value and the statistical errors are shown. The dielectron cross section is shown on the top and the combined on the bottom [55].
7.1 Neutrino η distributions for mt = 120 (top left), 160 (top right), 180 (bottom left), and 230 (bottom right) GeV.
7.2 Neutrino η widths vs. mt.
7.3 Weight distributions for four mt = 175 GeV parton-level top mass events.
7.4 Total weight distributions for mt = 150 GeV (left), 175 GeV (middle), and 200 GeV (right) at parton-level.
7.5 Weight distribution peak mass vs. input mt.
7.6 Total weight distributions for mt = 150 GeV (left), 175 GeV (middle), and 200 GeV (right) at parton-level.
7.7 Ten-bin average background template showing the relative background contributions.
7.8 RMS vs mean bin value of 500 five event ensembles.
7.9 Output mt vs input mt for ensemble tests using Gaussians of width 15 (top left), 20 (top right), 30 (bottom left) and 40 (bottom right) GeV. The average weight distributions are binned into five 50 GeV bins. Bin boundaries occur at 130, 180, and 230 GeV (−45, 5, and 55 GeV on the graph).
7.10 Output mt vs input mt for ensemble tests using Gaussians of width 15 (left) and 20 (right). The average weight distributions are binned into ten 25 GeV bins.
7.11 Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum. The ensembles are generated using Gaussians with means of 150 (top), 175 (middle), and 200 (bottom) GeV.
7.12 Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for ensembles generated with the mt = 150 (top), 175 (middle), and 200 (bottom) GeV parton-level signal samples.
7.13 Output mt vs input mt for ensemble tests using Monte Carlo parton-level information. The average weight distributions are binned into ten 25 GeV bins. The plot on the top shows the average output masses from five ensemble tests at each mass point using 100 events per ensemble. The one on the bottom uses five events per ensemble and 100 tests per mass point.
7.14 Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for ensembles generated with the mt = 150 (top), 175 (middle), and 200 (bottom) GeV RECO-level signal samples.
7.15 Output mt vs input mt for ensemble tests using Monte Carlo RECO-level information. The average weight distributions are binned into ten 25 GeV bins. The plot on the top shows the average output masses from 100 ensemble tests at each mass point using five events per ensemble. The one on the bottom uses 100 events per ensemble and five tests per mass point.
7.16 Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for ensembles generated with one background and four mt = 150 (top), 175 (middle), and 200 (bottom) GeV signal events.
7.17 Output mt vs input mt for ensemble tests using signal and background. At each mass point, 500 ensemble tests are run using ensembles with four signal events and one background event.
7.18 −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for an "ensemble" consisting of the average background template.
7.19 Output mt vs input mt for ensemble tests using signal and background. In the plot on the top, at each mass point, five ensemble tests are run using ensembles with 80 signal events and 20 average background events. In the plot on the bottom, at each mass point, 100 ensemble tests are run using ensembles with four signal events and one average background event.
7.20 Event weights for the five candidate events.
7.21 Average ensemble of candidate events.
7.22 −ln L vs mt for the candidate events. A quadratic fit around the minimum gives a measured top mass of 169.7 GeV.
B.1 Run 166779 Event 121971122: RZ view (upper right), XY view (upper left), Lego view (lower).
B.2 Run 177681 Event 13869716: RZ view (upper right), XY view (upper left), Lego view (lower).
B.3 Run 178152 Event 26229014: RZ view (upper right), XY view (upper left), Lego view (lower).
B.4 Run 178177 Event 13511001: RZ view (upper right), XY view (upper left), Lego view (lower).
B.5 Run 180326 Event 14448436: RZ view (upper right), XY view (upper left), Lego view (lower).

Chapter 1

Introduction

This dissertation is a journey into the realm of the very tiny. This is a world where the laws of Newton and Einstein's general relativity give way, a world where quantum mechanics and quantum fields reign supreme. This is a world where energy can be converted to matter and matter to energy, a world of particles and anti-particles. This is the bizarre world of elementary particle physics.

The quest to understand the fundamental building blocks of nature has a long, rich history dating back to the ancient Greeks. This understanding has evolved from its roots in natural philosophy and metaphysics into an area of natural science in which experiments attempt to confirm or disprove theories that describe the nature of the most fundamental particles. The current understanding of elementary particles comes from the Standard Model, a very successful theory. No experiment has yet disproven any of the predictions of the Standard Model, though a few inconsistencies have arisen. These inconsistencies have led many to believe that this theory is only a part of some bigger picture (supersymmetry? string theory? ...). Chapter 2 gives a brief overview of the Standard Model.
The role of the top quark is highlighted, and motivations for accurate measurements of its cross section and mass are discussed, as these measurements are presented in this dissertation. Chapter 3 describes the apparatus used to conduct the experiment, namely the Tevatron pp̄ collider and the DØ detector at the Fermi National Accelerator Laboratory (Fermilab) located in Batavia, Illinois. Chapter 4 discusses the selection and composition of the data sets used and what Monte Carlo samples are generated. Chapter 5 explains how the data collected by the detector are reconstructed and how physics objects are identified. The data analysis is discussed in the subsequent chapters. Chapter 6 presents a measurement of the top quark cross section in the dielectron channel. Chapter 7 discusses the neutrino-weighting method for measuring the mass of the top quark in the dilepton channels along with a first-pass measurement of the top quark mass in the dielectron channel. Finally, Chapter 8 summarizes the results of the analyses presented in this dissertation.

Chapter 2

Theoretical and Phenomenological Overview

The Standard Model is a great achievement of the twentieth century. It accurately describes almost all observed phenomena at distances smaller than the diameter of an atomic nucleus (about 10⁻¹⁵ m). The Standard Model has become standard material covered in every modern textbook on high energy physics, including [1], [2], and [3].

2.1 A Short History of Particle Physics

Modern elementary particle physics began, one might say, in 1897 when J. J. Thomson discovered the electron. This door-opening discovery was followed by Rutherford's discovery of the proton in 1914 and Chadwick's discovery of the neutron in 1932. At the same time, great breakthroughs in theoretical physics were being made. These were fueled by an observed breakdown of classical physics in some experiments.
In 1900, Max Planck began the quantum revolution with his paper, "On the Theory of the Energy Distribution Law of the Normal Spectrum," in which he attempted to explain the blackbody radiation spectrum emitted by a hot object. The revolution took off slowly; nevertheless, just a quarter of a century later, Schroedinger and others jumped on board and developed quantum mechanics. This theory defines a system as a state which evolves according to a wave equation rather than as a collection of particles which follows the rules of classical physics. That is, an outcome, given the initial conditions, cannot be uniquely determined in this theory; instead, one can only obtain a probability for a certain outcome to occur. This theory opens the door to phenomena which seem nonsensical in the world of human experience but actually describe the behavior of systems at very small length scales.

Moreover, Einstein's 1905 paper on special relativity forced physicists to look at the universe in a completely different way. Time and space could no longer be viewed as separate entities; rather, this theory describes a four-dimensional universe with three spatial dimensions and one time dimension. Furthermore, energy and momentum are conserved, but rest mass is not, another great leap from the classical viewpoint. This theory also allows for phenomena that seem to defy common sense, but it accurately describes systems moving very fast (i.e. near the speed of light).

Since elementary particles are very small and tend to travel very fast, a theory which incorporates both quantum mechanics and special relativity is required. The marriage of these two areas along with the field concept, which is how particle states are defined, is called relativistic quantum field theory. This theory describes physical processes as an interaction of states (fields), formalized by an infinite series in increasing powers of the coupling constant (interaction strength).
The leading terms of the series tend to provide a good description of observations, but the sums of the small corrections provided by the subleading terms lead to infinities running amok. Finally, Richard Feynman, Shin-Ichiro Tomonaga, and Julian Schwinger discovered how to renormalize the theory, thereby removing the infinities.

Even with renormalization, a quantum field theory which correctly describes all known interactions between elementary particles had yet to be developed. This theory of elementary particles, which is really a combination of quantum electrodynamics (QED), the Glashow-Weinberg-Salam electroweak theory, and quantum chromodynamics (QCD), came to be known as the Standard Model. (Gravity is the one interaction omitted, as renormalization and gravity are not compatible in this theory.) The Standard Model, which became the favored model by the end of the 1970s, has yet to be disproven by any experimental test. This theory, nevertheless, cannot completely describe elementary particles. For example, the Standard Model does not predict the masses of the quarks and leptons, nor does it predict electroweak symmetry breaking (EWSB). The masses of the W and Z bosons arise via the Higgs mechanism, but this piece of the theory was added later in an ad hoc manner. Even so, a new theory most likely would not replace the Standard Model; rather, it would be an extension of it.

2.2 The Standard Model

2.2.1 Fundamental Forces and Particles

Two basic types of particles comprise the Standard Model: fermions and bosons. Quarks and leptons are both fermions; that is, they are spin-1/2 particles which obey the Pauli Exclusion Principle (one and only one fermion can occupy a given quantum state). These fermions are the building blocks of matter. The bosons, on the other hand, are spin-1 particles. They are force carriers; that is, they mediate interactions between fermions.
Quarks, fractionally charged fermions, are the constituents of protons and neutrons. There are six types (flavors) of quarks. Up, charm, and top quarks have an electrical charge of +2/3 e, where −e is the charge of an electron. Down, strange, and bottom quarks have a charge of −1/3 e. There are also six anti-quarks with the same masses but opposite charges. (Antiparticles are denoted with a bar over the particle symbol.) Any particle constructed of quarks is called a hadron. Hadrons with three constituent quarks (like the proton and neutron) are called baryons, while quark-antiquark pairs are termed mesons. Baryons and mesons all have integer charge. Quarks also have a property called color charge, where the three color charge "polarities" are denoted red, green, and blue. Without color, quarks in some hadrons would appear to occupy the same quantum state, which is forbidden by the Pauli Exclusion Principle. Just as quarks have color, antiquarks have anticolor. Baryons and mesons are "colorless," which means that they are comprised of one quark of each color or anticolor (baryons) or of a color-anticolor pair (mesons).

Quarks interact via the strong force. This force binds quarks together to form baryons and mesons. The strong force is mediated by a boson called the gluon. Gluons are massless and electrically neutral; however, they do carry color charge. In fact, gluons are characterized by a color and an anticolor, unlike quarks, which are characterized by one or the other. The gluon carries two polarizations since it is exchanged between two quarks in strong interactions. Because gluons carry color charge, they are also able to interact with one another. Quarks and gluons are never observed as free particles (with the exception of the top quark, which will be discussed later) because the strength of the strong force increases with increasing distance.
For example, quarks close to each other inside a proton are free to shift around without much interference from the strong force. However, if a quark tries escaping, the strong force pulls it back with increasing force as the separation increases. Of course, if the quark is given a big enough kick (say, in a high energy collision), it may escape from the proton. However, the strong force potential becomes so great that a new quark-antiquark pair pops out of the vacuum. The antiquark binds with the escaping quark to form a meson, and the quark joins the proton remnants to form a new hadron. This process is called fragmentation or hadronization.

Leptons, on the other hand, are completely unaffected by the strong force. There are three flavors of charged leptons (electron, muon, and tau), each carrying a charge of −e. Each of these has a corresponding anti-lepton with a charge of +e. For each charged lepton flavor, there is an electrically neutral neutrino. These are appropriately named the electron neutrino, muon neutrino, and tau neutrino. Each neutrino has a corresponding anti-neutrino, analogous to the anti-leptons.

Charged leptons and quarks can all interact through the electromagnetic force, which is mediated by photons. This force is the most familiar of the forces that come into play in elementary particle physics. Because of this force, like-signed objects repel each other while opposites attract, as every introductory physics student is taught.

Neutrinos interact only by the weak force. The weak force is carried by the weak gauge bosons, W⁺, W⁻, and Z⁰. The W's have charge +1 or −1, while the Z⁰ is electrically neutral. These bosons, unlike the other force carriers, have large masses (on the order of 100 GeV/c²) and, consequently, act over short (nuclear) distances. All quarks and leptons interact through the weak force, but this force is best known for its role in beta decay. Both quarks and leptons can be grouped into three generations.
Each quark generation contains a +2/3 e and a −1/3 e charged quark, while each lepton generation is comprised of a negatively charged lepton and its associated neutrino. Table 2.1 shows a summary of the Standard Model fermions grouped into these generations. The Standard Model bosons are summarized in Table 2.2. Note that masses are given in units of GeV instead of GeV/c². In high energy physics, it is a common practice to set c and ħ to 1 and not write them explicitly.

Generation | Particle                | Charge (e) | Mass [4] (GeV) | Interactions
Quarks (spin 1/2)
1          | Up (u)                  | +2/3       | 0.0015–0.004   | EM, Weak, Strong
1          | Down (d)                | −1/3       | 0.004–0.008    | EM, Weak, Strong
2          | Charm (c)               | +2/3       | 1.15–1.35      | EM, Weak, Strong
2          | Strange (s)             | −1/3       | 0.08–0.13      | EM, Weak, Strong
3          | Top (t)                 | +2/3       | 178.1          | EM, Weak, Strong
3          | Bottom (b)              | −1/3       | 4.1–4.4        | EM, Weak, Strong
Leptons (spin 1/2)
1          | Electron neutrino (νe)  | 0          | < 3 × 10⁻⁹     | Weak
1          | Electron (e)            | −1         | 0.000511       | EM, Weak
2          | Muon neutrino (νμ)      | 0          | < 1.9 × 10⁻⁴   | Weak
2          | Muon (μ)                | −1         | 0.1057         | EM, Weak
3          | Tau neutrino (ντ)       | 0          | < 0.0182       | Weak
3          | Tau (τ)                 | −1         | 1.777          | EM, Weak

Table 2.1: Summary table of Standard Model fermions. Note that the masses of the light quarks are not well-measured since they are always bound into mesons and baryons.

Particle | Name   | Force           | Charge (e) | Mass [4] (GeV)
g        | Gluon  | Strong          | 0          | 0
W        | W      | Weak            | ±1         | 80.43
Z        | Z      | Weak            | 0          | 91.19
γ        | Photon | Electromagnetic | 0          | 0

Table 2.2: Summary table of Standard Model gauge bosons.

2.3 Standard Model Formalism

Behind these particles and forces lie the elegant mathematics of the Standard Model. As mentioned earlier, the Standard Model is the union of the electroweak theory and QCD. The Standard Model is described by local gauge theories. That is, the fundamental equation (Lagrangian) describing the particles and their interactions is invariant under a phase (gauge) transformation even when the transformation is position-dependent. Local gauge theories have a couple of desirable features.
By demanding that a simple Lagrangian which describes a particle's kinetic energy satisfy local gauge invariance, interaction terms that represent the coupling of the particle to the gauge bosons must be added to the Lagrangian. In this way, the coupling between fermions and gauge bosons simply falls out of the theory. The couplings between gauge bosons are predicted by requiring that the non-Abelian gauge groups also satisfy local gauge invariance. Moreover, 't Hooft showed that spontaneously broken local gauge theories are renormalizable. This discovery is extremely important since a non-renormalizable theory has cutoff-dependencies beyond the lowest order calculations, making it "quite meaningless" [1]. The Standard Model is constructed from the SU(3) × SU(2) × U(1) gauge group.

2.3.1 Electroweak Theory

The SU(2) × U(1) part of the Standard Model, called the electroweak theory, is the unification of the electromagnetic force, which is described by QED, with the weak force. The electromagnetic force is characterized by e^{i\theta(x)} phase factors, which are unitary transformations in one dimension (the U(1) symmetry group). The weak force, on the other hand, is described by SU(2). It is thus convenient to group the particles in doublets:

\begin{pmatrix} u \\ d \end{pmatrix}, \begin{pmatrix} c \\ s \end{pmatrix}, \begin{pmatrix} t \\ b \end{pmatrix}, \quad \text{and} \quad \begin{pmatrix} \nu_e \\ e \end{pmatrix}, \begin{pmatrix} \nu_\mu \\ \mu \end{pmatrix}, \begin{pmatrix} \nu_\tau \\ \tau \end{pmatrix}.        (2.1)

Then, a two-component field is considered for each doublet. The weak force is characterized by 2 × 2 matrix phase factors of the form e^{i \vec{\tau} \cdot \vec{\theta}(x)/2}. For the weak theory to be gauge invariant, there must be three massless gauge bosons, W^a (a = 1, 2, 3), where W¹ and W² are charged and W³ is neutral. These bosons can only couple to left-handed fermions. At this point, the electromagnetic force, with its neutral gauge boson, B⁰, can be combined with the weak force to obtain the electroweak theory. In the observable world, however, the electromagnetic and weak forces are separate, and the weak gauge bosons are not massless. Therefore, the electroweak symmetry must somehow be broken.
This electroweak symmetry breaking can be achieved through the Higgs mechanism. This mechanism introduces a new field, called a Higgs field, with a non-zero vacuum expectation value (vev). This symmetry breaking leads to massive SU(2) vector bosons. The SU(2) and U(1) bosons can then be written in terms of their more familiar physical states:

W^\pm = (W^1 \mp i W^2)/\sqrt{2}
Z^0 = W^3 \cos\theta_W - B^0 \sin\theta_W        (2.2)
\gamma = W^3 \sin\theta_W + B^0 \cos\theta_W,

where θ_W is a fundamental parameter called the weak-mixing angle. The W^± bosons still couple only to left-handed fermions; however, the Z⁰ can now couple to right-handed fermions because of its mixing with B⁰. The Higgs mechanism predicts the ratio of the W and Z masses to be

\frac{M_W}{M_Z} = \frac{g_2}{\sqrt{g_1^2 + g_2^2}} \equiv \cos\theta_W,        (2.3)

where g₁ and g₂ are the U(1) and SU(2) coupling constants, respectively. M_W, M_Z, and cos θ_W have been measured independently in a number of electroweak processes and confirm this prediction. Similarly, the Higgs mechanism gives mass to the quarks and leptons; however, the actual masses are not predicted, especially not the surprisingly high mass of the top quark. These must be determined experimentally.

One other feature of this symmetry breaking is the necessary introduction of a Higgs doublet, which includes a physical spin-0 Higgs boson. This scalar Higgs couples to any object with mass such that the coupling strength increases with higher mass. Overall, the predictions of the Higgs mechanism have been extremely successful. However, one key piece of evidence for this mechanism is still missing: the Higgs boson itself has yet to be discovered. The discovery of the Higgs boson is crucial for the survival of the Higgs mechanism. While the Higgs may not be discovered at the Tevatron (Section 3.1), experiments at the Tevatron will be able to put more stringent constraints on some of the Higgs boson's properties such as its mass.
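The mass relation in Eq. (2.3) can be checked numerically against the measured boson masses listed in Table 2.2. The sketch below is my own illustration, not part of the original analysis; the resulting sin²θ_W ≈ 0.222 is close to the on-shell value of 0.2228 quoted in Section 2.4.3.

```python
import math

# Numerical check of M_W / M_Z = cos(theta_W), using the
# boson masses quoted in Table 2.2.
M_W = 80.43  # GeV
M_Z = 91.19  # GeV

cos_theta_w = M_W / M_Z
sin2_theta_w = 1.0 - cos_theta_w ** 2  # on-shell definition of sin^2(theta_W)

print(f"M_W / M_Z      = {cos_theta_w:.4f}")
print(f"sin^2(theta_W) = {sin2_theta_w:.4f}")
```

The small difference from 0.2228 reflects the renormalization-scheme dependence of sin²θ_W once radiative corrections are included.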
However, an accelerator called the Large Hadron Collider (LHC) is being constructed at CERN in Europe to search for the Higgs. Discovery of the Higgs will be possible there for a Higgs with a mass of up to about 1 TeV. If the Higgs is not discovered at the LHC, then alternative theories will have to be proposed and tested since the mass of the Standard Model Higgs should be well below 1 TeV. In fact, alternative theories of electroweak symmetry breaking are already being proposed [4].

2.4 Top Quark

The top quark was discovered in 1995 [5][6], filling out the third quark generation. However, this quark is peculiar compared to the previously discovered quarks. It is more than 35 times more massive than the b quark and is the only quark which does not hadronize before decaying. These properties make the top quark interesting to study and offer some insight into the properties of the Higgs boson.

2.4.1 Top Quark Production

The Tevatron, discussed in Section 3.1, is currently the only particle accelerator in the world able to produce top quarks. The top quarks are produced in high energy collisions of protons (p) and anti-protons (p̄). At very high energies, the collisions actually occur between the quark or gluon constituents of the proton and anti-proton (i.e., the partons), which each carry some fraction of the proton's or anti-proton's energy. Figure 2.1 illustrates such an interaction, or hard-scatter process. The partons each carry some fraction x (where 0 < x < 1) of the proton or anti-proton momentum defined by f_i(x, Q²), the parton distribution function (PDF). f_i(x, Q²) is a probability density for a certain parton, i, to have momentum fraction x of the proton for a given invariant momentum transfer, Q².

Figure 2.1: Parton model of a hard-scatter process.

The dominant method of top quark production at the Tevatron is through the strong interaction, in which tt̄ pairs are produced.
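To make the PDF idea concrete, the sketch below is a purely hypothetical toy of my own: the functional form and its parameters are invented for illustration only (real PDF sets such as CTEQ or MRST are fitted to data and depend on Q²). It checks numerically that the toy density has a finite normalization and computes the mean momentum fraction ⟨x⟩ a parton would carry under this shape.

```python
# Toy parton density, purely illustrative (NOT a real PDF):
# f(x) ~ x^(-1/2) * (1 - x)^3, a schematic valence-like shape.
N_STEPS = 100_000
DX = 1.0 / N_STEPS

def f_toy(x):
    return x ** -0.5 * (1.0 - x) ** 3

# Midpoint-rule integrals over 0 < x < 1
# (midpoints avoid the integrable singularity at x = 0).
xs = [(i + 0.5) * DX for i in range(N_STEPS)]
norm = sum(f_toy(x) for x in xs) * DX
mean_x = sum(x * f_toy(x) for x in xs) * DX / norm

print(f"normalization  = {norm:.3f}")   # analytic value: B(1/2, 4) ~ 0.914
print(f"<x> per parton = {mean_x:.3f}") # analytic value: 1/9 ~ 0.111
```

The point of the exercise is only that a PDF is an ordinary probability density in x; the statement in the text that quark momentum fractions tend to exceed gluon ones is a property of the real, fitted distributions.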
The leading order (LO) top production diagrams are shown in Figure 2.2. The diagrams show that qq̄ annihilation and gluon fusion are the two major production channels. Since the momentum fractions of the quarks and anti-quarks tend to be higher than those of the gluons, qq̄ annihilation accounts for about 85% of the tt̄ production rate at the Tevatron.

Figure 2.2: Tree level diagrams for tt̄ production in pp̄ collisions.

The likelihood that a certain final state is produced is given in terms of the cross section, σ, a quantity intrinsic to the colliding particles. It is defined as the number of events produced in a given time divided by the particle flux. This will be discussed in more detail in Section 3.2. Note that the cross section has units of area, expressed in barns, where 1 barn = 10⁻²⁴ cm².

The tt̄ production cross section can be calculated using perturbative techniques out to next-to-leading order (NLO). In addition, non-perturbative techniques have been developed to estimate the size of higher order terms. One NLO+next-to-leading-log (NLL) calculation gives a prediction of σ_tt̄ = 6.97 pb (for √s = 2.0 TeV), while a NNLO+NNLL calculation gives σ_tt̄ = 8.0 ± 0.6 ± 0.1 pb [7] for a top quark with mass m_t = 175 GeV. Note that the tt̄ production cross section is predicted to be a few picobarns, which is on the order of 10¹⁰ times smaller than the total pp̄ interaction cross section. This means that only one tt̄ event is produced every 10 billion collisions. In addition, the theory predicts a non-negligible decrease in cross section with increasing top mass (Figure 2.3). Hence, making a mass measurement in conjunction with the cross section measurement is a good way to test QCD predictions.

Figure 2.3: Top production cross section vs. top mass at √s = 1.96 TeV, showing NLO and NNLO predictions [8].

For example, measuring a cross section lower than that predicted by QCD may indicate that the top quark has decay channels beyond those predicted by the Standard Model (Section 2.4.2). Such non-standard decays could be a sign of new physics. A significantly higher cross section, on the other hand, might mean that there is a new production mechanism for tt̄, such as gluino production followed by the decay g̃ → t̃t̄. Resonances in tt̄ production could also lead to an enhancement in the top production cross section [9].

2.4.2 Top Decay

As mentioned earlier, the top quark is the only quark that can be studied as a free quark; it does not have time to hadronize before it decays (O(10⁻²⁵ s)). Since the weak interaction does not conserve quark flavor, this is the only process by which the top can decay. Nearly 100% of the time, the top quark will decay to a W and a b. A decay to an s or d quark is also possible due to weak eigenstate mixing. (If not for this mixing, b and s quarks would not be able to decay in the Standard Model.) The amount of mixing is given by the Cabibbo-Kobayashi-Maskawa (CKM) matrix:

\begin{pmatrix} d' \\ s' \\ b' \end{pmatrix} =
\begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\
                V_{cd} & V_{cs} & V_{cb} \\
                V_{td} & V_{ts} & V_{tb} \end{pmatrix}
\begin{pmatrix} d \\ s \\ b \end{pmatrix}        (2.4)

Though none of the CKM parameters related to the top quark have been directly measured, it is known that V_tb = 0.9991 ± 0.0001 [4]. This is obtained by requiring the unitarity constraint |V_ub|² + |V_cb|² + |V_tb|² = 1 and using V_ub and V_cb, which have been measured. Hence, assuming t → Wb is valid for nearly all top quarks. The b quark does hadronize, forming a jet of particles in the final state. The W, on the other hand, may decay into any doublet except tb̄, since the top quark is more massive than the W. The rate for the W to decay to any one of the remaining pairs is about equal.
That is, the branching ratios for W → ud̄ and W → cs̄ are 1/3 each, while the branching fractions for W → eν_e, W → μν_μ, and W → τν_τ are each 1/9. The branching ratio to quarks appears three times higher than to leptons because the quarks come in three colors. This analysis is concerned with the dielectron final state of the tt̄ system. That is, both W's decay to eν_e, resulting in a branching fraction of 1/81.

2.4.3 Top Mass

Motivations for measuring the top quark mass include testing QCD predictions for tt̄ production and gaining further insight into the Higgs sector. The current world average measurement of the top mass is m_t = 174.3 ± 3.2 ± 4.0 GeV [4]. However, this combination does not include the latest DØ measurement from Run I: m_t = 180.1 ± 3.6 ± 4.0 GeV [10]. Since the first motivation has been mentioned already, only the impact the top mass has on understanding the Higgs sector will be discussed here.

The top quark and the Higgs boson both play a key role in precision electroweak analyses. At tree level, the mass of the W can be written

M_W^2 = \frac{\pi \alpha}{\sqrt{2} G_F \sin^2\theta_W},        (2.5)

where \sin^2\theta_W = 1 - (M_W^2 / M_Z^2) = 0.2228 [11]. However, one-loop corrections give

M_W^2 = \frac{\pi \alpha}{\sqrt{2} G_F \sin^2\theta_W (1 - \Delta r)},        (2.6)

where Δr denotes the corrections. Included in Δr is a correction depending on m_t,

(\Delta r)_{top} \approx -\frac{3 G_F m_t^2}{8 \sqrt{2} \pi^2 \tan^2\theta_W},        (2.7)

and a correction depending on the Higgs mass, M_H,

(\Delta r)_{Higgs} \approx \frac{\sqrt{2} G_F M_Z^2 \cos^2\theta_W}{16 \pi^2} \cdot \frac{11}{3} \ln\frac{M_H^2}{M_Z^2}.        (2.8)

Using these corrections, it is clear that precision measurements of the W and top masses can constrain the mass of the Higgs, as shown in Figure 2.4.

Figure 2.4: "One-standard-deviation (39.35%) region in M_W as a function of m_t for the direct and indirect data, and the 90% CL region (χ² = 4.605) allowed by all data. The Standard Model prediction as a function of M_H is also indicated. The widths of the M_H bands reflect the theoretical uncertainty from α(M_Z)." [4]

The top quark mass also gives some insight into the Yukawa couplings, which relate the Standard Model quarks and leptons to the source of their mass generation, namely the Higgs. The top quark is fundamentally related to its Yukawa coupling, Y_t, and the Higgs vacuum expectation value, v. In fact,

m_t = \frac{Y_t v}{\sqrt{2}}.        (2.9)

Since v = 246 GeV [4], Y_t comes out to about 1, an interesting result which hints at new physics [7]. In this dissertation, a tool developed for calculating the top mass in the dilepton channels and some studies and results in the dielectron channel are presented.

Chapter 3

Experimental Apparatus

The Fermi National Accelerator Laboratory (Fermilab) currently houses the world's highest energy particle accelerator. Fermilab was commissioned on November 21, 1967, by the U.S. Atomic Energy Commission. It was built on the site of the village of Weston and 6800 acres of surrounding farmland about 40 miles west of Chicago [12]. The 4-mile-circumference Tevatron, which began operating in 1983 as the Energy Doubler, is now the highest-energy particle accelerator in the world, colliding beams of protons and anti-protons at a center-of-mass energy, √s, of 1.96 TeV. These collisions occur inside two huge detectors, DØ and CDF, which sit across from each other on the Tevatron ring.

The Tevatron has had two major periods of physics running, called Run I and Run II. Run I lasted from 1992 to 1996, running at √s = 1.8 TeV. Then, a shutdown ensued in order to upgrade the accelerator and the detectors for higher energy and higher luminosity running. In March 2001, Run II commenced with the current √s = 1.96 TeV. In this chapter, a brief overview of the accelerator will be given. In addition, the DØ detector along with its triggering and data acquisition (DAQ) systems will be discussed.
3.1 Accelerator

Fermilab employs a series of accelerators, shown in Figure 3.1, to create the world's highest energy particle beams. Information on the accelerators is obtained from [14], [15], and [16]. The process begins in the Cockcroft-Walton pre-accelerator, where hydrogen gas is ionized to create H⁻ ions, which are subsequently accelerated by a positive voltage to 750 keV. They then enter the 130 m long linear accelerator (LINAC), where they are bunched and accelerated to 400 MeV by oscillating electric fields. At the end of the LINAC, the ions pass through a carbon foil which strips off the electrons, leaving just protons. These protons make their way to the Booster, a synchrotron ring 475 m in circumference, where they are accelerated from 400 MeV to 8 GeV in 0.033 s over the course of about 16,000 trips around the ring. Upon leaving the Booster, the protons are sent to the Main Injector. The Main Injector accelerates the bunches to 120 GeV. At this point, there are two options: the proton bunches can be further accelerated to 150 GeV and injected into the Tevatron, or they can be used to produce anti-protons.

To create anti-protons, the proton bunches are steered into a nickel target. The collision produces many particles; however, for every million protons that hit the target, only about 20 anti-protons are collected. These anti-protons come off the target at many different angles and are focused into a beamline by a lithium collection lens, which is a solid lithium cylinder carrying a current [17]. The beam is sent through a pulsed magnet, which acts as a charge-mass spectrometer to remove any other particles that make it through the lens. Next, the anti-protons enter the Debuncher. This device takes the anti-protons, which have a large spread in momentum but a narrow spread in time (since the protons are fired at the nickel target in tight bunches spaced 1.5 s apart), and spreads them out in time while giving them a narrower spread in momentum.
This process takes about 100 ms. The rest of the time before the next bunch hits the Debuncher is used to cool the anti-protons. In other words, an element of randomness ("hotness") still exists in the beam, making it diffuse. Stochastic cooling removes this hotness, thereby focusing the beam in both position and momentum. The anti-protons are then sent into the Accumulator, where they are further cooled and focused. The anti-protons are stored here until about 1.5 × 10¹² of them have been accumulated, a process which takes many hours.

Figure 3.1: Schematic of the Fermilab accelerator chain. Adapted from [13].

Once enough anti-protons have been accumulated, it is time for a new "store" in the Tevatron. The Tevatron is currently the world's largest superconducting synchrotron accelerator. All of the superconducting magnets in this huge machine are cooled to 4.6 K using liquid helium. From the Main Injector, 36 bunches of protons are injected into the Tevatron. Then, the anti-protons return to the Main Injector where they, too, are accelerated to 150 GeV and injected into the Tevatron in 36 bunches. The anti-protons, of course, circle opposite the protons. The protons and anti-protons travel in helical orbits which are kept apart with electrostatic separators. Once the bunches are circling in the Tevatron, they are accelerated to 980 GeV using RF cavities. The proton and anti-proton bunches are steered into each other in the middle of the DØ and CDF detectors, producing pp̄ collisions at √s = 1.96 TeV every 396 ns.

3.2 Luminosity

The production cross section for a given process is defined to be

\sigma = \frac{dN/dt}{\mathcal{L}},        (3.1)

where N is the number of events expected and 𝓛 is the luminosity. The luminosity is the particle flux from the colliding beams.
In the absence of a crossing angle, the luminosity is given by

\mathcal{L} = \frac{f_{rev} B N_p N_{\bar{p}}}{2\pi(\sigma_p^2 + \sigma_{\bar{p}}^2)} F(\sigma_l/\beta^*),        (3.2)

where f_rev is the revolution frequency, B is the number of bunches per beam, N_p(p̄) is the number of (anti-)protons per bunch, σ_p(p̄) is the transverse beam size of the (anti-)proton beam, and F is a form factor depending on the bunch length (σ_l) and the beta function at the interaction point (β*). The luminosity is thus expressed in units of cm⁻²s⁻¹.

A more useful quantity than the instantaneous luminosity is the integrated luminosity, ∫𝓛dt, since this is a measure of the amount of data collected. In high energy physics, the units of ∫𝓛dt are typically expressed in inverse barns (b⁻¹), not cm⁻². The cross section can then be written as

\sigma = \frac{N}{\int \mathcal{L}\, dt}        (3.3)

and has units of barns.

3.3 The DØ Detector

Most of the pp̄ interactions in the Tevatron are not very interesting. The particles simply scatter at low angles. However, the rarer hard-scatter processes, in which a parton (i.e., a quark or gluon) from the proton interacts with a parton from the anti-proton, can produce some very interesting physics. In these processes, the proton and anti-proton are broken apart, and the fragments not involved in the hard scatter continue along the beampipe. However, the partons involved in the hard scatter are annihilated, and new particles are produced.

The DØ detector [18][19], pictured in Figure 3.2, is a hermetic, nearly 4π detector used to study the particles produced in hard-scatter interactions, especially in those interactions which produce particles with high momentum perpendicular to the beam pipe (transverse momentum). The detector is composed of several nested subdetectors, each designed to detect and measure different objects. The main subdetectors include the tracking system, the preshower detectors, the calorimeter, the muon system, and the luminosity monitors.
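As an aside illustrating Eq. (3.3), the following sketch of mine estimates how many tt̄ events the 243 pb⁻¹ data set analyzed in this dissertation would contain, assuming a theoretical cross section of about 7 pb and the 1/81 dielectron branching fraction from Section 2.4.2. Detector acceptance and selection efficiency, which reduce the yield much further, are deliberately ignored here.

```python
# Rough event-yield estimate from sigma * integrated luminosity (Eq. 3.3).
# Numbers are taken from the text; acceptance and efficiency are ignored.
sigma_ttbar_pb = 7.0        # theoretical tt-bar cross section, ~7 pb
int_lumi_pb_inv = 243.0     # integrated luminosity of the data set, pb^-1
br_dielectron = 1.0 / 81.0  # both W's decaying to e nu_e

n_produced = sigma_ttbar_pb * int_lumi_pb_inv
n_dielectron = n_produced * br_dielectron

print(f"tt-bar pairs produced:      {n_produced:.0f}")   # ~1700
print(f"of which dielectron decays: {n_dielectron:.1f}") # ~21 before efficiency
```

This back-of-the-envelope number makes clear why only a handful of candidate events survive the full selection described in Chapter 6.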
These subdetectors will be discussed in this section after defining the detector coordinate system. Also, the trigger system, which selects the most interesting events to be written to tape, and the readout system will be discussed.

Figure 3.2: Side view of the DØ detector [19].

3.3.1 Coordinate System

The DØ detector uses a right-handed coordinate system with the origin (0,0,0) at the center of the detector. The beampipe is defined to be the z-axis, with the direction of the protons being the positive direction. The polar angle, θ, is defined such that θ = 0 lies along the beampipe in the +z direction while θ = π/2 is perpendicular to the beampipe. The azimuthal angle, φ, is defined such that φ = 0 points away from the center of the Tevatron ring (also the positive x-axis, East). The upward direction, φ = π/2, defines the positive y-axis. Figure 3.3 depicts the coordinate system used at DØ.

Figure 3.3: Diagram of pp̄ in the DØ coordinate system.

Since the parton-parton collisions do not occur at fixed √s and since the nucleon remnants escape down the uninstrumented beam pipe, the longitudinal boost of hard-scatter particles is very difficult to measure. However, these particles can still be studied by applying conservation of energy and momentum in the transverse plane. Before the collision, the transverse energy of the system is zero. After the collision, the transverse energy of the proton and anti-proton remnants is negligible, making it possible to study the hard-scatter particles in this plane. To do this effectively, variables for use in the transverse plane are defined:

- E_T = E sin θ: transverse energy.
- p_T = p sin θ = √(p_x² + p_y²): transverse momentum, as shown in Figure 3.3.
- E̸_T: missing transverse energy, or energy imbalance in the transverse plane.

Finally, instead of using θ, it is more natural to use a variable called rapidity, y, in this environment because the multiplicity of high energy particles (dN/dy) is invariant under Lorentz transformations along the z-axis. The rapidity is defined as

y = \frac{1}{2} \ln\left(\frac{E + p_z}{E - p_z}\right).        (3.4)

For particles with high boost (m/E → 0) at well-defined θ values (0 ≪ θ ≪ π), y is approximated by the pseudorapidity, η, where

\eta = -\ln \tan\left(\frac{\theta}{2}\right).        (3.5)

The pseudorapidity calculated from (0,0,0) to a position in a given detector is referred to as detector η, or η_d. The pseudorapidity of a particle, physics η, is determined by the θ of the particle as measured from the interaction point, or primary vertex, and will be denoted simply as η. A distinction is made since the collisions do not necessarily occur exactly at z = 0 cm.

3.3.2 Tracking System

The entire tracking system is new in Run II. It is designed to detect charged particles over a large pseudorapidity range (|η| ≤ 3). The tracking system sits inside a 2 Tesla superconducting solenoid magnet, which is also new in Run II. The solenoid produces a magnetic field parallel to the beam direction inside the tracker. This field bends the paths of charged particles so that the tracking system can measure the particles' momenta. That is, a particle with momentum p and non-zero charge q travels in a helix with radius

r = \frac{p_T}{qB}

in a solenoidal field of strength B along the z direction. Thus, measuring the curvature of the track in the r-φ plane gives a measure of the p_T, while measuring the track direction in the r-z plane is a measure of p_T/p_z. The tracking system also provides secondary vertex measurements necessary for heavy flavor identification. The tracking system itself consists of two subdetectors: the Silicon Microstrip Tracker (SMT) and the Central Fiber Tracker (CFT).
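The kinematic definitions above translate directly into code. This sketch of mine evaluates Eq. (3.5) and inverts r = p_T/(qB) in practical units, where p_T[GeV] ≈ 0.3 q B[T] r[m]; the 1 m track radius is an illustrative number, not a measured DØ track.

```python
import math

def pseudorapidity(theta):
    """Eq. (3.5): eta = -ln(tan(theta/2)), theta in radians."""
    return -math.log(math.tan(theta / 2.0))

def pt_from_radius(r_m, b_tesla=2.0, charge=1.0):
    """Invert r = p_T / (qB) in practical units:
    p_T [GeV] ~= 0.3 * q * B [T] * r [m]."""
    return 0.3 * charge * b_tesla * r_m

# A track perpendicular to the beam (theta = pi/2) has eta = 0.
print(pseudorapidity(math.pi / 2))       # ~0 (up to floating-point rounding)
# Illustrative: a 1 m radius of curvature in the 2 T DO solenoid.
print(f"{pt_from_radius(1.0):.2f} GeV")  # 0.60 GeV
```

The second number shows why a stiff (high-p_T) track is nearly straight: a 50 GeV electron would have a radius of curvature of over 80 m in this field.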
Figure 3.4 depicts the tracking system.

Figure 3.4: DØ tracking system [19].

Silicon Microstrip Tracker

The SMT is the subdetector closest to the beam pipe. Thus, it is designed to be the detector with the highest position resolution so that it can probe the area around the interaction region very precisely. For this reason, the SMT plays a vital role in finding the primary interaction vertex as well as in reconstructing the secondary decay vertices of short-lived bottom hadrons. This detector is a hybrid system of barrels and disks made from silicon micro-strip detectors. The barrels tend to detect the tracks of particles with small η values while the disks are most useful for particles with larger η values. The interspersed barrel and disk design, shown in Figure 3.5, is motivated by the fact that the interaction point is not necessarily in the center of the detector. Rather, the interaction point is Gaussian distributed with a mean at z = 0 cm and σ_z = 28 cm, as shown in Figure 3.6.

Figure 3.5: SMT detector.

The six 12.4 cm long SMT barrel modules each have four concentric layers of silicon. The layers are evenly spaced with an inner radius of 2.5 cm and an outer radius of 10 cm. The four-layer barrel coverage corresponds to |η| < 1.5 coverage for interactions at z = 0.

Figure 3.6: Distribution of interaction points in z. Adapted from [20].

Each barrel is constructed of silicon ladder assemblies that overlap in azimuth. These ladders are support structures, made of beryllium wafers reinforced by rails of composite fiber and foam, to which the silicon strips and readout electronics are fastened.
The first and third layers use double-sided silicon sensors with axial and 90° stereo strips in the middle four barrel modules. The axial detectors are made from p-type and the stereo from n-type silicon wafers. Modules one and six use single-sided detectors in the first and third layers since the stereo tracking information can be obtained from the disks. The second and fourth layers have double-sided detectors in all six modules, but the stereo strips have only a 2° offset. The silicon barrels have a resolution of 10 microns [21].

The 12 so-called F-disks are the smaller disks used in the central part of the SMT as shown in Figure 3.5. The six innermost disks are attached to the outer sides of the barrel modules (except at z = 0 cm). Three additional F-disks are also attached at each end of the barrel in order to increase the silicon η acceptance. Each F-disk is constructed with 12 trapezoidal wedges arranged in a plate with a hole in the middle for the beampipe. These disks extend radially from 2.5 to 9.8 cm from the beampipe. The detectors are all double-sided.

The larger H-disks are positioned at z = ±94 cm and z = ±126 cm. These have inner radii of 9.6 cm and outer radii of 23.6 cm. The detectors are single-sided. The planes have wedges glued back-to-back to provide a 15° stereo angle. The H-disks cover about 2 ≤ |η| ≤ 3, giving some minimal track information about forward-moving particles that would miss the outer fiber tracker.

All of the silicon detectors are operated at temperatures of 5°−10°C in order to reduce the effects of noise and radiation damage. This temperature is achieved using a cooling mixture of deionized ethylene glycol and water. Depending on the detector type, the signal-to-noise performance ranges from 12:1 to 18:1 [19]. The SMT has 792,576 readout channels.

Central Fiber Tracker

The Central Fiber Tracker (CFT) surrounds the SMT. It consists of 76,800 scintillating fibers.
The fibers are arranged on 8 concentric carbon-fiber barrels with radii from 20 to 51 cm. The outer six barrels are 2.5 m long while the inner two barrels are only 1.7 m long in order to accommodate the silicon H-disks. The fibers are arranged in single-layer ribbons 128 fibers wide. These singlet layers are then joined to make doublet layers by placing the fiber centers of one layer in the spaces between the fibers of the other layer as shown in Figure 3.7. The resolution of a fiber doublet is 100 microns [21].

Two doublet layers of fibers are positioned on each of these barrels. The layer closest to the barrel is aligned with the z-axis and is called an axial layer. The next layer is aligned at about a ±3° angle with respect to the beam axis. These are called stereo layers, or u and v layers. The u and v layers alternate barrel by barrel such that there are eight axial, four u, and four v layers in the CFT. Since the CFT covers more radial distance than the SMT, the CFT is better for determining the pT and charge of charged particles by measuring the curvature of the tracks in the solenoidal magnetic field.

Figure 3.7: Alignment of two single fiber layers to make a doublet layer [19].

The scintillating fibers are 835 μm in diameter. The fiber core is polystyrene doped with 1% p-terphenyl and 1500 ppm 3-hydroxyflavone. The fibers have peak scintillation at 530 nm, which lies in the best optical transmission regime of the polystyrene. Around the core are two thin layers of cladding, the inner made from acrylic, the outer from a fluoro-acrylic material. Doubly-clad fibers are used since they transmit light more efficiently than singly-clad fibers. At one end of each fiber, there is an aluminum mirror coating to reflect the light. At the other end, a clear waveguide fiber is matched to each scintillating fiber to transport the light from the CFT. The waveguides are identical to the scintillating fibers except that they do not contain the fluorescent dyes.
The waveguides carry the light 7 to 12 meters to the readout platform. Here the waveguides are connected to cassettes, which are set in a liquid helium cryostat. The light goes through the cassettes to the Visible Light Photon Counters (VLPCs). The VLPCs are small silicon devices with arrays of photo-sensitive areas which convert the light from the fibers to electrical pulses for read-out. The VLPCs operate at about 9 K, have a quantum efficiency of greater than 80%, and have a gain of 20,000 to 50,000. At ηd = 0, the transverse momentum resolution for the DØ tracking system can be parameterized as

σ(pT)/pT = sqrt(0.0015² + (0.0014 pT)²),

with pT in GeV.

3.3.3 Preshower Detectors

Outside the tracking system sit the preshower detectors, which are meant to enhance electron and photon identification. These detectors are also new for Run II. They are designed to obtain an energy sampling of the particles which have just passed through the solenoid, up to about 2 radiation lengths of dense, uninstrumented material not present in Run I [22]. The preshowers also have the precision to extend the tracking and can aid in electron and photon identification.

The Central Preshower Detector (CPS) sits in the central part of the detector (|ηd| < 1.2). It is a cylindrical detector with a radius of 72 cm squeezed into a 51 mm gap between the solenoid and the central calorimeter. This detector consists of three layers of scintillating strips. The innermost layer is axial while the outer two layers are positioned with stereo u and v angles of ±22.5°. The strips have a triangular cross section with a hole running through the middle for the waveshifting fiber (Figure 3.8). Each strip is covered in a reflective material which increases light yield and reduces cross-talk. The waveshifting fibers carry the signal from the detector to the clear waveguides which transmit the light to the VLPCs. The CPS has 7680 channels of readout.
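The tracking resolution parameterization above can be evaluated directly; this sketch simply plugs in the two constants from the text to show how the resolution degrades for stiff tracks:

```python
import math

def pt_resolution(pt):
    """Fractional pT resolution at eta_d = 0,
    sigma(pT)/pT = sqrt(0.0015^2 + (0.0014 * pT)^2), with pT in GeV
    (constants as quoted in the text)."""
    return math.sqrt(0.0015**2 + (0.0014 * pt)**2)

# The pT-proportional term dominates at high momentum: stiffer tracks bend
# less in the solenoid, so their curvature (hence pT) is measured less well.
for pt in (1.0, 10.0, 100.0):
    print(f"pT = {pt:6.1f} GeV: sigma_pT/pT = {pt_resolution(pt):.4f}")
```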
Separate detectors called the Forward Preshower Detectors (FPS) sit in the forward regions (1.4 < |η| < 2.5), acting as a counterpart to the CPS. These detectors are mounted to the front faces of the two calorimeter endcaps. The triangular strips and readout are the same as those for the CPS. However, the strips are arranged in four layers, an inner u and v layer and an outer u and v layer separated by 11 mm of lead absorber. There are no axial layers in the FPS. The inner layers usually detect minimum-ionizing particles (MIPs) while the outer layers also detect the beginnings of showers, which are larger signals. These layers are aptly called the MIP and shower layers, respectively.

Figure 3.8: Cross section of a layer of the CPS. The triangles are made of plastic scintillator with holes in the middle for the waveshifting fibers [19].

The 16,000 FPS channels are read out using the same VLPC system as the CFT and CPS. Since the solenoid does not sit in front of the FPS, its main function is not to improve energy resolution. Rather, its purpose is to help discriminate between electrons and photons since the tracking efficiencies get worse in the forward regions. That is, if a particle is not observed in the MIP layer but is seen in the shower layer, it is more likely to be a photon; whereas, a particle seen in both the MIP and shower layers is likely to be an electron.

3.3.4 Calorimeter

The DØ calorimeter was the pride of Run I. The detector itself remains unchanged in Run II; however, the readout system has been upgraded. The calorimeter is designed to accurately measure the energies of the hadronic and electromagnetic objects that enter it. The calorimeter is housed in 3 huge cryostats, one in the central region (|ηd| < 1.2) and one on each end, extending coverage to |ηd| ≈ 4.5. The central calorimeter (CC) weighs about 330 tons; each of the endcap calorimeters (EC) weighs about 240 tons.
The calorimeter is shown in Figure 3.9.

Figure 3.9: DØ calorimeter [19].

The DØ calorimeter is a compensating sampling calorimeter. Liquid argon is used as the active medium with depleted uranium (and copper and steel) as absorber. The incoming particles interact with the dense absorber, losing energy and showering. Electromagnetic (EM) and hadronic (HD) objects shower differently in the calorimeter, allowing for their identification as well as an energy measurement.

EM objects interact with the uranium via two processes: pair production (γ → e⁺e⁻) and bremsstrahlung (e → eγ). For each successive interaction, the average particle energy decreases while the number of particles increases. Collecting and measuring these secondary particles gives insight into the original EM object's energy (E₀) since the energy of the original particle drops exponentially:

E(x) = E₀ e^(−x/X₀)   (3.6)

where x is the distance traveled and X₀ is the radiation length of the material through which the particle passed. X₀ is defined both as the mean distance over which a high-energy electron loses all but 1/e of its energy to bremsstrahlung and as 7/9 of the mean free path for pair production by a high-energy photon. For uranium, X₀ is approximately 3.2 mm [4].

Hadrons, on the other hand, interact with the uranium nuclei via the strong force. About a third of the secondary particles produced in these interactions are neutral pions (π⁰'s), which decay primarily to photons. The rest of the secondary particles tend to interact strongly. Because of this, hadronic showers tend to be larger and develop over longer distances. The hadronic counterpart to X₀ is the nuclear interaction length (λ₀). For uranium, λ₀ is about 10.5 cm [4].
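Equation 3.6 can be used to see how quickly an EM shower develops in uranium; the 50 GeV starting energy below is just an illustrative choice:

```python
import math

X0_URANIUM = 0.32   # cm, radiation length of uranium (~3.2 mm, as quoted)

def em_energy_remaining(E0, x, X0=X0_URANIUM):
    """Average energy of the primary EM object after traversing x cm of
    absorber: E(x) = E0 * exp(-x / X0)  (Eq. 3.6)."""
    return E0 * math.exp(-x / X0)

# After ~20 radiation lengths (about 6.4 cm of uranium), essentially all of a
# 50 GeV electron's energy has been converted into the shower.
E0 = 50.0  # GeV
for n_X0 in (1, 5, 20):
    E = em_energy_remaining(E0, n_X0 * X0_URANIUM)
    print(f"{n_X0:2d} X0: E = {E:.3e} GeV  ({E / E0:.1%} remaining)")
```

The short X₀ relative to λ₀ (0.32 cm versus 10.5 cm) is exactly why the thin EM layers sit in front of the much deeper hadronic layers.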
Because EM objects tend to shower over a shorter distance than hadrons, the four innermost layers of both the CC and EC are the electromagnetic (EM) layers. These layers extend radially in the CC and along the z-axis in the EC. Each layer uses 3 mm (in the EC) or 4 mm (in the CC) thick depleted ²³⁸U absorbers. The next three layers in the CC and four in the EC are the fine hadronic (FH) layers. These use slightly thicker uranium absorbers, 6 mm thick. Finally, the coarse hadronic (CH) layers use 46.5 mm thick copper (CC) or stainless steel (EC) absorbers. There is one CH layer in the CC and three CH layers in the EC. The depths of each layer are shown in Table 3.1 in units of X₀ or λ₀.

All of the layers are broken up into readout cells 0.1 × 0.1 in (η, φ) space except in EM layer 3. Here, the cells are 0.05 × 0.05 in (η, φ) space for |ηd| < 2.7. The cells in layer 3 are smaller because this is where the maximum number of particles in an EM shower was expected to occur in Run I. However, in the far forward region (|ηd| > 3.2), the cell size increases to 0.2 × 0.2 in all layers.

        EM (X₀)               FH (λ₀)                 CH (λ₀)
        EM1  EM2  EM3  EM4    FH1  FH2  FH3  FH4     CH1  CH2  CH3
   CC   2    2    7    10     1.3  1.0  0.9          3
   EC   3    3    8    9      1.3  1.2  1.2  1.2     3    3    3

Table 3.1: Layer depths in the calorimeter.

These cells consist of an absorber plate followed by a gap filled with liquid argon. A G-10 board sits in the center of the gap. See Figure 3.10. When a particle enters the calorimeter, it showers inside the absorber plate, and the secondary particles from the shower ionize the argon atoms. The ionization electrons are attracted to the copper pads on the G-10 boards. These pads have a thin, high-resistivity coating and are kept at high positive voltage. The drifting electrons create an image charge on the copper pads which is read out at the edges of the board via copper traces. The gap between absorber plates is 2.3 mm, and the electron drift time across the gap is about 450 ns.
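The cell granularity rules described above can be captured in a small helper; the layer labels used here are illustrative shorthand, not names from the readout software:

```python
def cell_size(layer, abs_eta_d):
    """Readout-cell granularity (d_eta, d_phi) by layer and |eta_d|,
    following the segmentation described in the text."""
    if abs_eta_d > 3.2:
        return (0.2, 0.2)        # far-forward region, all layers
    if layer == "EM3" and abs_eta_d < 2.7:
        return (0.05, 0.05)      # finer segmentation at EM shower maximum
    return (0.1, 0.1)            # standard cell size

print(cell_size("EM1", 0.5))     # (0.1, 0.1)
print(cell_size("EM3", 0.5))     # (0.05, 0.05)
print(cell_size("FH1", 3.8))     # (0.2, 0.2)
```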
Several of these unit cells are stacked on top of each other to create a layer in the calorimeter. All of the cells in the layer are read out together to obtain the energy deposited in the layer. This grouping of unit cells is a "readout cell", and the term "cell" will refer to a readout cell in the following pages. The cells in each layer are aligned with the cells in the layers in front of and behind them in order to create projective towers, with each readout cell in the tower having the same ηd and φd. See Figure 3.11. For most calorimeter measurements, tower energy is used instead of the energies in the individual cells. This is a measure of the ET defined in Section 3.3.1.

Between the CC and EC cryostats are the inter-cryostat detectors (ICD) and "massless gap" detectors. These detectors compensate for the uninstrumented region between the cryostats; however, they do not have the energy resolution of the CC and EC. The ICD consists of slabs of scintillator between the cryostats, which are read out with photomultiplier tubes. On the other hand, the massless gap detectors, located inside the cryostats, are extra readout pads that sample the shower between the CC and EC.

Figure 3.10: Unit cell in the calorimeter.

The calorimeter readout chain is shown in Figure 3.12. A charge proportional to the energy loss of the particles traversing the cell is sent to the readout electronics through four ports in the cryostats via 30 Ω coaxial cables. First, the charge is integrated in the preamplifier to produce a voltage. Then, the voltage pulses are carried by twist-and-flat cables to the shaper and baseline subtracter (BLS), which shapes the signal and removes slowly varying offsets in the input voltage. The shaped signal is sampled at its peak at about 320 ns. Because the argon drift time is 430 ns, only 2/3 of the charge in the calorimeter is actually used.
The shaped signals are stored in switched capacitor arrays (SCAs) until a Level 1 trigger decision is made (about 4.2 μs). If a positive decision is made, the signal is sent to a second SCA buffer to await a Level 2 trigger decision (about 100 μs). Finally, the output signal is digitized by the Analog to Digital Converters (ADCs) and sent to the data acquisition system (DAQ).

Figure 3.11: A quarter of the calorimeter in the r−z plane of the detector showing the tower geometry.

The readout system is designed to have no deadtime up to a Level 1 trigger rate of 10 kHz, assuming one interaction per bunch crossing [23].

Figure 3.12: Calorimeter electronics readout chain [19].

The calorimeter design called for specific electromagnetic and hadronic energy resolutions, which must account for the material between the beam pipe and the calorimeter. The Run II resolution measurements are discussed in more detail in Sections 5.1.6 and 5.2.4.

3.3.5 Muon System

The only directly-detectable particles able to pass through the calorimeter are high-energy muons. The muons behave as minimum-ionizing particles (MIPs) in the calorimeter, depositing only small amounts of energy. Outside the calorimeter sits the muon system, well shielded from the debris from hadronic and electromagnetic showers. The muon system is designed to identify muons and provide an independent measurement of their momentum in a toroidal magnetic field. The muon system has three main components:

• a Wide Angle MUon Spectrometer (WAMUS) covering |η| < 1;

• a Forward Angle MUon Spectrometer (FAMUS) covering 1 < |η| < 2;

• a solid-iron magnet generating a toroidal field of 1.8 T.

The WAMUS consists of three layers of proportional drift tubes (PDTs) and two layers of scintillator plates with embedded wavelength shifting fibers. There are no scintillators in the middle layer. The FAMUS,
on the other hand, consists of three layers each of mini-drift tubes (MDTs) and scintillator pixels. Since the muon system is not used in this analysis, there is no need for further elaboration. More information can be found in [18] and [19].

3.3.6 Luminosity Monitor

An accurate measurement of the integrated luminosity is essential for making a cross section measurement. Therefore, DØ has instantaneous luminosity monitors which log the number of interactions that occur in the detector. These luminosity monitors also send information to the trigger framework so that events are kept only when an interaction is detected. DØ has two luminosity monitors, one attached to the inner face of each calorimeter endcap at z = ±135 cm (shown as "Level 0" in Figure 3.4). The monitors consist of arrays of scintillation counters arranged symmetrically around the beampipe covering 2.7 < |ηd| < 4.4. The scintillation counters are wedges of scintillator with an attached phototube. Their pseudorapidity coverage provides an acceptance of (98 ± 1)% for detecting inelastic collisions [25].

3.4 DØ Trigger System

In a pp̄ experiment, only a few events in a million are of interest. As stated earlier, most events are not hard-scatter events. Rather, they are low-angle, non-diffractive pp̄ scattering or parton scattering, neither of which is of much interest. Moreover, the original plans for Run II called for a beam crossing every 132 ns, and writing events to tape at a rate of 7 MHz is not technologically feasible since an average event contains about 300 kB of data. Even the current crossing rate of one event every 396 ns is orders of magnitude beyond what can be written to tape. DØ, therefore, uses a trigger system to select only the most interesting events.
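The mismatch the trigger must bridge can be seen with back-of-the-envelope arithmetic, using the 396 ns crossing time and the ~300 kB event size quoted above:

```python
CROSSING_RATE_HZ = 1.0 / 396e-9   # one crossing every 396 ns, ~2.5 MHz
EVENT_SIZE_BYTES = 300e3          # ~300 kB per event
TAPE_RATE_HZ = 50                 # events per second written to tape

raw_bandwidth = CROSSING_RATE_HZ * EVENT_SIZE_BYTES   # if every crossing were kept
kept_bandwidth = TAPE_RATE_HZ * EVENT_SIZE_BYTES      # after the trigger

print(f"untriggered: {raw_bandwidth / 1e9:.0f} GB/s")
print(f"to tape:     {kept_bandwidth / 1e6:.0f} MB/s")
print(f"rejection:   {CROSSING_RATE_HZ / TAPE_RATE_HZ:,.0f} : 1")
```

A rejection factor of tens of thousands is what the three-level trigger described next has to deliver.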
The trigger system reduces the rate to tape to about 50 events per second in three steps, called Level 1 (L1), Level 2 (L2), and Level 3 (L3).

Figure 3.13: DØ trigger scheme with typical trigger rates.

The trigger decision is based on specific patterns in the detector corresponding to particular types of events or objects. For example, these trigger decisions might be based on the amount of energy in EM or EM+HAD calorimeter towers, on tracks, on hits in the muon system, or on missing energy (discussed in Section 5.3). Moreover, these decisions must be made quickly in order to prevent the events from piling up. At each level, the trigger becomes more sophisticated and more selective, and requires more time. Hence, the output rate at each trigger level drops. The basic trigger scheme used by DØ is shown in Figure 3.13. Since this analysis requires only calorimeter triggers, the calorimeter triggers will be emphasized in the description of the three levels of triggering.

3.4.1 Level 1 Triggers

The L1 trigger provides the largest reduction in rate since it has to make a decision on every beam crossing to determine whether the event should proceed in the trigger chain. Because it has to make decisions very quickly, L1 is a hardware-based trigger system which uses simple algorithms implemented in Field Programmable Gate Arrays (FPGAs).
Figure 3.14: L1 trigger scheme [26].

At this level, only four detectors are used: the calorimeter, the CFT, the muon system, and the luminosity monitor. As shown in Figure 3.14, these detectors each send information to a corresponding trigger system. Then, each trigger system processes the information it receives and delivers an acceptance decision, called an And/Or term, to the L1 trigger framework. The L1 framework takes the readiness of the data acquisition system (DAQ), as well as the And/Or terms, into account and decides whether to reject the event or send it to L2. The maximum number of specific L1 trigger terms is 128, and the event is accepted if any of the And/Or terms fire.

L1 Calorimeter Trigger

The calorimeter part of the L1 trigger [27] uses information from trigger towers in the calorimeter. Trigger towers are constructed from four standard readout towers grouped together in a 2 × 2 pattern such that they cover 0.2 × 0.2 in (η, φ) space. The trigger towers are read out as EM trigger towers and hadronic (HAD) trigger towers separately. The EM trigger towers sum the energy in the EM layers of the calorimeter while the HAD trigger towers sum the energies in the FH layers. The CH layers are not used in triggering since little energy is deposited there. There are 1280 trigger towers of each type, broken up into 32 divisions in φd and 40 divisions in ηd.

The EM and HAD energy sums are done on the BLS boards, discussed in Section 3.3.4. When the calorimeter output reaches the BLS board, information required by the trigger is picked off before shaping and is sent to the summers, which sum the layer energies while the full readout continues to the SCAs.
After the EM and HAD energies are summed in individual readout towers, the energies in the four towers read out by each BLS board are summed to create EM and HAD trigger towers. These trigger tower energies are then sent from the electronics platform beneath the detector to the first floor movable counting house (MCH1). In MCH1, these cables are connected to Calorimeter Trigger Front End (CTFE) boards [28]. These boards first digitize the EM and HAD sums for each TT as 8-bit numbers representing 0.25 GeV steps in energy plus a fixed pedestal value of 8. The 0.25 GeV bins are centered on the nominal value; for example, an input energy between 0.125 and 0.375 GeV would be rounded to 0.25 GeV. These 8-bit numbers are called EM and HAD ADC counts. The ADC counts are then independently fed into lookup memories which convert EM and HAD energies to EM ET and HAD ET and (for HAD only) apply a low energy cut to remove noise. The transverse energies are still rounded to the nearest 0.25 GeV.

Figure 3.15: L1 calorimeter trigger diagram [29].

At this point, the output energies are used in several ways. Both the EM ET and the HAD ET are sent to the EM and HAD adder trees, which produce global EM and HAD ET sums for the entire calorimeter. The EM ETs are also compared to reference ETs, with the results sent to counter trees which count the number of towers above certain ET thresholds. This information is then used in making L1 EM trigger decisions.
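The CTFE digitization step (0.25 GeV counts, centered bins, fixed pedestal of 8) can be sketched as follows; the clamping to the 8-bit range is an assumption, since overflow handling is not described in the text:

```python
def ctfe_adc_counts(energy_gev, pedestal=8):
    """Digitize a trigger-tower energy sum into 8-bit ADC counts:
    0.25 GeV per count, bins centered on the nominal value, plus a fixed
    pedestal, clamped (assumed) to the 8-bit range 0-255."""
    counts = pedestal + round(energy_gev / 0.25)
    return max(0, min(255, counts))

# An input between 0.125 and 0.375 GeV rounds to one count above pedestal,
# i.e. it is treated as 0.25 GeV.
print(ctfe_adc_counts(0.30))   # 9  (pedestal 8 + 1 count = 0.25 GeV)
print(ctfe_adc_counts(10.0))   # 48 (pedestal 8 + 40 counts = 10 GeV)
```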
The EM and HAD ETs are also summed to produce a total (TOT) TT ET. Like the EM ETs, the TOT ETs are compared to reference energies, and the number of trigger towers above certain ET thresholds is obtained. This information is used for L1 jet trigger decisions. Figure 3.15 is a diagram of the L1 calorimeter trigger. More details on the L1 calorimeter trigger can be found in [27] and [30].

Figure 3.16: Trigger flow scheme for L1 and L2. L1 selects ET towers and tracks consistent with e, μ, and jet objects; L2 combines them into e, μ, and jet candidates.

3.4.2 Level 2 Triggers

For events which pass the L1 trigger, the L2 trigger system correlates information from different sub-detectors in order to create physics object candidates like electrons and muons. L2 also has the capability to use information from detectors not available at L1. Figure 3.16 shows the L2 trigger scheme. At this level, more time is taken to refine the information. For instance, instead of simply basing a decision on the energy in single 0.2 × 0.2 trigger towers, the L2 calorimeter trigger builds jets from 5 × 5 trigger towers centered on the highest energy ("seed") tower.

In terms of hardware, L2 uses 500 MHz Alpha processors running Linux. These are mounted on single boards in VME crates. Each board runs a specific algorithm to analyze a piece of the data read out from L1. That is, one board analyzes the calorimeter data, one the central muon data, and one the forward muon data. Two other boards run tracking algorithms for CFT and a subset of SMT data. All of these boards transmit their results to a global alpha board which decides whether or not to accept the event. Currently, many triggers, including the ones used in this analysis, have no Level 2 requirement. In these cases, an event accepted at L1 automatically passes L2. As the Tevatron luminosity increases, however, L2 will play a much more significant role.

3.4.3 Level 3 Triggers

On a Level 2 accept, the event goes to L3.
The L3 trigger system is a Linux farm where a node reads out all of the information from the subdetector readout crates (ROCs) and partially reconstructs the data for each event to determine whether it meets the L3 acceptance criteria. An independent copy of the L3 filter software runs on every L3 node so that as many events as there are nodes can be separately analyzed in parallel. The ROCs are a set of about 70 VME crates, each corresponding to a piece of a subdetector or the trigger framework. Each ROC is read out by a single board computer (SBC), powered by a 933 MHz Pentium III processor with 128 MB of RAM. Event sizes typically range from 1 to 10 kB per crate with total event sizes of 250 kB. The data are transferred from the SBCs to the L3 nodes via Ethernet connections. The L3 processors then reconstruct the events using simpler algorithms than the full reconstruction algorithms, and perform physics selections based on software filter tools. Each filter has the specific task of identifying a certain physics object or event characteristic. There are filters for electrons, jets, muons, tracks, and missing ET, to name a few. If an event passes an L3 criterion, it is sent through the network to a collection machine and is written to tape for offline analysis.

Chapter 4

Data and Monte Carlo Samples

Once an event passes a trigger selection and is written to tape, it must be reconstructed in order to be useful for analysis. This is done on a farm of 250 Linux computers running the DØ reconstruction software (RECO). This analysis uses data reconstructed with production release 14 (p14) of the reconstruction software. The reconstruction of physics objects will be discussed in the next chapter. This chapter will introduce the triggers used to select the data for the dielectron analysis and the dataset itself. In addition, the Monte Carlo samples used in the analysis will be discussed.
4.1 Trigger Selection

Events for this analysis are selected using triggers specifically designed for high-pT dielectron analyses. By selecting events passing these low-rate triggers, this analysis can be run on a very small subset of the entire data set.

4.1.1 Vocabulary

Before discussing the triggers used in the analysis, it is necessary to define the vocabulary used regarding trigger selection. L1 EM requirements are identified using the notation CEM(n,x) where n is the minimum number of trigger towers with at least x GeV of energy. For example, CEM(2,10) means that the trigger requirement is passed if there are at least two towers with at least 10 GeV of energy. A requirement like CEM(2,3)CEM(1,9) means that there must be at least one tower with at least 9 GeV of energy and at least one other tower with at least 3 GeV of energy, since the 9 GeV tower passes one of the two 3 GeV requirements.

L3 conditions are defined by the L3 filter employed. The three filters used in this analysis are ELE_LOOSE(n,x), ELE_NLV(n,x), and ELE_SH(n,x). ELE_LOOSE and ELE_NLV are both electron triggers using a simple cone algorithm. ELE_NLV also applies some non-linearity corrections and uses vertex information. ELE_SH is the same as ELE_NLV with the addition of a shower shape requirement. A complete description of trigger requirements can be found on the trigger database website [31].

  Trigger List   Trigger Name    L1                  L2   L3
  v12            E1_2L20         CEM(1,11)           —    ELE_NLV(2,20)
                 E2_2L20         CEM(2,6)            —    ELE_NLV(2,20)
                 E3_2L20         CEM(2,3)CEM(1,9)    —    ELE_NLV(2,20)
                 E1_2L15_SH15    CEM(1,11)           —    ELE_NLV(2,15)ELE_SH(1,15)
                 E2_2L15_SH15    CEM(2,6)            —    ELE_NLV(2,15)ELE_SH(1,15)
                 E3_2L15_SH15    CEM(2,3)CEM(1,9)    —    ELE_NLV(2,15)ELE_SH(1,15)
  v11            2EM_HI          CEM(2,10)           —    ELE_LOOSE(1,20)
  v10            2EM_HI          CEM(2,10)           —    ELE_LOOSE(1,20)
  v9             2EM_HI          CEM(2,10)           —    ELE_LOOSE(1,20)
  v8             2EM_HI          CEM(2,10)           —    ELE_LOOSE(1,10)

Table 4.1: Summary of the dielectron triggers broken down by trigger list version.
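The CEM notation above maps onto a simple counting rule; this sketch (the function names are my own, chosen for illustration) evaluates the L1 condition used by the E3_2L20 trigger:

```python
def cem(tower_ets, n, x):
    """CEM(n, x): at least n trigger towers with at least x GeV each."""
    return sum(1 for et in tower_ets if et >= x) >= n

def passes_e3_2l20_l1(tower_ets):
    """L1 condition CEM(2,3)CEM(1,9): one tower above 9 GeV plus a second,
    different tower above 3 GeV (the 9 GeV tower also counts toward CEM(2,3))."""
    return cem(tower_ets, 2, 3) and cem(tower_ets, 1, 9)

print(passes_e3_2l20_l1([12.0, 4.0]))   # True: 12 GeV tower + 4 GeV tower
print(passes_e3_2l20_l1([12.0, 2.0]))   # False: no second tower above 3 GeV
print(passes_e3_2l20_l1([8.0, 8.0]))    # False: no tower above 9 GeV
```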
4.1.2 Analysis Triggers and Efficiencies

The triggers used in the dielectron analysis are summarized in Table 4.1, broken down by trigger list version. The trigger list has changed a number of times in order to implement added functionality or to cope with higher luminosities, which translate into higher trigger rates. In v12, an OR of the listed triggers is used.

Because the triggers changed for different trigger lists, an average trigger efficiency is calculated by weighting the trigger efficiency for each version by the integrated luminosity collected during that version (discussed in Section 4.2). The efficiency for an offline electron to pass a specific trigger requirement is obtained using the "tag-and-probe" method on a sample of Z → ee events in data. This method is discussed here using the L1 electron trigger efficiency as an example and will be used several more times for other efficiency measurements. First, two offline electrons with an invariant mass in a window around the Z mass (80 < Mee < 100 GeV) are selected using the criteria discussed in Section 5.1. One electron is randomly chosen and, if the electron is matched to a trigger tower (or trigger towers) satisfying the L1 requirement within an R = sqrt(Δη² + Δφ²) = 0.4 cone, it is designated as the "tag" electron. The second offline electron (the "probe") is then used for the efficiency calculation by examining whether any trigger towers matched to it in a 0.4 cone pass the L1 requirement under examination. The efficiency is the number of matched probe electrons divided by the total number of probe electrons. It turns out that the L1 efficiency is flat in ηd and φd; however, there is a turn-on in pT depending on the threshold of the trigger. An example of this is shown in Figure 4.1 for CEM(1,11). The function used to parameterize the L1 electron efficiency is

f(pT) = (A2/2) [1 + erf((pT − A0)/(√2 A1))],
(4.1)

where A0, A1, and A2 are parameters which can be interpreted as the pT at which the efficiency reaches half its maximum value, the slope of the turn-on, and the maximum efficiency in the plateau region, respectively. A similar procedure is used to parameterize the L3 trigger turn-on curves. A complete discussion of the L3 efficiency measurement for electrons can be found in [32].

Figure 4.1: CEM(1,11) trigger turn-on curve.

One further complication arises for the dielectron analysis since there are two electrons. That is, the L1 triggers requiring two towers can be fired by one high-pT electron with its energy split between two towers or by both electrons each firing a tower. In addition, jets deposit some energy in the EM layers of the calorimeter, so these can occasionally fire an EM trigger as well. Since there are several ways a dielectron event can pass the trigger, the L1 trigger efficiency is very high. These scenarios are considered by the top_trigger package [32], which uses the parameterization derived for each L1 and L3 condition to estimate the trigger efficiencies for the Monte Carlo. This package also calculates the systematic errors for the trigger efficiency based on errors in the turn-on curve fits.

  Trigger List   ∫L dt (pb⁻¹)
  v8             21.17
  v9             31.12
  v10            16.01
  v11            58.58
  v12            116.12
  Total          243.00

Table 4.2: Breakdown of integrated luminosities by trigger list version.

4.2 Data Set

The data set consists of data taken between June 2002 and March 2004. This corresponds to 243.00 pb⁻¹ of integrated luminosity, which is broken down by trigger list version in Table 4.2. These data are reconstructed with versions p14.03.0x (x = 0, 1, 2), p14.05.0y (y = 0, 2, 2_dst), or p14.06.00 of RECO. RECO writes out the data in two forms: the data summary tier (DST) and the thumbnail (TMB).
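Returning to the trigger parameterization of Eq. 4.1: assuming the standard error-function form implied by the parameter descriptions, the turn-on can be sketched as below (the parameter values are illustrative, not the fitted ones):

```python
import math

def l1_turn_on(pt, a0, a1, a2):
    """Efficiency turn-on in the form of Eq. 4.1:
    f(pT) = (a2/2) * (1 + erf((pT - a0) / (sqrt(2) * a1))), where
    a0 = pT at half maximum, a1 = width of the turn-on, a2 = plateau value."""
    return 0.5 * a2 * (1.0 + math.erf((pt - a0) / (math.sqrt(2.0) * a1)))

# Illustrative parameters for a CEM(1,11)-like curve (not the fitted values):
a0, a1, a2 = 12.0, 2.0, 0.99
for pt in (8.0, 12.0, 30.0):
    print(f"pT = {pt:5.1f} GeV: eff = {l1_turn_on(pt, a0, a1, a2):.3f}")
```

At pT = a0 the function is exactly half of the plateau value a2, matching the stated interpretation of the parameters.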
The DSTs contain all of the information needed to perform any physics analysis or even to re-reconstruct high-level physics objects. The TMBs, on the other hand, are about a tenth the size of the DSTs. They contain all of the physics information needed for most analyses, leaving out much of the lower-level information stored in the DSTs. The TMBs are then skimmed by the Common Samples Group based on physics objects. For this analysis, the 2EM Common Samples Group skim, which requires two EM objects with |ID| = 10 or 11 and pT > 7 GeV, is further skimmed by the top group with tighter cuts applied.

In this analysis, two skims are used. The DIEM skim, which requires 2 EM objects with pT > 15 GeV, |ID| = 10 or 11, f_EM > 0.9, f_iso < 0.15, and χ²_Cal7 < 50, is the main sample used. However, the DIEM_EXTRALOOSE skim, which requires only 2 EM objects with pT > 15 GeV and |ID| = 10 or 11, is used to obtain an estimate of the fake electron background and to measure the electron reconstruction and cluster efficiencies. (These selection criteria will be discussed in Section 5.1.) This skimming is done using a version of the top_analyze package called "Stradavarius_updated" [33]. This package also converts the data from the TMB storage format to ROOT-based [34] ntuples, or ROOT-tuples, which are more analysis-friendly.

4.2.1 Data Quality

The integrated luminosity listed in Table 4.2 is not the total luminosity collected by the DØ detector; it is just the amount of data actually used in the analysis. Some of the data are unusable, mainly because of malfunctioning detectors, readout electronics, or triggers. High quality data are ensured by using good run selection and good luminosity block selection.

Good run selection is based on the DØ Run Quality Database. For this analysis, runs marked "Not Bad" for the SMT, CFT, and calorimeter are used. This means that, during the runs, these detectors were fully functional and exhibited no major problems.
Good luminosity block selection is based on the "Ring of Fire" list and the Bad Jet/MET LBN lists. The Ring of Fire list [35] removes all luminosity blocks in which a φ-ring of energy in the calorimeter appears. The ring was caused by a grounding problem, which is now resolved. The Bad Jet/MET LBN lists are used to remove groups of about 20 sequential luminosity blocks with suspect missing energy. A group of luminosity blocks falls on this list if it fails any of the following criteria [36]:

• The average E̸T shift (√(⟨Ex⟩² + ⟨Ey⟩²)) of the luminosity block groups must be less than 6 GeV.

• The average RMS-xy (√(RMS(Ex)² + RMS(Ey)²)) of the RMS values of the Ex and Ey distributions of the groups must be smaller than 20 GeV.

• The mean of the scalar transverse energy (⟨SET⟩) distribution of the groups must be greater than 60 GeV.

The data quality selection discussed to this point is implemented using the top_dq package (version v00-05-01) [37]. Less than 5% of the data are affected by these run and luminosity block selections.

It turns out, however, that some events with calorimeter readout malfunctions still make it through the quality control. Since top event selection involves jets and E̸T, both of which are very susceptible to this occurrence, another event quality cut must be applied in the analysis. A cut to remove these noisy events is defined in [38]. This cut removes events which show a significant difference between the energy read out by the L1 calorimeter and the precision calorimeter energy, which goes through the full readout and reconstruction chain. This difference is quantified by the L1Conf variable, which is defined to be the number of trigger towers with E_T^TT < 2 GeV and E_T^Cal − E_T^TT > 1 GeV, where E_T^Cal is the precision readout energy, divided by the total number of trigger towers with E_T^TT < 2 GeV.
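The L1Conf ratio just defined can be sketched in a few lines of Python; the tower list here is an illustrative stand-in for the per-event trigger-tower readout, not actual detector data.

```python
def l1conf(towers, tt_max=2.0, diff_min=1.0):
    """L1Conf: fraction of low-E_T trigger towers whose precision
    readout exceeds the L1 readout by more than diff_min GeV.
    `towers` is a list of (E_T^TT, E_T^Cal) pairs in GeV."""
    low = [(l1, prec) for l1, prec in towers if l1 < tt_max]
    if not low:
        return 0.0
    noisy = [t for t in low if t[1] - t[0] > diff_min]
    return len(noisy) / len(low)

# Illustrative event: three low-E_T towers, one of which reads out
# 1.5 GeV more in the precision chain than at L1.
towers = [(0.5, 0.6), (1.0, 1.1), (1.5, 3.0), (10.0, 10.2)]
# l1conf(towers) -> 1/3
```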
It also employs a coherent noise variable, cn, defined in detail in [35], which flags events with a coherent shift in the pedestal values of all cells in one or more ADCs. In the end, this cut requires events to satisfy L1Conf < 0.3 OR cn = 0. The efficiency for this cut is found to be 100% in a Z sample. However, 15-20% of events in the loose sample used for estimating the backgrounds are rejected [38].

4.3 Monte Carlo

In addition to the data, event simulations are required in order to predict what events of interest look like in the detector. Such simulations are produced using Monte Carlo generators. The Monte Carlo generation proceeds in three steps. First, the event is simulated. Then, it is run through a model of the detector which predicts the detector response. Finally, it is reconstructed just like the data coming out of the detector.

In the first step, the pp̄ interaction is simulated using programs like Herwig [39], Alpgen [40][41][42], or Pythia [43]. In this analysis, Alpgen v1.2, using CTEQ6.1M [44] PDFs, models the hard scatter for most processes. Then, the Alpgen output is run through Pythia v6.2, using CTEQ5L PDFs, which handles fragmentation and decay. On top of Pythia, EvtGen [45] is used to model the decays of b hadrons, and TAUOLA [46] is used to decay τ's.

The DØ detector is modeled using the GEANT3 package [47]. This package is used to determine the effects of the detector material and magnetic field on the particles produced in the generators as they travel through the detector. It also models ionization and secondary particles produced through interactions with the detector. The response of the detector is accounted for using the DØsim package [19]. This package merges the hard scatter event with minimum bias events; adds SMT, CFT, calorimeter, and muon system noise and inefficiencies; and digitizes the simulated ionization and shower response. The output of DØsim has the same format as the raw data.
Therefore, the MC can be run through RECO and reconstructed just as the data are.

4.3.1 Monte Carlo Samples

The Monte Carlo samples used in this analysis are described here. Unless otherwise stated, the samples are generated with Alpgen and run through Pythia for fragmentation and decay. The samples use the Tune A underlying event model [48]. The lepton parton cuts are pT > 0 GeV and |η| < 10, and the jet parton cuts are pT > 8 GeV and |η| < 3.5. The minimum distance between two jets is ΔR(j,j) > 0.4, but no cut is applied on the minimum distance between a jet and a lepton.

The tt̄ signal sample is produced with both top quarks decaying to leptons, including τ's, which decay inclusively. For the purpose of the cross section analysis, the signal sample is generated with a top quark mass of 175 GeV. For the mass analysis, the signal Monte Carlo must be generated assuming many different masses for the top quark. Therefore, tt̄ Monte Carlo has also been generated for top masses of 120, 140, 160, 190, 210, and 230 GeV. These mass points use the Pythia underlying event instead of the Tune A underlying event. For the mass analysis, more mass points (120, 130, 140, 145, 150, 155, 160, 165, 170, 175, 180, 185, 190, 195, 200, 205, 210, 220, and 230 GeV) have been generated, this time using Tune A.

WW and WZ Monte Carlo samples have been generated in order to study the diboson background. Two WW samples are produced, WW → ℓℓ and WWjj → ℓℓjj, since millions of WW → ℓℓ events would need to be produced to study this background in the two-jet bin. The WW cross section is normalized to the NLO cross section, which is 35% higher than the LO cross section. Since a NLO cross section for WWjj is not available, the LO cross section for WWjj is also scaled up by 35%, and a 35% systematic uncertainty is applied to this cross section. The WZ sample is generated using Pythia only. In this sample, the W decays to quarks while the Z decays to ee or μμ.
As with the WW sample, two Z/γ* → ττ samples are produced, a jet-inclusive sample and a two-jet sample. The Z → ττ sample is produced using Pythia only. Both τ's decay to leptons, and there is an 8 GeV cut on the pT of the e or μ produced in the decay. This sample is produced for M_ττ > 30 GeV. The Z/γ*jj → ττjj sample is produced using the standard Alpgen-to-Pythia chain. It is produced in two invariant mass regimes: 15 < M_ττ < 60 GeV and 60 < M_ττ < 130 GeV.

Finally, a jet-inclusive sample and a two-jet sample are generated for the Z/γ* → ee process. Both are generated in three mass bins: 15 < M_ee < 60 GeV, 60 < M_ee < 130 GeV, and 130 < M_ee < 250 GeV. The Z/γ* → ee sample is generated using Pythia alone while the Z/γ*jj → eejj sample is produced through the standard Alpgen-to-Pythia chain. This sample is often referred to as the Zjj sample.

Chapter 5

Object Reconstruction and Identification

To reconstruct events from the millions of channels of output from the detector, RECO first unpacks all of the detector information and tries to form clusters within the individual subdetectors. These clusters could be, for example, hits in the tracker or deposited energy in neighboring calorimeter cells. Various algorithms are then used to reconstruct physical objects like electrons or jets from these simple clusters. The reconstructed physical objects form the basis of the ensuing data analysis. Specifically, top events in the dielectron channel are distinguished from background using four basic objects: electrons, jets, E̸T, and primary vertices. Therefore, the reconstruction of these four objects is discussed in this chapter, as are the identification and selection of these objects.

5.1 Electrons

5.1.1 Electromagnetic Cluster Reconstruction

At the reconstruction level, an EM cluster is defined as a group of towers in a cone of radius R = 0.4 around a seed tower defined by its energy content.
To be considered an EM cluster, a cluster must have a minimum transverse energy of 1.5 GeV and have 90% of its energy deposited in the EM layers of the calorimeter. The fraction of energy in the EM layers, or the EM fraction, is

    f_EM = E_EM / E_tot,    (5.1)

where E_EM is the cluster energy in the EM layers and E_tot is the total energy of the cluster. An EM cluster without a loose, associated track is designated |ID| = 10; whereas, a cluster with a loose, associated track is designated |ID| = 11. An EM cluster with |ID| = 10 or 11 is termed a "loose" electron. (Throughout this dissertation, "electron" is taken to mean electron or positron since they are identical except for their charge.)

Note that the calorimeter readout is "zero-suppressed," meaning that only energies above pedestal and noise are read out. Zero-suppression is quantified as a ratio of the measured energy above the pedestal to the mean width of the noise (σ) in that channel. The suppression used is 2.5σ, which means that the measured energy above the pedestal must be 2.5 times greater than the noise in the channel to be read out.

While zero-suppression is a good way to remove some low-level noise from the calorimeter readout, there is a high incidence of calorimeter cells reading out spurious energy from electronic noise. These "hot cells" are identified and killed by a hot cell killer algorithm called NADA [49]. This algorithm looks in a 3 × 3 cell window around the cell and in this same region one layer above and one below, removing neighboring cells with energies below 100 MeV. Then, it sums the energies of the remaining neighboring cells and flags the cell as a hot cell if the sum is below a given threshold. For most layers, this threshold is 0.02 × E_cell, where E_cell is the energy of the cell under consideration.

In addition, the 55296 calorimeter channels are calibrated using a pulser system. The channels are pulsed, and the response is measured and equalized to the calibration pulses.
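The NADA neighbor-sum test described above can be sketched as follows. This is a simplified single-layer illustration of the decision for one cell, not the full 3 × 3 × 3 windowing of the actual package; the energies are illustrative.

```python
def is_hot_cell(e_cell, neighbor_energies, frac=0.02, e_min=0.1):
    """Simplified NADA test for one cell: drop neighbors below
    e_min (100 MeV), sum the rest, and flag the cell as hot if the
    sum falls below frac * e_cell (0.02 * E_cell for most layers).
    Energies are in GeV."""
    neighbor_sum = sum(e for e in neighbor_energies if e >= e_min)
    return neighbor_sum < frac * e_cell

# A 20 GeV cell surrounded only by noise-level deposits looks hot,
# while the same cell inside a real shower does not:
# is_hot_cell(20.0, [0.05, 0.08, 0.02]) -> True
# is_hot_cell(20.0, [1.5, 2.0, 0.8])    -> False
```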
The energies are also corrected for geometrical effects in the calorimeter [50] and for non-linearities in the new readout electronics [51]. Finally, the energy lost by the electrons in the several radiation lengths of new material in front of the calorimeter has been studied with detailed simulations [50] and has been parameterized as a function of η and electron energy. All of these corrections are applied in the reconstruction software.

5.1.2 Electromagnetic Cluster Identification

As stated above, the EM cluster is expected to have a large f_EM. However, it must also have a longitudinal and lateral shape consistent with that of an electron. Each cluster is assigned a χ²_Cal7, or H-Matrix, based on 7 parameters which compare the values of the energy deposited in each EM layer and the total shower energy with the average distributions obtained in Monte Carlo. Electrons tend to have small χ²_Cal7 values. In addition, electron candidates tend to be isolated in the calorimeter. That is, the isolation fraction,

    f_iso = [E_tot(R < 0.4) − E_EM(R < 0.2)] / E_EM(R < 0.2),    (5.2)

tends to be small, meaning that there is not much calorimeter energy in a halo around the EM cluster. Electron clusters are therefore selected by requiring f_EM > 0.9, f_iso < 0.15, and χ²_Cal7 < 50. Electrons passing this calorimeter selection are called "medium" electrons.

5.1.3 Track Match

Simply finding an EM cluster, however, is not sufficient for determining whether the object is actually an electron. Photons and neutral pions (π⁰'s) also tend to look like electrons in the calorimeter. To reject some of these backgrounds, the EM clusters are required to have an associated track match. An associated track is a track in a road satisfying the conditions |Δη_EM,trk| < 0.05 and |Δφ_EM,trk| < 0.05. If there is more than one track in this road, the one with the highest Prob(χ²_spatial), where

    χ²_spatial = (δφ/σ_φ)² + (δz/σ_z)²,    (5.3)

is defined to be the track matched to the EM object.
In Equation 5.3, δφ is the difference in φ between the extrapolated track impact at the EM3 layer of the calorimeter and the cluster position in the EM3 floor; δz is the difference between the vertex position calculated from the track and that calculated from the EM cluster; and σ_φ and σ_z are the root-mean-squares (RMS) of the experimental distributions of the associated quantities.

5.1.4 Likelihood

Even with a track match, instrumental, or "fake electron," backgrounds still remain a problem. The main sources of these backgrounds are believed to be:

• π⁰ showers which overlap a track from a nearby charged particle.

• Photons which convert to e⁺e⁻ pairs.

• Charged pions that undergo charge exchange in the detector material.

• Fluctuations of jet final states.

In Run I, the primary sources of background were identified to be π⁰ overlaps and photon conversions [52]. In fact, these backgrounds could be separated using dE/dx, a measurement of energy loss, and transition radiation measured by the transition radiation detector (TRD). Conventional cuts on these quantities were rather inefficient, leading to the development of the likelihood in Run I. In Run II, there is no TRD, but the improved tracker and the preshower detectors provide other tools which could be used to separate the backgrounds. At this point, however, many of these tools are not yet understood well enough to be fully utilized; hence, these backgrounds are dealt with together for now.

In order to distinguish real electrons from fakes, certain characteristics of these fakes must be considered in trying to choose the best discriminating variables. A π⁰, for example, is typically produced in association with charged hadrons. Because of this, the calorimeter can be used to pick up signs of hadronic activity around the EM cluster.
Moreover, since the π⁰ would have to overlap a track from the charged hadrons in order to fake an electron, the track match could be poor; the track would not necessarily be isolated; and ET/pT, where ET is the transverse energy measured by the calorimeter and pT is the transverse momentum of the track measured by the tracker, would not tend toward 1, as expected for good electrons. Photon conversions typically look very electron-like in the calorimeter, though they may be slightly wider than an electron shower. However, one would expect a second track very close to the EM cluster which could be resolved by the tracker. Also, ET/pT would tend to be large. Asymmetric conversions, on the other hand, would be a virtually indistinguishable background since one of the particles would be very soft. Fortunately, asymmetric conversions are very rare.

Using a likelihood tends to be a more efficient method of separating good electrons from background than using square cuts since a likelihood considers the entire shapes of the signal and background distributions. The likelihood allows variables to be weighted by their effectiveness in discriminating signal and background, unlike conventional cuts. That is, if an event fails a square cut, the event is rejected. However, by using a likelihood, signal events that would normally fail one square cut but look very signal-like in all other variables would, most likely, be retained in the selected event sample.

The electron likelihood used in this analysis is based on seven variables:

• f_EM is included as there is still discriminating information in this distribution after the preselection cut.

• χ²_Cal7 is included for the same reason as f_EM.

• ET/pT is a good discriminator since it tends toward one for signal but not background.

• Prob(χ²_spatial) is used since background events tend to have a worse spatial track match to EM clusters than real electrons.
• Distance of closest approach (DCA), the shortest distance of the selected track to the line parallel to the z-axis which passes through the primary vertex, is included since the background tends to have more events in the tails of the distribution.

• Number of tracks in a ΔR = 0.05 cone is a good variable for removing photon conversions since these events tend to have two tracks close to each other.

• The scalar sum of the pT of all the tracks in a ΔR = 0.4 cone around, but excluding, the associated track is very useful for removing jets, which tend to have several significant tracks inside this cone.

The first five were included in a preliminary electron likelihood for Run II [53]; however, the last two have been developed to replace the track isolation variable in the preliminary likelihood, as that variable had topological dependencies. Also, the initial likelihood included an 8-parameter χ²_Cal8, which has been replaced by the current 7-parameter H-Matrix.

The likelihood is trained entirely on data. The signal sample used for training the likelihood is a Z → ee sample. These events are selected to have two EM objects with pT > 20 GeV, which pass the preselection cuts listed in 5.1.2 and 5.1.3. In addition, the invariant mass of the two electrons must be in the Z mass window (80 < M_ee < 100 GeV). The background sample is obtained from EM+jet events where the EM object and the jet are back-to-back. These events are mainly QCD di-jet and γ+jet events where the jet or photon fakes a preselected electron. This sample is obtained by requiring exactly one EM object passing the previously stated pT and identification cuts, exactly one good jet with pT > 15 GeV, E̸T < 15 GeV (to remove W's), and Δφ(e, jet) > 2.5.

Distributions of the seven input variables are obtained for both the signal and background samples for CC and EC electrons separately, where CC is defined as |η_d| < 1.1 and EC is defined as 1.5 < |η_d| < 2.5.
(The intercryostat region of the calorimeter is not used since the EM energy scale and EM identification are not yet well-understood in this region.) These distributions are smoothed using linear smoothing techniques and are normalized to unit area in order to produce probability distributions for each variable (Figures 5.1 and 5.2).

Now, these distributions can be used to assign a probability for a given EM object to be signal, P_sig(x), or background, P_bkg(x), where x is a vector of likelihood variables. That is, each likelihood variable for the object is assigned a probability to be signal or background from the binned probability distributions. Then, assuming no correlations, these probabilities can be simply multiplied together to give an overall probability for the event:

    P(x) = ∏_i P(x_i).

The correlations can be checked by calculating the correlation coefficients, ρ, for each combination of two inputs, x and y, where

    ρ = cov(x, y)/(σ_x σ_y) = Σ_i (x_i − x̄)(y_i − ȳ) / [√(Σ_i (x_i − x̄)²) √(Σ_j (y_j − ȳ)²)].    (5.4)

ρ is zero when the inputs are uncorrelated, one when they are completely correlated, and −1 when they are anti-correlated [54]. Tables 5.1 and 5.2 show the correlations between signal inputs in the CC and EC, respectively, while Tables 5.3 and 5.4 show the correlations between background inputs in the CC and EC, respectively. Most of the combinations have ρ's close to zero. However, f_EM and χ²_Cal7, for example, exhibit some anti-correlation. In fact, keeping or removing χ²_Cal7 as an input variable has no impact on the performance of the likelihood. It remains in this version of the likelihood for historical purposes.

Finally, to distinguish electrons from background objects, the following discriminant is used:

    L(x) = P_sig(x) / [P_sig(x) + P_bkg(x)].    (5.5)

For electrons, L(x) tends toward 1; whereas, L(x) tends toward 0 for background objects.

The performance of the likelihood can be tested by running over the signal and background samples.
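The discriminant of Equation 5.5, built from binned probability distributions under the no-correlation assumption, can be sketched as follows. The two-bin toy distributions are purely illustrative stand-ins for the smoothed, normalized distributions of Figures 5.1 and 5.2.

```python
from bisect import bisect_right

def bin_prob(edges, probs, value):
    """Look up the binned probability for `value` (edges ascending)."""
    return probs[bisect_right(edges, value) - 1]

def likelihood(x, sig_pdfs, bkg_pdfs):
    """Equation 5.5: L(x) = P_sig(x) / (P_sig(x) + P_bkg(x)), with
    P(x) = prod_i P(x_i) under the no-correlation assumption.
    Each pdf is an (edges, probabilities) pair per variable."""
    p_sig = p_bkg = 1.0
    for var, value in x.items():
        p_sig *= bin_prob(*sig_pdfs[var], value)
        p_bkg *= bin_prob(*bkg_pdfs[var], value)
    return p_sig / (p_sig + p_bkg)

# Two-bin toy pdfs for one variable (illustrative numbers only):
edges = [0.0, 0.5, 1.0]
sig = {"fEM": (edges, [0.1, 0.9])}
bkg = {"fEM": (edges, [0.7, 0.3])}
# likelihood({"fEM": 0.95}, sig, bkg) -> 0.9 / (0.9 + 0.3) = 0.75
```

An object that is signal-like in every variable drives L(x) toward 1, even if one individual variable would fail a square cut, which is the retention behavior described in the text.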
Figure 5.3 shows that the likelihood separates signal from background very well after the preselection cuts. This separation power can also be seen in Figure 5.4 by looking at signal and background efficiencies when cutting on the likelihoods shown in Figure 5.3 in increments of 0.02 units of likelihood. The selection cuts chosen for the analysis are L_CC > 0.85 and L_EC > 0.85, though these cuts could be different for CC and EC if the two likelihoods perform differently. Medium electrons passing the track and likelihood requirements are called "tight" electrons.

Figure 5.1: Smoothed, normalized likelihood input distributions for objects in the CC. The black line is signal; the red is background. These distributions are: (a) f_EM, (b) χ²_Cal7, (c) ET/pT, (d) χ²_spatial, (e) DCA, (f) number of tracks in a 0.05 cone, and (g) sum of track pT in a 0.4 cone around the candidate track.

Figure 5.2: Smoothed, normalized likelihood input distributions for objects in the EC. The black line is signal; the red is background.
These distributions are: (a) f_EM, (b) χ²_Cal7, (c) ET/pT, (d) χ²_spatial, (e) DCA, (f) number of tracks in a 0.05 cone, and (g) sum of track pT in a 0.4 cone around the candidate track.

                 f_EM    χ²_Cal7  χ²_spatial  ET/pT    DCA      Ntrks    Σp_trk
    f_EM         1       -0.450    0.089      -0.027   -0.005   -0.057   -0.033
    χ²_Cal7               1       -0.272       0.036   -0.002    0.124    0.041
    χ²_spatial                     1          -0.112   -0.014   -0.078   -0.044
    ET/pT                                      1        0.030    0.108    0.051
    DCA                                                 1       -0.012   -0.001
    Ntrks                                                        1        0.183
    Σp_trk                                                                1

Table 5.1: Correlation coefficients for likelihood signal input variables in the CC.

                 f_EM    χ²_Cal7  χ²_spatial  ET/pT    DCA      Ntrks    Σp_trk
    f_EM         1       -0.426    0.125      -0.120   -0.014   -0.002   -0.023
    χ²_Cal7               1       -0.141       0.091   -0.013    0.068    0.039
    χ²_spatial                     1          -0.445   -0.059   -0.091   -0.069
    ET/pT                                      1        0.001    0.124    0.093
    DCA                                                 1        0.006   -0.002
    Ntrks                                                        1        0.299
    Σp_trk                                                                1

Table 5.2: Correlation coefficients for likelihood signal input variables in the EC.

                 f_EM    χ²_Cal7  χ²_spatial  ET/pT    DCA      Ntrks    Σp_trk
    f_EM         1       -0.538    0.106      -0.014   -0.002   -0.059   -0.040
    χ²_Cal7               1        0.184       0.009    0.002    0.009    0.044
    χ²_spatial                     1          -0.080    0.003    0.078   -0.003
    ET/pT                                      1        0.000    0.006    0.079
    DCA                                                 1        0.003   -0.002
    Ntrks                                                        1        0.230
    Σp_trk                                                                1

Table 5.3: Correlation coefficients for likelihood background input variables in the CC.

                 f_EM    χ²_Cal7  χ²_spatial  ET/pT    DCA      Ntrks    Σp_trk
    f_EM         1       -0.550    0.129      -0.047   -0.002   -0.017   -0.016
    χ²_Cal7               1       -0.186       0.074   -0.002    0.037    0.025
    χ²_spatial                     1          -0.105   -0.006   -0.043   -0.013
    ET/pT                                      1        0.009    0.227    0.139
    DCA                                                 1        0.007    0.005
    Ntrks                                                        1        0.197
    Σp_trk                                                                1

Table 5.4: Correlation coefficients for likelihood background input variables in the EC.

Figure 5.3: Likelihood distributions for signal and background in the CC (top) and EC (bottom).
Figure 5.4: Background efficiency vs. signal efficiency after preselection for various likelihood cuts in the CC (top) and EC (bottom). The likelihood cuts chosen for the analysis are denoted by the red squares.

5.1.5 Electron Efficiencies and Scale Factors

Measuring the tight electron efficiency is done in two steps. First, the efficiency for an electron to be reconstructed and pass medium cuts is derived. Then the efficiency for a medium electron to pass tight cuts is found. The electron efficiencies are measured using the tag-and-probe method discussed in Section 4.1.2.

To measure the efficiency for an electron to be reconstructed and pass medium cuts, a data set consisting of a tight electron and a second track, where the invariant mass of the two tracks falls into a window around the Z mass (80-100 GeV), is used. The tight electron is the tag electron. The second track is the probe. The efficiency, ε, is obtained by determining the fraction of tracks with a matched medium electron:

    ε = N_med+trk / N_trk,    (5.6)

where N_med+trk is the number of probes with matched medium electrons and N_trk is the number of probes. Since electrons behave differently in the CC and EC, all electron efficiencies are measured separately for these two regions. Figure 5.5 shows the medium electron reconstruction efficiency, ε_reco*ID, plotted with respect to the distance to the nearest jet. This efficiency is also shown for electrons in the Z → ee Monte Carlo sample.

Likewise, the efficiency for going from medium to tight cuts, ε_trk*lh, is obtained using the tag-and-probe method. This time, a sample of events with two EM clusters in the calorimeter whose invariant mass lies in the previously defined Z window is used.
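The counting in Equation 5.6 can be sketched as follows, with the usual binomial uncertainty attached; the probe counts are illustrative, not the measured yields.

```python
import math

def tag_probe_efficiency(n_pass, n_probe):
    """Equation 5.6: efficiency = matched probes / all probes,
    with the standard binomial uncertainty
    sqrt(eff * (1 - eff) / N_probe)."""
    eff = n_pass / n_probe
    err = math.sqrt(eff * (1.0 - eff) / n_probe)
    return eff, err

# e.g. 1800 of 2000 probe tracks match a medium electron
# (illustrative numbers):
eff, err = tag_probe_efficiency(1800, 2000)  # eff = 0.90
```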
The tag cluster must pass tight cuts, and the probe cluster must pass medium cuts. The efficiency is then the number of medium electrons passing tight cuts divided by the number of medium electrons. This efficiency, plotted with respect to η_d and φ_d, is shown in Figure 5.6 for CC and EC separately. This efficiency is also shown for electrons in a sample of Z → ee Monte Carlo.

It is clear that the efficiencies measured in data and Monte Carlo are not the same. The Monte Carlo tends to have higher efficiencies since it does not describe all of the features of the real detector. For example, dead channels are not accounted for in the Monte Carlo but have a real impact in the data. Therefore, scale factors of the form

    κ = ε(Z_data) / ε(Z_MC)

are used to scale Monte Carlo efficiencies, ε(Z_MC), to the data efficiencies, ε(Z_data).

In order to check for systematic effects due to electron-jet overlap, the scale factor for the medium electron reconstruction efficiency is found versus ΔR between the electron and the closest jet in events with only one jet. The scale factors for CC and EC, shown in Figure 5.5, have no statistically significant dependence, while the systematic uncertainties are determined from the scatter of the scale factor versus ΔR. The resulting scale factors in CC and EC are listed in Table 5.5.

Figure 5.6 shows the correction factor for track-matching times likelihood efficiency versus η_d and φ_d. In the CC, the scale factor is obtained by fitting with a constant and taking the systematic uncertainty to be the larger RMS of the two plots. In the EC, however, the scale factor has some dependence on η_d. The scale factor is then determined by folding the η_d-dependent scale into the η_d spectrum of the EC electrons in the ee and eμ final states of the tt̄ Monte Carlo. The resulting average scale factors are consistent within statistical errors and are combined to form a single scale factor for EC electrons.
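The data-to-Monte-Carlo scale factor κ = ε(Z_data)/ε(Z_MC) can be sketched as below; the error propagation shown (uncorrelated relative uncertainties added in quadrature) is a standard assumption, and the input numbers are illustrative rather than the measured values of Table 5.5.

```python
import math

def scale_factor(eff_data, err_data, eff_mc, err_mc):
    """kappa = eps(Z_data) / eps(Z_MC), propagating uncorrelated
    relative uncertainties in quadrature."""
    kappa = eff_data / eff_mc
    err = kappa * math.sqrt((err_data / eff_data) ** 2
                            + (err_mc / eff_mc) ** 2)
    return kappa, err

# Illustrative efficiencies (not the measured ones):
kappa, kappa_err = scale_factor(0.85, 0.010, 0.97, 0.005)
```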
Doing this same convolution with the background Monte Carlo gives results consistent with the scale factor found in the tt̄ sample. Thus, the same scale factor is used throughout the analysis for EC electrons [55]. The resulting scale factors in CC and EC are also presented in Table 5.5.

    Scale Factor   CC              EC
    κ_reco*ID      0.979 ± 0.026   0.876 ± 0.067
    κ_trk*lh       0.869 ± 0.018   0.753 ± 0.033

Table 5.5: EM scale factors relating Monte Carlo to data in the CC and EC [55].

Figure 5.5: ε_reco*ID in data and Monte Carlo and the corresponding scale factors versus the distance between the electron track and the closest jet in CC (top) and EC (bottom). The green lines show the constant value fits (p0) to the scale factors. Adapted from [55].
Figure 5.6: The top two plots are ε_trk*lh vs. η_d for CC and EC electrons, respectively. The bottom two plots are ε_trk*lh vs. φ_d for CC and EC electrons, respectively. The green lines show the constant value fits (p0) to the scale factors. Adapted from [55].

5.1.6 Electron Energy Resolution and Oversmearing

The electron energy resolution is better in the Monte Carlo than in the data, causing the Z peak to be narrower in the Monte Carlo. In addition, the Z peak is shifted in the Monte Carlo from its position in the data. To compensate, the electron cluster energy in the Monte Carlo is smeared to reproduce the resolution in data. A scale factor is also applied in the Monte Carlo to shift the peak location. The electron energy resolution is parameterized by

    σ_E/E = C ⊕ S/√E ⊕ N/E,    (5.7)

where C, S, and N are constant, sampling, and noise terms, respectively. Hence, the energies of the Monte Carlo electrons are adjusted by

    E′ = E × [α + ξ₁ + ξ₂ + ξ₃],    (5.8)

where α is the scale factor and ξ₁, ξ₂, and ξ₃ are random oversmearings obtained from Gaussian distributions Gaus(0, σ = c), Gaus(0, σ = s/√E), and Gaus(0, σ = n/E), each with a mean of zero; c, s, and n are the constant, sampling, and noise oversmearing coefficients. Table 5.6 gives the values of the scale and oversmearing terms in three regions: CC (in fiducial), CC (not in fiducial), and EC. An in-fiducial electron is at least 0.01 radians in φ away from one of the 32 evenly spaced φ-cracks in the calorimeter. This distinction is made since a different energy scale is applied to in-fiducial and not-in-fiducial electrons.
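The adjustment of Equation 5.8 can be sketched as follows. The scale and constant-term values match the CC in-fiducial row of Table 5.6; setting s and n to zero reflects the observation, noted below, that only the scale factor and constant oversmearing are needed for the tuning.

```python
import math
import random

def smear_energy(e_mc, alpha, c, s, n, rng=random):
    """Equation 5.8: scale the MC electron energy by alpha and add
    Gaussian oversmearings of width c, s/sqrt(E), and n/E."""
    xi1 = rng.gauss(0.0, c)
    xi2 = rng.gauss(0.0, s / math.sqrt(e_mc))
    xi3 = rng.gauss(0.0, n / e_mc)
    return e_mc * (alpha + xi1 + xi2 + xi3)

# CC in-fiducial scale and constant term from Table 5.6, applied to a
# 50 GeV electron with a fixed seed for reproducibility:
rng = random.Random(42)
e_prime = smear_energy(50.0, 1.003, 0.045, 0.0, 0.0, rng)
```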
Once the scale and oversmearing parameters are obtained and applied to the Monte Carlo, the constant and sampling terms which determine the energy resolution can be found. Since high-pT electrons are being used, the noise term is negligible. Table 5.7 lists the values of these terms for in-fiducial CC, not-in-fiducial CC, and EC electrons.

Electron Type           Scale Factor     Oversmearing Parameter
CC (in fiducial)        1.003 ± 0.001    0.045 ± 0.004
CC (not in fiducial)    0.950 ± 0.011    0.115 ± 0.009
EC                      0.996 ± 0.005    0.034 ± 0.009

Table 5.6: Scale factors and oversmearing parameters for MC electrons [56].

Electron Type           C                  S (√GeV)
CC (in fiducial)        0.0439 ± 0.0002    0.224 ± 0.002
CC (not in fiducial)    0.1116 ± 0.0011    0.385 ± 0.013
EC                      0.0316 ± 0.0005    0.258 ± 0.006

Table 5.7: Energy resolution parameters for high-pT electrons [56].

A detailed description of how the scale factor, smearing coefficients, and resolution terms are obtained is presented in [56]. It also shows that only the scale factor and the oversmearing term given by σ = c are needed to tune the electron energy in the Monte Carlo to match the data. Figure 5.7 shows this agreement between data and smeared Monte Carlo.

5.1.7 Electron Charge

As discussed in Section 3.3.2, the magnetic field makes it possible to determine whether an object is positively or negatively charged in Run II. Therefore, if one is interested in, for example, Z → ee or tt̄ → ee events, a cut may be applied requiring that the electrons be oppositely charged. Of course, there is a small inefficiency with this cut since straight, high-pT tracks can sometimes be reconstructed with the wrong sign. Figure 5.8 shows the dielectron invariant mass distributions for CCCC, CCEC, and ECEC electron pairs with opposite and like signs. As with the other electron efficiencies, the Monte Carlo does not reproduce the data exactly; therefore, scale factors must be calculated for this cut as well. These are listed in Table 5.8.
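These data-to-Monte-Carlo scale factors are simple efficiency ratios. A minimal sketch (names are mine), with uncorrelated relative errors added in quadrature, reproducing the CCEC entry of Table 5.8:

```python
import math

def scale_factor(eff_data, err_data, eff_mc, err_mc):
    """Data/MC efficiency ratio with relative errors added in quadrature."""
    kappa = eff_data / eff_mc
    rel_err = math.sqrt((err_data / eff_data) ** 2 + (err_mc / eff_mc) ** 2)
    return kappa, kappa * rel_err

# Opposite-sign efficiencies for CCEC pairs (Table 5.8)
kappa, err = scale_factor(0.955, 0.003, 0.993, 0.000)
print(f"kappa_sign(CCEC) = {kappa:.3f} +/- {err:.3f}")  # 0.962 +/- 0.003
```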
Figure 5.7: Comparison of Z data and corrected Z Monte Carlo.

                   CCCC             CCEC             ECEC
Data Efficiency    0.997 ± 0.001    0.955 ± 0.003    0.906 ± 0.011
MC Efficiency      0.999 ± 0.000    0.993 ± 0.000    0.976 ± 0.001
κ_sign             0.998 ± 0.001    0.962 ± 0.003    0.928 ± 0.011

Table 5.8: Efficiencies and scale factors for requiring opposite charges for CCCC, CCEC, and ECEC electron pairs.

Figure 5.8: Mee distributions for opposite- and like-signed electron pairs in the CCCC (right), CCEC (middle), and ECEC (left).

5.2 Jets

5.2.1 Jet Reconstruction

Jets are reconstructed using the improved legacy cone algorithm [57] as recommended by the Run II QCD workshop. The cone size for jet reconstruction is R = 0.5. As for electrons, zero suppression and the hot cell killer are used to reduce noise. In addition, the T42 algorithm [58][59] is applied to obtain a finer treatment of calorimeter noise, which, in turn, improves the reconstruction of calorimeter objects. This algorithm removes 3D-isolated cells with an energy less than four times the width of the noise (4σ) in that cell. In addition, T42 rejects all cells with negative energies. EM1 and layers 8, 9, and 10 of the intercryostat region are not considered by T42, so all of their positive energy cells are kept. In most events, T42 rejects 30% to 60% of cells. The T42 algorithm reduces the number of fake jets clustered on noise, or "noise jets," by about a factor of two [60].

5.2.2 Jet Identification

Once the jets are reconstructed, further quality cuts are applied in order to distinguish real jets from fake jets.
These cuts are:

• 0.05 < fEM < 0.95 removes electromagnetic particles at the high end and jets with a disproportionate amount of hadronic energy at the low end.

• Coarse Hadronic Fraction (CHF) < 0.4 removes jets which deposit their energy predominantly in the coarse hadronic layers of the calorimeter since these layers should have less energy deposited in them and tend to be noisier.

• Hot Fraction (HotF) < 10 rejects jets clustered from hot cells by cutting on the ratio of the highest to the next-to-highest transverse energy cell in the calorimeter.

• n90 > 1 cuts out jets clustered from a single hot tower by requiring the number of towers containing 90% of the jet energy to be greater than one.

Even when these quality cuts are applied, a significant number of noise jets still survive. A comparison of the energy in the L1 calorimeter towers to the energy obtained in the precision readout turns out to be very discriminating against noise jets. Therefore, an additional variable has been derived using this information. Defining L1SET to be the scalar sum of the trigger towers' ETs in the same cone as the jet, the cut used to reject noise jets is

    L1SET / (pT (1 − CHF)) > 0.4 (in CC, EC) or > 0.2 (in ICD),

where CC is |ηd| < 0.8, EC is |ηd| > 1.5, and ICD is 0.8 < |ηd| < 1.5 [61]. The efficiency for this cut is very high (> 99.5%) in all three regions [60].

5.2.3 Jet Energy Scale

The raw energies of reconstructed jets are affected by noise, calorimeter response, showering effects, and the underlying event. Therefore, the standard jet energy scale (JES) corrections are applied in an attempt to correct the jet energies back to the particle level energy, the energy the particle had before interacting with the calorimeter. The corrected jet energy (Ecorr) is obtained from the measured energy (Emeas) by

    Ecorr = (Emeas − O) / (R × S),

where R is the calorimeter response,
S is the fraction of shower leakage outside the R = 0.5 cone, and O is the energy offset due to the underlying event, energy pile-up, multiple interactions, electronic noise, and uranium noise. R is determined by requiring ET balancing in γ+jet events; S is obtained by measuring the energy profiles of jets; and O is derived from energy densities in minimum bias trigger events.

In this analysis, JetCorr v5.1 [62] is used to correct jet energies in both the data and Monte Carlo. The corrections are done jet-by-jet, and different corrections are used for jets in data and Monte Carlo.

5.2.4 Jet Energy Resolution

The jet energy resolutions [63] are derived using two samples, one for jets above pT ≈ 50 GeV and one for jets with pT < 50 GeV. For high energy jets (pT > 50 GeV), a dijet sample is used. This sample is binned into several bins based on the average pT of the dijet system (⟨pT⟩ = (pT1 + pT2)/2). In each bin, the distribution of the transverse momentum asymmetry,

    Ajj = (pT1 − pT2) / (pT1 + pT2),    (5.9)

is obtained. The width of this distribution, σA, gives the jet pT resolution by

    σpT/pT = √2 σA.    (5.10)

For jets with pT < 50 GeV, a back-to-back photon+jet sample is used, in which the asymmetry variable is defined as

    Aγj = (pT^jet − pT^γ) / pT^γ.    (5.11)

Since the resolution of the photon is considerably better than the resolution of the jet, σ_pT^γ can be ignored. The jet resolution can then be written

    σpT/pT = σA / Rγj,    (5.12)

where Rγj = pT^jet / pT^γ corrects for the imbalance between average jet and photon pT in each pT bin.

|η| Range          Data                       Monte Carlo
                   N      S      C            N      S      C
0.0 < |η| < 0.5    5.05   0.753  0.0893       4.26   0.658  0.0436
0.5 < |η| < 1.0    0.0    1.20   0.0870       4.61   0.621  0.0578
1.0 < |η| < 1.5    2.24   0.924  0.135        3.08   0.816  0.0729
1.5 < |η| < 2.0    6.42   0.0    0.974        4.83   0.0    0.0735

Table 5.9: Jet energy resolution constants for jets in data and Monte Carlo [63].

The results from the two pT ranges are combined, and the jet energy resolution is parameterized using
    σpT/pT = √(N²/pT² + S²/pT + C²).    (5.13)

The fit parameters are summarized in Table 5.9.

5.2.5 Jet/EM Separation

Electrons and photons are reconstructed both as jets and as EM objects. Therefore, it is imperative to separate isolated electrons from jets in order to avoid double-counting these objects. Moreover, different energy scales are required for electrons and jets. The EM energy scale is applied to electrons, photons, and jets dominated by photons (namely π⁰'s) since all of these objects tend to have similar shower shapes in the calorimeter and are contained mainly in the EM part of the calorimeter. All other objects are considered to be jets, to which the jet energy scale is applied. A good EM cluster in the calorimeter is defined by the standard electron preselection cuts in the calorimeter: |ID| = 10 or 11, fEM > 0.9, fiso < 0.15, and HMx7 < 50. Any object in the jet list within ΔR < 0.5 of an EM object is removed from the jet list and is treated exclusively as an EM object.

If an EM object does not pass tighter selection cuts, it is not reconsidered as a jet. Thus, the EM energy scale is still applied to it, not the jet energy scale. It is true that a real jet could look like an EM object, in which case this treatment is incorrect. However, in looking at Z → ee events, it is clear that this effect is not a problem. Figure 5.9 shows the number of EM objects in Z events after requiring two tight electrons. Only 130 out of 14408 events (i.e., 0.90% ± 0.08% of events) have more than two EM objects, and a large fraction of these extra EM objects are most likely π⁰'s or photons. Since this effect is so small, this Jet/EM separation treatment is applicable.

Figure 5.9: Number of EM objects in Z and Z + 2 jet events where 2 tight electrons are required.
The number of Z + 2 jet events is normalized to the number of Z events in the 2nd EM bin.

5.2.6 Jet Scale Factor

As with the electrons, jet reconstruction and identification efficiencies in the Monte Carlo are not the same as in the data. Therefore, a scale factor must also be applied to the Monte Carlo jets. This scale factor is derived on a γ+jet sample and is found to be ET dependent [64]. Figure 5.10 shows this dependence for CC, ICD, and EC jets. The scale factor is cross-checked on a statistics-limited Z+jet sample, and the scale factors derived using this method agree with the scale factors obtained using the γ+jet sample within statistical errors. Instead of applying the jet scale factor in the analysis, the scale factor curves are folded in when top_analyze is run over the Monte Carlo samples. Hence, the jet reconstruction and identification efficiencies in the top_analyze output agree with the data.

5.3 Missing Transverse Energy

What primarily distinguishes top events from Z/γ* + jets events in the dielectron channel is the two ν's in the final state of the top decay. Direct observation of the neutrinos is impossible; rather, they are detected as an imbalance of energy in the transverse plane. That is, the neutrinos "appear" as missing transverse energy, E̸T. The E̸T has the magnitude of the vector sum of the transverse energies of the calorimeter cells used in the calculation, pointing in the opposite direction in φ in order to balance the energy in the transverse plane. This analysis uses the standard E̸T calculated from the transverse energies of calorimeter cells passing T42, except for cells in the CH layers; CH cells are used only if they are contained in good jets. The raw E̸T is calculated by RECO before any corrections are applied. Since energy corrections are made to both EM objects and jets, the E̸T must also be corrected in order to account for the change in energy imbalance.
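The bookkeeping for propagating object-level corrections into the missing ET amounts to subtracting each object's transverse-energy change, as a 2-vector, from the raw MET vector. A simplified sketch (names and numbers are illustrative, not from the thesis); the muon correction described below follows the same vector subtraction:

```python
import math

def propagate_to_met(met_x, met_y, corrections):
    """Subtract object-level energy corrections from the raw MET vector.

    corrections: list of (delta_pT, phi) pairs, one per corrected object,
    where delta_pT is the change in the object's transverse energy after
    the EM or JES correction.
    """
    for delta_pt, phi in corrections:
        met_x -= delta_pt * math.cos(phi)
        met_y -= delta_pt * math.sin(phi)
    return met_x, met_y, math.hypot(met_x, met_y)

# Raw MET of 30 GeV along phi = 0; one jet corrected upward by 10 GeV at
# phi = pi, opposite the MET direction (illustrative values only).
mx, my, met = propagate_to_met(30.0, 0.0, [(10.0, math.pi)])
print(f"corrected MET = {met:.1f} GeV")  # 40.0 GeV
```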
The JES correction has a considerably larger impact on the E̸T than the EM energy corrections. The E̸T after these corrections is termed the calorimeter E̸T. High-pT muons, however, do not deposit much energy in the calorimeter. The calorimeter E̸T does not account for the presence of these objects; therefore, some of the energy imbalance in the transverse plane is due to these muons, not neutrinos. Hence, one more correction to the E̸T must be made: the momentum of all the muons must be subtracted vectorially from the E̸T after deducting the minimum-ionizing energy deposited in the calorimeter. This E̸T is what is used in the analysis. Of course, in a dielectron analysis, this last correction has little impact since few events have two high-pT electrons and a high-pT muon.

Figure 5.10: Scale factor vs jet ET for CC, EC, and ICD jets.

5.3.1 E̸T Resolution

As with jets and electrons, the E̸T resolution in the Monte Carlo does not reproduce the E̸T resolution in data: the Monte Carlo E̸T distribution is too narrow. Therefore, the Monte Carlo E̸T is smeared to bring the core of the Monte Carlo distribution into agreement with the core of the distribution in data. [65] describes in detail how the oversmearing is obtained. In brief, the E̸T distribution, as well as the corresponding E̸x and E̸y distributions, is found to be narrower in the Z → ee Monte Carlo sample than in the Z → ee data sample.
Moreover, the E̸T resolution turns out to be a function of the unclustered scalar ET (ΣET^unclus), which is the scalar ET of the event with the scalar values of the electron and jet pT's subtracted. Hence, the E̸x and E̸y resolutions (the widths of the E̸x and E̸y distributions) for both data and Monte Carlo are plotted against ΣET^unclus. Oversmearing parameters for the x and y components of the E̸T, σ_E̸x and σ_E̸y, are then obtained by separately subtracting, in quadrature, the E̸x and E̸y resolutions in data and Monte Carlo as a function of ΣET^unclus. It turns out that these dependencies are linear and agree within error. Therefore, the same weighted average smearing factor is used for both components of the E̸T:

    σ_E̸x = σ_E̸y = 2.553 + 0.00895 × ΣET^unclus.    (5.14)

This oversmearing is found to be independent of data sample and of jet multiplicity. Note that the E̸T resolution for two jet events is considerably worse than for zero jet events, but, within errors, the difference between data and Monte Carlo is independent of jet multiplicity. Figure 5.11 compares the smeared and unsmeared E̸T in the Z Monte Carlo to the E̸T of Z events in data.

Figure 5.11: Comparison of smeared and unsmeared E̸T in the Z Monte Carlo to the E̸T in the tight dielectron (i.e., Z) data. The plot on the left shows the inclusive Z data and Monte Carlo. The plot on the right shows the same comparison for events with two or more jets.

5.4 Primary Vertex

Two primary vertex reconstruction algorithms exist in the DØ software. The d0reco package implements one during the reconstruction, while the one that is used for the analysis is applied later in the d0root package. Both algorithms use the same vertex selection method but differ in track selection and fitting techniques. d0reco uses a looser cut on the impact parameter significance of tracks entering the fitter than d0root.
Also, d0reco has no minimum requirement on the number of SMT hits when running on Monte Carlo, while d0root requires at least two SMT hits per track. In data, both require two SMT hits per track. Moreover, d0root uses a fitting technique that both determines the position of the primary vertex and refits the tracks with the constraint that they originate from the primary vertex. Though these differences exist, the two vertex reconstruction algorithms perform comparably. Since the d0reco primary vertex was upgraded to the d0root version from an older algorithm with poorer performance in production release p14.05, the d0root package is used in order to treat the entire data set uniformly.

Cut                         Njet    ε_data(Z → ee)   ε_MC(Z → ee)     scale factor
|z_PV| < 60 cm, Ntrk ≥ 3    ≥ 0     0.973 ± 0.001    0.981 ± 0.001    0.992 ± 0.001
                            ≥ 1     0.993 ± 0.002    0.993 ± 0.001    1.000 ± 0.002
                            ≥ 2     0.996 ± 0.004    0.996 ± 0.001    1.000 ± 0.004
Δz(d0reco, d0root)          ≥ 0     0.992 ± 0.001    0.991 ± 0.001    1.001 ± 0.001
                            ≥ 1     0.995 ± 0.001    0.997 ± 0.001    0.998 ± 0.002
                            ≥ 2     0.992 ± 0.004    0.999 ± 0.001    0.993 ± 0.006
Δz(PV, e)                   ≥ 0     0.988 ± 0.001    0.998 ± 0.000    0.990 ± 0.001
                            ≥ 1     0.991 ± 0.002    0.999 ± 0.001    0.992 ± 0.002
                            ≥ 2     0.984 ± 0.009    0.999 ± 0.001    0.985 ± 0.008

Table 5.10: Primary vertex cut efficiencies in Z → ee data and MC and a scale factor as a function of jet multiplicity. All errors are statistical.

5.4.1 Primary Vertex Cuts, Efficiencies, and Scale Factors

Since many quantities such as the E̸T and the electron track match are calculated with respect to the primary vertex, several cuts are applied in order to ensure a candidate event has a high-quality reconstructed vertex. First, since the SMT is the main tool used in identifying the primary vertex, the vertex must be within the SMT fiducial region (|z_PV| < 60 cm). In addition, the vertex must have at least three tracks attached to it (Ntrk ≥ 3).
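Collecting the vertex requirements of this section (the SMT fiducial and track-multiplicity cuts above, plus the vertex-consistency and electron impact-parameter cuts described next) into a single predicate gives a sketch like the following; function and argument names are illustrative:

```python
def passes_vertex_cuts(z_pv, n_tracks, z_d0reco, z_d0root, electron_dz):
    """Primary vertex quality cuts of Section 5.4.1 (all lengths in cm).

    electron_dz: list of |dz(e, PV)| values, one entry per electron.
    """
    if abs(z_pv) >= 60.0:                 # SMT fiducial region
        return False
    if n_tracks < 3:                      # at least three attached tracks
        return False
    if abs(z_d0reco - z_d0root) >= 5.0:   # d0reco/d0root vertex consistency
        return False
    return all(dz < 1.0 for dz in electron_dz)  # electrons from the PV
```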
This cut removes events in which no vertex is found (Ntrk = 0), in which case the primary vertex is set to the center of the detector, (0,0,0), by default. Since the E̸T and other quantities that depend on the primary vertex are calculated with respect to the d0reco vertex, not the d0root vertex, the two vertices must be consistent. Therefore, the cut |z_PV(d0reco) − z_PV(d0root)| < 5 cm is applied. Finally, both electrons are required to originate from the same primary vertex since the track match depends on the primary vertex position. Hence, a cut on the impact parameter of each electron track with respect to the primary vertex in z is applied: |Δz(e, PV)| < 1 cm.

                             Scale Factor
κ_PV(Njet ≥ 0)               0.989 ± 0.002 (stat) ± 0.007 (syst)
κ_PV(Njet ≥ 1, Njet ≥ 2)     0.997 ± 0.003 (stat)
κ_Δz(d0reco, d0root)         1.000 ± 0.001 (stat)
κ_Δz(PV, e)                  0.990 ± 0.001 (stat)

Table 5.11: Common vertex scale factors used in the dilepton cross section analyses.

Table 5.10 lists the vertex cut efficiencies measured in Z → ee Monte Carlo and data in terms of jet multiplicity. The Z → ee Monte Carlo sample is used for the 0 and 1 jet lines, while the Zjj → eejj sample is used in the two jet line. The analysis data are used in the data column. The efficiencies are simply the fraction of events passing the listed cuts (in the order given in the table) for events with two tight electrons and N jets. The Monte Carlo models the data very well for the vertex cuts, and the Z → μμ channel shows similar behavior. Therefore, for the sake of simplicity, it was decided that scale factors averaged between the dielectron and dimuon channels would be used for all three dilepton cross section measurements. These scale factors are listed in Table 5.11. Differences are given by systematic errors when they are statistically significant.

Chapter 6

Cross Section Analysis

Though the fraction of tt̄
decays with two electrons in the final state is small, as discussed in Section 2.4.2, this channel tends to be cleaner than the ℓ+jets channels, which have a huge W+jets background. That is, very few physics processes have a final state with two high-pT electrons, two high-pT jets, and large E̸T. In fact, the only two physics processes that contribute significantly to the background are:

• Z → ττ: Although the Z cross section is large, requiring two jets in the final state decreases the cross section by about two orders of magnitude. In addition, the branching ratio for both τ's to decay to electrons is small, BR(ττ → ee) = 0.032 [4]. Moreover, the resulting electron pT spectrum is softer than that of electrons in Z → ee or tt̄ → ee decays, and there tends to be less E̸T in τ decays than in decays involving W's.

• WW → ee: This cross section is comparable to the top cross section, and the events are very top-like in electron pT and E̸T. However, requiring two jets in the final state likewise reduces the WW cross section, minimizing the contribution from this background.

The largest single background, Z/γ* → ee, is actually an instrumental background. That is, the background is a real physics process coupled with some detector or electronics effect which makes the event look top-like. Three instrumental backgrounds, of which the last two are small, are considered:

• Z/γ* → ee: This process has a large cross section compared to tt̄ even when two jets are required. It also has two high-pT electrons in the final state. However, this process produces no significant E̸T since no ν's are involved in the decay. Rather, spurious E̸T must also be present for a Z/γ* → ee event to look top-like.

• W + jets: This process has a large cross section and significant E̸T, but a high-pT jet must fake an isolated electron. The probability for this to happen is very small, but finite.

• QCD multijet: The cross section for this is huge.
However, two high-pT jets must fake isolated electrons, and spurious E̸T must be produced since jets in these multijet processes should balance. This background actually turns out to be insignificant.

6.1 Cut Optimization

The key to this analysis is optimizing kinematic and topological cuts in order to reject as much background (B) as possible while keeping as much of the signal (S) as possible. The optimization is done by minimizing √(S+B)/S as the figure of merit. For similar figures of merit, S/B and signal efficiency are also used to select the most ideal set of cuts. Since Z/γ* → ee dwarfs the other backgrounds and the signal, this is the primary background to reject. Fortunately, the invariant mass of the two electrons tends to fall into a window around the Z mass. In addition, this process has no inherent E̸T; therefore, as shown in Run I, cutting concurrently on Z mass and E̸T is a very effective way to reject most of this background (though it still remains the largest source of background). Figure 6.1 shows the E̸T vs dielectron invariant mass for the signal and backgrounds. Unlike in Run I, however, the Z mass window must be excluded entirely in order to keep the Z/γ* → ee background in check since Run II is a noisier environment, producing more spurious E̸T. In addition, this analysis considers different E̸T cuts below and above the Z mass window since the Z/γ* → ττ → ee background lies mainly in the low mass region. Therefore, one might hypothesize that an E̸T cut in the high mass region would not have to be as severe as the cut in the low mass region. Preliminary studies attempt to determine the most effective variables for rejecting background while preserving signal and over what ranges to consider these variables.
A number of variables are examined, including:

• Electron pT
• Jet pT
• Width of the Z mass window
• E̸T in the low and high Mee regions, separately
• HT = Σ pT^jets
• HT^ℓ = pT^ℓ1 + Σ pT^jets
• Aplanarity: A = (3/2) Q1
• Sphericity: S = (3/2)(Q1 + Q2)

Figure 6.1: E̸T vs. Mee distribution after dilepton and 2 jet cuts for data (top left), top (top right), WW (middle left), Z → ττ (middle right), and Z → ee + 2 jets (bottom) Monte Carlo. Also shown is the applied cut.

Figure 6.2: Momentum tensor ellipsoid [66].

The Q's in A and S are the ordered (from smallest to largest) eigenvalues of the normalized momentum tensor; that is,

    Qi = Σj (pj · n̂i)² / Σj Pj²,    (6.1)

where n̂i is the unit eigenvector associated with Qi, j runs over the two electrons and all jets in the event, pj is the three-momentum vector of the jth object, and Pj is the total momentum of the jth object. Q1 is a measure of the flatness of the momentum tensor ellipsoid (Figure 6.2); Q2 is a measure of its width; and Q3 is a measure of its length [66].

The optimization results are always less optimal when any of the HT variables are used compared to when they are left out. Moreover, aplanarity has very little discrimination power in the dielectron channel. Therefore, only the remaining variables are considered in detail for the analysis. The final optimization is performed using a full grid search [67], sequentially varying the cuts on these remaining variables. This optimization uses the final data and Monte Carlo sets; however, when the optimization was run, the final scale factors relating data and Monte Carlo had not yet been calculated.
Instead, the scale factors from the Moriond 2004 analysis are used [68]. For the grid search, the background and signal yields are obtained as they are in the subsequent analysis. However, in the analysis, the largest background, Z/γ* → ee + fake E̸T, is derived from data. Because the sample size for this background gives a non-negligible statistical error, there is some concern that the grid search might tune the cuts on fluctuations in the data, leading to a bias in the analysis. However, as in the analysis, the Zjj Alpgen sample is used as a cross-check since it models the data quite well. Both samples yield the same result in the grid search, giving confidence that the cut selection is unbiased.

√(S+B)/S      S/B         E̸T (Mee < 80 GeV)   E̸T (Mee > 100 GeV)   S       ε_sig
0.81 (0.82)   1.8 (1.7)   40 GeV               40 GeV                —       8.6%
0.81 (0.82)   2.3 (2.2)   40 GeV               40 GeV                0.15    7.8%
0.81 (0.82)   2.1 (2.0)   40 GeV               35 GeV                0.15    8.0% *
0.83 (0.82)   1.7 (1.8)   35 GeV               35 GeV                0.15    8.3%

Table 6.1: Cut choices which perform best in the grid search. The Monte Carlo cross-check is given in parentheses. * indicates the cut chosen for analysis.

Four cut combinations give the same figure of merit. All four require that both electrons have pT > 15 GeV, that both jets have pT > 20 GeV, and that the Z window is 20 GeV wide (80 < Mee < 100 GeV). The differences in these cut combinations are listed in Table 6.1 along with the figures of merit, S/B's, and signal efficiencies. The signal efficiencies have very small statistical errors of 2-3%. Typical statistical errors on the figures of merit are 4-5% and 10-15% on S/B. Table A.1 in Appendix A lists all combinations of cuts with √(S+B)/S < 0.9, S/B > 1.7, and signal efficiency > 0.068, which are the benchmarks from the unoptimized cuts used in a previous iteration of the analysis [68]. Figure 6.3 shows the expected number of signal vs expected number of background events for all cut combinations. Of the four cuts listed in Table 6.1, the third one is chosen.
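The core of the grid search is just evaluating the figure of merit for every cut combination and keeping the smallest value. A toy sketch (the yields below are made up and are not the Table 6.1 values):

```python
import math

def figure_of_merit(s, b):
    """Optimization figure of merit, sqrt(S+B)/S (smaller is better)."""
    return math.sqrt(s + b) / s

# Hypothetical expected signal/background yields for two cut choices; the
# real search varies electron/jet pT, the Z window, MET, and sphericity cuts.
candidates = {"cuts_a": (1.9, 0.9), "cuts_b": (2.2, 1.6)}
best = min(candidates, key=lambda name: figure_of_merit(*candidates[name]))
for name, (s, b) in candidates.items():
    print(name, round(figure_of_merit(s, b), 2), "S/B =", round(s / b, 1))
print("best:", best)
```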
Since all of these choices are statistically comparable, the final selection cuts are the ones that give the middle efficiency and S/B values. To summarize, the final event selection determined by the grid search is:

• Two tight, oppositely-charged electrons with pT > 15 GeV in the CC or EC.

• Two good jets with pT > 20 GeV and |η| < 2.5.

• Exclusion of the Z mass window from 80 < Mee < 100 GeV.

• E̸T > 40 GeV for Mee < 80 GeV and E̸T > 35 GeV for Mee > 100 GeV.

• S > 0.15.

Figure 6.3: Expected signal vs expected background for all cut combinations tested in the grid search. The four combinations listed in Table 6.1 are circled.

6.2 Signal Efficiencies

The final selection cuts can now be applied to the tt̄ Monte Carlo in order to obtain the cut-by-cut efficiencies as well as the overall efficiency for signal. The efficiency breakdown is given in Table 6.2. Electrons originating from direct decays of the W and from W decays to τ, where the τ then decays to e, are taken into account.

Category    Cut                              Efficiency       Total Efficiency
Electrons   Reco, EM Acc, ID, pT > 15 GeV    0.399 ± 0.005    0.399 ± 0.005
            Assoc. Track Match               0.845 ± 0.007    0.337 ± 0.005
            Likelihood > 0.85                0.816 ± 0.008    0.275 ± 0.005
            Opposite Sign                    0.998 ± 0.001    0.275 ± 0.005
            κ_reco*ID                        0.936 ± 0.000    0.257 ± 0.005
            κ_trk*lhood                      0.733 ± 0.000    0.189 ± 0.004
            κ_sign                           0.990 ± 0.001    0.187 ± 0.004
            Trigger                          0.935 ± 0.006    0.175 ± 0.003
Jets        ≥ 1 jet (pT > 20 GeV)            0.970 ± 0.004    0.169 ± 0.003
            ≥ 2 jets (pT > 20 GeV)           0.732 ± 0.010    0.124 ± 0.003
Vertex      |z| < 60 cm, Ntrk > 2            0.999 ± 0.001    0.124 ± 0.003
            |z_d0reco − z_d0root| < 5 cm     0.997 ± 0.001    0.123 ± 0.003
            Δz(e, PV) < 1 cm                 1 ± 0            0.123 ± 0.003
            κ_PV                             0.987 ± 0.003    0.122 ± 0.003
MZ cut      Mee < 80 GeV or > 100 GeV        0.848 ± 0.009    0.103 ± 0.003
E̸T          E̸T > 40 (35) GeV,                0.745 ± 0.012    0.077 ± 0.002
            Mee < 80 (> 100) GeV
Topological S > 0.15                         0.916 ± 0.009    0.071 ± 0.002

Table 6.2: Efficiencies of object identification and kinematic selection on tt̄ → ee Monte Carlo. Errors are statistical only.

Cut                         κ_CCCC           κ_CCEC           κ_ECEC
Cluster selection, EM ID    0.958 ± 0.000    0.858 ± 0.000    0.767 ± 0.000
Track Match, Likelihood     0.755 ± 0.000    0.654 ± 0.000    0.567 ± 0.000
Opposite sign               0.998 ± 0.001    0.962 ± 0.003    0.928 ± 0.011
Vertex                      0.987 ± 0.003    0.987 ± 0.003    0.987 ± 0.003

Table 6.3: Summary of the correction factors relating Monte Carlo and data efficiencies. Errors are statistical only.

As discussed in Section 5.1.5, the Monte Carlo does not reproduce all the features in the data accurately; therefore, correction factors are applied separately to CC and EC electrons. Since the dielectron final state of the top decay has two electrons, there are three different combinations of electrons: two electrons in the CC (CCCC), one in the CC and one in the EC (CCEC), and two electrons in the EC (ECEC). Scale factors for events with each electron configuration are shown in Table 6.3. The scale factors shown in Table 6.2 are derived from the scale factors in Table 6.3 by weighting each scale factor by the number of events in each region; that is,

    κ = (κ_CCCC N_CCCC + κ_CCEC N_CCEC + κ_ECEC N_ECEC) / (N_CCCC + N_CCEC + N_ECEC),    (6.2)

where N_CCCC, N_CCEC, and N_ECEC are the numbers of events with each type of electron configuration.
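Eq. 6.2 is an event-weighted average. As a sketch, using the track-match/likelihood row of Table 6.3 (the per-region event counts here are illustrative, since they are not quoted in the text):

```python
def weighted_scale_factor(kappas, counts):
    """Event-weighted average scale factor (Eq. 6.2)."""
    total = sum(counts.values())
    return sum(kappas[region] * counts[region] for region in kappas) / total

kappas = {"CCCC": 0.755, "CCEC": 0.654, "ECEC": 0.567}  # Table 6.3
counts = {"CCCC": 70, "CCEC": 25, "ECEC": 5}            # illustrative only
print(f"{weighted_scale_factor(kappas, counts):.3f}")   # 0.720
```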
The total efficiency for tt̄ → ee events to pass all cuts, obtained by multiplying the efficiency for each cut and the scale factors together, is ε_top = 0.071 ± 0.002 (stat), with a systematic uncertainty whose breakdown is given in Section 6.6. The biggest efficiency hit occurs in the first line of the table. This inefficiency results mainly from inefficient electron reconstruction and from limiting the acceptance to only CC and EC electrons. The 15 GeV cut also removes many of the events involving τ decays since some of the τ momentum is lost to the two neutrinos involved in that decay; hence, the electrons produced in τ decays tend to be softer than electrons coming directly from W decays. In fact, the efficiency at the first line of Table 6.2 for decays in which both electrons come from τ's is 9%, whereas it is 50% when both electrons come from W's. Once two 15 GeV electrons are found, however, going from loose to medium cuts is almost fully efficient. Requiring a second jet with pT > 20 GeV and the severe E̸T cut are two other large inefficiencies; however, these cuts are necessary to keep the backgrounds in check. The E̸T cut is considerably harsher than the comparable cut in Run I because the E̸T resolution is considerably worse in Run II, requiring a stiffer cut.

Assuming a tt̄ cross section of 7 pb and a branching fraction of 0.01584 (accounting for the decays involving τ → e and using the latest PDG numbers [4]), the expected event yield is 1.91 ± 0.05 (stat) events; the breakdown of systematic uncertainties is discussed in Section 6.6.

6.3 Physics Backgrounds

As discussed at the beginning of this chapter, the two main physics backgrounds in the dielectron channel are WW → ee and Z → ττ where both τ's decay to electrons. Both of these backgrounds have two high-pT electrons and significant E̸T; however, the fraction of the time they are produced with two jets is small. The contributions from both of these backgrounds are obtained from Monte Carlo just as the expected top yield is obtained.
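The expected yield quoted above is just σ · BR · ε · L; with the 243 pb⁻¹ of this dataset:

```python
def expected_yield(xsec_pb, branching, efficiency, lumi_pb):
    """Expected event count: N = sigma * BR * efficiency * luminosity."""
    return xsec_pb * branching * efficiency * lumi_pb

# sigma(tt) = 7 pb, BR = 0.01584 (includes tau -> e), eps = 0.071, L = 243 pb^-1
n_exp = expected_yield(7.0, 0.01584, 0.071, 243.0)
print(f"expected tt -> ee yield: {n_exp:.2f} events")  # 1.91
```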
6.3.1 Z → ττ

The Z → ττ contribution is estimated using the Alpgen (Z → ττ)jj sample. An expected yield of 0.11 ± 0.01 (stat) is obtained using the method described for the signal sample. However, in a sample of (Z → ee)jj Monte Carlo generated with the same settings, the predicted yield does not match the observation, because the jet pT spectrum of the Monte Carlo sample is softer than the pT spectrum of jets in the data. Therefore, a correction factor is obtained which normalizes the expected number of (Z → ee)jj events to the number of observed (Z → ee)jj events in a Z mass window (80 < M_ee < 100 GeV). This correction factor is 1.20 ± 0.09. The correction factor remains stable when the window is widened to 75 < M_ee < 105 GeV. Figure 6.4 shows the effect of applying the correction factor in (Z → ee)jj Monte Carlo. When the correction factor is applied to the (Z → ττ)jj expectation, the predicted yield is 0.13 ± 0.02 (stat) events.

Figure 6.4: Comparison of Zjj Monte Carlo to Z + 2 jets data. Both the corrected and uncorrected Zjj distributions are shown.

Cut           WW            WZ → jjll     Total
N_EM ≥ 2      7.48 ± 0.19   4.54 ± 0.07   12.63 ± 0.20
N_jets ≥ 1    0.62 ± 0.03   4.27 ± 0.07   5.09 ± 0.08
N_jets ≥ 2    0.30 ± 0.11   2.28 ± 0.04   2.61 ± 0.12
M_Z cut       0.26 ± 0.09   0.10 ± 0.01   0.36 ± 0.09
ET cut        0.18 ± 0.06   0.01 ± 0.00   0.19 ± 0.06
S > 0.15      0.14 ± 0.05   —             0.14 ± 0.05

Table 6.4: Diboson background expectations at each cut level. Errors are statistical only.

6.3.2 Diboson

The WW background is studied using the WWjj Alpgen sample discussed in Section 4.3.1. Estimating this contribution in the same way the signal contribution is estimated yields 0.14 ± 0.05 (stat) events. The contribution from WZ is examined using the WZ → jjll sample generated with Pythia.
This sample, in which the W decays to jets and the Z to two electrons, does contribute before the Z mass window cut. In fact, this is a larger source of background than WW at the one- and two-jet cut levels because the branching fraction of W → jj is about six times higher than the branching fraction of W → eν. In addition, WZ does not need to be produced with extra jets in this decay channel, unlike WW, since the W decays to two high-pT jets. However, the Z mass cut effectively removes nearly all of this background. This background, which also has no inherent ET, is completely insignificant at the final cut level. A breakdown of the diboson background expectation at each cut level is shown in Table 6.4. Note that the first two lines of the WW background are obtained from the WW Alpgen sample, not the WWjj Alpgen sample.

6.4 Instrumental Backgrounds

6.4.1 Fake ET Background

The primary background to reject in the dielectron analysis is Z/γ* → ee + fake ET. Z/γ* decaying directly to electrons produces no neutrinos and should therefore have no ET in the event. However, these events can occur with enough ET to pass the selection cuts for several reasons:

• Single-object energy resolutions are finite and worse than expected.
• Hot cells in the calorimeter or malfunctioning readout towers can produce a spurious excess or deficit of energy.
• Problems in the calorimeter readout chain can cause the precision readout to read out large positive or negative energies.
• The unreconstructed part of the event from soft gluons and other low-energy deposits is not modeled well in the Monte Carlo.

This background has proven to be the most difficult to understand and reject. However, several studies have led to a clean-up of the high, non-Gaussian ET tail in the data and more accurate modeling of the ET in the Monte Carlo.
Estimating the ET fake background first requires determining the ET fake rate, f_ET, from a sample which does not contain top or the physics backgrounds. This fake rate is expressed as a correspondence between the number of observed events that would fail and that would pass the ET selection. To be exact, f_ET is defined as the number of events passing the ET cut, N_{ET>35(40)}, divided by the number of events failing it, N_{ET<35(40)}:

    f_ET^{35(40)} = N_{ET>35(40)} / N_{ET<35(40)}.

The samples used to derive f_ET must have kinematics and resolutions similar to the Z/γ* events to be rejected. The jet kinematics must be alike in particular, since jets have a major impact on the ET resolution. In addition, the sample must model the entire ET spectrum, from the core Gaussian resolution to the tails, observed in a pure sample of Z + 2 jet events in data. Hence, to determine the best sample, the ET in three candidate samples is compared to the ET in the tight Z + 2 jet data sample. These candidate samples are:

• single photon + 2 jets
• diphoton + 2 jets
• Z → ee + 2 jets Alpgen Monte Carlo.

The term photon is used loosely here, since a photon is defined as an EM cluster with no track matched in a 0.05 × 0.05 road and no likelihood cut applied. These photons could be real photons or QCD multijet processes in which jets fake photons. The cuts applied to the EM clusters for the different samples are summarized in Table 6.5. (Even when the isolation and χ² cuts are severely tightened, such that the fraction of direct photons to fake jets must change dramatically, the overall change in f_ET is only about 10%.) The triggers used for the analysis are also used to select the diphoton sample; the single photon sample, on the other hand, is selected using single photon or electron triggers designed for direct photon and jet energy scale studies.
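As a numerical sketch, the fake-rate definition and the resulting background estimate can be written out directly; the fake rates and tight-event counts used here are the after-all-cuts values quoted later in Table 6.8:

```python
# Fake-MET background estimate: the fake rate is the ratio of events
# passing to events failing the MET cut; the background is that rate
# times the number of tight dielectron events failing the cut.
def fake_rate(n_pass, n_fail):
    """f_ET = N(MET above cut) / N(MET below cut)."""
    return n_pass / n_fail

# After-all-cuts numbers from Table 6.8: the low-mass bin
# (M_ee < 80 GeV) uses the MET > 40 GeV rate, the high-mass bin
# (M_ee > 100 GeV) uses the MET > 35 GeV rate.
n_fake_met = 32 * 0.0111 + 12 * 0.0197  # ~0.59 events
```

Summing the two mass bins reproduces the 0.59 fake-ET events quoted in the text.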
Until recently, the Z Monte Carlo has not been a useful sample for studying f_ET because the generators do not reproduce well the detector effects that result in fake ET. The core of the distribution has been too narrow, and the tails have not been well modeled. However, by applying the ET smearing described in Section 5.3.1 to the recent Alpgen Zjj sample, a reasonable ET distribution can be obtained. Thus, the Monte Carlo can finally be used as a cross check of the f_ET estimate in data.

Cut                               Tight / MC   Single Photon   Diphoton
Number of EM clusters             2            1               2
pT > 15 GeV                       ✓            ✓               ✓
|η_d| < 1.1 or > 1.5              ✓            ✓               ✓
f_iso < 0.15, χ² < 50             ✓            ✓               ✓
Track in 0.05 × 0.05 road         ✓            veto            veto
Likelihood > 0.85                 ✓            N/A             N/A

Table 6.5: EM cluster selections for the dielectron and photon samples used to estimate the number of ET fakes.

The single photon sample is the data sample used to calculate f_ET because its ET distribution describes the ET distribution of the Z + 2 jet sample extremely well (Figure 6.5). In the two-jet bin, the single photon sample is defined to have one EM object satisfying the cuts described in Table 6.5 and two jets with exactly the same kinematic cuts used in the analysis. To demonstrate that the single photon sample behaves like the Z sample, the pT cut on the photon and on the dielectron system is varied. That is, in the Z sample, each electron has pT > 15 GeV while a cut on the vector sum of the electron pT's is varied. Figure 6.6 shows that varying the single photon pT and the Z pT in the same way gives the same ET distribution and f_ET vs ET cut. Moreover, if no pT cut is applied to the dielectron system, it behaves just as if a 15 GeV cut is applied, hence just like a 15 GeV pT cut on the single photon. Since the single photon sample cannot be used to study the M_ee dependence of f_ET, a diphoton sample is used.
In the two-jet bin, however, the diphoton sample has a topological bias: the Δφ between the di-EM system and the ET is significantly different between the diphoton and Z samples [55]. This difference is correlated with the ET resolution in this sample. Thus, the diphoton sample is reweighted with respect to Δφ to make it more comparable to the Z sample. The plots of ET and f_ET vs ET cut for both the reweighted and unreweighted diphoton samples are shown in Figure 6.7. The "reweighted diphoton" sample is the default diphoton sample for this study. This reweighting does not work as well in the lower jet multiplicity bins.

Figure 6.5: ET (top) and ET fake rate vs ET cut (bottom) for the Z + 2 jets data, single photon, and Zjj Monte Carlo samples [55].

Figure 6.6: ET (top) and ET fake rate vs ET cut (bottom) for the single photon and dielectron plus 2 jets samples with different pT cuts applied to the photon and dielectron system [55].
Figure 6.7: ET (top) and ET fake rate vs ET cut (bottom) for the tight, unreweighted diphoton, and (reweighted) diphoton two-jet samples [55].

Sample / jet bin                ET > 35 GeV           ET > 40 GeV
single photon:
  = 1 jet                       0.00467 ± 0.00005     0.00255 ± 0.00003
  ≥ 2 jets                      0.01972 ± 0.00014     0.01109 ± 0.00011
Z + 2 jet Monte Carlo:
  ≥ 2 jets                      0.0191 ± 0.0018       0.01146 ± 0.00141
  ≥ 2 jets + S > 0.15           0.0182 ± 0.0020       0.00968 ± 0.00145
diphoton:
  = 0 jets                      0.00147 ± 0.00009     0.00084 ± 0.00007
  = 1 jet                       0.00773 ± 0.00058     0.00494 ± 0.00047
  ≥ 2 jets                      0.02999 ± 0.00184     0.01567 ± 0.00131

Table 6.6: ET fake rates.

Figures 6.8, 6.9, and 6.10 show the ET distributions and corresponding f_ET vs ET cut for the Z data, single photon, and diphoton samples for three jet multiplicities: 0 jet, 1 jet, and ≥2 jets, respectively. Clearly, the single photon and diphoton samples describe the ET of the Z sample very well in the two-jet case, which is what is needed for this analysis. In the one-jet case, the single photon sample performs better than the diphoton sample but does not describe the Z data as well as in the two-jet case. In the zero-jet case, the single photon sample cannot be used since there is a large ET bias, and the diphoton sample does not agree very well with the Z sample. This trend indicates that, as the jet multiplicity increases, the ET resolution becomes dominated by the jet resolution. Table 6.6 lists the fake rates for the single photon and diphoton samples for ET cuts of 35 GeV and 40 GeV.
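The Δφ reweighting applied to the diphoton sample (matching its Δφ(di-EM, ET) shape to the Z sample's) can be sketched as a histogram-ratio weight. The function below is illustrative only; the names, binning, and inputs are assumptions, not the analysis code:

```python
import math

# Histogram-ratio reweighting: give each diphoton event a weight so that
# its Delta-phi(di-EM, MET) distribution matches the Z sample's shape.
def dphi_weights(dphi_z, dphi_diphoton, nbins=16):
    width = math.pi / nbins

    def shape(values):
        # normalized histogram of Delta-phi values on [0, pi]
        h = [0] * nbins
        for v in values:
            h[min(int(v / width), nbins - 1)] += 1
        return [c / len(values) for c in h]

    h_z, h_dp = shape(dphi_z), shape(dphi_diphoton)
    # bin-by-bin ratio of target to source shapes (1.0 for empty bins)
    ratio = [z / d if d > 0 else 1.0 for z, d in zip(h_z, h_dp)]
    return [ratio[min(int(v / width), nbins - 1)] for v in dphi_diphoton]
```

Each diphoton event then enters the f_ET measurement with its weight, so the weighted Δφ distribution reproduces that of the Z sample.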
Also, the fake rates for the Alpgen Zjj Monte Carlo sample are included for comparison in the two-jet case. Moreover, sphericity in the single photon sample does not necessarily behave as it would in a sample with two electrons; therefore, the Monte Carlo is used to show that the f_ET values with and without the sphericity cut agree within statistical errors.

As discussed earlier, the diphoton sample is necessary to study the M_ee dependence of the ET fake rate. Table 6.7 gives the overall fake rate as well as the fake rates for events below, above, and inside the Z window used in the analysis, for ET cuts of 35 GeV and 40 GeV.

Mass Range                ET > 35 GeV           ET > 40 GeV
full sample               0.02999 ± 0.00184     0.01567 ± 0.00131
M_ee ≤ 80 GeV             0.02884 ± 0.00221     0.01500 ± 0.00157
80 GeV < M_ee < 100 GeV   0.02958 ± 0.00517     0.01971 ± 0.00418
M_ee ≥ 100 GeV            0.03387 ± 0.00427     0.01540 ± 0.00283

Table 6.7: ET fake rates for different M_ee bins in the diphoton sample.

Since the bins agree within statistical errors, there is no clear dependence on M_ee; thus, using the single photon sample to obtain f_ET is valid for the whole mass range.

Now that the ET fake rate has been determined, the number of ET fakes can be estimated. In order to obtain this estimate, the number of events passing all cuts except the ET cut in the low and high mass bins in the tight dielectron sample, designated N_tight^{M_ee<80} and N_tight^{M_ee>100}, respectively, must be obtained. Then, the expected fake ET background is

    N_{Z/γ*,QCD} = N_tight^{M_ee<80} × f_ET^{>40} + N_tight^{M_ee>100} × f_ET^{>35}.    (6.3)

Table 6.8 gives the f_ET's and N_tight's for the two mass bins used in the analysis for the last two cuts in the cut progression. After all cuts, the ET fake yield is N_{Z/γ*,QCD} = 0.59 ± 0.09 events.

Since the Monte Carlo and data show very good agreement, the Alpgen Z + 2 jet sample is used to cross check this result. Using the procedure for calculating the expected signal,
the Monte Carlo predicts 0.86 ± 0.15 (stat) and 0.63 ± 0.14 (stat) events for the first and second totals in Table 6.8, respectively. These expectations are fully consistent with the estimates from the data. Since they agree within 0.4σ, a systematic error is not applied to this background.

Cut                       Mass Range       f_ET              N_tight   Expected
N_jets ≥ 2 + M_ee + ET    M_ee < 80 GeV    0.0111 ± 0.0001   45        0.50 ± 0.07
                          M_ee > 100 GeV   0.0197 ± 0.0001   15        0.30 ± 0.08
                          Total                                        0.80 ± 0.11
+ Sphericity > 0.15       M_ee < 80 GeV    0.0111 ± 0.0001   32        0.36 ± 0.06
                          M_ee > 100 GeV   0.0197 ± 0.0001   12        0.23 ± 0.07
                          Total                                        0.59 ± 0.09

Table 6.8: ET fake ratios, numbers of tight events below the ET cut, and total expected ET fakes for the last two cut levels.

Figure 6.8: ET (top) and ET fake rate vs. ET cut (bottom) for tight dielectron and diphoton data samples with all cuts applied in the 0-jet case [55].

Figure 6.9: ET (top) and ET fake rate vs. ET cut (bottom) for single photon,
tight dielectron, and diphoton data samples with all cuts applied in the 1-jet case [55].

Figure 6.10: ET (top) and ET fake rate vs. ET cut (bottom) for single photon, tight dielectron, and diphoton data samples with all cuts applied in the 2-jet case [55].

6.4.2 Fake Electron Background

The other instrumental background results from multijet processes such as W + jets in which one or more jets shower in such a way that they look electron-like. This background is calculated by first obtaining an electron fake rate, f_e, which tells how frequently a loose EM object passes the tight electron selection, from a sample in which real electrons are removed. Then, f_e is applied to a signal sample in which only one of the two electrons is required to be tight (a "loose-tight" sample) in order to predict how many of these events would appear to have two tight electrons.

The electron fake rate is defined to be the fraction of loose electrons that pass the tight selection criteria; that is,

    f_e = N_tight / N_loose.

Events in the DIEM-EXTRALOOSE skim, selected with the signal triggers, are used to obtain this quantity. However, certain conditions must be applied in order to remove electrons from real physics objects, which would bias f_e. First, only events with ET < 10 GeV are used in order to remove W's from the sample. Moreover, since two EM objects are required, events in which the invariant mass of the objects falls between 75 GeV and 105 GeV are excluded in order to remove the Z resonance.
Moreover, an EM object is considered only if the other EM object in the event has no track in a 0.05 × 0.05 road in η × φ. This requirement suppresses contamination from Drell-Yan production.

Figure 6.11 shows the η_d distributions of electron candidates passing successive identification cuts from loose to tight. These distributions show several features. First, no evidence for North-South asymmetry is observed at any cut level. Second, an excess of loose electrons exists in the forward regions of the CC. This feature becomes more distinct with the track match but is then reduced by the likelihood.

N_jets   f_e^CC             f_e^EC
0        0.0035 ± 0.0001    0.0056 ± 0.0002
1        0.0032 ± 0.0001    0.0057 ± 0.0003
2        0.0032 ± 0.0002    0.0056 ± 0.0004

Table 6.9: f_e for different jet multiplicities.

In fact, η_d for tight electrons looks very similar to η_d for loose electrons. This is evident from Figure 6.12, which shows the ratio of tight to loose EM objects (f_e) vs η_d. Figure 6.12 also shows that f_e is independent of jet multiplicity. This claim is reinforced by Table 6.9, in which the average fake rates in the CC and EC for different jet multiplicities are shown. Since the fake rates are reasonably flat in η_d and in pT (Figure 6.13), the average fake rates given in Table 6.9 are used in this analysis. However, because the signal electrons are required to be oppositely charged, fake rates for requiring a certain sign can be obtained. An equal number of positively and negatively charged objects is observed, as shown in Figure 6.14. (The deviation from equal numbers is negligible: 0.7%.) Thus, the fake rate is halved for a loose electron to fake a tight electron of a given sign; that is,

    f_e^+ = f_e^- = f_e / 2.

Then, f_e^{CC±} = 0.0016 ± 0.0001 and f_e^{EC±} = 0.0028 ± 0.0002.

To estimate the number of expected events from fake electrons, the number of loose-tight events is obtained (Table 6.10).
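A numerical sketch of this estimate, using the signed fake rates quoted in the text and the after-all-cuts loose-tight counts from Table 6.10:

```python
# Fake-electron background: loose-tight event counts times the signed
# (per-charge) fake rates for the CC and EC regions.
f_cc_sign = 0.0016   # f_e^{CC+-}
f_ec_sign = 0.0028   # f_e^{EC+-}
n_lt_cc, n_lt_ec = 36, 6   # loose-tight events after all cuts (Table 6.10)

n_e_fake = n_lt_cc * f_cc_sign + n_lt_ec * f_ec_sign  # ~0.074 events
```

This reproduces the 0.074 expected fake-electron events quoted below.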
In these events, only one of the two EM objects must pass tight cuts; however, the event must pass all other analysis cuts. The number of these loose-tight events, N_lt, in each region is then multiplied by the corresponding f_e^± to obtain the expected number of fakes:

    N_{e,fake} = N_lt^{CC} f_e^{CC±} + N_lt^{EC} f_e^{EC±}.

Cut          CC      EC
N_EM > 1     10195   4727
N_jets > 0   2029    771
N_jets > 1   382     143
M_Z cut      323     121
ET cut       47      10
S > 0.15     36      6

Table 6.10: Numbers of events in data with one tight and one loose electron, N_lt, passing the progression of cuts listed.

After all cuts are applied, 0.074 ± 0.034 events are expected. It should be mentioned that N_{e,fake} contains the QCD multijet background as well, since this background has two jets faking electrons; that is, the tight electron in the loose-tight sample is actually a fake electron. However, this background is also included in the fake ET background when it is obtained from the data. Therefore, this contribution should be removed to avoid double-counting it. The QCD multijet contribution can be found much like the fake electron background, but using a "loose-loose" sample this time. Thus, the number of events passing all cuts, except that the two EM objects need only pass loose identification cuts, is found in the CCCC, CCEC, and ECEC regions. Then, the QCD contribution is obtained from:

    N_QCD = N_ll^{CCCC} f_e^{CC} f_e^{CC±} + N_ll^{CCEC} f_e^{CC} f_e^{EC±} + N_ll^{ECEC} f_e^{EC} f_e^{EC±}.

Since the numbers of loose-loose events are multiplied by f_e f_e^±, the QCD multijet contribution turns out to be negligible, contributing only 0.0039 ± 0.0003 events after all cuts.
Figure 6.11: Detector η distributions for electrons in all (top left), 0-jet (top right), 1-jet (bottom left), and ≥2-jet (bottom right) events.

Figure 6.12: Electron fake rate, f_e, as a function of η_det for different jet multiplicities.

Figure 6.13: EM fake rate f_e as a function of pT for different jet multiplicities. The plot on the top shows CC electrons while the one on the bottom shows EC electrons.

Figure 6.14: Electron charge for EM objects passing the track match in the sample from which f_e is derived.

6.5 Expectations and Observations

The signal and background expectations after all cuts are summarized in Table 6.11.
This table also shows that five events pass all cuts in the data. The run and event numbers of these events are listed in Table 6.12. The kinematics of the candidates and their event displays are presented in Appendix B.

Category          Yield   Stat Err   Sys Err
WW                0.14    0.05       +…/−…
Z → ττ            0.13    0.03       +…/−…
ET Fakes          0.59    0.09       0.00
EM Fakes          0.07    0.03       0.00
Total Bkg         0.93    0.11       +…/−…
Expected signal   1.91    0.05       +…/−…
Selected Events   5.00    2.24       —

Table 6.11: Yield summary for the tt̄ → ee channel.

Run Number   Event Number
166779       121971122
177681       13869716
178152       26229014
178177       13511001
180326       14448436

Table 6.12: Run numbers and event numbers for the ee candidate events.

It is also useful to compare expectations and observations at various cut levels in order to ensure that there is agreement throughout the analysis. This comparison is presented in Table 6.13. Note that several columns use different samples at different cut levels in the table. In the WW/WZ column, the WW contributions in the first two lines are obtained from the WW Monte Carlo sample while the remaining lines are derived from the WWjj sample. Likewise, in the Z → ττ column, the first two lines are obtained from the inclusive sample while the two-jet sample is used for the other lines. Finally, the contributions in the Z → ee column are derived from three different samples: the first two lines come from Z → ee inclusive Monte Carlo, the next two from the Zjj sample, and the last two from the fake ET calculation in the data.

Kinematic and topological distributions can also be compared at various cut levels in order to check how well the Monte Carlo models the data. Since samples tend to change at the third line, this line is the first used for such a comparison. Figures 6.15 through 6.19 show distributions of pT and η of the two leading electrons, M_ee, number of jets per event, pT and η of the two leading jets, HT, HT^ℓ, ET, Δφ(ET, leading jet), A, and S for the third line of Table 6.13.
Figures 6.20 through 6.24 show these same distributions for the fourth line of Table 6.13, and Figures 6.25 through 6.29 show these distributions after all cuts. Overall, the data and Monte Carlo distributions are in very good agreement.

Table 6.13: Expectations and observations at each cut level.

Figure 6.15: Leading (top) and second leading (middle) electron pT and M_ee (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.

Figure 6.16: Leading (top) and second leading (middle) electron η and ET (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.
Figure 6.17: Number of jets (pT > 20 GeV) in the event (top) and leading (middle) and second leading (bottom) jet pT for background, tt̄, and data corresponding to line 3 of Table 6.13.

Figure 6.18: Leading (top) and second leading (middle) jet η and Δφ(ET, leading jet) (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.

Figure 6.19: HT (top), HT^ℓ (middle), and S (bottom) for background, tt̄, and data corresponding to line 3 of Table 6.13.

Figure 6.20: Leading (top) and second leading (middle) electron pT and M_ee (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.

Figure 6.21: Leading (top) and second leading (middle) electron η and ET (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.
Figure 6.22: Number of jets (pT > 20 GeV) in the event (top) and leading (middle) and second leading (bottom) jet pT for background, tt̄, and data corresponding to line 4 of Table 6.13.

Figure 6.23: Leading (top) and second leading (middle) jet η and Δφ(ET, leading jet) (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.

Figure 6.24: HT (top), HT^ℓ (middle), and S (bottom) for background, tt̄, and data corresponding to line 4 of Table 6.13.

Figure 6.25: Leading (top) and second leading (middle) electron pT and M_ee (bottom) for background, tt̄, and data after all cuts.
Figure 6.26: Leading (top) and second leading (middle) electron η and ET (bottom) for background, tt̄, and data after all cuts.

Figure 6.27: Number of jets (pT > 20 GeV) in the event (top) and leading (middle) and second leading (bottom) jet pT for background, tt̄, and data after all cuts.

Figure 6.28: Leading (top) and second leading (middle) jet η and Δφ(ET, leading jet) (bottom) for background, tt̄, and data after all cuts.

Figure 6.29: HT (top), HT^ℓ (middle), and S (bottom) for background, tt̄, and data after all cuts.
6.6 Systematic Uncertainties

For the purpose of combining the results from the dielectron analysis with the results from the other dilepton analyses (μμ and eμ) [55], the uncertainties must be broken down into uncorrelated and correlated systematic uncertainties. The uncorrelated systematic uncertainties are simply the statistical errors, which depend on the sample sizes used in each channel. The correlated systematic uncertainties arise from systematic effects, such as sample or cut dependencies, which affect multiple channels. The list of uncertainties, with a brief description of how each is obtained, follows:

• Uncertainty due to the Electron Reconstruction and Identification Efficiency Measurement: As discussed in Section 5.1, the scale factor for electron reconstruction and medium identification is plotted against the distance between the electron and the nearest jet, and the systematic uncertainty is defined to be the RMS of the scatter of the scale factors. The uncertainties for CC and EC are determined separately.

• Uncertainty due to the Electron Tracking and Likelihood Efficiency Measurement: As discussed in Section 5.1, this uncertainty for CC electrons is taken to be the larger RMS of the scatter of scale factors in the plots of scale factor versus η and φ. In the EC, the η distribution of scale factors is convoluted with the η spectrum of electrons in tt̄ Monte Carlo. The systematic uncertainty is taken to be the convolution of statistical errors from the scale factor measurement and the tt̄ sample.

• Uncertainty due to the Trigger Efficiency Measurement: In Section 4.1.2, it is shown that electron trigger efficiencies vs p_T are obtained from the Z sample. Since this sample has limited statistics, the statistical errors on the fit to the turn-on curves are varied by ±1σ to obtain the systematics for L1 and L3 separately.
• Uncertainty due to the Jet Energy Scale: The uncertainty on the preselection efficiency associated with the jet energy scale is derived by varying the JES by ±1σ, where

σ = √(σ²_stat,data + σ²_stat,MC + σ²_syst,data + σ²_syst,MC).

• Uncertainty due to the Jet Energy Resolution: The uncertainty of the jet energy resolution is accounted for in the JES uncertainty for data; however, this resolution has a component which is not accounted for by the JES in the Monte Carlo. Therefore, this uncertainty is obtained separately by varying the parameters of the jet energy smearing by the size of the errors on the smearing.

• Uncertainty due to the Jet Reconstruction and Jet Identification Efficiency Measurement: The ±1σ error bands in Figure 5.10 are obtained from the statistical errors associated with the efficiency measurements using the γ+jet sample. The systematic errors are obtained by running the analysis with the scale factor varied by ±1σ.

• Uncertainty due to Theoretical Cross Sections: For the WW background Monte Carlo, the leading order WW production cross section differs by 35% from the theoretical NLO production cross section. A theoretical prediction for the NLO cross section does not exist for the WW + 2 jet process; therefore, the leading order cross section obtained from the generator is scaled up by 35%, and a 35% error is applied to the cross section.

• Uncertainty due to Top Mass: There is an uncertainty due to the effect of top mass on selection efficiency, as shown in Figure 6.30. Instead of assigning an error, however, a comment on how the cross section varies with top mass is provided with the results.

Figure 6.30: tt̄ selection efficiency as a function of top mass.

• Uncertainty due to Luminosity: A very conservative error of 6.5% is applied to the luminosity measurement, as discussed in [69].
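The ±1σ JES variation above combines four independent error components in quadrature; a minimal sketch (the function name and the sample values are illustrative, not taken from the analysis code):

```python
import math

def total_jes_uncertainty(stat_data, stat_mc, syst_data, syst_mc):
    """Combine the four independent JES error components in quadrature,
    giving the sigma used for the +-1 sigma variation."""
    return math.sqrt(stat_data**2 + stat_mc**2 + syst_data**2 + syst_mc**2)

# Hypothetical component values (in percent):
sigma_jes = total_jes_uncertainty(1.5, 1.0, 2.0, 1.2)
```

Because the components are treated as independent, the total is dominated by the largest single term.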
Table 6.14 gives a summary of the systematic uncertainties for the tt̄, WW, and Z → ττ processes.

6.7 Cross Section

In general, the cross section can be written:

σ = (N_obs − N_bkg) / (L ε_sig BR),   (6.4)

where N_obs is the number of events observed, N_bkg is the expected number of background events, L is the integrated luminosity, ε_sig is the signal efficiency, and BR is the branching ratio to the channel being studied. In practice, however, the cross section is obtained by maximizing the product of likelihoods for each individual channel [70]. For channel i, the likelihood is defined as the Poisson probability that Ñ_i signal plus background events,

Ñ_i = σ L_i ε_i^sig BR_i + N_i^bkg,   (6.5)

is compatible with N_i^obs:

L(σ; L_i, ε_i^sig, N_i^bkg, N_i^obs | i = ee, eμ, μμ) = ∏_{i=ee,eμ,μμ} e^{−Ñ_i} Ñ_i^{N_i^obs} / N_i^obs!.   (6.6)

Systematic Source           | tt̄         | WW          | Z → ττ
EM Reconstruction, ID       | ±6.5       | ±7.1        | ±7.8
EM Tracking and Likelihood  | ±4.7       | ±5.0        | ±5.3
L1 EM Trigger               | ±1.1       | ±1.2        | +1.3 −1.5
L3 EM Trigger               | ±0.9       | ±1.1        | ±4.0
JES                         | +6.2 −6.4  | +28.6 −11.4 | +17.6 −32.4
Jet ID                      | +1.5 −8.7  | −12.7       | +24.2 −6.2
Jet Resolution              | +2.1       | −16.7       | −32.4
Theoretical cross section   | —          | ±35         | —
Uncorrelated                | ±2.8       | ±11.8       |

Table 6.14: Summary of the relative systematic uncertainties for signal and background in %.

Channel | N_obs | N_bkg            | L (pb⁻¹) | ε_sig         | BR
ee      | 5     | 0.93 +0.25 −0.14 | 243.00   | 0.071         | 0.01584
eμ      | 8     | 0.91             | 228.29   | 0.102 ± 0.012 | 0.03155
μμ      | 0     | 1.37             | 224.33   | 0.063 ± 0.011 | 0.01571

Table 6.15: Summary of cross section inputs for the ee, eμ, and μμ channels. Errors on N_bkg and ε_sig are total errors with systematic and statistical errors added in quadrature.

Table 6.15 summarizes the cross section inputs for the ee as well as the eμ and μμ [55] channels. The systematic uncertainty on the cross section measurement is obtained by varying the backgrounds and the efficiencies within their errors, taking into account the correlations between different backgrounds and between different channels, as discussed in [70].

The tt̄ cross section at √s = 1.96 TeV in the dielectron channel is

σ_tt̄ = 14.9 +9.4 −7.4 (stat) +2.5 −1.2 (syst) ± 1.0 (lumi) pb.

The likelihood obtained as a function of tt̄ cross section for the ee channel is shown in Figure 6.31. The statistical error is the change in cross section required to increase −ln L by 0.5, where L is the value of the likelihood [54]. The cross section measured in the eμ channel [55] is

σ_tt̄ = 9.7 +4.3 −3.4 (stat) ± 1.1 (syst) ± 0.6 (lumi) pb,

and the combined cross section for all three channels is

σ_tt̄ = 8.6 +3.2 −2.7 (stat) ± 1.1 (syst) ± 0.6 (lumi) pb.

The likelihood obtained as a function of tt̄ cross section for the combined cross section is shown in Figure 6.31. This cross section measurement is based on an assumed top mass of 175 GeV. Instead of folding the uncertainty due to top mass into the systematic errors, the dependence of ε_sig on top mass is used to derive a slope for the tt̄ cross section vs m_t. In the region 160 < m_t < 190 GeV, the measured cross section decreases by 0.08 pb per GeV increase in m_t.

Figure 6.31: Likelihood as a function of tt̄ production cross section. The central value and the statistical errors are shown. The dielectron cross section is shown on the top and the combined on the bottom [55].

Chapter 7

Mass Analysis

The cross section analysis gives a set of five tt̄ candidate events in the dielectron channel. A top mass can now be measured using the kinematic information from these events. The procedure followed for the mass analysis is based on the neutrino-weighting method [71][72] developed in Run I.

In the dilepton channels, extracting the top mass in the tt̄ system is not as straightforward as it is in the lepton+jets channels because there are two neutrinos instead of only one.
The two neutrinos are observed only as missing energy in the transverse plane, with no information about the individual neutrinos themselves. Moreover, all information about their momenta in the z direction is lost. However, to make a measurement of the top mass, the four-vectors of the two neutrinos, the two electrons, and the two jets are needed. Since the masses of the final state particles are known, 18 independent quantities are required for the mass measurement. The three-momenta of the jets and electrons as well as the x and y components of the missing E_T, a total of 14 independent quantities, are measured in the detector. There are also 3 kinematic constraints that can be applied. The first two constraints are that the invariant mass of each electron-neutrino pair is the W mass. The other constraint is m_t = m_t̄. These constraints bring the total up to 17 independent quantities, still leaving an underconstrained problem.

To solve this problem, an additional constraint is needed. One constraint that may be imposed is an assumed top quark mass. Not all top quark masses will be compatible with the observed final state; however, events are typically soluble for more than one top quark mass. Thus, for each event, a weight function, a measure of the probability density for a tt̄ event to decay with the observed kinematics as a function of top mass, is derived. These weight functions are compared to weight functions obtained from Monte Carlo simulations of tt̄ events for different top masses, using a maximum likelihood fit to extract the best top mass value.

7.1 Neutrino-Weighting Method

In an ideal situation, the probability density for a tt̄ pair to decay to the observed final state for any assumed value of the top mass could be calculated analytically.
Such a probability density can be written

P({o}|m_t) ∝ ∫ f(x) f(x̄) |M|² p({o}|{v}) δ⁴ d¹⁸{v} dx dx̄,   (7.1)

where {o} is the set of the 14 measured quantities, {v} is the set of the 18 parameters defining the final state of the tt̄ system, and p({o}|{v}) is the probability density to observe {o} given {v}. M is the matrix element for the process qq̄, gg → tt̄ + X → e⁺ν_e b e⁻ν̄_e b̄ + X, depending on the PDFs, f(x) and f(x̄), for the proton and antiproton partons, respectively. Finally, the four-dimensional delta function imposes the mass constraints:

δ⁴ = δ(m_{e⁺ν} − M_W) δ(m_{e⁻ν̄} − M_W) δ(m_{e⁺νb} − m_t) δ(m_{e⁻ν̄b̄} − m_t),   (7.2)

neglecting the finite widths of the W boson and top quark. This multidimensional integral, however, can only be evaluated numerically. In addition, higher order effects like initial and final state gluon radiation complicate matters even more when trying to compute the exact probability density. Instead, a simpler weighting scheme, which is sensitive to the top mass, is used in order to make the computation possible with the available computing resources in a reasonable amount of time. As mentioned earlier, the method employed here is the neutrino-weighting method.

Given the observed values for the electrons and the b quarks and the two constraints,

M_W² = (E_e + E_ν)² − (p⃗_e + p⃗_ν)²
m_t² = (E_e + E_ν + E_b)² − (p⃗_e + p⃗_ν + p⃗_b)²,   (7.3)

the underconstrained problem now needs two more constraints in order to be solved. One constraint, as mentioned before, comes from assuming a top mass. The other comes from assuming an η for the neutrino in question, since the width of the (Gaussian) neutrino η distribution is slightly dependent on top mass. Neutrino η distributions for several top masses are shown in Figure 7.1, and the distribution of the width vs top mass is shown in Figure 7.2. This dependence can be parameterized by a quadratic fit:

σ_ην(m_t) = 1.48 − 4.62 × 10⁻³ m_t + 1.04 × 10⁻⁵ m_t².   (7.4)

Assuming an η for each neutrino,
the x and y components of the neutrino momenta, p_x^ν and p_y^ν, can be calculated from Equation 7.3 for a given m_t. For each top decay, this calculation yields zero or two solutions, meaning that there are zero, two, or four solutions per tt̄ system. For each solution, a weight, based on the agreement between the observed components of the missing E_T and the computed neutrino momentum components, is calculated such that the i-th solution has a weight:

w_i(m_t) = exp(−(E̸_x − p_x^{ν1} − p_x^{ν2})² / (2σ_Ex²)) × exp(−(E̸_y − p_y^{ν1} − p_y^{ν2})² / (2σ_Ey²)),   (7.5)

where σ_Ex and σ_Ey, the E̸_x and E̸_y resolutions calculated from Z + 2 jet events, are

σ_Ex = 6.85 + 0.035 × ΣE_T^unclus   (7.6)

and

σ_Ey = 7.43 + 0.021 × ΣE_T^unclus.   (7.7)

Figure 7.1: Neutrino η distributions for m_t = 120 (top left), 160 (top right), 180 (bottom left), and 230 (bottom right) GeV.

For each value of m_t considered, the weighting program steps through the neutrino and anti-neutrino η distributions, approximated by Gaussians with widths σ_ην(m_t), in equal steps of area under the curve.

Figure 7.2: Neutrino η widths vs. m_t.
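The equal-area stepping through the Gaussian η distribution and the per-solution weight of Equation 7.5 can be sketched as follows. This is a simplified illustration rather than the analysis code; the function names and the choice of ten steps are assumptions:

```python
import math
from statistics import NormalDist

def eta_points(m_t, n_steps=10):
    """Sample eta values in equal steps of area under the Gaussian eta
    distribution, whose width follows the quadratic fit of Eq. 7.4."""
    sigma = 1.48 - 4.62e-3 * m_t + 1.04e-5 * m_t**2
    dist = NormalDist(mu=0.0, sigma=sigma)
    # centers of n_steps equal-probability slices; points cluster near eta = 0
    return [dist.inv_cdf((i + 0.5) / n_steps) for i in range(n_steps)]

def solution_weight(met_x, met_y, pnu1, pnu2, sum_et_unclus):
    """Weight of one solution (Eq. 7.5): agreement between the two computed
    neutrino transverse momenta and the observed missing E_T components."""
    sig_x = 6.85 + 0.035 * sum_et_unclus   # Eq. 7.6
    sig_y = 7.43 + 0.021 * sum_et_unclus   # Eq. 7.7
    rx = met_x - pnu1[0] - pnu2[0]
    ry = met_y - pnu1[1] - pnu2[1]
    return math.exp(-rx**2 / (2 * sig_x**2)) * math.exp(-ry**2 / (2 * sig_y**2))
```

A solution whose summed neutrino momenta exactly match the observed missing E_T gets weight 1; the weight falls off as a Gaussian in the residuals.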
This ensures that η values near η = 0 are sampled preferentially. Then, to obtain the weight for all possible solutions and all neutrino combinations for a given m_t, the weights are simply added:

w_ν(m_t) = Σ_{η_ν} Σ_{η_ν̄} Σ_i w_i(m_t).   (7.8)

7.1.1 Jet Combinatorics

One issue that is currently neglected is jet combinatorics. In an ideal situation, a tt̄ event in the dilepton channel would contain only the two jets from the b quarks in the final state. In this case, there are only two ways to pair the jets with the electrons. The weighting tool loops over both of these, assigning an equal weight to each pairing since b and b̄ are indistinguishable.

b₁       b₂   ISR
J1       J2   J3
J1       J3   J2
J2       J3   J1
J1 + J3  J2   —
J1 + J2  J3   —
J2 + J3  J1   —

Table 7.1: Possible combinations of three observed jets as b jets or ISR.

Gluon radiation, however, complicates the reconstruction. This effect can come into play in two different ways. The first is initial state radiation (ISR), in which a gluon is radiated before the tt̄ pair is produced and has nothing to do with the final state decay products. It is just an extra jet in the event. The other possibility is that a b jet radiates a gluon. In this case, the extra jet carries some fraction of the b quark's momentum, and the jet should be recombined with the b quark for the purpose of reconstructing the event.

In the Run I analysis, three-jet events were considered with a weighting scheme based on the event kinematics. That is, the six combinations listed in Table 7.1 were considered [71]. It was discovered, however, that this exercise gave very little improvement while requiring considerably more CPU time [73]. Therefore, during the development and testing stages of the neutrino weighting tools for Run II, only the two highest-p_T jets are considered. Gluon radiation is a higher order effect on the table for future study.

7.2 Monte Carlo Tests

Before being run on the data, the weighting method must
be tested on Monte Carlo to determine the sensitivity of the event weights to top quark mass and other parameters.

Figure 7.3: Weight distributions for four m_t = 175 GeV parton-level top mass events.

7.2.1 Parton Tests

Parton-level tests are conducted using the momenta of the partons generated by the Monte Carlo simulation before the events are run through the detector simulation. Hence, detector resolution effects are not present. However, in order to use events whose kinematics are similar to those which enter the data analysis, the parton tests are run using only events which pass the selection criteria defined in the cross section analysis.

The weight distributions that are produced by the weighting tool are examined, both on an event-by-event level and as a sum of all of the event weight distributions for a given top mass. Figure 7.3 exhibits several weight distributions for individual events with m_t = 175 GeV. These distributions show that, while some events have a very narrow range of m_t which give solutions, many events are solvable over a wide range of m_t. Also, some events have two peaks, indicating that reasonable solutions exist for either pairing of the jets and the electrons. When all of the individual event weights are summed, however, the total weight distribution tends to be sharply peaked within about one GeV of the input mass.
Figure 7.4 shows total event weight curves for three values of m_t, and Figure 7.5 plots the peak mass given by the weight distribution for all Monte Carlo mass points. In general, these weight distributions are asymmetric, with the high-mass tail extending much farther than the low-mass tail.

7.2.2 RECO-level Tests

Tests similar to the ones run using the parton-level information are also run using the RECO-level information, since the final comparison with the data is at RECO level. At RECO level, the individual weight distributions vary widely, just like the individual weight distributions at parton level. The total weight distributions have a similar shape, with the sharp rise and long tail; however, they tend to be wider by about 4-5 GeV, as exemplified by Figure 7.6.

Figure 7.4: Total weight distributions for m_t = 150 GeV (left), 175 GeV (middle), and 200 GeV (right) at parton level.

Figure 7.5: Weight distribution peak mass vs. input m_t.

Figure 7.6: Total weight distributions for m_t = 150 GeV (left), 175 GeV (middle), and 200 GeV (right) at RECO level.
7.3 Mass Fitting

7.3.1 Procedure

To extract a mass for the top quark, the weight functions from Monte Carlo tt̄ samples generated at 19 different values of the top mass are compared with the weight functions for the data. A maximum likelihood fit is used to determine at what top mass value the data agree best with the Monte Carlo predictions. For each event, a weight, W(m_t), is obtained for 125 different top masses between 80 and 330 GeV. However, storing 125 values for each event and then calculating a probability density as a function of 125 arguments is impractical. Instead, the contents of the weight distributions are stored in 25 GeV wide bins, requiring only ten bins instead of 125. Ten bins are chosen since the background statistics are too low to use finer binning, while five 50 GeV bins prove to be too wide, as discussed in Section 7.3.4. The distributions are also normalized to one, leaving only nine bins with independent information. For each event, the weight function can then be written as a nine-dimensional vector:

W = (W₁, W₂, ..., W₉),   (7.9)

where

W_i = ∫_{25i+80 GeV}^{25i+105 GeV} W(m) dm   (7.10)

for i = 1, ..., 9. Next, each bin is averaged over the total number of data events, N, in order to obtain an average event weight vector:

W^av = (W₁^av, W₂^av, ..., W₉^av),   (7.11)

where

W_i^av = (1/N) Σ_{j=1}^{N} W_i^j.   (7.12)

In the Run I analysis, the maximum likelihood contained a Gaussian component, a Poisson component, and a probability density estimation (PDE) classifier [74]. A simplified form of the joint likelihood used in Run I is employed in this analysis:

L = (1/(√(2π) σ_b)) exp(−(n_b − ñ_b)² / (2σ_b²)) × ((n_s + n_b)^N e^{−(n_s + n_b)} / N!) × (n_s f_s(W^av|m_t) + n_b f_b(W^av)) / (n_s + n_b).   (7.13)

At a given mass point, m_t, −ln L is minimized with respect to the parameters n_s and n_b, given the number of events in the data sample, N, the probability density functions for signal and background, f_s and f_b, respectively, the expected number of background events, ñ_b, and the error on the background expectation, σ_b. This minimization is done numerically using a ROOT implementation of the minimization package MINUIT [75]. The minimum value of −ln L is plotted against m_t. These points are fit with a quadratic around the minimum such that the fit minimum gives the measured value of the top mass. The fitting tool can also do a cubic fit to the minimum, but using a cubic fit does not affect the output mass significantly.

The mass fitting is described in more detail in the following sections.

7.3.2 Maximum Likelihood Function

In Equation 7.13, the first term,

g(n_b, ñ_b, σ_b) = (1/(√(2π) σ_b)) exp(−(n_b − ñ_b)² / (2σ_b²)),   (7.14)

is the Gaussian term. It is included as a constraint on n_b in the minimization. That is, n_b should reflect the expected number of background events predicted by the cross section analysis. The second term,

P(n_b, n_s, N) = ((n_s + n_b)^N e^{−(n_s + n_b)}) / N!,   (7.15)

is the Poisson term. This constraint requires n_s + n_b to be consistent with the size of the data sample, N. The final part,

ℒ(n_s, n_b, W^av) = (n_s f_s(W^av|m_t) + n_b f_b(W^av)) / (n_s + n_b),   (7.16)

is the probability density for W^av to agree with n_s signal events and n_b background events as described by the Monte Carlo. The probability density functions, f_s and f_b, are simplified forms of the PDEs used in Run I. For this analysis, f_s and f_b are defined

f_s(W^av|m_t) = (1/(√(2π) h))^d ∏_{j=1}^{d} exp(−(W_j^av − W_j^sig(m_t))² / (2h²))   (7.17)

and

f_b(W^av) = (1/(√(2π) h))^d ∏_{j=1}^{d} exp(−(W_j^av − W_j^bkg)² / (2h²)),   (7.18)

where d is the dimension of W^av (d = 9), W_j^sig(m_t) is the signal template value in the jth bin for a given mass point, W_j^bkg is the background template value in the jth bin, and h is a parameter which approximates the error on W_j^av. The value for h is set to 0.05, as discussed below.

The signal template is found just like W^av for the data. That is, the event weights of the Monte Carlo tt̄ events are binned in ten bins, normalized, and averaged in each bin. The background template has one added complication since there are multiple backgrounds. In this case, an average template is found for each background. Then, these individual background templates are weighted by their contributions to the estimated background, as determined in the cross section analysis, to obtain the final background template, W^bkg. The background samples employed are the WWjj, Zjj → ττjj, and Zjj (in all three mass bins) Monte Carlo samples as well as the "loose-tight" events from data to account for the EM fake background. The relative background contributions are listed in Table 7.2. The background template is shown in Figure 7.7.

Figure 7.7: Ten-bin average background template showing the relative background contributions (EM fakes; Zjj in mass bins 1-3; Z → ττ; WW).

7.3.3 Determination of h

The free parameter h is used as the width in the probability density functions. It is called the smoothing parameter which, when using the average ensemble method, can be approximated by

h_opt(d, n) = (4/(d + 2))^{1/(d+4)} n^{−1/(d+4)},   (7.19)
Tests run using h values of 0.05 and 0.1 show similar performance. To check that such a value reflects the widths of the average values of the ensemble weight distribution in each bin, tests are run with ensembles of five events since this is the number of candidate events. For each bin, the average ensemble weight is found, and the average bin values for each of the 500 ensembles are plotted. It turns out that the scatter around the mean of the bin values is dependent on the mean of the bin values. The scatter is not dependent, however, on the mass point used. Figure 7.8 shows the RMS of the scatter around the mean bin values vs the mean bin values of 500 ensembles. From this plot, 11. can be parameterized as: h. 0.0012 + 0.1847 00,. (7.20) where bav is the content of the event weight bin. Preliminary tests have also been run with a varying h parameter. and the performance is comparable to the tests run with a constant )1 value. Future analyses could implement a bin-dependent h with further testing. However, higher background statistics are necessary to ensure that the background samples exhibit the same It dependence. For this analysis, It : 0.05 is chosen. LIIIIIIIIIIIIIIIIII'IIIIII 0.06 x2 I ndf 0.00813 / 124 0.04 Prob 1 p0 0.001183 i 0.001041 0-02 p1 0.1847 1 0.003143 0 1 1 1 1 I 1 1 1 1 L 1 1 1 1 L 1 14 1 l 1 1 1 1 l 1 1 1 1 0 0.1 0.2 0.3 0.4 0.5 0.6 Bin Average Figure 7.8: RMS vs mean bin value of 500 five event ensembles. 7 .3.4 Ensemble Testing Ensemble tests are experiments in which mock data sets are created using Monte Carlo tf events of a given top mass. Each ensemble is run through the maximum likelihood fitter where it is compared with the templates at each mass point. Ideally, the result of the likelihood fit should return the known top mass for the ensemble being tested. 
Since enough Monte Carlo has not been generated to have a dedicated set for creating ensembles and another for generating the templates, the ensembles are created by randomly selecting the number of events desired in the ensemble, N_ens, from the template sample. To avoid biasing the result, however, these events are removed from the template, and the average template is recalculated. When backgrounds are included, a random number is selected between 0 and 1 for each ensemble event. If the random number is greater than ñ_b/N_ens, then an event is taken from the signal template. Otherwise, it is selected from the background template. If multiple backgrounds are included, then a random number generator is used to determine from which sample the event is taken, based on the relative background contributions. Any samples from which events are selected for the ensemble have their average templates recalculated excluding the ensemble events.

Ensemble testing is done in several stages. The first testing stage does not even use the signal Monte Carlo. Instead, in order to ensure that the code is working properly, a toy Monte Carlo is used. This toy Monte Carlo is generated with a random Gaussian number generator. The bulk of the studies use "events" which are composed of 500 entries from the random number generator. With so many contributions to each event, the events all look very Gaussian. At each mass point, m_t, 1000 toy events with a mean of m_t and width σ are generated. Separate samples are generated for σ = 15, 20, 30, 40, and 50 GeV. Separate samples are also generated with different average event weight binning. Samples with 5, 10, 15, and 20 bins are tested. These samples are then used exactly as described above for the signal-only ensemble tests.

Several conclusions can be drawn from these tests. First, by cross-checking calculations by hand, it can be confirmed that the code is calculating the likelihood
Second, bin size has an effect on the output mass. That is, if the 0 of the Gaussian is considerably less than the bin width, oscillations about a line with a slope of one appear with masses at the lower side of the bin fluctuating high and masses on the higher side of the bin fluctuating low. Figure 7.9 shows this effect in the five bin sample in which the bins are 50 GeV wide. Figure 7.10 shows that this effect disappears when ten 25 GeV bins are used. This effect is not observed in any samples using more than ten bins either. Once the binning is set such that the oscillations are removed, the output masses agree very well with the input masses as Figure 7.10 also depicts. For each of these plots, 100 events are used to construct the ensemble, and 50 tests are conducted at each mass point. Examples of the minimized —— ln L plotted against mt for 10 bin Gaussians with a : 20 GeV are displayed in Figure 7.11. The 161 98° lain!" 35-39112 9 1211181 ‘ 40278—712 . 9.80 Prob 0.0002803 1 935° Prob 0 “' ' E 40 p0 0.1412 103433 W, .. E 40 90 0.1189 20.1511 mu ._ -' -_ p1 1.009 t 0.007813 E 20 p1 ,.,____-...‘.:9.‘9*°-°.3?.°_‘i_. E 20. 12’" ‘ k . ........ , .20: .............. 20 . 9A . : . . 40 ”'3'; 0‘ “ """""""""" t 40? .... f l l '6on 140 ‘ ‘201 ‘ A 0 A A 320‘ A J A ‘ ‘ “-8.03. -40 ”WT—20‘"W0“”'20 Input 1111-172 (60180 - Input m1 4’? (01080 950 x’nfar 13.15/12 ‘ ‘, 9 ’ x’lndf 02214112 ’ é Prob 0.3505 3 " £50? Prob 1 ‘ """ " 4 1'" 5 1 ,l '- ,- E 40 00 0.1891 20.2200 - -‘ .+ g 409 p0 0,4147 2 0,3015 ," 22° -31., ”9“, 5.9-9‘” .. / -. : in 0.99071001489 / go 1 - o 5 1 S I E .20 - 2 O -20 .-_-. '....._.... O : if “'° . 1 “in? “ . "'i " _60-‘__- 1 '1 .19...111111111141la11l111 -80 -4o -20 0 20 ‘6980 -40 -2o 0 20 Input m1 412 (60ng Input m1 - 112 (0‘10VI0 Figure 7.9: Output mt vs input mt for ensemble tests using Gaussians of width 15 (top left), 20 (top right), 30 (bottom left) and 40 (bottom right) GeV. The average weight distributions are binned into five 50 GeV bins. 
Bin boundaries occur at 130, 180, and 230 GeV (−45, 5, and 55 GeV on the graph).

The minima of these distributions are fit with quadratic functions in order to obtain the most likely value for m_t. The fit range is ±20 GeV around the minimum mass point.

Once confident that the toy Monte Carlo performs as expected, signal-only ensemble tests are conducted using templates and ensembles created from the parton-level Monte Carlo information. As shown in Section 7.2.1, the peaks of the event weight distributions for the actual Monte Carlo samples do track with the input masses; however, these distributions are asymmetric, not Gaussian. Hence, the Gaussian tests are useful for testing the code, but further tests must be run with event weight distributions that look more like the final RECO-level templates. Parton-level tests are conducted first since higher order effects do not affect the mass reconstruction at this level. Thus, the maximum likelihood fit should return the input mass.

Figure 7.10: Output m_t vs input m_t for ensemble tests using Gaussians of width 15 (left) and 20 (right). The average weight distributions are binned into ten 25 GeV bins.

The parton-level tests and all further tests are conducted using the ten-bin templates and ensembles.
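The mass extraction, fitting a quadratic to the −ln L points near the minimum and taking its vertex, can be sketched with a small least-squares parabola fit (names are illustrative; the ±20 GeV window follows the text):

```python
def quadratic_minimum(masses, nll_values, window=20.0):
    """Fit y = a*x^2 + b*x + c to the -ln L points within +-window GeV
    of the lowest point (x measured from that point) and return the
    vertex position, i.e. the fitted top mass."""
    m0 = masses[nll_values.index(min(nll_values))]
    pts = [(m - m0, y) for m, y in zip(masses, nll_values) if abs(m - m0) <= window]
    # power sums for the 3-parameter normal equations
    s = [sum(x ** k for x, _ in pts) for k in range(5)]
    t = [sum(y * x ** k for x, y in pts) for k in range(3)]
    A = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], s[0], t[0]]]
    # Gaussian elimination with partial pivoting on the 3x3 system
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (A[i][3] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    a, b = coef[0], coef[1]
    return m0 - b / (2 * a)   # vertex of the parabola
```

For an exactly parabolic set of −ln L points, the routine recovers the vertex even when it falls between the scanned mass points.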
When five-bin templates and ensembles are used, the same oscillatory behavior seen in the Gaussian tests is also observed in the Monte Carlo tests. Ideally, 15 or 20 bins would be employed; however, background samples with low statistics require coarser binning to be used at this time. When the parton-level tests are run, −ln L vs mt is smooth around the minimum but has a slightly different shape than the corresponding plots for the Gaussians, as shown in Figure 7.12. Nevertheless, a plot of output mass vs input mass still exhibits a slope of one (Figure 7.13). Two sets of parton-level tests are conducted. First, a test is run with a high statistics ensemble of 100 events. Such an ensemble looks very much like the template for the corresponding mass. This test allows the fitter to run over ensembles with small statistical fluctuations. In reality, however, there are only five events in the data. Therefore, 100 tests are also conducted at each mass point using ensembles with five events in order to determine the average behavior of the low statistics ensembles. Both of these cases are shown in Figure 7.13.

Figure 7.11: Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum. The ensembles are generated using Gaussians with means of 150 (top), 175 (middle), and 200 (bottom) GeV.
Figure 7.12: Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for ensembles generated with the mt = 150 (top), 175 (middle), and 200 (bottom) GeV parton-level signal samples.

Figure 7.13: Output mt vs input mt for ensemble tests using Monte Carlo parton-level information. The average weight distributions are binned into ten 25 GeV bins. The plot on the top shows the average output masses from five ensemble tests at each mass point using 100 events per ensemble. The one on the bottom uses five events per ensemble and 100 tests per mass point.

The next step is to test the likelihood fitter with RECO-level events. This adds higher order effects into the signal sample without adding backgrounds yet.
These tests are conducted in exactly the same way as the parton-level tests, only using the RECO-level information. Here, the plots of −ln L vs mt look very similar to those produced by the parton-level tests (Figure 7.14), and, again, the output masses agree with the input masses, as shown in Figure 7.15.

Finally, the background can be added as described above. Ensemble tests are run using four signal events and one background event since only one background event is predicted by the analysis. When background is added, the shape of −ln L vs mt changes drastically (Figure 7.16). Away from the minimum, −ln L takes on a constant value. This effect is due to the size of f_s compared to the size of f_b. That is, f_b is not dependent on the mass point and is a constant value everywhere. f_s, on the other hand, is very dependent on the mass point. When the ensemble with mt = 175 GeV is tested against mass templates around 175 GeV, f_s is greater than f_b, indicating the ensemble is most likely to be signal. When this ensemble is tested against mass points a few tens of GeV away, f_b can be orders of magnitude larger than f_s. When f_s becomes much smaller than f_b, the likelihood essentially goes to

L → n_b f_b(W) / (n_s + n_b),   (7.21)

removing the mt dependence from the picture entirely. This results in a constant −ln L value. Moreover, in general, the low input masses tend to be pulled high while the high input masses tend to be pulled low, as shown in Figure 7.17. While this deviation from a slope of one seems worrisome at first, two further tests show that this result is expected when an average ensemble is compared to the average signal and background templates instead of individual ensemble events. First, when the average background template is compared to the signal templates, the background has a mass of 178.4 GeV, as shown in Figure 7.18.
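The saturation of −ln L away from the minimum can be reproduced with a small numerical sketch. The toy densities, widths, and numbers below are illustrative assumptions, not the analysis values; only the structure (mass-dependent f_s, flat f_b, mixing by expected event counts) follows the text.

```python
import numpy as np

mass_points = np.arange(120.0, 241.0, 5.0)
m_true, n_s, n_b = 175.0, 4.0, 1.0        # four signal + one background event

# Toy single-event densities: f_s dies off as the template mass moves away
# from the event's preferred mass, while f_b is flat, i.e. independent of
# the template mass.
f_s = np.exp(-0.5 * ((mass_points - m_true) / 12.0) ** 2)
f_b = np.full_like(mass_points, 1e-3)

nll = -np.log((n_s * f_s + n_b * f_b) / (n_s + n_b))

# Far from m_true, f_s << f_b, so -ln L saturates at the constant
# -ln( n_b f_b / (n_s + n_b) ): the plateau described in the text.
plateau = -np.log(n_b * f_b[0] / (n_s + n_b))
print(float(nll[-1] - plateau))           # tail sits on the plateau
```

The minimum of `nll` sits at the true mass, while both tails flatten onto the plateau value, mimicking the drastic change of shape seen once background is mixed in.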
From this, one can infer that mixing background with low mt signal in the ensembles makes the average ensemble look like a higher mt ensemble, whereas mixing background with high mt signal pulls the average ensemble low. Ideally, the background statistics would be comparable to the signal statistics so that high statistics ensembles could be used just as in the signal-only tests, in which tests with 100 event ensembles are conducted in order to reduce statistical fluctuations in the ensembles. However, the current background statistics do not allow this. Instead, another test can be run in which every background event is defined to be the average background template. Such a background event is called an average background event. Then, since one background event is expected and five events are observed, average ensembles containing 100 events can be constructed with 80 signal events and 20 average background events. Using the average background events reduces the statistical fluctuations involved in randomly selecting a few individual background events from a low statistics sample. As expected, this test demonstrates that a slope that deviates from unity results from mixing background into the samples. Figure 7.19 depicts this result for two cases: the 100 event ensembles just discussed and five event ensembles with four signal events and one average background event. Because the slope of the output vs input mt line is not unity, the output mass must be corrected in order to obtain the input mass. A correction can be derived from either the fit obtained from running tests with five event ensembles with one randomly selected background event or the fit obtained from running tests with five event ensembles with one average background event. Since a background event in the data is a single event from a single source, not the average of all backgrounds, the fit using ensembles with a randomly selected background event is chosen.
In this case, the corrected top mass is

m_t = (m_t^out − 29.23) / 0.84,   (7.22)

where m_t^out is the mass obtained from the fitter and m_t is the corrected top mass.

Figure 7.14: Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for ensembles generated with the mt = 150 (top), 175 (middle), and 200 (bottom) GeV RECO-level signal samples.

Figure 7.15: Output mt vs input mt for ensemble tests using Monte Carlo RECO-level information. The average weight distributions are binned into ten 25 GeV bins. The plot on the top shows the average output masses from 100 ensemble tests at each mass point using five events per ensemble. The one on the bottom uses 100 events per ensemble and five tests per mass point.
Figure 7.16: Plots of −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for ensembles generated with one background and four mt = 150 (top), 175 (middle), and 200 (bottom) GeV signal events.

Figure 7.17: Output mt vs input mt for ensemble tests using signal and background. At each mass point, 500 ensemble tests are run using ensembles with four signal events and one background event.

Figure 7.18: −ln L values vs mt where the mass is obtained using a quadratic fit around the minimum for an "ensemble" consisting of the average background template.

Figure 7.19: Output mt vs input mt for ensemble tests using signal and background. In the plot on the top, at each mass point, five ensemble tests are run using ensembles with 80 signal events and 20 average background events. In the plot on the bottom, at each mass point, 100 ensemble tests are run using ensembles with four signal events and one average background event.

7.4 Mass Measurement of the Candidate Events

Having completed the ensemble testing, it is time to run over the candidate events in the data.
First, the candidate events must be run through the weighter. The event weights for each event are shown in Figure 7.20. From these event weights, an average data ensemble is constructed (Figure 7.21). This average data ensemble is then compared to the signal (at each mass point) and background templates via the maximum likelihood. The plot of −ln L vs mt for the data is shown in Figure 7.22. A quadratic fit around the minimum gives a measured top mass of 169.7 ± 19.7 GeV. The statistical error is found from the 500 Monte Carlo ensemble tests at each mass point. The number of events with an output mass between 165 and 170 GeV for each input mass point is found, and the RMS of this distribution is taken to be the statistical error. The error found using the Monte Carlo is in good agreement with the standard method for finding a likelihood error: the −ln L + 1/2 method yields a statistical error of ±17.7 GeV. Since background is expected in the candidate sample, the correction given by Equation 7.22 must be applied to extract the top mass. Taking into account the errors on the fit parameters, this correction yields mt = 167.3 ± 23.5 GeV.

7.5 Prospects for the Future

This first pass at a mass measurement shows that the tools needed for the measurement are in place. However, to make a measurement that improves upon the Run I mass measurement, a considerable amount of fine-tuning still needs to be done. Such refinements include finding the optimal value for h, or going to a variable h value, and finding the optimal binning for the event weights. Other methods of fitting the minimum of the −ln L vs mt distributions should also be examined.

A critical need for an improved mass analysis is higher background statistics. Statistical fluctuations in the current low statistics samples could lead to the background event weight distributions not being modeled correctly. Ideally, the background samples would have similar statistics to the signal samples after all cuts.
To achieve this, the sample sizes would have to be increased by a factor of 15 (WWjj) to 100 (Zjj mass 1). Since this is unreasonable to do using full detector simulation and reconstruction, a fast Monte Carlo like PMCS is needed [76]. As a further refinement, higher order effects such as ISR and FSR should also be explored. To study the effect of extra jets in the event, ISR-only and FSR-only Monte Carlo samples must be generated, as well as samples with both ISR and FSR turned off. Finally, systematic effects need to be examined. Studies that need to be conducted include determining the dependence of the top mass on the jet energy scale, EM energy scale, and detector resolutions. The effect of using different fitting methods also needs to be explored.

In addition, another version of the likelihood is under study. This likelihood compares the data and template events event-by-event in order to obtain maximal use of the information. This approach is much closer to what was done in Run I. For this method, f_s and f_b change. These become:

f_s(W^k | m_t) = (1 / N_MC(m_t)) Σ_{i=1..N_MC(m_t)} Π_{j=1..N_bin} [1 / (√(2π) h)] exp( −(W_j^k − W_j^i(m_t))² / (2h²) ),   (7.23)

where N_MC(m_t) is the number of tt̄ Monte Carlo events with mass m_t, W_j^k is the event weight of the kth ensemble event in the jth bin, and W_j^i(m_t) is the event weight of the ith signal template event in the jth bin, and

f_b(W^k) = (1 / Σ_n b_n) Σ_{n=1..N_source} (b_n / N_n) Σ_{i=1..N_n} Π_{j=1..N_bin} [1 / (√(2π) h)] exp( −(W_j^k − W_j^{i,n})² / (2h²) ),   (7.24)

where N_n is the number of events in the nth background source, b_n is the fractional contribution of the nth background source, and W_j^{i,n} is the event weight of the ith background template event in the jth bin from the nth background source. This gives a likelihood for the kth ensemble event. The total likelihood is then:

L = Π_{k=1..N_evts} L_k,   (7.25)

where N_evts is the total number of events in the ensemble. This method currently gives a slope of unity for parton-level, signal-only tests.
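The kernel structure of the event-by-event likelihood can be sketched as follows. This is an illustrative implementation only: the helper names, the value of h, and the way the per-event signal and background densities are combined into L_k are assumptions, not the analysis code.

```python
import numpy as np

h = 0.05  # Gaussian kernel width; the text notes the optimal h is still open

def kernel_density(w_event, w_templates):
    """Average over template events of the product over bins of Gaussian
    kernels centred on the template weights (the f_s / f_b structure)."""
    # w_event: shape (n_bins,); w_templates: shape (n_template_events, n_bins)
    g = np.exp(-0.5 * ((w_event - w_templates) / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)
    return float(np.mean(np.prod(g, axis=1)))

def total_neg_log_l(events, sig_templates, bkg_templates, n_s=4.0, n_b=1.0):
    """-ln L with L the product over ensemble events of per-event terms L_k;
    here each L_k mixes f_s and f_b with the expected event counts (an
    assumed combination)."""
    nll = 0.0
    for w in events:
        f_s = kernel_density(w, sig_templates)
        f_b = kernel_density(w, bkg_templates)
        nll -= np.log((n_s * f_s + n_b * f_b) / (n_s + n_b) + 1e-300)
    return nll
```

Scanning `total_neg_log_l` over signal template sets built at different mass points then yields the −ln L vs m_t curve whose minimum is fit, just as in the template-averaged version.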
However, a non-unity slope is observed for RECO-level signal-only tests. This effect is under study. The hope is that these methods can be used to cross-check the mass measurement. The same event weight information is given to both likelihood fitters, but they find the best value for the top mass using that information in different ways.

Figure 7.20: Event weights for the five candidate events.

Figure 7.21: Average ensemble of candidate events.

Figure 7.22: −ln L vs mt for the candidate events. A quadratic fit around the minimum gives a measured top mass of 169.7 GeV.

Chapter 8

Conclusions

The first measurement of the tt̄ production cross section at √s = 1.96 TeV in the dielectron decay channel was performed using 243 pb⁻¹ of D0 Run II data. The cross section was found to be

σ_tt̄ = 14.9 +9.4/−7.0 (stat) +2.5/−1.2 (syst) ± 1.0 (lumi) pb.

This is in agreement with the predicted cross section calculated at NNLO+NNLL, which is σ_tt̄ = 8.0 ± 0.6 ± 0.1 pb for a top quark with mt = 175 GeV. The combined dilepton cross section is in very good agreement with the theory. Hence, these results show no discernible deviation from the Standard Model.
Future measurements of the cross section in the dielectron channel will benefit from considerably more integrated luminosity, leading to a smaller statistical error. Monte Carlo samples with higher statistics are also being generated in order to decrease the uncertainty on the background estimation. In addition, as the jet energy scale, the electron energy scale, the detector resolutions, and the luminosity measurement are fine-tuned, the systematic errors will continue to decrease.

A first pass at a measurement of the top mass in the dielectron channel using the neutrino weighting method was also presented. The measured value of the top mass, mt = 167.3 ± 23.5 GeV, agrees with previous top mass measurements within error; however, it does not yet show an improvement over the measured top mass in the dielectron channel from Run I. This measurement demonstrates that the tools for making a mass measurement are in place. However, further refinements and fine-tuning, as well as more background Monte Carlo samples, are still necessary before a competitive mass measurement can be made.

Appendix A

Grid Search Results

S+B    S/B   M_Z (GeV)  p_T^e1 (GeV)  p_T^e2 (GeV)  E_T (GeV) (<M_Z)  E_T (GeV) (>M_Z)  S     ε_sig
0.892  1.98  70-110     15            20            35                30                0.15  6.8%
0.916  1.72  70-110     15            20            40                25                0.15  6.8%
0.903  1.72  70-110     15            20            40                35                -     7.0%
0.904  1.81  70-110     15            20            40                40                -     6.9%
0.887  1.70  75-105     15            20            35                25                0.20  7.3%
0.906  1.76  75-105     15            20            35                25                0.23  6.9%
0.848  1.89  75-105     15            20            35                30                0.15  7.7%
0.855  1.98  75-105     15            20            35                30                0.17  7.4%
0.857  2.27  75-105     15            20            35                30                0.20  7.1%
0.838  2.26  75-105     15            20            35                35                0.15  7.4%
0.849  2.28  75-105     15            20            35                35                0.17  7.2%
0.850  2.71  75-105     15            20            35                35                0.20  6.8%
0.900  1.73  75-105     15            20            40                25                0.20  7.0%
0.853  2.02  75-105     15            20            40                30                0.15  7.4%
0.864  2.07  75-105     15            20            40                30                0.17  7.2%
S+B    S/B   M_Z (GeV)  p_T^e1 (GeV)  p_T^e2 (GeV)  E_T (GeV) (<M_Z)  E_T (GeV) (>M_Z)  S     ε_sig
0.868  2.35  75-105     15            20            40                30                0.20  6.8%
0.840  1.84  75-105     15            20            40                35                -     7.9%
0.842  2.47  75-105     15            20            40                35                0.15  7.2%
0.857  2.43  75-105     15            20            40                35                0.17  6.9%
0.841  1.94  75-105     15            20            40                40                -     7.7%
0.841  2.69  75-105     15            20            40                40                0.15  7.0%
0.896  1.85  75-105     20            20            35                35                -     6.9%
0.873  1.72  80-100     15            20            35                30                0.23  7.5%
0.888  1.75  80-100     15            20            35                30                0.25  7.2%
0.820  1.82  80-100     15            20            35                35                0.15  8.3%
0.833  1.82  80-100     15            20            35                35                0.17  8.1%
0.840  2.01  80-100     15            20            35                35                0.20  7.7%
0.853  2.18  80-100     15            20            35                35                0.23  7.2%
0.867  2.25  80-100     15            20            35                35                0.25  6.9%
0.880  1.80  80-100     15            20            40                30                0.23  7.3%
0.895  1.84  80-100     15            20            40                30                0.25  7.0%
0.823  1.98  80-100     15            20            40                35                0.15  8.0%
0.839  1.94  80-100     15            20            40                35                0.17  7.8%
0.848  2.13  80-100     15            20            40                35                0.20  7.4%
0.858  2.34  80-100     15            20            40                35                0.23  7.0%
0.815  1.71  80-100     15            20            40                40                -     8.6%
0.821  2.15  80-100     15            20            40                40                0.15  7.8%
0.837  2.10  80-100     15            20            40                40                0.17  7.6%
0.848  2.26  80-100     15            20            40                40                0.20  7.2%
0.857  2.52  80-100     15            20            40                40                0.23  6.9%
0.881  1.84  80-100     20            20            35                35                0.15  7.2%
0.897  1.80  80-100     20            20            35                35                0.17  7.0%
0.883  2.00  80-100     20            20            40                35                0.15  6.9%
0.878  1.74  80-100     20            20            40                40                -     7.4%

Table A.1: Dielectron kinematic cut optimization based on MC.

Appendix B

Candidate Events

The kinematics of the five dielectron candidate events are listed in Tables B.1 to B.5. Corresponding event displays are shown in Figures B.1 to B.5. In the displays, red represents energy deposited in the EM part of the calorimeter; blue is energy in the hadronic part of the calorimeter; and yellow is E_T. The event displays show uncorrected, RECO-level information.
Object       p_T (GeV)  η      φ
ele1         55.5       -0.04  1.93
ele2         19.9       0.45   3.50
jet1         106.9      -0.37  3.03
jet2         39.4       1.11   5.98
E_T (GeV)    110.5
M_ee (GeV)   49.8
Sphericity   0.27

Table B.1: Kinematics for event 121971122 in run 166779.

Object       p_T (GeV)  η      φ
ele1         67.4       0.11   1.73
ele2         58.7       1.03   5.52
jet1         84.1       0.51   2.71
jet2         33.0       -0.50  4.37
E_T (GeV)    43.9
M_ee (GeV)   133.5
Sphericity   0.68

Table B.2: Kinematics for event 13869716 in run 177681.

Object       p_T (GeV)  η      φ
ele1         61.8       -0.19  5.04
ele2         18.0       -0.24  3.68
jet1         83.9       0.96   0.42
jet2         20.2       -2.17  1.73
E_T (GeV)    79.7
M_ee (GeV)   42.1
Sphericity   0.39

Table B.3: Kinematics for event 26229014 in run 178152.

Object       p_T (GeV)  η      φ
ele1         97.6       0.29   1.42
ele2         19.2       -0.17  0.75
jet1         133.8      1.11   5.60
jet2         51.7       0.64   3.81
E_T (GeV)    98.7
Sphericity   0.32

Table B.4: Kinematics for event 13511001 in run 178177.

Object       p_T (GeV)  η      φ
ele1         104.5      -1.16  2.23
ele2         42.7       -0.91  4.43
jet1         85.2       -1.34  5.61
jet2         69.4       -0.27  1.76
jet3         27.8       -1.46  1.00
jet4         23.0       0.10   5.70
jet5         16.2       -0.84  6.27
E_T (GeV)    75.1
M_ee (GeV)   120.3
Sphericity   0.49

Table B.5: Kinematics for event 14448436 in run 180326.

Figure B.1: Run 166779 Event 121971122: RZ view (upper right), XY view (upper left), Lego view (lower).

Figure B.2: Run 177681 Event 13869716: RZ view (upper right), XY view (upper left), Lego view (lower).

Figure B.3: Run 178152 Event 26229014: RZ view (upper right), XY view (upper left), Lego view (lower).

Figure B.4: Run 178177 Event 13511001: RZ view (upper right), XY view (upper left), Lego view (lower).

Figure B.5: Run 180326 Event 14448436: RZ view (upper right), XY view (upper left), Lego view (lower).

Bibliography

[1] D. Griffiths, Introduction to Elementary Particles, John Wiley & Sons, Inc., 1987.
[2] F. Halzen, A.
Martin, Quarks and Leptons: An Introductory Course in Modern Particle Physics, John Wiley & Sons, Inc., 1984.
[3] M. Peskin, D. Schroeder, An Introduction to Quantum Field Theory, Perseus Books, 1995.
[4] L. Alvarez-Gaume, et al., "Review of Particle Physics," Phys. Lett. B 592:1-1109, 2004.
[5] S. Abachi, et al., Observation of the Top Quark, Phys. Rev. Lett. 74:2632-2637, 1995.
[6] F. Abe, et al., Observation of Top Quark Production in p̄p Collisions, Phys. Rev. Lett. 74:2626-2631, 1995.
[7] D. Chakraborty, J. Konigsberg, D. Rainwater, Review of Top Quark Physics, Ann. Rev. Nucl. Part. Sci. 53:301-351, 2003.
[8] N. Kidonakis and R. Vogt, Theoretical Status of the Top Quark Cross Section, hep-ph/0410367, 2004.
[9] S. Willenbrock, Top Quark Theory, hep-ph/9611240, 1996.
[10] V.M. Abazov, et al., New Measurement of the Top Quark Mass in Lepton+Jets tt̄ Events at D0, submitted to Phys. Rev. Lett., 2004.
[11] S. Willenbrock, The Standard Model and the Top Quark, hep-ph/0211076, 2002.
[12] http://www.fnal.gov/pub/about/whatis/history.html
[13] http://www.fnal.gov/pub/about/whatis/picturebook/descriptions/00_635.html
[14] http://www-ad.fnal.gov/runII/index.html
[15] http://www.fnal.gov/pub/inquiring/physics/accelerators/chainaccel.html
[16] http://www-bd.fnal.gov/public/chain.html
[17] G. Dugan, et al., Mechanical and Electrical Design of the Fermilab Lithium Lens and Transformer System, IEEE Trans. Nuc. Sci., 30:3660-3662, 1983.
[18] T. LeCompte, H. T. Diehl, The CDF and D0 Upgrades for Run II, Annu. Rev. Nucl. Part. Sci., 50:71-117, 2000.
[19] D0 Collaboration, The Upgraded D0 Detector, in preparation.
[20] R. Hooper, Ph.D. Thesis, University of Notre Dame, 2004 (unpublished).
[21] The D0 Upgrade Central Fiber Tracker Technical Design Report, D0 Note 4164.
[22] J. Brzezniak, et al., Conceptual Design of a 2 Tesla Superconducting Solenoid for the Fermilab D0 Detector Upgrade, D0 Note 2167.
[23] L.
Groer, D0 Calorimeter Upgrades for Tevatron Run II, D0 Note 4240.
[24] Design Report, The D0 Experiment at the Fermilab Antiproton-Proton Collider, D0 Note 137, 1984.
[25] C-C. Miao, The D0 Run II Luminosity Monitor, D0 Note 3573, 1998.
[26] http://www.pa.msu.edu/hep/d0/l1/framework/tfw_tutorial_march_2003.html
[27] M. Abolins, et al., The Level 1 Calorimeter Trigger for D0, D0 Note 706, 1988.
[28] http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/hardware/general/calorimeter_trigger_tt_data.txt
[29] http://www.pa.msu.edu/hep/d0/ftp/l1/cal_trig/drawings/
[30] http://www.pa.msu.edu/hep/d0/l1/cal_trig/index.html
[31] http://d0db.fnal.gov/trigdb/cgi/trigdb_main.py
[32] B. Vachon, et al., Top Trigger Efficiency Measurements and the top-trigger Package, D0 Note 4512, 2004.
[33] http://www-d0.fnal.gov/Run2Physics/top/d0_private/wg/top_analyze/Stradivarius_Updated/Stradivarius_Updated.html
[34] http://root.cern.ch
[35] L. Duflot, et al., cal_event_quality Package, D0 Note 4614.
[36] http://www-d0.fnal.gov/~d0upgrad/d0_private/software/jetid/certification/Macros/Runsel/V5.1/runselection.summary
[37] http://www-d0.fnal.gov/d0dist/dist/releases/development/top_dq
[38] K. Rajan, et al., Calorimeter Event Quality Using Level 1 Confirmation, D0 Note 4554, 2004.
[39] G. Corcella, et al., Herwig 6.5, hep-ph/0011363, 2001.
[40] F. Caravaglios, et al., Nucl. Phys. B 539:215-232, 1999.
[41] M. Mangano, et al., Nucl. Phys. B 632:343-362, 2002.
[42] M. Mangano, et al., hep-ph/0206293, 2003.
[43] T. Sjöstrand, L. Lönnblad, Pythia 6.2, hep-ph/0108264, 2001.
[44] J. Pumplin, et al., New Generation of Parton Distributions with Uncertainties from Global QCD Analysis, JHEP 0207:012, 2002.
[45] D. Lange, et al., The EvtGen Event Generator Package, Proceedings of CHEP, 1998.
[46] S. Jadach, et al., TAUOLA Library Version 2.5, Comp. Phys. Comm., 76:361, 1993.
[47] R. Brun and F. Carminati, CERN Program Library Long Writeup W5013, 1993.
[48] R.
Field, http://www.phys.ufl.edu/~rfield/cdf/tunes/rdf_tunes.html
[49] B. Olivier, et al., NADA, a New Event by Event Hot Cell Killer, D0 Note 3687, 2000.
[50] S. Crepe-Renaudin, Energy corrections for geometry effects for electrons in Run II, D0 Note 4023, 2002.
[51] R. Zitoun, Study of the Non Linearity of the D0 Calorimeter Readout Chain, D0 Note 3997, 2002.
[52] M. Narain, U. Heintz, A Likelihood Test for Electron ID, D0 Note 2386, 1994.
[53] D. Whiteson, L. Phaf, Electron Likelihood, D0 Note 4184, 2003.
[54] L. Lyons, Statistics for Nuclear and Particle Physicists, Cambridge University Press, 1986.
[55] S. Anderson, et al., Measurement of the ttbar Production Cross-section at center of mass energy 1.96 TeV in Dilepton Final States, D0 Note 4653, 2004.
[56] S. Jain, Scale and Oversmearing for High-pT Electrons, D0 Note 4402, 2004.
Anderson, et al.,Measurement of the ttbar Xsec in the dilepton channels at sqrt(s) = 1.96 TeV (topological), DO Note 4420, 2004. [69] T. Edwards, et al., An Updated DO Luminosity Determination: Short Summary. DO Note 4328, 2004. [70] E.Barberis, J .-F. Grivaz, M. Kado, Combined Results for the ttbar Cross Section Measurement, DO Note 4246, 2003. [71] E.W. Varnes, Ph.D. Thesis, University of California, Berkeley, 1997 (unpub- lished). [72] B. Abbott, et al., Measurement of the Top Quark Mass in the Dilepton. Channel, hep—ex/ 9808029, 1998. [73] E. Barberis, Private communication. [74] L. Holmstrbm, S. Sain, and H. Miettinen, A New Multivariate technique for Top Quark Search, Comp. Phys. Comm., 88:195-210, 1995. [75] F. James, CERN Program Library D506, 1978 (unpublished). 195 [NH http://www-d0.fnal.gov/computing/MonteCarlo/pmcs/ pmcs_doc/pmcs.html 196 1:][1]]]]1]]1j]]1]]11