This is to certify that the dissertation entitled "Measurement of the Nuclear Dependence of Direct Photon and Neutral Meson Production at High Transverse Momentum by Negative 515 GeV/c Pions Incident on Beryllium and Copper Targets" presented by Lee Ronald Sorrell has been accepted towards fulfillment of the requirements for the Ph.D. degree in Physics. Major professor. Date: May 5, 1995. MSU is an Affirmative Action/Equal Opportunity Institution.

MEASUREMENT OF THE NUCLEAR DEPENDENCE OF DIRECT PHOTON AND NEUTRAL MESON PRODUCTION AT HIGH TRANSVERSE MOMENTUM BY NEGATIVE 515 GEV/C PIONS INCIDENT ON BERYLLIUM AND COPPER TARGETS

By

Lee Ronald Sorrell

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

1995

ABSTRACT

MEASUREMENT OF THE NUCLEAR DEPENDENCE OF DIRECT PHOTON AND NEUTRAL MESON PRODUCTION AT HIGH TRANSVERSE MOMENTUM BY NEGATIVE 515 GEV/C PIONS INCIDENT ON BERYLLIUM AND COPPER TARGETS

By Lee Ronald Sorrell

The nuclear dependence of inclusive direct photon production and inclusive neutral meson production by a 515 GeV/c π⁻ beam has been measured using data collected by the E706 experiment during the 1990 fixed target run at the Fermi National Accelerator Laboratory. The experiment utilized a finely segmented liquid argon calorimeter and a high precision charged particle spectrometer to make precision measurements of inclusive direct photon, neutral pion, and η production in the rapidity interval -0.75 < y < 0.75. The π⁰ data is reported for the PT range from 0.6 GeV/c to 12 GeV/c, while the η data is reported for the range from 3.5 GeV/c to 7.0 GeV/c. The direct photon nuclear dependence results are reported for the range from approximately 4.0 GeV/c to 8.5 GeV/c. The data from the beryllium and copper targets have been fit using the parameterization σ_A = σ₀ × A^α. The neutral meson results are in good agreement with previous charged meson results. The direct photon results are consistent with no anomalous enhancement.

ACKNOWLEDGEMENTS

First and foremost I would like to thank my parents. I never would have finished this degree without their support, which continued even after they realized that no one has ever actually seen (or ever will see) a quark with their eyes... Working on a high energy thesis can be a very difficult task.
I have had the great fortune of having many friends who have kept me from actually carrying out some of the ideas I have had for those PDPs. David and Cindy Striley have shared their home for more dinners, movies, and conversations (weird and otherwise) than I can count, and, of course, The Barbecue. I won't say that they kept me from losing my sanity, but losing it wasn't quite as painful... Thanks Sandy for the mutual support during these crazy times. I would also like to thank Mr. Dan Jansa and all the other folks at MMAA / Eagle Academy for their friendship and support, and for that really strange road trip to Keokuk. I was going to have this thesis printed in purple, but they nixed that - sorry Rose! Pat and the entire Prerna team have been good friends and good people to have at your back - so how did I end up in goal? Jerry and Polly Murphy have always been ready with a smile and a laugh. I look forward to that barnfire! To my fellow Klin, I say Qapla. The honor is to serve. Carol and Mike have invited me to some of the strangest parties at Fermilab. They will be missed.

The E706 family has made our little corner of the lab a fun place to be (well, when we're not working, anyway). Little things like Lucy's attack plants and the traditional oreos make all the difference, although I do miss Rob's posters... I will also miss the times when we gathered around Chicago style pizza, Polish buffet, and Thai curry. Of course, now that I am going into restaurants without Dan, I actually have to ask for the food to be spicy. It turns out that there is a local chain that calls its cuisine "murderburgers". I will do what I can to send some to the Lenmeister. John was kind enough to share his personal torture devices with me at the gym more times than I can remember - thanks for dragging me out of the lab... On the unhealthier side, I wonder if I should ask Michael to make a cheesecake for my defense - it worked for my pizza talk, why not my defense? Although I am running out of vaguely witty things to say, I would like all of you to know that I have appreciated the friendship that we have shared over the last 6(!) years and that you will all be missed.

To George Ginther, I can only say thanks. We still don't agree on everything, but you have helped in more ways than I can count and you have made me a better physicist. I have also enjoyed the countless conversations at weird hours on weird subjects. Just remember George, I am not responsible for people using the hide-away bed during meetings... I would also like to thank Joey for taking the extra time to teach me about particle physics and triggers. I also enjoyed the jokes, but I guess I can't repeat most of them here... Carl took the time to tell me when I was doing the wrong thing (and was right all too often).

Collaboration meetings without Paul's jokes just aren't any fun. The laughs made all the difference... I would like to thank Tom for reminding us to keep our eyes on the prize... I would like to thank Gene for reminding me to have a life. I would also like to thank Loretta for her friendship and her help. Now you won't have to worry about shovelling my stuff off your desk anymore...
If Stephanie Holland is ever paid what she is worth to the grad students, she will be making more than anyone else in the PA building. She has untied the Gordian knots created by the university more times than I'care to remember. I would also like to thank Barb, Lisa, Dolores, and all of the HEP secretaries who have made life a little easier. Of course, I have to thank Lee. She and the rest of the library staff have been kind and helpful and just weird enough to be interesting. ;) My teachers at Parkway West and Rose Hulman taught me how to think (despite myself). Thanks is somehow not enough. For R.A.H. and everyone else who is still looking for the door into summer. I have probably missed many important people. To everyone who has helped I say Thanks! Qapla! and Hap Ki! l INTRODT'C 1.1 The Data. .5 Direct P}. 1.6 1.7 2 THE EXPEF 2.1 2.2 2.3 2.4 2.5 2.6 Austral: '. Pretious Ot'ERt'Il THE BE: THE TR, 2.3.1 3;; 2.3.2 TE. 2.3.3 I} THE LIQ- 243 It 2.4.2 I} 2.4.3 Tl THE F0] THE E57 ’ THE TRIGC 3.1 3.2 3.3 OVERVI THE BE. TABLE OF CONTENTS 1 INTRODUCTION 1.1 The Data Set .............................. 1.2 The Standard Model .......................... 1.3 Quantum Chromodynamics (QCD) .................. 1.4 The Parton Model and Perturbative QCD .............. 1.5 Direct Photon Physics ......................... 1.6 Anomalous Nuclear Effects ....................... 1.7 Previous Experiments .......................... 2 THE EXPERIMENTAL APPARATUS 2.1 OVERVIEW ............................... 2.2 THE BEAMLINE ............................ 2.3 THE TRACKING SYSTEM ...................... 2.3.1 Silicon Strip Detectors and Targets .............. 2.3.2 The Analysis Magnet ...................... 2.3.3 The Downstream Tracking System .............. 2.4 THE LIQUID ARGON CALORIMETER (LAC) .......... 2.4.1 The LAC Cryostat and Gantry ................ 2.4.2 The Electromagnetic Calorimeter (EMLAC) ......... 2.4.3 The Hadron Calorimeter (HALAC) .............. 2.5 THE FORWARD CALORIMETER (FCAL) ............. 2.6 THE E672 MUON SPECTROMETER ................ 3 THE TRIGGER AND DA SYSTEMS 3.1 OVERVIEW ............................... 3.2 THE BEAM AND INTERACTION LEVEL ............. 3.2.1 The Beam Definition ...................... 3.2.2 The Beam Hole Counter(s) ................... 3.2.3 The Interaction Definition ................... 3.3 THE RABBIT PT SYSTEM ...................... 3.3.1 The LAC Amplifier Card (LACAMP) ............. 3.3.2 The P1- Attenuator Card .................... 3.3.3 “Image Charge” ......................... 3.3.4 The Biased PT Adder Card .................. 3.4 THE PRETRIGGER LEVEL ..................... OwaH 8 12 15 19 23 23 24 29 31 33 33 36 36 40 43 46 48 49 49 50 50 52 53 57 58 58 65 69 3.5 3.5.4 '1". 3.5 THE rat 4 EVENT REC 4.1 OYERVII 4.2 THE DIS 4.3 THE ETE AND Eli! 4.3.1 4.3.2 4.3.3 4.3.4 4.3.5 4.3.6 El fl C: In “C Pl 4.4 THE ca 5.1 Oveme“ 5-2 Vertexc 5.3 EhllAC 5-4 Energy, 5.5 Longituc 5'6 3311011 B 5.7 5.8 5.6.1 5.6.2 5.6.3 5.6.4 5.6.5 ( I l t t The 1’0 The E3 5.8.1 5.8.2 5.8.3 5.8.4 5.85 58.6 vi 3.4.1 The Zero Crossing Timing Discriminator ........... 75 3.4.2 The Veto Walls ......................... 78 3.4.3 The Early PT System ...................... 81 3.4.4 The Pretrigger Logic ...................... 84 3.4.5 The Two Gamma Pretrigger .................. 86 3.5 THE TRIGGER LEVEL ........................ 88 3.5.1 The Local Triggers ....................... 89 3.5.2 The Global Triggers ...................... 93 3.5.3 The Two Gamma Trigger ................... 
97 3.5.4 The Prescaled Triggers ..................... 98 3.6 THE DATA ACQUISITION SYSTEM ................ 100 EVENT RECONSTRUCTION 103 4.1 OVERVIEW ............................... 103 4.2 THE DISCRETE LOGIC ROUTINES (DLUNP AND DLREC) . . 104 4.3 THE ELECTROMAGNETIC CALORIMETER ROUTINES (EMUNP AND EMREC) ............................. 106 4.3.1 EMUNP ............................. 106 4.3.2 FREDPED ........................... 110 4.3.3 Group and Peak Finding .................... 113 4.3.4 Initial Shower Reconstruction ................. 116 4.3.5 “Gamma” Correlation ..................... 119 4.3.6 Photon Timing Information .................. 122 4.4 THE CHARGED TRACK ROUTINES (PLUNP AND PLREC) . . 123 NEUTRAL MESON ANALYSIS 128 5.1 Overview ................................. 128 5.2 Vertex Cuts and Reconstruction Efficiency .............. 129 5.3 EMLAC F iducial Cuts and Geometric Acceptance .......... 130 5.4 Energy Asymmetry ........................... 132 5.5 Longitudinal Shower Development ................... 135 5.6 Muon Bremsstrahlung Rejection .................... 136 5.6.1 Offline Veto Wall Requirement ................. 138 5.6.2 Directionality .......................... 141 5.6.3 Balanced PT ........................... 143 5.6.4 Chisquared / E .......................... 144 5.6.5 Corrections for Muon Cuts ................... 144 3 5.7 The x° and 1] Signal Definitions .................... 147 5.8 The EMLAC Energy Scale ....................... 149 5.8.1 EMUNP and EMREC Corrections .............. 150 5.8.2 Octant Energy Correction ................... 150 5.8.3 Boundary Corrections ..................... 150 5.8.4 Correction for Lost Energy ................... 151 5.8.5 The Radial Correction ..................... 152 5.8.6. Octant Energy Corrections Revisited ............. 154 5.3.? E. 53.5 E 5.9 The 33:: 5.9.1 T 5.9.? E 59.3 C 5.10 Reconst: 5.11 Phctor. - 5.12 Trigger l 5.13.1 T 5.12.? 3 5.12.3 C set N 5.13.5 5‘ 5.13 Beam X; 51% Beam E; 515 C1055 Se: 6 NEUTRAL l 6.1 r°Crass 6-2 1° Nuclei 6-3 nCrossS 6'4 nNuclea: 6'5 SFstemat 7 DIRECT P} u Oflnka 3'2 Relectici: 73 BALANt 7'4 The Dire 3'5 Bachgror 8 DIRECT p} 8'1 Distribui ' DlICClw .3 5,st 9C0NCLCSI vii 5.8.7 Electrons ............................ 5.8.8 Energy Scale Verification and Results ............. 5.9 The Monte Carlo Simulation ...................... 5.9.1 The Detector Simulation .................... 5.9.2 Event Generation ........................ 5.9.3 Comparison With Real Data .................. 5.10 Reconstruction Efficiency ........................ 5.11 Photon Conversion Probabilities .................... 5.12 Trigger Corrections ........................... 5.12.1 The Local Discriminator Analysis ............... 5.12.2 Calibrating the Global Trigger PT Calculations ....... 5.12.3 Global Discriminator Efficiency Measurements ........ 5.12.4 Pretrigger Efficiency Measurements .............. 5.12.5 Summary of Trigger Efficiencies ................ 5.13 Beam Normalization .......................... 5.14 Beam Energy .............................. 5.15 Cross Section Calculations ....................... NEUTRAL MESON RESULTS 6.1 1r° Cross Section Results ........................ 6.2 1r° Nuclear Dependence ......................... 6.3 17 Cross Section Results ......................... 6.4 17 Nuclear Dependence ......................... 6.5 Systematic Errors ............................ DIRECT PHOTON ANALYSIS 7.1 Overview ................................. 7.2 Rejection of Charged Particle Showers ................ 
7.3 BALANCED PT CUT ......................... 7.4 The Direct 7 Signal Definitions .................... 7.5 Background Subtraction ........................ DIRECT PHOTON RESULTS 8.1 Distribution of Direct Photon Sample ................. 8.2 Direct 7 Nuclear Dependence ..................... 8.3 Systematic Errors ............................ CONCLUSIONS 155 158 158 162 162 163 166 168 170 171 180 182 187 188 191 192 194 194 202 202 214 214 221 221 221 224 224 225 229 229 229 231 237 1.1 Diagram long dista A and B 1.2 The lead; 1.3 Rescatte: going pa: the cross oil); the. than A : 2-1 The Mg indicate 32-2 The Ce the C01} aper’tm 32-3 The 13 2.4 The la Plane. 2.5 Oflen: 1333.26: 2.6 The l 1.1 1.2 1.3 2.1 2.2 2.3 2.4 2.5 2.6 2.7 2.8 2.9 3.1 3.2 LIST OF FIGURES Diagram of hadron-hadron interaction factorized into the short and long distance components. The incident particles are represented by A and B and hl and h; are outgoing hadrons. ............ The leading order diagrams for direct photon production. ..... Rescattering of outgoing particle PT in nucleus. The P7 of the out- going particle may be slightly enhanced or slightly reduced. Because the cross sections for particle production are steeply falling functions of PT the net effect will be to increase the cross sections more rapidly than A at high PT. ........................... The Meson West beamline. The overlaid lines on the lower plot indicate the relative divergence of the secondary beam particles. The Cerenkov counter. The top diagram shows the overall layout of the counter. The lower diagram shows the layout of the phototube apertures in the plane perpendicular to the beam axis. ....... The 1990/ 1991 configuration of the Meson West spectrometer. The layout of the silicon strip detector system (SSDs) in the Y-Z plane. .................................. Orientations of the wires in the Proportional Wire Chamber (PWC) planes. ................................. The Liquid Argon Calorimeter (LAC) Gantry. ........... Exploded view of the Electromagnetic Calorimeter (EMLAC). . . . Geometry of Hadron Calorimeter (HALAC) readout pads. ..... The Forward Calorimeter (FCAL) modules. ............. Distribution of it" event vertices weighted to correct for beam ab- sorption. a) The distribution along the beam (2) axis. b) The x-y distribution for the Cu target region. c) The x-y distribution for the 2 region containing the Be targets. The outlines of the targets, beam hodoscope, the active area of the beam SSDs, and beam hole counter have been included to show the misalignment of the targets with the other systems. ............................. a) The interaction counters. b) Overview of MWest spectrometer showing the locations of the interaction counters. .......... viii 17 25 3.3 CA.) 11.; 3.9 st ‘ cs LJo as loea; er ‘ trigge Bloch Bloch were Gish; (fiat- devia 1321. The 1 indie: hetwe a tie: image Hypo image outpi outp: locks. :giti 356d 1 sigma: Irons and I iIlputs nators 5) A 1 is Sllgl Than 1 Signals 116.15 8‘: when referer, Signal 1 large Sc therefm 1'2:ng are)“, “Kinda ~ lagrar: of the he 3.3 3.4 3.5 3.6 3,7 3.8 3.9 3.10 3.11 3.12 3.13 ix Block diagram of the RABBIT trigger electronics. The Biased PT adder signals are used by the pretriggers and global triggers. The local discriminator signals are used in the local triggers, two gamma trigger, and in the global triggers. .................. Block diagram of the LAC amplifier (LACAMP) card. ....... Block diagram of the PT attenuator card. 
Only the sum-of-8 signals were used for the 1990 and 1991 runs. .3 ............... Global gains for two octants for the 1990 run. The dotted lines in— dicate the desired 2sin(6) values. Octant 1 had some of the largest deviations while octant 3 had fewer deviations than the average oc- tant. .................................. The global gains for two octants for the 1991 run. The dotted lines indicate the desired 2sin(9) values. Note that the large deviations between neighboring strips have been reduced although there is still a decrease in the net gains toward the outside, which may be due to image effects. .............................. Hypothetical model of the image charge effects. The size of the image signal has been exaggerated. a) A “snapshot” of the fast output signals at the interaction time. b) A snapshot of the fast output signals approximately 300 us later. A number of events that looked like b) were read out during the 1991 run. .......... Digital oscilloscope picture of the signals from the inner and outer bi- ased PT adder cards for octant 1 during calibration studies. The late signal is the image charge signal produced by the calibration elec- trons hitting the LAC at R = 25 cm. The scales are 100 mv/ division and 100 ns/ division. .......................... Block diagram of the biased PT adder card. Each card has 32 analog inputs and generates four analog output signals. .......... Principles behind constant fraction (zero crossing) timing discrimi- nators. a) The input PT signal (same convention as for figure 3.9). b) A fraction of the signal is delayed and inverted. This fraction is slightly more than 1 / 2 of the input signal (so that slightly less than 1 / 2 of the signal goes through without inversion). c) The two signals are added back together. The uneven split between the sig- nals guarantees that the signal will cross back through zero voltage. When this happens, the size of the input signal is compared with a reference voltage. If the signal is large enough, then a fixed width signal is generated. Multiplying the input signal by an arbitrarily large scale factor will have no effect on the zero crossing time and therefore no effect on the timing of the output signal. ....... Diagram of veto walls 1 and 2. These walls are located immedi- ately downstream of the hadron shield and neutron absorber. The secondary beam direction 18 into the page. .............. Diagram of veto wall 3. This wall was located immediately upstream of the hadron shield. The incident beam direction is into the page. 82 lllDugam hadron El into the ; lhlwdgy The octa: ._ ._ ” OCTAEQ 1.. loge. Ti g tum l 315 Shnphie: sgnds 117 Diaaiani: a hue of: resent the theshowg Mhmmkia ll Thnede; days' i:r indxated hcn hea: 3'2 Et'entir. Spondini 4-3 The“rm amen 4-4 Separa: Thesh; so shoe. lnthef ‘31 Distrih 52 Toacce 53 toasy 54 Longn Particl l0m). Toma as)‘mr 36 Thee mass 3 haVel X 3.14 Diagram of veto wall 4. This wall was installed upstream of the hadron shield prior to the 1991 run. The incident beam direction is into the page. ............................. 3.15 Topology of opposite octant requirement for two gamma trigger logic. The octant in the upper left can combine with any of the three octants in the lower right to form a pair that satisfies the two gamma logic. There are 12 unique pairs of octants that satisfy the two gamma logic (3 of those pairs are shown in the diagram). ..... 3.16 Simplified block diagram of the “local” formation from the R strip signals. ................................. 
3.17 Diagram of showers from a 7r° or 1] decay. The dashed line represents a. line of constant 43 in the calorimeter. The solid curved lines rep- resent the boundaries between groups of 8 R view strips. Note that the showers shown would not be contained by the same “local”. 3.18 Block diagram of the data acquisition system. ............ 4.1 Time dependence of the EMLAC energy scale as a function of “beam days” for the 1990 and 1991 runs (dark circles). The open triangles indicated the ratios measured using the 50 GeV/c electron calibra- tion beam data. ............................ 4.2 Event in quadrant 1 showing a ramp in the left R view and a corre- sponding step in the outer ¢ view. .................. 4.3 The “ramp and step” event from Figure 4.2 after the global pedestal shifts have been removed by FREDPED. ............... 4.4 Separation of showers using the front / back segmentation of the LAC. The showers are narrower in the front section than in the sum section, so showers that coalesce in the sum section can often be separated in the front view. ........................... 5.1 Distribution of 1r°s that fall within the EMLAC fiducial definition. 5.2 1r° acceptance for several P1- bins averaged over '0, and By. 5.3 1r° asymmetry after sideband subtraction. .............. 5.4 Longitudinal shower development for showers matched with charged particle tracks (top) and showers matched with ZMP electrons (bot- tom). .................................. 5.5 11'" mass distributions for several PT bins after the E,,m/EW and asymmetry cuts have been applied. .................. 5.6 The effect of the veto wall and other muon rejection cuts on the 1r° mass distribution for 7.0 < P1- < 9.0 GeV/c. The mass distributions have been divided into the forward (right side) rapidity regions and the backward rapidity regions (left side). The top row shows the distributions after the Erma/Ema and asymmetry cuts have been applied. The middle shows the distributions after the veto wall cut has been applied. The bottom row shows the distributions after the other muon cuts have been applied. ................. 112 139 5.7 Photon d. 5.5 and ‘9 wall Sign: lation he: pparent. I 5.8 Balance: 9.0 GeV signals. u- 5.9 {lEdistrl The event the event: clearly as: 51!) 1° and 7; :1 5.11 Average e tons isolri reconstru: 51?. Radial de; relative r; 2.0 GeV 3: 5.13 Ratio of E a function Teal data a 5'14 z3333215503 tomass. ' the tar-gs: regions 3r 5'35 The I” {C the OCtan mass at 3 “‘0 phott '05 Heart sante PT 5.16 The a; m; 5 3” Compam Home c, 5318 Compam phOlon E lohlams j The To 11 .20 Emmi (dfihed 3 upper plc an Oman: the Outer 5.19 5.7 5.8 5.9 5.10 5.11 5.12 5.13 5.14 5.15 5.16 5.17 5.18 5.19 5.20 xi Photon directionality distributions for 7r°s with PT values between 5.5 and 9.0 GeV/c. The events on the left side generated offline veto wall signals, while the events on the right side did not. The corre- lation between the veto wall signals and high directionality values is apparent, especially for the backward rapidity regions. ....... 142 Balanced PT distributions for 1r°s with RT values between 5.5 and 9.0 GeV/c. The events on the left side generated offline veto wall signals, while the events on the right side did not. ......... 145 x2 / E distributions for 1r°s with PT values between 5.5 and 9.0 GeV/c. The events on the left side generated offiine veto wall signals, while the events on the right side did not. The events with high x2 / E are clearly associated with the veto wall signal. ............. 
146 1r° and 7) mass distributions after the analysis cuts have been applied. 148 Average energy lost in the material in front of the EMLAC for pho- tons (solid line) and electrons (dashed line) as a function of the the reconstructed energies. ........................ 151 Radial dependence of the reconstructed masses for 1r° and 1) particles relative to the nominal values. The 1r°s are required to have at least 2.0 GeV/ C PT and the 178 are required to have at least 3.5 GeV/c. 153 Ratio of EMLAC energy to track momentum for ZMP electrons as a function of the EMLAC energy. The closed circles come from the real data sample and the open circles are the Monte Carlo data. . 156 a) Mass of «Os reconstructed in the 7ee mode divided by the nominal 1r° mass. The 1% decrease is due to energy loss as the electrons leave the target region. b) and c) show the mass peaks for the 1r° and 1) regions (respectively) in this mode. .................. 157 The it" (closed circles) and 1] (open circles) masses as functions of a) the octant number, b) PT, and c) radial position. The dip in the 1r° mass at high PT is caused by the decrease in the separation of the two photons at high energies. A similar effect can be seen for the «”8 near the inner edge (which must have larger energies to have the same PT as an event near the outside of the detector). ....... 159 The w mass from the 1r°7 decay mode. ................ 160 Comparison of the it" mass and asymmetry distributions from the Monte Carlo (open circles) to the data distribution (solid histogram). 164 Comparison of Monte Carlo (open circles) and data (solid histogram) photon E from / Em“; distributions for several energy ranges. The his- tograms have been area normalized. ................. 165 The 1r° reconstruction efficiency as a function of PT and y. ..... 167 Efficiency curves for single local hi (solid lines) and single local 10 (dashed lines) discriminators as functions of local trigger PT. The upper plots are for locals 2 (left) and 10 (right) in the inner part of an octant. The lower plots are for locals 19 (left) and 27 (right) in the outer part of an octant. ...................... 172 5.21 Raw g1: twohne 522 Raw :11: 5213 Raw g1: 53% Global first pas 5.25 Glo’call landi the cut eventSi bythet 5.96 Preté; Uefi ac pkntei wuap flohdi 5.27 Eificie: for eve: been a (nghtt fh ado 5% Ehdei forevc been a 313333 : 33 ad: 6'1 lnclus; the pr trigger 6'2 InClusi ASClCt energy 6'3 Rapid. sCVera becau: of ma, masst Peahe: Comp dCSCrf Lolllh 6.4 xii 5.21 Raw global PT calibration plot for octant 1. The events cluster along two lines due to a problem with the relative gains measurements. 5.22 Raw global PT calibration plot for events in groups 1-8 of octant 1. 5.23 Raw global PT calibration plot for events in groups 916 of octant 1. 5.24 Global PT calibration plot for octant 1 after gains corrections and first pass cutoff values have been applied. .............. 5.25 Global 10 (upper) and hi (lower) discriminator efficiencies for octants 1 and 4. The efficiencies for events with 1 or 2 trigger groups above the cutoffs are indicated by the solid circles. The efficiencies for events with 3 or more trigger groups above the cutoffs are indicated by the open circles. .......................... 5.26 Pretrigger hi efficiencies averaged over all of the octants for inner (left side) and outer (right side) events. 
The efficiencies have been plotted as functions of 1r° PT for events in which the leading particle was a pion (top plots) and as functions of the appropriate half octant global trigger PT. ........................... 5.27 Efficiencies of the local 10 (upper) and local hi (lower) discriminators for events in which the leading particle was a 1r°. The efficiencies have been averaged over all octants for the inside (left) and the outside (right) of the detector as defined by the break between the biased PT adder cards. ............................ 5.28 Efficiencies of the global 10 (upper) and local hi (lower) discriminators for events in which the leading particle was a 1r°. The efficiencies have been averaged over all octants for the inside (left) and the outside (right) of the detector as defined by the break between the biased PT adder cards. ............................. 6.1 Inclusive 7r° production cross section for the Be targets. Data from the prescaled interaction, prescaled pretrigger, and single local hi triggers were used. ........................... 6.2 Inclusive r° production cross section per nucleon for the Be targets. A selection of other pion production measurements with similar beam energies has been included. ...................... 6.3 Rapidity distribution of inclusive 1r°s produced in the Be target for several PT bins. The distributions are shifted to forward rapidities because the calculation has been performed with respect to the center of mass frame for the pion-nucleon system, not the parton center of mass frame. For proton-nucleon collisions the distribution would be peaked at zero. ............................. 6.4 Comparison of 1r° cross sections for Be and Cu with N LL calculations described in the text. The calculations have been rescaled to account for the measured nuclear dependence. ................ 190 6.5 Nuclea: Be and shades the 1a: to bet 5.6 0:11;: areas: to the 6.3 Nucle pidity 5.5 lnclui 6 Bart: 3’2 Co: 3-3 Co 8.1 NI 6.5 6.6 6.7 6.8 6.9 6.10 6.11 7.1 7.2 7.3 8.1 xiii Nuclear dependence of inclusive 1r° production measured using the Be and Cu targets. Note that the value for (1 decreases toward the shadowing value of 2/ 3 as P1- decreases. The average value of a for the range from 4.0 GeV/c to 8.5 GeV/c is 1.1100 i 0.0034 and seems to be constant from 3.0 GeV/c to 8.0 GeV/c. ............ Comparison of E706 1r° nuclear dependence measurements with E258 measurements for charged pion production. The triangles correspond to the 1r+ data and the squares correspond to the r” data. Nuclear dependence of inclusive 11’0 production as a function of ra- pidity for events with 4.0 < PT < 4.5 GeV/c. ............ Inclusive 17 production cross section for the'Be targets. ....... Rapidity distribution of inclusive as produced in the Be target for several PT bins. The rapidities are shifted forward because the cal- culation used the particle center of mass frame instead of the parton center of mass frame. ......................... Comparison of 1] cross sections for Be and Cu with NLL calculations using Q2 = P§~/4. The calculations have been rescaled to account for the measured nuclear dependence. The parameter 6 is the cone size (in radians) used in determining the theoretical prediction. Nuclear dependence of inclusive 1; production measured using the Be and Cu targets (open circles). Using the values from 3.5 GeV/c to 7.0 GeV/c, the average value of a is determined to be 1.137 :3: 0.019. The star indicates a preliminary measurement of a for w production. 
Distribution of AR2 = Ax2 + Ay2 between the positions of the charged particle tracks extrapolated to the front of the LAC and the reconstructed shower positions. .................... Comparison of 7/1r distributions from data (solid circles) and Monte Carlo of background processes (open circles). 1r°s and 175 with asym- metry values less than 0.75 have been rejected and the corresponding sidebands have been added back in. ................. Comparison of 7/11' distributions from data (solid circles) and Monte Carlo of background processes (open circles). 1r°s with asymmetry values less than 0.90 have been rejected and the corresponding side- bands have been added back in. 173 with asymmetry values less than 0.75 have been rejected and the corresponding sidebands have been added back in. ............................. Number distribution of photon candidates as a function of PT. The 753 scheme has been applied to the sample. The photon reconstruc- tion efficiency has not been included and the number distribution will be more sensitive to the background subtraction than the mea- surements of a. ............................. 205 207 208 210 211 215 217 223 227 228 52 Nudea de: andthl Caflo Ge\'c Comp; 135 SC. range deters the va 8.2 8.3 xiv Nuclear dependence of inclusive 7 production measured using the Be and Cu targets. 1r°s with aymmetries less than 0.75 have bee rejected and the remaining contamination has been removed using the Monte Carlo 7/1r measurements. Using the values from 4.0 GeV/c to 8.5 GeV/c, the average value of a is determined to be 1.024 :t 0.016. Comparison of inclusive 7 nuclear dependence results obtained using 753 scheme (open circles) and 908 scheme (solid circles). The agree- ment is very good above 4.0 GeV/c PT. Below 3.5 GeV/c the mea- surement is extremely sensitive to the accuracy of the background subtraction. Using the values obtained using the 903 scheme for the range from 4.0 GeV/c to 8.5 GeV/c PT, the average value of a is determined to be 1.022 :1: 0.015, which is in good agreement with the value obtained using the 753 scheme. .............. 234 3.— 1.3 1.4 3.1 3.2 3.3 3.4 q um 53‘.“ list rela‘. 1.1 1.2 1.3 1.4 1.5 1.6 3.1 3.2 3.3 3.4 LIST OF TABLES Summary of data taken by the E706 experiment during the 1988 and 1990-1991 fixed target runs at Fermilab. ............... List of the four fundamental fources in the Standard Model. The relative strengths of the forces are compared against the strength of the strong force for a separation distance of 10‘13 cm. The masses are the masses of the force carrying bosons. There is currently no direct evidence for the existence of the graviton. .......... List of leptons in the Standard Model. The particles have been grouped together by generation. The electron and its neutrino belong to the first generation of matter particles. .............. List of quarks in the Standard Model. The particles have been grouped together by generation. The up and down belong to the first generation of matter particles. .................. Characteristics of selected fixed target experiments measuring high p1 pion cross sections using a 1r" beam ................. Characteristics of selected experiments measuring high PT direct photon production. The experiments in the top portion of the ta- ble (down to NA3) had data taken with incident pion beams. UA6 used a hydrogen gas jet as the target for the antiproton beam. The remaining experiments were pp or pi collider experiments. 
The ex- periments with UA, NA, WA, or R designations were run at CERN. The other experiments were run at Fermilab. ............ Global PT system run breaks used in analysis of the pretriggers and global triggers for the 1990 run. These breaks correspond to docu- mented changes in trigger systems, but many of the breaks involve more changes than were documented .................. Local discriminator DAC settings (in units of “DAC counts”) for the 1990 run. Note that the single local 10 trigger was installed prior to run 9183 (the logic was not installed for runs 9181 and 9182). Voltage thresholds used for the global lo and hi discriminators for the 1990 run. .............................. Voltage thresholds used for the 1 / 2 global lo and hi discriminators for the 1990 run. The 1/2 global lo trigger was removed prior to run 9183. .................................. XV 21 74 93 5.1 Ty; 5.? Sun 6.1 luv: 6.? Ra; 6.3 Ra; 6.4 Ra 6.5 Ra 6.6 ln' 6.? N: 6.8 N: 5-9 Ir. 5.1 5.2 6.1 6.2 6.3 6.4 6.5 6.6 6.7 6.8 6.9 6.10 6.11 6.12 6.13 6.14 6.15 8.1 8.2 xvi Typical ranges for the various contributions to the livetime for the 1990 run. ................................ 191 Summary of the corrections for 1r° analysis. The photon conversion and beam absorption corrections have been averaged. ........ 193 Invariant cross section for 1r° production for the Be target (from Figure 6.1). ............................... 197 Rapidity distributions for inclusive 1r° production for 4.0 GeV/c < PT < 4.5 GeV/c (see Figure 6.3). ................... 200 Rapidity distributions for inclusive 1r° production for 4.5 GeV/c < PT < 5.5 GeV/c (see Figure 6.3). ................... 200 Rapidity distributions for inclusive 1r° production for 5.5 GeV/c < PT < 7.0 GeV/c (see Figure 6.3). ................... 201 Rapidity distributions for inclusive 1° production for 7.0 GeV/c < PT < 8.0 GeV/c (see Figure 6.3). ................... 201 Invariant cross section for 1r° production for the Cu target (see Figure 6.4). ................................... 204 Nuclear dependence parameter a for inclusive 1r° production as a function of PT (see Figure 6.5). .................... 206 Nuclear dependence of inclusive 1r° production as a function of ra- pidity (see Figure 6.7). ......................... 206 Invariant cross section for 17 production for the Be target (see Figure 6.8). ................................... 209 Rapidity distributions for inclusive 1] production for 3.5 GeV/c < PT < 4.0 GeV/c (see Figure 6.9). ..................... 212 Rapidity distributions for inclusive 1) production for 4.0 GeV/c < PT < 4.5 GeV/c (see Figure 6.9). ..................... 212 Rapidity distributions for inclusive 1] production for 4.5 GeV/c < PT < 5.5 GeV/c (see Figure 6.9). ..................... 213 Rapidity distributions for inclusive 17 production for 5.5 GeV/c < PT < 7.0 GeV/c (see Figure 6.9). ..................... 213 Invariant cross section for 1] production for the Cu target (see Figure 6.10) .................................... 216 Nuclear dependence parameter a for inclusive 1] production as a func- tion of PT (see Figure 6.11). ...................... 216 Nuclear dependence parameter a for inclusive direct photon produc- tion as a function of PT using the 758 scheme (see Figure 8.2). The errors shown are statistical errors only. ................ 233 Nuclear dependence parameter a for inclusive direct photon produc- tion as a function of PT using the 903 scheme (see Figure 8.3). The errors shown are statistical errors only. ................ 
Chapter 1

INTRODUCTION

1.1 The Data Set

This thesis presents results from the 1990-1991 run of the E706 fixed target experiment at Fermi National Accelerator Laboratory (Fermilab). E706 is a second generation experiment designed specifically to make precision measurements of the production of high transverse momentum (PT) direct photons and neutral mesons in hadron-nucleus and hadron-proton collisions. To achieve this goal a finely segmented Liquid Argon Calorimeter (LAC) was designed and built specifically for E706. The experiment went through an initial run during the 1987-1988 fixed target run and several articles based on this data have been published [1] [2] [3] [4]. During the time between the two fixed target runs a number of improvements were made, including a major overhaul of the LAC readout system to improve parallelism and a large number of less drastic changes that allowed the experiment to gather a much larger and higher quality data set in the 1990-1991 run. The information in Table 1.1 gives a brief overview of the data acquired by E706 during the 1988 and 1990-1991 fixed target runs. Most of the data presented in this thesis was acquired during the 1990 run using a negative pion beam.

The following sections provide a brief review of the standard model, discuss the importance of making precision measurements of direct photon production, and present an overview of the phenomenology of nuclear effects in high PT production.

SUMMARY OF E706 DATA SETS

Run    Interaction    Beam Momentum (GeV/c)    Number of Events    Sensitivity (events/pb)
1988   π⁻ Be          500                      2x10⁶               0.5
       π⁻ Cu                                                       0.1
       (p,π⁺) Be                               3x10⁶               0.75
       (p,π⁺) Cu                                                   0.1
1990   π⁻ Be          515                      30x10⁶              8.6
       π⁻ Cu                                                       1.4
1991   p Be           800                      23x10⁶              7.3
       p Cu                                                        1.8
       p H                                                         1.5
       (p,π⁺) Be      530                      14x10⁶              6.4
       (p,π⁺) Cu                                                   1.6
       (p,π⁺) H                                                    1.3
       π⁻ Be          515                      4x10⁶               1.4
       π⁻ Cu                                                       0.3
       π⁻ H                                                        0.3

Table 1.1: Summary of data taken by the E706 experiment during the 1988 and 1990-1991 fixed target runs at Fermilab.

1.2 The Standard Model

The current understanding of high energy interactions is called the "Standard Model" (SM). While this model is not completely satisfying for theorists because of the large number of input parameters that must be experimentally determined, it succeeds in quantitatively describing a very wide range of results. This section gives a brief overview of the Standard Model without describing its historical development. Readers interested in more information on the Standard Model or the history of its development should consult References [5] and [6], respectively.

The standard model (SM) attempts to encompass three of the four known forces (see Table 1.2 below, which contains a summary of information from References [5] and [7]) while including only a minimal number of particles. Each of the four forces is described by an unbroken gauge symmetry and its associated force carrying particle(s). The primary motivation for the use of gauge symmetries to describe the forces is the success of this technique in describing the electromagnetic and weak forces. In order to perform the calculations using these gauge symmetries, the particles must be treated as massless. The masses are reinserted into the theory using "symmetry breaking" techniques borrowed from solid state physics.
The existence of at least a doublet of massive "Higgs" particles is required for the SM (massless) gauge description of the fundamental forces. Now that both CDF and D0 have evidence for the existence of the top quark, the Higgs particles have moved to the top of HEP's "most wanted particle" list. Other mysteries remaining in the standard model include determining the masses (if any) of the neutrinos, finding a renormalizable gauge description of gravitation, and understanding CP violations.

FUNDAMENTAL FORCES

Force              Carrier        Range         Relative Strength    Mass (GeV/c²)
Electromagnetism   γ              Infinite      10⁻²                 0
Weak               W⁺, W⁻, Z⁰     < 10⁻¹³ cm    10⁻¹³                81, 81, 93
Strong             Gluons         < 10⁻¹³ cm    1                    0
Gravitation        Graviton?      Infinite      10⁻³⁸                0

Table 1.2: List of the four fundamental forces in the Standard Model. The relative strengths of the forces are compared against the strength of the strong force for a separation distance of 10⁻¹³ cm. The masses are the masses of the force carrying bosons. There is currently no direct evidence for the existence of the graviton.

LEPTONS

Particle Name       Symbol    Rest Mass (MeV/c²)    Electric Charge
Electron            e         0.511                 -1
Electron Neutrino   ν_e       ≈0                    0
Muon                μ         105.7                 -1
Muon Neutrino       ν_μ       ≈0                    0
Tau                 τ         1777                  -1
Tau Neutrino        ν_τ       ≈0                    0

Table 1.3: List of leptons in the Standard Model. The particles have been grouped together by generation. The electron and its neutrino belong to the first generation of matter particles.

The SM description of matter includes three generations (or "families") of matter particles. Each of these families includes two leptons and two quarks (see Tables 1.3 and 1.4, taken from Reference [5]), with the particles in each successive generation being heavier than those in the previous generation (except for the neutrinos). Measurements made at the Large Electron Positron ring (LEP) at CERN have excluded the possibility of a fourth generation of particles (unless the neutrino mass is higher than 45 GeV/c²).

QUARKS

Particle Name    Symbol    Mass (MeV/c²)    Electric Charge
Up               u         2-8              2/3
Down             d         5-15             -1/3
Charm            c         1000-1600        2/3
Strange          s         100-300          -1/3
Top              t         ≈174,000         2/3
Bottom           b         4100-4500        -1/3

Table 1.4: List of quarks in the Standard Model. The particles have been grouped together by generation. The up and down belong to the first generation of matter particles.

The leptons, which are subject to the electroweak and gravitational forces, but not the strong force, can all appear as isolated particles and have all been measured directly with the exception of the tau neutrino. The quarks shown in Table 1.4 are subject to all four of the known forces, but the nature of the strong nuclear force (or "color" force) prevents them from appearing as isolated particles. Each quark possesses one of three "color" charges (normally referred to as Red, Green, and Blue). These color charges are not related to visual colors, but they are convenient labels to use for the three charge states associated with the color force. Isolated particles containing quarks (known as hadrons) must be "neutral" in color.
This means that quarks can appear as groups of three quarks (baryons), with each quark possessing one color so that the net color is "white", or in quark-antiquark pairs (mesons), where the "anticolor" of the antiquark balances out the color charge of the quark. The most common baryons, the proton and neutron, have uud and udd (valence) quark structures, respectively.

Although isolated quarks have not been detected, there is a large body of evidence supporting the existence of quarks. Early evidence came from deep inelastic scattering of electrons by protons. These results pointed toward the existence of point particles within the proton, which came to be referred to as "partons" (this phrase actually encompasses the valence quarks, the gluons that hold the quarks together, and a "sea" of virtual pairs). The quark model also unified the large body of hadron spectroscopy results, and reduced the particle "zoo" that existed in the 1960s to a manageable list of fundamental particles. Most importantly, calculations within Quantum Chromodynamics (see Section 1.3) using the parton model have been successful in quantitatively describing hadron interactions.

1.3 Quantum Chromodynamics (QCD)

The formulation of a gauge field description of color force interactions was hampered by the unique behavior of the color force. In order to explain the experimental data, one required a gauge field that would explain both the confinement of quarks to hadrons and the apparent decrease in the coupling strength with increasing energy (known as "asymptotic freedom"). It was eventually demonstrated that a gauge theory based on the non-Abelian SU(3) group would have these properties and that such a theory could be renormalized. This gauge theory for the color force was called Quantum Chromodynamics for rather obvious reasons.

The force resulting from the SU(3) symmetry of the standard model is carried by spin 1 particles called gluons. Each gluon carries a color and an anticolor, which gives rise to a total of eight different gluons. While the gluons play a role analogous to that of the photon in electromagnetism, the gluons are able to interact with each other because they carry color charge, which results in the unique properties of the
As Q2 decreases, or equivalently the distance scales become large, the color coupling increases to unity and eventually diverges in the perturbative approxima- tion, suggesting that the color force becomes too strong to describe perturbatively at low energies and long distances. If one tries to remove a quark from a hadron the color attraction will increase as the separation distance increases. Eventually there wil be en between I wil. conti: hadrons a Equ the collie the part: and the 5“”an Rl'l ar' [0 perf; can be 1.4 8 will be enough potential energy in the interaction to create a quark- antiquark pair Jetween the quark and the rest of the original hadron. This pair creation process vill continue until all of the original quarks are contained inside of new color neutral radrons and is known as “fragmentation” or “hadronization.” Equation 1.1 also shows that a, will decrease toward zero as Q2 increases. ‘his property of the color force is known as asymptotic freedom. As the energy of re collision increases (which corresponds to the size of the “probe” used to look at re partons decreasing), the partons increasingly behave like independent particles rd the interactions with the other partons can be neglected. Experimental mea- rements of the coupling strength give values of 0.35 at the 1' mass (1.8 GeV) and [2 at the Z mass (90 GeV). This decrease in a with increasing Q2 make it possible perform perturbative calculations because the hard scattering (large Q2) process it be treated independent of rest of the collision. 4 The Parton Model and Perturbative QCD One of the most important theorems of perturbative QCD is that the calcu- on of the short distance dependence of a cross section can be separated from calculation of the long distance dependence of the cross section. The process of arating the short and long distance parts of the calculation is known as “factor- ion”. The technique for factorization is not uniquely defined. There are many :rent factorization schemes, which are all valid as long as the same scheme is ied to all parts of the calculations. Because a, is small at short distances, per- ative calculations can be used to determine the short distance part of the cross 9 :ction. To leading order (the Parton Model), this would correspond to calculating te two body cross section for the partons involved in the actual hard scattering ee Figure 1.1). The divergence of a, in the long distance limit makes it impos- ble to calculate the long distance portions of the cross section perturbatively, so tey must be measured experimentally. One of the long distance inputs is the distribution of partons in the incident articles. To first order, these parton distribution functions (PDFs) are just mea- rrements of the probability of finding a parton “a” in particle “A” carrying a ven fraction (x) of the particle’s momentum. The PDFs are independent of the articular reaction being studied, so once a set of PDFs has been obtained they LII be used to calculate cross sections for any desired reactions. The sensitivity of LCh reaction to a given parton distribution will vary, so the standard technique for )taining a full set of parton distributions is to measure several different reactions, ith each reaction constraining the distribution of a particular set of partons. This :ocess is known as global fitting. 
A practical problem associated with measur- g and using parton distributions is that PDFs measured using one factorization :heme are not compatible with PDF measurements made using another scheme. Vhile the results from each scheme will be the same to first order, the separation ;' higher order terms into long and short distance parts will vary between schemes, > only PDFs determined using the same factorization scheme can be used together hen calculating a cross section. The other long distance term in the cross section calculation involved the adronization of the outgoing partons. The fragmentation function is a measure- (If) Ylgu'le has (his 5] and 10 A+B—>h1+h2+x 62 Distribution Function DI Fragmentation Function ure 1.1: Diagram of hadron-hadron interaction factorized into the short and g distance components. The incident particles are represented by A and B and and h; are outgoing hadrons. ment of the produce a pa ma] partons independent factorization The {a distribution hard scatte and B scat expression The X S particles Vkfiahle Where abOVe tOrizat 11 nent of the probability that a particular outgoing parton (c) will hadronize to >roduce a particular type of hadron, hl, carrying a given fraction (z) of the orig- rial parton’s momentum. Like the PDFs, these “fragmentation functions” are ndependent of the particular reaction being studied, but they do depend on the actorization scheme. The factorized cross section can be expressed as the convolution of the parton istribution functions (G), the fragmentation functions (D) and the parton level ard scattering cross section (do /th ). The cross section for incident particles A rid B scattering to produce outgoing particle C would be given by the following (pression: (1 EC 3" (AB _t 0X) 2 Z / dzadxbdcha/A(za,Q2)Gb/3(zb,Q2) d PC abcd A d A ch/c(zc,Q2)z:1r étab a cow + t + a) (1.2) C he X simply indicates that there is no constraint on the number or type of other articles produced in the event. The variables a, f , and 12 are the Mandelstam triables for the partons and are given by: s = (p. + pt)? (1.3) i: (Pa ' Pelz (1'4) fl. = (P6 _ Pd)2 (1'5) iere p,- is the four vector momentum of parton 2'. The cross section expression ove uses p = p, = Q, where p. is the renormalization scale, and p, is the fac- rization scale. Since the real cross sections are independent of p. and pay, one s a certain amount of freedom in assigning them. However, the actual values obtatfifd W“ cross seetic‘: To in Q2 value c» diction of Riseser order to c measured set of cor first ads Perturb. Sites us Order tr must 1 f”glue. PhystCa R is oft Pinon my 12 >btained will depend somewhat on these choices, since we are not calculating the moss section to all orders. To first order, the PDFs and fragmentation functions are independent of the )2 value of the interaction. This phenomenon is known as “scaling” and the pre- iction of Q2 independence was one of the early successes of the parton model. [owever, if one includes higher order terms, these functions do depend on Q”. In rder to calculate cross sections beyond the leading order, these functions must be reasured for a particular Q2 and then “evolved” to the desired Q2 value using a :t of coupled differential equations known as the Altarelli-Parisi equations. .5 Direct Photon Physics Direct photon production has two main advantages over other processes. The 'st advantage is that the photon is color neutral, so it does not undergo the non- :rturbative fragmentation process. 
Measuring the four vector of the direct photon ves us complete information on one leg of the two-body scattering process. In der to obtain this information for the case of an outgoing quark or gluon, one ist measure the products of the fragmentation process without confusing the .gments from the desired parton with fragments from the spectator partons. The ' ysical limitations of real detector systems make this extremely difficult, although is often possible to get fairly good information on the original direction of the rton if one can reconstruct a large fraction of the fragmentation products. It is o relatively straightforward to measure the energy and position of a photon by averting the photon into a shower of particles via pair production in a large Z material is: In adi; the direct pi males dire: distributior him in \‘ slum for] Ti’é‘ue l.’. coalition!)- Pmon 16,. The leddi: [Files] 01] 13 material (where Z denotes the charge of the nucleus involved). In addition to providing direct information about the two-body scattering, the direct photon is sensitive to the gluon in the leading order of calculation. This makes direct photon production a more direct way to determine the gluon parton distribution than Deep Inelastic Scattering (DIS), which is only sensitive to the gluon in the next to leading order terms. DIS can provide some constraints on the gluon for low x values, but its sensitivity to mid range (0.2-0.7) gluons is minimal. figure 1.2 shows the two leading order diagrams for direct photon production, :ommonly referred to as the Compton diagram and the annihilation diagram. The rarton level cross sections for these processes are give by: do 81raa, 2112 + f2 ——. - —* = —— .. 1. dt (cm 97) 93, ea at ( 6) a' —1raa, 112 + 32 (<19 -* <17) = W83 (1-7) X as he leading order calculation for direct photons is much simpler than that for in- usive single hadron production where there are 127 leading order diagrams. By easuring the direct photon cross section with several different incident particle pes, one can obtain information about the relative importance of these two pro- sses. While direct photon production has distinct advantages over other reactions, are are also some practical difficulties related to separating the direct photon nal from the backgrounds due to neutral mesons which frequently decay into otons. The dominant background for direct photons comes from 1r° decays in ich one of the photons is not detected or the two photons cannot be separated. I”) Flgl 14 q H 7 q 9 q 9 q Compton Diagrams i 7 9 9 Annihilation Diagrams Figure 1.2: The leading order diagrams for direct photon production. .475 0. 01/: increa the ra paratu Call act 1.6 I secuons they a} 3.1033 scatt- V! {7* Q mf’p. ‘V'C 15 Approximately 80% of the direct photon background is due to neutral pion decays. At the parton level the production of direct photons is suppressed by a factor of a/a.(as 0.01) compared to the production of quarks and gluons. However, the pro- cess of fragmentation tends to favor distributing the parton’s momentum among a number of particles, so the production of neutral mesons tends to fall off with ncreasing PT more rapidly than the production of direct photons. Because of this, he ratio of direct photons to pions can be relatively large. If the experimental ap- iaratus can reject a large fraction of the background, then the signal to background an actually exceed 1. 
1.6 Anomalous Nuclear Effects

Although high energy physics experiments are concerned with measuring cross sections for free nucleons, it is often more practical to use nuclear targets because they are compact and do not need the special support equipment that liquid hydrogen and deuterium targets require. Naively one might expect that the hard scattering of the partons would not be influenced by the presence of other nucleons and that the nucleus could be treated as a collection of independent nucleons. In this case, the cross section for a nucleus should just be A (the number of nucleons) times the free nucleon cross section. However, it has been known since the 1970s [11] that the production of charged hadrons in hadron-hadron collisions does not follow this simple scaling rule. The experiments conducted by Cronin et al showed that the production of high transverse momentum (PT) charged hadrons increased more rapidly than A with increasing nuclear size. This data also showed that the rate of increase varied with PT. The simplest interpretation of this data is that the partons are undergoing secondary interactions in the other nucleons as they enter or escape the nucleus (see Figure 1.3). This effect can be parameterized as

σ_A = σA = σ₀A^α   (1.8)

where σ_A is the cross section per nucleus, σ is the cross section per nucleon, and σ₀ is the extrapolated cross section for a free nucleon. This form is not strongly motivated by QCD theory, but it seems to describe the experimental measurements remarkably well. If there was no anomalous enhancement of the cross section, then α would be 1. The Cronin data measured α values between 1.1 and 1.3 for high PT hadron production, while α approached 2/3 at low PT values, suggesting that at low PT the interaction was controlled by the area "shadowed" by the nuclear disk.

Subsequent experimental measurements have added substantially to the qualitative understanding of these effects, although an understanding of these effects within QCD has not yet been fully achieved. Measurements of dijet production have shown that the size of the jets produced in hadron-nucleus collisions is roughly the same as the size of the jets produced in hadron-hadron collisions. This is consistent with the idea that the secondary scattering occurs before any hadronization/fragmentation of the partons occurs. The dijet data also shows a broadening of the relative azimuthal angle (φ) between the two jets, which is consistent with multiple scattering of the outgoing partons. Measurements of Drell-Yan (DY) production show only a small enhancement of the PT spectrum.

Figure 1.3: Rescattering of an outgoing particle in the nucleus. The PT of the outgoing particle may be slightly enhanced or slightly reduced. Because the cross sections for particle production are steeply falling functions of PT, the net effect will be to increase the cross sections more rapidly than A at high PT.

The two outgoing leptons in DY production are unlikely to rescatter because they do not interact via the color force, and the electromagnetic coupling, α (≈ 1/137), is much smaller than the strong coupling, α_s (≈ 0.2). Any anomalous enhancement of the DY cross section would have to come from secondary scattering of the incident partons. A useful review of these results can be found in Reference [12].

Like the experiments conducted by Cronin et al, E706 has measured the anomalous dependence of meson production as a function of PT and rapidity.
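The A^α parameterization of Equation 1.8 can be illustrated with a few lines of arithmetic. The sketch below computes per-nucleus and per-nucleon cross sections for beryllium and copper for an assumed α, and then inverts the relation to recover α from the ratio of per-nucleon cross sections; the σ₀ and α values are invented for the example and are not results from this thesis.

import math

A_BE, A_CU = 9.0, 63.5   # approximate mass numbers of the E706 targets

def sigma_per_nucleus(sigma0, A, alpha):
    # Equation 1.8: sigma_A = sigma0 * A**alpha
    return sigma0 * A**alpha

def alpha_from_per_nucleon(sigma_cu_per_nucleon, sigma_be_per_nucleon):
    # Per-nucleon cross sections scale as sigma0 * A**(alpha - 1), so
    # alpha = 1 + ln(sigma_Cu / sigma_Be) / ln(A_Cu / A_Be).
    return 1.0 + math.log(sigma_cu_per_nucleon / sigma_be_per_nucleon) / math.log(A_CU / A_BE)

# Hypothetical example: sigma0 = 1 (arbitrary units), alpha = 1.1 (a Cronin-like value).
sigma0, alpha = 1.0, 1.1
sig_be = sigma_per_nucleus(sigma0, A_BE, alpha) / A_BE   # per nucleon
sig_cu = sigma_per_nucleus(sigma0, A_CU, alpha) / A_CU   # per nucleon
print(alpha_from_per_nucleon(sig_cu, sig_be))            # recovers 1.1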
One expects the measurements of π⁰ production to be similar to previous measurements of charged pion production. However, E706 will be the first experiment to measure the nuclear dependence of direct photon production. This measurement should provide clear information about secondary scattering of the incident partons, since the photon is unlikely to undergo rescattering as it leaves the nucleus. To make these measurements, samples of data from Cu and Be targets were collected simultaneously to minimize systematic effects. The parameter α can be found by measuring the particle production cross sections in the desired bins of PT, rapidity, etc., and rewriting the parameterization of the nuclear dependence as follows:

α(PT, η, ...) = 1 + {ln[σ_Cu(PT, η, ...) / σ_Be(PT, η, ...)] / ln(A_Cu / A_Be)}   (1.9)

This data should be helpful in achieving a QCD based understanding of these effects. While early theoretical models [13] provided some qualitative insights into these processes, calculations of these effects based on simple parton models did not provide quantitative agreement with the experimental results [14]. More recently, theorists have started trying to understand these effects using much more sophisticated QCD based models [15] so that heavy ion collisions (especially the upcoming experiments at RHIC) can be interpreted using the best possible understanding of strong interactions in nuclear matter.

1.7 Previous Experiments

The cross sections for neutral meson and direct photon production have been measured by several previous experiments. Table 1.5 gives an overview of previous measurements of π⁰ production by pion beams (except for E258, which measured the production of charged pions). Table 1.6 shows a selection of the experiments which have measured direct photon production [16] [17]. The NA3, NA24, WA70, E705, and E706 experiments were dedicated direct photon experiments featuring electromagnetic calorimetry and triggering on electromagnetic showers. NA3, NA24, and WA70 used alternating lead and scintillator layers in their calorimeters (sometimes in conjunction with wire chamber information). These detectors provided energy resolutions similar to that obtained by E706, although the spatial resolution of these detectors was generally poorer. R110 and E705 used scintillating glass detectors. These detectors provided excellent energy resolution, but had very poor spatial resolution. R808 used NaI for their electromagnetic system, which provided slightly better energy resolution than the E706 LAC, but poorer spatial resolution. UA6 used a system of proportional tubes sandwiched between lead layers for their electromagnetic detector, and their energy resolution and spatial resolution were somewhat worse than the E706 LAC's performance. The high energy collider experiments (UA1, UA2, CDF, D0) used different techniques to measure direct photon production. At these energies, it is generally impossible to separate the photons from neutral meson decays, although the width of the showers can sometimes be used to differentiate direct photons from the photons produced by neutral meson decays. Other techniques used at the colliders involve requiring "isolation" (mesons will be accompanied by other fragmentation products, but direct photons frequently won't) or using conversion probabilities. For more information on these techniques see Reference [18].
The results obtained from the earlier direct photon experiments were used to determine the features needed for E706 to be able to make high precision measurements of direct photon production using a variety of beam types.

OVERVIEW OF PION PRODUCTION EXPERIMENTS

Experiment   Target          √s (GeV)     pT Range (GeV/c)   Rapidity Range in c.m.
E706         Be, Cu, H₂      31.1         1 → 12             -0.75 → 0.75
E705         Li              23.7         4 → 7              -0.6 → 0.8
E258         Be, Cu, W, H    19.4, 23.7   1 → 6              ~ 0
E111         H₂              13.7, 19.4   1 → 5              ~ 0
WA70         H₂              22.9         4 → 7              -1.0 → 1.3
NA24         H₂              23.7         1 → 7              -0.65 → 0.52
NA3          C               19.4         3 → 6              -0.4 → 1.2

Table 1.5: Characteristics of selected fixed target experiments measuring high PT meson cross sections using a π⁻ beam.

OVERVIEW OF DIRECT PHOTON EXPERIMENTS

Experiment   Target          √s (GeV)     pT Range (GeV/c)   Rapidity Range in c.m.
E706         Be, Cu, H₂      31.1         3 → 12             -0.75 → 0.75
E705         Li              23.7         4 → 7              -0.6 → 0.8
WA70         H₂              22.9         4 → 7              -0.7 → 1.2
NA24         H₂              23.7         3 → 6              -0.65 → 0.52
NA3          C               19.4         3 → 6              -0.4 → 1.2
UA6          p̄ on H₂ (gas)   24.3         3 → 6              -0.4 → 1.4
CDF          p̄p              1800         12 → 115           -1.1 → 1.1
D0           p̄p              1800         9 → ≈75            -0.9 → 0.9
UA2          p̄p              630          13 → 71            -0.8 → 0.8
UA1          p̄p              630          16 → 90            0 → 2.3
R110         pp              63           4 → 9              -0.8 → 0.8
R808         pp              63           3 → 6              -0.4 → 0.4

Table 1.6: Characteristics of selected experiments measuring high PT direct photon production. The experiments in the top portion of the table (down to NA3) took data with incident pion beams. UA6 used a hydrogen gas jet as the target for the antiproton beam. The remaining experiments were pp or p̄p collider experiments. The experiments with UA, NA, WA, or R designations were run at CERN. The other experiments were run at Fermilab.

Chapter 2

THE EXPERIMENTAL APPARATUS

2.1 OVERVIEW

The E706 experiment was designed to make precision measurements of direct photon production and the neutral meson backgrounds to this measurement for various nuclear targets and incident beams. The experiment was located at the end of the Meson West beamline at Fermi National Accelerator Laboratory and took data during the 1988 and 1990/1991 fixed target runs. In addition to assessing the performance of the experimental apparatus during this run, the experiment was able to publish measurements of inclusive direct photon, π⁰, and η production as well as measurements of the structure of the recoiling jet system for events triggered by direct photons and π⁰s [1] [2] [3] [4]. During the 1990 and 1991 fixed target runs, the experiment collected its primary data set, which was more than an order of magnitude larger than the 1988 data sample. The data from the 1990 run will form the basis for the analysis being presented in this thesis. The hardware used to make these measurements will be described in the following sections, except for the trigger system, which was the focus of the author's work on the experiment and will be described in detail in Chapter 3. With the exception of the elements of the trigger system, the systems will be presented in the order that an incident beam particle would encounter them.

2.2 THE BEAMLINE

During the 1990 and 1991 runs, the Tevatron accelerator provided high intensity (≈ 10¹³ protons/cycle) 800 GeV proton beams to the fixed target lines (for a general description of the Tevatron accelerator see Reference [19]). The beam particles were transferred from the Tevatron to the fixed target lines during a slow extraction period that lasted approximately 23 seconds (referred to as a "spill").
After a spill was over, the accelerator required approximately 35 seconds to bring the next batch of protons up to 800 GeV, so that the overall cycle time for spill delivery was just under a minute. The 53 MHz RF structure of the Tevatron produced beam particles in tightly controlled (≈ 1 ns) RF buckets that were separated by approximately 19 ns.

The particles that were extracted from the Tevatron into the fixed target lines were initially split between the primary beamlines: Meson, Proton, and Neutrino. Each of these primary beamlines supplied protons to about five secondary lines for the various experiments. The Meson West (MWest) beamline was designed and built specifically for the E706 experiment (see Figure 2.1). The beamline was divided into nine major sections, with the first six sections controlling the transport of the primary proton beam from the main switchyard to the primary target and the last three sections controlling the transport of the secondary beam. For the 1990 run, a 1.14 interaction length beryllium block was placed in MW6 in order to produce a secondary beam of negative pions. An incident proton intensity of ≈ 5 × 10¹² protons per spill was required to produce the desired secondary intensity of ≈ 2 × 10⁸ per spill.

Figure 2.1: The Meson West beamline. The overlaid lines on the lower plot indicate the relative divergence of the secondary beam particles.

The secondary beamline was adjusted to transport negative pions with an average momentum of 515 GeV/c with a full width of approximately 70 GeV/c. For the 1991 run, the beamline was set up to provide both positive 530 GeV/c and negative 515 GeV/c secondary beams as well as 800 GeV primary proton beams to the experiment. The beam momenta were chosen as a compromise between obtaining a high energy beam and obtaining a beam intensity near the design intensity of 10 MHz. This momentum also gives the valence quarks in the incident pions about the same average momentum as the valence quarks in the incident 800 GeV protons.

The 515 GeV beams actually contained pions, kaons, and protons (or antiprotons for the negative beam), which were identified by a differential Cerenkov detector. This detector allowed us to identify individual particles incident on the experiment's targets (see Figure 2.2). The Cerenkov detector was placed in the MW8 section of the beamline, where the angular divergence of the secondary beam particles was minimal (see Figure 2.1 overlay). The angle of the Cerenkov light produced by a beam particle depended on the mass of the particle involved, so light from particles with different masses (but the same momentum) could be separated and identified. The light produced by a particle was reflected by a spherical mirror at the downstream end of the detector back to 3 rings of phototubes in the upstream end of the detector, almost doubling the effective length of the counter and increasing the separation of the light from different particles.
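The mass dependence of the Cerenkov angle can be made quantitative with the small-angle, ultra-relativistic approximation θ² ≈ 2(n − 1) − m²/p², where n is the index of refraction of the helium radiator and p is the beam momentum. The sketch below evaluates the ring angles for π, K, and p at 515 GeV/c; the value used for (n − 1) is a made-up operating point chosen for illustration, not the pressure setting used by the experiment.

import math

P_BEAM = 515.0                                        # beam momentum in GeV/c
MASSES = {"pi": 0.1396, "K": 0.4937, "p": 0.9383}     # particle masses in GeV/c^2
N_MINUS_1 = 5.0e-6                                    # helium (n - 1) at an assumed pressure (illustrative)

def cherenkov_angle(mass, p=P_BEAM, n_minus_1=N_MINUS_1):
    # Small-angle approximation: theta^2 ~ 2(n - 1) - m^2/p^2.
    theta_sq = 2.0 * n_minus_1 - (mass / p)**2
    return math.sqrt(theta_sq) if theta_sq > 0.0 else 0.0   # 0 means below threshold

for name, m in MASSES.items():
    print(f"{name}: {1.0e3 * cherenkov_angle(m):.2f} mrad")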
Using special lenses and cylindrical mirrors, the light arriving at the upstream end of the counter was divided into three radial regions. By selecting the appropriate helium pressure, the light from each of the different incident particle types would fall primarily into a specific ring, allowing identification of all three particle types simultaneously. Although the analysis of the 1990 Cerenkov data has not been completed, one might expect the minority fractions in the beam to be close to those observed in the 1988 negative pion beam, which consisted of 97.0% π⁻ particles, 2.9% K⁻ particles, and 0.1% p̄ particles [3] [20]. Studies performed during the 1991 run using the forward calorimeter system concluded that the muon contamination of the beam was less than 0.5% [21].

In addition to the magnets used to focus and steer the secondary beam, the beamline also contained several elements designed to reduce the number of particles travelling parallel to the beamline. These particles can come from pions that decay while being transported through the secondary line or from the primary target. They may also come from interactions between secondary beam particles and beamline elements. To reduce the number of beam halo particles, several "spoiler" magnets were installed in the secondary beamline (see Figure 2.1). These magnets generate fields that deflect halo particles away from the beamline without disturbing the particles within the beamline aperture. While these magnets were fairly effective, they did not remove all of the halo particles. A hadron shield was located just inside the MW9 enclosure to absorb most of the halo hadrons that were left. The hadron shield was made up of large steel plates that formed a 4.3 m wide, 4.7 m long, and 3.7 m high stack of steel with a hole through the middle to allow the beam to pass through. In addition, there was a removable vertical slab in the top portion of the stack that allowed the beam to be scanned upward during calibration of the calorimeter.

Figure 2.2: The Cerenkov counter. The top diagram shows the overall layout of the counter. The lower diagram shows the layout of the phototube apertures in the plane perpendicular to the beam axis.

Immediately following the hadron shield, there was a large tank of water that absorbed any neutrons that came out of the hadron shield. This tank was approximately 10' tall, 10' wide, and 1' thick. The downstream surface and a section in the middle of the tank were made out of B-CH₂, which is a neutron absorbing material.

2.3 THE TRACKING SYSTEM

The E706 charged particle tracking system was designed to provide precision measurements at interaction rates up to 1 MHz. In addition to providing measurements of the jets associated with direct photon and neutral meson production, the tracking system provided information on the interaction location so that multiple target types could be used simultaneously. It also allowed identification of EMLAC showers that were initiated by charged particles, especially electrons and muons. Electron showers are nearly identical to photon showers, so it was important to reject these showers when measuring photon production.
However, this similarity also allowed us to use the momentum measurements from the tracking system to verify the calorimeter energy scale. The tracking system consisted of silicon strip detectors (SSDs) upstream and downstream of the targets, an analysis magnet to allow momentum measurements, and a downstream tracking system composed of proportional wire chambers (PWCs) and straw drift chambers (STRAWs) (see Figure 2.3).

Figure 2.3: The 1990/1991 configuration of the Meson West spectrometer.

2.3.1 Silicon Strip Detectors and Targets

The silicon strip detectors were located in the SSD/Target box immediately downstream from the hadron shield. The aluminum box enclosing the SSDs provided electromagnetic shielding as well as a closed dry air volume. The dry air was necessary because water condensation can short circuit the wafer bias voltages by providing a lower resistance path. The silicon planes were installed in modules, with each module containing two wafers. In each of these modules, the strips in the upstream plane were aligned parallel to the X axis, and the strips in the downstream plane were aligned parallel to the Y axis. The coordinate system for the experiment was chosen so that the positive Z axis was along the incident beam axis, the positive Y axis was chosen to be in the upward direction, and the X axis followed from these choices via the "right hand rule." The origin of the coordinate system was chosen to be in the vicinity of the targets.

Three modules of SSDs were installed upstream of the targets to provide information on the directions and positions of the incident beam particles. This information was used to improve the resolution of the photon momentum measurements by providing the beam direction on an event basis. The remaining five SSD modules were located downstream of the targets to provide information on the interaction vertex location by measuring the outgoing charged particles (see Figure 2.4). The outer portions of some of these vertex chambers were not read out since tracks hitting these regions would not hit enough of the other SSDs to form identifiable tracks. Most of the wafers used were etched with 50 μm pitch (separation between strip centers), but the wafers in the upstream vertex module had 25 μm pitch in their central regions to provide better vertex resolution. All of the wafers were approximately 300 μm thick.

Figure 2.4: The layout of the silicon strip detector system (SSDs) in the Y-Z plane.

The four secondary targets were positioned between the SSD beam and vertex systems. The two upstream targets were 0.08 cm thick copper. The copper targets were shaped like disks of diameter 2.0 cm, with straight edges in the vertical direction (see Figure 3.1b). The two downstream targets were 2.1 cm diameter beryllium cylinders. The upstream piece was 3.71 cm long and the downstream piece was 1.12 cm long.
The targets were split to provide air gaps to be used in searching for heavy flavor decays [22]. A liquid hydrogen target was installed prior to the 1991 run to provide an isolated nucleon target for comparison with the data from the Cu and Be targets.

2.3.2 The Analysis Magnet

The MW9AN analysis magnet provided a 6.2 kG dipole field oriented along the Y direction. This corresponded to a 445 MeV/c horizontal PT impulse to charged particles passing through the field. Mirror plates were installed at both ends of the magnet to maintain the uniformity of the dipole field and prevent the magnet fringe field from interfering with the interaction counter phototubes and the tracking chambers located close to the magnet.

2.3.3 The Downstream Tracking System

The downstream system consisted of four PWCs, each with 4 planes of wires, and two straw chambers, each with 8 planes of drift tubes. The straw chambers were installed for the 1990 run to improve linking with the tracks from the SSD system and to improve the measurement of track momenta.

Each of the PWCs consisted of X, Y, U, and V planes (see Figure 2.5). The X plane wires were parallel to the Y axis (to provide information on the X position of a hit) and the Y plane wires were strung parallel to the X axis. The U-V system was rotated by 37 degrees with respect to the X-Y system to reduce ambiguities in correlating the hits. The spacing between each of the wires was 0.10" (2.54 mm). Each of the layers of wires was surrounded by two cathode planes. The cathode planes were divided into three sections (see Figure 2.5) so that when the beam intensity caused "sagging" in the high voltage supply due to the high rates, it would not cause inefficiency in the rest of the chamber. The upstream module was 1.62 m x 1.22 m, the middle two modules were 2.03 m x 2.03 m, and the downstream module was 2.44 m x 2.44 m, so that the modules covered a roughly uniform rapidity region. The PWCs used a relatively standard gas mixture composed of 80.4% argon, 18% isobutane, 0.1% freon, and 1.5% isopropyl alcohol vapor as the ionization medium.

Two straw chambers were added prior to the 1990 run to reduce ambiguities in matching PWC tracks with SSD tracks. Identifying the proper links between SSD and PWC tracks is important because misidentifications result in inaccurate momentum measurements. A smaller straw chamber (STRAW1) was installed between the two upstream PWCs and a larger straw chamber (STRAW2, which was built by MSU [23]) was installed on the calorimeter gantry just downstream of the last PWC chamber. Each of these chambers had four X view layers and four Y view layers. The straws were made from spiral wrapped layers of aluminized mylar with 20 μm gold plated tungsten wires threaded through the centers. The tubes used for STRAW1 were 10.4 mm in diameter while the tubes used in the larger STRAW2 chamber were 15.9 mm. Each of the four planes in a given view was offset by a quarter of a tube from the other planes to minimize the effect of the poor resolution regions near the central wires and the edges of the tubes. The signals from each of the wires were fed into a set of common stop TDCs with 1 ns resolution to provide information on the electron drift time from the particle track to the central wire.
By using the drift time information, the resolution could be improved significantly beyond that obtained by only reading out which tubes had hits in them. Using the drift time information, the resolution of the straw chambers was approximately 250 μm for STRAW1 and ≈ 200 μm for STRAW2. The straws were filled with argon-ethane gas bubbled through ethyl alcohol.

Figure 2.5: Orientations of the wires in the Proportional Wire Chamber (PWC) planes.

2.4 THE LIQUID ARGON CALORIMETER (LAC)

The liquid argon sampling calorimeter located immediately downstream of the tracking system was designed to measure signals from photons, electrons, and hadrons and to provide fast signals to use for triggering on events containing high PT showers. The LAC was divided into an electromagnetic section (the EMLAC) and a hadronic section (the HALAC). The overall layout of the LAC and the associated support equipment is shown in Figure 2.6. The calorimeters were suspended inside a cryostat that contained the liquid argon used as the sampling material. Both the cryostat and the calorimeters were supported by a large mobile gantry that also supported the isolation room containing the readout electronics. The choice of sampling calorimetry was motivated by the desire to minimize the size of the cryogenic volume while still obtaining good stability and resolution.

2.4.1 The LAC Cryostat and Gantry

Liquid argon was chosen as the sampling medium because it provides very good signal uniformity and stability and allowed very fine segmentation in the calorimeter. Since liquid argon boils at 88 K (at 1 atm), the entire calorimeter had to be enclosed in a cryostat. The argon also had to be isolated from the atmosphere because impurities such as oxygen could seriously degrade the signals collected from the argon by reducing the mobility of the electrons. The argon in the cryostat was replaced prior to the 1990 run as a precaution against any impurities that may have accumulated during or after the 1988 run. Each of the shipments of argon used to refill the cryostat was tested and verified to have less than 0.5 ppm of oxygen equivalent contamination.

Figure 2.6: The Liquid Argon Calorimeter (LAC) gantry.

The bottom portion of the cryostat was a cylinder of 1.6 cm stainless steel that was 17 feet in diameter and 21 feet deep with a rounded bottom. The steel was covered with ≈ 25 cm of fiberglass and polyurethane foam to provide insulation. A 5 cm diameter port made of 1.6 mm stainless steel was installed at the point where non-interacting beam particles would hit the cryostat to minimize the number of spurious signals seen in the LAC.
As an additional safeguard against scattering of the beam jet and non-interacting beam particles, a beam filler vessel was installed around the axis where the beam went through the cryostat. The beam filler vessel was a tapered 3.2 mm thick steel cylinder with an average radius of about 40 cm. It was filled with helium to minimize the amount of material in the beam region and to reduce the pressure difference on the walls of the tube from the liquid argon. A second filler vessel was installed between the curved wall of the cryostat and the flat face of the EMLAC in order to minimize the amount of material encountered by photons originating from the target, since shower signals produced in this region could not be measured. This front filler vessel was made out of Rohacell foam coated with fiberglass and epoxy, as well as a 1.6 mm layer of stainless steel to provide structural support.

The top portion of the cryostat was a cap made out of mild (carbon) steel forming a cylinder 17 ft in diameter and 25 ft thick. The cap supported a cryogenic cooling system that was used to keep the argon at a uniform temperature. A layer of insulating plastic baffling was installed immediately above the cooling system to prevent the cap from getting cold. The eight rods that provided the support for the calorimeters projected through this cap and attached to the gantry frame. The cap also had 30 access ports to allow the cables carrying the signals from the calorimeters to be brought out of the cryostat.

The electronics for the calorimeters were installed on the cap to minimize the lengths of the cables. In order to provide shielding for these electronics, the cap was surrounded by a Faraday room. The walls of this room were made out of galvanized sheet metal with the electrical ground provided by contact with the gantry. All of the power and signal lines entering this room were isolated using transformers or optical links to ensure that no electrical contact was made between the outside and inside of the room. The Faraday room also housed the LAC high voltage power supplies and several racks of trigger electronics.

The entire cryostat was mounted on a large open-framed gantry system. The gantry frame was in turn mounted on a set of Hillman rollers that allowed the entire structure to be moved transverse to the beamline at rates of up to 6 inches per minute. By combining horizontal movement of the LAC with vertical steering of the beam, the beam could be directed at almost any part of the LAC for calibration studies.

2.4.2 The Electromagnetic Calorimeter (EMLAC)

The electromagnetic calorimeter was an essential part of the system for the direct photon measurement. The EMLAC contained 33 cells, each of which contained a 0.2 cm thick lead sheet, a 0.857 cm thick R view anode board, another 0.2 cm thick lead sheet, and a 0.857 cm thick φ view anode board. These layers were separated by 0.25 cm liquid argon gaps. Approximately 18% of the energy from an average photon shower was deposited in the liquid argon gaps in the EMLAC. The remaining energy was deposited in the lead absorber plates and G10 boards. Small G10 "buttons" were glued to the faces of the G10 boards to maintain the proper gap sizes. The buttons for subsequent layers were located in different positions to avoid creating any large dead regions in the sampling medium. Lead sheets were used as the absorbing material because a large nucleus is efficient in converting incident photons into electron-positron pairs.
The density of the lead sheets also helped keep the EMLAC compact. The calorimeter was cylindrical with an outer radius of 1.6 m and a length of 71 cm. The front face of the calorimeter was 9.0 m from the targets. The calorimeter had a 20 cm radius hole in the center to allow beam particles and very forward particles from interactions to go through without hitting the calorimeter. This allowed the calorimeter to operate at the high interaction rates anticipated for the data taking runs. The EMLAC was divided into 4 essentially identical quadrant modules to simplify construction (see Figure 2.7).

A strip geometry was chosen over a pad geometry for the EMLAC anode boards because it reduced the number of channels needed to provide good spatial resolution, which reduced electronics costs.

Figure 2.7: Exploded view of the Electromagnetic Calorimeter (EMLAC).

The R-φ geometry was chosen because the signals from the R strips were the natural signal to use for triggering on transverse momentum (PT) (see Chapter 3). An R view anode board was divided into two octants, with 256 strips in each octant. Dividing the R strips into octants instead of quadrants simplified correlation of the R view showers with the corresponding φ view showers. Each of the R strips (on the first layer) was 0.547 cm wide. The φ strips were divided into inner and outer strips so that a greater number of strips could be used on the outside without making the strips unreasonably narrow at the inside edge of the detector. Each quadrant contained 96 inner strips and 192 outer strips. The strips in subsequent layers were "focused" on the target region so that each strip covered the same solid angle region when viewed from the target.

The signals from the first 11 R layers were ganged together to form the "front section" signals for each of the R strips. Similarly, the R strips in the remaining 22 layers were ganged together to form the "back section" signals. Because of the focusing geometry, some of the R strips are missing from either the front or back section on the inside and outside edges of the detector. The signals from the φ strips were also ganged together to form front and back section signals that were sent to the readout electronics. Although some longitudinal segmentation is desirable, reading out each layer of the calorimeter independently would have resulted in larger overall noise problems and prohibitive electronics costs. Segmenting the calorimeter into front and back sections allowed the electromagnetic showers, which deposited most of their energy in the front section, to be distinguished from the showers initiated by hadrons or muon bremsstrahlung, which deposited more of their energy in the back section. This segmentation was also useful for separating closely spaced pairs of photons, because the photon showers will show more separation in the front section signals than they would if the signal was summed over both sections.

The signals from each of the strips were sent to LAC amplifier cards (LACAMPs). These custom designed RABBIT cards (see Section 3.3) produced fast output signals, precision energy measurements, and arrival time measurements.
The precision strip energy measurements were made by sending the LAC strip signals through ≈ 800 ns delay circuits and then into sample and hold circuits. The delay circuits were necessary so that the pretrigger load signal, which was based on the fast output signals, could be used to start the sampling circuit. The overall sampling time was adjusted to minimize image charge effects (see Section 3.3.3). During the 1990 run, a sampling time of 790 ns was used. This was much longer than the electron drift time, which was 400-500 ns, so there was no danger of losing any of the signal from the showers. All of the instrumented LAC channels were read out, which improved our ability to reconstruct low energy photons and detect pedestal variations on an event basis. Signals from groups of 4 strips were also sent to Time-To-Voltage Converters (TVCs) to measure the arrival times of signals and allow rejection of showers from out of time interactions.

2.4.3 The Hadron Calorimeter (HALAC)

Like the EMLAC, the HALAC used liquid argon as the active sampling material, but steel absorber plates were used instead of lead. While the lead was appropriate for pair production in the EMLAC, the HALAC relied on strong interactions to initiate the showers, so any dense material could be used. In addition to providing a dense absorbing material, the two steel "super-plates" in the hadron calorimeter also provided mechanical support.

The hadron calorimeter was designed to look at showers produced by hadronic particles, which take longer to develop and are much broader than electromagnetic showers. The design made use of the broadness of hadronic showers by using a much coarser and simpler measurement geometry (see Figure 2.8). This geometry made reconstruction of the showers much simpler, especially since most of the signal from a given shower fell within a single hexagon of pads. The pads were machined so that the pads from subsequent layers all covered the same angular region when viewed from the target.

Each HALAC cell consisted of a 1" steel plate and a sampling "cookie", with the gaps occupied by the liquid argon. Each cookie was composed of two high voltage boards, two anode planes, and three layers of spacers for mechanical support. The high voltage boards were made of G10 clad on both sides with copper. The side of the board that faced the anode board was maintained at high voltage while the other side was maintained at ground. G10 spacers were placed between the high voltage boards and the anode boards to maintain the proper separation gap, provide mechanical support, and ensure there was no argon gap over the readout traces that ran between the rows of anode pads. A layer of G10 spacers was also placed between the anode boards to provide further mechanical support. The assembled cookie formed an octagonal unit 4 m in diameter. Fifty three of these cookie/steel plate cells were used to construct the HALAC.

Figure 2.8: Geometry of the Hadron Calorimeter (HALAC) readout pads. Typically 93% of a hadron's energy is contained in a 6-cell hexagon.

The signals from the first 14 cells were ganged together and read out as the front section while the remaining 39 cells were ganged together to form the back section. The LAC as a whole contained 9.8 proton interaction lengths of material to provide good containment for hadronic showers. A more detailed description of the HALAC can be found in Reference [24].
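The front/back segmentation described above for the EMLAC (and, in coarser form, for the HALAC) lends itself to a very simple offline discriminant: the fraction of a shower's energy found in the front section. The sketch below illustrates the idea; the cut value and the example energies are arbitrary placeholders chosen for illustration, not the selection actually used in the E706 analysis.

def front_fraction(e_front, e_back):
    # Fraction of a shower's energy deposited in the front calorimeter section.
    total = e_front + e_back
    return e_front / total if total > 0.0 else 0.0

def looks_electromagnetic(e_front, e_back, cut=0.6):
    # Electromagnetic showers deposit most of their energy in the front section,
    # while hadron and muon-bremsstrahlung showers leave relatively more in the back.
    # The cut value here is purely illustrative.
    return front_fraction(e_front, e_back) > cut

# Toy examples (energies in GeV, invented for illustration):
print(looks_electromagnetic(8.0, 2.0))   # True  -> photon/electron-like
print(looks_electromagnetic(3.0, 7.0))   # False -> hadron-like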
2.5 THE FORWARD CALORIMETER (FCAL)

The forward calorimeter (FCAL) was located just downstream of the LAC and was designed to measure the energy and PT of the particles that passed through the beam hole of the LAC and the inefficient central region of the tracking system. These particles were generally part of the "beam jet" produced by fragmentation from the spectator partons in hard scattering events. Reliable measurements of the PT in the beam jets may allow measurements of higher twist processes [25].

The FCAL was divided into three modules (see Figure 2.9). The two upstream modules consisted of 28 steel absorber plates interleaved with 29 scintillator plates. The last module consisted of 32 steel plates and 33 scintillator plates. Each steel plate was 3/4" thick, which represented about 0.1 interaction lengths. The scintillators were 3/16" thick. Each of the modules was drilled with seventy six holes in a 4 1/2" grid pattern. Waveshifter rods doped with an organic dye (BBQ) were inserted into the innermost sixty holes for the 1990 and 1991 runs. These rods shifted the UV light produced by the scintillator sheets to green wavelengths, which could then be transported with relatively low attenuation to phototubes that were sensitive to the green light. The phototubes were attached to alternating ends of the rods in the two upstream modules to minimize resolution variations due to attenuation.

Figure 2.9: The Forward Calorimeter (FCAL) modules.

The signals from the phototubes were read out by custom made flash ADC modules. These modules can provide a record of the signals seen in the previous 2.56 μs for each of the phototubes. Each of the calorimeter modules also had a 1.25" hole in its center to allow non-interacting beam particles to pass through without producing signals. This minimized the confusion caused by the overlap of beam particle signals with signals from interactions in the targets.

2.6 THE E672 MUON SPECTROMETER

The E672 spectrometer was immediately behind the FCAL (see Figure 2.3). This system consisted primarily of PWCs surrounding a toroidal magnet and was designed to make momentum measurements of muon pairs produced by the decay of J/ψ particles. Information from the E672 tracking system was used to study the performance and calibration of the E706 tracking system. However, the information from this spectrometer was not used directly in any of the analysis that will be presented in this thesis.

Chapter 3

THE TRIGGER AND DA SYSTEMS

3.1 OVERVIEW

Because the distribution of direct photons is a steeply falling function of the PT of the photon, the set of events containing high PT photons is a very small fraction of the total number of interactions in the target. The limitations of the data acquisition system, and to a lesser degree the offline analysis system, made it impossible to write out complete information for every incident beam particle, so the trigger system was designed to reject events that were clearly not relevant to the experiment's goals. In order to reduce the rate of output to a more manageable size, a rejection factor of almost 10⁵ is necessary.

Event selection consists of three basic steps: 1) Beam and Interaction definition, 2) Preliminary PT measurement (Pretrigger), and 3) Final PT measurement (Trigger).
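To put the 10⁵ figure in perspective, the rejection achieved by the three selection steps listed above combines multiplicatively. The tiny sketch below shows this bookkeeping with invented per-stage rejection factors; the individual numbers are placeholders and are not measured properties of the E706 trigger.

def combined_rejection(per_stage):
    # Overall rejection factor from independent, sequential selection stages.
    total = 1.0
    for r in per_stage:
        total *= r
    return total

# Hypothetical split of the required ~1e5 rejection across the three steps:
stages = {"beam/interaction": 7.0, "pretrigger": 100.0, "trigger": 150.0}
print(combined_rejection(stages.values()))   # ~1.05e5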
The first step has to contend with very high signal rates, so it is based on scintillation counter signals that can be restricted to single accelerator RF buckets of 19 ns. The last two steps are based on fast estimates of the electromagnetic PT deposited in the Liquid Argon Calorimeter (LAC) made by the RABBIT PT System electronics. These estimates are made in conjunction with the timing definition provided by the scintillation counters. While the following chapter contains a considerable amount of information on the trigger system, those wishing more details regarding the performance of the trigger system can consult the memo written by the author [26].

3.2 THE BEAM AND INTERACTION LEVEL

The sharply defined RF structure of the beam allows us to use scintillation counters to form fast beam and interaction definitions for a given RF bucket without worrying about accidental overlaps between beam and interaction signals from neighboring RF buckets. The signals defined using this RF "bucket" structure also provide a stable and precise timing reference for the LAC, FCAL, and tracking systems. The beam and interaction logic used primarily NIM and CAMAC units. The CAMAC units provided a great deal of flexibility for testing the trigger performance and making improvements to the trigger definitions.

3.2.1 The Beam Definition

The presence of secondary beam particles passing through the target region was detected by a multi-element beam hodoscope. This hodoscope, which was originally used for the E629 experiment [27], was augmented with a third plane of scintillators and installed just downstream of the downstream veto walls. The three planes of scintillation counters were set up as X, Y, and U views, where the U view was rotated by 45° with respect to the X and Y planes. Each plane consisted of 12 elements that were 2 mm thick and 35 mm long. The eight central elements were 1 mm wide so that each element would receive only a small fraction of the high intensity beam. The innermost eight elements were flanked on each side by 2 mm wide elements followed by 5 mm elements on the outside. The relatively small radius of our secondary beams made it possible to use the larger elements on the outside without the signal rates being high enough to cause sagging, as long as the beam was roughly centered on the hodoscope.

In addition to providing efficient beam particle signals, using the multi-element hodoscope allowed us to veto RF buckets containing multiple particles. In order to take advantage of this ability, the discriminated signals for each plane were put into a LeCroy 4532 MAjority Logic Unit (MALU). Each unit produced two signals. One signal indicated that the plane had at least one cluster of hits in it, and the other signal indicated that the plane had two or more clusters of hits in it. The clustering algorithm treated any set of hits in adjacent elements as a single cluster without imposing any limit on the number of adjacent elements that could contribute to a given cluster.

The signals from the MALU units were used to produce two basic beam signals. BM was satisfied if at least two of the planes had at least one cluster in them in the same RF bucket. This signal was used for the prescaled beam trigger. The other signal produced was BM1, which was an attempt to identify buckets that contained only single particles. BM1 was satisfied if BM was satisfied and there was not more than one plane that had two or more clusters in it.
Thus, if the X and Y planes both had two clusters in a given RF bucket, BM would be satisfied, but not BM1. If the X plane had one cluster, the Y plane had two clusters, and the U plane had no clusters in it, then the event would satisfy both BM and BM1. Using this definition allows single particles to be defined even if there is an extra noise hit present. In addition to being used in the trigger logic directly, both BM and BM1 were used to form other definitions in conjunction with other conditions such as the computer ready signal (Live_BM and Live_BM1) and the interaction requirements. Many of these beam signals were sent to scalers so that we could use them for live triggerable beam calculations and for diagnostic purposes.

3.2.2 The Beam Hole Counter(s)

While the majority of the incoming beam particles passed through the targets and fell within the acceptance of our SSD system, some fraction of them did not. To avoid counting beam particles that did not pass through the targets and triggering on beam particles that were outside the acceptance of the SSD system, a "beam hole" veto was used in both the scaler logic and the online trigger logic.

For the 1990 run, the beam hole counter was a ≈ 4" x 4" x 1/8" plastic scintillator with a ≈ 3/8" hole in the center. Vetoing events using the signal from this counter should have forced all of the triggerable beam particles to pass through the targets. Unfortunately, while the hole counter was lined up with the center of the hodoscope and SSD system, the Be targets were not aligned with the center of the SSD system. Figure 3.1 shows the position of the Be target with respect to the hole counter; many of the beam particles that were not vetoed never hit the Be targets. Preliminary measurements for the 1990 data sample suggest that about 75% of the beam particles satisfying the trigger actually satisfied the target fiducial cuts. Prior to the 1991 run, the single hole counter was replaced with a set of four hole counters to divide the signal rate and ensure efficient vetoing. Each plastic scintillator was ≈ 2 5/8" x 2 5/8" x 1/4" with a hole of diameter ≈ 1/2" removed from the corner of the counters, so that the counters when placed edge to edge would have formed a single hole counter with a 1/2" hole.

3.2.3 The Interaction Definition

Most of the beam particles that pass through the target region of the experiment pass through the target without interacting. Approximately 15% of the beam particles that pass through the target or the surrounding support structures interact and produce a varying number of charged and neutral particles. In order to identify the beam particles that interacted, four scintillation counters were installed. One pair of counters was installed between the SSD/Target Box and the MW9AN analysis magnet. Each of these counters was 6" x 3" x 1/16" with a semicircular hole of diameter 3/4" removed from the counter (see Figure 3.2). The other pair of counters was installed on the downstream end of the analysis magnet. These counters were 8" x 4" x 1/16" with semicircular holes of diameter 1 1/2" removed from them. The counters were intended to intercept the largest possible fraction of the charged particles produced in hard interactions while minimizing the probability that a beam halo particle could create an interaction signal. During the 1990 run, these counters suffered from "ringing" problems. The photomultiplier tube (PMT) high voltage levels were adjusted to minimize the ringing.
During the analysis of the 1988 data, it was found that less than about 1% of the high PT events had only one interaction counter fired. The vast majority of events firing only one interaction counter seemed to have been generated by beam particles going through the counters (without interacting in the targets) or by ringing or noise in the counters.

Figure 3.1: Distribution of π⁰ event vertices weighted to correct for beam absorption. a) The distribution along the beam (z) axis. b) The x-y distribution for the Cu target region. c) The x-y distribution for the z region containing the Be targets. The outlines of the targets, beam hodoscope, the active area of the beam SSDs, and beam hole counter have been included to show the misalignment of the targets with the other systems.

Figure 3.2: a) The interaction counters. b) Overview of the MWest spectrometer showing the locations of the interaction counters.

In order to reduce the deadtime of the system and to avoid a number of fake triggers, the interaction definition was changed from the "OR" of all 4 counters used in 1988 to requiring that any 2 or more of the counters fire. This requirement was used to generate the following interaction definitions:

INT  = BM  * {2 or more interaction counters fired} = INT0   (3.1)
INT1 = BM1 * {2 or more interaction counters fired}          (3.2)
INT2 = BM2 * {2 or more interaction counters fired}          (3.3)

(Note: BM2 used the two scintillators used for the 1988 beam definition and was only interesting as a rough cross check.) Similarly, INT#*B̄H (# = 0, 1, 2) were defined by adding a coincidence with the appropriate beam hole counter veto (1 counter veto for 1990, 4 counter veto for 1991).

Live_INT# and Live_INT#*B̄H (# = 0, 1) signals were also formed by requiring a coincidence between the computer ready signal and the appropriate quantities. These definitions were all scaled so that the performance of the interaction definitions could be studied.

In addition to detecting the presence of a beam particle that has interacted in the target region, the interaction level also rejects events that contained interactions in neighboring RF buckets. This "cleaning" was necessary to protect the pretrigger logic unit, which would freeze if the time between interaction signals was not sufficient to complete the clearing phase. This would cause the trigger to remain busy for the remainder of the spill. During the 1990 run, three non-interacting RF buckets were required before (ERLY-CLN) and after (LATE-CLN) any interaction that was sent upstairs. The three bucket requirements for the early and late clean definitions (or "filters") were chosen because it took about 70 ns for the pretrigger unit to be ready for the next strobe signal if a given interaction signal did not generate a pretrigger. Prior to the 1991 run, a custom-made ECL to NIM conversion circuit was installed to reduce the clearing time for the pretrigger unit and the other 4 associated units. This reduced the cycle time for the pretrigger unit to about 50 ns, so that two bucket early and late clean requirements could have been used. The late filter was removed from the LAC trigger interaction definition late during the 1991 run to investigate the possibility of physics correlations in the events that are "self-vetoed" by ringing in the interaction counters.
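A compact way to summarize the beam and interaction logic of Sections 3.2.1 through 3.2.3 is to write it out in software form. The sketch below implements the hodoscope clustering rule (adjacent hit elements form one cluster), the BM and BM1 definitions, and the two-or-more interaction counter requirement of Equations 3.1-3.3; it describes the logic only, not the actual NIM/CAMAC implementation, and the function names are my own.

def count_clusters(hits):
    # Count clusters in one hodoscope plane; hits is a list of struck element indices.
    # Any set of hits in adjacent elements is treated as a single cluster.
    hits = sorted(set(hits))
    clusters = 0
    previous = None
    for h in hits:
        if previous is None or h > previous + 1:
            clusters += 1
        previous = h
    return clusters

def beam_signals(x_hits, y_hits, u_hits):
    # Return (BM, BM1) for one RF bucket.
    n = [count_clusters(h) for h in (x_hits, y_hits, u_hits)]
    bm = sum(1 for c in n if c >= 1) >= 2                 # at least two planes with a cluster
    bm1 = bm and sum(1 for c in n if c >= 2) <= 1         # not more than one plane with >= 2 clusters
    return bm, bm1

def interaction(bm, counters_fired):
    # INT (Equation 3.1): BM in coincidence with 2 or more interaction counters firing.
    return bm and counters_fired >= 2

# Example from the text: X has one cluster, Y has two, U has none -> BM and BM1 both satisfied.
bm, bm1 = beam_signals([5], [2, 3, 7], [])
print(bm, bm1, interaction(bm, counters_fired=3))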
3.3 THE RABBIT PT SYSTEM

The basic layout of the RABBIT (Redundant Analog Bus Based Information Transfer) portion of the trigger electronics system is shown in Figure 3.3. The signals from the LAC are fed into custom designed RABBIT amplifier (LACAMP) cards (see Figure 3.4). The charge integrating amplifiers on these cards produce a fast estimate of the energy deposited on a given strip in the LAC as well as providing a slower, but more accurate, measurement to be used offline. The energy measurements from the LACAMPs for the R-view strips are then fed into RABBIT PT Attenuator cards that attenuate each strip energy by a factor proportional to sin(θ_I) (where θ_I is the angle between the beam line axis and the I-th strip subtended from the target). These single strip PT estimates are added together to produce analog PT sums for groups of 8 neighboring R-view strips. These signals are sent to the biased PT adder cards as well as being daisy-chained into two local discriminator cards used for the trigger level decision. The biased PT adder cards sum the total analog PT signals for the inner and outer halves of each octant and send these signals to the pretrigger zero crossing discriminators and to the global discriminators. The local discriminator cards identify large isolated PT depositions (such as those produced by direct photons) by discriminating the PT signals from groups of 16 R-view strips. The operation of each of these RABBIT cards is discussed in detail below, except for the local discriminator card, which is discussed in the section on the trigger level.

Figure 3.3: Block diagram of the RABBIT trigger electronics. The biased PT adder signals are used by the pretriggers and global triggers. The local discriminator signals are used in the local triggers, the two gamma trigger, and in the global triggers.

Figure 3.4: Block diagram of the LAC amplifier (LACAMP) card.

3.3.1 The LAC Amplifier Card (LACAMP)

The LACAMP cards produce several different signals (see Section 2.4.2). The fast output signals are specifically designed to allow a fast measurement of the PT depositions in an event so that soft scattering events can be rejected. In order to produce the fast output signal, a copy of the signal from a given calorimeter strip is delayed by 180 ns and subtracted from the incoming signal from the same strip. This produces a 180 ns differentiated signal that is especially sensitive to the leading edge of the signal produced by an electromagnetic shower, but is less sensitive to the much slower "decay" portion of the shower signal, which minimizes the summation of signals from interactions occurring at different times (pile-up).

3.3.2 The PT Attenuator Card

The PT attenuator cards produce PT weighted analog sum signals for groups of 8 R-view strips. Initially, pairs of analog strip energies from the LACAMP fast outputs are added together and then attenuated by a factor of approximately 2 sin(θ). These PT weighted energy signals are then added together to form analog sum signals for groups of 8 calorimeter strips (see Figure 3.5).
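The PT weighting performed by the attenuator cards can be written down directly. The sketch below forms the 2 sin(θ) weights from the strip radii, using the first-layer strip width (0.547 cm) and the 9.0 m target-to-EMLAC distance quoted in Chapter 2, and then builds the sum-of-8 signals. The assumption that the first R strip begins at the 20 cm inner radius, and the strip energies themselves, are made up for illustration; the code mirrors the analog arithmetic, not the hardware.

import math

Z_EMLAC = 900.0        # cm, distance from the target to the EMLAC front face
STRIP_WIDTH = 0.547    # cm, first-layer R-strip width
R_INNER = 20.0         # cm, inner radius of the calorimeter (assumed start of the first strip)

def pt_weight(strip_index):
    # 2 sin(theta) for the strip center, with theta measured from the beam axis at the target.
    r = R_INNER + (strip_index + 0.5) * STRIP_WIDTH
    return 2.0 * math.sin(math.atan2(r, Z_EMLAC))

def sums_of_eight(strip_energies):
    # PT-weighted analog sums for groups of 8 neighboring R-view strips.
    weighted = [e * pt_weight(i) for i, e in enumerate(strip_energies)]
    return [sum(weighted[i:i + 8]) for i in range(0, len(weighted), 8)]

# 32 invented strip energies (GeV) -> the four sum-of-8 outputs of one attenuator card.
energies = [0.0] * 32
energies[12] = 40.0     # a single high-energy shower centered on strip 12
print(sums_of_eight(energies))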
Each attenuator card sums signals from 32 R-view strips and produces four analog output sums, which are sent to the biased PT adder cards and local discriminator cards using custom designed "flat cables" that were shielded with a copper tape wrapping to avoid noise pickup. In order to handle the 256 front and 256 back R-view signals used in making the trigger decisions, it was necessary to use a total of 16 attenuator cards per octant.

A number of measurements of the actual attenuation/gain factors provided by these cards have been made, and the results have shown that there are non-negligible variations from the ideal sin(θ) weighting scheme. Figures 3.6 and 3.7 show gain measurements for the 1990 and 1991 configurations. These measurements include all of the gains from the LACAMP to the output of the biased PT adder card, but are dominated by the gains in the attenuator cards. Figure 3.6 shows octant 1 primarily because it had the most significant variations involving large numbers of strips. For 1990, octant 3 came the closest to the ideal distribution of gains, with no large "kinks" or "steps". The overall scale for most of the octants was not exactly 2 sin(θ), but the overall scale is simply a multiplier on the threshold location. Non-uniformity in the gains distributions causes positional variations in the trigger thresholds, with the trend being toward a higher threshold in the outer R (backward rapidity) regions for the 1990 data.

Figure 3.5: Block diagram of the PT attenuator card. Only the sum-of-8 signals were used for the 1990 and 1991 runs.

Figure 3.6: Global gains for two octants for the 1990 run. The dotted lines indicate the desired 2 sin(θ) values. Octant 1 had some of the largest deviations while octant 3 had fewer deviations than the average octant.

Most of the useful gain measurements were made between the 1990 and 1991
Figure 3.7: The global gains for two octants for the 1991 run. The dotted lines indicate the desired 2 sin(θ) values. Note that the large deviations between neighboring strips have been reduced, although there is still a decrease in the net gains toward the outside, which may be due to image effects.

Most of the useful gain measurements were made between the 1990 and 1991 runs. This means that early 1990 gains have to be extrapolated from known hardware changes or through analysis using the global PT ADC information. By breaking the data into R regions and plotting reconstructed global PT values against the ADC values, variations in the relative gains with respect to the actual gains were detected and removed. Because the LACAMP sample times were tuned to minimize image charge effects, the reconstructed PT signals do not fully include the image effects that were seen during the 100-180 ns trigger signal sampling time. However, analysis of the trigger performance has indicated that the trigger signals can be reconstructed well enough using these gains and the 790 ns strip energy measurements.

3.3.3 "Image Charge"

In order to understand the design of the biased PT adder cards, which will be explained in the next section, it is necessary to understand an effect that the cards were designed to counteract. During the 1988 run of the experiment, it was found that the signal from an electromagnetic shower was reduced by a signal from the strips that were not hit by the shower. Figure 3.8a shows a rough (hypothetical) sketch of the effect of image charge on the "in-time" signals (note that a positive pulse height corresponds to a negative voltage from the amplifiers). In addition to the negative signals generated in the strips outside the shower region, it is also possible that there is a reduction in the signal from the strips containing the shower that is proportional to the strip areas (this is based on the systematic gain modifications that were required for the 1990 and 1991 global trigger gains). This problem may be due to the resistors installed in the lines between the HV capacitors and the lead plates to solve the problem of the large inductance of the leads [28]. The capacitors were designed to quickly restore charge to the lead plates, but the resistors may have caused a delay in the restoration of charge to plates when they were "hit" by a shower, so that the charges induced in the strips that were hit by the electromagnetic shower were (at least in part) drawn from the remaining strips in the octant that are connected to the same high voltage source. On a larger time scale, the charge balance will be restored, but on the time scale relevant for triggering, there is a large signal from the portion of the octant that was not hit by electromagnetic showers that will partially cancel the signal from the shower.

In addition to subtracting from the in-time shower signal, the image charge signal also produces an "overshoot" about 300 ns later. Figure 3.9 shows voltage distributions versus time for the signals from the two biased PT adder cards for an octant that had a large shower deposited near the inner edge of the calorimeter. The interaction signal provides a timing reference for the physics signal. Figure 3.8b shows a rough sketch of the signals when the "overshoot" in the image signal reaches its peak.
The total overshoot signal for an octant can be quite large, since there are coherent contributions from a large number of strips, so the overshoot signal may be large enough to satisfy the PT requirements for the pretriggers and global triggers. If another interaction occurs about 300 ns after the initial interaction, then this overshoot signal could generate a trigger. Most of these events will not contain any high PT showers, so they will not affect the cross section measurements. However, if these events were not vetoed in the trigger logic, they could dominate the overall trigger rate. Also, events of this type that do contain high PT showers present significant challenges to the reconstruction programs because they must properly separate the showers from the image charge signal.

Figure 3.8: Hypothetical model of the image charge effects. The size of the image signal has been exaggerated. a) A "snapshot" of the fast output signals at the interaction time. b) A snapshot of the fast output signals approximately 300 ns later. A number of events that looked like b) were read out during the 1991 run.

Figure 3.9: Digital oscilloscope picture of the signals from the inner and outer biased PT adder cards for octant 1 during calibration studies. The late signal is the image charge signal produced by the calibration electrons hitting the LAC at R = 25 cm. The scales are 100 mV/division and 100 ns/division.

The actual size of the induced image signal will depend on the energy of the electromagnetic shower, since this determines the amount of charge that needs to be drawn from the neighboring strips. Thus, a photon of a given PT value which is deposited in the inner R region will tend to induce a larger image charge signal than a photon having the same PT on the outer part of the detector, since the photon on the inside must have a larger energy. In addition to the dependence on the energy of the shower, the size of the image signal seems to depend on the capacitances of the strips, which tend to increase roughly linearly with R. This tends to enhance the image signals from the outer R regions. The combination of these effects results in very large image signals on the outside of the detector for high PT showers deposited in the inner R region of the calorimeter.

3.3.4 The Biased PT Adder Card

The biased PT adder card was designed to suppress the image-induced signals from the strips not containing showers and to provide a fast analog sum signal for the pretrigger decision. The design of the biased PT adder card relies on the fact that the real physics signals generate large signals in a few of the 8 channel sums while the image signals generate much smaller signals over the whole octant. The biased PT adder cards apply a threshold to each of the 8 strip sum signals produced by the attenuator cards and then subtract the threshold from the signal that is sent out if the input signal is over threshold. The signals from the groups that were above the cutoff threshold were summed to produce the global PT signal for the inner or outer half of an octant. By selecting the proper threshold level, the image signals can be reduced substantially while the physics signals are only slightly reduced. In addition to providing topological rejection of image charge signals, the biased PT adder cards amplified the signals from the attenuator cards by a factor of ≈ 6 prior to the discrimination and suppression stage.
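The threshold-and-subtract suppression just described can be sketched in software as below. This is a simplified analogue of the card's behavior, not the circuit itself; the threshold value is taken from the ≈300 MeV per sum-of-8 figure quoted for the 1990 run, and the input signals are invented for the example.

```python
def biased_pt_sum(group_signals, threshold=0.30):
    """Emulate the biased PT adder suppression: each sum-of-8 signal is
    compared with a threshold (~0.3 GeV/c in 1990, ~0.18 GeV/c in 1991);
    groups above threshold contribute (signal - threshold) to the half-octant
    global PT sum, and groups below threshold contribute nothing.
    (In the hardware the comparison happens after a ~x6 amplification stage.)"""
    return sum(s - threshold for s in group_signals if s > threshold)

shower = [0.0] * 16
shower[5:8] = [1.0, 2.2, 0.8]          # a 4 GeV/c photon spread over ~3 groups
overshoot = [0.15] * 16                # small coherent image-charge overshoot

print(round(biased_pt_sum(shower), 2))     # 3.10 -> large shower only slightly reduced
print(round(sum(overshoot), 2))            # 2.40 -> unsuppressed sum could fire a ~2 GeV/c pretrigger
print(round(biased_pt_sum(overshoot), 2))  # 0.0  -> suppressed sum rejects the image signal
```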
The reference voltage for the threshold/subtraction process was programmable through the RABBIT interface. During the 1990 run a reference voltage that corresponded to about 300-350 MeV per group of 8 strips was used. Since each photon will typically generate signals in about 3 groups, the trigger signal for two widely separated photons (e.g. from an η) will be about 1 GeV less than the trigger signal for a single photon with the same PT. In order to reduce this topological bias, the DAC setting for the 1991 run was changed to reduce the threshold value to around 180 MeV.

The layout of the signals going into a biased PT adder card for the inner half of an octant is shown in Figure 3.10. The layout for an outer card is similar. Each of the voltage reference DACs that controlled the suppression circuits could be programmed independently. Although this feature was never used, there were variations in the voltage reference values due to variations in the DACs. The relative variations in these values were measured by C. Lirakis in 1991 and have been used in the offline analysis. Each octant has two biased PT adder cards. The inner card sums the fast output signals from the innermost 128 (front and back) R-view strips and produces a single analog PT sum signal. The outer card sums the signals from the outer R-view strips. For runs prior to run 8925, the outer biased PT adder cards summed signals from all 128 outer R-view strips. After this run, the outermost 32 strips were removed from octants 2, 3, 6, and 7 (these regions are shadowed by the frames of the tracking system components) and the outermost 16 strips were removed from octants 1, 4, 5, and 8. These strips were disconnected from the trigger system because they were generating a large number of image charge triggers even with the biased PT adder card suppression system, due to their large weighting factors and unusual capacitance values.

The oscilloscope pictures in Figure 3.9 show the output from the inner and outer biased PT adder cards in octant 1 during the electron beam calibration performed at the beginning of the 1991 run. A 50 GeV beam of (mostly) electrons was directed at the LAC, which had been moved 25 cm off center for the test. The arrows in the pictures indicate the zero voltage locations. One can see from this figure that a large shower on the inside of the detector can produce an image overshoot signal that is bigger than the original PT signal, even after the suppression has been applied. It should be remembered that showers at R ≈ 25 cm produce the largest possible image signal, but this clearly illustrates the limitations of our image charge suppression.

Figure 3.10: Block diagram of the biased PT adder card. Each card has 32 analog inputs and generates four analog output signals.

The biased PT adder cards substantially reduced the size of the global PT signal generated by "image charge" without substantially changing the signals generated by large showers. However, the cards did tend to suppress the signals from small showers, which may have reduced the effectiveness of the global triggers for ωs and asymmetric ηs. The cards did not completely solve the image charge problem, but they did substantially reduce the rate at which fake trigger signals were generated, which saved valuable livetime.
Table 3.1 shows the run breaks found for the global PT system for the 1990 run. These breaks reflect changes in the attenuator and biased PT adder cards as well as changes in the cabling connecting the various elements of the system.

1990 GLOBAL PT SYSTEM RUN BREAKS

    RUNS        SET NAME    COMMENTS
    7688-7740   L
    7748-8010   K
    8011-8054   J2          J1/J2 split only for GLOBAL HI measurements
    8055-8141   J1
    8142-8280   I2          I1/I2 split only for GLOBAL HI measurements
    8281-8330   I1
    8331-8543   H
    8544-8629   G2          G1/G2 split only for GLOBAL HI measurements
    8630-8678   G1
    8679-8822   F
    8823-8924   E
    8925-9147   D           Outer strips removed prior to run 8925
    9148-9164   C
    9165-9246   B           Early PT latch installed prior to run 9247
    9247-9335   A2          A2/A1 split arbitrary, to minimize any
    9336-END    A1          possible time dependent effects

Table 3.1: Global PT system run breaks used in analysis of the pretriggers and global triggers for the 1990 run. These breaks correspond to documented changes in trigger systems, but many of the breaks involve more changes than were documented.

3.4 THE PRETRIGGER LEVEL

The pretrigger level was designed to provide a fast LOAD signal for the rest of the experiment, minimize overall trigger deadtime, and place a narrow timing requirement on the PT signals from the LAC. The LACAMPs and the tracking system nano cards had limited "memory" times, so the trigger had to generate a LOAD decision quickly. The delays in the local discriminator cards and the trigger level logic were slightly too long to allow the trigger to be used as the LOAD for the LAC and tracking systems, so generating a LOAD signal only for the events that we wanted to write to tape was not an option. Using the clean interaction signal as the LOAD was also not practical because of the large deadtimes that would result. A 20 μs "settling" period is required after every RESET of a LOAD signal, during which the system cannot respond to any incoming signals. This would probably result in the trigger system being constantly busy and reduce the effective beam rate substantially. Although the trigger logic could not be completed quickly enough, it was possible to generate a global PT signal rapidly and use it to selectively generate LOAD signals. The pretrigger level used the half octant PT signals from the biased PT adder cards to "weed out" the bulk of the very low PT events. The pretrigger required that the event contain at least ≈ 2 GeV/c of PT, which corresponded to approximately 1% of the interactions. This reduced the overall deadtime to a reasonable level. In addition to rejecting events based on the sizes of the PT signals, the pretrigger level also imposed a timing constraint to prevent accidental overlaps between the relatively long lived global PT signals and out of time interaction signals.

3.4.1 The Zero Crossing Timing Discriminator

The zero crossing timing discriminators were chosen to allow a stable and precise timing requirement to be applied at the pretrigger level. A normal "time over threshold" discriminator generates pulses with widths corresponding directly to the time that the input voltage was above a threshold value. This means that larger LAC signals will tend to generate wider signals, which would be more likely to overlap with other interactions and generate false trigger signals. This could be dealt with by using a discriminator with a fixed output width. However, the timing of the pulse would then be determined by the time when the input signal first went over the threshold value.
This time will vary with the size of the input signal, so a very wide timing window would still have to be used. The zero crossing timing discriminators solve this problem by providing fixed width signals with stable timing for all signals that have roughly the same shape over time but differ in amplitude.

The basic operating principles of the zero crossing discriminator are shown in Figure 3.11. The input signal is split into two parts. One portion of the signal goes through without any delays or modifications. The second part of the signal is sent through an external delay line (which allows us to select a delay appropriate for the rise time of the input signal) and is then inverted. The two signals are then added together and the resulting signal is checked to see if it crossed through zero voltage. The signals are split unevenly to allow for attenuation in the external delay line. This is done to ensure that the magnitude of the second signal does not get attenuated so much that it is smaller than that of the first signal, because this could prevent the sum of the signals from passing through zero. When there is not much attenuation in the external delay line, the uneven split will tend to force the sum signal to pass through zero more quickly. Multiplying the input signal by an overall scale factor will not change the zero crossing time, since both of the signals used to produce the sum signal will be multiplied by the same factor. Thus, the zero crossing time will not change with respect to the timing of the original physics signal.

The delay times for the inverted signals were chosen to be ≈ 155 ns for the 1990 run and ≈ 105-120 ns during the 1991 runs. The shorter times were chosen for the 1991 runs to accommodate some unexplained timing shifts in the LAC signals that may have been related to the shift in the energy scale over time (see Figure 4.1). The zero crossing output signals were 100 ns wide in 1990 and 125 ns wide for 1991. The wider 1991 signals were necessary to cover the shifts seen in the timing as a function of R position in the detector. The cause of this shift is not completely understood.

Figure 3.11: Principles behind constant fraction (zero crossing) timing discriminators. a) The input PT signal (same convention as for Figure 3.9). b) A fraction of the signal is delayed and inverted. This fraction is slightly more than 1/2 of the input signal (so that slightly less than 1/2 of the signal goes through without inversion). c) The two signals are added back together. The uneven split between the signals guarantees that the signal will cross back through zero voltage. When this happens, the size of the input signal is compared with a reference voltage. If the signal is large enough, then a fixed width signal is generated. Multiplying the input signal by an arbitrarily large scale factor will have no effect on the zero crossing time and therefore no effect on the timing of the output signal.

The thresholds for the zero crossing units were chosen to keep the pretrigger dead time relatively small. This corresponded roughly to rejecting 99% of the interaction signals as uninteresting. The 1990 thresholds corresponded to π⁰ thresholds of 1.8 GeV/c on the inside and about 2.0 GeV/c for the outside.
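As a rough numerical illustration of the constant fraction principle described above (not a model of the actual circuit), the sketch below delays and inverts a fraction of a sampled pulse, adds it to the undelayed part, and locates the zero crossing. The pulse shape, sampling step, and delay are hypothetical; the point of the example is that scaling the pulse amplitude leaves the crossing time unchanged.

```python
import numpy as np

def zero_crossing_time(pulse, dt, delay, fraction=0.55):
    """Constant-fraction style timing: add an inverted, delayed copy of the
    pulse (weight `fraction`) to the prompt copy (weight 1 - fraction) and
    return the time at which the combined waveform crosses zero."""
    shift = int(round(delay / dt))
    delayed = np.concatenate([np.zeros(shift), pulse[:-shift]])
    combined = (1.0 - fraction) * pulse - fraction * delayed
    for i in range(1, len(combined)):
        if combined[i - 1] > 0.0 >= combined[i]:
            # linear interpolation between samples i-1 and i
            f = combined[i - 1] / (combined[i - 1] - combined[i])
            return (i - 1 + f) * dt
    return None

dt = 1.0                                                          # ns per sample (assumed)
t = np.arange(0, 600, dt)
pulse = np.where(t < 150, t / 150.0, np.exp(-(t - 150) / 400.0))  # toy pulse shape

for scale in (1.0, 3.0, 10.0):
    print(scale, zero_crossing_time(scale * pulse, dt, delay=155.0))
```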
3.4.2 The Veto Walls

Beam halo particles can cause large numbers of false triggers because these particles can produce showers in the LAC at large radii via bremsstrahlung. The PT signals produced by these showers will be very large because of the high weighting given to the outside, and will be difficult to distinguish online from genuine large PT signals. While the spoiler magnets, hadron shield, and neutron absorber substantially reduced the number of halo particles reaching the LAC, a further reduction was needed. During the 1988 run, two walls (veto walls 1 and 2) of plastic scintillators were installed immediately downstream of the hadron shield. Each of these walls consisted of thirty-two 20" x 20" x 3/8" counters arranged in the pattern shown in Figure 3.12. There was a 4" x 4" hole in the center, and there was a 4" offset between the quadrant boundaries for the two walls to avoid a small gap in the coverage along the quadrant boundaries of each wall. Analysis of the prescaled interaction data after the 1988 run, conducted by the author, suggested that the veto wall was being hit by particles travelling backward from interactions in the target and surrounding material. In order to minimize any biases in our data due to backscatter effects, a third set of scintillation counters (veto wall 3) was installed UPSTREAM of the hadron shield prior to the 1990 run (see Figure 3.13). It is extremely unlikely that particles scattered backwards from a collision would have enough energy to penetrate the hadron shield, so requiring a coincidence between the upstream and downstream walls minimizes the probability that backscatter from an event will cause the event to be vetoed by the muon rejection definition. In addition to adding a new wall, the logic was changed so that the coincidences were checked on a quadrant level. The quadrant veto logic that was used for the 1990 run was:

    Quad I Veto = (VW1 Quad I + VW2 Quad I) * VW3 Quad I,    where I = 1, 2, 3, or 4    (3.4)

Each of these quantities was checked for the same RF bucket, and VW# refers to the wall number (the old walls are referred to as walls 1 and 2). The "+" represents a logical OR and the "*" represents a logical AND. Using quadrant vetoes allowed us to reduce the veto wall dead time by removing accidental coincidences between veto wall hits in different quadrants and by allowing us to accept triggers in the octants not covered by a veto wall quadrant that was hit by a halo muon. To improve the efficiency of the upstream veto signal, another veto wall (veto wall 4) was installed upstream of the hadron shield prior to the 1991 run. The layout for veto wall 4 is shown in Figure 3.14. The status of each of the phototubes in each of the walls was latched in the Minnesota latches for use in the offline analysis. The quadrant veto signals for the 1991 run were defined as follows:

    Quad I Veto = (VW1 Quad I + VW2 Quad I) * (VW3 Quad I + VW4 Quad I),    where I = 1, 2, 3, or 4    (3.5)

Figure 3.12: Diagram of veto walls 1 and 2. These walls are located immediately downstream of the hadron shield and neutron absorber. The secondary beam direction is into the page.

These signals were sent up to the Faraday room, where the signals were regenerated, widened to ≈ 155 ns, and applied to the octant pretrigger hi's.
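A minimal boolean rendering of the quadrant veto definitions in Equations 3.4 and 3.5 is sketched below. It is purely illustrative; the per-quadrant hit flags are hypothetical inputs standing in for the latched phototube signals.

```python
def quad_veto_1990(vw1, vw2, vw3, quad):
    """Equation 3.4: (VW1 + VW2) AND VW3, evaluated per quadrant.
    vw1, vw2, vw3 are dicts mapping quadrant number -> bool (in-time hit)."""
    return (vw1[quad] or vw2[quad]) and vw3[quad]

def quad_veto_1991(vw1, vw2, vw3, vw4, quad):
    """Equation 3.5: (VW1 + VW2) AND (VW3 + VW4), evaluated per quadrant."""
    return (vw1[quad] or vw2[quad]) and (vw3[quad] or vw4[quad])

# Example: a halo muon traverses quadrant 2 of an upstream and a downstream wall.
vw1 = {1: False, 2: True,  3: False, 4: False}
vw2 = {1: False, 2: False, 3: False, 4: False}
vw3 = {1: False, 2: True,  3: False, 4: False}
print([q for q in (1, 2, 3, 4) if quad_veto_1990(vw1, vw2, vw3, q)])  # -> [2]
```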
3.4.3 The Early PT System

The early PT system was included in the pretrigger definition to prevent the experiment from triggering on "pile-up" signals in the LAC. If an interaction produces an electromagnetic signal in a given octant, it takes approximately one microsecond for the signal to decay away. If a subsequent shower is deposited in the same octant during this decay time, the new signal will be added to the remainder of the previous signal, creating a trigger signal that corresponds to a much larger physics PT signal. In addition, the ≈ 800 ns strip energies will also be affected by this background signal. Depending on the relative timing, the image charge signal from the preceding event may cause a large pedestal shift in the "current" event, making it more difficult to reconstruct the event with the needed accuracy. To prevent this kind of event from firing the trigger, the global PT signals were discriminated, delayed, and used as vetoes on the pretrigger hi signals. The delay of about 350 ns was necessary to prevent these signals from vetoing the events that generated them. The threshold and type of the early PT discriminator were changed several times during the 1990 and 1991 runs to maximize the rejection of early depositions while minimizing the overall trigger deadtime.

Figure 3.13: Diagram of veto wall 3. This wall was located immediately upstream of the hadron shield. The incident beam direction is into the page.

Figure 3.14: Diagram of veto wall 4. This wall was installed upstream of the hadron shield prior to the 1991 run. The incident beam direction is into the page.

3.4.4 The Pretrigger Logic

The programmable logic units used in the pretrigger system are strobed by the Gated Interaction signal, which is just the Clean Interaction signal generated by the scintillator logic in the latch house, gated by a signal indicating that the trigger decision loop is not already involved in making a decision. The pretrigger timing units latch the status of the octant pretrigger hi's, which are defined by:

    {Inner 0-X Hi_I + Outer 0-X Hi_I} * Gated_Int * (NOT Octant I Early PT) * (NOT Quadrant J Veto Wall)    (3.6)
    (where quadrant J contains octant I)

I have used 0-X as the abbreviation for the signals from the zero crossing discriminators. Note that the logical ANDs (denoted by *) and ORs (denoted by +) all involve timing requirements. The OR of the octant pretrigger hi's for octants 1 through 4 is sent to the final pretrigger logic unit as the LacUpTiming signal. Similarly, the OR of the octant pretrigger hi signals for octants 5 through 8 is sent as the LacDnTiming signal. The individual pretrigger hi signals are sent to the pretrigger hi STORE unit, which is strobed by the final pretrigger logic unit's output. The signals from the pretrigger store are the signals used in defining the octant trigger definitions for the single octant triggers, since they already contain the veto wall and early PT vetoes. Unfortunately, the early PT and veto wall signals are not latched for the readout system, which makes the offline analysis of the pretriggers more difficult. The final pretrigger logic unit defines the following output pretrigger signals:

    LACPretrigger = {LacUpTiming + LacDnTiming + (TwoGammaPretrigger * (NOT VWOR) * (NOT SCR))}
                    + {PrescaledBeamTrigger + PrescaledInteractionTrigger}
                  = LACPRE1 = LACPRE2 = LACPRE4    (3.7)

This signal was used to strobe the trigger logic units for the single octant triggers. The Veto Wall OR (VWOR) requirement, which was simply the OR of the four quadrant signals, was removed during the 1991 run when the veto wall and early PT signals were installed in the two gamma trigger logic (see the next section).
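The sketch below restates the octant pretrigger and LACPRE definitions (Equations 3.6 and 3.7) as boolean functions, which may make the veto structure easier to follow. All inputs are hypothetical per-event flags, and the timing coincidences that the real logic enforces are of course absent.

```python
def octant_pretrigger_hi(zc_inner_hi, zc_outer_hi, gated_int,
                         early_pt, quad_veto_wall):
    """Equation 3.6: zero-crossing hi in either half of the octant, in
    coincidence with the gated interaction, vetoed by early PT and by the
    veto wall signal of the quadrant containing the octant."""
    return ((zc_inner_hi or zc_outer_hi) and gated_int
            and not early_pt and not quad_veto_wall)

def lac_pretrigger(octant_hi, two_gamma_pre, vwor, scr,
                   prescaled_beam, prescaled_int):
    """Equation 3.7: LACPRE1 = LACPRE2 = LACPRE4.
    octant_hi is a list of the eight octant pretrigger hi flags."""
    lac_up = any(octant_hi[0:4])      # octants 1-4
    lac_dn = any(octant_hi[4:8])      # octants 5-8
    return (lac_up or lac_dn
            or (two_gamma_pre and not vwor and not scr)
            or prescaled_beam or prescaled_int)
```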
The pretrigger signal used to strobe the two gamma trigger was:

    LACPRE3 = LACPRE1 + {E672Pretrigger * (NOT SCR) * (NOT VWOR)}    (3.8)

This definition was chosen to allow overlap between our two gamma trigger and the E672 dimuon trigger by adding the pretrigger signal for the E672 dimuon system to the strobe signal for the two gamma trigger. The pretrigger unit also produced signals indicating whether or not any of the pretrigger definitions had been satisfied:

    PRETRIGGER_OR = LACPRE3 + E672Pretrigger    (3.9)
    NO_PRETRIGGER = NOT (PRETRIGGER_OR)    (3.10)

If the event did not satisfy any of the pretrigger signals, then the NO_PRETRIGGER signal cleared all of the pretrigger units in preparation for the next gated interaction signal.

3.4.5 The Two Gamma Pretrigger

In order to keep the mass threshold for the two gamma trigger as low as possible, the two gamma pretrigger used a separate set of zero crossing discriminators with lower thresholds (the zero cross lo discriminators). The zero cross lo logic was the same as the zero cross hi logic, with an OR between the inner and outer zero crossing units being produced for each octant. However, these ORed signals were required to satisfy a two octant requirement before a two gamma pretrigger was generated. Since we wanted to trigger on parton level diphoton events, the second octant was required to be directly opposite the first octant or one of the neighboring octants adjacent to the directly opposite octant (see Figure 3.15). There were 12 unique combinations of 2 octants that could satisfy this requirement and fire the two gamma pretrigger. The early PT and beam halo vetoes were never installed in the two gamma pretrigger, but the two octant requirement reduced the rates to a small enough level that it was generally not a major deadtime component. The ORed zero cross lo signals were also daisy chained into a delay unit that fed the two gamma pretrigger STORE unit (which was strobed by LACPRE3). The two gamma pretrigger store did the same thing that the (regular) pretrigger STORE did, which was to hold the octant pretrigger signals until the trigger level units were ready to make their decisions. Because of the beam structure problems that occurred during the 1991 run, the early PT and veto wall signals were installed in the zero cross lo STORE unit to minimize the number of two gamma triggers generated by "pile up" events. The timing requirements of the pretrigger level prevented us from installing these vetoes in the pretrigger level unit. The installation of the vetoes in the two gamma logic was accompanied by the installation of scalers to measure the dead fraction for each of the 12 pairs of octants that can fire the trigger.

Figure 3.15: Topology of the opposite octant requirement for the two gamma trigger logic. The octant in the upper left can combine with any of the three octants in the lower right to form a pair that satisfies the two gamma logic. There are 12 unique pairs of octants that satisfy the two gamma logic (3 of those pairs are shown in the diagram).
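A small sketch of the opposite-octant pairing rule may help: if the octants are numbered sequentially around the beam axis, the partner of a given octant must be the directly opposite octant or one of its two neighbours, which yields the 12 unique pairs mentioned above. This enumeration illustrates the stated rule under that numbering assumption; it is not a transcription of the hardware cabling map.

```python
def two_gamma_pairs(n_octants=8):
    """Enumerate unique octant pairs satisfying the opposite-octant rule:
    the second octant is opposite the first, or adjacent to the opposite one."""
    pairs = set()
    for i in range(1, n_octants + 1):
        opposite = (i - 1 + n_octants // 2) % n_octants + 1
        for j in (opposite - 1, opposite, opposite + 1):
            j = (j - 1) % n_octants + 1          # wrap around the azimuth
            pairs.add(tuple(sorted((i, j))))
    return sorted(pairs)

pairs = two_gamma_pairs()
print(len(pairs), pairs)   # 12 pairs, e.g. (1, 4), (1, 5), (1, 6), ...
```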
3.5 THE TRIGGER LEVEL

Once the pretrigger level had screened out most of the uninteresting events, the trigger level was strobed by the pretrigger signals and a final decision was made regarding the event. Approximately 5% of the pretriggers satisfied the trigger level. The trigger level was actually split into two parts, the trigger ORing level and the READOUT level. The trigger ORing level determined whether or not any of the E706 triggers had been satisfied. If one of the E706 triggers was satisfied, or if the E672 pretrigger had fired, then the TRIGGER_OR signal was sent to a gate generator and eventually to the readout unit. The gate generator provided a delay of approximately 100 μs in the signal so that the E672 dimuon trigger processor (DTP) would have time to complete its decision making process. If the E672 DTP fired or there was an E706 trigger in the event, a readout/shared interrupt signal was sent to the readout systems and the event was written to tape.

The thresholds and types of triggers that were used were chosen to provide useful numbers of events across the entire accessible PT range, useful overlap between the triggers to allow measurement of the trigger efficiencies, and to emphasize the different types of physics that we wanted to study. Although several different trigger types were run with similar thresholds, most of the events that satisfied one trigger satisfied the other triggers that had the same threshold (e.g. most of the single local hi triggers also satisfied the global hi and 1/2 global hi triggers). The primary differences between the various trigger topologies occurred for events near the trigger thresholds. For events that were far above the thresholds, the overlap between the triggers approached 100%. Because the particle distributions are steeply falling with PT, most (≈ 90%) of the data accumulated with a given trigger threshold will fall within about 1 GeV of the trigger threshold. In order to provide coverage over the full PT range without reducing the effective integrated luminosity for high PT events by having outrageous deadtime fractions, it was necessary to run lower threshold triggers with "prescaled" strobes (see below for more details). By combining different selection topologies and multiple thresholds, we were able to select a large fraction of the physics signals that can be measured by the LAC.

3.5.1 The Local Triggers

The local triggers, which use the local discriminators for their primary threshold requirements, were designed specifically to look at direct photons. They also provide good acceptance for π⁰s. The showers from direct photons and from high energy π⁰s tend to be confined to relatively small regions, so most of their signals will fall within the 8 cm (16 strips) covered by a given "local." This allows the local triggers to fire on only the signal produced by the leading particle, instead of also including the signal produced by the rest of the octant, which can vary greatly depending on the amount of noise (or image charge signal) in the rest of the octant and the topology of the event (i.e. the other particles in the jet for the π⁰ triggers). By triggering on a signal that depends almost exclusively on the leading particle,
If this was not done, then a shower that fell on the boundary between two locals would contribute only half of its signal to a given local and the effective threshold for firing the local would be twice as high. The overlap ensures that photons hitting the edge of one local will be centered on the next local, so that the variations in the threshold will be minimized. The definition of the Single Local Hi trigger is as follows: SLHI TRIGGER = LACPRE4 * 8 {Z Pretrigger Hi1 at Local Discriminator Hi1} (3.11) 1:1 The pretrigger hi signals come from the PRETRIGGER STORE unit and contain the zero-crossing timing requirement, the early PT veto, and the beam halo veto, but not the SCR veto. The SCR veto is applied to the LACPRE4 signal unless the event also fired the prescaled beam or interaction triggers. The E symbol denotes a logical OR over the octants. Starting with run 9183, a prescaled single local 10 trigger was added to provide a lower threshold trigger that used the localized PT definition. The definition of this trigger was: 91 Overlapping "sum-0H6" signals used for local tn'gger wcissliigglzmgxffi" .. .. .. .. .. .. .......... p meodules Simm‘ A groups of 8 strips (256 strips in all) Figure 3.16: Simplified block diagram of the “local” formation from the R strip signals. 92 SLLO TRIGGER = Prescaled LACPRE2 * s {2 Pretrigger Hi1 * Local Discriminator L01} (3.12) 1:1 The strobe for this trigger was prescaled so that the trigger would contribute 10% to 20% of the overall triggers. The thresholds used for the local 10 discriminators are shown in Table 3.2. While the local triggers were good for selecting direct photons and high energy 1r°s, they were not as effective in selecting 175, ws, and some lower energy 1r°s. The average separation between the photons produced by these particles is generally larger, so that the two photons will often not be contained by a single group of 16 strips. This problem (along with a definition of the angle a) are shown in Figure 3.17. If a is close to zero, then the two photons will deposit most of their signals within the same group of 16 strips. However, if a is close to 90°, then the amount of signal deposited in a single group of 16 strips will depend strongly on the separation distance between the two showers. For the relatively massive 17 and to particles, as well as lower energy 1r°s, the separation distances are often larger than the 8 cm covered by a local, so that the signal from each shower will be discriminated separately. This makes the physics threshold for the as and cos a strong function of the energy asymmetry between the two showers. If one of the photons contains almost all of the energy of the original parent, then the physics threshold for the parent particle will be about the same as that for direct photons. However, if the photons from an 1] share the parent’s energy equally, then the effective physics 93 1990 LOCAL DISCRIMINATOR THRESHOLDS RUNS HI Threshold LO Threshold 7296-7593 120 60 7594-8054 125 63 8055-8239 135 70 8240-8628 140 “” 8629-8988 148 “” 8989-9180 “ ” 75 9181-END “ ” 110 Table 3.2: Local discriminator DAC settings (in units of “DAC counts”) for the 1990 run. Note that the single local 10 trigger was installed prior to run 9183 (the logic was not installed for runs 9181 and 9182). threshold for triggering on the 17 can be twice as large as for single photons (since one or the other of the two photons must satisfy the single photon threshold). 
3.5.2 The Global Triggers

The global triggers were designed to trigger on π⁰, η, and ω particles that contribute to the background for our direct photon measurements, without having the inherent limitations of the local triggers. The global triggers used the total PT signal for an octant as the basis for event selection, with an additional local lo requirement to ensure that the event contained at least one significant photon and to avoid pure noise events. The global signal was obtained by taking the analog sum of the signals produced by the inner and outer biased PT adder cards for an octant. During the 1988 run, the analog fan in/fan out (FIFO) units that performed this addition failed to function properly because their unipolar design was unable to handle the "wrong sign" voltages from the image charge signals. The problem was solved prior to the 1990 run by replacing these units with bipolar units. The "total global PT" signal was then discriminated by a LeCroy 4416 unit and the time over threshold output was sent to the global trigger logic units. Because of the problems encountered during the 1988 run, "1/2 global" triggers were also installed to study the effects of image charge signals on the trigger performance, and as an "insurance policy" against further image charge problems in the global triggers. The 1/2 global signals would be expected to have fewer problems because of the division of image signals between the two halves of the octant. To produce the 1/2 global signal, the inner and outer PT signals were discriminated separately and the logic outputs from the inner and outer discriminators were ORed. No serious differences between the global and 1/2 global triggers were seen during the 1990 and 1991 runs.

Figure 3.17: Diagram of showers from a π⁰ or η decay. The dashed line represents a line of constant φ in the calorimeter. The solid curved lines represent the boundaries between groups of 8 R view strips. Note that the showers shown would not be contained by the same "local".

The other important change in the global triggers prior to the 1990 run was the installation of the biased PT adder cards. Although the biased PT adder cards worked well, the DAC setting used for the 1990 run corresponded to a trigger PT of about 300 MeV. Since an average shower usually covered about 3 groups (recall that front and back groups were handled separately), there could be significant differences (potentially as much as 0.5-1.0 GeV trigger PT) between the signals produced by two showers falling on the same R groups and two showers falling on separated R groups. While the effect is not as severe as similar problems in the local triggers, this produced a much stronger bias than was desired. Prior to the 1991 run, the biased PT adder DAC setting was changed to try to reduce the cutoff size.
The Local Global Hi trigger was defined as follows:

    GLHI TRIGGER = LACPRE1 * Σ_{I=1}^{8} {Pretrigger Hi_I * Local Discriminator Lo_I * Global Discriminator Hi_I}    (3.13)

The Local Global Lo trigger definition was almost the same:

    GLLO TRIGGER = Prescaled LACPRE2 * Σ_{I=1}^{8} {Pretrigger Hi_I * Local Discriminator Lo_I * Global Discriminator Lo_I}    (3.14)

The thresholds used for the global discriminators are shown in Table 3.3.

1990 GLOBAL DISCRIMINATOR THRESHOLDS

    RUNS        HI Threshold    LO Threshold
    7296-7687   450 mV          350 mV
    7688-8054   465 mV          "
    8055-8266   500 mV          "
    8267-8629   530 mV          "
    8630-END    590 mV          "

Table 3.3: Voltage thresholds used for the global lo and hi discriminators for the 1990 run.

The 1/2 Global Hi trigger was defined as follows:

    1/2 GLHI TRIGGER = LACPRE1 * Σ_{I=1}^{8} {Pretrigger Hi_I * Local Discriminator Lo_I * 1/2 Global Discriminator Hi_I}    (3.15)

The 1/2 Global Lo trigger was defined as follows:

    1/2 GLLO TRIGGER = Prescaled LACPRE2 * Σ_{I=1}^{8} {Pretrigger Hi_I * Local Discriminator Lo_I * 1/2 Global Discriminator Lo_I}    (3.16)

The thresholds used for the 1/2 global discriminators are shown in Table 3.4. The 1/2 global lo trigger was converted into the single local lo trigger late in the 1990 run. The strobes to the global lo and 1/2 global lo triggers were prescaled by factors ranging from 10 to 40 in order to avoid taking an overwhelming number of events and saturating the DA system.

1990 1/2 GLOBAL DISCRIMINATOR THRESHOLDS

    RUNS        HI Threshold    LO Threshold
    7297-7687   440 mV          340 mV
    7688-8054   450 mV          "
    8055-8266   475 mV          "
    8267-8629   495 mV          "
    8630-8988   550 mV          "
    8989-9182   575 mV          "
    9183-END    "               N/A

Table 3.4: Voltage thresholds used for the 1/2 global lo and hi discriminators for the 1990 run. The 1/2 global lo trigger was removed prior to run 9183.

3.5.3 The Two Gamma Trigger

The two gamma trigger used the same two octant requirement as the two gamma pretrigger (see Figure 3.15), but instead of requiring a zero cross lo OR signal in two octants, the trigger level required a coincidence between the zero cross lo OR and the local discriminator lo signals in each of the octants. Thus, the two gamma trigger definition was:

    TWO GAMMA TRIGGER = LACPRE3 * Σ_{I,J} {[Inner Zero Cross Lo_I + Outer Zero Cross Lo_I] * Local Lo Discriminator_I}
                        * {[Inner Zero Cross Lo_J + Outer Zero Cross Lo_J] * Local Lo Discriminator_J}    (3.17)
    (where I, J run over the 12 octant combinations that satisfy the two gamma opposite octant definition)

After run 13599, the {Inner Zero Cross Lo_I + Outer Zero Cross Lo_I} signals were ANDed with vetoes on the {Octant_I Early PT + Quadrant_K Veto Wall} signals (where K is the quadrant containing octant I) to provide early PT protection for the two gamma triggers.

3.5.4 The Prescaled Triggers

In addition to the global and local lo triggers, there were three other prescaled triggers. These triggers were chosen to provide events at each of the early stages of the trigger decision process, allowing us to monitor the performance of subsequent trigger levels and also measure the low PT meson cross sections. The prescaled beam trigger was our "minimum bias" trigger for the obvious reason that it placed the fewest constraints on an event. During the 1990 run, the definition of this trigger included the beam hole veto. This is unfortunate, because this trigger could have been used to map out the target region more effectively if the beam hole veto had not been included. The hole counter veto was removed from this trigger for the 1991 run. In order to generate the prescaled beam events, the BM signal was run through a sequence of 6 prescaling channels to obtain prescaling factors ranging from approximately 10⁵ to 10⁶.
The signals that survived the prescaling then had to satisfy an early/late filter requirement that was equivalent to the early/late requirements for the higher threshold triggers, but was generated by a separate set of units. During the 1991 run the late filter requirement was removed for the prescaled beam and interaction triggers and the early filter was reduced to two buckets to minimize any possible biases in these triggers. Events that satisfied these requirements and arrived when the computers and pretrigger were ready were written out to tape and constituted about 1% of our data.

The prescaled interaction trigger was very much like the prescaled beam trigger, except that the BM signal was replaced with INT. The beam hole veto was also part of this trigger for 1990, but not for 1991. The prescaled interaction events gave us an enhanced sample of interactions compared to the prescaled beam sample, but still avoided any LAC PT requirements, allowing us to use these events to study the behavior of the LAC based triggers. These events also provided a higher statistics sample of events for studying low PT events (especially π⁰s). Prescaling factors ranging from approximately 10⁵ to 2 × 10⁵ were used so that this data sample would constitute ≈ 2% of our overall data set.

The prescaled pretrigger was created by prescaling the PRETRIGGER_OR signal from the pretrigger logic unit (a LeCroy 4508). This gave us a sample of pretrigger hi signals, two gamma pretriggers, and E672 pretriggers to use for measuring the intermediate PT range cross sections and for studying the higher threshold triggers. These events can be used to study the full triggers and to fill in the PT region between the prescaled interaction events and the events selected by the full triggers. They constituted 1% to 5% of our data, depending on the run region.

3.6 THE DATA ACQUISITION SYSTEM

The Shared_Interrupt signal from the trigger system was used to initiate the readout procedures for each of the four data acquisition systems (see Figure 3.18). Three of these systems were DEC PDPs equipped with Jorway CAMAC interfaces. The fourth system was a large FASTBUS system which buffered many of the events in memory for readout between spills. Each of these systems operated independently and could be removed from the overall data acquisition system during tests or repairs. The data from each of these four sources was sent to a VAX host system where the data packets from each system were concatenated together to form complete events. These events were then written out to 8 mm magnetic tapes. The average readout time for an event was eight to nine milliseconds. When all of the readout systems were finished reading out an event, the computer ready signal was re-established and the trigger system would become live again.

In addition to reading out the events selected by the trigger, the Fastbus system also monitored the stability of the LAC systems. During the times between beam spills, the system would compare the pedestal offsets for each of the LACAMP readout channels with a reference set of pedestal values and flag any channels that had large pedestal changes. New pedestal reference sets were generally measured about once every 8 hours to minimize resolution problems due to drifting pedestals.
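A between-spill pedestal check of the kind just described can be sketched as follows. This is a schematic illustration only; the channel names, reference values, and tolerance are hypothetical.

```python
def flag_pedestal_drifts(current, reference, tolerance=5.0):
    """Compare the current pedestal of each LACAMP channel with its reference
    value and return the channels whose shift exceeds `tolerance` ADC counts."""
    return {ch: current[ch] - reference[ch]
            for ch in reference
            if abs(current[ch] - reference[ch]) > tolerance}

reference = {"oct1_r_front_042": 412.0, "oct1_r_front_043": 405.5}
current   = {"oct1_r_front_042": 413.2, "oct1_r_front_043": 393.0}
print(flag_pedestal_drifts(current, reference))   # channel 043 flagged (-12.5)
```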
In addition, the Fastbus "between spill tasks" checked the LACAMP gain values and monitored the timing signals used for the strip energy measurements and LACAMP calibration measurements. These tasks also monitored LACAMP crate voltages and temperatures. The information provided by the between spill tasks allowed us to quickly respond to changes and failures in the LAC electronics and maintain a high level of integrity in the data.

Figure 3.18: Block diagram of the data acquisition system.

Chapter 4

EVENT RECONSTRUCTION

4.1 OVERVIEW

The computing resources available when the experiment was being designed and set up were not sufficient to provide online reconstruction of the E706 data. This analysis was carried out primarily on the Fermilab UNIX "farm" systems after the data taking runs had been completed. The farms used for reconstructing the real data events consisted of groups of about ten Silicon Graphics (SGI) RISC systems. Generation of the Monte Carlo data was carried out on both IBM RISC machines and the SGI systems, while reconstruction was limited to the SGI machines.

The framework for the event reconstruction was provided by the MAGIC program written by G. Alverson [29]. MAGIC contained routines for handling input and output of raw, unpacked, and fully reconstructed data as well as switches for selecting which reconstruction subroutines to run. The ZEBRA memory management system [30] was used to provide flexible and efficient usage of memory and to allow the data structures to be written out in a machine independent format. The PATCHY code management system [31] simplified coordination among the many authors working on MAGIC subroutines and provided greater flexibility. The Cernlib programming libraries were used extensively throughout MAGIC.

In addition to the I/O routines, MAGIC contained several "unpackers" as well as the "hooks" for calling the associated reconstruction routines. Each of the major subsystems of the experiment had its own unpacker and reconstructor. The unpacking routines rearranged the raw data that was written out by the DA system into ZEBRA banks which reflected the physical structure of the systems involved. The unpackers also applied preliminary calibration and alignment corrections. For example, EMUNP took the information from the LACAMPs, which were read out by crate, and produced banks of energy values ordered by increasing values of R or φ, depending on the bank. EMUNP then applied several energy calibration corrections to each of these strip energies. Each of the unpackers could be run independently, which reduced the overall CPU time needed during the calibration phase. Similarly, the reconstructors could also be activated or deactivated by changing the logical values of the appropriate "cards" in the user control file. The major features of the reconstructors used in the photon analysis will be discussed below. The information from the forward calorimeter and hadron calorimeter was not used in this analysis and will not be discussed.

4.2 THE DISCRETE LOGIC ROUTINES (DLUNP AND DLREC)

The discrete logic unpacker (DLUNP) loaded the raw data from the NEU CAMAC system into four banks.
The banks contained the data read out from the input latches of the programmable logic units, the global PT ADCs, the ring memory latches (known as the Minnesota latches), and the 32 bit Nanometrics N278 latches used to latch the hits in the individual groups of 16 channels in the local cards. The unpacker also determined which of the expected CAMAC transfers were not present or had the wrong number of transfer words and passed this information on to DLREC.

The discrete logic reconstructor (DLREC) provided two main sets of information based on the data read out via the NEU CAMAC system from the trigger logic units and Minnesota latches. The first block of information consisted of four integer words of quality information. DLREC checked the internal consistency of the information read out from the trigger logic units and set bits in each of these words to flag inconsistencies on an event basis. The quality words also contained brief summaries of the trigger information, veto wall status, Cerenkov detector information, and CAMAC readout failures. In addition to providing a convenient summary of the information during analysis, the information in these words was used extensively during data acquisition to monitor the trigger logic units. This allowed us to detect and repair bad units, timing problems, and cabling errors on a short time scale, so that we could maximize the amount of usable data. The second block of information produced by DLREC was the DL summary bank. This bank contained approximately 40 integer words with information coded by accessing the bits in each word directly. The bank contained the status of each of the triggers for each octant as well as the status of each of the trigger discriminators to be used in the trigger efficiency measurements. The bank also contained a summary of the information from the Minnesota latches.

The Minnesota latches were variants of the modules used in the forward calorimeter readout. Each one latched the logical status of 16 input channels for each accelerator RF bucket. The ring memories in the units could store 256 RF buckets of information, which allowed enough time for the pretrigger decision to arrive before the in-time memory locations were overwritten by information from subsequent buckets. Fifteen RF buckets of information roughly centered on the trigger time were read out from each latch. The signals from each of the phototubes in the beam hodoscope, interaction counters, veto walls, and Cerenkov rings were fed into separate channels of these latches to provide detailed information on the trigger particles. The "out-of-time" RF buckets that were not used in forming the trigger decisions also provided unbiased information on the incident beam.

The information from the Minnesota latches required approximately 180 integer words of memory per event. In order to reduce the overall data sample to a manageable size, the data from the beam hodoscope and the veto walls was significantly compressed. The data from the interaction counters and Cerenkov rings was saved in an uncompressed form to allow more detailed analysis after the event reconstruction pass.

4.3 THE ELECTROMAGNETIC CALORIMETER ROUTINES (EMUNP AND EMREC)

4.3.1 EMUNP

The readout of the electromagnetic calorimeter was designed to minimize the overall readout time (and hence the resulting readout dead time).
Since the ordering of the channels in the crate did not always reflect the physical ordering of the strips in the R and φ views, the first job of EMUNP was to rearrange the data into banks that reflected the physical geometry of the calorimeter. EMUNP created separate unpacker banks for each of the four quadrants with daughter banks for each of the four views (Left R, Right R, Inner φ, Outer φ) and each of the three sections (Front, Back, Sum = Front + Back). Within each of these daughter banks, the strip energies were ordered in increasing R or φ values.

After rearranging the raw data, EMUNP subtracted the individual pedestal values from each of the strip ADC values. Pedestal measurements were taken at roughly eight hour intervals during the run. However, average deviations of a few ADC counts were found when looking at the data offline. These deviations may reflect the differences in the sampling rates between the real data and the calibration tasks, or shifts in the pedestals for the real data due to "pile up." Although these residual pedestal variations were only a few ADC counts on average, they were big enough to cause significant variations in the reconstructed energies and therefore were removed. The final channel-to-channel pedestal values were measured by averaging the values seen for each of the channels in prescaled beam events. Several cuts were made on the prescaled beam data to be sure that the strip energies used in the averages did not include signals from showers:

• Eliminate events that had the SCR bit set.

• Eliminate events in which 2 or more interaction counters fired in any of the 3 buckets before or after the in-time bucket.

• Eliminate channels that belonged to a "group" (see Section 4.3.3) or were less than 10 strips away from a group.

• Eliminate the LAC quadrant if the upstream veto wall fired in coincidence with the "OR" of the two downstream veto walls during any of the 15 accessible buckets for that quadrant.

These cuts ensured that no photons or large ramps were included in the averages. If these cuts were satisfied and a strip energy fell within a window of ±150 ADC counts, then the strip energy value was used in determining the average pedestal for that strip. This large window was needed to cover the full range of real pedestal values.

EMUNP then applied several multipliers to the pedestal subtracted ADC values. The first factor was a strip-wise gain factor that removed channel-to-channel variations in the response of each channel to a given energy input. Each of the ADC values was then multiplied by a global conversion factor that mapped the ADC values into values that roughly corresponded to physics energies. An additional gain factor was used to compensate for the observed time dependence of the LAC signal size (see Figure 4.1). The changes in the energy scale only occurred during the periods when the experiment was receiving beam, so Figure 4.1 shows the variations as a function of the number of "beam days", with the days without beam removed. The cause of this effect is not well understood, although a similar effect was seen in Fermilab experiment E629. These gain factors did not provide the final energy scale calibration. However, it was necessary to remove these variations to ensure that all of the data sets would be reconstructed with approximately the same reconstruction thresholds.
Figure 4.1: Time dependence of the EMLAC energy scale as a function of "beam days" for the 1990 and 1991 runs (dark circles, the π⁰ mass ratio). The open triangles indicate the ratios measured using the 50 GeV/c electron calibration beam data.

4.3.2 FREDPED

The FREDPED routine was called by EMREC before shower reconstruction began to remove global shifts in the EMLAC pedestals on an event basis. These effects seemed to be correlated with intensity and beam structure problems (enhancements in the fraction of RF buckets containing high numbers of beam particles) and created large "ramps" in the R view strips and "steps" in the φ view strips (see Figure 4.2). These problems may have been caused by the same problem that caused the image charge ramps. These shifts could significantly change the reconstructed energy of a photon and could result in the creation of fake signals or the omission of real shower signals in the group finding process, so they had to be corrected before the showers were reconstructed. FREDPED identified the groups of strips that contained showers and removed these from the pedestal corrections. The remaining strips were used to fit sloping lines to each of the R views and each half of the φ views (since the ramps and steps seemed to be octant based instead of quadrant based). These pedestal shapes were then subtracted from the strip energies. The total pedestal correction from the inner R strips was used to correct the inner φ strip pedestals because there were usually not enough inner φ view strips to obtain an accurate determination of the pedestal shifts (studies of events where the inner φ shift could be measured showed good correlation with the inner R shifts). Figure 4.3 shows the ramp and step event from Figure 4.2 after the FREDPED correction has been applied.

Figure 4.2: Event in quadrant 1 showing a ramp in the left R view and a corresponding step in the outer φ view.

Figure 4.3: The "ramp and step" event from Figure 4.2 after the global pedestal shifts have been removed by FREDPED.
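A schematic version of this correction might look like the sketch below: strips flagged as belonging to shower groups are excluded, a straight line is fitted to the remaining strip energies, and the fitted shape is subtracted. The strip energies and the group mask are invented for the example.

```python
import numpy as np

def subtract_ramp(strip_energies, in_group):
    """Fit a sloping line to the strips not associated with any shower group
    and subtract the fitted pedestal shape from every strip in the view."""
    e = np.asarray(strip_energies, dtype=float)
    x = np.arange(len(e))
    mask = ~np.asarray(in_group, dtype=bool)
    slope, offset = np.polyfit(x[mask], e[mask], 1)   # linear pedestal shape
    return e - (slope * x + offset)

# Toy view: a linear "ramp" plus one shower spanning strips 120-124.
x = np.arange(256)
view = 0.004 * x - 0.3
view[120:125] += [1.0, 4.0, 8.0, 3.5, 0.8]
in_group = (x >= 118) & (x <= 127)        # strips excluded from the fit
corrected = subtract_ramp(view, in_group)
print(round(corrected[200], 3), round(corrected[122], 3))  # ~0 away from the shower
```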
4.3.3 Group and Peak Finding

After the pedestal subtraction had been completed, EMREC searched for sets of 3 or more channels (2 for the outer φ view because of the larger strip size) in which all of the strips contained more than 80 MeV (95 MeV for the outer φ view). If the set of strips contained more than 600 MeV in total and at least one of the strips was above 300 MeV (350 MeV for the outer φ view), then the set of strips was defined to be a "group." Not all of these requirements applied to groups located at the detector boundaries; for example, a group at the inner R boundary could be as small as one strip. These exceptions allowed the energy deposited at the boundary of one view (e.g. the R view) to be properly correlated with the corresponding signal in the other view (e.g. the φ view).

After the groups had been located, the locations of the "peaks" within the groups were determined by looking for local maxima in the sum view strip energies. If the height of a peak relative to the surrounding local minima (a minimum could simply be the edge strip of the group) was consistent with being an energy fluctuation at the 2.5σ level (where σ is the EMLAC resolution defined in Equation 4.7), then the peak was not considered significant. If a peak was considered significant, the process was repeated using the corresponding front view strips to see whether the peak was produced by two overlapping showers that coalesced in the sum view (see Figure 4.4). If additional maxima were found in the front view, a significance check was performed and the information for the surviving peaks was saved. If an original sum view peak corresponded to more than one front view peak, the original peak was divided and the sum view energy was split according to the relative energy fractions seen in the front view. A similar check was then performed on the back view strips.

If a group contained only one peak and that peak had an energy sum greater than 25 GeV, a special check was made for "shoulders." The showers from a highly asymmetric decay of a high energy π⁰ (or other neutral particle) can overlap each other and appear as a single peak. The lower energy shower can be detected by looking for a change in the sign of the logarithmic derivative of the strip energies. If such a change was found, the peak significance criteria were used to determine whether or not the shoulder was a significant peak.

Once the boundaries of the peaks had been located, preliminary estimates of the energies and positions of the showers were made. The energy in each peak was estimated by summing the energies of the strips lying between the minima surrounding the peak. These energy sums were performed for the sum view and for the front view, using the peak boundaries determined from the front view. The location of the maximum strip in the group was used as the initial estimate of the shower location. This position was then improved by taking a weighted difference between the strip energies on either side of the maximum strip: the peak position was shifted away from the center of the maximum strip by 1.7 × (E1 − E2)/(E1 + E2) strip widths, where E1 and E2 are the energies of the two neighboring strips. If no peak was found in one of the sections, then zero flags were used for the peak information for that section.
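The group-finding thresholds quoted above translate directly into a simple scan over contiguous strips. The following Python sketch illustrates the logic with the R-view numbers as defaults; it is not the EMREC implementation, and the handling of boundary groups is omitted.

```python
def find_groups(strip_energy, min_strips=3, strip_min=0.080,
                group_min=0.600, peak_min=0.300):
    """Locate "groups" of contiguous strips using the criteria described above
    (energies in GeV).  The defaults are the R / inner-phi values; the outer-phi
    view used 2 strips and 0.095 / 0.350 GeV thresholds."""
    groups, run = [], []
    for i, e in enumerate(list(strip_energy) + [0.0]):   # sentinel closes a trailing run
        if e > strip_min:
            run.append(i)
            continue
        if (len(run) >= min_strips
                and sum(strip_energy[j] for j in run) > group_min
                and max(strip_energy[j] for j in run) > peak_min):
            groups.append(run)
        run = []
    return groups
```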
Figure 4.4: Separation of showers using the front/back segmentation of the LAC. The showers are narrower in the front section than in the sum section, so showers that coalesce in the sum section can often be separated in the front view.

After the peak position and energy information had been determined, the values of two more variables were computed. For each of the peaks, the energy sums from the front and sum sections were used to define the ratio of these energies:

E_{front}/E_{sum} = E_f/E_s = (front section energy)/(sum section energy)   (4.1)

This ratio provided an estimate of the longitudinal development of the shower that could be used to differentiate showers from photons or electrons, which deposited about 2/3 of their energy in the front of the calorimeter on average, from showers initiated by hadrons or muon bremsstrahlung, which tended to deposit more energy in the back of the calorimeter. The locations of the peaks in the front and back sections of the R views were also used to define the "directionality" of a shower:

dir = R_{front} - (Z_{front}/Z_{back}) \times R_{back}   (4.2)

where the R's refer to the peak positions in the front and back views and the Z's are the locations along the beam axis of the first layers of the front and back sections of the calorimeter. For showers originating from the target area, the directionality should be approximately zero. However, showers initiated by halo particles moving parallel to the beamline will have roughly the same R positions in the front and back sections, and therefore will have large positive directionality values. This makes directionality a useful tool for identifying beam halo backgrounds in the calorimeter.

4.3.4 Initial Shower Reconstruction

A parameterized shower shape was necessary for fitting the signals found in the peak finding process, both to determine the positions of the showers and to reconstruct the signals from overlapping showers. Single photon Monte Carlo data in which the shower development was allowed to continue down to the 1 MeV level (referred to as "full shower" data) was used to determine an average normalized shower shape. This shape was then compared with isolated photons from the data and found to be consistent. Separate shower shapes were determined for the front and back sections because the showers were much narrower in the front than in the back. The shower shape fits had the following forms:

E_{front}(r) = (f_1 e^{-f_2 r} + f_3 e^{-f_4 r} + f_5 e^{-f_6 r})/r   (4.3)

E_{back}(r) = b_1 e^{-b_2 r} + b_3 e^{-b_4 r} + b_5 e^{-b_6 r}   (4.4)

E_{sum}(r) = 0.7 E_{front}(r) + 0.3 E_{back}(r)   (4.5)

In these formulas r represents the distance from the shower center. The factor of 1/r was used to make the front section fit fall off more rapidly. The fractional contributions of the front and back shower shapes to the sum view shape were also determined from the Monte Carlo events and isolated photon data. The normalized shape of the showers was determined to be independent of energy in the sum section, which greatly simplified the fitting process.
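Equations 4.3 through 4.5 can be written down directly as functions of the radial distance from the shower center. The sketch below is illustrative Python; the fitted constants f_i and b_i are not quoted in the text, so they appear only as arguments.

```python
import numpy as np

def shower_shape_front(r, f):
    """Front-section radial shape of Equation 4.3 (three exponentials divided by r, r > 0).
    f = (f1, f2, f3, f4, f5, f6); the fitted constants are not reproduced here."""
    f1, f2, f3, f4, f5, f6 = f
    return (f1 * np.exp(-f2 * r) + f3 * np.exp(-f4 * r) + f5 * np.exp(-f6 * r)) / r

def shower_shape_back(r, b):
    """Back-section shape of Equation 4.4."""
    b1, b2, b3, b4, b5, b6 = b
    return b1 * np.exp(-b2 * r) + b3 * np.exp(-b4 * r) + b5 * np.exp(-b6 * r)

def shower_shape_sum(r, f, b):
    """Sum-view shape of Equation 4.5: 70% front plus 30% back."""
    return 0.7 * shower_shape_front(r, f) + 0.3 * shower_shape_back(r, b)
```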
In order to extract more precise information about the energies and positions of the showers, the shower shapes were fit to the data using the energies and positions determined from the peak finding routines as starting values. The simplest case occurred when there was only one peak in a group. In this case, the energy and position of the photon were determined by minimizing the χ² for the strip energy deviations from the shower shape, which was defined as follows:

\chi^2 = \sum_l (e_l - E_{fit} \times z_l)^2 / \sigma_l^2   (4.6)

where e_l was the energy in strip l, E_{fit} was the shower energy parameter of the fit, and z_l was the estimate of the fraction of the shower energy deposited in strip l based on the shower shape. The sum runs over the strips in the peak, although the strips on the edges of the peak were included only if their energies were greater than twice the group finding threshold, to reduce the effect of fluctuations. The weight σ_l for each strip was determined by the LAC energy resolution, which was measured to be

\sigma^2(E) = (0.22)^2 + (0.16)^2 \times E + (0.01)^2 \times E^2   (4.7)

Although this procedure is straightforward for the R view fits, the variable strip widths of the φ view made it necessary to estimate the R position of the shower from the apparent width of the shower before the fit was performed. After the R and φ view showers were correlated, the improved R location was used to perform a new fit to the φ view peaks. The energy and position information obtained from the fit was referred to as the "gamma" information and stored in the EMREC ZEBRA banks.

After the fit had been performed for a peak, the amount of energy in the tails (the strips outside of the defined peak region) was determined as follows:

E_{tail} = E_{fit} \times \left(1 - \sum_l z_l\right)   (4.8)

If the χ² of the fit was less than 5, the fit energy was stored as the gamma energy. However, if the χ² was larger than 5, the energy was determined using the peak strip energies plus the energy in the tails:

E_{sum} = \sum_l e_l + E_{tail}   (4.9)

This preserved the energy information from hadron showers, which were much broader than photon showers and generally were not fit well by the photon shower shape.

If a shower was near a boundary between views (e.g. the inner/outer φ boundary), then a single shower might be split into two peaks (one in each view). However, sometimes only one peak was found and the energy in the other view was lost. In this case, an estimate of the energy lost to the other view was made and added to the energy of the gamma that was found. Similar corrections were applied to compensate for energy deposited outside the fiducial volume of the LAC when the showers were near the fiducial boundary.
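For the single-peak case, the χ² of Equation 4.6 depends on one energy parameter once a trial position (and hence the strip fractions z_l) is fixed. The sketch below illustrates that minimization in Python using a generic bounded minimizer; it assumes hypothetical inputs and is not the EMREC fitting code, which also varied the shower position.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigma2(E):
    """EMLAC energy resolution of Equation 4.7 (E in GeV)."""
    return 0.22**2 + 0.16**2 * E + 0.01**2 * E**2

def fit_single_peak(strip_e, shape_frac):
    """Minimize the chi-square of Equation 4.6 for a single shower in a peak.

    strip_e    : numpy array of measured strip energies inside the peak (GeV)
    shape_frac : fractions z_l of the shower energy expected in each strip,
                 taken from the parameterized shower shape at a trial position
    Returns (fitted shower energy, chi-square at the minimum)."""
    def chi2(e_fit):
        return float(np.sum((strip_e - e_fit * shape_frac) ** 2 / sigma2(strip_e)))
    res = minimize_scalar(chi2, bounds=(0.0, 2.0 * float(strip_e.sum())), method="bounded")
    return res.x, chi2(res.x)
```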
4.3.5 "Gamma" Correlation

Once the energy and position fits had been made for each of the views, the gammas from the R views had to be correlated with the φ view gammas to obtain full position information for each of the showers. These correlated pairs (or sets) of gammas were referred to as "photons" in EMREC. Although the gammas and photons are reconstructed on a quadrant basis, the correlation process makes use of the division of the quadrants into left and right R and inner and outer φ to reduce the combinatoric possibilities. Inner φ gammas were matched only with R gammas having radii less than 40 cm, since this is the location of the break between the inner and outer φ strips. Similarly, gammas from the left R view were only matched with φ view gammas in the left half of the quadrant.

Two quantities were used to match R and φ view gammas. The first was the difference between the energies in the two views weighted by the LAC energy resolution, (E_R - E_φ)/σ² (where σ² is the energy resolution defined in Section 4.3.4). Since the R and φ boards alternated in the calorimeter, the R and φ view gammas for a shower should have very similar energies. Some differences result from fluctuations in the development of the shower and from the fact that the first readout board has R geometry, but these differences were only significant for very low energy showers. The second criterion was a comparison of the longitudinal development of the two gammas using the E_f/E_s values determined in the previous stage of reconstruction. The ordering of the R and φ boards resulted in a slight difference in E_f/E_s between the R and φ views, which was measured using Monte Carlo showers. A curve was fit to this difference as a function of energy and used as an offset when comparing the values. The correlation process was done in steps, with the allowed number of standard deviations in the energy difference and in the longitudinal development increased at each step.

The simplest possible correlation was the case in which a single R view gamma was matched with a single φ view gamma (known as a 1-1 correlation). For each combination of R and φ gammas, the positions of the gammas were checked to see if they were in the same subsection of the quadrant. If this test was passed, the energy and E_f/E_s values were compared to see if the differences were within the current limits. If the differences fell within the current correlation window, the gammas were considered correlated and the photon information was stored in the output banks. The φ view shower was refit using the new R position information to maximize the accuracy of the φ view fit. After this had been done, the gammas were excluded from the remainder of the correlation process.

While many of the photons fell into the 1-1 category, many events required more sophisticated matching techniques. In some cases, showers that were separated in the R view overlapped in the φ view to the extent that the reconstructor could not separate them. In this case (known as a 2-1 correlation), the two R view gammas had to be added together before being compared with the φ view gamma. If a 2-1 correlation was found to fall within the correlation window, a new fit to the φ view with two gammas was made using the energies of the R view gammas. This case was fairly common for high energy π⁰s with low asymmetry. The correlation routines also checked for 1-2 correlations, in which two showers overlapped in the R view; if an event fell into this category, the R view was refit assuming there were two showers.

Special correlation routines were written to handle photons that landed near the inner-outer φ boundaries and the left-right R boundaries. In these cases, the photon would be split into three views (e.g. photons near the inner-outer φ boundary had to be correlated from one R gamma and two φ gammas). Boundary correlations were checked first and were given wider matching windows than the other correlation types. In addition to the boundary correlations, there were several routines that checked for more complicated combinations, such as 2-2, 1-3, 3-1, 1-4, and 4-1 correlations; in these cases the φ view gammas were not refit. After the correlation possibilities had all been checked using the first set of requirements on the energy and longitudinal development differences, a second pass was made using a less restrictive set of requirements.
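The spirit of the 1-1 correlation step can be captured in a short matching loop. The Python sketch below assumes hypothetical dictionaries for the gammas and illustrative window values; it is not the EMREC routine, which iterated over progressively looser windows and handled the other correlation categories.

```python
def correlate_1to1(r_gammas, phi_gammas, n_sigma_e=2.0, max_defes=0.1, efes_offset=0.0):
    """Pair single R-view gammas with single phi-view gammas.

    Each gamma is a dict with keys 'E' (GeV), 'efes' (Ef/Es) and 'subsection';
    structure, window values and offset handling are illustrative only."""
    pairs, used_phi = [], set()
    for i, rg in enumerate(r_gammas):
        for j, pg in enumerate(phi_gammas):
            if j in used_phi:
                continue
            if rg['subsection'] != pg['subsection']:
                continue                      # must lie in the same part of the quadrant
            sigma = (0.22**2 + 0.16**2 * rg['E'] + 0.01**2 * rg['E']**2) ** 0.5  # Eq. 4.7
            if abs(rg['E'] - pg['E']) > n_sigma_e * sigma:
                continue                      # energy agreement within the current window
            if abs(rg['efes'] - pg['efes'] - efes_offset) > max_defes:
                continue                      # longitudinal-development agreement
            pairs.append((i, j))
            used_phi.add(j)
            break
    return pairs
```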
After completion of the correlation passes and the assignment of TVC times to the photons (see Section 4.3.6), the "photon" information was stored in the EMREC output banks, which were written out to the data summary tapes (DSTs).

4.3.6 Photon Timing Information

The time of arrival of each photon was determined using the Time-to-Voltage Converter (TVC) information read out from the LACAMPs. These devices measured the arrival times of signals from groups of four neighboring strips in the LAC. The threshold for starting the timing circuit was about 4 GeV of energy. The TVC values for the groups that fell within the strips associated with a given photon were sorted into groups of TVC values that fell within 21 ns (about 3σ) of each other. The group containing the greatest number of TVC values was assigned as the best time for the photon. If several groups had the same number of entries, the group with the largest amount of associated energy was chosen as the best time, since the TVC resolution improved with increasing energy. Once the best group of times had been selected, the arrival time of the photon was determined by taking the average of the TVC values weighted by their associated group (of 4 strips) energies. Using this technique and requiring that the photon time be based on at least two TVCs, a timing resolution of about 7 ns was obtained. The efficiency for obtaining a reliable timing measurement for a photon reached 50% at a photon energy of about 16 GeV and was 100% for photons with more than 50 GeV of energy. This energy threshold was too high for the timing to be used in the final cross section measurements, especially for photons at large radii, which have much lower energies for a given PT, but the TVC information was very useful for studying the beam halo rejection cuts and the sensitivity of the trigger system to out-of-time signals.
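The time assignment described above amounts to a simple clustering of the TVC values followed by an energy-weighted average. The sketch below illustrates that bookkeeping in Python with hypothetical inputs; it is not the production code.

```python
def photon_time(tvc_times, tvc_energies, window_ns=21.0):
    """Assign an arrival time to a photon from its TVC measurements.

    Collect the largest set of times lying within 21 ns of each other (largest
    summed energy breaks ties) and return the energy-weighted average.  Inputs
    are parallel lists of times (ns) and 4-strip group energies (GeV); at least
    two TVC values are required, otherwise None is returned."""
    best, best_energy = [], -1.0
    for t0 in tvc_times:
        idx = [i for i, t in enumerate(tvc_times) if abs(t - t0) <= window_ns]
        e = sum(tvc_energies[i] for i in idx)
        if len(idx) > len(best) or (len(idx) == len(best) and e > best_energy):
            best, best_energy = idx, e
    if len(best) < 2:
        return None
    return (sum(tvc_times[i] * tvc_energies[i] for i in best)
            / sum(tvc_energies[i] for i in best))
```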
4.4 THE CHARGED TRACK ROUTINES (PLUNP AND PLREC)

The planes unpacker (PLUNP) took the list of hits read out from the Nanometrics CAMAC system and assigned an appropriate spatial coordinate to each hit. These hit locations were then stored in ZEBRA banks to be used by the reconstruction program (PLREC). A considerable amount of effort was expended to ensure that the alignment of the various elements of the tracking system was known precisely enough to take advantage of the increased precision of the straw chambers with respect to the PWCs. The planes reconstructor then used the information from the upstream SSD system and the downstream PWC and straw systems to determine the three-vectors of the charged tracks produced in each event. This information was used to reconstruct the location of the interaction vertex, to tag electromagnetic showers that were initiated by charged particles, and to aid in calibrating the energy scale of the LAC.

The only system that could resolve the ambiguities in correlating hits from different views was the PWC system, which had two additional sets of orthogonal planes rotated by 37° with respect to the X and Y planes. For this reason, the PWC track reconstruction was done first. The hit information from each of the four PWC planes with wires oriented in the same direction was used to construct "view tracks." With the exception of the PWCs, the tracking systems had only X and Y views, which made it impossible for these systems to resolve the X and Y hit combinations on their own. Instead of writing one algorithm to reconstruct space points from the PWC hits in each module and a separate algorithm to reconstruct view tracks in the remaining systems, it was more practical to use view tracks for the whole system and to use the PWC U and V view tracks to resolve the combinations.

Candidate view tracks were created by selecting pairs of hits from the outermost PWC modules (these were the "seed planes" for the 4-hit tracks). The remaining planes were then checked to see if they had any hits that fell within 1 wire spacing (0.1") of the line between these points. The view track was fit to minimize the overall χ², determined by taking the residual distance between the reconstructed track and each of the hits, weighted by the measured projection uncertainty for that plane. View tracks that had hits in all four planes and an acceptable χ² per degree of freedom (DoF) were retained. A similar process was used to find tracks with hits in only 3 of the planes, using two sets of "seed planes." Finally, two-hit tracks were reconstructed using the X and Y views of the first two PWCs so that the wide angle (low momentum) tracks would also be reconstructed.

To obtain the three-vectors of the charged particles, the tracks from the different views had to be correlated. This was done by taking pairs of X and Y view tracks and creating "space track" candidates. These candidate tracks were then projected into the U and V views to see if any of those view tracks fell within 1.5 wire spacings of the candidate space tracks. This process was repeated using space track candidates based on the U and V view tracks to maximize the reconstruction efficiency. Candidate space tracks that had reasonable overall χ² values, 13 or more hits, and did not share many of their hits with other space track candidates were accepted as space tracks. These tracks were then refit to minimize their overall χ² values, instead of minimizing the χ² values for the individual views, to obtain the best possible determination of each particle's three-vector. The tracks were then projected into the straw chambers, and the better resolution of the straw chamber hits was used to improve the space track resolution by a factor of about 2.5.

The next step was the reconstruction of X and Y view tracks in the beam and vertex SSD systems. In the beam SSDs, the view tracks that had hits in all three modules were reconstructed first. View tracks with only two hits were reconstructed only if they had small slopes with respect to the beam axis; this reduced the number of fake tracks generated by random combinations of two hits. In the vertex chambers, 4 and 5 hit tracks were reconstructed first, and the tracks that satisfied the χ²/DoF requirements were saved. These tracks were then projected to the center of the magnet to see if they linked with projections of the space tracks reconstructed in the downstream system. Each downstream track was assigned a momentum dependent linking window based on a simple estimate of the track momentum that assumed the track came from the nominal origin of the coordinate system. Linking χ² values were determined for the X and Y views based on the measured resolution for linking in each view. The χ² values for the X view were determined using the differences between the X projections of the upstream and downstream tracks at the center of the magnet.
The Y view tracks were not bent significantly by the magnetic field, so the differences between the upstream and downstream slopes were also used in the χ² calculation (with the measured slope resolution as the weight). The SSD track with the smallest χ² value was called the "best link." Up to 4 other links were retained for each downstream track in order of increasing χ² value. Once the linking for the 4 and 5 hit tracks had been completed, the hits belonging to these tracks were removed from the list and the process was repeated to look for 3 hit SSD view tracks.

The "best links" were then used to determine the location of the interaction vertex for the event. The existence of a link between an SSD track and a downstream track was required in order to reduce the background of fake SSD tracks. A minimum of three tracks was required to determine a vertex location. If there were not enough best links available to reconstruct a vertex, then extra links and unlinked tracks were also used. The vertex location was determined by minimizing a χ² defined using the impact parameter of each track with respect to the vertex, weighted by the projection uncertainty for the track. If the vertex defined in this manner did not satisfy a χ² cut, the worst track was removed from the fit and the fit was redone. This allowed tracks from secondary vertices to be removed from the primary vertex determination. If a beam SSD track pointed to the vertex, the vertex location was refit using the beam track. The final vertex position was the weighted average of the positions determined from each of the views. The vertex resolution was 400 μm in the Z direction and 10 μm in the transverse directions. The vertex code could reconstruct up to 2 vertices; if two vertices were found, the upstream vertex was automatically labelled the primary vertex.

Once the primary vertex location had been determined, this information was used to re-determine the best links between the SSD view tracks and the downstream space tracks. The χ² calculation for the refitting process included an added term for the vertex impact parameters. The new best links were then used to make more precise determinations of the track momenta using the bending angles between the upstream and downstream tracks.

Chapter 5

NEUTRAL MESON ANALYSIS

5.1 Overview

The data sample used for this analysis (and for the direct photon analysis) consists of runs 7523 through 9434 of the 1990 run. These runs represent all of the correctable data from the 1990 negative pion beam sample. The Prescaled Interaction, Prescaled Pretrigger, GLOBAL LO, and SINGLE LOCAL HI triggers were all used for the π⁰ and η analysis to provide coverage over the widest possible PT range.

The cross sections for the neutral mesons must be understood before the direct photon background can be determined, and many of the cuts and procedures used in making the neutral meson measurements are also used in making the direct photon measurements. The neutral meson analysis is also useful for understanding how to remove the backgrounds from muon bremsstrahlung, since the neutral meson signals can be identified by the mass of the photon pair, given by

m_{\gamma\gamma} = (2 E_i E_j (1 - \cos\theta_{ij}))^{1/2}

where E_i and E_j are the photon energies and θ_{ij} is the angle between the two photons in the lab frame, assuming that they came from the reconstructed vertex.
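The diphoton mass formula above is simple enough to show as a worked example. The Python sketch below evaluates it and includes an illustrative π⁰-like configuration in a comment; the numbers are examples, not measured quantities.

```python
import math

def diphoton_mass(e_i, e_j, theta_ij):
    """Two-photon invariant mass m = sqrt(2 E_i E_j (1 - cos(theta_ij))).

    Energies in GeV, opening angle in radians, photons assumed to originate
    from the reconstructed vertex as described above."""
    return math.sqrt(2.0 * e_i * e_j * (1.0 - math.cos(theta_ij)))

# Example: two 40 GeV photons separated by about 3.4 mrad reconstruct near the pi0 mass:
# diphoton_mass(40.0, 40.0, 0.00337)  ->  ~0.135 GeV/c^2
```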
The neutral meson cross sections could be measured without making a number of these halo rejection cuts, but an analysis without them must contend with a large background that is difficult to subtract, which would result in a larger uncertainty in the measured signals.

Measuring the nuclear dependence of the neutral meson cross sections is important because it provides a crosscheck of the ability of the experiment to measure nuclear dependence accurately. These cross sections should have the same dependence as the (non-strange) charged mesons measured by Cronin et al., since the rescattering should occur at the parton level, before the mesons have been created in the fragmentation process. This measurement is therefore an important step in demonstrating the validity of the measurement of the nuclear dependence of direct photon production.
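The nuclear dependence is quantified throughout this analysis with the parameterization σ_A = σ₀ × A^α, and with only two targets α follows directly from the ratio of the per-nucleus cross sections. The sketch below is a minimal illustration of that extraction in Python; the function names are illustrative and the inputs are placeholders, not measured results.

```python
import math

def alpha_from_two_targets(sigma_be, sigma_cu, a_be=9.012, a_cu=63.546):
    """Extract alpha from sigma_A = sigma_0 * A**alpha using the per-nucleus
    cross sections measured on the Be and Cu targets (placeholder inputs)."""
    return math.log(sigma_cu / sigma_be) / math.log(a_cu / a_be)

def alpha_uncertainty(rel_err_be, rel_err_cu, a_be=9.012, a_cu=63.546):
    """Propagate uncorrelated relative cross-section uncertainties into alpha."""
    return math.hypot(rel_err_be, rel_err_cu) / math.log(a_cu / a_be)
```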
5.2 Vertex Cuts and Reconstruction Efficiency

The z distribution of vertices in the target region is shown in Figure 3.1a. Each of the entries in this plot has been corrected for absorption of the beam in the upstream material. Longitudinal cuts on the vertex locations were defined for each of the target materials. Figures 3.1b and 3.1c show the vertex distributions in the plane transverse to the beam for the z regions containing the copper and beryllium targets (respectively). The shadowed positions of the beam hodoscope, beam SSD planes, beam hole counter, and the targets have also been superimposed to show the relative offsets of the targets with respect to the rest of the system. The vertices outside of the beryllium target come from the Rohacell target holders (the careful observer will note the gap between the two pieces of Rohacell at y ≈ 0 cm and x ≈ 0-1 cm). The transverse vertex requirements were defined so that any beam particle within the fiducial area would pass through all of the targets. This also avoids having to measure a different normalization correction for each target (since the normalization must be modified to account for those particles that did not pass through the fiducial region). The targets were moved during work on the SSD chambers, so two different fiducial definitions were used: one for the runs taken before the work was done (runs 7523-8499) and one for the runs taken afterward (runs 8500-9434).

The efficiency of the SSD reconstruction routines was determined by running events generated by the HERWIG Monte Carlo [32] through the full MAGIC reconstruction process and measuring the ratio between the reconstructed and generated vertex distributions. Prior to MAGIC reconstruction, each of the tracks in an event was digitized using the measured hit efficiencies for each of the planes, so that the Monte Carlo "detectors" would have the same efficiencies as the real system. Using this technique, the efficiency for reconstructing vertices was determined to be independent of z position within the copper and beryllium targets, with an overall value of 99.6%. Although interactions occurring in the SSD planes were included in some of our studies of nuclear effects, the efficiency and biases involved in reconstructing vertices in these planes are not well understood, so these data are not included in the neutral meson and direct photon nuclear dependence measurements.

5.3 EMLAC Fiducial Cuts and Geometric Acceptance

Showers located near the octant and quadrant boundaries or near the inner or outer edges of the detector were subject to much larger uncertainties in their energies and positions. Fiducial requirements were applied to the photons used in the analysis to avoid using these poorly measured photons. Photons that fell within 2 R strip widths of the inner edge of the detector, the octant boundaries, or the quadrant boundaries were excluded from the analysis. Photons that fell within 2 R strip widths of the last full R strip were also excluded (the outermost 16 strips were not present in all 66 layers of the calorimeter, so showers in these regions were not always fully contained and tended to have unusual shapes). The distribution of π⁰s whose photons fell within the fiducial region is shown in Figure 5.1.

Figure 5.1: Distribution of π⁰s that fall within the EMLAC fiducial definition.

The correction for this geometric acceptance cut was determined using a simplified Monte Carlo program. This program generated π⁰s which decayed uniformly in asymmetry (see Section 5.4 for the definition of asymmetry) and measured the fraction of these events in which both photons were inside the EMLAC fiducial volume. The acceptance fraction was determined as a function of PT, lab frame pseudorapidity (y_lab), the z position of the vertex (v_z), and the radial distance between the vertex and the z axis (R_T). Events were generated on a four dimensional grid of these input values using 20 bins in PT from 2.5 to 12 GeV/c, 34 bins in y_lab from 2.51 to 4.335, 11 bins in v_z from 0 to -40 cm, and 11 bins in R_T from 0 to 2 cm. Figure 5.2 shows the π⁰ acceptance averaged over v_z and R_T as a function of y_lab for several PT bins. The geometric acceptance correction for each event was determined by interpolating between the grid points to obtain the geometric acceptance fraction and weighting the event by the inverse of this value. This process was then repeated to determine the EMLAC geometric acceptance for the other neutral mesons and for direct photons.

Figure 5.2: π⁰ acceptance for several PT bins averaged over v_z and R_T.
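The grid-based acceptance correction described above amounts to a multi-dimensional table lookup with interpolation followed by an inverse weighting. A minimal Python sketch is shown below; the table contents are placeholders and the use of a scipy interpolator is an illustrative choice, not the original implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical acceptance table filled by the toy Monte Carlo on the 4-D grid above
pt_axis = np.linspace(2.5, 12.0, 20)      # GeV/c
y_axis  = np.linspace(2.51, 4.335, 34)    # lab frame rapidity
vz_axis = np.linspace(-40.0, 0.0, 11)     # cm
rt_axis = np.linspace(0.0, 2.0, 11)       # cm
acc_table = np.ones((20, 34, 11, 11))     # placeholder: accepted fraction per grid point

acceptance = RegularGridInterpolator((pt_axis, y_axis, vz_axis, rt_axis), acc_table)

def acceptance_weight(pt, y_lab, vz, rt):
    """Event weight = 1 / (interpolated geometric acceptance)."""
    a = float(acceptance([pt, y_lab, vz, rt]))
    return 1.0 / a if a > 0 else 0.0
```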
5.4 Energy Asymmetry

The π⁰ and η mesons are spin zero particles, so their decays into pairs of photons should occur isotropically in the center of mass frame. Using special relativity one can derive the following relation between the center of mass decay angle and the energies of the two decay photons:

A = \beta |\cos\theta^*| = |E_1 - E_2| / (E_1 + E_2)   (5.1)

where E_1 and E_2 are the energies of the photons, and θ* is the center of mass decay angle relative to the parent particle's direction of motion. For the mesons measured in this experiment, β is effectively 1 and can be ignored. The isotropic decay of the parent meson into two photons results in a distribution that is uniform over intervals of cos(θ*), and therefore the distribution as a function of asymmetry should also be uniform. The distribution of π⁰ asymmetry values is shown in Figure 5.3. This plot shows the asymmetry distribution for photon pairs whose masses fall within the pion region after the asymmetry distribution of the sideband regions has been subtracted (see Section 5.7).

There are several factors that cause the measured asymmetry distribution to deviate from uniformity. At large values of asymmetry, one of the two photons will have very little energy and may leave the EMLAC fiducial region or be lost in the reconstruction process. This results in a dip in the asymmetry distribution as the asymmetry approaches 1. The distribution is also modified by the muon background: a high PT muon can randomly combine with small background photons to produce a signal in the π⁰ mass band, which increases the population of the distribution near 1. However, subtracting the asymmetry distribution of the sideband regions removes most of this effect.

Cuts on the asymmetry of the π⁰ and η candidates are used to avoid these problems at high asymmetry. Compensating for this asymmetry cut is straightforward since the distribution is expected to be uniform. The asymmetry cut for measuring the pions (which constitute about 80% of the direct photon background) was set to 0.75, since the distribution was roughly constant from 0 to 0.75. The η asymmetry distribution showed a similar decline at high asymmetry values, so a cut at 0.75 was also chosen for the ηs.

There is one other small deviation from a uniform distribution that can be seen in Figure 5.3. The slight dip in the distribution near zero is probably due to a subtle bias in the reconstruction process. For very low asymmetries (near 0) the energies of the two photons are almost equal. The correlation process matches the R and φ view gammas based on their energies, so fluctuations in the energy deposited in one of the views can cause the wrong gammas to be correlated, which increases the asymmetry of the reconstructed meson.

Figure 5.3: π⁰ asymmetry after sideband subtraction.

5.5 Longitudinal Shower Development

Showers initiated by hadrons and muon bremsstrahlung can be rejected by measuring the longitudinal development of the showers. Showers initiated by photons or electrons (generically referred to as "electromagnetic showers") generally deposit about 2/3 of their energy in the front section of the EMLAC (see Figure 3.4). However, showers initiated by hadrons develop more slowly and tend to deposit almost all of their energy in the back section. Studies of the E_front/E_total distribution showed that requiring a value greater than 0.2 for photons would reject most of the showers initiated by hadrons. This cut also rejects some of the muon showers, since many of those showers start in the back section. The correction for the small fraction of real photons rejected by this cut was determined by including the cut in the calculation of the reconstruction efficiency using the Monte Carlo data (see Section 5.9).
5.6 Muon Bremsstrahlung Rejection

Although a number of online techniques were used to reduce the muon bremsstrahlung background, a significant number of muons were contained in the raw data sample. The mass requirements for reconstructing the neutral mesons reduced the background, but random combinations of muons with soft photons produced large background signals, especially at high PT and in the outer R regions ("backward rapidity") where muons were more likely to create a trigger. The random combinations between muon showers and soft photons generally had very high asymmetry values and low mass values, but some of these combinations had masses in the π⁰ and η mass bands. Imposing the E_front/E_total requirement on the photons removed some of the muons, but a significant number of muons remained in the sample. Figure 5.5 shows the two photon mass distribution in the π⁰ mass region for several PT bins. The muon background is comparatively flat in PT, while the meson spectrum falls rapidly, so the background problem is greatest at high PT. The top part of Figure 5.6 shows the mass distributions at high PT divided into two rapidity bins, so that the severity of the problem in the backward rapidity region can be observed. Several cuts were used in addition to the asymmetry and longitudinal profile requirements to remove these muons from the sample. These cuts are described in the following subsections.

Figure 5.4: Longitudinal shower development for showers matched with charged particle tracks (top) and showers matched with ZMP electrons (bottom).

5.6.1 Offline Veto Wall Requirement

The online veto wall definition used a coincidence between the scintillators upstream and downstream of the hadron shield to reject beam halo particles. However, this online veto was not fully efficient. One source of inefficiency was in the logic itself, caused by the very short signals used to define the veto for each RF bucket independently; the veto wall logic seems to have been particularly sensitive to problems associated with these short strobes. Another problem that allowed some muons into the triggered sample was inefficiency in the counters due to radiation damage and the high signal rates (especially in the counters upstream of the hadron shield). Although there was no offline way to change the efficiency of the counters, the Minnesota latch ring memories appeared to latch the signals from the veto wall scintillators more efficiently than the online systems did. These ring memories provided the status (on or off) of each of the phototubes attached to the scintillators for fifteen RF buckets roughly centered around the interaction time. The offline veto wall cut evaluated the status of the online veto wall logic (VW1 + VW2) * VW3 for each of the quadrants separately. The status of the downstream logic (VW1 + VW2) for a given RF bucket i was compared with the status of the upstream logic (VW3) for

3.5 GeV/c and asymmetry values less than 0.5.
The average mass values varied over a range of about 5%. These values were used to rescale the reconstructed shower energies in each octant.

5.8.3 Boundary Corrections

Showers in the φ view that landed near the inner/outer φ boundary were corrected for the energy lost into the other section of φ strips. However, a comparison of the results obtained for the R and φ views showed that the EMREC boundary correction was too large. To minimize the effect of this overcorrection, the photon energy was replaced with two times the R view gamma energy for photons that fell within ±5 cm of the inner/outer boundary at 40.175 cm.

Figure 5.11: Average energy lost in the material in front of the EMLAC for photons (solid line) and electrons (dashed line) as a function of the reconstructed energy.

5.8.4 Correction for Lost Energy

Both photons and electrons deposit some fraction of their energy in the material in front of the active region of the EMLAC. Monte Carlo data were used to determine the amount of energy lost by photons and electrons as a function of energy. Figure 5.11 shows the energy loss values for both photons and electrons. The electrons lose more energy on average because, on average, they start showering earlier.

5.8.5 The Radial Correction

There was also a systematic variation in the reconstructed masses of π⁰ and η particles as a function of radial position (see Figure 5.12). The π⁰ sample was obtained by selecting pions with PT greater than 2 GeV/c from the TWO GAMMA trigger sample; this was done to avoid the extensive overlapping of showers at high PT values, which can cause variations in the reconstructed mass (but not in the overall energy of the π⁰). The η events were required to have at least 3.5 GeV/c PT and were obtained from the SINGLE LOCAL HI trigger sample. The radial dependence seen in the π⁰ and η masses is similar to the effect seen in the radial dependence of E/P for electrons (where P is the momentum determined by the tracking system), which suggests that the effect can be attributed to the reconstructed energy. If the photon energies are replaced by twice the φ view (or R view) gamma energies, the same variation is seen (for both views), which suggests that the effect is not simply an artifact of the readout geometry.

The low PT π⁰ sample was used to determine a correction for each octant independently. Each photon in the π⁰ was required to have an energy of at least 10 GeV to avoid effects associated with residual energy corrections and reduced resolution at low energies. Since the mass measurements depend on two energies, the determination of the radial correction had to be done iteratively. In addition, the radial dependence of the π⁰ and η masses changed when the sampling time for the LAC strip energies was changed during the early part of the 1990 run, so a separate set of corrections had to be determined for the early data.
Figure 5.12: Radial dependence of the reconstructed masses for π⁰ and η particles relative to the nominal values. The π⁰s are required to have at least 2.0 GeV/c PT and the ηs are required to have at least 3.5 GeV/c.

5.8.6 Octant Energy Corrections Revisited

After the corrections described above had been made, the overall octant energy scales were measured and corrected using η particles with PT values greater than 4 GeV/c. The η particles were chosen because the larger mass of the η causes the two decay photons to be well separated, so that variations in the mass value associated with overlapping showers were minimized.

5.8.7 Electrons

One crosscheck on the validity of the energy scale for isolated photons is to measure two photon masses when one of the photons has converted into an electron-positron pair in the target region. This takes advantage of the good momentum resolution of the tracking system to measure the energy of one of the photons very precisely, so that the consistency of the energy determination for the unconverted photon can be checked.

The momentum scale for the tracking system was determined by fitting the mass peaks of the K⁰_S and J/ψ particles. The mass of the K⁰_S obtained by measuring its decay into a charged pion pair was 497.7 ± 0.1 MeV/c². The mass of the J/ψ obtained by measuring its decay into a muon pair was 3.097 ± 0.002 GeV/c². Both of these values are in good agreement with the nominal values. Since this calibration was done by tuning a single parameter (the magnetic field strength), the good agreement with two different (and widely separated) masses indicates that no large systematic effects remain in the momentum measurements.

Because electrons start showering earlier in the material in front of the active region of the EMLAC, the shower shape for electrons is somewhat different from the shower shape for photons. This difference becomes negligible at high energies, but it is significant at low energies. Figure 5.13 shows the average ratio of LAC energy to tracking system momentum (E/P) for electrons (and positrons) that were part of a converted photon (these electron-positron pairs were known as zero mass pairs, or ZMPs) as a function of the EMLAC energy. A fit to this data was used to remove the systematic variation in the electron energy scale due to the shower shape difference.

Once the electron energy scale had been determined, the π⁰ mass obtained from the γee mode was measured as a function of the γ energy. Figure 5.14 shows the γee mass divided by the nominal π⁰ mass. The γee mass is 1% lower than expected because the electrons lose energy as they exit the target material; this effect is also seen in the Monte Carlo data. A more important feature of this plot is that it is flat as a function of photon energy, indicating that there are no PT dependent systematic effects in the isolated photon energy scale.

5.8.8 Energy Scale Verification and Results

Figure 5.15 shows the π⁰ and η masses as functions of octant number, PT, and radial position after the final energy scale corrections have been applied.
One further crosscheck on the energy scale for isolated photons can be obtained by measuring the ω mass in the π⁰γ decay mode, since the third photon is effectively an isolated photon. Figure 5.16 shows the invariant mass distribution for this decay mode in the ω region. The mass obtained by fitting this data is very close to the nominal value.

Figure 5.13: Ratio of EMLAC energy to track momentum for ZMP electrons as a function of the EMLAC energy. The closed circles come from the real data sample and the open circles are the Monte Carlo data.

Figure 5.14: a) Mass of π⁰s reconstructed in the γee mode divided by the nominal π⁰ mass. The 1% decrease is due to energy loss as the electrons leave the target region. b) and c) show the mass peaks for the π⁰ and η regions (respectively) in this mode.

Figure 5.15 shows the π⁰ and η masses obtained using the final photon energy scale. The dip in the π⁰ mass at high PT is caused by the increasing overlap between the two showers for the higher energy π⁰s and does not indicate any problem in the isolated photon energy scale. This effect is also too small to cause any of the π⁰s to fall outside of the defined mass band. The residual uncertainty in the overall energy scale is 0.5%, which contributes an uncertainty of about 6% to the overall cross sections at high PT.

5.9 The Monte Carlo Simulation

5.9.1 The Detector Simulation

A detailed simulation of the experiment was necessary for measuring a number of important parameters, including the photon shower shape and the reconstruction efficiency. Although the generation of the initial physics events was done in several different ways, all of the Monte Carlo events were propagated through a GEANT [35] simulation of the detector. GEANT contained tables of material properties and standard geometrical shapes that allowed the simulation to be tailored to the experiment's measured parameters. The simulation included all of the materials known to be present in the experiment, although some of the material descriptions were simplified to reduce the computer processing time. For example, the LAC insulation and the copper clad readout boards were both represented by homogeneous materials instead of including the actual multilayer structures. The thermal contraction of the materials from their room temperature dimensions was not included in the representation.
Figure 5.15: The π⁰ (closed circles) and η (open circles) masses as functions of a) the octant number, b) PT, and c) radial position. The dip in the π⁰ mass at high PT is caused by the decrease in the separation of the two photons at high energies. A similar effect can be seen for the π⁰s near the inner edge (which must have larger energies to have the same PT as an event near the outside of the detector).

Figure 5.16: The ω mass peak in the π⁰γ decay mode (1990 π⁻ Be data, PT > 4 GeV/c; the fitted mass is M_ω = 782 ± 3 MeV/c²).

GEANT traced the propagation of particles through each of the materials and calculated the probabilities for interactions occurring in each of them. This process continued until all of the particles had energies less than a specified cutoff value. After the particles had reached the cutoff energy, the deposition of the remaining energy was handled by an external program written to simplify the low energy portion of the simulation. The "full" shower simulation used a low cutoff of 1 MeV so that the shower development would be determined completely within GEANT; it was used to determine the parameterization of the shower shape. However, the full shower simulation required a large amount of computing time and was not practical for creating a large sample of simulated physics events. To obtain a larger event sample, a cutoff of 10 MeV was used. Once the particles in an event had been propagated through the system, GEANT produced the detector outputs, such as the LAC strip energies and the hits in the charged tracking system, and the event was written out to tape in a format similar to the real data format. These outputs were based on a completely efficient "ideal" detector. The real detector characteristics, such as hit efficiencies and dead channels, were imposed on the Monte Carlo data by a preprocessor that was run before the events were put through a full MAGIC reconstruction. This allowed the detector characteristics to be tuned to match the real data without having to repeat the CPU intensive process of propagating the particles through the experiment.

5.9.2 Event Generation

The HERWIG event generator was used to produce events with roughly the same numbers of photons and charged particles as the real data events. The particle multiplicities produced by HERWIG matched the data distributions more closely than those produced by PYTHIA, which was also considered as an event generator. HERWIG was used to produce two meson samples, one rich in π⁰s and one rich in ηs. Several different generation thresholds were used to ensure that sufficient numbers of events were available for each PT range without having to generate an unreasonable number of low PT events. The data that fell within 0.5 GeV of the generation threshold for a given sample were not used, to avoid biases in the distributions caused by the resolution of the detector. Some differences between the Monte Carlo data and the real data were seen in the PT and rapidity distributions after all of the corrections had been applied, but these were removed by weighting the events.

5.9.3 Comparison With Real Data

The events generated by HERWIG and propagated through GEANT were then passed through the preprocessor and a full MAGIC reconstruction.
The preprocessor used the detector characteristics stored in the MAGIC run constants to match the running conditions found in the real data set. A number of comparisons were made to ensure that the Monte Carlo (MC) events actually matched the data. Figure 5.17 shows a comparison of the π⁰ mass and asymmetry distributions, and Figure 5.18 shows a comparison of the E_front/E_total distributions for several energy ranges for the MC and real data. The agreement between the MC and real data is satisfactory, so the MC data was used to determine corrections for several of the analysis cuts.

5.10 Reconstruction Efficiency

The reconstruction efficiencies for the neutral mesons were calculated using the HERWIG Monte Carlo events. These efficiencies were measured by dividing the number of events reconstructed in a given PT and rapidity bin using the normal analysis cuts (except for the veto wall cut) by the number of generated events. The generated events had to interact inside the target fiducial region and decay into two photons (or three photons for the ω sample). The photons were not allowed to convert into electron-positron pairs upstream of the magnet (if this occurred, the event was not included in the denominator sample). For the π⁰ and η samples, the generated decays were required to satisfy the analysis asymmetry cuts before they were included in the denominator. Since the numerator used the reconstructed physics variables and the denominator used the generated physics variables, this correction also compensated for the differences between these variables.

A two dimensional surface was fitted to the results obtained for the π⁰s and ηs. The parameterization of the reconstruction efficiency for the pions had the following form:

\pi^0 \ \mathrm{Reconstruction\ Efficiency} = 0.973 - e^{(-3.000 - 5.01\, p_T + 0.044 \times y\, p_T)}   (5.2)

This surface is shown in Figure 5.19. The fit was made to the data for the rapidity interval from -0.75 to 0.75.

Figure 5.17: Comparison of the π⁰ mass and asymmetry distributions from the Monte Carlo (open circles) to the data distribution (solid histogram).

The corresponding fit for the η reconstruction efficiency had a similar form:

\eta \ \mathrm{Reconstruction\ Efficiency} = \ldots\ e^{(\ldots - 2.01\, p_T + 0.074 \times y)}   (5.3)
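The efficiency itself is just a binned ratio of reconstructed to generated counts, to which the surfaces of Equations 5.2 and 5.3 were then fit. The short Python sketch below illustrates the bookkeeping; the bin edges and function names are illustrative and the inputs are placeholders.

```python
import numpy as np

pt_edges = np.linspace(2.5, 12.0, 21)     # 20 PT bins, GeV/c
y_edges  = np.linspace(-0.75, 0.75, 16)   # rapidity bins (illustrative binning)

def reconstruction_efficiency(gen_pt, gen_y, rec_pt, rec_y):
    """Ratio of reconstructed to generated counts in (PT, y) bins.

    gen_* : generated values for decays passing the denominator requirements
    rec_* : reconstructed values for events passing the analysis cuts
    Bins with no generated entries are returned as zero efficiency."""
    gen, _, _ = np.histogram2d(gen_pt, gen_y, bins=[pt_edges, y_edges])
    rec, _, _ = np.histogram2d(rec_pt, rec_y, bins=[pt_edges, y_edges])
    with np.errstate(divide="ignore", invalid="ignore"):
        eff = np.where(gen > 0, rec / gen, 0.0)
    return eff
```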
Figure 5.19: The π⁰ reconstruction efficiency as a function of PT and y.

5.11 Photon Conversion Probabilities

Some fraction of the photons produced by the rapid neutral meson decays will be converted into electron-positron pairs before they reach the EMLAC. If the conversion occurs downstream of the magnet, then the position and energy of the resulting showers will be approximately the same as the position and energy of the original photon. However, if the conversion takes place upstream of (or inside) the magnet, the paths of the electron and positron will be bent away from each other and the energy of the original photon will be split into two EMLAC showers (or will miss the LAC entirely). The probability that both of the photons from a π⁰ or η reached the LAC without being converted to electron-positron pairs was calculated from the materials that the photons had to pass through and the interaction probabilities for each of those materials. The materials that each photon passed through were determined using the database of information compiled for the detector simulation.
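The survival probability against conversion is, for each photon, a product over the traversed materials; in the standard high-energy approximation the conversion probability in a thickness t of material with radiation length X₀ is 1 - exp(-7t/9X₀). The sketch below illustrates this bookkeeping under that assumption; the material list is a placeholder, not the experiment's actual material database.

```python
import math

def photon_survival_probability(materials):
    """Probability that a photon traverses the listed materials without converting
    to an e+e- pair.  `materials` is a list of (thickness_cm, radiation_length_cm)
    tuples; 7/9 is the standard high-energy pair-production probability per X0."""
    prob = 1.0
    for thickness, x0 in materials:
        prob *= math.exp(-7.0 * thickness / (9.0 * x0))
    return prob

# Example with placeholder thicknesses: beryllium target, silicon SSD plane, air gap
# photon_survival_probability([(1.0, 35.3), (0.03, 9.37), (900.0, 30390.0)])
```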
Typically the showers in this octant came from the recoil jet produced in the hard scattering process, so the opposite octant was usually located directly opposite from the hi triggered octant. If several octants contained hi triggers, then an opposite octant was designated for each of the hi triggered octants. No trigger requirement was placed on the opposite octant itself, to avoid biases. Although all of the non-triggered octants could have been used to provide a data sample for measuring the efficiencies, this would have substantially increased the amount of information stored in the DST without greatly increasing the number of octants that contained useful amounts of electromagnetic transverse momentum. Selecting a single opposite octant for each hi triggered octant maximized the number of useful events while minimizing the amount of information stored in the data summary tapes.

5.12.1 The Local Discriminator Analysis

The signals sent to each of the local discriminators were reconstructed by weighting each of the LACAMP strip energies by the corresponding gain value and summing over the 16 front and 16 back section strips that were used for a given local. These calculations were performed by the EM unpacker after the LAC pedestals had been applied, but before the time variation in the energy scale had been removed. No corrections were made to the gains to account for image charge effects, but the small spatial extent of each local makes such corrections unnecessary.

The measurement of the local lo discriminator efficiency was made using opposite octant events, and the efficiency of the local hi discriminators was determined using the events selected by the two gamma trigger. The two gamma trigger was used because its performance was dominated by the local lo discriminator and therefore provided a sample in the right PT range for the measurement. Although it would have been desirable to use the single local lo trigger for this purpose, it was only available for the end of the 1990 run.

The efficiency curves were determined independently for each local in an octant using the information from the Nanometrics latches. Two histograms were used for each local. The denominator histogram contained the trigger PT distribution for that local for the "raw" sample (the opposite octants for the measurement of the low threshold discriminators and the low threshold trigger events for the measurement of the high threshold discriminators). The numerator histogram contained the trigger PT distribution for the events from the "raw" sample which had fired the nanolatch bit corresponding to that local (see Figure 5.20). The efficiency curves obtained by dividing these two histograms were then fit using an "erf" function, which is the convolution of a step function with a Gaussian smearing function (or equivalently, the integral of a normalized Gaussian distribution from −∞ to x). The threshold (τ) and width (σ) parameters were determined for each local independently and saved in a database for each run set.

These fits were then used to determine the probability of each local firing for a given event. The probability that the local discriminator card fired for an event is just the probability that at least one of the locals fired, or, conversely, 1 minus the probability that none of the locals fired:

    P_SLH = 1 − ∏ (1 − p_i)    (5.4)

where the product runs over the locals in the octant and p_i is the probability that local i fired.
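A minimal sketch of how Equation 5.4 might be evaluated, assuming the per-local probabilities p_i have already been obtained from the erf fits stored for the run set; the function name and inputs are illustrative only.

```python
def single_local_hi_prob(local_probs):
    """Eq. 5.4: probability that at least one local in the octant fired,
    given the per-local firing probabilities p_i from the erf fits."""
    p_none = 1.0
    for p in local_probs:
        p_none *= 1.0 - p
    return 1.0 - p_none
```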
5.12.2 Calibrating the Global Trigger PT Calculations

There were three calibration procedures that had to be performed on the global PT calculations before the efficiencies of the global discriminators and pretriggers could be measured. These procedures were originally carried out in the order in which they are described below and then repeated to make sure that the results for each procedure were not skewed by initial inaccuracies in the other values. Each of the procedures relied on a comparison of the global trigger PT reconstructed using the EMREC strip energies with the global PT values measured by the online trigger ADCs. The global PT ADCs used a 100 ns gate width, so these values correspond to the signals used in the trigger decision. The ADCs recorded the signal from each of the biased PT adder cards for each octant (called "1/2 global PT", since each card summed the signal from half of an octant) as well as the sum of the signals from the two adder cards in each octant (referred to as "global PT" or "total global PT").

Figure 5.20: Efficiency curves for single local hi (solid lines) and single local lo (dashed lines) discriminators as functions of local trigger PT. The upper plots are for locals 2 (left) and 10 (right) in the inner part of an octant. The lower plots are for locals 19 (left) and 27 (right) in the outer part of an octant.

The first step in the global PT calibration was the removal of variations in the global gains caused by hardware changes made before the gains measurements were made. This was a problem for the 1990 data because the gains measurements were made after the run had been completed. Localized problems with the gains measurements were assessed by measuring the slope of the plot of reconstructed global PT versus the corresponding ADC value. Figure 5.21 shows this plot for the inner half of octant 1 for a run taken near the end of the 1990 run. A careful examination of this plot reveals that the data are distributed around two different lines.

Finding the positions of the gains problems is difficult because the only information available is the global PT sum for half of an octant. However, it is possible to select events in which a large fraction of the signal came from a specific group of 8 strips.
Figure 5.21: Raw global PT calibration plot for octant 1. The events cluster along two lines due to a problem with the relative gains measurements.

Figure 5.22: Raw global PT calibration plot for events in groups 1-8 of octant 1.

Figure 5.23: Raw global PT calibration plot for events in groups 9-16 of octant 1.

Figure 5.24: Global PT calibration plot for octant 1 after gains corrections and first pass cutoff values have been applied.

Requiring that the largest photon in an event and the largest group of 8 trigger PT signal have the same R position provided a sample of events in which the desired group of 8 strips contained a large fraction of the octant's trigger PT. This requirement also excluded the "ramp and step" events from the calibration sample, which reduced the "noise" background for the measurements. The group of 8 positions were numbered from 1 (the innermost group) to 32 (the outermost group). Figures 5.22 and 5.23 show the events from Figure 5.21 separated into events with positions (determined using the above requirements) in groups 1-8 and events with positions in groups 9-16.

Because the global PT sums included contributions from more than just the highest group of 8, the gain correction had to be carried out iteratively. The first step was to measure the overall slope of the line for an octant. A similar slope measurement was then made for the region to be corrected. The ratio of these slope measurements was then used as a multiplier on the raw gains measurements for the affected groups, and the slopes were remeasured. For individual group of 8 corrections, this first correction typically reduced the difference in the slopes by about 40-60%. By taking the ratio of the remaining difference in the slopes to the original difference in the slopes, one can estimate the fraction of the global PT that is coming from the rest of the octant and then use this to estimate the full correction needed for the group. In general the slopes agreed with the rest of the octant within the sensitivity of the measurements after the second step. Once the gains variations for set A had been completed, the other sets were checked in reverse chronological order, since new gains changes will be "added" as one gets further away from the time when the raw measurements were made.

A similar procedure was applied at the strip level to correct for several "hot" or "dead" strips. While these stripwise corrections were not sensitive to small variations in the gains, they were sufficient to measure gains that required large corrections (on the order of 50% or more). In addition to correcting for hot strips and dead strips, corrections were applied to the ground flash strips on the inside and outside of the detector. These strips have capacitance values that are significantly different from the normal R strips, and the signals seen on the trigger timescale were generally about 1/2 of the signals expected based on the EMREC strip energies. An additional gains problem was discovered while the gains for individual groups of 8 were being corrected.
The slope of the reconstructed trigger PT versus ADC trigger PT plot increased with increasing R position within each of the biased PT adder cards. This effect may be the result of image charge within the groups of 8 occupied by the showers reducing the trigger signal. The effect increases with increasing R, which is consistent with the generally linear increase in the capacitance with R. The sampling time for the EMREC strip energies was tuned to minimize the sensitivity to these image charge effects, but the trigger signals would have included these effects because of their short integration times. Based on this model of the image charge effects, the slopes would be expected to increase with R, since the ADC values would be reduced more by the image effects for larger R values while the 790 ns strip energies would be relatively unaffected. Although these effects were not really problems with the gains measurements, the image charge effects were included in the global PT calculations at the strip level by modifying the gains using a correction of the form 1/(A + B×N_s), where N_s is the strip number. All of the gains corrections described above were applied to the gains used in the MAGIC processing pass.

The third procedure was the determination of the cutoffs for the groups of 8. Measurements of the voltage levels for each of the cutoffs were made during 1991. However, the conversion scale between these voltages and the trigger PT values was unknown (there was no absolute reference for the gains measurements), so the scale factor for these measurements had to be determined from the data. This was done on a half octant basis by extrapolating the high trigger PT portion of the data back to the ADC axis using a linear fit. This was repeated for different values of the scale factor until the linear fits intersected the location of the ADC "zero" (the value corresponding to no input, which was generally set to be about 50 counts out of the 1024 count range). The low PT end of the data was excluded because it tended to drag the fits toward the ADC zero and reduce the sensitivity of the measurement. This technique was used for all of the octants except octant 4. The biased PT adder cards for octant 4 were repaired after the 1990 run, and the measurements of the cutoff voltages after the repair did not show good agreement with the data. A single cutoff value was applied to all of the groups in each half of octant 4.

5.12.3 Global Discriminator Efficiency Measurements

The efficiency curve for each global lo discriminator was measured as a function of the total global PT in the octant using the opposite octant events. Some of the calibration effects described above were different for the inner and outer biased PT adder cards for each octant, so a comparison of the efficiencies for inner and outer events was made. No significant differences in the efficiencies for the inner and outer events were found, so the inner and outer events were added together in measuring the global discriminator efficiencies. While the efficiencies did not seem to vary strongly with the position of the largest shower in the octant, there was a variation in the efficiency as a function of the number of photons in the event. This effect may have been caused by image charge effects beyond the "self imaging" effect for each photon that was compensated for in the gains modifications. To accommodate this dependence, the efficiency measurements were separated according to the event topologies.
Instead of using the number of photons found by EMREC, which depends on the details of the EMREC reconstruction process, a variable based on the trigger group of 8 information was defined. This definition counted the front and back sections for a group of 8 strips as part of the same unit, so that showers which produced signals above the cutoffs in both front and back would not be counted as two photons. Using this definition of a "trigger group", the efficiencies were measured separately for events that involved 1 or 2 trigger groups and for events that involved 3 or more trigger groups. Separating the efficiency measurements this way minimizes the tendency to overcorrect events containing only 1 or 2 photons and undercorrect events with many photons. Figure 5.25 shows the results of these measurements for two different octants. The efficiencies for the different run sets (see Chapter 3) were compared, and the events from contiguous run sets were combined as much as possible within the threshold sets in order to obtain sufficiently large samples to constrain the measurements in the threshold region. The results obtained in this manner were fit with erf functions, and the threshold, width, and plateau values were stored for use in calculating event-wise corrections.
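A minimal sketch of such an erf fit to a measured turn-on curve, assuming binned pass/total counts are available; SciPy is used purely for illustration, and the parameter names simply mirror the threshold, width, and plateau described above.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def erf_turn_on(pt, threshold, width, plateau):
    """Step function convolved with a Gaussian; reaches 'plateau' well above threshold."""
    return 0.5 * plateau * (1.0 + erf((pt - threshold) / (np.sqrt(2.0) * width)))

def fit_efficiency(bin_centers, n_pass, n_total, p0=(3.0, 0.3, 1.0)):
    """Fit the binned efficiency n_pass/n_total with the erf turn-on curve."""
    n_pass = np.asarray(n_pass, dtype=float)
    n_total = np.asarray(n_total, dtype=float)
    eff = np.divide(n_pass, n_total, out=np.zeros_like(n_total), where=n_total > 0)
    popt, _ = curve_fit(erf_turn_on, bin_centers, eff, p0=p0)
    return popt  # (threshold, width, plateau)
```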
The global hi discriminator efficiencies were measured in the same fashion as the global lo discriminator measurements, but the global lo events were used as the reference sample for the measurements. For those run sets in which the global lo trigger was dead or inefficient for a particular octant, the global hi efficiencies were not measured, because there were not enough events in the other samples to measure the behavior of the efficiency curves in the threshold regions. However, events from some of the other lo threshold triggers were used to attempt to verify that the global hi discriminator was fully efficient above a high cutoff (typically 5-6 GeV/c), so that the higher PT events from these octants could be used. As with the global lo measurements, run sets were combined as much as possible in making the global hi discriminator measurements in order to obtain sufficient statistical samples to constrain the measurements. The global hi efficiency curves were also fit using erf functions. Figure 5.25 shows the global hi efficiencies for two octants.

Figure 5.25: Global lo (upper) and hi (lower) discriminator efficiencies for octants 1 and 4. The efficiencies for events with 1 or 2 trigger groups above the cutoffs are indicated by the solid circles. The efficiencies for events with 3 or more trigger groups above the cutoffs are indicated by the open circles.

5.12.4 Pretrigger Efficiency Measurements

The techniques used to measure the pretrigger hi efficiencies were similar to those used to measure the global discriminator efficiencies. However, there were several factors that made the pretrigger hi measurements more difficult. The first was that the only reliable information on the pretriggers came from the octant pretrigger store. These bits were "OR"ed over the two zero-crossing discriminators for each octant and included several vetoes which were not directly latched by the discrete logic units.

In order to calculate the probability of the zero-cross OR accurately, the efficiency of the inner and outer zero-crossing discriminators had to be measured separately. To do this, the "location" of the event was found by determining whether the inner or outer half of the octant had more trigger PT. If the other half of the octant had more than 0.4 GeV of trigger PT (not "real" PT), then the event was rejected. This cutoff ensured that the contribution of the other half of the octant to the probability of satisfying the zero-cross OR requirement was negligible.

The vetoes included in the pretrigger store information were already accounted for in the beam normalization definition, so the events that fired these vetoes had to be removed from the opposite octant sample used to measure the pretrigger hi efficiencies. The events that fired the SCR veto were easily removed using the latched SCR information. However, the events that fired the veto wall vetoes were more difficult to remove because the online veto signals were not latched. These events were removed by requiring that there be no hits in any of the 15 time buckets of the Minnesota Latches for the quadrant shadowing the opposite octant. The last veto was the early PT veto, which was not latched in any form for most of the run. However, an early PT latch was installed starting with run 9247 so that some information would be available for the pretrigger studies. This data was used to understand how to extract the proper efficiency information from the earlier data. This was done by comparing fits to the data with and without the early PT rejection applied to the opposite octant event sample. Fits to the data without the early PT rejection applied to the sample were made using an erf function with a variable plateau. The plateaus for the event sample with the offline early PT rejection were generally completely efficient and were fit with an erf function with a plateau value of 1. The thresholds (defined as 50% of the plateau value) and widths of the fits to the two samples were found to be the same within the limits of the sensitivities of the measurements. Based on this, all of the pretrigger hi measurements were made by fitting the data with a variable plateau value and then setting the plateau value to 1 in the probability calculations. The early PT cut was not applied to any of the 1990 data sample to avoid the possibility of systematic differences between the data sets (the veto wall and SCR cuts were still applied).

An additional cut was applied to remove events that were dominated by out-of-time interactions. This was necessary because the pretrigger definition was a timing definition as well as a threshold definition. These out-of-time events were removed by requiring that no more than 40% of the PT in the octant come from more than 25 ns out of time according to the photon TVCs. This cut did not significantly alter the topology distribution of the opposite octant events. The inner zero-cross measurements were separated into events in which 1-2 trigger groups were above the cutoffs and events in which 3 or more trigger groups were above the cutoffs.
The outer half of the octant covered a significantly smaller rapidity region and did not have many events that fired more than 2 trigger groups, so the outer zero-cross measurement was not separated into topology bins. Figure 5.26 shows the pretrigger hi efficiencies summed over octants.

Figure 5.26: Pretrigger hi efficiencies averaged over all of the octants for inner (left side) and outer (right side) events. The efficiencies have been plotted as functions of π° PT for events in which the leading particle was a pion (top plots) and as functions of the appropriate half octant global trigger PT.

The pretrigger lo efficiencies (which are only used for the two gamma trigger) were measured in the same manner as the pretrigger hi efficiencies. However, the SCR and veto wall cuts on the opposite octant data set were not needed, since the pretrigger lo information was stored without these vetoes. The fits to these efficiency curves were complicated by a slow transition from ≈90% to 100% in some of the octants, which was caused by an accidental shift in the signal delays between the two gamma pretrigger unit and the two gamma pretrigger store. The fits to this data were carried out using an erf multiplied by a modulating function of the form (1 − B×Gaussian).

5.12.5 Summary of Trigger Efficiencies

Although the trigger PT variables are the proper variables for measuring the discriminator efficiencies, they are not directly related to the corresponding physics variables. Figures 5.26, 5.27, and 5.28 show the discriminator efficiencies in terms of the leading π° PT. The efficiencies should look similar for events in which the leading particle is a γ when plotted as a function of the photon PT. The efficiencies for events in which the leading particle was an η or ω will tend to have higher thresholds when plotted as functions of the leading particle PT, because much (or all) of the energy from the second (or third) photon will not contribute to the trigger signal. For the local discriminators, this is because the photons do not all land within the same local. For the global discriminators, this is a result of the large cutoffs applied to the photons because they land in different groups of 8. For the ηs and ωs, there will be an enhancement in the trigger probability for events in which all of the photons fall within the same 1 or 2 trigger groups. This effect has been simulated in the Monte Carlo using the measured efficiencies, and the results agree well with the data.

5.13 Beam Normalization

To properly normalize the cross section measurements, the number of beam particles that could have triggered the system must be known.
This quantity, known as "Live Triggerable Beam" (LTB), was determined using scalers which counted the number of beam particles passing through the experiment, as well as several quantities used to determine the fraction of the beam particles for which the trigger was ready to fire, known as the "live fraction". The factors used to calculate the live fraction were the data acquisition live fraction, the clean interaction live fraction, the pretrigger live fraction, and the octant trigger live fractions. Each of these factors was assumed to be independent of the other factors. The actual calculation of the live fraction from the scalers used the following equation:

    Live Fraction = (Clean Interaction Fraction) × (Pretrigger Live Fraction)
                    × (Trigger Live Fraction) × (Computer Live Fraction)
                  = (CLNINT/INT)
                    × ([Pretrigger-OR + No-Pretrigger]/Live_Int)
                    × (1 − [(Octant Early PT + Quadrant Veto Wall + SCR)/Gated_Int])
                    × (Live_Beam/Beam)

Each octant had its own live fraction because the vetoes from the halo rejection and early PT rejection systems varied from octant to octant. The trigger thresholds and prescaling factors were adjusted during the run to maintain a live fraction between 40% and 60% to optimize the amount of data taken. Typical values for the live fractions from each of the factors are shown in Table 5.1.

    FACTOR              LIFETIME FRACTION
    Clean Interaction   0.9-0.95
    Pretrigger          0.9-0.95
    "Trigger"           0.8-0.9
    Computer            0.75-0.85

Table 5.1: Typical ranges for the various contributions to the livetime for the 1990 run.

The overall calculation of the live triggerable beam used the following formula:

    N_LTB = N_Beam × (Live Fraction)    (5.5)

In principle, the beam scaler should have provided the number of incident beam particles that passed through the experiment's targets. However, this number must be corrected for the misalignment of the targets with respect to the beam hole counter (and the rest of the system; see Figure 3.1). This was included as a separate factor in the cross section calculations because it depends on the choice of the target fiducial region.

Figure 5.27: Efficiencies of the local lo (upper) and local hi (lower) discriminators for events in which the leading particle was a π°. The efficiencies have been averaged over all octants for the inside (left) and the outside (right) of the detector as defined by the break between the biased PT adder cards.

Figure 5.28: Efficiencies of the global lo (upper) and global hi (lower) discriminators for events in which the leading particle was a π°. The efficiencies have been averaged over all octants for the inside (left) and the outside (right) of the detector as defined by the break between the biased PT adder cards.
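As a rough numerical illustration of the bookkeeping in the live-fraction formula and Equation 5.5, the sketch below combines hypothetical scaler counts into a live fraction and an LTB count; the dictionary keys mirror the quantities named above but are not the actual E706 scaler names.

```python
def live_triggerable_beam(scalers):
    """Combine scaler counts into N_LTB = N_Beam x (Live Fraction)."""
    clean_int = scalers["CLNINT"] / scalers["INT"]
    pretrig = (scalers["PRETRIG_OR"] + scalers["NO_PRETRIG"]) / scalers["LIVE_INT"]
    trigger = 1.0 - (scalers["EARLY_PT"] + scalers["VETO_WALL"] + scalers["SCR"]) / scalers["GATED_INT"]
    computer = scalers["LIVE_BEAM"] / scalers["BEAM"]
    live_fraction = clean_int * pretrig * trigger * computer
    return scalers["BEAM"] * live_fraction, live_fraction
```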
5.14 Beam Energy

The momentum of particles transported by the secondary beam line was determined primarily by the current settings of a pair of dipole magnets. The correlation between the currents used in these magnets and the momentum of the transported beam was determined using the 800 GeV primary proton beam from the accelerator and extrapolating to the settings used for the pion data. This measurement was verified using the E706 tracking system to measure the momenta of beam particles that did not interact [36]. This sample was obtained using the prescaled beam trigger, so the events were distributed throughout the run and should properly reflect any changes that occurred during the run. The average beam momentum was determined using these techniques to be 515 GeV/c with an RMS width of 30 GeV/c.

5.15 Cross Section Calculations

The invariant cross section per nucleon (in pb/(GeV/c)²) for inclusive production of a given particle (e.g. π° or η) was given by:

    E d³σ/dp³ = [1/(2π PT ΔPT Δy)] × [1/(ρ ℓ N_A)] × N_corr    (5.6)

The first term is the phase space term for the PT and pseudo-rapidity variables used for the measurement. The second term is the factor for the nucleon area density of the target material. N_corr is the number of events in a given PT and pseudo-rapidity bin after the corrections have been applied. Corrections were applied to account for the following factors:

• Octant trigger weight
• EMLAC acceptance
• Reconstruction efficiency
• Photon conversion
• Beam absorption in the targets
• Asymmetry cut
• Veto wall cut
• Directionality cut
• Balanced PT cut
• χ²/E cut
• Muon contamination of the beam
• Target fiducial cut
• Vertex reconstruction efficiency

For the meson measurements there is also a factor for:

• Branching ratio to the measured decay mode (e.g. the two photon mode for π°s and ηs)

The sizes of the corrections for these factors are shown in Table 5.2.

    Summary of Correction Values
    Source of Correction     Correction Value
    photon conversions       1.175 (Be), 1.380 (Cu)
    beam absorption          1.054 (Be), 1.007 (Cu)
    asymmetry cut            1.333
    veto wall cut            1.05
    directionality cut       1.021
    balanced PT cut          (0.91 + 0.009×PT)^-1
    scaled χ² cut            1.016
    beam contamination       1.005
    target fiducial region   1.35
    vertex reconstruction    1.004
    branching ratio          1.012

Table 5.2: Summary of the corrections for the π° analysis. The photon conversion and beam absorption corrections have been averaged.
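A minimal sketch of how Equation 5.6 can be evaluated for a single (PT, rapidity) bin, assuming the fully corrected yield, the live triggerable beam, and the target properties are already known. Dividing by the live triggerable beam here, and the unit conversion, are illustrative assumptions about the normalization convention, not a statement of the exact procedure used.

```python
import math

AVOGADRO = 6.022e23  # nucleons per gram (N_A; the atomic weight cancels per nucleon)

def invariant_cross_section(n_corr, pt_center, dpt, dy, rho, length_cm, n_ltb):
    """Per-nucleon invariant cross section for one bin, following Eq. 5.6.
    n_corr: corrected yield; rho (g/cm^3) and length_cm: target density and length;
    n_ltb: live triggerable beam."""
    phase_space = 1.0 / (2.0 * math.pi * pt_center * dpt * dy)
    nucleons_per_cm2 = rho * length_cm * AVOGADRO
    sigma_cm2 = phase_space * n_corr / (nucleons_per_cm2 * n_ltb)
    return sigma_cm2 / 1.0e-36  # convert cm^2 to pb (1 pb = 1e-36 cm^2)
```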
Chapter 6
NEUTRAL MESON RESULTS

6.1 π° Cross Section Results

The invariant cross sections for π° production presented in this chapter were calculated using the techniques described in the previous chapter. Figure 6.1 shows the invariant cross section per nucleon for the beryllium target as a function of PT. The center of mass rapidity range for all of the results in this section has been restricted to −0.75 < y < 0.75 to avoid the large corrections associated with the rapidly decreasing acceptance values outside of this range. The errors shown in all of the figures in this section are statistical. The systematic errors on these measurements are discussed in Section 6.5. Table 6.1 contains the values plotted in Figure 6.1.

Figure 6.2 shows a comparison of the E706 results with a selection of results from other experiments measuring pion production. The E706 data is higher than the results from the other experiments because of the higher center of mass energy. The E706 data sample is clearly much larger than most of the previous measurements, and the use of multiple trigger thresholds ensured good statistical coverage over the entire range of transverse momentum accessible to the experiment. The limitations on measuring the low PT signal were imposed by the efficiency and resolution limitations of the low energy shower reconstruction. The rapidity distributions for the π° events in several PT bins are shown in Figure 6.3 and tabulated in Tables 6.2-6.5. The rapidity distributions are shifted forward (toward the positive rapidity side) because the rapidity has been calculated in the pion-nucleon reference frame instead of the parton center of mass frame.

Figure 6.4 shows the invariant cross section per nucleon for the copper target as a function of PT. The values for pion production from the copper target are shown in Table 6.6. Figure 6.4 also shows a comparison between the measured cross sections and NLL predictions for the pion cross sections using two different Q² definitions. The NLL predictions were obtained by running the program written by Aversa et al. (see reference [37]) using the fragmentation distributions described in reference [38]. The NLL data has been rescaled to compensate for the measured nuclear dependence in the E706 data. In Figure 6.4, the copper data and the corresponding NLL data were divided by a factor of 20 so that comparisons between data and theory could be shown for both of the targets simultaneously. The data in Table 6.6 has not been divided by this factor.

Figure 6.1: Inclusive π° production cross section for the Be targets. Data from the prescaled interaction, prescaled pretrigger, and single local hi triggers were used.
197 P1- Range 0' per Nucleon PT Range 0’ per N ucleon (GeV/C) (Pb/(GeV/CV) (GeV/C) (Pb/(GeV/CV) 0.60 - 0.75 (4.14 i 0.62) E+9 4.4 - 4.5 (2.172 :1: 0.019) E+3 0.75 - 0.90 (1.36 :1: 0.20) E+9 4.5 - 4.6 (1.681 :t 0.015) E+3 0.90 - 1.05 (6.7 i 1.0) E+8 4.6 - 4.7 (1.285 d: 0.013) E+3 1.05 - 1.20 (3.52 i 0.53) E+8 4.7 — 4.8 (1.007 :1: 0.011) E+3 1.20 - 1.35 (1.68 :1: 0.25) E+8 4.8 - 4.9 779 :1: 10 1.35 — 1.50 (7.7 d: 1.2) E+7 4.9 - 5.0 616.9 :1: 8.8 1.50 - 1.65 (4.09 d: 0.63) E+7 5.0 - 5.1 488.4 :1: 7.3 1.65 - 1.80 (2.27 :1: 0.36) E+7 5.1 - 5.2 387.6 :1: 6.4 1.8 - 2.0 (8.4 :t 1.3) E+6 5.2 - 5.3 300.8 :t 5.6 2.0 — 2.1 (5.62 :t 0.93) E+6 5.3 — 5.4 242.2 :1: 5.1 2.1 - 2.2 (2.731 :1: 0.053) E+6 5.4 - 5.5 198.6 :1: 4.4 2.2 - 2.3 (1.865 :1: 0.041) E+6 5.5 - 5.6 152.1 :1: 3.8 2.3 - 2.4 (1.311 :1: 0.032) E+6 5.6 — 5.7 119.1 :1: 3.3 2.4 - 2.5 (9.22 i 0.24) E+5 5.7 - 5.8 96.2 :t 3.0 2.5 — 2.6 (6.58 :1: 0.19) E+5 5.8 - 5.9 76.9 :1: 2.7 2.6 - 2.7 (5.01 :1: 0.16) E+5 5.9 - 6.0 58.4 :1: 2.2 2.7 — 2.8 (3.40 i 0.12) E+5 6.0 - 6.125 45.0 :1: 1.7 2.8 — 2.9 (2.371 :1: 0.094) E+5 6.125 - 6.25 35.7 :1: 1.6 2.9 - 3.0 (1.809 :t 0.083) E+5 6.25 - 6.375 28.6 :1: 1.3 3.0 - 3.1 (1.396 :1: 0.074) E+5 6.375 - 6.5 23.3 i 1.3 3.1 - 3.2 (9.25 :1: 0.56) E+4 6.5 - 6.625 16.29 :t 0.99 3.2 - 3.3 (7.46 :1: 0.50) E+4 6.625 - 6.75 13.60 :1: 0.88 3.3 - 3.4 (4.52 :t 0.37) E+4 6.75 — 6.875 9.68 :1: 0.73 3.4 - 3.5 (3.39 i 0.11) E+4 6.875 - 7.0 7.76 :1: 0.66 3.5 - 3.6 (2.544 :1: 0.081) E+4 7.0 - 7.25 5.11 :1: 0.38 3.6 - 3.7 (1.936 :1: 0.068) E+4 7.25 - 7.5 3.09 :t 0.30 3.7 - 3.8 (1.423 i 0.058) E+4 7.5 - 7.75 1.45 :1: 0.20 3.8 — 3.9 (1.066 :1: 0.048) E+4 7.75 - 8.0 0.97 i 0.15 3.9 - 4.0 (8.22 :1: 0.43) E+3 8.0 - 8.5 0.500 :1: 0.079 4.0 — 4.1 (6.038 :1: 0.035) E+3 8.5 - 9.0 0.160 :t 0.040 4.1 - 4.2 (4.645 :1: 0.030) E+3 9.0 - 10.0 0.038 :1: 0.014 4.2 - 4.3 (3.580 :1: 0.026) E+3 10.0 - 12 0.0021 :1: 0.0021 4.3 — 4.4 (2.817 i 0.023) E+3 Table 6.1: Invariant cross section for 1r° production for the Be target (from Figure 6.1). Ed3o/dp3 (pb GeV'z) 198 10 9 :. 10 8 :3 10 7 11.. 11: production at large p1. by n- beams 10 6 '30. 0 E706 n0 production at 75 = 31.1 GeV 5 I"-.~ :1 13258 11:" production at ‘(s = 23.7 GeV 10 ‘ a. I 5258 n' produCtion at \15 = 23.7 GeV 10 4 A NA24 n0 production at 45 = 23.7 GeV * WA70 n0 producxion at *Js = 22.9 GeV 10 3 (Nuclear target data sealed by VA) 10 2 * f ’0. 10 . . ~... * O 1 ‘ . O . -1 10 ° -2 t 10 -3 1» 10 .4 10 10- l 41 l m l 1 l 1 l 1 l 1 l 1 l m l 1 l 1 ; 0123456789101112 pT (GeV/c) Figure 6.2: Inclusive 1r" production cross section per nucleon for the Be targets. A selection of other pion production measurements with similar beam energies has been included. 199 E : _—----——— ‘03:” ___..._.. B — _ _* _._ e 1 1 -- ma. '- * _ -_.— Q . b _.. B 3 :— 4.0 10 5 0) CD Be a 10 4 n ABFKW. p ABFOW D. v x :1: (1‘1 m NLL QCD A G. 3 ., x 3 10 an: 111010.003 Mb “‘ ‘.\‘ 'c 2 . ....... 2 _ 2 m 10 ‘s‘ i. ‘g‘ Q _pT : i __ 2 _ 2 '- CU/ZO \.. ‘, ‘ u.‘~ Q - pT /4 10 p 1 ax‘ \ a '1 “s“ “x 10 “\ .“‘ ‘\\ ‘s“ ‘ '2 ‘s‘ ‘§‘ 10 s .‘ s 10 ' .4 10 1 1 1 L 1 1 1 1 J: 1 14 1 l 1 1 1 L1 1 1 l ' J: 1 1 1 1 I 1 3 4 5 6 7 8 9 10 11 12 Figure 6.4: Comparison of 1r° cross sections for Be and Cu with NLL calculations described in the text. The calculations have been rescaled to account for the mea- sured nuclear dependence. Table 6.6: Invariant cross section for 1r° production for the Cu target (see Figure 6.4). 
    PT Range (GeV/c)    σ per Nucleon (pb/(GeV/c)²)
    3.5 - 3.6           25610 ± 270
    3.6 - 3.7           20670 ± 230
    3.7 - 3.8           16310 ± 180
    3.8 - 3.9           12610 ± 150
    3.9 - 4.0           9650 ± 130
    4.0 - 4.1           7560 ± 110
    4.1 - 4.2           5707 ± 93
    4.2 - 4.3           4442 ± 90
    4.3 - 4.4           3467 ± 66
    4.4 - 4.5           2666 ± 60
    4.5 - 4.6           2111 ± 48
    4.6 - 4.7           1616 ± 40
    4.7 - 4.8           1225 ± 35
    4.8 - 4.9           1025 ± 32
    4.9 - 5.0           778 ± 27
    5.0 - 5.1           596 ± 21
    5.1 - 5.2           447 ± 18
    5.2 - 5.3           373 ± 18
    5.3 - 5.4           302 ± 15
    5.4 - 5.5           263 ± 14
    5.5 - 5.6           181 ± 11
    5.6 - 5.7           161 ± 11
    5.7 - 5.8           117.0 ± 8.7
    5.8 - 5.9           84.3 ± 7.8
    5.9 - 6.0           64.1 ± 6.7
    6.0 - 6.125         57.7 ± 5.2
    6.125 - 6.25        41.0 ± 4.4
    6.25 - 6.375        30.3 ± 3.6
    6.375 - 6.5         22.9 ± 3.1
    6.5 - 6.625         23.5 ± 3.3
    6.625 - 6.75        14.3 ± 2.3
    6.75 - 6.875        13.0 ± 2.2
    6.875 - 7.0         9.9 ± 2.2
    7.0 - 8.0           3.09 ± 0.39
    8.0 - 10.0          0.183 ± 0.080

Figure 6.5: Nuclear dependence of inclusive π° production measured using the Be and Cu targets. Note that the value of α decreases toward the shadowing value of 2/3 as PT decreases. The average value of α for the range from 4.0 GeV/c to 8.5 GeV/c is 1.1100 ± 0.0034 and seems to be constant from 3.0 GeV/c to 8.0 GeV/c.

Table 6.7: Nuclear dependence parameter α for inclusive π° production as a function of PT (see Figure 6.5).

    PT Range (GeV/c)    α
    0.6 - 0.9           0.796 ± 0.048
    0.9 - 1.2           0.857 ± 0.042
    1.2 - 1.6           0.930 ± 0.044
    1.6 - 2.0           1.002 ± 0.014
    2.0 - 2.5           1.040 ± 0.016
    2.5 - 3.0           1.080 ± 0.022
    3.0 - 3.5           1.1146 ± 0.0040
    3.5 - 4.0           1.1112 ± 0.0029
    4.0 - 4.5           1.1094 ± 0.0044
    4.5 - 5.0           1.1175 ± 0.0068
    5.0 - 5.5           1.104 ± 0.011
    5.5 - 6.0           1.097 ± 0.019
    6.0 - 6.5           1.069 ± 0.030
    6.5 - 7.0           1.128 ± 0.047
    7.0 - 7.5           1.025 ± 0.084
    7.5 - 8.5           1.127 ± 0.12

Table 6.8: Nuclear dependence of inclusive π° production as a function of rapidity (see Figure 6.7).

    Rapidity Range      α
    -0.75 - -0.60       1.074 ± 0.048
    -0.60 - -0.45       1.101 ± 0.024
    -0.45 - -0.30       1.119 ± 0.015
    -0.30 - -0.15       1.103 ± 0.013
    -0.15 - 0.00        1.120 ± 0.011
    0.00 - 0.15         1.117 ± 0.011
    0.15 - 0.30         1.120 ± 0.011
    0.30 - 0.45         1.093 ± 0.012
    0.45 - 0.60         1.122 ± 0.012
    0.60 - 0.75         1.095 ± 0.014

Figure 6.6: Comparison of E706 π° nuclear dependence measurements with E258 measurements for charged pion production. The triangles correspond to the π+ data and the squares correspond to the π− data.

Figure 6.7: Nuclear dependence of inclusive π° production as a function of rapidity for events with 4.0 < PT < 4.5 GeV/c.
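The α values tabulated above come from fitting the parameterization σ_A = σ₀ × A^α to the two targets; with only two targets the fit reduces to a simple log ratio. A minimal sketch of that bin-by-bin extraction, assuming only the Be and Cu cross sections and standard atomic weights (A_Be ≈ 9.01, A_Cu ≈ 63.55); the error propagation shown is the usual leading-order formula, not necessarily the exact procedure used in this thesis.

```python
import math

A_BE, A_CU = 9.01, 63.55  # atomic weights of the beryllium and copper targets

def alpha_from_targets(sigma_be, err_be, sigma_cu, err_cu):
    """Extract alpha from sigma_A = sigma_0 * A**alpha using two targets."""
    log_ratio_a = math.log(A_CU / A_BE)
    alpha = math.log(sigma_cu / sigma_be) / log_ratio_a
    # propagate the (independent) statistical errors on the two cross sections
    d_alpha = math.hypot(err_cu / sigma_cu, err_be / sigma_be) / log_ratio_a
    return alpha, d_alpha
```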
    PT Range (GeV/c)    σ per Nucleon (pb/(GeV/c)²)
    3.4 - 3.6           10980 ± 450
    3.6 - 3.8           7390 ± 280
    3.8 - 4.0           4490 ± 180
    4.0 - 4.2           2640 ± 110
    4.2 - 4.4           1667 ± 69
    4.4 - 4.6           991 ± 44
    4.6 - 4.8           594 ± 28
    4.8 - 5.0           369 ± 19
    5.0 - 5.25          221 ± 11
    5.25 - 5.5          112.7 ± 7.1
    5.5 - 5.75          62.9 ± 4.9
    5.75 - 6.0          34.5 ± 3.5
    6.0 - 6.5           18.1 ± 1.4
    6.5 - 7.0           4.90 ± 0.75
    7.0 - 7.5           1.52 ± 0.39
    7.5 - 8.0           0.62 ± 0.24
    8.0 - 9.0           0.273 ± 0.088
    9.0 - 10.0          0.012 ± 0.040

Table 6.9: Invariant cross section for η production for the Be target (see Figure 6.8).

Figure 6.10 also shows a comparison between the measured cross sections and NLL predictions obtained from the code written by Aversa et al. using the fragmentation functions described in [40]. Based on the pion results, Q² = PT²/4 has been used for the η comparison. Within the NLL calculations, the etas are constrained to lie within a cone of size δ (in radians) with respect to the original parton to eliminate backgrounds from spectator particles. The values shown in Figure 6.10 reflect the range that the authors of the NLL code consider to be reasonable. The NLL data has been rescaled to compensate for the measured nuclear dependence in the E706 data. In Figure 6.10, the copper data and the corresponding NLL data were divided by a factor of 20 so that comparisons between data and theory could be shown for both of the targets simultaneously. The data in Table 6.14 has not been divided by this factor.

Figure 6.8: Inclusive η production cross section for the Be targets.

Figure 6.11: Nuclear dependence of inclusive η production measured using the Be and Cu targets (open circles). Using the values from 3.5 GeV/c to 7.0 GeV/c, the average value of α is determined to be 1.137 ± 0.019. The star indicates a preliminary measurement of α for ω production.

The systematic uncertainties in the cross section measurements came primarily from the following sources:

• The reconstruction efficiency
The uncertainties in the reconstruction efficiencies were determined by measuring the width of the residuals between the original Monte Carlo data points and the surface fits to the efficiencies. Using this technique, the uncertainties were estimated to be ≈5% for PT < 6 GeV/c and ≈8% for PT > 6 GeV/c for pions, and ≈5% for etas.

• The Monte Carlo energy scale
Using the same techniques used for the data, the uncertainty in the Monte Carlo energy scale was determined to be 0.5%, which results in uncertainties of the same size as those attributed to the uncertainty in the EMLAC energy scale.

• Trigger Corrections
The trigger corrections were estimated by comparing cross section results obtained from different triggers in the regions where they overlapped. The uncertainties in the overall cross sections were found to be ≈5% around 4 GeV and decreased to ≈1% above ≈6 GeV. However, the trigger uncertainties for the backward rapidity regions were larger due to the limited statistics in the outer regions.
The uncertainties below 4 GeV were generally less than 5%, since there were a number of overlapping thresholds that were used to avoid including the threshold regions of each trigger. The data below 2 GeV was taken with the prescaled interaction trigger, which has no efficiency correction, but does have a different normalization correction than the other triggers.

• The Photon Conversion Corrections
The uncertainty in the cross sections associated with the current understanding of the materials that photons must go through before passing the magnet is estimated to be ≈3%.

• Beam Normalization
The overall Live Triggerable Beam measurement had an uncertainty of 10% due to problems with electronics units and losses during the acquisition and processing phases.

• Signal Definitions
The uncertainties associated with the definitions of the mass and sideband regions were estimated to be 0.5% for the pions and 1% for the etas.

• Target Fiducial Region
The uncertainty associated with measurement of the fraction of beam particles falling within the offline target fiducial region was estimated to be ≈2%. This estimate was obtained by comparing the fractions of particles falling within the fiducial cuts for the two pairs of SSD planes immediately upstream of the target and the two pairs immediately downstream of the target and extrapolating the variations over the length of the target region.

• Beam Halo Rejection
The uncertainty associated with measurement of signals lost due to the muon rejection cuts was estimated to be 1%.

• Vertex Definition
The uncertainty in separating the vertices in the beryllium targets from the vertices in the copper targets was estimated to be ≈1%, based on the sizes of the "tails" of the copper and beryllium distributions.

• Target Specifications
The uncertainties associated with the measurement of the actual dimensions and density of the targets were estimated to be ≈0.7% and ≈0.3% (respectively).

Adding these uncertainties in quadrature gives overall systematic uncertainties of ≈15% in the π° cross section at 4 GeV/c and ≈17% at 8 GeV/c. The systematic uncertainty in the η cross section is estimated to be ≈15% at 4 GeV/c and ≈16% at 8 GeV/c.

For the nuclear dependence measurements, the systematic uncertainties came primarily from the following sources:

• The Photon Conversion Corrections
The uncertainty in the cross sections associated with the current understanding of the materials that photons must pass through before passing the magnet is estimated to be ≈3%. This corresponds to an uncertainty of ≈1.5% in α.

• Vertex Definition
The uncertainty in separating the vertices in the beryllium targets from the vertices in the copper targets was estimated to be ≈1%. This corresponds to a 0.5% uncertainty in the α measurements.

• Target Specifications
The uncertainty associated with the measurement of the actual dimensions and density of the targets was estimated to be ≈0.7%. This corresponds to a 0.4% uncertainty in the α measurements.

• Target Fiducial Region / Beam Skewing
The uncertainty associated with the possibility that the fraction of the beam passing through the target fiducial region is different between the two targets is estimated to be ≈2%. This estimate is based on an extrapolation of the variations in the fiducial fraction measured in SSD modules at different z locations. This corresponds to an uncertainty of ≈1% in the α measurements.
Most of the other corrections cancel out in the measurement of α as long as α does not have a strong dependence on rapidity (which is consistent with Figure 6.7). Adding the uncertainties in quadrature gives an overall systematic uncertainty of 1.9% in the α measurements.

Chapter 7
DIRECT PHOTON ANALYSIS

7.1 Overview

The analysis of the direct photon sample was performed using many of the same techniques developed for the analysis of the neutral mesons. The following sections will only discuss those aspects of the direct photon analysis that were different from the neutral meson analysis. Cuts and corrections which did not change include the vertex definition and reconstruction efficiency, the longitudinal shower development requirement, and the muon bremsstrahlung requirements. The EMLAC fiducial cuts were not changed, but the corrections were slightly different because only one photon was required to fall within the geometric acceptance region instead of two photons. The determinations of the trigger corrections and photon conversion corrections were performed using the techniques described in Chapter 5.

7.2 Rejection of Charged Particle Showers

One source of background for the photon cross section measurement comes from electron showers. These showers are produced when one of the photons from a neutral meson decay is converted into an electron-positron pair. The EMLAC showers produced by these electrons are very similar to the showers from real photons, so they cannot easily be removed by placing requirements on the shower parameters. However, these electron showers can be identified by extrapolating the charged particle tracks to the front face of the EMLAC and matching the tracks with the reconstructed showers. Figure 7.1 shows the distribution of the square of the distance between the tracks and the showers at the front of the LAC. Showers that fall within 1.0 cm of a charged particle track are removed from the photon sample. The correction for accidental overlaps between photon-initiated showers and charged tracks was 1.0163.

While this cut is effective in removing showers initiated by charged particles coming from the target region, it is not as effective in removing showers initiated by charged beam halo particles, for two reasons. The first is the limited timing window of the tracking system. The timing of the halo particles is random with respect to the incoming beam particles. To generate a LAC trigger, they must fall within a window of 26 RF buckets around the arrival time of an interaction. However, the tracking system was fully efficient only for particles that fell within 1-2 buckets of the interaction signal. The halo particles that arrive outside this timing window cannot be identified using the tracking system. The efficiency of the charged track cut for rejecting muons is also reduced by the techniques used in the reconstruction of the shower positions. In EMREC the shower positions are measured using the SUM view signals, which are produced by adding the front and back section strips together. This has the effect of moving the back section signal positions to the front face of the EMLAC using the assumption that these signals came from the target region. For halo particles, which produce most of their signals in the back section, this produces a relatively large shift (≈1-2 cm) in the R position of the shower, which substantially reduces the likelihood of matching a muon track with the corresponding shower. The beam halo backgrounds must be eliminated using the cuts discussed in Section 5.6.

Figure 7.1: Distribution of ΔR² = Δx² + Δy² between the positions of the charged particle tracks extrapolated to the front of the LAC and the reconstructed shower positions.
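A minimal sketch of the track-shower matching just described, assuming extrapolated track impact points and reconstructed shower positions are available as (x, y) pairs in cm at the front face of the EMLAC; the names are illustrative, not the E706 reconstruction interface.

```python
def is_charged_shower(shower_xy, track_impacts, max_dist_cm=1.0):
    """Flag a shower as charged-particle initiated if any extrapolated track
    lands within 1.0 cm of it (the cut illustrated in Figure 7.1)."""
    sx, sy = shower_xy
    for tx, ty in track_impacts:
        dr2 = (sx - tx) ** 2 + (sy - ty) ** 2
        if dr2 < max_dist_cm ** 2:
            return True
    return False
```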
7.3 Balanced PT Cut

The definition of the balanced PT ratio and the cut value did not change for the direct photon analysis. However, the distribution of ratio values was slightly different for the photon sample. The discrepancy is caused by differences in the recoil jets. For an event containing a direct photon, the photon contains all of the momentum for the trigger side. However, for leading neutral mesons, the trigger particle contains only part of the momentum of the trigger side jet. This means that meson and photon events with the same trigger particle PT values will (on average) have different recoil jet momenta; the pion recoil jets will have larger momenta on average. The correction for the balanced PT cut for the photons was (0.927 + 0.005×PT)^-1.

7.4 The Direct γ Signal Definitions

Several different definitions of the direct photon signal cuts were used to study the direct photon sample. The most important of these were the "75S" and "90S" schemes. These two schemes differed only in the asymmetry requirements placed on the rejection of photons coming from neutral pions (the numeric portions of the names are just 100 times the asymmetry cut value for the pions). The first step in defining the photon candidates was to calculate the mass of the photon taken in combination with each of the other photons in the octant. If none of the masses corresponding to these combinations fell within the defined pion or eta mass bands (see Chapter 5), or if they did so only with asymmetry values higher than the specified cuts, then the photon was considered a candidate. The asymmetry cut for the etas was 0.75 for both of these schemes. If the photon candidate (in combination with another photon in the octant) formed a mass that fell within one of the defined mass ranges (and the asymmetry was below the cutoffs), then the photon was rejected. It is known from the neutral meson analysis that there is a background to the mass peaks, so some of the direct photon candidates are removed by this cut. To correct for this, photons that form masses in the sideband regions are weighted by a factor of 2. This effectively adds back the events that were improperly identified as coming from neutral meson decays.
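A rough sketch of the candidate definition just described, assuming the two-photon mass and energy asymmetry are available for every pair in the octant; the mass-band and sideband edges shown are placeholders, not the values used in the thesis.

```python
def photon_weight(i, n_photons, pair_mass, pair_asym,
                  pi0_band=(0.10, 0.17), eta_band=(0.50, 0.65),
                  sidebands=((0.07, 0.10), (0.17, 0.20)),
                  pi0_asym_cut=0.75, eta_asym_cut=0.75):
    """Weight for photon i under a 75S-like scheme: 0 if it pairs into a meson
    mass band below the asymmetry cut (rejected), 2 if it only pairs into a
    sideband (compensates for signal removed by the band cut), else 1.
    The 90S scheme would simply use pi0_asym_cut=0.90."""
    in_sideband = False
    for j in range(n_photons):
        if j == i:
            continue
        m, a = pair_mass(i, j), pair_asym(i, j)
        if pi0_band[0] < m < pi0_band[1] and a < pi0_asym_cut:
            return 0
        if eta_band[0] < m < eta_band[1] and a < eta_asym_cut:
            return 0
        if any(lo < m < hi for lo, hi in sidebands):
            in_sideband = True
    return 2 if in_sideband else 1
```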
7.5 Background Subtraction

The signal definitions discussed in the previous section make use of the fine spatial resolution of the EMLAC to reject photons coming from neutral meson decays. However, even with good spatial resolution some of the meson decays will not be reconstructed. The remaining amount of background from neutral mesons is determined using the neutral meson events generated by the HERWIG Monte Carlo. These events were processed using the full analysis code to obtain number distributions from the direct photon analysis and the pion analysis. The ratio of the distribution of direct photon candidates obtained from the neutral meson Monte Carlo events to the pion distribution obtained from these events (known as "gamma to pi") was used, so it was not necessary to normalize these events. The number of background events to be subtracted from the data is obtained by multiplying the γ/π ratio obtained from the Monte Carlo meson events by the measured pion cross section.

Figures 7.2 and 7.3 show the gamma distributions obtained from the data before background subtraction and from the neutral meson Monte Carlo events for the 75S and 90S schemes, respectively. It is clear from these plots that the fraction of the candidates that come from background processes becomes high at low PT, which makes it difficult to reliably determine the signal. However, at high PT values the signal is much larger than the background and one can reliably measure the signal. At the very highest PT values the ratio of the direct photon candidates to the π°s actually exceeds unity. A fit to the Monte Carlo γ/π ratio for the background as a function of PT was performed and used for the subtraction process. Using this fit, the subtracted photon signal was defined on a bin-by-bin basis to be:

    N(γ)_subtracted = N(γ)_unsubtracted,Data − (N(γ)/N(π°))_MonteCarlo × N(π°)_Data    (7.1)

The mixture of neutral mesons used in the measurement of the background was the default mixture obtained from HERWIG. However, the ratios of η/π and ω/π were checked and found to be in reasonable agreement with the values measured by the experiment.

Figure 7.2: Comparison of γ/π distributions from data (solid circles) and Monte Carlo of background processes (open circles). π°s and ηs with asymmetry values less than 0.75 have been rejected and the corresponding sidebands have been added back in.

Figure 7.3: Comparison of γ/π distributions from data (solid circles) and Monte Carlo of background processes (open circles). π°s with asymmetry values less than 0.90 have been rejected and the corresponding sidebands have been added back in. ηs with asymmetry values less than 0.75 have been rejected and the corresponding sidebands have been added back in.
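A minimal sketch of the bin-by-bin subtraction in Equation 7.1, assuming binned arrays for the raw photon candidates, the background γ/π ratio from the meson Monte Carlo (or its fitted value per bin), and the measured π° yield; the array names are illustrative.

```python
import numpy as np

def subtract_background(n_gamma_raw, gamma_over_pi_mc, n_pi0_data):
    """Eq. 7.1: subtracted photon yield per PT bin."""
    n_gamma_raw = np.asarray(n_gamma_raw, dtype=float)
    background = np.asarray(gamma_over_pi_mc) * np.asarray(n_pi0_data)
    return n_gamma_raw - background
```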
Chapter 8
DIRECT PHOTON RESULTS

8.1 Distribution of Direct Photon Sample

The number distribution of direct photons produced in the beryllium target is shown in Figure 8.1. The reconstruction efficiency has not been determined yet, so this is not a true cross section. If the data came strictly from meson decays, then one would expect that the slope of the result would be similar to the slope of the meson cross sections. However, it is clear from this plot that the slope of the photon candidates is shallower than that for the mesons, which is a good verification that the data come from direct photons instead of mesons.

Figure 8.1: Number distribution of photon candidates as a function of PT. The 75S scheme has been applied to the sample. The photon reconstruction efficiency has not been included, and the number distribution will be more sensitive to the background subtraction than the measurements of α.

8.2 Direct γ Nuclear Dependence

The nuclear dependence of the γ cross section was measured using the parameterization given in equations 1.8 and 1.9. Figure 8.2 and Table 8.1 show the values of the nuclear dependence parameter α measured for inclusive gamma production as a function of PT using the 75S scheme. The center of mass rapidity range has been restricted to −0.75 < y < 0.75. The uncertainties are statistical only. Figure 8.3 shows the sensitivity of this measurement to the definition of the background subtraction. The 90S scheme depends more heavily on the Monte Carlo reproducing the behavior of the detector properly in the high asymmetry regions and is more likely to be sensitive to any deficiencies. Excluding this bin, the data in the PT range from 4.0 GeV/c to 8.5 GeV/c are consistent with a single value. Fitting this region gives an average value of 1.024 ± 0.016 for α in inclusive direct photon production.

    PT Range (GeV/c)    α
    3.5 - 4.0           1.014 ± 0.021
    4.0 - 4.5           1.033 ± 0.027
    4.5 - 5.0           0.957 ± 0.035
    5.0 - 5.5           1.055 ± 0.038
    5.5 - 6.0           1.062 ± 0.043
    6.0 - 6.5           1.032 ± 0.058
    6.5 - 7.0           0.917 ± 0.092
    7.0 - 7.5           1.086 ± 0.093
    7.5 - 8.5           1.044 ± 0.100

Table 8.1: Nuclear dependence parameter α for inclusive direct photon production as a function of PT using the 75S scheme (see Figure 8.2). The errors shown are statistical errors only.

8.3 Systematic Errors

For the direct photon nuclear dependence measurements, the systematic uncertainties came primarily from the following sources:

• The Photon Conversion Corrections
The uncertainty in the cross sections associated with the current understanding of the materials that photons must pass through before passing the magnet is estimated to be ≈1.5%. This corresponds to an uncertainty of ≈0.8% in α.

• Vertex Definition
The uncertainty in separating the vertices in the beryllium targets from the vertices in the copper targets was estimated to be ≈1%. This corresponds to a 0.5% uncertainty in the α measurements.

• Target Specifications
The uncertainty associated with the measurement of the actual dimensions and density of the targets was estimated to be ≈0.7%. This corresponds to a 0.4% uncertainty in the α measurements.

• Target Fiducial Region / Beam Skewing
The uncertainty associated with the possibility that the fraction of the beam passing through the target fiducial region is different between the two targets is estimated to be ≈2%. This estimate is based on an extrapolation of the variations in the fiducial fraction measured in SSD modules at different z locations. This corresponds to an uncertainty of ≈1% in the α measurements.

• The Background Subtraction
There are several factors contributing to the uncertainty of the background subtraction. A simplified Monte Carlo that included the geometric and asymmetry cuts, but did not include a full reconstruction, was used to estimate the uncertainties in the subtraction. The uncertainty in the background γ/π ratio due to the uncertainty in the energy scale was found to be less than 0.005. The uncertainty due to the uncertainties in the material positions and dimensions for the conversion probability calculations was found to be less than 0.001. Thus, the overall uncertainty in the γ/π ratio is approximately 0.005. This corresponds to systematic uncertainties in the cross section of 7% for the 4.0 to 4.5 GeV/c range, 3% for the 5.0 to 5.5 GeV/c range, 2% for the 6.0 to 6.5 GeV/c range, and ≈1% above 6.5 GeV/c. The corresponding uncertainties in the determination of α are 4% for the 4.0 to 4.5 GeV/c range, 2% for the 5.0 to 5.5 GeV/c range, 1% for the 6.0 to 6.5 GeV/c range, and less than 1% above 6.5 GeV/c.

The other corrections cancel out in the measurement of α as long as α does not have a strong dependence on rapidity. Adding the uncertainties in quadrature gives overall systematic uncertainties for α of 4% at 4.0-4.5 GeV/c, 2% at 5.0-5.5 GeV/c, 2% at 6.0-6.5 GeV/c, and 2% or less above 6.5 GeV/c.

Figure 8.2: Nuclear dependence of inclusive γ production measured using the Be and Cu targets. π°s with asymmetries less than 0.75 have been rejected and the remaining contamination has been removed using the Monte Carlo γ/π measurements. Using the values from 4.0 GeV/c to 8.5 GeV/c, the average value of α is determined to be 1.024 ± 0.016.

Figure 8.3: Comparison of inclusive γ nuclear dependence results obtained using the 75S scheme (open circles) and the 90S scheme (solid circles). The agreement is very good above 4.0 GeV/c PT. Below 3.5 GeV/c the measurement is extremely sensitive to the accuracy of the background subtraction. Using the values obtained with the 90S scheme for the PT range from 4.0 GeV/c to 8.5 GeV/c, the average value of α is determined to be 1.022 ± 0.015, which is in good agreement with the value obtained using the 75S scheme.
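The single-value results quoted above (e.g. 1.024 ± 0.016) are effectively fits of a constant to the per-bin α values, which is equivalent to an inverse-variance weighted average. A minimal sketch of that average, under the assumption that simple statistical weighting is adequate:

```python
def weighted_average(values, errors):
    """Inverse-variance weighted mean and its statistical uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, (1.0 / sum(weights)) ** 0.5
```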
    PT Range (GeV/c)        α
    3.5 - 4.0         1.014 ± 0.021
    4.0 - 4.5         1.033 ± 0.027
    4.5 - 5.0         0.957 ± 0.035
    5.0 - 5.5         1.055 ± 0.038
    5.5 - 6.0         1.062 ± 0.043
    6.0 - 6.5         1.032 ± 0.058
    6.5 - 7.0         0.917 ± 0.092
    7.0 - 7.5         1.086 ± 0.093
    7.5 - 8.5         1.044 ± 0.100

Table 8.1: Nuclear dependence parameter α for inclusive direct photon production as a function of PT using the 75S scheme (see Figure 8.2). The errors shown are statistical only.

Figure 8.2: Nuclear dependence of inclusive γ production measured using the Be and Cu targets. π⁰s with asymmetries less than 0.75 have been rejected and the remaining contamination has been removed using the Monte Carlo γ/π measurements. Using the values from 4.0 GeV/c to 8.5 GeV/c, the average value of α is determined to be 1.024 ± 0.016.

Figure 8.3: Comparison of inclusive γ nuclear dependence results obtained using the 75S scheme (open circles) and the 90S scheme (solid circles). The agreement is very good above 4.0 GeV/c PT. Below 3.5 GeV/c the measurement is extremely sensitive to the accuracy of the background subtraction. Using the values obtained with the 90S scheme for the range from 4.0 GeV/c to 8.5 GeV/c, the average value of α is determined to be 1.022 ± 0.015, which is in good agreement with the value obtained using the 75S scheme.

    PT Range (GeV/c)        α
    3.5 - 4.0         1.050 ± 0.020
    4.0 - 4.5         1.035 ± 0.026
    4.5 - 5.0         0.956 ± 0.032
    5.0 - 5.5         1.045 ± 0.036
    5.5 - 6.0         1.065 ± 0.040
    6.0 - 6.5         1.017 ± 0.056
    6.5 - 7.0         0.948 ± 0.083
    7.0 - 7.5         1.077 ± 0.091
    7.5 - 8.5         1.044 ± 0.098

Table 8.2: Nuclear dependence parameter α for inclusive direct photon production as a function of PT using the 90S scheme (see Figure 8.3). The errors shown are statistical only.
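The quoted averages over the 4.0 to 8.5 GeV/c region can be reproduced from the tabulated values. The sketch below assumes that the fit described in the text is a single-parameter fit to a constant, which is equivalent to an inverse-variance weighted mean of the per-bin values; it uses the 75S numbers from Table 8.1.

```python
import numpy as np

# 75S-scheme alpha values and statistical errors for the 4.0-8.5 GeV/c bins
# of Table 8.1.
alpha = np.array([1.033, 0.957, 1.055, 1.062, 1.032, 0.917, 1.086, 1.044])
err   = np.array([0.027, 0.035, 0.038, 0.043, 0.058, 0.092, 0.093, 0.100])

# Fit to a constant = inverse-variance weighted mean.
w = 1.0 / err ** 2
mean = np.sum(w * alpha) / np.sum(w)
unc  = 1.0 / np.sqrt(np.sum(w))
print(f"<alpha> = {mean:.3f} +/- {unc:.3f}")  # approximately 1.024 +/- 0.016
```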
Chapter 9

CONCLUSIONS

The nuclear dependence of inclusive direct photon production and inclusive neutral meson production by a 515 GeV/c π⁻ beam has been measured using data collected by the E706 experiment at Fermilab. The experiment used a finely segmented liquid argon calorimeter and a high precision charged particle spectrometer to make precision measurements of inclusive direct photon, neutral pion, and η production in the rapidity interval -0.75 < y < 0.75. The data sample covers a wide range in PT and rapidity and provides unique information on the nuclear dependence of neutral meson and direct photon production.

The inclusive production of neutral pions is found to be consistent with the earlier measurements of the nuclear dependence of charged pion production made by E258 for the PT range where the experiments overlap, roughly 1 GeV/c to 6 GeV/c. No significant variation in the nuclear dependence of pion production as a function of rapidity is observed. The data from the beryllium and copper targets were fit using the parameterization σ_A = σ_0 × A^α. Using this parameterization, the value of α was measured to be 1.110 ± 0.003 (statistical) ± 0.019 (systematic) for inclusive π⁰ production in the PT range from 4.0 GeV/c to 8.5 GeV/c. The nuclear dependence of η production is found to be similar to that for pion production, although the measured value of α is somewhat higher. For the PT range from 3.5 GeV/c to 7.0 GeV/c, the value of α for η production is measured to be 1.14 ± 0.02 (statistical) ± 0.02 (systematic). These two values are consistent with being the same, which supports the idea that the effects are due to rescattering at the parton level within the nuclei.

The value of α obtained for inclusive direct photon production in the PT range from 4.0 GeV/c to 8.5 GeV/c is 1.02 ± 0.02 (statistical) ± 0.03 (systematic). This value differs from the corresponding values reported for neutral meson production and is consistent with no anomalous enhancement. It is important to note that a large portion of the systematic effects in the meson and photon calculations are common to both and are therefore not relevant to determining the significance of the difference between the values. In the framework of the simple rescattering model, the fact that the value of α is near unity for the photons indicates that rescattering of the incident partons is not a large factor in the anomalous nuclear effects. Calculations of these processes within a QCD framework will be available in the very near future [42] and should increase our understanding of these effects.

BIBLIOGRAPHY

[1] Alverson, G., et al., Phys. Rev. Lett., 1992, V68, 2584.
[2] Alverson, G., et al., Phys. Rev. D, 1992, V45, 3899.
[3] Alverson, G., et al., Phys. Rev. D, 1993, V48, 6.
[4] Alverson, G., et al., Phys. Rev. D, 1993, V49, 3106.
[5] Quigg, C., Scientific American, April 1985, 84.
[6] Parker, B., Search for a Supertheory, 1987, Plenum Publishing Corporation, ISBN 0-306-42702-8.
[7] Particle Data Group, Phys. Rev. D, 1994, V50, 1173.
[8] Owens, J. F., Reviews of Modern Physics, 1987, V59, Number 2, 465.
[9] Sterman, G., et al., Handbook of Perturbative QCD, Version 1.1, September 1994, FERMILAB-Pub-94/316.
[10] Cronin, J. W., et al., Phys. Rev. D, 1975, V11, 3105.
[11] Antreasyan, D., et al., Phys. Rev. Lett., 1977, V38, 112.
[12] Fields, T., "A-Dependent Effects in High PT Reactions", June 1994, ANL-HEP-CP-94-40.
[13] Lev, M., and B. Petersson, Z. Phys. C, V21, 1983, 155.
[14] Miettinen, H. I., and J. Pumplin, Phys. Rev. Lett., V42, 1979, 204.
[15] Luo, M., J. Qiu, and G. Sterman, Phys. Lett. B, V279, 1992, 377.
[16] Ferbel, T., and W. R. Molzon, Rev. Mod. Phys., 1984, V56, 181.
[17] Aurenche, P., and M. R. Whalley, Preprint RAL-89-106, DPDG/89/04, September 1989.
[18] Huston, J., in Proceedings of the 1989 International Symposium on Lepton and Photon Interactions at High Energies, 1989, World Scientific, 348.
[19] Lederman, L. M., Scientific American, March 1991, 48.
[20] Kourbanis, I., unpublished Ph.D. Thesis, Northeastern University, Boston, MA, 1989.
[21] Osborne, G., E706 Note 197.
[22] Jesik, R., et al., Phys. Rev. Lett., V74, 1995, 495.
[23] Bromberg, C., et al., Nucl. Instr. and Meth., 1991, A307, 292.
[24] Lirakis, C. B., unpublished Ph.D. Thesis, Northeastern University, Boston, MA.
[25] Berger, E. L., Phys. Rev. D, 1982, V26, 105.
[26] Sorrell, L., E706 Note 201, 1994.
[27] Bromberg, C., S. R. W. Cooper, and R. A. Lewis, Nucl. Instr. and Meth., 1982, V200, 245.
[28] Huston, J., private communication.
[29] Alverson, G. O., and E. L. Pothier, E706 Note 139, 1985.
[30] Brun, R., et al., ZEBRA User's Guide, CERN Computer Center program library DD/EE/85-6.
[31] Klein, H. J., and J. Zoll, PATCHY Reference Manual, CERN Computer Center program library, 1983.
[32] Marchesini, G., et al., HERWIG V5.6, FNAL Computing Department PU0124, 1993.
[33] Varelas, N., "π⁰ Production at High Transverse Momenta from π⁻ Collisions at 520 GeV/c on Be and Cu Targets", Ph.D. Thesis, University of Rochester, 1994.
[34] Varelas, N., "Calibration of the Fermilab E706 Liquid Argon Electromagnetic Calorimeter", submitted to Proceedings of the 4th International Conference on Advanced Technology and Particle Physics, Como, Italy, 3-7 October 1994.
[35] Brun, R., et al., GEANT3 User's Guide, CERN Computer Center program library DD/EE/84-1.
[36] Miller, E706 Note 200, 1994.
[37] Aversa, F., et al., Nucl. Phys. B, 1989, V233, 105.
[38] Chiappetta, P., et al., Nucl. Phys. B, 1994, V412, 3.
[39] Frisch, H. J., et al., Phys. Rev. D, 1983, V27, 1001.
[40] Greco, M., and S. Rolli, Z. Phys. C, V60, 169.
[41] De Barbaro, L., private communication.
[42] Qiu, J., private communication.
[43] Quigg, C., Gauge Theories of the Strong, Weak, and Electromagnetic Interactions, 1983, Benjamin/Cummings Publishing, Inc.