USE OF STATISTICAL RESAMPLING TECHNIQUES FOR THE LOCAL CALIBRATION OF THE PAVEMENT PERFORMANCE PREDICTION MODELS

By

Wouter Christo Brink

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

Civil Engineering - Doctor of Philosophy

2015

ABSTRACT

LOCAL CALIBRATION OF THE PAVEMENT PERFORMANCE PREDICTION MODELS USING RESAMPLING TECHNIQUES

By

Wouter Christo Brink

The performance prediction models in the Pavement-ME design software are nationally calibrated using in-service pavement material properties, pavement structure, climate and truck loadings, and performance data obtained primarily from the Long-Term Pavement Performance (LTPP) program. The nationally calibrated models may not perform well if the inputs and performance data used to calibrate them do not represent local design and construction practices. Therefore, before implementing the new M-E design procedure at a local level, each state highway agency should evaluate how well the nationally calibrated performance models predict the measured field performance. Local calibration of the Pavement-ME performance models is recommended to improve the performance prediction capabilities so that they reflect local conditions and design practices. During the local calibration process, traditional calibration techniques (such as split sampling) may not provide adequate results when only a limited number of pavement sections is available. Consequently, there is a need to employ statistical and resampling methodologies that are more efficient and robust for model calibration, given the data-related challenges encountered by State Highway Agencies.

The main objectives of this study were to demonstrate the local calibration of the rigid and flexible pavement performance models and to compare the calibration results obtained from different resampling techniques. Additionally, the input and measured performance data collection efforts were established for Michigan. For flexible pavements, the alligator cracking, rutting, thermal cracking, and IRI performance models were calibrated for Michigan conditions; for rigid pavements, the transverse cracking, joint faulting, and IRI performance models were calibrated to reflect Michigan conditions. Several different sampling techniques and dataset options were utilized to calibrate each of the models. These datasets included combinations of pavement types such as newly reconstructed, rehabilitation, and LTPP sections. Initially, the models were calibrated using the entire dataset; then split sampling was performed. Because of the limitations of these techniques, repeated split sampling, bootstrapping, and jackknifing were used to randomly select pavement sections for the local calibration. The bootstrap is a nonparametric and robust resampling technique for estimating standard errors and confidence intervals of a statistic. The main advantage of bootstrapping is that model parameters can be estimated without making distributional assumptions. The major contribution of this work was to demonstrate the use of resampling techniques to locally calibrate the performance prediction models for newly constructed and rehabilitated pavements. The results of the local calibration and validation of the various models show that the locally calibrated models significantly improved the performance predictions for Michigan conditions. The local calibration coefficients for all performance models are documented.
Additionally, recommendations for future calibrations are presented to improve the current local calibration.

ACKNOWLEDGEMENTS

I would like to thank Dr. Neeraj Buch for being an excellent mentor over the past six years of graduate school. Without your guidance and encouragement, I surely would have been less successful in my graduate career. You have made me the researcher that I am today, and I am forever grateful. I would also like to thank my committee members: Dr. Karim Chatti, Dr. Gilbert Baladi, and Dr. Mark Urban-Lurain for your guidance, advice, and support throughout the duration of my research. Thank you to Dr. Syed Waqar Haider for all your help and guidance throughout this research and the late nights spent at the Engineering building. Thank you to Michael Eacker and Justin Schenkel (Michigan Department of Transportation) for your contributions to various aspects of this research over the past four years. Thank you to Mr. Tim Hinds for all your support throughout my graduate career. It has been an honor to get to know you as a colleague and a friend. Thank you to Margaret Connor for making the paperwork process so easy and for your help with the administrative side of the process. It was truly a big help! I would like to thank my classmates and friends for all of your help and for making my time here filled with great memories and many laughs. Thanks Sean Woznicki, Sudhir Varma, and Gopikrishna Musunuru. Finally, thank you to my mother and father for your love and support throughout all my studies at MSU and for being there when I needed you.

TABLE OF CONTENTS

LIST OF TABLES  x
LIST OF FIGURES  xv
1 - INTRODUCTION  1
1.1 PROBLEM STATEMENT  2
1.2 RESEARCH OBJECTIVES  3
1.3 LAYOUT OF THE DISSERTATION  3
2 - LITERATURE REVIEW  5
2.1 INTRODUCTION  5
2.2 LOCAL CALIBRATION PROCESS  6
2.3 LOCAL CALIBRATION EFFORTS AND CHALLENGES  11
2.3.1 Local Calibration Efforts  17
2.3.1.1. Load related cracking in flexible pavements  17
Alligator cracking transfer function  18
Longitudinal cracking transfer function  21
2.3.1.2. Transverse (thermal) cracking model  22
2.3.1.3. Rutting model  25
2.3.1.4. IRI model (flexible pavements)  28
2.3.1.5. Transverse cracking model (rigid pavements)  30
2.3.1.6. Faulting model  32
2.3.1.7. IRI model (rigid pavements)  34
2.3.2 Challenges and Lesson Learned  36
2.3.2.1. Other Challenges  39
2.4 IMPLEMENTATION EFFORTS IN MICHIGAN  39
2.4.1 MDOT Sensitivity Study  40
2.4.2 Pavement Rehabilitation Evaluation in Michigan  42
2.4.3 HMA Mixture Characterization in Michigan  43
2.4.4 Traffic Inputs in Michigan  44
2.4.5 Unbound Material Inputs in Michigan  45
2.4.6 Coefficient of Thermal Expansion  48
3 - NEED FOR LOCAL CALIBRATION, PROJECT SELECTION AND DATA REQUIREMENTS  49
3.1 INTRODUCTION  49
3.2 MOTIVATION  49
3.3 DATA COLLECTION EFFORTS  50
3.3.1 Project Selection Criteria  50
3.3.1.1. Identify the Minimum Number of Required Pavement Sections  51
3.3.1.2. MDOT Pavement Management System Condition Data  52
Selected Distresses  52
Pavement distress unit conversion of HMA pavements  54
Pavement distress unit conversion for JPCP designs  56
3.3.1.3. Available In-Service Pavement Projects  57
3.3.1.4. Summary of Selected Projects  62
3.3.1.5. Extent of Measured Pavement Performance  65
3.3.1.6. Refining Selected Pavement Sections based on Measured Performance  79
3.3.2 Input Data Collection  86
3.3.2.1. Pavement Cross-Section and Design Feature Inputs  87
3.3.2.2. Traffic Inputs  88
3.3.2.3. As-Constructed Material Inputs  94
HMA layer inputs  94
Level 1 HMA inputs  95
Level 3 HMA layer inputs  96
PCC material inputs  100
PCC strength  100
Coefficient of thermal expansion  102
Aggregate base/subbase and subgrade input values  102
Environmental Inputs  109
3.4 SUMMARY  109
4 - LOCAL CALIBRATION PROCEDURES  111
4.1 INTRODUCTION  111
4.2 CALIBRATION APPROACHES  111
4.3 CALIBRATION TECHNIQUES  112
4.3.1 Traditional Approach  113
4.3.2 Bootstrapping  114
4.3.3 Jackknifing  118
4.3.4 Summary of Resampling Techniques  119
4.4 PROCEDURE FOR CALIBRATION OF PERFORMANCE MODELS  119
4.4.1 Testing the Accuracy of the Global Model Predictions  120
4.4.2 Local Calibration Coefficient Refinements  121
4.4.2.1. Data subpopulations  122
4.4.2.2. Sampling techniques  122
4.5 FLEXIBLE PAVEMENT MODEL COEFFICIENTS  125
4.5.1 Alligator Cracking Model (bottom-up fatigue)  125
4.5.2 Longitudinal Cracking Model (top-down fatigue)  126
4.5.3 Rutting Model  127
4.5.4 Thermal Cracking Model  136
4.5.5 IRI Model for Flexible Pavements  137
4.6 RIGID PAVEMENT MODEL COEFFICIENTS  138
4.6.1 Transverse Cracking Model  138
4.6.2 Transverse Joint Faulting Model  138
4.6.3 IRI Model for Rigid Pavements  145
4.7 DESIGN RELIABILITY  147
4.7.1 Reliability based on Method 1  149
4.7.2 Reliability based on Method 2  151
4.7.3 Summary  152
5 - LOCAL CALIBRATION RESULTS  154
5.1 INTRODUCTION  154
5.2 LOCAL CALIBRATION OF FLEXIBLE PAVEMENT MODELS  157
5.2.1 Fatigue Cracking Model - Bottom-up  157
5.2.1.1. Option 1a - MDOT reconstruct only (measured AC/LC combined)  157
Global Model  158
Sampling Technique Results  159
Model Reliability Updates  162
5.2.1.2. Option 1b - MDOT reconstruct only (measured AC only)  163
Global Model  163
Sampling Technique Results  164
Model Reliability Updates  167
5.2.1.3. Fatigue cracking model calibration observations, contributions, limitations and issues  168
Measured data  168
Constraints on coefficients  169
Distributions for repeated split sampling and bootstrapping  169
5.2.2 Rutting Model  170
5.2.2.1. Method 1 Option 1  171
Global Model  171
Sampling Technique Results  173
Model Reliability Updates  177
5.2.2.2. Method 1 Option 2  177
Global Model  177
Sampling technique results  179
Model Reliability updates  183
5.2.2.3. Method 1 Option 4  183
Global Model  183
Sampling technique results  185
Model Reliability updates  188
5.2.2.4. Method 2 Option 1  188
Global Model  189
Sampling technique results  190
Model Reliability updates  193
5.2.2.5. Method 2 Option 2  193
Global Model  193
Sampling technique results  194
Model Reliability updates  197
5.2.2.6. Method 2 Option 4  197
Global Model  197
Sampling technique results  199
Model Reliability updates  201
5.2.2.7. Summary of Rutting Model: Observations, limitations, contributions and future  202
Benefits of bootstrapping and repeated split sampling  202
Transverse profile analysis assumptions  203
Hypothesis testing  204
5.2.3 Transverse (thermal) Cracking Model  205
5.2.3.1. Level 1 HMA layer characterization  205
5.2.3.2. Level 3 HMA layer characterization  207
5.2.3.3. Reliability for thermal cracking model  208
5.2.4 Flexible Pavement Roughness (IRI) Model  209
5.2.4.1. Option 1  209
Global Model  209
Sampling Technique Results  210
5.2.4.2. Option 2  213
Global Model  213
Sampling Technique Results  214
5.2.4.3. Option 4  218
Global Model  218
Sampling Technique Results  219
5.2.4.4. Summary of IRI local calibration  221
Benefits of repeated split sampling and bootstrapping  222
Model Constraints  223
5.3 LOCAL CALIBRATION OF RIGID PAVEMENT MODEL  225
5.3.1 Transverse Cracking Model  225
5.3.1.1. Option 1  225
Global Model  225
Sampling Technique Results  226
Model Reliability Updates  230
5.3.1.2. Option 2, 3 and 4  231
Global Model  231
Sampling Technique Results  234
Model Reliability Updates  240
5.3.2 Faulting Model  242
5.3.2.1. Method 1  242
Reliability for faulting model  243
5.3.2.2. Method 2 - Genetic Algorithm  243
5.3.3 Rigid Pavement Roughness (IRI) Model  249
5.3.3.1. Global Model  249
5.3.3.2. Sampling Technique Results  251
5.3.3.3. Rigid IRI Model Calibration Summary  258
Benefits of repeated split sampling and bootstrapping  259
5.4 SATELLITE STUDIES  261
5.4.1 Repeated bootstrapping and validation  261
5.5 OVERALL SUMMARY  263
6 - CONCLUSIONS, RECOMMENDATIONS AND FUTURE RESEARCH  265
6.1 SUMMARY  265
6.2 LOCAL CALIBRATION FINDINGS  266
6.2.1 Data Needs for Local Calibration  266
6.2.2 Process for Local Calibration  268
6.2.3 Coefficients for the Locally Calibrated Models  269
6.3 FINDINGS AND RECOMMENDATIONS  272
6.4 FUTURE RESEARCH  275
REFERENCES  281

LIST OF TABLES

Table 2-1 Summary of the global models statistics (1; 2)  8
Table 2-2 Reasonable standard error values  9
Table 2-3 Factors for eliminating bias and reducing standard error (1)  10
Table 2-4 Flexible pavement model calibration status (1)  11
Table 2-5 Rigid pavement model calibration status (1)  12
Table 2-6 Rigid pavement local calibration efforts (1)  13
Table 2-7 Flexible pavement local calibration efforts (1)  14
Table 2-8 Summary of local calibration efforts  16
Table 2-9 Summary of calibration sections and pavement types  16
Table 2-10 Local calibration coefficients for alligator cracking  20
Table 2-11 Local calibration coefficients for longitudinal cracking  22
Table 2-12 Local calibration coefficients for the thermal cracking model  24
Table 2-13 Local calibration coefficients for the rutting model  28
Table 2-14 Local calibration coefficients for the IRI model  30
Table 2-15 Local calibration coefficients for the rigid transverse cracking model  32
Table 2-16 Local calibration coefficients for the faulting model  34
Table 2-17 Local calibration coefficients for rigid IRI model  36
Table 2-18 Impact of input variables on rigid pavement performance (21)  42
Table 2-19 Impact of input variables on flexible pavement performance (21)  42
Table 2-20 List of significant inputs - HMA over HMA  43
Table 2-21 List of significant inputs - Composite pavement  43
Table 2-22 List of significant inputs - Rubblized PCC pavement  43
Table 2-23 List of significant inputs - Unbonded PCC overlay  43
Table 2-24 Conclusions and recommendations for traffic input levels  45
Table 2-25 Average roadbed soil MR values (27; 28)  46
Table 3-1 Minimum number of sections for local calibration  51
Table 3-2 Flexible pavement distresses  53
Table 3-3 Rigid pavement distresses  53
Table 3-4 Number of reconstruct projects for each pavement type  63
Table 3-5 Number of rehabilitation projects by MDOT region  63
Table 3-6 Selection matrix displaying selected projects (rehabilitation sections)  64
Table 3-7 Selection matrix displaying selected projects (reconstruct sections)  65
Table 3-8 Projects with acceptable performance  86
Table 3-9 Average HMA reconstruct thicknesses  87
Table 3-10 Average HMA rehabilitation project thicknesses  88
Table 3-11 JPCP reconstruct thickness ranges  88
Table 3-12 Unbonded PCC overlay thickness ranges  88
Table 3-13 Conversion from raw vehicle counts to vehicle class percentages  92
Table 3-14 Ranges of AADTT for all reconstruct projects  93
Table 3-15 Ranges of AADTT for all rehabilitation projects  94
Table 3-16 Projects with available Level 1 HMA input properties  96
Table 3-17 As-constructed percent air voids  98
Table 3-18 HMA top course average aggregate gradation  99
Table 3-19 HMA leveling course average aggregate gradation  99
Table 3-20 HMA base course average aggregate gradation  100
Table 3-21 Average values for compressive strength and MOR by MDOT region  101
Table 3-22 List of MR reduction factors for Michigan weather stations in the Pavement-ME  106
Table 3-23 Inflated MR values from Method 1  107
Table 3-24 Average roadbed soil MR values  108
Table 3-25 Michigan climate station information  109
Table 3-26 Summary of input levels and data source  110
Table 4-1 Model calibration approach (calibration outside of the software or rerunning the software)  115
Table 4-2 Hypothesis tests  121
Table 4-3 βr2 and βr3 calibration coefficients  128
Table 4-4 Reliability equations for each distress and smoothness model  153
Table 5-1 Hypothesis tests  157
Table 5-2 Global model fatigue cracking hypothesis test results  159
Table 5-3 Local calibration coefficients and hypothesis testing results  162
Table 5-4 Reliability summary for Option 1a  163
Table 5-5 Global model fatigue cracking hypothesis test results for Option 1b  164
Table 5-6 Local calibration coefficients and hypothesis testing results  167
Table 5-7 Reliability summary for Option 1b  168
Table 5-8 Global model SEE and bias (Method 1 Option 1)  172
Table 5-9 Global model hypothesis testing results (Method 1 Option 1)  173
Table 5-10 Hypothesis testing results for all sampling techniques (Method 1 Option 1)  176
Table 5-11 Rutting model reliability - Bootstrapping (Method 1 Option 1)  177
Table 5-12 Global rutting model hypothesis testing results (Method 1 Option 2)  178
Table 5-13 Hypothesis testing results for all sampling techniques (Method 1 Option 2)  182
Table 5-14 Rutting model reliability - Bootstrapping (Method 1 Option 2)  183
Table 5-15 Global rutting model hypothesis testing results (Method 1 Option 4)  184
Table 5-16 Hypothesis testing results for all sampling techniques (Method 1 Option 4)  187
Table 5-17 Rutting model reliability for Option 4 - Bootstrap  188
Table 5-18 Global model hypothesis testing results (Method 2 Option 1)  190
Table 5-19 Hypothesis testing results for all sampling techniques (Method 2 Option 1)  192
Table 5-20 Rutting model reliability for Method 2 Option 1 - Bootstrap  193
Table 5-21 Global model hypothesis testing results (Method 2 Option 2)  194
Table 5-22 Hypothesis testing results for all sampling techniques (Method 2 Option 2)  196
Table 5-23 Rutting model reliability for Option 2 Method 2 - Bootstrap  197
Table 5-24 Global model hypothesis testing results (Method 2 Option 4)  198
Table 5-25 Hypothesis testing results for all sampling techniques (Method 2 Option 4)  201
Table 5-26 Rutting model reliability for Method 2 Option 4 - Bootstrap  202
Table 5-27 Transverse thermal cracking results - Option 1  206
Table 5-28 Transverse thermal cracking results - Option 2  206
Table 5-29 Transverse thermal cracking Level 3 results - Option 1  207
Table 5-30 Transverse thermal cracking Level 3 results - Option 2  208
Table 5-31 Transverse thermal cracking Level 3 results - Option 4  208
Table 5-32 Reliability summary for Level 1  209
Table 5-33 Reliability summary for Level 3  209
Table 5-34 Hypotheses testing results for the global IRI model (Option 1)  210
Table 5-35 IRI model hypothesis testing results (Option 1)  213
Table 5-36 IRI model calibration coefficients (Option 1)  213
Table 5-37 Hypothesis testing for global IRI model (Option 2)  214
Table 5-38 IRI model hypothesis testing results (Option 2)  217
Table 5-39 IRI model calibration coefficients (Option 2)  217
Table 5-40 Hypotheses testing results for the global IRI model (Option 4)  218
Table 5-41 IRI model hypothesis testing results (Option 4)  221
Table 5-42 IRI model calibration coefficients (Option 4)  221
Table 5-43 Global model hypothesis testing results (Option 1)  226
Table 5-44 Hypothesis testing results (Option 1)  229
Table 5-45 Calibration coefficients (Option 1)  230
Table 5-46 Transverse cracking reliability - Option 1  230
Table 5-47 Global model hypothesis testing results (Options 2, 3, 4)  233
Table 5-48 Hypothesis testing results for transverse cracking local calibration  239
Table 5-49 Transverse cracking reliability - Option 2  241
Table 5-50 Transverse cracking reliability - Option 3  241
Table 5-51 Transverse cracking reliability - Option 4  241
Table 5-52 Summary of Option 1 local calibration - Faulting model  242
Table 5-53 Faulting model reliability  243
Table 5-54 Faulting model calibration coefficient constraints  245
Table 5-55 Faulting model calibration coefficients  247
Table 5-56 Hypothesis testing p-value results  248
Table 5-57 Global Rigid IRI model hypothesis testing results  251
Table 5-58 IRI model hypothesis testing results  257
Table 5-59 Transverse Cracking Calibration coefficients  263
Table 5-60 Comparison of reasonable standard error after local calibration  264
Table 6-1 Summary of input levels and data source  267
Table 6-2 Summary of flexible pavement performance models with local coefficients in Michigan  271
Table 6-3 Summary of rigid pavement performance model coefficients and standard errors  272
Table 6-4 Flexible pavement distresses  276
Table 6-5 Rigid pavement distresses  276
Table 6-6 Testing requirements for significant input variables for rehabilitation  277

LIST OF FIGURES

Figure 3-1 Geographical location of identified JPCP reconstruct projects  60
Figure 3-2 Geographical location of identified freeway HMA reconstruct projects  61
Figure 3-3 Geographical location of identified non-freeway HMA reconstruct projects  61
Figure 3-4 Geographical location of identified crush and shape projects  62
Figure 3-5 Selected HMA rehabilitation sections - longitudinal cracking data  68
Figure 3-6 Selected HMA rehabilitation sections - rutting data  68
Figure 3-7 Selected HMA rehabilitation sections - transverse (thermal) cracking data  68
Figure 3-8 Selected HMA rehabilitation sections - IRI data  68
Figure 3-9 Selected JPCP rehabilitation sections - cracking data  69
Figure 3-10 Selected JPCP rehabilitation sections - joint faulting data  69
Figure 3-11 Selected JPCP rehabilitation sections - IRI data  69
Figure 3-12 Selected HMA freeway sections - alligator cracking data  72
Figure 3-13 Selected HMA freeway sections - longitudinal cracking data  72
Figure 3-14 Selected HMA freeway sections - rutting data  73
Figure 3-15 Selected HMA freeway sections - thermal cracking data  73
Figure 3-16 Selected HMA freeway sections - IRI data  73
Figure 3-17 Selected HMA non-freeway sections - alligator cracking data  73
Figure 3-18 Selected HMA non-freeway sections - longitudinal cracking data  74
Figure 3-19 Selected HMA non-freeway sections - rutting data  74
Figure 3-20 Selected HMA non-freeway sections - thermal cracking data  74
Figure 3-21 Selected HMA non-freeway sections - IRI data  74
Figure 3-22 Selected HMA crush and shape sections - alligator cracking data  75
Figure 3-23 Selected HMA crush and shape sections - longitudinal cracking data  75
Figure 3-24 Selected HMA crush and shape sections - rutting data  75
Figure 3-25 Selected HMA crush and shape sections - thermal cracking data  75
Figure 3-26 Selected HMA crush and shape sections - IRI data  76
Figure 3-27 Selected JPCP sections - transverse cracking data  76
Figure 3-28 Selected JPCP sections - joint faulting data  76
Figure 3-29 Selected JPCP sections - IRI data  76
Figure 3-30 Selected Michigan LTPP sections - alligator cracking data  77
Figure 3-31 Selected Michigan LTPP sections - rutting data  77
Figure 3-32 Selected Michigan LTPP sections - IRI data  78
Figure 3-33 Selected LTPP SPS-2 sections - transverse cracking data  78
Figure 3-34 Selected LTPP SPS-2 sections - transverse joint faulting data  79
Figure 3-35 Selected LTPP SPS-2 sections - IRI data  79
Figure 3-36 Flexible pavement performance criteria  80
Figure 3-37 Rigid pavement performance criteria  80
Figure 3-38 Performance for all HMA projects  83
Figure 3-39 Normal pavement performance for HMA projects  84
Figure 3-40 Performance for all JPCP projects  85
Figure 3-41 Normal pavement performance for JPCP projects  85
Figure 3-42 MDOT freight data  90
Figure 3-43 Location of classification counts  91
Figure 3-44 Raw vehicle class counts  91
Figure 3-45 Cluster selection based on steps 1 and 2 (25)  92
Figure 3-46 Distribution of concrete strength properties  102
Figure 3-47 Subgrade MR over time in Lansing  105
Figure 4-1 Schematic of bias and standard error for model calibration  121
Figure 4-2 Repeated sample calibration procedure  124
Figure 4-3 Effect of calibration coefficients on alligator cracking  126
Figure 4-4 Effect of calibration coefficients on longitudinal cracking  127
Figure 4-5 Effect of (a) βr2 and (b) βr3 on HMA rutting  129
Figure 4-6 Positive and negative areas in the NCHRP procedure (40; 41)  130
Figure 4-7 Calculation of the maximum rut depth (40; 41)  132
Figure 4-8 Typical seat of rutting based on transverse profile shapes (40; 41)  132
Figure 4-9 Conditions for determining the rutting seat (40; 41)  133
Figure 4-10 Correlation of the type of failure as a function of maximum rut depth and total rut area (40; 41)  133
Figure 4-11 Edge adjustment for transverse profile  134
Figure 4-12 Transverse profile analysis results  136
Figure 4-13 Effect of transverse cracking model calibration coefficients  138
Figure 4-14 Location of input parameters required for faulting calculations  139
Figure 4-15 Input parameters for faulting model  140
Figure 4-16 Impact of C1 on faulting  143
Figure 4-17 Impact of C2 on faulting  143
Figure 4-18 Impact of C3 on faulting  144
Figure 4-19 Impact of C4 on faulting  144
Figure 4-20 Impact of C5 on faulting  144
Figure 4-21 Impact of C6 on faulting  145
Figure 4-22 Impact of C7 on faulting  145
Figure 4-23 Design Reliability Concept for Smoothness (IRI) (3)  148
Figure 5-1 Global model measured versus predicted fatigue cracking for Option 1a  159
Figure 5-2 Standard error for all sampling techniques - Option 1a  160
Figure 5-3 Bias for all sampling techniques - Option 1a  161
Figure 5-4 Measured versus predicted after local calibration - Bootstrapping Validation  161
Figure 5-5 Global model measured versus predicted alligator cracking for Option 1b  164
Figure 5-6 Standard error for all sampling techniques - Option 1b  165
Figure 5-7 Bias for all sampling techniques - Option 1b  166
Figure 5-8 Measured versus predicted after local calibration - Bootstrapping Validation  166
Figure 5-9 Parameter distributions for bootstrapping sampling technique  170
Figure 5-10 Global rutting model verification (Method 1 Option 1)  172
Figure 5-11 Standard Error for all sampling techniques (Method 1 Option 1)  174
Figure 5-12 Bias for all sampling techniques (Method 1 Option 1)  174
Figure 5-13 Measured vs. predicted total rutting for model validation (Method 1 Option 1)  175
Figure 5-14 Calibration coefficients (Method 1 Option 1)  177
Figure 5-15 Global rutting model verification (Method 1 Option 2)  179
Figure 5-16 Standard Error for all sampling techniques (Method 1 Option 2)  180
Figure 5-17 Bias for all sampling techniques (Method 1 Option 2)  180
Figure 5-18 Measured vs. predicted total rutting for model validation (Method 1 Option 2)  181
Figure 5-19 Calibration coefficients (Method 1 Option 2)  183
Figure 5-20 Global rutting model verification (Method 1 Option 4)  184
Figure 5-21 Standard Error for all sampling techniques (Method 1 Option 4)  185
Figure 5-22 Bias for all sampling techniques (Method 1 Option 4)  186
Figure 5-23 Measured vs. predicted total rutting for model validation (Method 1 Option 4)  186
Figure 5-24 Calibration coefficients (Method 1 Option 4)  188
Figure 5-25 Global model rutting predictions (Method 2 Option 1)  189
Figure 5-26 Standard Error for all sampling techniques (Method 2 Option 1)  190
Figure 5-27 Bias for all sampling techniques (Method 2 Option 1)  191
Figure 5-28 Measured vs. predicted total rutting for model validation (Method 2 Option 1)  191
Figure 5-29 Calibration coefficients (Method 2 Option 1)  193
Figure 5-30 Global model rutting predictions (Method 2 Option 2)  194
Figure 5-31 Standard Error for all sampling techniques (Method 2 Option 2)  195
Figure 5-32 Bias for all sampling techniques (Method 2 Option 2)  195
Figure 5-33 Measured vs. predicted total rutting for model validation (Method 2 Option 2)  196
Figure 5-34 Calibration coefficients (Method 2 Option 2)  197
Figure 5-35 Global model rutting predictions (Method 2 Option 4)  198
Figure 5-36 Standard Error for all sampling techniques (Method 2 Option 4)  199
Figure 5-37 Bias for all sampling techniques (Method 2 Option 4)  200
Figure 5-38 Measured vs. predicted total rutting for model validation (Method 2 Option 4)  200
Figure 5-39 Calibration coefficients (Method 2 Option 4)  201
Figure 5-40 Calibration parameter distributions - Bootstrapping  203
Figure 5-41 Option 1 measured versus predicted transverse (thermal) cracking  206
Figure 5-42 Option 2 measured versus predicted transverse (thermal) cracking  207
Figure 5-43 Measured versus predicted TC for Option 1  208
Figure 5-44 Global IRI model measured versus predicted comparison (Option 1)  210
Figure 5-45 Standard error for all sampling techniques (Option 1)  211
Figure 5-46 Bias for all sampling techniques (Option 1)  212
Figure 5-47 Measured versus predicted IRI after local calibration (Option 1)  212
Figure 5-48 Global model measured versus predicted IRI (Option 2)  214
Figure 5-49 Standard error for all sampling techniques (Option 2)  215
Figure 5-50 Bias for all sampling techniques (Option 2)  216
Figure 5-51 Local model measured versus predicted IRI for bootstrapping validation (Option 2)  216
Figure 5-52 Global IRI model measured versus predicted comparison (Option 4)  218
Figure 5-53 Standard error for all sampling techniques (Option 4)  219
Figure 5-54 Bias for all sampling techniques (Option 4)  220
Figure 5-55 Measured versus predicted IRI after local calibration (Option 4)  220
Figure 5-56 IRI calibration parameter distributions - Bootstrapping Validation  223
Figure 5-57 Global model comparison between measured and predicted transverse cracking (Option 1)  226
Figure 5-58 Summary of standard error for all sampling techniques (Option 1)  227
Figure 5-59 Summary of bias for all sampling techniques (Option 1)  228
Figure 5-60 Comparison between measured and predicted transverse cracking after local calibration (Option 1)  228
Figure 5-61 Option 2 global model comparisons  231
Figure 5-62 Option 3 global model comparisons  232
Figure 5-63 Option 4 global model comparisons  233
Figure 5-64 Standard error results for all options and sampling techniques  235
Figure 5-65 Bias results for all options and sampling techniques  236
Figure 5-66 Bootstrapping validation for all dataset options  237
Figure 5-67 C4 local calibration coefficient for all options and sampling techniques  240
Figure 5-68 C5 local calibration coefficient for all options and sampling techniques  240
Figure 5-69 Summary of standard error for all sampling techniques  246
Figure 5-70 Summary of bias for all sampling techniques  246
Figure 5-71 Frequency distributions of SEE and bias for the bootstrapping validation sampling technique  247
Figure 5-72 Measured versus predicted faulting before and after local calibration  248
Figure 5-73 Rigid IRI model measured versus predicted comparison using global model coefficients  250
Figure 5-74 Summary of SEE for IRI model  253
Figure 5-75 Summary of bias for IRI model  254
Figure 5-76 Measured versus predicted IRI for the validation of the bootstrapping sampling technique  255
Figure 5-77 Summary of IRI model C1 calibration coefficient  258
Figure 5-78 Summary of IRI model C2 calibration coefficient  258
Figure 5-79 IRI model parameter distributions for repeated split sampling  260
Figure 5-80 IRI model parameter distributions for bootstrapping  261
Figure 5-81 Transverse cracking model standard error .......... 262
Figure 5-82 Transverse cracking model bias .......... 263

1 - INTRODUCTION

The Mechanistic-Empirical Pavement Design Guide (MEPDG) and the accompanying software, AASHTOWare Pavement ME Design™, are becoming the state of the practice for flexible and rigid pavement design across the nation. The Pavement-ME is based on a mechanistic-empirical analysis of a pavement structure to predict pavement performance considering traffic, material properties, and environmental conditions. Compared to the previous, empirically based AASHTO 93 design methodology, the Pavement-ME is a more robust method for predicting pavement performance over time. The mechanistic part of the Pavement-ME design method relies on the application of engineering mechanics to compute the stresses, strains, and deformations induced in the pavement structure by vehicle loads and climatic effects. The empirical portion of the design concept is based on laboratory-developed performance models that are calibrated with observed distresses from in-service pavements with known structural properties, traffic loadings, and measured performance (1). The inputs related to traffic, material properties, and pavement structure, which characterize the in-service pavements, play a vital role in the Pavement-ME design and analysis process. The mechanistic-empirical design procedure provides several advantages over empirical methods, such as: a broader range of vehicle loadings, material properties, and climatic effects; improved characterization of the existing pavement layers; and improved reliability of pavement performance predictions. The adoption of the ME-based design procedure may require more time to develop a design and evaluate the pavement performance, more data to characterize a pavement, and personnel knowledge and experience in ME-based design practices (1). The implementation of the Pavement-ME by State Highway Agencies (SHAs) may require collaboration among various agency groups or divisions (e.g., materials, geotechnical, and pavement design) during the process of adopting the Pavement-ME as the current state of the practice. The adequacy of the prediction models utilized in the Pavement-ME needs to be determined prior to its adoption so that local design practices are reflected. If the pavement performance prediction models do not provide an accurate representation of current practice for a particular SHA, local calibration is required to improve the overall accuracy of the prediction models. Currently, the Pavement-ME performance prediction models for rigid and flexible pavements are calibrated using national pavement performance data. It is therefore necessary to re-calibrate the performance models using input data that reflect local construction and design practices in order to minimize prediction errors (i.e., random error and systematic bias). The local calibration process minimizes the differences between the measured/observed and predicted pavement performance.

1.1 PROBLEM STATEMENT

The main objective of this study is to locally calibrate the pavement performance prediction models to reflect Michigan design and construction practices.
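Throughout the remainder of this dissertation, the agreement between measured and predicted distress is summarized by two statistics: the bias (the mean difference between measured and predicted values) and the standard error of the estimate (SEE). The short sketch below illustrates how these quantities can be computed from paired observations; the arrays are hypothetical, and the simple root-mean-square definition of SEE is an assumption made for illustration only.

```python
import numpy as np

def bias_and_see(measured, predicted):
    """Return (bias, SEE) for paired measured/predicted distress values.

    bias = mean(measured - predicted); SEE is computed here as the root
    mean squared residual (an illustrative definition; degrees-of-freedom
    corrections vary between studies).
    """
    residuals = np.asarray(measured, dtype=float) - np.asarray(predicted, dtype=float)
    bias = residuals.mean()
    see = np.sqrt(np.mean(residuals ** 2))
    return bias, see

# Hypothetical rutting values (inches) for a handful of pavement sections.
measured = [0.21, 0.35, 0.18, 0.42, 0.30]
predicted = [0.25, 0.31, 0.22, 0.36, 0.33]
print(bias_and_see(measured, predicted))
```

A positive bias under this convention indicates that the model under-predicts the measured distress on average.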
The local calibration of the pavement distress prediction models minimizes the error between predicted and measured pavement performance. However, there are difficulties related to the selection of an adequate number of pavement sections for local calibration. Limited sample sizes reduce the overall confidence in the local calibration coefficients because the selected pavement sections may not represent local SHA design and construction practices. Additionally, the accuracy of the input data for each project is essential because the material properties directly affect the predicted performance. The local calibration of the performance prediction models will also impact the design reliability of future designs. Overall, the project selection, the as-constructed material data collection, and the local calibration procedures need to be performed in a manner that reflects Michigan design and construction practices.

1.2 RESEARCH OBJECTIVES

The main research objective of this study is to perform the local calibration of the various pavement performance prediction models included in the Pavement-ME for Michigan pavements. Additionally, various sampling techniques are implemented to study the variability of the local calibration coefficients, error, and bias. Furthermore, the effect of sample size will be studied to test whether the resampling techniques provide a better estimate of the calibration coefficients. If a small number of pavement sections yields a standard error similar to that of a larger sample, it can potentially save an SHA time and effort in data collection for future calibrations. The changes in design reliability associated with the local calibration will be studied based on the results of each sampling technique. Overall, the sampling technique that produces the best results (based on SEE, bias, and the associated reliability) will be identified and recommended for design. The outcomes of the local calibration process will also provide a set of recommendations related to the resources required for implementing the Pavement-ME in Michigan. The detailed resource needs in terms of laboratory and field equipment, personnel, and the resources needed for implementation and future recalibrations of the performance models are among the outcomes expected at the conclusion of this research.

1.3 LAYOUT OF THE DISSERTATION

The dissertation is divided into six chapters. Chapter 1 includes the problem statement and research objectives. Chapter 2 documents the review of the literature from previous local calibration studies, the models included in the Pavement-ME, and implementation issues. Chapter 3 discusses the data collection efforts and the details related to characterizing an in-service pavement section using the Pavement-ME. Chapter 4 details the methods used to calibrate the Pavement-ME for Michigan conditions. Chapter 5 presents the local calibration results and findings for all the performance prediction models. Chapter 6 includes the conclusions and future work.

2 - LITERATURE REVIEW

2.1 INTRODUCTION

The mechanistic-empirical pavement analysis and design procedure developed to replace the empirical AASHTO 93 design method was incorporated in the Pavement-ME software (MEPDG, DARWin-ME, and now Pavement-ME). The initial version of the software was made public in mid-2004. Since the release of the software, many State Highway Agencies (SHAs) have worked on exploring several aspects of the design and analysis procedures.
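Many of the local calibration studies reviewed in this chapter report only point estimates of their calibration coefficients. Resampling techniques such as the bootstrap provide a simple, distribution-free way to attach a standard error and a confidence interval to such an estimate by refitting the coefficient on repeated resamples of the calibration sections. The sketch below is illustrative only: the data are synthetic, and a single multiplicative coefficient fitted by least squares stands in for an actual transfer-function coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration data: one row per pavement section.
damage = rng.uniform(0.05, 0.6, size=30)           # stand-in model response
measured = 8.0 * damage + rng.normal(0, 0.5, 30)   # stand-in measured distress

def fit_coefficient(x, y):
    """Least-squares estimate of a single multiplicative coefficient, y ~ c*x."""
    return float(np.sum(x * y) / np.sum(x * x))

n_boot = 2000
boot_coeffs = []
for _ in range(n_boot):
    idx = rng.integers(0, len(damage), len(damage))  # resample sections with replacement
    boot_coeffs.append(fit_coefficient(damage[idx], measured[idx]))

boot_coeffs = np.array(boot_coeffs)
print("coefficient:", fit_coefficient(damage, measured))
print("bootstrap std. error:", boot_coeffs.std(ddof=1))
print("95% percentile interval:", np.percentile(boot_coeffs, [2.5, 97.5]))
```

Percentile intervals are used here because they require no normality assumption, which is the main practical appeal of the bootstrap when the number of calibration sections is small.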
Most of the efforts focused on (a) d etermining significant input variables through sensitivity studies to reduce the number of inputs, (b) evaluating local calibration needs to represent performance predictions in the local conditions (i.e., construction practices, climate, traffic, material s, and observed pavement performance), (c) performing local calibration to improve the pavement designs, and, (d) highlighting the implementation related issues such as PMS data compatibility and other data - specific needs . The local calibration of the Pa vement - ME performance models is a procedure that relies on the available pavement cross - section, material, traffic and performance data. The NCHRP 1 - 40B project report ( 2 ) documented extensive guidelines for local calibration. The overall goal of calibratio n is to mathematically reduce the total error between the measured and the predicted pavement distress es and roughness (IRI) . Subsequently, the calibrated models must be validated by using an independent set of pavement sections or projects to determine th e accuracy of the pavement distress prediction models. A successful validation will produce similar or acceptable bias and precision statistics for the independent set of projects as obtained through the calibration process. The literature review is organi zed in two main subsections (a) local calibration process, and (b) local calibration efforts by different SHAs and most widely faced challenges while 6 implementing the new pavement analysis and design procedure. 2.2 LOCAL CALIBRATION PROCESS The performance pr ediction models in the Pavement - ME are calibrated using in - service pavement material properties, pavement structure, climate and truck loading conditions, and performance data obtained from the Long - term Pavement Performance (LTPP) program ( 1 ) . Due to the limited availability of Level 1 input prope rties, the nationally calibrated models are primarily based on Level 2 and Level 3 inputs ( 3 ) . Generally, the nationally calibrated models may not perform well if the inputs and performance data used for calibration do not represent State practices. Therefore, it is recom mended that each SHA conduct an evaluation to determine how well the nationally calibrated performance models predict field performance. If the predictions are not adequate, then local calibration of the Pavement - ME performance models is recommended to imp rove the pavement performance prediction capabilities reflecting the unique field conditions and design practices. The local calibration process is used to (a) confirm that the prediction models can predict pavement distress and smoothness without bias, an d (b) determine the standard error associated with the prediction equations. The calibration process outlined in the NCHRP 1 - 40B final report consists of 11 steps as summarized below ( 4 ) . Step 1: Select hierarchical input level s The hierarchical level is a policy - based decision established on the information available related to field and laboratory testing capabilities, material and construction specificatio ns and traffic collection procedures and equipment. Different hierarchical levels for inputs can be selected based on the available data. Step 2: Develop experimental plan and sampling template The experimental plan and sampling template should represent t he agencies standard 7 specifications, construction and design practices, and construction materials. 
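As an illustration of what such a sampling template might look like, the sketch below enumerates a small full-factorial matrix of hypothetical factors (pavement type, traffic level, and climate region) and tallies candidate sections into its cells; the factor levels and the section list are invented for the example.

```python
from itertools import product

# Hypothetical factor levels for an experimental/sampling template.
pavement_types = ["new HMA", "HMA overlay", "new JPCP"]
traffic_levels = ["low", "medium", "high"]
climate_zones = ["north", "south"]

template = {cell: 0 for cell in product(pavement_types, traffic_levels, climate_zones)}

# Tally hypothetical candidate sections into the matrix cells.
candidate_sections = [
    ("new HMA", "high", "south"),
    ("new JPCP", "medium", "north"),
    ("HMA overlay", "high", "south"),
]
for section in candidate_sections:
    template[section] += 1

for cell, count in template.items():
    print(cell, count)
```

Cells with few or no candidate sections indicate where additional projects (or LTPP sections) would be needed before calibration.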
The pavement sections could represent a variety of design types, traffic levels, and climates. LTPP sections can also be included if necessary. Step 3: Est imate sample size for specific distress prediction models An adequate number of sections are required to provide statistically meaningful results. The recommended minimum number of pavement segments for each performance prediction model includes: a. Rut depth (or faulting) 20 roadway segments b. Alligator and longitudinal cracking 30 roadway segments c. Transverse (thermal) cracking 30 roadway segments d. Transverse slab cracking 26 roadway segments e. Reflective cracking (HMA only) 26 roadway segments Step 4: S elect roadway segments Applicable roadway segments, replicate segments, and LTPP segments should be selected to populate the experimental design matrix developed in Step 2. It is recommended that the selected segments have at least 3 condition observations over an 8 - 10 year period. Step 5: Evaluate project and distress data The input and performance data for each project needs to be collected and verified to ensure compatibility with the requirements of the Pavement - ME. Any discrepancies between the local a gency and the Pavement - ME need to be resolved to ensure compatibility. The Pavement - ME also recommends that the average condition level exceed 50% of the design criteria. 8 Step 6: Conduct field testing and forensic investigation If any information is missin g in step 5, field testing and forensic investigation are recommended to obtain the missing information. Step 7: Assess local bias Plot and compare the measured performance to the Pavement - ME predicted performance, based on the global/national models, for each pavement segment. The prediction capability and accuracy should be evaluated by performing linear regression on the predicted and measured performance, comparing the standard error of the estimate ( S e ) to the nationally calibrated models and determini ng the bias for each performance prediction model. An R 2 value above 0.65 is considered a reasonable prediction ( 1 ) . The bias significance is determined by performing hypothesis testing on the mean difference between the measured and predicted distresses. If the null hypothesis is rejected, then a local calibration is required. The nationally calibrated model statistics for the various performance prediction models are summarized in Table 2 - 1. Table 2 - 1 Summary of the global models statistics ( 1 ; 2 ) Pavement type Performance prediction model Model statistics R - square S e Number of data points, N New asphalt Alligator cracking 0.275 5.0 1 405 Transverse cracking 0.344 1 , 0 .218, 0.057 N/A N/A Rut depth 0.58 0.11 334 IRI 0.56 18.9 1926 New JPCP Transverse cracking 0.85 4.52 1505 Joint faulting 0.58 0.03 1239 IRI 0.6 17.1 163 1 Three values correspond to levels 1, 2, and 3, respec tively 9 Step 8: Eliminate local bias If the hypothesis tests are rejected in step 7, the cause of the bias needs to be determined and removed if possible. Features to consider in removing bias include improving the accuracy and extent of traffic, climate , and material characteristics data. Step 9: Assess standard error of the estimate The S e obtained from local calibration is compared to the nationally calibrated S e . Reasonable S e values are summarized in Table 2 - 2 ( 1 ; 2 ) . 
Table 2 - 2 Reasonable standard error values Pavement type Performance prediction model S e New a sphalt Alligator cracking (% lane area) 7 Longitudinal cracking (ft/mile) 600 Transverse cracking (ft/mile) 250 Rut depth (inch) 0.1 New JPCP Transverse cracking (% slabs cracked) 7 Joint faulting (inch) 0.05 Step 10: Reduce standard error of th e estimate Determine if the standard error of each cell of the experimental matrix is dependent on other factors and adjust the local calibration coefficients to reduce the standard error. Table 2 - 3 summarizes the factors for eliminating bias and reducing the standard error. 10 Table 2 - 3 Factors for eliminating bias and reducing standard error ( 1 ) Step 11: Interpretation of results Compare the measured and predicted distress or IRI to verify that acceptable results are obtained. The above documented el even (11) step process forms the groundwork for the local calibration of the Pavement - ME performance prediction models. In each of these steps, a significant amount of work is required, especially related to the input data collection. Furthermore, some of the models require a full analysis of the software each time a calibration coefficient is adjusted which can become extremely time - consuming. The next section presents local calibration efforts and various implementation related issues reported in the lite rature. N/A 11 2.3 LOCAL CALIBRATION EFFORTS AND CHALLENGES The local calibration process requires a comparison between measured and predicted distress. In order to make this comparison, the SHAs first need to identify if their measured distress definitions are comp atible with those predicted by the Pavement - ME. The distresses which are not compatible need conversions in order to compare the measured and predicted distresses. Recently, several SHAs have performed local calibration for both flexible and rigid pavement s. Tables 2 - 4 and 2 - 5 summarize the current status for implementation along with the use of various performance models in the Pavement - ME by various SHAs for flexible and rigid pavements, respectively. Table 2 - 4 Flexible pavement model calibration status ( 1 ) Agency IRI Longitudinal cracking Alligator cracking Thermal cracking Rut Depth Reflective cracking Asphalt layer Total Arizona - Global Colorado Hawaii Future Future Future Future Future Future Indiana - Global Global - - Missouri Global Global Global New Jersey Future Future Future Future Future Future Oregon Global Ind icates model was locally calibrated Future indicates that the model was not calibrated at the time of the report and will be calibrated in the future 12 Table 2 - 5 Rigid pavement model calibration status ( 1 ) Agency JPCP CRCP IRI Transverse cracking F aulting IRI Punchouts Arizona Colorado - - Florida - - Indiana Global Global - - Missouri Global Global - - North Dakota Global Global - - Oregon Global Global Indicates model was locally calibrated The local c are shown in Tables 2 - 6 and 2 - 7 for rigid and flexible pavements, respectively . It can be observed from these tables that significantly different coefficients are possible in a reg ion or state as compared to the coefficients in the global/national models. The following issues were found to be common among all the SHAs, and should be addressed when performing local calibration to ease the way for full implementation of the Pavement - ME. 
These issues include: Number of available pavement sections Input data for each pavement section Measured condition of the selected pavement sections Local calibration techniques 13 Table 2 - 6 Rigid pavement l ocal calibration efforts ( 1 ) 14 Table 2 - 7 Flexible pavement local calibration efforts ( 1 ) Several other SHAs have undertaken the local calibration process for various models within the Pavement - ME. These results were not presented in the NCHRP Synthesis 457 ( 1 ) and 15 provide additional information regarding local calibration of the per formance models. These efforts performed by the various SHAs are discussed and summarized in this section. Currently, the following states have been identified: Arkansas Colorado FHWA Kansas Minnesota Missouri Montana New Mexico North Carolina Ohio Oklahom a Oregon South Carolina Texas Utah Washington The review is focused on the methods used for local calibration and the significant findings in each state. Several SHAs have performed local calibration of the performance prediction models in the Pavement - M E. Table 2 - 8 summarizes the type of model calibrated by each SHA. Many of the States only attempted to calibrate some of the models. For example, Minnesota considered only the local calibration for transverse cracking and IRI for both flexible and rigid pa vements. The local calibration was performed for the models where adequate data were available. Table 2 - 9 summarizes the various pavement types and the number of pavement segments considered for calibration by each state. Several States used LTPP data to i ncrease the number of pavement sections since their local pavements did not meet the minimum recommended number of pavement segments summarized in the NCHRP 1 - 40B guide ( 2 ) . 16 Table 2 - 8 Summary of local calibrat ion efforts State Agency Performance Model Flexible Recalibration Rigid Recalibration Alligator cracking Longitudinal cracking Thermal cracking Rutting IRI Transverse cracking Faulting IRI Arkansas Yes Yes No Yes No - - - Colorado Yes - Yes Yes Yes N o No No Kansas No Yes Yes Yes Yes No Yes Yes Minnesota No No Yes Yes No Yes No Yes Missouri No - Yes Yes Yes No No Yes Montana Yes No No Yes No - - - New Mexico Yes Yes - Yes Yes - - - North Carolina Yes - - Yes - - - - Ohio No - No Yes Yes No No Ye s Texas - - - Yes - - - - Washington Yes Yes - Yes No Yes No No = model was not considered at this time Yes = local calibration coefficients recommended No = global calibration is sufficient or model was not calibrated Table 2 - 9 Summary of calibration sections and pavement types State Agency Performance Model Flexible Pavements Rigid Pavements Number of sections Pavement types calibrated Number of sections Pavement types calibrated Arkansas 26 tot al (LTPP and PMS) New design - - Colorado 95 CDOT and LTPP New and rehab design 31 CDOT and LTPP New and rehab design Kansas 28 KDOT - 32 KDOT - Minnesota 13 MnROAD (rut) 14 MnROAD (TC) 12 MnROAD (LC) New design 65 LTPP New design Missouri 7 MoDOT 1 4 LTPP 20 HMA overlays New and rehab design 25 MoDOT 6 LTPP 5 Unbonded overlays New and rehab design Montana 55 LTPP and Non - LTPP New and rehab design - - New Mexico 11 LTPP 13 Non - LTPP New design - - North Carolina 22 LTPP 24 Non - LTPP New design - - Ohio 13 LTPP New design 14 LTPP New design Texas 18 LTPP New design - - Washington 8 Sub - sections New design 3 calibration 6 validation New design 17 2.3.1 Local Calibration Efforts Details regarding the local calibration of each performance model are summ arized in this 
section. The transfer function equations are shown for each performance model for flexible and rigid pavements to highlight the local calibration coefficients. 2.3.1.1. Load related cracking in flexible pavements Two types of load - related cracking ar e considered in the Pavement - ME. Alligator cracking is defined as cracks that initiate at the bottom of the HMA layers and propagate to the surface (bottom - up) with continued truck traffic in the wheel - path. Longitudinal cracking is defined as cracks that initiate at the top of the HMA surface and propagate downwards (top - down) ( 3 ) . The allowable number of axle load applications for both alligator and longitudinal cracking can be estimated using Equation ( 2 - 1 ) ( 3 ) : ( 2 - 1 ) w here: N f - HMA = Allowable number of axle load applications for a flexible pavement and HMA overlays. t = Tensile strain at critical locations and calculated by the structural response model, in/in E HMA = Dynamic modulus of the HMA measured in compression, psi. k f1 , k f2 , k f3 = Global field calibration parameters (from the NCHRP 1 - 40D re - calibration; k f1 = 0.007566, k f2 = - 3.9492, and k f3 = - 1.281). f1 f2 f3 = Local or mixture specific field calibration constants; for the global calibration effort, these constants were set to 1.0. The C and M can be determined by using the following equations: ( 2 - 2 ) ( 2 - 3 ) where: V be = Effective asphalt content by volume, percent. V a = Percent air voids in the HMA mixture. C H = Thickness correction term, dependent on type of cracking. 18 For bottom - up or alligator cracking , the C H is determined by : ( 2 - 4 ) For top - down or longitudinal cracking , the C H is determined by : ( 2 - 5 ) The incremental damage is calculated on a grid pattern throughout the HMA layers at critical locations. The damage index is calculated by dividing the actual number of axle loads by the allowable number of axle loads. The cumulative damage is determined by summing the incremental damage over time as shown by Equation ( 2 - 6 ) ( 3 ) . ( 2 - 6 ) w here: n = Actual number of axle load applications within a specific time perio d. j = Axle load interval. m = Axle load type (single, tandem, tridem, quad, or special axle configuration) l = Truck type using the truck classification groups included in the Pavement - ME. p = Month. T = Median temperature for the five temperature in tervals or quintiles used to subdivide each month, °F. = Thickness of HMA layers (inches) = Damage index Alligator cracking transfer function Equation ( 2 - 7 ) ( 3 ) shows the transfer function for bottom - up fatigue cracking in the Pavement - ME and the required calibration coefficients. ( 2 - 7 ) 19 where: FC Bottom = Area of alligator cracking that initiates at the bottom of the HMA layers, percent of total lane area. DI Bottom = Cumulative damage ind ex at the bottom of the HMA layers. C 1,2,4 = Transfer function regression constants; C 4 = 6,000; C 1 = 1.00; and C 2 =1.00 The and coefficients in Equation ( 2 - 7 ) can be determined as: ( 2 - 8 ) ( 2 - 9 ) The local calibration of the alligator cracking model was considered by Missouri , Ohio, Arkansas, Washington, Minnesota, Montana, New Mexico and Colorado . Missouri used a non - statistical approach because the observed cracking was less than 5 percent for 99 percent of the test sections. The nationally ca librated model both over and under - predicted alligator cracking for the test sections. 
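To make the role of the C1, C2, and C4 terms in Equation (2-7) concrete, the sketch below evaluates the bottom-up cracking transfer function for a hypothetical damage index. The thickness-dependent C1* and C2* expressions follow the form published in the MEPDG documentation as understood by the author, so the sketch should be treated as illustrative rather than as the software source.

```python
import math

def alligator_cracking_percent(damage_index, hma_thickness_in,
                               c1=1.0, c2=1.0, c4=6000.0):
    """Bottom-up (alligator) cracking, percent of lane area.

    Assumed MEPDG form: a sigmoid in log10(100*DI) whose shape terms
    C1* and C2* depend on the total HMA thickness.
    """
    c2_star = -2.40874 - 39.748 * (1.0 + hma_thickness_in) ** -2.856
    c1_star = -2.0 * c2_star
    sigmoid = c4 / (1.0 + math.exp(c1 * c1_star +
                                   c2 * c2_star * math.log10(damage_index * 100.0)))
    return sigmoid / 60.0

# Hypothetical cumulative damage index for an 8 in. HMA section.
print(alligator_cracking_percent(0.15, 8.0))                       # global coefficients
print(alligator_cracking_percent(0.15, 8.0, c1=0.688, c2=0.294))   # e.g., Arkansas values (Table 2-10)
```

The state efforts summarized in this section essentially adjust these constants until the predicted values track the measured cracking for their own sections.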
However, it was recommended that the nationally calibrated model can be used at this time and re - evaluation should be performed once more alligator cracking is observed for the selected 41 test sections ( 5 ; 6 ) . The alligator cracking model was not calibrated for the state of Ohio due to the inability to distinguish between top - down and bottom - up cracking for the selected sections ( 7 ) . Arkansas used the Excel solver add - on to optimize the local calibration coefficients by minimizing the error between the predicted and measured distress for 26 pavement sections ( 8 ) . The state of Washington pe rformed sensitivity by using an elasticity approach on the transfer functions to determine the most important calibration coefficients. Local calibration was performed on only two representative sections. The coefficients were adjusted until the error was minimized between the predicted and measured alligator cracking. The coefficients were validated using a larger set of data independent of the calibration sections. The purpose of the validation was to provide approximate field performance instead of a pre cise 20 prediction for each section ( 9 ) . The state of Min nesota did not observe any measured alligator cracking on the selected pavement sections for calibration. However, they compared the results from the Pavement - ME to the MnPAVE design software and determined that there is a correlation between the Pavement - ME and MnPAVE design software. They recommended changes to the fatigue damage equation by adding a direct multiplier for Minnesota conditions ( 10 ) . The state of Montana calibrated the fatigue damage model instead of the alligator cracking transfer function ( 11 ) . The state of New Mexico adjusted the model by running different C 1 and C 2 coefficient values in the transfer function and minimizing the error between predicted and measured alligator cr acking ( 12 ) . The local calibration of the alligator cracking model in Colorado used a nonlinear model optimization tool to minimize the error between the predicted and measure d alligator cracking ( 13 ) . The local calibration improved the prediction for Colorado conditions. Another st udy was performed to locally calibrate the MEPDG using the National Center for Asphalt Technology (NCAT) testing data. The main objective of the study was to evaluate the nationally calibrated models in the MEPDG for the Southeastern United States. Additio nally the models were calibrated and validated to reflect the test track data. After local calibration the validation procedures indicated that the sum of squared errors reduced by roughly 50% compared to the global model ( 14 ) . Table 2 - 10 summarizes the modified local calibration coefficients among the above mentioned States. Table 2 - 10 Local calibration coefficients for alligator cracking Calibration coefficient National coefficients Arkansas New Mexico Washington Colorado NCAT C 1 1 0.688 0.625 1.071 0.07 2.06 C 2 1 0.294 0.25 1 2.35 2.09 C 4 6000 6000 6000 6000 6000 10000 21 Longitudinal cracking transfer function It is assumed that l ongitudinal cracking starts at the top of the pavement and propagates downwards. Equation ( 2 - 10 ) ( 3 ) shows the transfer function and the calibration coefficients for top - down longitudinal cracking in the Pavement - ME. ( 2 - 10 ) where: FC Top = Length of longitudinal cracks that initiate at the top of the HMA layer, ft/mi. 
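The elasticity approach mentioned above can be approximated numerically: the elasticity of a prediction with respect to a coefficient is the percent change in the prediction per percent change in that coefficient about its base value. The sketch below estimates it with central finite differences; the stand-in transfer function and damage value are hypothetical, so only the relative magnitudes are meaningful.

```python
import math

def predicted_cracking(c1, c2, c4, damage_index=0.15):
    """Stand-in transfer function of the same general sigmoidal shape as
    Equation (2-7); used only to illustrate the elasticity calculation."""
    return (c4 / 60.0) / (1.0 + math.exp(c1 - c2 * math.log10(damage_index * 100.0)))

def elasticity(fun, args, name, step=0.01):
    """Central-difference elasticity: % change in output per % change in one argument."""
    base = dict(args)
    x0 = base[name]
    up = dict(base, **{name: x0 * (1 + step)})
    down = dict(base, **{name: x0 * (1 - step)})
    return (fun(**up) - fun(**down)) / (2 * step * fun(**base))

global_coeffs = {"c1": 1.0, "c2": 1.0, "c4": 6000.0}
for coeff in ("c1", "c2", "c4"):
    print(coeff, round(elasticity(predicted_cracking, global_coeffs, coeff), 3))
```

Coefficients with the largest absolute elasticities are the natural candidates to adjust during local calibration.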
DI Top = Cumulative damage index near the top of the HMA surface C 1,2,4 = Transfer function regression constants; C 4 = 1,000; C 1 =7.00; and C 2 =3.5 The local calibration of the longitudinal cracking model was considered in Arkansas, Washington, Minnesota, Montana , New Mexico and Kansas . The st ate of Arkansas performed local calibration on the longitudinal cracking model by changing the coefficients and minimizing the error between the predicted and measured cracking by using 26 LTPP and PMS pavement sections, while 6 sites were used for subsequ ent validation ( 8 ) . Washington State calibrated the longitudinal cracking model using the methods described for alligator cracking ( 9 ) . Minnesota did not calibrate the model due to issues with the Pavement - ME software at the time of the study ( 10 ) . Montana obs erved a very large difference between the measured and predicted cracking and concluded that the model should not be used ( 11 ) . New Mexico calibrated the longitudinal cracking mod el using the methods described for alligator cracking ( 12 ) . Kansas calibrated the longitudinal cracking model by changing the C1 and C4 calibration coefficients. Kansas calibr ated the longitudinal cracking model for pavement projects with two different subgrade M r values (2700 psi and >4000 psi) ( 15 ) . Table 2 - 11 summarizes the modified local calibration coefficients in the above mentioned states. 22 Table 2 - 11 Local calibration coefficients for longitudinal cr acking Calibration coefficient National coefficients Arkansas Kansas New Mexico Washington C1 7 3.016 0.438/4.5 3 6.42 C2 3.5 0.216 3.5 0.3 3.596 C4 1000 1000 36000 1000 1000 2.3.1.2. Transverse (thermal) cracking model Thermal cracking is associated with the contraction of the HMA material due to surface temperature fluctuations. The variations in temperature affect the volume changes of the material and as a consequence stresses develop due to the continual contraction of the materials, and the restrained con ditions, which causes the occurrence of thermal cracks. Typically, thermal cracking in flexible pavements occur due to the temperature drop experienced by the pavement in cold conditions. A thermal crack will initiate when the tensile stresses experienced in the HMA layers become equal to or greater than the tensile strength of the material. The initial cracks propagate through the HMA layer with more thermal cycles. The amount of crack propagation induced by a given thermal cooling cycle is predicted using the Paris law of crack propagation . E xperimental results indicate that reasonable estimates of A and n can be obtained from the indirect tensile creep - compliance and strength of the HMA in accordance with E quations ( 2 - 11 ) and ( 2 - 12 ) ( 3 ) . ( 2 - 11 ) where: C = Change in the crack depth due to a cooling cycle. K = Change in the stress intensity factor due to a cooling cycle. A, n = Fracture param eters for the HMA mixture. ( 2 - 12 ) where: 23 = k t = Coefficient determined through global calibration for each input level (Level 1 = 5.0; Level 2 = 1.5; and Level 3 = 3.0). E HMA = HMA indirect tensile modulus, psi. m = Mixture tensile strength, psi. m = The m - value derived from the indirect tensile creep compliance curve measured in the laboratory. t = Local or mixture calibration factor. 
The stress intensity factor, K , has been incorporated in the Pavement - ME through the use of a simplified equation developed from theoretical finite elem ent studies by using the model shown in Equation ( 2 - 13 ) ( 3 ) . ( 2 - 13 ) where: tip = Far - field stress from pavement response model at depth of crack tip, psi. C o = Current crack length, feet. Equation ( 2 - 14 ) ( 3 ) shows the transfer function for transverse cracking in the Pavement - ME. ( 2 - 14 ) where: TC = Observed amount of thermal cracking, ft/mi. t1 = Regression coefficient determined through global calibration (400). N[z] = Standard normal distribution evaluated at [z]. d = Standard deviation of the log of the depth of cracks in the pavement (0.769), in. C d = Crack depth, in. H HMA = Thicknes s of HMA layers, in. The transverse cracking model was considered for local calibration by Missouri, Ohio, Arkansas, Minnesota, Montana , Colorado and Kansas . Missouri evaluated the model for both Level 1 (local creep compliance and Indirect Tensile (IDT) strength) and Level 3 (Pavement - ME 24 defaults) inputs. It was found that the Level 1 analysis provided more accurate results. Hypothesis testing for mean difference, intercept and slope were performed to assess the bias and error in the model. The model was recalibrated and the local calibration coefficients obtained by the study were recommended for use ( 5 ; 6 ) . Ohio used a non - statistical method to compare the measured and predicted transverse cracking. The method consists of dividing the distress magnitude into different ca tegories and comparing the number of data points that move from one category to the next. It was concluded that the nationally calibrated model is adequate at this time ( 7 ) . Arkansas did not calibrate the model becaus e minimal observed transverse cracking due to the appropriate performance grade asphalt selection for specific climatic conditions ( 8 ) . Minnesota did not calibrate the model because the curren t transverse cracking model was not incorporated in the Pavement - ME at the time of the study ( 10 ) . Montana calibrated the model and found it to be adequate for their design practices ( 11 ) . Colo rado locally calibrated the transverse cracking model using Level 1 data for 12 pavement projects. The Level 1 K coefficient was changed between 1 and 10. It was found that a K=7.5 produced the best goodness of fit and minimal bias. Kansas calibrated the t hermal cracking model coefficients to minimize the bias between the measured and predicted thermal cracking. The models were calibrated for two different datasets. The first was for projects with an subgrade M r value of 2700 and the second for M r values gr eater than 4000 psi ( 15 ) . Table 2 - 12 summarizes the modified local calibrat ion coefficients for the various States. Table 2 - 12 Local calibration coefficients for the thermal cracking model Calibration coefficient National coefficients Missouri Montana Colorado Kansas Level 1 K 1.5 0 .625 - 7.5 - Level 2 K 0.5 - - - - Level 3 K 1.5 - 0.25 - 120 1 /36 2 1 M r = 2700psi 2 M r > 4000psi 25 2.3.1.3. Rutting model The rutting model predicts the permanent deformation in each pavement layer/sub - layer for the entire analysis period. The rutting is predict ed in absolute terms and not based on an incremental approach such as fatigue cracking. The average vertical resilient strain is computed for each analysis over the entire design life of the pavement. The rutting is predicted separately for the HMA, base, and subgrade. 
The total rutting predicted consists of the sum of the HMA, base, and subgrade rutting. Equation ( 2 - 15 ) ( 3 ) shows the current rutting model for the HMA layers i n the Pavement - ME. The model indicates that there are three local calibration coefficients in this function (i.e., , ) Equation ( 2 - 16 ) ( 3 ) shows the rutting model for unbound layers. This model has one calibration coefficient. ( 2 - 15 ) where: p(HMA) = Accumulated permanent or plastic vertical deformation in the HMA layer/sub - layer, in. p(HMA) = Accumulated permanent or plastic axial strain in the HMA layer/sub - layer, in/in. r(HMA) = Resilient or elastic strain calculated by the structural response model at the mid - depth of each HMA sub - layer, in/in. h (HMA) = Thickness of the HMA layer/sub - layer, in. n = Number of axle load repetitions. T = Mix or pavement temper ature, °F. k z = Depth confinement factor. k 1r,2r,3r = Global field calibration parameters (from the NCHRP 1 - 40D recalibration; k 1r = - 3.35412, k 2r = 0.4791, k 3r = 1.5606). 1r 2r 3r = Local or mixture field calibration constants; for the global cal ibration, these constants were all set to 1.0. ( 2 - 16 ) w here: p(Soil) = Permanent or plastic deformation for the layer/sub - layer, in. n = Number of axle load applications. o = Intercept determined from laboratory repeated load permanent deformation tests, in/in. r = Resilient strain imposed in laboratory test to obtai n material properties o , , and , in/in. 26 v = Average vertical resilient or elastic strain in the layer/sub - layer and calculated by the structural response model, in/in. h Soil = Thickness of the unbound layer/sub - layer, in. k s1 = Global calibration co efficients; k s1 =1.673 for granular materials and 1.35 for fine - grained materials. s1 = Local calibration constant for the rutting in the unbound layers; the local calibration constant was set to 1.0 for the global calibration effort. = A parameter dep endent on moisture content of the soil = A parameter related moisture content and resilient modulus of the soil The total rutting is calculated based on Equation ( 2 - 17 ) ( 3 ) below: ( 2 - 17 ) The local calibration of the rutting model in the P avement - ME was performed by Arkansas, Minnesota, Missouri, Montana, New Mexico, North Carolina, Ohio, Texas, Washington Colorado and Kansas . Arkansas calibrated the rutting model by using an iterative approach. It was found from the field observations tha t rutting occurred in the asphalt and the subgrade layers; therefore, the coefficient for the granular base was not changed. The model was calibrated and a reduction in error between the predicted and measured rutting was obtained ( 8 ) . Minnesota performed the local calibration by investigating the contribution of each pavement layer. It was found that the global model over estimated early age rutting, especially for base and subgrade layers. Co nsequently, the total rutting model was adjusted by subtracting the first month predicted rutting prediction for base and subgrade layers. However, the calibration coefficients were not modified and it was observed that the adjusted rutting model predicts rutting adequately for Minnesota pavement sections ( 10 ) . Missouri calibrated the rutting model by performing a series of hypothesis tests to determine the bias in the rutting model predictions. The calibration coefficients were adjusted and the error between the predict ed and measured rutting was reduced ( 5 ; 6 ) . 
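The role of the layer-specific local calibration constants (β1r, β2r, and β3r for the HMA layer and βs1 for the unbound layers) in Equations (2-15) through (2-17) can be illustrated with the sketch below. The functional forms follow the MEPDG documentation as understood by the author, the depth confinement factor kz is set to 1.0 for simplicity, and all inputs are hypothetical, so the output is indicative only.

```python
import math

def hma_rutting_in(resilient_strain, thickness_in, n_loads, temp_f,
                   b1=1.0, b2=1.0, b3=1.0, kz=1.0):
    """HMA layer plastic deformation (Eq. 2-15 form as assumed in the lead-in)."""
    k1, k2, k3 = -3.35412, 0.4791, 1.5606   # global field calibration parameters
    plastic_strain = (kz * b1 * 10.0 ** k1 * resilient_strain *
                      n_loads ** (k2 * b2) * temp_f ** (k3 * b3))
    return plastic_strain * thickness_in

def unbound_rutting_in(vertical_strain, thickness_in, n_loads,
                       bs1=1.0, ks1=1.673, eps_ratio=1.0, rho=1000.0, beta=0.5):
    """Unbound layer plastic deformation (Eq. 2-16 form as assumed in the lead-in);
    ks1 = 1.673 for granular material, 1.35 for fine-grained soil."""
    return (bs1 * ks1 * eps_ratio * math.exp(-(rho / n_loads) ** beta) *
            vertical_strain * thickness_in)

# Hypothetical section: total rutting is the sum of the layer contributions (Eq. 2-17).
n = 1.0e6   # cumulative axle load repetitions
total = (hma_rutting_in(1.0e-4, 6.0, n, 70.0) +
         unbound_rutting_in(2.0e-4, 8.0, n) +              # base
         unbound_rutting_in(1.5e-4, 100.0, n, ks1=1.35))   # subgrade (effective depth)
print(round(total, 3), "inches of total rutting (illustrative)")
```

Because each β enters multiplicatively, local calibration of the rutting model amounts to rescaling the layer contributions until the summed prediction matches the measured rut depths.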
Montana observed that the models over - predict rutting when compared with measured rut depths. The base and subgrade rutting coefficients were adjusted to 27 reflect minimal predicted rutting to match observed rut depths in those layer s. Furthermore, the HMA mix specific coefficients ( k s ) were adjusted instead of the coefficients. The k coefficients depend on the voids filled with asphalt (VFA) for each design ( 11 ) . New Mexico calibrated the rutting model by changing the coefficients by minimizing the error between the predicted and measured rutting ( 12 ) . North Carolina used two different methods for calibration: (a) the first method consisted of running the software with different calibration coefficients and minimizing the error between the predicted and measured rutting, and (b) the second method used a genetic algorithm to determine the optimized coefficients ( 16 ) . Ohio used a similar procedure as Missouri and used hypothesis testing to determine the model bias. The calibration coefficients were modified although some bias existed after calibration. They concluded that the modified coefficient were more reasonable than the global ones ( 7 ) . Texas also used an approach that minimized the error between measured and predicted rutting. The coefficients were determined for several different regions within the State. Based on the regional coefficients, a statewide average was determined ( 17 ) . Washington changed the local calibration coefficients iteratively until the error was minimized ( 9 ) . Colorado calibrated the rutting model by changing the specific mixture coefficients as well as the individual layer coefficients . The local calibration slightly improved the rutting predictions for Colorado conditions wh en compared with the global model. Kansas calibrated the rutting model by focusing on projects without an unbound base course layer because many of their pavement projects did not have an unbound base layer. It was found that the global models over - predict ed rutting for Kansas conditions. The models were calibrated to minimize bias between the measured and predicted rutting. The calibrated models predicted rutting much better for Kansas pavements compared to the global model ( 15 ) . Similar to the fatigue cracking model, the rutting model was calibrated using the NCAT test data. It wa s 28 found that the global model significantly over - predicted rutting for the NCAT test sections. Based on the results, it was concluded that the unbound layer rutting prediction was the cause for the over - prediction. The rutting model was calibrated by minim izing the sum of squared errors between the measured and predicted rutting. The HMA layer calibration coefficients were not changed and only the coefficients related to the base and subgrade were calibrated ( 14 ) . The local calibration coefficients for the different States are summarized in Tab le 2 - 13 . Table 2 - 13 Local calibration coefficients for the rutting model Calibration coefficient Global AR MO NM NC OH TX WA CO KS NCAT HMA 1 1.2 1.07 1.1 13.1 0.51 2.39 1.05 1.34 0. 9 1 HMA 1 1 - 1.1 0.4 - - 1.109 1 1 1 HMA 1 0.8 - 0.8 1.4 - 0.856 1.1 1 1 1 base 1 1 0.01 0.8 0.303 0.32 - - 0.4 1 0.05 subgrade 1 0.5 0.437 1.2 1.102 0.33 0.5 0 0.84 0.1281 1 / 0.3251 2 0.05 1 M r = 2700psi 2 M r > 4000psi 2.3.1.4. IRI model (flexible pavements) Equation ( 2 - 18 ) ( 3 ) shows the IRI performance prediction mod el in the Pavement - ME. 
There are four calibration coefficients in this model: ( 2 - 18 ) w here: IRI o = Initial IRI after construction, in/mi. SF = Site factor FC Total = Area of fatigue cracking (combined alligator, longitudinal, and reflection cracking in the wheel path), percent of total lane area. All load related cracks are combined on an area basis length of cracks is multiplied by 1 foot to convert length into an area basis. TC = Length of transverse cracking (including the reflection of transverse cracks in existing HMA pavements), ft/mi. RD = A verage rut depth, in. 29 Currently, the following equation is documented in most of the literature for the site factors ( SF ): ( 2 - 19 ) w here: Age = Pavement age, years. PI = Percent plasticity index of the soil. FI = Average annual freezing index, degree F days. Precip = Average annual precipitation or rainfall, in. Howe ver, during the local calibration in Michigan, it was found that the following equations were coded in the Pavement - ME analysis and design software: ( 2 - 20 ) ( 2 - 21 ) ( 2 - 22 ) where: SF = Site factor Age = Pavement age (years) FI = Freezing index, °F - days. Rain = Mean annual rainfall (in.) P 4 = Percent subgrade material passing No. 4 sieve P 200 = Percent subgrade material passing No. 200 sieve. The IRI model was calibrated in Missouri, Ohio, New Mexico and Colo rado . The IRI model was not locally calibrated in Arkansas, Minnesota, Montana and Washington. Missouri calibrated the IRI model after the rutting and the transverse cracking models were calibrated. The bias after calibration was deemed acceptable. Ohio us ed the same procedure as Missouri and the bias was also considered more reasonable after calibration. New Mexico calibrated the IRI model after calibrating the rutting and fatigue cracking models and only the site factor parameter was modified. Colorado c alibrated the IRI model and found that the locally calibrated model 30 improved the IRI predictions for Colorado conditions. The standard error of the estimate (SEE) increased slightly, and the correlation coefficient ( R 2 ) improved significantly. Kansas calib rated the IRI model by focusing on minimizing the bias between the predicted and measured IRI. The SEE and bias was reduced after local calibration ( 15 ) . The adjusted calibration coefficients for different State are presented in Table 2 - 14 . Table 2 - 14 Local calibration coefficients for the IRI model Calibration coefficient National coefficients Missouri New Mexico Ohio Colorado Kansas SF 0.015 0.01 0.015 0.066 0.019 0.015 FC total 0.4 0.975 - 1.37 0.3 0.04 TC 0.008 0.008 - 0.01 0.02 0.001 RD 40 17.7 - 17.6 35 270/95 2.3.1.5. Transverse cra cking model (rigid pavements) In rigid pavements, transverse cracking is a load related distress caused by repeated loading. Under typical service conditions, transverse cracking can occur starting at either the top or bottom of the concrete slab because of slab curling. The potential for either mode of cracking is present in all slabs. Any slab can crack from either the bottom or the top of the concrete pavement, but not both simultaneously. Therefore, the predicted bottom - up and top - down cracking are com bined in such a way where both types of cracking is reported but the possibility of both modes occurring on the same slab is excluded. 
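Equation (2-23), presented next, relates the cracked-slab fraction to accumulated fatigue damage through the C4 and C5 coefficients. Because the equation image did not survive conversion, the sketch below assumes the commonly published sigmoidal form CRK = 1/(1 + C4*DI^C5) with the national values C4 = 1 and C5 = -1.98 (Table 2-15), and combines bottom-up and top-down cracking as in Equation (2-25); the damage indices are hypothetical.

```python
def cracked_fraction(fatigue_damage, c4=1.0, c5=-1.98):
    """Fraction of slabs cracked versus fatigue damage (assumed Eq. 2-23 form)."""
    return 1.0 / (1.0 + c4 * fatigue_damage ** c5)

def total_transverse_cracking(di_bottom_up, di_top_down, c4=1.0, c5=-1.98):
    """Combined percent of cracked slabs (Eq. 2-25): both modes are counted
    without allowing the same slab to crack from both the top and the bottom."""
    bu = cracked_fraction(di_bottom_up, c4, c5)
    td = cracked_fraction(di_top_down, c4, c5)
    return (bu + td - bu * td) * 100.0

# Hypothetical fatigue damage indices at the end of the design life.
print(total_transverse_cracking(0.3, 0.05))                     # national coefficients
print(total_transverse_cracking(0.3, 0.05, c4=0.9, c5=-2.64))   # e.g., Minnesota values (Table 2-15)
```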
The percentage of slabs with transverse cracks (including all severities) in a given traffic lane is used as the measure of transverse cracking and is predicted using Equation ( 2 - 23 ) ( 3 ) for both bottom - up and top - down cracking: ( 2 - 23 ) The general expression for fatigue damage accumulations considering all critical factors (age, month, axle type, load level, tem perature gradient, axle wander, and hourly traffic) for 31 Equation ( 2 - 24 ) ( 3 ) . ( 2 - 24 ) where: DI F = Total fatigue damage (top - down or bottom - up). n i,j,k, .. . = Applied number of load applications at condition i, j, k, l, m, n. N i,j,k, = Allowable number of load applications at condition i, j, k, l, m, n. i = Age (accounts for change in PCC modulus of rupture and elasticity, slab/base contact friction, d eterioration of shoulder LTE). j = Month (accounts for change in base elastic modulus and effective dynamic modulus of subgrade reaction). k = Axle type (single, tandem, and tridem for bottom - up cracking; short, medium, and long wheelbase for top - down cr acking). l = Load level (incremental load for each axle type). m = Equivalent temperature difference between top and bottom PCC surfaces. n = Traffic offset path. o = Hourly truck traffic fraction. The fatigue damage calculation is a process of summi ng damage from each damage increment. Once top - down and bottom - up damage are estimated, the corresponding total cracking is computed E quation ( 2 - 25 ) ( 3 ) . ( 2 - 25 ) where: TCRACK = Total transverse cracking (percent, all severities). CRK Bottop - up = Predicted amount of bottom - up transverse cracking (fraction). CRK Top - down = Predicted amount of top - down transverse cracking (fraction). The transverse cracking model was considered for local calibration in Ohio, Minnesota, Missouri, Washington and Co lorado . The global model was accepted for Ohio and Missouri since there was no significant difference observed between the measured and predicted transverse cracking. Both studies recommend to revisit the local calibration of the model once 32 more condition data becomes available for the selected projects. Washington calibrated the transverse cracking model and the local calibration coefficients provided a better prediction as compared to the global model. Minnesota used an iterative approach to locally calib rate the transverse cracking model. The local calibration coefficients improved the transverse cracking predictions for the pavement sections in the calibration set. It should be noted that the model was calibrated on a limited dataset. Colorado found that their JPCP pavement sections were performing well and did not show any significant amounts of cracking. At this time, they did not locally calibrate the transverse cracking model. The nationally calibrated transverse cracking model was recalibrated to ref lect changes in the measurement of the coefficient of thermal expansion (CTE). The updated CTE measurements resulted in slightly lower values ( 18 ) . The models were recalibrated using similar methods as the original national calibration. Table 2 - 15 summarizes the transverse cracking model local calibration coefficients in different stat es. Table 2 - 15 Local calibration coefficients for the rigid transverse cracking model Calibration coefficient National coefficients Missouri Washington Ohio Colorado Minnesota Revised National C4 1 1 0.139 1 1 0.9 0.6 C5 - 1.98 - 1.98 - 2.115 - 1.98 - 1.98 - 2.64 - 2.05 2.3.1.6. 
Faulting model The mean transverse joint faulting is predicted on a month ly basis using an incremental approach. A faulting increment is determined each month and the current faulting level affects the magnitude of the increment. The faulting at each month is determined as a sum of faulting increments from all previous months in the pavement life from the traffic opening date using the equations ( 3 ) : ( 2 - 26 ) ( 2 - 27 ) 33 ( 2 - 28 ) ( 2 - 29 ) w here: Fault m = Mean joint faulting at the end of month m , in . i = Incremental change (monthly) in mean transverse joint faulting during month i , in. FAULTMAX i = Maximum mean transverse joint faulting for month i , in. FAULTMAX 0 = Initial maximum mean transverse joint faulting, in. EROD = Base/sub - base erod ibility factor DE i = Differential density of energy for subgrade deformation accumulated during month i curling = Maximum mean monthly slab corner upward deflection due to temperature curling and moisture warping. P S = Overburden on subgrade, lb. P 200 = Percent subgrade material passing #200 sieve. WetDays = Average annual number of wet days (greater than 0.1 inch rainfall). C 1,2,3,4,5,6,7,12,34 = Global calibration constants ( C 1 = 1.0184; C 2 = 0.91656; C 3 = 0.002848; C 4 = 0.0008837; C 5 = 250; C 6 = 0 .4; C 7 = 1.8331) ( 2 - 30 ) ( 2 - 31 ) where: FR = Base freezing index defined as percentage of time the top base temperature is below freezing (32 °F) temperature. Several SHAs attempted to locally calibrate the faulting model. Most states accepted the global model coefficients because of limited faulting measurements. Washington found very different local calibration coefficients for their paveme nt sections included in the calibration dataset. They also determined different calibration coefficients for un - doweled and dowel - bar retrofitted (DBR) pavements. Kansas calibrated the faulting model coefficient for three different chemically stabilized ba se types. The calibration coefficients were adjusted for all three base types. Based on the results, the locally calibrated models showed lower SEE compared to the global model ( 15 ) . Similarly to the transverse cracking model, the nationally calibrated faulting model was recalibrated to reflect the changes in the CTE measurements. The calibration 34 coefficients were obtained in a method similar to the original national models. The faulting model local calibration results are summarized in Table 2 - 16. Table 2 - 16 Local calibration coefficie nts for the faulting model Calibration coefficient National coefficients Washington un - doweled Washington DBR Kansas Revised National C1 1.0184 0.4 0.934 1.0184 0.5104 C2 0.91565 0.341 0.6 0.91656 0.00838 C3 0.002848 0.000535 0.001725 0.00164 0.00147 C 4 0.00883739 0.000248 0.004 0.000883739 0.00834 C5 250 77.5 250 250 5999 C6 0.4 0.0064 0.4 0.15 0.8404 C7 1.8331 2.04 0.65 0.01 5.9293 C8 400 400 400 400 400 2.3.1.7. IRI model (rigid pavements) In the Pavement - ME, s moothness is predicted as a function of th e initial as - constructed profile of the pavement and any change in the longitudinal profile over time and traffic due to distresses and foundation movements . The global IRI model was calibrated and validated using LTPP field data to assure that it would pr oduce valid results under a variety of climatic and field conditions. The final IRI model is shown in the following equations ( 3 ) : ( 2 - 32 ) where: IRI = Predicted IRI, in/mi. IRI I = Initial smoothness measured as IRI, in/mi. 
CRK = Percent slabs with transverse cracks (all severities). SPALL = Percentage of joints with spalling (medium a nd high severities). TFAULT = Total joint faulting cumulated per mi, in. C1 = 0. 8203 C2 = 0.4417 C3 = 0.4929 C4 = 25.24 SF = Site factor. 35 ( 2 - 33 ) w here : AGE = Pavement age, yr. FI = Freezing index, °F - days. P 200 = Percent subgrade material passing No. 200 sieve. The transverse cracking and faulting values are obt ained using the models described earlier . The transverse joint spalling is determined by Equation ( 2 - 34 ) ( 3 ) , which was calibr ated using LTPP and other data. ( 2 - 34 ) where: SPALL = Percentage joints spalled (medium - and high - severities). AGE = Pavement age since construction, years. SCF = Scaling factor based on site, design, and climate. ( 2 - 35 ) The above model for scaling factor reported in the literature ( 19 ) was modified in the software as follows: ( 2 - 36 ) where: AIR% = PCC air content, percent. AGE = Time since construction, years. PREFORM = 1 if preformed sealant i s present; 0 if not. f'c = PCC compressive strength, psi. FTCYC = Average annual number of freeze - thaw cycles. 36 h PCC = PCC slab thickness, in. WC_Ratio = PCC water/cement ratio. The IRI model was considered for local calibration by Minnesota, Missour i, Ohio, Washington Colorado and Kansas . Minnesota only calibrated the IRI model with respect to the pavement age and did not change anything else. The IRI model has changed since Minnesota performed local calibration. Missouri and Ohio recommend the local ly calibrated coefficients since those provided a better prediction with less bias compared to the global model. The global model provided adequate predictions but both States wanted to reduce bias further. Washington found significant differences between the global model predictions and the measured IRI. They believe that the difference between the measured and predicted IRI is attributed to the use of studded tires on Washington pavements. Colorado determined that the global model predicted IRI sufficient ly and local calibration was not necessary. Kansas calibrated the IRI model by changing the various coefficients to minimize the bias between the measured and predicted IRI. The model calibrations were performed for three different base types. The locally calibrated model predicted IRI for Kansas conditions better than the global model ( 15 ) . The IRI local calibration coefficients are summarized in Table 2 - 17. Table 2 - 17 Local calibration coefficients for rigid IRI model Calibration coefficient National coefficients Missouri Ohio Colorad o Cracking 0.820 0.82 0.82 0.820 Spalling 0.442 1.17 3.7 0.442 Faulting 1.493 1.43 1.711 1.493 Site Factor 25.24 66.8 5.703 25.24 2.3.2 Challenges and Lesson Learned A survey of SHAs was conducted recently and is documented in NCHRP Synthesis 457 37 ( 1 ) fy their current design practices as well as their plan for the implementation of the ME - based design. The questionnaire focused on the practices, related to the implementation of the Pavement - ME are of particular interest. The challenges are most appropriate hierarchical input levels and the need for local calibration . Most SHAs are concerned with the software complexity, the training needed for the ME - based design practices, and, the operation and functionality of the software. The availability of the necessary input data is a major concern. 
Most SHAs indicated tha t pavement condition data, existing pavement structure information, and traffic data are readily and testing the missing information requires a significant ef fort by the SHAs. Selecting Level 1 inputs also requires significant effort by the agencies. The survey indicated that only site - specific vehicle classification and average annual daily truck traffic (AADTT) are likely available for the majority of SHAs. Based on the lack of available data for Level 1 inputs, regional averages or Pavement - ME default values will be used for pavement designs. The survey respondents provided a number of challenges and lessons learned during the implementation process. As expe cted, one of the most common challenges reported was the lack of readily available traffic and materials data, and the large effort required to obtain the needed data. In addition, SHAs indicated that contacting the respective office or division in an agen cy (e.g., construction, materials, traffic or planning) early on in the implementation process is helpful. This proactive awareness and coordination among different offices will make sure that everyone understands what data are needed and why. Further, thi s communication will help in 38 preparing the respective staff to conduct field sampling and testing if the needed data are not available. The survey results describe the following challenges in implementing the ME - based designs: District offices are resista nt to change from empirical - based designs to ME - based designs. The main reason is a higher comfort level with the inputs and resulting outputs (i.e., layer thickness) with the AASHTO 1993 Guide. Therefore, making a shift to using design inputs and predicti ng distresses in the Pavement - ME, contrary to obtaining layer thickness as the final result, has been difficult to accept. Variations and changes to the pavement condition data collection procedures in different highway agencies have resulted in inconsis tency with condition data measurement. These discrepancies among agencies have lowered their ability to obtain reliable pavement condition data for use in the calibration process. Lack of resources to conduct in - house local calibration and training of staf f is another hurdle identified . While Pavement - ME is too complex for most practicing engineers, the adoption of the benefits of the design procedure in the long - run. The pr ocedure is evolving over time and several variations and improvement were made in last couple of years (various versions of the software). Therefore, a potential of more work remains (i.e., recalibration of performance models), as a result of newer version s and modifications to software. The survey ( 1 ) results also presented the following lessons learned in the implementation process: 39 Establish realistic timelines for the calibration and validation process Allow sufficient time for obtaining materials and traffic data Ensure the data related to the existing pavement layer, materials properties, and traffic is readily available If necessary, develop a plan for collecting the needed data; this can require an expensive field sampling and testing effort Develop agency - based design inputs to avoid defaul t or other inputs to minimize design variability Provide training to agency staff in ME design fundamentals, MEPDG procedures, and the Pavement - ME software 2.3.2.1. 
Other Challenges
Louisiana ( 20 ) published a paper which outlined some of the challenges experienced throughout the local calibration process. Many of the issues they experienced were also outlined in the NCHRP Synthesis 457 ( 1 ). Additionally, the authors emphasize that calibrating the transfer functions and the associated standard error is essential; they state that calibration of the standard error is more important than the calibration of the cracking model ( 20 ).
2.4 IMPLEMENTATION EFFORTS IN MICHIGAN
To support the Pavement-ME implementation process in the state of Michigan, several research projects have been completed to explore the various attributes of the design and analysis software. As a result of these efforts over the last seven (7) years, the following reports have been published:
Evaluation of the 1-37A Design Process for New and Rehabilitated JPCP and HMA Pavements (Report No. RC-1516) ( 21 ; 22 )
Preparation for Implementation of the Mechanistic-Empirical Pavement Design Guide in Michigan - Part 2: Rehabilitation Evaluation (Report No. RC-1594) ( 23 )
Preparation for Implementation of the Mechanistic-Empirical Pavement Design Guide in Michigan - Part 1: HMA Mixture Characterization (Report No. RC-1593) ( 24 )
Characterization of Traffic for the New M-E Pavement Design Guide in Michigan (Report No. RC-1537) ( 25 ; 26 )
Pavement Subgrade MR Design Values for Seasonal Changes (Report No. RC-1531) ( 27 )
Backcalculation of Unbound Granular Layer Moduli (Report No. RC-1548) ( 28 )
Quantifying Coefficient of Thermal Expansion Values of Typical Hydraulic Cement Concrete Paving Mixtures (Report No. RC-1503) ( 29 )
The results from these studies are considered throughout the local calibration process for Michigan. Brief findings from these works are summarized below and can be found at the MDOT website (URL: http://www.michigan.gov/mdot/0,4616,7-151-9622_11045_24249---,00.html ).
2.4.1 MDOT Sensitivity Study
A sensitivity study evaluating the 1-37A design process for new and rehabilitated JPCP and HMA pavements ( 21 ) was performed. The main objectives of the study were to:
a. Evaluate the Pavement-ME pavement design procedures for Michigan conditions
b. Verify the relationship between predicted and observed pavement performance for selected pavement sections in Michigan, and
c. Determine if local calibration is necessary
The report outlined the performance models for JPCP and HMA pavements. Two types of sensitivity analyses were performed, namely a preliminary one-variable-at-a-time (OAT) analysis and a detailed analysis consisting of a full factorial design. Both analyses were conducted to reflect MDOT pavement construction, materials, and design practices. For both new rigid and flexible pavement designs, the methodology contained the following steps:
1. Determine the input variables available in the Pavement-ME and the range of values which MDOT uses in pavement design
2. Determine the practical range for each input variable based on MDOT practice and Long Term Pavement Performance (LTPP) data
3. Select a base case and perform the OAT
4. Use the OAT results to design the detailed sensitivity analysis
5. Determine statistically significant input variables and two-way interactions
6. Determine the practical significance of statistically significant variables
7. Draw conclusions from the results
Tables 2-18 and 2-19 show the impact of input variables on different pavement performance measures for rigid and flexible pavements, respectively.
42 Table 2 - 18 Impact of input variables on rigid pavement performance ( 21 ) Design/material variable Impact on distress/smoothness Transverse joint faulting Transverse cracking IRI PCC thickness High High High PCC modulus of rupture None High Low PCC coefficient of thermal expansion High High High Joint spacing Moderate High Moderate Joint load transfer efficiency High None High PCC slab width Low Moderate Low Shoulder type Low Moderate Low Permanent curl/warp High High High Base type Moderate Moderate Low Climate Mo derate Moderate Moderate Subgrade type/modulus Low Low Low Truck composition Moderate Moderate Moderate Truck volume High High High Initial IRI NA NA High Table 2 - 19 Impact of input variables on flexible pavement performance ( 21 ) Fatigue cracking Longitudinal cracking Transverse cracking Rutting IRI HMA thickness HMA effective binder content HMA air voids Base material type Subbase material type HMA thickness HMA air voids HMA effective binder content Base material Subbase material Subgra de material HMA binder grade HMA thickness HMA effective binder content HMA air voids HMA aggregate gradation HMA thickness Subgrade material Subgrade modulus HMA effective binder content HMA air voids Base material Subbase material Base thickness Subbase thickness HMA thickness HMA aggregate gradation HMA effective binder content HMA air voids Base material type Subbase thickness Subbase material type Subgrade material type Note: The input variables are listed in order of importance. 2.4.2 Pavement R ehabilitati on E valuation in Michigan The study was performed to determine the sensitive inputs for the pavement rehabilitation options ( 23 ) . Three different sensitivity analyses were performed for each rehabilitation option. The global sensitivity analysis results provid ed the best results. The rankings of important input variables for each rehabilitation option are summarized below: 43 Table 2 - 20 List of significant inputs HMA over HMA Input variables Ranking (NSI) Overlay air voids 1 (6) Existing thickness 2 (5) Overlay thickness 3 (4) Existing pavement condition rating 4 (4) Overlay effective binder 5 (2) Subgrade modulus 6 (2) Subbase modulus 7 (1) Note: NSI = Normalized sensitivity index Table 2 - 21 List of significant inputs C omposite pavement Inputs Ranking (NSI) Overlay air voids 1 (9) Overlay thickness 2 (2) Existing PCC thickness 3 (1) Table 2 - 22 List of sig nificant inputs Rubblized PCC pavement Inputs Ranking (NSI) Overlay air voids 1 (6) Overlay effective binder 2 (2) Overlay thickness 3 (1) Table 2 - 23 List of significant inputs Unbonded PCC overlay De sign inputs Ranking (NSI) Overlay PCC thickness 1 (23) Overlay PCC coefficient of thermal expansion (CTE) 2 (12) Overlay PCC modulus of rupture (MOR) 3 (8) Overlay joint spacing 4 (5) Existing PCC elastic modulus 6 (1) Climate 7 (1) 2.4.3 HMA Mixture Characterization in Michigan - HMA mixture properties for typical mixtures used in the state of Michigan ( 24 ) . The Level 1 HMA inputs require laboratory tests to characterize a pavement in the Pavement - ME software. 44 The most important properties obtained from this study include the following: Dynamic modulus (E*) Bi nder ( G* ) Creep compliance and, Indirect tensile strength (IDT) The study determined Level 1 HMA mixture and binder characterizations for use as inputs in the Pavement - ME. Additionally, the study used artificial neural networks (ANN) for better prediction s of dynamic modulus from asphalt volumetrics. 
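The ANN models developed in that study are not reproduced here. Purely as an illustration of the general idea, the sketch below trains a small feedforward network to predict dynamic modulus from mixture volumetrics and test conditions; the feature set, network size, and data are synthetic, hypothetical choices and are not the inputs or models used in the MDOT study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Hypothetical training set: air voids (%), effective binder content (%),
# test temperature (deg F), and log10 of loading frequency (Hz).
n = 400
X = np.column_stack([
    rng.uniform(4, 9, n),      # air voids
    rng.uniform(9, 13, n),     # effective binder content
    rng.uniform(14, 130, n),   # test temperature
    rng.uniform(-1, 1.4, n),   # log10(frequency)
])

# Synthetic log10|E*| (psi), used only so the sketch runs end to end;
# a real model would be trained on measured dynamic modulus data.
log_e_star = (6.5 - 0.05 * X[:, 0] - 0.03 * X[:, 1]
              - 0.012 * X[:, 2] + 0.35 * X[:, 3]
              + rng.normal(0, 0.05, n))

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=5000, random_state=0))
model.fit(X, log_e_star)

# Predict |E*| for one hypothetical mixture and test condition.
pred = model.predict([[6.0, 11.0, 70.0, 0.0]])
print(f"Predicted |E*| of roughly {10 ** pred[0]:,.0f} psi")
```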
The research team also reviewed the current HMA test data as part of the MDOT testing program and compared it to the data required by Pavement - ME. Standalone software, called DYNAMOD, was developed to serve as a database to obtain the necessary HMA properties in a form compatible with the Pavement - ME software. 2.4.4 Traffic Inputs in Michigan Another study extensively focused on the traffic characterization for the Pavement - ME in Michigan ( 25 ; 26 ) . The following traffic characteristics were investigated: 1. Monthly distribution factors 2. Hourly distribution factors 3. Truck traffic classifications 4. Axle groups per vehicle 5. Axle load distributions for different axle configurations T he data were collected from 44 Weigh - in - motion (WIM) sites distributed throughout the entire state of Michigan. The data were used to develop Level 1 (site specific) traffic inputs for 45 the WIM locations. Cluster analysis was conducted to group sites with s imilar characteristics for developing Level 2 (regional) inputs. Statewide (Level 3) averages were also determined. The inputs and their recommended input levels are summarized in Table 2 - 24. Table 2 - 24 Concl usions and recommendations for traffic input levels Traffic Characteristic Impact on pavement performance Suggested input levels (when level I data not available) Rigid pavement Flexible pavement Rigid pavement Flexible pavement TTC Significant Moderate Level II HDF Significant Negligible Level II Level III 1 MDF Negligible Level III (State average) AGPV Negligible Level III (State average) Single ALS Negligible Level III (State average) Tandem ALS Significant Moderate Level II (State average) Tride m ALS Negligible Negligible Level III (State average) Quad ALS Negligible Moderate Level III (State average) 1 Level III inputs were available for flexible pavements in the MEPDG version 1.1 and are no longer available as input in the Pavement - ME 2.4.5 Unbo und Material Inputs in Michigan Two studies to characterize unbound material in Michigan were carried out in the last few years ( 27 ; 28 ) . The first study outlined the importance of the resilient modulus (MR) of the roadbed soil and how it affects pavement systems. The study focused on developing reliable methods to determine the MR of th e roadbed soil for inputs in the Pavement - ME. The study divided the state of Michigan into fifteen clusters based on the similar soil characteristics. Laboratory tests were performed to determine moisture content, grain size distribution, and Atterberg li mits. Another aspect of the study was to determine the differences between laboratory tested MR values and back - calculated MR. Based on the analysis it was concluded that the values between laboratory MR and back - calculated MR are almost equal if the stres s boundaries used in the laboratory matched those of the FWD tests. Table 2 - 25 summarizes the recommended MR values for design based on different roadbed types in Michigan. The study 46 recommends the use of the design values . 
Table 2 - 25 Average roadbed soil MR values ( 27 ; 28 ) Roadbed Type Aver age MR USCS AASHTO Laboratory determined (psi) Back - calculated (psi) Design value (psi) Recommended design MR value (psi) SM A - 2 - 4, A - 4 17,028 24,764 5,290 5,200 SP1 A - 1 - a, A - 3 28,942 27,739 7,100 7,000 SP2 A - 1 - b, A - 3 25,685 25,113 6,500 6,500 SP - SM A - 1 - b,A - 2 - 4, A - 3 21,147 20,400 7,000 7,000 SC - SM A - 2 - 4, A - 4 23,258 20,314 5,100 5,000 SC A - 2 - 6, A - 6,A - 7 - 6 18,756 21,647 4,430 4,400 CL A - 4, A - 6, A - 7 - 6 37,225 15,176 4,430 4,400 ML A - 4 24,578 15,976 4,430 4,400 SC/CL/ML A - 2 - 6, A - 4, A - 6, A - 7 - 6 26,853 17, 600 4,430 4,400 The second study focused on the backcalculation of MR for unbound base and subbase materials and made the following recommendations ( 28 ) : 1. In the desig n of flexible pavement sections using design Levels 2 or 3 of the Pavement - ME, the materials beneath the HMA surface layer should consist of the following two layers: a. Layer 1 - An aggregate base whose modulus value is 33,000 psi b. Layer 2 - A sand subbase wh ose modulus is 20,000 psi 2. In the design of rigid pavement sections using design Levels 2 or 3 of the Pavement - ME, the materials beneath the PCC slab could be either: a. An aggregate base layer whose modulus value is 33,000 psi supported by sand subbase whose modulus value is 20,000 psi b. A granular layer made up of aggregate and sand mix whose composite modulus value is 25,000 psi 47 c. A sand subbase whose modulus value is 20,000 psi 3. For the design of flexible or rigid pavement sections using design Level 1 of the Pa vement - ME, it is recommended that: For an existing pavement structure where the PCC slabs or the HMA surface will be replaced, FWD tests be conducted every 500 feet along the project and the deflection data be used to backcalculate the moduli of the aggreg ate base and sand subbase or the granular layer. The modulus values to be used in the design should correspond to the 33 rd percentile of all values. The 33 rd percentile value is the same as the average value minus half the value of the standard deviation. For a total reconstruction or for a new pavement section, the modulus values of the aggregate base and the sand subbase or the granular layer could be estimated as twice the average laboratory determined modulus value. 4. Additional FWD tests and backcalcula tion analyses should be conducted when information regarding the types of the aggregate bases under rigid and flexible pavements becomes known and no previous FWD tests were conducted. 5. MDOT should keep all information regarding the various pavement layers. The information should include the mix design parameters of the HMA and the PCC, the type, source, gradation and angularity of the aggregate and the subbase material type, source, gradation and angularity. The above information should be kept in easily se archable electronic files. 48 2.4.6 Coefficient of Thermal Expansion The CTE input values were obtained from the MDOT study that determined the CTE for various aggregates available across the state of Michigan ( 29 ) . It was decided later that t he CTE values for concrete in Michigan are either 4.5 or 5.8 in/in/°F×10 - 6 depending on the location of the pavement section . For U niversity and M etro regions , a CTE value of 5.8 in/in/°F×10 - 6 while for other regions, a value of 4.5 in/in/°F×10 - 6 should be used. 
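The two numerical rules above, the design modulus taken as the 33rd percentile of the backcalculated values (the average minus half the standard deviation) and the region-dependent CTE recommendation, can be expressed compactly. The following is a minimal sketch; the backcalculated moduli in the example are hypothetical.

```python
import numpy as np

def design_modulus_psi(backcalculated_psi):
    """Design modulus per the reported rule: the 33rd percentile of the
    backcalculated values, i.e., the average minus half the standard deviation."""
    values = np.asarray(backcalculated_psi, dtype=float)
    return values.mean() - 0.5 * values.std()

def cte_for_region(mdot_region):
    """CTE recommendation described above: 5.8e-6 in/in/°F for the University
    and Metro regions, 4.5e-6 in/in/°F for the other regions."""
    return 5.8e-6 if mdot_region in ("University", "Metro") else 4.5e-6

# Hypothetical FWD-backcalculated aggregate base moduli along one project (psi).
base_moduli = [31000, 36500, 28800, 33900, 30200, 35100]
print(round(design_modulus_psi(base_moduli)))   # design value for the base layer
print(cte_for_region("Grand"))                  # 4.5e-06
```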
49 3 - NEED FOR LOCAL CALIB RATION, PROJECT SELE CTION AND DATA REQUIREMENTS 3.1 I NTRODUCTION The first step in a local calibration pr ocess includes the selection of an adequate number of pavement sections representing state - of - the - practice for local conditions. A subsequent, essential step is to collect the required data for the selected in - service pavement sections. The data includes t he information about (a) measured pavement condition over time, and (b) several Pavement - ME inputs, for each project. The inputs directly affect the performance predictions. These predictions are compared to the measured performance of the as - constructed p avement sections. A pavement section is defined as a specific length of roadway corresponding to a construction project. A project may have up to two pavement sections (i.e. different directions on a divided highway). These sections for a project may have similar data inputs, but different measured pavement performance. The predicted pavement performance in the Pavement - ME relies on the inputs used to characterize an in service pavement. Therefore, several inputs are necessary to analyze a particular paveme nt in the design software, especially the one s which have significant impact on the predicted performance. This chapter describes the motivation for local calibration, the process for pavement section selection and the procedures adopted to collect the nec essary information for the selected pavement sections. 3.2 M OTIVATION The Pavement - ME is becoming the state of the practice in pavement design. The performance prediction models in the Pavement - ME need to be able to accurately predict pavement performance for unique conditions experienced in a particular State o r region. In order to do this, l ocal calibration is required. Over the last few years the Michigan Department of Transportation (MDOT) has sponsored several research studies to prepare for the 50 implement ation of the Pavement - ME. Based on the previous studies, the final step consists of locally calibrating the performance models to reflect Michigan design practices. 3.3 D ATA COLLECTION EFFOR TS The local calibration of the Pavement - ME is affected greatly by the accuracy of the measured performance and input data. This section discusses the pavement section selection criteria and the available data to characterize pavement sections in Michigan. 3.3.1 Project Selection Criteria In order to locally calibrate the performa nce prediction models, in - service pavement sections are selected which represent Michigan pavement design, construction practices, and performance. These pavement sections should represent all current pavement types and rehabilitation types which are const ructed by MDOT. The process for identifying and selecting pavement sections consists of the following steps: 1. Determine the minimum number of pavement sections based on the statistical requirements 2. Evaluate the available distress data collected in Michigan using the Pavement Management System (PMS) 3. Identify all available in - service pavement projects constructed after 1992 (Measured performance data was consistent after 1992) 4. Extract all pavement distresses from the customized database for all identified pro jects 5. Evaluate the measured performance for all the identified projects 6. Establish a refined list of the potential projects which exhibit multiple distresses with adequate magnitude 51 3.3.1.1. 
Identify the Minimum Number of Required Pavement Sections
The first step in project identification and selection consists of determining the adequate number of pavement sections for local calibration based on statistical needs. The NCHRP 1-40B ( 2 ) suggests a method to determine the minimum number of sections for each condition measure. The minimum number of sections was calculated using Equation ( 3 - 1 ), and the results are summarized in Table 3-1 for each condition measure.

n = (Z_alpha/2 x sigma_e / e_t)^2 ( 3 - 1 )

where: Z_alpha/2 = the z-value from a standard normal distribution; n = minimum number of pavement sections; e_t = tolerable bias, established from the performance threshold for each distress; sigma_e = standard error of the estimate.

Table 3-1 Minimum number of sections for local calibration
Flexible pavements (Z90 = 1.64; 108 reconstruct projects (2) and 33 rehabilitation projects (3) available):
Alligator cracking (%): nationally calibrated SEE = 5.01; threshold = 20; N = 16; sections used = 121
Longitudinal cracking (ft/mile): SEE = 600; threshold = 2000; N = 12; sections used = 165
Thermal cracking (1): SEE, threshold, and N not reported; sections used = 169
Rutting (in): SEE = 0.107; threshold = 0.5; N = 22; sections used = 162
IRI (in/mile): SEE = 18.9; threshold = 172; N = 83; sections used = 167
Rigid pavements (Z90 = 1.64; 20 reconstruct projects (2) and 8 rehabilitation projects (3) available):
Transverse cracking (%): SEE = 4.52; threshold = 15; N = 11; sections used = 31
Joint faulting (in): SEE = 0.033; threshold = 0.25; N = 57; sections used = 49
IRI (in/mile): SEE = 17.1; threshold = 172; N = 101; sections used = 44
N = minimum number of samples required for a 90% confidence level
1. No SEE, threshold, or N was reported for thermal cracking in the literature
2. A total of 108 and 20 projects were identified for flexible and rigid pavements, respectively, based on the construction date and PMS data availability
3. Rehabilitation projects selected from the sensitivity study ( 23 )

3.3.1.2. MDOT Pavement Management System Condition Data
The second step in the project selection consisted of evaluating the readiness of the MDOT Pavement Management System (PMS) database to provide the necessary data for local calibration and validation of the Pavement-ME performance models. The PMS and other data sources (construction records, previous reports, the traffic monitoring information system, etc.) were evaluated to extract the following input data:
a. All performance measures predicted by the Pavement-ME for both flexible and rigid pavements were evaluated for consistency of units. The units of the measured distress data from the MDOT PMS were converted to those predicted by the Pavement-ME;
b. The construction records and other sources were used to assess the pavement cross-sections; MDOT provided the information for the identified sections;
c. Traffic data as required by the Pavement-ME were obtained from MDOT. In situations where Level 1 data were unavailable, Level 2 and 3 data were estimated based on the previous studies conducted in the state of Michigan; and
d. Material types used in different pavement layers were documented for the most common construction materials in Michigan.
Selected Distresses
The necessary distress information was identified and extracted from the MDOT PMS and sensor databases. The current MDOT PMS Distress Manual was used to determine all the Principal Distresses (PDs) corresponding to the predicted distresses in the Pavement-ME. It should be noted that all PDs were included since MDOT began collecting the data in 1992; the earlier versions of the PMS manual were consulted to ensure that the correct data were extracted for all years. The necessary steps for PMS data extraction included:
1. Identified the PDs that correspond to the Pavement-ME predicted distresses,
2. Converted (if necessary) MDOT PDs to the units compatible with the Pavement-ME,
3.
Extracted the PDs and sensor data for each project, and
4. Summarized the time-series data for each project.
The identified and extracted pavement distresses and conditions for flexible and rigid pavements are summarized in Tables 3-2 and 3-3. The PD numbers correspond to a particular pavement distress type. The validity of these numbers was confirmed after consulting with MDOT. A detailed discussion of the conversion process is presented for both flexible and rigid pavements.

Table 3-2 Flexible pavement distresses (MDOT principal distresses; MDOT units; Pavement-ME units; conversion needed?)
IRI: directly measured; in/mile; in/mile; No
Top-down cracking: PDs 204, 205, 724, 725; miles; ft/mile; Yes
Bottom-up cracking: PDs 234, 235, 220, 221, 730, 731; miles; % area; Yes
Thermal cracking: PDs 101, 103, 104, 114, 701, 703, 704, 110; No. of occurrences; ft/mile; Yes
Rutting: directly measured; in; in; No
Reflective cracking: no specific PD; none; % area; N/A

Table 3-3 Rigid pavement distresses (MDOT principal distresses; MDOT units; Pavement-ME units; conversion needed?)
IRI: directly measured; in/mile; in/mile; No
Faulting: directly measured; in; in; Yes
Transverse cracking: PDs 112, 113; No. of occurrences; % slabs cracked; Yes

Pavement distress unit conversion for HMA pavements
It should be noted that only the distress types predicted by the Pavement-ME were considered for the local calibration. The corresponding MDOT PDs were determined and compared with the distress types predicted by the Pavement-ME to verify whether any conversions were necessary. The MDOT measured pavement distresses related to HMA pavements are listed in Table 3-2. The conversion process (if necessary) for each distress type is as follows:
IRI: The IRI measurements in the MDOT sensor database are compatible with those in the Pavement-ME. Therefore, no conversion or adjustments were needed and the data were used directly.
Top-down cracking: Top-down cracking is defined as load-related longitudinal cracking in the wheel-path. The PDs 204, 205, 724, and 725 were assumed to correspond to top-down cracking in the MDOT PMS database because those cracks may not have developed the interconnected pattern which indicates alligator cracking. Those cracks may show an early stage of fatigue cracking, which could also be bottom-up. The PDs are recorded in miles and need conversion to feet/mile. Data from the wheel-paths were summed into one value and divided by the total project length.
Bottom-up cracking: Bottom-up cracking is defined as alligator cracking in the wheel-path. The PDs 234, 235, 220, 221, 730, and 731 match this requirement in the MDOT PMS database. The PDs have units of miles; however, to make those compatible with the Pavement-ME alligator cracking units, conversion to percent of total lane area is needed. This can be achieved by using Equation ( 3 - 2 ):

Alligator cracking (% area) = [sum of wheel-path crack lengths (mi) x wheel-path width (ft)] / [project length (mi) x lane width (ft)] x 100 ( 3 - 2 )

The width of each wheel path and the lane width were assumed to be 3 feet and 12 feet, respectively. The typical wheelpath width of 3 feet is recommended by the LTPP distress identification manual ( 30 ). However, in future local calibrations the MDOT associated distress data can be considered to verify the width associated with alligator cracking in the wheelpath. It should be noted that the bottom-up and top-down fatigue cracking in the wheel paths (6 feet) are combined for new HMA reconstruct projects due to the difficulty of determining the source (top or bottom) of the fatigue cracks observed at the surface.
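The conversion in Equation (3-2) can be illustrated with a short script. The sketch below assumes the 3-foot wheel path and 12-foot lane width noted above; the project values in the example are hypothetical.

```python
def alligator_percent_area(wheelpath_crack_miles, project_length_miles,
                           wheelpath_width_ft=3.0, lane_width_ft=12.0):
    """Convert summed wheel-path crack length (miles) to percent of lane area,
    following the conversion in Equation (3-2). Widths default to the assumed
    3-ft wheel path and 12-ft lane."""
    cracked_area = wheelpath_crack_miles * wheelpath_width_ft
    lane_area = project_length_miles * lane_width_ft
    return 100.0 * cracked_area / lane_area

# Hypothetical project: 0.8 mi of wheel-path cracking (both wheel paths combined)
# on a 5.2-mi project.
print(round(alligator_percent_area(0.8, 5.2), 1))  # about 3.8 percent
```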
Thermal cracking: Thermal cracking corresponds to transverse cracking in flexible pavements. The Pavement-ME predicts thermal cracking in feet/mile. The PDs 101, 103, 104, 114, 701, 703, and 704 were utilized to extract transverse cracking in flexible and rubblized pavements. For the composite pavements, PDs 101, 110, 114, and 701 were used. The transverse cracking is recorded as the number of occurrences. In order to convert transverse cracking into feet/mile, the number of occurrences was multiplied by the lane width (12 feet) for PDs 101, 103, and 104. For PDs 114 and 701, the number of occurrences was multiplied by 3 feet because these PDs correspond to shorter cracks that do not extend across the full lane. The resulting crack lengths were summed and divided by the project length to get feet/mile, as shown in Equation ( 3 - 3 ):

Thermal cracking (ft/mile) = [sum(occurrences of PDs 101, 103, 104 x 12 ft) + sum(occurrences of PDs 114, 701 x 3 ft)] / project length (mi) ( 3 - 3 )

Rutting: The measured rutting in the MDOT database is the total amount of surface rutting contributed by all the pavement layers. The average rutting (left and right wheel-paths) was determined for the entire project length. No conversion was necessary. It is assumed that the measured rutting corresponds to total rutting, and it was compared to the total rutting predicted by the Pavement-ME.
Reflective cracking: MDOT does not have any specific PDs for reflective cracking. It is difficult to determine the difference between a thermal and a reflective crack at the surface. Therefore, the total transverse cracking observed can be compared to the total combined thermal and reflective cracking. Reflective cracking was not included in the verification for this reason and due to the limitations of the prediction model in the Pavement-ME.
Pavement distress unit conversion for JPCP designs
As mentioned before, only the distresses that are predicted by the Pavement-ME were considered for the verification. The corresponding MDOT PDs were determined and the necessary conversions were applied if needed. Table 3-3 summarizes the distresses related to JPCP designs and overlays, and the conversion process is discussed below:
IRI: The IRI in the MDOT sensor database does not need any conversion; the values were used directly.
Faulting: Faulting is predicted as average joint faulting by the Pavement-ME. The faulting values reported in the MDOT sensor database correspond to the average height of all faults at a discontinuity observed for an entire 0.1-mile (528 feet) section. However, the Pavement-ME faulting prediction does not distinguish between faulting at cracks or joints and only predicts faulting at the joints. Therefore, only the measured average joint faulting should be compared with the faulting predicted by the Pavement-ME. Because the MDOT sensor database does not distinguish between faults at cracks and joints, the average joint faulting needs to be calculated. The average joint faulting is calculated using Equation ( 3 - 4 ):

Average joint faulting (in) = (FAULnum x FAUL_i) / (528 / Joint Spacing) ( 3 - 4 )

where: FAULnum = Number of faults in a 0.1 mile; FAUL_i = Average faulting in a 0.1 mile (inches); Joint Spacing = Joint spacing for the project (feet).
It should be noted that the number of faults measured may exceed the number of joints in a 0.1-mile section. In this case, the faulting at cracks is included in the average faulting value. As mentioned before, such crack faulting is not predicted by the Pavement-ME. Therefore, if the number of faults is greater than the number of joints, the 0.1-mile pavement section should not be included in the calibration dataset.
Transverse cracking: The transverse cracking distress is predicted as % slabs cracked in the Pavement-ME. However, MDOT measures transverse cracking as the number of transverse cracks.
The PDs 112 and 113 correspond to transverse cracking. The measured transverse cracking needs conversion to percent slabs cracked by using Equation ( 3 - 5 ):

Percent slabs cracked = [number of transverse cracks / (project length / joint spacing)] x 100 ( 3 - 5 )

where the project length divided by the joint spacing gives the total number of slabs in the project.
3.3.1.3. Available In-Service Pavement Projects
The most common pavement and rehabilitation types that are constructed throughout Michigan were determined to ensure that a broad range of projects/sections was selected. The reconstruct and rehabilitation pavement types considered include:
1. HMA reconstruct
2. HMA crush & shape
3. HMA over HMA
4. HMA over rubblized PCC
5. HMA over PCC (composite)
6. JPCP reconstruct, and
7. Unbonded concrete overlay
Rehabilitation projects were identified by MDOT (unbonded, rubblized, and composite overlays) for local calibration. Newly constructed pavement sections were selected for flexible reconstruct, crush and shape, and HMA over HMA, and for rigid JPCP reconstruct pavements. The pavement projects to be included in the study were selected based on the following criteria:
Site factors: The site factors addressed the various regions in the state, climatic zones, and subgrade soil types.
Traffic: Three traffic categories were selected: less than 1000 AADTT, 1000 to 3000 AADTT, and greater than 3000 AADTT. The three levels were selected based on pavement class, trunk-line routes, US routes, and Interstate routes.
Thicknesses: The range of constructed HMA, PCC, and overlay thicknesses.
Open to traffic date: This information is needed to determine the performance period.
As-built cross-section: Includes details of the existing structure and the overlay other than layer thickness (e.g., joint spacing, lane width, and number of lanes).
Pre-overlay repairs performed on the existing pavement (such as partial and/or full depth repairs and dowel bar retrofit).
Material properties of both the existing and the new structure.
For each identified pavement project, the following details were extracted from an Excel file provided by MDOT:
1. MDOT region
2. Control section number
3. Job number
4. Beginning mile point (BMP)
5. Ending mile point (EMP)
6. Year opened
7. Two-way AADTT
8. Number of Distress Index (DI) points
9. Current pavement age
Initially, a total of 223 reconstruct pavement projects (job numbers) were identified based on the pavement age (constructed after 1992). Later, the measured condition data were extracted from the PMS and sensor databases for all 223 pavement projects. The projects that had very low levels of measured distress or no available data were excluded from the final list. Thus, there were 108 flexible (22 flexible freeway, 63 non-freeway, and 23 crush & shape) and 20 JPCP pavement projects available for the local calibration. Figures 3-1 through 3-4 show the geographical distribution of the initial and revised numbers of identified projects based on a measured condition evaluation for new reconstruct and crush and shape projects. The extracted condition data were analyzed to evaluate the following trends (an illustrative screen of this type is sketched below):
Increasing trend (i.e., positive progression of distress over time)
Decreasing trend (i.e., negative progression of distresses over time, which may happen because of maintenance history or measurement errors)
Flat line (i.e., no progression over time)
Not enough data (i.e., inadequate measured condition over time)
No trend (i.e., high variability among different measurement cycles)
If a project showed an increasing trend for any of the condition measures, it was included in the final list.
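No specific numerical rule is given for these trend categories, so the following is only an illustrative screen based on a least-squares slope and the scatter about it; the thresholds and the example series are assumptions, not MDOT criteria.

```python
import numpy as np

def classify_trend(ages, values, min_points=3, slope_tol=0.05):
    """Illustrative screen for a measured-distress time series.

    Returns one of: 'not enough data', 'no trend', 'increasing',
    'decreasing', or 'flat line'. The thresholds are assumptions."""
    ages = np.asarray(ages, dtype=float)
    values = np.asarray(values, dtype=float)
    if len(values) < min_points:
        return "not enough data"

    slope, intercept = np.polyfit(ages, values, 1)
    fitted = slope * ages + intercept
    spread = values.max() - values.min()
    scatter = np.std(values - fitted)

    # High scatter relative to the overall change suggests no usable trend.
    if spread > 0 and scatter > 0.5 * spread:
        return "no trend"
    if slope > slope_tol:
        return "increasing"
    if slope < -slope_tol:
        return "decreasing"
    return "flat line"

# Example: alligator cracking (% area) measured over time for one section.
print(classify_trend([4, 6, 8, 10, 12], [0.5, 1.2, 2.8, 4.1, 6.3]))  # increasing
```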
Projects that had insufficient condition data, showed no time series trend, or showed a consistent flat line (no growth over time) with minimal magnitude we re removed. The total number of increasing trends were determined from the final project list and compared to the minimum number of sections needed for each performance model . (a) Initial projects (b) Revised projects Figure 3 - 1 Geographical location of identified JPCP reconstruct projects 61 (a) Initial projects (b) Revised projects Figure 3 - 2 Geographical location of identified freeway HMA reconstruct projects (a) Initia l projects (b) Revised projects Figure 3 - 3 Geographical location of identified non - freeway HMA reconstruct projects 62 Figure 3 - 4 Geographical location of identif ied crush and shape projects 3.3.1.4. Summary of Selected projects The final revised list of identified projects was sent to MDOT to acquire the necessary construction, design, and material related inputs. The requested information was obtained from the available c onstruction records. Tables 3 - 4 and 3 - 5 summarize the selected reconstruct and rehabilitation projects. Additionally, Tables 3 - 6 and 3 - 7 sum marize the selected projects based on the design matrix for reconstruct and rehabilitation projects. The crush & sha pe projects were analyzed as a new pavement since a specific rehabilitation option is not available in the Pavement - ME software. Therefore, those projects were included in Table 3 - 6 (rehabilitation) instead of Table 3 - 7 (reconstruct). 63 Table 3 - 4 Number of reconstruct projects for each pavement type Pavement type MDOT region Number of projects Crush and Shape Grand 2 North 9 Superior 12 HMA Reconstruct Freeway Bay 2 Grand 5 Metro 7 North 3 Sout hwest 1 Superior 1 University 3 HMA Reconstruct Non - Freeway Bay 7 Grand 8 Metro 5 North 12 Southwest 6 Superior 17 University 8 JPCP Reconstruct Grand 2 Metro 11 Southwest 4 University 3 Total 128 Table 3 - 5 Number of rehabilitation projects by MDOT region Pavement type MDOT region Number of projects Composite overlay Bay 3 Grand 1 Metro 1 North 1 Southwest 1 HMA over HMA overlay Bay 1 North 5 Southwest 7 Superior 1 U niversity 1 Rubblized overlay Grand 3 North 4 Southwest 1 University 3 Unbonded overlay Grand 1 North 1 Southwest 4 University 2 Total 41 64 Table 3 - 6 Selection matrix displaying selected proje cts (rehabilitation sections) Rehabilitation type Traffic level* Overlay thickness level* Age (years) Total <10 10 to 20 >20 Composite overlay 1 2 1 1 7 2 2 2 2 3 2 1 HMA over HMA 1 1 7 15 2 1 5 2 2 1 1 Rubblized overl ay 1 2 4 2 11 3 2 2 2 1 3 2 1 3 1 Unbonded overlay 2 2 1 8 3 1 3 3 1 5 Total 3 22 16 41 *Levels 1 2 3 Traffic (AADTT) <1000 1000 - 3000 >3000 Overlay thickness (in) <3 3 - 6 >6 65 Table 3 - 7 Selection matrix displaying selected projects (reconstruct sections) Road type Traffic level* Thickness level* Age Level Total <10 10 - 15 >15 Crush and Shape 1 1 4 4 2 5 8 13 3 2 1 2 1 5 6 3 3 1 2 3 HMA Reconstruct Freeway 1 1 2 1 3 4 3 4 1 5 2 1 2 1 1 3 4 2 6 3 1 2 1 1 3 1 3 1 5 HMA Reconstruct Non - Freeway 1 1 1 1 2 10 25 12 47 3 8 1 9 2 1 2 1 2 1 4 3 1 1 2 3 1 2 3 JPCP Reconstruct 1 1 2 3 1 1 2 1 2 3 1 1 3 1 2 3 3 9 6 18 Total 17 68 43 1 28 *Levels 1 2 3 Traffic (AADTT) <1000 1000 - 3000 >3000 Thickness (in) <3 3 - 7 >7 3.3.1.5. Extent of Measured Pavement Performance A thorough investigation was performed to determine the extent of distress among all pavement sections identified for the local calibration. 
The calibration process involves comparisons of predicted and measured performance for each selected projects. For a robust local calibration, the distress magnitudes should cover a reasonable range (i.e., above and below 66 threshold limit s for each distress type). Therefore, the distress magnitudes for all projects were summarized to determine their ranges. Forty one rehabilitation projects were identified and selected as a part of sensitivity analysis study of the rehabilitation models ( 23 ) . However, there was a need to include new pavements (i.e., projects which were not rehabilitated in the past) in the calibration database. Therefore, an additional 128 projects (HMA and JPCP) were selected based on the criteria discussed previously. To furt her expand the calibration database, LTPP pavement sections located in Michigan and the surrounding States were considered, especially for rigid pavements. The summary of measured performance is presented for the selected rehabilitation, reconstruction , and LTPP pavement sections. For each figure presented below, the red line represents the distress threshold determined for Michigan pavements. The following efforts ensure adequacy of the information needed for a robust and accurate local calibration of the performance models: a. A total of 4 1 rehabilitation projects were considered in the local calibration. Thirty t hree projects have an HMA surface layer while 8 projects are JPCP unbonded overlays. These 3 3 HMA and 8 PCC unbonded overlay projects were analy zed to test the calibration procedures for different distresses in flexible and rigid pavements, respectively. o HMA o verlay p erformance data : The magnitude and age distribution for the HMA rehabilitation projects are shown in Figures 3 - 5 to 3 - 8 . The follow ing observations can be made from results in the figures: Longitudinal /top - down fatigue cracking : T here is a good representation of distress above and below the threshold value of 2000 ft/mile. The age at maximum distress range d from 8 to 20 years. It shou ld be noted that the magnitude of bottom - up 67 cracking predictions for rehabilitation design are always low and therefore, only longitudinal cracking is considered for such designs. Rutting : Most of the project s did not exhibit significant rutting. Only one project reached the threshold value of 0.5 inches. The age distribution ranged from 6 to 20 years. Transverse (thermal) cracking : The thermal cracking for the rehabilitation projects ranged from 250 to over 3000 feet/mile. The age at whic h the maximum ther mal cracking occurred ranged from 4 to 20+ years. It should be noted that some of the thermal cracks could actually be reflective cracks. MDOT PMS does not distinguish between a reflective or transverse (thermal) crack. IRI : Most of the projects had IRI va lues less than the threshold. Three projects exceeded the threshold value of 172 inch/mile. The age at maximum IRI ranged from 4 to 20 years o Unbonded o verlay p erformance data : The magnitude and age distribution for the JPCP rehabilitation projects are show n in Figures 3 - 9 and 3 - 11 . The following observations can be made from the figures: Transverse c racking : None of the projects exceed the distress threshold of 15% slabs cracked. The age distribution ranges from 8 to 12 years. Transverse joint faulting : No ne of the projects exceed the faulting threshold of 0.25 inch . The age distribution ranges from 4 to 12 years. 
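The distress magnitude and age-at-maximum summaries shown in the figures that follow can be tabulated directly from the time-series condition data. The sketch below illustrates this with hypothetical records; the column names are placeholders rather than actual PMS field names.

```python
import pandas as pd

# Hypothetical time series of one measured distress for a few projects.
ts = pd.DataFrame({
    "project_id": ["P1", "P1", "P1", "P2", "P2", "P2"],
    "age_years":  [4, 8, 12, 5, 10, 15],
    "distress":   [120.0, 480.0, 950.0, 60.0, 340.0, 2100.0],  # e.g., ft/mile
})

# Row of maximum measured distress for each project, and the age at which it occurs.
idx = ts.groupby("project_id")["distress"].idxmax()
summary = ts.loc[idx, ["project_id", "age_years", "distress"]]
summary = summary.rename(columns={"age_years": "age_at_max",
                                  "distress": "max_distress"})
print(summary)

# These per-project maxima and ages are what the magnitude and
# age-distribution plots summarize.
```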
IRI : None of the unbonded overlay projects exceeded the IRI threshold of 172 in/mile. The age at maximum IRI ranges from 8 to 12 years. 68 (a) Di stress magnitude (b) Age distribution Figure 3 - 5 Selected HMA rehabilitation sections longitudinal cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 6 Selected HMA rehabilitation sections rutting data (a) Distress magnitude (b) Age distribution Figure 3 - 7 Selected HMA rehabilitation sections Transverse (thermal) cracking data (a ) Distress magnitude (b) Age distribution Figure 3 - 8 Selected HMA rehabilitation sections IRI data 69 (a) Distress magnitude (b) Age distribution Figure 3 - 9 Selected JPCP rehabilitation sections cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 10 Selected JPCP rehabilitation sections joint faulting data (a) Distress magnitude (b) Age distribution Figure 3 - 11 Selected JPCP rehabilitation sections IRI data b. Newly reconstructed projects were selected and consist of 1 08 HMA and 20 JPCP reconstruct pavement projects in the state of Mich igan. o Reconstruct p erformance data : The magnitude and age distribution for the HMA freeway, non - freeway , crush and shape and JPCP projects are shown in Figures 3 - 1 2 to 3 - 29 . T h e following observations were made: 70 HMA freeway alligator cracking : The distres s magnitude ranged from 5 to 40 percent for the selected projects. Only four projects exceeded the distress threshold of 20%. The age at maximum distress ranged from 8 to 16 years. HMA freeway longitudinal cracking : The distress magnitude ranged from 250 to over 3000 feet/mile for the selected projects. Only six projects exceeded the distress threshold of 2000 feet/mile . The age at maximum distress ranged from 6 to 1 8 years. HMA freeway rutting : The rut depth ranged from 0.1 to 0.5+ inches. The rutting th reshold of 0.5 was exceeded by one project. The age at maximum rutting ranged from 8 to 18 years. HMA freeway thermal cracking : The thermal cracking for the selected pavement sections ranged from 50 to 1200 ft/mile. The distress threshold of 1000 ft/mile w as exceeded by 9 pavement sections. The age at which the maximum distress occurred ranged from 4 to 18 years. HMA freeway IRI : The observed IRI for HMA freeway projects ranges between 50 and 190+ in/mile. However, only one project exceeded the threshold va lue of 172 in/mile. The age at maximum IRI ranges from 4 to 18 years. HMA non - freeway alligator cracking : The distress magnitude ranged from 5 to 50 percent for the selected projects. Ten projects exceeded the distress threshold of 20%. The age at maxim um distress ranged from 4 to 18 years. HMA non - freeway longitudinal cracking : The distress magnitude ranged from 2 5 0 to over 3000 feet/mile for the selected projects. Seven projects exceeded the distress threshold of 2000 feet/mile . The age at maximum dist ress ranged from 2 to 16 years. 7 1 HMA non - freeway rutting : The rutting distress ranged from 0.15 to 0.5 inches. The rutting threshold of 0.5 was not exceeded by any of the projects. The age at maximum rutting ranged from 4 to 18 years. HMA non - freeway therm al cracking : The measured thermal cracking range d from 50 to over 1200 feet/mile. The design threshold was exceeded by 18 pavement sections. The age at the maximum distress ranged from 4 to 18 years. HMA non - freeway IRI : The observed IRI for HMA non - freewa y projects range d between 50 and 280 in/mile. Six projects exceeded the threshold value of 172 in/mile. 
The age at maximum IRI range d from 4 to 1 6 years. HMA crush & shape alligator cracking : The measured alligator cracking ranged from 5 to 40 percent for the selected projects. The age at which the maximum distress occurred ranged between 4 and 18 years. HMA crush and shape longitudinal cracking : The distress magnitude ranged from 2 5 0 to 2000 feet/mile for the selected projects. None of the projects exceed ed the distress threshold of 2000 feet/mile . The age at maximum distress ranged from 8 to 1 8 years. HMA crush & shape rutting : The rutting distress ranged from 0.2 to 0.5 inches. None of the sections exceeded the rutting threshold of 0.5 inch. The age at m aximum rutting varied between 4 and 14 years. HMA crush & shape thermal cracking : The thermal cracking ranged between 50 and over 1200 feet/mile. Two pavement sections exceeded the distress threshold of 1000 feet/mile. The age at maximum distress ranged b etween 8 and 16 years. 72 HMA crush & shape IRI : The measured IRI ranged between 70 and 130 in/mile. None of the sections exceeded the design threshold of 172 in/mile. The age at which the maximum IRI occurred ranged between 4 and 18 years. JPCP transverse cr acking : The transverse cracking for all projects ranged from 5 80% slabs cracked. Nine projects exceeded the distress threshold of 15% slabs cracked. The age at maximum transverse cracking ranged from 4 - 16 years. Transverse joint faulting : None of the pr ojects exceed the faulting threshold of 0.25 inch . The age distribution ranges from 2 to 16 years. JPCP IRI : The measured IRI ranged between 70 and 170 in/mile for all projects. None of the projects exceeded the threshold value of 172 in/mile. The age at maximum IRI range d between 4 and 16 years. (a) Distress magnitude (b) Age distribution Figure 3 - 12 Selected HMA freeway sections alligator cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 13 Selected HMA freeway sections longitudinal cracking data 73 (a) Distress magnitude (b) Age distribution Figure 3 - 14 Selected HMA freeway secti ons rutting data (a) Distress magnitude (b) Age distribution Figure 3 - 15 Selected HMA freeway sections thermal cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 16 Selected HMA freeway sections IRI data (a) Distress magnitude (b) Age distribution Figure 3 - 17 Selected HMA non - freeway sections alligator cracking data 74 (a) Distr ess magnitude (b) Age distribution Figure 3 - 18 Selected HMA non - freeway sections longitudinal cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 19 Selected HMA non - freeway sections rutting data (a) Distress magnitude (b) Age distribution Figure 3 - 20 Selected HMA non - freeway sections thermal cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 21 Selected HMA non - freeway sections IRI data 75 (a) Distress magnitude (b) Age distribution Figure 3 - 22 Selected HMA crush and shape sections alligator cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 23 Selected HMA crush and shape sections longitudinal cracking data (a) Distress magnitude (b) Age distribution Figure 3 - 24 Selected HMA crush and shape sections rutting data (a) Distress magnitude (b) Age distribution Figure 3 - 25 Selected HMA crush a nd shape sections thermal cracking data 76 (a) Distress magnitude (b) Age distribution Figure 3 - 26 Selected HMA crush and shape sections IRI data (a) Distress magnitude (b) Age distribution Figure 3 - 27 Selected JPCP sections transverse cracking data (a) 
Distress magnitude (b) Age distribution Figure 3 - 28 Selected JPCP sections joint faulting data (a) Distress magnitude (b) Age distribution Figure 3 - 29 Selected JPCP sections IRI data 77 c. The LTPP pavement sections in Michigan and adjacent States were considered as part of the database to be used fo r the local calibration. o Flexible pavement performance data : The magnitude of distresses and age distribution for the Michigan LTPP sections are shown in Figures 3 - 30 to 3 - 32 . The performance data show that there are sufficient number of pavement sections exceeding the threshold values for cracking, rutting and IRI over time. (a) Distress magnitude (b) Age distribution Figure 3 - 30 Selected Michigan LTPP sections alligator cracking data (a) Distress m agnitude (b) Age distribution Figure 3 - 31 Selected Michigan LTPP sections rutting data 78 (a) Distress magnitude (b) Age distribution Figure 3 - 32 Select ed Michigan LTPP sections IRI data o Rigid pavement p erformance data : The magnitude of distresses and age distribution for the rigid LTPP sections are shown in Figures 3 - 33 to 3 - 3 5 . The following observations were made: T ransverse cracking : The transverse cracking for the SPS - 2 projects ranged from 10 to over 40 % slabs cracked. Three sections exceeded the distress threshold of 15% slabs cracked. The age at maximum transverse cracking ranged from 8 to 1 8 years. Faulting : None of the projects exceed the faul ting threshold of 0.25 inch . The age at which maximum faulting occurred range d from 6 to 8 years. IRI : The maximum measured IRI ranged from 130 to greater than 180 in/mile for the selected SPS - 2 projects. One section exceeded the threshold value of 172 in/ mile. The age at maximum IRI range d from 8 to 1 4 years. (a) Distress magnitude (b) Age distribution Figure 3 - 33 Selected LTPP SPS - 2 sections transverse cracking data 79 (a) Distress magnitude (b) Age distribution Figure 3 - 34 Selected LTPP SPS - 2 sections transverse joint faulting data (a) Distress magnitude (b) Age distribution Figure 3 - 35 Selected L TPP SPS - 2 sections IRI data 3.3.1.6. Refining Selected Pavement Sections based on Measured Performance The measured performance data were evaluated for 108 new HMA, and 20 new JPCP projects. Normal pavement performance is based on FHWA criteria ( 31 ; 32 ) modified to reflect Michigan distress threshol ds which indicate the expected good and poor pavement performance trends for various distress types. The measured fatigue or alligator cracking, rutting and IRI for HMA and measured transverse cracking and IRI for rigid pavements were compared with the per formance criteria. The various upper and lower limits of expected performance for each distress type are shown in Figures 3 - 36 and 3 - 37 for both pavement types, respectively. 80 (a) Alligator cracking (b) Rutting (c) IRI Figure 3 - 36 Flexible pavement performance criteria (a) Transverse cracking (b) IRI Figure 3 - 37 Rigid pavement performance criteria The time - series measured distresses for each project w ere compared to the modified FHWA criteria to identify any projects exceeding the FHWA pavement performance behavior. It is important to determine if a project exceeds the criteria for poor performance at an early age. If a pavement section exhibits an abn ormal performance (i.e., premature cracking), the Pavement - ME can only account for such behavior through adjusting the critical inputs (e.g. material properties or traffic). 
However, a pavement section may be considered as normally performing if the distre ss exceeds the criteria limit beyond 10 years (which is about half of the design life). It is 81 expected that a pavement may exceed the distress criteria at a later stage in the pavement life. The investigation was performed for the following distress types: HMA pavements: 1. Alligator cracking 2. Rutting 3. IRI JPCP pavements: 1. Transverse cracking 2. IRI It should be noted that the performance criteria were not considered for thermal and longitudinal cracking. All the pavement sections were considered for the local calib ration of the thermal transverse cracking model to account for the variability in the measured transverse cracking. Since the measured longitudinal cracking was combined with alligator cracking, no separate performance criterion was considered for longitud inal cracking. In addition, the performance criteria for the measured t ransverse joint faulting was not considered for rigid pavements (JPCP) because of low faulting magnitudes in the selected pavements sections. Several projects exceeded the expected norm al performance trends (indicating poor performance) based on the performance criteria. Figure 3 - 38 shows the measured pavement performance for all HMA projects. Several projects exceeded the alligator cracking threshold limits at an early age (i.e. above t he red dashed line). However, the rutting and IRI performance followed the normal trends (i.e. between the green and red lines) and fewer projects exceeded the threshold limits. The causes for early age cracking, rutting, and IRI are important in determini ng whether the projects should be included in the calibration or not. Such decisions are made based 82 on if there were any construction, or material related issues encountered at the time of construction. The pavement projects performing within the bands of the performance criteria are shown in Figure 3 - 39 . Figure 3 - 40 illustrates the measured pavement performance for the 20 JPCP projects. Several of the JPCP projects exceed the expected transverse cracking performance threshold. Based on the figure, one proj ect shows measured transverse cracking above 80 percent slabs cracked at less than 10 years of age showing a premature failure. It is reasonable to assume that including such projects that exceed the distress threshold much earlier in their lives will give unreasonable calibration coefficients. On the other hand, the majority of the rigid pavement projects experienced expected IRI behavior. The rigid pavement projects performing within the bands of the performance criteria are shown in Figure 3 - 41 . Table 3 - 8 shows the summary of the pavement performance evaluation for all the selected sections for both pavement types. About 121 flexible pavement sections (some projects have two separate directions) exhibited adequate performance behavior for fatigue cracking while 162 and 167 flexible pavement sections showed adequate performance for rutting and IRI, respectively. It should be noted that a project may perform poorly in cracking while rutting and IRI performance are still in the normal range . About 31 out of 3 3 rigid pavement sections (some projects have two separate directions) have shown some level of cracking while 49 and 44 sections exhibited joint faulting and IRI, respectively. 
Any project that exceeds the threshold performance limit is a cause for concer n during the local calibration process because of the significant difference expected between predicted and measured performance. The list of projects which exceed the expected pavement performance were identified and sent to MDOT for further review. After reviewing the identified sections, it 83 was concluded that not enough information was available to determine why these sections were performing poorly. These sections were not excluded from the local calibration at this time. (a) Alligator cracking (b) Rutting (c) IRI Figure 3 - 38 Performance for all HMA projects Poor Normal Poor Normal Good Poor Normal Good 84 (a) Alligator cracking (b) Rutting (c) IRI Figure 3 - 39 Normal pavement performan ce for HMA projects 85 (a) Transverse cracking (b) IRI Figure 3 - 40 Performance for all JPCP projects (a) Transverse cracking (b) IRI Figure 3 - 41 Normal pavement performance for JPCP projects 86 Table 3 - 8 Projects with acceptable performance Performance measure Acceptable pavement sections Total number of available sections Flexible pavements Alligator crackin g 121 129 Longitudinal cracking 128 (37) 129 (40) Rutting 129 (33) 129 (40) IRI 127 (40) 129 (40) Rigid pavements Transverse cracking 18 (13) 18 (13) Joint faulting 33 (16) 33 (16) IRI 29 (15) 29 (15) Note: The values in parenthesis represent numbe r of rehabilitated pavement sections 3.3.2 Input Data Collection The cross - sectional, traffic and material input data are needed to characterize the as - constructed pavements in the Pavement - ME software. The accuracy of the data directly impacts the performance prediction. The Pavement - ME uses a very large number of inputs to characterize a pavement. Furthermore, the hierarchical structure of the Pavement - ME provides three levels of inputs for many of the important input parameters. The input data collection effo rts used to characterize a pavement in the Pavement - ME can be very time consuming. In order to reduce input data collection time, the most sensitive inputs which affect the pavement performance predictions were given priority. The sensitive inputs which af fect the performance prediction for new and rehabilitation design of rigid and flexible pavements were determined by several research projects ( 21 ; 23 ; 33 ) as outlined in Chapter 2. Based on the results of these studies, focusing on the sensitive inputs significantly reduced the amount of time collecting input data . Additionally, the best available input level was used for the selected pavement sections. The general process for collecting the as - constructed input data, the details regarding the source of 87 the data, issues and observations related to the data, and the final selection of the input values are discussed in this section. 3.3.2.1. Pavement Cross - Section and Design Feature Inputs The pavement cross - sectional information is necessary to characterize the layer thicknesses of the various layers. The cross - sectional info rmation was obtained from the as - constructed or as - designed drawings. These drawings were provided by MDOT for each selected project. The thickness and lane dimension information were included in these drawings. For projects where the cross - section was not available, a list of the projects with missing information was sent to MDOT. Typically, the base/subbase thicknesses were not found on the design drawings. The missing information was obtained from MDOT. 
Additionally, in the case for HMA pavements, the dr awings typically provided the asphalt application rate of the HMA layers which was used to determine the HMA lift thicknesses. All of the information obtained from the design drawings was used to populate the inputs necessary to characterize the in - service pavement cross - section. A summary of the design thicknesses for flexible and rigid reconstruct and rehabilitation selected pavement projects are shown in Tables 3 - 9 through 3 - 12. Table 3 - 9 Average HMA reconst ruct thicknesses Types HMA top course thickness (in.) HMA leveling course thickness (in.) HMA base course thickness (in.) Base thickness (in.) Subbase thickness (in.) Crush and Shape 1.5 1.8 1.5 8.0 19.6 Freeway 1.6 2.2 4.7 6.6 17.6 Non - Freeway 1.5 2.1 3.3 6.3 16.4 State - wide Average 1.5 2.1 3.7 6.7 17.2 88 Table 3 - 10 Average HMA rehabilitation project thicknesses Types Overlay thickness (in.) Existing pavement thickness (in.) Base thickness (in.) Subbas e thickness (in.) Composite 3.6 8.4 3.4 10.0 HMA over HMA 2.7 4.5 7.9 16.3 Rubblized 5.6 8.4 3.5 11.9 State - wide Average 3.9 6.7 5.9 12.9 Table 3 - 11 JPCP reconstruct thickness ranges MDOT Region Average P CC thickness (in.) Average base thickness (in.) Average subbase thickness (in.) Grand 11.5 5.0 14.9 Metro 11.2 6.1 11.3 Southwest 11.7 4.0 10.7 University 11.2 6.3 9.5 State - wide Average 11.3 5.6 11.4 Table 3 - 12 Unbonded PCC overlay thickness ranges Pavement type Average PCC thickness (in.) Average existing PCC thickness (in.) Average base thickness (in.) Average subbase thickness (in.) Average asphalt interlayer thickness (in.) Unbonded overlay 6.9 9 .0 3.6 11.1 1.0 3.3.2.2. Traffic Inputs The traffic data are one of the most important inputs used in the Pavement - ME pavement analysis and design procedure . The traffic inputs were obtained from various sources. The sources include: MDOT historical traffic coun ts (AADTT) MDOT traffic characterization study o Monthly distribution factors (MDF) o Hourly distribution factors (HDF) o Truck traffic classifications (TTC) 89 o Axle groups per vehicle (AGPV) o Axle load distributions for different axle configurations (ALS) MDOT M - E traffic subcommittee recommendations In order to collect the most accurate traffic inputs for the selected Michigan pavements, the traffic charact erization study was used to determine the traffic related inputs ( 25 ; 26 ) . The study identified the inputs outlined above. Furthermore, a cluster analysis was performed to group sites with similar characteristics. These clusters provide regional level inputs and are especially useful when Level 1 traffic data are no t available for a particular pavement section. The most important traffic inputs were the TTC, HDF, and tandem ALS based on their impact on the predicted pavement performance. The study recommended statewide average values for all other input variables ( 25 ) . Therefore, t he resu lts from the traffic characterization study in Michigan were utilized to characterize traffic inputs on the pavement sections included in the calibration dataset. The following inputs were collected for each pavement project : Average annual daily truck tra ffic (AADTT) TTC ALS Tandem HDF The following procedure was used to determine the cluster for each individual project: 1. Collect commodity data for each project based on MDOT GIS maps with frei ght data based on the roadway (see F igure 3 - 42 ) . 2. 
Identify vehic le class 5 and 9 counts for each project from the MDOT Traffic Monitoring Information System (TMIS) from the following URL: http://mdotnetpublic.state.mi.us/tmispublic/ ) , see Figures 3 - 43 to 3 - 44 and Table 3 - 13. 90 Convert the class 5 and 9 counts to percenta ges for use in the discriminant analysis for project allocation to a cluster. 3. Use the MDOT commodity data, VC 5/9, AADTT, MDOT region, and road class information for input into formulas to determine the specific clusters for each project (see Figure 3 - 45 f or the spreadsheet that identifies formula solutions to determine the clusters). Figure 3 - 42 MDOT freight data 91 Figure 3 - 43 Location of classification coun ts Figure 3 - 44 Raw vehicle class counts 92 Table 3 - 13 Conversion from raw vehicle counts to vehicle class percentages Figure 3 - 45 Cluster selection based on steps 1 and 2 ( 25 ) The AADTT values were determined for each project to identify the truck volumes at the time of construction based on historical traffic records. If the historical traffic records were not available, an es timate of the AADTT for each project was based on the as - constructed project design drawings. The TTC, ALS - Tandem, and HDF clusters were used for each pavement section by following the procedure recommended in the traffic characterization study ( 25 ) . The statewide average value s were used for the M DF, AGPV, and other ALS inputs. The range and average two - way AADTT values for all reconstruct and rehabilitation projects are summarized in 93 Table 3 - 14 and 3 - 15, respectively. Table 3 - 14 R anges of AADTT for all reconstruct projects Road Type REGION Min AADTT Max AADTT Average AADTT Crush and Shape Grand 265 1986 1126 North 91 1757 926 Superior 60 312 178 HMA Reconstruct Freeway Bay 313 2034 1174 Grand 819 4315 1656 Metro 354 6745 2434 North 685 5722 2455 Southwest 367 367 367 Superior 350 350 350 University 1220 5011 3721 HMA Reconstruct Non - Freeway Bay 142 523 345 Grand 367 1440 708 Metro 152 1600 843 North 194 880 382 Southwest 442 996 617 Superior 63 1096 37 2 University 137 556 321 JPCP Reconstruct Grand 3195 3499 3347 Metro 500 16605 7883 Southwest 7532 10578 8937 University 5299 7498 6569 Statewide 60 16605 1859 94 Table 3 - 15 Ranges of AADTT for all rehabilitation projects Pavement Type Region Minimum AADTT Maximum AADTT Average AADTT Composite Bay 512 2250 1254 Grand 2882 2882 2882 Metro 1380 1380 1380 North 672 672 672 Southwest 6064 6064 6064 HMA over HMA Bay 200 200 200 North 130 450 2 99 Southwest 185 1564 536 Superior 260 260 260 University 350 350 350 Rubblized Grand 370 575 478 North 279 1550 696 Southwest 856 856 856 University 455 3707 2517 Unbonded Overlay Grand 2744 2744 2744 North 1458 1458 1458 Southwest 3185 5700 4683 University 4279 5004 4642 Statewide average 130 6064 1601 3.3.2.3. As - Constructed Material Inputs The as - constructed materials inputs characterize the material properties for each pavement layer at the time of construction. These inputs range from pr oject specific values, to statewide average values. The details of material properties for each pavement structural layer are discussed in this section. HMA layer inputs An attempt was made to collect the HMA layer information from the construction records ; however, the needed data were not available for all pavement sections. Two different input levels were identified to study the effect on HMA pavement performance. The collection process of Level 1 and Level 3 data are discussed in this section. 
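Returning to the traffic characterization inputs, the cluster-assignment steps listed above (vehicle class 5 and 9 percentages, commodity data, AADTT, region, and road class fed into the spreadsheet formulas of Figure 3-45) can be scripted so that every project is allocated consistently. The sketch below is only illustrative: the discriminant coefficients and cluster labels are placeholders, not the values in the MDOT spreadsheet, and the score uses only the VC 5/9 percentages and AADTT.

# Illustrative cluster allocation following the steps described above.
# The discriminant coefficients and cluster labels are placeholders, not
# the MDOT values from Figure 3-45.

def class_percentages(raw_counts):
    """Convert raw vehicle classification counts (e.g., from TMIS) to percentages."""
    total = sum(raw_counts.values())
    return {vc: 100.0 * n / total for vc, n in raw_counts.items()}

# Hypothetical linear discriminant functions: score = b0 + b1*VC5% + b2*VC9% + b3*AADTT
DISCRIMINANTS = {
    "cluster_1": (-3.0, 0.10, 0.05, 0.0010),
    "cluster_2": (-5.0, 0.02, 0.12, 0.0020),
    "cluster_3": (-4.0, 0.06, 0.08, 0.0015),
}

def assign_cluster(vc5_pct, vc9_pct, aadtt):
    """Return the cluster whose discriminant score is largest for this project."""
    scores = {name: b0 + b1 * vc5_pct + b2 * vc9_pct + b3 * aadtt
              for name, (b0, b1, b2, b3) in DISCRIMINANTS.items()}
    return max(scores, key=scores.get)

counts = {5: 180, 9: 620, "other": 200}   # example raw counts for one classification station
pct = class_percentages(counts)
print(assign_cluster(pct[5], pct[9], aadtt=1500))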
95 Level 1 H MA inputs The level 1 HMA inputs require laboratory testing to determine several HMA mixture and binder properties. These properties include: Dynamic modulus (E*) Binder (G*) Creep compliance and, Indirect tensile strength (IDT) The laboratory testing to determine these material properties were performed during Part 1 of this study and results are documented elsewhere ( 24 ) . The Level 1 HMA inputs were collected for projects which had similar mixture and binder types. Since Part 1 of the study involved testing only SuperPave HMA mixtures, the test results could only be used for past projects constructed using SuperPave mixture design. Furthermore, material characterization data for only the HMA mixtures and binder types which were tes ted can be used (not all HMA mix/binder combinations were tested in Part 1) ( 24 ) . It should be noted that the HMA characterization data and testing results are from recently constructed projects and does not reflect the as constructe d HMA materials for the selected projects. As part of the deliverables from the HMA mixture characterization study ( 24 ) , a software (DYNAMOD) was developed that provides easy extraction of the E* , G* , creep compliance and IDT values for the local materials in a format consistent with the Pavement - ME needs. Table 3 - 16 summarizes the number of projects which had similar HMA mixture and binder properties with tested Level 1 data. 96 Table 3 - 16 Projects with available Level 1 HMA input properties Pavement Type MDOT region Number of projects with Level 1 data Crush and Shape North 1 HMA Reconstruct Freeway Bay 2 Grand 2 Metro 4 North 1 University 1 HMA over HMA Southwest 1 Univers ity 1 HMA Reconstruct Non - Freeway Bay 3 Grand 7 Metro 4 North 5 Southwest 5 Superior 9 University 8 Rubblized Grand 1 North 1 Level 3 HMA layer inputs The HMA structural layers are characterized using the asphalt binder, air void content, asphalt binder content and the aggregate gradation for the asphalt mixture. The HMA inputs were obtained from the following sources: 1. Project design drawings with typical cross - section, HMA binder type, HMA binder mix type, and application rate 2. Historical as - constructed project records which identified the HMA job mix formulas for each HMA layer. It should be noted that the historical records were not available for all projects 97 3. In the absence of historical records, the average aggregate gradation for each HMA mixture type was utilized. If no historical records were available, an average gradation was determined for each HMA mixture type based on the available HMA data from the remainder of the projects in the calibration dataset. The average HMA aggregate g radation was calculated based on the values obtained from the historical records and HMA mixture characterization MDOT study ( 24 ) . The average as - constructed percent air voids and HMA mixture inputs are summarized in Tables 3 - 17 thro ugh 3 - 20. The different mixture types presented in the tables below correspond to the where this project is constructed. Typically, the surface course will con sist of either a 5 or a 4, a leveling course, 3 and 4, and a base course, 2 or 3. The ESAL numbers consists of 1 is 1million ESALs, 3 is 3 million ESALs, 10 is 10 million ESALs and so forth. Some of the older mixture types do not follow the same methodolog y as the MDOT SuperPave mixtures. 
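The mixture naming convention described above can be decoded mechanically when populating the HMA inputs. The small sketch below follows only the rule stated in the text (a course number followed by an ESAL class in millions, e.g., 5E3); it deliberately returns nothing for older designations such as 13A, 4C, or GGSP, which do not follow that convention.

import re

def parse_superpave_mix(code):
    """Split an MDOT Superpave mixture code such as '4E10' into its course
    number and ESAL class (millions of ESALs), per the convention described
    in the text. Returns None for codes that do not follow the convention."""
    m = re.fullmatch(r"(\d)E(\d+)", code.strip().upper())
    if not m:
        return None
    return {"course_number": int(m.group(1)),
            "design_esals_millions": int(m.group(2))}

for code in ["5E3", "4E30", "2E3", "13A", "GGSP"]:
    print(code, parse_superpave_mix(code))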
98 Table 3 - 17 As - constructed percent air voids HMA mixture type Average as - constructed air voids GGSP 8.4 5E3 7.1 5E10 6.5 5E1 6.7 4E30 6.5 4E3 6.7 4E10 6.5 4E1 6.8 4C 5.4 4B 5.9 3E30 5.7 3E3 6.6 3E10 6.4 3E1 6.5 3C 5.5 3B 5.7 2E3 7.4 2C 5.7 13A 5.3 99 Table 3 - 18 HMA top course average aggregate gradation HMA mixture type Effective AC binder content Percent passi ng sieve size 3/4 3/8 #4 #200 1100T 12.0 100.0 88.7 62.5 6.9 13 T 10.2 100.0 70.0 51.2 5.6 13A 11.7 100.0 82.8 66.0 5.4 1500T 10.4 100.0 86.0 53.4 5.2 3B 9.9 100.0 64.6 44.5 4.9 4B 11.1 100.0 89.0 60.1 5.0 4C 11.3 100.0 80.9 57.8 4.8 4C - M 11.2 1 00.0 86.8 51.7 4.6 4E3 11.4 100.0 90.1 67.0 6.1 4E30 12.2 100.0 85.8 52.8 4.3 5E1 11.9 100.0 96.8 76.6 5.5 5E10 12.0 100.0 98.2 75.6 5.4 5E3 11.9 100.0 97.4 74.6 5.3 GGSP 12.8 100.0 73.5 31.9 8.5 4E10 HS 10.0 100.0 85.7 65.7 4.9 5E3 HS 11.6 100.0 9 7.3 75.7 5.4 4E1 HS 10.8 100.0 85.8 71.4 5.4 5E1 HS 12.1 100.0 98.4 81.8 5.9 4E3 HS 10.4 100.0 86.0 65.7 5.4 5E50 11.0 100.0 99.7 77.7 6.2 5E10 HS 11.4 100.0 99.5 76.2 5.4 4E30 HS 10.0 100.0 87.1 52.3 5.5 5E30 HS 11.6 100.0 99.7 76.4 6.1 Table 3 - 19 HMA leveling course average aggregate gradation HMA mixture type Effective AC binder content Percent passing sieve size 3/4 3/8 #4 #200 1100L 11.2 100.0 88.7 62.5 6.9 13 L 10.8 100.0 77.1 62.1 5.8 13A 11.7 100.0 83.6 66.4 5.5 1500L 11.4 100.0 85.0 56.1 5.5 3B 9.9 99.9 69.9 46.2 4.7 3C 10.9 100.0 72.0 49.7 5.1 3E3 10.6 100.0 78.7 46.8 3.7 3E30 10.0 98.9 83.9 66.6 4.3 4E1 11.0 100.0 86.8 68.0 4.8 4E10 10.6 100.0 87.5 58.6 4.9 4E3 10.9 100.0 87.7 88.9 4.9 4E30 11.1 100.0 86.8 64.1 5.0 100 Table 3 - 20 HMA base course average aggregate gradation HMA mixture type Effective AC binder content Percent passing sieve size 3/4 3/8 #4 #200 700 8.5 71.5 56.0 4 6.9 4.5 2C 9.6 87.1 55.5 41.4 6.0 2E3 9.7 89.9 71.6 58.3 4.6 3B 9.7 99.8 62.6 40.0 4.7 3E1 11.0 100.0 71.4 48.3 4.5 3E10 10.1 99.4 75.2 47.0 4.7 3E3 10.3 99.7 79.7 58.0 4.5 3E30 11.0 99.7 76.2 55.7 5.1 4E3 11.3 100.0 85.4 68.2 5.0 3E30 9.8 100.0 77.1 57.6 4.5 PCC material inputs The Pavement - ME transverse cracking prediction model is very sensitive to concrete strength (compressive or flexural). The PCC material related inputs were obtained from the following sources: Material testing results Typ ical MDOT values Quantifying c oefficient of t hermal e xpansion v alues of t ypical h ydraulic c ement c oncrete p aving m ixtures ( 29 ) PCC strength The c oncrete core compressive strength ( c ) test data were collected by MDOT . These tests represent the concrete compressive strength close to the time of construction for the selected pavement sections. These test values were used directly for each corresponding project. The in - situ strength values also represented several MDOT geographical regions. The average c for each region was determined in order to represent concrete strengths for the pavement sections which did not h ave actual test values. The region specific average compressive strengths are 101 summarized in Table 3 - 21 . The transverse cracking model in the Pavement - ME directly uses modulus of rupture (MOR) to estimate damage. The MOR values were estimated based on the A CI correlation between MOR and c as shown by Equation (6) . Figure 3 - 46 shows the c and estimated MOR distributions. It should be noted that the specific testing age of these cores were not available; however, all cores were tested after or at least 28 days. The Pavement - ME internally calculates the relationship between c and MOR. 
The relationship differs slightly from Equation ( 3 - 6 ) ( 34 ) , instead of using 7.5 , the software uses a value of 9.5. Since the cores were not tested at 28 days, and the actual testing dates were unknown, the lower values were assumed to better represent 28 day strengths for each pavement section. ( 3 - 6 ) Table 3 - 21 Average values for compressive strength and MOR by MDOT region Region/Job number Measured compressive st rength (psi) Calculated MOR (psi) All sections 5142 538 Grand 5119 537 Metro 4963 528 Southwest 5496 556 University 5165 539 102 (a) Compressive strength ( c ) (b) Modulus of rupture (MOR) Figure 3 - 46 Distribution of concrete strength properties Coefficient of thermal expansion The CTE input values were obtained from the MDOT study that determined the CTE for various aggregates available across the state of Michigan ( 29 ) . The most prevalent CTE values for aggregate used in Michigan are either 4.5 or 5.8 in/in/°F×10 - 6 depending on the location of the pavem ent section . For U niversity or M etro regions , a CTE value of 5.8 in/in/°F×10 - 6 , while for other regions, a value of 4.5 in/in/°F×10 - 6 was used. Aggregate base/subbase and subgrade input values The aggregate base/subbase and subgrade input values were obtai ned from the following sources: Backcalculation of unbound granular layer moduli ( 28 ) Pavement s ubgrade MR d esign v s easonal c hanges ( 27 ) The resilient modulus (MR) values for the base and subbase material were selected based on the results from previous MDOT studies. The typical backcalculated values for base and subbase MR are 33,000 psi and 20,000 psi, respectively. These values were assumed for all 103 projects, since in - situ MR values were not available. The subgrade materi al type and resilient modulus was selected based on the Subgrade MR study ( 27 ; 28 ) . The study outlined the location of specific soil types and their MR values across the entire State. There are three possible ways to incorporate the design MR (e.g. 4,400 psi A - 7 - 6 soil) for soils in Michigan in the Pavement - ME local calibration. These methods include: 1. Adjusted subgrade MR 2. Annual representative value (effective MR representing the entire year) 3. Design MR at specified optimum moisture content (OMC) [using typical OMC for the Michigan soils] . Method 1: Adjusted Subgrade MR The design softwa re internally adjusts the subgrade MR value based on the soil type and climate. Therefore, the adjustment factors were determined for each climate and soil type in the entire state . The MR values fluctuate based on the monthly climate variations for a year . The MR adjustment factor s were determined from the minimum MR value over the entire 20 year design period . The adjustment factors are used to artificially adjust the MR so that the minimum value represents the MDOT recommended design MR for a particular soil ( 27 ; 28 ) . For example, the following process was adopted to determine the adj ustment factor and inflate the design MR values for SM soils in Lansing climate (see Table 3 - 22): 1. Backcalculate MR or use backcalculated MR for SM soil from previous MDOT study. The MR value in the table is 24,764 psi. 104 2. Run the pavement section with the bac kcalculated MR value obtained in step 1 and obtain the minimum MR value based on the EICM adjustments from the Pavement - ME output file. 3. Calculate the adjustment factor as shown below: 4. Determine the inflated design MR as shown bel ow: 5. 
Use the inflated design MR in the Pavement-ME to reflect the MDOT design MR value. Ideally, after the climatic adjustments are applied, the adjusted MR should reflect a value similar to the design MR. It should be noted that the adjustment factors were determined only for backcalculated moduli and were assumed to be similar for the design MR values. However, it was also found that the adjustment factors vary depending on the magnitude of MR (i.e., backcalculated or design MR values). A summary of adjustment factors based on soil type and climate station is shown in Table 3-22. Table 3-23 summarizes the adjusted design MR values initially used in the calibration process. The adjusted MR values are shown in Figure 3-47 for a design MR of 4,400 psi. Two concerns were identified for this method:
a. The adjustment factors were determined based on backcalculated moduli and may not accurately represent the lower design MR values.
b. The adjustment factors should have been estimated based on the design MR values for each soil type and climate.
Therefore, this method was not adopted for the final local calibration.
Method 2: Annual Representative Subgrade MR Value
Method 2 consists of directly using the design MR as an input; the value does not fluctuate with climatic variations over the design life. Method 2 is only applicable if the design MR represents an effective roadbed modulus (i.e., it already accounts for the moisture variations through the year). The MDOT subgrade soil characterization report (27) considers the design MR to be an effective MR. It should be noted that the backcalculated subgrade MR reflects the in-situ moisture conditions; therefore, it is difficult to determine whether the in-situ moisture content represents the optimum moisture content. Generally, a saturated soil modulus should be considered for AASHTO 93 design because those values are determined based on soaked CBR. The annual representative MR is shown in Figure 3-47. The figure shows that if this option is selected, there is no fluctuation of the design MR. Therefore, the current local calibration of the performance models adopted this method, and the design MR values for each soil type were utilized.
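For reference, the Method 1 arithmetic in steps 3 and 4 above reduces to two ratios. The sketch below uses the SM soil / Lansing example and assumes, as the values in Tables 3-22 and 3-23 suggest, that the adjustment factor is the minimum EICM-adjusted modulus divided by the modulus used in the run, and that the inflated design MR is the design MR divided by that factor; the minimum EICM modulus shown is an assumed number chosen to be consistent with the tabulated 0.70 factor.

# Method 1 adjustment, illustrated with the SM soil / Lansing example.
backcalculated_mr = 24_764   # psi, MR used in the Pavement-ME run (step 1)
minimum_eicm_mr = 17_335     # psi, minimum monthly MR from the output file (step 2; assumed value)
design_mr = 5_200            # psi, MDOT recommended design MR for SM soils

adjustment_factor = minimum_eicm_mr / backcalculated_mr   # step 3 (~0.70, Table 3-22)
inflated_design_mr = design_mr / adjustment_factor        # step 4 (~7,430 psi; Table 3-23 lists 7,460)

print(f"adjustment factor  = {adjustment_factor:.2f}")
print(f"inflated design MR = {inflated_design_mr:,.0f} psi")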
Figure 3 - 47 Subgrade MR over time in Lansing 106 Table 3 - 22 List of MR reduction factors for Michigan weather stations in the Pavement - ME USCS AASHTO Back - calculated (psi) Design MR value (psi) Adrian Ann Arbor Battle Creek Benton Harbor Detroit Flint Gaylord Grand Rapids Hancock SM A - 2 - 4, A - 4 24,764 5,200 0.82 0.84 0.84 0.82 0.82 0.81 0.47 0.74 0.49 SP1 A - 1 - a, A - 3 27,739 7,000 0.77 0.75 0.81 0.76 0.82 0.68 0.49 0.74 0.59 SP2 A - 1 - b, A - 2 - 4, A - 3 25,113 6,500 0.78 0.66 0.7 0.79 0.78 0.64 0.51 0.65 0.5 SP - SM A - 2 - 4, A - 4 20,400 7,000 0.71 0.7 0.74 0.82 0.81 0.66 0.51 0.67 0.53 SC - SM A - 2 - 6, A - 6, A - 7 - 6 20,314 5,000 0.48 0.49 0.48 0.47 0.47 0.48 0.31 0.42 0.34 SC A - 4, A - 6, A - 7 - 6 21,647 4,400 0.48 0.49 0.48 0.47 0.47 0.48 0.31 0.42 0.34 CL A - 4, A - 6, A - 7 - 6 15,176 4,400 0.48 0.49 0.48 0.47 0.48 0.48 0.31 0.42 0.34 ML A - 4 15,976 4,400 0.4 0.41 0.4 0.4 0.4 0.35 0.23 0.35 0.28 SC/CL/ML A - 2 - 6, A - 4, A - 6, A - 7 - 6 17,600 4,400 0.47 0.47 0.46 0.46 0.46 0.46 0.3 0.41 0.33 USCS AASHTO Back - calculated (psi) Design MR value (psi) Houghton Lake Iron Mountain Kalamazoo Lansing Muskegon Pellston Pontiac Traverse City SM A - 2 - 4, A - 4 24,764 5,200 0.48 0.54 0.85 0.7 0.82 0.57 0.7 0.57 SP1 A - 1 - a , A - 3 27,739 7,000 0.54 0.51 0.77 0.68 0.85 0.47 0.7 0.59 SP2 A - 1 - b, A - 2 - 4, A - 3 25,113 6,500 0.46 0.49 0.78 0.65 0.78 0.54 0.65 0.54 SP - SM A - 2 - 4, A - 4 20,400 7,000 0.48 0.5 0.84 0.67 0.81 0.56 0.68 0.56 SC - SM A - 2 - 6, A - 6, A - 7 - 6 20,314 5,000 0.36 0.36 0 .48 0.42 0.48 0.32 0.42 0.42 SC A - 4, A - 6, A - 7 - 6 21,647 4,400 0.36 0.36 0.48 0.42 0.48 0.32 0.42 0.41 CL A - 4, A - 6, A - 7 - 6 15,176 4,400 0.36 0.36 0.48 0.42 0.48 0.32 0.42 0.41 ML A - 4 15,976 4,400 0.24 0.26 0.4 0.34 0.4 0.26 0.35 0.28 SC/CL/ML A - 2 - 6, A - 4, A - 6, A - 7 - 6 17,600 4,400 0.34 0.35 0.47 0.41 0.47 0.31 0.41 0.4 107 Table 3 - 23 Inflated MR values from Method 1 USCS AASHTO Design MR (psi) Adrian Ann Arbor Battle Creek Benton Harbor Detroit Flint Gaylo rd Grand Rapids Hancock Houghton Lake Iron Mountain Kalamazoo Lansing Muskegon Pellston Pontiac Traverse City SM A - 2 - 4, A - 4 5,200 6,365 6,228 6,168 6,318 6,318 6,443 11,159 7,046 10,700 10,766 9,647 6,103 7,460 6,341 9,091 7,482 9,155 SP1 A - 1 - a, A - 3 7,00 0 9,091 9,358 8,653 9,210 8,578 10,340 14,286 9,447 11,864 12,939 13,780 9,079 10,370 8,226 15,053 10,014 11,925 SP2 A - 1 - b, A - 2 - 4, A - 3 6,500 8,355 9,848 9,326 8,207 8,291 10,172 12,622 9,924 12,897 14,192 13,266 8,355 10,077 8,291 12,082 9,984 11,993 SP - SM A - 2 - 4, A - 4 7,000 9,873 10,043 9,498 8,516 8,610 10,622 13,645 10,401 13,333 14,645 14,028 8,353 10,511 8,599 12,613 10,234 12,478 SC - SM A - 2 - 6, A - 6, A - 7 - 6 5,000 10,331 10,267 10,439 10,548 10,526 10,439 15,923 11,877 14,621 14,045 13,967 10,374 11,877 1 0,352 15,674 11,877 11,877 SC A - 4, A - 6, A - 7 - 6 4,400 9,091 9,035 9,186 9,282 9,263 9,186 14,013 10,452 12,866 12,360 12,290 9,129 10,452 9,109 13,794 10,452 10,705 CL A - 4, A - 6, A - 7 - 6 4,400 9,091 9,035 9,186 9,283 9,263 9,186 14,014 10,451 12,866 12,359 12 ,291 9,128 10,451 9,110 13,794 10,451 10,706 ML A - 4 4,400 10,919 10,706 10,973 10,892 10,865 12,571 18,805 12,753 15,604 17,960 17,187 11,084 13,056 11,110 16,729 12,643 15,439 SC/CL/ML A - 2 - 6, A - 4, A - 6, A - 7 - 6 4,400 9,412 9,354 9,510 9,610 9,590 9,510 14, 507 10,820 13,320 12,796 12,724 9,451 10,820 9,431 14,280 10,820 11,083 108 Method 3: Subgrade Design MR at OMC Method 3 represents a case when design MR is assumed 
to be at the OMC for a particular soil (A - 7 - 6) in Michigan. It should be noted that the mois ture content needs to be determined for in - situ conditions when FWD testing is used to backcalculate the subgrade MR while OMC for a subgrade is required at a construction site. The OMC for a soil type obtained, from the subgrade soil characterization stud y in Michigan ( 27 ) , were utilized in this method. T he results show that the P avement - ME reduced the MR from 4,400 to 2,300 psi and then it further fluctuates based on the climatic variations estimated by the Enhanced Integrated Climate Model (EICM) (see Figure 3 - 47). Based on the above discussion and consultation with MDOT, method 2 was adopted for the local calibration. Consequently, the recommended design MR value corresponding to the soil type for each pavement section was utilized based on Table 3 - 24. Table 3 - 24 Average roadbed so il MR values Roadbed Type Average MR USCS AASHTO Laboratory determined (psi) Back - calculated (psi) Design value (psi) Recommended design MR value (psi) SM A - 2 - 4, A - 4 17,028 24,764 5,290 5,200 SP1 A - 1 - a, A - 3 28,942 27,739 7,100 7,000 SP2 A - 1 - b, A - 3 25,6 85 25,113 6,500 6,500 SP - SM A - 1 - b,A - 2 - 4, A - 3 21,147 20,400 7,000 7,000 SC - SM A - 2 - 4, A - 4 23,258 20,314 5,100 5,000 SC A - 2 - 6, A - 6,A - 7 - 6 18,756 21,647 4,430 4,400 CL A - 4, A - 6, A - 7 - 6 37,225 15,176 4,430 4,400 ML A - 4 24,578 15,976 4,430 4,400 SC/CL/ML A - 2 - 6, A - 4, A - 6, A - 7 - 6 26,853 17,600 4,430 4,400 109 Environmental Inputs The climate inputs were obtained from the Pavement - ME built in climate files. Each climatic station has historical weather data collected over many years. The closest weather station to each selected project was utilized. Table 3 - 25 summarizes the important climatic information for the stations used in the local calibration. These climate stations represent those that are available to choose as a single climate station. Table 3 - 25 Michigan climate station information Climate station Mean annual air temperature (°F) Mean annual rainfall or precipitation (in.) Wet days (#) Freezing index (°F - days Average annual number of freeze thaw cycles Ad rian 49.4 28.9 207.0 1465.2 68.6 Ann Arbor 48.2 29.0 204.0 1771.8 76.3 Battle Creek 49.3 29.4 198.1 1479.9 58.7 Benton Harbor 49.8 28.7 189.5 1205.4 65.5 Detroit (Metro airport) 50.7 32.7 194.1 1149.9 54.5 Detroit (willow run airport) 48.8 27.8 207.5 1559.5 70 Detroit (city airport) 50.4 27.3 202.3 1157.1 47.3 Flint 48.9 26.6 197.9 1544.3 64.8 Gaylord 43.6 27.3 233.7 2346.3 63.3 Grand Rapids 49.0 30.8 206.3 1371.7 61.6 Hancock 40.8 21.5 235.3 2594.1 63.3 Houghton Lake 45.0 25.2 215.9 2152.0 75.3 Iron Mountain 42.9 22.5 205.5 2918.9 85.9 Kalamazoo 49.5 32.6 213.7 1427.3 59.6 Lansing 48.5 28.8 201.9 1658.8 70.0 Muskegon 49.1 30.5 202.8 1182.8 60.5 Pellston 43.1 30.5 233.5 2837.5 88.8 Pontiac 48.6 27.5 209.0 1520.5 58.5 Traverse City 46.6 28.0 221.5 1701.6 69.1 3.4 S UMMARY This chapter highlights the steps necessary to select the in - service pavement sections and obtain their as - constructed input values for use in the Pavement - ME. Input data collection is one of the 110 most important steps in the loca l calibration of the performance prediction models. Table 3 - 26 summarizes the inputs and corresponding levels for traffic, and material characterization data used for the local calibration. 
Table 3 - 26 Summary of input levels and data source Input Input level Input source Traffic AADTT 1 MDOT Historical Traffic counts TTC 2 Cluster analysis ALS Tandem 2 Cluster analysis HDF 2 Cluster analysis MDF 3 MDOT traffic characterization study AGPV 3 MDOT traf fic characterization study ALS single, tridem, quad 3 MDOT traffic characterization study Cross - section (new and existing) HMA thickness 1 Project specific HMA thicknesses based on design drawings PCC thickness 1 Project specific PCC thicknesses based on design drawings Base thickness 1 Project specific base thicknesses based on design drawings Subbase thickness 1 Project specific subbase thicknesses based on design drawings Construction materials HMA Binder type 3 Project specific binder and mixt ure gradation data obtained from data collection HMA mixture aggregate gradation 3 Project specific binder and mixture gradation data obtained from data collection Binder type 1 Pseudo Level 1 - MDOT HMA mixture characterization study HMA mixture a ggregate gradation 1 Pseudo Level 1 - MDOT HMA mixture characterization study PCC Strength ( f' c , MOR) 1 Psuedo Level 1 - project specific testing values CTE 2 MDOT CTE report recommendations Base/subbase MR 2 Recommendations from MDOT unbound materia l study Subgrade MR 2 Soil specific MR values - MDOT subgrade soil study Soil type 1 Location based soil type - MDOT subgrade soil study Climate 1 Closest available climate station Note: Level 1 is project specific data, pseudo level 1 means that t he inputs are not project specific but the material properties (lab measured) corresponds to similar materials used in the project Level 2 inputs are based on regional averages in Michigan Level 3 inputs are based on statewide averages in Michigan 111 4 - LOCAL C ALIBRATION PROCEDURE S 4.1 I NTRODUCTION The NCHRP Project 1 - 40B ( 2 ) guide documented the recommended practices for local calibration of the Pavement - ME. The guide outlines the significance of the calibration process as well as the general approach for local calibration. In general, the calibration process is used to (a) confirm that the prediction models can predict pavement distress and smoothness with minimal bias, and (b) determine the standard error associated with t he prediction equations. The standard error estimates the scatter of the data around the line of equality between predicted and measured values of distress. The bias indicates if there is any consistent under or over - prediction by the prediction models. I t should be noted that the local calibration process only applies to the transfer functions or statistical models in the Pavement - ME. Furthermore, the feasibility of the mechanistic or constitutive models within the Pavement - ME is assumed to be accurate a nd depict a correct simulation of real - world conditions. This chapter details the (a) local calibration approach es and techniques used in this study for each model, (b) effect of the local calibration coefficients on the performance predictions, and, (c) n eed for reliability and the methods to det ermine the reliability . 4.2 C ALIBRATION A PPROACHES The local calibration of the performance prediction models are performed by changing the calibration coefficients in each model. These coefficients are adjusted indivi dually or estimated through minimizing the error between the predicted and measured distress. 
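For coefficients that act as direct multipliers on quantities already produced by the software (so the software does not have to be rerun while the coefficient is adjusted), that minimization reduces to a one-dimensional least-squares problem. The sketch below uses invented data and a single hypothetical multiplier; it is not one of the actual Pavement-ME transfer functions, which are summarized in Table 4-1 and Chapter 2.

import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical calibration of a single direct-multiplier coefficient (beta):
# predicted_local = beta * predicted_global, fitted by minimizing the SSE.
predicted_global = np.array([0.10, 0.18, 0.25, 0.31, 0.40])   # globally calibrated predictions
measured = np.array([0.07, 0.12, 0.20, 0.22, 0.30])           # measured distress, same sections

def sse(beta):
    """Sum of squared errors between measured and rescaled predicted distress."""
    return float(np.sum((measured - beta * predicted_global) ** 2))

result = minimize_scalar(sse, bounds=(0.1, 5.0), method="bounded")
beta_local = result.x
residuals = measured - beta_local * predicted_global
see = np.sqrt(np.sum(residuals ** 2) / (len(measured) - 1))   # approximate standard error of estimate
print(f"local coefficient = {beta_local:.3f}, bias = {residuals.mean():.4f}, SEE = {see:.4f}")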
Table 4 - 1 summarizes the flexible and rigid pavement performance prediction models, their corresponding transfer functions, and model calibration coefficients. Th e detailed performance prediction models were summarized in Chapter 2. Two methods of closed - form model calibration are 112 generally used: (a) an analytical process for linear models, and (b) a numerical optimization technique for non - linear models. In both m ethods, the model constants are determined to minimize the error between measured and predicted distress values. Two types of models are used in the Pavement - ME for performance prediction: (a) structural response models, and (b) transfer functions. The for mer models are based on analytical solutions based on engineering mechanics (e.g., linear elastic solution to determine stress, strain and deformation for flexible pavements) while the latter models are empirical in nature and relate the pavement response or damage over time. The local calibration process deals with the transfer function for predicting distresses. Among empirical transfer functions, two different calibration approaches may be required depending upon the nature of the distress being predicte d: (a) model that directly calculates the magnitude of surface distress, and (b) model that calculates the incremental damage index rather than actual distress magnitude. In the first approach, the pavement response parameter is used to compute the increme ntal distress in a direct relationship. The local calibration approaches for each model are summarized in Table 4 - 1. Approach I indicates that the local calibration is performed without running the software each time. Alternatively, approach II requires so ftware execution each time the coefficients are adjusted. 4.3 C ALIBRATION T ECHNIQUES In model calibration, a fitting process produces empirical model constants that are evaluated based on the goodness - of - fit criteria to estimate the best set of values for the coefficients. The success of the local calibration depends on the dataset used. Different sampling techniques can improve the confidence in the local calibration coefficients. This section discusses the available local calibration procedures and sampling t echniques that can be used to calibrate the performance prediction models. 113 The local calibration guide ( 35 ) suggests using statistical techniques to validate the adequacy of the performance prediction models. Traditional split sampling provides one method to calibrate the performa nce predictions models. Furthermore, resampling methods such as jackknifing and bootstrapping are recommended because they provide more reliable and robust assessment of the model prediction accuracy than the split sampling methods. While the traditional s plit sampling approach uses a two - step process for calibration and validation, advanced approaches can simultaneously consider both steps. Moreover, the goodness - of - fit statistics are based on predictions rather than on data used for fitting the model para meters. The efficiency and robustness of such approaches become more important when the sample size is small. Different methods can be used for the local calibration of the distress models including (a) traditional split sampling approach, (b) bootstrappin g, and (c) jackknifing. These approaches are discussed below. 4.3.1 Traditional Approach The NCHRP Project 1 - 40B documented the recommended practices for local calibration of the Pavement - ME performance models ( 2 ) . 
The traditional approach consists of splitting the sample into two subsets. One set is used for calibration and the other for validation. The calibration-validation process depends on the number of sections selected. In addition, two calibration approaches may be necessary depending on the nature of the distress predicted through the transfer function. The first approach is used for models that directly calculate the magnitude of the surface distress, while the second approach is used for models that calculate the incremental damage over time and relate damage to distresses. Data collected from in-service pavements are used within both approaches to establish the calibration coefficients such that the overall standard error of the estimate between the predicted and observed response is minimized.
The model validation procedure is used to demonstrate that the calibrated model can produce accurate predictions of pavement distress for sections other than the ones used for calibration. The success of the validation process can be judged based on the bias in the predicted values and the standard error of estimate. Statistical hypothesis tests should be performed to determine if a significant difference exists between the calibration and validation results (2; 35).
4.3.2 Bootstrapping
At its simplest, for a dataset with a sample size of N, B "bootstrap" samples of size N are randomly selected with replacement from the original dataset. The bootstrap is an approach used to estimate variances, confidence intervals and other statistical properties of a population from a sample. These properties are obtained by drawing samples from a sample. Each bootstrap sample typically omits several observations and has multiple copies of others. Because the bootstrap samples are drawn "with replacement," each observation (i.e., a pavement section) has an equal chance of being selected and may be selected multiple times. Validity of the bootstrap variance estimates requires that the resampling properties of the bootstrap be similar to the sampling properties of the population. Bootstrap resampling can be performed using different methods: (a) resampling the observations randomly, or (b) resampling based on the residuals. The choice of resampling approach depends on the data structure. For example, if fixed regressors are needed for an experiment design, the bootstrapping can be performed using the residuals (error terms). This means that the residual will correspond to the predicted value since the measured value will not change.
Table 4-1 Model calibration approach (calibration outside of the software or rerunning the software). The table lists, for flexible pavements (bottom-up fatigue cracking, top-down fatigue cracking, HMA and base/subgrade rutting, thermal cracking, and IRI) and rigid pavements (transverse cracking, transverse joint faulting, and IRI), the model transfer functions and the calibration approach (I or II) used for each; in the table, red font indicates the calibration coefficients.
On the other hand, if the random effects of regressors are to be estimated in a regression model, random resampling of the observations can be employed. Bootstrapping based on resampling the observations is applied when the regression models are built from data that have random regressors and responses. The procedure consists of (36; 37):
1. Draw an n-sized bootstrap sample with replacement from the observations, giving a 1/n probability for each value in the sample set.
2.
Calculate the ordinary least squares (OLS) coefficients from the bootstrap sample. 3. Repeat steps 1 and 2 for the total number of bootstraps desired (1000 or 10,000) . Higher number of bootstraps will lead to better accuracy; however, it may not be very efficient and gain in accuracy may be not significant. 4. Obtain the probability distribution of the bootstrap estimates and use the distribution to estimate regression coefficients, variances and confidence intervals. Equation ( 4 - 1 ) shows the bootstrap regression equation: ( 4 - 1 ) where, is an unbiased estimator of . Bootstraps based on resampling the residuals are used when the fixed effect of the re gressors are to be considered (i.e., in an experiment design). Therefore, bootstrap resampling must preserve the data structure. The procedure based on resampling the errors is as follows ( 36 ) : 1. Find the ordinary least squares ( OLS ) coefficient using the least squares regression for the sample 117 2. Calculate the residuals based on measured and predicted response values. 3. Draw an n sized bootstrap random sample with replacement from the residuals determined in Step 2. 4. Compute the bootstrap estimated values by adding the resample d residuals to the OLS regression predicted values. 5. Obtain least squares estimates from the bootstrap sam ples. 6. Repeat steps 3 to 5 for the total number of bootstrap samples. Finally, the bootstrap bias, variance, and confidence intervals for regression coefficients can be estimated by using Equations ( 4 - 2 ) to ( 4 - 4 ) , respectively ( 36 ) : ( 4 - 2 ) where: = bootstrapped bias = bootstrapped estimate of regression coefficient = mean estimate of regression coefficient ( 4 - 3 ) w here: = bootstrap variance of regression coefficient = ordered bootstrap coefficient corresponding to B = unbiased estimator of = number of bootstraps ( 4 - 4 ) w here: = critical value of t with probability /2 118 = standard error of the 4.3.3 Jackknifing Jackknifing is an analytical procedure for adjusting and confirming the calibration coefficients of a model. The model validation statistics are obtained independent of the data used for calibration. Performing jackknifing multiple times is used to assess the sensitivity of the validation goodness - of - fit statistics. To develop jackknife statistics from a sample of n sets of measured values, the data matrix is divided into a calibration set and a validation set. For an n - 1 jackknife validatio n the procedure starts by removing one set of measurements (pavement section) from the data total set of pavement sections and calibrating the model with the remaining n - 1 set of projects . The set of measurements that was not used for calibration is then u sed to predict the pavement performance or distress , from which the error is computed as the difference between the predicted and measured values of the performance measure . This process is repeated for all pavement sections in the database. After the proc edure is complete, there will be n values of the error, from which the jackknifing goodness - of - fit statistic can be computed. The jackknifed errors are computed from measured x values that were not used in calibrating the model coefficients. Thus, the jack knifing goodness - of - fit statistics are considered to be an independent measures of model accuracy ( 38 ) . 
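The two resampling schemes in Sections 4.3.2 and 4.3.3 can be applied to the same fitting problem. The sketch below resamples the observations (pavement sections) for the bootstrap and leaves one section out at a time for the jackknife, using a hypothetical direct-multiplier coefficient and invented data; the percentile interval at the end is one common way of summarizing the bootstrap distribution and is not necessarily the exact interval form intended by Equations (4-2) through (4-4).

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical section-level data: globally predicted and measured distress.
predicted = np.array([0.10, 0.18, 0.25, 0.31, 0.40, 0.12, 0.22, 0.28])
measured  = np.array([0.07, 0.12, 0.20, 0.22, 0.30, 0.10, 0.15, 0.21])

def fit_beta(pred, meas):
    """Least-squares estimate of a direct multiplier beta (meas ~ beta * pred)."""
    return float(np.dot(pred, meas) / np.dot(pred, pred))

n = len(predicted)
beta_full = fit_beta(predicted, measured)

# Bootstrap: resample the observations with replacement (Section 4.3.2).
B = 10_000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)            # draw n sections with replacement
    boot[b] = fit_beta(predicted[idx], measured[idx])
boot_bias = boot.mean() - beta_full
boot_var = boot.var(ddof=1)
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

# Jackknife: leave one section out, calibrate on the rest, predict the held-out section (Section 4.3.3).
jack_errors = np.empty(n)
for i in range(n):
    keep = np.delete(np.arange(n), i)
    beta_i = fit_beta(predicted[keep], measured[keep])
    jack_errors[i] = measured[i] - beta_i * predicted[i]

print(f"beta (all data)    = {beta_full:.3f}")
print(f"bootstrap bias/var = {boot_bias:.4f} / {boot_var:.5f}")
print(f"bootstrap 95% CI   = ({ci_low:.3f}, {ci_high:.3f})")
print(f"jackknife SEE      = {np.sqrt(np.mean(jack_errors ** 2)):.4f}")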
When comparing the jackknife with bootstrap procedures, the jackknife is less demanding computationally than the bootstrap method and relies on dividing the sample observations into disjoint subsets, each having the same number of observations. In addition, the jackknife takes a fundamentally different view of the possible replicates of the statistic; it treats them as a finite collection, whereas bootstrap resampling treats the replicates as a sample from a population of infinite size. When bootstrapping is used to estimate the standard error of a statistic, it gives slightly different results when repeated on the same data, whereas the jackknife gives exactly the same result each time. The bootstrap is a more general technique and is preferred to the jackknife method.
4.3.4 Summary of Resampling Techniques
The bootstrap and jackknife are nonparametric and robust resampling techniques for estimating standard errors and confidence intervals of a population parameter such as a mean, median, proportion, odds ratio, or regression coefficient. The main advantage of these techniques, especially bootstrapping, is that parameter estimation is possible without making distributional assumptions. In addition, the approach is valid when such assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for estimating the standard errors. On the other hand, there are several limitations of the bootstrap method, especially when (a) it is used for small datasets with outliers, and (b) it is adopted for time series data where the independence assumption is violated. Jackknifing may be more efficient; however, it is not as powerful as bootstrapping, especially when the sample size is limited. Both bootstrapping and jackknifing are demonstrated for Michigan pavement sections.
4.4 PROCEDURE FOR CALIBRATION OF PERFORMANCE MODELS
The local calibration procedure for MDOT pavements follows the general guidelines provided by the NCHRP guide for local calibration (35), as described in Chapter 2. The first step consists of selecting the pavement sections and collecting their corresponding input data. The project selection and input data collection efforts were discussed in Chapter 3. The remaining process consists of the following steps:
1. Execute the Pavement-ME software to predict the pavement performance for each selected pavement section.
2. Extract the predicted distresses and compare them with the measured distresses.
3. Test the accuracy of the global model predictions and determine if local calibration is required.
4. If local calibration is required, adjust the local calibration coefficients to eliminate bias and reduce the standard error.
5. Validate the adjusted coefficients with pavement sections not included in the calibration set.
6. Adjust the reliability equations for each model.
Steps 1 and 2 do not require further explanation. Steps 3 and onwards are presented next.
4.4.1 Testing the Accuracy of the Global Model Predictions
The adequacy of the global model predictions is determined by performing three hypothesis tests (step 3). The hypothesis tests indicate whether the models are biased. Bias is defined as the consistent under- or over-prediction of distress or IRI. The bias between measured and predicted distress/IRI is evaluated by performing linear regression, hypothesis tests and a paired t-test using a significance level of 0.05.
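The three hypothesis tests described above (and summarized in Table 4-2 below) can be run with standard statistical routines. The sketch uses invented data, a paired t-test on the predicted-minus-measured differences, and an ordinary least-squares fit of measured on predicted for the intercept and slope checks; the slope = 1 test is obtained by shifting the null hypothesis, which is one standard way to compute it and may differ in detail from the study's implementation.

import numpy as np
from scipy import stats

predicted = np.array([0.10, 0.18, 0.25, 0.31, 0.40, 0.12, 0.22, 0.28])
measured  = np.array([0.07, 0.12, 0.20, 0.22, 0.30, 0.10, 0.15, 0.21])
alpha = 0.05

# (1) Paired t-test: H0: mean(predicted - measured) = 0
t_paired, p_paired = stats.ttest_rel(predicted, measured)

# (2) and (3): regress measured on predicted, then test intercept = 0 and slope = 1
res = stats.linregress(predicted, measured)
dof = len(predicted) - 2
t_slope = (res.slope - 1.0) / res.stderr               # slope tested against 1 (not 0)
p_slope = 2 * stats.t.sf(abs(t_slope), dof)
t_int = res.intercept / res.intercept_stderr           # intercept tested against 0
p_int = 2 * stats.t.sf(abs(t_int), dof)

for name, p in [("paired t-test", p_paired), ("intercept = 0", p_int), ("slope = 1", p_slope)]:
    verdict = "reject H0 -> local calibration recommended" if p < alpha else "fail to reject H0"
    print(f"{name:14s} p = {p:.3f}  {verdict}")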
Figure 4 - 1 shows a representation of model bias and standard error for various conditions. The three hypothesis tests are summarized in Table 4 - 2. If any of these hypothesis tests are rejected (significance level greater than 0.05) for a performance model, then local calibration is recommended. The null hyp othesis ( H o ) represents the mean difference between predicted and measured distress is zero i.e., there is no difference between both. The alternate hypothesis ( H 1 ) depicts that there is a difference between predicted and measured distress. Similarly, hypo thesis tests were performed to test the intercept and slope differences between predicted and measured distresses. 121 (a) Low bias, low std. error (b) High bias, high std. error (c) No bias, low std. error (d) No bias, high std. error Figure 4 - 1 Schematic of bias and standard error for model calibration Table 4 - 2 Hypothesis tests Hypothesis test Hypotheses Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted Intercept H 0 = intercept = 0 H 1 Slope H 0 = slope = 1 H 1 4.4.2 Local Calibration Coefficient Refinements As discussed in Section 4.3, there are different procedures to locally calibrate the various 122 pe rformance prediction models. This section outlines the procedures used for each performance prediction model in this study (steps 4 and 5). Additionally, the various resampling techniques and datasets used for local calibration are also discussed. 4.4.2.1. Data su bpopulations T he pavement performance prediction model s were locally calibrated for Michigan pavements using multiple statistical sampling techniques and dataset options. The data set options are combinations of reconstruct, rehabilitation and LTPP pavemen t sections. The main objective for considering different subsets of all the selected pavement sections, referred as options herein, is to verify if different calibration coefficients are required for an option or if an overall model calibration is adequate for different options. The options with different dataset combinations are as follows: Option 1: MDOT reconstruct sections only Option 2: MDOT reconstruct and rehabilitation sections Option 3: MDOT reconstruct, rehabilitation, and LTPP sections Option 4: MDOT rehabilitation sections only 4.4.2.2. Sampling techniques The sampling techniques mentioned above were applied to each option (i.e., different subpopulation) for studying the effects of various sampling methods on the local calibration coefficients. The perfor mance prediction models were locally calibrated by minimizing the sum of squared error between the measured and predicted distress for each of the following sampling techniques: No sampling (include all data) Traditional split and repeated split sampling 123 B ootstrapping Jackknifing The different sampling techniques were used to determine the best estimate of the local calibration coefficients and the associated standard errors. The use of these techniques was considered because of data limitations, especiall y due to limited sample size. First, the entire dataset , including all the selected pavement sections , was used to calibrate the performance prediction model s . Since all of the pavement sections were included in the calibration effort, no validation of th e locally calibrated model was performed. Second, a traditional split sampling technique was used. In this method , 70% of the pavement sections were randomly selected for local calibration, and the remaining 30% were utilized for validation. 
Split samplin g indicate s how well the calibrated model can predict pavement distress for pavement sections that are not included in the calibration dataset. Generally, SEE and bias from the validation should be similar to those of locally calibrated model. However, the split sampling technique might not give reasonable results when using limited sample size s . In order to address the concerns of limited sample size, the split sampling technique was used repeatedly to estimate distributions of the calibration and validati on parameters (i.e., SEE, bias, calibration coefficients ). Based on these distributions, a mean value , median, and confidence intervals for each parameter was estimated. The confidence interval determined through repeated sampling provides a better indicat ion of the variability of the calibration parameters. Next, bootstrapping resampling was considered. F or a dataset of sample size N , B size N was randomly selected with replacement. The model parameters are estimated for B number of bootstraps. The details of bootstrapping were discussed previously. Similar to the split 124 sampling approach, bootstrap samples were drawn from the entire dataset. The model is calibrated for each bootstrapped sample dataset and the SEE, bias, an d calibration coefficient parameters were estimated. The process was repeated for B number of bootstraps to obtain distributions for each parameter. Figure 4 - 2 shows the flow diagram for the calibration process using both bootstrapping and repeated split s ampling for the performance models. Figure 4 - 2 Repeated sample calibration procedure 125 4.5 F LEXIBLE P AVEMENT M ODEL C OEFFICIENTS The flexible pavement performance prediction models include fatigue (alligator) cra cking, longitudinal cracking, rutting, transverse (thermal) cracking and IRI. The impact of calibration coefficients on the predicted performance specific to each model is discussed in this section. 4.5.1 Alligator Cracking Model (bottom - up fatigue) The alligato r (bottom - up fatigue) cracking model is calibrated by changing the C 1 and C 2 coefficients (see Table 4 - 1). The effects of C 1 and C 2 on the predicted alligator cracking are shown in Figure 4 - 3. The C 1 affects the initiation or start of the alligator crackin g and C 2 affects the slope of the crack propagation. In this work, t wo sets of calibrations are performed for the alligator cracking model (a) combined measured top - down and bottom - up cracking, and (b), bottom - up cracking only. These distresses are combine d because at the surface, it is difficult to identify if the crack propagated from the top or bottom without performing destructive testing methods or coring . 126 (a) Effect of C 1 (b) Effect of C 2 Figure 4 - 3 Effect of calibration coefficients on alligator cracking 4.5.2 Longitudinal Cracking Model (top - down fatigue) The longitudinal (top - down fatigue) cracking model is calibrated by changing the C 1 and C 2 coefficients (see Table 4 - 1). The effects of C 1 and C 2 on the predicted longitudinal cracking are shown in Figure 4 - 4 . The C 1 affects the initiation or start of the longitudinal cracking and C 2 affects the slope of the crack propagation. 127 (a) Effect of C1 (b) Effect of C2 Figure 4 - 4 Effect of calibration coefficients on longitudinal cracking 4.5.3 Rutting Model The rutting model in the Pavement - ME predicts rut depth for the HMA, base, and subgrade layers in the pavement structure. 
The prediction models are diff erent for each layer and the sum of the rut depth in the three layers represents the total surface rutting prediction. Since the rutting models are different for bound (HMA) and unbound layers (base, subbase, and subgrade), the measured rutting for each la yer is needed to calibrate the rutting model. The MDOT PMS database report only total surface rutting and does not provide rutting measurements or contributions for individual layers. Pavement transverse profiles were used to estimate the rutting in indivi dual layers without performing destructive testing methods. The 128 transverse profiles for the selected projects were utilized to estimate the rutting in the HMA, base and subgrade layers. The HMA rutting model has three calibration coefficients as shown in T able 4 - 1. The local calibration coefficient ( r1 ) is a direct multiplier and does not require rerunning of the software every time the calibration coefficient is adjusted. The r2 and r3 calibration coefficients are related to the number of load repetitio ns and the pavement temperature, respectively. These coefficients do require a rerunning of the software every time the coefficients are adjusted. A combination matrix was developed to determine which combination provided the lowest SEE and bias. The range s of these coefficients values were consistent with those found in the literature as summarized in Chapter 2. Table 4 - 3 summarizes the r2 and r3 values used in the evaluation. Figure 4 - 5 shows the impact of the r2 and r3 calibration coefficients on the predicted HMA rutting. The results show that both coefficients affect the overall magnitude and rate of HMA rutting. It should be note d that the predicted rutting magnitudes will be significantly different if axle load spectra or climates are changed. Table 4 - 3 r2 r3 calibration coefficients r2 coefficients r3 coefficient s 0.4 0.4 0.7 0.7 1 1 1.3 1.3 129 r2 r3 Figure 4 - 5 Effect of (a) 2 and (b) 3 on HMA rutting The base and subgrade rutting models have one calibration coefficient each ( s1 ). The coeff icient is a direct multiplier in the equation and does not require rerunning of the software when the coefficient is adjusted. The total rutting is determined by summing the HMA, base and subgrade rutting. The rutting model in the Pavement - ME was calibrate d using the following two methods: 1. Method 1: Individual layer rutting calibrations calibrate the rutting model by changing the individual calibration coefficient (HMA, base/subbase, and subgrade) relative to the rutting contribution of each layer by usin g the estimates of layer contributions from a transverse profile analysis. 130 2. Method 2: Total surface rutting calibration calibrate the rutting model by changing the individual calibration coefficient for each layer simultaneously relative to the total su rface rutting. Transverse profile data were obtained from MDOT to estimate the layer contributions to total rutting. The analysis of the transverse profiles for the selected flexible pavement sections could improve the rutting calibration. Analyses of tr ansverse profiles assist in estimating the seat of rutting and the layer contributions to the total surface rutting ( Method 1 ). The width and depth of the measured rut channel can be used to determine the seat of rutting. A study sponsored by the National Cooperative Highway Research Program (NCHRP) developed a procedure to determine the seat of rutting from the transverse profiles ( 39 ) . 
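Method 1 above ultimately reduces to a small least-squares problem once the layer contributions are known: the measured total rut is split into layer ruts using the percent contributions, and a direct multiplier is fitted for each layer model. The sketch below uses invented numbers and fits only the direct multipliers (βr1 for HMA and βs1 for the unbound layers); the βr2 and βr3 coefficients, which require rerunning the software, are left at their global values.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical Method 1 calibration (all numbers invented, not MDOT data).
pred_hma  = np.array([0.20, 0.28, 0.35, 0.15])   # globally predicted HMA rutting (in.)
pred_base = np.array([0.06, 0.08, 0.10, 0.05])   # globally predicted base/subbase rutting (in.)
pred_sg   = np.array([0.05, 0.06, 0.08, 0.04])   # globally predicted subgrade rutting (in.)
measured_total = np.array([0.25, 0.33, 0.42, 0.19])    # measured surface rutting (in.)
contrib = np.array([[0.75, 0.15, 0.10]] * 4)           # layer shares from the transverse profile analysis

meas_layers = measured_total[:, None] * contrib        # estimated "measured" layer rutting

def residuals(beta):
    """Residuals between rescaled layer predictions and estimated layer rutting."""
    beta_r1, beta_s1_base, beta_s1_sg = beta
    pred = np.column_stack([beta_r1 * pred_hma,
                            beta_s1_base * pred_base,
                            beta_s1_sg * pred_sg])
    return (pred - meas_layers).ravel()

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0], bounds=(0.0, 10.0))
print("beta_r1, beta_s1(base), beta_s1(subgrade) =", np.round(fit.x, 3))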
The procedure consists of calculating the critical ratio between the to tal area above (positive area) and below (negative area) the profile reference line. The positive and negative areas with a reference line are shown in Figure 4 - 6. Figure 4 - 6 Positive and negative areas i n the NCHRP procedure ( 40 ; 41 ) In order to determine the seat of rutting, several calculations have to be made to satisfy a set of conditions. The following calculations are needed: 131 ( 4 - 5 ) ( 4 - 6 ) ( 4 - 7 ) ( 4 - 8 ) ( 4 - 9 ) where, A = Total area Ap = Positive area An = Negative area R = Critical ratio C1 = theoretical average total area for HMA failure, mm 2 ; C 2 theoretical average total area for base/subbase failure, mm 2 C 3 theoretical average total area for subgrade failure, mm 2 ; D maximum rut depth, mm Based on the above equations, the maximum rut depth is calculated by following the illustration in Figure 4 - 7. The total maximum rutting is the distance between the average value of the two positive peaks and the rut depth below the profile reference line. In this case, the maximum rut depth is 1.2 inches. A similar procedure was used to determine the maximum rut for all transverse profiles. The seat of rutting may occur in any of the three pavement layers (HMA, base, subgrade) and the typical failure shapes are visually represented in Figure 4 - 8. 132 Figure 4 - 7 Calculation of the maximum rut depth ( 40 ; 41 ) Figure 4 - 8 Typical seat of rutting based on transverse profile shapes ( 40 ; 41 ) The flow chart shown in Figure 4 - 9 can be used to determine the seat of rutting. Finally, Figure 4 - 10 can be used as an alternative to Equations ( 4 - 5 ) through ( 4 - 9 ) . 133 Figure 4 - 9 Conditions for determining the rutting se at ( 40 ; 41 ) Figure 4 - 10 Correla tion of the type of failure as a function of maximum rut depth and total rut area ( 40 ; 41 ) The data provided by MDOT included transverse profiles approximately every six feet along the length of the entire control section and was collected in 2012 and 2013. The transverse profile data were extracted for each flexible pavement project i n the calibration dataset. The transverse profiles were analyzed using the procedure described above. Collaboration with the 134 data collection vendor (M/s Fugro Roadware) clarified misconceptions regarding the methods used to calculate rutting from the raw p rofile data. Based on these discussions, several concerns regarding the pavement edge, and how to analyze heaving sections were clarified. The analyses were adjusted based on these clarifications. Initially, it was assumed that the first and last point in the transverse profile represents the edge of the pavement. Later, the edge was adjusted to ensure that there is no unexpected drop off because of its impact on the reference line. It should be noted that an incorrect reference line can significantly impac t the seat of rutting calculations. Figure 4 - 11 shows the adjustment made for the pavement edge. (a) Before edge adjustment (b) After edge adjustment Figure 4 - 11 Edge adjustment for transverse profile As mentioned before, the transverse profiles were analyzed for each project. Furthermore, the transverse profile analysis results were summarized for each pavement type (new reconstruct and rehabilitation). The heave sections were excluded from the analys is because the entire transverse profile results in a positive area above the reference line. 
The ratio between the positive and negative area cannot be calculated for heave sections. The following process was adopted for determining the layer contribution s to total surface rutting: a. Identify and extract all the transverse profiles within a particular section (i.e., based on the beginning mile point (BMP) and ending mile point (EMP) 135 b. Analyze each individual transverse profile (at 6 feet interval) using the ab ove mentioned seat of rutting NCHRP methodology c. Determine the distribution of the seat of rutting for HMA, Base/subbase, and subgrade layers within a project d. Calculate layer contributions to the total surface rutting based on the seat of rutting distribut ions e. f. Establish average layer contributions to total surface rutting by pavement type and MDOT regions. These averages were used for the pavement sections where transverse profile data were not avai lable. It should be noted that the transverse profile analyses were utilized only to determine the layer contributions (i.e., the percent) for each pavement section to total surface rutting. However, the magnitude of individual layer rutting (HMA, base /subbase, and subgrade) was determined based on the estimated percent contribution by multiplying it with the measured surface rutting over time for local calibration. The results shown in Figure 4 - 12 indicate that the overwhelming majority (>70%) of rutt ing occurs in the HMA layer for all pavement sections. The individual percentages for each pavement section were used to determine the HMA, base and subgrade rutting from the measured rutting extracted from the MDOT sensor database. The rutting contributio n from each layer is very important in the calibration of the rutting model in the Pavement - ME and will be mentioned later. 136 (a) Overall (b) HMA reconstruct freeway sections (c) HMA reconstruct non - freeway sections (d) HMA over HMA sections (e) HMA over rubblized PCC (f) Composite sections Figure 4 - 12 Transverse profile analysis results 4.5.4 Thermal Cracking Model The transverse cracking model in the Pavement - ME has three distinct models based on the selected HMA input level. It was noted from the literature that minimal transverse (thermal) cracking was predicted for Level 3 inputs. The low prediction values were attributed to the assumption that if the appropriate asphalt binder (PG) is selected for the appropriate climatic condition, then thermal cracking should not occur. The transverse cracking model was calibrated for both Level 1 and 3 HMA inputs. All of the pavement sections in the calibration dataset had available Level 3 data. Results from HMA mixture characterization study ( 24 ) were used to 137 determine which mixture and binder types had Level 1 data. The projects with Level 1 data were also used for local calibration. It should be noted that the Level 1 data may not reflect HMA material properties at the time of construction of the selected pavement sections. The calibration coefficients for the thermal cracking model were adjusted individually and the software requires rerunning for each project. The calibration coefficien ts were adjusted based on ranges obtained from the literature and is shown in Chapter 5. 4.5.5 IRI Model for Flexible Pavements The IRI model for flexible pavements was calibrated after completing the local calibration of the fatigue; rutting and transverse crac king models. The IRI model is a function of the individual distress predictions, site factor and initial IRI (Equation 10). 
The regression coefficients (calibration coefficients) were adjusted to minimize the error between the predicted and measured IRI. The global model coefficients were used as seed (initial) values for the local calibration. Issues were encountered when attempting to match the IRI predictions outside the software. These predictions did not match because of errors in the site factor (SF) equation found in the literature. The correct SF equation, as coded in the Pavement-ME software, is shown in Equations (4-11) to (4-13).

( 4-10 )

where:
IRIo = initial IRI after construction, in/mi
SF = site factor, refer to Equation (4-11)
FCTotal = area of fatigue cracking (combined alligator, longitudinal, and reflection cracking in the wheel path), percent of total lane area; all load-related cracks are combined on an area basis, and the length of cracks is multiplied by 1 foot to convert length into an area
TC = length of transverse cracking (including the reflection of transverse cracks in existing HMA pavements), ft/mi
RD = average rut depth, in.

( 4-11 ) ( 4-12 ) ( 4-13 )

where:
SF = site factor
Age = pavement age (years)
FI = freezing index, °F-days
Rain = mean annual rainfall (in.)
P4 = percent subgrade material passing the No. 4 sieve
P200 = percent subgrade material passing the No. 200 sieve

4.6 RIGID PAVEMENT MODEL COEFFICIENTS

The rigid pavement performance prediction models include transverse cracking, joint faulting, and IRI. The impact of the calibration coefficients on the predicted performance specific to each model is discussed in this section.

4.6.1 Transverse Cracking Model

The transverse cracking model was calibrated by adjusting the C4 and C5 coefficients. These coefficients affect the slope and magnitude of the transverse cracking predictions. Figure 4-13 shows the effect of changing these calibration coefficients.

Figure 4-13 Effect of transverse cracking model calibration coefficients: (a) C4; (b) C5

4.6.2 Transverse Joint Faulting Model

The faulting model has eight different calibration coefficients, and these coefficients affect many different aspects of the model. Some of the faulting model equations are different from those previously presented in the MEPDG Manual of Practice ( 3 ). This section outlines the process for calculating transverse joint faulting using inputs from the Pavement-ME. The first step consists of extracting the necessary inputs from the Pavement-ME intermediate files. Figure 4-14 shows the location of the four different inputs described previously.

Figure 4-14 Location of input parameters required for faulting calculations

Figure 4-15 shows a detailed look at the faultgeneral.txt file. This file contains the following information:

PCC thickness
Joint spacing
Dowel diameter
PCC unit weight
Base thickness
Base unit weight
Base friction
Erodibility
Percent passing sieve #200
Number of wet days
Built-in curl
C1, C2, C3, C4, C5, C6, C7, C8
FaultMax0, FR, C12, C34

Figure 4-15 Input parameters for faulting model: (a) original faultgeneral.txt file; (b) annotated faultgeneral.txt file

These input values are used to calculate monthly faulting over the entire pavement design life. The updated faulting model equations are presented below.

Overburden pressure:

( 4-14 )

where:
Ps = overburden pressure on the subgrade
HPCC = PCC thickness (in.)
γPCC = PCC unit weight (lb/ft3)
HBase = base thickness (in.)
γBase = base unit weight (lb/ft3)

Curl: This value is internally calculated.
The first step is to use the inputs to calculate the initial value.

( 4-15 )

where:
δcurl = maximum mean monthly slab corner upward deflection due to temperature curling and moisture warping
FAULTMAX0 = initial maximum faulting (in.)
C12 = calibration coefficient
FR = freezing index
C5 = calibration coefficient
EROD = erodibility factor
P200 = percent subgrade material passing the #200 sieve
WetDays = average annual number of wet days (greater than 0.1 inch rainfall)
Ps = overburden pressure on the subgrade

Initial maximum faulting:

( 4-16 ) ( 4-17 ) ( 4-18 ) ( 4-19 ) ( 4-20 )

Monthly faulting increment:

( 4-21 )

where:
FMAX_MOi = maximum mean transverse joint faulting for month i (in.)
FMAX0 = initial maximum mean transverse joint faulting (in.)
DEmo = differential energy accumulated during month i
LOG_EROD = erodibility defined in Equation (4-17)

Cumulative faulting:

( 4-22 ) ( 4-23 )

where:
Faultingmonthly = mean joint faulting at the end of month m, in.
ΔFaulti = incremental change (monthly) in mean transverse joint faulting during month i, in.
FMAX_MOi = maximum mean transverse joint faulting for month i, in.

The faulting predictions obtained from running the Pavement-ME software can be replicated using Equations (4-14) to (4-23). This method was tested with several pavement sections, and the results showed a perfect match between the Pavement-ME output and the calculated values. Replicating the faulting calculation outside the software is essential in the calibration process: the calculations are used to calibrate all the coefficients simultaneously using various optimization tools and do not require rerunning the software every time a coefficient is changed. This method significantly reduces the time required for local calibration. The sensitivity of all the calibration coefficients was studied and is presented below. The impact of the C1 and C2 calibration coefficients is shown in Figures 4-16 and 4-17. These coefficients directly affect the overall magnitude of the predicted transverse joint faulting. The C3 and C4 calibration coefficients affect the early-age faulting predictions, as observed in Figures 4-18 and 4-19. The C5 and C6 coefficients seemingly have very little impact on the predicted faulting, as shown in Figures 4-20 and 4-21. The C5 coefficient is related to the erodibility factor for a particular pavement section; its impact is not apparent because all other factors are held constant in this case. The C7 calibration coefficient affects the slope of the faulting predictions at later stages in the pavement design life, as shown in Figure 4-22.

Figure 4-16 Impact of C1 on faulting
Figure 4-17 Impact of C2 on faulting
Figure 4-18 Impact of C3 on faulting
Figure 4-19 Impact of C4 on faulting
Figure 4-20 Impact of C5 on faulting
Figure 4-21 Impact of C6 on faulting
Figure 4-22 Impact of C7 on faulting

4.6.3 IRI Model for Rigid Pavements

The rigid pavement IRI model was calibrated after the local calibration of the transverse cracking and faulting models. The IRI model is shown in Equation (4-24), and additional details are given in Chapter 2. The locally calibrated transverse cracking and faulting models were used to predict IRI. The predicted and measured IRI were compared to calculate the SEE and bias, and the local calibration coefficients were adjusted by performing a regression analysis. The global model calibration coefficients were used as seed values in the local calibration.
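Because the faulting and IRI predictions can be reproduced outside the software, the calibration coefficients can be adjusted with a general-purpose optimizer rather than by rerunning the Pavement-ME for every trial value. The sketch below illustrates that workflow under simplifying assumptions: predict_distress is a placeholder for whichever locally coded transfer function is being calibrated (it is not a Pavement-ME API), the global coefficients serve as seed values, and the objective is the sum of squared errors between measured and predicted values.

```python
import numpy as np
from scipy.optimize import minimize

def calibrate_coefficients(predict_distress, measured, seed_coeffs, bounds=None):
    """Adjust calibration coefficients by minimizing the sum of squared errors.

    predict_distress : callable(coeffs) -> array of predicted distress/IRI values
                       (placeholder for a locally coded transfer function)
    measured         : array of measured distress/IRI values (same order and length)
    seed_coeffs      : global calibration coefficients used as seed values
    bounds           : optional (low, high) pairs to keep coefficients in a
                       reasonable range
    """
    measured = np.asarray(measured, dtype=float)

    def sse(coeffs):
        predicted = np.asarray(predict_distress(coeffs), dtype=float)
        return float(np.sum((measured - predicted) ** 2))

    result = minimize(sse, x0=np.asarray(seed_coeffs, dtype=float),
                      bounds=bounds, method="L-BFGS-B")
    return result.x, result.fun  # locally calibrated coefficients and final SSE
```

Bounds can be used to constrain the coefficients to reasonable ranges, as is done for some of the models discussed in Chapter 5.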
It should be noted that 146 the spall ing model (coded in the software) in the global IRI model equation is not the same as reported in the literature . The spalling calculation equation found in the manual of practice ( 3 ) does not accurately represent the internal calculations for spalling in the software . Eq uation ( 4 - 25 ) shows the spalling model as documented in the literature, and Equation ( 4 - 26 ) shows the model used to calculate SCF . Equation ( 4 - 27) shows the correct model used to predict SCF in the software to calculate spalling. The correct model is repor ted in the FHWA study which determined improved performance prediction models for PCC pavement s ( 42 ) . ( 4 - 24 ) ( 4 - 25 ) where: SPALL = Percentage joints spalled (medium - and high - severities). AGE = Pavement age since construction, years. SCF = Scaling factor based on site, design, and climate . ( 4 - 26 ) where; AIR% = PCC air content, percent. AGE = Time since construction, years. PREFORM = 1 if preformed sealant is present; 0 if not. f'c = PCC compressive strength, psi. FTCYC = Average annual number of freeze - thaw cycles. h PCC = PCC slab thickness, in. WC_Ratio = PCC water/cement ratio. ( 4 - 27 ) where; AIR% = PCC air content, percent. AGE = Time since construction, years. PREFORM = 1 if preformed sealant is present; 0 if not. f'c = PCC compressive strength, psi. FTCYC = Average annual number of freeze - thaw cycles. 147 h PCC = PCC slab thickness, in. WC_Ratio = PCC water/cement ratio. 4.7 D ESIGN R ELIABILITY Reliability has b een incorporated in to the Pavement - ME in a consistent and uniform fashion for all pavement types (step 6) . A designer may specify the desired level of reliability for each distress type and smoothness. The level of design reliability could be based on the general consequence of reaching the terminal condition earlier than the design life. Design reliability ( R ) is defined as the probability ( P ) that the predicted distress will be less than the critical distress level over the design period ( 3 ) . The design relibility for a ll distresses can be shown by the following equation: ( 4 - 28 ) Design re liability is defined as follows for smoothness (IRI): ( 4 - 29 ) This mea ns that if 1 0 projects were designed and constructed using a 90 percent design reliability for fatigue cracking, one of those projects , on average, would exceed the threshold or terminal value of fatigue cracking at the end of the design period . T his defin ition deviates from previous versions of the AASHTO 1993 Pavement Design Guide in that it considers multiple predicted distress es and IRI directly in the definition. Design reliability levels may vary by distress type and IRI or may remain constant for eac h. It is recommended, however, that the same reliability be used for all performance indicators ( 3 ) . The designer input s critical or threshold values f or each predicted distress type and IRI. The Pavement - ME procedure pr e d icts the mean distress types and smoothness over the design life of the pavement . Figure 4 - 23 shows an example of average IRI prediction (solid line 148 R = 50 %) . The mean value of distresses or smoothness predicted may represent a 50 percent reliability estimate at the end of the analysis period (i.e., the re is a 50 percent chance that the predicted distress or IRI will be greater than or less than the mean prediction). 
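Numerically, the distress or IRI predicted at a reliability level R is the mean (50 percent) prediction shifted upward by a standard-error multiplier associated with R, as quantified in the following paragraphs. The sketch below is a minimal illustration of that calculation under that assumption; the standard-error value and the multiplier passed in would come from the reliability equations summarized later in this chapter, and the numbers in the usage line are illustrative only.

```python
def prediction_at_reliability(mean_prediction, std_error, z_factor):
    """Distress or IRI at a chosen design reliability level.

    mean_prediction : mean (50 percent reliability) prediction
    std_error       : standard error of the prediction at that mean level
    z_factor        : standard-error multiplier for the chosen reliability
                      (the text quotes 1.15, 1.64, and 1.96 for 75, 90, and
                      95 percent reliability, respectively)
    """
    return mean_prediction + z_factor * std_error

# Example with illustrative numbers: 90 percent reliability IRI
iri_90 = prediction_at_reliability(mean_prediction=95.0, std_error=18.0, z_factor=1.64)
```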
Figure 4 - 23 Design Reliability Concept for Smoothness (IRI) ( 3 ) For all practical purposes, a designer wi ll require a reliability higher than 50 percent.. In fact, the more important the project in terms of consequences of failure, the higher the desired design reliability. For example, the consequence of early failure on an urban freeway is far more importan t than the failure of a farm - to - market roadway. Some agencies typically use the level of truck traffic volume as the parameter for selecting design reliability. The dashed curve in Figure 4 - 23 shows the prediction at a level of reliability, R (e.g., 90 per cent). For the design to be at least 90 percent reliable the dashed curve at reliability R should not cross the IRI at the threshold criteria throughout the design analysis period. If it does, the trial design should be modified to increase the reliability of the design. The reliability of the trial design is dependent on the model prediction error (standard error) of the distress prediction equations, (see Table 4 - 1). In summary, the mean distress or IRI value (50 percent reliability) is increased by the number of standard errors that apply to the reliability level selected. For example, a 75 percent reliability uses a factor of 1.15 times the 149 standard error, a 90 percent reliability uses a factor of 1.64, and a 95 percent reliability uses a value of 1.96. The calculated distresses and IRI are assumed to be approximately normally distributed over the ranges of the distress and IRI that are of interest in the design. As noted above, the standard deviation for each distress type was determined from the model prediction error in the local calibration. Each model was globaly calibrated using LTPP and other field performance data. The prediction error was obtained as the difference of predicted and measured distress for all pavement sections included in the calib ration efforts. This difference, or residual error, contains all available information on the ways in which the prediction model fails to properly explain the observed distress. The standard deviation for the IRI model was determined using a closed form va riance model estimation approach. There are two main methods to determine the design reliability for various performance models in flexible and rigid pavements: 1. Method 1 : Reliability determined based on the relationship between the mean predicted distres s and standard deviation of the measured distress. 2. Method 2 : Reliability based on the standard error determined using the variance approach. 4.7.1 Reliability based on Method 1 The reliability of the alligator cracking model assumes that the expected percentage of fatigue cracking is approximately normally distributed. The likely variation of cracking around the expected level can be defined by the mean predicted cracking and a standard deviation. The standard deviation is a function of the error associated with the predicted cracking and the data used to calibrate the alligator cracking model. The procedure to derive the parameters of the error distribution consists of the following steps: 150 1. Group all the data by the level of predicted cracking. This can be accompl ished by identifying the distribution bins based on the magnitude of the predicted cracking for all data. 2. Group the corresponding measured cracking data in the same distribution bins found in step 1. 3. Compute descriptive statistics for each group of data (i .e. mean and standard deviations of predicted and measured cracking). 4. 
Determine the relationship between the standard error of the measured cracking and predicted cracking. For example, the following equation shows the relationship between measured standar d deviation and the mean ( FC bottom ) predicted alligator cracking. ( 4 - 30 ) 5. Adjust the mean cracking for the desired reliability level by using the following relationship: ( 4 - 31 ) where; = Predicted cracking at reliability = Mean predicted cracking = Standard deviation of cracking = St andard normal deviate The reliability model standard error includes the variation related to the following sources: Errors associated to material characterization parameters assumed or measured for design Errors related to assumed traffic and environment al conditions during the design period 151 Model errors associated with the cracking prediction algorithms and corresponding data used The reliability based on method 1 is used for several other distress prediction models in both flexible and rigid pavements. These performance models include: a. Rutting model for different layers b. Thermal cracking c. Transverse cracking for rigid pavements d. Transverse joint faulting 4.7.2 Reliability based on Method 2 Method 2 is only used for the flexible and rigid pavement IRI. The main reasons for a different method include: (a) availability of a closed - form solution (i.e., the regression model of IRI), and (b) known variances of the different components. In the case of the rigid IRI model, t he development of the global IRI model used a ctual measured data for cracking, spalling and joint faulting. The global model SEE reflects measurement errors associated with the inputs along with model error, replication error and other errors. The reliability of the IRI model requires a variance anal ysis of the individual components. The first order second moment (FOSM) method is used to determine the standard error of the IRI model. The first step is to quantify the IRI model error through calibration. The errors include, input error, measurement err or, pure error and model (la ck of fit) error. The IRI standard error equations for rigid and flexible pavements are shown in Equations (4 - 32) and (4 - 33), respectively. The variance values for the different distresses are directly obtained from the locally calibrated model results. ( 4 - 32 ) where; s e(IRI) = Standard deviation of IRI at the predicted level of mean IRI. 152 Var IRIi = Variance of initial IRI (obtained from LTPP) = 29.16, (in/mi) 2 . Var CRK = Variance of cracking, (percent slabs) 2 . Var Spall = Variance of spalling (obtained from spalling model) = 46.24, (percent joint s) 2 . Var Fault = Variance of faulting, (in/mi) 2 . S e 2 = Variance of overall model error = 745.3 (in/mi) 2 . ( 4 - 33 ) where; s e(IRI) = Standard deviation of IRI at the predicted level of mean IRI. Var IRIi = Variance of initial IRI (obtained from LTPP) 2 Var FC = Variance of fatigue cracking, (% lane area) 2 Var TC = Variance of transverse (thermal) cracking , (ft/mile) 2 . Var RutDepth = Variance of rutting, (in.) 2 . S e 2 = Variance of overall model error 4.7.3 Summary For the globally calibrated performance prediction model, reliability standard error equations were derived based on m ethods 1 and 2 discussed above. The final equations for each model are summarized in Table 4 - 4. 
153 Table 4 - 4 Reliability equations for each distress and smoothness model Pavement Type Pavement performance pred iction model Standard error equation Flexible pavements Alligator cracking Rutting Transverse cracking IRI Estimated internally by the software Rigid pavements Transverse cracking Faulting IRI Initial IRI S e = 5.4 Estimated internally by the software 154 5 - LOC AL CALIBRATION RESUL TS 5.1 I NTRODUCTION The local calibration of the pavement performan ce prediction models is a challenging task that requires a significant amount of preparation. The effectiveness of local calibration depends on the input values and the measured pavement distress and roughness. Chapter 3 documented the project selection an d data collection process and data synthesis for the local calibration of the performance models in Michigan. Chapter 4 presented the local calibration process and techniques used for local calibration in Michigan. This chapter includes the results of the local calibration of the performance prediction models using different data subsets (options) and statistical sampling techniques. The different data subsets are combinations of reconstruct, rehabilitation and LTPP pavement sections. The main objective for considering several options is to determine if the calibration coefficients vary for different dataset options. The use of LTPP sections was only included for the local calibration of rigid pavements due to a limited number of sections available. The data set option combinations are as follows: Option 1: MDOT reconstruct sections only Option 2: MDOT reconstruct and rehabilitation sections combined Option 3: MDOT reconstruct, rehabilitation, and LTPP sections combined Option 4: MDOT rehabilitation sections o nly The performance prediction models were locally calibrated by minimizing the sum of squared error between the measured and predicted distresses by using the following statistical techniques: a. No sampling (include all data) b. Traditional split sampling c. Repe ated split sampling 155 d. Bootstrapping e. Jackknifing f. Bootstrapping validation The different sampling techniques (a - e) were used to determine the best estimate of the local calibration coefficients and the associated standard errors. The use of these techniques i s considered because of data limitations, especially due to limited sample size for rigid pavements, and to utilize a more robust way of quantifying model standard error and bias. The split sample bootstrapping technique (f) was used to validate the bootst rapped local calibrated performance prediction models in the Pavement - ME. The following performance models in the Pavement - ME were locally calibrated for Michigan conditions. Flexible pavements o Fatigue cracking (bottom - up) o Rutting o Transverse (thermal) crac king o IRI Rigid pavements o Transverse cracking o Faulting o IRI The Pavement - ME software was executed using the as - constructed inputs for all the selected pavement sections and the predicted performance was extracted from the output files. The 156 measured and pre dicted distresses over time were compared. These comparisons evaluate the adequacy of global model predictions for the measured distresses on the pavement sections. Generally, the predicted and measured performance should have a one - to - one (45 degree line of equality) relationship in the case of a good match. Otherwise, biased and/or prediction error may exist based on the spread of data around the line of equality. 
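The bias and standard error of the estimate (SEE) referred to throughout this chapter are simple functions of the paired measured and predicted values. The sketch below shows one common way to compute them; it assumes bias is the mean prediction error (predicted minus measured, so negative values indicate under-prediction) and that the SEE divides the squared-error sum by the degrees of freedom, where n_coeffs is the number of calibration coefficients (set to zero for a plain root-mean-square error).

```python
import numpy as np

def bias_and_see(measured, predicted, n_coeffs=0):
    """Bias and standard error of the estimate for paired measured/predicted data."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = predicted - measured          # prediction error for each observation

    bias = float(np.mean(err))          # negative bias = consistent under-prediction
    dof = max(len(err) - n_coeffs, 1)   # degrees of freedom
    see = float(np.sqrt(np.sum(err ** 2) / dof))
    return bias, see
```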
As a consequence, local calibration of the model is needed to reduce the bias and standard error between the predicted and measured performance. The above-mentioned steps can be accomplished by performing the following process ( 4 ):

1. Compare the globally calibrated model predictions with the measured performance.
2. Perform hypothesis tests between the measured and predicted performance (see Table 5-1). If any of the null hypotheses are rejected, follow step 3; otherwise, no local calibration is needed.
3. Adjust the local calibration coefficients to minimize the sum of squared errors between the predicted and measured performance, and compare the measured and predicted performance.
4. Perform hypothesis testing again based on the locally calibrated coefficients and determine whether the model accuracy has improved. If not, identify the possible sources of bias (such as outliers in the measured performance data) or improve the accuracy of the input data, and continue the local calibration process until the standard error of the estimate is lower than that of the globally calibrated model.
5. Accept or reject the local calibration coefficients based on the results from step 4.

Table 5-1 Hypothesis tests
Mean difference (paired t-test): H0: (predicted − measured) = 0; H1: (predicted − measured) ≠ 0
Intercept: H0: intercept = 0; H1: intercept ≠ 0
Slope: H0: slope = 1; H1: slope ≠ 1

The local calibration results are presented for both the flexible and rigid pavement performance prediction models and are compared for the different statistical techniques and dataset options mentioned above. The local calibration results for each performance prediction model are summarized to compare the different techniques. The results within each option are presented in the following order:

Global model predictions
Local calibration results for all sampling techniques
Model reliability updates (if applicable)

5.2 LOCAL CALIBRATION OF FLEXIBLE PAVEMENT MODELS

The detailed results for the local calibration of the fatigue cracking, rutting, transverse (thermal) cracking, and IRI models are presented in this section.

5.2.1 Fatigue Cracking Model (Bottom-up)

5.2.1.1 Option 1a: MDOT reconstruct only (measured AC/LC combined)

The bottom-up fatigue cracking (alligator cracking) model was calibrated by adjusting the local calibration coefficients to minimize the error between the predicted and measured fatigue cracking. The fatigue cracking model was calibrated for reconstructed pavement sections only, because the bottom-up cracking model did not predict any fatigue cracking on the rehabilitated pavement sections and, in addition, minimal fatigue cracking was measured on the rehabilitated pavements. The fatigue cracking model was calibrated using two different methods: (a) combined measured alligator and longitudinal (AC/LC) cracking in the wheelpath, and (b) the measured alligator cracking (AC) only. In Option 1a, the measured alligator and longitudinal cracking in the wheelpath were combined because of difficulties in determining whether a measured crack at the pavement surface propagated from the top or the bottom; the guide for local calibration also recommends such a procedure ( 2 ). Option 1b considered only the measured alligator cracking, which corresponds to a specific subset of sections; the number of sections with measured alligator cracking was considerably lower than the number of sections with longitudinal cracking in the wheelpath.
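The three hypothesis tests in Table 5-1 can be reproduced with standard statistical routines. The sketch below is one possible implementation, not the exact routine used in the study: a paired t-test on the prediction error for the mean-difference test, and a linear regression of predicted on measured values whose intercept and slope are tested against 0 and 1, respectively.

```python
import numpy as np
from scipy import stats

def calibration_hypothesis_tests(measured, predicted):
    """P-values for the three tests in Table 5-1 (compare against alpha = 0.05)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(measured)

    # 1) Paired t-test: H0 mean(predicted - measured) = 0
    _, p_mean = stats.ttest_rel(predicted, measured)

    # 2) and 3) Regress predicted on measured, then test intercept = 0 and slope = 1
    reg = stats.linregress(measured, predicted)
    t_intercept = reg.intercept / reg.intercept_stderr
    p_intercept = 2.0 * stats.t.sf(abs(t_intercept), df=n - 2)
    t_slope = (reg.slope - 1.0) / reg.stderr
    p_slope = 2.0 * stats.t.sf(abs(t_slope), df=n - 2)

    return {"mean_difference": float(p_mean),
            "intercept": float(p_intercept),
            "slope": float(p_slope)}
```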
Global Model The first step in the calibration process was to compar e the globally calibrated model predictions to the measured fatigue cracking. The measured and predicted fatigue cracking is shown in Figure 5 - 1. As seen in the figure, the global model under - predicts measured fatigue cracking. The global model SEE was fou nd to be 7.64% and the bias was - 4.19 % further indicating the consistent under prediction of fatigue cracking. The three hypothesis tests (Table 5 - 2 ) reveal that all three tests were rejected and local calibration is needed for this model. The local calib ration using different calibration techniques is discussed next. 159 Figure 5 - 1 Global model measured versus predicted fatigue cracking for Option 1a Table 5 - 2 G lobal model fatigue cracking hypothesis test results Hypothesis test Hypotheses P - value Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.00 Intercept H 0 = intercept = 0 H 1 0.00 Slope H 0 = slope = 1 H 1 0.00 Sampling Technique Results The local calibration was performed for all the pavement sections included in the calibration dataset. The six different sampling techniques were performed to determine the SEE and bias. Numerical op timization techniques were used to minimize the sum of squared errors between the measured and predicted alligator cracking. The results are summarized in Figure 5 - 2. The general trend shows that the local calibration improved the models prediction capabil ities using Michigan data. Overall the SEE and bias reduced for all sampling methods. The differences in magnitude between the sampling techniques did not vary greatly. The SEE after SEE: 7.64 % Bias: - 4.19 % C 1 : 1 C 2 : 1 160 local calibration ranged from 6.3 % to 7.0 % and the bias ranges from - 0. 7 % and - 1.4 %. Overall, the bootstrap validation sampling method produced the lowest SEE and bias compared to all others techniques. Additionally, the validation SEE is similar to the calibration SEE which indicates that the locally calibrated model predi cts alligator cracking reasonably well when using an independent set of pavement sections not included in the calibration dataset. The measured versus predicted alligator cracking for the bootstrapping validation sampling technique is shown in Figure 5.4. The measured versus predicted distress shows a much better prediction compared to the global model. Figure 5 - 2 Standard error for all sampling techniques Option 1a 161 Figure 5 - 3 Bias for all sampling techniques Option 1a Figure 5 - 4 Measured versus predicted after local calibration Bootstrapping Validation Table 5 - 3 summarizes the local calibration coeffi cients and hypothesis testing results. Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted fatigue cracking. As shown in Table 5 - 3 the majority of the sampling methods resulted in p - values less than 0.05 which indicates that there was a significant difference between the measured and predicted fatigue cracking. The bootstrapping validation method SEE: 6.95 % Bias: - 2.00% C1 = 0.50 C2 = 0.56 162 however shows that for the 1000 calibrations, 582 showed that there was no significant differenc e between the two datasets. Additionally, the biggest indicator of model accuracy was to determine if a significant difference exists between the measured and predicted fatigue cracking for the validation sets. 
The results show that there was no significan t difference for the split sampling technique, 503 out of 1000 repeated split sampling validation sections, 82 out of 125 jackknifing sections. A p - value of 0.02 was obtained for the bootstrapping validation technique was only slightly less than the 0.05 t hreshold. Overall, the local calibration improved the model prediction capabilities for Michigan conditions. Table 5 - 3 Local calibration coefficients and hypothesis testing results Sampling Method Calibration Coefficients Hypothesis testing results (t - test) = 0.05 C 1 C 2 Global p - value Local model p - value Validation - p - value Global Model 1.00 1.00 0.00 - - No sampling 0.50 0.56 0.00 0.00 - Split sampling 0.50 0.56 0.00 0.00 0.20 Repeated split sampling 0.50 0.56 0/1000 55/1000 503/1000 Bootstrapping 0.50 0.56 0/1000 84/1000 - Jackknifing 0.50 0.56 0/125 0/125 82/125 Bootstrapping validation 0.50 0.56 0/1000 582/1000 0.02 Model Reliability Updates The standard error of the calibrated fatigue crackin g models were used to establish the relationship between the standard deviation of the measured cracking and mean predicted cracking as explained in Chapter 4. These relationships are used to calculate the cracking for a specific reliability. Table 5 - 4 sum marizes the relations for the options considered for fatigue cracking. 163 Table 5 - 4 Reliability summary for Option 1a Sampling technique Global model equation Local model equation No Sampling Split Sampling Repeated split sampling Bootstrapping 5.2.1.2. Option 1 b MDOT reconstruct only (measured AC only ) Global Model The all igator cracking model was also calibrated using only measured alligator cracking distress indices from the MDOT database. Even though only a limited number of pavement sections showed only alligator cracking the local calibration would still be useful to e nsure that the methods are established for when more data becomes available. The global model under - predicted alligator cracking for Michigan pavement sections included in the calibration dataset as seen in Figure 5 - 5. The global model SEE and bias were 4. 02% and - 2.00% respectively. The hypothesis testing indicates that there is a significant difference between the measured and predicted alligator cracking as shown in Table 5.5. Based on these results, local calibration of the alligator cracking model is r equired. 164 Figure 5 - 5 Global model measured versus predicted alligator cracking for Option 1b Table 5 - 5 Global model fatigue cracking hypothesis test results for Option 1b Hypothesis test Hypotheses P - value Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.00 Intercept H 0 = intercept = 0 H 1 0.00 Slope H 0 = slope = 1 H 1 0.00 Sampling Technique Results The local calibration was performed using similar procedures as discussed for the alligator cracking model which included both longitudinal and alligator cracking distress indices. The results for all sampling techniques are shown in Fig ures 5 - 6 and 5 - 7. The results show that the SEE is reduced after local calibration except for the jackknifing technique. There are several limitations regarding the measured alligator cracking data. 
These limitations include: Minimal time series data for e ach project (less than three data points per pavement section) Less pavement sections compared to Option 1a SEE: 4.02 % Bias: - 2.00% C1: 1.00 C2: 1.00 165 The number of measured data points specifically affects jackknifing validation results more than any of the other sampling techniques. The SEE canno t be calculated for pavement sections which has only 2 data points and since the jackknife consists of using (n - 1) sections for calibration and the remaining one section for validation the calculation is divided by zero. The average value of 8.8 % for vali dation is only based on three pavement sections which had enough measured data. The bias was reduced for all the sampling techniques compared to the global model. The bias ranged from - 0.7% to 1.5% for calibration and - 0.3% to 1.7% for validation. The jack knifing technique showed slightly higher bias compared to all the other techniques. The locally calibrated model was tested using pavement sections which were not included in the calibration dataset. Figure 5 - 8 shows the comparison between the measured and systematic bias or consistent over/under prediction. Figure 5 - 6 Standard error for all sampling techniques O ption 1b 1 66 Figure 5 - 7 Bias for all sampling techniques Option 1b Figure 5 - 8 Measured versus predicted after local calibration Bootstrapping Validation The local calibration coefficients and hypothesis testing results are summarized in Table 5 - 6. The results indicate that there is a significant difference between the measured and predicted cracking for the no sampling, split sampling and jackknifing sampl ing methods. The repeated split sampling and bootstrapping techniques showed that there is no significant difference SEE: 5.10% Bias: - 1.60% C1: 0.67 C2: 0.56 167 between the measured and predicted cracking for 1000 and 780 of the calibrations, respectively. Additionally, 824 of the 1000 validations w ere not significantly different from each other. The bootstrapping validation method showed that 785 out of 1000 calibrations showed no significant difference between the measured and predicted alligator cracking. Furthermore, the validation of the alligat or cracking model using pavement sections not included in the calibration resulted in a p - value of 0.28 which means there is no significant difference between the measured and predicted alligator cracking. Table 5 - 6 Local calibration coefficients and hypothesis testing results Sampling Method Calibration Coefficients Hypothesis testing results (t - test) = 0.05 C 1 C 2 Global p - value Local model p - value Validation - p - value Global Model 1.00 1.00 0.00 - - No sampling 0.68 0.56 0.00 0.00 - Split sampling 0.68 0.56 0.00 0.00 0.20 Repeated split sampling 0.68 0.56 0/1000 1000/1000 824/1000 Bootstrapping 0.67 0.56 0/1000 780/1000 - Jackknifing 0.50 0.56 0/42 0/42 0/42 Bootstrapping validation 0.67 0.56 0/100 0 785/1000 0.28 Model Reliability Updates The standard error of the calibrated fatigue cracking model were used to establish the relationship between the standard deviation of the measured cracking and mean predicted cracking as explained in Chapter 4. T hese relationships are used to calculate the cracking for a specific reliability. Table 5 - 7 summarizes the relations for the sampling techniques considered for Option 1b . 
168 Table 5 - 7 Reliability summary for Op tion 1b Sampling technique Global model equation Local model equation No Sampling Split Sampling Repeated split sampling Bootstrapping 5.2.1.3. Fatigue cracking model calibration observations, contributions, limitations and issues Several observations were made when calibrating the alligator cracking model for Michigan conditions. This section will discuss common observation s, limitations, and issues encountered during the local calibration process. Measured data The fatigue cracking model calibration is dependent on the available measured performance data. The MDOT PMS database showed mostly longitudinal cracking and minimal alligator cracking. The minimal alligator is expected in the early life of the pavement sections. However, even in later stages of the pavement life, minimal alligator cracking was observed. The measurements are made at the surface of the pavement. It is difficult to determine the initiation of a crack when only looking at the pavement surface. Therefore, many of the longitudinal cracking measurements at later stages in a pavement sections life may actually be alligator cracking. In order to account for th is issue, the alligator cracking and longitudinal cracking in the pavement wheelpaths were combined and the alligator cracking model was calibrated. For future calibrations, cores should be taken to determine if a particular crack started at the top or bot tom of the pavement. 169 Constraints on coefficients Constraints were put on the calibration coefficients to ensure that a reasonable calibration is achieved. Strange results were encountered when the model calibration coefficients were unconstrained especial ly when minimizing the sum of squared error between the measured and predicted alligator cracking. These strange results could be attributed to the large difference between the measured and predicted cracking before the calibration. Distributions for repe ated split sampling and bootstrapping The bootstrapping and repeated split sampling distri butions give an indication of the model variability based on the selected samples. The SEE and bias showed an almost normal distribution due to the random sampling wh ereas the C 1 and C 2 coefficients were heavily skewed due to the constraints put on the model as shown in Figure 5 - 9. For the majority of the bootstrapping calibrations the C 1 coefficient was around 0.5 and C 2 was 0.56 for almost 100% of the calibrations. T his is a limitation in the local calibration of the alligator cracking model. Ideally the coefficients should be unconstrained to improve the confidence in the calibration coefficients. 170 Figure 5 - 9 Parameter distributions for bootstrapping sampling technique 5.2.2 Rutting Model As mentioned in Chapter 4, the rutting model in the Pavement - ME was calibrated using two methods: 3. Method 1 : Individual rutting layer calibration (i.e., calibrate the rutting model by changi ng the individual calibration coefficients (HMA, base/subbase, and subgrade) relative to rutting contribution of each layer by using the estimates from the transverse profile analysis). 4. Method 2 : Total rutting calibration (i.e., calibrate the rutting mod el by changing the individual calibration coefficient for each layer simultaneously relative to the total rutting). The PMS data from MDOT provided the measured total rutting for each pavement section. The individual layer rutting was determined by using t he transverse profile analysis results as discussed in Chapter 4. 
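The difference between the two rutting calibration methods is simply the objective that is minimized. The sketch below illustrates the two objectives under the assumption that, for each observation, layer-level "measured" rutting is obtained by multiplying the measured total rut depth by the layer percentages from the transverse profile analysis; predict_layer_rutting is a placeholder for a function that returns the predicted HMA, base/subbase, and subgrade rutting for a given set of coefficients.

```python
import numpy as np

def method1_objective(coeffs, predict_layer_rutting, total_measured, layer_shares):
    """Method 1: match each layer's estimated rutting (layers may also be fit one at a time).

    total_measured : measured total surface rutting per observation (in.)
    layer_shares   : per-observation fractions for HMA, base/subbase, and subgrade
                     from the transverse profile analysis (each row sums to 1.0)
    """
    total_measured = np.asarray(total_measured, dtype=float)
    layer_shares = np.asarray(layer_shares, dtype=float)
    pred = np.asarray(predict_layer_rutting(coeffs), dtype=float)   # shape (n_obs, 3)
    target = total_measured[:, None] * layer_shares                 # estimated layer rutting
    return float(np.sum((pred - target) ** 2))

def method2_objective(coeffs, predict_layer_rutting, total_measured):
    """Method 2: match total rutting only, adjusting all coefficients simultaneously."""
    total_measured = np.asarray(total_measured, dtype=float)
    pred_total = np.asarray(predict_layer_rutting(coeffs), dtype=float).sum(axis=1)
    return float(np.sum((pred_total - total_measured) ** 2))
```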
This section discusses the findings for both Methods for all 171 data options. Only options 1, 2 and 4 were considered for the rutting model local calibration. Option 3 (LTPP pavement sections) was not included because an adequate number of Michigan specific pavement sections were available. 5.2.2.1. Method 1 Option 1 This section summarizes the local calibration results for individual rutting layer calibration using only MDOT reconstruct pavement section s. Global Model The initial step in the process for local calibration was to determine if the rutting model in the Pavement - ME was capable of predicting rutting for Michigan design practices and conditions. Figure 5 - 10 shows the global rutting models (befo re calibration) for the flexible reconstruct pavement sections in Michigan. The results show that the global model significantly over predicts the total rutting for Michigan conditions. The over prediction in the total rutting is mainly contributed by the predicted base/subbase and subgrade rutting. The individual layer contribution to total rutting was estimated by using the transverse profiles for each pavement section, based on the transverse profile analysis presented in Chapter 4. It should be noted th at only total rutting measured on the pavement surface is stored in the MDOT PMS. However, individual layer rutting was estimated based on the relative proportions determined from the transverse profile analyses. The relative proportions were utilized to e stimate the individual layers rutting contribution (i.e., HMA, base/subbase and subgrade layers). Tables 5 - 8 and 5 - 9 summarize the SEE, bias, and hypothesis testing results for the global rutting model. The SEE and bias is determined for each pavement laye r by comparing the measured (obtained from transverse profile analysis results), and predicted HMA/base/subgrade rutting. It is important to understand how the total rutting is calculated based on these individual 172 layers. The results indicate that there is no significant difference between the measured and predicted HMA rutting. The hypothesis tests for base, subgrade and total rutting were all rejected and a significant difference between the measured and predicted rutting was found. Therefore, local calib ration for the rutting model is needed in Michigan. (a) Global model total rutting (b) HMA rutting (c) Base rutting (d) Subgrade rutting Figure 5 - 10 Global rutting model verification (Method 1 Option 1) Table 5 - 8 Global model SEE and bias (Method 1 Option 1) Layer SEE (in.) Bias (in.) HMA rut 0.0786 - 0.0037 Base rut 0.1267 0.1111 Subgrade 0.2242 0.2143 Total rut 0.3431 0.3217 SEE: 0.341 in. Bias: 0.322 in. r1 : 1 s1 :1 sg1 :1 SEE: 0.079 in. Bias: - 0.004 in. SEE: 0.127 in. Bias: 0.111 in. s1 :1 SEE: 0.224 in. Bias: 0.214 in. sg1 :1 173 Table 5 - 9 Global model hypothesis testing results (Method 1 Option 1) Layer t - test p - value Intercept p - value Slope = 1 p - value HMA rut 0.3220 0.0000 0.0000 Base rut 0.0000 0.0000 0.0000 Subgrade 0.0000 0.0000 0.0000 Tota l rut 0.0000 0.0000 0.0000 Sampling Technique Results The local calibration of the rutting model for Method 1 Option 1 was performed to minimize the sum of squared error between the measured and predicted HMA, base, and subgrade rutting. It is not meani ngful to evaluate individual pavement layer rutting by itself because those were only estimated; therefore, the SEE and bias of the locally calibrated model was determined based on the total rutting. 
Figure 5 - 11 and 5 - 12 shows the total rutting SEE and bia s for the different sampling techniques used to calibrate the rutting model. The results indicate that the global model does not provide an adequate prediction of rutting for Michigan pavements. The SEE and bias reduced significantly after local calibratio n. The SEE reduced from 0.34 to 0.09 in. for most of the sampling techniques. The bias showed a similar trend and reduced from around 0.32 to - 0.01 in. There does not seem to be a distinct difference between the various sampling methods. A sufficient numbe r of pavement sections were available for the calibration of the rutting model and it is expected that there should not be a large difference between the various sampling techniques. T he rutting database was the most complete compared to other distresses . Figure 5 - 13 shows the validation results of the bootstrapping validation sampling method. The validation SEE and bias was calculated using pavement sections which were not included in the local calibration. The local calibration improved the rutting predic tions for Michigan conditions compared to the global model (see Figure 5 - 10). 174 Figure 5 - 11 Standard Error for all sampling techniques (Method 1 Option 1) Figure 5 - 12 Bias for all sampling techniques (Method 1 Option 1) 175 Figure 5 - 13 Measured vs. predicted total rutting for model validation (Method 1 Option 1) Hypothesis testing was performed to determine if the re was a significant difference between the measured and predicted rutting. The hypothesis testing results for all the different sampling techniques are summarized in Table 5 - 10. The results indicate that the majority of the sampling techniques still showe d a significant difference between the measured and predicted rutting after local calibration. The results for repeated split sampling, bootstrapping, jackknifing and bootstrapping validation are shown to represent the number of calibrations with a p - value greater than 0.05. As shown in Table 5 - 10, the global model had zero calibrations where the p - value was greater than 0.05. Very few of the local calibrations showed a p - value greater than 0.05. Alternatively, when validation was included in the sampling t echnique, more calibrations showed no significant difference between the measured and predicted rutting. The repeated split sampling method showed that there was no significant difference between the measured and predicted rutting for 512 of the 1000 valid ations sections. Similarly, 105 of the 130 jackknifing calibrations showed no significant difference between the measured and predicted rutting. The most informative results are obtained through the bootstrapping validation. This method splits SEE: 0.077 in. Bias: - 0.016 in. r1 : 0.96 s1 : 0.13 sg1 : 0.04 176 the dataset prior to calibrating the model using bootstrapping. The validation indicates how well the calibrated model predicts rutting for these randomly selected pavements sections not included in the calibration. The results in Table 5 - 10 indicate that there is no significant difference between the measured and predicted rutting with a p - value of 0.06. 
Table 5 - 10 Hypothesis testing results for all sampling techniques (Method 1 Option 1) Sampling Technique Global p - valu e Local - p - value Validation p - value No sampling 0.00 0.00 Split sampling 0.00 0.01 0.00 Repeated split sampling 0/1000 44/1000 512/1000 Bootstrapping 0/1000 27/1000 Jackknifing 0/130 0/130 105/130 Bootstrapping validation 0/1000 219/1000 0.06 Th e local calibration coefficients for the HMA, base and subgrade are shown in Figure 5 - 14. The magnitude of the HMA layer coefficient was adjusted from 1.0 to 0.96 for five of the sampling techniques. The split sampling technique had a value of 0.92. The lo cally calibrated coefficients for base and subgrade rutting were adjusted to 0.12 and 0.04 respectively. These results were expected because the transverse profile analysis indicated that the majority of the rutting for Michigan pavements is attributed to the HMA layer instead of the base or subgrade. 177 Figure 5 - 14 Calibration coefficients (Method 1 Option 1) Model Reliability Updates The standard error of the calibrated rutting models were used to establish t he relationship between the standard deviation of the measured rutting and mean predicted rutting as explained in Chapter 4. These relationships are used to calculate the rutting for a specific reliability. The final S e values for the bootstrapping model a re summarized in Table 5 - 11. Table 5 - 11 Rutting model reliability - Bootstrapping (Method 1 Option 1) Pavement layer Global model equation Local model equation HMA rutting Base rutting Subgrade 5.2.2.2. Method 1 Option 2 Global Model The next calibration consisted of using both newly reconstructed pavemen t sections and rehabilitated pavement sections. The rutting model predictions for both reconstruct ed and 178 rehabilitated pavements are similar. The global model was evaluated to determine if local calibration is necessary. The measured versus predicted rutti ng for all the pavement sections in Option 2 is shown in Figure 5 - 15. The global model over - predicts rutting for the majority of the sections similar to Option 1 discussed above. The total rutting SEE and bias was 0.357 in. and 0.324 in. respectively. The figures also show that the global HMA model predicted rutting well for Michigan pavement sections. Alternatively, the global rutting model over - predicts base and subgrade rutting and requires local calibration. Table 5 - 12 summarizes the hypothesis testing results for the global rutting model. The results indicate that there was no significant difference between the measured and predicted HMA rutting. The hypothesis tests for base, subgrade and total rutting were all rejected and a significant difference bet ween the measured and predicted rutting was found. Therefore, local calibration for the rutting model is needed using the pavements sections included in Option 2. Table 5 - 12 Global rutting model hypothesis tes ting results (Method 1 Option 2) HMA layer t - test p - value Intercept p - value Slope = 1 p - value AC rut 0.551 0.000 0.000 Base rut 0.000 0.000 0.000 Subgrade 0.000 0.000 0.000 Total rut 0.000 0.000 0.000 179 (a) Global model total rutting (b) HMA rut ting (c) Base rutting (d) Subgrade rutting Figure 5 - 15 Global rutting model verification (Method 1 Option 2) Sampling technique results The local calibration of Method 1 Option 2 was performed similar ly to Option 1. The sum of squared error was minimized between the measured and predicted HMA, base, and subgrade rutting individually. 
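The bootstrapping results reported for each of these options come from repeatedly resampling pavement sections with replacement and recalibrating the model on each resample. The sketch below shows the general pattern, with calibrate standing in for whichever optimization routine fits the model coefficients to a set of sections (a placeholder, not the study's code) and 1000 replicates as used in this study; the spread of the resulting coefficients provides their standard errors.

```python
import numpy as np

def bootstrap_calibration(section_ids, calibrate, n_boot=1000, seed=0):
    """Bootstrap distribution of calibration coefficients over pavement sections.

    section_ids : identifiers of the sections in the calibration dataset
    calibrate   : callable(array_of_section_ids) -> array of fitted coefficients
                  (placeholder for the local calibration routine)
    """
    rng = np.random.default_rng(seed)
    section_ids = np.asarray(section_ids)
    coeff_samples = []
    for _ in range(n_boot):
        # Resample whole pavement sections (not individual yearly observations)
        sample = rng.choice(section_ids, size=len(section_ids), replace=True)
        coeff_samples.append(calibrate(sample))
    coeff_samples = np.asarray(coeff_samples, dtype=float)

    # Bootstrap estimates: mean coefficients and their standard errors
    return coeff_samples.mean(axis=0), coeff_samples.std(axis=0, ddof=1)
```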
The SEE and bias results for all sampling techniques are shown in Figures 5 - 16 and 5 - 17. Similar to Method 1 Option 1, t he locally calibrated model SEE reduced from around 0.36 in. to around 0.09 in. across all sampling techniques. The bias reduced from around 0.32 in. to around 0.02 in. for all sampling techniques. The SEE and bias was similar between the different samplin g techniques. This trend was also seen in Option 1 Method 1. Based on these results, the local calibration improved the rutting prediction capabilities of the SEE: 0.357 in. Bias: 0.324 in. r1 : 1 s1 :1 sg1 :1 SEE: 0.078 in. Bias: - 0.010 in. r1 : 1 SEE: 0.141 in. Bias: 0.116 in. s1 :1 SEE: 0.224 in. Bias: 0.210 in. sg1 :1 180 Pavement - ME software. The validation of the local calibration was performed using pavement sectio ns not included in the calibration. The measured versus predicted rutting after local calibration is shown in Figure 5 - 18. The model SEE and bias for the validation sections are, 0.091 in. and - 0.011 in. respectively. Figure 5 - 16 Standard Error for all sampling techniques (Method 1 Option 2) Figure 5 - 17 Bias for all sampling techniques (Method 1 Option 2) 181 Figure 5 - 18 Measured vs. predicted total rutting for model validation (Method 1 Option 2) Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted rutting. The hypothesis testing results for all the different sampling techniques are summarized in Table 5 - 13. The results indicate that the majority of the sampling techniques showed a significant difference between the measured and predicted rutting after local calibration. These results are similar to Method 1 Option 1. As summarized in Table 5 - 13, the global model had zero calibrations where the p - value was greater than 0.05 which indicates the need for local calibration. After local calibration, very few of the local calibrations showed a p - value g reater than 0.05. Alternatively, when validation was included in the sampling technique, more calibrations showed no significant difference between the measured and predicted rutting. The repeated split sampling method showed that there was no significant difference between the measured and predicted rutting for 370 of the 1000 validations sections. Similarly, 126 of the 162 jackknifing calibrations showed no significant difference between the SEE: 0.091 in. Bias: - 0.011 in. r1 : 0.95 s1 : 0.10 sg1 : 0.04 182 measured and predicted rutting. The most informative results are obtained through the bootstrapping validation. This method splits the dataset prior to calibrating the model using bootstrapping. The validation indicates how well the calibrated model predicts rutting for these randomly selected pavements sections not in cluded in the calibration. The results in Table 5 - 13 indicate that there is no significant difference between the measured and predicted rutting with a p - value of 0.18. Table 5 - 13 Hypothesis testing results fo r all sampling techniques (Method 1 Option 2) Sampling Technique Global p - value Local - p - value Validation p - value No sampling 0.00 0.00 Split sampling 0.00 0.00 0.00 Repeated split sampling 0/1000 0/1000 370/1000 Bootstrapping 0/1000 0/1000 Jackk nifing 0/162 0/162 126/162 Bootstrapping validation 0/1000 2/1000 0.18 The local calibration coefficients for the HMA, base and subgrade are shown in Figure 5 - 19. The magnitude of the HMA layer coefficient was adjusted from 1.0 to 0.94 for four of the s ampling techniques. 
The bootstrapping and bootstrapping validation techniques had a value of 0.95. The locally calibrated coefficients for base and subgrade rutting were adjusted to 0.10 and 0.04 respectively. Similar to Method 1 Option 1, the results were expected because the transverse profile analysis indicated that the majority of the rutting for Michigan pavements is attributed to the HMA layer instead of the base or subgrade. 183 Figure 5 - 19 Calibration co efficients (Method 1 Option 2) Model Reliability updates The standard error of the calibrated models was used to update the reliability equations in the Pavement - ME. Table 5 - 14 summarizes the global model reliability equations as well as the updated local model reliability equations for the Bootstrapping sampling technique. Table 5 - 14 Rutting model reliability - Bootstrapping (Method 1 Option 2) Pavement layer Global model equation Local model equation HMA rut ting Base rutting Subgrade 5.2.2.3. Method 1 Option 4 Global Model The rutting model was calibrate d using only the rehabilitated pavement sections. The global model over predicts total rutting for Michigan rehabilitation pavement sections. The global model SEE and bias were 0.404 in. and 0.332 in. respectively. The total rutting, HMA, 184 base and subgrade rutting comparison between the measured and predicted rutting is shown in Figure 5 - 20. The hypothesis testing results are summarized in Table 5 - 15. The results show that only the HMA rutting showed no significant difference between the measured and predic ted rutting. The base, subgrade and total rutting showed a significant difference. (a) Global model total rutting (b) HMA rutting (c) Base rutting (d) Subgrade rutting Figure 5 - 20 Global rutting model verification (Method 1 Option 4) Table 5 - 15 Global rutting model hypothesis testing results (Method 1 Option 4) HMA layer t - test p - value Intercept p - value Slope = 1 p - value AC rut 0.5589 0.0000 0.0000 Base rut 0.0000 0.0000 0.0005 Subgrade 0.0000 0.0000 0.1369 Total rut 0.0000 0.0000 0.0039 SEE: 0.404 in. Bias: 0.332 in. r1 : 1 s1 :1 sg1 :1 SEE: 0.076 in. Bias : 0.004 in. r1 : 1 SEE: 0.179 in. Bias: 0.132 in. s1 :1 SEE: 0.223 in. Bias: 0.196 in. sg1 :1 185 Sampling technique results The various sampling techniques were used to calibrate the rutting model. The results shown in Figures 5 - 21 and 5 - 22 present the SEE and bias for all sampling techniques. The model prediction capabilities improved after local calibration. The SEE was reduced from around 0.4 in. to around 0.08 in. after local calibration. The bias was reduced from 0.33 in. to around 0.02 in. The values w ere slightly different for the various sampling techniques but showed similar trends and magnitudes. Figure 5 - 23 shows the bootstrap validation measured versus predicted rutting comparison. The number of data points in the figure is the biggest difference between the analysis performed for Options 1 and 2. Figure 5 - 21 Standard Error for all sampling techniques (Method 1 Option 4) 186 Figure 5 - 22 Bias for all samp ling techniques (Method 1 Option 4) Figure 5 - 23 Measured vs. predicted total rutting for model validation (Method 1 Option 4) Hypothesis testing was performed to determine if there was a significant diffe rence between the measured and predicted rutting. The hypothesis testing results for all the different sampling techniques are summarized in Table 5 - 16. 
The results indicate that the majority of the sampling techniques showed a significant difference betwe en the measured and predicted rutting SEE: 0.069 in. Bias: - 0.033 in. r1 : 0.87 s1 :0.07 sg1 :0.02 187 after local calibration. These results are similar to Method 1 Option 1 and Option 2. After local calibration, very few of the sampling techniques showed a p - value greater than 0.05. Alternatively, when validation was performed in the sampling technique, more calibrations showed no significant difference between the measured and predicted rutting. The repeated split sampling method showed that there was no significant difference between the measured and predicted ruttin g for 519 of the 1000 validations sections. Similarly, 24 of the 33 jackknifing calibrations showed no significant difference between the measured and predicted rutting. The bootstrapping validation method also showed that there is a significant difference between the measured and predicted rutting. The local calibration coefficients are shown in Figure 5 - 24. The HMA coefficient ( r1) ranged from 0.87 to 0.90. The base ( s1) coefficient ranged from 0.06 to 0.07 and the subgrade coefficient ( sg1) was 0.02 for the various sampling techniques. Table 5 - 16 Hypothesis testing results for a ll sampling techniques (Method 1 Option 4) Sampling Techniques Global P - value Local - p - value Validation p - value No sampling 0.00 0.00 Split sampling 0.00 0.02 0.77 Repeated split sampling 0/1000 104/1000 519/1000 Bootstrapping 0/1000 63/1000 Jack knifing 0/33 0/33 24/33 Bootstrapping validation 0/1000 89/1000 0.01 188 Figure 5 - 24 Calibration coefficients (Method 1 Option 4) Model Reliability updates The standard error of the calibrated models was used to update the reliability equations in the Pavement - ME. Table 5 - 17 summarizes the global model reliability equations as well as the updated local model reliability equations for the Bootstrapping sampling technique. Table 5 - 17 Rutting model reliability for Option 4 Bootstrap Pavement layer Global model equation Local model equation HMA rutting Base rutting Subgrade 5.2.2.4. Method 2 Option 1 For Method 2, the rutting model was calibrated using the same dataset as for Method 1 Option 1. The calibration coefficients were changed simultaneously to minimize the error between the total measured and predicted rutting without considering the rutting in the individual pavement layers (see details in Chapter 4). The adequacy of the global model was tested to determine if local calibration is necessary. 189 Global Mod el The global rutting model was executed for all the pavement sections. The SEE, bias and hypothesis test were performed to determine how well the model predicts rutting for Michigan pavements. Figure 5 - 25 shows the comparison between measured and predicte d rutting for all layers. The figure indicates that the global rutting model over - predicts measured rutting performance. The global model hypothesis testing results are summarized in Table 5 - 18. The results show that there is a significant difference betwe en the measured and predicted rutting and local calibration is necessary. (a) Global model total rutting (b) HMA rutting (c) Base rutting (d) Subgrade rutting Figure 5 - 25 Global model rutting pr edictions (Method 2 Option 1) SEE: 0.343 in. Bias: 0.322 in. r1 : 1 s1 :1 sg1 :1 SEE: 0.079 in. Bias: - 0.004 in. r1 : 1 SE E: 0.127 in. Bias: 0.111 in. s1 :1 SEE: 0.224 in. Bias: 0.214 in. 
sg1 :1 190 Table 5 - 18 Global model hypothesis testing results (Method 2 Option 1) HMA layer t - test p - value Intercept p - value Slope = 1 p - value AC rut 0.551 0.000 0.000 Base rut 0.000 0.00 0 0.000 Subgrade 0.000 0.000 0.000 Total rut 0.000 0.000 0.000 Sampling technique results The different sampling techniques were used to calibrate the rutting model. The model was calibrated by changing all the coefficients simultaneously to minimize th e error between the total measure d and predicted rutting. Figures 5 - 26 and 5 - 27 shows the SEE and bias for the different sampling techniques. Overall, the SEE and bias were greatly reduced after local calibration for all techniques. The SEE reduced from 0. 34 in. to around 0.08 in. for all sampling techniques. The validation SEE was similar to the calibrated model which shows that the calibrated model was able to predict rutting for pavement sections not included in the calibration set. The model bias was re duced to zero for almost all the sampling techniques. Figure 5 - 28 shows the measured versus predicted rutting from the bootstrapping validation sampling technique. The validation model SEE and bias was 0.086 in. and 0.006 in. respectively. Figure 5 - 26 Standard Error for all sampling techniques (Method 2 Option 1) 191 Figure 5 - 27 Bias for all sampling techniques (Method 2 Option 1) Figure 5 - 28 Measured vs. predicted total rutting for model validation (Method 2 Option 1) Hypothesis testing was performed to determine if there was a statistical significant difference between the measured and predicted rutting. The hy pothesis testing results are summarized in Table 5 - 19. The locally calibrated models showed that there was no significant SEE: 0. 086 in. Bias: 0. 006 in. r1 : 0. 42 s1 :0. 21 sg1 :0. 38 192 difference between the measured and predicted rutting. The obtained p - value was greater than 0.05 for the different sampling technique s. The p - values for repeated split sampling, bootstrapping, jackknifing and bootstrapping validation were greater than 0.05 for most of the calibrations. The final bootstrapping validation p - value was 0.53. The local calibration coefficients are summarized in Figure 5 - 29. The fundamental differences between Method 1 and Method 2 can be seen in the local calibration p - values and the calibration coefficients. The local calibration coefficients do not take into account the individual layer rutting. This may no t present actual field performance. If the individual layer rutting is known, then Method 1 is a better calibration. The HMA base and subgrade coefficients may over or under - predict actual rutting performance. Table 5 - 19 Hypothesis testing results for all sampling techniques (Method 2 Option 1) Sampling Technique Global P - value Local - p - value Validation p - value No sampling 0.00 0.81 Split sampling 0.00 0.73 0.02 Repeated split sampling 0/1000 1000/1000 8 03/1000 Bootstrapping 0/1000 1000/1000 Jackknifing 0/130 71/130 109/130 Bootstrapping validation 0/1000 1000/1000 0.53 193 Figure 5 - 29 Calibration coefficients (Method 2 Option 1) Model Reliability updat es The standard error of the calibrated models was used to update the reliability equations in the Pavement - ME. Table 5 - 20 summarizes the global model reliability equations as well as the updated local model reliability equations for the Bootstrapping samp ling technique. 
Table 5 - 20 Rutting model reliability for Method 2 Option 1 Bootstrap Pavement layer Global model equation Local model equation HMA rutting Base rutting Subgrade 5.2.2.5. Method 2 Option 2 Global Model Similar to Method 2 Option 1, the global rutting model predictions were much greater th an the measured rutting obtained from the field. The results in Figure 5 - 30 show the over - prediction when comparing total rutting. The hypothesis testing results are summarized in Table 5 - 21 and indicates that there is a significant difference between the measured and predicted 194 rutting and local calibration is needed. Table 5 - 21 Global model hypothesis testing results (Method 2 Option 2) HMA layer t - test p - value Intercept p - value Slope = 1 p - value AC rut 0.55 05 0.0000 0.0000 Base rut 0.0000 0.0000 0.0000 Subgrade 0.0000 0.0000 0.0000 Total rut 0.0000 0.0000 0.0000 (a) Global model total rutting (b) HMA rutting (c) Base rutting (d) Subgrade rutting Figure 5 - 30 Global model rutting predictions (Method 2 Option 2) Sampling technique results The different sampling techniques were used to calibrate the rutting model by changing all the coefficients simultaneously. Figures 5 - 31 and 5 - 32 summarize the SEE and bias for the various sampling techniques. The magnitude of the SEE did not vary greatly between the SEE: 0.357 in. Bias: 0.324 in. r1 : 1 s1 :1 sg1 :1 SEE: 0.078 in. Bias: - 0.002 in. r1 : 1 SEE: 0.141 in. Bias: 0.11 6 in. s1 :1 SEE: 0.224 in. Bias: 0.210 in. sg1 :1 195 different sampling techniques. This may be attributed to the number of pavement sections included in the rutting model calibration. The SEE after ca libration was 0.08 in. for all sampling techniques. The validation of the local calibration using pavement sections not included in the local calibration ranged from 0.08 0.10 in. The bias was reduced for all sampling techniques and ranged between 0.00 a nd - 0.02 inches. The bootstrap validation measured versus predicted results are shown in Figure 5 - 33. The model was able to predict rutting much better compared to the global model. Figure 5 - 31 Standard Er ror for all sampling techniques (Method 2 Option 2) Figure 5 - 32 Bias for all sampling techniques (Method 2 Option 2) 196 Figure 5 - 33 Measured vs. predicted tot al rutting for model validation (Method 2 Option 2) The hypothesis testing results for all sampling techniques are summarized in Table 5 - 22. The results indicate that there is no significant difference between the measured and predicted rutting for the maj ority of the sampling techniques. The split sampling technique validation showed that there is still a significant difference between the measured and predicted rutting even though the calibrated model showed no significant difference. The bootstrapping va lidation technique had a final p - value of 0.12 and indicates that the calibrated rutting model predicts rutting adequately for pavement sections not included in the calibration database. Figure 5 - 34 shows the various calibration coefficients for the differ ent sampling techniques. The coefficients varied slightly between the different techniques. Table 5 - 22 Hypothesis testing results for all sampling techniques (Method 2 Option 2) Sampling Technique Global P - va lue Local - p - value Validation p - value No sampling 0.00 0.11 Split sampling 0.00 0.29 0.00 Repeated split sampling 0/1000 998/1000 714/1000 Bootstrapping 0/1000 844/1000 Jackknifing 0/162 1/162 128/162 Bootstrapping validation 0/1000 726/1000 0.1 2 SEE: 0. 085 in. Bias: - 0. 
013 in. r1 : 0. 70 s1 :0. 03 sg1 :0. 28 197 Figure 5 - 34 Calibration coefficients (Method 2 Option 2) Model Reliability updates The standard error of the calibrated models was used to update the reliability equations in the Pavement - ME. Table 5 - 23 summarizes the global model reliability equations as well as the updated local model reliability equations for the Bootstrapping sampling technique. Table 5 - 23 Rutting model reliability for Option 2 Method 2 - Bootstrap Pavement layer Global model equation Local model equation HMA rutting Base rutting Subgrade 5.2.2.6. Method 2 Option 4 Global Model The Pavement - ME was executed to determine how well the global model predicts rutting for the rehabilitation pavement sections in the calibration dataset. The measured versus predicted results are shown in Fig ure 5 - 35. The results show that the global model over - predicts total 198 rutting. The global model SEE and bias are 0.404 in. and 0.332 in. respectively. The hypothesis testing results indicates that there is a significant difference between the measured and p redicted rutting as shown in Table 5 - 24. (a) Global model total rutting (b) HMA rutting (c) Base rutting (d) Subgrade rutting Figure 5 - 35 Global model rutting predictions (Method 2 Option 4) Ta ble 5 - 24 Global model hypothesis testing results (Method 2 Option 4) HMA layer t - test p - value Intercept p - value Slope = 1 p - value AC rut 0.5589 0.0000 0.0000 Base rut 0.0000 0.0000 0.0005 Subgrade 0.0000 0. 0000 0.1369 Total rut 0.0000 0.0000 0.0039 SEE: 0.404 in. Bias: 0.332 in. r1 : 1 s1 :1 sg1 :1 SEE: 0.076 in. Bias: 0.004 in. r1 : 1 SEE: 0.179 in. Bias: 0.132 in. s1 :1 SEE: 0.223 in. Bias: 0.196 in. sg1 :1 199 Sampling technique results The rutting model was calibrated using the various sampling techniques discussed previously. The SEE and bias for all sampling techniques are shown in Figures 5 - 36 and 5 - 37. The loca l calibration reduced the SEE and bias for all sampling techniques. The local model SEE values were approximately 0.08 in. for all sampling techniques. The validation SEE ranged from 0.07 in. to 0.09 in. The bias reduced from 0.33 in. to approximately - 0.0 1 in. after local calibration. The local calibration improved the prediction capabilities of the Pavement - ME for Michigan conditions. The measured versus predicted rutting for the bootstrapping validation technique is shown in Figure 5 - 38. Figure 5 - 36 Standard Error for all sampling techniques (Method 2 Option 4) 200 Figure 5 - 37 Bias for all sampling techniques (Method 2 Option 4) Figure 5 - 38 Measured vs. predicted total rutting for model validation (Method 2 Option 4) The hypothesis testing was performed to determine if there was a significant difference between the measured and predicted rutting after local cal ibration. The results are summarized in Table 5 - 25. The results show that there was no significant difference between the measured and predicted rutting for all sampling techniques except jackknifing. Additionally, the validation sections for repeated spli t sampling, jackknifing and bootstrapping validation resulted in no SEE: 0. 070 in. Bias: 0.002 in. r1 : 0. 87 s1 :0. 01 sg1 :0. 11 201 significant difference between the measured and predicted rutting. This indicates that the model is capable of predicting rutting for pavement sections not included in the calibration data set. The final bootstrapping validation p - value was 0.90. The local calibration coefficients for all sampling techniques are shown in Figure 5 - 39. 
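Because only the first-level multipliers (βr1, βs1, βsg1) are adjusted in these Method 2 calibrations, the idea of changing all coefficients simultaneously against total rutting can be illustrated with a bounded least-squares fit, under the simplifying assumption that each layer's globally predicted rutting scales linearly with its multiplier. The sketch below uses that assumption with hypothetical data; in the actual work the layer predictions were extracted from Pavement-ME runs rather than generated synthetically.

```python
# Minimal sketch of the Method 2 idea: adjust the layer coefficients simultaneously to
# minimize the squared error between measured and predicted TOTAL rutting. Assumes the
# globally predicted rutting of each layer scales linearly with its multiplier.
# All numbers are hypothetical placeholders, not the study data.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_sections = 40
# Globally predicted layer rutting (coefficients = 1.0) for each section, in inches.
rut_hma = rng.uniform(0.05, 0.25, n_sections)
rut_base = rng.uniform(0.05, 0.20, n_sections)
rut_sg = rng.uniform(0.05, 0.20, n_sections)
# Synthetic "measured" total rutting for illustration only.
measured_total = 0.8 * rut_hma + 0.1 * rut_base + 0.05 * rut_sg + rng.normal(0, 0.01, n_sections)

# Design matrix: columns are the global layer predictions; unknowns are the three coefficients.
A = np.column_stack([rut_hma, rut_base, rut_sg])
result = lsq_linear(A, measured_total, bounds=(0.0, 2.0))  # keep coefficients physically reasonable
beta_r1, beta_s1, beta_sg1 = result.x

predicted_total = A @ result.x
bias = np.mean(predicted_total - measured_total)
see = np.sqrt(np.mean((predicted_total - measured_total) ** 2))
print(f"beta_r1={beta_r1:.2f}, beta_s1={beta_s1:.2f}, beta_sg1={beta_sg1:.2f}, SEE={see:.3f}, bias={bias:.3f}")
```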
Table 5 - 25 Hypothesis testing results for all sampling techni ques (Method 2 Option 4) Sampling Technique Global P - value Local - p - value Validation p - value No sampling 0.00 0.12 Split sampling 0.00 0.27 0.02 Repeated split sampling 0/1000 1000/1000 608/1000 Bootstrapping 0/1000 844/1000 Jackknifing 0/33 1/33 23/33 Bootstrapping validation 0/1000 762/1000 0.90 Figure 5 - 39 Calibration coefficients (Method 2 Option 4) Model Reliability updates The standard error of the calibrated models was used to update the reliability equations in the Pavement - ME. Table 5 - 26 summarizes the global model reliability equations as well as the updated local model reliability equations for the Bootstrapping sampling technique. 202 Table 5 - 26 Rutting model reliability for Method 2 Option 4 Bootstrap Pavement layer Global model equation Local model equation HMA rutting Base rutting Subgrade 5.2.2.7. Summary of Rutting Model: Observations, limitations, contributions and future Several observations were made when calibrating the rutting model for Michigan conditions. These observations were true for most of the Methods and Options. This section will discuss common observations, limitations, and issues encountered during the local calibration process. Benefits of bootstrapping and repeated split sampling The results from the different sa mpling techniques for the rutting model were not much different from each other. Each sampling technique has its benefits and drawbacks. The most important aspect of performing repeated split sampling and bootstrapping was to quantify the variability based on the entire calibration dataset. These methods were performed many times to determine a distribution for SEE, bias, and the calibration coefficients. Additionally, a confidence interval was determined to ensure that the coefficients were representative of the dataset and to provide the best estimate for local design practices. An example of the bootstrapping distribution is shown in Figure 5 - 40. The variability of the 1000 bootstraps was captured in the distributions. Each plot shows the mean, median, 2. 5 percentile and 97.5 percentile confidence levels. The bootstrapping technique should be paired with split sampling to include the validation of the model using pavement sections not included in the calibration 203 dataset. This procedure was performed for al l local calibrations as shown above. Figure 5 - 40 Calibration parameter distributions - Bootstrapping Transverse profile analysis assumptions The rutting model calibration used in Method 1 is highly depende nt on the accuracy of 204 the transverse profile analysis. Several assumptions were made in order to use the available transverse profile data. The data was collected from the 2012/2013 survey cycle and was assumed to represent rutting measurements for pavemen ts which were constructed in years prior to 2012. Time series transverse profiles were not available to study the effect of rut propagation over time. The transverse profiles do not provide a magnitude of rut depth for each pavement section. The measured t ransverse profiles were used to estimate where the rutting is coming from in the pavement structure. The percentage of HMA, base, and subgrade rutting was determined for the length of an entire pavement section. The actual measured total rutting obtained f rom the MDOT PMS database was multiplied by the HMA, base and subgrade percentages to determine the individual layer rutting values for each pavement section. 
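The apportioning step described above is straightforward to script. The sketch below multiplies hypothetical measured total rut depths by per-section layer percentages from a transverse profile analysis to obtain the HMA, base, and subgrade rutting values used in the Method 1 calibration; all numbers are placeholders.

```python
# Minimal sketch of the layer apportioning described above: the total rut depth from the
# PMS database is split into HMA, base, and subgrade contributions using the percentages
# estimated from the transverse profile analysis. All values are hypothetical.
import numpy as np

# One row per pavement section: measured total rut (in.) and layer shares from the
# transverse profiles (fractions that sum to 1.0 for each section).
total_rut = np.array([0.28, 0.35, 0.19, 0.42])
layer_share = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],
    [0.80, 0.15, 0.05],
    [0.60, 0.25, 0.15],
])  # columns: HMA, base, subgrade

layer_rut = total_rut[:, None] * layer_share   # inches, per section and layer
for name, col in zip(["HMA", "base", "subgrade"], layer_rut.T):
    print(f"{name} rutting (in.): {np.round(col, 3)}")
```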
For future local calibrations, trenching should be performed on a subset of pavement sections to verify the transverse profile analysis results. If transverse profiles or trenching is not available, then Method 2 should be used. Even though the results may not minimize the error for each individual pavement layer, it is the only other alternative to calibrate the rutting model. This was the reasoning behind calibrating the rutting model using Method 1 and Method 2. Hypothesis testing It was observed that for Method 1, the total rutting hypothesis test p - values were less than 0.05 after local calibrati on which indicates that there is a significant difference between the measured and predicted rutting. However, when the hypothesis tests were performed for the individual pavement layers, the results consistently showed a p - value greater than 0.05 and indi cates that there is not a significant difference between the measured and predicted HMA, base and subgrade rutting. The hypothesis tests were rejected only when the total measured 205 rutting was compared to total predicted rutting. On the other hand, the rev erse was observed for Method 2. The total rutting hypothesis tests indicated that there was no significant difference between the measured and predicted total rutting. Alternatively, the HMA, base and subgrade rutting hypothesis tests consistently indicate d that there was a significant difference between the measured and predicted rutting for individual pavement layers. This result is expected because the models were optimized by changing the calibration coefficients simultaneously without taking into accou nt the individual pavement layers. 5.2.3 Transverse (thermal) Cracking Model The transverse thermal cracking model was calibrated by changing the K coefficient in the Pavement - ME software. Each time the K coefficient is modified, the software needs to be rerun to obtain the ther mal cracking predictions. Since thermal cracking is significantly impacted by the HMA layer characterization, the local calibration was also performed for Level 1 and 3 HMA mixture characteristics. 5.2.3.1. Level 1 HMA layer characterizatio n The L evel 1 analyses for all options are summarized in Tables 5 - 27 and 5 - 28. The calibration coefficient increments were selected based on the literature. The results show that a K value of 0.75 provided the best result for Michigan conditions (i.e., lowest SEE and bias). For Option 1, the SEE reduced from 1343.58 to 753.24 ft/mile and the bias also reduced from 903.06 to - 70.40 ft/mile, when compared with the current global thermal cracking mode. For Option 2, a K = 0.75 combination also yielded the best result s with an SEE and bias of 732.1 and - 73.8 ft/mile. Figures 5 - 41 and 5 - 42 show the measured versus predicted thermal cracking results after local calibration for both Options. Options 3 and 4 were not considered for the level 1 transverse 206 cracking model du e to a limited number of LTPP and MDOT rehabilitation sections with Level 1 HMA data. 
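Because the K coefficient can only be changed inside the software, its calibration amounts to a one-dimensional grid search: rerun the Pavement-ME at each candidate K, tabulate the SEE and bias, and keep the value with the smallest error, as in the tables that follow. A minimal sketch of that bookkeeping is shown below; get_predictions() is a hypothetical stand-in for extracting the software output at a given K, and the measured values are placeholders.

```python
# Minimal sketch of the grid search used for the thermal cracking coefficient K:
# evaluate each candidate K, then pick the one with the smallest error.
import numpy as np

measured = np.array([210.0, 540.0, 1250.0, 90.0, 780.0, 330.0])   # thermal cracking, ft/mile

def get_predictions(k):
    """Hypothetical stand-in: in practice, rerun Pavement-ME with coefficient K = k
    and read back the predicted thermal cracking for every section."""
    base = np.array([260.0, 610.0, 1100.0, 150.0, 900.0, 400.0])
    return base * k

candidates = [0.5, 0.75, 1.0, 1.1, 1.2, 1.3, 1.4, 1.7, 2.0, 2.5]
results = []
for k in candidates:
    predicted = get_predictions(k)
    resid = predicted - measured
    results.append((k, np.sqrt(np.mean(resid ** 2)), resid.mean()))  # (K, SEE, bias)

# Rank by SEE, with |bias| as a tie-breaker; the study weighed both statistics.
best = min(results, key=lambda r: (r[1], abs(r[2])))
print(f"best K = {best[0]}, SEE = {best[1]:.1f} ft/mile, bias = {best[2]:.1f} ft/mile")
```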
Table 5 - 27 Transverse thermal cracking results Option 1 Parameter SEE Bias Global model Level 1 1343.58 903.06 K = 0.5 Level 1 767.05 - 217.64 K = 0.75 Level 1 753.24 - 70.40 K = 1 Level 1 943.39 246.75 K = 1.1 Level 1 1019.15 369.83 K = 1.2 Level 1 1094.84 492.97 K = 1.3 Level 1 1176.40 630.50 K = 1.4 Level 1 1277.75 783.51 K = 1.7 Level 1 1459.76 1109.90 K = 2 Leve l 1 1560.47 1310.64 K = 2.5 Level 1 1692.66 1553.99 (a) Global model (b ) Local model Figure 5 - 41 Option 1 measured versus predicted transverse (thermal) cracking Table 5 - 28 Transverse thermal cracking results Option 2 Parameter SEE Bias Global model Level 1 1306.5 854.7 K = 0.5 Level 1 745.5 - 212.6 K = 0.75 Level 1 732.1 - 73.8 K = 1 Level 1 916.4 225.1 K = 1.1 Level 1 989.9 341.2 K = 1.2 Level 1 1 063.2 457.5 K = 1.3 Level 1 1142.3 588.4 K = 1.4 Level 1 1241.0 736.0 K = 1.7 Level 1 1425.3 1064.9 K = 2 Level 1 1529.4 1271.1 K = 2.5 Level 1 1667.5 1524.3 207 (a) Global model (b) Local model Figure 5 - 42 Option 2 measured versus predicted transverse (thermal) cracking 5.2.3.2. Level 3 HMA layer characterizatio n The Level 3 analysis followed a similar procedure as for Level 1 and the results are summarized in Tables 5 - 29 through 5 - 31. Option 1 showed that K = 3 had the best overall bias, even though the SEE is slightly higher than the global model. Figure 5 - 43 shows the measured versus predicted results for both global and local models. Alternatively, for the Option 2, K = 4 provided the lowest bias compared to all other options. Option 3 was not performed for Level 3 due to a limited number of SPS - 1 sections available. Option 4 only considers rehabilitation pavements and a K = 5 provided the best results. Table 5 - 29 Transverse thermal cracking Level 3 results Option 1 Parameter SEE Bias Global model Level 3 754.6 - 318.5 K = 2 Level 3 785.5 - 249.7 K = 3 Level 3 867.2 - 23.2 K = 4 Level 3 978.2 233.9 K = 5 Level 3 1107.2 494.5 208 (a) Global model (b) Local mo del Figure 5 - 43 Measured versus predicted TC for Option 1 Table 5 - 30 Transverse thermal cracking Level 3 results Option 2 Parameter SEE Bias Global model Lev el 3 945.0 - 489.0 K = 2 Level 3 965.6 - 416.2 K = 3 Level 3 1022.4 - 209.6 K = 4 Level 3 1057.7 35.3 K = 5 Level 3 1121.8 289.6 Table 5 - 31 Transverse thermal cracking Level 3 results Option4 Parameter SE E Bias Global model Level 3 1304.7 - 906.6 K = 2 Level 3 1312.1 - 824.0 K = 3 Level 3 1334.6 - 666.1 K = 4 Level 3 1237.6 - 451.0 K = 5 Level 3 1163.8 - 212.0 5.2.3.3. Reliability for thermal cracking model The standard error of the calibrated thermal cracking mod els were used to establish the relationship between the standard deviation of the measured cracking and mean predicted cracking as explained in Chapter 4. These relationships are used to calculate thermal cracking for a specific reliability. Tables 5 - 32 an d 5 - 33 summarize these relations for the options considered for the thermal cracking models using the no sampling technique. 209 Table 5 - 32 Reliability summary for Level 1 Data set option Global model equation L ocal model equation Option 1 Option 2 Table 5 - 33 Reliability summary for Level 3 Data set option Global model equation Local model equation Option 1 Option 2 Option 4 5.2.4 Flexible Pavement R oughness (IRI) Model The IRI model was calibrated after the local calibration of the fatigue and transverse cracking, and rutting models were completed. These distresses are considered directly in the IRI model along with the site factor. 
The IRI model was calibrated by minimizing the error between the predicted and measured IRI values. The model was calibrated for Option 1, 2 and 4. Option 3 was not included because a sufficient number of Michigan specific pavement sections were available and the LTPP sect ions were not needed to supplement the data set . The results for each data option are summarized below. 5.2.4.1. Option 1 Global Model The global IRI model was executed to determine the adequacy of the model using Michigan specific pavement sections. Option 1 inclu des only flexible reconstruct pavements. Figure 5 - 44 shows the comparison between the measured and predicted IRI. The SEE and bias was 14.826 in/mile and 2.755 in/mile. The results indicate that the model slightly over - predicts 210 IRI for Michigan pavement se ctions. The hypothesis testing results are summarized in Table 5 - 34. All of the hypothesis tests had a p - value less than 0.05 which indicates that there is a significant difference between the measured and predicted IRI and local calibration should be per formed to improve the IRI predictions. Figure 5 - 44 Global IRI model measured versus predicted comparison (Option 1) Table 5 - 34 Hypotheses testing results for the global IRI model (Option 1) Hypothesis test Hypotheses P - value Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.00 Intercept H 0 = intercept = 0 H 1 0.00 Slope H 0 = slope = 1 H 1 0.00 Sampling Technique Results The IRI model was locally calibrated because there was a significant difference between the measur ed and predicted IRI. The model calibration was performed using various sampling techniques. The sampling techniques were compared to each other to study the differences in SEE: 14.826 in/mile Bias: 2.755 in/mile C 1 : 40.00 C 2 : 0.400 C 3 : 0.008 C 4 : 0.015 211 model standard error, bias, and the calibration coefficients. The SEE and bias resu lts are summarized in Figures 5 - 45 and 5 - 46. The SEE was slightly reduced after local calibration for all sampling techniques. Alternatively, the model bias was reduced substantially after local calibration. The global model bias was greater than 2 in/mile for most of the sampling techniques and was reduced to less than 0.8 in/mile. The bootstrapping validation produced slightly higher bias compared to the global model. These results were not expected and may be attributed to the 80/20 split for calibration and validation as the validation sections also showed a higher bias compared to the calibrated model. Additionally, the measured cracking on Michigan PCC pavement sections was quite variable. The one 80/20 split of the dataset is a random selection and co uld be different each time the method is performed. The SEE of the validation sections was much lower than the calibrated model. Figure 5 - 47 shows the comparison between the measured and predicted IRI for the bootstrapping validation sampling technique. Figure 5 - 45 Standard error for all sampling techniques (Option 1) 212 Figure 5 - 46 Bias for all sampling techniques (Option 1) Figure 5 - 47 Measured versus predicted IRI after local calibration (Option 1) Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted IRI. The results are summarized in Table 5 - 3 5. The locally calibrated IRI models consistently showed a p - value greater than 0.05 which indicates that there was no significant difference between the measured and predicted IRI. 
Additionally, the SEE: 10.91 in/mile Bias: 1.00 in/mile C 1 : 54.269 C 2 : 0.391 C 3 : 0.008 C 4 : 0.007 213 sampling techniques which included validation showed p - v alues greater than 0.05. The locally calibrated model is capable of adequately predicting IRI for pavement sections not included in the calibration dataset. Table 5 - 35 IRI model hypothesis testing results (Opt ion 1) Sampling Technique Global P - value Local - p - value Validation p - value No sampling 0.00 0.99 Split sampling 0.01 0.91 0.65 Repeated split sampling 122/1000 1000/1000 702/1000 Bootstrapping 140/1000 995/1000 Jackknifing 0/127 2/127 91/127 Boo tstrapping validation 760/1000 927/1000 0.45 The calibration coefficients for the IRI model are summarized in Table 5 - 36. The C 1 coefficient was higher than the global model for most of the sampling techniques except Jackknifing. The C 2 and C 3 coefficie nts were similar to the global model coefficient and the C 4 coefficient was lower. The reliability of the IRI model is calculated internally and cannot be adjusted outside of the software. Table 5 - 36 IRI model calibration coefficients (Option 1) Sampling Technique C1 C2 C3 C4 Global Model 40.00 0.400 0.008 0.015 No sampling 48.56 0.478 0.006 0.007 Split sampling 52.87 0.354 0.006 0.007 Repeated split sampling 50.82 0.409 0.006 0.007 Bootstrapping 50.37 0.4 10 0.007 0.007 Jackknifing 33.44 0.480 0.006 0.012 Bootstrapping validation 54.27 0.391 0.008 0.007 5.2.4.2. Option 2 Global Model The IRI model was calibrated using reconstruct and rehabilitation sections. The global model predictions are compared to the measur ed IRI. Figure 5 - 48 shows the measured versus 214 predicted IRI for the global model. The SEE and bias was 16.066 and 0.246 in/mile. These results indicate that there is very little bias in the predicted model. The hypothesis test results presented in Table 5 - 37 indicate that there was no significant difference between the measured and predicted IRI. The IRI model was locally calibrated to attempt to reduce the standard error and to improve the hypothesis testing results for slope and intercept. Figure 5 - 48 Global model measured versus predicted IRI (Option 2) Table 5 - 37 Hypothesis testing for global IRI model (Option 2) Hypothesis test Hypotheses P - value Mean dif ference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.74 Intercept H 0 = intercept = 0 H 1 0.00 Slope H 0 = slope = 1 H 1 0.00 Sampling Technique Results The local calibration results for all sampling techniques are summarized in Figure s 5 - 49 and 5 - 50. The results show that the SEE was slightly reduced after local calibration. The validation SEE was similar to the calibration SEE for most sampling techniques. The jackknifing S EE: 16.066 in/mile Bias: 0.246 in/mile C 1 : 40.00 C 2 : 0.400 C 3 : 0.008 C 4 : 0.015 215 and bootstrapping techniques had a slightly higher validation SEE. These result s may be attributed to the splitting of the sample. The (n - 1) jackknife SEE validates only one pavement section which could result in slightly higher SEE and bias values. The bootstrapping validation is dependent on the 80/20 split of the sample. The 20% o f the sections may be underrepresented depending on their cracking, faulting, site factor and spalling predictions. The bias results did not improve for all sampling techniques. The bias was reduced for the no sampling, jackknifing and bootstrapping valida tion techniques. 
The bias slightly increased for the split sampling, repeated split sampling and bootstrapping techniques. It should be noted that the magnitudes of these increases or decreases are very small since there was minimal bias when comparing the global model calibration coefficients with the measured IRI. Figure 5 - 51 shows the validation of the bootstrapping validation sampling technique. The figure shows that the calibrated model can predict IRI for pavements sections not included in the dataset . The higher SEE compared to the calibration SEE is attributed to the few IRI data points which showed higher measured IRI compared to the predicted values. With random sampling, these limitations are expected to occur. Figure 5 - 49 Standard error for all sampling techniques (Option 2) 216 Figure 5 - 50 Bias for all sampling techniques (Option 2) Figure 5 - 51 Local m odel measured versus predicted IRI for bootstrapping validation (Option 2) Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted IRI. The results are summarized in Table 5 - 38. The locally ca librated IRI models consistently showed a p - value greater than 0.05 which indicates that there was no significant difference between the measured and predicted IRI. Additionally, the SEE: 18.21 in/mile Bias: 0.09 in/mile C 1 : 32.30 C 2 : 0.404 C 3 : 0.006 C 4 : 0.016 217 sampling techniques which included validation showed p - values greater tha n 0.05. The locally calibrated model is capable of adequately predicting IRI for pavement sections not included in the calibration dataset. Table 5 - 38 IRI model hypothesis testing results (Option 2) Sampling T echnique Global P - value Local - p - value Validation p - value No sampling 0.74 0.94 Split sampling 0.81 0.77 0.26 Repeated split sampling 974/1000 998/1000 769/1000 Bootstrapping 783/1000 951/1000 Jackknifing 43/167 167/167 122/167 Bootstrapping val idation 675/1000 977/1000 0.96 The calibration coefficients for the IRI model are summarized in Table 5 - 39. The C 1 coefficient was lower than the global model for all of the sampling techniques. The C 2 coefficient was also lower than the global model co efficients except for the bootstrapping validation technique. The C 3 coefficient was slightly lower and the C 4 coefficient was slightly higher. The slight changes in the calibration coefficients were expected because the global model did not actually need local calibration based on the hypothesis testing results. The reliability of the IRI model is calculated internally and cannot be adjusted outside of the software. Table 5 - 3 9 IRI model calibration coefficient s (Option 2) Sampling Technique C1 C2 C3 C4 Global Model 40.00 0.400 0.008 0.015 No sampling 32.06 0.320 0.006 0.018 Split sampling 33.44 0.320 0.006 0.018 Repeated split sampling 32.19 0.343 0.006 0.018 Bootstrapping 31.65 0.361 0.007 0.017 Jackknif ing 32.16 0.320 0.006 0.018 Bootstrapping validation 32.30 0.404 0.006 0.016 218 5.2.4.3. Option 4 Global Model The next calibration used only rehabilitation pavement sections to determine if different calibration coefficients are necessary for different data sets. The global model measured versus predicted IRI is shown in Figure 5 - 52. The SEE and bias for the global model was 20.69 and - 5.90 in/mile. The global model under - predicts measured IRI. Hypothesis testing was performed to determine if there was a significan t difference between the measured and predicted IRI. 
The results are summarized in Table 5 - 40 and show that there is a significant difference based on a p - value less than 0.05. Based on these findings, local calibration of the IRI model is necessary. Fi gure 5 - 52 Global IRI model measured versus predicted comparison (Option 4) Table 5 - 40 Hypotheses testing results for the global IRI model (Option 4) Hypothesis test Hypotheses P - value Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.00 Intercept H 0 = intercept = 0 H 1 0.00 Slope H 0 = slope = 1 H 1 0.00 SEE: 20.69 in/mile Bias: - 5.90 in/mile C 1 : 40.00 C 2 : 0.400 C 3 : 0.008 C 4 : 0.015 219 Sampling Technique Results The local calibration was performed using various sampling techniques. The results of the local ca libration are shown in Figures 5 - 53 and 5 - 54. The SEE and bias was reduced after local calibration. The SEE after local calibration ranged from 17 in/mile to 20 in/mile. The local calibration bias ranged from - 3.79 in/mile to 0.74 in/mile. The validation s ections performed well and showed SEE and bias values similar to or less than the local calibration SEE and bias. The bootstrapping validation section however showed a larger bias. The larger bias can be attributed to the random splitting of the data set. Figure 5 - 55 shows the comparison between the measured and predicted IRI for the bootstrapping validation sampling technique. The figure shows that for the majority of the data points there is a good match between the measured and predicted IRI. A slight ov er - prediction is observed and one data point showed a large over - prediction. This over prediction is most likely the cause of the higher bias value. Figure 5 - 53 Standard error for all sampling techniques (O ption 4) 220 Figure 5 - 54 Bias for all sampling techniques (Option 4) Figure 5 - 55 Measured versus predicted IRI after local calibration (Option 4) Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted IRI. The results are summarized in Table 5 - 41. The locally calibrated IRI models consistently showed a p - value greater than 0.05 which indicates th at there was no significant difference between the measured and predicted IRI. Additionally, the sampling techniques which included validation showed p - values greater than 0.05. In general, SEE: 12.11 in/ mile Bias: 3.61 in/mile C 1 : 24.27 C 2 : 0.161 C 3 : 0.005 C 4 : 0.029 221 the locally calibrated model is capable of adequately predicting I RI for pavement sections not included in the calibration dataset. Table 5 - 41 IRI model hypothesis testing results (Option 4) Sampling Technique Global P - value Local - p - value Validation p - value No sampling 0. 00 0.86 Split sampling 0.03 0.73 0.76 Repeated split sampling 420/1000 1000/1000 771/1000 Bootstrapping 316/1000 1000/1000 Jackknifing 0/40 1/40 28/40 Bootstrapping validation 39/1000 997/1000 0.18 The calibration coefficients for the IRI model are summarized in Table 5 - 42. The C 1 coefficient was lower than the global model for all of the sampling techniques. The C 2 and C 3 coefficients were also lower than the global model coefficients. The C 4 coefficient was higher than the global model. The sl ight changes in the calibration coefficients were expected because the global model did not actually need local calibration based on the hypothesis testing results. The reliability of the IRI model is calculated internally and cannot be adjusted outside of the software. 
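The "x out of 1000" entries reported in tables such as Table 5-41 come from repeating the split-sample exercise many times. The sketch below illustrates the bookkeeping: split the sections at random, recalibrate on one part, run the validation hypothesis test on the other, and record both the pass count and the percentile spread of the fitted coefficient. The data and the single-multiplier calibration inside the loop are hypothetical simplifications of the full transfer-function calibration described earlier.

```python
# Minimal sketch of repeated split sampling: 1000 random 80/20 splits, a simplified
# recalibration on each calibration subset, a paired t-test on each validation subset,
# and summary statistics. All data and the one-coefficient calibration are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 50
measured = rng.uniform(60.0, 140.0, n)                    # e.g., measured IRI, in/mile
global_pred = 1.05 * measured + rng.normal(0.0, 8.0, n)   # biased "global" predictions

n_repeats, passed, coefs = 1000, 0, []
for _ in range(n_repeats):
    idx = rng.permutation(n)
    cal, val = idx[: int(0.8 * n)], idx[int(0.8 * n):]    # 80/20 split
    # Simplified local calibration: one multiplier fitted by least squares on the
    # calibration subset (a stand-in for recalibrating the transfer function).
    coef = np.sum(global_pred[cal] * measured[cal]) / np.sum(global_pred[cal] ** 2)
    coefs.append(coef)
    _, p_val = stats.ttest_rel(coef * global_pred[val], measured[val])
    if p_val > 0.05:                                      # no significant difference
        passed += 1

lo, hi = np.percentile(coefs, [2.5, 97.5])
print(f"validations with no significant difference: {passed}/{n_repeats}")
print(f"coefficient: mean={np.mean(coefs):.3f}, 95% interval=({lo:.3f}, {hi:.3f})")
```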
Table 5-42 IRI model calibration coefficients (Option 4)
Sampling technique         C1      C2      C3      C4
Global model               40.00   0.400   0.008   0.015
No sampling                20.80   0.160   0.005   0.028
Split sampling             20.80   0.160   0.005   0.028
Repeated split sampling    20.84   0.160   0.005   0.027
Bootstrapping              21.26   0.160   0.005   0.027
Jackknifing                33.44   0.320   0.010   0.018
Bootstrapping validation   24.27   0.161   0.005   0.029

5.2.4.4. Summary of IRI local calibration
The local calibration of the IRI model was successful. The hypothesis tests for the bootstrapping validation indicated that there is no significant difference between the measured and predicted IRI for pavement sections not included in the calibration dataset. These results were obtained for all the dataset options.

Benefits of repeated split sampling and bootstrapping
The repeated split sampling, bootstrapping, and bootstrapping validation techniques were performed to quantify the variability associated with the model predictions and parameters. The bootstrapping method consistently showed lower SEE and bias values compared to the no sampling and split sampling techniques. The 1000 bootstrap samples essentially calibrate the model 1000 times with a random combination of pavement sections each time. Distributions of the calibration parameters were extracted to study the variability of each parameter. Figure 5-56 shows an example of the parameter distributions obtained from the bootstrapping validation sampling technique. The SEE and bias follow an approximately normal distribution, and ideally the calibration coefficients should follow a normal distribution as well. However, the calibration coefficients were constrained during the optimization, which produced coefficient values different from those obtained for the global model. Details regarding the selection of the constraints are discussed below. The confidence intervals, mean, and median of the calibration parameters were obtained to study the variability of each parameter. These values improve the confidence that the selected values are the best estimates of the local calibration coefficients given the current set of data.

Figure 5-56 IRI calibration parameter distributions - Bootstrapping Validation

Model Constraints
The IRI model depends on the initial IRI, the fatigue cracking, rutting, and thermal cracking predictions, and the site factor. These distresses were locally calibrated prior to calibrating the IRI model. When the IRI model was calibrated using no constraints on the calibration parameters, some coefficients had a much greater impact on the IRI predictions than expected. As briefly mentioned above, some of the calibration coefficients were therefore constrained to obtain reasonable results. For the IRI model calibration, the upper and lower bounds were selected based on the calibration coefficients obtained by other highway agencies that performed local calibration. The coefficients control the contribution of each distress included in the IRI prediction, and it might be unreasonable for one calibration coefficient to influence the IRI model far more than the others. For example, the site factor should not vary greatly across Michigan because the majority of the climatic stations in the State are not drastically different from one another; yet if the model is calibrated with no constraints, the site factor coefficient may be artificially inflated to compensate for other distresses when the sum of squared error is minimized simultaneously for all calibration coefficients.
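The constrained optimization described above can be illustrated with a bounded least-squares fit. The sketch below assumes the familiar linear form of the flexible-pavement IRI transfer function (initial IRI plus weighted rutting, cracking, and site-factor terms) and fits C1 through C4 subject to upper and lower bounds; the bounds shown and all data values are illustrative assumptions rather than the values used in the study.

```python
# Minimal sketch of a constrained IRI calibration: fit C1-C4 by minimizing the sum of
# squared error with upper and lower bounds on each coefficient. Data and bounds are
# hypothetical placeholders.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)
n = 45
iri0 = rng.uniform(55.0, 70.0, n)            # initial IRI, in/mile
rut = rng.uniform(0.05, 0.40, n)             # rut depth, in.
fatigue = rng.uniform(0.0, 25.0, n)          # fatigue cracking, percent area
thermal = rng.uniform(0.0, 1500.0, n)        # transverse cracking, ft/mile
site = rng.uniform(5.0, 25.0, n)             # site factor
measured_iri = (iri0 + 45.0 * rut + 0.35 * fatigue + 0.007 * thermal + 0.010 * site
                + rng.normal(0.0, 5.0, n))   # synthetic "measured" IRI

# Regress (measured IRI - initial IRI) on the distress terms, with bounds that keep each
# coefficient near a plausible range (illustrative only).
A = np.column_stack([rut, fatigue, thermal, site])
y = measured_iri - iri0
lower = [10.0, 0.1, 0.001, 0.005]
upper = [80.0, 1.0, 0.020, 0.030]
fit = lsq_linear(A, y, bounds=(lower, upper))
c1, c2, c3, c4 = fit.x
print(f"C1={c1:.2f}, C2={c2:.3f}, C3={c3:.4f}, C4={c4:.3f}")
```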
225 5.3 L OCAL C ALIBRATION OF R IGID P AVEMENT M ODEL 5.3.1 Transverse Cracking Model The transverse cracking model calibration discussion is presented in this section. The model was calibrated for all four options discussed previously. Minim al pavement sections were available for rigid pavements, therefore, some SPS - 2 sections from the LTPP database is included to study if these sections improve the local calibration for Michigan. 5.3.1.1. Option 1 Global Model The globally calibrated transverse crack ing model was verified by comparing the predicted and measured cracking. The model adequacy was tested by comparing the standard error of the estimate (SEE) and bias of the global model and by performing the three hypothesis tests mentioned before. The fir st hypothesis test determines if there was a statistically significant difference between the predicted and measured cracking. The second and third hypothesis tests indicates if the intercept and slope of the linear line between measured and predicted perf ormance is similar to zero and one, respectively. A zero intercept and slope of one indicate that no bias exists between the predicted and measured performance. Figure 5 - 57 shows the comparison between the measured and predicted transverse cracking using t he global cracking model. Based on the results, the global cracking model significantly under - predicts measured cracking. The hypothesis tests revealed that there is a significant difference between the predicted and measured transverse cracking (see Table 5 - 43 ). While the intercept of the global model was not significantly different than zero before calibration, the slope of regression line was significantly different than one. Since two of the three hypothesis tests were rejected the transverse cracking m odel needs calibration. 226 Figure 5 - 57 Global model comparison between measured and predicted transverse cracking (Option 1) Table 5 - 43 Global model hypothesis t esting results (Option 1) Hypothesis test Hypotheses P - value Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.00 Intercept H 0 = intercept = 0 H 1 0.36 Slope H 0 = slope = 1 H 1 0.00 Sampling Technique Results The transverse cracking model was calibrated by minimizing the error between the predicted and me asured cracking. The model was calibrated for various sampling techniques. Figure 5 - 58 summarizes the SEE for each sampling technique. Overall, the SEE reduced after local calibration for all sampling techniques. The bootstrapping technique showed the lowe st SEE compared to all others. These results are expected because the bootstrapping technique uses random sampling with replacement that can capture the variability within the full dataset. The SEE: 21.10 % slabs cracked Bias: - 11.86 % slabs cracked C 4 : 1.00 C 5 : - 1.98 227 bootstrapping validation only showed a slightly higher SEE com pared to the bootstrapping technique. The bias was significantly reduces after local calibration for all sampling techniques as summarized in Figure 5 - 59. Based on the results, the validation section bias is much greater than the calibration sections but s till less than the global model. Figure 5 - 60 shows the comparison between the measured and predicted transverse cracking for the bootstrapping validation sections. The figure indicates the local calibration coefficients can predicted transverse cracking fo r pavement sections not included in the dataset and that a consistent over or under - prediction is not observed. 
The bootstrapping validation technique is preferred because it includes validation of the calibration coefficients using pavement sections not i ncluded in the calibration dataset. Figure 5 - 58 Summary of standard error for all sampling techniques (Option 1) 228 Figure 5 - 59 Summary of bias for all sampl ing techniques (Option 1) Figure 5 - 60 Comparison between measured and predicted transverse cracking after local calibration (Option 1) Hypothesis testing was performed to determine if there is a statistica lly significant difference between the measured and predicted transverse cracking. The results are summarized in Table 5 - 44. Overall, the local calibration p - values were greater than 0.05 which indicates that there was not a significant difference between the measured and predicted transverse cracking. For the validation sections, the results were more variable. A significant difference was observed SEE: 14.00 % slabs cracked Bias: - 3.89 % slabs cracked C 4 : 0.268 C 5 : - 1.644 229 for the split sampling technique. This result was expected because the split sampling technique is very sens itive to the number of pavement sections included in the local calibration. Additionally, the split sampling is only performed once. If the technique is performed another time, different results will be obtained. Due to this limitation, the repeated split sampling was performed 1000 times. After local calibration, the results for the repeated split sampling validation sections showed that 678 of the 1000 split samples showed no significant difference between the measured and predicted cracking. The jackknif ing technique showed that 10 of 18 of the validation sections showed no significant difference. Ultimately, the bootstrap validation should provide the best estimate of the calibration coefficients and the validation using pavement sections not included in the dataset. The hypothesis testing resulted in a p - value of 0.33 which indicates that there is no significant difference between the measured and predicted transverse cracking. The local calibration coefficients obtained for each sampling technique is su mmarized in Table 5 - 45. The C 4 calibration coefficients were less than the global coefficients and the C 5 coefficient was greater than the global coefficient. The calibration coefficients did not vary greatly between the different sampling techniques. Tab le 5 - 44 Hypothesis testing results (Option 1) Sampling Technique Global p - value Local - p - value Validation p - value No sampling 0.00 0.92 Split sampling 0.00 0.94 0.00 Repeated split sampling 0/1000 1000/1 000 678/1000 Bootstrapping 0/1000 998/1000 Jackknifing 0/18 18/18 10/18 Bootstrapping validation 0/1000 1000/1000 0.33 230 Table 5 - 45 Calibration coefficients (Option 1) Sampling Technique C4 C5 Global Model 1.00 - 1.980 No sampling 0.27 - 1.560 Split sampling 0.35 - 1.280 Repeated split sampling 0.26 - 1.630 Bootstrapping 0.25 - 1.710 Jackknifing 0.26 - 1.591 Bootstrapping validation 0.27 - 1.644 Model Reliability Updates The standard error of the cali brated cracking models were used to establish the relationship between the standard deviation of the measured cracking and mean predicted cracking as explained in Chapter 4. These relationships are used to calculate the cracking for a specific reliability. Table 5 - 46 summarizes the updated standard error for the different sampling techniques. 
Table 5 - 46 Transverse cracking reliability Option 1 Sampling technique Global model equation Local model equation No Sampling Split Sampling Repeated split sampling Bootstrapping 231 5.3.1.2. Option 2, 3 and 4 The results for Options 2, 3 an d 4 are summarized next. The same procedure as Option 1 was used to calibrate the transverse cracking model. The global model predictions are discussed first. Global Model The Pavement - ME was executed to predict transverse cracking. The predicted transvers e cracking was compared to the measured cracking obtained in the field. The comparison between the measured and predicted transverse cracking for Option 2 is shown in Figure 5 - 61. The results show that the global model under - predicts transverse cracking fo r the pavement sections included in Option 2. The global model SEE and bias was 14.3 and - 5.83 % slabs cracked. The fatigue damage curve is also presented in Figure 5 - 61(b). The curve also shows that the model under - predicts the fatigue damage correspondin g to the measured cracking values. (a) Measured versus predicted (b) Fatigue damage Figure 5 - 61 Option 2 global model comparisons The comparison between the measured and predicted transverse cracking for Opti on 3 is shown in Figure 5 - 62. The results show that the global model over and under - predicts transverse SEE: 14.3 % slabs cracked Bias: - 5. 83 % slabs cracked C 4 : 1.00 C 5 : - 1.98 232 cracking for the pavement sections included in Option 3. The SPS - 2 sections were included in Option 3. It was observed that the global model over - predic ted transverse cracking for the SPS - 2 sections and under - predicted cracking for the MDOT pavement sections. The global model SEE and bias was 16.94 and - 4.46 % slabs cracked. The fatigue damage curve is also presented in Figure 5 - 62(b). (a) Measured versus predicted (b) Fatigue damage Figure 5 - 62 Option 3 global model comparisons The comparison between the measured and predicted transverse cracking for Option 4 is shown in Figure 5 - 63. The results show that th e global model over and under - predicts transverse cracking for the pavement sections included in Option 4. Only unbonded overlay pavement sections were included in Option 4. The global model SEE and bias was 0.92 and - 0.55 % slabs cracked. The fatigue dama ge curve is also presented in Figure 5 - 64(b). It should be noted that for Option 4, there were very few pavement sections available. Additionally, these sections showed minimal transverse cracking and low fatigue damage as seen in Figure 5 - 64. SEE: 16.94 % slabs cracked Bias: - 4.46 % slabs cracked C 4 : 1.00 C 5 : - 1.98 233 (a) Measure d versus predicted (b) Fatigue damage Figure 5 - 63 Option 4 global model comparisons Hypothesis testing was performed to determine if there was a statistically significant difference between the measured and pr edicted transverse cracking for the different datasets. Table 5 - 47 summarizes the hypothesis testing results for the global model comparisons of Options 2, 3 and 4. The results show that a significant difference was observed between the measured and predic ted transverse cracking for all three options. The intercept and slope hypothesis testing results also indicate that the slope was significantly different than one for all options. The intercept was significantly different than zero for all options except Option 2. 
Based on these results, local calibration of the transverse cracking model is needed to improve the prediction capabilities of the Pavement - ME design software for Michigan conditions. Table 5 - 47 Glob al model hypothesis testing results (Options 2, 3, 4) Hypothesis test Hypotheses p - value Option 2 Option 3 Option 4 Mean difference (paired t - test) H 0 = (predicted measured) = 0 H 1 = (predicted 0.00 0.00 0.00 Intercept H 0 = intercept = 0 H 1 0.42 0.04 0.00 Slope H 0 = slope = 1 H 1 0.00 0.00 0.00 SEE: 0.92 % slabs cracked Bias: - 0.55 % slabs cracked C 4 : 1.00 C 5 : - 1.98 234 Sampling Technique Results The transverse cracking model was calibrated by minimizing the erro r between the predicted and measured cracking. The transvers cracking model was calibrated using various sampling techniques for each dataset option. The standard error and bias results are summarized in Figures 5 - 64 and 5 - 65. The different sampling techni ques and options are compared with one another. The local calibration improved the model prediction capabilities based on the reduced SEE and bias across all sampling techniques and dataset options. Option 4 showed the lowest global model and local model S EE. These results are misleading due to the small sample size and the minimal measured cracking. Option 2 provided the next best results comparing the SEE and bias after local calibration. Option 3 showed higher SEE and bias compared to Option 2 but lower than Option 1. Overall, the bootstrapping validation technique provides the lowest SEE and bias and gives the best estimate of the local calibration coefficients because it incorporates both bootstrapping calibration and validation using pavement sections not included in the dataset. Figure 5 - 66 shows the comparison between the measured and predicted transverse cracking for the bootstrapping validation sections. The results show that the model over and under - predicts transverse cracking for the validation sections. These results were expected as model predictions because no consistent over or under - prediction was observed. Option 3 showed a much higher SEE value due to the LTPP sections which are included in the dataset. Alternatively, Option 4 showed the lowest SEE and bias but very few pavement sections were included in the dataset and minimal transverse cracking was observed. 235 (a) No sampling (b) Split sampling (c) Re peated split sampling (d) Bootstrapping (e) Jackknifing (f) Bootstrapping validation Figure 5 - 64 Standard error results for all options and sampling techniques 236 (a) No sampling (b) Split sampling (c) Repeated split sa mpling (d) Bootstrapping (e) Jackknifing (f) Bootstrapping validation Figure 5 - 65 Bias results for all options and sampling techniques 237 (a) Option 2 (b) Option 3 (c) Option 4 Figure 5 - 66 Bootstrapping validation for all dataset options Hypothesis testing was performed to determine if there is a statistically significant difference between the measured and predicted transverse cracking. The results are summarized in Table 5 - 48. Generally the local calibration greatly improved the prediction capabilities of the Pavement - ME design software. The results for Option 2 indicate that there was no significant difference between the measured and predicted trans verse cracking for all sampling techniques. Only half of the jackknifing pavement sections showed a significant difference. 
The validation of the calibration SEE: 4.17 % slabs cracked Bias: 1.31% slabs cracked C 4 : 0.241 C 5 : - 1.748 SEE: 20.51% slabs cracked Bias: - 2.49 % slabs cracked C 4 : 0.228 C 5 : - 1.801 SEE: 1.01 % slabs cracked Bias: - 0.25 % slabs cracked C 4 : 4.938 C 5 : - 0.964 238 coefficients using pavement sections not included in the dataset also showed that there was no sig nificant difference between the measured and predicted transverse cracking and indicates that the locally calibrated transverse cracking predicts well for Michigan conditions. Most of the sampling techniques for Option 3 showed no significant difference af ter local calibration. The bootstrapping technique showed that 759 of the 1000 bootstraps had no significant difference between the measured and predicted cracking. The jackknifing technique only had 1 pavement section which showed no significant differenc e after local calibration. Alternatively, 20 out of the 39 pavement sections showed no significant difference for the validation of the jackknifing technique. Overall, the bootstrapping validation showed that there was no significant difference between the measured and predicted cracking with a p - value of 0.55. Option 4 also showed that the local calibration improved the prediction capabilities of the Pavement - ME transverse cracking model. All of the local calibration techniques showed a p - value greater th an 0.05 which indicates that there is no significant difference between the measured and predicted transverse cracking. The bootstrapping validation p - value was 0.30 and shows that the model can predict transverse cracking well for pavement sections not in cluded in the local calibration. 239 Table 5 - 48 Hypothesis testing results for transverse cracking local calibration Option 2 Sampling Technique Global p - value Local - p - value Validation p - value No sampling 0 0.63 - Split sampling 0 0.94 0.62 Repeated split sampling 0/1000 1000/1000 620/1000 Bootstrapping 0/1000 999/1000 - Jackknifing 0/31 16/31 19/31 Bootstrapping validation 0/1000 1000/1000 0.11 Option 3 No sampling 0.00 0.11 Split sampling 0.00 0.34 0.00 Repeated split sampling 203/1000 998/1000 613/1000 Bootstrapping 191/1000 759/1000 Jackknifing 0/39 1/39 20/39 Bootstrapping validation 0/1000 997/1000 0.55 Option 4 No sampling 0.00 0.65 Split sampling 0.00 0.63 0.16 Repeated split samplin g 0/1000 1000/1000 493/1000 Bootstrapping 0/1000 994/1000 Jackknifing 0/13 13/13 8/13 Bootstrapping validation 0/1000 1000/1000 0.30 The local calibration coefficients for the various dataset options and sampling techniques are summarized in Figures 5 - 67 and 5 - 68. The calibration coefficients are fairly similar between Options 1 and 2 for all sampling techniques. Alternatively, the coefficients for option 3 and 4 are much greater for C 4 and C 5 compared to Options 1 and 2. The Option 4 coefficients wa s expected to be different because of the low measured distress observed in the unbonded overlay pavement sections. 240 Figure 5 - 67 C 4 local calibration coefficient for all options and sampling techniques Fi gure 5 - 68 C 5 local calibration coefficient for all options and sampling techniques Model Reliability Updates The standard error of the calibrated cracking models were used to establish the relationship betwe en the standard deviation of the measured cracking and mean predicted cracking as explained in Chapter 4. These relationships are used to calculate the cracking for a specific reliability. 
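The standard-deviation relationships referred to here can be refit from the calibration data by grouping the sections on mean predicted cracking and fitting a curve through the group standard deviations. The sketch below is an illustration only: the power-type form, the binning, and the data are assumptions, and the actual functional form follows the procedure in Chapter 4.

```python
# Minimal sketch of refitting a reliability (standard error) relationship after local
# calibration: bin the sections by predicted cracking, compute the standard deviation of
# the measured cracking in each bin, and fit a power-type curve. Values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(11)
predicted = rng.uniform(0.5, 40.0, 120)                        # percent slabs cracked
measured = predicted + rng.normal(0.0, 1.0 + 0.4 * predicted)  # spread grows with level

bins = np.percentile(predicted, [0, 20, 40, 60, 80, 100])
mean_pred, std_meas = [], []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (predicted >= lo) & (predicted <= hi)
    mean_pred.append(predicted[mask].mean())
    std_meas.append(measured[mask].std(ddof=1))

def power_model(x, a, b, c):
    # Assumed functional form: std(measured) = a * predicted^b + c
    return a * np.power(x, b) + c

params, _ = curve_fit(power_model, mean_pred, std_meas, p0=[1.0, 0.5, 1.0], maxfev=10000)
a, b, c = params
print(f"std(measured) ~= {a:.3f} * predicted^{b:.3f} + {c:.3f}")
```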
Tables 5-49 through 5-51 summarize these relationships for the dataset options considered for the transverse cracking model.

Table 5-49 Transverse cracking reliability - Option 2
Table 5-50 Transverse cracking reliability - Option 3
Table 5-51 Transverse cracking reliability - Option 4
(Each table lists the global and updated local model reliability equations for the no sampling, split sampling, repeated split sampling, and bootstrapping techniques; the equations are not shown.)

5.3.2 Faulting Model
The local calibration of the faulting model was performed using two methods. The first method calibrated the model by changing only the C1 coefficient. The second method calibrated seven of the eight calibration coefficients in the faulting model.

5.3.2.1. Method 1
The first method was used because the predicted faulting could not be calculated outside of the software. Only C1 was changed, since every time a coefficient was adjusted the software had to be executed to extract the predicted faulting. This was a very time-consuming task, and because C1 directly affects the magnitude of faulting, it was decided to change only this coefficient. The results of the C1 calibration are summarized next.

The local calibration results for the faulting model are shown in Table 5-52. C1 = 0.4 gives the lowest SEE and bias for all options except Option 4, for which a C1 value of 0.85 gives the lowest SEE and bias. Option 4 is not recommended due to the limited number of unbonded overlay pavement sections and the low magnitude of measured faulting.

Table 5-52 Summary of Method 1 local calibration - faulting model (SEE and bias in inches)
Parameter       Option 1         Option 2         Option 3         Option 4
                SEE     Bias     SEE     Bias     SEE     Bias     SEE     Bias
Global model    0.059   0.035    0.051   0.026    0.049   0.023    0.005   -0.002
C1 = 0.4        0.024   0.007    0.021   0.004    0.022   0.002    0.005   -0.003
C1 = 0.5        0.029   0.011    0.025   0.008    0.025   0.005    0.005   -0.002
C1 = 0.6        0.034   0.015    0.029   0.011    0.029   0.008    0.005   -0.002
C1 = 0.65       0.036   0.017    0.032   0.013    0.031   0.010    0.005   -0.002
C1 = 0.7        0.039   0.020    0.034   0.015    0.033   0.012    0.005   -0.002
C1 = 0.75       0.042   0.022    0.037   0.016    0.035   0.014    0.005   -0.002
C1 = 0.8        0.045   0.025    0.039   0.018    0.038   0.015    0.004   -0.002
C1 = 0.85       0.048   0.027    0.042   0.020    0.040   0.017    0.004   -0.001
C1 = 0.9        0.045   0.025    0.039   0.018    0.038   0.015    0.004   -0.002

Reliability for faulting model
The standard errors of the calibrated faulting models were used to establish the relationship between the standard deviation of the measured faulting and the mean predicted faulting, as explained in Chapter 4. These relationships are used to calculate faulting at a specific reliability. Table 5-53 summarizes these relationships for the options considered for the faulting model. It should be noted that changes to the faulting model reliability are not correctly accounted for in the predictions; therefore, the global model reliability standard error is recommended until this issue is resolved in a future software update.

Table 5-53 Faulting model reliability
Data option: Option 1; Option 2; Option 3; Option 4
(Global and local model reliability equations not shown.)

5.3.2.2.
5.3.2.2. Method 2 - Genetic Algorithm

As mentioned in Chapter 4, the faulting model calibration requires that seven calibration coefficients be changed simultaneously to minimize the sum of squared error between the measured and predicted faulting. As discussed above, the initial faulting model local calibration for Michigan conditions adjusted only the C1 coefficient (43) because the predicted faulting from the Pavement-ME could not be replicated outside of the software. Additional efforts were taken to determine how the faulting model predictions could be replicated, and the findings were discussed above. This section details the methods developed to locally calibrate the faulting model coefficients simultaneously using Michigan data. Three calibration sampling methods were used to compare the calibration coefficients, model standard error, and bias:

No sampling (use the entire dataset)
Split sampling (70% for calibration, 30% for validation)
Bootstrapping validation (80% for calibration, 20% for validation)

Several methods are available to adjust the local calibration coefficients of the faulting model. The first and most time-consuming method consists of changing the calibration coefficients in the Pavement-ME software and running the software every time. This method does not provide the best estimate of the calibration coefficients because the ranges of coefficients may not be fine enough for a particular calibration coefficient and the interactive effects may not be captured. The second method consists of coding the faulting prediction equations outside of the Pavement-ME and using statistical software such as SAS to numerically optimize the model by changing the calibration coefficients. Such a calibration was performed when the global model calibration coefficients were updated to reflect changes in the measurement of the coefficient of thermal expansion (CTE) (18; 44). The third method consists of using the genetic algorithm (GA) optimization technique available in the MATLAB software (45). In general, the GA can solve constrained and unconstrained optimization problems based on processes from biological evolution. The GA was used for the faulting model calibration to modify the solutions in a population (the calibration coefficients). The individual values within a population are randomly selected to represent a parent function, which is then used to determine the children for the next generation. This process is repeated until the population reaches an optimal solution (45). One benefit of the GA is that it is capable of converging to a unique global minimum regardless of whether any local minima exist (46). The GA-based optimization was performed using the MATLAB function ga. The optimization consisted of minimizing the sum of squared error between the predicted and measured faulting, as shown in Equation (5-1):

SSE = Σ (Faulting_measured,i − Faulting_predicted,i)²    (5-1)

The error was minimized by changing the seven calibration coefficients in the faulting model. The calibration coefficients were constrained to ensure reasonable results. The lower and upper bound constraints were set based on the global model calibration coefficient values to ensure a broad range for each population in the GA. The constraints for each calibration coefficient are summarized in Table 5-54.

Table 5-54 Faulting model calibration coefficient constraints
Calibration coefficient | Global | Lower bound | Upper bound
C1 | 1.0184 | 0.1 | 1.32392
C2 | 0.9166 | 0.4583 | 1.19158
C3 | 0.0022 | 0.00286 | 0.0066
C4 | 0.0009 | 0.00045 | 0.0027
C5 | 250 | 125 | 5000
C6 | 0.4 | 0.2 | 0.52
C7 | 1.8331 | 0.91655 | 2.38303
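A sketch of this optimization setup is shown below. The study used MATLAB's ga function; here SciPy's differential_evolution is used as a stand-in population-based optimizer, so the settings are only a rough analogue. The predicted_faulting routine is a placeholder for the re-coded Pavement-ME faulting transfer function, which is not reproduced here, and the bounds follow Table 5-54.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Lower/upper bounds for C1..C7 taken from Table 5-54
BOUNDS = [(0.1, 1.32392), (0.4583, 1.19158), (0.00286, 0.0066),
          (0.00045, 0.0027), (125.0, 5000.0), (0.2, 0.52), (0.91655, 2.38303)]

def predicted_faulting(coeffs, sections):
    """Placeholder for the faulting transfer function coded outside the Pavement-ME."""
    raise NotImplementedError("supply the re-coded incremental faulting equations")

def sse(coeffs, sections, measured):
    """Objective of Equation (5-1): sum of squared error between measured and predicted faulting."""
    return float(np.sum((measured - predicted_faulting(coeffs, sections)) ** 2))

# Population-based global search; the study used a 300-member population and 30 generations
# with MATLAB's ga, so these optimizer settings are only illustrative.
# result = differential_evolution(sse, BOUNDS, args=(sections, measured),
#                                 maxiter=30, popsize=45, seed=1)
# c1_to_c7 = result.x   # optimized calibration coefficients
```

Any population-based optimizer can consume the same objective and bounds; the essential ingredients are Equation (5-1) and the constraints in Table 5-54.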
The GA was executed by selecting 300 random samples for each calibration coefficient between the lower and upper bounds specified in Table 5-54. This population produces a 300-by-7 matrix of possible combinations from which to search for an optimal solution. Once an optimum solution is found for a particular population, those values are used to generate another matrix and the process is repeated (generations). Thirty generations were selected for the faulting model calibration. This method would be computationally impractical if performed manually by changing each calibration coefficient and running the Pavement-ME.

The results for the no sampling, split sampling, and bootstrapping validation techniques are summarized below, and the standard error, bias, and calibration coefficients are compared. Figures 5-69 and 5-70 summarize the SEE and bias for the various sampling techniques. The results show that the error and bias were reduced after local calibration for all options, and all sampling methods showed similar magnitudes of SEE and bias.

Figure 5-69 Summary of standard error for all sampling techniques

Figure 5-70 Summary of bias for all sampling techniques

The faulting model calibration coefficients are summarized in Table 5-55. The coefficients were similar between the no sampling and split sampling options, with slight variations. The bootstrapping validation option showed a larger difference compared to the other two sampling techniques. These results are expected because the bootstrapping validation takes the average of 1000 bootstrap samples and can capture the overall variability in the dataset. Figure 5-71 shows the distributions of the standard error and bias for the 1000 bootstraps; the mean, median, and confidence intervals are also displayed.

Table 5-55 Faulting model calibration coefficients
Coefficient | Global model | No sampling | Split sampling | Bootstrap validation
C1 | 1.0184 | 0.10417 | 0.10417 | 0.13188
C2 | 0.9166 | 0.46669 | 0.51552 | 0.48129
C3 | 0.0022 | 0.00653 | 0.00657 | 0.00524
C4 | 0.0009 | 0.00255 | 0.00265 | 0.00247
C5 | 250 | 4940.03284 | 4887.15813 | 4600.59465
C6 | 0.4 | 0.48839 | 0.51088 | 0.50979
C7 | 1.8331 | 0.92048 | 0.95148 | 0.95246

Figure 5-71 Frequency distributions of SEE and bias for the bootstrapping validation sampling technique

Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted faulting. The results in Table 5-56 show that even after local calibration there was still a significant difference between the measured and predicted faulting. However, for the split sampling and bootstrapping techniques, the validation sections had a p-value greater than 0.05, which indicates that there is no significant difference between the measured and predicted faulting for the pavement sections not included in the local calibration. Figure 5-72 shows the comparison between the measured and predicted faulting before and after local calibration. The global model coefficients were used for the validation sections in Figure 5-72(a) to compare the faulting predictions before and after local calibration. The global faulting model over-predicts measured faulting for Michigan pavement sections.
The locally calibrated model improves the faulting predictions significantly.

Table 5-56 Hypothesis testing p-value results
Sampling technique | Global | Local | Validation
No sampling | 0.0000 | 0.0010 | -
Split sampling | 0.0000 | 0.0003 | 0.2147
Bootstrap validation | - | - | 0.3718

Figure 5-72 Measured versus predicted faulting before and after local calibration: (a) global model coefficients; (b) local model validation (bootstrapping)

The faulting model was successfully calibrated to reflect observed faulting for rigid pavements in Michigan, and the standard error and bias were reduced for all sampling techniques. Two limitations of the current faulting model local calibration are that (1) a limited number of pavement sections were available for the local calibration of rigid pavements, and (2) the measured faulting for the selected pavement sections was very low (less than 0.1 inch).

5.3.3 Rigid Pavement Roughness (IRI) Model

The IRI model was calibrated after the local calibrations of the transverse cracking and faulting models were completed, because these distresses enter the IRI model directly along with the site factor and spalling. The IRI model was calibrated by minimizing the error between the predicted and measured IRI values. The model was calibrated for all options, and the results for each data option are summarized below.

5.3.3.1. Global Model

The Pavement-ME was executed for all pavement sections included in the dataset options. The performance predictions were used to determine how well the Pavement-ME software predicts distress for Michigan conditions. Figure 5-73 compares the measured and predicted IRI for all four dataset options; the SEE and bias were calculated for each option and differed between datasets. Option 3 showed the highest SEE of 23.1 in/mile and Option 4 showed the lowest SEE of 11.3 in/mile. Option 1 showed the largest bias, -11.4 in/mile, of all the options. The results for Option 3 were expected because it includes the LTPP SPS-2 sections, which showed higher transverse cracking, and transverse cracking directly affects the IRI predictions. The global model under-predicted IRI for all data options. Hypothesis testing was performed to determine if there was a significant difference between the measured and predicted IRI; the results are summarized in Table 5-57. All hypothesis test p-values were less than 0.05, which indicates that there is a significant difference between the measured and predicted IRI and that local calibration is needed to improve the prediction capabilities of the Pavement-ME rigid IRI model.

Figure 5-73 Rigid IRI model measured versus predicted comparison using global model coefficients: (a) Option 1 - SEE 17.3 in/mile, bias -11.4 in/mile; (b) Option 2 - SEE 15.5 in/mile, bias -10.0 in/mile; (c) Option 3 - SEE 23.1 in/mile, bias -9.9 in/mile; (d) Option 4 - SEE 11.3 in/mile, bias -7.6 in/mile (all panels use the global coefficients C1 = 0.82, C2 = 0.442, C3 = 1.493, C4 = 25.24)

Table 5-57 Global rigid IRI model hypothesis testing results (p-values by dataset option)
Hypothesis test | Hypotheses | Option 1 | Option 2 | Option 3 | Option 4
Mean difference (paired t-test) | H0: mean(predicted - measured) = 0; H1: mean(predicted - measured) ≠ 0 | 0.00 | 0.00 | 0.00 | 0.00
Intercept | H0: intercept = 0; H1: intercept ≠ 0 | 0.00 | 0.00 | 0.00 | 0.00
Slope | H0: slope = 1; H1: slope ≠ 1 | 0.00 | 0.00 | 0.00 | 0.00
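The three tests in Table 5-57 can be reproduced with standard statistical routines. The sketch below uses a paired t-test on the differences and an ordinary least-squares fit of measured on predicted IRI to test the intercept-equal-to-zero and slope-equal-to-one hypotheses; it is a generic illustration, not the exact code used in this study.

```python
import numpy as np
from scipy import stats

def calibration_hypothesis_tests(measured, predicted):
    """Return p-values for the paired-difference, intercept = 0, and slope = 1 tests."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    n = len(measured)

    # Test 1: mean of (predicted - measured) equal to zero (paired t-test)
    _, p_paired = stats.ttest_rel(predicted, measured)

    # Tests 2 and 3: fit measured = intercept + slope * predicted
    fit = stats.linregress(predicted, measured)
    dof = n - 2
    t_intercept = fit.intercept / fit.intercept_stderr   # H0: intercept = 0
    t_slope = (fit.slope - 1.0) / fit.stderr              # H0: slope = 1
    p_intercept = 2.0 * stats.t.sf(abs(t_intercept), dof)
    p_slope = 2.0 * stats.t.sf(abs(t_slope), dof)

    return {"paired t-test": p_paired, "intercept = 0": p_intercept, "slope = 1": p_slope}
```

A p-value below 0.05 on any of the three tests is the significance criterion used in the surrounding tables.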
5.3.3.2. Sampling Technique Results

The rigid pavement IRI model was locally calibrated to reduce the error between the measured and predicted IRI for Michigan pavements. The model calibration was performed using the various sampling techniques and dataset options, which were compared to each other to study the differences in model standard error, bias, and calibration coefficients.

The SEE and bias results are summarized in Figures 5-74 and 5-75. The SEE was slightly reduced after local calibration for all sampling techniques and dataset options, while the model bias was reduced substantially: the global model bias was approximately 10.0 in/mile for most of the sampling techniques and dataset options and was reduced to less than 5.0 in/mile. Options 1 and 2 showed very similar SEE values, and Option 3 had the greatest SEE before and after local calibration for all dataset options and sampling techniques. The majority of the sampling techniques showed similar magnitudes of SEE. The bootstrapping validation technique showed a much lower SEE than the other techniques; this results from the random split of the dataset prior to local calibration, because the 80/20 split for calibration and validation removed certain projects from the calibration dataset that were later used for validation of the IRI model.

The bias was reduced for all dataset options and sampling techniques. The global model under-predicted IRI for Michigan conditions. After local calibration, the model bias shows minimal over- or under-prediction for the majority of the sampling techniques and dataset options. However, for the bootstrapping validation technique, the locally calibrated model showed a slight over-prediction for Options 1 and 3. The validation sections for the bootstrapping validation technique showed that the global model slightly over-predicts IRI for pavement sections not included in the calibration dataset. The larger bias in these sections is again attributed to the random split of the dataset prior to calibration.

The bootstrapping validation sampling technique was used to validate the bootstrapping sampling with pavement sections not included in the calibration dataset. Twenty percent of the total number of pavement sections was used to validate the locally calibrated IRI model, and the local calibration coefficients were used to evaluate the IRI model for these pavement sections. The results for each dataset option are shown in Figure 5-76, along with the SEE, bias, and calibration coefficients for each option.

Figure 5-74 Summary of SEE for IRI model: (a) no sampling; (b) split sampling; (c) repeated split sampling; (d) bootstrapping; (e) jackknifing; (f) bootstrapping validation

Figure 5-75 Summary of bias for IRI model: (a) no sampling; (b) split sampling; (c) repeated split sampling; (d) bootstrapping; (e) jackknifing; (f) bootstrapping validation

Figure 5-76 Measured versus predicted IRI for the validation of the bootstrapping sampling technique: (a) Option 1; (b) Option 2; (c) Option 3; (d) Option 4
[Panel annotations: SEE 13.9 in/mile, bias 5.2 in/mile, C1 = 2.257, C2 = 1.603; SEE 13.9 in/mile, bias 6.8 in/mile, C1 = 0.598, C2 = 11.846; SEE 14.5 in/mile, bias 7.5 in/mile, C1 = 1.07, C2 = 5.328; SEE 11.9 in/mile, bias -1.0 in/mile, C1 = 0.351, C2 = 2.258; C3 = 1.493 and C4 = 25.24 in all panels.]

After the calibration with each sampling technique, hypothesis testing was performed to determine if there was a significant difference between the measured and predicted IRI. The hypothesis testing results for all sampling techniques and dataset options are summarized in Table 5-58.
Option 1: The sampling techniques used for Option 1 indicate that the global model did not predict IRI well for Michigan conditions. The local calibration results showed better predictions, and no significant difference between the measured and predicted IRI was found for the split sampling technique. The repeated split sampling, bootstrapping, and bootstrapping validation techniques showed that only 120, 62, and 99 out of 1000 calibrations, respectively, resulted in no significant difference. These results are surprising because the global model predictions were not drastically different from the measured values. The validation sections did show better results: the split sampling technique had a p-value of 0.19, and the repeated split sampling and jackknifing methods showed that over half of the validation sections had no significant difference. The bootstrapping validation, however, showed a significant difference between the measured and predicted IRI for the validation sections.

Option 2: The hypothesis testing results were much better for Option 2. All sampling techniques showed that there was no significant difference between the measured and predicted IRI after local calibration. The bootstrapping validation technique showed a p-value less than 0.05 when validating the model using pavement sections not included in the calibration dataset. These results could be caused by the limited sample size for rigid pavement sections and by the random splitting of the dataset for calibration and validation; they should improve once a sufficiently large sample size is available.

Option 3: The majority of the sampling techniques showed that there was no significant difference between the measured and predicted IRI after local calibration. Some of the techniques showed a significant difference when validating the model coefficients with pavement sections not included in the calibration dataset. Overall, the calibration was successful and the new calibration coefficients improve the prediction capabilities.

Option 4: The hypothesis testing results were promising for Option 4. All sampling techniques showed that there was no significant difference between the measured and predicted IRI after local calibration.
Table 5-58 IRI model hypothesis testing results

Option 1
Sampling technique | Global p-value | Local p-value | Validation p-value
No sampling | 0.00 | 0.00 | -
Split sampling | 0.00 | 0.08 | 0.19
Repeated split sampling | 0/1000 | 120/1000 | 530/1000
Bootstrapping | 0/1000 | 62/1000 | -
Jackknifing | 0/29 | 0/29 | 18/29
Bootstrapping validation | 0/1000 | 99/1000 | 0.01

Option 2
No sampling | 0.00 | 0.64 | -
Split sampling | 0.00 | 0.42 | 0.49
Repeated split sampling | 0/1000 | 999/1000 | 640/1000
Bootstrapping | 0/1000 | 967/1000 | -
Jackknifing | 0/44 | 35/44 | 28/44
Bootstrapping validation | 0/1000 | 942/1000 | 0.00

Option 3
No sampling | 0.00 | 0.62 | -
Split sampling | 0.00 | 0.86 | 0.00
Repeated split sampling | 0/1000 | 971/1000 | 625/1000
Bootstrapping | 0/1000 | 870/1000 | -
Jackknifing | 0/47 | 8/47 | 27/47
Bootstrapping validation | 0/1000 | 565/1000 | 0.01

Option 4
No sampling | 0.00 | 0.79 | -
Split sampling | 0.00 | 0.78 | 0.68
Repeated split sampling | 0/1000 | 1000/1000 | 630/1000
Bootstrapping | 0/1000 | 1000/1000 | -
Jackknifing | 0/15 | 15/15 | 11/15
Bootstrapping validation | 1/1000 | 1000/1000 | 0.76

The IRI model calibration coefficients are summarized in Figures 5-77 and 5-78. Only C1 and C2, which correspond to transverse cracking and spalling, were changed in the calibration of the IRI model. The C3 and C4 coefficients, which correspond to faulting and the site factor, were kept at the global model values. The faulting coefficient was held constant because minimal measured faulting was observed for Michigan pavements, and the site factor for the selected pavement sections did not vary greatly because of their geographical locations. As seen in the figures, the C1 coefficient decreased from the global coefficient for Options 1 and 4 and increased for Options 2 and 3. The C2 coefficient increased for all options, with the greatest increase observed for Option 1.

Figure 5-77 Summary of IRI model C1 calibration coefficient

Figure 5-78 Summary of IRI model C2 calibration coefficient

5.3.3.3. Rigid IRI Model Calibration Summary

The IRI model was successfully calibrated using Michigan-specific pavement sections. The local calibration was performed by minimizing the sum of squared error between the measured and predicted IRI. Several sampling techniques were utilized to determine the best estimates of the calibration coefficients and to provide the greatest confidence in those coefficients. The C3 and C4 calibration coefficients were not changed from the global values because of the minimal faulting predictions in Michigan and the limited geographical spread of the selected sections. Several limitations of the current local calibration should be addressed in future calibrations:

A limited number of pavement sections were available for rigid pavements.
IRI predictions depend on transverse cracking, faulting, spalling, and the site factor; spalling was not measured on MDOT pavements and was not specifically calibrated.
Additional pavement sections from all areas of Michigan are needed so that the site factor coefficient can be adjusted to improve the prediction capabilities.

Benefits of repeated split sampling and bootstrapping

The traditional calibration method (split sampling) is effective when the sample size is large enough. For rigid pavements, only 20 pavement projects were available with sufficient time-series cracking, faulting, and IRI data. Having so few sections makes it difficult to decide whether the resulting coefficients truly represent local design practices.
Therefore, more robust techniques were used to study the variability in the calibration parameters. The repeated split sampling distributions for SEE, bias, C1, and C2 are shown in Figure 5-79. These distributions show the variability of the calibration parameters when random sampling is performed. Figure 5-79(a) suggests a bimodal distribution, which indicates that there could be two different clusters in the dataset; this becomes apparent when a 70/30 split of the dataset is performed and was observed in the measured data, where some of the pavement sections showed much higher IRI than others. Alternatively, Figure 5-80 shows the parameter estimates for the bootstrapping technique. There does not appear to be a bimodal distribution for these parameter estimates, which is expected because bootstrapping performs random sampling with replacement. Confidence intervals were obtained for both repeated split sampling and bootstrapping; these intervals provide a better idea of where the SEE, bias, and calibration coefficients will fall for a particular dataset.

Figure 5-79 IRI model parameter distributions for repeated split sampling

Figure 5-80 IRI model parameter distributions for bootstrapping

5.4 SATELLITE STUDIES

5.4.1 Repeated Bootstrapping and Validation

The bootstrapping validation technique was further expanded to repeatedly bootstrap a split sample for calibration and validation. The bootstrapping validation technique splits the pavement section sample only once and then performs bootstrap sampling. The new method splits the sample, performs bootstrapping on the split sample for calibration, and uses the remaining pavement sections not included in the calibration set for validation. This process is then repeated to ensure that a new random selection of calibration and validation sections is made each time the calibration is performed. The results are then summarized using histograms to calculate the mean, median, and confidence intervals of the SEE, bias, and calibration coefficients. The main purpose of this method was to determine whether there is a difference in SEE, bias, and calibration coefficients compared to the other methods. As a demonstration, this procedure was performed for the rigid pavement transverse cracking model. The SEE and bias results are summarized in Figures 5-81 and 5-82, and the local calibration coefficients for each sampling method are summarized in Table 5-59. The results in Figure 5-81 show that the local calibration SEE is very similar to that of all other methods, and the validation results are also similar. The results in Figure 5-82 indicate that the repeated bootstrapping method resulted in lower bias than the other methods. The main benefit of the repeated bootstrapping method is that it can determine the mean, median, and confidence intervals for both the calibration and validation sections and does not rely on one single set of validation pavement sections. This gives greater confidence in the calibration coefficients since it accounts for the variability in the data.
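A sketch of the repeated bootstrapping-with-validation procedure is given below. The calibrate_on and evaluate_on routines are placeholders for the model-specific calibration and for the SEE/bias computation on a set of sections, and the repeat and bootstrap counts are illustrative rather than the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

def repeated_bootstrap_validation(section_ids, calibrate_on, evaluate_on,
                                  n_repeats=100, n_boot=1000, cal_fraction=0.8):
    """Repeatedly split the sections 80/20, bootstrap the calibration subset,
    validate on the held-out subset, and collect SEE, bias, and coefficients."""
    section_ids = np.asarray(section_ids)
    n_cal = int(cal_fraction * len(section_ids))
    records = []
    for _ in range(n_repeats):
        shuffled = rng.permutation(section_ids)
        cal_ids, val_ids = shuffled[:n_cal], shuffled[n_cal:]     # fresh random split each repeat
        boot_coeffs = [calibrate_on(rng.choice(cal_ids, size=n_cal, replace=True))
                       for _ in range(n_boot)]                    # bootstrap calibrations
        coeffs = np.mean(boot_coeffs, axis=0)                     # average bootstrap coefficients
        see, bias = evaluate_on(coeffs, val_ids)                  # validation on held-out sections
        records.append([see, bias, *coeffs])
    records = np.asarray(records)
    # 2.5th, 50th, and 97.5th percentiles of SEE, bias, and each coefficient
    return np.percentile(records, [2.5, 50.0, 97.5], axis=0)
```

Because each repeat produces its own validation set, the resulting confidence intervals reflect both the resampling of calibration sections and the choice of validation sections, which is the advantage noted above.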
Figure 5-81 Transverse cracking model standard error

Figure 5-82 Transverse cracking model bias

Table 5-59 Transverse cracking calibration coefficients
Sampling technique | C1 | C2
Global model | 1.00 | -1.980
No sampling | 0.27 | -1.560
Split sampling | 0.35 | -1.280
Repeated split sampling | 0.26 | -1.630
Bootstrapping | 0.25 | -1.710
Jackknifing | 0.26 | -1.591
Bootstrapping validation | 0.27 | -1.644
Repeated bootstrapping | 0.26 | -1.740

5.5 OVERALL SUMMARY

In summary, the standard errors from the bootstrapping validation technique were compared to the reasonable standard errors discussed in Chapter 2; the results are summarized in Table 5-60. The standard errors for most performance models are similar to the reasonable values. The alligator cracking, rutting, and joint faulting models had lower standard errors, while the thermal and transverse cracking models had higher standard errors. The thermal cracking model had a significantly higher standard error because of the large differences between the measured and predicted thermal cracking; in addition, its calibration coefficient was changed iteratively and was not subjected to the automated optimization tools because of the form of the model. Overall, the local calibration results are close to the reasonable standard error values, which indicates that the local calibration for Michigan conditions is adequate.

Table 5-60 Comparison of reasonable standard error after local calibration
Pavement type | Performance prediction model | Reasonable Se | Michigan Se (bootstrapping validation)
New asphalt | Alligator cracking (%) | 7 | 6.3
New asphalt | Transverse (thermal) cracking (ft/mile) | 250 | 732
New asphalt | Rut depth (inch) | 0.1 | 0.09
New JPCP | Transverse cracking (% slabs cracked) | 7 | 8.97
New JPCP | Joint faulting (inch) | 0.05 | 0.019

6 - CONCLUSIONS, RECOMMENDATIONS AND FUTURE RESEARCH

6.1 SUMMARY

The main objectives of this research study were to (a) select candidate pavement projects for the local calibration of performance models in Michigan, (b) evaluate the adequacy of the current globally (nationally) calibrated models for Michigan conditions, (c) calibrate the pavement performance prediction models for flexible and rigid pavements to Michigan conditions using different dataset options and resampling techniques, (d) provide a catalog of calibration coefficients for each performance model for rigid and flexible pavements, (e) compare the locally and globally calibrated models and recommend the most representative models for Michigan conditions, and (f) recommend future local calibration guidelines and data needs.

The local calibration of the performance models in the mechanistic-empirical pavement design guide (Pavement-ME) is a challenging task, especially because of data limitations. A total of 108 (129 sections) reconstruct flexible and 20 (29 sections) rigid pavement candidate projects were selected. Similarly, a total of 33 (40 sections) and 8 (16 sections) rehabilitated pavement projects for flexible and rigid pavements, respectively, were selected for the local calibration. The selection process considered pavement type, age, geographical location, and the number of condition data collection cycles. The selected set of pavement sections met the following data requirements: (a) an adequate number of sections for each performance model, (b) a wide range of inputs related to traffic, climate, design, and material characterization, and (c) a reasonable extent and occurrence of observed condition data over time.
The nationally calibrated performance models were evaluated using data from the selected pavement sections. The results showed that the global models in the Pavement-ME do not adequately predict pavement performance for Michigan conditions; therefore, local calibration of the models was essential. The local calibration of all performance prediction models for flexible and rigid pavements was performed for multiple datasets (reconstruct, rehabilitation, and a combination of both) using robust statistical techniques (e.g., repeated split sampling and bootstrapping). The results of the local calibration and validation of the various models show that the calibrated models significantly improve the performance predictions for Michigan conditions. The local calibration coefficients for all performance models are documented in Chapter 5. Additionally, recommendations on the most appropriate calibration coefficients for each of the performance models in Michigan, along with future local calibration guidelines and data needs, are included. Potential future research to expand the local calibration process for Michigan is also presented.

6.2 LOCAL CALIBRATION FINDINGS

Based on the results of the analyses performed in this study, various conclusions were drawn. These conclusions can be divided into the following three broad topics:

Data collection for the selected pavement sections
Local calibration process
Catalog of the local calibration coefficients

6.2.1 Data Needs for Local Calibration

The first step in the local calibration process is the selection of an adequate number of pavement sections representing the state of the practice for the local conditions. Subsequently, an essential step is to collect the required data for the selected pavement sections. The data include information about (a) the measured pavement condition and (b) the many Pavement-ME inputs for each project. Chapter 3 describes the process for pavement section selection for local calibration and the procedures adopted to collect the necessary information for the selected pavement sections. The data needs for the local calibration are:

1. Readily available measured condition data
2. Project selection criteria
3. Pavement cross-section information
4. Traffic inputs
5. Construction materials inputs
6. Climate inputs

Table 6-1 summarizes the inputs and corresponding levels for the available data.
Table 6-1 Summary of input levels and data sources
Input category | Input | Level | Source
Traffic | AADTT | 1 | Historical traffic counts
Traffic | TTC | 2 | Clusters from previous traffic study
Traffic | ALS tandem | 2 | Clusters from previous traffic study
Traffic | HDF | 2 | Clusters from previous traffic study
Traffic | MDF | 3 | Traffic characterization study
Traffic | AGPV | 3 | Traffic characterization study
Traffic | ALS single, tridem, quad | 3 | Traffic characterization study
Cross-section (new and existing) | HMA thickness | 1 | Design drawings
Cross-section (new and existing) | PCC thickness | 1 | Design drawings
Cross-section (new and existing) | Base thickness | 1 | Design drawings
Cross-section (new and existing) | Subbase thickness | 1 | Design drawings
Construction materials - HMA | Binder type | 3 | Project-specific binder and mixture gradation data obtained from data collection
Construction materials - HMA | HMA mixture aggregate gradation | 3 | Project-specific binder and mixture gradation data obtained from historical records
Construction materials - HMA | Binder type | 1 | Pseudo level 1 - MDOT HMA mixture characterization study
Construction materials - HMA | HMA mixture aggregate gradation | 1 | Pseudo level 1 - MDOT HMA mixture characterization study
Construction materials - PCC | Strength (f'c, MOR) | 1 | Pseudo level 1 (project-specific QC/QA)
Construction materials - PCC | CTE | 3 | CTE study
Construction materials - Base/subbase | MR | 2 | Unbound MR study
Construction materials - Subgrade | MR | 2 | Subgrade MR study
Construction materials - Subgrade | Soil type | 1 | Subgrade MR study
Climate | Climate | 1 | Closest available climate station (Pavement-ME)
Note: Level 1 is project-specific data; pseudo level 1 means that the inputs are not project specific but the laboratory-measured material properties correspond to similar materials used in the project. Level 2 inputs are based on regional averages in Michigan, and Level 3 inputs are based on statewide averages in Michigan.

6.2.2 Process for Local Calibration

NCHRP Project 1-40B (2) documented the recommended practices for local calibration of the Pavement-ME. The guide outlines the significance of the calibration process and the general approach for local calibration. In general, the calibration process is used to:

a. Confirm that the performance models can predict pavement distress and smoothness with minimal bias, and
b. Determine the standard error associated with the prediction equations (a computational sketch follows the step list below). The standard error estimates the scatter of the data around the line of equality between predicted and measured values of distress, while the bias indicates whether there is any consistent under- or over-prediction by the prediction models.

In general, the local calibration of the performance models involves the following steps:

1. Select the appropriate number of pavement sections based on the selection criteria documented in Chapter 3 for each performance model. The final list of candidate pavement sections should be refined based on the magnitude and extent of the measured performance.
2. Collect traffic, climate, pavement cross-section, and materials data for all the selected pavement sections.
3. Execute the Pavement-ME software to predict the pavement performance for each selected pavement section.
4. Extract the predicted distresses and compare them with the measured distresses.
5. Test the accuracy of the global model predictions and determine if local calibration is required.
6. If local calibration is required, adjust the local calibration coefficients to minimize bias and standard error by using different sampling and resampling techniques. It should be noted that different subsets of data representing reconstruct and rehabilitation can be analyzed separately to determine the need for distinct calibration coefficients.
7. Validate the adjusted coefficients with pavement sections not included in the calibration set.
8. Modify the reliability equations for each performance model based on the final calibrated models.
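The bias and standard error referenced in these steps can be computed directly from the paired measured and predicted values. The minimal sketch below assumes the standard error of the estimate is taken about the line of equality with the number of calibration coefficients removed from the degrees of freedom, which is one common convention rather than a prescribed formula.

```python
import numpy as np

def bias_and_see(measured, predicted, n_coefficients=2):
    """Bias = mean(predicted - measured); SEE = residual scatter about the line of equality."""
    residuals = np.asarray(predicted, dtype=float) - np.asarray(measured, dtype=float)
    bias = residuals.mean()
    see = np.sqrt(np.sum(residuals ** 2) / (len(residuals) - n_coefficients))
    return bias, see

# Example with made-up transverse cracking values (% slabs cracked)
print(bias_and_see([2.0, 5.5, 10.0, 0.5], [2.4, 6.1, 8.9, 1.0]))
```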
6.2.3 Coefficients for the Locally Calibrated Models

Based on the local calibration of the performance models following the process described above, the following conclusions can be made:

Bootstrapping consistently showed the lowest SEE and bias for the different dataset options for the alligator cracking, rutting, transverse cracking, faulting, and IRI models.
The thermal cracking model requires rerunning the Pavement-ME software every time the local coefficients are modified; therefore, only the no-sampling technique was utilized for the local calibration of that model.
For each performance prediction model, the following options (data subsets) provided the most rational results:
o Alligator cracking: Option 1a provided the most realistic model because more pavement sections and measured cracking data were available. Option 1b is not recommended because only a limited number of pavement sections exhibited alligator cracking based on the corresponding PDs in the MDOT PMS database.
o Rutting: Individual layer calibration provided the best results, and Option 2 (the combination of reconstruct and rehabilitation) gave the most realistic results for the rutting model.
o Thermal cracking: The level 1 calibration coefficients for Option 2 showed the best results; for level 3, Option 2 provided the most practical results.
o IRI (flexible): The options provided varying results; Options 1 and 4 showed the most appropriate results for reconstruct and rehabilitated pavement sections, respectively.
o Rigid transverse cracking: Option 2 provided the most practical results and contains more pavement sections; therefore, the model coefficients based on Option 2 are recommended for both reconstruct and rehabilitation.
o Faulting: Even though the magnitude of the measured faulting in the selected pavement sections was low, Option 2 provided the best results.
o IRI (rigid): Option 2 provided the best results. It should be noted that the faulting coefficient in the IRI model is set to the global model coefficient because of the low levels of measured faulting due to dowelled joints in Michigan.

Based on the above findings, the local calibration coefficients and standard error equations for reliability for each performance model for flexible and rigid pavements are presented in Tables 6-2 and 6-3, respectively. The tables also contain the recommended options (the dataset used for calibration) for each performance model. If Option 1 is recommended, those model coefficients should be used only for reconstruct pavement designs; if Option 4 is recommended, the model should be used only for rehabilitation designs. Where only Option 2 is recommended, those models should be used for both rehabilitation and reconstruct pavement designs.

Table 6-2 Summary of flexible pavement performance models with local coefficients in Michigan [data subsets paired with the performance models: Options 1a and 1b - fatigue (bottom-up) cracking; Option 2 - rutting (HMA and base/subgrade); Option 2 - thermal cracking; Options 1 and 4 - IRI, with the IRI standard error determined internally by the software; the local coefficient and standard-error entries are not reproduced in this text]
Note: Option 1 = reconstruct pavements, Option 2 = combined reconstruct and rehabilitated pavements, Option 4 = rehabilitated pavements. The model coefficients shown in red in the original table are the newly calibrated local coefficients.
Option 1a uses both alligator and longitudinal cracking; therefore, if Option 1a is used, the longitudinal cracking model should not be used.

Table 6-3 Summary of rigid pavement performance model coefficients and standard errors [Option 2 is the recommended data subset for the transverse cracking, transverse joint faulting, and IRI models; the IRI standard error is determined internally by the software; the local coefficient and reliability entries are not reproduced in this text]
Note: For reliability, the standard-error models in this table correspond to the local calibration; however, the global faulting model standard error is recommended due to software issues.

6.3 FINDINGS AND RECOMMENDATIONS

The following observations, findings, and recommendations were made during the data collection and local calibration for Michigan conditions:

1. The bottom-up and top-down fatigue cracking in the wheel paths are combined for new HMA reconstruct projects because of the difficulty of determining whether the fatigue cracks observed at the surface initiated at the top or the bottom of the layer.

2. The average surface rutting (left and right wheel paths) was determined for the entire project length; no conversion is necessary.

3. The faulting values reported in the MDOT sensor database correspond to the average height of all faults at a discontinuity (crack or joint) observed over an entire 0.1-mile section. However, the Pavement-ME faulting prediction does not consider faulting at cracks and predicts faulting only at the joints. Therefore, only the measured average joint faulting should be compared with the faulting predicted by the Pavement-ME; because the measured data include faults at both cracks and joints, the average joint faulting needs to be calculated. The average joint faulting is calculated using Equation (6-1), as presented in Chapter 3.

4. Transverse cracking is predicted as percent slabs cracked in the Pavement-ME. However, for pavements in Michigan, transverse cracking is measured as the total number of transverse cracks in a 0.1-mile section. The measured transverse cracking therefore needs to be converted to percent slabs cracked using Equation (6-2), as presented in Chapter 3.

5. It is strongly recommended that all new (constructed after the completion of this research) flexible and rigid pavement rehabilitation or reconstruct projects be considered for future calibrations. This will ensure that the most accurate as-constructed input data (traffic and material properties for the HMA/PCC and base/subbase layers) can be collected at the time of construction and stored in a database for future local calibrations.

6. The number of pavement projects and sections shortlisted in this study should be increased by adding pavement projects constructed after 2006 (since the local calibration was performed using projects constructed through 2006), especially for rigid pavements. It is strongly recommended to include all newly constructed projects for which level 1 HMA data were obtained.
The recommended options for each model should be used for future calibration. Resampling techniques like bootstrapping are strongly recommended for f uture calibration. 10. The local calibration model coefficients shown in Tables 6 - 2 and 6 - 4 are recommended to replace the global/national models coefficients in Michigan. However, these coefficients need to be re - evaluated and modified when more pavement sect ions with higher levels of distress, become available, especially for rehabilitated pavements with two or more cycles of distress measurements. 11. It is recommended that the local calibrations should be performed every six years. In six years, both the exist ing and new pavement sections will have three additional performance data points for the local calibration. 12. The input data for additional pavement sections should be collected at the time of construction to populate the local calibration database before t he next calibration cycle. 275 13. It is strongly recommended to characterize pavement materials in the laboratory for the additional pavement sections and the data should be added to the local calibration database. 6.4 F UTURE R ESEARCH Based on the conclusions, findin gs, and recommendations of this study, the following are research topics which can improve the local calibration process developed for Michigan in the future: At this time, the Pavement - ME flexible and rigid pavement rehabilitation models local calibration is very limited due to the available number of pavement sections and the current assumptions for many important input parameters. To address these limitations, increased use of FWD for backcalculation of layer moduli to characterize existing pavement cond itions for all the rehabilitation options adopted in Michigan is warranted, especially for high traffic volume roads. PMS distress data and unit conversion is also necessary to ensure compatibility between measured and Pavement - ME predicted distresses in the long - term for implementation of the new design methodology (see Tables 6 - 6 and 6 - 7 ). The units can be converted b y using the equations mentioned in Chapter 3. T he results from the conversion should be stored separately in th e database for the selected principal distress indices. Sensor data (IRI, rut depth) do not need any further conversion because of their current compatibility with Pavement - ME. 276 Table 6 - 4 Flexible pavement distresses Flexible pavement distresses MDOT principle distresses MDOT units Pavement - ME units Conversion needed? IRI Directly measured in/mile in/mile No Top - down cracking 204, 205, 724, 725 miles ft/mile Yes Bottom - up cracking 234, 235, 220, 221, 730, 731 miles % area Yes Th ermal cracking 101, 103, 104, 114, 701, 703, 704, 110 No. of occurrences ft/mile Yes Rutting Directly measured in in No Reflective cracking No specific PD None % area N/A Note: Bold numbers represent older PDs that are not currently in use Table 6 - 5 Rigid pavement distresses Rigid pavement distresses MDOT principle distresses MDOT units Pavement - ME units Conversion needed? IRI Directly measured in/mile in/mile No Faulting Directly measured in in Yes Tra nsverse cracking 112, 113 No. of occurrences % slabs cracked Yes The significant input variables that are related to the various reconstruct and rehabilitation types summarized in Chapter 3 should be an integral part of a database for construction and ma terial related information. 
Such information will be beneficial for future design projects and the local calibration of the performance models in the Pavement-ME. Table 6-6 summarizes the testing requirements for the significant input variables needed for the local calibration.

Table 6-6 Testing requirements for significant input variables for rehabilitation
Pavement layer type | Significant input variable | Lab test (1) | Field test
Reconstruct and overlay | HMA air voids | Yes |
Reconstruct and overlay | HMA effective binder | Yes |
Reconstruct and overlay | HMA binder and mixture characterization (G*, E*, etc.) | Yes |
Reconstruct and overlay | PCC CTE (per °F x 10-6) | Yes |
Reconstruct and overlay | PCC MOR (psi) | Yes |
Existing | HMA thickness | | Extract core
Existing | Pavement condition rating | | Distress survey
Existing | Subgrade modulus | | FWD testing
Existing | Subbase modulus | | FWD testing
Existing | PCC thickness | | Extract core
Existing | PCC elastic modulus (psi) | | FWD testing
(1) Either use current practice or AASHTO test methods.

The local calibration process developed for Michigan builds upon what other state highway agencies have performed. The use of resampling techniques increases confidence in selecting the best calibration coefficients based on the measured performance and the inputs used to characterize local pavement sections. In the future, this process should be applied to data from other state highway agencies to test whether the local calibration results improve on those from the traditional methods.

Additional studies on the number of pavement sections required for a robust bootstrapping calibration should be performed. The limits of the bootstrapping technique have implications for the number of projects selected for local calibration. If the results indicate that fewer pavement sections are required to obtain similar results, then it is very beneficial to adopt the bootstrapping calibration technique.

Several limitations exist in the current local calibration for Michigan conditions because several calibration coefficients could not be adjusted with the data available. The following calibration coefficients were not adjusted in this round of local calibration:

o Fatigue cracking: The fatigue damage model in the Pavement-ME was not calibrated because level 1 material testing was not performed for the pavement sections in the local calibration dataset. The model should be recalibrated when these data become available.

o HMA rutting: The HMA rutting model calibration adjusted only the βr1 calibration coefficient, which directly affects the magnitude of the HMA rut predictions and does not affect the slope of the prediction curve. To improve the local calibration of the rutting model, βr2 and βr3 need to be adjusted to further minimize the bias and standard error between the measured and predicted HMA rutting. The βr2 and βr3 coefficients are related to the number of axle load repetitions and the mix or pavement temperature, and they correspond to the kr2 and kr3 global model coefficients obtained from laboratory testing. To improve the HMA rutting calibration results, laboratory testing should be performed on local HMA mixtures to adjust the βr2 and βr3 calibration coefficients, and the rutting model should be recalibrated once these values are obtained.

o Transverse cracking (rigid): The fatigue damage calibration coefficients C1 and C2 were not adjusted in the local calibration of the rigid pavement transverse cracking model. The allowable number of load repetitions at the various conditions is affected by C1 and C2, and these values also depend on the modulus of rupture and the applied stress at those conditions.
Each time the C1 and C2 coefficients are adjusted, the software needs to be executed to obtain the fatigue damage values. The local calibration for Michigan conditions could be improved if the fatigue damage could be calculated or obtained outside of the software to further minimize the error between the measured and predicted cracking.

o Faulting: The faulting model calibration changes all the calibration coefficients simultaneously to minimize the error between the predicted and measured faulting. At this time, the measured faulting for the Michigan pavement sections is extremely low (less than 0.1 inch), and the calibration process should be tested with pavement sections exhibiting greater faulting to truly determine the effectiveness of the method.

o Rigid pavement IRI: The rigid pavement IRI model was calibrated by changing the cracking and spalling coefficients. In future calibrations, the faulting and site factor coefficients should also be adjusted to further improve the local calibration.

Project selection for future calibrations should follow procedures similar to those developed in this research.

The genetic algorithm used for calibrating the faulting model can be used to calibrate the other models. The optimization technique may further reduce the standard error and bias, although it is much more time consuming because it searches a large domain of possible solutions for each calibration coefficient.

A local calibration software tool should be developed to calibrate each model using the various sampling techniques automatically. This software could include a measured performance database, an input variable database, a geographical location database, and a Pavement-ME output and performance prediction database. The local calibration could be performed using traditional split sampling, jackknifing, and the bootstrapping validation technique, and the calibration coefficients, standard error, and bias could be compared to determine the best estimates for a particular highway agency. Additionally, the standard error used for reliability could be updated using the local calibration software. A further benefit is that the database could be continually updated to add new pavement sections and remove older ones as more ME-designed pavements accumulate distress and performance data.

The validity of the transverse profile analysis should be verified by performing field trench testing.

Measured built-in curl of the selected pavement sections should be incorporated to verify the values used in the MEPDG.

REFERENCES

[1] Pierce, L. M., and G. McGovern. Implementation of the AASHTO Mechanistic-Empirical Pavement Design Guide and Software. NCHRP, 2014.
[2] NCHRP Project 1-40B. Local Calibration Guidance for the Recommended Guide for Mechanistic-Empirical Pavement Design of New and Rehabilitated Pavement Structures. Final NCHRP Report, 2009.
[3] AASHTO. Mechanistic-Empirical Pavement Design Guide: A Manual of Practice, Interim Edition. American Association of State Highway and Transportation Officials, 2008.
[4] Von Quintus, H. L., M. I. Darter, and J. Mallela. Local Calibration Guidance for the Recommended Guide for Mechanistic-Empirical Design of New and Rehabilitated Pavement Structures. 2004.
[5] Mallela, J., L. Titus-Glover, H. Von Quintus, M. I. Darter, M. Stanley, and C. Rao. Implementing the AASHTO Mechanistic-Empirical Pavement Design Guide in Missouri, Volume II: MEPDG Model Validation and Calibration. Missouri Department of Transportation, 2009.
[6] Mallela, J., L. Titus-Glover, H. Von Quintus, M. I. Darter, M. Stanley, C. Rao, and S. Sadasivam. Implementing the AASHTO Mechanistic-Empirical Pavement Design Guide in Missouri, Volume I: Study Findings, Conclusions, and Recommendations. Missouri Department of Transportation, 2009.
[7] Glover, L. T., and J. Mallela. Guidelines for Implementing NCHRP 1-37A M-E Design Procedures in Ohio: Volume 4 - MEPDG Models Validation & Recalibration. Ohio Department of Transportation, 2009.
[8] Hall, K. D., D. X. Xiao, and K. C. P. Wang. Calibration of the MEPDG for Flexible Pavement Design in Arkansas. Transportation Research Record, 2010.
[9] Streveler, R. A., T. A. Litzinger, R. L. Miller, and P. S. Steif. Learning Conceptual Knowledge in the Engineering Sciences: Overview and Future Research Direction. Journal of Engineering Education, Vol. 97, No. 3, 2008, pp. 279-294.
Titus - Glover, H. Von Quintus, M. I. Darter, M. Stanley, C. Rao, and S. Sadasivam. Im plementing the AASHTO Mechanistic - Empirical Pavement Design Guide in Missouri Volume I: Study Findings, Conclusions, and Recommendations.In, Missouri Department of Transportation, 2009. [7] Glover, L. T., and J. Mallela. Guidelines for Implementing NCHRP 1 - 37A M - E Design Procedures in Ohio: Volume 4 - MEPDG Models Validation & Recalibration.In, Ohio Department of Transportation, 2009. [8] Hall, K. D., D. X. Xiao, and K. C. P. Wang. Calibration of the MEPDG for Flexible Pavement Design in Arkansas. Transport ation Research Record , 2010. [9] Streveler, R. A., T. A. Litzinger, R. L. Miller, and P. S. Steif. Learning conceptual knowledge in the engineering sciences: Overview and future research direction. Journal of Engineering Education, Vol. 97, No. 3, 2008, pp. 279 - 294. 283 [10] Velasquez, R., K. Hoegh, I. Yut, N. Funk, GeorgeCochran, M. Marasteanu, and L. Khazanovich. Implementation of the MEPDG for New and Rehabilitated Pavement Structures for Design of Concrete and Asphalt Pavements in Mi nnesota.In, Minnesota Department of Transportation, 2009. [11] Quintus, H. V., and J. S. Moulthrop. Mechanistic - Empirical Pavement Design Guide Flexible Pavement Performance Prediction Models for Montana: Volume I Executive Research Summary.In, The State o f Montana Department of Transportation,, 2007. [12] Wu, C. - F. J. Jackknife, bootstrap and other resampling methods in regression analysis. The Annals of Statistics, Vol. 14, No. 4, 1986, pp. 1261 - 1295. [13] Mallela, J., L. Titus - Glover, S. Sadasivam, B. Bh attacharya, M. I. Darter, and H. L. Von Quintus. Implementation of the AASHTO Mechanistic - Empirical Pavement Design Guide for Colorado.In, Colorado Department of Transportation, 2013. [14] Guo, X., and D. Timm. Local Calibration of the MEPDG Using NCAT Tes t Track Data.In Transportation Research Board Annual Meeting , 2015. [15] Sun, X., J. Han, R. L. Parsons, A. Misra, and J. Thakur. Calibrating the Mechanistic - Empirical Pavement Design Guide for Kansas.In, Kansas Department of Transportation, Kansas Departm ent of Transportation, 2015. p. 235. [16] Efron, B. Bootstrap methods: another look at the jackknife. The Annals of Statistics , 1979, pp. 1 - 26. [17] Banerjee, A., J. P. Aguiar - Moya, and J. A. Prozzi. Texas Experience using LTPP for Calibration of the MEPDG Permanent Deformation Models. Transportation Research Record , 2009. [18] Mallela, J., L. Titus - Glover, B. Bhattacharya, A. Gotlif, and M. I. Darter. Recalibration of the JPCP Cracking and Faulting Models in the AASHTO ME Design Procedure. Transportation R esearch Record , 2015. [19] Hoerner, T. E., M. I. Darter, L. Khazanovich, and L. Titus - Glover. Improved Prediction Models for PCC Pavement Performance - Related Specifications.In, Federal Highway Administration,, 2000. [20] Xiao, D. X., Z. Wu, and Z. Zhang. L essons Learned in Local Calibration of MEPDG for Louisiana Flexible Pavement Design.In Transportation Research Board Annual Meeting , 2015. 284 [21] Buch, N., K. Chatti, S. W. Haider, and A. Manik. Evaluation of the 1 - 37A Design Process for New and Rehabilitate d JPCP and HMA Pavements, Final Report.In, Michigan Department of Transportation, Construction and Technology Division, P.O. Box 30049, Lansing, MI 48909, Lansing, , 2008. [22] Haider, S. W., N. Buch, and K. Chatti. 
Evaluation of M-E PDG for Rigid Pavements Incorporating the State-of-the-Practice in Michigan. In the 9th International Conference on Concrete Pavements, San Francisco, California, USA, 2008.
[23] Buch, N., S. W. Haider, K. Chatti, G. Y. Baladi, W. Brink, and I. Harsini. Preparation for Implementation of the Mechanistic-Empirical Pavement Design Guide in Michigan - Part 2: Rehabilitation Evaluation. Michigan Department of Transportation, 2013.
[24] Kutay, M. E., and A. Jamrah. Preparation for Implementation of the Mechanistic-Empirical Pavement Design Guide in Michigan, Part 1: HMA Mixture Characterization. Michigan Department of Transportation, 2013.
[25] Buch, N., S. W. Haider, J. Brown, and K. Chatti. Characterization of Truck Traffic in Michigan for the New Mechanistic Empirical Pavement Design Guide, Final Report. Michigan Department of Transportation, Construction and Technology Division, Lansing, MI, 2009.
[26] Haider, S. W., N. Buch, K. Chatti, and J. Brown. Development of Traffic Inputs for Mechanistic-Empirical Pavement Design Guide in Michigan. Transportation Research Record, Vol. 2256, 2011, pp. 179-190.
[27] Baladi, G. Y., T. Dawson, and C. Sessions. Pavement Subgrade MR Design Values. Michigan Department of Transportation, Construction and Technology Division, Lansing, MI, 2009.
[28] Baladi, G. Y., K. A. Thottempudi, and T. Dawson. Backcalculation of Unbound Granular Layer Moduli, Final Report. Michigan Department of Transportation, Construction and Technology Division, Lansing, MI, 2010.
[29] Buch, N., and S. Jahangirnejad. Quantifying Coefficient of Thermal Expansion Values of Typical Hydraulic Cement Concrete Paving Mixtures, Final Report. Michigan Department of Transportation, Construction and Technology Division, Lansing, MI, 2008.
[30] Miller, J. S., and W. Y. Bellinger. Distress Identification Manual. FHWA-RD-03-031, 2003.
[31] Rauhut, J. B., A. Eltahan, and A. L. Simpson. Common Characteristics of Good and Poorly Performing AC Pavements. Federal Highway Administration, 1999.
[32] Khazanovich, L., et al. Common Characteristics of Good and Poorly Performing PCC Pavements. Federal Highway Administration, 1998.
[33] Schwartz, C. W., R. Li, S. Kim, H. Ceylan, and R. Gopalakrishnan. Sensitivity Evaluation of MEPDG Performance Prediction. 2011.
[34] Huang, Y. H. Pavement Analysis and Design. Pearson Prentice Hall, Upper Saddle River, NJ, 2004.
[35] NCHRP Project 1-40B. User Manual and Local Calibration Guide for the Mechanistic-Empirical Pavement Design Guide and Software. NCHRP, 2007.
[36] Sahinler, S., and D. Topuz. Bootstrap and Jackknife Resampling Algorithms for Estimation of Regression Parameters. Journal of Applied Quantitative Methods, Vol. 2, No. 2, 2007, pp. 188-199.
[37] Fox, J. Bootstrapping Regression Models. An R and S-PLUS Companion to Applied Regression: A Web Appendix to the Book. Sage, Thousand Oaks, CA, 2002. URL http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-bootstrapping.pdf.
[38] Quintus, H. V., C. Schwartz, R. H. McCuen, and D. Andrei. Jackknife Testing - An Experiment Approach to Refine Model Calibration and Validation. NCHRP, 2003.
[39] White, T. D., E. H. John, J. T. H. Adam, and F. Hongbing. Contribution of Pavement Structural Layers to Rutting of Hot Mix Asphalt Pavements. NCHRP, Washington, D.C., 2002.
[40] White, T. D., J. E. Haddock, A. J. T. Hand, and H. Fang. Contributions of Pavement Structural Layers to Rutting of Hot Mix Asphalt Pavements. Transportation Research Board, 2002.
[41] Haddock, J. E., J. T. H. Adam, F. Hongbing, and T. D. White. Determining Layer Contributions to Rutting by Surface Profile Analysis. ASCE Journal of Transportation Engineering, Vol. 131, No. 2, 2005, pp. 131-139.
[42] Hoerner, T. E., M. I. Darter, L. Khazanovich, L. Titus-Glover, and K. L. Smith. Improved Prediction Models for PCC Pavement Performance-Related Specifications - Volume 1: Final Report. Federal Highway Administration, 2000.
[40] White, T. D., J. E. Haddock, A. J. T. Hand, and H. Fang. Contributions of pavement structural layers to rutting of hot mix asphalt pavements.In, Transportation Research Board, 2002. [41] Haddock, J. E., J. T. H. Adam, F. Hongbing, and T. D. White. Determining Layer Contributions to Rutti ng by Surface Profile Analysis. ASCE Journal of Transportation Engineering, Vol. Vol. 131, No. 2, 2005, pp. 131 - 139. [42] Hoerner, T. E., M. I. Darter, L. Khazanovich, L. Titus - Glover, and K. L. Smith. Improved prediction Models for PCC Pavement Performanc e - Related Specifications - Volume 1: Final Report.In, Federal Highway Administration, 2000. 286 [43] Haider, S. W., W. Brink, N. Buch, K. Chatti, and G. Baladi. Preparation for Implementation of the Mechanistic - Empirical Pavement Design Guide in Michigan - Par t 3: Local Calibration and Validation of the Pavement - ME Performance Models.In, Michigan Department of Transportation, 2014. [44] Sachs, S., J. M. Vandenbossche, and M. B. Snyder. Calibration of the National Rigid Pavement Performance Models for the Pavem ent Mechanistic - Empirical Design Guide.In Transportation Research Board 94th Annual Meeting , 2015. [45] The MathWorks, I. Global Optimization Toolbox: User's Guide (r2015a).In, 2015. [46] Varma, S., M. Kutay, and E. Levenberg. Viscoelastic Genetic Algorith m for Inverse Analysis of Asphalt Layer Properties from Falling Weight Deflections. Transportation Research Record: Journal of the Transportation Research Board , No. 2369, 2013, pp. 38 - 46.