A METHOD FOR THE VALIDATION OF COMPUTATIONAL MODELS USING DIGITAL IMAGE CORRELATION AND IMAGE DECOMPOSITION

By

Christopher M. Sebastian

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Mechanical Engineering

2011

ABSTRACT

A METHOD FOR THE VALIDATION OF COMPUTATIONAL MODELS USING DIGITAL IMAGE CORRELATION AND IMAGE DECOMPOSITION

By Christopher M. Sebastian

Recently, there has been a drive to create more efficient designs of vehicles. This has led to the use of new materials and more rigorous optimization of components. As the design envelope is pushed and new materials are explored, it is more important than ever to ensure that designs are based on a model that has been validated. Currently, a typical validation consists of a simple point-to-point comparison. This method assumes that the model has identified the areas of high stress correctly, but leaves the rest of the model un-validated. This thesis proposes a method to validate computational models using image decomposition and digital image correlation. Digital Image Correlation (DIC) is capable of producing full-field strain maps with on the order of 10⁵ to 10⁶ data points. The challenge then becomes comparing the massive amounts of experimental and simulation data. Image decomposition provides a way to reduce those data sets to less than 100 moments while preserving the original information. These compressed data sets can be compared more easily than the original data. As a part of the validation process, the DIC system was calibrated according to a recently published draft ISO standard. Finally, two studies are presented which use image decomposition to compare simulation and experimental results. The first involves an aluminum plate with a hole loaded in tension. The second is a composite protective panel which is bolted at one end and loaded compressively.

Copyright by CHRISTOPHER M. SEBASTIAN 2011

To my parents, Mark and Mary and my godparents, Bill and Cherie

ACKNOWLEDGMENT

I am grateful to my advisor, Dr. Eann Patterson. I have learned a lot from him over the past two years, and I look forward to learning more in the future. I would also like to thank Amol Patki for helping to get me started with my research, and Victor Wang for his collaboration. I would like to acknowledge the Army and the Tank Automotive Research, Development and Engineering Center for their generous support for this research and all of the people at TARDEC that helped to make this work possible, specifically Dr. Douglass Templeton and also the LWS group.

TABLE OF CONTENTS

List of Tables
List of Figures

1 Introduction
  1.1 Background
  1.2 Motivation
  1.3 Thesis Overview

2 Literature Review
  2.1 Validation of Computational Models using Experimentation
    2.1.1 Calibration
  2.2 Shape Descriptors
    2.2.1 Moment Descriptors
    2.2.2 Image Reconstruction
  2.3 Objectives
3 Calibration
  3.1 Reference Material for Calibration
  3.2 Experimental Set-Up and Procedure
    3.2.1 Reference Material
    3.2.2 Determination of the correction factors k and η
    3.2.3 Calibration Measurements
  3.3 Results and Discussion
    3.3.1 Correction Factors and their Uncertainty
    3.3.2 Specimen Dimensions and Associated Uncertainty
    3.3.3 Measurement System Results and Associated Uncertainty
    3.3.4 Overall System Results and Associated Uncertainty
  3.4 Conclusions

4 Experimental Methods
  4.1 Aluminum Plate Tensile Study
    4.1.1 Specimen Description
    4.1.2 Experimental Procedure
  4.2 Composite Plate Study
    4.2.1 Specimen Description
    4.2.2 Experimental Procedure

5 Image Decomposition Methodologies
  5.1 Zernike Moment Descriptor
  5.2 Tchebichef Descriptor
  5.3 Hybrid Fourier-Moment Descriptors
  5.4 Summary of Results

6 Results from Aluminum Plate Study
  6.1 Introduction
  6.2 Computational Model
  6.3 Comparison of Results using Image Decomposition

7 Results from Composite Panel Study
  7.1 Introduction
  7.2 Panel Description and Manufacturing
  7.3 Computational Model
  7.4 Comparison of Results using Image Decomposition

8 Discussion
  8.1 Future Work

9 Conclusions

References

LIST OF TABLES

3.1 The critical dimensions of the calibration specimen and their associated uncertainties
3.2 Summary of the fit parameters calculated for each of the displacements, as well as their associated 95% confidence levels
3.3 Summary of the uncertainty values for each of the load steps. The uncertainty values listed are for the maximum strain of each load step. Unless otherwise noted, the values listed are µstrain.
7.1 The material properties used for the finite element models. All values are in MPa, unless otherwise noted.
7.2 ANOVA results from the sensitivity study of the finite element model
8.1 Summary of reconstruction results for the experimental strain map
8.2 Summary of reconstruction results for the cropped experimental strain map
8.3 Summary of correlation results from the composite panel study.

LIST OF FIGURES

1.1 The desire by companies to move from a reactive design process to a predictive process has resulted in more emphasis being placed on modeling and simulation [1].
1.2 Flowchart for verification and validation activities for simulation and experimental methods [2].
2.1 Flowchart showing the model hierarchy from concept to computational [2].
3.1 Drawing of the reference material showing some of the design features. A: Measurement tabs. B: Mounting holes for tensile loading and semi-circle for compressive loading. C: Interlock to prevent overload. D: Whiffle tree that allows for monolithic manufacturing of the specimen. E: Gauge section.
3.2 Close-up of the calibration specimen installed in the MTS load frame. The digital indicators are mounted to the specimen by custom-machined mounting blocks.
3.3 Model of the reference material showing the strain gauge locations and the reference coordinate system.
3.4 The pattern produced on the surface of the specimen by using grit paper.
3.5 Plot of the strain from the gauges mounted to the top and bottom surfaces of the specimen. A linear fit was applied to the data to determine the correction factors k and η.
3.6 The measured and calculated εxx strain values for each of the three displacement load steps: 27.5 µm (top), 110 µm (middle), and 218 µm (bottom).
3.7 Principal strain data generated by the DIC system for the 218 µm displacement load. For interpretation of the references in color in this and all other figures, the reader is referred to the electronic version of this thesis.
3.8 The total measurement uncertainty of a system is a combination of uncertainties from the calibration specimen and from the measurement system.
3.9 Plots of the expanded uncertainty of the calibration specimen uCS along with the mean residual deviations αk + βk y and the scatter band around the deviations ±2u(dk) for each load step: 27.5 µm (top), 110 µm (middle), and 218 µm (bottom).
3.10 One of the early measurements obtained for the 27.5 µm displacement load. It was determined that the original hand polishing technique was causing deviations in the measured strain values at the extreme edges of the beam.
4.1 The dimensions of the aluminum plate used for this study. The area outlined by the dashed line indicates the region of interest for the experimental data capture.
4.2 Pictures showing the original loading configuration using hydraulic wedge grips (left), and the final configuration using pin loading (right).
4.3 The experimental test setup used for the aluminum plate study.
4.4 The calibration procedure using an 8 mm x 8 mm plate supplied with the DIC system.
4.5 Dimensions of the composite panel described as a function of the tile size a.
4.6 The model showing how the protective panel attached to the test fixture and where the loading was applied (top). The panel installed in the test fixture (bottom).
4.7 Picture of the revised experimental test setup for the composite panel study.
5.1 An example of image decomposition and reconstruction of a maximum principal strain map using Zernike moments with a maximum order of 8, which results in 45 moments.
5.2 This decomposition and reconstruction uses a subset of the data from Figure 5.1. The data has been cropped to eliminate the discontinuity resulting from the hole.
5.3 An example of image decomposition and reconstruction of a maximum principal strain map using Tchebichef moments with a maximum order of 8, which results in 45 moments.
5.4 This decomposition and reconstruction uses a subset of the data from Figure 5.3. The data has been cropped to eliminate the discontinuity resulting from the hole.
5.5 An example of applying the Discrete Fourier Transform (DFT) to an experimental data set, which produces a phase map and a magnitude map. The magnitude is also shown separated into real and imaginary parts.
5.6 The Discrete Fourier Transform (DFT) applied to a subset of the experimental data set from Figure 5.5. The subset has been cropped to eliminate the discontinuity caused by the hole.
5.7 The results of applying the hybrid Fourier-Zernike decomposition to the experimental strain data. In comparison to the decomposition that used only the Zernike descriptor, the hole feature has been retained in this reconstruction.
5.8 The results of applying the hybrid Fourier-Tchebichef decomposition to the experimental strain data. In comparison to the decomposition that used only the Tchebichef descriptor, the hole feature has been retained in this reconstruction.
5.9 Comparison of the correlation of the original and reconstructed strain maps for the different decomposition methodologies.
5.10 Comparison of the correlation of the original and reconstructed strain maps for the cropped experimental results.
6.1 The load history of the plate up to failure. The data from step 50 was used for this comparison study.
6.2 This figure shows the different mesh sizes overlaid on the finite element analysis results. The maximum principal strain is shown.
6.3 An example of one of the meshes used, showing the boundary conditions. The elements highlighted in blue were fixed in all degrees of freedom. The elements highlighted in red had a force applied to them in the −y direction.
6.4 Comparison of the Zernike moments generated by using the logarithm of the magnitude of the Fourier transform.
6.5 The Zernike moments generated from the simulation data plotted against the moments from the experimental data (top). The moments plotted without the first one to show the detail of the others (bottom).
6.6 Comparison of the Tchebichef moments generated by using the logarithm of the magnitude of the Fourier transform.
6.7 The Tchebichef moments generated from the simulation data plotted against the moments from the experimental data (top). The moments plotted without the first one to show the detail of the others (bottom).
7.1 A cross-section showing the composition of the panel which consists of ceramic tiles bonded to an aluminum backing plate with urethane, and a pre-preg composite skin wrapped around the panel.
7.2 Screenshots of the protective panel model used for the finite element analysis without (top) and with (bottom) the composite cover layer.
7.3 Results of the finite element analysis superimposed on the experimental setup, indicating the areas that were used for the comparison with the experimental data.
7.4 The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results from the area above the bolt hole (RS-bh).
7.5 The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line is fitted to all of the data while the dashed line excludes the first moment.
7.6 The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results from area RS-3.
7.7 The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line is fitted to all of the data while the dashed line excludes the first moment.
7.8 The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results from area LS-1.
7.9 The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line is fitted to all of the data while the dashed line excludes the first moment.
7.10 The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results from area LS-3.
7.11 The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line is fitted to all of the data while the dashed line excludes the first moment.
8.1 Strain map of the panel from the simulation results, with the boxed areas indicating where the experimental data were obtained.
8.2 Comparison of the speckle patterns on the left side of the panel (left) and the right side of the panel (right).
8.3 Flowchart of verification and validation activities for simulation and experimental methods [2]. The outlined area indicates the activities addressed by this thesis.
8.4 Flowchart showing the decomposition process developed in this thesis to compare simulation and experimental outcomes. This flowchart addresses the outlined area from Figure 8.3.

Chapter 1

Introduction

Computational methods of stress and strain analysis, such as finite element analysis, have become a very important tool in the engineering design process, especially as the cost of computing power has decreased. Initially, this led to a shift in the engineering design community from experimentation to computational analysis. Experimentation can be slow and costly, and often the desired geometry has to be greatly simplified in order to perform tests. Computational methods allow complex geometries (and many different iterations) to be tested for a relatively low cost. This has led to the creation of products from largely un-validated computational models. The problem with designing in this way is that any problems with the computational model will not be discovered until the prototype phase, or worse, during production. Companies have since realized that money is more wisely spent earlier in the design phase, where discovering a problem can save several times the cost of finding the same problem in the prototype or manufacturing phase [1]. This has led to the development of such tools as Design For Six Sigma (DFSS), which provide a disciplined approach to the design process with the ultimate goal of no defects in the manufactured product.

Figure 1.1: The desire by companies to move from a reactive design process to a predictive process has resulted in more emphasis being placed on modeling and simulation [1].

This is a very ambitious goal, and DFSS has varying levels of success depending on the type and scope of product, but the underlying concepts are sound. Figure 1.1 illustrates the desire in industry to shift from a test-heavy design approach to one that makes extensive use of modeling and simulation. Rather than going through an iterative design-and-test process, companies would prefer to perform only enough tests to validate their computational models. The iterative process can then be carried out through less expensive simulations. As a result, more emphasis has once again been placed on experimental methods and their role in the validation process of computational models.

Making a comparison between the experimental data and computational results presents several challenges. For a simple 1-D or 2-D problem, a quick graphical comparison can usually be made by simply plotting the two sets of results and gauging how well they match. If a more extensive analysis is desired, the data points must align with each other, or interpolation must be performed. The comparison becomes even more difficult in 3-D, as one must ensure that the coordinate systems are properly aligned and that both data sets are scaled and oriented in the same way.
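To make the interpolation step just mentioned concrete, the fragment below sketches how scattered simulation results might be resampled onto a regular measurement grid before any point-by-point comparison. It is purely illustrative and not part of the original work; the names fe_points, fe_values, dic_x, and dic_y are hypothetical, all data are synthetic stand-ins, and numpy and scipy are assumed to be available.

```python
# Illustrative sketch: resampling scattered finite-element nodal results onto
# the regular grid of an optical measurement so the fields can be compared.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
fe_points = rng.uniform(0.0, 1.0, size=(500, 2))             # FE node (x, y)
fe_values = np.sin(3.0 * fe_points[:, 0]) * fe_points[:, 1]  # stand-in strain

# Regular grid of measurement points (e.g., from a DIC system).
dic_x, dic_y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))

# Interpolate the simulation onto the measurement grid; the two fields can
# now be differenced point by point.
fe_on_grid = griddata(fe_points, fe_values, (dic_x, dic_y), method="linear")
```

Even this simple resampling assumes that the two coordinate systems already coincide; any residual translation, rotation, or scale mismatch must be removed first, which is precisely the burden that the moment-based comparison proposed next avoids.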
This work proposes the use of image decomposition to perform the comparison between experimental and computational data sets. Image decomposition can be used to address the alignment and scaling issues, as well as providing data compression. Often a data set with 10⁵ or 10⁶ data points can be represented by fewer than 100 moments resulting from an image decomposition. If these moments can be shown to capture with sufficient accuracy the information from the original data set, then a simpler comparison is possible using 100 moments as opposed to 10⁵ or more data points.

1.1 Background

Current validation practices, especially in industry, usually consist of point-to-point comparisons of areas of high stress identified by a computational model. This assumes that the model has correctly identified the high stress areas, and leaves the rest of the model un-validated. To make matters worse, the design is often optimized by removing material from the un-validated areas of predicted low stress. For lightweight structures that are pushing the design boundaries, this could potentially cause problems if the model were not correct.

Optical techniques such as digital image correlation (DIC) [3], electronic speckle pattern interferometry (ESPI) [4], and digital photoelasticity [5] have the ability to produce full-field stress and strain data that could be used for more comprehensive validations of computational models. While these techniques are well-established in the experimental mechanics field, the wider stress analysis community, and industry in particular, seem reluctant to adopt these methods. One possible explanation for this lack of interest is the almost complete absence of standards or accepted methodologies for processes such as the calibration of the optical instruments. This thesis addresses that problem by performing a calibration of the optical strain measurement device used for the experimental data collection.

Calibration of the strain measurement system is an essential step in acquiring data for the validation of computational models. The recently published guide from the Standardisation Project for Optical Techniques of Strain measurement (SPOTS) seeks to fill the current lack of standards by providing a universally appropriate calibration procedure for static and pseudo-static experiments [6]. The product of performing this calibration procedure is a minimum level of uncertainty for the measurement system, which is important to provide confidence in the experimental results used in the validation process.

1.2 Motivation

The primary motivating factor for this thesis is the concept of design validated by experiment, which involves a structured methodology to use experimentation to ensure the quality of components designed using computational simulations. As was previously discussed, this concept has become especially important as more emphasis has been placed on the use of computers for simulation. The flowchart in Figure 1.2 shows the dual paths of simulation and experimentation ultimately coming together with a quantitative comparison between the two. This flowchart is from the ASME Guide for the Verification and Validation of Computational Solid Mechanics [2], which states that a comparison should be made, but does not provide a procedure to do so.
Figure 1.2: Flowchart for verification and validation activities for simulation and experimental methods [2].

Typically, image decomposition has been used in biometrics (for face, iris, and fingerprint recognition), and in image analysis and retrieval. The properties that make image decomposition well-suited to those tasks have also given it potential for use in the experimental mechanics field. Work by Wang demonstrated the use of image decomposition for mode shape recognition [7], and Patki used it for damage assessment in composite samples [8]. There are many different types of shape descriptors available, but this thesis concentrates on the Zernike [9], Tchebichef [10], and Fourier descriptors [11]. Image decomposition makes the data sets much more manageable, as it is possible to reduce a data set by several orders of magnitude. The Zernike descriptor is invariant under translation, rotation, and scaling, which eliminates the need to exactly align the two data sets to be compared.

1.3 Thesis Overview

This thesis is organized as follows. A literature review is presented which describes the current state of validation practices used for computational models. The calibration of a digital image correlation system is presented. The various types of shape descriptors are investigated to identify which ones to pursue for image decomposition in experimental mechanics. An overview of the current uses of image decomposition and reconstruction is given. The objectives of this thesis are described in relation to the current state of the art and the knowledge gaps that this thesis will fill.

The Experimental Methods section begins with a brief overview of digital image correlation, which was the primary method used to acquire experimental data. The practical knowledge necessary to perform the calibration of a DIC system is described, and the results of performing such a calibration are presented. The experimental setups, methods, and procedures used to acquire experimental data are detailed.

The image decomposition methodologies used for the analysis in this thesis are described in detail. The basic underlying mathematics of each descriptor is described. An example decomposition is provided for each descriptor to illustrate the various strengths and weaknesses of the different methods. The hybrid decomposition technique resulting from applying both the Fourier transform and a moment descriptor is discussed.

The results of two studies using image decomposition to compare full-field experimental data obtained from digital image correlation and finite element analysis are presented. The first study uses the data obtained from loading a thin aluminum plate with a hole in tension. The second study compares the strain distributions generated from applying a structural load to an inhomogeneous composite panel.
Finally, conclusions and future work are discussed.

Chapter 2

Literature Review

2.1 Validation of Computational Models using Experimentation

Over the past 15 years, several different guides, standards, and editorials have been published which address the need for verification and validation of computational models. In 1998, the American Institute of Aeronautics and Astronautics (AIAA) published a guide for the computational fluid dynamics community [12]. The Department of Defense has published several iterations of instructions regarding Modeling and Simulation Verification, Validation, and Accreditation activities, the latest of which was issued in 2009 [13]. In 2005, the journal Clinical Biomechanics issued an editorial statement which defined the minimal requirements a numerical study must meet for a paper to be considered for publication. The most recent document published concerning the solid mechanics community is the 2006 ASME Guide for Verification and Validation in Computational Solid Mechanics (ASME V&V) [2], which incorporates material from a paper published a few years earlier by Oberkampf et al. [14]. An overview and summary of the guide was published by Schwer [15], who was the chair of the Performance and Test Codes (PTC) 60 committee that produced the guide. In the overview, Schwer states that one of the most common misconceptions about the ASME V&V guide was that it would provide a definitive verification and validation procedure for computational solid mechanics. Rather, it is a "foundational document" which provides a framework and defines terminology to create a standardized language. The flowchart in Figure 1.2 outlines the procedure that should be followed to compare simulation outcomes and experimental outcomes. The relevant terms defined in the ASME guide are included throughout this section, and are given in bold type. The following are some basic terms necessary for the discussion in this section.

experimental data: raw or processed observations (measurements) obtained from performing an experiment.

experimental outcomes: features of interest extracted from experimental data that will be used, along with estimates of the uncertainty, for validation comparisons.

simulation: the computer calculations performed with the computational model.

simulation outcomes: features of interest extracted from simulation results that will be used, along with estimates of the uncertainty, for validation comparisons.

simulation results: output generated by the computational model.

The guide is broken up into four main sections: Introduction, Model development, Verification, and Validation. The primary concern of this thesis is the validation section, which deals with the comparison of the computational model and experimental results. Often, the terms verification and validation are used interchangeably, but there is a distinction between them:

verification: the process of determining that a computational model accurately represents the underlying mathematical model and its solution.

validation: the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.

The main goal of the verification process is to ensure that the mathematical model and algorithms used by the software are correct and that they produce accurate results, while the validation process ensures that the physics of the model is correct by comparing the results to experimental data.
There are also several different types of models, each representing a different concept or component. The flowchart in Figure 2.1 shows the progression from the conceptual model to the final computational model. This thesis focuses on the results produced from the computational model.

model: the conceptual, mathematical, and numerical representations of the physical phenomena needed to represent specific real-world conditions and scenarios. Thus, the model includes the geometrical representation, governing equations, boundary and initial conditions, loadings, constitutive models and related material parameters, spatial and temporal approximations, and numerical solution algorithms.

Figure 2.1: Flowchart showing the model hierarchy from concept to computational [2].

Both the ASME guide [2] and Oberkampf et al. [14] discuss the use of metrics to make a quantitative comparison between the simulation outcomes and the experimental outcomes. The two sources differ slightly in the wording of the definition of "metric," but it is essentially a measure. The metric should quantify both errors and uncertainties, and "actively resolve assessment of confidence for relevant system response measures for the intended application of the code" [14]. In essence, the ideal metric would provide a number indicating the level of confidence in the agreement between the simulation outcomes and experimental outcomes, taking into account the error and respective uncertainties.

An examination of the literature published in the last ten years reveals that a common type of comparison is to plot the experimental data along with the computational data generated from the model together on the same plot [16, 17]. A metric is not used when performing this type of comparison. Instead, the extent of the agreement is qualitatively judged by how well the two lines match up with one another. To perform a simple quantitative assessment, the relative error between the two sets of data can be calculated according to Equation 2.1 [18].

$$\mathrm{Relative\ Error} = \frac{\mathrm{computation} - \mathrm{measured}}{\mathrm{measured}} \qquad (2.1)$$

As an illustration, a computed value of 1050 µstrain against a measured value of 1000 µstrain gives a relative error of 0.05, or 5%. The relative error metric can be effectively applied to point comparisons or lines of data, but is not very effective for more complex comparisons. For example, it is not a good choice for comparing tensors, or data with time or spatial components. Also, as the measured value approaches zero, the quantity becomes undefined. This can be a problem when comparing things like waveforms, in which the signal can cross zero. Even with the availability of full-field methods of collecting strain data, the comparison is still often reduced to checking a few points rather than using the full data set. There are studies available which describe the collection of data using full-field methods, but do not explicitly describe the validation procedure, e.g. [19].

2.1.1 Calibration

There is not much literature on the calibration of full-field optical methods of stress and strain measurement, and most of it was produced recently. There are two ASTM standards from 1997-98 stemming from the glass industry that discuss a method of calibration for surface stress measuring devices used in glass production [20, 21]. The procedure consists of using either a cantilever beam or a beam subject to four-point bending, loaded in such a fashion as to produce a known stress on the surface of the beam.
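For context (this is standard Euler-Bernoulli beam theory rather than material taken from the cited standards), the surface stress of such a beam is "known" because a rectangular cross-section of width b and depth h carrying a bending moment M develops a surface stress of

$$\sigma_{\mathrm{surface}} = \frac{M\,(h/2)}{b h^3 / 12} = \frac{6M}{b h^2}.$$

In four-point bending, the bending moment between the two inner loading points is constant, so this surface stress is uniform over the central span, which is what makes the configuration attractive for calibration.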
In 2002, ASTM published a guide for evaluating optical strain measurement systems, but its primary purpose was to help users of optical measurement systems understand some of the accuracy issues related to them [22]. This standard did help to establish a framework and terminology relevant to the performance of calibrations of optical systems.

In 2002, a European consortium was formed involving a group of university and national laboratories, device designers and manufacturers, and end-users. This consortium eventually led to the Standardisation Project for Optical Techniques of Strain measurement (SPOTS), which published a two-part online guide in 2007, Calibration and Evaluation of Optical Strain Measurement Systems [6]. The first part of the guide describes the development of a reference material and the procedure to perform a calibration of an optical system of strain measurement. The second part describes the development of a standardized test material for the evaluation of optical systems of strain measurement. In 2008, a calibration of a speckle interferometry system that followed the SPOTS guidelines was published [23]. An updated version of the guide was published in booklet form in 2010, and the process has been started to implement the guide as an ISO standard.

2.2 Shape Descriptors

Shape descriptors are part of a larger body of algorithms used for shape analysis. Generally speaking, there are space-domain techniques and scalar transform techniques. A review paper by Pavlidis [24] helped to classify the many types of shape analysis algorithms. Techniques which produce another image are space-domain techniques; those that produce a number or numbers (scalar or vector) are scalar transform techniques. Shape descriptors can also be information preserving or information non-preserving. Descriptors which are considered information preserving allow accurate reconstruction of the image from the shape descriptors. This thesis is primarily concerned with moment descriptors, and with combining moment descriptors with the Fourier transform.

2.2.1 Moment Descriptors

The use of moment descriptors is considered an information preserving scalar transform technique. There are two main types of moment descriptors: continuous and discrete. The commonly used continuous moments are Legendre and Zernike, both of which have the advantage of being invariant under translation, rotation, and scale [25]. This means that transformations which do not impose a strain on the object will produce the same values for the moments. This is a very important property, which means that two data sets do not have to be perfectly aligned or of the same scale to make a comparison using moment descriptors. There are several types of discrete moment descriptors, but the two most common are Tchebichef and Krawtchouk. Moment descriptors are also orthogonal, which minimizes redundancy.

The Zernike moments are only valid over a unit circle domain. In order to use the continuous Zernike moments with a discrete image, the Zernike polynomial must be approximated. Hwang provides an accurate form of the Zernike moments for discrete images [26]. Zernike moments are commonly used in biometrics for tasks such as face recognition [27, 28] and handprint recognition [29]. They have also found their way into the medical field for tumor and polyp detection [30, 31].
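To make the preceding description concrete, the radial polynomials and the resulting moments can be computed directly from their standard definitions. The following is a minimal illustrative sketch, not code from this thesis: it uses the textbook continuous definitions on a pixel grid and ignores the discrete-domain refinements of Hwang [26], and the strain map is a random stand-in.

```python
# Minimal sketch of Zernike moment computation from the standard definitions.
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_n^|m|(rho), for n - |m| even."""
    m = abs(m)
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

def zernike_moment(image, n, m):
    """Moment Z_nm of a square image mapped onto the inscribed unit disk."""
    size = image.shape[0]
    coords = np.linspace(-1.0, 1.0, size)
    x, y = np.meshgrid(coords, coords)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    disk = rho <= 1.0                    # the basis is defined on the disk only
    basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    pixel_area = (2.0 / size) ** 2       # pixel area in unit-disk coordinates
    return (n + 1) / np.pi * np.sum(image[disk] * basis[disk]) * pixel_area

# Example: all moments up to order 8, i.e. the 45 moments used in Chapter 5.
field = np.random.rand(128, 128)         # stand-in for a strain map
moments = {(n, m): zernike_moment(field, n, m)
           for n in range(9) for m in range(-n, n + 1) if (n - m) % 2 == 0}
```

A reconstruction then sums the retained moments against the basis functions, and a correlation or mean-square-error check against the original field indicates whether enough moments were kept, which is the criterion discussed in the next subsection.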
2.2.2 Image Reconstruction

In image decomposition, there is a desire to ensure that the moments that have been produced as a part of the process accurately describe the original image. For the discrete Tchebichef moments, an image can be completely described by a finite set of moments related to the size of the image [32]. The continuous moments use integrals that must be approximated by summations, which results in numerical errors. This affects the invariance and orthogonality properties, which in turn makes it impossible to achieve perfect reconstruction of a discrete image [10]. Usually it is not necessary to use all of the moments, or "full-spectrum," to adequately describe an image. This is because the lower-order moments tend to capture the shape of the object, while the higher-order moments tend to provide detail. Typical examples of reconstruction from continuous moments use a binary image of a letter or a character, and perform the reconstruction with different orders of moments [9, 33]. In comparison, there are studies available for the discrete moments which evaluate the reconstruction using grayscale images [34]. A metric such as the mean-square error or the signal-to-noise ratio is used to compare the reconstructed image to the original image.

While the discrete moment methods would appear to be the better choice for analyzing images, they do have some disadvantages. Discrete moments do not have the discretization error that the approximated continuous moment functions exhibit, but they do show numerical instability for larger images. Bayraktar et al. showed that for a 256 x 256 image, the reconstruction error decreases as the order increases up to a point, then jumps up suddenly [35]. This typically only affects larger images using high orders of moments.

2.3 Objectives

While there is an awareness in the stress analysis community that computational models need to be validated by experiment, the current practice of using strain gauges to make a point-by-point comparison is not sufficient. This section has presented some of the work that is being done to change this, but currently there is a very limited amount of literature describing the use of full-field experimental data to validate simulations [36]. However, there has been some recent success using image decomposition for vibration mode-shape recognition [7, 37, 38] and damage characterization [8]. The objectives of this thesis are therefore:

1. To perform a calibration of the experimental measurement system to quantify the level of uncertainty in future measurements.

2. To develop an image decomposition methodology using moment descriptors.

3. To use the methodology developed to compare experimental and simulation outcomes.

Chapter 3

Calibration

This chapter describes the use and results of the procedure published by the Standardisation Project for Optical Techniques of Strain measurement (SPOTS) for a successful calibration of a digital image correlation system. The details of the calibration specimen used are discussed, together with the procedure and the criteria that must be met to achieve calibration. The system was evaluated over a range of 289 to 2110 µstrain, with a resulting calibration uncertainty ranging from 14.0 to 28.7 µstrain. This calibration uncertainty will be used in the validation process to provide confidence in the experimental results.
3.1 Reference Material for Calibration

To perform a successful calibration of a strain measurement system, it is necessary to be able to produce a known strain field to compare to the results generated by the measurement system. This can be achieved using a reference material. The SPOTS consortium performed an extensive investigation to identify five essential attributes (easy optical access, lack of hysteresis, an in-plane strain field, traceability to an international standard, and utilization of the length standard for traceability) and 24 desirable attributes of a reference material for the calibration of a full-field optical system for strain measurement [6]. These attributes were used to assess a large number of candidate designs for the reference material, resulting in the selection of the design shown in Figure 3.1, which consists of a central beam loaded in symmetrical four-point bending.

Figure 3.1: Drawing of the reference material showing some of the design features. A: Measurement tabs. B: Mounting holes for tensile loading and semi-circle for compressive loading. C: Interlock to prevent overload. D: Whiffle tree that allows for monolithic manufacturing of the specimen. E: Gauge section.

The reference material has several noteworthy design features. Reproducibility between experiments and between laboratories was a major concern of the SPOTS group, which has been addressed by making the specimen and loading frame monolithic. The beam and the loading frame are therefore machined from a single piece of material. The specimen is designed such that the user has options for applying and measuring displacement loading. Feature A in Figure 3.1 shows the tabs which allow the use of a variety of measurement devices, which need to have been calibrated with respect to the standard for length. Feature B shows the holes for tensile loading of the specimen as well as the half-cylinder at the top to ensure proper alignment in the case of compressive loading. Feature C is a simple design feature which serves as an interlock to prevent an overload of the reference material that could cause plastic deformation. Feature D is one of the whiffle-trees which support the beam. Ideally rollers or knife edges would be used to minimize the transmission of lateral and rotational forces, but the monolithic design makes their use difficult. The constraint imposed by the use of the whiffle-trees can be described by Equation 3.1, where the correction factors k and η are unknown and must be found either through experimentation or analysis. The quantity vavg is the average of the two displacement readings obtained from the measurement devices attached to Feature A. The experimental approach is used here, and is explained in more detail in the next section.

$$\varepsilon_{xx} = \frac{v_{avg}\,(k y + \eta)}{6 W^2} \qquad (3.1)$$

Finally, Feature E is the gauge section, which is the area between the two inner loading points where the bending moment is constant along the length of the beam. The strain is measured in this region using the instrument being calibrated, and the results are compared to the analytical solution. A linear least-squares fit is used to analyze the differences between the two quantities, producing two parameters, α and β. The SPOTS guidelines provide a detailed procedure for this comparison and for determining the calibration uncertainty, which has been followed here.
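As a quick numerical check of Equation 3.1 (added here for illustration, using the dimensions and correction factors reported later in Table 3.1), the predicted strain at the top surface of the beam, y = W/2 = 7.5 mm, for the largest displacement step is

$$\varepsilon_{xx} = \frac{0.218\ \mathrm{mm} \times (0.986 \times 7.5\ \mathrm{mm} - 0.068\ \mathrm{mm})}{6 \times (15\ \mathrm{mm})^2} \approx 1.18 \times 10^{-3},$$

or roughly 1180 µstrain, consistent with the roughly ±1100 µstrain scale of the DIC results shown later in Figure 3.7.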
3.2 Experimental Set-Up and Procedure

3.2.1 Reference Material

The reference material used for this calibration procedure was manufactured using electric discharge machining (EDM) from a single piece of 5 mm thick 2024-T351 aluminum. The specimen beam depth (W) was 15 mm. A detail of the gauge section of the calibration specimen is shown in Figure 3.1, which illustrates the dimensions W, a, and c. The values for these dimensions are listed in Table 3.1.

Table 3.1: The critical dimensions of the calibration specimen and their associated uncertainties

Parameter   Units   Nominal Value     Uncertainty, u
W           mm      15                0.017
a           mm      45                0.017
c           mm      15                0.017
vavg        µm      27.5, 110, 218    0.89
k           -       0.986             0.006
η           mm      -0.068            0.006

3.2.2 Determination of the correction factors k and η

There were two steps in the calibration process: one to determine the correction factors described in Equation 3.1, and the second to acquire the strain data using the instrument being calibrated. For both steps, the specimen was mounted in a servo-hydraulic test machine (MTS 810, Eden Prairie, MN) using the loading holes. This arrangement allowed the specimen to be loaded in tension. The specimen was oriented upside down in the load frame relative to Figure 3.1 and the usual orientation in the guidelines. This minimized the rigid body movement of the specimen relative to the camera. To measure the applied displacement in the specimen, two aluminum blocks were machined to hold a pair of digital displacement indicators (Mitutoyo 543-392, Kawasaki-shi, Japan) on each side of the specimen. The calibration results are evaluated using the unit of length, so it is important that the displacement measurement devices have a calibration history traceable to an international standard. In this case, the indicators were purchased with calibration certificates which provide traceability to the National Institute of Standards and Technology (NIST). The certificates provide a level of expanded uncertainty for each of the indicators, which was taken into account when calculating the overall uncertainty of the system. Figure 3.2 shows the calibration specimen installed in the test machine with the digital indicators attached using the custom mounting blocks.

Figure 3.2: Close-up of the calibration specimen installed in the MTS load frame. The digital indicators are mounted to the specimen by custom-machined mounting blocks.

Strain gauges (Vishay EA-13-120LZ120/E, Raleigh, NC) were bonded to the top and bottom surfaces of the gauge section. The relatively small (5 mm) thickness of the specimen meant that single-axis gauges had to be used. Figure 3.3 shows the location of the strain gauges on the specimen. The strain measurements were acquired using two digital strain indicators (Measurements Group P3500, Raleigh, NC). To find the correction factors k and η described earlier, it was necessary to acquire data from the strain gauges as a function of the displacement applied to the specimen. For this, the test machine was operated in displacement control, and readings were taken from the strain gauges. In this way, approximately 30 strain measurement readings were obtained as a function of the applied displacement load.

Figure 3.3: Model of the reference material showing the strain gauge locations and the reference coordinate system.

3.2.3 Calibration Measurements

To prepare a specimen for use with a digital image correlation system, its surface is typically painted with a white and black speckle pattern. However, this calibration was performed with the specimen surface prepared using a procedure described by Lopez-Crespo et al. [39], which eliminated the need to use paint.
First the surface was sanded with 220 grit and then 400 grit paper, which yielded a high-contrast pattern for the DIC images. A typical image of the surface is shown in Figure 3.4.

Figure 3.4: The pattern produced on the surface of the specimen by using grit paper.

A macro lens (Tamron SP 90mm F/2.8 1:1 macro, Saitama-city, Japan) was fitted to the DIC camera, which made it possible to fill the camera's field of view with the gauge section of the specimen. To provide bright, uniform illumination of the specimen, a fiber optic ring light (Edmund Optics 58-839, Barrington, NJ) and illuminator (Dolan-Jenner DC-950H, Boxborough, MA) were used. The ring light was used in place of the standard LED floodlight because the floodlight tended to produce glare when used with the highly reflective bare metal surface. The camera and lens were positioned perpendicular to the surface of the gauge section at a distance of 265 mm.

The SPOTS procedure states that for a proper calibration test setup, the gauge section of the beam should fill 90% of the short dimension of the detector, with a resolution of at least 10 x 10 pixels. The detector used for this calibration has a resolution of 1040 x 1392 pixels, which for 90% coverage of the short dimension yields an area of approximately 1.3 million pixels. The procedure also states that the applied displacement should span a range between 10% and 90% of the maximum allowable load. The maximum allowable deflection for a beam with a depth of 15 mm is set by the minimum slit width that can typically be manufactured using EDM for the interlock (C in Figure 3.1; see Appendix A of the SPOTS Guidelines at www.opticalstrain.org). This value is typically between 0.25 and 0.30 mm, which gives a maximum allowable displacement range of 225 to 270 µm. To avoid damage to the specimen, a maximum displacement value of 225 µm was chosen.

In order to ensure a correct initial displacement reading, the digital indicators were installed in their proper locations and zeroed with the specimen lying flat on a laboratory bench. This prevented the weight of the indicators from imposing a displacement load on the system. The specimen was then mounted in the test machine and a 1-2 µm displacement pre-load applied to remove any slack in the system prior to applying the calibration loading. The test machine control software (MTS Station Manager, MTS, Eden Prairie, MN) was set to apply the load at a rate of 0.3 mm/minute, and the DIC system was programmed to record images at 2-second intervals. To capture the displacement measurements from the digital indicators as the specimen was being loaded, the DIC system's second camera was positioned to capture the values displayed by both indicators. This provided an exact displacement measurement corresponding to each image captured of the gauge section of the specimen. Using this procedure resulted in more data than required for the calibration, so the three displacements that were used for the analysis were 27.5, 110, and 218 µm. These values were chosen in part to allow comparison of the results to those from a previous calibration of an electronic speckle pattern interferometer [23].
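Before moving to the results, the reduction from the roughly 30 gauge readings of Section 3.2.2 to the two correction factors can be sketched as follows. This is an illustrative fragment only, not the analysis code used for the thesis: it assumes Equation 3.1 holds at the gauge locations y = ±W/2, and the gauge data are synthesized stand-ins so that the fragment runs.

```python
# Sketch of extracting k and eta from gauge strains at y = +W/2 and y = -W/2.
# From Equation 3.1 the strain-versus-displacement slopes are
#   s_top = (k*W/2 + eta) / (6*W**2)  and  s_bot = (-k*W/2 + eta) / (6*W**2),
# so k and eta follow from the difference and sum of the fitted slopes.
import numpy as np

W = 15.0                                     # beam depth, mm
v = np.linspace(0.0, 0.180, 30)              # applied displacement, mm

# Synthetic stand-ins for the two gauge channels (nominal factors plus noise).
k_true, eta_true = 0.986, -0.068
rng = np.random.default_rng(1)
eps_top = v * (k_true * W / 2 + eta_true) / (6 * W**2) + rng.normal(0, 2e-6, v.size)
eps_bot = v * (-k_true * W / 2 + eta_true) / (6 * W**2) + rng.normal(0, 2e-6, v.size)

s_top = np.polyfit(v, eps_top, 1)[0]         # slope of strain vs displacement
s_bot = np.polyfit(v, eps_bot, 1)[0]

k = 6 * W * (s_top - s_bot)                  # since s_top - s_bot = k/(6*W)
eta = 3 * W**2 * (s_top + s_bot)             # since s_top + s_bot = eta/(3*W**2)
```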
3.3 Results and Discussion

3.3.1 Correction Factors and their Uncertainty

The SPOTS procedure gives values for the correction factors from finite element analysis, but it also allows them to be found experimentally, as was done in this study. The values of the strain parallel to the neutral axis, obtained from the strain gauges on the top and bottom surfaces of the beam, were plotted in order to find the constraint correction factors k and η. Figure 3.5 shows these quantities plotted as a function of applied displacement load between 0 and 180 µm. The correction factors k and η were determined from the coefficients resulting from a linear fit to the data. Using this method, k was found to be 0.986 and η was -0.068. The results from the finite element analysis given in the SPOTS guidelines are for a beam depth W ranging from 15 mm to 30 mm and yield k = 0.94 and η varying linearly from -0.10 to -0.18. The slightly higher k value from the experiment indicates that the specimen had slightly less rotational constraint than the finite element model, and the smaller η value indicates less translational constraint in the x-direction. Table 3.1 lists the values found for the correction factors as well as their associated uncertainty, which was calculated from the variance of the fitted line.

Figure 3.5: Plot of the strain from the gauges mounted to the top (y = W/2) and bottom (y = -W/2) surfaces of the specimen. A linear fit was applied to the data to determine the correction factors k and η.

3.3.2 Specimen Dimensions and Associated Uncertainty

For this calibration, the uncertainty in the specimen dimensions was calculated from the manufacturing tolerance, which is specified as 0.05 mm by the SPOTS guidelines. However, an alternative way to determine the uncertainty would be to measure the geometry using a series of independent observations, and then calculate the mean and standard deviation from the measurements. The uncertainty for the applied displacement was a function of the digital indicators, which were purchased with NIST calibration certificates. The certificates provided the measurement range and uncertainty for each indicator.

3.3.3 Measurement System Results and Associated Uncertainty

Figure 3.6 shows the strain along the centerline of the gauge section parallel to the applied load, obtained from Equation 3.1 and from the digital image correlation system, for the three displacement load steps. Figure 3.7 shows the data generated by the DIC system for the 218 µm load step. The deviations, d, were then calculated for each of the load steps by subtracting the measured strain, εxx, from the strain predicted using Equation 3.1. A linear least-squares fit was then applied to the deviations, resulting in the fit parameters α and β shown in the following equation,

$$d_k = \alpha_k + \beta_k y \qquad (3.2)$$

where the subscript k indicates the load step. The variances of α and β were also calculated and used to find the uncertainty for each fit parameter. The range formed by plus or minus twice the uncertainty creates a 95% confidence interval for each of the parameters. These data are summarized in Table 3.2.
Table 3.2: Summary of the fit parameters calculated for each of the displacements, as well as their associated 95% confidence levels

                                           Applied Displacement, vavg
Fit Parameter                 Units          27.5 µm   110 µm   218 µm
α                             µstrain        -3.2       2.0      11.9
u(α)                          µstrain         1.29      1.69      2.66
95% Confidence Interval (α)   µstrain        ±2.57     ±3.38     ±5.36
β                             µstrain mm⁻¹   -4.4      -4.9      -3.6
u(β)                          µstrain mm⁻¹    0.32      0.42      0.67
95% Confidence Interval (β)   µstrain mm⁻¹   ±0.65     ±0.85     ±1.34

Figure 3.6: The measured and calculated εxx strain values for each of the three displacement load steps: 27.5 µm (top), 110 µm (middle), and 218 µm (bottom).

Figure 3.7: Principal strain data generated by the DIC system for the 218 µm displacement load (gauge section approximately 15 mm x 11.2 mm; scale from -1100 to 1100 µstrain). For interpretation of the references in color in this and all other figures, the reader is referred to the electronic version of this thesis.

Next, the total uncertainty in the measured strain, εxx, for the calibration was calculated using the root-sum-of-squares method to combine the contributing uncertainties. This quantity includes the uncertainty in the dimensions, material properties, and correction factors for the calibration specimen, as well as for the applied displacement measurements. The flowchart in Figure 3.8 shows how each of these components contributes to the minimum measurement uncertainty for the system. For each of the three displacement load steps, the mean residual deviations (αk + βk y) ± 2u(dk) were plotted along with the expanded uncertainty of the calibration specimen, ±2uCS, in Figure 3.9. The quantity 2u(dk) provides a 95% confidence interval around the regression line for the deviations found from Equation 3.2, and the area bounded by ±2uCS indicates the uncertainty from the calibration specimen.

Figure 3.8: The total measurement uncertainty of a system is a combination of uncertainties from the calibration specimen (material uncertainty u(ν), geometric uncertainties u(W), u(a), u(c), constraint correction uncertainties u(k), u(η), and displacement measurement uncertainty u(vavg), combining into uCS) and from the measurement system (strain measurement uncertainty u(dk)).

According to the SPOTS guideline, an appropriately calibrated instrument is one in which there is complete overlap between the expanded uncertainty of the reference material and the mean residual deviations αk + βk y. In an ideal system, both αk and βk would be zero and no adjustments would be necessary. On the other hand, when |αk| exceeds twice its expanded uncertainty, 2u(αk), there is a statistically significant offset in the calibration. When |βk| exceeds twice its expanded uncertainty, 2u(βk), there is a statistically significant deviation in the calibration, in which the deviations increase as a function of the distance, y, from the center of the beam. It should be noted that both of these cases were true for all of the values of α and β with the exception of α for the 110 µm displacement.
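The fitting-and-uncertainty step described above is a routine linear least-squares problem. The fragment below sketches it (illustrative only; the deviation data are synthetic stand-ins, not thesis data), using the parameter covariance to form the ±2u, 95% intervals of the kind reported in Table 3.2.

```python
# Sketch of the per-load-step deviation fit of Equation 3.2,
# d_k = alpha_k + beta_k * y, with 95% intervals taken as twice the
# standard uncertainty of each fitted parameter.
import numpy as np

y = np.linspace(-7.5, 7.5, 200)              # position across beam depth, mm
rng = np.random.default_rng(2)
d = (-3.2e-6) + (-4.4e-6) * y + rng.normal(0.0, 1.0e-5, y.size)  # strain

coef, cov = np.polyfit(y, d, 1, cov=True)    # highest power first: [beta, alpha]
beta, alpha = coef
u_beta, u_alpha = np.sqrt(np.diag(cov))      # standard uncertainties

ci_alpha = 2.0 * u_alpha                     # 95% interval, as in Table 3.2
ci_beta = 2.0 * u_beta
```

Whether |αk| and |βk| exceed these intervals is precisely the significance test applied in the discussion above.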
3.3.4 Overall System Results and Associated Uncertainty

The resulting calibration uncertainty of the system, u_CAL, was found using the maximum strain value from each of the load steps and the combination of the uncertainties of the reference material and of the strain measurements. To better understand the significance of u_CAL, the relative uncertainty, u_rel, can be calculated, which expresses u_CAL as a percentage of the measured strain value. Table 3.3 summarizes these values.

Figure 3.9: Plots of the expanded uncertainty of the calibration specimen u_CS along with the mean residual deviations α_k + β_k y and the scatter band around the deviations ±2u(d_k) for each load step: 27.5 µm (top), 110 µm (middle), and 218 µm (bottom).

Table 3.3: Summary of the uncertainty values for each of the load steps. The uncertainty values listed are for the maximum strain of each load step. Unless otherwise noted, the values are in µstrain.

v_avg (µm)   (ε_xx)_{y=W/2} − (ε_xx)_{y=−W/2}   u(d_k)   u_CS   u_CAL   u_rel
27.5           289                               13       5      14.0    4.8%
110           1103                               17       6      18.3    1.7%
218           2110                               27       8      28.7    1.4%

There were some challenges encountered during the calibration procedure that had to be overcome to achieve acceptable results. In comparison to Whelan et al. [23], the specimen used for this calibration had a thickness of 5 mm rather than 10 mm. The greater thickness allowed Whelan et al. to use strain gauge rosettes, whereas it was only practical to use single-axis gauges on this specimen. The lack of a rosette meant that care had to be taken to ensure that the gauges were aligned properly on the specimen. It was also found that the thin specimen did not stand upright on its own very well, making it unstable under compressive loading; as a result, tensile loading using the mounting holes was chosen for this calibration procedure.

Another challenge was to obtain accurate strain measurements at the smallest displacement load step, 27.5 µm. An example of an initial measurement taken at this displacement is shown in Figure 3.10. This figure shows a deviation in the measured versus predicted strain values as a function of the distance in the y-direction from the neutral axis of the specimen. This was corrected, first, by using a sanding block to ensure that the surface stayed flat when preparing it with the grit paper; initially, the surface was sanded by hand, which may have affected the flatness of the specimen. Second, the camera was rotated by 90 degrees, which aligned the sensor's higher-resolution dimension with the y-axis of the gauge section and allowed the camera to be moved closer to the specimen.

Figure 3.10: One of the early measurements obtained for the 27.5 µm displacement load. It was determined that the original hand-polishing technique was causing deviations in the measured strain values at the extreme edges of the beam.

3.4 Conclusions

The results of calibrating a digital image correlation system have been presented, according to the procedure described by the SPOTS guideline. The system was shown to achieve an acceptable calibration as defined by the guideline, and the minimum relative uncertainty for future measurements made with the optical setup was evaluated as 5% for maximum strains of the order of 120 µstrain and 1.4% for maximum strains of the order of 1000 µstrain.
The calibration provides confidence for future strain measurements made with the system, and establishes traceability for the measurements to the international standard for length. The latter is important when analyses are associated with regulatory procedures. In addition, the procedure was performed using only the bare metal of the specimen, without the need to paint the surface with a speckle pattern. Some of the challenges of the procedure have also been discussed.

Chapter 4
Experimental Methods

This chapter details the experimental equipment and methods used for the two studies discussed in this thesis. The first study used an aluminum plate loaded in tension. The second study used a composite panel which was bolted at one end and loaded compressively. Experimental data for these two studies were acquired using digital image correlation, which yielded maps of principal strain data. The previous chapter described the calibration procedure for the digital image correlation system (Q-400, Dantec Dynamics, Ulm, Germany) which was used to gather the full-field experimental data for these studies. The same 50 kN servohydraulic loading frame and control system (MTS 810, Eden Prairie, MN) that were used for the calibration in the previous chapter were also used for the experimental setups described in this chapter.

4.1 Aluminum Plate Tensile Study

4.1.1 Specimen Description

The specimen was a 100 mm x 300 mm plate manufactured from 1 mm thick 6061-T6 aluminum. A 15 mm hole was drilled in the center of the specimen. The complete dimensions are shown in Figure 4.1.

4.1.2 Experimental Procedure

For this study, the aluminum specimen was mounted in the loading frame and subjected to tensile loading. Initially, the specimen was located in the machine using the hydraulic wedge grips supplied with the machine (MTS Series 647 hydraulic wedge grips, Eden Prairie, MN). The wedges were 25 mm wide and designed to be used with specimens of the same width or narrower. When this is the case, there are small tabs on the side of the wedges that can be used to align the specimen in the fixture. Since the aluminum plate specimen was 100 mm wide, these tabs had to be removed and the alignment performed visually. As a result, there was concern that misalignment of the plate might be causing asymmetric loading. Also, as the plate was much wider than the grips, the force was not evenly distributed along the entire width of the plate.

To overcome these problems, the mounting system of the plate was changed from a clamped system to a pin system. Two thick aluminum plates were bolted to each end of the specimen using three bolts per side. The thick aluminum plates had a hole to mount the specimen into the loading frame using a steel pin. This allowed the plate to align itself naturally as force was applied. The images in Figure 4.2 provide a comparison of the original and updated loading methods.

Figure 4.1: The dimensions of the aluminum plate used for this study. The area outlined by the dashed line indicates the region of interest for the experimental data capture.

The first step in the experimental data collection procedure was to mount the specimen in the load frame so that the camera and light source could be properly positioned. A medium-focal-length lens (Schneider Cinegon 1.4/12-0515) was chosen to provide an appropriate field of view of the experimental region of interest.
Care was taken to position the camera perpendicular to the specimen and to fill the short dimension of the camera's detector with the area of interest. The HILIS LED floodlight supplied with the DIC system was positioned to provide uniform illumination of the specimen. The image in Figure 4.3 shows the experimental setup used for this test.

The next step was to run the calibration routine for the DIC software so that it could determine the intrinsic and extrinsic parameters for the particular setup being used. At this point the focus and aperture settings on the lens were locked, and the specimen was removed from the loading frame. An 8 mm x 8 mm target plate was used to perform the calibration procedure, as shown in Figure 4.4. The specimen was then mounted back in the loading frame in preparation for the experiment.

To acquire the data, the load frame was operated in displacement control with a crosshead speed of 1 mm/min. The DIC system was set to record images at 2-second intervals. A small 120 N preload was applied to the specimen before starting the test to remove any slack from the system. Images were acquired of the loading sequence until the plate reached failure, which occurred at around 20 kN. The images were then processed using the Istra 4D software to determine the displacement and strain components of the specimen during the loading procedure. A facet size of 25 pixels was used with a grid spacing of 5 pixels. The data were exported in HDF5 format for further processing in Matlab.

Figure 4.2: Pictures showing the original loading configuration using hydraulic wedge grips (left), and the final configuration using pin loading (right).

Figure 4.3: The experimental test setup used for the aluminum plate study.

Figure 4.4: The calibration procedure using an 8 mm x 8 mm target plate supplied with the DIC system.

4.2 Composite Plate Study

4.2.1 Specimen Description

The specimen for this study was an inhomogeneous composite panel consisting of an aluminum backing plate with ceramic tiles and a pre-impregnated composite overwrap. Figure 4.5 gives the dimensions of the panel as a function of the tile size a.

4.2.2 Experimental Procedure

For this study, it was desired to load the composite panel specimen in compression by holding one end fixed with a bolted joint and applying a force to the opposite end of the panel. The initial testing was performed using a reconfigurable hydraulic actuator mounted to a T-bed, which is shown in Figure 4.6. It was later discovered that the data captured from this setup were not valid due to out-of-plane displacement of the specimen. A fixture was therefore manufactured to mount the specimen in the MTS 810 load frame so that the tests could be repeated. To help minimize the impact of the out-of-plane displacement of the panel, a smaller area, 45 mm x 32 mm, was investigated. The Tamron macro lens from the calibration procedure was also used because its 90 mm focal length meant that the camera could be moved farther away from the panel, to approximately 650 mm. The LED floodlight supplied with the DIC system was positioned to provide uniform illumination of the specimen. The image in Figure 4.7 shows the experimental setup.

Figure 4.5: Dimensions of the composite panel described as a function of the tile size a.

Figure 4.6: The model showing how the protective panel attached to the test fixture and where the loading was applied (top);
the panel installed in the test fixture (bottom).

Figure 4.7: Picture of the revised experimental test setup for the composite panel study.

The focus and aperture settings on the lens were locked, and the composite panel specimen was removed from the load frame. The DIC software was calibrated using a 4 mm x 4 mm target plate, similar to the one used in Figure 4.4. The loading for this study was performed in a quasi-static fashion. A 0.5 kN pre-load was applied, and a reference image was captured manually. The load was then increased incrementally to 4.45 kN, 8.9 kN, 13.34 kN, and 17.79 kN, with images captured at each step. The images were processed using the Istra 4D software to determine the displacement and strain components of the specimen during the loading procedure. A facet size of 25 pixels was used with a grid spacing of 12 pixels. The data were exported in HDF5 format for further processing in Matlab.

Chapter 5
Image Decomposition Methodologies

This chapter describes the methodologies for the Zernike, Tchebichef, and Fourier descriptors, as well as a hybrid approach applying both the Fourier transform and a moment descriptor. An example deconstruction and reconstruction is provided for each methodology. The same experimental data set was used for all of the examples in this chapter. For each methodology, the entire strain map was examined first, and then a cropped, smaller area. The entire experimental data set was 175 x 175 pixels with a total of 30,625 points, and the cropped data set was 59 x 59 pixels with a total of 3,481 points.

5.1 Zernike Moment Descriptor

A set of complex polynomials, orthogonal over the unit circle, was given by Zernike [33]:

V_{n,m}(x, y) = V_{n,m}(\rho, \theta) = R_{n,m}(\rho) e^{im\theta}    (5.1)

where
n is a non-negative integer representing the order of the radial polynomial;
m is a positive or negative integer, subject to the constraints that n − |m| is even and |m| ≤ n, representing the repetition;
ρ is the length of the vector from the origin to (x, y);
θ is the angle between the vector ρ and the x-axis in the counter-clockwise direction; and
R_{n,m} is the radial polynomial defined as

R_{n,m}(\rho) = \sum_{s=0}^{(n-|m|)/2} (-1)^s \frac{(n-s)!}{s!\,\left(\frac{n+|m|}{2}-s\right)!\,\left(\frac{n-|m|}{2}-s\right)!}\,\rho^{n-2s}    (5.2)

The inner product between any two Zernike polynomials can be written as

\langle V_{p,q}, V_{n,m} \rangle = \int_0^{2\pi} \int_0^1 V_{p,q}^*(\rho, \theta)\, V_{n,m}(\rho, \theta)\, \rho \, d\rho \, d\theta    (5.3)

The Zernike moments of an image with intensity I(x, y) can be found by projecting the image onto the complex Zernike polynomials V_{n,m}:

Z_{n,m} = \frac{\langle I(x, y), V_{n,m} \rangle}{\langle V_{n,m}, V_{n,m} \rangle}    (5.4)

Figures 5.1 and 5.2 show the maximum principal strain map from the aluminum plate decomposed into Zernike moments with a maximum order of 8. A maximum order of 8 results in 45 Zernike moments to completely describe the image. To check how well these 45 moments describe the original strain map, a reconstruction is performed from the moments. In the case of Figure 5.1, the reconstructed map does not match the original strain map very well: the discontinuity produced by the hole in the experimental specimen was not captured by the Zernike moments. Figure 5.2 gives the results from analyzing a subset of the experimental strain map not containing the hole. For this subset of data, the reconstructed map matches the original more closely than in Figure 5.1.

5.2 Tchebichef Descriptor

The Tchebichef moments of an M x N discrete image are given by [10]

T_{p,q} = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \tilde{t}_p(x)\, \tilde{t}_q(y)\, f(x, y)    (5.5)

where \tilde{t}_p(x) and \tilde{t}_q(y) are the normalized Tchebichef polynomials

\tilde{t}_p(x) = \frac{t_p(x)}{\rho(p, M)}, \qquad \tilde{t}_q(y) = \frac{t_q(y)}{\rho(q, N)}    (5.6)

and

\rho(p, M) = \frac{M \left(1 - \frac{1}{M^2}\right)\left(1 - \frac{2^2}{M^2}\right) \cdots \left(1 - \frac{p^2}{M^2}\right)}{2p + 1}    (5.7)
The discrete Tchebichef polynomials are given by

t_n(x) = n! \sum_{k=0}^{n} (-1)^{n-k} \binom{N-1-k}{n-k} \binom{n+k}{n} \binom{x}{k}    (5.8)

The experimental strain map was then decomposed into Tchebichef moments with a maximum order of 8. Figure 5.3 shows the Tchebichef moments from the decomposition and the reconstructed strain map. Similar to the results using the Zernike descriptor, the Tchebichef moments were not able to reconstruct the hole in the original strain map. Figure 5.4 shows the Tchebichef moments and the reconstructed strain map from a subset of the experimental strain map not containing the hole. The reconstructed strain map using the subset of data matches the original very well.

Figure 5.1: An example of image decomposition and reconstruction of a maximum principal strain map using Zernike moments with a maximum order of 8, which results in 45 moments.

Figure 5.2: This decomposition and reconstruction uses a subset of the data from Figure 5.1. The data has been cropped to eliminate the discontinuity resulting from the hole.

Figure 5.3: An example of image decomposition and reconstruction of a maximum principal strain map using Tchebichef moments with a maximum order of 8, which results in 45 moments.

Figure 5.4: This decomposition and reconstruction uses a subset of the data from Figure 5.3. The data has been cropped to eliminate the discontinuity resulting from the hole.

5.3 Hybrid Fourier-Moment Descriptors

This section explores the use of taking the Fourier transform of the image before decomposing it with either the Zernike or Tchebichef moments. The Fourier transform is usually performed on a signal or shape and converts it from the spatial domain to the frequency domain. The result of applying the Fourier transform to a data set is a magnitude map and a phase map. All of the information contained within the image is preserved by taking the Fourier transform, but no data compression is achieved, which is why the Fourier transform is not suitable for image decomposition by itself. However, transforming the image into the frequency domain enables the moment descriptors to operate on data containing discontinuities such as the hole. In the examples presented here, the moment descriptors are used to decompose the magnitude map. The continuous Fourier transform is given by

FT(u, v) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} e^{-i 2\pi (ux + vy)}\, I(x, y)\, dx \, dy    (5.9)

The Discrete Fourier Transform (DFT) is given by

DFT(u, v) = \frac{1}{KL} \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} I(k, l)\, e^{-i 2\pi \left( \frac{uk}{K} + \frac{vl}{L} \right)}    (5.10)

Figure 5.5 shows the magnitude and phase maps from applying the discrete Fourier transform to the experimental data set. The magnitude map has very large spikes at the corners which are difficult for the moment descriptors to capture. By taking the logarithm of the magnitude map it is possible to reduce the size of these spikes and make it easier to see the detail of the map. The DFT can also be split into real and imaginary parts, which are shown at the bottom of Figure 5.5. Figure 5.7 shows the results of performing an image decomposition and reconstruction by first applying the DFT to the experimental data set.
The Zernike moment descriptor is then used to decompose the logarithm of the magnitude map from the DFT. The reconstruction is performed by reconstructing the magnitude map from the Zernike moments, and then using that magnitude map along with the phase information to move back to the spatial domain via the inverse Fourier transform. Figure 5.8 shows the same procedure performed with the Tchebichef moments in place of the Zernike moments. Using this approach, the reconstructed strain maps shown in Figures 5.7 and 5.8 capture the shape of the original strain map, including the hole, which the moment descriptors were not able to achieve by themselves.

Figure 5.5: An example of applying the Discrete Fourier Transform (DFT) to an experimental data set, which produces a phase map and a magnitude map. The magnitude is also shown separated into real and imaginary parts.

Figure 5.6: The Discrete Fourier Transform (DFT) applied to a subset of the experimental data set from Figure 5.5. The subset has been cropped to eliminate the discontinuity caused by the hole.

Figure 5.7: The results of applying the hybrid Fourier-Zernike decomposition to the experimental strain data. In comparison to the decomposition that used only the Zernike descriptor, the hole feature has been retained in this reconstruction.

Figure 5.8: The results of applying the hybrid Fourier-Tchebichef decomposition to the experimental strain data. In comparison to the decomposition that used only the Tchebichef descriptor, the hole feature has been retained in this reconstruction.

5.4 Summary of Results

Figure 5.9 shows the reconstruction results from all of the image decomposition methods for the experimental strain map. The reconstructed map is shown along with the residual found by subtracting the reconstructed map from the original map. The percentage correlation between the original and reconstructed maps is also shown. This figure shows that applying the Fourier transform to the data set permitted the hole feature to be retained. The Fourier-Tchebichef decomposition provided the best results, with 84% correlation between the original and reconstructed strain maps.

Figure 5.10 shows the results for the subset of the experimental data. In comparison to the results described in Figure 5.9, applying the Fourier transform to this data set caused the reconstruction quality to decrease. For the Zernike descriptor, the correlation decreased from 99% to 76%. For the Tchebichef descriptor, the correlation was reduced from 99% to 97%. These results indicate that there is not one single image decomposition methodology that is appropriate for all situations. The Fourier-Tchebichef approach provided the best results when there was a hole present in the strain map, while both the Zernike and Tchebichef descriptors provided similar results when applied to a smaller subset of the original data that did not contain the hole feature.

Figure 5.9: Comparison of the correlation of the original and reconstructed strain maps for the different decomposition methodologies.

Figure 5.10: Comparison of the correlation of the original and reconstructed strain maps for the cropped experimental results.
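To make the decomposition and reconstruction procedure concrete, the Matlab sketch below projects a stand-in strain map onto an orthonormal basis of discrete polynomials. As a sketch it takes two liberties: the basis is built numerically by a thin QR (Gram-Schmidt) factorization of a Vandermonde matrix, which spans the same polynomial space as the normalized Tchebichef polynomials of Equation 5.8, and all moments with p, q ≤ 8 are retained, rather than only the 45 with p + q ≤ 8 used in the examples above. The peaks function is a placeholder for the experimental strain map.

```matlab
% Stand-in for the 175 x 175 experimental strain map
I = peaks(175);
maxOrder = 8;
[Ny, Nx] = size(I);

% Orthonormal discrete polynomial bases (degrees 0..maxOrder) on each axis,
% built by Gram-Schmidt via a thin QR of a Vandermonde matrix
x = linspace(-1, 1, Nx)';  [Px, ~] = qr(x .^ (0:maxOrder), 0);
y = linspace(-1, 1, Ny)';  [Py, ~] = qr(y .^ (0:maxOrder), 0);

T    = Py' * I * Px;    % moment matrix: T(p+1, q+1) is the (p,q) moment
Irec = Py * T * Px';    % reconstruction from the moments

% Percentage correlation between the original and reconstructed maps
c = corrcoef(I(:), Irec(:));
fprintf('correlation: %.1f%%\n', 100 * c(1, 2));
```

A hybrid decomposition of the kind described in Section 5.3 would apply the same projection to log(abs(fft2(I))) and then return to the spatial domain by recombining the reconstructed magnitude with the retained phase through the inverse transform.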
Chapter 6
Results from Aluminum Plate Study

6.1 Introduction

For the initial study to determine the feasibility of the Zernike moment descriptor for image decomposition, it was decided to use a 100 mm x 300 mm plate loaded in tension. A hole was added in the center of the plate to provide a discontinuity in the strain field for the study. The plate was manufactured from 1 mm thick 6061-T6 aluminum. The drawing in Figure 4.1 gives the detailed geometry of the plate. A series of images was collected as the plate was loaded to failure, resulting in a total of 120 images. It was desired to compare the strain map from a single step, so one was selected that had large strain values but preceded yielding of the plate. The graph in Figure 6.1 shows the load history and indicates which data set was used for this study.

Figure 6.1: The load history of the plate up to failure, plotted as force against crosshead displacement. The data from step 50 (14,189 N) was used for this comparison study.

6.2 Computational Model

A computational analysis was performed using Hypermesh 10 and RADIOSS (Altair, Troy, MI, USA). The plate was modeled in two dimensions with quad shell elements. The elements were assigned the material properties of 6061-T6 aluminum with a modulus of 69 GPa and a thickness of 1 mm. Three different mesh sizes were used to investigate convergence of the model. The coarse mesh had an average element size of 2 mm, resulting in a total of 6,900 elements. The fine mesh had an average element size of 1 mm for a total of 27,600 elements. The finest mesh had an average element size of 0.5 mm with a total of 110,000 elements. The pictures in Figure 6.2 show the differences in the mesh sizes. To simulate the experimental loading condition, the elements at the top of the plate were held fixed, while the elements at the bottom had a force applied to them in the −y direction. The loading conditions can be seen in Figure 6.3. The post-processing of the models was performed using Hyperview (Altair, Troy, MI, USA). A simple averaging filter was applied to the results, which calculated the average of the elements passing through each node. The data were exported as a TIF image file for further processing in Matlab.

6.3 Comparison of Results using Image Decomposition

The comparison of the experimental and computational results for this study was performed using a modified version of the Matlab script ImPaCT produced by Patki [8]. The maximum principal strain maps for the experimental and computational results were imported into Matlab, and the script was used to decompose them using the Fourier-Zernike and Fourier-Tchebichef descriptors. These methods were chosen based on the results of Chapter 5, which demonstrated that applying the Fourier transform to the data set before using one of the moment descriptors produced the best results for strain maps with discontinuities.

The first comparison applies the Zernike descriptor to the magnitude map from the Fourier transform, as shown in Figure 6.4. To compare the moments, they were plotted against one another as shown in Figure 6.5, with the experimental moments on the x-axis and the computational moments on the y-axis. If there were perfect correlation between the experimental and simulation moments, they would all lie along a straight line with a gradient and an R² value of 1.
Therefore, information about the correlation of the two data sets can be determined from the gradient of the fitted line, and from the R² value describing the scatter of the data points. In this case, the data is skewed towards the experimental data, with a gradient of 0.88. The R² value of 0.98 indicates that the forms of the data sets match well.

Figure 6.2: This figure shows the different mesh sizes (2 mm, 1 mm, and 0.5 mm element sizes) overlaid on the finite element analysis results. The maximum principal strain is shown.

Figure 6.3: An example of one of the meshes used, showing the boundary conditions. The elements highlighted in blue were fixed in all degrees of freedom. The elements highlighted in red had a force applied to them in the −y direction.

A comparison was then made by applying the Tchebichef descriptor to the magnitude map from the Fourier transform. The moments produced from applying the Fourier-Tchebichef decomposition to the experimental and simulation data sets are shown in Figure 6.6. Figure 6.7 shows the moments from the simulation data set plotted against the moments from the experimental data set. The results from applying the Fourier-Tchebichef decomposition are similar to those from the Fourier-Zernike decomposition. The gradient of the line is 0.87, which indicates a slight bias towards the experimental data. The R² value of 0.99 indicates that the forms of the data sets match well.

Figure 6.4: Comparison of the Zernike moments generated by using the logarithm of the magnitude of the Fourier transform.

Figure 6.5: The Zernike moments generated from the simulation data plotted against the moments from the experimental data (top). The moments plotted without the first one to show the detail of the others (bottom).

Figure 6.6: Comparison of the Tchebichef moments generated by using the logarithm of the magnitude of the Fourier transform.

Figure 6.7: The Tchebichef moments generated from the simulation data plotted against the moments from the experimental data (top). The moments plotted without the first one to show the detail of the others (bottom).

Chapter 7
Results from Composite Panel Study

7.1 Introduction

In the previous chapter, image decomposition was applied to the comparison of strain maps for an isotropic, homogeneous aluminum plate. This chapter examines the strain distribution in an inhomogeneous composite panel. Composite panels are sometimes bolted to a vehicle structure in order to provide impact protection to the vehicle and its occupants. The panel is designed with the intention that it should not carry the service loads encountered in typical operation. It is critical to protect the panel from the repetitive or high service loads that might damage or degrade its protective capacity. The complex design of the panel makes the development of a computational model of its loading and deformation difficult, and so it was decided that a comprehensive and quantitative validation of the model was both appropriate and highly desirable. The aim of this study was to use the full-field optical data from the DIC system to validate the model by comparing the data sets from the computation and the experimentation using image decomposition.
7.2 Panel Description and Manufacturing

Typical composite protective panels can be rather large in area, which was not practical to manufacture or test with the equipment available. Therefore, it was decided to make a panel that was 305 mm square. The panel was composed of a 6061-T6 aluminum backing plate to which alumina ceramic tiles were bonded. A pre-impregnated composite layer (Cycom 381, Cytec Engineered Materials, Anaheim, California) was wrapped around the entire panel. The panel was then vacuum-bagged and cured in an oven according to the manufacturer's instructions for the composite. Figure 7.1 shows the arrangement of the layers in the panel. In order to mount the panel in the test machine, two holes were drilled in one end of the panel using a diamond-tipped drill bit. Figure 4.5 shows the detailed geometry of the composite panel.

Figure 7.1: A cross-section showing the composition of the panel, which consists of ceramic tiles bonded to an aluminum backing plate with urethane, and a pre-preg composite skin wrapped around the panel.

7.3 Computational Model

A numerical analysis was performed using a commercially available finite element software package (Altair Hypermesh and RADIOSS, Troy, Michigan, USA). The ceramic tiles, backing plate and composite cover were modeled as separate components. Figure 7.2 shows the model with and without the composite cover. The model was comprised primarily of quadrilateral elements, with a few triangular elements used only when absolutely necessary for mesh continuity. The areas under the washer faces were constrained in all translational and rotational degrees of freedom. A compressive force was evenly applied to the nodes on the top edge of the panel.

The outer composite layer was included in the model as a two-dimensional sheet in which the resin properties and fiber orientations were defined, whereas the other components were modeled as three-dimensional. By modeling the composite cover as a separate entity, it was possible to define the material properties more accurately. The total cover thickness was 1 mm, which was further divided into the composite layer and a resin-rich layer. Table 7.1 lists the material properties used for the baseline model. The composite weave consisted of fibers oriented at 0° and 90°. An elastic modulus was defined for each of the weave directions, corresponding to E1 and E2. The shear modulus, G12, represents the stiffness of the material at 45°. The resin can be treated as isotropic, so its shear modulus (G12) has the same value as its modulus (E1/E2). The value for the composite shear modulus was set equal to the modulus of the resin.

Figure 7.2: Screenshots of the protective panel model used for the finite element analysis without (top) and with (bottom) the composite cover layer.

The size of the original mesh was a compromise between the element size dictated by the thin resin layer and a desire to minimize the number of elements in the interest of a reasonable processing time. This resulted in an average element size of 3 mm for a total of approximately 97,500 elements. This was the primary mesh used in this study. However, a finer mesh with an element size of 1.75 mm, for a total of 369,400 elements, was created to check whether the original mesh was sufficient to describe the strain in the panel accurately.
As there was little difference in the results between the two models, the 3 mm element size was used because it took much less time to solve.

For the baseline model, the manufacturer's given material properties were used. However, the strain values computed with the FE model were much lower than the values found experimentally. In composite materials, there can be significant variation in the material properties depending on the layup and curing conditions. Also, there was such a large mismatch in modulus between the ceramic and the urethane that it was decided to perform a sensitivity analysis to determine which material properties had the largest effect on the strain distribution in the composite cover.

The sensitivity analysis was performed using design of experiments (DOE). Compared to the more traditional one-factor-at-a-time approach, design of experiments varies the factors together. This is important because the interactions between factors are considered. A full-factorial design with four factors was chosen (2⁴), which requires 16 runs to acquire the necessary data. A 2⁴ full-factorial design provides information about the main effects, the two-way interactions, and the three-way interactions [40].

The four parameters chosen to be varied for this DOE were: the urethane modulus (E), the ceramic modulus (E), the composite-layer 381 fabric modulus values (E1/E2), and the composite-layer epoxy modulus (E). The modulus of the aluminum was not included in the analysis, as it is an isotropic material whose properties were well known. The shear modulus of the composite was set to match the modulus of the resin. The thickness (T) of the composite layer was held constant at 0.9 mm, and the thickness of the resin-rich layer was held constant at 0.1 mm for the analysis. The high and low values chosen for the four materials are listed in Table 7.1. The measured output for the DOE was the average of the strain values for six elements of the composite cover.

Table 7.1: The material properties used for the finite element models. All values are in MPa, unless otherwise noted.

                              Baseline    DOE low values    DOE high values
Urethane        E             3500        25                45
Ceramic         E             350000      175000            350000
Aluminum        E             69000       69000             69000
381 resin       E1/E2         4800        2400              4800
                G12           4800        2400              4800
                T             0.1 mm      0.1 mm            0.1 mm
381 composite   E1/E2         50000       25000             50000
                G12           4800        2400              4800
                T             0.9 mm      0.9 mm            0.9 mm

The results of the DOE were analyzed using the analysis of variance (ANOVA) methodology. The results of the ANOVA are shown in Table 7.2. It can be seen that the fabric modulus values (E1/E2) and the epoxy modulus (E) had the largest impact on the strain values, while the urethane and ceramic moduli had relatively little effect. The interaction between the ceramic and the epoxy had a significant effect, although much smaller than the primary effects. As a result of this study, the low values of the material properties were used in the model.
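The effect estimates behind the ANOVA table can be computed directly from the coded design matrix. The Matlab sketch below is illustrative only: the response vector y is a random placeholder for the sixteen averaged strain outputs from the FE runs.

```matlab
% 2^4 full-factorial design coded as -1/+1 (rows = 16 runs, columns = A..D)
D = 2*(dec2bin(0:15, 4) - '0') - 1;
y = randn(16, 1);              % placeholder for the 16 FE responses

effects = (D' * y) / 8;        % main effects: mean(high) - mean(low)
SS      = (D' * y).^2 / 16;    % sums of squares, as in the ANOVA table
contrib = 100 * SS / sum((y - mean(y)).^2);   % percent contribution
```

Interaction terms are obtained the same way from element-wise products of the design columns; for example, D(:,1).*D(:,2) gives the coded column for the AB interaction.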
Table 7.2: ANOVA results from the sensitivity study of the finite element model

Source                   SS         df   MS        F        P       % Contrib
Urethane E (A)           2276.1     1    2276.1    1.357    0.452    1.43%
Ceramic E (B)            2571.3     1    2571.3    1.533    0.433    1.61%
381 fabric E1/E2 (C)     76890.7    1    76890.7   45.834   0.093    48.29%
381 epoxy E (D)          52193.2    1    52193.2   31.112   0.113    32.78%
AB                       482.2      1    482.2     0.287    0.687    0.30%
AC                       1485.5     1    1485.5    0.885    0.519    0.93%
AD                       843.4      1    843.4     0.503    0.607    0.53%
BC                       127.5      1    127.5     0.076    0.829    0.08%
BD                       9759.8     1    9759.8    5.818    0.250    6.13%
CD                       4629.7     1    4629.7    2.760    0.345    2.91%
ABC                      1774.5     1    1774.5    1.058    0.491    1.11%
ABD                      1570.1     1    1570.1    0.936    0.511    0.99%
ACD                      1852.6     1    1852.6    1.104    0.484    1.16%
BCD                      1108.3     1    1108.3    0.661    0.566    0.70%
Error                    1677.5     1    1677.5                      1.05%
Total                    159242.5   15

7.4 Comparison of Results using Image Decomposition

To perform the comparison of the experimental and computational data sets, several patches of data from different areas of the panel were compared. The ideal comparison would have been to compare the strain field over the entire panel, but the resolution of the digital image correlation system ultimately dictated the size of the experimental data that could be captured. The results from Chapter 5 showed that it is not necessary to use the Fourier transform with data sets such as those presented in this study, which do not contain discontinuities. The Tchebichef moments were used for this study as they are discrete and natively valid over a rectangular domain, which makes them computationally more efficient than the Zernike moments for this application.

Figure 7.3 shows the results of the FE analysis superimposed over the experimental setup. The areas of the panel that were investigated experimentally are indicated in this figure. The first area examined was the region directly above the bolt hole on the right side of the panel (RS-bh). The camera system was set up to capture the triple-point, which is the intersection of three of the ceramic tiles. This area was chosen because the principal strain value was fairly large and produced an interesting distribution.

An examination of the strain maps in Figure 7.4 reveals that, while the maps are similar in shape, the magnitude of the experimental strain is significantly larger than the strain predicted by the FE model. This is captured in the magnitude of the first Tchebichef moment, which is also much larger for the experimental data than for the FE data. Figure 7.5 shows a comparison of the experimental and computational moments from the Tchebichef decomposition; as for the study of the aluminum plate, perfect correlation of the two data sets would generate a perfectly straight line with a gradient of 1. In this case, the greater magnitude of the experimental data has resulted in a line with a gradient of 0.648.

One potential reason for the difference in magnitudes of the experimental and simulation data sets could have been strains caused by the bolt clamping force. In the interest of simplicity, the areas under the washer faces of the bolted joint were constrained in all rotational and translational degrees of freedom, but no clamp load was applied. It is therefore possible that the difference in the constraints between the FE model and the experimental condition could have caused the difference in the strain distributions. To lessen the effect of the clamping force on the experimental results, the next area for comparison was chosen to be farther away from the bolted joint. The area examined was RS-3 in Figure 7.3.
Figure 7.3: Results of the finite element analysis superimposed on the experimental setup, indicating the areas (RS-bh, RS-3, LS-1, and LS-3) that were used for the comparison with the experimental data.

Figure 7.4: The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results for the area above the bolt hole (RS-bh).

Figure 7.5: The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line (y = 0.648x + 19.04, R² = 0.8603) is fitted to all of the data, while the dashed line (y = 0.7431x + 70.21, R² = 0.6277) excludes the first moment.

Similar to the area previously examined around the bolt hole, area RS-3 includes a triple point of the ceramic tiles. As shown in Figure 7.6, the area of high strain extends from the triple point and continues along the interface of the tiles, resulting from the mismatch in the stiffnesses of the ceramic and the urethane. The magnitudes of these strain maps are much closer than for the area around the bolt hole. This is evident from comparing the value of the first moment, which is almost the same for both data sets. This can also be seen in Figure 7.7, where the gradient of the line is largely dictated by the value of the first moment. For this comparison, the gradient is 1.01, indicating close agreement between the two data sets. The R² value for this comparison is 0.79, which is slightly lower than for area RS-bh, but still indicates good agreement in the forms of the two data sets. One interesting observation is that, while the experimental data has a higher overall magnitude that biases the line in its direction, the smaller FEA moments appear to have a higher magnitude in general than the experimental moments. This is evident in the gradient of the dashed line in Figure 7.7, which increases from 1.01 to 1.87 when the first moment is excluded from the fit.

The next two areas examined were on the left side of the panel. The first was LS-1, which was the farthest from the bolt hole of all the areas examined. The region bounded by LS-1 contains two triple points and a high-strain area connecting them. For these strain maps, the simulation results reported by the FE analysis were larger than the experimental results. An examination of the Tchebichef moments for the two data sets reveals that not only is the magnitude of the first moment different, but the general values of all of the moments are different for the two maps. This is more obvious in Figure 7.9, which has a gradient of 1.34, indicating a strong bias towards the FE results. Also, the moments do not fall on the regression line very well, as indicated by an R² value of 0.7097. Similar to Figure 7.7, when the line is fitted to the moments with the first one excluded, the gradient increases; for this comparison it went from 1.34 to 1.99.

The final area of comparison was LS-3, which was located slightly above and to the right of the bolt hole on the left side of the panel. The experimental data obtained for this area does not match the FE model well, so the main purpose of including it was to see how the Tchebichef moments dealt with such a large difference. It is clear from Figure 7.10 that the strain maps do not match well, and the resulting Tchebichef moments appear to be very different as well.
For this case, the overall magnitude of the two maps, as determined by the first moment, was very similar, but the majority of the other moments have much different values. Figure 7.11 supports this conclusion. The gradient of the regression line is 1.028, which would indicate that the magnitudes of the maps are very similar. However, the R² value is very low at 0.5129, indicating that the shapes of the two maps are very different.

Figure 7.6: The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results for area RS-3.

Figure 7.7: The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line (y = 1.011x − 193.4, R² = 0.7956) is fitted to all of the data, while the dashed line (y = 1.867x − 330.9, R² = 0.6342) excludes the first moment.

Figure 7.8: The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results for area LS-1.

Figure 7.9: The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line (y = 1.344x + 253.5, R² = 0.7097) is fitted to all of the data, while the dashed line (y = 1.993x + 439.1, R² = 0.3067) excludes the first moment.

Figure 7.10: The Tchebichef moments from the maximum principal strain maps of the experimental and simulation results for area LS-3.

Figure 7.11: The Tchebichef moments generated from the simulation plotted against the moments from the experimental data. The solid regression line (y = 1.028x + 174.3, R² = 0.5129) is fitted to all of the data, while the dashed line (y = 0.3122x − 21.82, R² = 0.0218) excludes the first moment.

Chapter 8
Discussion

Chapter 5 described the procedure to decompose an image using the continuous Zernike moments and the discrete Tchebichef moments. The decomposition compresses the data set by several orders of magnitude, which facilitates making a quantitative comparison between data sets. Examples of each decomposition methodology were presented using an experimentally obtained strain map with 30,625 data points. Applying the decomposition methodology reduced the data set to only 45 moments, which is less than one percent of the original amount of data.

The exemplar decompositions of the experimental data set demonstrated that the Zernike and Tchebichef descriptors, when used by themselves, were not able to reconstruct the strain information or the geometric detail associated with a hole in the specimen. To capture these features, the Fourier transform was applied to the data set first, and the decomposition applied to the resulting magnitude map. The outcome for each of the decomposition methodologies is shown in Figure 5.9, which shows the percentage correlation of the original and reconstructed strain map for each of the decomposition methodologies. It can be seen from this figure that the Zernike and Tchebichef descriptors achieved 82.3% and 78.6% correlation with the original data sets, but neither reconstruction retained the hole. When the Fourier transform was applied to the data first, both descriptors were able to capture information associated with the hole.
The Fourier-Tchebichef decomposition produced the best results, with a correlation of 84.2% between the original and reconstructed data sets. When the decomposition methodologies were used on a subset of the experimental data set that did not contain the discontinuity associated with the hole, the application of the Fourier transform decreased the quality of the reconstruction. Figure 5.10 shows the percentage correlation for each methodology applied to the subset. For the Zernike descriptor, applying the Fourier transform decreased the correlation from 99.2% to 75.9%. For the Tchebichef descriptor, the correlation decreased from 99.2% to 95.5%.

The Tchebichef descriptor had a distinct computational advantage over the Zernike descriptor. This is evident in the time required to process the data sets. Table 8.1 lists the processing times required for each methodology to decompose the experimental strain map containing the hole. The two decompositions performed using the Tchebichef descriptor took 5 seconds each, while the Zernike descriptor took 173 seconds and 209 seconds when used without and with the Fourier transform, respectively.

Table 8.1: Summary of reconstruction results for the experimental strain map

Decomposition Method    Correlation    Process Time
Fourier-Tchebichef      84.2%          5 seconds
Zernike                 82.4%          173 seconds
Tchebichef              78.7%          5 seconds
Fourier-Zernike         53.0%          209 seconds

Table 8.2 lists the processing times for the subset of the original strain map. The Zernike and Tchebichef descriptors produced reconstructions that exhibited a 99.2% correlation with the original data set, but the Tchebichef descriptor was almost five times faster.

Table 8.2: Summary of reconstruction results for the cropped experimental strain map

Decomposition Method    Correlation    Process Time
Tchebichef              99.2%          4 seconds
Zernike                 99.2%          19 seconds
Fourier-Tchebichef      95.5%          5 seconds
Fourier-Zernike         75.9%          32 seconds

Two studies were performed using the image decomposition methodology, with the aim of reducing the data sets to a few moments to perform a comparison. Besides reducing the number of data points, this methodology allows data with different numbers and pitches of points, different coordinate systems, and different scales to be compared easily. For the study of an aluminum plate with a hole, the comparison was made between an experimental data set containing 30,625 points and a simulation data set containing 47,961 points.

The comparison of the experimental and simulation outcomes was performed by plotting the two sets of shape descriptors against each other. A linear regression line was fitted to the data and the R² value calculated. If there were perfect agreement between the experimental and simulation data, all of the points would lie on a line with a gradient of 1 and an R² value of 1. When the gradient deviates from 1, it indicates that the data is biased toward either the experiment or the simulation. For a gradient greater than 1, the simulation is predicting higher strain values than the experimental results; if the model is not corrected, this could lead to unnecessarily over-designing a component. Conversely, if the gradient is less than 1, the simulation is predicting strain values that are too low. This is a dangerous situation which could result in the component being under-designed for the intended application. Similarly, the R² value is an indication of how well the form of each data set matches. A value close to 1 indicates that the form of the data from the simulation matches the experimental data.
This gives confidence that the design of the model is accurately representative of the experiment. As the value of R² decreases, it means that the simulation results are not producing the same features as the experimental data, requiring further investigation of the model.

The aluminum plate study utilized the hybrid approach, in which the Fourier transform is applied to the data set first. The R² coefficients of the regression lines fitted to the Fourier-Zernike and Fourier-Tchebichef comparison plots were essentially the same, at 0.98 and 0.99, indicating that the forms of the experimental and simulation data sets match well. The gradients were also nearly the same, at 0.88 and 0.87. This is an example in which the simulation is predicting values lower than the experimental results. The model should be updated to accurately reflect the experimental results, to prevent potentially under-designing the component.

The Tchebichef descriptor was used for the study of the composite panel because the data was continuous, without any holes or similar features. Figure 8.1 shows where the experimental data was obtained on the composite panel. Table 8.3 summarizes the results from the comparisons of the experimental and simulated data sets. The comparisons were made by fitting a linear regression line to all of the data, and also to the data without the first moment included. The gradient of the line for all of the data points is influenced largely by the first moment, but as this moment contains information about the overall magnitude of the map, it is very important and should be included in the fit. However, an interesting trend appeared for this study: as the quality of the match between the two data sets decreased, the remaining moments formed a cluster around a line of increasing gradient. This can be seen in the gradients for areas RS-3 and LS-1 in Table 8.3, which have values of 1.867 and 1.993, respectively. The fit is especially bad for area LS-3, which results in a fit line with a gradient of 0.312, but with a very low R² value of 0.0218. Examining the comparison plot of the moments in Figure 7.11, the data appears to be almost vertical, with the exception of two points.

Figure 8.1: Strain map of the panel from the simulation results, with the boxed areas indicating where the experimental data were obtained.

Table 8.3: Summary of correlation results from the composite panel study.

            All moments               Excluding 1st moment
Location    Gradient    R-squared     Gradient    R-squared
RS-bh       0.648       0.8603        0.743       0.6277
RS-3        1.011       0.7956        1.867       0.6342
LS-1        1.344       0.7097        1.993       0.3067
LS-3        1.028       0.5129        0.312       0.0218

The two comparisons performed for the left side of the panel (LS-1 and LS-3) appear to generate a worse correlation between the experimental and computational data sets than those from the right side. The speckle patterns for the left and right sides of the panel were painted at different times; it is possible that the quality of the speckle pattern influenced the experimental results. Figure 8.2 shows the patterns from both sides of the panel. The speckle pattern on the left side has slightly larger features overall than the pattern on the right side of the panel. Initially, the left side of the panel was painted with a much larger speckle pattern than is shown in Figure 8.2. This was done for the purpose of examining a larger area of the panel.
However, the experimental data showed that the speckle pattern was too large for the strain range being examined, so the left side of the panel was stripped and repainted with the pattern shown in the left image of Figure 8.2.

Figure 8.2: Comparison of the speckle patterns on the left side of the panel (left) and the right side of the panel (right).

Figure 8.3 shows the ASME flowchart which was discussed in Chapter 1 as a part of the motivation for the research described in this thesis. The highlighted area is the validation portion of the chart, in which the experimental and simulation outcomes are to be compared. The flowchart in Figure 8.4 was created based upon the methodology presented in this thesis to perform this quantitative comparison using image decomposition.

This thesis has presented a methodology to make the comparison of experimental and simulation data sets containing thousands of data points by compressing them to 45 moments. Moment descriptors are also invariant under rotation, scale, and translation, which permits the comparison of data sets that are not perfectly aligned, have different coordinate systems, or have different spacing of the data points. These moments were then compared by performing a simple linear fit to the plot of the moments, which resulted in a gradient and an R² value relating to the level of agreement between the two data sets. In essence, the procedure is able to reduce a comparison of thousands of data points down to two parameters.

Figure 8.3: Flowchart of verification and validation activities for simulation and experimental methods [2]. The outlined area indicates the activities addressed by this thesis.

Figure 8.4: Flowchart showing the decomposition process developed in this thesis to compare simulation and experimental outcomes. A decomposition methodology (Zernike, Fourier-Zernike, Tchebichef, or Fourier-Tchebichef) is selected (Step 1); the simulation and experimental results are decomposed into moments, compressing the data sets from 10⁴–10⁶ points to 10¹–10² moments (Step 2); the moments are used to reconstruct and compare with the original results to check that they are a unique representation of the data (Step 3); and the quantitative comparison is then performed using the moments from Step 2. This flowchart addresses the outlined area from Figure 8.3.
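The quantitative comparison step of Figure 8.4 amounts to only a few lines of Matlab. In the sketch below, mExp and mFea are hypothetical vectors standing in for the experimental and simulation moments from Step 2.

```matlab
% Placeholder moment vectors; in practice these come from the decomposition
mExp = 100 * randn(45, 1);
mFea = 0.9 * mExp + 5 * randn(45, 1);

% Fit with all moments: gradient p(1), and R^2 from the correlation coefficient
p = polyfit(mExp, mFea, 1);
r = corrcoef(mExp, mFea);   R2 = r(1, 2)^2;

% Repeat with the first (magnitude) moment excluded
p2 = polyfit(mExp(2:end), mFea(2:end), 1);
r2 = corrcoef(mExp(2:end), mFea(2:end));   R2x = r2(1, 2)^2;

fprintf('all moments: gradient %.2f, R^2 %.2f\n', p(1), R2);
fprintf('excl. first: gradient %.2f, R^2 %.2f\n', p2(1), R2x);
```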
8.1 Future Work

Reconstruction - Work reported in the literature of the electrical and computer engineering community describing the use of the Zernike and Tchebichef descriptors for shape analysis tends to provide reconstruction results using a simple example, such as a block letter E [9, 33, 10]. However, when these descriptors are used in other fields, there is no discussion of reconstruction [41, 30, 29]. It would be worthwhile to investigate this further to determine why reconstruction is not typically applied in these fields. The main reason so much importance has been placed on it here is that, in the example of the aluminum plate, the hole in the original data set was completely absent from the reconstructed data set. Therefore, it is conceivable that the moments produced from this decomposition could be the same as those produced from a different specimen without a hole, meaning that the moments are not a unique representation of the original data set.

Strain as a Tensor - The analyses presented in this thesis made use of the maps of maximum principal strain from the experimental and computational results, but strain is a tensor. Ideally, for the purposes of validation, the tensor results from each data set should be compared, as opposed to a single component at a time, as here. This would introduce more complexity to the comparison, but it could be very useful.

Uncertainty Component - Oberkampf discussed the need to include uncertainty in the metric used to compare experimental and computational data [14]. The examples provided were for a simple input and a response graph for the results, but the process should be extended to the image decomposition procedure. If the methodology could be improved to utilize the entire strain tensor, as discussed previously, it should be possible to include the uncertainty component as well.

Image Decomposition - The work described in this thesis used the continuous Zernike descriptor and the discrete Tchebichef descriptor for the decomposition procedure. There are many other types of moment descriptors, such as Legendre, Krawtchouk, Hahn, and Poisson-Charlier, that are worth investigating. It is also common in image processing to break the image down into smaller blocks and then to perform the decomposition on each block individually. This is done primarily to increase computational efficiency, and may have some benefit in this application.

Chapter 9
Conclusions

• The results of calibrating a digital image correlation system were presented based on the SPOTS procedure. The minimum relative uncertainty for measurements made with the system was evaluated as 5% for maximum strains of the order of 120 µstrain and 1.4% for maximum strains of the order of 1000 µstrain. The calibration quantified the uncertainty in the system, thereby providing a level of confidence for future measurements.

• An image decomposition methodology was developed which can reduce a data set containing 10⁴ to 10⁶ data points to on the order of 10² moments. Two different approaches were used: the continuous Zernike descriptor and the discrete Tchebichef descriptor.

• By applying the Fourier transform to a data set before a decomposition, it was possible to analyze data sets containing discontinuities. This thesis developed the procedure to use the Fourier transform with the Tchebichef descriptor.
Chapter 9 Conclusions

• The results of calibrating a digital image correlation system were presented, based on the SPOTS procedure. The minimum relative uncertainty for measurements made with the system was evaluated as 5% for maximum strains of the order of 120 µstrain and 1.4% for maximum strains of the order of 1000 µstrain. The calibration quantified the uncertainty in the system, thereby providing a level of confidence for future measurements.

• An image decomposition methodology was developed which can reduce a data set containing 10^4 to 10^6 data points to on the order of 10^2 moments. Two different approaches were used: the continuous Zernike descriptor and the discrete Tchebichef descriptor.

• By applying the Fourier transform to a data set before decomposition, it was possible to analyze data sets containing discontinuities. This thesis developed the procedure for using the Fourier transform with the Tchebichef descriptor; a minimal sketch of this idea is given after this list. When this method was applied to an experimental strain map containing a hole feature, reconstruction was achieved with 84.2% accuracy while preserving the original feature.

• For strain maps without discontinuities, the Tchebichef descriptor was found to perform as well as the Zernike descriptor, but with a much shorter computation time. For a cropped subset of data from an aluminum plate with a hole loaded in tension, the Zernike and Tchebichef descriptors both provided 99.2% correlation between the original and reconstructed strain maps, but the Tchebichef decomposition took only 4 seconds compared to 19 seconds for the Zernike decomposition.

• A study was performed using the experimental and computational strain data from a composite protective panel under simulated structural loading. The Tchebichef descriptor was used to decompose the strain data maps.

• A comparison method was developed which consists of evaluating the gradient and R² value for a linear regression line fitted to the shape descriptors describing the simulation results, plotted as a function of those for the experimental results. The gradient indicates how well the magnitudes of the two data sets match, while the R² value of the fitted line provides an indication of the level of agreement in the shape of the two outcomes. A Fourier-Zernike decomposition, for a study of an aluminum plate with a hole subject to tension, yielded a line with a slope of 0.88 and an R² value of 0.98, which indicated that the data were biased toward the experimental outcome but that there was good agreement in the shape of the data sets. For this scenario, an investigation should be performed to determine why the simulation predicts lower values than the experiment, in order to prevent a potentially under-designed component.

• The agreement between the experimental and simulation outcomes for a composite panel varied depending on the location on the panel examined. The values for the slope of the fitted line ranged from 0.648 to 1.344, and the R² value from 0.5129 to 0.86. The comparison of data from near the bolt hole (RS-bh) indicates that there is good agreement in the form of the data, but the simulation predicts lower values than the experimental results. This could indicate that the original assumptions made for the boundary conditions at the bolt connection were not correct; they should be reexamined to determine whether they accurately represent the experiment. The poor gradient and R² values obtained from the left side of the panel could be due to the quality of the speckle pattern, and should be investigated further.
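The following is the minimal sketch of the Fourier-Tchebichef idea referenced in the third bullet above. It is an illustration under stated assumptions, not the thesis procedure: in particular, decomposing the shifted log-magnitude spectrum (rather than some other representation of the transform) and treating the hole as zero strain are choices made here only for the example.

```python
import numpy as np

def tchebichef_basis(n_points, max_order):
    """Orthonormal discrete Tchebichef basis via QR (see the earlier sketch)."""
    vander = np.vander(np.arange(n_points, dtype=float),
                       max_order + 1, increasing=True)
    return np.linalg.qr(vander)[0].T

def fourier_tchebichef_moments(field, max_order):
    """Decompose the shifted log-magnitude spectrum of a field.

    Working on the Fourier magnitude removes the direct sensitivity to local
    features such as holes in the spatial map, which otherwise disturb a
    polynomial reconstruction. The log scaling is an illustrative choice."""
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    magnitude = np.log1p(np.abs(spectrum))
    ty = tchebichef_basis(field.shape[0], max_order)
    tx = tchebichef_basis(field.shape[1], max_order)
    return ty @ magnitude @ tx.T

# Invented strain map with a hole at its centre (hole treated as zero strain):
y, x = np.mgrid[0:64, 0:64] / 63.0
strain = np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.1)
strain[(x - 0.5) ** 2 + (y - 0.5) ** 2 < 0.01] = 0.0
moments = fourier_tchebichef_moments(strain, max_order=8)
print(moments.shape)   # (9, 9): 81 moments retained
```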
REFERENCES

[1] L. A. Reagan and M. J. Kiemele, Design for Six Sigma: The Tool Guide for Practitioners. CTQ Media, 2008.

[2] L. Schwer et al., Guide for the Verification and Validation of Computational Solid Mechanics, American Society of Mechanical Engineers Std., 2006.

[3] M. Sutton, J. Orteu, and H. Schreier, Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications. Springer Verlag, 2009.

[4] O. Løkberg, "Electronic speckle pattern interferometry," Physics in Technology, vol. 11, p. 16, 1980.

[5] E. Patterson, "Digital photoelasticity: principles, practice and potential," Strain, vol. 38, no. 1, pp. 27–39, 2002.

[6] SPOTS Consortium, Guidelines for the Calibration and Evaluation of Optical Systems for Strain Measurement, Standardised Project for Optical Techniques of Strain Measurement, 2010. [Online]. Available: www.opticalstrain.org

[7] W. Wang, "Vibration mode-shape recognition using image processing," Ph.D. dissertation, University of Liverpool, February 2009.

[8] A. Patki, "Advances in structural damage assessment using strain measurements and invariant shape descriptors," Ph.D. dissertation, Michigan State University, 2010.

[9] M. Teague, "Image analysis via the general theory of moments," JOSA, vol. 70, no. 8, pp. 920–930, 1980.

[10] R. Mukundan, S. Ong, and P. Lee, "Image analysis by Tchebichef moments," IEEE Transactions on Image Processing, vol. 10, no. 9, pp. 1357–1364, 2002.

[11] D. Zhang and G. Lu, "Shape-based image retrieval using generic Fourier descriptor," Signal Processing: Image Communication, vol. 17, no. 10, pp. 825–848, 2002.

[12] AIAA, "Guide for the verification and validation of computational fluid dynamics simulations," American Institute of Aeronautics and Astronautics, Reston, VA, Tech. Rep. G-077-1998, 1998.

[13] DoD, "Modeling and simulation verification, validation, and accreditation," Department of Defense, Tech. Rep. DODI 5000.61, 2009.

[14] W. L. Oberkampf, T. G. Trucano, and C. Hirsch, "Verification, validation, and predictive capability in computational engineering and physics," Applied Mechanics Reviews, vol. 57, no. 5, pp. 345–384, 2004. [Online]. Available: http://link.aip.org/link/?AMR/57/345/1

[15] L. Schwer, "An overview of the PTC 60/V&V 10: guide for verification and validation in computational solid mechanics," Engineering with Computers, vol. 23, no. 4, pp. 245–252, 2007.

[16] M. Greenwald, "Verification and validation for magnetic fusion," Physics of Plasmas, vol. 17, p. 058101, 2010.

[17] K. Solanki, M. Horstemeyer, W. Steele, Y. Hammi, and J. Jordon, "Calibration, validation, and verification including uncertainty of a physically motivated internal state variable plasticity and damage model," International Journal of Solids and Structures, vol. 47, no. 2, pp. 186–203, 2010.

[18] L. Schwer, "Validation metrics for response histories: perspectives and case studies," Engineering with Computers, vol. 23, no. 4, pp. 295–309, 2007.

[19] E. A. Patterson, P. Brailly, and M. Taroni, "High frequency quantitative photoelasticity applied to jet engine components," Experimental Mechanics, vol. 46, pp. 661–668, 2006.

[20] Standard Test Method for Calibration of Surface/Stress Measuring Devices, American Society for Testing and Materials Std. C1377-97, 2003.

[21] Standard Test Method for Measurement of Glass Stress-Optical Coefficient, American Society for Testing and Materials Std. C770-98, 1998.

[22] Standard Test Method for Calibration of Surface/Stress Measuring Devices, American Society for Testing and Materials Std. E2208-02, 2002.

[23] M. P. Whelan, D. Albrecht, E. Hack, and E. A. Patterson, "Calibration of a speckle interferometry full-field strain measurement system," Strain, vol. 44, pp. 180–190, 2008.

[24] T. Pavlidis, "A review of algorithms for shape analysis," Computer Graphics and Image Processing, vol. 7, pp. 243–258, 1978.

[25] S. Loncaric, "A survey of shape analysis techniques," Pattern Recognition, vol. 31, no. 8, pp. 983–1001, 1998.

[26] S. Hwang and W. Kim, "A novel approach to the fast computation of Zernike moments," Pattern Recognition, vol. 39, no. 11, pp. 2065–2076, 2006.
[27] A. Nabatchian, I. Makaremi, E. Abdel-Raheem, and M. Ahmadi, "Pseudo-Zernike moment invariants for recognition of faces using different classifiers in FERET database," in Third International Conference on Convergence and Hybrid Information Technology (ICCIT '08), vol. 1. IEEE, 2008, pp. 933–936.

[28] W. Yau, D. Kumar, S. Arjunan, and S. Kumar, "Visual speech recognition using image moments and multiresolution wavelet images," in International Conference on Computer Graphics, Imaging and Visualisation. IEEE, 2006, pp. 194–199.

[29] J. Kong, Y. Lu, S. Wang, M. Qi, and H. Li, "A two stage neural network-based personal identification system using handprint," Neurocomputing, vol. 71, no. 4-6, pp. 641–647, 2008.

[30] Z. Iscan, Z. Dokur, and T. Ölmez, "Tumor detection by using Zernike moments on segmented magnetic resonance brain images," Expert Systems with Applications, vol. 37, no. 3, pp. 2540–2549, 2010.

[31] B. Li, Y. Fan, M. Meng, and L. Qi, "Intestinal polyp recognition in capsule endoscopy images using color and shape features," in 2009 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, 2010, pp. 1490–1494.

[32] C. Wee, R. Paramesran, R. Mukundan, and X. Jiang, "Image quality assessment by discrete orthogonal moments," Pattern Recognition, vol. 43, pp. 4055–4068, 2010.

[33] C.-H. Teh and R. Chin, "On image analysis by the methods of moments," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 10, no. 4, pp. 496–513, Jul. 1988.

[34] K. See, K. Loke, P. Lee, and K. Loe, "Image reconstruction using various discrete orthogonal polynomials in comparison with DCT," Applied Mathematics and Computation, vol. 193, no. 2, pp. 346–359, 2007.

[35] B. Bayraktar, T. Bernas, J. Paul Robinson, and B. Rajwa, "A numerical recipe for accurate image reconstruction from discrete orthogonal moments," Pattern Recognition, vol. 40, no. 2, pp. 659–669, 2007.

[36] A. Patki, W. Wang, J. Mottershead, and E. Patterson, "Image decomposition as a tool for validating stress analysis models," in ICEM, vol. 14, 2010, pp. 4–9.

[37] W. Wang, J. Mottershead, and C. Mares, "Vibration mode shape recognition using image processing," Journal of Sound and Vibration, vol. 326, no. 3-5, pp. 909–938, 2009.

[38] W. Wang, J. Mottershead, and C. Mares, "Mode-shape recognition and finite element model updating using the Zernike moment descriptor," Mechanical Systems and Signal Processing, vol. 23, no. 7, pp. 2088–2112, 2009.

[39] P. López-Crespo, R. L. Burguete, E. A. Patterson, A. Shterenlikht, P. J. Withers, and J. R. Yates, "Study of a crack at a fastener hole by digital image correlation," Experimental Mechanics, vol. 49, pp. 551–559, 2009.

[40] D. Montgomery, Design and Analysis of Experiments. John Wiley & Sons Inc, 2008.

[41] O. Pizarro and H. Singh, "Toward large-area mosaicing for underwater scientific applications," IEEE Journal of Oceanic Engineering, vol. 28, no. 4, pp. 651–672, 2004.