MICHIGAN STATE UNIVERSITY LIBRARIES

This is to certify that the thesis entitled "Development of a Digital-based Particle Image Velocimetry (PIV) Technique" presented by Suman Chakrabarti has been accepted towards fulfillment of the requirements for the M.S. degree in Mechanical Engineering.

DEVELOPMENT OF A DIGITAL-BASED PARTICLE IMAGE VELOCIMETRY (PIV) TECHNIQUE

By

SUMAN CHAKRABARTI

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Mechanical Engineering

1992

ABSTRACT

DEVELOPMENT OF A DIGITAL-BASED PARTICLE IMAGE VELOCIMETRY (PIV) TECHNIQUE

By Suman Chakrabarti

PIV is a technique for determining the velocity field of fluid flows at many points simultaneously. Images of flows containing seed particles, separated by a time interval Δt, are digitized and processed in pairs.
Applying 2-D spatial cross-correlation methods to numerous small areas (or windows) of the images produces the average displacement of all particles within each area. Division of these displacements by Δt yields the respective area-averaged flow velocities. The above technique was applied to a simulated flow field corresponding to solid body rotation. It was found that a minimum of 4 particles per window was usually sufficient. A maximum out-of-plane particle loss of 20-25% appeared to represent a limiting tolerance. Typically, 90% of all measurements could be determined to a subpixel accuracy of 0.06 pixel in displacement. The technique was also applied to a wake-shear layer flow. Except for regions of high velocity gradients, the flow characteristics were readily resolvable.

Copyright by SUMAN CHAKRABARTI © 1992

Dedicated to my family, whose support has been beyond invaluable; and to the memory of my father, Mani Lal Chakrabarti

ACKNOWLEDGMENTS

This is to acknowledge the kindness, brilliance, breadth of knowledge, and creativity of my thesis advisor, Professor Manooch Koochesfahani. I am grateful for his aid in helping me grow both professionally and personally. This work has been executed under the Air Force Office of Scientific Research Grant No. AFOSR-89-0417.

TABLE OF CONTENTS

LIST OF FIGURES .... viii
NOMENCLATURE .... xi
INTRODUCTION .... 1
Part I. FEATURES OF TECHNIQUE .... 8
1. VELOCITY FIELD PROCESSOR .... 8
1.1. Background .... 8
1.2. Derivation of data window functions .... 13
1.3. Spatial cross-correlation .... 16
1.3.1. Optical cross-correlation .... 16
1.3.2. FFT-based digital cross-correlation .... 18
1.3.3. Direct digital cross-correlation .... 20
1.4. Velocity determination .... 21
2. VELOCITY FIELD POST-PROCESSING .... 24
2.1. Background .... 24
2.2. Error correction: the LSD method .... 29
2.2.1. Error detection .... 29
2.2.2. A "smarter" low pass filter .... 31
2.3. Subpixel accuracy .... 35
2.3.1. Probability functions .... 36
2.3.2. Statistical uncertainty .... 38
2.4. Circulation & Vorticity .... 41
2.4.1. Cartesian formulation .... 41
2.4.2. Radial formulation .... 43
Part II. TESTING & APPLICATION .... 47
3. SIMULATIONS .... 47
3.1. Artificial image generation .... 47
3.2. Variation of particle density .... 49
3.3. Out-of-plane motion .... 59
3.3.1. Effect of removal/addition of particles .... 60
3.3.2. Variation of particle intensity .... 68
4. REAL FLOWS .... 75
4.1. Wake-shear layer flow .... 77
4.1.1. Flow/PIV parameters .... 77
4.1.2. Velocity fields .... 79
Part III. CONCLUSIONS .... 83
FUTURE WORK .... 86
FIGURES .... 93
APPENDICES .... 134
APPENDIX A: Sample window grid .... 134
A.1. Edge effects .... 134
A.2. Window coordinate systems .... 135
APPENDIX B: Evaluation of correlation peak .... 136
APPENDIX C: Determination of Γ(r) .... 138
REFERENCES .... 141
LIST OF FIGURES

Figure 1. Experimental schematic from Keane & Adrian (1990). .... 4
Figure 2. Experimental schematic from Willert & Gharib (1991). .... 4
Figure 3. Image pair with consecutive locations of a group of particles. .... 13
Figure 4. Grid of sample windows over an image (first of a pair). .... 14
Figure 5. Superposition of sample and roam windows containing idealized particles. .... 14
Figure 6. Γ(x,y) integration contour. .... 42
Figure 7. Subdivision of contour arc sections into straight line segments. .... 44
Figure 8. Seed particles randomly distributed over a 512 × 512 pixel² image plane. .... 93
Figure 9. Variation of ρp — (RAW) velocity fields. .... 94
Figure 10. Variation of ρp — (LSD) velocity fields. .... 95
Figure 11. Variation of ρp — (FINAL) velocity fields. .... 96
Figure 12. Variation of ρp — velocity-derived quantities, ρp = 5.27 (FINAL). .... 97
Figure 13. Correlation histogram — entire (raw) velocity field for ρp = 5.27. .... 98
Figure 14. Correlation histogram — 30 worst measurements of Figure 13. .... 98
Figure 15. Variation of ρp — probability density functions, p(ε). .... 99
Figure 16. Variation of ρp — probability distribution functions, P(ε). .... 100
Figure 17. Variation of ρp — ε90 and FUD vs. ρp. .... 101
Figure 18. Variation of ρp — iso-accuracy plots (RAW). .... 102
Figure 19. Variation of ρp — iso-accuracy plots (LSD). .... 103
Figure 20. Variation of ρp — iso-accuracy plots (FINAL). .... 104
Figure 21. Variation of F — (RAW) velocity fields. .... 105
Figure 22. Variation of F — (LSD) velocity fields. .... 106
Figure 23. Variation of F — (FINAL) velocity fields. .... 107
Figure 24. Variation of F — probability density functions, p(ε). .... 108
Figure 25. Variation of F — probability distribution functions, P(ε). .... 109
Figure 26. Variation of F — ε90 and F50 vs. F. .... 110
Figure 27. Variation of F — iso-accuracy plots (FINAL). .... 111
Figure 28. Variation of F — statistical uncertainty plots (FINAL). .... 112
Figure 29. Variation of F — effect of filtering on statistical uncertainty. .... 113
Figure 30. Randomization of Io [LOCAL] — (RAW) subpixel accuracy vs. w-component. .... 114
Figure 31. Randomization of Io [LOCAL] — (FINAL) subpixel accuracy vs. w-component. .... 115
Figure 32. Randomization of Io [LOCAL] — histograms of "dimensionless" w-component (RAW). .... 116
Figure 33. Randomization of Io [LOCAL] — histograms of "dimensionless" w-component (FINAL). .... 117
Figure 34. Randomization of Io [LOCAL] — histograms of "dimensionless" w-component (RAW) — scaled version. .... 118
Figure 35. Randomization of Io [LOCAL] — histograms of "dimensionless" w-component (FINAL) — scaled version. .... 119
Figure 36. Randomization of Io [LOCAL] — behavior of histogram peaks (ordinate) and averages (abscissas). .... 120
Figure 37. Randomization of Io [AVERAGE] — (RAW) subpixel accuracy vs. w-component. .... 121
Figure 38. Randomization of Io [AVERAGE] — (FINAL) subpixel accuracy vs. w-component. .... 122
Figure 39. Randomization of Io [AVERAGE] — histograms of "dimensionless" w-component (RAW). .... 123
Figure 40. Randomization of Io [AVERAGE] — histograms of "dimensionless" w-component (FINAL). .... 124
Figure 41. Randomization of Io [AVERAGE] — behavior of histogram peaks (ordinate) and averages (abscissas). .... 125
Figure 42. Wake-shear layer flow — experimental setup. .... 126
Figure 43. Wake-shear layer flow — (RAW) velocity field. .... 127
Figure 44. Wake-shear layer flow — (FINAL) velocity field. .... 128
Figure 45. Wake-shear layer flow — (FINAL) velocity contour field. .... 129
Figure 46. Wake-shear layer flow — (FINAL) velocity profiles. .... 130
Figure 47. Wake-shear layer flow — (FINAL) velocity field [interpolated field images]. .... 131
Figure 48. Wake-shear layer flow — comparison of (FINAL) velocity profiles. .... 132
Figure 49. Spatial distribution of: grid points failing primary LSD criterion (*); grid points failing secondary LSD criterion (+). .... 133
Figure 50. Representation of 2-D spatial perturbation around velocity vector grid point. .... 91
Figure C.1. Γ(r) integration contour segment (1-2). .... 139

NOMENCLATURE

Disclaimer: The term pixel is not used in the usual sense herein. Strictly speaking, a pixel is an area. However, there are several parameters which require the use of the linear dimension only, including displacement, pixel coordinates, and correlation peak location. For this reason, the term pixel is assigned the dimensions of length. The "unit" of area becomes pixel².

Δt      time interval, between laser pulses or consecutive images
ρp      particle density: number per interrogation or sample window
        average particle separation
Δs      vector displacement moved during Δt

Window functions
R       roam window size
S       sample window size
WR      roam window function ("particle distribution" function)
WS      sample window function ("particle distribution" function)

Grid/pixel coordinates
m       unused image margin
n       no. of samples per grid row (RE: n × n sample window grid)
nx      x-coord. of sample window (RE: n × n sample window grid)
ny      y-coord. of sample window (corresponding to nx)
xc      x-coord. of sample window center point (pixels; RE: 512 × 512 pixel² image coordinates)
xc0     x-coord. of 1st sample window center point (pixels)
yc      y-coord. of sample window center point (pixels; RE: 512 × 512 pixel² image coordinates)
yc0     y-coord. of 1st sample window center point (pixels)
Film-based PIV (Adrian (1988))
τ(X)    transmissivity of photograph
I(X)    interrogation spot intensity
I_I     interrogation beam intensity
V_I     interrogation volume
X       vector coordinate of interrogation spot
X_I     vector coordinate of interrogation spot center

FFT-based PIV (Willert & Gharib (1991))
*       denotes spatial convolution
φ       correlation function
E[ ]    expected value
f       sample window function
g       roam window function
s       spatial shift transfer function

2-D Gaussian fitting
Ci      fit coefficients
xmax    correlation peak x-location, to nearest pixel
ymax    correlation peak y-location, to nearest pixel
xsub    subpixel peak x-adjustment
ysub    subpixel peak y-adjustment
xp      final peak x-location
yp      final peak y-location

Error processing (LSD)
(EFRA)q velocity comp. fractional error
FLSD    fraction of velocity grid points that failed LSD criterion
δq      velocity component difference
|δq|    absolute value of δq
        primary standard deviation factor
        secondary standard deviation factor
q       represents either u or v
        local average of velocity comp. (calculated from surrounding points)
qLSD    local standard deviation (LSD) of velocity component
qMEAS   measured value (RE: either simulation or real flow)
qTHEO   theoretical value (derived from theoretical equations)
⟨qLSD⟩  average of qLSD (executed over entire velocity field)
(σLSD)q standard deviation of qLSD (executed over entire velocity field)

Probability functions
δx      horz. displacement difference (obtained from δu, Δt)
δy      vert. displacement difference (obtained from δv, Δt)
pδx     probability x-density function
pδy     probability y-density function
p(ε)    denotes either or both pδx & pδy
Pδx     probability x-distribution function
Pδy     probability y-distribution function
P(ε)    denotes either or both Pδx & Pδy
X90     ε corresponding to Pδx = 0.9
Y90     ε corresponding to Pδy = 0.9
ε       displacement error variable
ε90     higher of X90 and Y90

Circulation and Vorticity

We then perform a direct spatial correlation of the two windows to obtain the averaged displacement for that sample window (the sample window can be considered analogous to an interrogation spot). This application is in a similar vein to that mentioned briefly by Adrian (1991), which uses the terminology first interrogation window and second interrogation window. In Willert & Gharib's case, the two windows are the same size: 32 × 32 pixel² (R = S = 32), for reasons to do with the nature of an FFT (this will be expanded upon in §1.1). We have used a variety of sizes for both R and S, and our definition of a grid of sample windows also differs. Details of these variations will be discussed in §1.2.

The remainder of the thesis is divided into three sections. Part I describes the principles upon which the software is based. Included are: the use of two-dimensional Gaussian fits to resolve particle displacement; an error detection scheme based on examination of the local standard deviation (LSD) of velocity measurements; the filtering routines based upon the previous scheme; and the methods of calculation of circulation and vorticity. Readers interested in applications may wish to proceed to Part II, referring to Part I as needed.

Part II details the results of application of our implementation of PIV. Diverse simulations were executed upon image pairs generated to reflect a solid body rotation flow field. This velocity field was chosen for its inherently large dynamic range and uniform vorticity.
The effects of the following were analyzed: variation of particle density, ρp; random out-of-plane removal/addition of particles; and variation of particle intensity (which can also represent out-of-plane motion). A real wake-shear layer flow is also examined, with its results considered in the context of the simulation results. When appropriate, all results are also compared with those of Lourenco, Krothapalli & Smith (1989), Keane & Adrian (1990), Willert & Gharib (1991), and Fincham, Blackwelder & Spedding (1991).

Part III discusses the conclusions and suggests possible improvements that can be made in the software. Also suggested are some guidelines for the application of PIV. All figures pertaining to the material of Parts II & III are included after Part III.

Part I. FEATURES OF TECHNIQUE

1. VELOCITY FIELD PROCESSOR

1.1. Background

Digital vs. optical-based PIV

As was mentioned in the Introduction, Willert & Gharib (1991) make several comparisons between PIV implementations based on fully digital and "opto-mechanical" analysis techniques. Briefly stated, their conclusions are that optical/mechanical analysis can be tedious and time consuming; further, there are inherent limitations in the determination of velocity vectors small in magnitude; and the causality information is lost because these techniques only use a single photographic image. While there apparently exist techniques that can compensate for both of these inherent limitations, they would appear to add complexity to the experimental setup (Willert & Gharib mention several papers covering such enhancements). Willert & Gharib's digital implementation addresses the above concerns by performing the analysis computationally and using a pair of single-exposed video images. The latter preserves phase information and makes it possible to resolve displacements of particles that moved such a small distance that they partially overlap their original locations in the previous image.
Nevertheless, there are at least two limitations to this operational mode of PIV. First, a true photographic transparency offers much better resolution at the present time than 512 × 512 pixel², which is a video standard (Adrian (1991) also discusses this). Second, Willert & Gharib had the 30 Hz (video) framing rate for their experiments. Such a relatively low image acquisition frequency limits the resolvable velocity range to that of low speed flows (typically of velocity order < 10 cm/s). It should be understood that this aspect of PIV is undergoing rapid evolution: there apparently exist experimental procedures which have the potential for increasing the effective framing rate. Both Adrian (1991) and Willert & Gharib (1991) note that these are technological constraints, which will ease with further development. Indeed, Adrian mentions the availability of 2048 × 2048 pixel² image arrays, as well as cameras with framing rates of 80 kHz or higher. It is observed that such equipment may not be of universal utility, with high cost being one mitigating factor at present. Another limitation is that very high speed cameras apparently cannot currently produce enough frames to examine a flow field over a long time. Also, the high-resolution arrays are evidently relatively slow. Once again, it should be realized that imaging technology is rapidly advancing at present, presumably resulting in improved speed and performance in the foregoing equipment. In sum, there is (currently) a tradeoff between speed and resolution when considering digital- vs. optical-based PIV. Adrian (1991) notes that advances in electronics can only expand the possible areas of application of digital PIV.
In particular, it is possible that higher framing rates will drastically reduce the various particle losses referred to in the Introduction.* That is, any particle would be given less opportunity to exit the interrogation window or fade from the viewing plane, if permitted less time in which to do so. The ability of digital PIV to resolve velocity vectors from very small particle displacements — without the additional optical-based experimental enhancements mentioned by Willert & Gharib (1991) — should also be of increased utility with faster cameras.

* That is, excessive in-plane or out-of-plane particle displacements, or insufficient overall particle density.

Our differences from Willert & Gharib (1991)

We have adapted our technique from that of Willert & Gharib (1991), with several modifications. As remarked upon in the Introduction, our sample and roam window sizes (R and S, in pixels) differ because of the method by which we compute the cross-correlation. We do it directly (Adrian (1991) also refers to it as direct cross-correlation), while Willert & Gharib use two-dimensional FFT's. Since the FFT is applied to data sets consisting of 2^n points (Press, Flannery, Teukolsky & Vetterling (1986)), one must either set R and S to a number of pixels equal to a power of 2, or "pad" the windows with zeroes to fill them out to such a number. The latter naturally would add noise to the process, perhaps requiring some sort of filtering. Willert & Gharib avoid this entirely by simply setting R = S = 32. Our implementation does not necessitate such a restriction upon window size. We have been able to vary both R and S in a semi-arbitrary fashion. Our main limitation is self-imposed: since it is easier to programmatically define a window's center if its size is an odd number (in pixels), we have explicitly made R and S odd.
Currently, our software permits size ranges as follows:

7 ≤ S ≤ 51 pixels
R ≤ 85 pixels     (1)

The above dimensions are in the context of 512 × 512 pixel² images; both we and Willert & Gharib (1991) use video cameras to digitize images at the standard 30 Hz framing rate. Willert & Gharib also emphasize the use of two 1-D Gaussian fits to determine the average particle displacement. These are performed around the highest correlation peak resulting from the spatial correlation of the two windows. They state that this method of obtaining the peak yields subpixel accuracy and is more accurate than the more "traditional" method of centroiding (Adrian (1988), Keane & Adrian (1990), and Adrian (1991) all discuss centroiding in varying degrees of detail). The basis of this conclusion apparently arose from a series of experiments using both techniques. We have carried this statement farther by using a single 2-D Gaussian fit to obtain the subpixel location of the correlation peak. We expect that this should further reduce the uncertainty of the fitting process.

We further do not deliberately use seed particles in our interrogation of fluid flows, whereas Willert and Gharib (1991) used 80 μm phosphorescent spheres. Lourenco & Krothapalli (1987) also use spherical "tracer" particles in their application of PIV; they establish several guidelines for particle size as well as other experimental parameters. Adrian (1991) states that actual "particles" for use in PIV may in fact be solid, liquid, or gaseous, depending upon the medium in which they move. In our case, the particles used were simply the naturally occurring dirt grains in the local (Michigan State University) water.* It is believed that they are passive, with negligible buoyancy.

* Flows with hydrogen bubbles as particles are also under current investigation.

The drawback of this PIV mode is that some degree of image pre-processing becomes
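The subpixel peak fit mentioned above can be illustrated with the standard three-point Gaussian estimator along one axis of the correlation plane; applying it along both axes gives the two 1-D fits of Willert & Gharib, whereas the thesis's own approach fits a single 2-D Gaussian to the neighborhood. The function name and interface below are illustrative assumptions, not the thesis's code:

```python
import numpy as np

def gaussian_subpixel_offset(c_minus, c_zero, c_plus):
    """Three-point Gaussian fit along one axis of the correlation plane.

    c_zero is the correlation value at the integer peak; c_minus and
    c_plus are its immediate neighbours.  Returns the fractional offset
    (in pixels) of the fitted peak centre relative to the integer peak.
    """
    ln_m, ln_0, ln_p = np.log(c_minus), np.log(c_zero), np.log(c_plus)
    return (ln_m - ln_p) / (2.0 * (ln_m - 2.0 * ln_0 + ln_p))
```

For correlation samples drawn from an ideal Gaussian peak centred at a fractional location, this estimator recovers the centre exactly; with real, noisy correlation data it is only an approximation, which is the motivation for the more general 2-D fit.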
Willert & Gharib (1991) explains that no image pre-processing such as thresholding was needed for their images because of the sufficiency of contrast between their flow medium and phosphorescent particles. It is probable that such techniques would enhance the contrast between our nonuniform particles and the flow background. At this juncture, our development of image pre- processing procedures is in the preliminary testing stage; some concepts under current investigation will be discussed in Part 111. It was nevertheless possible to analyze real flow images even without such enhancement. We also expect that our PIV system would even more readily process images with high "quality" particles such as those of Willert & Gharib (1991). We performed extensive simulations on artificially generated pairs of images to establish the effects of some important parameters (simulating particles of reasonable "quality"). These image pairs were designed and constructed to reflect a solid body rotation motion on the part of the particles. As stated in the Introduction, this flow field was chosen for its wide variation in velocity (in our case, from 0 at the lower left comer to a maximum at the largest radius) and uniform vorticity. We have obtained several different estimates of the upper limit of performance of our PIV implementation, as well as the effects thereon of several sources of loss/noise. The analysis of these simulations is presented within §3. Details of spatial correlation will be presented in mathematical and graphical form. Comparisons will be made with the extensive theoretical calculations in Keane & Adrian (1990), as well as the fully digital formulation of Willert & Gharib (1991). 13 1.2, Derivatign 9f dag window functiog e e [0.01 + t=tl t=tl+At Figure 3. Image pair with consecutive locations of a group of particles. Figure 3 shows two images separated by a time interval At with portions (windows) containing a finite number of particles. 
The intra-particle spacing remains roughly constant as the entire group moves an average distance Δs between the first and second images. This distance can be observed from the separation between the center points of the windows. In fact, this center point separation is exactly the quantity measured by the software to determine Δs, first to the nearest pixel, then to subpixel accuracy.

First, a grid of sample windows is defined as shown in Figure 4. They are directly adjacent to one another, as well as non-overlapping (overlapping of sample windows will be retained as an optional capability). That is, the distance between center points of two consecutive sample windows of size S is also equal to S. This differs from Willert & Gharib (1991) in that they step their 32 × 32 window in increments of 8 pixels. This relative redundancy of information apparently permits them to take a straight low pass filter of their data as a part of their post-processing. In our case, the two window sizes, R and S, determine the total number of windows on the grid. The initial value for the number of sample windows in one row of the grid, n, is

n = 512 / S ; n ∈ {integers}     (2)

Figure 4. Grid of sample windows over an image (first of a pair). [grid corners labeled (0,0), (n−1,0), (0,n−1), (n−1,n−1); image width 512 pixels]

Figure 4 therefore depicts an n × n sample window grid. Each roam window is emplaced concentric with its corresponding sample window, and is therefore larger by an equal number of pixels in each direction (see Figure 5).

Figure 5. Superposition of sample and roam windows containing idealized particles. [legend: sample window (dashed); roam window]

Willert & Gharib (1991) mention that the Nyquist sampling criterion (with respect to FFT's) restricts their maximum recoverable displacement to S/2 pixels for an interrogation window of size S.
They further state that attempting to resolve even this displacement often resulted in the addition of noise to the cross-correlation. This last arises from the increased probability of random particles of the roam window matching those within the sample window. The limit chosen on the basis of their experience was S/3. Note that for their value of S = 32, this represents a ceiling of about 10-11 pixels. We decided to use this limit as an initial guideline and to provide a basis for comparison. Thus, in our case,

R = S + 2(S/3)     (3)

Note that the maximum values from equation (1) are selected to correspond to equation (3). The value of R given by equation (3) eventually served as a minimum, subject to increase for a variety of reasons (discussed in §4.1.1 and the Conclusions). Note that we are not restricted by the Nyquist criterion in the same sense as in Willert & Gharib (1991) (i.e., in a non-FFT environment). Nevertheless, it certainly remains true that one should not blithely assign larger and larger roam windows, for the same reasons regarding the addition of noise to the process of correlation.

The division in equation (2) always leaves a remainder (because S is always odd); this fractional part is converted into the unused margin of the image as follows:

m = MOD(512, S)     (4)

where MOD(p,q) is the modulus function, returning the remainder of the division of p by q. In fact, this formulation of n and m needed to be modified for slightly larger values of m. Otherwise, interrogation of sample windows located at the edge of the image (i.e., considering Figure 4, those windows with an "x" or "y" value of 0 or n−1) was hampered by the possibility that the corresponding roam windows might not be entirely on the image (cf. Appendix §A.1 for details).
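The sizing rules of equations (2)-(4) can be collected in a short sketch. The function name is illustrative, and the integer-division reading of the S/3 extension in equation (3) is our assumption (the thesis only requires both R and S to be odd):

```python
def window_grid(S, image_size=512):
    """Sample-window grid parameters per equations (2)-(4).

    Returns (n, m, R): windows per grid row, unused image margin,
    and the minimum roam window size R = S + 2*(S/3), with integer
    division standing in for the truncation to whole pixels.
    """
    if S % 2 == 0:
        raise ValueError("sample window size S is kept odd")
    n = image_size // S        # equation (2): n = 512/S, truncated
    m = image_size % S         # equation (4): m = MOD(512, S)
    R = S + 2 * (S // 3)       # equation (3): roam extends ~S/3 per side
    return n, m, R
```

With S = 51 this gives n = 10, m = 2, and R = 85, consistent with the maxima of equation (1).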
1.3. Spatial cross-correlation

The following consists of a discussion of PIV processing in three different contexts: optical cross-correlation in a double-pulsed laser system; FFT-based digital cross-correlation; and direct digital cross-correlation using the sample and roam windows from §1.2. The digression at this point from the material of §1.2 to these two prior implementations of PIV is done to place our methodology in an established framework and serve as a basis for comparison (readers may prefer to proceed directly to §1.3.3 for information pertaining to our own implementation only).

1.3.1. Optical cross-correlation

The process of spatial cross-correlation is discussed in Adrian (1991) in several contexts. Most of the discussion is on optical-based PIV, with a reference to a possible digital-based system. This being the case, Adrian delineates a correlation formula — for a double-pulsed laser system — based upon the intensity of the laser beam interrogating a spot on the photographic transparency, I_I(X − X_I), and the transmissivity of the photograph, τ(X). The vector coordinate X is defined within the interrogation spot (i.e., the "image" plane), with X_I denoting the center of the spot. Then the interrogation spot intensity is

I(X) = I_I(X − X_I) τ(X)     (5)

This can be interpreted as the intensity directly "after" or "behind" the photograph (Adrian (1988), Keane & Adrian (1990), Adrian (1991)). This intensity is then
It appears therefore that R should have at least two locations with high correlations: at s = 0, which indicates zero displacement or the particles within the interrogation spot are correlating (" matching ") with themselves; and at the value of s corresponding to the actual distance moved by the particles. Suppose, for example, that ten particles are within a particular interrogation volume of a flow; then — for a double-pulsed system — twenty particle images would be contained within the interrogation spot (provided that none were lost). At s = 0, there would be a correlation peak corresponding to roughly twenty images matching with themselves. Adrian calls this the self-correlation or pedestal component of R, RP. For the s value representing the true distance moved, there would be a peak about half the size of RP, since there are only ten particle images matching. That is, the ten particle images at the initial locations within the spot match with the ten particle images at the final locations. Note that the reverse can also be true: the final locations can match with the original locations at an 3 value that is the exact opposite to the previous value. Adrian therefore terms these two values of s as the displacement correlation peaks RD, 18 and RD_. Thus, the magnitude of the displacement is established by either displacement peak, but the direction is not. This is the directional ambiguity referred to by Willert & Gharib (1991), which mentions that additional enhancements to the experiment are necessary to resolve that. They mention an example of such an enhancement, called image shifiing, from a paper by Landreth & Adrian (1988). Lourenco et a1 (1989) also discusses image shifting extensively. The basic principle of the technique is to experimentally perform an optical Galilean transformation on the velocity field. This addition of a constant velocity resolves the ambiguity by biasing the measurement of the displacement peak. 
The source of the ambiguity, however, is the basing of the measurement upon a single image, as mentioned in the Introduction.

1.3.2. FFT-based digital cross-correlation

Willert & Gharib (1991) discuss the cross-correlation process in the context of data window functions instead of intensities (Adrian (1988) also mentions this parallel). They first state that the process can instead be expressed as a convolution involving the two window functions and a spatial shift transfer function, together with noise,

$$g(i,j) = f(i,j) * s(i,j) + \text{noise} \tag{7}$$

where f and g are the sample and roam window functions, s is the spatial displacement (Willert & Gharib also term it the "impulse response"), and * represents the spatial convolution of f and s. More explicitly, the convolution of the initial particle distribution function f with a spatial shift s produces a new particle distribution function g. Willert & Gharib further state that the convolution process can be expedited if one neglects the noise effects and takes the Fourier transform of equation (7); division and deconvolution then yield s. This method is apparently very sensitive to noise indeed. Therefore, they instead enact a spatial cross-correlation of f and g as a statistical expected value E:

$$\phi_{fg}(i,j) = E\left[f(i,j)\,g(i,j)\right] \tag{8}$$

The above equation can be rewritten in discrete form as a double summation of products of f and g over the window coordinates (see Willert & Gharib (1991) for details of the derivation).

1.3.3. Direct digital cross-correlation

In our implementation, the spatial cross-correlation is computed directly. Consider the sample and roam windows of §1.2 in terms of two-dimensional data window functions:

$$\text{SAMPLE:}\quad W_S(i,j)\,;\quad i,j \in [1,S] \text{ (pixels)}$$
$$\text{ROAM:}\quad W_R(i,j)\,;\quad i,j \in [1,R] \text{ (pixels)} \tag{11}$$

These functions may also be thought of as random particle distributions. The intervals [1,S] and [1,R] represent the local pixel coordinates with respect to the lower left corner of each window, which is considered to be [1,1].
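As an illustration of the FFT route of §1.3.2 above (not the thesis code), the following is a minimal Python/NumPy sketch of a circular cross-correlation of two equal-size windows; the function name is a hypothetical choice for illustration:

```python
import numpy as np

def fft_cross_correlation(f, g):
    """Circular cross-correlation of two equal-size window functions
    via the FFT: phi = IFFT( conj(FFT(f)) * FFT(g) ).
    The peak of phi sits at the dominant particle displacement, with
    wrap-around (periodic) boundaries -- a known caveat of this route."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    return np.real(np.fft.ifft2(np.conj(F) * G))

# A single "particle" shifted by (2 rows, 1 column) between windows:
f = np.zeros((8, 8)); f[1, 1] = 1.0
g = np.zeros((8, 8)); g[3, 2] = 1.0
shift = np.unravel_index(np.argmax(fft_cross_correlation(f, g)), (8, 8))
```

In this sketch the correlation peak lands at the (row, column) shift between the two windows, illustrating why the peak location encodes the particle displacement.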
The windows' common center point is referenced to the overall 512 × 512 pixel² image coordinate system.* With the coordinates of all samples defined, we simply performed the cross-correlation of each sample window with its corresponding roam window directly in the space domain,

$$\phi(h,k) = \sum_{j=1}^{S} \sum_{i=1}^{S} W_S(i,j)\, W_R(i+h,\, j+k)\,; \qquad h,k \in [0,\, R-S] \tag{12}$$

where (0,0) = upper left corner of the domain and (R−S, R−S) = lower right corner of the domain.

Note that the formula has already been discretized to resemble the enacted programmatical procedure. Further, the functions have been defined such that there is a constant offset of (R−S)/2 in displacement measurement in both the horizontal (u-component) and vertical (v-component) directions.† A final note regarding equation (12) is that φ(h,k) is as defined for a specific grid point or (n_x, n_y) pair. Thus, one can loosely consider φ to be of four-dimensional nature.

*See Appendix §A.2 for the conversion from grid coordinates to pixel coordinates. Note according to Appendix §A.2 that a roam window's coordinates are defined with respect to the same (center) point as a sample window's.

1.4. Velocity determination

Correlation peak analysis

As stated in §1.3.1, the result of a cross-correlation based on a double-pulsed PIV system is a functional domain with a self-correlation peak and two displacement correlation peaks. The nature of this correlation domain is different in a single-pulsed system such as that of Willert & Gharib (1991) or ours. Because there are two distinct images, there is no self-correlation peak. Further, there is only a single displacement peak. In either case, the goal of PIV is to make a precise measurement of the location of the peak (presumably using some appropriate technique to determine which displacement peak is the true one in a double-pulsed system).
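The direct space-domain correlation of equation (12) above can be sketched as follows; this is an illustrative Python/NumPy rendering, not the thesis code (which, per the text, also carries the (R−S)/2 offset bookkeeping):

```python
import numpy as np

def direct_cross_correlation(w_s, w_r):
    """Direct spatial cross-correlation of an S x S sample window w_s
    with an R x R roam window w_r, after equation (12):
    phi(h,k) = sum_ij w_s(i,j) * w_r(i+h, j+k),  h,k in [0, R-S].
    The measured displacement is the peak location minus the constant
    offset (R-S)/2 in each direction."""
    S, R = w_s.shape[0], w_r.shape[0]
    n = R - S + 1
    phi = np.empty((n, n))
    for k in range(n):              # vertical shift
        for h in range(n):          # horizontal shift
            phi[k, h] = np.sum(w_s * w_r[k:k + S, h:h + S])
    return phi
```

A peak at (h, k) = ((R−S)/2, (R−S)/2) then corresponds to zero displacement, which is the constant offset the text describes subtracting after the fit.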
Recall that there remain other methods to determine displacement, including the Young's fringes method discussed in Adrian (1986), Lourenco et al. (1989), Keane & Adrian (1990), and Adrian (1991).

†It became apparent that it was much faster to simply subtract this constant from the measured u- and v-components after the 2-D Gaussian fit, rather than account for it during the correlation process. Another programming alternative would have been to set the indices of the array variables containing the window functions and correlation domain to be symmetric about zero; this also would have entailed additional bookkeeping.

Keane & Adrian (1990) determine the peak's location through finding its centroid. Willert & Gharib (1991) use three-point exponential curve fits to do the same; their claim of superior accuracy over the center-of-mass centroiding technique is evidently based on supplemental experiments comparing these two techniques along with a parabolic fit. They attribute this superiority to the roughly Gaussian shape of the correlation peak.

Peak location

Our implementation, like Willert & Gharib's, finds the correlation peak initially to the nearest pixel. As previously described, we then employ a two-dimensional Gaussian fit over a 3 × 3 pixel² correlation "window" centered upon the peak (pixel) location. The coefficients to be determined are contained in the functional form,

$$F(x,y) = e^{\,C_0 + C_1 x + C_2 x^2 + C_3 y + C_4 y^2} \tag{13}$$

The information in the 3 × 3 submatrix is then converted into a 5 × 5 system of equations to be solved for the C_i's. For a thorough discussion of the matrix theory underlying the foregoing, see the pages indicated with the reference listing for Press et al. (1986). The subpixel location of the peak is then found by determining the relative maximum of F(x,y) (see Appendix B for explicit formulae).
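The 2-D Gaussian fit of equation (13) can be illustrated with a small sketch: taking logarithms of the 3 × 3 correlation values makes the fit linear in the five coefficients, and the subpixel peak follows from setting the partial derivatives of the exponent to zero. This is a reconstruction under those assumptions, not the thesis code (which solves a 5 × 5 system; a least-squares solve is used here instead):

```python
import numpy as np

def gaussian_subpixel_peak(corr3x3):
    """Fit F(x,y) = exp(C0 + C1 x + C2 x^2 + C3 y + C4 y^2), eq. (13),
    to a 3x3 correlation patch centred on the integer peak.
    Local coordinates x, y in {-1, 0, 1}; values must be positive.
    Returns the subpixel offset (dx, dy) of the maximum, obtained from
    dF/dx = 0 -> dx = -C1/(2 C2) and dF/dy = 0 -> dy = -C3/(2 C4)."""
    ys, xs = np.mgrid[-1:2, -1:2]
    A = np.column_stack([np.ones(9), xs.ravel(), xs.ravel()**2,
                         ys.ravel(), ys.ravel()**2]).astype(float)
    c, *_ = np.linalg.lstsq(A, np.log(corr3x3.ravel()), rcond=None)
    return -c[1] / (2.0 * c[2]), -c[3] / (2.0 * c[4])
```

For a patch whose logarithm is exactly quadratic (a true Gaussian peak), this recovers the subpixel offset exactly; real correlation data only approximate that shape, which is the source of the fitting error discussed later.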
Note that the actual displacement is realized by subtracting the constant offset mentioned regarding equation (12) from this relative maximum. The resulting displacement is divided by Δt to yield the subpixel velocity measurement. Repeating the correlation/fitting process for the entire sample window grid produces the "raw" velocity field (together with other supplemental data usable for diagnostic purposes). This raw velocity data becomes the input to most of the rest of the processing software, including: vector plotting; error detection and analysis; filtering; and circulation/vorticity calculation.* These velocity post-processing procedures will be discussed in the remainder of Part I.

*Examples of velocity vector plots are included after Part III, and will be discussed together with other results in Part II.

2. VELOCITY FIELD POST-PROCESSING

2.1. Background

PIV error

As has been commented upon, a major source of error in PIV is a shortfall of particle density, ρ_P. The obvious preventive measure against this error is to seed the flow with a sufficiently high number of particles. This does not prevent random fluctuations in ρ_P from sample to sample. These variations may then result in too few particles in some of the sample (or interrogation) windows. The Introduction mentioned criteria recommended by other researchers to minimize these particle density fluctuations. These criteria included restrictions on particle density and both in-plane and out-of-plane particle displacements. Adrian (1988) and Keane & Adrian (1990) conduct an extensive theoretical study and categorization of these error causes and their effects (much of this is summarized in Adrian (1991)). Keane & Adrian, in particular, propose several definitions and suggest error reduction criteria and terminology.
Although these were developed in the context of a double-pulsed laser, optical interrogation-based PIV system, the concepts are stated to be valid and conditionally usable both for other PIV implementations and for other multi-point velocity measurement techniques such as Laser Speckle Velocimetry (LSV). For example, their nomenclature for excessive in-plane displacement that takes a particle beyond the bounds of a sample (or interrogation) window is the loss-of-pairs effect. This terminology arises from the fact that a double-pulsed PIV system generates two images for each particle (as referred to in the Introduction); thus one particle image remains within the sample window while the other (later in time) does not, and becomes the missing half of a pair. Other losses can occur as a result of a biasing of the velocity measurement. One example of such a bias is called gradient bias by Keane & Adrian (1990). This occurs if the chosen sample window is not small enough to minimize a gradient of velocity within it. The result is a broadening of the correlation peak, with particles within the sample window producing correlation peaks that no longer overlap each other to produce a single narrow peak. Since PIV is a spatially averaging measurement technique, the faster part of the gradient is averaged with the remainder. Depending on the strength of the gradient, this may result in a shift of the displacement correlation peak to a lower value than the actual average particle displacement. A related type of bias is called detection bias by Keane & Adrian (1990). This also takes place in flow regions with velocity gradients sufficient to cause variation in particle displacements within a sample window. Recall that the magnitude of a correlation peak is dependent upon the number of particles that match at that displacement value (cf. §1.3.1). The end result is a lowering of the correlation peak together with the broadening of gradient bias.
A sample window sized to minimize internal gradients (i.e., better spatial resolution) will make these biasing effects small. Since the two are interrelated, they are referred to jointly as velocity bias in Adrian (1991). Willert & Gharib (1991) also treat them jointly. Willert & Gharib (1991) further mention a strictly digital consideration: their use of three points in each direction to simulate the correlation peak for the purpose of a Gaussian fit introduces a certain amount of error. This remains true with our own implementation, although we have reduced the error somewhat by utilizing nine points and using a 2-D Gaussian fit to obtain the horizontal and vertical displacements simultaneously (cf. §1.4). A more general limitation is that PIV cannot easily take curvature of the particle trajectory into account. Such curvature can be minimized with a smaller Δt or increased spatial resolution. It is also possible that an improved algorithm could actually "search" for the magnitude of curvature in a trajectory being measured. Note that an increased overall particle density permits the use of a smaller sample window, which increases the spatial resolution and decreases the apparent spatial gradients over the size of the window. A higher ρ_P can therefore compensate to some degree for these other error sources; an insufficient density only exacerbates them. An increase in ρ_P is not always possible in a given experiment.* As expressed in §1.1, we do not seed the flow with extraneous particles as a part of our experimentation, instead using the naturally occurring bubbles and dirt particles. It is therefore necessary to develop techniques for testing the veracity of all measurements and correcting those that are most questionable due to any of the preceding errors.

General error analysis

Adrian (1991) divides the class of PIV error analysis into two categories: those

*Even if it is possible, an augmentation of ρ_P may not be advisable.
One reason already stated would be that an excessive particle density would result in effectively a two-phase flow.

enacted during the processing of the velocity field; and those performed after that processing has been completed. In the former case, the crux is to determine whether the highest correlation peak is the true displacement peak or a noise peak. Such a noise peak can occur if the actual displacement peak is not sufficiently higher than the background level of the correlation domain (which in turn can be attributed to insufficient particle density to produce a higher displacement peak). Adrian states two methods by which to reduce the possibility of a noise peak being the highest: restricting the search for the displacement peak to that region of the correlation function where the velocity vector is expected to be; and/or mandating that the size of the highest peak be higher than a detection threshold. Keane & Adrian (1990) extend the latter by defining the detectability, D, to be the ratio of the tallest peak to the second tallest peak. It is suggested that D should be greater than some D_0 for a valid measurement, where 1.2 < D_0 < 1.5 is the recommended range for the threshold. Once again, the essence of both of the foregoing procedures is that the measured vectors are scrutinized immediately for validity. The other class of error processing is to allow invalid vectors during the velocity field processing and replace them afterwards if they are drastically different from their neighboring vectors. Adrian (1991) refers to work by Landreth & Adrian (1989) which carries out this type of procedure. Also mentioned is that several of the next highest correlation peaks can be stored to determine if they are more suitable displacement peaks; this can be done concurrently with the checking of neighboring vectors. Willert & Gharib (1991) also perform a procedure based upon comparison of a velocity vector with adjacent vectors.
The first part of their processing consists of stepping their sample windows by a fraction of the window size to produce measurements redundant over a 3 × 3 set of vectors. Then, the post-processing involves interactive removal of those points judged to be extremely different from the eight immediately adjacent points. It was affirmed that only a small percentage of the overall velocity field needed to be replaced in this fashion. Each bad vector is replaced by the average of the 3 × 3 set centered on that vector. Finally, the entire velocity vector field is replaced in the same fashion. It is stated that this has the effect of a spatial low-pass filter "which removes the high frequency jitter associated with the different location estimates of the peak correlation."

Our own procedure is a mostly post-interrogation procedure that is also founded upon the comparison of vectors with the immediately surrounding vectors. This comparison is carried out through basic statistical analysis of the eight vectors adjacent to each central one being investigated. The result is the mean and standard deviation of each eight-vector set (henceforth these eight-point properties will be referred to as local properties). If the u-/v-components of the measured central vector are sufficiently different from the local mean u-/v-components, then that central vector is targeted for replacement by this local mean. The local standard deviation, or LSD, becomes the crucial parameter, since it yields a measure of the size of the aforementioned differences in the u- and v-components (compared to the local mean u and v). The focus of the remaining sections is to illustrate the statistics, capabilities, and limitations of the LSD method. Eventually, the methods of PIV accuracy determination will be discussed. These include techniques to
establish the accuracy of the PIV implementation upon a simulated flow field, followed by methods of estimation of the average fluctuation of measurements of real flows.

Velocity-derived quantities

The final section of Part I covers the calculation of velocity-derived quantities. The two quantities evaluated in our study are circulation and vorticity; the capability to determine additional properties is readily available as a future option. Since it is apparently difficult to measure vorticity and circulation directly, an ability to calculate them indirectly provides a useful supplement to velocity field determination. For developmental purposes, our study emphasized the acquisition and accuracy of velocity data. Further evolution of the PIV technique, especially in image enhancement, will allow for a greater focus on velocity-derived quantities.

2.2. Error correction: the LSD method

2.2.1. Error detection

One note regarding the forthcoming error processing is that some of it applies only to the simulated flows (cf. §3.1). This is done to establish the accuracy of the PIV implementation; therefore all obtained velocities are compared to those given by the appropriate theoretical equations (e.g., for §3.1, these would be the equations governing solid body rotation). The difference between measured and theoretical velocity components can be expressed as

$$|\delta q| = |q_{\mathrm{meas}} - q_{\mathrm{theo}}| \tag{14}$$

where q can represent either u or v. Equation (14) can be considered an indicator of relative velocity (component) error. A fractional velocity error can also be defined. Taking into account the possibility of zero values in u or v,

$$(E_{\mathrm{FRAC}})_q = \begin{cases} 0\,, & |\delta q| = 0\,, \\[4pt] \dfrac{|\delta q|}{q_{\mathrm{theo}}}\,, & q_{\mathrm{theo}} \neq 0\,, \\[4pt] \dfrac{|\delta q|}{q_{\mathrm{meas}}}\,, & q_{\mathrm{theo}} = 0\,, \end{cases} \qquad q \in \{u,v\} \tag{15}$$

The fractional error E_FRAC can provide an insight into the magnitude of velocity error at a specific location, independent of the magnitude of velocity itself. In regions of low velocity, this error may be inflated.
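The case structure of equation (15) can be sketched in a few lines (hypothetical function name; an illustration rather than the thesis code):

```python
def fractional_error(q_meas, q_theo):
    """Fractional velocity-component error (E_FRAC)_q per equation (15),
    with the zero-value guards: 0 when the components agree exactly,
    |dq|/q_theo normally, and |dq|/q_meas when q_theo is zero."""
    dq = abs(q_meas - q_theo)
    if dq == 0.0:
        return 0.0
    if q_theo != 0.0:
        return dq / q_theo
    return dq / q_meas
```

The final branch is what keeps the measure defined at points where the theoretical component vanishes, e.g. near the center of a solid body rotation.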
Therefore it is not incorporated into the LSD method, but nevertheless retained as an alternative diagnostic. In real flows, it is more difficult to find a standard of comparison for the measured velocity field based on a known analytical solution. Therefore, the averages of the surrounding eight u-/v-components (cf. §2.1) were employed as the new basis. The "error" was calculated from the difference between the measured and local mean u-/v-components (with q still representing either):

$$|\delta q| = |q_{\mathrm{meas}} - q_{\mathrm{AVG}}| \tag{16}$$

Equation (16) is therefore an indication of the consistency of measured velocity vectors with respect to their neighbors. The essence of the LSD method is to provide a criterion for a tolerable level of inconsistency. This criterion is the local standard deviation, or LSD, of the eight grid points (vectors) surrounding the velocity vector being analyzed. In a more general sense, standard deviation is a measure of the uniformity (or lack thereof) of a list of values. The smaller it is, the less variation exists among individual values of the list. This also indicates that the average of the list is more reliable in some sense (e.g., if all the values of a list are equal, the standard deviation is zero). Calculation of each velocity component LSD is based upon the unbiased estimator,

$$\sigma = \sqrt{\dfrac{\sum_{i=1}^{n} (p_i - \mathrm{avg})^2}{n-1}} \tag{17}$$

where n = number of items in the list, p_i = the i-th item in the list, and avg = average of the values in the list. This is sometimes referred to as the sample standard deviation in a statistics context. For any grid point or vector, (n_x, n_y), under investigation, each component's LSD is defined relative to the following local average of those components,

$$q_{\mathrm{AVG}} = \dfrac{\left\{\displaystyle\sum_{j=n_y-1}^{n_y+1} \sum_{i=n_x-1}^{n_x+1} q(i,j)\right\} - q(n_x,n_y)}{8}\,, \qquad n_x, n_y \in [0,\, n-1] \tag{18}$$

where n ≡ the number of grid points per row. Note that the velocity component of the center point (n_x, n_y) is included within the double summation, then explicitly subtracted.
Substituting this u- or v-component average into equation (17) then yields that component's LSD:

$$q_{\mathrm{LSD}} = \sqrt{\dfrac{\left\{\displaystyle\sum_{j=n_y-1}^{n_y+1} \sum_{i=n_x-1}^{n_x+1} \left(q(i,j) - q_{\mathrm{AVG}}\right)^2\right\} - \left(q(n_x,n_y) - q_{\mathrm{AVG}}\right)^2}{7}} \tag{19}$$

Again, observe that equation (19) is similar in form to equation (17) except for the exclusion of the term corresponding to the center point (n_x, n_y).

2.2.2. A "smarter" low pass filter

At this point, the magnitude of the velocity difference |δq| (equation (16)) is compared to the LSD for each velocity component (equation (19)) according to the following condition:

$$|\delta q| > N_{\mathrm{SDF}} \times q_{\mathrm{LSD}} \tag{20}$$

where N_SDF ≡ standard deviation factor. That is, a grid point having a u- or v-component that deviates by more than N_SDF standard deviations from the local average, q_AVG, is targeted for replacement. N_SDF is a constant that can be chosen arbitrarily for a particular flow. This enables adjustment of the sensitivity of the process of filtering and replacement of velocity vectors. We have generally used N_SDF = 2 for our PIV implementation. Note that this is a relatively conservative criterion, usually isolating roughly the very worst 5-10% of the measured velocity vectors (this grid fraction is denoted F_UD; this definition will be used in Part II). Such a coarse "filter" is desirable here as a substitute for interactive identification and removal of the very worst vectors. N_SDF = 2 appeared to provide a sufficiently coarse level of selectivity, without being so narrow as to fail to identify all of the conspicuously bad vectors. Willert & Gharib (1991) mention an interactive step (of bad vector identification and removal) before application of a straight low pass filter to remove the "high frequency jitter associated with the velocity data".
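The LSD criterion of equations (18)-(20) above can be sketched for one velocity component as follows (an illustrative NumPy rendering with a hypothetical function name; border handling is an assumption, since the text does not specify it):

```python
import numpy as np

def lsd_flag(q, n_sdf=2.0):
    """Flag grid points whose component deviates from the local
    (8-neighbour) mean q_AVG by more than n_sdf local standard
    deviations q_LSD (equations (18)-(20)).  q is a 2-D array of one
    velocity component; border points are left unflagged."""
    bad = np.zeros(q.shape, dtype=bool)
    rows, cols = q.shape
    for j in range(1, rows - 1):
        for i in range(1, cols - 1):
            neigh = np.delete(q[j-1:j+2, i-1:i+2].ravel(), 4)  # drop centre
            q_avg = neigh.mean()                               # eq. (18)
            q_lsd = neigh.std(ddof=1)                          # eq. (19), n-1 = 7
            if abs(q[j, i] - q_avg) > n_sdf * q_lsd:           # eq. (20)
                bad[j, i] = True
    return bad
```

Note how an isolated outlier is flagged while its neighbours are not: the outlier inflates their local q_LSD, which loosens the criterion exactly as the text describes for clusters of bad vectors.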
Performing the low pass filter without the foregoing intermediate step would result in data contamination: the vectors adjacent to the extremely bad ones would become more erroneous due to the "influence" of the bad vectors in the filtering.* The low pass filter remains useful in removing this "jitter" from our own velocity data. Note that in our case, the mechanism of bad vector detection can become nearly automatic given an appropriate choice of N_SDF.† Conversely, the technique may fail in cases of several adjoining bad vectors. Recall from equation (19) that the velocity component LSD's are calculated based on the surrounding eight vectors. If any of these vectors are in error, then the LSD of the grid point under inspection is artificially inflated (as mentioned before, standard deviations are a measure of the uniformity of lists of values). Isolated regions of such bad vectors would thereby produce higher LSD's within those regions. Unless the vector being examined has a very large |δu| or |δv|, it will not be identified as a bad vector by the equation (20) condition (this condition can also be termed the "LSD criterion"; the identification of a vector by equation (20) may therefore be labeled as the vector's "failing the LSD criterion" or similar phrasing). It must be emphasized that failure of identification of bad points by the LSD method occurs mostly with closely adjacent clusters of bad grid points. Further, it has been observed that if only two or three bad points are adjacent, then they do not always

*Note that this low pass filtering process consists of replacement of every velocity grid point with the average of the (usually eight) surrounding grid points.
†Even the targeting of mildly erroneous vectors is not unduly harmful for three reasons: 1) N_SDF = 2 is apparently sufficiently coarse to ensure that such targeting is uncommon; 2) the replacement of such a vector by its local average (q_AVG, based on the surrounding vectors as shown by equation (18)) will not introduce significant error unless at least some of the neighboring vectors are in fact grossly erroneous; 3) the previous is rendered even less likely by the procedure of sorting the list of bad vectors in order of decreasing velocity differences (|δq|) before replacing these vectors (this strategy is described below). It is therefore more probable that one of these hypothetical neighboring grossly erroneous vectors will be replaced first, meaning that the local average of the moderately erroneous vector will not be contaminated by that extremely bad vector. This erroneous vector simply will no longer exist, because of its earlier replacement by its own local average.

fail to be identified by the LSD criterion. Nevertheless, interactive selection similar to that of Willert & Gharib (1991) appears to be necessary in such circumstances to complete the task begun by the LSD method. At this point in the development of our PIV system, such an interactive step has been necessary only for a very small number of vectors in a given velocity data set, if at all (e.g., the wake-shear layer flow of Part II had less than 1% of its vectors detected interactively). Later in this section, a preliminary suggestion will be made of a secondary LSD filter that may be able to isolate the aforementioned areas of erroneous vectors for replacement, identifying them via their excessively inflated LSD's. The result of the application of the LSD method to all vectors in the velocity grid is a list of points sufficiently erroneous to have failed the criterion of equation (20).
The potential problem of adjacent bad vectors is then mitigated by sorting this list in order of decreasing velocity difference (|δq|). Thus, a principal advantage of the technique: the very worst points are the first ones to be replaced. For example, a vector having a u-/v-component with |δq| = A (where A is high enough to fail the LSD criterion) will only be replaced after another vector with |δq| = A′, where A′ > A. Values lower than N_SDF = 2 can be applied to a velocity field; this would increase the number of vectors failing the equation (20) criterion. One can consider that in the limit as N_SDF → 0, the number of points to be replaced approaches the total number of grid points, n². This case is reminiscent of the spatial low pass filter of Willert & Gharib (1991) referred to earlier (cf. §2.1). The pivotal difference is that this LSD "filter" is not an indiscriminate one, i.e., one that automatically processes all vectors in the same order every time. One can instead speak of a sorted low pass "filter" which replaces points in order of decreasing error. Taking N_SDF to lower values has the side effect of increasing the amount of time needed to sort an increasingly large list of "bad" vectors. To alleviate this, a twofold or combination filter was adopted. The first stage involves analyzing a raw velocity field with the LSD filter with a high value of N_SDF. This would replace the very worst points, ideally leaving a nearly smoothly varying velocity field. This intermediate vector set can then be processed by a straight low pass filter analogous to that of Willert & Gharib (1991). We expect that the formerly discussed data contamination effects will be small as a result of this filtering combination. Also, as stated earlier, we have greatly reduced, but not totally eliminated, the need for interactive detection and replacement of erroneous vectors.
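The sorted, worst-first replacement described above can be sketched as follows (an illustrative rendering; the function name and list-based interface are assumptions):

```python
import numpy as np

def replace_worst_first(q, bad_points):
    """Replace flagged grid points (a list of (j, i) interior indices)
    by their local 8-neighbour average, in order of decreasing deviation
    |dq| from that average, so the worst vectors are corrected first and
    no longer contaminate their neighbours' local averages."""
    q = q.copy()

    def local_mean(j, i):
        return np.delete(q[j-1:j+2, i-1:i+2].ravel(), 4).mean()

    # Sort once on the raw field, worst (largest |dq|) first.
    order = sorted(bad_points,
                   key=lambda p: abs(q[p] - local_mean(*p)),
                   reverse=True)
    for j, i in order:
        q[j, i] = local_mean(j, i)   # later points see earlier fixes
    return q
```

Replacing the largest |δq| first means that a moderately bad vector's local average is computed after its grossly bad neighbour has already been corrected, which is precisely the rationale the text gives for the sort.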
2.3. Subpixel accuracy

Because of the nature of equations (14) and (16), it becomes necessary to use slightly different methods to determine the accuracy of the simulated and real flows. Especially for real flows, "accuracy" is not necessarily correct language, since it is more difficult to establish a standard for comparison. Measures of statistical uncertainty (Willert & Gharib (1991)) were therefore developed as an adjunct to those of "accuracy." It is implicit that the purpose of the following techniques was to determine the limits of accuracy of our digital-based PIV implementation through testing on simulations. The determination of real flow parameters in §4.1.1 was made in the context of these limits.

2.3.1. Probability functions

In either simulated or real flows, the initial measure of accuracy is derived from the absolute differences in velocity components from equations (14) and (16). These are converted to differences in displacement in units of hundredths of pixels (or pixel hundredths):

$$\delta X = 100 \times |\delta u|\,\Delta t \qquad\qquad \delta Y = 100 \times |\delta v|\,\Delta t \tag{21}$$

These displacement differences are then reconstructed as probability density functions (or histograms) p_δX(ξ) and p_δY(ξ), in terms of the displacement variable ξ:

$$p_{\delta X}(\xi) = \dfrac{P\{\xi \le \delta X \le \xi + \Delta\xi\}}{\Delta\xi} \qquad\qquad p_{\delta Y}(\xi) = \dfrac{P\{\xi \le \delta Y \le \xi + \Delta\xi\}}{\Delta\xi} \tag{22}$$

In the usual sense, these density functions represent the probability that a displacement difference falls between the displacement values ξ and ξ + Δξ (Rosenfeld & Kak (1982)). These functions are calculated over a statistical sample size equal to all velocity grid points on an image (n²). Because p_δX(ξ) and p_δY(ξ) are typically very similar functions, either or both will sometimes be denoted p(ξ) for conciseness, especially on plot axes of figures discussed in Part II (the u- and v-component probability density functions are not identified separately on these figures because of this parallelism to one another).
In accordance with the definitions of δX and δY in equation (21), Δξ is taken to be 1 hundredth of a pixel (0.01 pixels). The range of δX and δY is generally set to between 1 and 50 pixel hundredths (0.01 and 0.50 pixels). Any values greater than 50 pixel hundredths, meaning that the measurement of horizontal/vertical distance was more than 0.50 pixels offset from the "proper" value, were simply set to 50 pixel hundredths. This has the effect of manifesting a "kink" in the "tail" of the histogram (the high end). The size of this kink will therefore indicate the probability that a measured displacement difference will have a value greater than or equal to the maximum of 50 pixel hundredths (0.5 pixels). As the initial location of the displacement correlation peak is found to the nearest 1.00 pixels (100 pixel hundredths), it seems reasonable to set a uniform range for the density function to have a maximum of half this amount (cf. §1.4 regarding determination of displacement). There appears to be little need to extend the histogram beyond this point, since any greater value represents an equivalently conspicuous deviation. Discrete integration of these density functions produces the corresponding probability distribution functions,

$$P_{\delta X}(\xi) = \int_0^{\xi} p_{\delta X}(\xi')\,d\xi' = P\{\delta X \le \xi\} \qquad\qquad P_{\delta Y}(\xi) = \int_0^{\xi} p_{\delta Y}(\xi')\,d\xi' = P\{\delta Y \le \xi\} \tag{23}$$

The right hand sides of equations (23) are derivable from considerations of calculus and probability theory (Rosenfeld & Kak (1982)). In this sense, P_δX(ξ) and P_δY(ξ) represent the probability that a measurement yielded a displacement difference less than or equal to ξ. These distribution functions have proven to be more useful in evaluating the accuracy of a velocity field than the density functions. In an analogous sense to the probability density functions, P_δX(ξ) and P_δY(ξ) are sometimes represented collectively as P(ξ), for the same reasons (again, it has not appeared necessary to distinguish between the closely similar u- and v-component probability distribution functions).
To provide a more concise statement of accuracy, the distribution functions were evaluated at a specific value,

$$P_{\delta X}(\xi) = 0.9 \quad \text{AND} \quad P_{\delta Y}(\xi) = 0.9 \tag{24}$$

These yielded values of ξ which signify that 90% of the vectors of the displacement field were measured to within ξ or less. Note that a displacement field can have 90% of the horizontal displacements (δX) accurate to within one value of ξ, and 90% of the vertical displacements (δY) accurate to a slightly different value of ξ. The conservative, or higher, estimate of ξ was chosen, and designated ξ_90%.* Thus, a raw velocity field has a single value of ξ_90% associated with it. The effect of filtering with each of the LSD and combination filters (LSD followed by straight low pass filtering) can be seen by comparing the values of ξ_90% before and after filtering. This became a principal criterion for the establishment of the accuracy and limitations of PIV in respect to the various simulations in §3. It is less definitive in relation to real flows, again because of the possible lack of analytical values for the velocity field against which to establish accuracy.

*In an analogous sense, the values for each component could be denoted X_90% and Y_90%.

2.3.2. Statistical uncertainty

Willert & Gharib (1991) report results concerning the variation of PIV accuracy with particle density and the magnitude of displacement (together with some information regarding sample window sizing). These results were obtained through imaging a pattern of black dots (as particles) on a white background for two cases: several images with no relative displacement; and images where particle image pairs were shifted mechanically over a range of possible displacements. After running the images through their PIV system, they apparently determined the statistical uncertainty of the velocity field by measuring the RMS fluctuation thereof.
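The extraction of ξ_90% from equations (23)-(24) above can be sketched as follows (illustrative only; the clipping at 50 pixel hundredths follows §2.3.1, and the function name is an assumption):

```python
import numpy as np

def xi_90(dx, dy):
    """Conservative xi_90%: the larger of the two smallest integer
    bounds xi (in pixel hundredths, clipped to 50) satisfying
    P_dX(xi) >= 0.9 and P_dY(xi) >= 0.9 (equations (23)-(24))."""
    def xi_of(d):
        d = np.minimum(np.asarray(d, dtype=float), 50.0)  # clip the tail
        for xi in range(1, 51):
            if np.mean(d <= xi) >= 0.9:   # discrete distribution function
                return xi
        return 50
    return max(xi_of(dx), xi_of(dy))
```

Taking the larger of the two component values corresponds to the conservative (higher) estimate of ξ chosen in the text.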
"Fluctuation" appears to refer to the variation of results of velocity measurements given multiple image pairs with a common characteristic (i.e., several pairs that were mechanically moved the same distance with respect to each other). Both types of experiments were carried out with variation of the particle seeding density, ρ_p. The conclusion reached was that the lowest possible uncertainty was approximately 0.01 pixels, according to their Figure 6b. Upon inspection of that figure, it would seem that this uncertainty corresponds to ρ_p = 74 and a mechanical displacement on the order of 0.1 pixels. Thus, a fairly high density coupled with a rather small displacement yielded an uncertainty that is 10% of the distance moved. For a more moderate density of ρ_p = 24 and the same distance, the uncertainty becomes about 0.02 pixels, or 20% of the distance moved. For a larger mechanical displacement of about 1 pixel with this density, the uncertainty increases to about 0.04 pixels, which is now 4% of the distance moved. It would seem desirable to remain cognizant of the comparative magnitudes of uncertainty versus displacement.

Our own measure of statistical measurement uncertainty is based upon the information obtained from application of the LSD method. Consider equation (19), noting the implicit dependence of q_LSD upon (n_x, n_y), the particular grid point or vector being examined. The measurement uncertainty of the entire velocity field can be estimated by averaging the LSD field over all values of (n_x, n_y) as follows:

    ⟨q_LSD⟩ = (1 / (n_x n_y)) Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} q_LSD(i, j)        (25)

This represents an average, over the whole field, of the differences between each vector and its surrounding eight neighbors. However, wide variations in the function q_LSD(n_x, n_y) will not be reflected in ⟨q_LSD⟩.
An indicator of the magnitude of these variations over the whole field is the standard deviation from this average, (σ_LSD)_q:

    (σ_LSD)_q = [ (1 / (n_x n_y)) Σ_{i=1}^{n_x} Σ_{j=1}^{n_y} ( q_LSD(i, j) − ⟨q_LSD⟩ )² ]^(1/2)        (26)

Equations (25) and (26) together yield an independent indication of the degree of fluctuation of the velocity field. Plots of these uncertainties are included in Part II in addition to those of the aforementioned probability functions.

Further, (σ_LSD)_q presents interesting possibilities for modification of the LSD filter. A secondary standard deviation filter can be devised by comparing the individual values of q_LSD(n_x, n_y) against (σ_LSD)_q, in a vein similar to that of equation (20). This new criterion will generate another list of grid points which have failed it, the idea being that these erroneous vectors would have excessively inflated LSDs. These vectors would then be replaced because of their failure of this secondary LSD criterion, even if they had already passed the primary LSD criterion. A preliminary application of this modified LSD filter will be briefly discussed in Part III, and is currently under investigation.

2.4. Circulation and Vorticity

During the development of our implementation of PIV, emphasis was placed upon the determination of velocity and velocity error, with the latter establishing some guidelines on the capabilities and limitations of the technique (cf. §3). Still, the importance of velocity-derived quantities should not be underestimated. The ability of PIV to provide velocity data over numerous points simultaneously is certainly advantageous: direct numerical integration and/or differentiation of the velocity field is sufficient to produce desired quantities such as vorticity and circulation. The standard Cartesian formulations used in our technique are outlined in §2.4.1.
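The field statistics of equations (25) and (26), and the proposed secondary filter, can be sketched as follows. The exact threshold form of the secondary filter is an assumption; the text only specifies comparing individual q_LSD values against (σ_LSD)_q.

```python
import numpy as np

def lsd_statistics(q_lsd):
    """Field-averaged LSD and its standard deviation, eqs. (25)-(26).

    q_lsd : 2-D array q_LSD(n_x, n_y), the local squared-difference
    measure at every grid point of the velocity field.
    """
    mean_q = q_lsd.mean()                                # <q_LSD>, eq. (25)
    sigma_q = np.sqrt(((q_lsd - mean_q) ** 2).mean())    # (sigma_LSD)_q, eq. (26)
    return mean_q, sigma_q

def secondary_lsd_filter(q_lsd, n_sigma=1.0):
    """Sketch of the proposed secondary filter: flag grid points whose
    q_LSD exceeds the field mean by more than n_sigma standard
    deviations.  These would be candidates for vector replacement even
    if they passed the primary LSD criterion."""
    mean_q, sigma_q = lsd_statistics(q_lsd)
    return np.argwhere(q_lsd > mean_q + n_sigma * sigma_q)
```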
Since the simulations of Part II are based on a velocity field (solid body rotation) that is more easily described in polar coordinates, an alternative set of formulas was developed for that case in §2.4.2 to provide an estimation of the accuracy of PIV.

2.4.1. Cartesian formulation

Since the obtained velocity data are Cartesian by definition, this section represents the default formulation. The circulation was obtained by calculating the line integral of velocity around a four-point square contour (nodes 1 through 4),

    Γ(x, y) = ∮ u · dl        (27)

Figure 6. Γ(x, y) integration contour.

The discrete integration is performed by assuming linear behavior of the velocity over a small interval (analogous to the trapezoidal integration approximation, in two dimensions). The approximation is simply

    q₁ = (u₁ + u₂)/2 ,    q₂ = (v₂ + v₃)/2 ,
                                                        (28)
    q₃ = (u₃ + u₄)/2 ,    q₄ = (v₄ + v₁)/2

Since the velocity grid is square, all distances between nodes are equal (the possibility that future implementations may allow for rectangular node spacing has been taken into account). The expression for the circulation Γ(x, y) becomes

    Γ(x, y) = Σ_{i=1}^{4} q_i (Δl)_i ,  where (Δl)_i = +Δl for i = 1, 2 and −Δl for i = 3, 4        (29)

The area-averaged vorticity is then available directly from the previous equation through division by the appropriate area (i.e., the area bounded by the contour of Figure 6):

    ω(x, y) = Γ(x, y) / (Δl)²        (30)

This vorticity definition is sometimes referred to as the microcirculation. A differential version of vorticity will also be instituted as an option. The formula would be a first order approximation based on finite differences involving the two nodes on either side of the center node, in each dimension. A limitation on this variant is that the approximation to the first order derivatives may be invalid if the node spacing is too large. Naturally, this is more likely to be true for a non-overlapping sample window grid. By contrast, Willert & Gharib (1991) use a second order
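The four-point contour integration of equations (28)-(30) can be sketched as below. The node numbering (1 lower-left, proceeding counterclockwise) and the function interface are assumptions made for illustration; only the formulas themselves come from the text.

```python
import numpy as np

def microcirculation(u, v, i, j, dl=1.0):
    """Circulation around the four-point square contour of Figure 6,
    eqs. (28)-(29), and the area-averaged vorticity of eq. (30).

    u, v   : 2-D velocity component arrays on the (square) PIV grid.
    (i, j) : column/row indices of node 1, assumed to be the
             lower-left corner of the contour.
    """
    # Corner velocities (assumed numbering: 1 LL, 2 LR, 3 UR, 4 UL)
    u1, u2 = u[j, i],         u[j, i + 1]
    u3, u4 = u[j + 1, i + 1], u[j + 1, i]
    v1, v2 = v[j, i],         v[j, i + 1]
    v3, v4 = v[j + 1, i + 1], v[j + 1, i]

    q1 = 0.5 * (u1 + u2)    # bottom edge, +dl
    q2 = 0.5 * (v2 + v3)    # right edge,  +dl
    q3 = 0.5 * (u3 + u4)    # top edge,    -dl
    q4 = 0.5 * (v4 + v1)    # left edge,   -dl

    gamma = dl * (q1 + q2 - q3 - q4)   # eq. (29)
    omega = gamma / dl**2              # eq. (30), the microcirculation
    return gamma, omega
```

For solid body rotation (u = −Ωy, v = Ωx) this recovers the exact vorticity 2Ω, since the trapezoidal edge averages are exact for a linear velocity field.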
approximation with their 75% overlapping grid. Nevertheless, even an overlapping grid should be designed with care to ensure that the node spacing, and therefore the length scale of the derivatives, is appropriate for the scales of motion of the flow being investigated.

2.4.2. Radial formulation

Circulation and vorticity

The calculation of circulation and vorticity was used to confirm the accuracy of the PIV technique when tested upon a simulated solid body rotation (SBR) flow field. As stated previously, a polar formulation was used for better comparison with the theoretical circulation for this flow, which can be derived from the SBR velocity field,

    u_r = 0 ,  u_θ = Ωr    or, in Cartesian components,    u = −Ωy ,  v = Ωx        (31)

where Ω is the angular rotation rate. Substituting the above into equation (27) and integrating around an appropriate contour yields

    Γ(r) = πΩr²/2        (32)

for a quarter-circle of radius r (for simplicity, the actual SBR flow field used was a solid "quarter-body" rotation flow). The new circulation function Γ(r) was initially calculated by two different methods: 1) surface integration of vorticity over a quarter-circular area; 2) circular line integration of velocity.* A comparison showed the latter algorithm to be more accurate in our case. It therefore became the method of choice, and is the subject of discussion for the rest of this section. As just expressed, the simulation encompasses a single quadrant of a hypothetical circular body. The integration therefore proceeds around successively increasing quarter-circular contours, each of which can be separated into three parts,

    Γ(r) = Γ₁ ,        (33)

* Strictly speaking, the radial circulation function is actually calculated using (n_x, n_y) grid coordinates, then converted to pixel coordinates (cf. Appendix §A.2 for more on coordinate systems). Consequently, Γ(n_r) would actually be more appropriate notation, but Γ(r) is used for brevity. For example, Figure 7 uses grid coordinates.
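The quarter-circle result of equation (32) can be verified numerically by the line-integration method the text prefers. This sketch is illustrative, not the thesis's implementation; the function name and discretization are assumptions.

```python
import numpy as np

def sbr_quarter_circulation(omega_rot, r, n=2001):
    """Line integral of the SBR velocity field (eq. (31)) around a
    quarter-circular arc of radius r.  The two straight radial legs of
    the contour contribute nothing, since u_r = 0 for solid body
    rotation, so only the arc need be integrated.  Should recover
    Gamma(r) = pi * Omega * r**2 / 2 of eq. (32)."""
    theta = np.linspace(0.0, np.pi / 2.0, n)
    # Velocity along the arc, eq. (31): u = -Omega*y, v = +Omega*x
    u = -omega_rot * r * np.sin(theta)
    v = omega_rot * r * np.cos(theta)
    # Tangent vector d(x, y)/d(theta) along the arc
    dxdt = -r * np.sin(theta)
    dydt = r * np.cos(theta)
    f = u * dxdt + v * dydt            # integrand of the line integral
    # Trapezoidal rule over theta
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))
```

For SBR the integrand is constant (Ωr²), so the trapezoidal rule reproduces πΩr²/2 essentially exactly; for a measured PIV field, the discrete node velocities would replace the analytical ones.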