This is to certify that the thesis entitled "The Quantification of Motion in Three-Dimensional Space Using Photogrammetric Techniques" presented by Glenn Carleton Beavis has been accepted towards fulfillment of the requirements for the M.S. degree in Mechanics.

Major professor
Date: 11/14/86

THE QUANTIFICATION OF MOTION IN THREE-DIMENSIONAL SPACE USING PHOTOGRAMMETRIC TECHNIQUES

By

Glenn Carleton Beavis

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

MASTER OF SCIENCE

Department of Metallurgy, Mechanics, and Material Science

1986

ABSTRACT

THE QUANTIFICATION OF MOTION IN THREE-DIMENSIONAL SPACE USING PHOTOGRAMMETRIC TECHNIQUES

By

Glenn Carleton Beavis

A research tool has been developed at Michigan State University for the quantification of motion in three-dimensional space. The method involves filming a subject with two or more high-speed motion picture cameras, each recovering a two-dimensional position record stored in the film emulsion. The positional information is then analytically combined to generate four "intersection equations," which in turn are solved to estimate the three-dimensional position of the subject. Use of a calibration structure allows experimental determination of both camera and comparator orientation. This protocol substantially reduces laboratory set-up time. The analytical procedures were developed by Dr. James S. Walton in his 1981 doctoral dissertation entitled: CLOSE-RANGE CINE-PHOTOGRAMMETRY: A GENERALIZED TECHNIQUE FOR QUANTIFYING GROSS HUMAN MOTION [20].
ACKNOWLEDGMENTS

The author wishes to express his sincerest gratitude to the following people. Without their combined efforts and encouragement, it is likely that this endeavor would not have been a success.

To Dr. Robert Soutas-Little, for instilling a "You Can" attitude when I was sure I could not, and for his patience during the lengthy time in which this document was completed.

To Dr. Eugene Brown, for his enormous time commitment in terms of both help and guidance during every filming sequence.

To Carol Gremmel, for her speedy resolution of that seemingly impossible-to-find software glitch, and for her warm friendship.

To Maurine (Moe) Clemens, for her help in developing programs "G02", "SWITCHED", and "SORTS".

To my good friend and colleague Mary Verstraete, for allowing me to bend her ear more often than either one of us probably cares to remember, and for her extensive support in providing me with test results, photographs, etc. necessary for completion of this project.

To Wolverine Worldwide Corporation for their generous funding of this project.

And finally, to my loving wife Louise, whose gentle encouragement and full support kept the motivation level high during those last difficult months.

TABLE OF CONTENTS

ACKNOWLEDGMENTS . . . ii
LIST OF FIGURES . . . iv
LIST OF TABLES . . . v

Section
I. INTRODUCTION . . . 1
II. SURVEY OF LITERATURE . . . 3
III. THEORETICAL DEVELOPMENT . . . 11
IV. THE CALIBRATION STRUCTURE . . . 27
V. PROTOCOL . . . 36
   a) Test Set-Up
   b) Joint Coordinate Analysis
   c) Processing
VI. VALIDATION . . . 57
VII. DISCUSSION . . . 69
VIII. CONCLUSIONS . . . 83
BIBLIOGRAPHY . . . 85

LIST OF FIGURES
1. Elementary Optical Geometry (J.S. Walton) . . . 13
2. The Combined Optical Geometry of an Ideal Camera and Motion Analyzer (J.S. Walton) . . . 15
3. Simplified Geometric Representation of an Ideal Camera and Motion Analyzer . . . 16
4. An Idealized Central Projection (J.S. Walton) . . . 17
5. The Generalized Three-Dimensional Problem (J.S. Walton) . . . 19
6. Geometric Representation of the "Intersection Equations" Using Two Cameras . . . 22
7. The Calibration Structure . . . 29
8. Typical Target Placement for Program "J.C.A." . . . 41
9. Initial Orientation of the Coordinate Axes . . . 43
10. Aligned Body Coordinate System for the Foot and Shank . . . 46
11. Dorsi-Plantar Flexion . . . 47
12. Inversion-Eversion & Medial-Lateral Rotation . . . 49
13. Control Points Used vs. Radius of Error . . . 60
14. Graphical Representation of Treadmill Data . . . 73
15. Left Side View of Treadmill Data . . . 74
16. Top & Front View of Treadmill Data . . . 75
17. Dorsi-Plantar Flexion (Graphic) . . . 78
18. Inversion-Eversion (Graphic) . . . 79
19. Medial-Lateral Rotation (Graphic) . . . 80

LIST OF TABLES

1. Control Point Coordinates Relative to the Inertial Reference Frame of the Systems Anthropometry Laboratory . . . 31
2. Vector Coordinates Relative to Control Point #6 . . . 32
3. Calibration Structure Vector Coordinates . . . 35
4. COPY3D.01 Output Data (Forward) . . . 62
5. COPY3D.01 Output Data (Aft) . . . 63
6. COPY3D.01 Output Data (High) . . . 64
7. Measured Magnitude of Control Points Relative to Their Local Origins . . . 65
8. Computed vs. Known Vector Magnitudes . . . 66
9. COPY3D.01 Output Data (Treadmill) . . . 72
10. Typical "JCA" Output Data . . . 77

I.
INTRODUCTION

The objective of the following work was to integrate some known, but recently developed, technology and mathematics to establish a facility and research protocol for three-dimensional motion analysis. Particular emphasis is placed on gross human motion. Until recently, most investigators using photogrammetric techniques have experienced difficulty in the proper treatment of perspective errors. In addition, to varying degrees, pre-defined camera orientation was generally required. Both items were of concern to this investigator, primarily due to the three-dimensional nature of the motion to be studied and the lack of a permanent facility where cameras could remain undisturbed once positioned. A suitable technique, developed by Dr. James Walton [20], was identified which adequately treats perspective errors and allows for arbitrary camera placement. Integration of this method into a working facility at Michigan State University, however, required a number of adaptations. A calibration structure was needed to analytically approximate camera orientation. Due to the need for complete portability, an orthogonal coordinate reference frame had to be an integral part of the movable structure. The structure had to be sized so that it encompassed the motion to be studied without becoming overly cumbersome or losing its structural integrity. Computer programs, provided by Walton, were originally designed to read digitized data points from computer punch cards. In addition to precise formatting requirements, the programs required all data points to be assembled in ascending temporal order. As will be pointed out in the following chapters, this ordering could not be easily obtained. Several additional computer programs were developed to interactively accumulate data during the film digitization process, and then properly condition the data for entry into Walton's programs.
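The data-conditioning step described above can be sketched as follows. The record layout (frame number, target name, digitizer U, digitizer V) and the target-major ordering are hypothetical stand-ins for the actual punch-card format expected by Walton's programs; only the requirement of ascending temporal order comes from the text.

```python
# Sketch of the data-conditioning step. Digitized points arrive in
# whatever order they were picked off each frame; the downstream
# programs require them in ascending temporal order. The record
# layout (frame, target, U, V) is a hypothetical stand-in for the
# actual format handled by programs such as "SWITCHED" and "SORTS".

def condition(records):
    """Group records by target name, then sort each group by frame number."""
    return sorted(records, key=lambda rec: (rec[1], rec[0]))

raw = [(2, 'heel', 10.1, 4.2), (1, 'toe', 8.0, 3.1),
       (1, 'heel', 9.9, 4.0), (2, 'toe', 8.3, 3.3)]
ordered = condition(raw)
```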
Lighting, film speed, camera positioning for continuous target identification, target material selection, and a host of other filming-related questions had to be answered through trial and error. Adequate validation techniques were needed, and all the processing bugs that were bound to appear had to be identified and resolved. In the following chapters, Walton's method, as well as the required adaptations, will be presented. An application of the three-dimensional coordinate output data will also be presented.

II. SURVEY OF LITERATURE

The first known study in which optical methods were used to record gross body motion occurred in 1872, when Eadweard Muybridge [12] was commissioned by Leland Stanford to study the motion of horses. Specifically, Stanford wanted to know if all four feet would leave the ground during any part of a horse's stride during a full-speed trot. Muybridge's solution was to place a number of cameras equally spaced along a length of race track, with each shutter trigger attached to a string stretched across the track. As the horse progressed down the track, a series of photographic records was made as each string was broken. This work was the beginning of photogrammetry, the science or art of obtaining reliable measurements by means of photography [18]. In the following years, demand for more quantitative positional information grew, as did the desire to match this information with the time at which it had occurred. Reducing the time between photographic records by spacing the cameras closer together improved resolution, but had obvious practical limits. Events which occurred over extended periods of time would require a prohibitive amount of equipment. It wasn't until 1923, when the first motion picture cameras were introduced, that photogrammetric analysis became a viable research tool [7]. The use of photogrammetric methods to quantify athletic performance appears to have begun in 1930.
At that time, Fenn & Morrison [9] turned their cameras to the motion of sprinters. The sprinters ran in a plane perpendicular to the optical axis of the camera, and these investigators were able to quantify motion based on what is now referred to as the "multiplier technique". During the filming sequence, a bar of known length was placed in the plane of motion. Knowledge of this length, combined with the measured length of the corresponding image, provided a single "multiplier" coefficient to relate image displacements to object displacements. In 1939, Cureton [8] published formal procedures for use of the multiplier technique in planar analysis. He pointed out the inherent shortcomings of the method due to perspective error when motion occurred outside of the intended plane of motion (i.e., the plane which contained the calibration bar). Errors associated with lens aberration and film deformation were also addressed. The problem of not being able to accurately quantify three-dimensional motion perplexed investigators for nearly thirty years. In 1967, Noss [13] proposed a correction for perspective errors when attempting to determine the true angle between any three points contained in the object field. Using cameras located precisely along three orthogonal axes such that their optical axes intersect at a common origin, Noss claimed that the true value of an angle could be determined by averaging the projected angles as seen in each camera image plane. A review of vector algebra will show that this is incorrect, and Putnam [16], Spray [17], and Walton [20] noted the error. Noss' work, however, did seem to stimulate thought among athletic researchers about the potential use of multiple cameras to provide additional information that could somehow be combined to yield accurate three-dimensional position data. In 1968, Plagenhoff [15] adopted the use of a second camera located such that its optical axis was aligned in the plane of anticipated motion.
Images produced by this camera were used to generate a correction factor to the multiplier technique when motions were outside of the intended plane. This method is based on the assumption of orthographic (parallel) projections, which only holds true for extremely large subject-to-camera distances, and is consequently imprecise. In addition, the assumption that the optical axis is perpendicular to the film plane is not necessarily true. The method does, however, provide some degree of correction for perspective errors incurred while filming. In 1973, Miller [11] developed an accurate analytical solution to the perspective error problem. In her approach, three cameras were aligned with their optical axes lying in a common plane such that they intersected at a common origin and were oriented 120 degrees apart. Her method used the ratio between image size and lens-to-film-plane distance, and assumed that this ratio was equal to the ratio of object size and lens-to-object distance. For each camera, images captured on the film plane, combined with knowledge of camera orientation and lens-to-film-plane distance, were used to generate two equations. Use of three cameras allowed her to generate six such equations with a total of nine unknowns: the three-dimensional object coordinates with respect to a coordinate system affixed to each camera. Selecting camera one as the primary reference frame, the other two systems were rotated through a transformation matrix to yield object coordinates as viewed by cameras two and three in terms of camera one. Then, using any two cameras, there were four equations from which to solve for the three unknown spatial coordinates. The use of a third camera was to ensure that the target points appeared in at least two cameras at any time. While the problem of perspective error had been eliminated, Miller's method was inhibited by several factors.
Precise orientation of the cameras was required before filming, and lens-to-film-plane distance had to be experimentally determined using a "range pole" located at the origin. This generally required hours of pre-test set-up time. Furthermore, no provisions existed for correction of image deformations introduced by the cameras, film, or data reduction device. Miller also experienced difficulty choosing the appropriate set of equations to solve for the spatial coordinates, since not all combinations were linearly independent. Bergemann [4] solved the perspective error problem by determining the paths of two vectors, V1 and V2. His technique allowed for much less rigid camera placement; however, both optical axes were required to pass through a common origin, and camera positions relative to the origin had to be pre-determined. As with Miller's technique, knowledge of lens-to-film-plane distance was also required. Film plane coordinates of an object, combined with the nodal coordinates of the camera lens, allowed for the determination of a vector which extends into the object field. The three-dimensional object coordinates were defined as the intersection of vectors V1 and V2, one vector being generated by each camera. In the likely event that the two vectors did not intersect, the object coordinates were approximated by determining the midpoint of the shortest line between them. This was commonly performed using a method of least squares. Until this point in the evolution of close-range photogrammetry, a great deal of pre-test time and effort was required to ensure proper camera alignment. This was done on site with the aid of such tools as plumb bobs, spirit levels, surveyor's transits, and other alignment tools. VanGheluwe [19] was the first investigator to significantly deviate from this lengthy alignment process. Through the use of fiducial markers with known object space coordinates, he was
able to reduce the alignment restrictions on the cameras such that only their optical axes were required to intersect. Object space coordinates of each camera lens optical center could be back-calculated by comparing image coordinates of the fiducial markers with their known object coordinates. As in previous methods, knowledge of the distance from film plane to lens optical center was required. Once the orientation of each camera was established with respect to the reference system in which the fiducial markers were based, the remainder of the process proceeded in a manner similar to that of Miller. Using the laws of orthogonal coordinate transformation, a set of related simultaneous equations was developed and solved to generate an approximation for the object coordinates of any spatial point of interest. The term "approximation" is used here because the system of equations is overdetermined and the solution was obtained using a linear least squares technique. Abdel-Aziz and Karara [1] were the first to remove all restrictions regarding placement, relative orientation, and internal camera parameters. Their technique, termed the "Direct Linear Transformation Method (D.L.T.)", requires advance filming of at least six non-coplanar control points to generate a set of twelve equations for each camera. Each set of D.L.T. equations contains terms representing both the image and object coordinates of each control point, as well as eleven "calibration coefficients". These coefficients contain implicit information regarding camera orientation and lens-to-film-plane distance, as well as linear components of lens and film distortions. The known object coordinates and measured image coordinates for each control point provide the user with an overdetermined system of twelve equations in eleven unknowns. A linear least squares technique is used to approximate the values of the calibration coefficients.
Twelve additional equations are generated for these control points as viewed by a second or subsequent camera. Again, using the known object and measured image coordinates of each control point, calibration coefficients can be approximated for these cameras. Spatial coordinates of unknown targets are then determined using the four D.L.T. equations (two from each camera) and the measured image coordinates established for each view. The four equations in three unknown spatial coordinates are again overdetermined, and a least squares technique is used to solve them. In addition to removing restrictions on camera placement, another important aspect of the Abdel-Aziz and Karara method is that the equations allow the user to transform from comparator, or digitizer, coordinates directly to object coordinates. Methods described previously required additional manipulation to transform first from comparator to image (film plane) coordinates, and then from image to object coordinates. Information regarding comparator or digitizer scaling factors and axis orientation is also contained in the eleven calibration coefficients that appear in each set of transformation equations. The Abdel-Aziz and Karara technique is remarkably similar to that derived by Walton [20] and reviewed in the Theoretical Development section of this document. A more complete understanding of Abdel-Aziz and Karara's method will evolve as Walton's method is presented.

III. THEORETICAL DEVELOPMENT

The following discussion of the theoretical development of three-dimensional motion analysis is imperative to an appreciation of the complexity of the project at hand. Several computer programs written by Dr. James S. Walton (formerly of General Motors Research Laboratories) are based on this theory and are currently being used in the motion analysis procedures.
It was not the objective of the writer to develop the techniques, but rather to understand and apply them to develop an analysis facility, as well as the necessary test protocol, for the study of gross human motion at Michigan State University. The remainder of this section, involving optical geometry and the three-dimensional problem, is based on the work of James S. Walton presented in his 1981 doctoral dissertation entitled: CLOSE-RANGE CINE-PHOTOGRAMMETRY: A GENERALIZED TECHNIQUE FOR QUANTIFYING GROSS HUMAN MOTION [20].

OPTICAL GEOMETRY

The basic concepts of the optical geometry for motion analysis should be understood before consideration is given to development of the fundamental spatial transformations. The elementary optical geometry of a non-metric camera is shown in Figure 1. (A non-metric camera is one which is not specifically designed for the photogrammetric process.) Note that for a compound lens, the incident (Nci) and emergent (Nce) nodes of the lens are not coincident. Mp is the principal point and sc is the principal distance associated with the images produced by the camera. (The principal point is that point on the film plane through which a vector perpendicular to the plane can also pass through the emergent node of the lens.) Ms is the point of symmetry and is located at the intersection of the optical axis with the film plane. Walton points out that while Ms and Mp are ideally coincident, this is not generally the case, due to improper alignment of glass components within the lens. Note that sc is not the focal length of the lens, but instead varies with focusing. Walton indicates that, provided the lens is free of optical distortion, the images on the film plane are true perspective projections of the object space. These images are stored on successive frames of film as a permanent record of position in object space with the progression of time.
In analyzing motion, these primary images are projected onto a plane surface to create a secondary image in the same geometric manner that the primary image was captured on film. This is done with some type of motion analyzer, which incorporates a film gate, a compound lens with incident and emergent nodes, and a projection surface for the secondary image. Scaled, two-dimensional digitizer coordinates are recovered from the secondary image.

[Figure 1. Elementary Optical Geometry (J.S. Walton)]

Walton suggests it is possible to combine the optics of the camera and motion analyzer by superimposing the film plane of the camera with the projection gate of the analyzer. This is shown in Figure 2.

[Figure 2. The Combined Optical Geometry of an Ideal Camera and Motion Analyzer]

Again, sc is the principal distance associated with the images produced by the camera. Walton states: "It can be shown that any secondary image formed in the projection plane of the motion analyzer can be superimposed on the incoming principal rays from the object-space which formed the corresponding primary image in the camera. This can occur only on a single plane located a distance S from the incident side of the camera objective (Nci)." Moving the secondary image plane of Figure 2 into the incoming principal rays, and centering Npe over Nci, a simplified geometric representation is formed. This is shown in Figure 3. In the absence of image deformations, it can be shown that:

S = M sc

where sc is the principal distance from Nce to the primary image, M is the magnification factor (the ratio of secondary to primary image length), and S is the magnified principal distance from Nci to the secondary image. This geometric representation can be further reduced to an idealized central projection (Figure 4), where the
secondary image is an idealized central projection of the object space onto the image plane.

[Figure 3. Simplified Geometric Representation of an Ideal Camera and Motion Analyzer]

[Figure 4. An Idealized Central Projection (J.S. Walton)]

N is referred to as the projection center and is coincident with the incident node of the camera lens, Nci. Walton has dropped the optical axes of both instruments from the simplified representation, indicating that they are unimportant to the theoretical development of the object-to-image transformation equations. The idealized central projection of the object-image relationship is widely used by photogrammetrists to develop mathematical models of photo-optical systems. It is used to provide a generalized coordinate transformation between object-space coordinates and digitized image-space coordinates.

THE THREE-DIMENSIONAL PROBLEM

Figure 5 is a representation of the generalized three-dimensional problem. A right-handed orthogonal triad of unit vectors, with its origin at point A, represents the three-dimensional coordinate system established in object-space at the time of filming. Point O is an arbitrary point in object-space with coordinates (x, y, z). The projection center N and the image plane representing the projected image from the motion analyzer form an idealized central projection. U and V are vectors which originate at point B and lie in the image plane. These may be thought of as the digitizer axes used in locating the coordinates of the two-dimensional image.

[Figure 5. The Generalized Three-Dimensional Problem (J.S. Walton)]
Although ideally these vectors are mutually orthogonal, this is not a requirement for the accurate construction of a mathematical transformation from object to image coordinates. Point I represents the positive photographic image of point O. Walton states that, assuming the image plane is an ideal central projection of the object space, then for any arbitrary point O there will be a corresponding point I such that points O, I, and the projection center N are collinear. Walton further states: "If point O has coordinates (x,y,z) with respect to the object-reference-frame, and if point I has digitizer-coordinates (U,V) with respect to the image reference frame, then it can be shown that the general form of the required object-to-image coordinate transformation is

    U = (Ax + By + Cz + D) / (Ex + Fy + Gz + 1)
                                                        (1)
    V = (Hx + Jy + Kz + L) / (Ex + Fy + Gz + 1)"

Coefficients A through L represent the geometric configuration of the camera and motion analyzer, while U and V represent the digitized coordinates of the image of point O. For every camera which envelops point O in its field of view, an independent set of transformation equations can be generated. If the coefficients of these equations can be experimentally determined, and the U-V digitizer coordinates of a point in object space have been measured using two independent views, then it is possible to solve for the three unknown (x,y,z) spatial coordinates using the four transformation equations (Figure 6). Walton states that precise estimates for the 11 coefficients can be determined by utilizing at least six non-coplanar "control points" having known spatial coordinates in object-space and corresponding measured digitizer-coordinates on the secondary image. Rearranging the 12 transformation equations associated with these control points, as shown in equation (2), allows for the determination of coefficients A through L.
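As an aside, equations (1) translate directly into code. The sketch below maps an object-space point to digitizer coordinates for one camera/analyzer pair; the coefficient values are arbitrary illustrative numbers, not a real calibration.

```python
# The object-to-image transformation, equations (1): given the 11
# coefficients A..L for one camera/analyzer pair, map an object-space
# point (x, y, z) to digitizer coordinates (U, V). The coefficients
# below are arbitrary illustrative values, not a real calibration.

def project(coeffs, x, y, z):
    A, B, C, D, E, F, G, H, J, K, L = coeffs
    denom = E * x + F * y + G * z + 1.0
    U = (A * x + B * y + C * z + D) / denom
    V = (H * x + J * y + K * z + L) / denom
    return U, V

coeffs = (1.0, 0.1, 0.0, 5.0,      # A, B, C, D
          0.01, 0.0, 0.02,         # E, F, G
          0.0, 1.2, 0.1, -3.0)     # H, J, K, L
U, V = project(coeffs, 10.0, 2.0, 4.0)
```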
[Figure 6. Geometric Representation of the "Intersection Equations" Using Two Cameras]

| x1  y1  z1  1  -U1x1  -U1y1  -U1z1   0   0   0  0 | | A |   | U1 |
| x2  y2  z2  1  -U2x2  -U2y2  -U2z2   0   0   0  0 | | B |   | U2 |
| x3  y3  z3  1  -U3x3  -U3y3  -U3z3   0   0   0  0 | | C |   | U3 |
| x4  y4  z4  1  -U4x4  -U4y4  -U4z4   0   0   0  0 | | D |   | U4 |
| x5  y5  z5  1  -U5x5  -U5y5  -U5z5   0   0   0  0 | | E |   | U5 |
| x6  y6  z6  1  -U6x6  -U6y6  -U6z6   0   0   0  0 | | F |   | U6 |
|  .   .   .  .    .      .      .     .   .   .  . | | G | = |  . |
|  0   0   0  0  -V1x1  -V1y1  -V1z1  x1  y1  z1  1 | | H |   | V1 |
|  0   0   0  0  -V2x2  -V2y2  -V2z2  x2  y2  z2  1 | | J |   | V2 |
|  0   0   0  0  -V3x3  -V3y3  -V3z3  x3  y3  z3  1 | | K |   | V3 |
|  0   0   0  0  -V4x4  -V4y4  -V4z4  x4  y4  z4  1 | | L |   | V4 |
|  0   0   0  0  -V5x5  -V5y5  -V5z5  x5  y5  z5  1 |         | V5 |
|  0   0   0  0  -V6x6  -V6y6  -V6z6  x6  y6  z6  1 |         | V6 |
|  .   .   .  .    .      .      .     .   .   .  . |         |  . |
                                                                (2)

Subscripts denote an association with a particular control point. Note that matrix equation (2) is overdetermined, but it can be solved using a linear least squares technique. The ellipses in equation (2) indicate that more than six control points can be used; Walton advises this to improve the estimated values of the coefficients.

A set of equations similar to equation (2) must be generated for every camera (and corresponding image plane) that is used to view the motion in object-space. Having solved for coefficients A through L, transformation equations (1) can be rearranged again to yield equation (3), where values for U and V are inserted for each unknown point in object-space:

| (A - EU)  (B - FU)  (C - GU) | | x |   | (U - D) |
| (H - EV)  (J - FV)  (K - GV) | | y | = | (V - L) |
                                | z |
                                                (3)

Equation (3) is underspecified and therefore cannot yield a unique solution for the spatial location (x,y,z) of an unknown point in object space. At least two image planes (cameras) are needed to yield precise estimates for the location of a point in three-dimensional space. Repeating equation (3) for every image space yields equations (4), where superscripts denote an association with a particular image space.
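Returning to equation (2), its least-squares solution can be sketched with NumPy. The synthetic coefficients and control-point locations below are illustrative only; they serve to verify that the row layout recovers the coefficients exactly from noise-free data.

```python
import numpy as np

# Solving matrix equation (2): each control point with known object
# coordinates (x, y, z) and measured digitizer coordinates (U, V)
# contributes two rows. The unknown vector is [A B C D E F G H J K L],
# recovered by linear least squares.

def calibrate(points, readings):
    rows, rhs = [], []
    for (x, y, z), (U, V) in zip(points, readings):
        rows.append([x, y, z, 1, -U*x, -U*y, -U*z, 0, 0, 0, 0])
        rhs.append(U)
        rows.append([0, 0, 0, 0, -V*x, -V*y, -V*z, x, y, z, 1])
        rhs.append(V)
    coeffs, *_ = np.linalg.lstsq(np.array(rows, float),
                                 np.array(rhs, float), rcond=None)
    return coeffs

# Synthetic check: project with known coefficients (equations (1)),
# then recover them from non-coplanar control points.
true = np.array([1, 0.1, -0.2, 5, 0.001, 0.002, 0.003, -0.3, 1.1, 0.2, -4.0])
pts = [(x, y, z) for x in (0, 10) for y in (0, 10) for z in (0, 10)]
pts += [(5.0, 5.0, 5.0), (2.0, 7.0, 3.0)]

def project(c, x, y, z):
    d = c[4]*x + c[5]*y + c[6]*z + 1.0
    return ((c[0]*x + c[1]*y + c[2]*z + c[3]) / d,
            (c[7]*x + c[8]*y + c[9]*z + c[10]) / d)

est = calibrate(pts, [project(true, *p) for p in pts])
```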
| (A1 - E1U1)  (B1 - F1U1)  (C1 - G1U1) | | x |   | (U1 - D1) |
| (H1 - E1V1)  (J1 - F1V1)  (K1 - G1V1) | | y | = | (V1 - L1) |
                                          | z |

| (A2 - E2U2)  (B2 - F2U2)  (C2 - G2U2) | | x |   | (U2 - D2) |
| (H2 - E2V2)  (J2 - F2V2)  (K2 - G2V2) | | y | = | (V2 - L2) |
                                          | z |
                                                          (4)

For any one instant in time, each moving object point has a unique location (x,y,z), and Walton indicates that equations (4) can therefore be combined, provided that the data from the different cameras is time-matched:

| (A1 - E1U1)  (B1 - F1U1)  (C1 - G1U1) |         | (U1 - D1) |
| (H1 - E1V1)  (J1 - F1V1)  (K1 - G1V1) | | x |   | (V1 - L1) |
| (A2 - E2U2)  (B2 - F2U2)  (C2 - G2U2) | | y | = | (U2 - D2) |
| (H2 - E2V2)  (J2 - F2V2)  (K2 - G2V2) | | z |   | (V2 - L2) |
|      .            .            .      |         |     .     |
                                                          (5)

Equation (5) consists of at least four equations (using two cameras) in the three unknown spatial coordinates of the object point. Again, this system is overspecified, and Walton uses a least squares technique to estimate the spatial coordinates. The estimate is improved with the introduction of more views of the object space; the ellipses indicate the ability to include these views in equation (5). Equation (5) is known as the "intersection" equation by photogrammetrists and refers to the intersection of two or more principal optical rays at the location of the object point, as was shown in Figure 6.

IV. THE CALIBRATION STRUCTURE

Walton [20] reported that precise estimates for coefficients A through L of the transformation equations (1) could be obtained by utilizing at least six non-coplanar "control-points" with known spatial and image coordinates. Geometrically, this process determines the focal point coordinates of the camera lens, through which all principal rays must pass, as well as the position and orientation of the corresponding film plane. When this is done for two or more cameras, the spatial coordinates of an unknown target can be determined by the intersection of the corresponding principal rays. Twelve "control-points" were generated for use in the calibration process by the construction of a calibration structure. The Center for the Study of Human Performance at Michigan State University was chosen as the site for all motion studies referred to in this discussion.
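Before turning to the structure itself, the intersection step that the control points ultimately serve, equation (5), can be sketched as follows. The camera coefficient values below are synthetic illustrations, not real calibrations.

```python
import numpy as np

# The "intersection" equation (5): with coefficients A..L known for
# each camera and time-matched digitizer readings (U, V), each camera
# contributes two rows; the overdetermined system is solved for
# (x, y, z) by least squares.

def intersect(cameras, readings):
    M, b = [], []
    for (A, B, C, D, E, F, G, H, J, K, L), (U, V) in zip(cameras, readings):
        M.append([A - E*U, B - F*U, C - G*U]); b.append(U - D)
        M.append([H - E*V, J - F*V, K - G*V]); b.append(V - L)
    xyz, *_ = np.linalg.lstsq(np.array(M, float), np.array(b, float),
                              rcond=None)
    return xyz

def project(c, x, y, z):
    """Equations (1): forward projection for one camera."""
    d = c[4]*x + c[5]*y + c[6]*z + 1.0
    return ((c[0]*x + c[1]*y + c[2]*z + c[3]) / d,
            (c[7]*x + c[8]*y + c[9]*z + c[10]) / d)

# Two synthetic cameras viewing the same target point:
cam1 = (1, 0, 0, 0, 0.001, 0, 0.002, 0, 1, 0, 0)
cam2 = (0, 0, 1, 0, 0.002, 0.001, 0, 0, 1, 0, 0)
target = (2.0, 3.0, 4.0)
xyz = intersect([cam1, cam2],
                [project(cam1, *target), project(cam2, *target)])
```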
Since this facility was not wholly dedicated to gait analysis, a removable calibration structure had to be designed, built, and calibrated. The constraint of a portable calibration structure presented several design problems:

1) The precise location of each control point had to be known relative to an inertial reference frame located in object space. Since any movement of the structure would change the relative orientation of the control points, the reference system had to become an integral part of the structure.

2) While it was desirable for the control points to fully encompass the rather broad object space where the anticipated motion would occur, it was imperative that the device maintain its structural integrity. Relative movement of the control points had to be minimized.

3) The structure had to be sufficiently light to provide for ease of transportation to and from the test site, and constructed to minimize the possibility of an obscured control point when viewed from different directions.

4) A true vertical reference could only be approximated when using a self-contained rigid calibration structure.

A calibration structure was designed and built which seemed to best satisfy these constraints. The completed structure, shown in Figure 7, is constructed of plexiglass, 3/8 inch aluminum round stock, piano wire, and 3/8 inch plastic beads. While the aluminum round stock served as the primary structural material, piano wire was stretched in four opposing directions from the end of each aluminum segment. This further improved the structural integrity while providing ideal locations for the placement of the 12 "control-points" along the perimeter of the structure.

[Figure 7. The Calibration Structure]

Once construction was completed, the control points were calibrated relative to an inertial reference frame established within the structure. This was achieved with the help of Dr. H.M. Reynolds of the M.S.U. Department of Biomechanics.
The technique used, referred to as Roentgen Stereophotogrammetry, is similar theoretically to the photogrammetric technique used in the present study. It involves locating an object relative to a fixed reference frame established in the laboratory using two x-ray photographs. Reynolds was able to provide the three-dimensional coordinates (accurate to within ± 0.1 mm) of the 12 control points with respect to the inertial reference frame of the Systems Anthropometry Laboratory. This raw data can be seen in Table 1.

TABLE 1. CONTROL POINT COORDINATES RELATIVE TO THE INERTIAL REFERENCE FRAME OF THE SYSTEMS ANTHROPOMETRY LAB.

CONTROL POINT    X (cm)    Y (cm)    Z (cm)
      1           34.46      ...      86.56
      2           55.08      ...      86.49
      3           20.73      ...      86.95
      4           40.78      ...      87.07
      5           51.52      ...      69.47
      6           17.16      ...      69.71
      7           57.83      ...      69.81
      8           22.91      ...      70.17
      9           35.16      ...      23.09
     10           51.85      ...      23.09
     11           23.74      ...      23.22
     12           40.54      ...      23.52

Choosing Object Point #6 as a local origin, new vectors were calculated for each object point relative to this origin in the same inertial reference frame. The results are given in Table 2.

TABLE 2. VECTOR COORDINATES RELATIVE TO CONTROL POINT #6

Vector     X (cm)        Y (cm)        Z (cm)
r1/6       17.30 î        3.60 ĵ       16.85 k̂
r2/6       37.92 î       10.42 ĵ       16.78 k̂
r3/6        3.57 î       17.11 ĵ       17.24 k̂
r4/6       23.62 î       30.94 ĵ       17.36 k̂
r5/6       34.36 î       -6.73 ĵ       -0.24 k̂
r6/6        0.00 î        0.00 ĵ        0.00 k̂
r7/6       40.67 î       28.01 ĵ        0.10 k̂
r8/6        5.75 î       34.51 ĵ        0.46 k̂
r9/6       18.00 î        0.37 ĵ       46.62 k̂
r10/6      34.69 î       11.81 ĵ       46.62 k̂
r11/6       6.58 î       17.23 ĵ       46.49 k̂
r12/6      23.38 î       28.64 ĵ       46.19 k̂

The x-axis of the calibration structure was defined as being coincident with the vector extending from control point #6 to #5, with magnitude:

|r5/6| = √((34.36)² + (6.73)² + (0.24)²) = 35.01.

Then the unit vector l̂ in the direction of the x-axis is given by:

l̂ = r5/6 / |r5/6| = 0.9814 î - 0.1922 ĵ - 0.0068 k̂.

The z-axis of the calibration structure was defined by the cross product of r5/6 into r8/6:

              |   î        ĵ        k̂    |
r5/6 × r8/6 = | 34.36    -6.73    -0.24  | = 5.19 î - 17.19 ĵ + 1224.46 k̂,
              |  5.75    34.51     0.46  |

with magnitude:

|r5/6 × r8/6| = √((5.19)² + (17.19)² + (1224.46)²) = 1224.59.

Then the unit vector n̂ in the direction of the z-axis is given by:

n̂ = (r5/6 × r8/6) / |r5/6 × r8/6| = 0.0042 î - 0.0140 ĵ + 0.9999 k̂.

The unit vector m̂ in the direction of the y-axis of the calibration structure is given by n̂ × l̂:

            |   î         ĵ         k̂     |
m̂ = n̂ × l̂ = | 0.0042   -0.0140    0.9999  | = 0.1923 î + 0.9813 ĵ + 0.0129 k̂.
            | 0.9814   -0.1922   -0.0068  |

The unit vectors l̂, m̂, and n̂ lie along the x, y, z axes of the calibration structure, respectively. They are expressed in terms of the laboratory inertial reference frame. The vectors which emanate from Control Point #6 and extend to the various control points can be described in terms of calibration structure coordinates X, Y, Z, defined in the l̂, m̂, and n̂ directions respectively. This is done by projecting these vectors (represented in terms of the inertial reference frame) onto the unit vectors of the calibration structure (also represented in terms of the inertial reference frame). The resulting vectors will be expressed in terms of the calibration structure's own unit vectors L̂, M̂, N̂, where:

L̂ = (1, 0, 0)
M̂ = (0, 1, 0)
N̂ = (0, 0, 1)

all of which are independent of the laboratory inertial reference frame. Then the general vector Rn/6 in calibration structure coordinates is given by:

Rn/6 = X L̂ + Y M̂ + Z N̂

where

X = rn/6 · l̂
Y = rn/6 · m̂
Z = rn/6 · n̂

Table 3 contains the resulting vector coordinates of the 12 control points, expressed in calibration structure coordinates and originating from Control Point #6. These coordinates are the values stored in a computer file, "CALCDS", for use during the calibration segment of a computer program named "COPY3D.01".
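The axis construction and projections above are easy to reproduce numerically. The following Python/NumPy sketch is an illustration added here, not part of the original protocol; it rebuilds l̂, m̂, n̂ from the two measured vectors (with the signs of r5/6 taken from the cross-product computation) and converts any laboratory-frame vector rn/6 to calibration structure coordinates.

```python
import numpy as np

# Measured vectors from control point #6 (cm), as used in the derivation
r5_6 = np.array([34.36, -6.73, -0.24])   # defines the structure x-axis
r8_6 = np.array([5.75, 34.51, 0.46])

l_hat = r5_6 / np.linalg.norm(r5_6)      # x-axis unit vector
n_hat = np.cross(r5_6, r8_6)
n_hat = n_hat / np.linalg.norm(n_hat)    # z-axis unit vector
m_hat = np.cross(n_hat, l_hat)           # y-axis completes the triad

def structure_coords(r):
    """Project a laboratory-frame vector r_n/6 onto the structure axes,
    giving the (X, Y, Z) entries tabulated in Table 3."""
    return np.array([r @ l_hat, r @ m_hat, r @ n_hat])
```

For example, projecting r5/6 itself returns (35.01, 0, 0), confirming that the structure x-axis runs from control point #6 to #5.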
Program "COPY3D.01" was written by Walton and is used in determining the three-dimensional loci of unknown targets in object-space. Use of this program will be discussed further in Chapter V.

TABLE 3. CALIBRATION STRUCTURE VECTOR COORDINATES (cm.)

VECTOR                                        MAGNITUDE
R1/6  = 17.56 L̂ + 00.01 M̂ + 16.97 N̂            24.42
R2/6  = 35.10 L̂ + 17.73 M̂ + 16.79 N̂            42.76
R3/6  = 00.10 L̂ + 17.70 M̂ + 17.01 N̂            24.55
R4/6  = 17.11 L̂ + 35.13 M̂ + 17.02 N̂            42.62
R5/6  = 35.02 L̂ + 00.00 M̂ + 00.00 N̂            35.02
R6/6  = 00.00 L̂ + 00.00 M̂ + 00.00 N̂            00.00
R7/6  = 34.53 L̂ + 35.31 M̂ + 00.12 N̂            49.39
R8/6  = -0.99 L̂ + 34.98 M̂ + 00.00 N̂            34.99
R9/6  = 17.91 L̂ + 03.22 M̂ + 46.54 N̂            49.97
R10/6 = 32.09 L̂ + 17.66 M̂ + 46.63 N̂            59.30
R11/6 = 03.46 L̂ + 17.57 M̂ + 46.70 N̂            50.01
R12/6 = 17.75 L̂ + 32.00 M̂ + 46.49 N̂            59.16

V. PROTOCOL

For this investigation, all filming sequences used for the study of human motion were carried out at the Michigan State University Center for the Study of Human Performance. While the photogrammetric procedure used is in no way restricted to use in this facility, it did provide access to several valuable tools which could be used in conjunction with filming to better understand human motion. These were a motor-driven treadmill (with variable speed and inclination) and a set of precision force-plates hardwired to an IBM 9000 series computer and printer. Dr. Eugene Brown was responsible for the cinematographic aspects of the project as well as the photographic equipment used, and provided guidance and assistance in utilizing the facility.

Set-up and procedures for filming remained consistent for the various types of motion analysis studied. To date, this analysis tool has been applied to the study of human gait while: 1) running on a motor-driven treadmill at zero grade and a constant velocity of 3.58 meters/sec, 2) running over precision force-plates, and 3) walking over precision force-plates by amputees wearing various prosthetic devices.
In all cases, no attempt was made to align the cameras other than to maintain an included angle of 60 to 90 degrees with the motion, and to insure that the field of view of each camera completely enveloped the motion. Cameras were 16 mm LOCAMs, outfitted with 12 mm to 120 mm zoom type lenses and 400 ASA Video-News film. Pezzack, et al. [14] and Walton [20] reported that adequate results using a film speed of 100 frames per second could be obtained for position, velocity, and acceleration studies of the human form. Sufficient artificial lighting was available for filming at this speed, so this was chosen as the standard for all studies in this investigation.

As stated earlier, digitized position data from two or more image planes cannot be combined to produce the corresponding "intersection" equations unless the object point remains stationary or the images are time-matched in some way. To this end, two electronic timing lights utilizing a quartz crystal oscillator and four columns of ten lights each were used. The lighted columns represented seconds, tenths, hundredths, and thousandths respectively. The timing lights were synchronized with each other so that one could be placed in the field of view of each camera and provide for sufficient temporal data recovery.

Before any motion analysis could begin, it was necessary to calibrate the object space where the anticipated motion would occur. This would provide values for the calibration coefficients contained in transformation equations (1). Placement and filming of the calibration structure in object-space provided for later determination of these coefficients as well as establishing a coordinate reference frame for the subsequent motion. The structure could then be removed, since a permanent record of the reference frame was contained in the film emulsion. Actual filming of the motion study could begin after successful completion of the calibration sequences.
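Returning to the timing lights for a moment: since the four columns display seconds, tenths, hundredths, and thousandths, the time read from any film frame reduces to a weighted sum of the four column digits. The sketch below is hypothetical in one respect: it assumes the position of the lit light within each ten-light column is the digit read off the film, which the text does not spell out.

```python
def frame_time(digits):
    """Convert the four digits read from the timing-light columns
    (seconds, tenths, hundredths, thousandths) into a time in seconds.
    Assumes, hypothetically, one lit light per column whose index
    (0-9) is the displayed digit."""
    weights = (1.0, 0.1, 0.01, 0.001)
    return sum(d * w for d, w in zip(digits, weights))

# A display reading 1, 2, 3, 4 down the four columns denotes 1.234 s
```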
Care was taken not to move or jar the cameras during this phase, since any change in orientation relative to the initial placement of the calibration structure would require that a new set of calibration coefficients be determined. As a precautionary measure, any time the cameras required re-loading of film, a new calibration sequence was filmed. Care was taken to properly document each event both on paper and on the film. This prevented unnecessary confusion at the time of processing.

JOINT COORDINATE ANALYSIS

An excellent application of three-dimensional position data is its use in the generation of secondary coordinate systems associated with various body segments. Further manipulation of this data allows the investigator to develop a "joint coordinate system" [10] which is then used to establish a geometric description of the relative rotational motions occurring between two theoretically rigid bodies. Development and execution of this concept is presented below, with the application in this case being to quantify foot/shoe orientation relative to the shank while running.

To date, most angular analysis of the human ankle during the heel strike-toe off phase of running has been concerned with a single rotational measurement which most closely resembles a measure of inversion-eversion [2,3,5,6]. To the athlete, the term "pronation" is perhaps more familiar. The rotation axis is a laboratory coordinate extending approximately anterior-posterior to the subject. As the foot moves away from this axis or undergoes additional medial-lateral or dorsi-plantar rotations, the angular motion loses physical significance. Since the human ankle undergoes rotation not only in terms of inversion-eversion, but also in dorsi-plantar flexion and medial-lateral rotations, an alternate approach was needed. R.W. Soutas-Little recognized the opportunity to implement a variation of the work of E.S. Grood and W.J. Suntay [10].
They describe the construction of a joint coordinate system based on Euler angles for the clinical description of three-dimensional motions as applied to the knee. This method allows for the independent determination of rotations of one body with respect to the other. The coordinate system consists of one axis embedded in each of the two bodies whose motion is to be described, and a third "floating" axis which is always computed to be perpendicular to both fixed axes regardless of body orientation.

For this investigation, the construction of these axes began with the careful placement of six 0.250 inch diameter targets or flags on the subject. Three were cemented to the test shoe, which represented Body 1, and the remaining three were affixed to the lower shank, or Body 2. The loci of the targets were tracked with time so that each set of three was sufficient to generate an independent cartesian coordinate system. Figure 8 indicates typical placement of the targets on each body segment.

Considering Body 1, vectors AB and AC can be determined as follows:

AB = (Bx - Ax) î + (By - Ay) ĵ + (Bz - Az) k̂
AC = (Cx - Ax) î + (Cy - Ay) ĵ + (Cz - Az) k̂

where Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz are the x-y-z coordinates of targets A, B and C at some time (t), obtained using the position analysis programs. Careful placement of targets A and B will allow the vector AB to approximate a line perpendicular to the plane of the sole, and as such this line will be chosen as the local Z-axis. Thus

îz = AB / |AB|

is a unit vector in the Z-direction of Body 1.

Figure 8. Typical Target Placement for Program "J.C.A."

The vector product of îz with AC is a third vector N which is perpendicular to both îz and AC:

îz × AC = N

where

n̂ = N / |N|

is a unit vector in the direction of N.
Finally, a cartesian coordinate system is generated by the vector multiplication of n̂ with îz:

t̂ = n̂ × îz

The system must now be rotated about the îz axis by an amount θ so that t̂ lies parallel to an imaginary line passing anterior-posterior to the shoe. This is shown in Figure 9. The angle φ between vectors AB and AC is determined as follows:

φ = cos⁻¹( |AC · AB| / (|AC| |AB|) ),   where 0 < φ < 90°.

Figure 9. Initial Orientation of the Coordinate Axes

Then,

sin(θ) = d / ( |AC| sin(φ) )

where (d) is a parameter measured at the time of testing. The segment coordinate system is aligned to the foot by the transformation:

| îx |   | cos θ   -sin θ   0 | | t̂  |
| îy | = | sin θ    cos θ   0 | | n̂  |
| îz |   |   0        0     1 | | îz |

Note that îx lies parallel to the plane of the sole and is oriented to extend anterior, relative to the shoe. îy is also parallel to the plane of the sole and extends lateral to the shoe. îz is perpendicular to the plane of the sole. For each time frame of data, a new coordinate system for the shoe is generated so that it always lies in this orientation.

Using a similar procedure, a second local coordinate system can be generated for the shank (or Body 2), defined as Îx, Îy, Îz. Îx is aligned perpendicular to the tibial axis and anterior to it. Îy is also perpendicular to the tibial axis but is aligned lateral to it. Finally, Îz is aligned parallel to the tibial axis and is directed proximally.

As with the coordinate system for Body 1, the Body 2 system is also reconstructed for each time frame of data. The coordinate systems for the foot/shoe and shank are represented in Figure 10. Note that the medial-lateral calf and shoe widths at the location of landmarks (C) and (F) are needed to orient the coordinate systems.

Choosing îz and Îy as the representative "embedded axes" for Body 1 and Body 2 respectively, let

ê2 = Îy   (Body 2, Shank)
ê3 = îz   (Body 1, Shoe).
Then the "floating axis", which completes the joint coordinate system, is defined as follows and will always be perpendicular to both ê2 and ê3:

ê1 = (ê2 × ê3) / |ê2 × ê3|

Vector ê1 is directed anterior to the foot and is perpendicular to both ê2 and ê3. As the foot/shoe begins to dorsiflex, ê3 will begin to rotate posterior to the shank. The floating axis ê1 will remain perpendicular to both ê2 and ê3, but will develop a positive angle α relative to the Îx axis. Note that the inclusion of any inversion-eversion or medial-lateral rotations will not affect the value of α so long as ê1 remains perpendicular to ê2 and ê3. Vector ê1 and its orientation relative to the Îx axis is shown in Figure 11.

Figure 10. Aligned Body Coordinate Systems for the Foot and Shank

Figure 11. Dorsi-Plantar Flexion

For dorsi-plantar flexion:

ê1 · Îz = cos(π/2 - α) = sin(α)

where dorsiflexion is indicated by a positive α and plantar flexion by a negative α.

A measure of the inversion-eversion angle can be determined by evaluating the relationship between ê2 and ê3 (Figure 12):

ê2 · ê3 = cos(π/2 - β) = sin(β)

When evaluating the left limb, inversion is indicated by a positive β and eversion by a negative β. Medial-lateral rotations of the foot about the ê3 axis, or dorsi-plantar flexion about the Îy axis, will not alter the value of β.

For medial-lateral rotation:

ê1 · îx = cos(γ)

or

ê1 · îy = cos(π/2 + γ) = -sin(γ)

where positive values of γ indicate lateral rotation and negative values indicate medial rotation. Again, γ is unaffected by the inclusion of dorsi-plantar flexion or inversion-eversion angles. Vector orientations for medial-lateral rotation are also shown in Figure 12.

This analytical method was used to determine the angular motion of the foot relative to the shank in gait studies.
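Collecting the three angle definitions above, the extraction can be sketched in one routine. This is an illustrative Python/NumPy reconstruction, not the actual "JCA" source; the row-matrix interface and function name are assumptions of this sketch, and the signs follow the text's conventions (left-limb convention for the inversion-eversion angle).

```python
import numpy as np

def jcs_angles(shoe, shank):
    """Joint coordinate system angles between foot/shoe (Body 1) and
    shank (Body 2); a sketch of the computation, not program "JCA".

    shoe, shank : 3x3 arrays whose rows are the unit vectors
                  (ix, iy, iz) and (Ix, Iy, Iz) respectively.
    Returns (alpha, beta, gamma) in degrees: dorsi(+)/plantar(-)
    flexion, inversion(+)/eversion(-) for the left limb, and
    lateral(+)/medial(-) rotation.
    """
    e2 = shank[1]                     # embedded axis of Body 2 (Iy)
    e3 = shoe[2]                      # embedded axis of Body 1 (iz)
    e1 = np.cross(e2, e3)
    e1 = e1 / np.linalg.norm(e1)      # the "floating axis"

    alpha = np.degrees(np.arcsin(e1 @ shank[2]))   # e1 . Iz = sin(alpha)
    beta = np.degrees(np.arcsin(e2 @ e3))          # e2 . e3 = sin(beta)
    gamma = np.degrees(-np.arcsin(e1 @ shoe[1]))   # e1 . iy = -sin(gamma)
    return alpha, beta, gamma
```

With the two frames coincident all three angles are zero, and a pure rotation of the shoe frame about the shank's Îy axis changes only the flexion angle, which is the independence property the floating axis provides.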
Figure 12. Inversion-Eversion and Medial-Lateral Rotation

PROCESSING

Processing of filmed data is a two-phase operation. Phase one, which is by far the most difficult and time consuming, involves the digitizing of each data point, frame by frame, for each camera. The digitized location of each data point is sent to a data file in the PRIME system computer at the M.S.U. Case Center for CAD/CAM, where it is stored for further processing.

Phase two is generally completed at a later date and consists of additional processing of the digitized data file. This is a multi-step procedure involving a series of computer programs. It is this phase which actually determines the three-dimensional coordinates of the object points as well as the relative angles between two rigid bodies. The computer programs involved in both phases of processing are, in order of application:

1) G02
2) TMAT01
3) COPY3D.01
4) JCA

A brief description of each program follows.

The program "G02" is designed to operate interactively with the user throughout the digitizing process. Its objective is to generate the data files which contain the digitized coordinates of each object point (or target) being tracked on the secondary image (the image projected on the digitizing table by the motion analyzer). For any one motion sequence being analyzed, this program is run twice. The first run is to create a calibration file (CAL.XXXX.XXXX) for later use in program "COPY3D.01". The program will prompt the user to begin digitizing the control points on the calibration structure, first from one camera then from the other. When this is complete, the program will convert the raw digitizer coordinates to U,V coordinates relative to a user-specified origin and store these values in units of centimeters. The second run of program "G02" generates the U,V digitizer coordinates of the moving object points and stores them in a data file (DAT.XXXX.XXXX).
It should be noted that any time the motion analyzer is raised or lowered from the table, a new calibration file (CAL.XXXX.XXXX) must be generated to correspond with any subsequently produced data files (DAT.XXXX.XXXX). On the other hand, if several motion sequences are to be analyzed during the same digitizing session, it is possible to use the same calibration file for several data files. If this is done, the user should:

1) Insure that all motion sequences are spooled on the same film reel containing the calibration sequence.

2) Insure that the motion and calibration sequences do not occur on separate splicings of film (this can happen if a camera is reloaded during filming).

3) Insure that the motion analyzer has not been moved since the calibration sequence was digitized.

This completes phase one of the analysis procedure. Phase two is generally begun after all digitizing for a particular day's filming is complete. This is because of the high volume of data files which can be processed in a relatively short time frame.

"TMAT01" and "COPY3D.01", both used in phase two of processing, are modified versions of programs written by and purchased from Dr. James S. Walton. The original versions, "TMATCH" and "JSW3D", remain on file in the Brooks account on the Prime system computer. As purchased, these programs were written to read from and write to data punch cards. Clearly, this is an outdated as well as exceedingly cumbersome means to load data. In addition, they were strictly formatted such that the digitized coordinates of target points tracked with all cameras had to be assembled in ascending temporal sequence. This became a problem because of the procedures developed for digitizing. Since it was imperative that the motion analyzer not be moved after digitization of the calibration points, the film from camera #2 was spliced onto the end of the film from camera #1. The combined film was then loaded on the analyzer.
In order to assemble the combined target data from both cameras in ascending temporal sequence, the operator would have been required to continually advance from one end of the film to the other. Therefore, "TMAT01" and "COPY3D.01" have been modified to operate interactively with the user as well as to read from data files created during the digitizing process. In addition, subroutines "SWITCH" and "SORTS" have been appended to "TMAT01". These subroutines temporally sequence the output of program "G02" (i.e. DAT.XXXX.XXXX).

As presented in the theoretical development of Chapter III, it is imperative that the digitizer coordinates recovered from different sources (cameras) be time-matched so that they can be combined to yield accurate position information. The easiest way to time-match the data is to synchronize the shutters of the cameras. Lacking the appropriate equipment, an alternate approach is to use some form of interpolation routine. This is the function of program "TMAT01".

The second of the analysis programs, "TMAT01", is run following the successful termination of program "G02". "TMAT01" will prompt the user for several input parameters as well as the name of the data file (i.e. DAT.XXXX.XXXX) produced by "G02". "TMAT01" will then rewrite the data file with the digitizer coordinate observations from both cameras in ascending temporal order. The new data will be stored in a temporary file called "SWITCHED", which serves as the correctly formatted input to the remainder of program "TMAT01".

Next, the U-V coordinates of the various targets as viewed from cameras 1 and 2 will be interpolated at user-specified time increments. This is done using a linear interpolation between the nearest observed coordinate value on either side of the desired time. An interpolated U and V coordinate for each camera and time increment is determined and stored in the output files of "TMAT01". These files have the fixed names "PRINTTM" and "PUNCHTM".
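The interpolation step amounts to ordinary linear interpolation between the two observations bracketing each requested time. In modern Python it reduces to numpy.interp; the sketch below is an illustration, not the actual "TMAT01" code.

```python
import numpy as np

def time_match(obs_times, obs_coords, new_times):
    """Linearly interpolate one camera's U (or V) digitizer coordinates
    at the user-specified time increments, as "TMAT01" does: each output
    value is taken between the nearest observations on either side."""
    return np.interp(new_times, obs_times, obs_coords)
```

Interpolating every camera's U and V records onto the same set of instants is what allows the per-camera equations to be combined into the intersection equations.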
"PUNCHTM" contains all data necessary for transfer into program "COPY3D.01" and a very limited amount of text. File PRINTTM contains the same output data as well as extensive text and formatting designed to aid in visually inspecting the data. Any error or warning messages generated while running "TMAT01" will appear in this file. It should be noted that during any subsequent use of program "TMAT01", the contents of output files PRINTTM and PUNCHTM are overwritten. It is therefore important that the motion sequence be fully analyzed once phase two of the analysis is begun.

Before proceeding with the next step in the analysis routine, the output of "TMAT01" should be checked for successful termination. This is most easily done by accessing file PRINTTM in edit mode. Two consecutive "LOCATE **WARNING" commands will display any warning messages, should they exist.

"COPY3D.01" combines the time-matched digitizer coordinates of object points viewed from both cameras to determine their three-dimensional loci. The program begins by comparing the known three-dimensional coordinates of the calibration structure control points with their corresponding U-V digitizer coordinates as viewed by each camera. This comparison is accomplished by reading files CALCDS and CAL.XXXX.XXXX, respectively. Processing of this information yields values for the eleven coefficients of the transformation equations.

Once calibration is complete, "COPY3D.01" will begin reading the time-matched digitizer coordinates of the object points located in file PUNCHTM. For each object point at a given instant in time, four transformation equations (two for each camera) will be generated. U1, V1, U2, V2 and coefficients A through L are known from the digitized data and calibration, respectively, while values for X, Y, Z are unknown. Using a linear least squares approximation, the X, Y, Z coordinates of the object are determined.
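The intersection solve just described can be sketched the same way: stack the four (or more) equations of the form of equation (5) and solve for X, Y, Z by linear least squares. This is an illustrative reconstruction, not the "COPY3D.01" source; the tuple ordering A,B,C,D,E,F,G,H,J,K,L is an assumption of the sketch.

```python
import numpy as np

def intersect(cameras, uv):
    """Estimate the spatial location (X, Y, Z) of one object point at
    one instant from two or more time-matched views.

    cameras : list of 11-coefficient tuples (A,B,C,D,E,F,G,H,J,K,L),
              one per camera, from the calibration step
    uv      : list of time-matched (U, V) digitizer coordinates,
              one pair per camera
    """
    rows, rhs = [], []
    for (A, B, C, D, E, F, G, H, J, K, L), (U, V) in zip(cameras, uv):
        # Two intersection equations per camera, as in equation (5):
        rows.append([A - E*U, B - F*U, C - G*U]); rhs.append(U - D)
        rows.append([H - E*V, J - F*V, K - G*V]); rhs.append(V - L)
    xyz, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return xyz
```

With two cameras this stacks four equations in three unknowns; each additional view appends two more rows, improving the least-squares estimate.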
This procedure is repeated for all object points at all specified instants in time until all three-dimensional loci have been determined.

As with "TMAT01", "COPY3D.01" generates both a printed (PR3D.XXXX.XXXX) and a punched (PU3D.XXXX.XXXX) output file. PU3D.XXXX.XXXX is a condensed version of PR3D.XXXX.XXXX and is best suited for input to other processing programs. PR3D.XXXX.XXXX is useful for hardcopy printout. The three-dimensional loci of object points generated by "COPY3D.01" may be all the user is interested in. If not, it is good practice to check the output (PR3D.XXXX.XXXX) before further processing is attempted. As with file PRINTTM, warning messages can easily be located by accessing PR3D.XXXX.XXXX in edit mode and twice entering the command "LOCATE **WARNING".

"JCA" is the final program used in the phase two analysis. An abbreviation for Joint Coordinate Analysis, JCA is capable of determining the angular orientation of one rigid body relative to another about three independent axes. As written, this program requires that only six object points be digitized for processing and that they have the orientation shown in Figure 8. Should the user desire to track the three-dimensional loci of more than these six targets, a minor modification of the program could be made. Using the output from "COPY3D.01" (PU3D.XXXX.XXXX), "JCA" generates output in file JCA.XXXX.XXXX containing inversion-eversion, medial-lateral, and dorsi-plantar flexion angles for the time period that has been digitized.

VI. VALIDATION

The three-dimensional technique used in this study was validated by Walton at the time of its development. A brief review of his work precedes the verification procedures used by this investigator.

Static validation consisted of filming 18 golf balls suspended from a horizontal surface such that each of three cameras was approximately 12 meters from the nearest ball.
The eight outermost balls were used as control points while the innermost ten were treated as unknowns. Walton indicated a worst case radius of error for this test of 5.6 mm in a field of view of nearly 13 meters.

Dynamic validation was conducted by examining the trajectory of a thrown golf ball fully contained within the space defined by the control points. The three-dimensional locus of the ball was determined and then fitted to a plane using a linear least squares fit. Deviations from the plane were recorded, as was the angle between the plane of best fit and the vertical axis. The acceleration of the ball due to gravity was also determined. Walton's worst case results are reported as 2.55 mm mean deviation from the plane of best fit, -0.334 degrees between the plane of best fit and vertical, and an estimate for gravity of 9.89 m/s².

For this investigation, the necessity of a portable calibration structure presented several formidable obstacles to verification. First, it was impossible to construct a structure large enough to completely encompass the motion space being studied and yet small enough to be truly portable. As a result, approximately one third of the volume containing the investigated motion occurred outside the control zone of the calibration structure. Verification of the three-dimensional loci of such points was not possible because the absolute locations of other, non-control-point targets were unavailable.

An alternative was to use the twelve control points of the calibration structure itself in the verification process. Since only six control points are required to determine the D.L.T. coefficients, the remaining six control points could be treated as unknown targets. These could be chosen to effectively lie outside the volume contained by the control points. Several combinations of six calibration control points were chosen as unknowns and their corresponding three-dimensional loci were determined.
The procedure was then repeated using eight control points as knowns, and again using ten control points as knowns. For each case, the calculated three-dimensional loci of all points were compared with their known values. In all cases, the three-dimensional loci of both known and "unknown" control points were determined. It should be noted, however, that digitized data identical to that used in the calibration sequence was used as input for determining the known control point loci. As a result, the radius of error of these points is somewhat less than one might expect had they been re-digitized. The highest radius of error was attained by "bunching" all of the knowns at one end of the calibration structure. Complete results are shown in Figure 13. The resulting "worst case" radius of error using eight control points or more was approximately 0.3 cm for the unknown control points.

An alternate approach for verification was to film the calibration structure at extremes in the field of view of each camera, and then to compare analytically generated vector magnitudes between control points to their known values. In this manner, all twelve control points could be used in the generation of calibration coefficients for the transformation equations. The calibration structure was filmed in three extreme positions as well as a fourth, centrally located position. The central position was used for the calibration sequence. Control point coordinates for the other three configurations were determined with respect to the central position, in which control point #6 was taken as the origin.

Relative displacements of the calibration structure in its various locations can be determined by examination of the computed vectors which extend to the new locations of control point #6. Each of these vectors may be considered as a local origin. Their description follows:
Figure 13. Radius of Error (cm)

Position   Vector                                 Magnitude
FORE:      R6F = 67.26 î + 49.44 ĵ - 0.78 k̂       83.5 cm.
AFT:       R6A = -72.87 î - 46.80 ĵ + 1.08 k̂      86.6 cm.
HIGH:      R6H = -2.80 î - 2.65 ĵ + 18.12 k̂       18.5 cm.

Tables 4, 5, and 6 present actual computer-generated output of the control point loci at these extremes in the field of view. Note that there are two "observations" shown for each position. This is due to a minimum requirement of two observations for successful operation of program "TMAT01". Control point #1 for "Observation #1" was not digitized and hence appears in the output as having infinite X,Y,Z coordinate locations. For this reason, "Observation #2" was chosen for all validation work.

Subtracting the appropriate local origin vector from those listed in Tables 4, 5, and 6 yields vectors whose magnitudes should ideally compare with their known values listed in Table 3. These magnitudes are shown in Table 7. Calculated and known vector magnitudes are compared in Table 8. Since it was not possible to verify the absolute location of any "measured" control point, a radius of error must be associated with each control point that generated

TABLE 4. "COPY3D.01" OUTPUT DATA (FORWARD)
(Computer-generated X, Y, Z loci of control points #1 through #12 for the forward position: two observations beginning at time 1.050000 sec, with a time increment of 0.060000 sec.)

Figure 14.
Figure 16. Top & Front View of Treadmill Data

While the precise orientation of the calibration structure to the treadmill is unknown, a reasonable approximation can be made by examining trends in the motion data. Figures 14, 15, and 16 show the target point to be low, and moving down and forward, during the first 0.05 seconds. At this point, the target appears to level off and move in the direction of the motorized tread. Very little motion is observed perpendicular to the treadmill direction at this time. During the final 0.18 seconds, the target begins to rise rapidly as it continues to move back. At the same time, it begins to move a total of approximately 3 centimeters in a direction perpendicular to the treadmill direction and lateral to the athlete.

An example of the output from program "JCA" is shown in Table 10. Only the first two observations are shown for brevity.
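A joint coordinate system in the style of Grood and Suntay [10] resolves the relative orientation of two body-fixed axis sets into three clinically meaningful rotations: one axis fixed in each body plus a "floating" axis perpendicular to both. The sketch below illustrates the idea using the Observation #2 unit vectors; the axis assignments and sign conventions are my own illustrative choices, not the thesis's, so the numbers will not reproduce the "JCA" output.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(c / n for c in v)

def jcs_angles(tibia, foot):
    """Illustrative joint-coordinate-system angles for the ankle.
    tibia and foot are (X, Y, Z) unit-vector triples in lab coordinates."""
    e1 = tibia[0]                      # body-fixed axis in the tibia (medial-lateral)
    e3 = foot[2]                       # body-fixed axis in the foot (here its Z axis)
    e2 = normalize(cross(e3, e1))      # floating axis, perpendicular to both
    # Rotation about e1: direction of the floating axis in the tibia's Y-Z plane.
    alpha = math.degrees(math.atan2(dot(e2, tibia[2]), dot(e2, tibia[1])))
    # Departure of the two body-fixed axes from their perpendicular neutral.
    beta = 90.0 - math.degrees(math.acos(max(-1.0, min(1.0, dot(e1, e3)))))
    # Rotation about e3: direction of the floating axis in the foot's X-Y plane.
    gamma = math.degrees(math.atan2(dot(e2, foot[0]), dot(e2, foot[1])))
    return alpha, beta, gamma

# Observation #2 unit vectors from Table 10 (rows are the X, Y, Z axes).
tibia = ((-0.9591, -0.2492, -0.1345),
         ( 0.2522, -0.9677, -0.0054),
         (-0.1288, -0.0391,  0.9909))
foot  = ((-0.9820, -0.1887,  0.0091),
         ( 0.1801, -0.9498, -0.2560),
         ( 0.0570, -0.2497,  0.9666))

print(jcs_angles(tibia, foot))
```

Because each angle is defined about an anatomically meaningful axis rather than about a fixed laboratory axis, the angles retain their physical significance throughout the motion, which is the property noted in the Conclusions.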
Note that the local body coordinate systems change with each time frame. Dorsi/plantar, inversion/eversion, and medial/lateral rotations are listed as well. The complete output from this data file was used to generate graphs of each rotation with time (Figures 17, 18, and 19). These are presented in both filtered and unfiltered form. Note that for dorsi/plantar flexion, camera placement was such that a significant component of this motion was recorded by each camera. This may in part be responsible for the reduced scatter in that data. Planar redundancy did not exist for the medial/lateral and inversion/eversion data.

TABLE 10. - TYPICAL "JCA" OUTPUT DATA

OK, SLIST JCA.TD19.1126

NEXUS FILMING DATE WAS 11 26 85
MEDIAL-LATERAL FOOT WIDTH WAS 7.33 cm.
MEDIAL-LATERAL TIBIA WIDTH WAS 5.62 cm.

* JOINT COORDINATE SYSTEM ANALYSIS OF THE ANKLE *

OBSERVATION #2    TIME = 1.5800

LOCAL COORDINATES OF BODY #1 (FOOT)
X-UNIT VECTOR   -0.9820   -0.1887    0.0091
Y-UNIT VECTOR    0.1801   -0.9498   -0.2560
Z-UNIT VECTOR    0.0570   -0.2497    0.9666

LOCAL COORDINATES OF BODY #2 (TIBIA)
X-UNIT VECTOR   -0.9591   -0.2492   -0.1345
Y-UNIT VECTOR    0.2522   -0.9677   -0.0054
Z-UNIT VECTOR   -0.1288   -0.0391    0.9909

[The printout continues with the joint coordinate axes and the dorsal(+)/plantar(-) flexion, inversion(+)/eversion(-), and lateral(+)/medial(-) rotation angles for Observation #2, followed by the same output for Observation #3 (TIME = 1.5900); these values are illegible in this copy.]

Figure 17. Dorsi/Plantar Flexion, Filtered vs. Unfiltered

Figure 18. Inversion-Eversion, Filtered vs. Unfiltered

Figure 19. Medial-Lateral Rotation, Filtered vs. Unfiltered

Another factor contributing to overall scatter in the angular data is the radius of error involved with each of the six targets used in generating the body coordinate systems. For each system, vectors are generated using two targets which are typically on the order of seven centimeters apart. For a worst-case radius of error of 0.73 cm per target, it is conceivable to generate vectors whose computed direction is in excess of six degrees away from the true values. There are two such vectors generated for each coordinate system. For data to be used in angular determinations, it is therefore preferable to place targets as far apart as possible.

As stated earlier, overall system error might be reduced by:

1) Ensuring that the digitizing projector identically locates film in its gate from frame to frame.

2) Taking precautionary steps to minimize stretching of the film.

3) Exercising extreme care in the digitizing process to minimize operator error.

4) Improving digitizing equipment reliability.

Additional reduction in error would be realized by upgrading the cameras so that synchronized shutter operation is possible. This would eliminate the need to approximate the two-dimensional position of targets from one camera for each time interval. Program "TMATOl" makes this approximation using a linear interpolation from digitized data existing for the target before and after the time period in question. This works well so long as no acceleration of the target is taking place. Since this is generally not the case, it would be desirable to eliminate the procedure.
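The interpolation step just described can be sketched in a few lines (the function name is mine, not the thesis's). The comments show exactly why acceleration breaks the approximation: a constant-velocity target is interpolated exactly, while an accelerating one is not.

```python
def interpolate_2d(t, t0, p0, t1, p1):
    """Linearly interpolate a 2-D target position digitized at times
    t0 and t1 to the intermediate time t (t0 <= t <= t1), the kind of
    approximation made for the unsynchronized camera."""
    f = (t - t0) / (t1 - t0)
    return (p0[0] + f * (p1[0] - p0[0]),
            p0[1] + f * (p1[1] - p0[1]))

# Frames digitized at t = 0.0 and t = 1.0 with x = 0 and x = 10 cm.
estimate = interpolate_2d(0.5, 0.0, (0.0, 0.0), 1.0, (10.0, 0.0))
print(estimate)   # (5.0, 0.0)

# If the target moved at constant velocity (x = 10t), the true position
# at t = 0.5 is x = 5.0 and the estimate is exact.  If it accelerated
# (say x = 10t**2), the true position is x = 2.5 and the linear estimate
# is off by 2.5 units, which is the limitation noted above.
```

Synchronized shutters would make the interpolation unnecessary because every camera would expose at the same instants.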
Other, more sophisticated equipment exists which virtually eliminates the need to do any digitization. SELSPOT is one such device, but these systems have their disadvantages as well. They require the affixation of small light-emitting diodes (L.E.D.'s) to the object under investigation. This hardware can potentially impede the natural pattern of motion. In addition, they require sophisticated "camera" and computer equipment which drives the price beyond reach at this time.

VIII. CONCLUSIONS

The quantification of motion using the photogrammetric techniques described in this thesis has generated satisfactory preliminary results. The mean radius of error for targets lying at extremes of the field of view for each camera was 0.73 cm. For targets lying a maximum of 18 cm outside of the control volume, the mean radius of error was reduced to 0.39 cm. Additionally, several factors which contribute to the error term have been identified. Continued development of the system, with careful attention given to these items, should further reduce system error.

Three-dimensional position data can be further combined to generate angular information regarding foot placement relative to the shank while running. Preliminary results of this work are in agreement with works presented by other investigators. It should be noted, however, that angles obtained using the Joint Coordinate Analysis retain their physical significance during the complete motion.

Application of this type of photogrammetric technique is limited primarily to motions which are contained in, or close to, the volume of the calibration structure. If this criterion can be met, then almost any study which attempts to analyze human motion can be undertaken. Motions which occupy a volume significantly larger than that of the control volume can still be analyzed if a suitably large calibration structure is developed.
Conversely, extremely small motions are best analyzed using a small calibration structure for adequate resolution.

Several generalized suggestions for application of photogrammetry are listed below. Subsets of several of these are currently under investigation at Michigan State University.

1) Estimation of the major factors governing both outstanding and "normal" human performance.

2) The development of footwear and other athletic equipment which promote effective athletic performance.

3) Impact studies for the development of athletic devices which maximize energy absorption or redistribution during collision.

4) The development of more sophisticated prosthetic devices to allow closer approximation of "normal" motion.

BIBLIOGRAPHY

1. Abdel-Aziz, Y.I., and Karara, H.M., "Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry.", Proceedings of the Symposium on Close-Range Photogrammetry, January 26-29, 1971. Falls Church, Va.: American Society of Photogrammetry, 1971.

2. Bates, B.T., L.R. Osternig, and B. Mason, "Lower Extremity Function During the Support Phase of Running", Biomechanics IV. Baltimore: University Park Press, pp. 31-39, 1978.

3. Bates, B.T., S.L. James, and L.R. Osternig, "Foot Function During the Support Phase of Running", Running 3(4): pp. 24-31, 1978.

4. Bergemann, B.W., "Three-Dimensional Cinematography: A Flexible Approach.", Research Quarterly, Vol. 45, pp. 302-309, 1974.

5. Cavanagh, P.R., The Running Shoe Book, Mountain View, Ca.: World Publications, 1981, pp. 83-89, 259-260.

6. Clarke, T.E., E.C. Fredrick, and C.L. Hamill, "The Effects of Shoe Design Parameters on Rearfoot Control in Running", Med. Sci. Sports, Vol. 15: pp. 376-381, 1983.

7. Coe, B.W., "Cine History.", Focal Encyclopedia of Photography, 1973.

8. Cureton, T.K., "Elementary Principles and Techniques of Cinematographic Analysis as Aids in Athletic Research", Research Quarterly, Vol. 10, pp. 3-24, 1939.

9. Fenn, W.O., and C.A. Morrison, "Frictional and Kinetic Factors in the Work of Sprint Running", American Journal of Physiology, Vol. 92, pp. 583-611, 1930.

10. Grood, E.S., and W.J. Suntay, "A Joint Coordinate System for the Clinical Description of Three-Dimensional Motions: Application to the Knee", Journal of Biomechanical Engineering, ASME, Vol. 105, pp. 136-144, 1983.

11. Miller, D.I., and K.L. Petak, "Three-Dimensional Cinematography.", Kinesiology III, 1973, pp. 14-19.

12. Muybridge, E., "A Horse's Motion Scientifically Determined.", Scientific American, Vol. 39, pp. 239-241, 1878.

13. Noss, J., "Control of Photographic Perspective in Motion Analysis.", Journal of Health, Physical Education and Recreation, Vol. 38, pp. 81-84, 1967.

14. Pezzack, J.C., R.W. Mann, and D.A. Winter, "Technical Note: An Assessment of Derivative Determining Techniques Used for Motion Analysis.", Journal of Biomechanics, Vol. 10, pp. 177-182, 1977.

15. Plagenhoff, S., "Computer Programs for Obtaining Kinetic Data on Human Movement.", Journal of Biomechanics, Vol. 1, pp. 221-234, 1968.

16. Putnam, C.A., "The Tri-axial Cinematographic Method of Angular Measurement.", Research Quarterly, Vol. 50, pp. 140-145, 1979.

17. Spray, J., "Three Dimensional Film Data Validation Procedures: A Vector Approach.", Master's Thesis, The University of Arizona, 1973.

18. Thompson, M.M., Manual of Photogrammetry, 3rd ed., Falls Church, Va.: American Society of Photogrammetry, 1966.

19. Van Gheluwe, B., "A New Three-Dimensional Filming Technique Involving Simplified Alignment and Measurement Procedures.", Proceedings of the Fourth International Seminar on Biomechanics, Biomechanics IV, pp. 476-481, Baltimore: University Park Press, 1974.

20. Walton, J.S., "Close Range Cine-Photogrammetry: A Generalized Technique for Quantifying Gross Human Motion.", Ph.D. Dissertation, The Pennsylvania State University, 1981.