LIBRARY
Michigan State University

This is to certify that the dissertation entitled

SYSTEMS ANALYSIS OF THE INVERSE HEAT CONDUCTION PROBLEM

presented by

DAVID BRUCE MANNER

has been accepted towards fulfillment of the requirements for the Ph.D. degree in Mechanical Engineering.

Major professor

Date: May 5, 1987

MSU is an Affirmative Action/Equal Opportunity Institution

SYSTEMS ANALYSIS OF THE INVERSE HEAT CONDUCTION PROBLEM

By

David Bruce Manner

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Mechanical Engineering

1987

ABSTRACT

SYSTEMS ANALYSIS OF THE INVERSE HEAT CONDUCTION PROBLEM

By David Bruce Manner

The groundwork necessary for frequency domain analysis of solutions to the inverse heat conduction problem, using Laplace and z-transform techniques, is presented. The solution is traced back through the direct solution methods used to generate the inverse solution, and draws heavily upon transfer function concepts. Transformation methods from continuous time ladder network models to discrete time models are developed. The mathematical relationship between ladder network models and finite difference models is derived. In addition, a new procedure for generating errorless discrete time models from Fourier series solutions is presented.
The frequency response of direct discrete time solutions, and the impact on the stability and noise performance of inverse algorithms arising from these direct solutions, is presented. Inverse algorithm methods are generalized according to their frequency domain structure, rather than time domain, which effectively decouples the algorithm classification from a direct solution. Precompensation filter concepts are introduced, and a frequency domain test for inverse algorithm stability is developed. Examples of Stolz's method and Beck's method are included, and results show that noise is actually enhanced in some frequency bands under the proper circumstances. A new class of unconditionally stable inverse algorithms is derived, and three particular cases are developed and tested. Results showing the effects of the choice of direct model used to

Dedication

I would like to dedicate this thesis to my father, Richard John, whose scientific genius and determination provided motivation; to my mother, Anna Bruno, whose imagination and optimism inspired me; and to my wife, Mary Ufford, and son, Richard Henry, who unselfishly supported me during the countless hours consumed by this work.

ACKNOWLEDGEMENTS

I would like to acknowledge:

Dr. James V. Beck for introducing me to this topic, and for his technical guidance while serving as committee chairman.

Dr. Michael D. Olinger for offering his considerable technical skill, insights, and endless enthusiasm.

Dick Palsrok, my supervisor, for providing unwavering moral and financial support throughout.

Lear Siegler Inc., for providing an atmosphere in which this research was encouraged and could be completed.

Dr. H. Roland Zapp, Dr. Richard W. Bartholomew, and Dr. David H. Y. Yen for their help, encouragement, and for serving on the committee.
Karen Nagle for her infinite patience and good humor while typesetting this dissertation.

Table of Contents

List of Tables ........ xi
List of Figures ........ xv
List of Symbols ........ xvi

1 Introduction ........ 1
  1.1 Problem Description ........ 1
  1.2 Outline ........ 7

2 Background ........ 12
  2.1 Introduction ........ 12
  2.2 Continuous Time System Concepts ........ 13
  2.3 Discrete Time System Concepts ........ 19

3 Analysis of Direct Methods ........ 24
  3.1 Introduction ........ 24
  3.2 Analytical Methods ........ 26
  3.3 Numerical Methods ........ 29
    3.3.1 Ladder Networks ........ 30
    3.3.2 Finite Difference Methods ........ 52
    3.3.3 Continuous Time to Discrete Time Relationship ........ 61
  3.4 Exact Discrete Time Solutions ........ 63

4 Inverse Solution Methods ........ 67
  4.1 Introduction ........ 67
  4.2 Stolz's Method ........ 68
  4.3 Beck's Method ........ 70

5 Inverse Algorithm Stability ........ 79
  5.1 Introduction ........ 79
  5.2 Stability Criterion for Discrete Time Systems ........ 81
  5.3 Inverse Algorithm Stability Dependence on Time Step ........ 82
  5.4 Stabilization of Inverse Algorithms ........ 89
    5.4.1 FIR Precompensation Filter ........ 92
    5.4.2 FRP Precompensation Filter ........ 93

6 Frequency Domain Analysis ........ 95
  6.1 Introduction ........ 95
  6.2 Beck's Method ........ 97
  6.3 FIR Precompensation Filter ........ 116
  6.4 FRP Precompensation Filter ........ 123
  6.5 APF Precompensation Filter ........ 123

7 Time Domain Results ........ 127
  7.1 Introduction ........ 127
  7.2 Discrete Time Solutions ........ 134
  7.3 Error Free Observation Results ........ 136
7.4 Impulse Noise Response ........ 145
7.5 Truth Model Effects on Performance ........ 152
7.6 Prefiltering of Observations ........ 159

List of Tables

2.1 Useful Transform Pairs ........ 20
3.1 Boundary Condition Parameters ........ 33
3.2 Continuous Time Transfer Function Parameters ........ 42
3.3 Continuous Time Transfer Function Poles ........ 43
3.4 Continuous Time Transfer Function Residues ........ 43
3.5 Impulse Invariant Transfer Function ........ 50
3.6 Step Invariant Transfer Function ........ 50
3.7 Ramp Invariant Transfer Function ........ 51
3.8 Finite Difference Parameters ........ 53
3.9 Forward Difference Transfer Function ........ 56
3.10 Backward Difference Transfer Function ........ 58
3.11 Crank-Nicolson Transfer Function ........ 59
3.12 Implicit 5/6 Transfer Function ........ 59
3.13 Implicit 3/4 Transfer Function ........ 60
5.1 Impulse Invariant Stolz Inverse ........ 86
5.2 Step Invariant Stolz Inverse ........ 86
5.3 Ramp Invariant Stolz Inverse ........ 86
5.4 Forward Difference Stolz Inverse ........ 87
5.5 Backward Difference Stolz Inverse ........ 87
5.6 Crank-Nicolson Stolz Inverse ........ 87
5.7 Implicit 5/6 Stolz Inverse ........ 88
5.8 Implicit 3/4 Stolz Inverse ........ 88
6.1 Beck's Method Roots - Impulse Invariant ........ 101
6.2 Beck's Method Roots - Step Invariant ........ 101
6.3 Beck's Method Roots - Ramp Invariant ........ 102
6.4 Beck's Method Roots - Forward Difference ........ 102
6.5 Beck's Method Roots - Backward Difference ........ 103
6.6 Beck's Method Roots - Crank-Nicolson ........ 103
6.7 Beck's Method Roots - Implicit 5/6 ........ 104
6.8 Beck's Method LP Filter Parameters ........ 105
7.1 Prefilter Bandwidth versus Pole Location ........ 160
7.2 Prefilter Bandwidth Performance ........ 163

List of Figures

3.1 Insulated Wall with Surface Heat Flux ........ 28
3.2 Lumped Parameter Approximation ........ 32
3.3 Ladder Network Approximation ........ 32
3.4 Insulated Wall Example Ladder Network ........ 34
3.5 Insulated Wall Example Ladder Network (alternate form) ........ 34
3.6 Generalized Internal Node ........ 35
3.7 Generalized Boundary Node ........ 35
3.8 Cascade System Representation ........ 44
3.9 Parallel System Representation ........ 44
3.10 Invariant Simulation of a Continuous Time System ........ 46
3.11 Discrete Time System Behavior ........ 57
3.12 Exact Discrete Time System Simulation ........ 66
5.1 Inverse Algorithm Representation ........ 91
6.1 Inverse Filter Representation ........ 96
6.2 Beck's Precompensation - Impulse Invariant R = 2 ........ 106
6.3 Beck's Precompensation - Impulse Invariant R = 4 ........ 107
6.4 Beck's Precompensation - Impulse Invariant R = 6 ........ 108
6.5 Beck's Precompensation - Forward Difference R = 5 ........ 109
6.6 Beck's Precompensation - Forward Difference R = 7 ........ 110
6.7 Beck's Precompensation - Forward Difference R = 9 ........ 111
6.8 Beck's Precompensation - Crank-Nicolson R = 3 ........ 112
6.9 Beck's Precompensation - Crank-Nicolson R = 5 ........ 113
6.10 Beck's Precompensation - Implicit 5/6 R = 7 ........ 114
6.11 Beck's Precompensation - Implicit 5/6 R = 9 ........ 115
6.12 FIR Precompensation - Impulse Invariant ........ 117
6.13 FIR Precompensation - Step Invariant ........ 118
6.14 FIR Precompensation - Ramp Invariant ........ 119
6.15 FIR Precompensation - Crank-Nicolson ........ 120
6.16 FIR Precompensation - Implicit 5/6 ........ 121
6.17 FIR Precompensation - Implicit 3/4 ........ 122
6.18 APF Precompensation - Impulse Invariant ........ 125
6.19 APF Precompensation - Implicit 5/6 ........ 126
7.1 Simple Representation of the IHCP ........ 128
7.2 Practical Representation of the IHCP ........ 128
7.3 Truth Model Representation of the IHCP ........ 128
7.4 Impulse Invariant Stolz Inverse Roots ........ 130
7.5 Step Invariant Stolz Inverse Roots ........ 130
7.6 Ramp Invariant Stolz Inverse Roots ........ 130
7.7 Forward Difference Stolz Inverse Roots ........ 130
7.8 Backward Difference Stolz Inverse Roots ........ 131
7.9 Crank-Nicolson Stolz Inverse Roots ........ 131
7.10 Implicit 5/6 Stolz Inverse Roots ........ 131
7.11 Implicit 3/4 Stolz Inverse Roots ........ 131
7.12 Temperature Step Response - Impulse Invariant Method ........ 139
7.13 Impulse Invariant Method - FIR Inverse Filter Step Heat Flux Estimates ........ 139
7.14 Impulse Invariant Method - FRP Inverse Filter Step Heat Flux Estimates ........ 140
7.15 Impulse Invariant Method - APF Inverse Filter Step Heat Flux Estimates ........ 140
7.16 Temperature Step Response - Forward Difference Method ........ 141
7.17 Forward Difference Method - Inverse Filter Step Heat Flux Estimates ........ 141
7.18 Temperature Step Response - Backward Difference Method ........ 142
7.19 Backward Difference Method - Inverse Filter Step Heat Flux Estimates ........ 142
7.20 Temperature Step Response - Crank-Nicolson Method ........ 143
7.21 Crank-Nicolson Method - FIR Inverse Filter Step Heat Flux Estimates ........ 143
7.22 Crank-Nicolson Method - FRP Inverse Filter Step Heat Flux Estimates ........ 144
7.23 Crank-Nicolson Method - APF Inverse Filter Step Heat Flux Estimates ........ 144
7.24 Impulse Noise Response - Impulse Invariant FIR Inverse Filter ........ 147
7.25 Impulse Noise Response - Impulse Invariant FRP Inverse Filter ........ 147
7.26 Impulse Noise Response - Impulse Invariant APF Inverse Filter ........ 148
7.27 Impulse Noise Response - Step Invariant FIR Inverse Filter ........ 148
7.28 Impulse Noise Response - Step Invariant FRP Inverse Filter ........ 149
7.29 Impulse Noise Response - Step Invariant APF Inverse Filter ........ 149
7.30 Impulse Noise Response - Ramp Invariant FIR Inverse Filter ........ 150
7.31 Impulse Noise Response - Ramp Invariant FRP Inverse Filter ........ 150
7.32 Impulse Noise Response - Ramp Invariant APF Inverse Filter ........ 151
7.33 Temperature Step Response - Truth Model ........ 154
7.34 Impulse Invariant FIR Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 154
7.35 Impulse Invariant FRP Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 155
7.36 Impulse Invariant APF Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 155
7.37 Forward Difference Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 156
7.38 Backward Difference Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 156
7.39 Crank-Nicolson FIR Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 157
7.40 Implicit 5/6 FIR Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 157
7.41 Implicit 3/4 FIR Inverse Filter (Truth Model) Step Heat Flux Estimates ........ 158
7.42 Triangle Heat Flux Estimates with Prefilter (a = 0.732) ........ 161
7.43 Triangle Heat Flux Estimates with Prefilter (a = 0.184) ........ 161
7.44 Triangle Heat Flux Estimates with Prefilter (a = 0.544) ........ 165
7.45 Beck's Method Triangle Heat Flux Estimates ........ 165

List of Symbols

s - continuous time complex frequency variable
z - discrete time complex frequency variable
θn(i) - temperature (n - space index) (i - time index)
H(s) - transfer function (Laplace transform description)
H(z) - transfer function (z-transform description)

Subscripts:
iiv - impulse invariant
siv - step invariant
riv - ramp invariant
fdm - forward difference
bdm - backward difference
cnm - Crank-Nicolson
i56 - implicit 5/6
i34 - implicit 3/4

g - conductance
ρ - density
Fo - Fourier number
r - resistance
cp - specific heat
hc - convection coefficient
c - capacity
k - thermal conductivity
St(z) - Stolz inverse filter transfer function
P(z) - precompensation filter transfer function
I(z) - inverse filter transfer function
Δt - time step
A - state equation coefficient matrix
B - state equation input matrix
E - observation equation state matrix
F - observation equation input matrix
I - identity matrix
α, β - FD time averaging coefficients
ω, ν - FD space averaging coefficients
W - FD space averaging matrix
C - capacity matrix
G - conductance matrix
P - special FD matrix
V - special FD matrix
q - heat flux
x - state vector
u - input
y - observation
g - gain
δ(t), δ(n) - unit impulse function
h(t), h(n) - impulse response
μ(t), μ(n) - unit step function
s(t), s(n) - step response
ρ(t), ρ(n) - unit ramp function
r(t), r(n) - ramp response
q̂(n) - estimate of q(n)
adj[A] - adjoint of matrix A
det[A] - determinant of matrix A
L{} - Laplace transform operator
L⁻¹{} - inverse Laplace transform operator
Z{} - z-transform operator
Z⁻¹{} - inverse z-transform operator
RHS, LHS - the right hand side and left hand side of an equation
PFE - partial fraction expansion

Chapter 1

Introduction

1.1 Problem Description

The research presented here deals generally with the solution to ill-posed problems.
This is a rather broad class of problems, described in some detail by Tikhonov and Arsenin [1]. This work is aimed toward finding solutions to a specific ill-posed problem called the inverse heat conduction problem (IHCP). Several techniques for solving the inverse heat conduction problem are developed, and analyses of these and other important solutions are also presented.

Laplace and z-transform techniques have been used rather extensively in this thesis and, for that reason, attention has been directed toward linear heat transfer systems. Laplace and z-transform analyses will frequently be referred to as frequency domain analyses, since the functions used will depend upon the complex frequency variables s and z respectively. Transfer function concepts are used throughout.

The conditions necessary to distinguish between well-posed and ill-posed problems are described by Tikhonov and Arsenin [1] as: given z = R(u), where u ∈ U and z ∈ F, a unique solution z must exist for all u, and small changes in u must lead to small changes in z. Mathematical problems that do not meet these conditions are called ill-posed.

For example, imagine an observer measuring the effect on temperature, at some internal point in a semi-infinite body, of a heat flux source at the surface. Let the heat flux at the surface be defined by:

q(t) = 0,   t < 0
q(t) = at,  t ≥ 0

A system whose output is zero prior to t = 0 is referred to as a causal system; a system which produces non-zero output prior to t = 0 is called acausal, non-causal, or anticipatory, since it begins to respond before the input is applied. Anticipatory systems are not physically realizable. Acausal signals are sometimes necessary to describe signals which turn on at some time prior to the time origin t = 0.

The inverse transform is defined to be:

f(t) = L⁻¹{F(s)} = (1/2πj) ∫ F(s) e^(st) ds    (2.2)

with the integration taken along the line from σ - j∞ to σ + j∞ in the complex s-plane.
where s is a complex frequency variable, and F(s) is also complex. Some of the useful theorems which are used extensively are:

1) superposition property
   L{a f(t) + b g(t)} = a F(s) + b G(s)

2) derivative theorem
   L{df(t)/dt} = s F(s) - f(0)

3) integral theorem
   L{∫₀ᵗ f(τ) dτ} = F(s)/s

4) time shift theorem
   L{f(t - T)} = e^(-sT) F(s)

5) initial value theorem
   f(0⁺) = lim (s→∞) s F(s)

6) final value theorem
   lim (t→∞) f(t) = lim (s→0) s F(s)

7) convolution theorem
   y(t) = ∫₀ᵗ f(τ) u(t - τ) dτ  ⇔  Y(s) = F(s) U(s)

These properties can be used to find the solution of ordinary differential equations. In particular, state equations, which will be defined next, are easily solved using Laplace transforms.

Continuous time dynamical system models are frequently expressed in the form of state equations, which are coupled first order ordinary differential equations. The state vector contains N state variables, where N is the number of energy storage elements in the model. The state variables are, in most cases, proportional to the energy stored in the corresponding component. Heat transfer models are no exception, and the state variables used in these analyses will be the temperatures of the nodes. Standard state equation form is defined to be [6]:

dx(t)/dt = A x(t) + B u(t)    (2.3)

in which x(t) is called the state vector with dimension (N x 1), A is a known (N x N) matrix, B is a known (N x 1) matrix, and u(t) is the system input vector.

State equations are usually accompanied by an observation (or output) equation of the form:

y(t) = E x(t) + F u(t)    (2.4)

where y(t) is a scalar and E and F are known (1 x N) matrices.

Using properties 1) and 2), the state equations can be transformed into the frequency domain:

s X(s) - x(0⁺) = A X(s) + B U(s)

Rearranging yields:

[sI - A] X(s) = x(0⁺) + B U(s)    (2.5)

where I is the (N x N) identity matrix.
Assuming that the initial state of the system, x(0⁺), is known, and that the input, u(t) ⇔ U(s), is known, equation (2.5) can be solved for the state vector X(s) by premultiplying by [sI - A]⁻¹:

X(s) = [sI - A]⁻¹ (x(0⁺) + B U(s))    (2.6)

Transforming (2.4) yields:

Y(s) = E X(s) + F U(s)    (2.7)

and substituting (2.6) into (2.7):

Y(s) = E [sI - A]⁻¹ x(0⁺) + (E [sI - A]⁻¹ B + F) U(s)    (2.8)

The solution in the time domain, y(t), can be found by taking the inverse Laplace transform of (2.8).

The system transfer function, denoted H(s), is defined when the system is initially relaxed (i.e. x(0⁺) = 0), and is the ratio of the frequency domain output variable, Y(s), to the input variable, U(s):

H(s) = Y(s)/U(s) = E [sI - A]⁻¹ B + F    (2.9)

Now the response of the system to any input, U(s), can be found by multiplying the transfer function by U(s):

Y(s) = H(s) U(s)    (2.10)

Using the convolution theorem, equation (2.10) becomes:

y(t) = ∫₀ᵗ h(τ) u(t - τ) dτ    (2.11)

which is Duhamel's theorem.

Notice that H(s) is, in addition, the frequency domain response of the relaxed system to a unit impulse δ(t). The companion time function, h(t), is known as the impulse response, or Green's function, of the system. Because of their special properties, the symbols h(t) and H(s) will be reserved for these functions.

When a frequency function, F(s), is evaluated at s = jω, the resultant complex function is the Fourier transform of f(t) [5]. This is important because the frequency content of random processes is often described in terms of Fourier transforms. The frequency response (or spectrum) of the system is defined to be the amplitude (magnitude) and angle response of H(s) evaluated at s = jω = j2πf. These two quantities are ordinarily plotted as a function of ω (radians/sec) or frequency, f (Hertz). The magnitude of H(jω), denoted |H(jω)|, is usually expressed in decibels (db), defined to be 20 log₁₀ |H(jω)|.
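The transfer function (2.9) and its magnitude response are easy to exercise numerically. The sketch below evaluates H(s) = E[sI - A]⁻¹B + F for a hypothetical two-node thermal ladder with unit inter-node conductance, unit capacities, and unit convection to ambient at the observed node; these values are illustrative assumptions, not a model taken from this chapter. The DC gain |H(0)| comes out as the steady-state temperature rise per unit flux, and the magnitude falls off at high frequency, the low pass behavior of direct heat transfer models noted in this chapter.

```python
def H(s, A, B, E, F):
    # 2x2 resolvent: apply (sI - A)^(-1) to B via Cramer's rule
    a = s - A[0][0]; b = -A[0][1]
    c = -A[1][0];    d = s - A[1][1]
    det = a * d - b * c
    x0 = ( d * B[0] - b * B[1]) / det
    x1 = (-c * B[0] + a * B[1]) / det
    return E[0] * x0 + E[1] * x1 + F

A = [[-1.0,  1.0],
     [ 1.0, -2.0]]    # two-lump wall; far node also convects to ambient
B = [1.0, 0.0]        # heat flux drives node 1
E = [0.0, 1.0]        # temperature of node 2 is observed
F = 0.0

print(abs(H(0j, A, B, E, F)))    # DC gain -> 1.0 (all flux leaves through the unit convection)
print(abs(H(10j, A, B, E, F)))   # much smaller: high frequencies are attenuated (low pass)
```

Cramer's rule is used only because this example has two states; for larger ladder networks a general linear solver would take its place.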
The angle of H(jω), denoted ∠H(jω), is usually expressed in degrees between -180 and +180.

One interpretation of the frequency response is that it is the sinusoidal steady state response of the system; i.e. if a unit amplitude, zero phase sinusoidal function of frequency f is applied to the input, the amplitude and phase of the resulting sinusoidal steady state response will be |H(jω)| and ∠H(jω).

Bandwidth (or cutoff frequency) and passband are two related and useful concepts. The spectra for most of the transfer functions encountered here contain a band of frequencies which allow sinusoidal signals to pass relatively uncorrupted. In the case of the direct heat transfer models, the amplitude and phase of low frequency signals pass through the system with little distortion. High frequencies are severely distorted; the amplitude response is attenuated and the phase of the output is delayed. These are characteristics of a low pass (LP) filter. The bandwidth of a low pass filter is ordinarily defined in terms of the amplitude response
The passband consists of the frequencies above the cutoff frequency. The power spectral density (PSD) of signal g(t) is defined to be the square of the amplitude response when s = jw [5]: Sgg(w) E | 020w) l and is the Fourier transform of the autocorrelation of g(t): eglrl = f: g(tlg(t + rldt The PSD describes the amplitude frequency components in a signal. It is most often used to describe the frequency components of an unknown random signal. Note that uncorrelated (white) random noise, has a PSD which is constant at all frequencies, and if passed through a linear system, becomes correlated, with PSD equal to Shh (w) [7]. TF—_.———fi 19 2.3 Discrete Time System Concepts There are a many similarities and parallels between discrete time systems and continuous time systems. .There are, however, some subtle differences which must be dealt with carefully. One particularly important issue is the relationship between continuous time systems, and the discrete time systems used to simulate them. The qualitative behavior and stability of the discrete time simulation may or may not mimic the continuous time system. Further, the behavior and stability are dependent upon the time step At. The z-transform will prove to be extremely useful when analyzing the behavior and solutions of discrete time systems. It is defined to be [8]: Flz) = Z{f(n)} = i flnlz‘" (212) also denoted: F(z) 1= f (n) The inverse transform is defined to be: f(n) = 3%}10‘ F(z)z'--‘ dz (2.13) where equation (2.13) represents a contour integral in the complex z-plane over any closed path in the region which encloses the origin. Table 2.1 contains the transform pairs frequently used throughout this thesis. Note that the transforms shown for F(z) are for the corresponding time functions, f (t), when sampled at At intervals. Similar to the Laplace transform, there are several useful z-transform theorems: 8) superposition property Z{af(n) + bg(n)} = aF(z) + bG(z) 20 Table 2.1: Useful Transform Pairs Lit.) 
Fla) M impulse 6 (t) «if 1 4:1) 1 / At 1 2 step Mt) 4:» — => .9 z — 1 ram r(t) 4:» 1 <==> At 2 I p .92 (z - 1)? e“°" => 1 z . 5 +0. z-—e(“I A" 9) time shift theorem Zlfln — m)} = 2"" F(2) 10) final value theorem - . z — 1 I'm fin) = In) — F(z) n-ooc z_, z 11) convolution theorem ylnl= i flm)u(n-m) m=-oo Y(z) = F(z)U(z) The properties above can be used to solve difference equations, and will be used in the following development to solve discrete time state equations. Discrete time state equations and observation equation have the following form: 1:(n + 1) = A a:(n) + B u(n) (2.14) y(n) = E 2(n) + Fu(n) (2.15) where :(n) is the state vector with dimension (N x 1), A is an (N x N) matrix, B is an (N x 1) matrix, and u(n) is the system input. 21 Using theorems 8) and 9) on equation (2.14), zX(z) = AX(z) + BU(z) Rearranging yields: [zI — A] X(z) = B U(z) (2-16) and solving for X (z) X(z) = [21 — A]-1 B U(z) _ (2.17) The z-transform of the observation equation is: Y(z) = E X(z) + F U(z) (2.18) Substituting (2.17) into (2.18) yields: Y(z) = (E[21 — A]"BU(z)) + F U(z) (2.19) The transfer function is defined to be the ratio of the input to the output: Y(z (2) and is interpreted to be the frequency domain representation of the time impulse v H(z) = = E[zI — A]“B + F (2.20) Q response, h(n). The convolution theorem may be applied to find the response of the system to any input, once the impulse response (or transfer function) is known: Y(z) = H(z)U(z) (2.21) 'or in time domain, y(n) = i h(m)u(n —m) (2.22) m=—oo A causal discrete time system is one which does not anticipate the input signal, and similarly a causal signal is one which is zero for negative values of its time 22 argument. Inspection of (2.22) shows that if h(n) and u(n) are causal, then y(n) must be causal. 
As will be shown in later chapters, the transfer function of a causal system has numerator order that is less than or equal to the denominator order, and an anticipatory system has numerator order which is greater than the denominator order. Similar to continuous systems, a physically realizable discrete time system must be causal. . The frequency response of discrete time system are used extensively in follow- ing chapters as a method of analysis and design tool. The frequency response of a transfer function is defined to be the magnitude and phase response of H (z) evaluated at z = e'j” 8‘ = trim/h, denoted : Uneven). [H(e'j“’m) respectively, where f, = 1 /At is the sampling frequency. It will be convenient to define dimensionless frequency variable u: u = — = — (2.23) so that e-J'” M = e-i". Note that H (e—J'") and is periodic on the interval [-1, 1], and that the magnitude response is an even function of frequency, and angle is an odd function of frequency. Because of this symmetry, the frequency response is usually plotted on the interval [0,1], expressing magnitude in (db) and phase in degrees between -180 and +180. In a way. quite similar to the continuous time case, the frequency response of H (e-J'") is interpreted as the magnitude and phase response of a discrete time system with a sampled sinusoidal input of frequency w. Bandwidth, cutoff frequency, and pass band are as defined in the continuous time cases, but now are a function of the sampling frequency as well. The discrete time power spectral density is defined to be the z-transform eval- uated along the unit circle of the autocorrelation function [8]. The autocorrelation .. 1 , '1' '0 I e. I ' “1‘ ‘1 ' A a ( I um 1 N" 9, in NW N Each)” m) '“- b} _‘ powerspectraldens‘ny: chapter .7» .. 2‘. 5111‘") = 2 mimic-W'- - are”) (".3 AnaIFL‘QB 2,59 ‘ . ‘ '~ fl. :i~.'.mJ$8QIIlIEU‘) (:r‘:,_ n]. h} _ sci selectwg -. . a Simofia gems-p: can :emrfte f"€?1i~”il“ 1“ prove-310 be :3 2'2“: '~ : 1 ' ~ . was. 
" - Kent 091113.239: -. : . 3.,» Chapter 3 Analysis of Direct Methods 3.1 Introduction The purpose of this chapter is to review some of the analytical and numerical methods which can be applied to heat transfer systems. Direct methods refer to situations in which the boundary and initial conditions are known and the inter- nal temperatures of a conducting body are sought. The importance of the direct model to this work stems primarily from the fact that inverse algorithms are de- rived from them. The direct model which serves "as the basis for developing a specific inverse algorithm will be called the truth model, for that algorithm. It has been observed that for some choices of finite difference models. the resulting inverse algorithms are numerically unstable, or exhibit poor transient behavior. Subsequent chapters deal with these difficulties and demonstrate the importance of selecting appropriate direct models. V Simple geometries yield problems in which analytical solutions for each di- mension (in time and space) can be separated and solved independently. As geometries and boundary conditions become more complex, the analytical solu- tions can become mathematically unwieldy. Numerical methods are widely used and have proved to be successful in many such cases. Extensive finite difference and finite element computer programs for these kinds of calculations are readily 24 25 available. The heart of the finite difference methods relies upon approximating space and time derivatives with differences: spatial derivatives at a point are approximated by calculating temperature differences at points finite distances away. Similarly, time derivatives are approximated by calculating temperature differences for finite time increments. This approach leads to a set of difference equations which can, in principle, be solved simultaneously for each temperature node, and each time step. 
The memory requirements for these discrete time and space models grows rapidly as the number of nodes increases, and the execution time increases as the time increment decreases. In addition to these antagonistic difficulties with accuracy and computational overhead is a problem associated the numerical stability of the solution. This question is also intimately related to the choices of temperature node spacing and time step. The routine assumption is that the finer the spacing, and the smaller the time step, the more accurate and stable the numerical solution will be. While this appears to be the case for direct numerical solutions, it is shown in Chapter 5 that it is not the case for many inverse situations. A related approach which has not received much attention in the heat transfer literature produces ordinary differential equations in time. By approximating space derivatives only, a lumped parameter thermal network model can be formulated which leads to coupled first-order differential equations. These models for heat transfer will be referred to as discrete space models. Continuous time solutions to these types of equations have been the subject of considerable study by circuit theorists. While the thermal resistor,and thermal capacitor concepts are familiar to heat transfer investigators, little work has been done which makes use of the circuit and system theory results. 26 Some results are presented which show the relationship between the discrete time — discrete space models, and the discrete space models. In addition, it is shown that for certain types of boundary conditions, the discrete models solution produces no error: i.e., the solution of the discrete model will be exactly the same as the continuous time model evaluated at the temperature nodes (and the time steps if the model is also discrete time). Throughout, the example used for the development of these analyses is a planar wall with unit dimensions and properties, described below. 
An insulated boundary condition exists at one surface and an applied heat flux condition at the other. The initial temperature of the conducting body is zero, and the temperatures are observed at the insulated surface.

    ∂²θ/∂x² = ∂θ/∂t

    θ(x,0) = 0

    −∂θ/∂x |x=0 = f(t),    ∂θ/∂x |x=1 = 0

with observations,

    θ(1, t_i) = y_i

and estimates,

    q̂(t_i) = −∂θ(x,t_i)/∂x |x=0

3.2 Analytical Methods

Many direct problems with simple geometries and boundary conditions can be solved using classical boundary value problem techniques. Linear forms of the transient heat equation with heat flux and/or temperature boundary conditions known at all points on the surface, used in connection with the initial conditions of the conducting body, lead to partial differential equations with separable solutions. The resulting ordinary differential equations establish eigenconditions which lead to a countably infinite set of eigenvalues and eigenfunctions. This collection of equations fits a fairly general form, referred to in the literature as Sturm - Liouville type problems [9].

The principle of superposition and the orthogonality of the eigenfunctions are used to formulate a series solution to the problem. The coefficients of the individual terms are determined by integrating the product of the initial condition function and the corresponding eigenfunction, over the orthogonal interval. With proper scaling, the orthogonal eigenfunction set can be shown to be a complete orthonormal set (CON) and the solution is in the form of a Fourier series [9].

Extensive work with separation of variables and related techniques has produced solutions to many important geometries, boundary conditions, and initial conditions. Several excellent catalogs of solutions have been compiled [10], [11]. Only the results of this analysis for the example problem will be reported below. For the example at hand, refer to Figure 3.1.
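The separable solution quoted in Figure 3.1 can be spot-checked numerically. The sketch below assumes the unit step response of the insulated wall, s(x,t) = t + (1/3 − x + x²/2) − (2/π²) Σ (1/n²) e^(−n²π²t) cos(nπx); the function name and the truncation limit are illustrative choices, not from the text.

```python
import math

def step_response(x, t, n_max=200):
    """Truncated Fourier series for the step response s(x,t) of the
    insulated wall with a unit step heat flux applied at x = 0."""
    s = t + (1.0/3.0 - x + x*x/2.0)
    for n in range(1, n_max + 1):
        s -= (2.0/math.pi**2) * math.exp(-n*n*math.pi**2*t) \
             * math.cos(n*math.pi*x) / n**2
    return s
```

At t = 0 the polynomial term cancels the series exactly, so the truncated sum returns values near zero everywhere; for t of order one the exponentials have died out and only t + (1/3 − x + x²/2) survives, giving roughly t − 1/6 at the insulated face x = 1.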
The solution of the insulated wall problem to a unit step heat flux is denoted in Figure 3.1 as s(x,t) (pg. 106 [11]). The impulse response or Green's function for an impulse heat flux applied at x = 0, denoted h(x,t), is found by differentiating the step response with respect to time. In a similar fashion, the ramp response of this system, denoted r(x,t), can be found by integrating the step response with respect to time.

Figure 3.1: Insulated Wall with Surface Heat Flux. The responses shown in the figure are:

    f(t) = δ(t), unit impulse:
    h(x,t) = 1 + 2 Σ(n=1..∞) e^(−n²π²t) cos(nπx)

    f(t) = μ(t), unit step:
    s(x,t) = t + (1/3 − x + x²/2) − (2/π²) Σ(n=1..∞) (1/n²) e^(−n²π²t) cos(nπx)

    f(t) = r(t), unit ramp:
    r(x,t) = t²/2 + (1/3 − x + x²/2) t + (2/π⁴) Σ(n=1..∞) (1/n⁴) (e^(−n²π²t) − 1) cos(nπx)

3.3 Numerical Methods

Numerical method solutions to the direct problem are used routinely, particularly in situations where the boundaries are complicated shapes. Many useful numerical solutions depend upon breaking the conducting body up into small isothermal cells. Lumped parameter thermal resistor and capacitor analogies are developed as a result of this procedure. An energy balance between a cell and its neighbors leads to a set of equations which can be solved using various ordinary differential equation techniques, in continuous time.
These continuous time - discrete space (CTDS) models, referred to as ladder networks for their resemblance to electrical circuits of the same name, are developed in Section 3.3.1. Continuous time solutions to the R-C ladder networks can require considerable computer resources if the number of nodes exceeds 20. This approach does, however, provide an analytical basis to study the behavior of some inverse solutions.

The finite difference (FD) methods presented in Section 3.3.2 have the advantage of being described by rather simple algebraic relationships which can be described with sparse matrix equations. Solution of these simultaneous algebraic equations can be obtained without matrix inversions. The sparse matrices are tridiagonal for the examples worked here, and require only 3N memory locations as compared to the N² locations required by other techniques. This makes the finite difference methods a powerful tool for solving direct problems, since a large number of nodes can be dealt with, even with modest computing facilities (8000 nodes is typical). Since space and time are discretized using this approach, they will be referred to as discrete time - discrete space (DTDS) models. As will be seen, they are not, in general, good models to employ for deriving inverse algorithms. The mathematical relationship between the CTDS and DTDS models is developed in Section 3.3.3, based on spatial and time averaging parameters.

Section 3.4 presents some interesting results, in which DTDS models are developed that have no error associated with them when compared to the analytical solution, sampled in space and time. If the boundary condition variation with time can be accurately modeled as a series of impulses, steps, or ramps, then the temperature responses inside the body can be calculated from this discretized model with perfect accuracy.
This development is the logical evolution of the impulse, step and ramp invariant methods presented in Section 3.3.1, applied to the Fourier series solutions.

3.3.1 Ladder Networks

The approach taken here will be to first estimate spatial derivatives by creating cells within the conducting body which have lumped parameter characteristics. Interpretation of the resulting equations leads to a thermal resistor and capacitor analogy. An excellent discussion of this type of analogy also appears in [12] and [3]. The resulting R-C ladder networks can be solved in continuous time, or further approximations can be used to find solutions in discrete time.

Figure 3.2 depicts an internal section of a one dimensional conducting body which has been broken up into cells. The thermal energy stored in each cell can be approximated by:

    e_n = ρ c_p Δx Δy Δz θ_n    (3.1)

Since the change in energy with respect to time can only be due to heat conducted in through the right face, or out through the left, the following energy balance can be constructed:

    de_n/dt = Q_n − Q_(n+1)    (3.2)

Defining:

    ΔA = Δy Δz    and    ΔV = Δx Δy Δz

and substituting (3.1) into (3.2) yields:

    ρ c_p ΔV dθ_n/dt = Q_n − Q_(n+1)    (3.3)

with thermal capacitor:

    c = ρ c_p ΔV    (3.4)

Thus

    c dθ_n/dt = Q_n − Q_(n+1)    (3.5)

Using Fourier's equation

    Q = −k ΔA ∂θ/∂x    (3.6)

and approximating the spatial derivative with differences, heat transfer rates Q_n and Q_(n+1) become:

    Q_n = (k ΔA/Δx) [θ_(n−1) − θ_n]    (3.7)

    Q_(n+1) = (k ΔA/Δx) [θ_n − θ_(n+1)]    (3.8)

But, these equations describe the constitutive relationship for a thermal resistance with:

    r = Δx/(k ΔA)    (3.9)

or in conductance form:

    g = (k ΔA)/Δx    (3.10)

Thus

    r Q_n = θ_(n−1) − θ_n    (3.11)

and

    r Q_(n+1) = θ_n − θ_(n+1)    (3.12)

This resistance - capacitance analogy leads to the R-C ladder network shown in Figure 3.3. Substituting equations (3.7), (3.8) and (3.10) into (3.5) yields:

    c dθ_n/dt = g [θ_(n+1) − 2θ_n + θ_(n−1)]    (3.13)

which will be used in the state equation formulations developed shortly.
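The node equations (3.13) for a chain of identical cells can be collected into matrix form. A minimal sketch, with N, g and c illustrative values and the two end cells simply left insulated:

```python
import numpy as np

# Assemble the ladder equations (3.13), c*dθ_n/dt = g*(θ_{n+1} - 2θ_n + θ_{n-1}),
# into dθ/dt = A θ.  End cells exchange heat with one neighbor only (insulated ends).
def ladder_matrix(N, g=1.0, c=1.0):
    A = np.zeros((N, N))
    for n in range(N):
        if n > 0:                 # conduction from the left neighbor
            A[n, n - 1] += g
            A[n, n] -= g
        if n < N - 1:             # conduction from the right neighbor
            A[n, n + 1] += g
            A[n, n] -= g
    return A / c

A = ladder_matrix(5)
# A uniform temperature field is an equilibrium: every row of A sums to zero.
print(np.allclose(A @ np.ones(5), 0.0))   # True
```

The zero row sums express that conduction only redistributes energy between cells; with no sources and insulated ends the total stored energy is constant.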
Figure 3.2: Lumped Parameter Approximation (internal cell with stored energy e_n = ρ c_p Δx ΔA θ_n and conduction rates Q_n, Q_(n+1))

Figure 3.3: Ladder Network Approximation (nodes θ_(n−1), θ_n, θ_(n+1) joined by conductances g, with capacitances c to the reference; r = Δx/(k ΔA), c = ρ c_p ΔA Δx)

Table 3.1: Boundary Condition Parameters

Boundary nodes are generally handled in one of two ways: a) with the temperature nodes on the boundary surface, and b) with internal boundary nodes. Figures 3.4 and 3.5 show these cases and their circuit equivalents. The energy balance for the surface node is:

    de₁/dt = q_s + q_c − q₂    (3.14)

where:

    de₁/dt = ρ c_p ΔV dθ₁/dt    (3.15)

and

    q_c = h ΔA (θ_e − θ₁)    (3.16)

    q₂ = (k ΔA/Δx) (θ₁ − θ₂)    (3.17)

Then

    ρ c_p ΔV dθ₁/dt = −(k ΔA/Δx)(θ₁ − θ₂) + h ΔA (θ_e − θ₁) + q_s    (3.18)

or

    c dθ₁/dt = −(1/r)(θ₁ − θ₂) + (1/r_h)(θ_e − θ₁) + q_s    (3.19)

where

    r_h = 1/(h ΔA)    (3.20)

The form of the energy balance and the resulting equations for Figure 3.5 are exactly the same as those for Figure 3.4 except that the capacitor value is half. Special cases for boundary conditions are listed in Table 3.1. Note that when r_h = 0, the surface node temperature will be equal to the known ambient temperature θ_e.

Figure 3.4: Insulated Wall Example Ladder Network

Figure 3.5: Insulated Wall Example Ladder Network (alternate form)

Figure 3.6: Generalized Internal Node

Figure 3.7: Generalized Boundary Node

A generalized internal node is depicted in Figure 3.6. The energy balance for this node is:

    de_m/dt = q_m − q_(m+1) + q_cm + q_sm    (3.21)

For constant property systems, internal nodes become:

    c_m dθ_m/dt = (θ_(m−1) − θ_m)/r_m + (θ_(m+1) − θ_m)/r_(m+1) + (θ_em − θ_m)/r_hm + q_sm    (3.22)

or

    c_m dθ_m/dt = g_m θ_(m−1) + (−g_m − g_(m+1) − g_hm) θ_m + g_(m+1) θ_(m+1) + g_hm θ_em + q_sm    (3.23)

A generalized boundary node is depicted in Figure 3.7.
The energy balance for this node is:

    de₁/dt = −q₂ + q_c1 + q_s1    (3.24)

For constant property systems:

    c₁ dθ₁/dt = −(θ₁ − θ₂)/r₂ + (θ_e1 − θ₁)/r_h1 + q_s1    (3.25)

or

    c₁ dθ₁/dt = (−g₂ − g_h1) θ₁ + g₂ θ₂ + g_h1 θ_e1 + q_s1    (3.26)

The resulting equations can be expressed in matrix form:

    C ẋ(t) = G x(t) + P u(t)    (3.27)

where

    ẋ(t) = d x(t)/dt    (3.28)

and unknown node temperatures:

    x(t) = [θ₁(t), ..., θ_N(t)]ᵀ    (3.29)

and known source functions:

    u(t) = [q_s1(t), ..., q_sN(t), θ_e1(t), ..., θ_eN(t)]ᵀ

with

    C = diag(c₁, c₂, ..., c_N)    (3.30)

    G_h = diag(g_h1, g_h2, ..., g_hN)    (3.31)

    G_c = [ −g₂     g₂      0      ...   0
             g₂    −g₂−g₃   g₃     ...   0
             0      g₃     −g₃−g₄  ...   0
             ...
             0      ...     0      g_N  −g_N ]    (3.32)

    G = G_c − G_h    (3.33)

    P = [ P_qs | P_θe ]    (3.34)

where P_qs = I and P_θe = G_h. Note that the structure of the G matrix shown is specialized for the one dimensional ladder shown. Other geometries lead to different forms for the G matrix, but the following development is valid for any generalized form of G. Equation (3.27) can be rearranged to state equation form:

    ẋ(t) = A x(t) + B u(t)    (3.35)

where:

    A = C⁻¹ G    (3.36)

and

    B = C⁻¹ P    (3.37)

Continuous Time Solutions

Equation (3.35) can be solved using Laplace transform techniques [13]. Note that the functions in the frequency domain (i.e. those that are functions of complex frequency variables) are, in this development, denoted with upper case, with frequency variable s argument. Taking the transform of equation (3.35) yields:

    s X(s) − x(0) = A X(s) + B U(s)    (3.38)

Then rearranging

    [sI − A] X(s) = B U(s) + x(0)    (3.39)

and solving

    X(s) = [sI − A]⁻¹ [B U(s) + x(0)]    (3.40)

where X(s) is a vector which contains the transform of each temperature response in the network. This solution can be broken up into two responses. One response, called the zero state response (ZSR), is due to the input U(s) only. Similarly, the other response is due to the initial state x(0) and is called the zero input response (ZIR).

    X_ZIR(s) = [sI − A]⁻¹ x(0)    (3.41)

    X_ZSR(s) = [sI − A]⁻¹ B U(s)    (3.42)

Either solution might be the object of an IHCP investigation.
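As an illustration of the zero input response (3.41), the sketch below uses an N = 4 ladder with r = c = 1, values chosen purely for illustration; the half end capacitors follow the alternate form of Figure 3.5.

```python
import numpy as np

# Zero input response x(t) = e^(At) x(0) for an N = 4 insulated ladder.
# A = C^-1 G, equation (3.36); r = c = 1 are illustrative values.
C = np.diag([0.5, 1.0, 1.0, 0.5])            # half capacitors at the ends
G = np.array([[-1.,  1.,  0.,  0.],
              [ 1., -2.,  1.,  0.],
              [ 0.,  1., -2.,  1.],
              [ 0.,  0.,  1., -1.]])
A = np.linalg.inv(C) @ G

def zir(x0, t):
    """e^(At) x0 computed from the eigendecomposition of A."""
    vals, vecs = np.linalg.eig(A)
    return (vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs) @ x0).real

x_inf = zir(np.array([1.0, 0.0, 0.0, 0.0]), t=100.0)
# Both ends are insulated, so the stored energy sum(c_i θ_i) is conserved
# and the field relaxes to the uniform value 0.5/(0.5 + 1 + 1 + 0.5) = 1/6.
print(np.round(x_inf, 6))
```

The relaxation to a uniform field reflects the single zero eigenvalue of A; all other eigenvalues are real and negative, which is established for the transfer function poles below.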
Subsequent development will assume that the system is initially relaxed, i.e. x(0) = 0, since the primary objective in later chapters is to develop methods for estimating the boundary conditions. It is certainly possible to extend many of these methods in a way which allows estimates of the initial state to be included. Response X(s) is now understood to be X_ZSR(s).

State equations are usually accompanied by an output or observation equation:

    y(t) = E x(t) + F u(t)    (3.43)

where y(t) is a scalar which represents the observed variable. This observation equation can be used to select a single temperature for observation. For instance, if the temperature at the right hand wall is sought, the observation equation becomes:

    y(t) = [0 0 0 ... 1] x(t)    (3.44)

The Laplace transform of the observation equation is:

    Y(s) = E X(s) + F U(s)    (3.45)

Substituting (3.42) yields:

    Y(s) = (E [sI − A]⁻¹ B + F) U(s)    (3.46)

For single source systems the transfer function of the system can be defined as the ratio of the output variable to the input variable:

    H(s) = Y(s)/U(s) = E [sI − A]⁻¹ B + F    (3.47)

The transfer function has some rather interesting properties. Once the transfer function is known, the response of the system can be found for any input whose transform exists:

    Y(s) = H(s) U(s)    (3.48)

In addition, the time function corresponding to H(s) is the impulse response of the system. The heat transfer literature more commonly calls this the Green's function. Thus, the behavior of the system can be completely described if the impulse response is known. For ladder network systems of this type, H(s) is a ratio of two polynomials in s:

    H(s) = ( Σ(i=0..M) c_i sⁱ ) / ( Σ(i=0..N) d_i sⁱ )    (3.49)

The order of the denominator, N, is the number of sections (nodes) in the model, and the order of the numerator, M, is less than or equal to N since the system is causal [7]. The denominator is the characteristic polynomial and is the determinant of [sI − A].
This term appears as a result of the matrix inversion operation in equation (3.47). Further,

    det[sI − A] = Σ(i=0..N) d_i sⁱ = 0    (3.50)

is the characteristic equation (CE) of the system, and its roots determine the qualitative behavior of the system: real roots lead to exponential solutions, complex roots lead to sinusoidal solutions in exponential envelopes, roots with positive real parts lead to solutions which grow with time, and roots with negative real parts lead to solutions which decay with time. Roots of the CE are called the poles of the system and are singular points of the transfer function when evaluated in the complex plane. Similarly, the roots of the numerator polynomial are called zeros and are the complex values of s at which the numerator of H(s) vanishes.

As will be shown in later developments, the poles of the ladder networks under consideration will always be real, negative, and distinct. By factoring equation (3.49) the system can be broken up into simple first order systems in cascade, represented by

    H(s) = Π(i=1..N) (k_i + b_i s)/(s + a_i)    (3.51)

where the a_i are the negatives of the poles, and the k_i and b_i result from factoring the numerator. Note that −k_i/b_i are the zeros. Referring to Figure 3.5 and applying this analysis to our example yields state equations of the form:

          [ θ₁(t) ]          [ −2   2   0  ...  0 ] [ θ₁(t) ]         [ 1 ]
      d   [ θ₂(t) ]     1    [  1  −2   1  ...  0 ] [ θ₂(t) ]     2   [ 0 ]
      --  [ θ₃(t) ]  =  --   [  0   1  −2  ...  0 ] [ θ₃(t) ]  +  -   [ 0 ]  q_s    (3.52)
      dt  [  ...  ]     rc   [ ...               ] [  ...  ]     c   [...]
          [ θ_N(t)]          [  0  ...  0   2  −2 ] [ θ_N(t)]         [ 0 ]

For temperature observations taken at the insulated surface, the output equation becomes:

    y(t) = [0 0 0 ... 1] x(t)    (3.53)

The following useful matrix forms are defined:

    V = (r c) A    (3.54)

    P = (c/2) B    (3.55)

The transfer functions resulting from the solution of equation (3.52) for various values of N are presented in Table 3.2. When the transfer function is factored into first order sections with the poles shown in Table 3.3, it can be thought of as the cascade system shown in Figure 3.8.
An alternate formulation, that will prove to be useful in later developments, results from forming a partial fraction expansion (PFE) from the factored form of H(s) and determining values for the r_i's:

    H(s) = Π(i=1..N) (k_i + b_i s)/(s + a_i) = Σ(i=1..N) r_i/(s + a_i)    (3.56)

Shinners [14] presents an excellent discussion of PFE methods. A block diagram depicting the structure represented by the right side of (3.56) appears in Figure 3.9. Table 3.4 contains the values of r_i (residues) for the example. One advantage to the form of equation (3.56) is that the inverse transform can be found by inspection:

    h(t) = Σ(i=1..N) r_i e^(−a_i t)    (3.57)

Table 3.2: Continuous Time Transfer Function Parameters

    rc = ρ c_p Δx²/k ;  for the dimensionless case rc = 1/(N−1)²

    N    c₀(rc)^(N−1)   Resistance   Capacity
    2      4            1.000000     1.000000
    3      8            0.500000     0.500000
    4     12            0.333333     0.333333
    5     16            0.250000     0.250000
    6     20            0.200000     0.200000
    7     24            0.166666     0.166666
    8     28            0.142857     0.142857
    9     32            0.125000     0.125000
    10    36            0.111111     0.111111

    Denominator coefficients d_i(rc)^(N−i):

    N\i   0    1     2     3     4     5     6    7    8   9  10
    2     0    4     1
    3     0    8     6     1
    4     0   12    19     8     1
    5     0   16    44    34    10     1
    6     0   20    85   104    53    12     1
    7     0   24   146   259   200    76    14    1
    8     0   28   231   560   606   340   103   16    1
    9     0   32   344  1092  1572  1210   532  134   18   1
    10    0   36   489  1968  3630  3652  2171  784  136  20   1

Table 3.3: Continuous Time Transfer Function Poles

    Characteristic equation roots −a_n(rc):

    N = 2:   0.000  −4.000
    N = 3:   0.000  −2.000  −4.000
    N = 4:   0.000  −1.000  −3.000  −4.000
    N = 5:   0.000  −.5858  −2.000  −3.414  −4.000
    N = 6:   0.000  −.3820  −1.382  −2.618  −3.618  −4.000
    N = 7:   0.000  −.2679  −1.000  −2.000  −3.000  −3.732  −4.000
    N = 8:   0.000  −.1981  −.7530  −1.555  −2.445  −3.247  −3.802  −4.000
    N = 9:   0.000  −.1522  −.5858  −1.235  −2.000  −2.765  −3.414  −3.848  −4.000
    N = 10:  0.000  −.1206  −.4679  −1.000  −1.653  −2.347  −3.000  −3.532  −3.879  −4.000

Table 3.4: Continuous Time Transfer Function Residues

    N = 2:   1.0  −1.0
    N = 3:   1.0  −2.0   1.0
    N = 4:   1.0  −2.0   2.0  −1.0
    N = 5:   1.0  −2.0   2.0  −2.0   1.0
    N = 6:   1.0  −2.0   2.0  −2.0   2.0  −1.0
    N = 7:   1.0  −2.0   2.0  −2.0   2.0  −2.0   1.0
    N = 8:   1.0  −2.0   2.0  −2.0   2.0  −2.0   2.0  −1.0
    N = 9:   1.0  −2.0   2.0  −2.0   2.0  −2.0   2.0  −2.0   1.0
    N = 10:  1.0  −2.0   2.0  −2.0   2.0  −2.0   2.0  −2.0   2.0  −1.0
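The N = 4 rows of Tables 3.2 - 3.4 can be cross-checked numerically. In the sketch below, V is the rc-scaled state matrix of (3.54), and 12/(s(s+1)(s+3)(s+4)) is the N = 4 transfer function implied by Table 3.2; the cover-up evaluation of the residues is a standard PFE shortcut used here for convenience, not necessarily the method of [14].

```python
import numpy as np

# V = (rc)A for the N = 4 alternate-form ladder, equation (3.54).
V = np.array([[-2.,  2.,  0.,  0.],
              [ 1., -2.,  1.,  0.],
              [ 0.,  1., -2.,  1.],
              [ 0.,  0.,  2., -2.]])
poles = np.sort(np.linalg.eigvals(V).real)
print(np.round(poles, 6))          # -4, -3, -1, 0 (Table 3.3, N = 4)

# Cover-up partial fraction expansion of H(s) = 12/(s(s+1)(s+3)(s+4)):
# residue at s = -a_i is 12 divided by the product of (a_j - a_i), j != i.
a = [0.0, 1.0, 3.0, 4.0]                       # negatives of the poles
residues = [12.0 / np.prod([aj - ai for aj in a if aj != ai]) for ai in a]
print(np.round(residues, 6))       # 1, -2, 2, -1 (Table 3.4, N = 4)
```

The residues alternate in sign and sum to zero, which is why h(0) = 0: the surface flux takes finite time to reach the observed insulated face.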
Figure 3.8: Cascade System Representation (first order sections in series between the input q(t) and the output θ(t))

Figure 3.9: Parallel System Representation (first order sections r_i/(s + a_i) in parallel, their outputs summed, between q(t) and θ(t))

Invariant Methods

Once the continuous time transfer function and the associated impulse response have been found, there are several ways to convert them to discrete time. Which method is best hinges largely on the application and what is required from the discrete time model. Discussion of the attributes of the various methods is widely available [15], [8], [16], and will not be considered here, except to the extent that it affects inverse solutions.

Impulse, step, and ramp invariant methods are appealing since they produce solutions which are identical to a sampled version of the continuous time solution. This property insures that the continuous time model and the discrete time model will have the same qualitative behavior (i.e. overdamped, stable, unstable, etc.). This desirable property is achieved when some prior knowledge regarding the input is available; specifically, whether the input can be adequately represented by a series of impulses, steps, or ramps [17]. Also, the resulting discrete time transfer functions are aliased versions of continuous time transfer functions, and as a result distort the frequency response somewhat, particularly at frequencies approaching the sample rate. Using these methods to convert continuous time transfer functions to discrete time transfer functions, from a computational standpoint, can be quite cumbersome, and at the very least, tedious.

Results will be presented for impulse, step and ramp invariant methods. While these methods can easily be extended to higher order approximations, these three are presented as representative, and useful in many practical situations. An excellent, in depth, development of these methods appears in reference [17].
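A single first-order section illustrates what invariance means in practice. The sketch below assumes the step invariant conversion derived below as (3.72), H_siv(z) = (1/a)(1 − e^(−aΔt))/(z − e^(−aΔt)) for H(s) = 1/(s + a); the values of a and Δt are arbitrary illustrative choices.

```python
import math

# Step invariant simulation of H(s) = 1/(s + a):
# H_siv(z) = b0/(z - p) with p = exp(-a*dt) and b0 = (1 - p)/a gives
# the difference equation y[n] = p*y[n-1] + b0*u[n-1].
a, dt = 2.0, 0.1
p = math.exp(-a * dt)
b0 = (1.0 - p) / a

y = 0.0                               # relaxed initial state
for n in range(1, 51):
    y = p * y + b0 * 1.0              # unit step input
    y_exact = (1.0 - math.exp(-a * n * dt)) / a   # sampled continuous response
    assert abs(y - y_exact) < 1e-9    # invariance: no error at sample times
print("step invariant response matches at all 50 sample times")
```

For a step input the error signal e(n) of Figure 3.10 is identically zero, which is exactly the invariance property claimed above; for any other input shape the match is only approximate.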
Keep in mind that the continuous time transfer function H(s) and the discrete time transfer function H(z) are different functions.

Figure 3.10: Invariant Simulation of a Continuous Time System (u(t) drives H(s), whose output y_c(t) is sampled at interval Δt; the sampled input drives H(z); the difference of the two discrete time outputs is the error signal e(n))

Note that in the following discussion, operator notation will be used:

    Z{f(n)}   - z transform of discrete time function f(n)
    Z⁻¹{F(z)} - inverse z transform of frequency function F(z)
    L{f(t)}   - Laplace transform of time function f(t)
    L⁻¹{F(s)} - inverse Laplace transform of frequency function F(s)

Invariant methods endeavor to select H(z), the discrete time transfer function, in a way that the response of H(z) is the sampled response of H(s). Figure 3.10 depicts this approach in block diagram form. Error signal e(n) is a discrete time signal which is the difference between the value of the continuous time response at sample times, and the discrete time system response. By selecting the appropriate H(z) given the functional form of the input, it can be shown that this error signal is identically zero.

If the switches in Figure 3.10 close for an infinitesimal period of time at Δt intervals, the resulting discrete time signals (u(n) and y_c(n)) are samples of their continuous time counterparts (u(t) and y(t)). This operation is equivalent to multiplying the continuous time signal by the sample function:

    ∇(t/Δt) = Σ(i=−∞..∞) δ(t − iΔt)    (3.58)

where:

    δ(t) is the Dirac delta
    f_s is the sampling frequency
    Δt is the sampling interval, Δt = 1/f_s
Since the error signal is zero, E(z) must also be zero:

    E(z) = Y_s(z) − Y_d(z) = 0    (3.59)

Concentrating on the upper portion of the figure,

    Y_s(z) = Z{∇(t/Δt) y_c(t)}    (3.60)

and

    y_c(t) = L⁻¹{H(s) U(s)}    (3.61)

or, combining (3.60) and (3.61)

    Y_s(z) = Z{∇(t/Δt) L⁻¹{H(s) U(s)}}    (3.62)

Using the lower portion of Figure 3.10

    Y_d(z) = H(z) Z{∇(t/Δt) u(t)}    (3.63)

Equating (3.62) and (3.63) then rearranging yields:

    H(z) = Z{∇(t/Δt) L⁻¹{H(s) U(s)}} / Z{∇(t/Δt) u(t)}    (3.64)

Since H(s) is the system we are attempting to simulate, it is assumed to be known. As stated earlier and shown here, the discrete time transfer function H(z) depends explicitly on the form of the input, u(t). Using (3.64) when u(t) = δ(t) leads to the impulse invariant form of H(z):

    H_iiv(z) = Δt Z{∇(t/Δt) L⁻¹{H(s)}}    (3.65)

Using (3.64) when u(t) = μ(t), where μ(t) is defined to be a unit step function, leads to the step invariant form of H(z):

    H_siv(z) = Z{∇(t/Δt) L⁻¹{H(s)/s}} / [z/(z−1)]    (3.66)

Using (3.64) when u(t) = r(t), where r(t) is defined to be a unit ramp function, leads to the ramp invariant form of H(z):

    H_riv(z) = Z{∇(t/Δt) L⁻¹{H(s)/s²}} / [Δt z/(z−1)²]    (3.67)

As shown in the previous section, signals which decay exponentially are used to describe the impulse response of the ladder networks used in the examples. These signals have Laplace transform:

    H(s) = 1/(s + a)  ⟷  h(t) = e^(−at) μ(t)    (3.68)

then the impulse invariant form becomes:

    H_iiv(z) = Δt Z{∇(t/Δt) e^(−at) μ(t)}

or

    H_iiv(z) = Δt z/(z − e^(−aΔt))    (3.69)

and,

    lim(a→0) H_iiv(z) = Δt z/(z − 1)    (3.70)

The step invariant form becomes:

    H_siv(z) = Z{∇(t/Δt) L⁻¹{1/(s(s+a))}} / [z/(z−1)]    (3.71)

Using a PFE on the inverse Laplace transform term yields:

    H_siv(z) = Z{∇(t/Δt) (1/a)(1 − e^(−at)) μ(t)} (z−1)/z

or

    H_siv(z) = (1/a)(1 − e^(−aΔt))/(z − e^(−aΔt))    (3.72)

and,

    lim(a→0) H_siv(z) = Δt/(z − 1)    (3.73)

and in similar fashion:

    H_riv(z) = [(aΔt + e^(−aΔt) − 1) z − (aΔt e^(−aΔt) + e^(−aΔt) − 1)] / [a²Δt (z − e^(−aΔt))]    (3.74)

and,
    lim(a→0) H_riv(z) = (Δt/2)(z + 1)/(z − 1)    (3.75)

Using the transfer function form of the RHS of (3.56), each first order section can be converted separately using (3.69) through (3.75), and added to yield the invariant transfer function for the system. The results of this procedure are presented in Tables 3.5 through 3.7, for several choices of time step. These transfer functions will be useful in later derivations of inverse algorithms.

Table 3.5: Impulse Invariant Transfer Function

    Δt = 0.001:
    H_iiv(z) = 0.143202E-8 (z³ + 3.92872z² + 0.964640z) / (z⁴ − 3.92904z³ + 5.78862z² − 3.79011z + 0.930531)
             = 0.143202E-8 z(z + 0.263163)(z + 3.66556) / [(z − 0.964640)(z − 0.973361)(z − 0.991040)(z − 1.00000)]

    Δt = 0.01:
    H_iiv(z) = 0.122029E-4 (z³ + 3.34785z² + 0.697676z) / (z⁴ − 3.37499z³ + 4.24288z² − 2.35465z + 0.486752)
             = 0.122029E-4 z(z + 0.223288)(z + 3.12456) / [(z − 0.697676)(z − 0.763379)(z − 0.913931)(z − 1.00000)]

    Δt = 0.1:
    H_iiv(z) = 0.293948E-1 (z³ + 0.804373z² + 0.273237E-1 z) / (z⁴ − 1.50110z³ + 0.541368z² − 0.410156E-1 z + 0.746586E-3)
             = 0.293948E-1 z(z + 0.355392E-1)(z + 0.768834) / [(z − 0.273237E-1)(z − 0.672055E-1)(z − 0.406570)(z − 1.00000)]

    Δt = 1.0:
    H_iiv(z) = 0.999753 (z³ + 0.123440E-3 z² + 0.231952E-15 z) / (z⁴ − 1.00012z³ + 0.123410E-3 z² − 0.231981E-15 z + 0.538019E-31)
             = 0.999753 z(z + 0.187907E-11)(z + 0.123440E-3) / [(z − 0.231952E-15)(z − 0.187953E-11)(z − 0.123410E-3)(z − 1.00000)]

Table 3.6: Step Invariant Transfer Function

    Δt = 0.001:
    H_siv(z) = 0.359295E-9 (z³ + 10.8431z² + 10.6880z + 0.955772) / (z⁴ − 3.92904z³ + 5.78862z² − 3.79011z + 0.930531)
             = 0.359295E-9 (z + 0.995573E-1)(z + 0.985703)(z + 9.75777) / [(z − 0.964640)(z − 0.973361)(z − 0.991040)(z − 1.00000)]

    Δt = 0.01:
    H_siv(z) = 0.316176E-5 (z³ + 9.55251z² + 8.27166z + 0.649222) / (z⁴ − 3.37499z³ + 4.24288z² − 2.35465z + 0.486752)
             = 0.316176E-5 (z + 0.871858E-1)(z + 0.865922)(z + 8.59940) / [(z − 0.697676)(z − 0.763379)(z − 0.913931)(z − 1.00000)]

    Δt = 0.1:
    H_siv(z) = 0.102037E-1 (z³ + 3.42747z² + 0.835737z + 0.135411E-1) / (z⁴ − 1.50110z³ + 0.541368z² − 0.410156E-1 z + 0.746586E-3)
             = 0.102037E-1 (z + 0.174442E-1)(z + 0.245281)(z + 3.16475) / [(z − 0.273237E-1)(z − 0.672055E-1)(z − 0.406570)(z − 1.00000)]

    Δt = 1.0:
    H_siv(z) = 0.824101 (z³ + 0.213286z² + 0.693290E-5 z + 0.781578E-17) / (z⁴ − 1.00012z³ + 0.123410E-3 z² − 0.231981E-15 z + 0.538019E-31)
             = 0.824101 (z + 0.112735E-11)(z + 0.325101E-11)(z + 0.213540) / [(z − 0.231952E-15)(z − 0.187953E-11)(z − 0.123410E-3)(z − 1.00000)]

Table 3.7: Ramp Invariant Transfer Function

    Δt = 0.001:
    H_riv(z) = 0.720317E-10 (z⁴ + 25.6906z³ + 64.4375z² + 25.0813z + 0.953132) / (z⁴ − 3.92904z³ + 5.78862z² − 3.79011z + 0.930531)
             = 0.720317E-10 (z + 0.425809E-1)(z + 0.425432)(z + 2.29481)(z + 22.9278) / [(z − 0.964640)(z − 0.973361)(z − 0.991040)(z − 1.00000)]

    Δt = 0.01:
    H_riv(z) = 0.647145E-6 (z⁴ + 23.1302z³ + 52.1417z² + 18.1962z + 0.618798) / (z⁴ − 3.37499z³ + 4.24288z² − 2.35465z + 0.486752)
             = 0.647145E-6 (z + 0.380953E-1)(z + 0.381274)(z + 2.06335)(z + 20.6475) / [(z − 0.697676)(z − 0.763379)(z − 0.913931)(z − 1.00000)]

    Δt = 0.1:
    H_riv(z) = 0.251423E-2 (z⁴ + 10.2791z³ + 9.12955z² + 0.997981z + 0.842114E-2) / (z⁴ − 1.50110z³ + 0.541368z² − 0.410156E-1 z + 0.746586E-3)
             = 0.251423E-2 (z + 0.920534E-2)(z + 0.116505)(z + 0.843409)(z + 9.30995) / [(z − 0.273237E-1)(z − 0.672055E-1)(z − 0.406570)(z − 1.00000)]

    Δt = 1.0:
    H_riv(z) = 0.346791 (z⁴ + 1.81797z³ + 0.652628E-1 z² + 0.701718E-6 z + 0.515865E-18) / (z⁴ − 1.00012z³ + 0.123410E-3 z² − 0.231981E-15 z + 0.538019E-31)
             = 0.346791 (z + 0.735140E-12)(z + 0.107555E-4)(z + 0.366260E-1)(z + 1.78130) / [(z − 0.231952E-15)(z − 0.187953E-11)(z − 0.123410E-3)(z − 1.00000)]

3.3.2 Finite Difference Methods

The approach taken in this section is parallel to that taken by many researchers in this area ([11] and [12] are representative). The fundamental precept in finite difference methods is approximation of space and time derivatives with differences.
Starting with the one dimensional heat equation:

    ∂²θ/∂x² = (1/α) ∂θ/∂t    (3.76)

where:

    α = k/(ρ c_p)
    ΔA = 1
    ΔV = Δx

To approximate ∂θ/∂x:

    ∂θ/∂x ≈ (θ_(m+1) − θ_m)/Δx    (3.77)

and

    ∂θ/∂x ≈ (θ_m − θ_(m−1))/Δx    (3.78)

To approximate ∂²θ/∂x²:

    ∂²θ/∂x² ≈ (1/Δx) [ ∂θ/∂x |right − ∂θ/∂x |left ]    (3.79)

and substituting (3.77) and (3.78) into (3.79) yields:

    ∂²θ/∂x² ≈ (1/Δx²) [θ_(m−1) − 2θ_m + θ_(m+1)]    (3.80)

To approximate ∂θ/∂t:

    ∂θ/∂t ≈ [θ(i+1) − θ(i)]/Δt    (3.81)

and

    ∂θ/∂t ≈ [θ(i) − θ(i−1)]/Δt    (3.82)

Generalized finite difference methods can be expressed in the form of a weighted time average of the spatial derivatives (3.80), and a weighted spatial average of the time derivatives (3.81) and (3.82), which leads to an equation for internal node approximation of (3.76):

    a Fo [θ_(m−1)(i+1) − 2θ_m(i+1) + θ_(m+1)(i+1)] + b Fo [θ_(m−1)(i) − 2θ_m(i) + θ_(m+1)(i)]
    = u [θ_(m+1)(i+1) − θ_(m+1)(i)] + w [θ_m(i+1) − θ_m(i)] + v [θ_(m−1)(i+1) − θ_(m−1)(i)]    (3.83)

where:

    Fo = k Δt/(ρ c_p Δx²) = Δt/(r c)    (3.84)

and weighting constants:

    a + b = 1
    v + w + u = 1

Table 3.8: Finite Difference Parameters

                           a     b     v     w     u
    Forward difference     0     1     0     1     0
    Backward difference    1     0     0     1     0
    Crank-Nicolson        1/2   1/2    0     1     0
    Implicit 5/6          1/2   1/2   1/12  5/6   1/12
    Implicit 3/4          1/2   1/2   1/8   3/4   1/8

For a boundary node, the energy balance is formulated in the same way as in Section 3.3.1 (see Figure 3.7):

    de₁/dt = ρ c_p ΔV dθ₁/dt = q_s + q_c − q₂    (3.85)

where:

    q_c = h (θ_e − θ₁)    (3.86)

    q₂ = (k/Δx)(θ₁ − θ₂)    (3.87)

    q_s - is a known surface heat flux
    θ_e - is a known ambient temperature

Taking weighted time and spatial averages as with the internal nodes yields (where Bi = h Δx/k):

    a Fo r q_s(i+1) + b Fo r q_s(i)
    + a Fo Bi [θ_e(i+1) − θ₁(i+1)] + b Fo Bi [θ_e(i) − θ₁(i)]
    + a Fo [θ₂(i+1) − θ₁(i+1)] + b Fo [θ₂(i) − θ₁(i)]
    = w [θ₁(i+1) − θ₁(i)] + 2u [θ₂(i+1) − θ₂(i)]    (3.88)

and weighting constants

    a + b = 1
    w + 2u = 1

Refer to Table 3.8 for special cases.
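For the forward difference row of Table 3.8 (a = 0, b = 1, v = u = 0, w = 1), the internal node equation (3.83) reduces to an explicit update. A minimal sketch, holding the two end values fixed rather than applying the boundary treatment of (3.88); the array values are illustrative:

```python
import numpy as np

# Explicit (forward difference) step of (3.83):
# θ_m(i+1) = θ_m(i) + Fo*(θ_{m-1}(i) - 2θ_m(i) + θ_{m+1}(i)) on interior nodes.
def explicit_step(theta, Fo):
    new = theta.copy()
    new[1:-1] = theta[1:-1] + Fo * (theta[:-2] - 2*theta[1:-1] + theta[2:])
    return new

theta0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
theta1 = explicit_step(theta0, Fo=0.25)
print(theta1)    # only the node next to the heated end changes on the first step
```

Because a = 0, the new temperatures appear alone on the left side: no simultaneous equations need to be solved, which is the computational appeal of the forward difference scheme (at the price of the conditional stability examined below).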
Generalized Node

The generalized node depicted in Figure 3.6 can exchange heat by conduction with its neighbors, and with the environment via applied heat flux q_sm and convection. The finite difference equations for this type of node can be formulated using an energy balance:

    de_m/dt = q_m − q_(m+1) + q_cm + q_sm    (3.89)

and estimating the LHS derivative using a weighted spatial average:

    dθ_m/dt ≈ v dθ_(m−1)/dt + w dθ_m/dt + u dθ_(m+1)/dt    (3.90)

or

    dθ_m/dt ≈ (v/Δt)[θ_(m−1)(i+1) − θ_(m−1)(i)] + (w/Δt)[θ_m(i+1) − θ_m(i)] + (u/Δt)[θ_(m+1)(i+1) − θ_(m+1)(i)]    (3.91)

Using a weighted time average to estimate the RHS of (3.89):

    de_m/dt ≈ a [q_m(i+1) − q_(m+1)(i+1) + q_cm(i+1) + q_sm(i+1)] + b [q_m(i) − q_(m+1)(i) + q_cm(i) + q_sm(i)]    (3.92)

and substituting:

    q_m = g_m (θ_(m−1) − θ_m)    (3.93)

    q_(m+1) = g_(m+1) (θ_m − θ_(m+1))    (3.94)

    q_cm = g_hm (θ_em − θ_m)    (3.95)

where:

    q_sm = known heat flux source applied to node m
    θ_em = known ambient temperature surrounding node m

yields:

    c_m dθ_m/dt ≈ a [g_m θ_(m−1)(i+1) − (g_m + g_(m+1) + g_hm) θ_m(i+1) + g_(m+1) θ_(m+1)(i+1) + g_hm θ_em(i+1) + q_sm(i+1)]
                + b [g_m θ_(m−1)(i) − (g_m + g_(m+1) + g_hm) θ_m(i) + g_(m+1) θ_(m+1)(i) + g_hm θ_em(i) + q_sm(i)]    (3.96)

Equating the LHS and RHS approximations yields:

    (v/Δt) c_m [θ_(m−1)(i+1) − θ_(m−1)(i)] + (w/Δt) c_m [θ_m(i+1) − θ_m(i)] + (u/Δt) c_m [θ_(m+1)(i+1) − θ_(m+1)(i)]
    = a [g_m θ_(m−1)(i+1) − (g_m + g_(m+1) + g_hm) θ_m(i+1) + g_(m+1) θ_(m+1)(i+1) + g_hm θ_em(i+1) + q_sm(i+1)]
    + b [g_m θ_(m−1)(i) − (g_m + g_(m+1) + g_hm) θ_m(i) + g_(m+1) θ_(m+1)(i) + g_hm θ_em(i) + q_sm(i)]    (3.97)

or

    ((v/Δt) c_m − a g_m) θ_(m−1)(i+1) + ((w/Δt) c_m + a (g_m + g_(m+1) + g_hm)) θ_m(i+1) + ((u/Δt) c_m − a g_(m+1)) θ_(m+1)(i+1)
    = ((v/Δt) c_m + b g_m) θ_(m−1)(i) + ((w/Δt) c_m − b (g_m + g_(m+1) + g_hm)) θ_m(i) + ((u/Δt) c_m + b g_(m+1)) θ_(m+1)(i)
    + a g_hm θ_em(i+1) + b g_hm θ_em(i) + a q_sm(i+1) + b q_sm(i)    (3.98)

In matrix form, equation (3.98) becomes:

    [(1/Δt) C W − a G] x(i+1) = [(1/Δt) C W + b G] x(i) + a P u(i+1) + b P u(i)    (3.99)

where the x, u, G, C and P matrices are defined in Section 3.3.1,
and the spatial averaging matrix is:

    W = [ w   2u   0   ...   0
          v    w   u   ...   0
          0    v   w    u  ...
          ...
          0   ...  v    w    u
          0   ...  0   2v    w ]    (3.100)

Note that these equations can be expanded to include more general time and space averaging techniques, as are common in finite element (FE) methods. The solution method that follows applies to any of these methods which can be manipulated into discrete time state equation form:

    x(i+1) = Ā x(i) + a B̄ u(i+1) + b B̄ u(i)    (3.101)

with:

    Ā = [(1/Δt) C W − a G]⁻¹ [(1/Δt) C W + b G]    (3.102)

    B̄ = [(1/Δt) C W − a G]⁻¹ P    (3.103)

The solution to (3.101) can be found, in much the same way as the continuous time case, using z-transform techniques [15]. While, for large systems, this solution technique may prove to be more cumbersome than the simple algebraic approach, it does provide useful analytical insights into the behavior of the numerical solution, and as later sections will show, provides the analytical basis for a new type of solution to the inverse problem. Taking the z-transform of (3.101), and assuming that the initial state x(0) = 0:

    z X(z) = Ā X(z) + B̄ [a z U(z) + b U(z)]    (3.104)

Gathering terms and rearranging yields:

    X(z) = [zI − Ā]⁻¹ B̄ (a z + b) U(z)    (3.105)

The observation equation is defined to be:

    y(i) = E x(i) + F u(i)    (3.106)

which has z-transform:

    Y(z) = E X(z) + F U(z)    (3.107)

Figure 3.11: Discrete Time System Behavior (pole locations in the z-plane relative to the unit circle)

or

    Y(z) = (E [zI − Ā]⁻¹ B̄ (a z + b) + F) U(z)    (3.108)

The transfer function can now be defined as

    H(z) = Y(z)/U(z) = E [zI − Ā]⁻¹ B̄ (a z + b) + F    (3.109)

The transfer functions and roots for a fourth order ladder example, for several values of Δt, are presented in Tables 3.9 through 3.13. These transfer functions are provided as a reference for later developments in Chapters 4, 5, 6, and 7.

For the class of problems at hand, det[zI − Ā] is a polynomial in z.
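Equation (3.102) can be exercised directly. The sketch below assembles the Crank-Nicolson state matrix (a = b = 1/2, W = I) for an N = 4 ladder with rc = 1/9 (so r = c = 1/3) and checks that its eigenvalues are the images of the continuous poles under z = (2 + λΔt)/(2 − λΔt); the C and G matrices follow the alternate-form ladder of Figure 3.5, which is an assumption about the example's exact construction.

```python
import numpy as np

dt = 0.1
c, g = 1.0/3.0, 3.0
C = np.diag([c/2, c, c, c/2])                      # half capacitors at the ends
G = g * np.array([[-1.,  1.,  0.,  0.],
                  [ 1., -2.,  1.,  0.],
                  [ 0.,  1., -2.,  1.],
                  [ 0.,  0.,  1., -1.]])

M = C/dt - 0.5*G                                   # LHS matrix of (3.99)
A_d = np.linalg.solve(M, C/dt + 0.5*G)             # discrete state matrix (3.102)

z_poles = np.sort(np.linalg.eigvals(A_d).real)
lam = np.array([-36., -27., -9., 0.])              # continuous poles, rc = 1/9
z_map = np.sort((2 + lam*dt) / (2 - lam*dt))
print(np.allclose(z_poles, z_map))                  # True
```

If v is an eigenvector of C⁻¹G with eigenvalue λ, then (3.102) maps it to the eigenvalue (1 + bλΔt)/(1 − aλΔt), which for a = b = 1/2 is the bilinear map used above; the same identity underlies the stability comparisons in the tables that follow.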
In a manner quite similar to the continuous time case, the qualitative behavior of this system can be determined by examining the poles of the transfer function. As shown in Figure 3.11, poles which lie outside the unit circle lead to solutions which are unstable and grow without bound as the time index increases, and poles which lie within the unit circle diminish with time. Poles which lie in the left half plane (LHP) have signs which oscillate with each time step, and poles in the right half plane (RHP) do not. Poles which lie off of the real axis correspond to sampled sinusoidal signals.

The entries in Tables 3.9 through 3.13 are given as a gain multiplying the factored numerator and denominator polynomials.

Table 3.9: Forward Difference Transfer Function

    Δt = 0.001: H_FD(z) = 0.874800E-8 / [(z - 0.964000)(z - 0.973000)(z - 0.991000)(z - 1.00000)]
    Δt = 0.01:  H_FD(z) = 0.874800E-4 / [(z - 0.640000)(z - 0.730000)(z - 0.910000)(z - 1.00000)]
    Δt = 0.1:   H_FD(z) = 0.874800 / [(z - 0.100000)(z - 1.00000)(z + 1.70000)(z + 2.60000)]
    Δt = 1.0:   H_FD(z) = 8748.00 / [(z - 1.00000)(z + 8.00000)(z + 26.0000)(z + 35.0000)]

Table 3.10: Backward Difference Transfer Function

    Δt = 0.001: H_BD(z) = 0.814869E-8 z⁴ / [(z - 0.965251)(z - 0.973710)(z - 0.991080)(z - 1.00000)]
    Δt = 0.01:  H_BD(z) = 0.464665E-4 z⁴ / [(z - 0.735294)(z - 0.787402)(z - 0.917431)(z - 1.00000)]
    Δt = 0.1:   H_BD(z) = 0.270518E-1 z⁴ / [(z - 0.217391)(z - 0.270270)(z - 0.526316)(z - 1.00000)]
    Δt = 1.0:   H_BD(z) = 0.844402 z⁴ / [(z - 0.270270E-1)(z - 0.357143E-1)(z - 0.100000)(z - 1.00000)]

Table 3.11: Crank-Nicolson Transfer Function

    Δt = 0.001: H_CN(z) = 0.527554E-9 (z + 1.00000)⁴ / [(z - 0.964637)(z - 0.973360)(z - 0.991040)(z - 1.00000)]
    Δt = 0.01:  H_CN(z) = 0.390656E-5 (z + 1.00000)⁴ / [(z - 0.694915)(z - 0.762115)(z - 0.913876)(z - 1.00000)]
    Δt = 0.1:   H_CN(z) = 0.573053E-2 (z + 1.00000)⁴ / [(z + 0.148936)(z + 0.285714)(z - 0.379310)(z - 1.00000)]
    Δt = 1.0:   H_CN(z) = 0.360832 (z + 1.00000)⁴ / [(z + 0.636364)(z + 0.862069)(z + 0.894737)(z - 1.00000)]

Table 3.12: Implicit 5/6 Transfer Function

    Δt = 0.001: H_5/6(z) = -0.610456E-5 (z + 1.00000)(z - 1.11417)³ / [(z - 0.947420)(z - 0.964637)(z - 0.990230)(z - 1.00000)]
    Δt = 0.01:  H_5/6(z) = -0.469030E-5 (z + 1.00000)(z - 3.34781)³ / [(z - 0.574803)(z - 0.694915)(z - 0.906412)(z - 1.00000)]
    Δt = 0.1:   H_5/6(z) = 0.417805E-2 (z + 1.00000)(z + 1.45455)³ / [(z + 0.285714)(z + 0.459459)(z - 0.341463)(z - 1.00000)]
    Δt = 1.0:   H_5/6(z) = 0.358774 (z + 1.00000)(z + 1.03773)³ / [(z + 0.661538)(z + 0.894737)(z + 0.928571)(z - 1.00000)]

Table 3.13: Implicit 3/4 Transfer Function

    Δt = 0.001: H_3/4(z) = -0.360099E-4 (z + 1.00000)(z - 1.07469)³ / [(z - 0.930502)(z - 0.957713)(z - 0.989767)(z - 1.00000)]
    Δt = 0.01:  H_3/4(z) = -0.646117E-4 (z + 1.00000)(z - 2.12500)³ / [(z - 0.470588)(z - 0.644737)(z - 0.902174)(z - 1.00000)]
    Δt = 0.1:   H_3/4(z) = 0.342208E-2 (z + 1.00000)(z + 1.76924)³ / [(z + 0.367089)(z + 0.565217)(z - 0.320755)(z - 1.00000)]
    Δt = 1.0:   H_3/4(z) = 0.357723 (z + 1.00000)(z + 1.05714)³ / [(z + 0.674419)(z + 0.911504)(z + 0.945946)(z - 1.00000)]

3.3.3 Continuous Time to Discrete Time Relationship

Continuous time - discrete space (CTDS) methods lead to ladder networks which can be described by state equations of the form:

    ẋ(t) = C⁻¹G x(t) + C⁻¹P u(t)    (3.110)

Generalizing the finite difference results from Section 3.2.2 leads to weighted averages of time and space derivatives. Approximating the derivative and applying spatial averaging to the LHS of (3.110) yields:

    ẋ(t) ≈ (1/Δt) W [ x(n+1) - x(n) ]    (3.111)

Applying time averaging to the RHS of (3.110) yields:

    ẋ(t) ≈ Σ_{k=r}^{s} α_k ( C⁻¹G x(n+k) + C⁻¹P u(n+k) )    (3.112)

which is consistent with the development in Section 3.2.2 if r = 0, s = 1, and the α_k are replaced by the a and b parameters, respectively. Equating and rearranging for these cases yields:

    [ W - a Δt C⁻¹G ] x(n+1) = [ W + b Δt C⁻¹G ] x(n) + [ Δt C⁻¹P ][ a u(n+1) + b u(n) ]    (3.113)

or in state equation form:

    x(n+1) = [ W - a Δt C⁻¹G ]⁻¹ [ W + b Δt C⁻¹G ] x(n) + [ W - a Δt C⁻¹G ]⁻¹ [ Δt C⁻¹P ][ a u(n+1) + b u(n) ]    (3.114)

For the ladder example, 1/(rc) can be factored out of C⁻¹G, and 2/c can be factored out of C⁻¹P:

    C⁻¹G = (1/(rc)) V   and   C⁻¹P = (2/c) P

Substituting this into equation (3.114) yields:

    x(n+1) = [ W - a Fo V ]⁻¹ [ W + b Fo V ] x(n) + [ W - a Fo V ]⁻¹ [ 2 r Fo P ][ a u(n+1) + b u(n) ]    (3.115)

where:

    Fo = Δt/(rc)

Equation (3.115) is in the standard form of discrete time state equations. It shows the relationship between the CTDS state equations and the discrete time state equations which result when finite difference approximations are used. When an RC ladder network is defined by discretizing space, the C, G and P matrices can be specified. Selecting time and spatial weighting parameters determines the W and V matrices. Therefore, using Equation (3.115), it is possible to determine the discrete time state equations for the system directly from the continuous time state equations.
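The mapping in (3.114) can be exercised directly: form A_c = C⁻¹G, build the discrete A matrix for chosen weights, and check that its eigenvalues are the a/b-weighted images of the continuous ones. The two-node unit ladder here (with W = I) is an illustrative assumption, not the worked example of the text.

```python
import numpy as np

C = np.eye(2)
G = np.array([[-1.0, 1.0], [1.0, -1.0]])   # insulated two-node ladder
Ac = np.linalg.inv(C) @ G                  # continuous-time state matrix
dt, a, b = 0.1, 0.5, 0.5                   # Crank-Nicolson weights, W = I

# Discrete A from eq. (3.114) with W = I:
Ad = np.linalg.solve(np.eye(2) - a * dt * Ac, np.eye(2) + b * dt * Ac)

# Each continuous eigenvalue lam maps to (1 + b dt lam)/(1 - a dt lam):
lam_d = np.sort(np.linalg.eigvals(Ad).real)
expected = np.sort([(1 + b * dt * l) / (1 - a * dt * l) for l in (-2.0, 0.0)])
print(lam_d, expected)                     # 0.81818... and 1.0 for both
```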
3.4 Exact Discrete Time Solutions

Many important solutions to the boundary value problems associated with heat transfer involve series solutions composed of familiar functions. This section will show how these series solutions can be converted to discrete time models of arbitrary accuracy using the input invariant methods developed in Section 3.2.1.

The primary advantage of this approach is the fidelity of the discrete time solution. In particular, the qualitative behavior of the solution is independent of the time step selected. In addition, the accuracy of the solution is also independent of the time step.

Series Solutions:

The solution to many boundary value problems can be expressed as a Fourier series involving eigenfunctions arising from conditions at the boundaries. Solutions to these Sturm-Liouville problems can be expressed as:

    y(t) = Σ_{n=0}^{N} r_n h_n(t)   as N → ∞    (3.116)

where the h_n(t) are a complete orthonormal set of eigenfunctions. One interpretation of this solution is depicted in Figure 3.12 as a block diagram. Each simple section responds to an impulse, step, or ramp input; its response is governed by h_n(t). By sampling the output of each section and taking the z-transform, an input invariant transform of each simple section can be found. Since the output of each section exactly matches the continuous time solution, regardless of the time step selected, the discrete time solution is error-free if an infinite number of terms is included in the series.

For a finite series, errors in the discrete time solution occur, but because the series solutions are uniformly convergent, the error in the solution can be made arbitrarily small by including a sufficient number of terms. For example, consider a one dimensional wall with an applied heat flux at one surface, and insulation at the other (see Figure 3.1).
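As a preview of the construction detailed next, a step-invariant discrete model of this wall can be assembled mode by mode from the classical series h(1,t) = 1 + 2 Σ (-1)ⁿ e^{-n²π²t} (quoted below as equation (3.117)). For a unit step in q, the sampled output of the discrete model should match the truncated exact series at every sample instant, for any time step:

```python
import numpy as np

N, dt, steps = 20, 0.05, 40                # truncation, time step, horizon
n = np.arange(1, N + 1)
p = (n * np.pi) ** 2                       # poles p_n = n^2 pi^2
r = 2.0 * (-1.0) ** n                      # residues r_n = 2(-1)^n at x = 1
aN = np.exp(-p * dt)
c0 = (1.0 - aN) / p                        # step-invariant coefficients

y_modes = np.zeros(N)
y0 = 0.0                                   # integrator mode (pole at s = 0)
for _ in range(steps):
    y_modes = aN * y_modes + r * c0        # y_n(k+1) = a_n y_n(k) + r_n c0_n
    y0 += dt                               # H_0(z) = dt/(z - 1), unit input

t = steps * dt
exact = t + np.sum(r * (1.0 - np.exp(-p * t)) / p)   # truncated exact series
model = y0 + np.sum(y_modes)
print(abs(model - exact))                  # agrees to roundoff at the samples
```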
The continuous time impulse response for this problem is [10]:

    h(x,t) = 1 + 2 Σ_{n=1}^{∞} cos(nπx) e^{-n²π²t}    (3.117)

For a given measurement location, the cosine terms are constant, and the invariant solution for the exponential terms can be found as shown below. Using:

    p_n = n²π²,   a_n = e^{-p_n Δt},   r_n = 2 cos(nπx)

the impulse, step, and ramp invariant forms of the discrete time transfer function can be formed:

Impulse invariant:

    c_0n = Δt
    H_n(z) = r_n c_0n z / (z - a_n)

Step invariant:

    c_0n = (1 - a_n)/p_n
    H_n(z) = r_n c_0n / (z - a_n)

Ramp invariant:

    c_1n = ( p_n Δt + e^{-p_n Δt} - 1 ) / ( p_n² Δt )
    c_0n = ( p_n Δt e^{-p_n Δt} + e^{-p_n Δt} - 1 ) / ( p_n² Δt )
    H_n(z) = r_n ( c_1n z - c_0n ) / (z - a_n)

This discrete time solution requires no spatial lumping of properties, and the behavior of the solution exactly mimics that of the CTCS solution no matter what time step is used. The accuracy of the solution depends only upon the number of terms included in the finite summation, and the degree to which a series of impulses, steps, or ramps is an adequate representation of the actual input. Table 3.8 contains values of r_n and p_n for the insulated wall example. The interested reader may wish to compare these poles and residues with those presented in Tables 3.3 and 3.4.

[Figure 3.12: Exact Discrete Time System Simulation. The block diagram shows q(t) driving parallel first-order sections r_n/(s + p_n) whose sampled outputs are summed to form θ(t). Impulse response of the insulated wall at x = 1, unit properties, heat flux applied at x = 0, insulated at x = 1:
    h(t) = 1 + 2 Σ_{n=1}^{∞} (-1)ⁿ e^{-n²π²t}
with poles p_n = n²π² and residues r_n = 2(-1)ⁿ.]

Chapter 4

Inverse Solution Methods

4.1 Introduction

Many inverse solutions have been developed which depend upon two procedures: 1) a method of approximating the direct problem with discrete time equations, and 2) given those equations, the design of an estimation algorithm. There are, as was seen in Chapter 3, many reasonable ways to approximate continuous time problems with discrete time ones. While the algorithm behavior certainly depends on this choice, the estimation concept does not.
The approach in the past has largely been one of trial and error numerical experiment: a numerical method is selected to approximate the direct continuous time solution, an inverse algorithm is formulated from it, and the result is tested for desirable behavior using benchmark functions. While this approach has been useful, it has not provided the analytical understanding of the problem required to decouple the two procedures above. No doubt, some good techniques have been overlooked because the algorithm behavior appeared to be poor, even though the estimation concept was valid and would perform well given a proper direct model.

The philosophical approach taken here is to separate the inverse problem into these two areas and attack them individually. Since there are so many choices for the direct problem simulation, inverse algorithms will be grouped based upon the estimation concept, not upon the way that the direct problem is formulated.

A review of two important inverse methods is presented in this section: one which evolved from early work reported by Stolz [4], and another which was developed by Beck [18]. Since many subsequent methods build on the concepts resulting from Stolz's method, it is a logical place to start. Beck's method was developed soon after and has been one of the more successful methods. It employs a heuristic which will in many cases stabilize the results from Stolz's method.

The review of each method is presented so that it parallels the author's development, and then is generalized such that the direct model is decoupled from the inverse solution. Each method is analyzed in the frequency domain using z-transforms, and inverse algorithm transfer functions are derived.

4.2 Stolz's Method

The method developed by Stolz provides estimates of q, denoted q̂, given observations of an internal temperature. Stolz's method is an algebraic solution for discrete time numerical deconvolution.
The Stolz paper describes the two procedures stated above: 1) a method for converting the continuous time problem into a discrete time problem, and 2) a method of back substitution to solve the resulting algebraic equations. A method which numerically deconvolves a set of algebraic equations will, for this reason, be referred to as Stolz's method. The algorithm described by Stolz will be presented first and then generalized.

Starting with Duhamel's integral (convolution):

    θ(x,t) = ∫_{-∞}^{t} h(x,τ) q(t - τ) dτ    (4.1)

where h(x,t) is the impulse response of the system to an applied unit energy heat flux q(t). Assuming the impulse response and input q(t) are constant over uniform time increments, the integral is trivial over each interval. Summing these incremental solutions over the limits gives a discrete form of convolution:

    θ(x, t_i) = Σ_{j=-∞}^{i} h(x, τ_j) q(t_i - τ_j)    (4.2)

where t_i = iΔt. Keeping in mind that q will be estimated from temperature observations at some point x, the x argument will be dropped, with the understanding that this is the temperature at the observation location.

Once a discrete time impulse response h(n) is known for a given observation location, the solution to the direct problem can be found by convolving the heat flux input with the impulse response:

    θ(i) = Σ_{j=-∞}^{∞} h(j) q(i - j)    (4.3)

where the index in parentheses is understood to be the discrete time index. Since the heat transfer system must be causal (i.e. not anticipatory),

    h(x,t) = 0   for t < 0

This implies that the lower limit of the integral (4.1) can be replaced by zero, since the integrand is identically zero for values of τ less than zero. Similarly, the lower limit of Equation (4.3) can be replaced by zero.
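With causality established, the back substitution that Stolz's method performs on the resulting triangular system (written out next as equations (4.4)-(4.6)) can be sketched directly; the short impulse response and flux history below are made-up illustrations, not a conduction model.

```python
def stolz(y, h):
    """Recover q from y(k) = sum_j h(j) q(k-j) by back substitution."""
    q = []
    for k in range(len(y)):
        acc = sum(h[j] * q[k - j] for j in range(1, k + 1) if j < len(h))
        q.append((y[k] - acc) / h[0])
    return q

h = [0.5, 0.3, 0.2]                        # assumed causal impulse response
q_true = [1.0, 2.0, 0.0, -1.0, 3.0]
y = [sum(h[j] * q_true[k - j] for j in range(len(h)) if k - j >= 0)
     for k in range(len(q_true))]
print(stolz(y, h))                         # recovers q_true on noiseless data
```

On noiseless data the recovery is exact; the sensitivity of this divide-by-h(0) recursion to measurement noise is precisely the stability question taken up in Chapter 5.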
Assuming that the heat flux is zero for times less than zero, Equation (4.3) can be expressed in this form:

    θ(0) = h(0)q(0)
    θ(1) = h(0)q(1) + h(1)q(0)
    θ(2) = h(0)q(2) + h(1)q(1) + h(2)q(0)
    ⋮    (4.4)

The zeroth equation can be solved using the zeroth temperature observation y(0) in place of θ(0):

    q̂(0) = (1/h(0)) y(0)    (4.5)

Substituting this estimate into the second equation, and using the second temperature observation, yields:

    q̂(1) = (1/h(0)) ( y(1) - h(1) q̂(0) )    (4.6)

The process continues until all of the equations in (4.4) are solved.

Recognizing equations (4.4) as discrete convolution, this method can be extended and analyzed using z-transform techniques. Dropping the requirement that q be zero for negative times, and applying z-transforms to (4.4), yields:

    Θ(z) = H(z) Q(z)    (4.7)

where H(z) is the transfer function for the system, and is the z-transform of the impulse response. The operation equivalent to back substitution, applied to the frequency domain representation above, leads to:

    Q̂(z) = H⁻¹(z) Y(z)    (4.8)

In principle, the RHS of (4.8) is known: H⁻¹(z) is the algebraic inverse of the transfer function, and Y(z) is the z-transform of the temperature observations. Since this transfer function is the basis for many subsequent methods, define:

    St(z) = H⁻¹(z)    (4.9)

As noted in Chapter 3, H(z) is the ratio of two polynomials in z. The inverse transfer function is simply the inverted ratio of these two polynomials, which can now be examined to determine its qualitative behavior. It is interesting to note that the zeros of the direct transfer function become the poles of the inverse transfer function (and vice versa).

4.3 Beck's Method

Beck's method, also referred to as the function specification method, employs a heuristic which extends the stability of deconvolution solutions to smaller time steps. The development here will parallel that presented by Beck, and will then be generalized.
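As a preview of the derivation that follows, the function specification estimate it arrives at (equation (4.23)) can be implemented in a few lines. The h and q values below are illustrative assumptions, and for R = 0 the estimate collapses to Stolz's back substitution:

```python
def beck(y, h, R):
    """Sequential function specification, eq. (4.23):
    qhat(m) = sum_j r(j) [y(m+j) - thetahat_j(m)] / sum_j r(j)^2."""
    r = [sum(h[:j + 1]) for j in range(R + 1)]       # r(j) = h(0)+...+h(j)
    denom = sum(rj * rj for rj in r)
    hp = lambda k: h[k] if 0 <= k < len(h) else 0.0  # zero-padded h
    q = []
    for m in range(len(y) - R):
        num = 0.0
        for j in range(R + 1):
            th = sum(hp(m - i + j) * q[i] for i in range(m))
            num += r[j] * (y[m + j] - th)
        q.append(num / denom)
    return q

h = [0.5, 0.3, 0.2, 0.1]
q_true = [1.0, 2.0, 0.0, -1.0, 3.0, 1.0]
y = [sum(h[j] * q_true[k - j] for j in range(len(h)) if k - j >= 0)
     for k in range(len(q_true))]
print(beck(y, h, R=0))                     # R = 0 reproduces Stolz's method
```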
A discrete form of convolution, like that of Stolz, is assumed:

    θ(i) = Σ_{j=0}^{i} h(j) q(i-j)    (4.10)

or, written out for i = 0, 1, 2, ...:

    θ(0) = h(0)q(0)
    θ(1) = h(0)q(1) + h(1)q(0)
    θ(2) = h(0)q(2) + h(1)q(1) + h(2)q(0)
    ⋮
    θ(m) = Σ_{i=0}^{m} h(i) q(m-i)    (4.11)

The heuristic employed temporarily assumes that the applied heat flux is constant for R time steps after the observation interval. Rearranging terms and changing index variables in the summation:

    θ(m)   = h(0)q(m) + Σ_{i=0}^{m-1} h(m-i) q(i)
    θ(m+1) = h(0)q(m+1) + h(1)q(m) + Σ_{i=0}^{m-1} h(m-i+1) q(i)
    ⋮
    θ(m+j) = Σ_{i=0}^{j} h(i) q(m-i+j) + Σ_{i=0}^{m-1} h(m-i+j) q(i)
    ⋮
    θ(m+R) = Σ_{i=0}^{R} h(i) q(m-i+R) + Σ_{i=0}^{m-1} h(m-i+R) q(i)    (4.12)

Applying the heuristic to the mth time step yields:

    q(m) = q(m+1) = ⋯ = q(m+R) = q̂(m)    (4.13)

Substituting this into (4.12) yields:

    θ(m)   = q̂(m) h(0) + Σ_{i=0}^{m-1} h(m-i) q(i)
    θ(m+j) = q̂(m) Σ_{i=0}^{j} h(i) + Σ_{i=0}^{m-1} h(m-i+j) q(i)
    θ(m+R) = q̂(m) Σ_{i=0}^{R} h(i) + Σ_{i=0}^{m-1} h(m-i+R) q(i)    (4.14)

Using the previous estimates of q in the rightmost summations, and using the temperature observations in place of θ, yields:

    y(m)   = q̂(m) h(0) + Σ_{i=0}^{m-1} h(m-i) q̂(i)
    y(m+j) = q̂(m) Σ_{i=0}^{j} h(i) + Σ_{i=0}^{m-1} h(m-i+j) q̂(i)
    y(m+R) = q̂(m) Σ_{i=0}^{R} h(i) + Σ_{i=0}^{m-1} h(m-i+R) q̂(i)    (4.15)

which leaves R + 1 equations and only one unknown.

Since it is not possible, in general, to simultaneously satisfy all of these equations, q̂(m) is selected to minimize the squared error criterion:

    S = Σ_{j=0}^{R} [ y(m+j) - θ(m+j) ]²    (4.16)

That is, we wish to minimize the difference between the observations and the temperature response when we pretend that the applied q is constant over the prediction interval j = 0 to j = R. Using the result from equation (4.14):

    θ(m+j) = q̂(m) Σ_{i=0}^{j} h(i) + Σ_{i=0}^{m-1} h(m-i+j) q̂(i)    (4.17)

Substituting (4.17) into (4.16) yields:

    S = Σ_{j=0}^{R} [ y(m+j) - q̂(m) Σ_{i=0}^{j} h(i) - Σ_{i=0}^{m-1} h(m-i+j) q̂(i) ]²    (4.18)

Define:

    r(j) = Σ_{i=0}^{j} h(i)    (4.19)

and

    θ̂_j(m) = Σ_{i=0}^{m-1} h(m+j-i) q̂(i)    (4.20)

Substituting (4.19) and (4.20) into (4.18) and squaring yields:

    S = Σ_{j=0}^{R} [ q̂²(m) r²(j) + y²(m+j) + θ̂_j²(m)
        - 2 ( q̂(m) r(j) y(m+j) + y(m+j) θ̂_j(m) - q̂(m) r(j) θ̂_j(m) ) ]    (4.21)

All the quantities in this expression are known except q̂(m). To find this value, equation (4.21) is differentiated with respect to q̂(m) and set equal to zero:

    ∂S/∂q̂(m) = 2 Σ_{j=0}^{R} [ q̂(m) r²(j) - r(j) ( y(m+j) - θ̂_j(m) ) ] = 0    (4.22)

Solving for q̂(m) yields Beck's method [18]:

    q̂(m) = Σ_{j=0}^{R} r(j) ( y(m+j) - θ̂_j(m) ) / Σ_{j=0}^{R} r²(j)    (4.23)

Since this method leads to a non-causal (anticipatory) algorithm, a closer look at the summation indices is indicated. A generalized form of the results above can be developed which generates the inverse solution independently of the direct solution, using a z-transform description of the direct transfer function. With this in mind, a generalized function specification method is developed next.

Starting with the general convolution form:

    θ(m) = Σ_{i=-∞}^{∞} h(i) q(m-i)    (4.24)

Since h(i) is a causal signal,

    θ(m) = Σ_{i=0}^{∞} h(i) q(m-i)    (4.25)

and

    θ(m+R) = Σ_{i=0}^{∞} h(i) q(m+R-i)    (4.26)
           = Σ_{i=0}^{R} h(i) q(m+R-i) + Σ_{i=R+1}^{∞} h(i) q(m+R-i)    (4.27)
           = Σ_{i=0}^{R} h(i) q(m+R-i) + Σ_{i=-∞}^{m-1} h(m+R-i) q(i)    (4.28)

Repeating the steps used on Equations (4.13) through (4.18) leads to:

    S = Σ_{j=0}^{R} ( y(m+j) - q̂(m) Σ_{i=0}^{j} h(i) - Σ_{i=-∞}^{m-1} h(m-i+j) q̂(i) )²    (4.29)

and

    θ̂_j(m) = Σ_{i=-∞}^{m-1} h(m-i+j) q̂(i)    (4.30)

Continuing the process as above leads to equation (4.23), but with the θ̂_j above. In order to determine the first non-zero value of θ̂_j, recall that

    h(k) = 0   for k < 0

and assume that measurements start at k = 0:

    y(k) = 0   for k < 0

The argument of h in equation (4.30) is negative, and the corresponding terms vanish, whenever i > m + j. Writing out the extreme cases j = 0 and j = R:

    θ̂_0(m) = Σ_{i=-∞}^{m-1} h(m-i) q̂(i)    (4.31)
    θ̂_R(m) = Σ_{i=-∞}^{m-1} h(m-i+R) q̂(i)    (4.32)

and since i < m in both summations, θ̂_j(m) = 0 until the first q̂(i) ≠ 0 for some i ∈ (-∞, m-1]. Thus,

    θ̂_R(m) = 0 until q̂(m-1) ≠ 0    (4.33)

Using equation (4.23),

    q̂(i) = Σ_{j=0}^{R} r(j) ( y(i+j) - θ̂_j(i) ) / Σ_{j=0}^{R} r²(j)

and noting that the result is valid for all i between -∞ and ∞, the result in (4.33) implies that q̂(i) ≠ 0 when the first y(i+j) ≠ 0. The first non-zero y(i+j) occurs when i = -R, and therefore the first non-zero q̂(i) occurs when i = -R:

    q̂(i) = 0   for i < -R

and therefore,

    θ̂_j(i) = Σ_{k=-R}^{i-1} h(i+j-k) q̂(k)

The z-transform analysis of this method proceeds in a fashion similar to that for Stolz's method, although it is more involved. Letting

    c_s = Σ_{j=0}^{R} r²(j)

and rearranging (4.23),

    c_s q̂(m) = r(0)y(m) + r(1)y(m+1) + ⋯ + r(R)y(m+R)
              - r(0)θ̂_0(m) - r(1)θ̂_1(m) - ⋯ - r(R)θ̂_R(m)    (4.34)

Taking the z-transform of this equation:

    c_s Q̂(z) = r(0)Y(z) + r(1)zY(z) + ⋯ + r(R)z^R Y(z)
              - r(0)Θ̂_0(z) - r(1)Θ̂_1(z) - ⋯ - r(R)Θ̂_R(z)    (4.35)

or

    c_s Q̂(z) = [ r(0) + r(1)z + ⋯ + r(R)z^R ] Y(z) - r(0)Θ̂_0(z) - ⋯ - r(R)Θ̂_R(z)    (4.36)

Concentrating on the Θ̂_j, recall

    θ̂_j(m) = Σ_{k=-∞}^{m-1} h(m+j-k) q̂(k)    (4.37)

and expand this expression to standard convolutional form using the unit step μ:

    θ̂_j(m) = Σ_{k=-∞}^{∞} h(m+j-k) ( q̂(k) - μ(k-m) q̂(k) )    (4.38)

or

    θ̂_j(m) = Σ_{k=-∞}^{∞} h(m+j-k) q̂(k) - Σ_{k=-∞}^{∞} h(m+j-k) μ(k-m) q̂(k)    (4.39)

The z-transform of Θ̂_0(z) can be found as follows. Letting j = 0, Equation (4.39) becomes

    θ̂_0(m) = Σ_{k=-∞}^{∞} h(m-k) q̂(k) - Σ_{k=-∞}^{∞} h(m-k) μ(k-m) q̂(k)    (4.40)

In the second summation h(m-k) = 0 for k > m, while μ(k-m) = 0 for k < m, so only the k = m term survives and the sum reduces to h(0)q̂(m). Hence

    Θ̂_0(z) = [ H(z) - h(0) ] Q̂(z)

Repeating this argument for each j = 1, ..., R gives

    Θ̂_j(z) = [ z^j H(z) - Σ_{i=0}^{j} h(j-i) z^i ] Q̂(z)

Substituting these into (4.36) and solving for Q̂(z) yields:

    Q̂(z) = N(z)/D(z)    (4.48)

where:

    N(z) = ( r(0) + r(1)z + ⋯ + r(R)z^R ) Y(z)    (4.49)

and

    D(z) = ( r²(0) + r²(1) + ⋯ + r²(R) ) + ( r(0) + r(1)z + ⋯ + r(R)z^R ) H(z)
           - z⁰ ( h(0)r(0) + h(1)r(1) + ⋯ + h(R)r(R) )
           - z¹ ( h(0)r(1) + h(1)r(2) + ⋯ + h(R-1)r(R) )
           - z² ( h(0)r(2) + h(1)r(3) + ⋯ + h(R-2)r(R) )
           ⋮
           - z^R h(0)r(R)    (4.50)

or

    D(z) = H(z) Σ_{i=0}^{R} r(i)z^i + c_s - Σ_{k=0}^{R} z^k [ Σ_{l=0}^{R-k} h(l) r(l+k) ]    (4.51)

         = Σ_{i=0}^{R} z^i [ r(i)H(z) - Σ_{j=0}^{R-i} h(j) r(i+j) ] + c_s    (4.52)

Substituting these results into (4.48) yields a final, more useful form for the discrete time transfer function:
    Q̂(z) = [ Σ_{i=0}^{R} r(i) z^i ] Y(z) / ( Σ_{i=0}^{R} z^i [ r(i)H(z) - Σ_{j=0}^{R-i} h(j) r(i+j) ] + c_s )

Using this expression, the transfer function for Beck's method can be found. It depends on a single design parameter, R, and is completely described in terms of the truth model H(z) and the truth model time responses h(i) and r(i).

To check these results, observe that Beck's method should reduce to Stolz's method for a prediction horizon of zero. Let R = 0:

    c_s = r²(0) = h²(0)

Using Equation (4.48):

    Q̂(z) = h(0) Y(z) / ( h²(0) + h(0) [ H(z) - h(0) ] )
          = h(0) Y(z) / ( h²(0) + h(0)H(z) - h²(0) )
          = H⁻¹(z) Y(z)

which is Stolz's method.

Chapter 5

Inverse Algorithm Stability

5.1 Introduction

Once a discrete time inverse algorithm has been formulated, its behavior can be analyzed using z-transforms. Of particular importance to the performance of inverse algorithms is the stability of the resulting filter. Rabiner and Gold [8] classify a system as stable if every bounded input produces a bounded output. The necessary and sufficient condition on the impulse response h(n) for stability is:

    Σ_{n=-∞}^{∞} |h(n)| < ∞

Much of the previous work regarding this issue has determined the stability of inverse filters by numerical experiment. While this approach does prove useful, it provides little information about the underlying structure of the algorithm, or the causes of the instability.

The dependence of inverse algorithm stability on Δt has been a significant problem. One approach has been to search for reasonable direct solutions which lead to accurate and stable inverse solutions. This trial and error process can be time consuming and computationally intensive, with no guarantee of success. Results are presented in Section 5.3 which provide an analytical method for determining inverse filter stability and its dependence on Δt, without such numerical experiments, using the transfer function of the algorithm. This procedure also leads to a class of inverse filters which are unconditionally stable.
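The stabilization idea developed below cascades a precompensation filter whose zeros cancel the unstable poles of the inverse filter, each cancelled pole being replaced by a stable one. One common replacement, used here purely as an illustration and not as the dissertation's specific design, is the reciprocal of the offending pole:

```python
import numpy as np

den = np.array([1.0, 0.5, -3.0])           # toy inverse-filter denominator
poles = np.roots(den)                      # roots 1.5 and -2.0: both unstable
# Cancel each pole outside the unit circle and replace it by its reciprocal:
stabilized = np.poly([p if abs(p) <= 1 else 1.0 / p for p in poles])
print(np.abs(np.roots(stabilized)).max()) # all replacement poles inside |z|=1
```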
A criterion for stability is developed in Section 5.3. Inverse algorithms are first described in state equation form, and then related to the transfer function form. The stability criterion is then presented in terms of the locations of the system poles. This is followed by several examples and a general test for stability using the coefficients of the characteristic polynomial. The principal advantage of this stability test is that the presence of unstable poles is detected without factoring the characteristic polynomial. This leads to a method for determining the parametric dependence of system stability on Δt.

A broad class of filters which provide unconditionally stable inverse filters is developed in Section 5.4. Stability is achieved by applying a precompensation filter in cascade with the inverse filter. The transfer function of the resulting inverse filter is the product of the transfer functions of the precompensation filter and the original inverse filter. The precompensation filter is designed so that its zeros cancel the unstable poles of the inverse filter. This leads to a new and broad class of inverse algorithms, previously unseen in the literature.

The particular examples worked in subsequent sections of this thesis are aimed at stabilization of Stolz inverses, although these precompensation methods can be applied to any inverse algorithm which can be described by a transfer function. In addition, the precompensation filters presented here replace each unstable pole with a stable one, in order to maintain the dimensionality of the original inverse. While this is desirable, since it tends to reduce phase distortion, it is not necessary.

5.2 Stability Criterion for Discrete Time Systems

In order to describe the qualitative behavior of an inverse algorithm, the locations of the poles of the system must be determined.
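Numerically, that determination is immediate once the characteristic polynomial is available; a minimal check of the unit circle criterion developed next (the two polynomials are toy examples, not inverse ladder models):

```python
import numpy as np

def is_stable(char_poly):
    """All roots of the characteristic polynomial strictly inside |z| = 1?"""
    return bool(np.all(np.abs(np.roots(char_poly)) < 1.0))

print(is_stable([1.0, -0.9]))              # pole at z = 0.9  -> True
print(is_stable([1.0, 0.2, -1.2]))         # root at z = -1.2 -> False
```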
To this end, the inverse algorithm may be described by either of two equivalent methods, one of which may be selected depending upon the original form of the inverse algorithm description. One method deals with a difference equation description and the other with a transfer function description, although the transfer function is easily derived from the state equation form, as was shown in Equation (3.109). The underlying object of both methods is to obtain and factor the characteristic polynomial for the inverse system, so that the pole locations are specified.

Assuming that the inverse algorithm can be described by difference equations of the form:

    x(n+1) = A x(n) + B y(n)
    q̂(n) = E x(n) + F y(n)    (5.1)

where:

    x(n) - state variables for the inverse algorithm
    q̂(n) - heat flux estimates
    y(n) - temperature observations

taking the z-transform of these equations and solving for the transfer function:

    I(z) = E [ zI - A ]⁻¹ B + F    (5.2)

Expressing I(z) in polynomial form:

    I(z) = ( E adj[zI - A] B + det[zI - A] F ) / det[zI - A]    (5.3)
         = N(z)/D(z)    (5.4)

The characteristic equation for I(z) is

    D(z) = det[zI - A] = 0    (5.5)

and the location of the system poles is determined by factoring the characteristic polynomial, or solving the characteristic equation. A transfer function is stable if the roots of the characteristic equation lie within the unit circle [8]. A system which has poles that lie on the unit circle will be called marginally stable. The transfer functions in Tables 3.5 through 3.13 show that the locations of the poles and zeros of the direct transfer function, and consequently the zeros and poles of the inverse transfer function, are dependent on Δt. A method for determining the critical values of Δt at which an inverse filter's poles pass outside the unit circle is the subject of the next section.

5.3 Inverse Algorithm Stability Dependence on Time Step

The approach taken in this section will be to first work two examples to demonstrate the necessary concepts, and then develop the more general method for testing the dependence of stability on Δt. The two examples stem from third and fourth order ladder representations of the insulated wall with applied surface heat flux. The impulse invariant transformation will be used to form the discrete time transfer function, and the Stolz inverse will be formed from this result. The stability testing will be generalized using the Schur-Cohn method [19], which tests for the presence of poles outside the unit circle. It is the discrete time equivalent of the Routh-Hurwitz test for stability of continuous time systems.

Starting with a third order ladder representation of the insulated wall example, the continuous time transfer function resulting from observations at the insulated surface is:

    H(s) = 128 / ( s³ + 24s² + 128s )    (5.6)

and performing the partial fraction expansion yields:

    H(s) = 1/s - 2/(s+8) + 1/(s+16)    (5.7)
5.3 Inverse Algorithm Stability Dependence on Time Step The approach taken in this section will be to first work with two examples to demonstrate the necessary concepts, and then develop the more general method for testing the dependence of stability on At. The two examples will stem from third and fourth order ladder representations of the insulated wall with applied surface heat flux. The impulse invariant transformation will be used to form the discrete time transfer function, and the Stolz inverse will be formed from this result. The stability testing will be generalized using the Shur - Cohn method [19], which tests for the presence of poles outside the unit circle. It is the discrete time equivalent of the Routh - Hurwitz test for stability of continuous time systems. Starting with a third order ladder representation of the insulated wall example, the continuous time transfer function resulting from observations at the insulated surface is: 128 m3) ' .93 + 2452 + 128.9 (5'6) and performing the partial fraction expansion yields: 1 —2 1 H“"§*'§T§+.+1e (5'7) 83 Using the impulse invariant transformation yields discrete time transfer function: H(z) ’7 Atz + -—2Atz + Atz + At 2 n (5.8) 2 — p1 z — pg 2 - pa 2 — [)4 where: P1 = 1 P2 = e-BAt P3 = e-16At r1=1r2=-2 r3=1 Expressing H (2) as a ratio of two polynomials: _ At(a222 + (112) H(z) — 23 + 1:222 +~b1z + b0 (5'9) where: a3=r1+r2+r3=0 02 = -[1‘1(P2 + P3) + 1’2(Pt+ Pa) + 7301+ 102)] ¢1=("1P2P3)+(Pt 72m)+(P1P2 1'3) =-m-m-m 51=(P1 P2)+(P1P3)+(P2Pa) 50 = 101 p2 pa The Stolz inverse is formed by taking the reciprocal of H (2): 1(2) = St(z) = (II-1(2) (5.10) Factoring the denominator of I (z) (numerator of H (2)) will determine whether the Stolz inverse is stable; i.e. if the the roots lie within the unit circle, the system is stable. The roots of the denominator of [(2) are: 2 = 0 and, z = —a1/a2 The behavior of the second root as At varies is of interest. 
As Δt → 0, -a₁/a₂ → -1; as Δt → ∞, -a₁/a₂ → 0.

This result indicates that the Stolz inverse of the impulse invariant form of the third order ladder network approximation is unconditionally stable for values of Δt > 0.

Repeating this analysis for the fourth order ladder:

    H(s) = 8748 / ( s⁴ + 72s³ + 1539s² + 8748s )    (5.11)

and performing the partial fraction expansion yields:

    H(s) = 1/s - 2/(s+9) + 2/(s+27) - 1/(s+36)    (5.12)

Using the impulse invariant transformation yields the discrete time transfer function:

    H(z) = Δt r₁ z/(z - p₁) + Δt r₂ z/(z - p₂) + Δt r₃ z/(z - p₃) + Δt r₄ z/(z - p₄)    (5.13)

where:

    p₁ = 1,  p₂ = e^{-9Δt},  p₃ = e^{-27Δt},  p₄ = e^{-36Δt}
    r₁ = 1,  r₂ = -2,  r₃ = 2,  r₄ = -1

or, expressing H(z) as a ratio of two polynomials:

    H(z) = Δt ( a₃z³ + a₂z² + a₁z ) / ( z⁴ + b₃z³ + b₂z² + b₁z + b₀ )    (5.14)

where:

    a₄ = r₁ + r₂ + r₃ + r₄ = 0
    a₃ = -[ r₁(p₂+p₃+p₄) + r₂(p₁+p₃+p₄) + r₃(p₁+p₂+p₄) + r₄(p₁+p₂+p₃) ]
    a₂ = r₁(p₂p₃+p₂p₄+p₃p₄) + r₂(p₁p₃+p₁p₄+p₃p₄) + r₃(p₁p₂+p₁p₄+p₂p₄) + r₄(p₁p₂+p₁p₃+p₂p₃)
    a₁ = -[ r₁p₂p₃p₄ + r₂p₁p₃p₄ + r₃p₁p₂p₄ + r₄p₁p₂p₃ ]
    b₃ = -p₁ - p₂ - p₃ - p₄
    b₂ = p₁p₂ + p₁p₃ + p₁p₄ + p₂p₃ + p₂p₄ + p₃p₄
    b₁ = -p₁p₂p₃ - p₁p₂p₄ - p₁p₃p₄ - p₂p₃p₄
    b₀ = p₁ p₂ p₃ p₄

Forming the Stolz inverse and applying the Schur-Cohn test to the denominator of I(z) will determine whether the Stolz inverse is stable. One root is located at z = 0, which does not affect the stability of the inverse. Applied to the remaining quadratic factor a₃z² + a₂z + a₁, the Schur-Cohn test requires:

    D(+1) = a₃ + a₂ + a₁ > 0   and   D(-1) = a₃ - a₂ + a₁ > 0   and   |a₁/a₃| < 1

The second condition is violated first as Δt is made small, indicating that the critical value of Δt can be obtained by solving this equation:

    1 - 3e^{-9Δt} + 5e^{-27Δt} - 5e^{-45Δt} + 3e^{-63Δt} - e^{-72Δt} = 0

This equation can be solved using a variety of techniques. Newton-Raphson was used to find the critical value:

    Δt_crit = 0.0814286

It is interesting to note that the lower order model (3rd order ladder) of the system leads to an inverse solution which is more stable than the higher order model. The method can be generalized using this procedure: 1.
Find the transfer function of the inverse algorithm.
2. Apply the Schur-Cohn test to the resulting denominator, leaving Δt as a parameter.
3. Reduce the value of Δt until one of the three Schur-Cohn conditions is violated.

Factored forms of the Stolz inverse filters derived from the transfer functions presented in Chapter 3 (Tables 3.5-3.7 and 3.11-3.13) appear in Tables 5.1 through 5.8. These tables show that each of the invariant method Stolz inverses becomes unstable as Δt is reduced. Since the Stolz inverse for the forward difference method contains no poles, it is unconditionally stable. In a similar way, the backward difference Stolz inverse poles occur only at the origin, indicating that it too is unconditionally stable. The Crank-Nicolson Stolz inverses for each time step considered are marginally stable, with four poles located on the unit circle at z = -1. Both the implicit 5/6 and implicit 3/4 Stolz inverses have one marginally stable pole located at z = -1, and have three unstable poles for all of the time steps considered.

Table 5.1: Impulse Invariant Stolz Inverse

    Δt = 0.001: I(z) = 6.983437E8 (z - 0.964640)(z - 0.973361)(z - 0.991040)(z - 1.00000) / [z (z + 0.263163)(z + 3.66556)]
    Δt = 0.01:  I(z) = 8.194773E4 (z - 0.697676)(z - 0.763379)(z - 0.913931)(z - 1.00000) / [z (z + 0.223288)(z + 3.12456)]
    Δt = 0.1:   I(z) = 3.401963E1 (z - 0.273237E-1)(z - 0.672055E-1)(z - 0.406570)(z - 1.00000) / [z (z + 0.355392E-1)(z + 0.768834)]
    Δt = 1.0:   I(z) = 1.000247 (z - 0.231952E-15)(z - 0.187953E-11)(z - 0.123410E-3)(z - 1.00000) / [z (z + 0.187907E-11)(z + 0.123440E-3)]

Table 5.2: Step Invariant Stolz Inverse

    Δt = 0.001: I(z) = 2.783229E9 (z - 0.964640)(z - 0.973361)(z - 0.991040)(z - 1.00000) / [(z + 0.995573E-1)(z + 0.985703)(z + 9.75777)]
    Δt = 0.01:  I(z) = 3.162795E5 (z - 0.697676)(z - 0.763379)(z - 0.913931)(z - 1.00000) / [(z
[Tables 5.2 through 5.6 (Step Invariant, Ramp Invariant, Forward Difference, Backward Difference, and Crank-Nicolson Stolz Inverses): factored gain, zero, and pole forms for Δt = 0.001, 0.01, 0.1, and 1.0.]
[Tables 5.6 through 5.8 (Crank-Nicolson, Implicit 5/6, and Implicit 3/4 Stolz Inverses): factored gain, zero, and pole forms for Δt = 0.001, 0.01, 0.1, and 1.0.]

5.4 Stabilization of Inverse Algorithms

The results of the preceding section provide the analytical framework necessary to understand the source and nature of instability in inverse solutions. This analysis poses the question of whether unstable methods might somehow be modified to yield stable results. Results are presented here which lead to a new class of unconditionally stable inverse algorithms. Stabilization is accomplished using a precompensation filter.
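As a numerical check on the stability boundary derived in the preceding section, the critical time step can be recovered directly from the D(-1) condition. A minimal sketch, with the condition written in terms of x = e^{-9Δt} and bisection used in place of Newton-Raphson:

```python
import math

def D_minus_1(dt):
    """D(-1) = a3 - a2 + a1 for the fourth order ladder, written in
    terms of x = exp(-9 * dt)."""
    x = math.exp(-9.0 * dt)
    return 1 - 3*x + 5*x**3 - 5*x**5 + 3*x**7 - x**8

# D(-1) is negative for small dt (condition violated) and positive for
# large dt, so bisect on an interval bracketing the sign change.
lo, hi = 0.05, 0.2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if D_minus_1(mid) < 0:
        lo = mid
    else:
        hi = mid

dt_crit = 0.5 * (lo + hi)
print(dt_crit)  # ~ 0.0814, in agreement with the quoted critical value
```

The bisection converges to the same boundary reported above, confirming that the impulse invariant Stolz inverse of the fourth order ladder is stable only for Δt above roughly 0.0814.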
This concept is illustrated in Figure 5.1 in conjunction with the Stolz inverse filter, but it may be used to stabilize any inverse filter. The precompensation concept will also prove to be useful in the analysis of other methods. Precompensation filter P(z) operates on the observation sequence y(n). The output of P(z) is passed to the input of St(z), a Stolz inverse filter. The resulting output of St(z) is the estimate sequence q̂(n). The actual implementation of the inverse algorithm need not separate the functions of the two filters, and is defined in this way:

    I(z) = P(z) St(z)                                                                     (5.15)

The method of precompensation for stability suggested by this approach requires that all unstable poles of St(z) be replaced by stable poles. This is accomplished in the precompensation filter: unstable poles of St(z) appear in the numerator of P(z). When the product of P(z) and St(z) is formed, these unstable poles are cancelled and replaced by the poles of P(z). By combining P(z) and St(z) into a single filter, I(z), unstable responses are eliminated. The functional form for P(z) depends upon the form of St(z). Assuming that St(z) is expressed in factored form:

    St(z) = (1/g_h) (z - p1)(z - p2)(z - p3)···(z - pn) / [(z - z1)(z - z2)···(z - zm)]   (5.16)

where:

    p1 ··· pn are the poles of H(z) (the direct transfer function)
    z1 ··· zm are the zeroes of H(z)
    g_h is the gain constant of H(z)

One possible form which replaces each unstable pole with a stable one is:

    P(z) = g_p (z - z1)(z - z2)···(z - zk) / [(z - w1)(z - w2)···(z - wk)]                (5.17)

where w1 ··· wk all lie within the unit circle. This form also maintains the dimensionality of the original Stolz inverse, and guarantees unconditional stability of the resulting inverse algorithm. The gain constant g_p can be selected to preserve the steady state behavior of the resulting filter. It will be desirable for the precompensation filter to have unity gain for constant (e.g. unit step) inputs.
This is accomplished using the final value theorem:

    1 = lim_{n->∞} y(n) = lim_{z->1} [(z - 1)/z] Y(z)                                     (5.18)

where, in this case, Y(z) = P(z)U(z). Using the z-transform of a unit step, U(z) = z/(z - 1), and equation (5.17) in the RHS of equation (5.18) yields:

    1 = lim_{z->1} P(z) = lim_{z->1} g_p (z - z1)(z - z2)···(z - zk) / [(z - w1)(z - w2)···(z - wk)]

Solving for g_p yields:

    g_p = (1 - w1)(1 - w2)···(1 - wk) / [(1 - z1)(1 - z2)···(1 - zk)]                     (5.19)

[Figure 5.1: Inverse Algorithm Representation. y(n) -> P(z) (precompensation filter) -> St(z) (Stolz inverse filter) -> q̂(n); equivalently, y(n) -> I(z) = P(z)St(z) -> q̂(n). Filter: frequency domain description. Algorithm: time domain description.]

5.4.1 FIR Precompensation Filter

One precompensation solution which has many desirable features replaces each Stolz pole with a pole at zero. The gain constant is selected so that the steady state gain of the precompensation filter is unity. This implementation will be referred to as the FIR filter, since it leads to an inverse filter which has an impulse response that is finite in duration. The precompensation filter becomes:

    P(z) = g_p (z - z1)(z - z2)···(z - zm) / z^m                                          (5.20)

and

    g_p = 1 / [(1 - z1)(1 - z2)···(1 - zm)]                                               (5.21)

Taking the product of P(z) and St(z) yields:

    I(z) = (g_p/g_h) (z - p1)(z - p2)(z - p3)···(z - pn) / z^m                            (5.22)

or, expressing I(z) as the ratio of polynomials:

    I(z) = (g_p/g_h) (z^N + a_{N-1} z^{N-1} + ··· + a1 z + a0) / z^M                      (5.23)

Since

    Q̂(z) = I(z) Y(z)                                                                      (5.24)

then

    Q̂(z) = (g_p/g_h) [z^{N-M} + a_{N-1} z^{N-M-1} + ··· + a1 z^{1-M} + a0 z^{-M}] Y(z)   (5.25)

The inverse algorithm can be obtained by evaluating the inverse transform of this equation:

    q̂(i) = (g_p/g_h) [y(i+N-M) + a_{N-1} y(i+N-M-1) + ··· + a1 y(i+1-M) + a0 y(i-M)]     (5.26)

Note that the inverse algorithm consists of N + 1 terms, where N is the order of the system used to represent the direct problem. This implies that an impulse input to the inverse algorithm only produces non-zero output for N + 1 time steps.
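The finite duration of the impulse response is easy to demonstrate. In the sketch below, the FIR inverse is reduced to a moving weighted sum whose weights are the coefficients of the expanded product (z - p1)···(z - pn); the pole values are rounded from the fourth order impulse invariant example at Δt = 0.01, and the overall gain factor g_p/g_h is omitted since it does not affect the duration:

```python
def poly_from_roots(roots):
    """Coefficients (descending powers of z) of the product of (z - r)."""
    coeffs = [1.0]
    for r in roots:
        shifted = coeffs + [0.0]                   # multiply by z
        scaled = [0.0] + [r * c for c in coeffs]   # multiply by r
        coeffs = [s - t for s, t in zip(shifted, scaled)]
    return coeffs

# Direct-model poles (rounded): these become the zeros of the Stolz
# inverse, and hence the FIR inverse filter's moving-sum weights.
poles = [1.0, 0.9139, 0.7634, 0.6977]
weights = poly_from_roots(poles)     # N + 1 = 5 weights

# A single unit pulse in the observation sequence ...
y = [0.0] * 12
y[3] = 1.0
estimates = [sum(w * y[i - j] for j, w in enumerate(weights) if 0 <= i - j)
             for i in range(len(y))]

# ... produces non-zero output for exactly N + 1 time steps.
nonzero = [e for e in estimates if abs(e) > 1e-12]
print(len(nonzero))  # 5
```

A single corrupted observation therefore contaminates at most N + 1 consecutive estimates, which is the key noise-propagation property of the FIR form.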
This is an important consideration if uncorrelated noise is present in the observations, since an error in a single observation will propagate no farther than N + 1 time steps. Section 7.4 contains several examples and plots of the impulse response of these filters.

5.4.2 FRP Precompensation Filter

Another precompensation solution which has many desirable features replaces each unstable Stolz pole with a pole at zero. As before, the gain constant is selected so that the steady state gain of the precompensation filter is unity. This implementation produces a precompensation filter which is an FIR filter, and will be referred to as FRP precompensation. Let z1 ··· zk be the list of unstable poles.

    P(z) = g_p (z - z1)(z - z2)···(z - zk) / z^k                                          (5.27)

and

    g_p = 1 / [(1 - z1)(1 - z2)···(1 - zk)]

Taking the product of P(z) and St(z) yields:

    I(z) = (g_p/g_h) (z - p1)(z - p2)(z - p3)···(z - pn) / [z^k (z - z_{k+1})···(z - zm)]  (5.28)

or, expressing I(z) as the ratio of polynomials:

    I(z) = (g_p/g_h) (z^N + a_{N-1} z^{N-1} + ··· + a1 z + a0) / [z^k (z^{M-k} + b_{M-k-1} z^{M-k-1} + ··· + b0)]    (5.29)

Solving for Q̂(z):

    Q̂(z) [1 + b_{M-k-1} z^{-1} + ··· + b0 z^{k-M}] = (g_p/g_h) [z^{N-M} + a_{N-1} z^{N-M-1} + ··· + a0 z^{-M}] Y(z)    (5.30)

Taking the inverse z-transform of (5.30) produces the inverse algorithm:

    q̂(i) = -b_{M-k-1} q̂(i-1) - ··· - b0 q̂(i+k-M)
           + (g_p/g_h) [y(i+N-M) + ··· + a1 y(i+1-M) + a0 y(i-M)]                         (5.31)

Note that this inverse algorithm uses the same N + 1 observations as the FIR inverse algorithm. It contains, in addition, terms which depend on previous estimates q̂(i-1), etc., and may have impulse response terms which are quite long lived. Section 7.4 contains several examples and plots of the impulse response of these filters.

Chapter 6

Frequency Domain Analysis

6.1 Introduction

Breaking the inverse algorithms into a precompensation filter and a Stolz filter allows us to analyze the behavior of each filter separately. Figure 6.1 depicts the inverse algorithm represented with these two filters.
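The pole cancellation underlying the P(z)St(z) factorization can be illustrated numerically. The sketch below uses zero and pole locations loosely based on the fourth order impulse invariant example; `rational` is a hypothetical helper written for this illustration, not part of any algorithm in the text:

```python
def rational(zeros, poles, gain):
    """Return a function evaluating gain * prod(z - zi) / prod(z - pi)."""
    def f(z):
        num, den = gain, 1.0
        for zi in zeros:
            num *= (z - zi)
        for pi in poles:
            den *= (z - pi)
        return num / den
    return f

# Illustrative Stolz inverse: the direct model's poles become zeros and
# its zeros become poles; -3.125 is the unstable pole.
St = rational(zeros=[1.0, 0.914, 0.763, 0.698],
              poles=[0.0, -0.223, -3.125], gain=1.0)

# Precompensation cancels the unstable pole, replacing it with a pole at
# the origin; the gain 1/4.125 gives unity DC gain per (5.19).
P = rational(zeros=[-3.125], poles=[0.0], gain=1.0 / 4.125)

def I(z):
    return P(z) * St(z)

# Near the unstable pole, St blows up but the cascade stays bounded.
z = -3.125 + 1e-6
print(abs(St(z)) > 1e4, abs(I(z)) < 1e3)  # True True
```

Evaluating near z = -3.125 shows the Stolz factor diverging while the combined filter I(z) remains finite, which is exactly the stabilization mechanism of Section 5.4.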
The Stolz filter is derived from the transfer function which represents the direct solution to the problem. This filter will, in a noise free environment, provide error free estimates. Difficulty is encountered when measurements are corrupted by noise. Unstable states in the inverse filter can be excited, and noise components in certain frequency bands can be amplified. One interpretation of the inverse algorithm in frequency domain terms suggests that the precompensation filter performs two functions: 1) it generates estimates of θ(n), denoted θ̂(n), given the observations y(n), and 2) it removes signals which can excite unstable poles in the inverse filter. If the spectrum of the heat flux has its largest amplitude components well below the sampling frequency, as is the case for our test functions, then the precompensation filter can remove high frequency noise components with low pass filter characteristics. The precompensation filter can also introduce time shifts to compensate for leads or lags introduced by the direct model. Stability is achieved by removing poles which are outside the unit circle.

[Figure 6.1: Inverse Filter Representation. y(n) -> I(z) = P(z)St(z) -> q̂(n); equivalently, y(n) -> P(z) -> θ̂(n) -> St(z) -> q̂(n).]

Specifically, the precompensation filter can be found using the inverse filter and factoring out the Stolz filter:

    P(z) = I(z) / St(z)                                                                   (6.1)

[Figure 6.4: Beck's Precompensation, Impulse Invariant, R = 6, and Figure 6.5: Beck's Precompensation, Forward Difference, R = 5. Each plot shows magnitude (dB) and phase (degrees) versus normalized frequency ν = ωΔt/π, fourth order model, Δt = 0.010.]
[Figures 6.6 and 6.7: Beck's Precompensation, Forward Difference, R = 7 and R = 9. Figures 6.8 and 6.9: Beck's Precompensation, Crank-Nicolson, R = 3 and R = 5. Figures 6.10 and 6.11: Beck's Precompensation, Implicit 5/6, R = 7 and R = 9. Each plot shows magnitude (dB) and phase (degrees) versus normalized frequency ν = ωΔt/π, fourth order model, Δt = 0.010.]

6.3 FIR Precompensation Filter

Frequency response plots of the FIR precompensation filter are presented in this section. The FIR precompensation filter transfer function is:

    P(z) = g_p (z - z1)(z - z2)···(z - zm) / z^m                                          (6.8)

    g_p = 1 / [(1 - z1)(1 - z2)···(1 - zm)]

Figures 6.12 through 6.17 contain frequency response plots for several choices of truth model. The impulse, step and ramp invariant methods produce lowpass filters, as is shown in Figures 6.12 through 6.14.
These figures generally indicate less attenuation in the high frequency bands than the precompensation filters generated with Beck's method. The Crank-Nicolson precompensation filter has rather steep cutoff characteristics, and attenuates frequencies approaching the sample frequency by more than 70 dB. It is interesting to note that, while Crank-Nicolson does not, the implicit 5/6 and implicit 3/4 methods lead to precompensation filters which enhance noise near the cutoff frequencies. These enhancement bands are broader than their Beck's method counterparts, but smaller in amplitude. Note that since the forward difference method transfer function contains no zeros, no precompensation is necessary. Similarly, since the backward difference method transfer function has its zeros at the origin, no precompensation is necessary.

[Figures 6.12 through 6.14: FIR precompensation filter frequency response (magnitude in dB and phase in degrees versus ν = ωΔt/π) for the impulse invariant, step invariant, and ramp invariant methods, Δt = 0.010.]
[Figures 6.15 through 6.17: FIR precompensation filter frequency response for the Crank-Nicolson, implicit 5/6, and implicit 3/4 methods, Δt = 0.010.]

6.4 FRP Precompensation Filter

The frequency response of the FRP precompensation filter presented in Section 5.4 is identical or similar to that of the FIR precompensation filter of the previous section. The precompensation filter transfer function is:

    P(z) = g_p (z - z1)(z - z2)···(z - zk) / z^k                                          (6.9)

    g_p = 1 / [(1 - z1)(1 - z2)···(1 - zk)]

where z1 through zk have magnitude greater than one. Since the results of the previous section are representative, further figures for these filters have been omitted.

6.5 APF Precompensation Filter

In a fashion quite similar to the development presented in Section 5.4, a precompensation filter can be devised which is unconditionally stable and which preserves the power spectral density of the true heat flux. This is particularly important if the heat flux is a random signal with high frequency components. The all pass filter (APF) has the property that its amplitude response is constant for all frequencies. We are interested, in particular, in the class of APFs which have unity gain. A first order APF, with unity gain, has the following form:

    P(z) = -(1/a) (z - a) / (z - 1/a)                                                     (6.10)

Checking the amplitude frequency response:
    |P(e^{jπν})| = (1/|a|) |cos(πν) + j sin(πν) - a| / |cos(πν) + j sin(πν) - 1/a|
                 = (1/|a|) [ (1 - 2a cos(πν) + a^2) / (1 - (2/a) cos(πν) + 1/a^2) ]^{1/2}
                 = [ (1 - 2a cos(πν) + a^2) / (a^2 - 2a cos(πν) + 1) ]^{1/2}
                 = 1

Precompensation is accomplished using one all pass filter section for each unstable pole of St(z), and cascading them together:

    P(z) = g_{p1} (z - z1)/(z - 1/z1) · g_{p2} (z - z2)/(z - 1/z2) ··· g_{pk} (z - zk)/(z - 1/zk)    (6.11)

where:

    g_{p1} = -1/z1, etc.                                                                  (6.12)

which leads to the inverse filter:

    I(z) = (g_{p1} g_{p2} ··· g_{pk} / g_h) (z - p1)(z - p2)···(z - pn) / [(z - 1/z1)···(z - 1/zk)(z - z_{k+1})···(z - zm)]    (6.13)

Figures 6.18 and 6.19 depict the frequency responses for precompensation filters arising from the 4th order insulated wall example. The amplitude response for these is equal to one (0 dB) for all frequencies. The impulse invariant example shown in Figure 6.18 is representative of the other invariant methods. Notice that very little distortion in phase is introduced at the lower frequencies, and it does not become appreciable until the sampling frequency is approached, when the maximum phase distortion of 180 degrees occurs. The 5/6 implicit method example depicted in Figure 6.19 shows that the phase distortion starts to become significant at a frequency approximately one order of magnitude lower than in the impulse invariant case, and the maximum phase distortion is double, at 360 degrees.
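The constant amplitude property of the first order section (6.10) is simple to verify numerically. A sketch, using a = -3.125 as an illustrative unstable Stolz pole (|a| > 1):

```python
import cmath
import math

def apf(z, a):
    """First order all pass section: P(z) = -(1/a)(z - a)/(z - 1/a)."""
    return -(1.0 / a) * (z - a) / (z - 1.0 / a)

a = -3.125   # illustrative unstable pole; the APF pole lands at 1/a = -0.32

# Unity magnitude at every tested frequency on the unit circle ...
for nu in (0.001, 0.01, 0.1, 0.5, 0.9):
    z = cmath.exp(1j * math.pi * nu)
    assert abs(abs(apf(z, a)) - 1.0) < 1e-12

# ... and unity gain at DC (z = 1), as required for step inputs.
assert abs(apf(1.0, a) - 1.0) < 1e-12
print("all pass: |P| = 1 at all tested frequencies, P(1) = 1")
```

The section thus leaves the amplitude spectrum of the heat flux untouched and introduces only phase distortion, consistent with the responses described for Figures 6.18 and 6.19.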
[Figure 6.18: APF Precompensation, Impulse Invariant, and Figure 6.19: APF Precompensation, Implicit 5/6. Magnitude (dB) and phase (degrees) versus ν = ωΔt/π, Δt = 0.010.]

Chapter 7

Time Domain Results

7.1 Introduction

Figure 7.1 depicts a simple approach to the inverse heat conduction problem, which represents numerical deconvolution of the observations to obtain estimates. This is philosophically equivalent to the method suggested by Stolz, which has proved to be useful under some circumstances, but does not reliably produce practical inverse filters. Many of the problems which plague this approach have been outlined in previous chapters: 1) the dependence of the inverse filter stability and noise sensitivity on the method selected to represent the direct solution transfer function, and 2) the dependence of the inverse filter stability and noise sensitivity on Δt. If we could observe temperatures without errors, and if H(z) could represent the heat transfer system (at a given sample rate) with arbitrary accuracy, then this simple method would work fine. Any unstable responses in H(z) and its inverse would exactly cancel, and q̂(n) = q(n). Neither of the above two conditions is satisfied in most situations of practical importance, however, making this method of rather limited interest. It does, however, provide the basis for the formulation of several more useful solution techniques. A more successful and practical solution is depicted in Figure 7.2.
A precompensation filter is added to the Stolz inverse filter in order to reduce the noise sensitivity and increase the stability of the inverse filter.

[Figure 7.1: Simple Representation of the IHCP: q(t) -> H(z), observations y(n) (with additive noise e(n)) -> St(z) -> q̂(n). Figure 7.2: Practical Representation of the IHCP: q(t) -> H(z), observations y(n) -> P(z) -> St(z) -> q̂(n). Figure 7.3: Truth Model Representation of the IHCP: q(t) -> physical heat transfer system, observations y(n) -> P(z) -> St(z) -> q̂(n).]

Beck's function specification method can be analyzed in this fashion, and the FIR, FRP, and APF precompensation methods arose as a result of these considerations. Figure 7.3 represents the ultimate use, and the circumstances under which the inverse filter must perform. The truth model for the direct solution is the physical heat transfer system, not the approximate solution used to generate an inverse filter. This brings to light a relatively subtle point which must be dealt with: even in a noise free environment, the relatively small differences between the observations produced by Figure 7.2 and by Figure 7.3 can lead to substantially different estimates. This is the subject of discussion in Section 7.4. Turning now to the insulated wall example, and using a fourth order ladder truth model implementation, the dependence of the inverse filter stability on the truth model selected can be explored. Figures 7.4 through 7.11 show the locations, in the complex plane, of the Stolz inverse transfer function poles and zeros for several truth models developed earlier. The Stolz inverse is generated simply by inverting H(z). Each of the eight methods has a direct transfer function with four poles, one pole resulting from each temperature node in the model, which become four zeros in the inverse. Notice that the locations of these four zeros for all eight methods are quite similar; i.e.
each method has a zero near 1.0, 0.9, 0.7 and 0.6, indicating that the direct solution of each is a sum of four decaying geometric sequences with similar characteristic time constants. The situation with the inverse poles (direct zeros) is quite different, however. The number of inverse poles depends on the method selected. The impulse and step invariant methods have three poles, while the ramp invariant method has four. The forward difference method transfer function has no poles at all.

[Figures 7.4 through 7.11: Stolz inverse pole and zero locations in the complex z-plane for the fourth order example, Δt = 0.01: impulse invariant, step invariant, ramp invariant, forward difference, backward difference, Crank-Nicolson, implicit 5/6, and implicit 3/4 methods.]

The backward difference method and the other implicit methods considered, Crank-
In this case, the Stolz inverse derived from the impulse invariant method, is unstable as a result of the pole which is located at -3.12. This leads to an alternating sign solution which diverges. The step invariant case is less stable in the sense that it contains a pole at 0.59 which diverges even more rapidly, and the ramp invariant case is worse yet with two unstable poles, one of which is located at -20.6 which diverges extremely rapidly. The forward difference Stolz inverse has no poles which guarantees uncondie tional stability, even though the direct transfer function may not be. The backward difference method leads to a Stolz inverse which is unconditionally stable since all the poles are located at the origin. The Stolz inverse resulting from the Crank- Nicolson method is marginally stable since it has multiple poles located on the unit circle at -1, which lead to alternating sign solutions that neither grow or decay. The behavior of the implicit 5/6 and implicit 3/4 methods are similar. Each Stolz inverse contains one unstable pole located on the positive real axis which leads to a solution with constant sign that diverges. In addition, each contains a pole at -1.0, which like the Crank-Nicolson case produces a marginally stable component of the solution. The precompensation filter scheme of Figure 7.2 allows us to place the poles of inverse filter, 1(2), at any desired location. Each of the methods suggested in Chapter 6 has desirable characteristics under the proper circumstances. When a choice of precompensation has been made, the resulting inverse filter must be converted to a difference equation which operates on the observation sequence to generate estimates. This can be accomplished by taking the inverse z-transform 133 of the inverse filter, which is the subject of Section 7.2. 0 Section 7.3 shows the effects of precompensation on heat flux estimates when Operating in a noise free environment. 
In this environment, the observation series is identical to that produced by the truth model used to generate the inverse algorithm, and no measurement noise is present. A step heat flux is used to demonstrate these effects. ' The effect of uncorrelated errors in the observations is simulated with the pulse response of the inverse filter. That is, the effects of a single measurement error on the estimate sequence are indicative of the uncorrelated noise performance of the inverse filter. This is reasonable when viewed from the. frequency domain perspective since the PSD of uncorrelated noise is constant for all frequencies, as is the amplitude spectrum of a pulse. Section 7.4 presents the results of noise response benchmark testing. 1 ' Since the physical system behavior is only approximated by the discrete time truth models used to describe them, there are small but important differences between the response of the physical system and the truth model to any given input. These differences can be thought of as another source of noise which corrupts the observations. Results are presented in Section 7.5 on the effects of this source of noise. The observation sequence is generated using the exact continuous time Fourier series solution to a unit step applied heat flux, sampled at At intervals. These observations are used by various inverse filters to demonstrate this effect. FIR, FRP, and APF precompensation guarantees the stability of an inverse filter. Substantial gains in accuracy can be obtained if the observation sequence is prefiltered in a way that matches the bandwidth of the heat flux, if the bandwidth of the heat flux is substantially lower than the sample frequency. A triangle heat 134 flux benchmark signal is used demonstrate this concept, in Section 7.6. 7.2 Discrete Time Solutions Once the inverse transfer function has been found as a function of 2, it is generally a relatively easy task to transform it back into discrete time domain. 
This produces an estimation rule which can easily be applied to the measurements. Since:

    Q̂(z) = I(z) Y(z)                                                                      (7.1)

and where I(z) can be expressed as the ratio of two polynomials in z:

    I(z) = g_i (c0 + c1 z + ··· + c_M z^M) / (d0 + d1 z + ··· + d_N z^N)                  (7.2)

Using (7.2) in (7.1) yields:

    (d0 + d1 z + ··· + d_N z^N) Q̂(z) = g_i (c0 + c1 z + ··· + c_M z^M) Y(z)               (7.3)

Taking the inverse z-transform of each side reveals:

    d0 q̂(k) + d1 q̂(k+1) + ··· + d_N q̂(k+N) = g_i [c0 y(k) + c1 y(k+1) + ··· + c_M y(k+M)]    (7.4)

Let n = k + N, or k = n - N, in (7.4):

    d0 q̂(n-N) + d1 q̂(n-N+1) + ··· + d_N q̂(n) = g_i [c0 y(n-N) + c1 y(n-N+1) + ··· + c_M y(n-N+M)]    (7.5)

Rearranging and solving for the present value of the estimate, q̂(n):

    q̂(n) = -(1/d_N) [d0 q̂(n-N) + d1 q̂(n-N+1) + ··· + d_{N-1} q̂(n-1)]
           + (g_i/d_N) [c0 y(n-N) + c1 y(n-N+1) + ··· + c_M y(n-N+M)]                     (7.6)

Three important cases must be considered: 1) when M < N, the estimate depends only on previous estimates and previous measurements; 2) when M = N, the estimate depends on the previous estimates, previous measurements, and the present measurement; 3) when M > N, the estimate depends on the previous estimates, previous measurements, present measurement, and a number of anticipated measurements. If M < N then:
136 If M > N then: 6(a) = 5:1[1100111— N) + 1119 n — N +' 1) + + 111040111 -— 1)] (7.9) + 3931000 - N) + c100 — N +1) +'3~+ c.0300 -1)l + 510000)) N + 119—£[CN+Iy(n +1) + 6111403101 1' 2) T 1' CAN/("— N + M)] N where the RHS depends on previous values of the estimates and previous val- ues of the measurements, and the present value of the measurement, y(n) and anticipated measurements y(n + 1) through y(n - N + M). Changing index variables one more time yields a form useful for sequential estimation of the heat flux sequence. Using (7.9) and k=n—N+M => n=k+N—M then: 01k+N-M) = 5£[do@(k—M)+---+d~-1@(k+N-M—1)] + ,9—‘lcoy(k—M)+~-+c~-1y(k+N—M—1)l N + £[ch(k+N—M)] div '1' %[CN+1y(k + N — M +1) + + Cuy(k)] (7.10) N The last term in (7.10), y(k), is the latest measurement taken. The resulting estimates, q(k + N — M), are delayed by N — M time intervals. 7.3 Error Free Observation Results With the truth models depicted in Figure 7.2 and e(n) = 0, several examples were run to benchmark the performance of the inverse filters. A unit step function heat 137 flux was applied to the truth model, and the resulting temperature step response output was applied to the inverse filter to produce step estimates. Figure 7.12 depicts the step response of the 4th order impulse invariant trans- fer function, used as observations for the inverse filter. Figures 7.13 through 7.15 depict the step estimates resulting from the FIR, FRP, and APF precompensation for this case. The FIR and FRP estimates are quite smooth and both methods produce similar results. The Stolz inverse has three poles at 0., 0.2232, and 0.1246. The FIR precompensation transforms the second two poles into poles located at 0. The FRP precompensation filter transforms the third pole into a pole located at 0., but leaves the second pole as is, at its location near 0. 
The FRP filter is somewhat broader band than the FIR filter, as can be observed from the frequency response plots of Chapter 6, and as a result responds somewhat more quickly than the FIR case. The APF precompensation filter places the third pole at location 0.32, resulting in broadband precompensation and a faster re- sponding inverse filter. This speed is at the expense of some overshoot in the estimates as can be seen in Figure 7.15. Figures 7.16 through 7.19 and Figures 7.20 through 7.23 depict similar results for the step invariant and ramp invariant cases, respectively. The forward difference and backward difference cases lead to stable inverses, as observed above. The result is that no precompensation is required. Figures 7.24 through 7.27 depict the step estimates and observation sequences for these MO 08868. Figures 7.28 through 7.31 contain the results for FIR, FRP, and APF precom- ‘ pensation of the Crank-Nicolson filter. These results are similar to the invariant filter cases, although the effects of precompensation are somewhat more pro- nounced. The narrow band low pass FIR inverse filter produces estimates which €557“! 138 are quite smooth, but the responds rather slowly. The FRP inverse is faster with less smoothing, and the APF is the fastest. 139 Impulse Invartant (N=04) dt- 0 010 Httv(z) - transfer functaon Temperature Step Response 8.28 u. // 0.15 . /' temperanurit O Si .. -/ ° / .. J/ ' '8.88 8.88 8.88 8.18 8.18 8.28 8.28 8.88 8.88 8.48 13.. 
Figure 7.12: Temperature Step Response - Impulse Invariant Method

[Plot: step heat flux estimates, Impulse Invariant (N=04), dt = 0.010, FIR inverse filter]

Figure 7.13: Impulse Invariant Method - FIR Inverse Filter Step Heat Flux Estimates

[Plot: step heat flux estimates, Impulse Invariant (N=04), dt = 0.010, FRP inverse filter]

Figure 7.14: Impulse Invariant Method - FRP Inverse Filter Step Heat Flux Estimates

[Plot: step heat flux estimates, Impulse Invariant, APF inverse filter]

Figure 7.15: Impulse Invariant Method - APF Inverse Filter Step Heat Flux Estimates

[Plot: temperature step response, Forward Difference (N=04), dt = 0.010]

Figure 7.16: Temperature Step Response - Forward Difference Method

Λ(t) = 1 − |t|/(τ/2),   |t| < τ/2

The Fourier transform of this function is:

Λ(f) = (τ/2) sinc²(τf/2)

where:

sinc(f) = sin(πf)/(πf)

The bandwidth associated with this function is f_b = 2/τ. When the triangle function is sampled at frequency f_s = 1/Δt, the spectrum of the sampled triangle is replicated at n·f_s intervals along the frequency axis [5], where n is any integer. Assuming that the sampling frequency is much greater than f_b, no overlap in the amplitude spectra (called aliasing) will occur.

Consider the case when f_s = 16.666 Hz (Δt = 0.06) and τ = 1.2 seconds. The bandwidth of this signal is f_b = 1.6666 Hz, which is well below the sampling frequency. Using this value in (7.15), a value for a can be determined:

a = [ (4 − 2 cos(2πf_bΔt)) ± √( (4 − 2 cos(2πf_bΔt))² − 4 ) ] / 2

a = 0.544114 and 1.83785

The second root is disregarded since it was assumed that a is a positive constant, less than one.
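The two roots quoted above can be reproduced numerically. The sketch below assumes a first-order low-pass prefilter of the form H(z) = (1 − a)/(1 − a z⁻¹); setting its half-power point to f_b yields a quadratic in a whose roots match the values quoted in the text.

```python
import math

# Numerical check of the prefilter parameter: assuming the first-order
# low-pass H(z) = (1 - a)/(1 - a z^-1), the half-power condition
# |H(e^{jw})|^2 = 1/2 at w = 2*pi*fb*dt gives the quadratic
# a^2 - (4 - 2*cos(w))*a + 1 = 0, solved here by the quadratic formula.

fb = 1.66666          # prefilter bandwidth, Hz (matched to the triangle flux)
dt = 0.06             # sample interval, s (fs = 16.666 Hz)

w = 2.0 * math.pi * fb * dt
b = 4.0 - 2.0 * math.cos(w)
roots = [(b - math.sqrt(b * b - 4.0)) / 2.0,
         (b + math.sqrt(b * b - 4.0)) / 2.0]
print(roots)          # ~0.544114 and ~1.83785; only the root below one is kept
```

The product of the two roots is 1, so the discarded root is simply the reciprocal of the retained one.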
Performance figures for the noise corrupted observations using the bandwidths listed in Table 7.1 are presented in Table 7.2. The sum of the squared errors (SSE), and σ_q̂/σ_y, are calculated for two different errors, where: 1) the error is the difference between the heat flux estimates and the true heat flux, and 2) the error is the difference between the heat flux estimates and the heat flux estimates in a noise free environment.

Table 7.2: Prefilter Bandwidth Performance

  f̄_b  | f_b (Hz) |  SSE_q   | σ_q̂/σ_y |  SSE_ê   | σ_ê/σ_y
  -----|----------|----------|----------|----------|--------
  0.1  | 0.83333  | 0.123794 |  37.79   | 0.026217 |  17.39
  0.2  | 1.66666  | 0.093909 |  32.91   | 0.089915 |  32.20
  0.3  | 2.50000  | 0.182963 |  45.94   | 0.166593 |  43.83
  0.4  | 3.33333  | 0.271402 |  55.95   | 0.239356 |  52.54
  0.5  | 4.16666  | 0.343696 |  62.96   | 0.300466 |  58.87
  0.6  | 5.00000  | 0.398671 |  67.81   | 0.348038 |  63.36
  0.7  | 5.83333  | 0.438240 |  71.10   | 0.382825 |  66.45
  0.8  | 6.66666  | 0.464683 |  73.21   | 0.406306 |  68.46
  0.9  | 7.50000  | 0.479810 |  74.39   | 0.419817 |  69.59

where f̄_b = f_b/(f_s/2) is the prefilter bandwidth normalized to the Nyquist frequency, and:

σ_q̂ = [ (1/s) Σ_{i=1}^{s} (q̂_i − q_i)² ]^{1/2}

σ_ê = [ (1/s) Σ_{i=1}^{s} (q̂_i − q̂_i|σ_y=0)² ]^{1/2}

SSE_q = Σ_{i=1}^{s} (q̂_i − q_i)²

SSE_ê = Σ_{i=1}^{s} (q̂_i − q̂_i|σ_y=0)²

σ_y = 0.0017

Since most inverse algorithms distort the estimates, even in a noise free environment, the second performance criterion implicitly includes the effects of this distortion in addition to the effects of noise. This would suggest that the first criterion is a better indicator of benchmark inverse algorithm performance. This is supported further by the observation that the performance figures for σ_ê/σ_y continue to get smaller as the bandwidth of the prefilter is decreased, but σ_q̂/σ_y shows that the matched prefilter (f̄_b = 0.2) has optimal performance, as predicted by the spectrum analysis. Figure 7.44 contains a plot of the heat flux estimates for the matched bandwidth prefilter case. Notice that the estimates track the actual triangle heat flux quite closely, as is suggested by the minimum SSE_q listed in Table 7.2 for this case.
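The tabulated ratios and sums of squared errors are mutually consistent and can be cross-checked. The record length s = 30 used below is an inference (a 1.8 s estimate record at Δt = 0.06), not a value stated on this page; σ_y = 0.0017 is as given above.

```python
import math

# Consistency check on Table 7.2: sigma_qhat/sigma_y should equal
# sqrt(SSE_q / s) / sigma_y. The record length s = 30 is an inference
# (a 1.8 s record at dt = 0.06), not a value stated in the text.

sigma_y = 0.0017
s = 30
sse_q = [0.123794, 0.093909, 0.182963, 0.271402, 0.343696]
ratios = [math.sqrt(v / s) / sigma_y for v in sse_q]
print([round(r, 2) for r in ratios])   # table lists 37.79, 32.91, 45.94, 55.95, 62.96
```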
The comparable inverse filter using Beck's method uses the 6th order impulse invariant truth model with a prediction horizon of R = 2. Figure 7.45 contains a plot of the estimates generated using this inverse, operating on the observations used in the examples above. When these estimates are compared with the actual heat flux, SSE_q = 0.266353 and therefore σ_q̂/σ_y = 57.4700. This roughly corresponds to the squared error performance of the FIR inverse with a prefilter bandwidth of 3.33333 Hz, which is a bandwidth about two times wider than the matched filter bandwidth. When the estimates are compared with the noise free estimates, SSE_ê = 0.277254 and therefore σ_ê/σ_y = 56.5496, which is closer to the sum of squared error performance of the FIR inverse with a bandwidth of 4.16666 Hz.

[Plot: heat flux estimates, Impulse Invariant (N=06), dt = 0.060, FIR inverse filter, matched prefilter]

Figure 7.44: Triangle Heat Flux Estimates with Prefilter (a = 0.544)

[Plot: heat flux estimates, Impulse Invariant (N=06), dt = 0.060, Beck's inverse filter, R = 2, with noise]

Figure 7.45: Beck's Method Triangle Heat Flux Estimates

Chapter 8

Summary and Conclusions

The primary objective of this work was to develop the tools necessary to analyze the inverse heat conduction problem in frequency domain. This approach has provided insights, not readily obtainable in time domain, which ultimately led to new solution techniques for direct as well as inverse problems. Stability issues, which had previously been investigated via numerical experiment, have been broken down in frequency domain. Several direct and inverse methods were analyzed, drawing heavily upon transfer function and transform concepts. A new class of inverse algorithms, which employ precompensation filtering, was developed.

As a result of this inquiry, the importance of a frequency domain representation of the direct problem was clear.
Inverse algorithms described in the literature are frequently derived from a discrete time direct problem solution. The performance of the inverse algorithm was observed to be highly dependent upon this choice. By treating the direct problem and inverse problem separately, the inverse algorithm method and performance were decoupled from the direct problem.

Chapter 3 was devoted to analysis of direct problems. Transfer functions of common finite difference methods were developed using z-transforms. Matrix difference equations for forward difference, backward difference, Crank-Nicolson, and two generalized implicit methods were developed. These equations were manipulated into standard discrete time state equation form, and solved for transfer functions, as a ratio of two polynomials in z.

Finite difference methods are based on approximation of space and time derivatives with simple differences. When spatial derivatives only were approximated, a thermal resistor-capacitor ladder network resulted. These lumped parameter models were described by ordinary differential equations in standard continuous time state equation form. The problem was, at this point, converted into a discrete time model. Several good choices for this conversion exist, and were explored. The time derivatives can be approximated with simple differences, as above, or error free discrete time solutions can be found if some assumptions about the input signal can be made; if the input can be reasonably modelled as a sequence of impulses, steps, or ramps, a direct discrete time solution can be constructed which exactly matches samples of the continuous time model, regardless of the time step selected. These techniques, referred to as input invariant methods, and the resulting transfer functions were developed for the impulse, step, and ramp invariant methods in Section 3.2.1.
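The input invariant idea can be illustrated on a single lump. The network, λ, and Δt below are illustrative values, not taken from this work; the point is that the step invariant discrete model reproduces exact samples of the continuous response for any time step.

```python
import math

# Step-invariant discretization of a one-lump network x' = -lam*x + lam*u:
# if u is piecewise constant over each step, the discrete model
#   x[k+1] = phi*x[k] + (1 - phi)*u[k],  phi = exp(-lam*dt)
# reproduces samples of the continuous solution exactly, for any dt.
# (lam and dt here are illustrative values, not from the dissertation.)

lam, dt, steps = 2.0, 0.25, 8
phi = math.exp(-lam * dt)

x = 0.0
for k in range(steps):               # unit step input u = 1 applied at t = 0
    x = phi * x + (1.0 - phi) * 1.0

exact = 1.0 - math.exp(-lam * dt * steps)   # continuous step response at t = steps*dt
print(abs(x - exact))                # agrees to machine precision
```

A simple forward difference replacement of the time derivative would instead incur an error that grows with dt; the invariant model has none for this input class.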
The relationship between the lumped parameter continuous time state equation models and finite difference discrete time state equation models was also developed, in Section 3.3.3.

Input invariant methods may also be applied to direct problems in which the continuous time solution can be expressed as a Fourier Series. The accuracy of discrete time solutions formed in this way is dependent only upon the number of terms included in the truncated Fourier Series, not on the time step selected. An example was presented in Section 3.3.4.

Chapter 4 contains frequency domain descriptions of two important inverse solution methods: 1) Stolz's method, and 2) Beck's method. Both methods were characterized by their frequency domain characteristics, not the direct problems from which a specific algorithm might be derived. The influence of the direct model was seen through the appearance of the direct model transfer function in the inverse filter description.

Stolz's method is a numerical deconvolution of sequences. This operation is accomplished in z-transform domain by simply algebraically inverting the ratio of polynomials in z which specifies the direct transfer function. Thus the dependence of the Stolz inverse on the direct model: the numerator of the direct model becomes the denominator of the inverse filter, and vice versa. Beck's method is substantially more complicated, and the frequency domain description containing the direct model transfer function appears in Section 4.3.

The numerical stability of inverse algorithms and its dependence upon the selected time step, Δt, have been the subject of many investigations. A criterion for stability of discrete time systems, as it applies to the IHCP, was described in Chapter 5. The inverse algorithm was described either in the form of difference equations, or as a transfer function. If the roots of the resulting characteristic equation lie outside the unit circle, then the inverse algorithm is numerically unstable.
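This criterion can be sketched numerically for a second order characteristic polynomial; the coefficients below are illustrative, not taken from any model in this work.

```python
import cmath

# Stability test sketched numerically: for an inverse filter with
# characteristic polynomial z^2 + d1*z + d0 (coefficients illustrative),
# compute the roots and flag instability if any root lies outside the
# unit circle. For higher orders, coefficient tests such as Schur-Cohn
# avoid explicit root finding.

def is_unstable(d1, d0):
    """Roots of z^2 + d1*z + d0 via the quadratic formula."""
    disc = cmath.sqrt(d1 * d1 - 4.0 * d0)
    roots = [(-d1 + disc) / 2.0, (-d1 - disc) / 2.0]
    return any(abs(r) > 1.0 for r in roots)

print(is_unstable(0.3, 0.02))    # roots at -0.1 and -0.2: stable, False
print(is_unstable(-2.5, 1.0))    # roots at 0.5 and 2.0: unstable, True
```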
The Schur-Cohn Test was used to determine if roots lie outside the unit circle, by manipulating the coefficients of the characteristic polynomial. A procedure leaving Δt as a parameter was described, and critical values of Δt were determined.

These analyses suggest that any unstable inverse algorithm can be stabilized by removing the unstable roots and replacing them with stable ones. This was accomplished using a precompensation filter, which had a zero at the precise location of each unstable pole. The precompensation filters developed provide such a zero and replace each unstable pole with a stable one, while maintaining steady state fidelity. This procedure produced a class of inverse algorithms which are unconditionally stable, regardless of the inverse algorithm on which they are based, or the time step selected.

It is interesting to note that the Stolz inverses derived from the forward difference and backward difference methods needed no precompensation. The Stolz inverse derived from the forward difference method contains no poles, and as a result is unconditionally stable. Similarly, the Stolz inverse derived from the backward difference method has all of its poles concentrated at the origin, leading to an unconditionally stable inverse.

Results for finite impulse response (FIR) precompensation filters were presented in Section 5.3. These filters arise when the replacement pole is placed at the origin, and have the desirable property that their impulse response is finite in time duration. Thus uncorrelated measurement errors propagate through the precompensation filter for only a finite number of time steps. Two cases were considered: 1) all poles, regardless of their stability, are replaced by ones at the origin (FIR), and 2) only unstable poles are replaced by ones at the origin (FRP).

Another desirable choice for precompensation was the all pass filter.
It too produced an unconditionally stable inverse algorithm, but it replaces the unstable pole with a stable one in such a way that the amplitude frequency response of the resulting precompensation filter is unity. If an unstable pole occurs at some point a which is outside the unit circle, it is replaced by a pole at 1/a. The unity amplitude frequency response results in the power spectral density of the estimates being identical to the power spectral density of the input.

Analyses were presented in Section 6.1 in which Beck's method is factored to separate the precompensation filter from the Stolz inverse which is embedded in it. Frequency response plots for several choices of direct model using a fourth order example were presented. In several of these precompensation filters, the gain was greater than one, indicating that noise which has frequency components in these bands was actually enhanced, rather than attenuated. This phenomenon decreased as the prediction horizon was increased.

Similar analyses were applied to the FIR, FRP, and APF precompensation filters derived from the input invariant direct models and finite difference models. Frequency response plots were presented in Section 6.2 for the fourth order example. FIR precompensation filters which arose from the implicit methods also display noise enhancement bands. The APF precompensation filter, by definition, has unity gain over the entire band of frequencies.

The precompensation filter may be interpreted as a filter which produces an estimate of the true temperature, given observations corrupted by noise. As a functional block, then, it removes undesirable noise and modes which might excite unstable responses in the inverse filter.

It is not necessary, or desirable, to implement an inverse algorithm with a time domain representation of the precompensation filter separate from the Stolz inverse. This can lead to numerical difficulties since the resulting system contains unobservable states.
The problem was avoided if the precompensation filter was combined with the Stolz inverse in frequency domain first. The resulting inverse filter was then transformed back into time domain, generating an inverse algorithm with much better results. Section 7.1 presented the procedure for generating these time domain algorithms from the frequency domain descriptions of the inverse filter.

The performance of several inverse algorithms was tested, using a step heat flux as a benchmark function. Plots depicting the step estimates were presented for several precompensation schemes, using error free observations. Since all inverse algorithms are based on some direct solution method, this same direct solution method was used to produce the observations. For example, if the inverse algorithm was based on a forward difference method, then the forward difference method was used to generate the observations.

Performance of inverse algorithms under these conditions was generally quite good; estimates rise rapidly after the input step heat flux turns on (at t = 0) and settle out to unity within a few time steps. Since the example used was relatively low order, little difference was observed between the FIR and FRP implementations. The FIR and FRP algorithms did show more smoothing, and estimates which responded somewhat sluggishly to the step input. The APF counterpart was faster to detect the presence of the step change, but at the expense of overshoot in the estimates.

The effect of a single observation error was explored in Section 7.4. This test simulates the effect of uncorrelated noise in the observations. The FIR inverse produced non-zero output only for a finite number of time steps. The FRP and APF inverses contained longer lived impulse responses as a result of the remaining non-zero poles. For the step invariant example, one pole resides at approximately −0.86.
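That contrast can be sketched numerically. The FIR taps and filter orders below are hypothetical; the pole at −0.86 is the step invariant value quoted above. A single unit error is injected into the observation sequence and propagated through each filter.

```python
# Effect of a single observation error: a sketch comparing an FIR-compensated
# inverse (all poles at the origin, finite impulse response) with an inverse
# retaining a pole at -0.86. Taps and lengths here are illustrative.

N = 12
err = [1.0] + [0.0] * (N - 1)             # single corrupted observation

# FIR inverse: impulse response is finite (3 taps here), so the error
# propagates for only a finite number of time steps.
fir_taps = [1.0, -0.4, 0.1]               # hypothetical FIR coefficients
out_fir = [sum(fir_taps[i] * err[n - i] for i in range(len(fir_taps)) if n - i >= 0)
           for n in range(N)]

# Inverse retaining a pole at -0.86: the error excites a slowly decaying,
# sign-alternating mode (-0.86)^n.
p = -0.86
out_iir = []
for n in range(N):
    prev = out_iir[-1] if out_iir else 0.0
    out_iir.append(err[n] + p * prev)

print(out_fir[3:])        # all zeros: the FIR error response has died out
print(out_iir[:4])        # 1, -0.86, 0.7396, ...: oscillating, slow decay
```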
Since the FRP and APF inverses have no effect on this pole, the slowly decaying, oscillating sign solution associated with it showed up in this impulse response test. FIR precompensation was clearly superior in this situation.

Another source of noise associated with the observations stemmed from the differences between the direct model used to generate an inverse algorithm, and the truth model. For the insulated wall example, with a unit step heat flux input, the Fourier Series solution was used to generate the observations for benchmark testing of these algorithms. All algorithms tested in this fashion show a lag of approximately three time steps before non-zero estimates appear. For small times, the derivatives of the Fourier Series solution are much smaller than those for the approximate solutions. This means that the temperatures predicted by the Fourier Series solution are also much smaller for these early times. These early temperatures are sufficiently close to zero that little or no output is generated as a result.

The inverse filters arising from the input invariant methods tend to detect the edge of the step function faster than the inverse filters derived from finite difference methods. Their rise time is shorter. This is at the expense of more overshoot in the estimates. The inverse filters derived from the implicit 5/6 and the implicit 3/4 methods show little or no overshoot. They are quite smooth and stable, but respond rather slowly to the presence of the edge. The presence of an uncompensated pole near the unit circle for the Crank-Nicolson FRP and APF filters makes their estimates essentially useless.

A triangle heat flux example was used to show the effects of matching the bandwidth of the precompensation filter to the bandwidth of the heat flux signal. A first order prefilter was developed, and the relationship between filter parameter a and the filter bandwidth was derived.
Frequency domain analysis produced the bandwidth associated with the triangle function, which was used to specify the value for a. This prefilter was then combined with the 6th order impulse invariant inverse with FIR precompensation. The resulting inverse algorithm was tested using simulated measurements which contained noise. The performance was quite good, and nearly matched the performance of filters with over three times the dimension.

Bibliography

[1] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems, Winston and Wiley, 1977.

[2] James V. Beck, Ben Blackwell, and Charles R. St. Clair, Inverse Heat Conduction: Ill-Posed Problems, John Wiley and Sons, 1985.

[3] Ronald C. Rosenberg and Dean C. Karnopp, Introduction to Physical System Dynamics, McGraw-Hill, 1983.

[4] G. Stolz Jr., "Numerical Solutions to an Inverse Problem of Heat Conduction for Simple Shapes", Transactions of the ASME, Feb. 1960, pp. 20-26.

[5] Ron Bracewell, The Fourier Transform and Its Applications, McGraw-Hill, 1965.

[6] Herman E. Koenig, Yilmaz Tokad, and Hiremaglur K. Kesavan, Analysis of Discrete Physical Systems, McGraw-Hill, 1967.

[7] Dean K. Frederick and A. Bruce Carlson, Linear Systems in Communication and Control, John Wiley and Sons, 1971.

[8] Lawrence R. Rabiner and Bernard Gold, Theory and Application of Digital Signal Processing, Prentice-Hall, 1975.

[9] R. V. Churchill and J. W. Brown, Fourier Series and Boundary Value Problems, McGraw-Hill, 1978.

[10] H. S. Carslaw and J. C. Jaeger, Conduction of Heat in Solids, Second Edition, Oxford Press, 1959.

[11] Glen E. Myers, Analytical Methods in Heat Transfer, McGraw-Hill, 1971.

[12] J. P. Holman, Heat Transfer, McGraw-Hill, 1976.

[13] Charles A. Desoer and Ernest S. Kuh, Basic Circuit Theory, McGraw-Hill, 1969.

[14] Stanley M. Shinners, Modern Control System Theory and Application, Addison-Wesley, 1972.

[15] James A. Cadzow, Discrete Time Systems, Prentice-Hall, 1973.

[16] William D.
Stanley, Digital Signal Processing, Prentice-Hall, 1975.

[17] Samuel D. Stearns, Digital Signal Analysis, Hayden Book Company, 1975.

[18] James V. Beck, "Surface Heat Flux Determination Using an Integral Method", Nuclear Engineering and Design 7 (1968), pp. 170-178.

[19] Steven A. Tretter, Introduction to Discrete-Time Signal Processing, John Wiley and Sons, 1976.