Michigan State University

This is to certify that the dissertation entitled "Transform Encryption Coding," presented by Chung Jung Kuo, has been accepted towards fulfillment of the requirements for the Ph.D. degree in Electrical Engineering.

Major professor: John R. Deller, Jr., Ph.D.
Date: May 16, 1990

TRANSFORM ENCRYPTION CODING

By

Chung Jung Kuo

A DISSERTATION

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Electrical Engineering

1990

ABSTRACT

TRANSFORM ENCRYPTION CODING

By Chung Jung Kuo

This dissertation consists of two parts. The first part presents an independence technique for any m-dependent random variable and its application to data compression. The second part deals with new types of two-dimensional pseudo-noises, quasi m-arrays and Gold code arrays. These pseudo-noises are used during the application of the independence technique to ensure signal encryption.
In the first part of this dissertation, a technique to generate independent random variables from m-dependent ones is proposed and its validity is proved. The probability density function of any resulting independent random variable is also shown to be Gaussian. This technique can be applied to any m-dependent signal, and the resulting independent signal is unrecognizable. Therefore, signal encryption is also achieved. Since these encrypted signals are independent Gaussian random variables, they can be quantized individually. A simple scalar quantizer can be used for these independent Gaussian random variables and the sum of mean-square quantization errors can be minimized. In addition, vector quantization is no longer necessary because these encrypted signals are already independent.

Secondly, two-dimensional quasi m-arrays and Gold code arrays are developed. These arrays are easily generated by the modulo-2 addition of m-sequences. The cyclic correlation properties of these arrays are studied. The cyclic auto-correlation of any array is similar to a delta function. In addition, the cyclic cross-correlation between any two arrays is small compared with their cyclic auto-correlation peaks. Therefore, these arrays have the quasi-orthogonal property. In addition, these arrays can be generated easily, which is helpful for many applications.

Application of the independence technique to cosine and Hadamard transform image coding is also studied. The quasi m-arrays are used before transform coding to ensure the signal encryption. Therefore, this technique is named transform encryption coding. Computer simulation shows that a saving of 0.5 bit/pixel can be obtained with satisfactorily reconstructed images by transform encryption coding. However, 1 bit/pixel is required to have comparable results by transform coding. In addition, the blocking effect is also removed by the proposed transform encryption coding. While only an application of the proposed technique to images is given, it can also be applied to other signals such as speech.

To My Family

ACKNOWLEDGEMENTS

I would like to acknowledge the late Dr. H. Rigas and Dr. J.R. Deller, my thesis advisors, for their invaluable inspiration and guidance throughout the course of this dissertation. Although Dr. Rigas can not see this work completed, her inspiration for this work is greatly appreciated. Special thanks go to Dr. Deller for his kindness in resuming the duty of advisor after Dr. Rigas passed away. I also would like to thank Dr. A.K. Jain for his valuable comments and constructive suggestions, which helped me a lot in this dissertation. Special thanks also go to Drs. P. Parker and M. Siegel for lending a hand and moral support. The consideration, concern, and support of Drs. Deller, Jain, Siegel, and Parker after Dr. Rigas passed away are also greatly appreciated. Without the help from my advisors and committee members, this work would never have been possible. The kindness of Dave Marks in shooting the pictures and the financial support offered by the Department of Electrical Engineering in the form of a teaching assistantship are also appreciated.

Personally, I am indebted to my parents and would like to thank them for their continuous encouragement and support throughout all these years. Without their sacrifice, I would not be who I am today. Finally, I would like to thank my wife, Chih J. Hsu, who takes good care of me and our baby, Andrew, such that I do not have to worry about anything but study.
Her understanding, patience, and love during the period of my Ph.D. study are truly appreciated.

TABLE OF CONTENTS

LIST OF TABLES
LIST OF FIGURES
I INTRODUCTION
  I.1 The Problems
  I.2 Purposes and Significances
  I.3 Organization
II BACKGROUND
  II.1 Scalar Quantization
  II.2 Transform Image Coding
  II.3 One-dimensional Pseudo-noises
III TRANSFORM ENCRYPTION CODING
  III.1 Independence Technique
  III.2 Computational Algorithm for Signal Encryption
IV TWO-DIMENSIONAL PSEUDO-NOISES
  IV.1 Two-dimensional Quasi Maximal Area Arrays
  IV.2 Two-dimensional Gold Code Arrays
V COMPUTER SIMULATION
  V.1 Simulation Studies
  V.2 Results
VI CONCLUSION
  VI.1 Limitations and Advantages
  VI.2 Recommendations for Further Works
APPENDIX: OPTICAL ENCRYPTION
BIBLIOGRAPHY

LIST OF TABLES

II.1 Feedback taps for m-sequence generators
VI.1 Simulation results of Hadamard transform coding
VI.2 Simulation results of cosine transform coding

LIST OF FIGURES

II.1 Bit allocation maps
II.2 Two typical 63-bit m-sequence generators
II.3 Cyclic correlations of two 63-bit m-sequences
II.4 A typical 63-bit Gold code sequence generator
II.5 Cyclic correlations of two 63-bit Gold code sequences
IV.1 A 255x255 quasi m-array and its binary phase spectrum
IV.2 Cyclic correlations of two 63x63 quasi m-arrays
IV.3 A 255x255 Gold code array and its binary phase spectrum
IV.4 Cyclic correlations of two 63x63 Gold code arrays
V.1 The natural images and their encrypted images used in the simulation
V.2 The estimated probability density functions for the natural and encrypted images
V.3 The reconstructed images from Hadamard transform coding at 1.52 bits/pixel
V.4 The reconstructed images from Hadamard transform coding at 1.09 bits/pixel
V.5 The reconstructed images from Hadamard transform coding at 0.74 bit/pixel
V.6 The decrypted images according to the wrong reference images
V.7 The reconstructed images from cosine transform coding at 1.5 bits/pixel
V.8 The reconstructed images from cosine transform coding at 1 bit/pixel
V.9 The reconstructed images from cosine transform coding at 0.5 bit/pixel
A.1 An optical system for information processing
A.2 An optical system for encrypted image formation
A.3 Optical encrypted and decrypted images obtained by linear convolution
A.4 The method of obtaining cyclic convolution using linear convolution

CHAPTER I
INTRODUCTION

I.1 The Problems

For digital information transmission or storage, a continuous signal must be converted into a digital one. Therefore, how to efficiently quantize and code continuous signals is important. The correlation between adjacent samples of signals is often high. Hence the redundancies in these samples are also large. These redundancies will decrease the quantization and coding efficiency if measures are not taken to reduce them. A technique to reduce these redundancies is transform coding [1]-[10]. The idea of transform coding is to transform the dependent signals into transform coefficients where the redundancies are greatly reduced [7]-[10]. Instead of quantizing the original signal, these transform coefficients are quantized. The Karhunen-Loève (K-L) transform is often considered the "best" in the sense that it produces uncorrelated transform coefficients [1]-[12]. Since the K-L transform does not have a fast computation algorithm, it is usually used only as a benchmark comparison for other unitary mathematical transformations.
However, the K-L transform coefficients are still dependent, and redundancies remain. Therefore, vector quantization should be used to quantize these dependent K-L transform coefficients [1]-[10], [13]-[15]. Problems with the techniques of vector quantization are the difficulty in estimating the joint probability density function and the complexity in searching for the decision and reconstruction vectors. However, if scalar quantization is used for these dependent K-L transform coefficients, the efficiency is low. Therefore, the first part of this dissertation presents a technique to produce independent K-L transform coefficients from dependent ones. Hence the efficiency of scalar quantization on these K-L transform coefficients increases and vector quantization is no longer necessary.

When the bit rate is sufficiently low, the blocking effect [7], which results from nonoverlapping coding of each subimage, becomes highly visible. Reconstructed images exhibiting blocking effects can be very unpleasant visually, and often become the dominant degradation. This problem will also be solved in this dissertation.

Maximal length sequences (m-sequences) with good correlation properties are useful for radar and navigation applications [15]-[19]. Two-dimensional maximal area arrays (m-arrays, pseudo-noises) have been studied by Nomura et al. [20] and MacWilliams and Sloane [21]. Correlation properties of these m-arrays are similar to those of m-sequences. Applications of these m-arrays are found in coded aperture imaging [22], "add-on" data transmission [23], pattern synchronization [24], and code division image multiplexing [25]. Problems with these m-arrays are their symmetrical appearance, the restriction in the selection of size, the necessity to generate long m-sequences, and the complexity in construction. Therefore, the second part of this dissertation provides new types of two-dimensional pseudo-noises to solve the problems above.

Sometimes, security is another concern for information transmission or storage. Most encryption techniques are either public- or private-key types based on algebraic coding theory [26]-[27]. However, these techniques either require large computational efforts or are easy to attack by unauthorized persons. In other words, there is a tradeoff between the calculation burden and security. This problem will also be considered and solved in this dissertation.

I.2 Purposes and Significances

A technique to generate independent random variables from generally dependent ones is first presented. The probability density function of these resulting independent random variables is also shown to be Gaussian. Therefore, a simple scalar quantizer is enough to quantize these independent Gaussian random variables and minimize the sum of mean-square quantization errors. In addition, vector quantization is no longer necessary. This independence technique can be applied to any signal, and the resulting signal is always unrecognizable. Therefore, the signal encryption is achieved, and the security of information during transmission or storage is also maintained. Besides, the blocking effects can also be removed by this technique because the decryption process will spread the quantization error over the whole image.

Secondly, two new types of two-dimensional pseudo-noises, the quasi m-arrays and the Gold code arrays, are presented in this dissertation. These arrays are easy to generate and appear random.
The cyclic auto-correlation of any quasi m-array is similar to a delta function. In addition, the cyclic cross-correlation between any two quasi m-arrays is small compared with their cyclic auto-correlation peaks. The quasi m-arrays are so named because their cyclic correlation properties are similar to those of the m-arrays. Two-dimensional Gold code arrays generated by the same construction method are also studied. The correlation properties of these Gold code arrays are similar to those of the quasi m-arrays. In addition, the Gold code arrays provide families of arrays in which any two arrays have the same bound on their cyclic cross-correlation. Therefore, both the quasi m-arrays and the Gold code arrays have the quasi-orthogonal property.

Finally, the pseudo-noises are used in the application of the independence technique during transform coding to achieve the signal encryption. To break this type of cryptography is difficult and time-consuming because there are too many pseudo-noises available. In addition, this encryption process requires only two FFTs, which can be easily and quickly obtained.

In summary, the significances of this dissertation are the following.
1. Any m-dependent signal can be decomposed into independent Gaussian random variables by the proposed independence technique. Therefore, a simple scalar quantizer is enough to minimize the sum of mean-square quantization errors, and vector quantization is no longer necessary.
2. The blocking effect, which results from nonoverlapping coding of each subblock of a signal, is removed. Therefore, the reconstructed signal from the proposed technique is much more acceptable to observers.
3. An easily obtained cryptography is proposed. Therefore, the security of information during transmission or storage is guaranteed.
4. Two new types of two-dimensional pseudo-noises are developed. These pseudo-noises are easy to generate and appear random.
5. These pseudo-noises have the quasi-orthogonal properties which are useful for many applications.

I.3 Organization

The organization of this dissertation is as follows. In Chapter II, some background material is presented. Section II.1 shows the optimal scalar quantizer for a single random variable. This includes the Max and companding quantization. How to apply the Max quantization to a sequence of independent random variables is also reviewed. Section II.2 presents the fundamental considerations for transform image coding. Finally, the properties of m-sequences and Gold code sequences are discussed in Section II.3. The definition of correlation for these sequences is also given.

The independence theorem and its application are presented in Chapter III. The transform encryption coding is developed in Section III.1. A theorem to support the claim of the independence technique is proved. Some issues related to the independence theorem are also studied. In Section III.2, a computational algorithm for signal encryption is developed. This algorithm is proved to satisfy the independence theorem. Comparisons between this encryption technique and the other related techniques are discussed. The number of operations required by the encryption process is also calculated. Chapter IV is devoted to the study of two-dimensional pseudo-noises.
In Section IV.1, the construction method for the quasi m-arrays is proposed and the correlation properties of these arrays are investigated. Advantages and limitations of these arrays in applications are also studied. Some characteristics of the quasi m-arrays are also summarized in this section. Section IV.2 is similar to Section IV.1 except that the attention is switched to the Gold code arrays.

The simulation results of transform coding are given in Chapter V. Section V.1 discusses the considerations for simulation. Simulation verifications of the independence theorem are also provided. The results of simulations are shown in Section V.2. These results are compared according to signal-to-noise ratio and the paired-comparison method. Advantages and limitations of transform encrypted image coding are also discussed.

The conclusion is given in Chapter VI. Section VI.1 summarizes the limitations and advantages of the proposed transform encryption coding and two-dimensional pseudo-noises. Some recommendations for further work are also presented in Section VI.2.

Finally, a brief discussion about optical encryption is presented in the Appendix. Some computer simulations are shown there. A method to implement cyclic convolution by linear convolution is presented. Directions for further work are also discussed.

CHAPTER II
BACKGROUND

II.1 Scalar Quantization

An analog quantity that is to be processed by a digital computer or system is usually represented by an integer number proportional to its amplitude. During the conversion process, the analog quantity must be represented by a digital one with finite precision. This process is called quantization. The following contains an analytic treatment of the quantization process which can be applied to any signal [1]-[10].

Let x be a real scalar random variable with a known probability density function p(x). The quantization problem entails specification of a set of decision levels d_j and reconstruction levels r_j such that if d_j \le x < d_{j+1}, then x is quantized to the reconstruction value r_j [10]. Decision and reconstruction levels are chosen to minimize some desired quantization error measure between x and r_j. The quantization error measure is usually defined as the mean-square error because this measure is tractable and correlates well with subjective criteria. For 2^n quantization levels, the mean-square quantization error is

E = \sum_{j=0}^{2^n-1} \int_{d_j}^{d_{j+1}} (x - r_j)^2 p(x) \, dx.   (II.1)

Setting the partial derivatives of the above error expression with respect to the decision and reconstruction levels equal to zero yields

r_j = 2 d_j - r_{j-1}   (II.2a)

and

r_j = \frac{\int_{d_j}^{d_{j+1}} x \, p(x) \, dx}{\int_{d_j}^{d_{j+1}} p(x) \, dx}.   (II.2b)

Recursive solution of these equations for a given probability density function p(x) provides optimal values for the decision and reconstruction levels. The solutions for some given probability density functions were studied by Max [28]; therefore, it is also called Max quantization. The mean-square error of Max quantization is

E_{min} = E[x^2] - \sum_{j=0}^{2^n-1} r_j^2 \, P[d_j \le x < d_{j+1}].   (II.3)
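As an illustration of Eqs. (II.2a)-(II.2b), the following is a minimal Python sketch of the Max (Lloyd-Max) iteration, approximating p(x) by a large sample; the function name, the sample-based centroid step, and the quantile-based initialization are assumptions made only for this sketch, not procedures taken from the dissertation.

import numpy as np

def max_quantizer(samples, n_levels, n_iter=100):
    """Return decision levels d (length n_levels+1) and reconstruction levels r."""
    # Initial reconstruction levels: equally spaced sample quantiles.
    r = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(n_iter):
        # Decision levels are midpoints of adjacent reconstruction levels (Eq. II.2a).
        d = np.concatenate(([-np.inf], (r[:-1] + r[1:]) / 2.0, [np.inf]))
        # Reconstruction levels are conditional means over each decision interval (Eq. II.2b).
        for j in range(n_levels):
            cell = samples[(samples >= d[j]) & (samples < d[j + 1])]
            if cell.size:
                r[j] = cell.mean()
    return d, r

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)          # zero-mean, unit-variance Gaussian source
d, r = max_quantizer(x, n_levels=8)       # 3-bit Max quantizer
xq = r[np.searchsorted(d, x, side='right') - 1]
print("mean-square quantization error:", np.mean((x - xq) ** 2))

For a Gaussian source the error printed by this sketch approaches the value predicted by Eq. (II.3) as the sample size grows.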
If the probability density function is uniform, then the decision and reconstruction levels are uniformly spaced in the range of x. Therefore, it is desirable to perform a nonlinear transformation on x such that the transformed variable is uniformly distributed. Hence a simple placement of decision and reconstruction levels can then be used. After quantization is made, an inverse nonlinear transformation must be taken. This method is called companding quantization [10].

For companding quantization, the transformed variable is y = T[x], where T[x] is chosen such that the probability density function of y is uniform, that is, p(y) = 1 for -0.5 \le y \le 0.5. If x is a zero-mean random variable, then the proper transformation function T[x] is the cumulative probability distribution of x. For example, let x be a zero-mean Gaussian random variable, with

p(x) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-x^2 / 2\sigma^2}.   (II.4)

Then the forward transformation for companding quantization is

y = \frac{1}{2} \mathrm{erf}\!\left[\frac{x}{\sqrt{2}\,\sigma}\right]   (II.5)

and the inverse transformation for companding quantization is

x = \sqrt{2}\,\sigma\,\mathrm{erf}^{-1}[2y].   (II.6)

Max quantization can also be applied to a sequence of independent random variables [29]-[33]. If b_i bits are allocated to the i-th random variable, then the bit allocation must sum to a fixed number B,

B = \sum_{i=1}^{N} b_i, where b_i \ge 0 for i = 1, 2, ..., N.   (II.7)

How to choose the bit allocation b_i for a fixed B to minimize the sum of mean-square Max quantization errors has been studied by Segall [29]. Mathematically, the problem is

\min_{\{b_i\}} \sum_{i=1}^{N} \left\{ E[x_i^2] - \sum_{j=0}^{2^{b_i}-1} r_{ij}^2 \, P[d_{ij} \le x_i < d_{i,j+1}] \right\}   (II.8)

subject to the constraint of Equation (II.7). This problem is solved with the help of Lagrange multipliers and requires the solution of a set of nonlinear equations. Many approximations to this solution have been proposed. The algorithm suggested by Wintz and Kurtenbach [31] is as follows.
1. Compute the bit allocation from

b_i = \frac{B}{N} + 2 \log_{10} \sigma_i^2 - \frac{2}{N} \sum_{k=1}^{N} \log_{10} \sigma_k^2,   (II.9)

where \sigma_i^2 is the variance of the i-th sample.
2. Round off each bit allocation b_i to its nearest integer value.
3. Modify the resultant bit allocation until Equation (II.7) is satisfied.
The derivation leading to the above algorithm is based on an exponential approximation for the error expression of Equation (II.3) between the mean-square quantization error of the i-th sample and its bit allocation b_i. Another similar approximation suggested by Huang and Schultheiss [32] is

b_i = \frac{B}{N} + \frac{1}{2} \log_2 \sigma_i^2 - \frac{1}{2N} \sum_{k=1}^{N} \log_2 \sigma_k^2.   (II.10)

This approximation is obtained without the constraint of Equation (II.7).
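The following is a small sketch of the round-and-adjust bit allocation of Eq. (II.9); the particular adjustment heuristic used here (trim a bit where the variance is smallest, add a bit where it is largest) is an assumption, since the text only says to modify the allocation until Eq. (II.7) holds.

import numpy as np

def allocate_bits(variances, B):
    """Integer bit allocation following Eq. (II.9) with a simple adjustment step."""
    variances = np.asarray(variances, dtype=float)
    N = variances.size
    log_var = np.log10(variances)
    b = B / N + 2.0 * log_var - (2.0 / N) * log_var.sum()       # Eq. (II.9)
    b = np.maximum(np.rint(b), 0).astype(int)                   # round, keep b_i >= 0
    while b.sum() > B:                                          # enforce Eq. (II.7): remove bits
        idx = np.where(b > 0)[0]
        b[idx[np.argmin(variances[idx])]] -= 1                  # where the variance is smallest
    while b.sum() < B:                                          # enforce Eq. (II.7): add bits
        b[np.argmax(variances)] += 1                            # where the variance is largest
    return b

sigma2 = np.array([50.0, 20.0, 8.0, 3.0, 1.0, 0.5, 0.2, 0.1])   # hypothetical coefficient variances
print(allocate_bits(sigma2, B=16))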
Direct application of quantization techniques to signals is undesirable because the correlations among samples of signals are large. A method to reduce these redundancies is transform coding, which will be discussed in the next section.

II.2 Transform Image Coding

Transform image coding represents a "radical departure" from the classical forms of image coding such as PCM, predictive, and interpolative coding, in which the image signal is directly coded [1]-[10]. In transform image coding, a unitary mathematical transform is performed on the image data to produce a set of transform coefficients, which are then quantized and coded for transmission or storage [7]-[10]. Transform coding has proven to be an effective and practical coding method for monochrome, color, and multispectral images, for both still images and real-time television. Transform coding is used to achieve maximum reduction in the quantity of data to be transmitted, subject to the constraint that a reasonable amount of fidelity be preserved.

In transform image coding, an NxN image is first subdivided into (N/n)^2 subimages of size nxn. A unitary mathematical transform is then performed on these subimages. The probability density function for each transform coefficient is estimated from the transform coefficients of these (N/n)^2 subimages. The resulting transform coefficients are then quantized and coded for transmission or storage.

The performance of transform coding depends primarily on the (1) transformation, (2) quantization, (3) subimage size, and (4) subimage shape.

(1) Transformation
The "best" transformation from both objective and subjective viewpoints is the K-L transform [1]-[10]. This transform can produce uncorrelated transform coefficients, which is suitable for quantization because all but the "nonlinear" redundancies among transform coefficients are removed. However, there is no efficient algorithm to compute the K-L transform. Many other unitary mathematical transforms have been proposed and used for transform coding. These transforms are the cosine, sine, Fourier, Hadamard, Haar, Slant, and Hartley transforms. The coding performance of these transforms approaches that of the K-L transform as the subimage size becomes large. Among them, cosine transform coding provides the best approximation to K-L transform coding when the image model is a first-order Markov process [7]-[10].

(2) Quantization strategy
Both mean-square quantization error and subjective quality are sensitive to the number of bits used in the quantization of transform coefficients. The simplest strategy is to form normalized coefficients

\hat{y}_{ij} = \frac{y_{ij}}{\sigma_{ij}}   (II.11)

and use the same quantizer for each coefficient. The probability density function for each transform coefficient is usually assumed to be Gaussian. Since there are (N/n)^2 subimages, the mean and variance of the Gaussian density for each transform coefficient can be estimated. If \eta coefficients are retained and m bits are used to code each coefficient, then a total of m\eta/n^2 bits/pixel is required. For good quality reconstructions, about half the coefficients should be retained, in which case 7 bits must be used for each retained coefficient [8]. Therefore, m\eta/n^2 = 3.5 bits/pixel are required. This method is called zonal sampling quantization [10].

The best quantization strategy is block quantization. In block quantization, a different quantizer is used for each transform coefficient. Each quantizer has a different number of quantization bins and different spacing between bins. A total of B bits is used to code the coefficients. The bit allocation for each coefficient is usually set according to Equation (II.9) or (II.10), or a bit allocation map. Run-length coding [7]-[10] is required to determine the bit allocation when it is set according to these equations. This will increase the amount of overhead information required during transmission and storage. This problem can be solved by using a bit allocation map. However, this map can not be adapted to a changing image. Figure II.1 shows some commonly used bit allocation maps [5], [7], [10].

Figure II.1 Bit allocation maps. (a) 1.5 bits/pixel. (b) 1.0 bit/pixel. (c) 0.5 bit/pixel.

The quality of reconstructed images by block quantization is about the same as that by the other quantizations, but usually only 1 bit/pixel is required [8].
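The statistics-gathering step behind Eqs. (II.9) and (II.11) can be sketched as follows; the 2-D cosine transform is used here only as a computable stand-in for the K-L transform (the text notes that cosine coding best approximates K-L coding), and the random test image and function names are assumptions.

import numpy as np
from scipy.fft import dctn

def block_coefficient_stats(image, n=16):
    """Estimate a mean and variance for every coefficient position across all n x n blocks."""
    N = image.shape[0]
    blocks = image.reshape(N // n, n, N // n, n).swapaxes(1, 2).reshape(-1, n, n)
    coeffs = np.stack([dctn(b, norm='ortho') for b in blocks])   # (N/n)^2 blocks of coefficients
    mean = coeffs.mean(axis=0)
    var = coeffs.var(axis=0)
    normalized = coeffs / np.sqrt(var + 1e-12)                   # normalization of Eq. (II.11)
    return coeffs, mean, var, normalized

rng = np.random.default_rng(1)
img = rng.random((256, 256))                   # white-noise stand-in, used only to keep the sketch self-contained
_, mean, var, norm_coeffs = block_coefficient_stats(img, n=16)
print(mean[0, 0], var[0, 0], norm_coeffs.shape)   # statistics of one coefficient and the normalized array shape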
(3) Subimage size
Mean-square quantization error decreases with increasing subimage size. However, most images contain significant correlations between pixels for only about 20 adjacent pixels, although this number is strongly dependent on the amount of detail in the image. Since the purpose of the transformation is to produce uncorrelated transform coefficients, 16x16 is a reasonable choice for the subimage size [8]. However, the 8x8 subimage size does not significantly increase the quantization error. This argument can not be applied when subjective quality is the criterion of goodness. The subjective quality appears to be independent of subimage size when the subimage size is greater than 4x4 [8]. The subimage size is usually confined to a power of two to increase the computational efficiency of the transformation.

There are fewer samples available for estimating the probability density function of each transform coefficient when the subimage size is large or the number of subimages is small. This will increase the estimation error in the mean and variance. For example, there are 256 samples for each transform coefficient when the image size is 256x256 and the subimage size is 16x16. The number of samples reduces to only 64 when the subimage size increases to 32x32. Therefore, the subimage size can not be too large. Usually, the subimage size is limited to 16x16. Of course, this limitation is also dependent on the size of the original image.

(4) Subimage shape
Transforming a two-dimensional nxn subimage yields better performance than a one-dimensional n^2 x 1 image [8]. For example, if the two-dimensional subimage is 4x4, then a one-dimensional subimage of size 16x1 is required to have comparable results. However, the gain in using the two-dimensional subimage is only about 0.1-0.2 bit/pixel [8].

II.3 One-dimensional Pseudo-noises

The purpose of this background section is to discuss the codes used in communications and ranging systems, those that act as noise-like (but deterministic) carriers for information transmission. These code sequences are of much greater length than those commonly used for information transfer. This is because they are intended for bandwidth spreading and not for the direct transfer of information. Two of these code sequences will be discussed here: the maximal length sequences (m-sequences) and the Gold code sequences.

An m-sequence, by definition, is the longest sequence that can be generated by a given shift register or a delay element of a given length [15]-[19]. For a binary shift register sequence generator, the corresponding m-sequence has 2^m - 1 bits, where m is the number of stages in the shift register sequence generator. Such a sequence generator consists of a shift register with an appropriate logic circuit. This logic circuit feeds back a logical combination of the state of its stages (two or more) to its input. The output of a sequence generator is a function of the outputs of the stages at the preceding sample time. Table II.1 shows some possible feedback taps for different m-sequences, and Figure II.2 shows two typical 63-bit m-sequence generators.
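A minimal sketch of such a generator, driven by the feedback taps of Table II.1 in a simple (Fibonacci-style) XOR-feedback arrangement, is given below; the exact wiring of the generators in Figure II.2 may differ, and the function name and all-ones seed are assumptions.

def m_sequence(taps, seed=None):
    """Generate one full period of the m-sequence defined by the given feedback taps."""
    m = max(taps)                      # number of stages
    state = list(seed) if seed else [1] * m
    out = []
    for _ in range(2 ** m - 1):        # one period of 2^m - 1 bits
        out.append(state[-1])          # output the last stage
        fb = 0
        for t in taps:                 # modulo-2 sum of the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]      # shift, feed back into the first stage
    return out

seq = m_sequence([6, 1])               # a 63-bit m-sequence from the taps [6,1]
print(len(seq), sum(seq))              # 63 bits in one period; 32 of them are ones (Property II.1)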
Briefly stated, properties of the m-sequences are:

Property II.1: For any m-sequence, the number of ones is 2^{m-1} and the number of zeros is 2^{m-1} - 1, where m is the number of stages in the sequence generator.

Property II.2: The statistical distribution of ones and zeros is well defined. The relative positions of their runs change from sequence to sequence, but the number of each run length does not.

Property II.3: The cyclic auto-correlation of any m-sequence is such that the correlation for all cyclic phase shifts is -1 except the zero phase shift, for which it is 2^m - 1.

Property II.4: A modulo-2 addition of any m-sequence with a cyclic phase-shifted replica of itself results in another cyclic phase-shifted replica of itself, which differs from either of the originals.

Property II.5: Every possible state, or m-tuple, of a given m-stage generator exists for one and only one clock pulse during the generation of a complete code cycle. The exception is that the all-zeros state does not occur and can not be allowed to occur.

Remark: Let [a] \equiv [a_0, a_1, ..., a_{n_a-1}] be an m-sequence of length n_a; then a_i = a_{i mod n_a} because of the cyclic property of m-sequences.

Let [a] and [a'] be any two m-sequences with the same length n_a. The following is an alternative definition for the cyclic correlation between these two sequences [16], [34].

Definition: Let "\oplus" denote bit-by-bit modulo-2 addition and [a]_i \equiv [a_i, a_{i+1}, ..., a_{n_a+i-1}]. Then the cyclic correlation \theta_{aa'}(i) between two m-sequences [a] and [a']_i is the difference between the number of zeros and ones in [a] \oplus [a']_i.

Remark: The number of zeros in [a] \oplus [a']_i is [n_a + \theta_{aa'}(i)]/2, while the number of ones is [n_a - \theta_{aa'}(i)]/2.

Table II.1 Feedback taps for m-sequence generators.

# of Stages | Code Size | Maximal Taps
2  | 3    | [2,1]
3  | 7    | [3,1]
4  | 15   | [4,1]
5  | 31   | [5,2] [5,4,3,2] [5,4,2,1]
6  | 63   | [6,1] [6,5,3,2] [6,5,2,1]
7  | 127  | [7,3] [7,1] [7,6,5,2] [7,6,4,2] [7,6,3,1] [7,4,3,2] [7,3,2,1] [7,6,5,4,2,1] [7,5,4,3,2,1]
8  | 255  | [8,7,6,1] [8,6,5,3] [8,6,5,2] [8,6,5,1] [8,5,3,1] [8,4,3,2] [8,7,6,5,2,1] [8,6,4,3,2,1]
9  | 511  | [9,4] [9,8,7,2] [9,8,6,5] [9,8,5,4] [9,8,4,1] [9,6,4,3] [9,5,3,2] [9,8,7,6,3,2] [9,7,6,4,3,1] [9,6,5,4,2,1]
10 | 1023 | [10,3] [10,9,4,2] [10,9,4,1] [10,8,5,4] [10,8,5,1] [10,8,4,3] [10,8,3,2] [10,5,3,2] [10,5,2,1] [10,4,3,1]
11 | 2047 | [11,1] [11,10,3,2] [11,9,8,3] [11,9,4,1] [11,8,6,2] [11,8,5,2] [11,7,3,2] [11,6,5,1] [11,5,3,1]
12 | 4095 | [14,13,11,9] [14,13,4,2] [14,12,11,1] [14,12,2,1] [14,10,6,1] [14,6,4,2] [14,13,12,8,4,1] [14,13,12,7,6,3] [14,13,11,10,8,3] [14,13,6,5,3,1] [14,11,9,6,5,2] [14,10,6,5,4,1] [14,8,7,6,4,2]
13 | 8191 | [13,4,3,1] [13,12,11,9,5,3] [13,12,11,5,2,1] [13,12,9,8,4,2] [13,12,8,6,5,2] [13,12,6,5,4,3] [13,11,8,7,4,1] [13,10,9,7,5,4] [13,9,8,7,5,1] [13,8,7,4,3,2]

Figure II.2 Two typical 63-bit m-sequence generators. (a) Simple register configuration [6,1]. (b) Equivalent modular configuration [6,5].

Example II.1: Let [a] = [1110100]. Then n_a = 7, [a]_1 = [0111010], [a] \oplus [a]_1 = [1001110], and \theta_{aa}(1) = -1. The number of zeros in [a] \oplus [a]_1 is three ([7 + (-1)]/2), while the number of ones is four ([7 - (-1)]/2).
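The zeros-minus-ones correlation of the Definition above can be checked numerically; the short sketch below reproduces Example II.1 (the shift convention in the helper follows that example, and the function name is an assumption).

def cyclic_correlation(a, b, shift):
    """Number of zeros minus number of ones in [a] XOR the cyclically shifted [b]."""
    n = len(a)
    b_shifted = [b[(i - shift) % n] for i in range(n)]   # shift convention of Example II.1
    s = [x ^ y for x, y in zip(a, b_shifted)]
    return s.count(0) - s.count(1)

a = [1, 1, 1, 0, 1, 0, 0]                                # the 7-bit m-sequence of Example II.1
print(cyclic_correlation(a, a, 0))                       # 7  (zero phase shift)
print(cyclic_correlation(a, a, 1))                       # -1 (Property II.3)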
Figure II.3 shows the cyclic correlations of two 63-bit m-sequences. These two m-sequences are generated by the feedback taps [6,1] and [6,5,2,1], respectively. The cyclic auto-correlation peak is 63, while the cyclic auto-correlation floor is always -1. (The cyclic auto-correlation floor is the cyclic auto-correlation excluding the maximal peak.) The cyclic cross-correlation values are 15, -17, and -1. A different pair of m-sequences will have a different bound (\beta) on the cyclic cross-correlation.

As the size of m-sequences approaches infinity, the cyclic auto-correlation approaches a delta function. In addition, the cyclic cross-correlation can be neglected when it is compared with the cyclic auto-correlation peak. Mathematically, let a(x) and a'(x) be two (2^m - 1)-bit m-sequences; then

\theta_{aa}(0) = \int_{b_L}^{b_U} a(x) a(x) \, dx = 2^m - 1,
\theta_{aa'}(i) = \int_{b_L}^{b_U} a(x) a'(x) \, dx < \beta,   (II.12)

where \beta \ll 2^m - 1 as m \to \infty and b_U - b_L equals the length of 2^m - 1 bits. Two functions a(x) and a'(x) are defined to be orthogonal if

[a(x), a'(x)] = \int_{b_L}^{b_U} a(x) a'(x) \, dx = \begin{cases} \int_{b_L}^{b_U} a^2(x) \, dx & \text{for } a(x) = a'(x) \\ 0 & \text{for } a(x) \ne a'(x). \end{cases}   (II.13)

Therefore, these m-sequences have a "quasi-orthogonal property" according to Equations (II.12) and (II.13).

Figure II.3 Cyclic correlations of two 63-bit m-sequences.

A Gold code sequence is generated by modulo-2 addition of a pair (base pair) of m-sequences [15]-[19]. This pair of base m-sequences is added bit by bit by synchronous clocking. The two base m-sequences must have the same length, and the resulting Gold code sequence has the same length as its base m-sequences. Figure II.4 shows a Gold code sequence generator.

Figure II.4 A typical 63-bit Gold code sequence generator.

Every change in phase position between the two base m-sequences generates a new sequence. Therefore, a pair of m-sequences of length n can generate a family of n Gold code sequences. Every sequence in this family has the same cyclic auto-correlation peak as its base m-sequences. The bound on the cyclic cross-correlation between any two sequences in this family is also the same as that between the two base m-sequences. The drawback is that the cyclic auto-correlation floor of any sequence in this set is not constant. However, the bound on the cyclic auto-correlation floor is also the same as the bound on the cyclic cross-correlation between the two base m-sequences.

The advantage of Gold code sequences is that they can provide a family of sequences in which any two sequences have the same bound on the cyclic cross-correlation and the cyclic auto-correlation floors. In other words, Gold code sequences offer families of sequences with the quasi-orthogonal property. Figure II.5 shows the cyclic correlations of two 63-bit Gold code sequences. These Gold code sequences are generated by the m-sequences discussed in Example II.1.

Figure II.5 Cyclic correlations of two 63-bit Gold code sequences.
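The construction just described can be sketched in a few lines: modulo-2 addition of two equal-length m-sequences at every relative phase yields a family of Gold code sequences. The helper below re-states the m-sequence generator sketched after Table II.1 so the example is self-contained; the 63-bit taps [6,1] and [6,5,2,1] are the ones named for Figure II.3, and whether they form a preferred pair in the formal sense is not stated in the text, so any cross-correlation bound observed here is only illustrative.

def m_sequence(taps):
    m = max(taps)
    state = [1] * m
    out = []
    for _ in range(2 ** m - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

def gold_family(taps_a, taps_b):
    """One Gold code sequence per relative phase of the two base m-sequences."""
    a, b = m_sequence(taps_a), m_sequence(taps_b)
    family = []
    for shift in range(len(a)):
        b_s = b[shift:] + b[:shift]
        family.append([x ^ y for x, y in zip(a, b_s)])
    return family

family = gold_family([6, 1], [6, 5, 2, 1])
print(len(family), len(family[0]))               # 63 sequences, each 63 bits long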
CHAPTER III
TRANSFORM ENCRYPTION CODING

III.1 Independence Technique

It is necessary to convert continuous signals into digital ones for digital information transmission or storage. This conversion is done by the quantization of continuous signals. The optimal quantization for signals is usually defined as the one that minimizes the expected mean-square quantization error using a fixed number of quantization levels or regions. Although the mean-square quantization error can be minimized over a data set empirically, it is time-consuming to do so. The quantized data must be converted into digital symbols for digital information transmission and storage. However, direct digitization of these quantized data is inefficient. To have a high efficiency, a large amount of information is required for the mapping between the quantized data and the corresponding digital symbols. An efficient way is to approximate the data set by a probability density function and minimize the mean-square quantization error according to this approximated function. In this case, overhead information is greatly reduced because only the mean and variance are required. The optimal quantization for a single random variable is the Max quantization [28] discussed in Section II.1. For a sequence of dependent random variables, vector quantization is optimal. In vector quantization, the mean-square quantization error is minimized by jointly quantizing and reconstructing the whole sequence of dependent random variables according to their joint probability density function [1]-[10], [13]-[15]. The following theorem shows the condition under which the Max quantizer minimizes the mean-square quantization error for a sequence of random variables.

Theorem III.1: The Max quantizer minimizes the sum of mean-square quantization errors for a sequence of random variables z_1, z_2, ..., z_N if and only if z_1, z_2, ..., z_N are independent random variables.

Proof: (Necessity) Let us assume z_1, z_2, ..., z_N are dependent random variables. Therefore, the joint probability density function f_{z_1, z_2, ..., z_N}(\zeta_1, \zeta_2, ..., \zeta_N) is nonseparable. In other words, the probability density function of any z_i is a function of z_1, z_2, ..., z_N. Hence the quantization levels of any z_i must be a function of z_1, z_2, ..., z_N to minimize the mean-square quantization error. However, the Max (scalar) quantizer chooses the quantization levels of any z_i according to z_i only. Hence the Max quantizer does not minimize the mean-square quantization error for any z_i. In addition, the Max quantizer does not minimize the sum of mean-square quantization errors for z_1, z_2, ..., z_N. Therefore, z_1, z_2, ..., z_N are independent random variables when the Max quantizer minimizes the sum of mean-square quantization errors for z_1, z_2, ..., z_N.
(Sufficiency) Since z_1, z_2, ..., z_N are independent random variables, the probability density function of any z_i is a function of z_i only. In addition, the mean-square quantization error of any z_i is minimized by the Max quantization. Therefore, the sum of mean-square quantization errors for z_1, z_2, ..., z_N is also minimized by the Max quantizer. Q.E.D.

Both scalar and vector quantization are used to effectively quantize the signal according to its probability density function. However, it is much more difficult to implement vector quantization than scalar quantization. Problems with vector quantization are the following. First, the joint probability density function of the random variables is not easily estimated. Secondly, the quantization regions are difficult to determine. Thirdly, searching for the nearest matching vector template to any input vector is time-consuming. Usually, only a partial search is carried out, which makes vector quantization sub-optimal. Finally, large codebook storage and overhead information are necessary. Therefore, Max (scalar) quantization is still a popular technique for quantizing dependent random variables although it is not optimal. The proposed independence technique generates independent random variables from the dependent ones. In this case, Max quantization can be applied to the resulting independent random variables and the sum of mean-square quantization errors minimized in the process.
Before the independence theorem is stated, the following definitions and the extended central limit theorem are needed.

Definition: x_1, x_2, ..., x_N are m-dependent random variables if (x_1, x_2, ..., x_r) is independent of (x_s, x_{s+1}, ..., x_N) for all integers 1 \le r \le s \le N and s - r > m \ge 0 [35]-[39].

Remarks: The samples of signals are often independent random variables as long as the samples are widely separated. Therefore, these samples can be considered to be m-dependent random variables. In addition, 0-dependent means independent. This definition can be extended to the n-dimensional case in which x_1, x_2, ..., x_N are n-dimensional vectors.

Theorem III.2 (Extended Central Limit Theorem): Let x_1, x_2, ..., x_N be a sequence of m-dependent random variables with

E[x_i] = 0,   (III.1a)
E[x_i^2] = 1,   (III.1b)
E[|x_i|^3] < \infty,   (III.1c)

and

0 < C = \lim_{N \to \infty} \frac{1}{N} \mathrm{Var}\!\left[\sum_{i=1}^{N} x_i\right] < \infty.   (III.1d)

Then there exists a constant A such that for all integers N and all real a,

\left| P\!\left[\sum_{i=1}^{N} x_i / \sqrt{NC} \le a\right] - \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{a} e^{-x^2/2} \, dx \right| \le \frac{A}{\sqrt{N}},   (III.2)

where A depends on the distribution of x_1, x_2, ..., x_N, but not on N.
Proof: See [35]-[39].

Remarks: The error in the Gaussian approximation is O(1/\sqrt{N}) by the theorem above [35]. In addition, the central limit theorem is also true for two-dimensional m-dependent random vectors [36].

Definition: The encrypted random variables y_1, y_2, ..., y_N are defined as the weighted sums of the random variables x_1, x_2, ..., x_N,

y_i = \sum_{j=1}^{N} a_{ij} x_j,   (III.3)

where a_{ij} \in R for i, j = 1, 2, ..., N.

Definition: y_1, y_2, ..., y_N are joint Gaussian random variables if and only if the sum b_1 y_1 + b_2 y_2 + ... + b_N y_N is a Gaussian random variable for any real b_1, b_2, ..., b_N [40], [41].

Theorem III.3 (Independence Theorem): Let x_1, x_2, ..., x_N be a sequence of m-dependent random variables, y_1, y_2, ..., y_N be the corresponding encrypted random variables, and z_1, z_2, ..., z_M be the K-L transform coefficients of any M encrypted random variables chosen from y_1, y_2, ..., y_N, where M \le N. Then z_1, z_2, ..., z_M are independent Gaussian random variables as N \to \infty.

Proof: By the definition of encryption, we have y_i = \sum_{j=1}^{N} a_{ij} x_j, where a_{ij} \in R for i, j = 1, 2, ..., N. Let \bar{y}_1, \bar{y}_2, ..., \bar{y}_M be any M encrypted random variables chosen from y_1, y_2, ..., y_N, where M \le N. Then we also have \bar{y}_i = \sum_{j=1}^{N} \bar{a}_{ij} x_j, where \bar{a}_{ij} \in R for i = 1, 2, ..., M and j = 1, 2, ..., N. For any real b_1, b_2, ..., b_M,

b_1 \bar{y}_1 + b_2 \bar{y}_2 + ... + b_M \bar{y}_M
= b_1 \sum_{j=1}^{N} \bar{a}_{1j} x_j + b_2 \sum_{j=1}^{N} \bar{a}_{2j} x_j + ... + b_M \sum_{j=1}^{N} \bar{a}_{Mj} x_j
= \sum_{j=1}^{N} (b_1 \bar{a}_{1j} + b_2 \bar{a}_{2j} + ... + b_M \bar{a}_{Mj}) x_j.

Therefore, this sum is a Gaussian random variable as N \to \infty by the extended central limit theorem. Consequently, \bar{y}_1, \bar{y}_2, ..., \bar{y}_M are joint Gaussian random variables as N \to \infty by the definition above. According to the definition of the K-L transform, we have z_i = \sum_{j=1}^{M} c_{ij} \bar{y}_j, where c_{ij} \in R for i, j = 1, 2, ..., M. Therefore, z_1, z_2, ..., z_M are also joint Gaussian random variables as N \to \infty. In addition, z_1, z_2, ..., z_M are uncorrelated because of the K-L transform. Therefore, z_1, z_2, ..., z_M are independent (Gaussian) random variables as N \to \infty. Q.E.D.

Remarks: The K-L transform coefficients of any encrypted signal are asymptotically independent Gaussian random variables as the size of the m-dependent signal approaches infinity. In addition, the probability density function of the random variables x_1, x_2, ..., x_N is not important in obtaining the independence results because of the extended central limit theorem. Therefore, simple Max quantizers can minimize the sum of mean-square quantization errors. Without this theorem, vector quantization must be used to minimize the quantization error. Because Max quantization is much simpler than vector quantization and most techniques of vector quantization are sub-optimal, the importance of the independence theorem is clear.
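The Gaussian limit of Theorems III.2-III.3 can be illustrated numerically (this is an illustration, not a proof): below, an m-dependent sequence is built as a short moving average of i.i.d. uniform noise, random weighted sums of the form of Eq. (III.3) are taken, and their sample skewness and excess kurtosis are compared with the Gaussian value of zero. The moving-average construction and the choice of weights are assumptions made only for this sketch.

import numpy as np

rng = np.random.default_rng(0)
N, trials, m = 512, 4000, 3
u = rng.uniform(-1, 1, size=(trials, N + m))
x = sum(u[:, k:k + N] for k in range(m + 1))         # m-dependent (moving-average) samples
x = (x - x.mean()) / x.std()

w = rng.normal(size=N)                               # one fixed set of weights a_{1j}
y = x @ w / np.sqrt(N)                               # the encrypted variable y_1 over many trials

y_std = (y - y.mean()) / y.std()
skew = np.mean(y_std ** 3)
kurt = np.mean(y_std ** 4) - 3.0
print(f"skewness ~ {skew:.3f}, excess kurtosis ~ {kurt:.3f}")   # both close to 0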
Actually, the K-L transform coefficients of any m-dependent signal are also asymptotically independent Gaussian random variables as the signal size approaches infinity, according to the proof above. Therefore, it seems unnecessary to go through the encryption to obtain the independent Gaussian transform coefficients. The reason for doing so is as follows. In transform coding, the probability density function of each transform coefficient must be estimated before the quantization. Therefore, the signals are usually partitioned into blocks of the same size and then the transformation is performed on these blocks. If the block size is too large, then the estimation errors in the mean and variance will be large because fewer samples are available in the estimation. In addition, the number of calculations required by the transformation is also increased. For example, a 16-point FFT requires 64 multiplications, while only 32 multiplications are needed by four 4-point FFTs [7]-[10]. Therefore, the smaller the block size, the faster the transformation. However, the K-L transform coefficients are independent only when the block size is infinite. Therefore, there is a tradeoff in choosing the size of the blocks. With this independence theorem, this tradeoff is avoided. Asymptotically, the encrypted signals are joint Gaussian. As long as the joint Gaussian property is induced by encryption, the K-L transform coefficients will be independent. Therefore, the block size can be small, and the K-L transform coefficients of these blocks are still independent random variables. In addition, the estimation error in the mean and variance of the transform coefficients can also be minimized because the block size can be small.

It has been observed by many researchers that the sum of mean-square Max quantization errors for any unitary mathematical transform decreases as the subimage size approaches infinity [1]-[10]. However, no explanation has ever been given. With the proof of the independence theorem, an explanation is as follows. The K-L transform coefficients are asymptotically independent random variables as the subimage size increases. This is because the K-L transform coefficients are the weighted sums of the original signal samples. Therefore, they are asymptotically joint Gaussian random variables as the subimage size approaches infinity. In addition, they are also uncorrelated random variables because of the property of the K-L transform. Hence they are asymptotically independent random variables. In addition, the performance of any unitary mathematical transform approaches that of the K-L transform as the block size approaches infinity. Therefore, the unitary mathematical transform coefficients are asymptotically independent random variables as the subimage size increases. In addition, the sum of mean-square quantization errors will decrease when the transform coefficients are independent.

With this independence theorem and the Wintz and Kurtenbach algorithm, the best way to quantize m-dependent signals, according to the material presented thus far, is the following (a compact sketch of these steps is given below).
1. Obtain the encrypted signal, which is weighted sums of the original m-dependent signal.
2. Partition the encrypted signal into blocks,
and then apply the K-L transform to each block.
3. Estimate the mean and variance for each transform coefficient. These transform coefficients are Gaussian distributed.
4. Apply the Wintz and Kurtenbach algorithm to obtain the bit allocation for each transform coefficient.
5. Apply Max quantization to each transform coefficient with the number of quantization levels obtained in step 4.
This algorithm is optimal for the following reasons. First, the K-L transform coefficients of encrypted m-dependent signals are asymptotically independent Gaussian random variables. Secondly, the Wintz and Kurtenbach algorithm obtains the best bit allocation for independent Gaussian random variables under an exponential approximation for the mean-square quantization error. Finally, the Max quantizer minimizes the sum of mean-square quantization errors for independent random variables. The bit allocation maps discussed in Figure II.1 can also be used. However, this will produce sub-optimal results.
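A compact, end-to-end sketch of steps 1-5 follows. Several stand-ins are used and should be read as assumptions rather than as the dissertation's procedure: the "encryption" here is simply a fixed random orthogonal weighting (any full-rank weighted sum satisfies the form of Eq. (III.3)), the 2-D cosine transform replaces the K-L transform, the allocation of Eq. (II.10) replaces the full Wintz and Kurtenbach algorithm, and a uniform quantizer scaled to each coefficient's standard deviation replaces a true Max quantizer.

import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)
N, n, B_per_block = 128, 8, 64                       # image size, block size, bits per block

image = rng.random((N, N))                           # stand-in for an m-dependent image
W = np.linalg.qr(rng.normal(size=(N, N)))[0]         # step 1: full-rank weighting (encryption stand-in)
encrypted = W @ image @ W.T

blocks = encrypted.reshape(N // n, n, N // n, n).swapaxes(1, 2).reshape(-1, n, n)
coeffs = np.stack([dctn(b, norm='ortho') for b in blocks])      # step 2: block transform
var = coeffs.var(axis=0)                                        # step 3: coefficient variances

b = B_per_block / var.size + 0.5 * np.log2(var.ravel()) - np.log2(var.ravel()).mean() / 2   # step 4 (Eq. II.10)
b = np.clip(np.rint(b), 0, 8).astype(int).reshape(n, n)

step = 8.0 * np.sqrt(var) / (2.0 ** b)                          # step 5: crude per-coefficient quantizer
quantized = np.round(coeffs / step) * step
print("bits/pixel:", b.sum() / n ** 2)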
For the application of the independence theorem, the encrypted signal must be obtained first. The techniques of transform coding can then be applied to this encrypted signal for quantization. Therefore, this technique can be referred to as transform encryption coding. A computational algorithm for signal encryption is presented in the next section.

III.2 Computational Algorithm for Signal Encryption

By the definition of encryption, Equation (III.3), we have

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1N} \\ a_{21} & a_{22} & \cdots & a_{2N} \\ \vdots & & & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NN} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix}   (III.4)

or Y = AX. Apparently, the matrix A must have full rank in order for it to be possible to decrypt the encrypted signal Y (recover X).

It is well known that an NxN image can be arranged into an N^2 x 1 vector. Similarly, any n-dimensional signal can also be arranged into a vector. In addition, the samples of most signals have the m-dependent property. Therefore, the independence theorem can be directly applied to any n-dimensional signal for any n. Strictly speaking, the result of the independence theorem holds only when the signal size approaches infinity. When the image size is 128x128, the error of the Gaussian approximation defined by Equation (III.2) is about 0.008, while this error reduces to about 0.004 when the image size is 256x256. Therefore, the larger the signal size, the better the Gaussian approximation and the independence results. However, it is difficult to calculate the signal encryption when the signal size is too large. For example, if the image size is 256x256, then X is a 65536x1 vector. The direct computation of Y_{65536x1} = A_{65536x65536} X_{65536x1} is time-consuming because it requires 4,294,967,296 multiplications. In addition, searching for a 65536x65536 matrix with full rank is also difficult. Therefore, the following algorithm is proposed as an easy way to achieve the signal encryption.

Algorithm: The encrypted signal Y can be obtained by cyclically scrambling the phase spectrum of an m-dependent signal X according to the phase spectrum of a reference function \tilde{A}.

Proof: Let X = [x_1, x_2, ..., x_N], and F[X] \equiv |F[X]| e^{j\phi_X}, where F is the Fourier transform. Then F[Y] = |F[X]| e^{j(\phi_X + \phi_{\tilde{A}})} by the above algorithm. If \tilde{A} \equiv F^{-1}[e^{j\phi_{\tilde{A}}}] \equiv [\tilde{a}_1, \tilde{a}_2, ..., \tilde{a}_N], then we have Y = \tilde{A} * X, where "*" stands for the cyclic convolution. In other words,

\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = \begin{bmatrix} \tilde{a}_1 & \tilde{a}_2 & \cdots & \tilde{a}_N \\ \tilde{a}_N & \tilde{a}_1 & \cdots & \tilde{a}_{N-1} \\ \vdots & & & \vdots \\ \tilde{a}_2 & \tilde{a}_3 & \cdots & \tilde{a}_1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_N \end{bmatrix},   (III.5)

which has the same form as Equation (III.4). Therefore, the results of this algorithm satisfy the definition of signal encryption. Q.E.D.

Remarks: It is possible that the phase spectrum of a reference function is so regular that the encrypted signal can be recognized without any effort. Therefore, a specific class of reference functions with irregular phase spectra must be used to ensure the encryption. This type of function will be discussed in the next chapter. Although the phase of the complex number (0,0) is undefined, it can be assigned any value between 0 and 2\pi. Here, the phase of (0,0) is defined as zero. With this assignment, the phase spectrum of any reference function is well defined. Therefore, the decryption can be uniquely obtained by X = F^{-1}[F[Y] e^{-j\phi_{\tilde{A}}}]. With this algorithm and the above assignment, the full rank requirement on A is relaxed. In addition, the encryption and decryption processes become simple because of the FFT and its cyclic property.

In other words, this encryption process amounts to passing the m-dependent periodic signal through an all-pass periodic phase filter. If the phase spectrum of this filter is appropriately chosen, then the filtered output (encrypted signal) is unrecognizable [42]. Actually, any cyclic filter can produce an encrypted signal, although it may be recognizable. For example, if a bandpass filter is used, the output is still in the form of an encryption. A problem with this type of filter is as follows. Since the transform coding will operate on these encrypted signals, some quantization errors must occur. These quantization errors can be assumed to be uniform over all frequencies. In the decryption, an inverse filter must be used. The quantization error will then be amplified or attenuated at most frequencies. This will create some noticeable error at specific frequencies. Therefore, an all-pass phase filter must be chosen to ensure this type of error does not appear in the reconstructed signal. In some ways, this signal encryption is also similar to phase modulation, because the phase spectrum of the m-dependent signal is modulated by that of the reference function [43].

The transform encrypted image coding is also similar to transform image coding using a spatial mask [44]-[46]. In the techniques of transform coding using a spatial mask, the image is passed through a spatial mask before transform coding. The amplitude and phase spectrum of this spatial mask are specially designed to match the human visual model. Usually, the size of this spatial mask is chosen from 3x3 to 9x9. In the transform encrypted image coding, the reference image (spatial mask) must be cyclic and as large as possible. In addition, the amplitude spectrum of the reference image (spatial mask) must be constant to avoid the grid error pattern. These two techniques, of course, can be combined together to produce better results.

If the reference function is complex, then the encrypted signal is also complex. This will create redundancies. Therefore, the reference function should be real, in which case the phase spectrum of the reference function will be an odd function. Instead of generating a real reference function and computing its phase spectrum, an odd function can be used to replace the phase spectrum of the reference function. This replacement can reduce the computational burden of the encryption and decryption processes because it is not necessary to calculate the phase spectrum of the reference function in this case.
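A two-dimensional sketch of the algorithm above, using the continuous phase spectrum of a real reference, is given below; the random binary reference array is only a stand-in for the pseudo-noise reference functions of Chapter IV, and the function names are illustrative.

import numpy as np

def encrypt(x, reference):
    phase = np.angle(np.fft.fft2(reference))          # phase spectrum of the reference function
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.exp(1j * phase)))

def decrypt(y, reference):
    phase = np.angle(np.fft.fft2(reference))
    return np.real(np.fft.ifft2(np.fft.fft2(y) * np.exp(-1j * phase)))

rng = np.random.default_rng(0)
x = rng.random((64, 64))                              # stand-in for an m-dependent image
ref = rng.integers(0, 2, size=(64, 64)) * 2 - 1       # random +/-1 reference (pseudo-noise stand-in)
y = encrypt(x, ref)
x_hat = decrypt(y, ref)
print(np.max(np.abs(x - x_hat)))                      # near machine precision: the signal is recovered

Because the reference is real, its phase spectrum is odd, so the encrypted output stays real and the same two FFTs per image noted in the text suffice for both encryption and decryption.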
As can be seen in the proof of the encryption algorithm, the key point is to provide a uniquely defined phase spectrum for the encryption process. Therefore, the binary phase spectrum of a reference function can also be used. The binary phase spectrum is usually defined as the following [47]:

F[\tilde{A}] \equiv \begin{cases} 1 & \text{for } \pi/2 \le \phi_{\tilde{A}} \le 3\pi/2 \\ -1 & \text{elsewhere.} \end{cases}   (III.6)

In summary, there are two ways to achieve the signal encryption. In either case, the m-dependent signal is treated as a periodic one. The first way is to use a continuous or binary phase spectrum of a reference function to modulate the phase spectrum of the m-dependent one. The other way is to use an odd function (continuous or binary) to modulate the phase spectrum of an m-dependent signal. Both methods can be easily obtained through the use of the FFT. Although at least two additional NxN FFTs are required by the encryption process for two-dimensional images, they can be calculated easily. The number of complex additions or multiplications required by the encryption process is 4N^2 \log_2 N when N is a power of 2. Since a 1024x1024 FFT can be calculated in real time (1/30 second) with today's CMOS technology [48], the encryption process is definitely realizable in real time when the signal size is 512x512.

Sometimes, security is another important issue during information transmission and storage. The proposed transform encryption coding also ensures security because the information is scrambled and unrecognizable when the phase spectrum of the reference function is sufficiently irregular. However, if the reference function used for encryption is known to an unauthorized person (attack), then the security is lost. Therefore, we need a specific type of reference function with the following properties to avoid these cases.
1. They are easy and fast to generate.
2. They must have a pseudo-random appearance.
3. There must be many functions of this type available.
One type of function with these properties is the pseudo-noises, which will be discussed in the next chapter. Since there are so many pseudo-noises available, an attack by unauthorized persons is time-consuming or almost impossible. In addition, the phase spectra of these pseudo-noises are irregular because of their pseudo-random appearance.

CHAPTER IV
TWO-DIMENSIONAL PSEUDO-NOISES

IV.1 Two-dimensional Quasi Maximal Area Arrays

Two-dimensional pseudo-noises, maximal area arrays (m-arrays), are useful for many applications such as coded aperture imaging [22], "add-on" data transmission [23], pattern synchronization [24], and code division image multiplexing [25] because of their pseudo-random and quasi-orthogonal properties. An m-array can be constructed by folding a corresponding m-sequence [20], [21]. For any m-sequence of length n = 2^m - 1 = 2^{k_1 k_2} - 1, the size of the corresponding m-array is n_1 = 2^{k_1} - 1 and n_2 = n/n_1, where n_1 and n_2 are relatively prime. For example, if n = 15, then n_1 = 3 and n_2 = 5 for m = 4, k_1 = 2, and k_2 = 2; if n = 63, then n_1 = 7 and n_2 = 9 for m = 6, k_1 = 3, and k_2 = 2; if n = 511, then n_1 = 7 and n_2 = 73 for m = 9, k_1 = 3, and k_2 = 3. The m-array is then obtained by writing the m-sequence down the main diagonal of an n_1 x n_2 matrix and continuing from the opposite side whenever an edge is reached. For example, an m-sequence [a_0, a_1, ..., a_{14}] produces the corresponding m-array

a_0    a_6    a_12   a_3    a_9
a_10   a_1    a_7    a_13   a_4
a_5    a_11   a_2    a_8    a_14.

If the m-sequence is [110101100100011], then the corresponding m-array is

1 1 0 1 1
0 1 0 1 0
1 0 0 0 1.
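The diagonal folding can be sketched in a few lines: the k-th bit of the m-sequence lands at row (k mod n_1), column (k mod n_2), which is why n_1 and n_2 must be relatively prime. The function name is an assumption; the 15-bit example above is reproduced.

import numpy as np

def fold_m_array(seq, n1, n2):
    """Fold an m-sequence down the main diagonal of an n1 x n2 array."""
    A = np.zeros((n1, n2), dtype=int)
    for k, bit in enumerate(seq):
        A[k % n1, k % n2] = bit
    return A

seq = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1]   # the 15-bit m-sequence used in the text
print(fold_m_array(seq, 3, 5))
# [[1 1 0 1 1]
#  [0 1 0 1 0]
#  [1 0 0 0 1]]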
The cyclic correlation properties of m-arrays are similar to those of m-sequences. For example, the cyclic auto-correlation of any 3x5 m-array is

$$\begin{bmatrix} 15 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 \\ -1 & -1 & -1 & -1 & -1 \end{bmatrix}.$$

Although m-sequences appear random to the human eye, the corresponding m-arrays have a nonrandom appearance. They are symmetric about a column of zeros. The size of an m-array is also restricted because it depends on the prime factor decomposition of the length of the corresponding m-sequence. Generating a long m-sequence to construct the corresponding m-array is also time-consuming. In addition, the construction method of the m-arrays is complicated. To solve these problems, the quasi m-array is developed.

Definition: Let [a] and [b] be any two m-sequences, where $[z] \equiv [z_0, z_1, \ldots, z_{n_z-1}]$ and z = a or b. Then the two-dimensional quasi m-array [A] is

$$[A] \equiv \begin{bmatrix} a_0\oplus b_0 & a_0\oplus b_1 & \cdots & a_0\oplus b_{n_b-1} \\ a_1\oplus b_0 & a_1\oplus b_1 & \cdots & a_1\oplus b_{n_b-1} \\ \vdots & \vdots & & \vdots \\ a_{n_a-1}\oplus b_0 & a_{n_a-1}\oplus b_1 & \cdots & a_{n_a-1}\oplus b_{n_b-1} \end{bmatrix}. \tag{IV.1}$$

By definition, every row or column of a quasi m-array is either an m-sequence or its complement. Therefore, the quasi m-arrays preserve all the properties of the m-sequences in both rows and columns. It is also easier and faster to generate the quasi m-arrays than the m-arrays. In addition, more freedom is available in selecting the size of the quasi m-arrays. For example, the size of a quasi m-array can be 7x7, 7x15, 7x31, and so on. Finally, the quasi m-array is not symmetric when two different m-sequences are used for construction. However, the quasi m-array is symmetric about the diagonal when only one m-sequence is used. For example, if [a] = [b] = [1110100], then the corresponding quasi m-array becomes

0 0 0 1 0 1 1
0 0 0 1 0 1 1
0 0 0 1 0 1 1
1 1 1 0 1 0 0
0 0 0 1 0 1 1
1 1 1 0 1 0 0
1 1 1 0 1 0 0
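Definition (IV.1) amounts to an element-wise modulo-2 "outer sum" of the two m-sequences, which takes only a few lines to sketch (assuming NumPy; this is an illustration, not code from this work).

```python
# Quasi m-array per Definition (IV.1): A[i, j] = a_i XOR b_j.
import numpy as np

def quasi_m_array(a, b):
    """Quasi m-array of size len(a) x len(b) built from m-sequences [a] and [b]."""
    a = np.asarray(a, dtype=int).reshape(-1, 1)
    b = np.asarray(b, dtype=int).reshape(1, -1)
    return a ^ b                            # modulo-2 addition of a_i and b_j

if __name__ == "__main__":
    a = [1, 1, 1, 0, 1, 0, 0]               # the length-7 m-sequence used in the example
    A = quasi_m_array(a, a)
    print(A)                                 # reproduces the symmetric 7x7 array above
    print(np.array_equal(A, A.T))            # True: symmetric when [a] = [b]
```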
These quasi m-arrays can be used as reference functions for transform encryption coding for the following reasons. First, they are easy and fast to generate. Secondly, they appear random, which ensures security. Figure IV.1 shows a quasi m-array and its binary phase spectrum. However, the number of quasi m-arrays available is limited. For example, there are only 256 255x255 quasi m-arrays available. Therefore, in the next section a new type of array is derived which can provide many arrays with the pseudo-random property. Before this type of array is stated, let us study the cyclic correlation properties of quasi m-arrays. If $[A]_{i,j}$ is defined as the ith and jth cyclic shift of [A] in column and row respectively, that is,

$$[A]_{i,j} \equiv \begin{bmatrix} a_i\oplus b_j & a_i\oplus b_{j+1} & \cdots & a_i\oplus b_{n_b+j-1} \\ a_{i+1}\oplus b_j & a_{i+1}\oplus b_{j+1} & \cdots & a_{i+1}\oplus b_{n_b+j-1} \\ \vdots & \vdots & & \vdots \\ a_{n_a+i-1}\oplus b_j & a_{n_a+i-1}\oplus b_{j+1} & \cdots & a_{n_a+i-1}\oplus b_{n_b+j-1} \end{bmatrix}, \tag{IV.2}$$

then the cyclic correlation properties of the quasi m-arrays are given by the following.

Figure IV.1 A 255x255 quasi m-array and its binary phase spectrum. (a) Quasi m-array. (b) Binary phase spectrum.

Theorem IV.1: The cyclic auto-correlation $\theta_{AA}(i,j)$ of a two-dimensional quasi m-array [A] is

$$\theta_{AA}(i,j) = \begin{cases} n_a n_b & \text{for } i = j = 0 \\ -n_b & \text{for } i \neq 0,\ j = 0 \\ -n_a & \text{for } j \neq 0,\ i = 0 \\ 1 & \text{for } i \neq 0 \text{ and } j \neq 0. \end{cases} \tag{IV.3}$$

Proof:

Case 1: $\theta_{AA}(0,0)$. By the definition of correlation, $\theta_{AA}(0,0)$ is the difference between the number of zeros and ones in $[A]\oplus[A]_{0,0}$. But

$$[A]\oplus[A]_{0,0} = \begin{bmatrix} a_0\oplus b_0\oplus a_0\oplus b_0 & \cdots & a_0\oplus b_{n_b-1}\oplus a_0\oplus b_{n_b-1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus b_0\oplus a_{n_a-1}\oplus b_0 & \cdots & a_{n_a-1}\oplus b_{n_b-1}\oplus a_{n_a-1}\oplus b_{n_b-1} \end{bmatrix}$$

is a matrix whose elements are all zero. Therefore, $\theta_{AA}(0,0) = n_a n_b$. (The number of zeros in $[A]\oplus[A]_{0,0}$ is $n_a n_b$.)

Case 2: $\theta_{AA}(i,0)$, $i \neq 0$. Similarly to Case 1, the matrix $[A]\oplus[A]_{i,0}$ can be written as

$$\begin{bmatrix} a_0\oplus b_0\oplus a_i\oplus b_0 & \cdots & a_0\oplus b_{n_b-1}\oplus a_i\oplus b_{n_b-1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus b_0\oplus a_{n_a+i-1}\oplus b_0 & \cdots & a_{n_a-1}\oplus b_{n_b-1}\oplus a_{n_a+i-1}\oplus b_{n_b-1} \end{bmatrix},$$

which becomes

$$\begin{bmatrix} a_0\oplus a_i & \cdots & a_0\oplus a_i \\ a_1\oplus a_{i+1} & \cdots & a_1\oplus a_{i+1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus a_{n_a+i-1} & \cdots & a_{n_a-1}\oplus a_{n_a+i-1} \end{bmatrix}.$$

By Property II.4, we have

$$[A]\oplus[A]_{i,0} = \begin{bmatrix} a_I & \cdots & a_I \\ a_{I+1} & \cdots & a_{I+1} \\ \vdots & & \vdots \\ a_{n_a+I-1} & \cdots & a_{n_a+I-1} \end{bmatrix},$$

where $I \neq 0$ and $I \neq i$. Therefore, $\theta_{AA}(i,0) = -n_b$ by the definition of correlation. (There are $n_b$ columns in $[A]\oplus[A]_{i,0}$, and each column is an m-sequence.)

Case 3: $\theta_{AA}(0,j)$, $j \neq 0$. Similarly to Case 2, $\theta_{AA}(0,j) = -n_a$.

Case 4: $\theta_{AA}(i,j)$, $i \neq 0$ and $j \neq 0$. Similarly to Case 2, the matrix $[A]\oplus[A]_{i,j}$ can be written as

$$\begin{bmatrix} a_0\oplus b_0\oplus a_i\oplus b_j & \cdots & a_0\oplus b_{n_b-1}\oplus a_i\oplus b_{n_b+j-1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus b_0\oplus a_{n_a+i-1}\oplus b_j & \cdots & a_{n_a-1}\oplus b_{n_b-1}\oplus a_{n_a+i-1}\oplus b_{n_b+j-1} \end{bmatrix}.$$

By Property II.4, we have

$$[A]\oplus[A]_{i,j} = \begin{bmatrix} a_I\oplus b_J & a_I\oplus b_{J+1} & \cdots & a_I\oplus b_{n_b+J-1} \\ a_{I+1}\oplus b_J & a_{I+1}\oplus b_{J+1} & \cdots & a_{I+1}\oplus b_{n_b+J-1} \\ \vdots & \vdots & & \vdots \\ a_{n_a+I-1}\oplus b_J & a_{n_a+I-1}\oplus b_{J+1} & \cdots & a_{n_a+I-1}\oplus b_{n_b+J-1} \end{bmatrix},$$

where $I \neq 0$, $I \neq i$, $J \neq 0$, and $J \neq j$. Therefore, we have

$$\theta_{AA}(i,j) = \frac{n_a-1}{2}\cdot\frac{n_b-1}{2} + \frac{n_a+1}{2}\cdot\frac{n_b+1}{2} - \frac{n_a-1}{2}\cdot\frac{n_b+1}{2} - \frac{n_a+1}{2}\cdot\frac{n_b-1}{2} = 1$$

by the definition of correlation. Q.E.D.

Theorem IV.2: The cross-correlation $\theta_{AA'}(i,j)$ between two different two-dimensional quasi m-arrays [A] and [A'] is

$$\theta_{AA'}(i,j) = \theta_{aa'}(i)\,\theta_{bb'}(j), \tag{IV.4}$$

where [A'] is the quasi m-array generated by [a'] and [b'].

Proof: Similarly to the proof of Theorem IV.1, the matrix $[A]\oplus[A']_{i,j}$ can be written as

$$\begin{bmatrix} a_0\oplus b_0\oplus a'_i\oplus b'_j & \cdots & a_0\oplus b_{n_b-1}\oplus a'_i\oplus b'_{n_b+j-1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus b_0\oplus a'_{n_a+i-1}\oplus b'_j & \cdots & a_{n_a-1}\oplus b_{n_b-1}\oplus a'_{n_a+i-1}\oplus b'_{n_b+j-1} \end{bmatrix}.$$

Now, let us define

$$[\tilde z_0\ \tilde z_1\ \cdots\ \tilde z_{n_z-1}] \equiv [z_0\ z_1\ \cdots\ z_{n_z-1}] \oplus [z'_k\ z'_{k+1}\ \cdots\ z'_{n_z+k-1}],$$

and the cross-correlation between $[z_0, \ldots, z_{n_z-1}]$ and $[z'_k, \ldots, z'_{n_z+k-1}]$ is $\theta_{zz'}(k)$, where z = a or b and k = i or j. Then

$$[A]\oplus[A']_{i,j} = \begin{bmatrix} \tilde a_0\oplus\tilde b_0 & \tilde a_0\oplus\tilde b_1 & \cdots & \tilde a_0\oplus\tilde b_{n_b-1} \\ \tilde a_1\oplus\tilde b_0 & \tilde a_1\oplus\tilde b_1 & \cdots & \tilde a_1\oplus\tilde b_{n_b-1} \\ \vdots & \vdots & & \vdots \\ \tilde a_{n_a-1}\oplus\tilde b_0 & \tilde a_{n_a-1}\oplus\tilde b_1 & \cdots & \tilde a_{n_a-1}\oplus\tilde b_{n_b-1} \end{bmatrix}.$$

By the definition of correlation, we have

$$\theta_{AA'}(i,j) = \frac{n_a+\theta_{aa'}(i)}{2}\cdot\frac{n_b+\theta_{bb'}(j)}{2} + \frac{n_a-\theta_{aa'}(i)}{2}\cdot\frac{n_b-\theta_{bb'}(j)}{2} - \frac{n_a+\theta_{aa'}(i)}{2}\cdot\frac{n_b-\theta_{bb'}(j)}{2} - \frac{n_a-\theta_{aa'}(i)}{2}\cdot\frac{n_b+\theta_{bb'}(j)}{2} = \theta_{aa'}(i)\,\theta_{bb'}(j).$$

Q.E.D.

Remarks: The cyclic auto-correlation of any quasi m-array can also be written as

$$n_a n_b \begin{bmatrix} 1 & -1/n_b & \cdots & -1/n_b \\ -1/n_a & 1/n_a n_b & \cdots & 1/n_a n_b \\ \vdots & \vdots & & \vdots \\ -1/n_a & 1/n_a n_b & \cdots & 1/n_a n_b \end{bmatrix}.$$

The difference between the m-arrays and the quasi m-arrays is the shape of the cyclic auto-correlation. The cyclic auto-correlation floor of any m-array is constant, while that of a quasi m-array is not. The cyclic cross-correlation between two different quasi m-arrays is the product of the cyclic cross-correlations between the generating m-sequences of the quasi m-arrays.
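The values predicted by Theorems IV.1 and IV.2 are easy to check numerically. In the sketch below (verification code written for illustration, not part of the original simulation), 0 is mapped to +1 and 1 to -1, so that the number of zeros minus the number of ones in $[A]\oplus[A]_{i,j}$ becomes an ordinary cyclic correlation, which is then computed with the FFT; the two short m-sequences reuse the examples given earlier.

```python
# Numerical check of the quasi m-array correlation theorems.
import numpy as np

def to_pm1(arr):
    return 1 - 2 * np.asarray(arr, dtype=int)          # 0 -> +1, 1 -> -1

def cyclic_correlation(x, y):
    """c[i, j] = sum_{k,l} x[k, l] * y[(k+i) % n1, (l+j) % n2], via the FFT."""
    X = np.fft.fft2(to_pm1(x))
    Y = np.fft.fft2(to_pm1(y))
    return np.round(np.real(np.fft.ifft2(np.conj(X) * Y))).astype(int)

def quasi_m_array(a, b):
    return np.asarray(a, int).reshape(-1, 1) ^ np.asarray(b, int).reshape(1, -1)

if __name__ == "__main__":
    a = [1, 1, 1, 0, 1, 0, 0]                           # length-7 m-sequence
    b = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1]   # length-15 m-sequence
    A = quasi_m_array(a, b)
    theta = cyclic_correlation(A, A)
    print(theta[0, 0])               # 105 = n_a * n_b      (Theorem IV.1)
    print(theta[1:, 0])              # all -15 = -n_b
    print(theta[0, 1:])              # all  -7 = -n_a
    print(np.unique(theta[1:, 1:]))  # [1]
```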
For a different pair of m-sequences, the bound on their cyclic cross-correlation is different. Therefore, a different pair of quasi m-arrays has a different bound on their cyclic cross-correlation. As the size of the quasi m-array approaches infinity, the cyclic auto-correlation approaches a delta-function. In addition, the cyclic cross-correlation is negligible when compared with the cyclic auto-correlation peak. Therefore, these quasi m-arrays are quasi-orthogonal. These quasi m-arrays are so named because their cyclic correlation properties are similar to those of m-arrays.

Example IV.1: Figure IV.2 shows the cyclic correlations of two 63x63 quasi m-arrays. Each quasi m-array is generated by an m-sequence. The two m-sequences discussed in Example II.1 were used to generate these quasi m-arrays. The cyclic cross-correlation values are $(-1)^2$, $(-17)^2$, $15^2$, $(-1)(15)$, $(-1)(-17)$, and $(-17)(15)$.
Figure IV.2 Cyclic correlations of two 63x63 quasi m-arrays. (a) Cyclic auto-correlation. (b) Cyclic cross-correlation.

In addition, the cyclic auto-correlation values are $63^2$, $(-1)(63)$, and 1. These values are obtained from the theorems above and verified by computer simulation.

In summary, properties of the quasi m-arrays are the following.

Property IV.1: For any $n_a \times n_b$ quasi m-array, the number of zeros is $(n_a n_b+1)/2$ and the number of ones is $(n_a n_b-1)/2$, where $n_a$ and $n_b$ are the lengths of the m-sequences in the rows and columns respectively.

Property IV.2: The statistical distribution of ones and zeros in each row or column is well defined. Relative positions of their runs change from sequence to sequence, but the number of each run length does not. Every row or column is an m-sequence or its complement.

Property IV.3: The cyclic auto-correlation of a quasi m-array is as given in Equation (IV.3):

$$\theta_{AA}(i,j) = \begin{cases} n_a n_b & \text{for } i = j = 0 \\ -n_b & \text{for } i \neq 0,\ j = 0 \\ -n_a & \text{for } j \neq 0,\ i = 0 \\ 1 & \text{for } i \neq 0 \text{ and } j \neq 0. \end{cases}$$

Property IV.4: A modulo-2 addition of a quasi m-array with a cyclic phase shifted replica of itself results in another cyclic phase shifted replica which differs from either of the originals.

The advantages of the quasi m-arrays over those of the m-arrays are the following. First, they are easier and faster to generate than the m-arrays. This is useful for transform encryption coding and code division multiplexing [25]. Secondly, the selection of the size of the quasi m-arrays is more flexible than that of the m-arrays. This is also useful for applications because many quasi m-arrays with different sizes and the quasi-orthogonal property can be easily generated [23]-[25]. Finally, the random appearance of the quasi m-arrays is also useful for transform encryption coding and the testing of texture segmentation algorithms [49], [50].

IV.2 Two-dimensional Gold Code Arrays

Gold code sequences are generated by the modulo-2 addition of a pair of m-sequences. Therefore, the two-dimensional Gold code array is defined as the following:

Definition: The two-dimensional Gold code array is

$$[G] \equiv \begin{bmatrix} \bar a_0\oplus\bar b_0 & \bar a_0\oplus\bar b_1 & \cdots & \bar a_0\oplus\bar b_{n_b-1} \\ \bar a_1\oplus\bar b_0 & \bar a_1\oplus\bar b_1 & \cdots & \bar a_1\oplus\bar b_{n_b-1} \\ \vdots & \vdots & & \vdots \\ \bar a_{n_a-1}\oplus\bar b_0 & \bar a_{n_a-1}\oplus\bar b_1 & \cdots & \bar a_{n_a-1}\oplus\bar b_{n_b-1} \end{bmatrix}, \tag{IV.5}$$

where $[\bar z] \equiv [z]\oplus[z']$ and z = a or b. In other words, $[\bar z]$ is a Gold code sequence; [z] and [z'] are the m-sequences with length $n_z$. These Gold code arrays can also be used as the reference function for transform encryption coding.
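Definition (IV.5) can be sketched the same way as the quasi m-array, with the two Gold code sequences taking the place of the m-sequences. In the illustration below (not code from this work), the relative phase shifts p and q select different members of the family; the tiny length-7 sequences, an m-sequence and its decimation by 3, are only stand-ins chosen to keep the example readable, since a true Gold code family requires a preferred pair of m-sequences of equal length.

```python
# Gold code array: outer modulo-2 sum of two Gold code sequences,
# G[i, j] = (a_i ^ a'_i) ^ (b_j ^ b'_j), with optional phase shifts p, q.
import numpy as np

def gold_sequence(z, z_prime, shift=0):
    """Gold code sequence [z] (+) [z'] with a relative phase shift of the second sequence."""
    z = np.asarray(z, dtype=int)
    zp = np.roll(np.asarray(z_prime, dtype=int), -shift)
    return z ^ zp

def gold_code_array(a, a_prime, b, b_prime, p=0, q=0):
    """Gold code array built from the two Gold sequences; (p, q) selects the family member."""
    abar = gold_sequence(a, a_prime, p).reshape(-1, 1)
    bbar = gold_sequence(b, b_prime, q).reshape(1, -1)
    return abar ^ bbar

if __name__ == "__main__":
    a = np.array([1, 1, 1, 0, 1, 0, 0])                  # length-7 m-sequence from the text
    a2 = np.array([a[(3 * k) % 7] for k in range(7)])    # its decimation by 3: another m-sequence
    G00 = gold_code_array(a, a2, a, a2, p=0, q=0)
    G12 = gold_code_array(a, a2, a, a2, p=1, q=2)        # a different member of the same family
    print(G00.shape, np.array_equal(G00, G12))           # (7, 7) False
```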
It can be inferred from the definition above that there are many Gold code arrays available. For example, the number of 255x255 Gold code arrays available is 936,360,000. Therefore, it is almost impossible to attack transform encryption coding when these arrays are used as reference functions. Figure IV.3 shows a Gold code array and its binary phase spectrum. The Gold code arrays and the quasi m-arrays are generated by the same construction method. Similarly to the Gold code sequences, we have

$$[G]^{p,q} \equiv \begin{bmatrix} a_0\oplus a'_p\oplus b_0\oplus b'_q & \cdots & a_0\oplus a'_p\oplus b_{n_b-1}\oplus b'_{n_b+q-1} \\ a_1\oplus a'_{p+1}\oplus b_0\oplus b'_q & \cdots & a_1\oplus a'_{p+1}\oplus b_{n_b-1}\oplus b'_{n_b+q-1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus a'_{n_a+p-1}\oplus b_0\oplus b'_q & \cdots & a_{n_a-1}\oplus a'_{n_a+p-1}\oplus b_{n_b-1}\oplus b'_{n_b+q-1} \end{bmatrix}, \tag{IV.6}$$

and $[G] \neq [G]^{p,q}$. In other words, [G] and $[G]^{p,q}$ are two different Gold code arrays generated by the same m-sequences with different relative phase shifts.

Figure IV.3 A 255x255 Gold code array and its binary phase spectrum. (a) Gold code array. (b) Binary phase spectrum.

Theorem IV.3: The cyclic auto-correlation $\theta_{GG}(i,j)$ of a two-dimensional Gold code array [G] is

$$\theta_{GG}(i,j) = \begin{cases} n_a n_b & \text{for } i = j = 0 \\ n_b\,\theta_{aa'}(I'-I) & \text{for } i \neq 0,\ j = 0 \\ n_a\,\theta_{bb'}(J'-J) & \text{for } j \neq 0,\ i = 0 \\ \theta_{aa'}(I'-I)\,\theta_{bb'}(J'-J) & \text{for } i \neq 0 \text{ and } j \neq 0, \end{cases} \tag{IV.7}$$

where $I \neq 0$, $I \neq i$, $I' \neq 0$, $I' \neq i$, $J \neq 0$, $J \neq j$, $J' \neq 0$, and $J' \neq j$.

Proof:

Case 1: $\theta_{GG}(0,0)$. Similarly to the proofs of Theorems IV.1 and IV.2, the matrix $[G]\oplus[G]_{0,0}$ is

$$\begin{bmatrix} \bar a_0\oplus\bar b_0\oplus\bar a_0\oplus\bar b_0 & \cdots & \bar a_0\oplus\bar b_{n_b-1}\oplus\bar a_0\oplus\bar b_{n_b-1} \\ \vdots & & \vdots \\ \bar a_{n_a-1}\oplus\bar b_0\oplus\bar a_{n_a-1}\oplus\bar b_0 & \cdots & \bar a_{n_a-1}\oplus\bar b_{n_b-1}\oplus\bar a_{n_a-1}\oplus\bar b_{n_b-1} \end{bmatrix},$$

whose elements are all zeros. Therefore, $\theta_{GG}(0,0) = n_a n_b$.

Case 2: $\theta_{GG}(i,0)$, $i \neq 0$. Similarly to Case 1, the matrix $[G]\oplus[G]_{i,0}$ is

$$\begin{bmatrix} \bar a_0\oplus\bar b_0\oplus\bar a_i\oplus\bar b_0 & \cdots & \bar a_0\oplus\bar b_{n_b-1}\oplus\bar a_i\oplus\bar b_{n_b-1} \\ \vdots & & \vdots \\ \bar a_{n_a-1}\oplus\bar b_0\oplus\bar a_{n_a+i-1}\oplus\bar b_0 & \cdots & \bar a_{n_a-1}\oplus\bar b_{n_b-1}\oplus\bar a_{n_a+i-1}\oplus\bar b_{n_b-1} \end{bmatrix},$$

which becomes

$$\begin{bmatrix} a_0\oplus a'_0\oplus a_i\oplus a'_i & \cdots & a_0\oplus a'_0\oplus a_i\oplus a'_i \\ a_1\oplus a'_1\oplus a_{i+1}\oplus a'_{i+1} & \cdots & a_1\oplus a'_1\oplus a_{i+1}\oplus a'_{i+1} \\ \vdots & & \vdots \\ a_{n_a-1}\oplus a'_{n_a-1}\oplus a_{n_a+i-1}\oplus a'_{n_a+i-1} & \cdots & a_{n_a-1}\oplus a'_{n_a-1}\oplus a_{n_a+i-1}\oplus a'_{n_a+i-1} \end{bmatrix}.$$

In other words,

$$[G]\oplus[G]_{i,0} = \begin{bmatrix} a_I\oplus a'_{I'} & \cdots & a_I\oplus a'_{I'} \\ a_{I+1}\oplus a'_{I'+1} & \cdots & a_{I+1}\oplus a'_{I'+1} \\ \vdots & & \vdots \\ a_{n_a+I-1}\oplus a'_{n_a+I'-1} & \cdots & a_{n_a+I-1}\oplus a'_{n_a+I'-1} \end{bmatrix}.$$

Therefore, we have $\theta_{GG}(i,0) = n_b\,\theta_{aa'}(I'-I)$ from the proofs of Theorems IV.1 and IV.2. (I and I' are different numbers because [a] and [a'] are different m-sequences.)

Case 3: $\theta_{GG}(0,j)$, $j \neq 0$. Similarly to Case 2, $\theta_{GG}(0,j) = n_a\,\theta_{bb'}(J'-J)$.

Case 4: $\theta_{GG}(i,j)$, $i \neq 0$ and $j \neq 0$. From the proofs of Theorems IV.1 and IV.2, we have $\theta_{GG}(i,j) = \theta_{aa'}(I'-I)\,\theta_{bb'}(J'-J)$. Q.E.D.

Theorem IV.4: The cross-correlation $\theta_{GG^{p,q}}(i,j)$ between two different Gold code arrays [G] and $[G]^{p,q}$ is

$$\theta_{GG^{p,q}}(i,j) = \theta_{aa'}(I_p - I)\,\theta_{bb'}(J_q - J), \tag{IV.8}$$

where $I \neq 0$, $I \neq i$, $I_p \neq 0$, $I_p \neq p+i$, $J \neq 0$, $J \neq j$, $J_q \neq 0$, and $J_q \neq q+j$.

Proof: Similarly to the proof of Theorem IV.2, the matrix $[G]\oplus[G]^{p,q}_{i,j}$ can be written with entries of the form

$$a_k\oplus a'_k\oplus b_l\oplus b'_l\oplus a_{k+i}\oplus a'_{k+p+i}\oplus b_{l+j}\oplus b'_{l+q+j}.$$

By the proof of Theorem IV.2 and Property II.4, we have

$$[G]\oplus[G]^{p,q}_{i,j} = \begin{bmatrix} a_I\oplus a'_{I_p}\oplus b_J\oplus b'_{J_q} & \cdots & a_I\oplus a'_{I_p}\oplus b_{n_b+J-1}\oplus b'_{n_b+J_q-1} \\ \vdots & & \vdots \\ a_{n_a+I-1}\oplus a'_{n_a+I_p-1}\oplus b_J\oplus b'_{J_q} & \cdots & a_{n_a+I-1}\oplus a'_{n_a+I_p-1}\oplus b_{n_b+J-1}\oplus b'_{n_b+J_q-1} \end{bmatrix}.$$

Therefore, $\theta_{GG^{p,q}}(i,j) = \theta_{aa'}(I_p-I)\,\theta_{bb'}(J_q-J)$ from the proof of Theorem IV.3. Q.E.D.

Remarks: Similarly to the quasi m-arrays, every row or column of a Gold code array is a Gold code sequence or its complement. In addition, these Gold code arrays are also quasi-orthogonal.
For the quasi m-arrays, a different pair of arrays has a different bound on its cyclic cross-correlation. On the contrary, the Gold code array provides families of arrays in which any two arrays have the same bound on their cyclic cross-correlation and auto-correlation floors. In addition, the cyclic auto-correlation peak of the Gold code array and that of the quasi m-array are the same when both arrays have the same size. These correlation properties are important for the application of code division multiplexing [25].

Example IV.2: Figure IV.4 shows the cyclic correlations of two 63x63 Gold code arrays. These arrays are generated by the m-sequences discussed in Example II.1. The cyclic cross-correlation values of these Gold code arrays are the same as those of the quasi m-arrays discussed in Example IV.1. In addition, the cyclic auto-correlation values are $63^2$, $(-17)(63)$, $(-1)(63)$, $(15)(63)$, $(-1)^2$, $(-17)^2$, $15^2$, $(-1)(15)$, $(-1)(-17)$, and $(-17)(15)$. These values are also obtained from the theorems above and verified by computer simulation.

Figure IV.4 Cyclic correlations of two 63x63 Gold code arrays. (a) Cyclic auto-correlation. (b) Cyclic cross-correlation.

CHAPTER V
COMPUTER SIMULATION

V.1 Simulation Studies

In this section, simulation studies of the developed transform encrypted image coding technique are presented. The reference functions used for encryption are the pseudo-noises discussed in the previous chapter. These pseudo-noises have the following characteristics. First, they are easy and fast to generate. Secondly, they have a random appearance. Finally, it is difficult to compromise the security of these arrays because there are so many pseudo-noises available. The size of these pseudo-noises is $(2^N-1)\times(2^N-1)$, where N is a natural number. Because of the properties of these pseudo-noises, the encrypted images can be obtained easily and remain secure.

Figure V.1 shows the original images and their encrypted images used in the simulation. The size of these images is 256x256. The gray-scaled values for the original images are from 0 to 255. Five randomly chosen 255x255 quasi m-arrays are used for the image encryption, respectively. The gray-scaled values of the encrypted image depend on the original images and reference functions. In some applications, they have to be negative to ensure a uniform amplitude spectrum. These encrypted images are scaled and shown by using gray-scaled values from 0 to 255. Figure V.1 shows these encrypted images are unrecognizable.

Figure V.1 The natural images and their encrypted images used in the simulation. (a), (b), (c), (d), (e) Natural images. (a'), (b'), (c'), (d'), (e') Encrypted images.

In addition, all the high contrast and detail of the original images are lost, and the encrypted images appear more uniform than the original images. Figure V.2 shows the estimated probability density functions for the original and encrypted images. The estimated probability density functions for the original images exhibit no regularity, while those for the encrypted images approximate Gaussian density functions.
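The encryption and density-estimation steps just described can be sketched as follows. This is an illustration only: a small synthetic two-level image and a random 0/1 array stand in for the 256x256 test images and the 255x255 quasi m-array, which are not reproduced here.

```python
# Encrypt a stand-in image with the phase spectrum of a stand-in reference and
# estimate the gray-level histograms before and after encryption.
import numpy as np

def encrypt_with_reference(image, reference):
    """All-pass phase filtering: scramble the image's phase spectrum by the reference's phase."""
    phase = np.angle(np.fft.fft2(reference))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.exp(1j * phase)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic high-contrast test image: smoothed noise thresholded to two gray levels
    noise = rng.standard_normal((64, 64))
    kernel = np.fft.fft2(np.ones((5, 5)) / 25.0, s=(64, 64))
    smooth = np.real(np.fft.ifft2(np.fft.fft2(noise) * kernel))
    image = 255.0 * (smooth > 0)
    # stand-in pseudo-noise reference (a random 0/1 array, not an actual quasi m-array)
    reference = rng.integers(0, 2, size=(64, 64)).astype(float)
    encrypted = encrypt_with_reference(image, reference)
    pdf_original, _ = np.histogram(image, bins=32, density=True)
    pdf_encrypted, _ = np.histogram(encrypted, bins=32, density=True)
    print(pdf_original.round(4))     # two spikes: the original is far from Gaussian
    print(pdf_encrypted.round(4))    # roughly bell-shaped, as reported in Figure V.2
```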
For the approximation errors defined in Equation (III.2), they are 0.00859, 0.00298, 0.00714, 0.01496, and 0.01095 for Fighter, Jackson, Building, Lenna, and Mandrill, respectively. These errors verify the extended central limit theorem because they are on the order of 0.00391 ($1/\sqrt{256^2}$). From these results, the signal encryption with size 256x256 is a good choice because the approximation error is about 0.3% to 1.5%, which is acceptable in most engineering applications.

In the independence theorem, the K-L transform is used to obtain independent Gaussian random variables. It is well known that the K-L transform is difficult to calculate and is without an efficient computation algorithm [1]-[10], [51]. Fortunately, many other unitary mathematical transforms have been proposed and their performances have been shown to approach that of the K-L transform [7]-[10]. In addition, these unitary mathematical transforms have fast computation algorithms [7]-[10]. These transformations include cosine, sine, Fourier, Hadamard, Haar, Slant, and Hartley. In the computer simulation, the K-L transform is replaced by the cosine and Hadamard transforms. The Hadamard and Fourier transforms share the same computational algorithm [1]-[10], [52]. The difference is that the transform coefficients of the Hadamard are real, while those of the Fourier are complex. Sometimes, there is a restriction on the type of signal that can be transformed by a specific transformation. For example, the cosine (sine) transform can only transform even (odd) functions. However, there is no such restriction in using the Hadamard transform. Therefore, the Hadamard transform is widely used in transform coding. Since the cosine transform was defined as the international standard for transform coding by CCITT [53], some simulation results for cosine transform coding are also studied. Although only cosine and Hadamard transform coding are studied, similar results can be expected if other transformations are used.

Figure V.2 The estimated probability density functions for the natural and encrypted images.

To perform transform image coding, the image must be partitioned into many subimages to estimate the probability density function for each transform coefficient. A unitary mathematical transform is then performed on these subimages. The larger the subimage size, the better the coding performance because the K-L transform coefficients are asymptotically independent Gaussian random variables. In addition, the performance of the unitary mathematical transforms will also approach that of the K-L transform. However, the coding performance does not improve significantly after subimage size 16x16 in practice [1]-[10]. In addition, the computational efficiency of transformation decreases when the subimage size increases. The number of samples available for the estimation of the probability density functions for different transform coefficients also decreases as the subimage size increases. Therefore, the subimage size is always chosen from 4x4 to 16x16 in practice. The subimage size used in these simulations is 16x16.
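The blockwise processing described here, together with the per-coefficient statistics and the Gaussian companding quantization discussed in the next paragraphs, can be sketched as follows. This is an illustration under stated assumptions, not the simulation code of this work: the Hadamard matrix is built by the Sylvester construction, the 16x16 subimages are transformed separately, and the compander maps a coefficient through the Gaussian CDF before uniform quantization, which is one common way to realize companding (the Wintz and Kurtenbach bit-allocation step itself is not reproduced).

```python
# Blockwise 16x16 Hadamard transform coding skeleton with a simple Gaussian compander.
import numpy as np
from scipy.special import ndtr, ndtri        # Gaussian CDF and its inverse

def hadamard(n):
    """Sylvester-construction Hadamard matrix of order n (n a power of 2), scaled to be orthonormal."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def block_transform(image, block=16):
    """Split the image into block x block subimages and Hadamard-transform each one."""
    H = hadamard(block)
    coeffs = []
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            sub = image[r:r + block, c:c + block]
            coeffs.append(H @ sub @ H.T)                 # separable 2-D transform of one subimage
    return np.array(coeffs)                              # shape: (num_blocks, block, block)

def compand_quantize(x, mean, std, bits):
    """Gaussian companding quantizer: map through the CDF, quantize uniformly, map back."""
    levels = 2 ** bits
    u = ndtr((x - mean) / std)                           # compress to (0, 1)
    idx = np.clip(np.floor(u * levels), 0, levels - 1)
    return mean + std * ndtri((idx + 0.5) / levels)      # expand the cell mid-point back

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    image = rng.standard_normal((64, 64))                # stand-in for an encrypted image
    coeffs = block_transform(image, block=16)
    variances = coeffs.var(axis=0)                       # one variance estimate per coefficient position
    dc = coeffs[:, 0, 0]
    dc_hat = compand_quantize(dc, dc.mean(), dc.std(), bits=5)
    print(variances.shape, float(np.mean((dc - dc_hat) ** 2)))
```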
The quantization technique used in the simulation is the Wintz and Kurtenbach technique [31] discussed in Section II.2. The probability density function for each transform coefficient is assumed to be Gaussian because of the independence theorem. Since the cosine and Hadamard transforms are used in the simulation instead of the K-L transform, this assumption is true only when the subimage size is large. This is another reason why the 16x16 subimage size is chosen. Companding quantization is used instead of Max quantization because of its easy implementation. As long as the mean and variance of each transform coefficient are estimated, the quantization levels and reconstructed values are easily obtained.

Usually, the bit allocation is set according to a map regardless of the particular image. Cosine transform coding for three different maps is also studied. The bit allocation maps are the ones shown in Figure II.1. The advantage of using these maps is that no run-length coding for bit allocation is required. However, the drawback is that the bit allocation cannot be changed according to the image statistics.

V.2 Results

Different cases are tested in the computer simulation. First, the Hadamard transform image coding is studied for Fighter, Jackson, and Building. For the Hadamard transform encrypted image coding, the 255x255 quasi m-arrays are first used as the reference functions to produce the encrypted images. The size of the original images is also reduced to 255x255. Since 255 is not a power of 2, a prime factor FFT must be used to compute the encrypted images [54], [55]. After the encrypted images are obtained, the Hadamard transform encrypted image coding is investigated. The drawback is that more computation is required to compute the encrypted images using a 255x255 FFT than a 256x256 FFT. However, code division image multiplexing is possible because of the quasi-orthogonal property of the quasi m-arrays [25]. To obviate the need for prime factor FFTs above, the 255x255 quasi m-arrays are put into 256x256 matrices as the reference functions with the boundaries filled with zeros. The experiment above is then repeated. The quasi-orthogonal property of the quasi m-arrays is lost when they are zero padded. However, the speed of a 256x256 FFT is about 3 times faster than that of a 255x255 FFT [25]. Therefore, the speed of encryption is greatly increased. In this case, three 256x256 FFTs are required by the encryption process. To reduce the number of FFTs required in this process, the following case is tested. The quasi m-arrays are first put into 256x256 matrices with the boundaries filled with zeros. The binary phase spectra of these 256x256 matrices are calculated according to Equation (III.5). The binary phase spectrum of a quasi m-array is shown in Figure IV.1. These binary phase spectra are stored inside the computer. The encrypted images are then obtained by cyclically scrambling the phase spectra of the original images according to these binary phase spectra stored in the computer. The Hadamard transform encrypted image coding is then studied. The encrypted images shown in Figure V.1 are the results of this case.
The advantage of this case is that the number of FFTs required by the encryption process is reduced to two, which increases the speed of encryption by 1.5 times. The disadvantage is that the binary phase spectrum must be computed and stored first. In all four cases, the encrypted images appear similar.

Figures V.3, V.4, and V.5 show the reconstructed images of Hadamard transform image and encrypted image coding at nominal 0.74, 1.09, and 1.52 bits/pixel, respectively. The encrypted images used for transform coding are shown in Figure V.1. Apparently, the results of transform encrypted image coding are much better than those of transform image coding. The transform encrypted image coding preserves the edge, contrast, and detail better than the transform image coding. In the transform image coding, the quantization error only appears at the subimages where the error occurs. Therefore, the error is localized. The blocking effect, which results from nonoverlapping coding of each subimage, is highly visible and very unpleasant visually. This effect can be observed in Figures V.3, V.4, and V.5. This blocking effect can be removed by overlapping blocks [7]-[10]. However, the efficiency of data compression will decrease because the redundancies between blocks increase. On the contrary, the quantization error spreads over the whole image in the transform encrypted image coding because of the required decryption. Therefore, the error is less noticeable. In addition, the blocking effect is removed because the error associated with nonoverlapping coding of each subimage spreads over the whole image during the process of decryption.

Figure V.3 The reconstructed images from Hadamard transform coding at 1.52 bits/pixel. (a), (b), (c) From transform coding. (a'), (b'), (c') From transform encryption coding.

Figure V.4 The reconstructed images from Hadamard transform coding at 1.09 bits/pixel. (a), (b), (c) From transform coding. (a'), (b'), (c') From transform encryption coding.

Figure V.5 The reconstructed images from Hadamard transform coding at 0.74 bit/pixel. (a), (b), (c) From transform coding. (a'), (b'), (c') From transform encryption coding.

Table V.1 shows the objective simulation results, signal-to-noise ratio (SNR), at nominal 0.74, 1.09, and 1.52 bits/pixel, respectively. The SNR is defined as the total signal energy over the total noise energy (Equation (V.1)).

Table V.1 Simulation results of Hadamard transform coding.

  Images     Case    SNR @ 1.52    SNR @ 1.09    SNR @ 0.74   (nominal bits/pixel)
  Fighter     A         296           238           185
              B         777           525           356
              C         794           544           361
              D         788           541           360
  Jackson     A         282           210           161
              B         531           332           223
              C         495           329           219
              D         502           315           221
  Building    A         120           86.8          62.4
              B         189           122           83.0
              C         192           123           83.9
              D         191           121           84.0

  A. Reconstructed images obtained by transform coding.
  B. Reconstructed images obtained by transform encryption coding. 255x255 quasi m-arrays are used as the reference images. The size of the images is also reduced to 255x255.
  C. Reconstructed images obtained by transform encryption coding. The 256x256 version of the 255x255 quasi m-arrays with the boundary filled with zeros is used as the reference images.
  D. Reconstructed images obtained by transform encryption coding. The 256x256 binary phase spectra of quasi m-arrays are used to encrypt the images.

$$\mathrm{SNR} = \frac{\sum_{i,j} f(i,j)^2}{\sum_{i,j}\left[f(i,j)-\hat f(i,j)\right]^2}, \tag{V.1}$$

where $f(i,j)$ and $\hat f(i,j)$ are the original and reconstructed images, respectively.
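Equation (V.1) is a one-line computation; the small helper below (written for illustration, with synthetic data in the demonstration) makes the definition concrete.

```python
# Signal-to-noise ratio as defined in Equation (V.1).
import numpy as np

def snr(original, reconstructed):
    """Total signal energy divided by total noise energy."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return np.sum(original ** 2) / np.sum((original - reconstructed) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    f = rng.integers(0, 256, size=(64, 64)).astype(float)     # stand-in original image
    f_hat = f + rng.normal(scale=8.0, size=f.shape)           # stand-in reconstruction error
    print(round(snr(f, f_hat), 1))    # a few hundred, the same order as the entries in Table V.1
```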
The simulation results show a great improvement in the SNR at different bits/pixel. The SNR of Hadamard transform encrypted image coding is about 1.34-2.66 times higher than that of the Hadamard transform image coding. This improvement depends on the type of image and the bits/pixel used. Although 1.09 bits/pixel is required by the conventional Hadamard transform image coding to have an acceptably reconstructed image, the same quality image can be obtained by the Hadamard transform encrypted image coding at only 0.74 bit/pixel.

An objective measure is not always suitable for the evaluation of image quality. Therefore, a subjective paired-comparison method [8] is also conducted. The paired-comparison method involves the showing of two images to observers at a time and asking them to express a preference. The image pairs used in this measure are shown in Figures V.3, V.4, and V.5. Twenty observers were asked to do the comparison. All the observers preferred the results of transform encrypted image coding to those of transform image coding. Although the quality of reconstructed images does improve from transform image coding to transform encrypted image coding, there are some discrepancies. The improvement for a high contrast image such as Fighter is obvious, while the improvement for the other type of image is not. An explanation for the improvement of transform encrypted image coding for the high contrast image is as follows. For the high contrast image, the image data are less uniform. Therefore, the variance of the transform coefficients is large. Sometimes, the distribution of these transform coefficients cannot be approximated well by a Gaussian density function. Therefore, as expected, the mean-square quantization error is large. In addition, the blocking effect at the regions of high contrast is very obvious. On the contrary, the encrypted image seems more uniform than the original image. Therefore, the blocking effect is not so serious as in transform coding. In addition, the decryption process will spread the quantization error over the whole image. Therefore, the reconstructed images from transform encryption coding are more acceptable.

A correct phase spectrum of the reference function must be used to decrypt the encrypted image. If a wrong phase spectrum is used, the original image cannot be recovered. Figure V.6 shows the results of this case. Therefore, a correct phase spectrum must be given to recover the original image; otherwise, a search for the correct phase spectrum is necessary. Since there are so many quasi m-arrays and Gold code arrays available, the search for the phase spectrum of a correct array is time-consuming. Hence the security of the information is achieved, because unauthorized persons do not have the knowledge of the phase spectrum of the reference function and it is too difficult to find this information.

Lenna and Mandrill are used for the study of cosine transform coding because their approximation errors for the Gaussian distribution are the largest among the test images. Figures V.7, V.8, and V.9 show the results of cosine transform coding when the three different bit allocation maps shown in Figure II.1 are used. Table V.2 shows the SNR at 0.5, 1, and 1.5 bits/pixel, respectively.
The SNR of cosine transform encrypted image coding is about 1.12-1.97 times better than that of cosine transform image coding. It is evident that the results of cosine transform encryption coding are better than those of cosine transform coding when the same number of bits/pixel is used. In addition, the blocking effect is removed and the results are more acceptable. The subjective paired-comparison method also shows the results of transform encrypted image coding are better than those of transform image coding. In general, the results of transform encryption coding at 0.5 bit/pixel have the same good quality as those of transform coding at 1 bit/pixel. Although adaptive transform coding can also achieve about 0.5 bit/pixel, it usually requires a significant amount of calculation and overhead information. With the technique of transform encryption coding, 0.5 bit/pixel can be easily obtained without these drawbacks. In addition, this technique is compatible with all the other techniques of transform coding. Therefore, an image transmitted or stored at less than 0.5 bit/pixel can be obtained when transform encryption coding is employed with the other techniques.

Figure V.6 The decrypted images according to the wrong reference functions. (a) The decrypted Fighter. (b) The decrypted Jackson.

Figure V.7 The reconstructed images from cosine transform coding at 1.5 bits/pixel. (a), (b) From transform coding. (a'), (b') From transform encryption coding.

Figure V.8 The reconstructed images from cosine transform coding at 1.0 bit/pixel. (a), (b) From transform coding. (a'), (b') From transform encryption coding.

Figure V.9 The reconstructed images from cosine transform coding at 0.5 bit/pixel. (a), (b) From transform coding. (a'), (b') From transform encryption coding.

Table V.2 Simulation results of cosine transform coding.

  Images      Case    SNR @ 1.50    SNR @ 1.00    SNR @ 0.50   (bits/pixel)
  Lenna        A         397           270           182
               B         730           531           280
  Mandrill     A         117           95.7          67.6
               B         134           118           75.9

  A. Reconstructed images obtained by transform coding.
  B. Reconstructed images obtained by transform encryption coding. The 256x256 binary phase spectra of quasi m-arrays are used to encrypt the images.

In summary, the transform encrypted image coding outperforms the transform image coding according to objective and subjective criteria. 0.5 bit/pixel can be expected by transform encryption coding according to the simulation results shown above, while 1 bit/pixel is necessary by transform coding. The only drawback is the encryption process, which requires at least two NxN FFTs. Since the FFT can be calculated easily and quickly by today's technology, this will not be a problem in practice. In addition, transform encrypted image coding also offers much higher security than transform image coding.

CHAPTER VI
CONCLUSION

VI.1 Limitations and Advantages

An independence technique for m-dependent random variables is first developed in this dissertation. The resulting independent random variables are also shown to follow the Gaussian distribution. The application of this technique to the transform image coding is also investigated. Since the transform coefficients are independent Gaussian random variables, a simple Max quantizer can achieve the optimal quantization. In addition, the redundancies in the transform coefficients are also removed.
, This technique can be directly applied to any image or speech signal because these sig- nals have the m-dependent property. Although the cryptography is required by the proposed independence technique, the encrypted signal can be obtained easily and quickly by the algorithm developed in this dissertation. In addition, transform encryp- tion coding has the higher security than transform coding. The simulation results show a great improvement in the coding efficiency. Satisfactorily reconstructed image from the cosine/Hadamard transform encrypted image coding at 0.5 bit/pixel were obtained. However, the similar results were obtained by cosine/Hadamard transform image cod- ing at l bit/pixel. The results of transform encrypted image coding outperform those of transform image coding under the objective and subjective criteria. Higher coding 85 86 efficiency can be expected if K-L transform is used. In addition, the technique of transform encryption coding is compatible with the all other techniques of transform coding. Therefore, they can be employed together to produce a better results at less than 0.5 bit/pixel. In addition, the blocking effect is also removed by the decryption process. Although only the application of the independence technique to image signals is studied, similar results can also be expected in speech signals. Two-dimensional quasi m-arrays and Gold code arrays are also studied in this dissertation. These arrays have the quasi-orthogonal properties which are useful for many applications. The cyclic auto-correlation of any array is close to the delta- function. In addition, the cyclic cross-correlation between any two arrays is small compared with their cyclic auto—correlation peak. These arrays also have the pseudo- random property which is suitable for the reference function used in transform encryp- tion coding. In addition, the number of these arrays available is so large that a suc- cessful attack by unauthorized persons is almost impossible. In summary, these arrays have the following advantages in applications. 1. They can be easily and quickly generated. 2. The size of these arrays is flexible. 3. These arrays are pseudo-random. 4. There are many of this type of array available. The only limitation is that the cyclic auto-correlation floor of these arrays in not con- stant. N-dimensional quasi m-hypercubes and Gold code hypercubes can also be gen- erated by the same construction method proposed in this dissertation. A quasi- orthogonal property of these hypercubes is also expected. These n-dimensional hyper- cubes are much easier to generate than the n-dimensional Welti codes [56]. In addi- tion, these hypercubes can also be used in the n-dimensional transform encryption 87 coding. V1.2 Recommendations for Further Work Transform coding has been studied for more than two decades, and many tech- niques have been proposed [l]-[10]. All these techniques can be employed with the transform encryption coding. The following is a summary of some important direc- tions for further work. I 1. Image signals share many similarities with speech signals. Therefore, similar results fi'om transform encrypted speech coding can also be expected. The prob- lems is how good the transform encrypted Speech coding can be under the objec- tive and subjective criteria. This is an interesting problem for researchers in speech coding to study. There are many transformations which can be used for transform coding. 
The cosine transform is the best approximation to K-L transform when the image model is a first order Markov process [7]-[10]. Since the encrypted signals are always joint Gaussian, a comparison of these transformations based on the joint Gaussian images is necessary. There are many techniques in transform coding such as visual, thresholded, adap- tive, and hybrid transform coding [1]-[10], [57]. These techniques can always be employed with the transform encrypted coding. How good the coding perfor- mance can be when these techniques are merged is also an interesting problem. Noise is always presented during the information transmission and storage. Therefore, a study of transform encryption coding under the presence of noise is also necessary. The optimal bit allocation for independent random variables proposed by Segall are under the assumption that the bit allocation can be any real number [29]. This is an incorrect assumption because the quantization levels are always integers 88 and then the bit allocation cannot be any real number. How to solve this optimal bit allocation problem under the assumption of integer quantization levels is also essential. The independence theorem holds only when the K-L transform is used. However, an efficient computation algorithm for K-L transform does not yet exist [1]-[10], [51]. Although the performance of many unitary mathematical transformations is close to that of K-L, they are not exactly the same. Therefore, an efficient com- putation algorithm for K-L transform must be investigated. The two-dimensional pseudo-noises developed in this dissertation are useful for many applications. Some possible topics for further work are the following. 1. The m-sequences or m-arrays can be represented by the primitive polynomial under the finite field [15], [16]. How can one represent the quasi m-arrays and the Gold code arrays by the primitive polynomial? What other properties do they have? These are some interesting issues in understanding the properties of two- dimensional finite fields and primitive polynomials. Artificial even-odd textures (random patterns, pseudo—noises) are useful for the study of texture segmentation algorithms [49]. Some even-odd textures can be discriminated by these algorithms, but some cannot [57], [58]. What is the rea- son? Is it because of the correlation between the even and odd texture? The quasi m-arrays are indeed even textures. Is there any odd texture with quasi- orthogonal property? If so, can this type of even-odd texture be discriminated? This might be an intesting problem for researchers in computer vision. APPENDIX OPTICAL ENCRYPTION Two dimensional convolution and correlation can be easily performed by optics because of the Fourier transform ability of converging lens. [59]-[62]. In addition, an optical system processes the information at the speed of light which is impossible with today’s electronic technology. A typical optical information processing system is shown in Figure A.1. The first lens is to take the Fourier transform of the input object. The inverse Fourier transform of the filtered output at the filter plane is then taken by the second lens and displayed at the output plane. Although two-dimensional convolution and correlation can be performed by a match (V anderLught) filter [59]- [62], the filter synthesis is difficult and time-consuming. In addition, the light efficiency of matched filter is low. Real-time devices that can modulate the phase or amplitude of light are already available [63]-[65]. 
Two-dimensional convolution and correlation can be performed by these devices. In addition, the light efficiency of these devices is also better than that of the matched filter. Most devices of this type are controlled by a digital computer. Therefore, filters can be stored in the computer and filter switching is easy. An optical system with such real-time devices is called an electro-optics system. This system is important because of its programmability and high light efficiency.

An image formation system usually consists of lenses and recording devices. If the images obtained by such a system are to be used in digital transmission or storage, then quantization is necessary. To have a high quantization efficiency, the encryption must be performed on these images to produce the joint Gaussian encrypted image.

Figure A.1 An optical system for information processing.

Since the encryption is obtained by convolution, it can also be obtained by an optical or electro-optical system. Obtaining the encrypted image by optics is a better way than by electronics because all that is necessary is to modify the image formation system. In addition, the filter used for encryption can be changed easily and quickly by computer. Figure A.2 shows a proposed system for an encrypted image formation system. This system is important because the encrypted images are directly obtainable by the recording device and ready for transform coding. The liquid crystal light valve is used to convert an incoherent optical image into a coherent one, which is suitable as an input for any coherent optical information processing system. This conversion is necessary because the Fourier transform property of the converging lens holds only when a coherent (laser) light source is used. The lens takes the Fourier transform of the input optical wave and displays the results at the output plane. Therefore, it is the output optical wave (amplitude and phase) that is equivalent to the result of the Fourier transform. The filter must be changed easily for security considerations. This can be done by using a programmable magneto-optics device. In addition, the requirement of an all-pass filter is also achieved because the magneto-optics device can be set to modulate the phase of light only. The output of this system can be recorded directly by the CCD. Although the amplitude of the optical wave for an image after the liquid crystal light valve is always positive, that for the encrypted image can be negative. In other words, there can be a 180° phase difference between the optical wave of the original image and the encrypted one. However, the CCD can only detect the light intensity and cannot distinguish the phase difference in the optical wave. This problem can be solved by the interferometry [59] shown in Figure A.2. The CCD always records the light intensity of an encrypted image twice. The first time is with the shutter closed, which will record the light intensity of the encrypted image only. Therefore, the amplitude of the optical wave for an encrypted image can be obtained by the square root of the CCD output.

Figure A.2 An optical system for encrypted image formation.
The second time is with the shutter open, which will record the interference between the optical wave of the encrypted image and the reference optical wave. Since the difference between the two optical path lengths is set to be smaller than the coherence length of the laser, there will be a constructive or destructive interference at the output plane. The constructive interference corresponds to the no-phase-difference condition between the optical waves, while the destructive one corresponds to a 180° phase difference. Therefore, the 0° or 180° phase shift in the optical wave of the encrypted image can be determined by the output of the CCD at this time.

Usually, only linear convolution is performed by an optical system. This will degrade the joint Gaussian property of the encrypted images. In addition, it also degrades the performance of transform encryption coding. All of these are because the definition of encryption is not fully satisfied. In addition, linear convolution also doubles the space-bandwidth product of the encrypted images. A simple simulation is used to see the effect above. The original images are Fighter, Jackson, and Building shown in Figure V.1. The original size of the encrypted images is 512x512 because the space-bandwidth product is doubled by the linear convolution. The size of these encrypted images is reduced to 256x256 to have compatible resolution with the original 256x256 images. These 256x256 encrypted images are then decrypted by an optical linear deconvolution. Figure A.3 shows the simulation results. The SNRs of the simulation results for Fighter, Jackson, and Building are 350, 246, and 89, respectively. The noise occurs because the resolution of the encrypted image is reduced from 512x512 to 256x256.

Figure A.3 Optical encrypted and decrypted images by linear convolution. (a), (b), (c) Encrypted images. (a'), (b'), (c') Decrypted images.

To obtain better results, cyclic convolution must be implemented in optics. Therefore, the space-bandwidth product of the encrypted image will not be doubled. In addition, the joint Gaussian property of the encrypted signals can be maintained. Indeed, cyclic convolution can be implemented by a linear convolution. This is done as follows. Let us assume there are two signals of length N. Then the cyclic convolution of these two signals can be obtained by the linear convolution of one signal with a periodic version of the other. The periodic signal consists of two identical copies of the original sequence. Therefore, the total length of the periodic signal is 2N in this case. Figure A.4 shows the method for obtaining cyclic convolution using linear convolution. The total length of the linear convolution output is 3N, while only the middle output with length N is the result of cyclic convolution.

Although cyclic convolution can be obtained by linear convolution, cyclic deconvolution by linear deconvolution is not yet available. This is an interesting problem in optical information processing and deserves further attention.

Figure A.4 The method of obtaining cyclic convolution using linear convolution.

BIBLIOGRAPHY

[1] A.G. Tescher & J.A. Saghri, "Advances in transform image coding," Optical Engineering, vol. 26, pp. 563-569, July 1987.

[2] A.G. Tescher & J.A. Saghri, "Adaptive transform coding and image quality," Optical Engineering, vol. 25, pp. 979-983, August 1986.

[3] H.G. Musmann, P.
Pirsch, & H.J. Grallert, "Advances in Picture coding," Proceedings of the IEEE, vol. 73, pp. 523-548, April 1985. M. Kunt, A. lkonomopoulos, & M Kocher, "Second- generation image-codin g techniques," Proceedings of the IEEE, vol. 73, pp. 549-574, April 1985. AK. Jain, "Image data compression: a review," Proceedings of the IEEE, vol. 69, pp. 349-389, March 1981. AN. Netravali & J.O. Limb, "Picture coding: a review,” Proceedings of the IEEE, vol. 68. pp. 366-406, March 1980. J .S. Lim, Two-dimensional Signal and Image Processing, Englewood Cliffs: Pren- tice Hall, 1990. RC. Gonzalez & P. Wintz, Digital Image Processing, Reading: Addison-Wesley, 1987. R.J. Clarke, Transform Coding of Images, London: Academic Press, 1985. [10] W.K. Pratt, Digital Image Processing, New York: John Willey & Sons, 1978. [11] V.R. Algam & D.J. Sakrison, "On the optimality of the Karhunen-Loeve expan- sion," IEEE Transactions on Information Theory, vol. IT-15, pp. 319-320, March 1969. [12] G.L. Turin, "On the optimum energy distribution in sequential detection," IEEE Transactions on Information Theory, vol. IT-15, pp. 321, March 1969. [13] N.M. Nasrabadi & R.A. King, "Image coding using vector quantization: a review," IEEE Transactions on Communications, vol. COM-36, pp. 957-971, August 1988. 97 98 [14] H. C. Tseng & T. R. Fischer, "Transform & hybrid transform/DPCM coding of images using pyramid vector quantization, 'IEEE Transactions on Communica- tions, vol. COM-35, pp. 79-86, January 1987. [15] LG. Proakis, Digital Communications, New York: McGraw Hill, 1989. [16] R. J. McEliece, Finite Fields for Computer Scientists and Engineers, Boston: Kluwer Academic, 1987. [17] R. Skaug & J .F. Hjelmstad. Spread Spectrum in Communication, London: Peter Peregrinus, 1985. [18] RC. Dixon, Spread Spectrum Systems, New York: John Willey & Sons, 1984. [19] D.V. Sarwate & M.B. Pursley, "Cross-correlation properties of pseudorandom and related sequences: a review," Proceedings of the IEEE, vol. 68, pp. 593-619, May 1980. [20] T. Nomura, H. Miyakawa, H. Irnai & A. Fukuda, "A theory of two-dimensional linear recurring arrays," IEEE Transactions on Information Theory, vol. IT-18, pp. 775-785, November 1972. [21] FJ. MacWilliams & N.J.A. Sloane, "Pseudo-random sequences and arrays," Proceedings of the IEEE, vol. 64, pp. 1715-1729, December 1976. [22] N. Ohyama, T. Honda, & I. Tsujiuchi, "An advanced coded imaging without sidelobes," Optics Communications, vol. 27, pp. 339-344, December 1978. [23] W. Szepanski, "Compatibility problems in add-on data transmission for TV chan- nels," Proceedings of the Second Symposium on Electromagnetic Compatibility, pp. 263-268, June 1977. [24] S.W. Golomb & H. Taylor, "Two-dimensional synchronization patterns for minimum ambiguity," IEEE Transactions on Information Theory, vol. IT-28, pp. 600-604, July 1982. [25] C.J. Kuo & H. Rigas, "Image multiplexing by code division technique," SPIE Proceeding, vol. 1153, August 1989. [26] T.R.N. Rao & K.H. Nam, "Private-key algebraic-code encryptions," IEEE Tran- sactions on Information Theory, vol. 35, pp. 829-833, July 1989. [27] W. Diffie & M.E. Hellman, "New directions in cryptography," IEEE Transactions on Information Theory, vol. 22, pp. 644-654, November 1976. [28] J. Max, "Quantizing for minimum distortion," IRE Transactions on Information Theory, vol. IT-6, pp. 7-12, March 1960. [29] A. Segall, "Bit allocation and encoding for vector sources," IEEE Transactions on Information Theory, vol. IT-22, pp. 162-169, March 1976. 99 [30] L.D. 
Davisson, "Rate-distortion theory and application," Proceedings of the IEEE, vol. 60. PP. 800-808, July 1972. [31] RA. Wintz & AJ. Kurtenbach, "Waveform error control in PCM telemetry," IEEE Transactions on Information Theory, vol. IT- 14, pp. 650-661, September 1968. [32] J .J .Y. Huang & PM. Schultheiss, "Block quantization of correlated Gaussian ran- dom variables," IEEE Transactions on Communications Systems, vol. CS-l 1, pp. 289-296, September 1963. [33] HP. Kramer & M.V. Mathews, "A linear coding for transmitting a set of corre- lated signals," IRE Transactions on Information Theory, vol. IT-2, pp. 41-45, September 1956. [34] S. Lin & DJ. Costello, Jr., Error Control Coding: Fundamentals and Applica- tions, Englewood Cliffs: Prentice-Hall, 1983. [35] C. Stein, "A bound for the error in the normal approximation to the distribution of a sum of dependent random variables," Sixth Berkeley Symposium on Mathemati- cal Statistics and Probability, vol. 2, pp. 583-602, 1972. [36] L. Heinrich, A Method for the Derivation of Limit Theorems for Some Classes of Dependent Random Variables, Berlin: Akademie Der Wissenschaften Der DDR, 1983. [37] K.N. Berk, "A central limit theorem for m-dependent random variables with unbounded m," Annual Probability, vol. 1, pp. 352-354, April 1973. [38] M. Resenblatt, "A central limit theorem and a strong mixing condition," Proceed- ing of National Academy of Sciences, vol. 42, pp. 43-47, 1956. [39] W. Hoeffding & H. Robbins, "The central limit theorem for dependent random variables," Duke Mathematical Journal, vol. 15, pp. 773-780, 1948. [40] A. Papoulis, Probability, Random Variables, and Stochastic Processes, New York: McGraw Hill, 1984. [41] J.M. Wozencraft & I.M. Jacobs, Principles of Communication Engineering, New York: John Wiley & Sons, 1965. [42] A.V. Oppenheim & J.S. Lim, "The importance of phase in signals," Proceedings of the IEEE, vol. 69. pp. 529-541, May 1981. [43] AB. Carlson, Communication Systems, New York: McGraw Hill, 1986. [44] AN Netravali & B. Prasada, "Adaptive quantization of picture signals using spa- tial masking," Proceedings of the IEEE, vol. 65, pp. 536-548, April 1977. 100 [45] J.L. Mannos & DJ. Sakrison, "The effects of a visual fidelity criterion on the encoding of images," IEEE Transactions on Information Theory, vol. IT-20, pp. 525-536, July 1974. [46] Z.L. Budrikis, "Visual fidelity criterion and modeling," Proceedings of the IEEE, vol. 60. PP. 771-779, July 1972. [47] J.L. Homer & H.O. Barteit, "Two-bit correlation," Applied Optics, vol. 24, pp. 2889-2893, September 1985. [48] S.Y. Kung, R.E. Owen, & J.G. Nash, VLSI Signal Processing, 11, New York: IEEE Publication, 1986. [49] A. Rosenfeld & A.C. Kak, Digital Picture Processing vols. l & 2, New York: Academic Press, 1982. [50] B. Julesz & LR. Bergen, "Textons, the fundamental elements in preattentive vision and perception of textures," The Bell System Technical Journal, vol. 62, pp. 1619-1645, July-August 1983. [51] W.D. Ray & R.M. Driver, "Further decomposition of the Karhunen-Loeve series representation of a stationary random process," IEEE Transactions on Information Theory, vol. IT-16, pp. 663-668, November .1970. [52] KG. Beauchamp, Application of Walsh and Related Functions, London: Academic Press, 1984. [53] R. Lucky, Distinguished lecture, Michigan State University, April 14, 1989. [54] A.V. Oppenheim & R.W. Schafer, Digital Signal Processing, Englewood Cliffs: Prestice Hall, 1975. [55] J.S. Lim & A.V. 
Oppenheim, Advanced Topics in Signal Processing, Englewood Cliffs: Prentice Hall, 1988.

[56] H.D. Luke, "Sets of one and higher dimensional Welti codes and complementary codes," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-21, pp. 170-179, March 1985.

[57] B. Julesz, E.N. Gilbert, & J.D. Victor, "Visual discrimination of textures with identical third-order statistics," Biological Cybernetics, vol. 31, pp. 137-140, 1978.

[58] F. Farrokhnia, private communications, 1989.

[59] E. Hecht, Optics, Reading: Addison-Wesley, 1987.

[60] J.L. Horner, Optical Signal Processing, San Diego: Academic Press, 1987.

[61] F.T.S. Yu, Optical Information Processing, New York: John Wiley & Sons, 1983.

[62] J.D. Gaskill, Linear Systems, Fourier Transforms, and Optics, New York: John Wiley & Sons, 1978.

[63] A.D. Fisher, L.C. Ling, J.N. Lee, & R.C. Fukuda, "Photoemitter membrane light modulator," Optical Engineering, vol. 25, pp. 261-268, February 1986.

[64] W.E. Ross, D. Psaltis, & R.H. Anderson, "Two-dimensional magneto-optic spatial light modulator for signal processing," Optical Engineering, vol. 22, pp. 485-490, July-August 1983.

[65] D.R. Pape & L.J. Hornbeck, "Characteristics of the deformable mirror device for optical information processing," Optical Engineering, vol. 22, pp. 675-681, November-December 1983.