NON-SPEECH SOUND DISCRIMINATION IN SUBJECTS WITH IMPAIRED HEARING

Thesis for the Degree of Ph.D.
MICHIGAN STATE UNIVERSITY
Paul S. Niswander
1968

This is to certify that the thesis entitled NON-SPEECH SOUND DISCRIMINATION IN SUBJECTS WITH IMPAIRED HEARING presented by Paul S. Niswander has been accepted towards fulfillment of the requirements for the Ph.D. degree.

ABSTRACT

NON-SPEECH SOUND DISCRIMINATION IN SUBJECTS WITH IMPAIRED HEARING

By Paul S. Niswander

Although speech stimuli provide the basis for much human interaction, a large proportion of an individual's sound environment is composed of meaningful, non-speech sounds. Some non-speech sounds serve warning or alerting functions and thereby are important to the physical well-being of the individual. Other non-speech sounds provide additional information and assist man to adjust more successfully to his environment.

The characteristics important in discriminations involving non-speech sounds are not known. This research was conceived in an attempt to evaluate some of these parameters in subjects with impaired hearing.

A test consisting of fifty familiar, non-speech sounds was constructed. The stimuli included animal cries, mechanical noises, musical instruments, and human non-speech sounds. The test was administered to sixteen subjects with impaired hearing. The subjects were divided into two groups of equal size on the basis of the extent of the hearing loss. Class I subjects had average thresholds for the speech frequencies (500 Hz, 1000 Hz, and 2000 Hz) between 20 dB and 40 dB ISO in the better ear. Subjects assigned to Class II had average thresholds between 40 dB and 60 dB ISO in the better ear at these frequencies.

Initial pure-tone thresholds were determined for each ear at 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz. The speech reception threshold was determined for binaural earphone presentation. Lists of CID W-22 words were administered binaurally through earphones at -8 dB, 0 dB, +8 dB, +16 dB, and +24 dB relative to the SRT. A non-speech sound list was presented at each of the same sensation levels. Each subject was tested twice with at least two weeks between the test sessions. Spectral analysis of the non-speech sounds provided the frequency characteristics of each sound.

The results of the study indicated that the discrimination functions (sensation level vs. discrimination score) for the non-speech sounds very closely resembled those obtained from the speech test. The slope of the linear portion of the non-speech function was slightly less than the slope of the corresponding portion of the speech function. In addition, the correlation between the speech and non-speech scores at the higher sensation levels was quite high. The results suggest that similar processes mediate the discriminations of these stimuli.
High reliability coefficients were obtained for the speech and non-speech tests of auditory discrimination. There was no significant difference between the correlations obtained from the two tests.

No significant correlations were found between the audiometric configurations of the subjects and the spectra of the sounds which were identified correctly at each sensation level. Moreover, the various frequency bands of the non-speech sounds were discovered to have no differential effects in their contributions to intelligibility.

NON-SPEECH SOUND DISCRIMINATION IN SUBJECTS WITH IMPAIRED HEARING

By Paul S. Niswander

A THESIS

Submitted to Michigan State University in partial fulfillment of the requirements for the degree of

DOCTOR OF PHILOSOPHY

Department of Audiology and Speech Sciences

1968

TABLE OF CONTENTS

LIST OF TABLES . . . iv
LIST OF FIGURES . . . v

Chapter

I. STATEMENT OF THE PROBLEM . . . 1
   Introduction . . . 1
   Purpose of the Study . . . 6
   Definition of Terms . . . 7

II. REVIEW OF SPEECH AND NON-SPEECH DISCRIMINATION . . . 8
   Introduction . . . 8
   Speech Discrimination . . . 8
   Non-Speech Sound Discrimination . . . 27

III. SUBJECTS, MATERIALS, EQUIPMENT AND PROCEDURES . . . 30
   Introduction . . . 30
   Subjects . . . 30
   Stimulus Materials . . . 31
   Equipment and Procedures . . . 34
   Response Scoring . . . 37
   Additional Equipment and Procedures for Analyzing the Sound Tapes . . . 38

IV. ANALYSIS OF THE DATA AND DISCUSSION . . . 42
   Introduction . . . 42
   Summary of the Data . . . 42
   Discussion . . . 51

V. SUMMARY, CONCLUSIONS AND RECOMMENDATIONS . . . 65
   Introduction . . . 65
   Summary . . . 65
   Conclusions . . . 68
   Recommendations . . . 69

BIBLIOGRAPHY . . . 71

Appendix A. Summary of the Non-Speech Sound Lists . . . 73
Appendix B. Sounds Ranked from Least Difficult to Most Difficult . . . 74

LIST OF TABLES

1. Non-Speech Test and Retest Discrimination Scores for Class I Subjects . . . 43
2. Speech Test and Retest Discrimination Scores for Class I Subjects . . . 44
3. Non-Speech Test and Retest Discrimination Scores for Class II Subjects . . . 45
4. Speech Test and Retest Discrimination Scores for Class II Subjects . . . 46
5. Slopes in the Linear Region of the Discrimination Function for Class I and Class II Subjects . . . 52
6. Differences in Discrimination Scores Between Class I and Class II Subjects for the Non-Speech and Speech Tests . . . 54
7. Test-Retest Reliability Coefficients of the Speech and Non-Speech Sounds at the Various Sensation Levels . . . 57
8. Product-Moment Correlations Between the Results of the Speech and Non-Speech Discrimination Tests . . . 59
9. Summary of Statistic for Testing Differences Between the Correlations Obtained from Class I and Class II Subjects . . . 59
10. Product-Moment Correlations Between the Relative Intensity Levels of the Identified Sounds at a Given Frequency and the Threshold of the Subjects at the Same Frequency . . . 62
11. Ninety-Five Percent Confidence Intervals Around Zero Correlation for the Various Sensation Levels . . . 62

LIST OF FIGURES

1. Average Percent Articulation as a Function of Number of Sounds per Syllable . . . 11
2. Relative Intelligibility of Three Types of Speech Stimulus Material as a Function of Relative Intensity . . . 14
3. Articulation Scores for Three Different Types of Test Material as a Function of Signal-to-Noise Ratio . . . 15
4. Average Relative Intelligibility of Words and Sentences Under a Wide Variety of Listening Conditions . . . 15
5. Frequency-Power Distribution of Speech . . . 17
6. Articulation Score for Monosyllabic Words as a Function of the SPL of the Speech . . . 18
7. Syllable Articulation for Speech Passed Through High Pass and Low Pass Filters in the Quiet . . . 20
8. Block Diagram of Equipment Used for Spectral Analysis of the Non-Speech Sounds . . . 40
9. Non-Speech Discrimination Score as a Function of Intensity for Class I Subjects . . . 47
10. Non-Speech Discrimination Score as a Function of Intensity for Class II Subjects . . . 48
11. Non-Speech Discrimination Scores as a Function of Intensity for Class I and Class II Subjects . . . 48
12. Speech Discrimination Score as a Function of Intensity for Class I Subjects . . . 49
13. Speech Discrimination Score as a Function of Intensity for Class II Subjects . . . 50
14. Speech Discrimination Scores as a Function of Intensity for Class I and Class II Subjects . . . 50
15. Mean Speech and Non-Speech Discrimination Scores as a Function of Intensity for All Subjects . . . 51

CHAPTER I

STATEMENT OF THE PROBLEM

Introduction

Pure-tone and speech audiometry comprise the basic armamentarium of the clinical audiologist. More recently, special tests have been developed for the differential diagnosis of auditory pathologies and now are administered routinely in many clinics as part of the audiological assessment battery.

The pure-tone tests determine certain performance characteristics of the auditory mechanism. Depending on the site of the lesion, the impaired hearing patient gives certain characteristic responses to various types of auditory stimuli. The special hearing tests were developed to investigate these peculiarities and thereby provide site of lesion information. None of these tests, however, directly provides information regarding an individual's performance in complex and meaningful auditory environments. This can only be inferred from the results of such tests.

Tests utilizing speech stimuli were developed originally as a means of investigating the characteristics of various communication systems. Reasoning that the human auditory mechanism could be regarded as a special type of (or vital link in any) communication network, scientists applied speech tests to the assessment of human audiological performance. In conjunction with the pure-tone tests, the speech tests provided vital site of lesion information. Perhaps even more important, however, was the fact that these tests assessed human auditory performance with meaningful auditory stimuli.
Despite the fact that speech stimuli provide the basis for much human interaction, a large proportion of an individual's sound environment is composed of meaningful non-speech sounds. Some non-speech sounds serve warning or alerting functions and thereby are important to the physical well-being of the individual. The automobile and train horns, the air raid siren, and the railroad crossing bell all are examples of formal alerting devices. That is, they exist mainly to provide alerting information. Other sounds may serve an informal alerting function, providing alerting information only within a specific context. The gun shot at the firing range or in the hunting field may have very little alerting value. But gun shots shattering a quiet summer evening in a residential neighborhood may convey highly alerting information.

Other non-speech sounds provide additional information and thereby assist man to adjust more successfully to his environment. The sound of rain falling on the roof; of the neighbor's power mower; of the clock ticking in the hall; all provide information regarding the condition of the environment. Man's knowledge of the true state of his environment is based on successful reception and recognition of such sounds.

Some human non-speech sounds may serve as an important vehicle of communication between individuals. A sigh or audible yawn may, within a given situation, communicate as much information as a verbal remark. Likewise, a hearty laugh may convey much more information than a whole series of words.

Because of the importance of non-speech sounds, it is indeed unfortunate that so little research effort has been expended in specifying the relevant parameters of such stimuli. Such investigations could provide knowledge having both theoretical and clinical implications.

Consider, for a moment, studies of speech intelligibility. Investigations using speech stimuli have revealed some of the physical and psychological factors important in speech discrimination.

The physical parameters of the stimuli can usually be specified exactly. Intensity, frequency bandwidth, and frequency spectrum have been accurately measured for the various speech sounds. Manipulation of these parameters, singly and in combination, has revealed the relative contribution of each to discrimination.

The psychological factors associated with speech material are well known, but their effect on discrimination is more difficult to quantify. Type of stimulus material, familiarity of items, and the effect of practice all contribute their share to speech intelligibility. The physical and psychological factors of the stimulus material interact to provide the cues for discrimination.

In addition, the state of the auditory mechanism of the listener is a relevant factor in the measured discrimination performance. Some types of impairments affect speech discrimination more than other types, and manipulation of the physical parameters of the speech stimuli has more effect in some types of impairments than in others.

In contrast, none of the above information is available for situations in which meaningful non-speech sounds are utilized. One would expect many of the same parameters to be relevant for both speech and non-speech material. The intensity of any auditory stimulus reaching the listener's ears is of paramount importance in detection and discrimination. Certain relationships exist between the intensity and intelligibility of speech materials. Do the same relationships hold for meaningful non-speech stimuli?
If the relationships are similar, with what type of speech stimulus material do the non-speech functions most directly compare?

Frequency filtering affects speech intelligibility. Does this type of filtering also affect the discriminability of non-speech stimuli? How do different types of filtering change the intelligibility? Are some frequency bands more important than others?

It has been noted that psychological factors may be associated with meaningful non-speech sound stimuli. How do these factors affect intelligibility? Are the same factors important for speech and non-speech stimuli? Are non-speech sounds which have a formal alerting connotation more intelligible under a given set of listening conditions than those which do not? How do changes in listening situations differentially affect sounds of various psychological connotation?

Is the discriminability of meaningful non-speech sound stimuli related to the type of hearing loss in a subject? Is there a relationship between the spectra of identified sounds and the measured pure-tone thresholds in a listener? Can the results of non-speech discrimination tests be used to predict an individual's success in adjusting to his auditory environment?

The quantity and variety of unanswered questions such as those posed above regarding the discrimination of meaningful non-speech sounds reveal the need for research in this area. The present study was conceived in such a light. It was not meant to be an investigation of all relevant factors in non-speech discrimination. Rather, the research was intended to be an initial step in evaluating some of these parameters.

Purpose of the Study

The purpose of the present study was to evaluate some of the characteristics of non-speech sound discrimination in hearing impaired subjects. A tape recorded test consisting of fifty meaningful non-speech sounds was developed and administered to the subjects. The following questions were posed as a means of specifying some of the relevant parameters of non-speech sound discrimination in these subjects:

1. What is the relationship between the measured discrimination score and the intensity of presentation?
2. What is the relationship between the measured discrimination score and the extent of the individual's hearing loss?
3. What is the relationship between the individual's pure-tone threshold configuration and the frequency spectra of the sounds correctly identified?
4. Is there a difference in the relative contributions to intelligibility of the various frequency bands of the sounds?
5. What is the relationship between speech and non-speech discrimination scores?
6. What is the test-retest reliability of the test?

Definition of Terms

Meaningful.--The discrimination data described in the previous section were obtained by means of a test incorporating meaningful non-speech sound stimuli. In this context, the term "meaningful" can be used interchangeably with "environmental." That is, the sounds were not synthesized or generated in the laboratory specifically for the purposes of this study.

Many of the sounds were those which the subjects might have encountered in their everyday activities. However, some of the sounds might have been encountered only in a highly specific situation. The sounds of some musical instruments, while familiar to most people, might be heard only in the concert hall. Likewise, the cries of particular animals might be encountered in a visit to a zoo.
Non-Speech.--The term "non-speech" was sufficiently unambiguous for most of the sounds, since they were generated mechanically. Several human sounds of the non-speech variety were included, however. It has been mentioned that such sounds can, in certain situations, convey information between individuals. These sounds are nevertheless non-speech in that they do not contain formal, linguistic content. Thus, they are meaningful but non-speech.

CHAPTER II

REVIEW OF SPEECH AND NON-SPEECH DISCRIMINATION

Introduction

This chapter briefly reviews some of the characteristics of speech and non-speech sound discrimination. The literature concerned with speech discrimination is quite vast, and no attempt has been made to cite all of the relevant studies. Rather, the review covers some of the more salient features of speech discrimination. Very little information is available concerning the important parameters of non-speech sound discrimination. This review includes the few studies that have concentrated on this type of discrimination.

Speech Discrimination

Introduction

Discriminations involving speech stimuli are quite complex and involve higher-order neural mechanisms. The results of intensity and frequency discrimination studies can be analyzed neatly in terms of the physical parameters of the stimuli involved. Performance is predictably related to changes in these parameters. Speech discrimination performance is not so intimately related to the physical parameters of the stimuli, however. Psychological factors such as meaning, context, and familiarity enter the picture. Physical and psychological factors interact to produce a very complicated process. This section covers, very briefly, some of these factors.

Stimulus Materials

The measured discrimination score is a function of the type of speech material presented to the listener. Isolated speech sounds, nonsense syllables, words, phrases, and sentences all have been used. Each involves separate perceptual factors and has particular advantages and disadvantages as a test tool. As is discussed in succeeding sections, the scores obtained with the various types of materials under a given set of testing conditions are interrelated.

The early researchers called the score obtained with any type of speech material the "articulation score."1 More recently, speech articulation has been reserved for describing the perception of speech sounds.2 Intelligibility, then, is used to refer to the discrimination of words, phrases, or sentences.

1James P. Egan, "Articulation Testing Methods," The Laryngoscope, LVIII (1948), pp. 955-991.
2Karl D. Kryter, "On Predicting the Intelligibility of Speech from Acoustical Measures," The Journal of Speech and Hearing Disorders, XXI (1956), pp. 208-217.

Nonsense syllables were originally the most common speech stimulus material. The first extensive lists were constructed by Fletcher and Steinberg at Bell Telephone Laboratories.1 These lists comprised sixty-six syllables of the consonant-vowel-consonant variety. The method of testing was to read an introductory sentence followed by three of the syllables. Subjects wrote their responses in phonetic symbols. Sound articulation and syllable articulation (intelligibility) scores were obtained from a carefully selected and trained listening crew under a variety of testing conditions. It was found that the sound articulation was not dependent on the order of the sounds even though some sound pattern combinations were more common in the language than others.
Sound articulation was discovered to depend on the number of sounds in the syllable, especially as the number increased beyond three sounds per syllable. Figure 1 illustrates this relationship.

1H. Fletcher and J. C. Steinberg, "Articulation Testing Methods," The Bell System Technical Journal, VIII (1929), pp. 806-854.

An optimal transmission system was constructed and various types of distorting systems were compared against this system. The efficiency of an unknown system could then be quickly evaluated with respect to the optimal reference system by means of the obtained syllable articulation scores.

[Figure 1.--Average Percent Articulation as a Function of Number of Sounds per Syllable: percent articulation from 50 to 100 plotted against an average of 3 to 6 sounds per syllable.]

Monosyllabic words currently are the most common test material for measuring speech discrimination ability. Perhaps the best known series of word lists was developed at the Psycho-Acoustic Laboratory at Harvard University for wartime research on communication equipment.1 These are referred to as the PB-50 lists. A rather similar set of fifty-word lists, using a somewhat smaller and more familiar vocabulary, was prepared at the Central Institute for the Deaf.1 Recorded versions of both lists are available. The recorded CID W-22 lists yield somewhat higher scores than the PB-50 lists due to greater word familiarity and better recorded quality.

1Hallowell Davis and S. Richard Silverman, Hearing and Deafness (New York: Holt, Rinehart and Winston, 1965).
1Ibid.

The development of the Harvard word lists proceeded on several premises.2 First, all or nearly all of the fundamental speech sounds should be represented by a test. Furthermore, these speech sounds should occur within the lists with a frequency equal to their occurrence in everyday speech. Second, there should be a satisfactory distribution of difficulty in the lists. The material should be made reasonably sensitive to small differences in intelligibility by excluding items which might be too easy or too difficult. In addition, however, there should be a wide distribution of difficulty. Third, the test items could be single nonsense syllables, meaningful words, or meaningful phrases and sentences in which there would be contextual relations among the words.

2Egan, The Laryngoscope, LVIII, pp. 955-991.

Each type of material has relative advantages and disadvantages. Nonsense syllables measure phoneme intelligibility better but require trained speakers. For words, intelligibility is dependent on the number of sounds. Sentence material provides the only adequate method of satisfactorily measuring inflection, rhythm, and intonation. Responses to sentences may be difficult to score, however, and the lists too often are highly dependent on the intelligence or education of the subject.

The development of the Harvard monosyllabic lists proceeded through several steps. Directly preceding the PB-50 lists was a pool of 1200 words (revised monosyllabic or RM) divided into 24 lists of 50 words each. From these words, the PB-50 lists were constructed to satisfy the following criteria: (1) monosyllabic structure; (2) equal average difficulty; (3) equal range of difficulty; (4) equal phonetic composition; (5) composition representative of English speech; and (6) common usage. The result of this project was the lists of fifty words comprising the PB-50 series.
Spondaic words are useful for establishing the level at which speech can just be heard. These words are of equal difficulty. Auditory tests W-1 and W-2, consisting of spondaic words, were developed at the Central Institute for the Deaf as a modification of Auditory Test No. 9 of the Harvard Psycho-Acoustic Laboratory.1 The latter test was developed at Harvard for wartime use at the Army and Navy Aural Rehabilitation Centers.

1Davis and Silverman, pp. 182-183.

Several sentence intelligibility tests have been developed, the best known of which is the set constructed by Fletcher and Steinberg.1 Sentence tests present particular cultural and intelligence problems and are not often used in discrimination testing.

1Fletcher and Steinberg, The Bell System Technical Journal, VIII, pp. 806-854.

The relationships between the relative intelligibilities of several types of speech stimuli as a function of intensity are illustrated in Figures 2 and 3.

[Figure 2.--Relative Intelligibility of Three Types of Speech Stimulus Material (selected spondees, unselected spondees, and monosyllables) as a Function of Relative Intensity in dB.]

[Figure 3.--Articulation Scores for Three Different Types of Test Material (digits, sentences, and nonsense syllables) as a Function of Signal-to-Noise Ratio in dB.]

As might be expected, sentence intelligibility is higher than word articulation in all but the most optimal listening conditions. Figure 4 shows this relationship.

[Figure 4.--Average Relative Intelligibility of Words and Sentences Under a Wide Variety of Listening Conditions, plotted against percent word articulation.]

The contextual clues of sentences make it possible to miss sounds or even whole words while still receiving the general idea of the sentence.

Acoustical Correlates of Speech Discrimination

Speech stimuli comprise a wide range of intensities, not only between different levels of speaking but also between the different speech sounds. A speaker talking as loudly as possible can radiate an average speech power of 1000 microwatts. The power in the faintest whisper may decrease to as little as 0.001 microwatt. Thus there is a range of about sixty decibels between the loudest shout and the faintest whisper.

Fletcher has calculated the relative powers of the different speech sounds.1 The loudest vowel was assigned a relative phonetic power of 680. The faintest consonant, the voiceless th, has a relative phonetic power of 1. Thus the power ratio between the strongest sound and the weakest sound is 680 to 1. This is a range of about 28 dB.

1Harvey Fletcher, Speech and Hearing in Communication (Princeton, New Jersey: D. Van Nostrand Company, Inc., 1953), p. 86.

The range of frequencies in speech sounds also is great. Figure 5 illustrates the spectral distribution of speech for a group of talkers. Although the frequency spectrum is quite broad, it is to be shown that not all frequencies are necessary for high intelligibility.

[Figure 5.--Frequency-Power Distribution of Speech: relative intensity in dB from -20 to -90 across frequencies from 50 Hz to 10,000 Hz.]

The distribution of energy is not constant throughout the frequency range for most sounds. Rather, the principal energy components tend to group into five or six prominent frequency regions.
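As an aside, the decibel figures quoted above follow directly from the definition of the decibel as ten times the common logarithm of a power ratio. This minimal sketch (in Python, not part of the original thesis) checks the two ranges cited from Fletcher:

```python
import math

def power_ratio_db(p1, p2):
    """Decibel difference corresponding to the power ratio p1/p2."""
    return 10 * math.log10(p1 / p2)

# Loudest shout vs. faintest whisper: 1000 microwatts vs. 0.001 microwatt.
print(round(power_ratio_db(1000, 0.001)))  # 60, i.e. "about sixty decibels"

# Strongest speech sound (relative phonetic power 680) vs. weakest (1).
print(round(power_ratio_db(680, 1)))       # 28, i.e. "a range of about 28 dB"
```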
These prominent frequency groupings, known as "formant" regions, are the result of acoustical resonances within the vocal tract. Even though there may be a wide range of vocal qualities between speakers, the formant regions for a given sound will remain relatively constant.

The intensity of the speech material reaching the listener's ear is of great importance in intelligibility. If some type of noise masking is mixed with the speech signal, the intelligibility becomes a function of the signal-to-noise ratio. Figures 2 and 3 illustrated the effect of intensity on the intelligibility of several types of speech material. Figure 6 shows the relationship between sound pressure level and the intelligibility of monosyllabic words. The intelligibility of the material increases with increases in intensity until the threshold of pain is reached. At this level, further increases in intensity may result in slightly reduced discrimination scores.

[Figure 6.--Articulation Score for Monosyllabic Words as a Function of the SPL of the Speech: percent correct plotted against average sound pressure levels from 0 to 100 dB.]

The sound pressure level necessary for identification of half the test items depends on the speech material presented. Figure 6 shows that this value is about 33 dB for monosyllables. This value is for monaural, earphone presentation. If the words are presented binaurally, twice the acoustic power is available for processing at the input of the auditory system. Consequently, the threshold is lowered (improved) by about 3 dB. Spondees are reported correctly fifty percent of the time at 17 dB monaurally and 14 dB binaurally. Connected discourse presented through earphones can be followed by the listener when the speech level is about 24 dB.

The relationship between word intelligibility and intensity is somewhat complicated by the differential intensities of the various speech sounds noted earlier. In less than optimal listening conditions, some sounds will be missed more consistently than others. Fletcher has determined the articulation functions for the individual speech sounds.1 He estimates that the sounds /v/, /f/, and the voiceless th account for more than half of the phonetic mistakes made in listening to ordinary conversations.

1Ibid., pp. 415-418.

The intelligibility of speech material depends also on the range of frequencies presented to the listener. The results of filtered speech experiments have contributed substantially to the knowledge of frequencies important for speech discrimination. As noted earlier, the entire speech spectrum ranges from below 50 Hz to over 10,000 Hz. Many of these frequencies can be deleted with but little decrease in intelligibility.

Figure 7 illustrates the effect of frequency filtering on syllable intelligibility. Although they contain most of the speech power, the lower frequencies apparently contribute little to intelligibility. When all of the components of a speech signal below 1000 Hz are removed, the speech power is reduced approximately eighty percent, but the articulation score drops only ten percent. The figure also illustrates that frequencies above 1900 Hz contribute about the same to syllable articulation as do frequencies below 1900 Hz.

[Figure 7.--Syllable Articulation for Speech Passed Through High Pass and Low Pass Filters in the Quiet: percent syllable articulation as a function of cutoff frequency from 100 Hz to 10,000 Hz.]
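A filtered-speech condition of the kind behind Figure 7 is easy to approximate with modern tools. The sketch below (Python with NumPy and SciPy, not part of the original studies; the filter order and sample rate are arbitrary assumptions) splits a signal at the 1900 Hz crossover cited above:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16000  # sample rate in Hz (an assumed value)

def filter_signal(signal, cutoff_hz, btype):
    """Pass a signal through a sixth-order Butterworth filter.

    btype is 'highpass' (keep components above cutoff_hz) or
    'lowpass' (keep components below it).
    """
    sos = butter(6, cutoff_hz, btype=btype, fs=FS, output="sos")
    return sosfilt(sos, signal)

# One second of noise stands in for a speech signal; 1900 Hz is the
# cutoff at which high-pass and low-pass speech give roughly equal
# syllable articulation.
speech = np.random.randn(FS)
low_band = filter_signal(speech, 1900, "lowpass")
high_band = filter_signal(speech, 1900, "highpass")
```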
It has been shown that removal of the high frequency components affects consonant articulation more than vowel articulation.1 Conversely, removal of the low frequencies has more effect on vowel articulation scores.

1Ibid., p. 86.

Another way of viewing the frequencies important to speech discrimination is to inspect the formants necessary for high intelligibility. It has been noted that most speech sounds contain five or six well defined formant regions.2 Research has indicated that elimination of all but the three lower formants has but little effect on intelligibility. The higher formants appear to be more important in transmitting particular characteristics of a speaker's voice, i.e., in speaker identification tasks.

2Peter B. Denes and Elliot N. Pinson, The Speech Chain (Baltimore: Waverly Press, Inc., 1964), pp. -122.

Investigations dealing with the effects on intelligibility of duration, amplitude limiting (peak clipping), modulation, interruption, and masking are too numerous to cite here. They have contributed to the body of knowledge concerning relevant acoustical parameters in speech intelligibility. They have shown that speech, because of redundancy due to frequency, intensity, and psychological characteristics, is remarkably immune to intelligibility reducing distortion.

Efforts to predict speech intelligibility on the basis of the physical characteristics of the speech waves and the basic discriminative properties of the ear have not been particularly successful. One such study made the basic assumption that the interpretation of speech sounds was largely dependent upon differential pitch and intensity sensitivity and that this could be used as a basis for predicting the results of intelligibility tests.1 By the use of empirical methods, a functional relation was obtained between the energy distribution of speech, the differential sensitivity of the ear, the masking properties of the sounds, and the number of correctly perceived syllables in an articulation test. A comparison was made between these empirical functions and the corresponding fundamental speech and hearing characteristics. In some cases, the computed intelligibility compared favorably with experimental results. In many cases, large discrepancies were noted. Because of psychological factors, it seems doubtful whether speech intelligibility can ever be accurately predicted from acoustical characteristics alone.

1W. A. Munson, "Relation Between the Theory of Hearing and the Interpretation of Speech Sounds," The Journal of the Acoustical Society of America, XVII (1945), p. 103.

Psychological Correlates of Speech Discrimination

The psychological correlates of speech discrimination are somewhat more difficult to evaluate quantitatively than are the acoustical correlates. Yet they are, in most situations, of significant proportion.

It has been mentioned that the intelligibility of syllables increases as a function of the number of sounds per syllable. In addition, the intelligibility of words increases as the number of syllables increases. Part of the increased intelligibility may be attributed to a fundamental physical increase in the duration of the syllables and words. As the length of the word is increased, however, the listener also has more contextual cues at his disposal, on the basis of which he is able to make a more accurate identification.
The importance of such contextual clues is shown by the higher intelligibility of continuous discourse over isolated words.1 Whole sounds or even complete words can be missed in continuous discourse with no significant reduction in intelligibility.

1John O'Neill, "Recognition of Intelligibility Test Materials in Context and Isolation," The Journal of Speech and Hearing Disorders, XXII (March, 1957).

The familiarity of the test material affects the measured discrimination score.1 Owens composed lists where the phonetic content was held constant but the familiarity was varied. He found that lists characterized by greater familiarity, even to a slight degree, were significantly more intelligible.

1Elmer Owens, "Intelligibility of Words Varying in Familiarity," The Journal of Speech and Hearing Research, IV (June, 1961), pp. 113-129.

Related to the factor of familiarity is the effect of vocabulary size. If the total vocabulary size from which the test items are chosen is reduced, the intelligibility score will show a corresponding increase. The listener's chances of matching a poorly discriminated word with the known test item it resembles are increased.

The intelligibility score will also increase as a function of practice.2 This again is related to the familiarity effect. As the familiarity of the lists is increased by repeated presentations, the discrimination scores will increase.

2Fletcher and Steinberg, The Bell System Technical Journal, VIII, p. 854.

Another psychological correlate of speech discrimination is the effect of group pressure.3 As the amount of group pressure is increased, the number of incorrect responses offered by subjects will increase. The effect of group pressure is correspondingly increased as the difficulty of the speech discrimination situation is increased.

3G. R. Miller and W. R. Tiffany, "The Effects of Group Pressure on Judgements of Speech Sounds," The Journal of Speech and Hearing Research, VI (1963), pp. 149-156.

Further review of the psychological correlates of speech discrimination is not within the scope of this chapter. It can be seen that their effects are many and varied. Their existence contributes heavily to the difficulty encountered in quantitatively predicting speech discrimination performance in specified situations.

Neural Correlates of Speech Discrimination

The discrimination of speech is based not only on the physical parameters of the stimuli but also on psychological factors inherent in the speech material and listening situation. Hence, in speech discrimination, the fundamental discriminative capabilities of the auditory system are supplemented by higher-order cortical functioning.

The auditory pathway proceeds from the medial geniculate body by way of the auditory radiations (geniculotemporal fibers) to terminate on the auditory cortex (Brodmann's areas 41 and 42). This region lies on the dorsal surface of the superior temporal convolution, buried in the floor of the lateral sulcus.

Area 41 comprises the principal auditory reception area. Area 42 is largely an auditory association area. Another region of interest, area 22, is located on the superior temporal convolution posteriorly.

The neural correlates of the more basic discriminative processes are equally represented bilaterally. That is, ablation of portions of the auditory areas in either hemisphere produces a deficit in performance. Because of unequal cortical representation, however, the deficit normally is greatest in the contralateral ear.
In contrast, discriminations involving meaningful speech associations appear to be mediated largely in the dominant hemisphere. Ablations of the non-dominant hemisphere normally produce little deficit in such performance.

Area 22 appears to be responsible for mediating the associations involved in recognizing auditory sounds. This may include speech, musical sounds, or any such familiar noises. Lesions of area 22 in the dominant hemisphere produce the loss of ability to distinguish these sounds known as auditory agnosia. Wernicke's area, area 42 in the dominant hemisphere, appears to be involved in the more complicated mechanisms underlying comprehension of language. Lesions of this area result in an inability to perceive spoken language, termed auditory or receptive aphasia.

Non-Speech Sound Discrimination

Previous Studies Utilizing Non-Speech Sounds

Very little effort has been expended in the quantitative study of non-speech sound discrimination. One of the few tests which has been developed utilized non-speech sounds as a means of screening the hearing of pre-school children.1 The sounds included in the test were: a dog's bark, an auto horn, a gun shot, a bird song, a cat meow, and a telephone bell. These sounds were filtered so that two of the sounds contained only frequencies between 250 Hz and 750 Hz; two of the sounds contained frequencies between 1000 Hz and 2000 Hz; and the remaining two sounds included only the band of frequencies between 3000 Hz and 5000 Hz.

1Marion P. Downs, "The Familiar Sounds Test and Other Techniques for Screening Hearing," The Journal of School Health, XXVI (1956), pp. 77-87.

The children were shown pictures representing the six sounds. The sounds were presented at 50 dB ASA, then at 15 dB ASA. If the child was not able to identify all the sounds at the lower level, he was given a follow-up pure-tone test. It was found that the familiar sounds screening test was nearly as efficient as the complete pure-tone threshold test in uncovering hearing impaired children. In addition, it took much less time to administer.

Sound lists have also been used in psychological assessment.1, 2 Projective techniques use largely visual association materials; these sounds were used to assess auditory association. The Stone study required subjects to compose stories based on three heard sounds. The listeners were to include: (1) what caused the sounds; (2) what was happening; and (3) the outcome of the action. The responses in story form were thus comparable to those obtained with the Thematic Apperception Test.

1D. R. Stone, "A Recorded Auditory Apperception Test as a New Projective Technique," Journal of Psychology, XXIX (1950), pp. 349-353.
2Harry Wilmer, "An Auditory Sound Association Technique," Science, CXIV (1951), pp. 621-622.

Neural Correlates of Non-Speech Sound Discrimination

Very little information is available regarding the neural mechanisms responsible for discriminations involving non-speech material. The section which discussed the neural correlates of speech discrimination indicated that the dominant hemisphere (left hemisphere in normal persons) mediated speech discriminations. Areas 22 and 42, in particular, were responsible for this function. It was also assumed that area 22 in the dominant hemisphere was responsible for mediating the associations involved in recognizing non-speech stimuli.
Such views were well founded, for clinical studies had shown that lesions in the dominant hemisphere could disturb recognition of both speech and non-speech sounds. There is a growing body of evidence, however, to suggest that the right or non-dominant hemisphere is responsible for non-speech discriminations. For example, the perception of melodies presented dichotically is better in the left ear.1 Since the crossed auditory pathways are stronger than the uncrossed, it was hypothesized that the right hemisphere was most important in such discriminations. Other writers also suggest that the right hemisphere is predominant in the processing of non-verbal sounds.2

1Doreen Kimura, "Left-Right Differences in the Perception of Melodies," The Quarterly Journal of Experimental Psychology, XVI (1964), pp. 355-358.
2V. Mountcastle, ed., Interhemispheric Relations and Cerebral Dominance (Baltimore: Johns Hopkins Press, 1962).

Summary

The lack of studies to review concerning the discrimination of non-speech sounds reveals the need for research in this area. Both theoretical and clinical information could be contributed by such studies.

CHAPTER III

SUBJECTS, MATERIALS, EQUIPMENT AND PROCEDURES

Introduction

This chapter summarizes the test procedures. Subject selection and classification are reviewed, the speech and non-speech test materials are described, and a list of the equipment employed is provided. Finally, the procedural details of the subject testing sessions and of the subsequent sound analysis are presented.

Subjects

Subject Selection

Sixteen hearing impaired individuals were selected as subjects for the present investigation. They were chosen from among patients having received a relatively recent hearing evaluation at the Speech and Hearing Clinic of Michigan State University.

Subject Classification

The subjects were subdivided into two equal size groups on the basis of their pure-tone threshold averages at the three speech frequencies. Class I subjects had average pure-tone thresholds between 20 dB and 40 dB ISO in the better ear at 500 Hz, 1000 Hz, and 2000 Hz. Class II subjects displayed average thresholds between 40 dB and 60 dB ISO in the better ear at these three frequencies.

No attempt was made to classify the subjects on the basis of type of hearing impairment. Site of lesion testing was not conducted for the purpose of differential diagnosis. It was likely that both groups included conductive and sensorineural type pathologies.

Although subjects were selected without regard to sex classification, approximately equal numbers of males and females participated. Class I included four males and four females. Class II included three males and five females.

All subjects were adults at least eighteen years of age. No maximum age limit was set. The only restriction was that subjects be sufficiently mentally alert to provide the necessary discrimination responses. Informal observation of the subjects before and during the test sessions suggested that all satisfied this criterion.

Stimulus Materials

Speech Materials

Tape recorded lists of CID W-1 words were used to establish the speech reception threshold. The speaker of the words was a member of the Audiology and Speech Sciences faculty at Michigan State University.

Tape recorded lists of CID W-22 words were used to obtain the speech discrimination scores at the various intensity (sensation) levels. Lists 1A, 2A, 3A, and 4A were used. The speaker for these lists was the same as for the W-1 lists.
Both the W-1 and W-22 lists were copied directly from the master tapes prepared for use in the Speech and Hearing Clinic at Michigan State University.

Non-Speech Materials

The non-speech sounds were recorded on magnetic tape. The final test comprised a master and four randomizations of fifty familiar non-speech environmental sounds. These included animal cries, musical instruments, mechanical sounds, and non-speech human sounds. The sounds are summarized in Appendix A.

The sounds were drawn from several sources. Most were recorded from sound effects records and auditory training tapes. Three of the sounds were recorded directly from the source especially for the purposes of this tape.

An attempt was made to minimize cultural and experience differences between the subjects by including only quite familiar sounds. An initial tape of seventy sounds was prepared. This was presented to a group of eight graduate students and faculty members in a very informal and less than optimal listening situation. The fifty most frequently identified sounds were retained for inclusion in the final test tapes.

The sounds were recorded at 7-1/2 inches per second on one track of a dual-track tape recorder. An identification number for each sound ("number one, number two," etc.) was recorded on the second channel. A male talker was used for the identification numbers. This procedure allowed the identification to be maintained at a comfortable listening level (SRT + 40 dB) independent of the presentation level of the sounds.

For recording the master tape, the intensity of each sound was adjusted so that the average of the peaks for all sounds was the same. This procedure did result in some apparent loudness differences between the sounds, because the intensity adjustment procedure did not take into account the total energy of the sounds. In addition, the damping of the measurement apparatus did not allow the true peaks to be registered for impulse sounds with extremely short rise times. Nevertheless, no gross loudness differences were noted between the sounds.

The durations of the different sounds varied between approximately eight seconds and thirty seconds. Although no attempt was made to match the sounds in duration, a segment that seemed to be reasonably long for correct identification was allowed for each sound. Some sounds, such as animal cries, varied in the length required for a complete "cycle." A five second silent interval was allowed between sounds.

The order of presentation of the sounds in the random lists was determined from a table of random numbers. These lists were dubbed directly from the master tape.

Equipment and Procedures

Equipment

The following equipment was used for testing the subjects:

Suttle pre-fabricated sound treated room
Allison Model 22 Research Audiometer
Telephonics Model TDH-39 headphones

In addition, the following equipment was used for the preparation and analysis of the non-speech sound tapes:

Garrard Lab-80 automatic turntable
Ampex Model 601 dual-track tape recorder
Ampex Model 350-AG dual-track tape recorder
Magnecord Model 1022 dual-track tape recorder
Bruel and Kjaer Model 2305 power level recorder
Bruel and Kjaer Model 2112 audio frequency spectrometer

Procedures

The subject was seated in the sound treated room and given, in an informal manner, a brief introduction to the nature of the study.
He was told that the investigation was concerned with finding new ways to evaluate adequately patients with hearing losses like his own; that the first part of the session would involve tests such as he had taken before, in which pure-tone thresholds would be determined; and that, following this, speech reception thresholds and speech discrimination scores at various loudness levels would be obtained. The subject was then told that this would conclude the first portion of the session. Following a short break, he would be given instructions for the second part of the study.

Pure-tone air conduction thresholds were determined at 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz for each ear. The descending level technique of pure-tone threshold evaluation was utilized. Following this, the speech reception threshold for binaural earphone presentation was determined. The tape recorded lists of CID W-1 words were used to establish the SRT. Oral responses were obtained from the subjects.

Tape recorded lists of CID W-22 words were presented binaurally through earphones at -8 dB, 0 dB, +8 dB, +16 dB, and +24 dB re SRT in ascending order of loudness. Lists 1A, 2A, 3A, and 4A were utilized and one list was presented at each sensation level. The subjects provided oral responses to the stimulus words.

The order of presentation of the lists was rotated so that each list was given at each sensation level. For example, Subject 1 received list 1A at -8 dB, 2A at 0 dB, 3A at +8 dB, 4A at +16 dB, and 1A at +24 dB. Subject 2 received list 2A at -8 dB, 3A at 0 dB, 4A at +8 dB, 1A at +16 dB, and 2A at +24 dB. Such rotation of lists was employed to compensate for any inherent differences in intelligibility among the lists; a sketch of this rotation scheme appears at the end of this section.

Because there were five sensation levels and only four lists, it was necessary to present one of the lists at two sensation levels. As noted in the rotation description given above, the same list was presented at the lowest and highest levels. The scores at the lowest level generally were less than ten percent, so it was assumed that repeating this list at the highest level would not affect significantly (by means of word familiarity) the score at the highest level.

This concluded the first portion of the session. Following a short rest period, the subject again was seated in the test room and given instructions pertaining to the remainder of the session. He was told that he would hear a number of familiar sounds. Animal cries, musical instruments, human sounds, and other environmental sounds would be included. A complete tape of the sounds would be played for him at a very soft level. Then a second tape would be played at a somewhat higher level, and so forth, until a total of five lists had been presented. It was pointed out to the subject that all the lists contained the same sounds, but that they were ordered differently on the various tapes. The subject was instructed that his task would be to identify, as precisely as possible, each sound. If the sound was a musical instrument, he was to give the name of the instrument; if an animal cry, the name of the animal was to be furnished; and so on. If he could not think of the specific name, a more general description was to be given. The subject was requested to write his responses for each test item on an answer sheet which was furnished.

The non-speech sound tapes were then presented binaurally through headphones at -8 dB, 0 dB, +8 dB, +16 dB, and +24 dB re SRT, again in ascending order of loudness.
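The rotation of the four W-22 lists across the five sensation levels follows a simple cyclic rule. This minimal sketch (Python, not part of the original thesis) reproduces the schedule described above, including the repetition of the lowest-level list at the highest level:

```python
LISTS = ["1A", "2A", "3A", "4A"]
LEVELS = [-8, 0, 8, 16, 24]  # sensation levels in dB re SRT

def w22_schedule(subject):
    """Return the (level, list) pairings for a 1-based subject number.

    The starting list cycles with the subject number; because there are
    five levels but only four lists, the list given at -8 dB recurs at
    +24 dB, as described in the text.
    """
    start = (subject - 1) % len(LISTS)
    return [(level, LISTS[(start + i) % len(LISTS)])
            for i, level in enumerate(LEVELS)]

print(w22_schedule(1))  # [(-8, '1A'), (0, '2A'), (8, '3A'), (16, '4A'), (24, '1A')]
print(w22_schedule(2))  # [(-8, '2A'), (0, '3A'), (8, '4A'), (16, '1A'), (24, '2A')]
```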
The presentation time for each tape was approximately fifteen minutes.

Each subject participated in two test sessions. At least two full weeks were allowed between the test sessions. The procedures in the second session, including the instructions, were identical to those of the first.

The calibration of the pure-tone and speech audiometers was checked prior to and periodically during the course of the data collection.

Response Scoring

The procedure of allowing the subject to furnish his own responses to the non-speech stimuli did not, in most cases, result in any scoring problem. Some responses did present difficulties, however.

Dogs, cats, pigs, sneezes, and squeaking doors are familiar to most people, and the responses to these were quite precise. What the investigator conceived of as a church bell may have been identified, certainly correctly, as a clock bell or carillon chime. The sound of a telephone ringing, which seemed unambiguous enough, may have been identified simply as a ringing bell. While the sound was certainly that of a ringing bell, it was something in addition, and it seemed reasonable to assume that the subject should have recognized those cues which distinguished the telephone from just another bell ringing. Since the study was concerned with discrimination, some preciseness in that discrimination was necessary.

The experimenter decided that some latitude in responses scored correct would be allowed, provided the response made it reasonably clear that the subject knew the correct source of the sound to which he was listening. Thus "bell ringing" would not be an acceptable answer to a telephone ringing. On the other hand, the distinctions between a trumpet, cornet, and horn are sometimes not really clear, and all had to be allowed as correct responses.

Additional Equipment and Procedures for Analyzing the Sound Tapes

Preparation of the Tapes

The sounds which were found originally on records were transferred to magnetic tape by means of the Garrard turntable feeding an Ampex 601 recorder. Onto this same tape were transferred additional sounds found on various auditory training tapes. This tape-to-tape transfer was accomplished by means of the Magnecord recorder feeding an Ampex 350-AG recorder.

From the tape thus recorded, the master sound tape was prepared. This involved not only transference of the sounds from the original tape to the master tape but also adjustment of the intensity of each sound so that, as mentioned previously, the average of the peaks for all the sounds was the same. The Magnecord and Ampex 350-AG recorders were used for this part of the recording procedure.

The original tape was played on the Magnecord recorder. This recorder was connected to the Ampex machine, on which the master tape was prepared. The intensities of the sounds on the original tape were monitored by feeding an output from the Magnecord recorder to the power level recorder. Each sound on the original tape was played several times and the output level of the Magnecord recorder adjusted so that the average of the peaks, as monitored by the power level recorder, reached a certain specified level. The sound was then transferred to the master tape on the Ampex recorder. This resulted in equal average peak intensities on the master tape. A 1000 Hz calibration tone was recorded onto the master tape at a level corresponding to the average peak intensity of the various sounds.

The randomized lists were prepared from the master tape using the same pair of recorders.
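A digital counterpart of this analog peak-equalization step is easy to state. The following sketch (Python with NumPy, not part of the original procedure; the target level and the number of peaks averaged are arbitrary stand-ins for the behavior of the power level recorder) scales each sound so that the mean of its largest sample magnitudes matches a common target:

```python
import numpy as np

def equalize_average_peak(sound, target=0.5, n_peaks=10):
    """Scale a sound so the mean of its n_peaks largest sample
    magnitudes equals the common target level.

    As the text notes, matching average peaks does not match total
    energy, so residual loudness differences between sounds remain.
    """
    avg_peak = np.sort(np.abs(sound))[-n_peaks:].mean()
    return sound * (target / avg_peak)
```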
Although the intensities of the sounds on the master tape were monitored as the randomized tapes were being prepared, no further change in any of the sound levels was required.

Analysis of the Sounds

The block diagram of the analysis equipment appears in Figure 8. Tape loops were prepared for use on the recorder. Each loop allowed about three seconds recording time. Since all of the sounds were of greater duration than this, the sounds were analyzed by section from several tape loops. Hence, three to five loops were required to obtain the complete spectral characteristics for most of the sounds. The overall spectra of these sounds were obtained by averaging the spectra obtained from each section (loop).

[Figure 8.--Block Diagram of Equipment Used for Spectral Analysis of the Non-Speech Sounds: tape recorder, audio frequency spectrometer, and power level recorder.]

The spectra of the sounds were obtained from one-third octave band analyses. For continuous sounds, the average power within any given one-third octave band was taken as the relative sound pressure within that band. For impulse sounds, the average of the peaks within a given one-third octave band was recorded as the pressure level within that band.

CHAPTER IV

ANALYSIS OF THE DATA AND DISCUSSION

Introduction

This chapter summarizes and discusses the statistical procedures and results of the investigation. Conclusions are drawn regarding non-speech sound discrimination in hearing impaired subjects and the implications for such discrimination testing in clinical situations.

Summary of the Data

Table 1 summarizes the test and retest non-speech sound discrimination scores from the individual subjects in Class I (20 dB - 40 dB loss) at the various sensation levels. The first score listed is that obtained in the first session; the second, that obtained in the second session. For the purpose of comparison, Table 2 summarizes, in the same format, the speech discrimination scores from the same subjects. Tables 3 and 4 contain the non-speech and speech discrimination scores, respectively, from the individual subjects in Class II (40 dB - 60 dB loss).

[Table 1.--Non-Speech Test and Retest Discrimination Scores for Class I Subjects: individual test and retest scores, with means and standard deviations, at -8, 0, +8, +16, and +24 dB re SRT.]

[Table 2.--Speech Test and Retest Discrimination Scores for Class I Subjects, in the same format as Table 1.]
CHAPTER IV

ANALYSIS OF THE DATA AND DISCUSSION

Introduction

This chapter summarizes and discusses the statistical procedures and results of the investigation. Conclusions are drawn regarding non-speech sound discrimination in hearing impaired subjects and the implications for such discrimination testing in clinical situations.

Summary of the Data

Table 1 summarizes the test and retest non-speech sound discrimination scores from the individual subjects in Class I (20 dB - 40 dB loss) at the various sensation levels. The first score listed is that obtained in the first session; the second, that obtained in the second session. For the purpose of comparison, Table 2 summarizes, in the same format, speech discrimination scores from the same subjects. Tables 3 and 4 contain the non-speech and speech discrimination scores, respectively, from the individual subjects in Class II (40 dB - 60 dB loss).

Table 1.--Non-Speech Test and Retest Discrimination Scores for Class I Subjects. [Individual test and retest scores at -8, 0, +8, +16, and +24 dB re SRT; the tabled values are not legible in this copy.]

Table 2.--Speech Test and Retest Discrimination Scores for Class I Subjects. [Values not legible in this copy.]

Table 3.--Non-Speech Test and Retest Discrimination Scores for Class II Subjects. [Values not legible in this copy.]

Table 4.--Speech Test and Retest Discrimination Scores for Class II Subjects. [Values not legible in this copy.]

Figure 9 illustrates the relation between non-speech discrimination score and intensity of presentation (sensation level) for Class I subjects. The brackets on the curve delimit ± one standard deviation unit. Figure 10 illustrates the same features for Class II subjects. Figure 11 summarizes, for the purposes of comparison, the functions shown in Figures 9 and 10.

Figure 9.--Non-Speech Discrimination Score as a Function of Intensity for Class I Subjects. [Discrimination Score (Percent) vs. Sensation Level (dB re SRT).]

Figure 10.--Non-Speech Discrimination Score as a Function of Intensity for Class II Subjects.

Figure 11.--Non-Speech Discrimination Scores as a Function of Intensity for Class I and Class II Subjects.

Figures 12 and 13 illustrate the speech discrimination functions from Class I and Class II subjects, respectively. Figure 14 shows, on the same graph for ease of comparison, the speech discrimination functions from Class I and Class II subjects. Figure 15 summarizes the mean speech and non-speech discrimination scores for all subjects as a function of intensity.

Figure 12.--Speech Discrimination Score as a Function of Intensity for Class I Subjects.

Figure 13.--Speech Discrimination Score as a Function of Intensity for Class II Subjects.

Figure 14.--Speech Discrimination Scores as a Function of Intensity for Class I and Class II Subjects.

Figure 15.--Mean Speech and Non-Speech Discrimination Scores as a Function of Intensity for all Subjects.

Discussion

Non-Speech Data

The pooled non-speech discrimination function relating sensation level with discrimination score, shown in Figure 15, increases monotonically and tends to reach an asymptote at higher intensity levels. The function is approximately linear between -8 dB and +8 dB sensation levels. The slope of the curve in this linear region is about 3.4%/dB.
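The slope of such a linear region can be estimated by a least-squares fit over the three points in that region. In the sketch below the mean scores are illustrative values chosen only to be consistent with the slope reported above; they are not the study's data.

    import numpy as np

    levels = np.array([-8.0, 0.0, 8.0])    # sensation levels (dB re SRT)
    scores = np.array([9.0, 41.0, 64.0])   # mean scores in percent -- illustrative only
    slope, intercept = np.polyfit(levels, scores, 1)
    print(f"slope = {slope:.1f} %/dB")     # prints: slope = 3.4 %/dB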
The standard deviation remained relatively constant, the smallest occurring at the lowest intensity level of presentation. The fact that a zero percent discrimination score served as a lower boundary at this sensation level undoubtedly resulted in the smaller standard deviation here.

Figure 11 reveals that the functions for the two groups of subjects were quite similar in shape. Both were approximately linear between -8 dB and +8 dB sensation level. Table 5 summarizes the slopes in this linear region for both groups of subjects. The curve from Class II subjects showed a slightly smaller slope. This caused the functions to diverge somewhat at the higher sensation levels. In general, however, the same function appeared to describe the data from both groups, the main difference being that the Class II function was shifted to the right along the intensity axis. The average discrimination score difference between Class I and Class II subjects was 8.0 percent.

Table 5.--Slopes in the Linear Region of the Discrimination Function for Class I and Class II Subjects.

    GROUP        SLOPE (%/dB)
    Class I          3.5
    Class II         3.4

Comparison of Speech and Non-Speech Data

Figure 15 illustrates the differences between the pooled speech and non-speech discrimination scores. The speech function started at a lower level but, because of a larger slope, intersected the non-speech function. Hence, at the higher sensation levels, the speech discrimination scores were higher than the non-speech discrimination scores. Comparison of Figures 12 and 13 with Figures 9 and 10 reveals that the standard deviations for the speech scores were smaller at all intensity levels than the corresponding non-speech score standard deviations.

Figure 14 shows that the speech discrimination functions from the two groups of subjects, like the non-speech functions, were similar in shape but shifted relative to each other. The speech materials did not appear to be as discriminating with respect to extent of loss, however. As opposed to an average non-speech discrimination score difference of 8.0 percent, the average speech score difference was 5.1 percent between the two groups of subjects. Table 6 summarizes the discrimination score differences between Class I and Class II subjects for the two tests at the various sensation levels.

It is interesting to speculate on the reasons why the non-speech items were more discriminating than the speech material with respect to extent of loss. Part of this might be attributed to differences in the normal listening experiences of the subjects.

Table 6.--Differences in Discrimination Scores Between Class I and Class II Subjects for the Non-Speech and Speech Tests.

    Sensation Level (dB re SRT)     -8      0     +8    +16    +24
    Non-Speech                    [row not legible in this copy]
    Speech                         3.0   12.8    3.0    3.3    3.4

The majority of Class I subjects exhibited a mild hearing loss. Such a loss would be handicapping in many listening situations. Most of the subjects apparently did not regard the loss to be of sufficient magnitude to warrant use of a hearing aid, however, for only two individuals in this group used such an instrument. Therefore, the majority of these individuals were not accustomed to listening under aided conditions. Although some distortions in acoustic stimuli would have been present for these individuals due to the existing hearing pathology, they were unfamiliar with the additional distortions created by a hearing aid.

Six of the eight Class II subjects wore hearing aids regularly.
They undoubtedly had learned to discriminate both speech and non-speech sounds which were colored by the distortion characteristics of the hearing aid.

The sound stimuli delivered to the subjects in the test sessions were of a moderately "normal" quality. That is, they contained little distortion, and hence their characteristics very closely resembled the corresponding sounds heard in the natural environment. Class II subjects were at a disadvantage in trying to discriminate such sounds, for they had learned to identify the distorted sounds delivered by the hearing aid. For them, the natural or normal sound had become the sound they heard through the hearing aid. Thus the stimuli sounded unnatural to these subjects.

It is reasonable to believe that the effect of unnatural quality would be more pronounced for non-speech than for speech stimuli. The results of the studies cited in Chapter II revealed that quite distorted and unnatural sounding speech can be identified accurately. It is not necessary for the speech to sound normal to be discriminated. Even though Class II subjects did not hear speech which sounded normal to them on the basis of their listening experiences, this dimension of the speech stimuli probably had little effect on their discrimination scores.

On the other hand, this aspect appeared to be more important for non-speech stimuli. Sounds which should have been familiar did not sound normal to Class II subjects. They were not distorted by the hearing aid, as customarily was the case. The effects of familiarity were reduced correspondingly. Hence, relative to Class I subjects, the Class II subjects had more difficulty on non-speech than on speech stimuli.

On the basis of this discussion, two tentative conclusions can be advanced. First, subjects accustomed to identifying non-speech sounds distorted by a hearing aid will have greater difficulty than subjects not so accustomed in identifying the undistorted sounds, even though both groups of subjects have comparable hearing losses. Second, these results seem to indicate that non-speech sounds may be more susceptible than speech stimuli to the effects of distortion.

One other factor which might have contributed to the larger discrimination score differences on the non-speech items between Class I and Class II subjects should be mentioned here. The fifty non-speech items were selected from a larger group of seventy sounds which was presented, preliminary to the investigation, to a small number of graduate students and faculty members. The fifty most frequently identified sounds were retained for inclusion on the final test tape. Most of these panel members were relatively young. Likewise, most subjects in Class I were relatively young. Class II, because of the extent of loss required, comprised generally older individuals. The sounds which were familiar to the initial panel and Class I subjects may have been not quite so familiar to the Class II subjects. Hence, for the non-speech sounds, there may have been a familiarity factor operating which did not influence the speech discrimination scores.

Pearson product-moment correlation coefficients were computed between the non-speech discrimination scores obtained in the two test sessions at the various sensation levels to estimate test-retest reliability. For the purposes of comparison, similar correlations were calculated for the speech scores. Table 7 presents the results of these computations.

Table 7.--Test-Retest Reliability Coefficients of the Speech and Non-Speech Sounds at the Various Sensation Levels.

    Sensation Level (dB re SRT)     -8      0     +8    +16    +24
    Non-Speech
      Class I                      .99    .46    .91    .94    .97
      Class II                    -.08    .25    .76    .95    .99
    Speech
      Class I                      .09    .10    .60    .76    .90
      Class II                     .12    .78    .81    .88    .81
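Each entry in Table 7 is a Pearson product-moment correlation between the eight first-session and eight second-session scores of one class at one level. A minimal sketch of that computation follows; the score lists are illustrative stand-ins, not the study's data.

    import numpy as np

    # Session scores at one sensation level for the eight subjects in a
    # class; the numbers are illustrative only.
    test   = np.array([64, 72, 58, 80, 70, 66, 76, 60], dtype=float)
    retest = np.array([66, 70, 60, 84, 72, 64, 78, 62], dtype=float)

    r = np.corrcoef(test, retest)[0, 1]    # Pearson product-moment correlation
    print(f"test-retest r = {r:.2f}")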
It can be seen that, with the exception of the lower sensation levels, the reliability coefficients of both lists were quite high. Moreover, there was little difference apparent between the values obtained either from the two subject classes or from the speech and non-speech tests.

The rather deviant results noted at -8 dB sensation level, especially for the non-speech test, probably should be interpreted in terms of data limitations imposed by this level rather than in terms of unusual test or auditory characteristics. At this level, zero percent served as a lower boundary, and since several subjects obtained this score, the values did not achieve a wide distribution. This reduced the standard deviation, producing a rather unstable and perhaps misleading value of the correlation coefficient. The existence of one extremely high test and retest score was largely responsible for the inflated coefficient noted for Class I subjects at this level.

Pearson product-moment correlations were computed between the speech and non-speech discrimination scores at the various intensity levels. Table 8 summarizes the results of these calculations. With the exception of the lower sensation levels, the correlations were quite high. Again, data limitations at the lower levels should be considered when interpreting the results from those levels.

Table 8.--Product-Moment Correlations Between the Results of the Speech and Non-Speech Discrimination Tests.

    Sensation Level (dB re SRT)     -8      0     +8    +16    +24
    Class I                        .08    .56    .61    .55    .72
    Class II                      -.09    .35    .54    .83    .92

The correlation coefficients obtained from Class II subjects at +16 dB and +24 dB were higher than the corresponding values obtained from Class I subjects. To test for significant differences, a technique suggested by Hays was employed. An r to Z transformation was performed for each correlation figure. The resultant Z values were used to compute the following test statistic:

                 Z_r(1) - Z_r(2)
    Statistic = -----------------
                 sqrt[ 1/(N-3) ]

The results of these calculations are summarized in Table 9.

Table 9.--Summary of Statistic for Testing Differences Between the Correlations Obtained from Class I and Class II Subjects.

    Level (dB)    Statistic
      +16            1.26
      +24            1.51

In a normal sampling distribution, the value of both Z and the test statistic required for rejection of an hypothesis at the .05 level of significance is 1.96. Since the computed values did not exceed this figure, the observed differences in the correlations were not significant.
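The computation can be verified directly. The sketch below applies the r to Z transformation to the Table 8 correlations with N = 8 subjects per class; with the denominator written as the square root of 1/(N-3), it reproduces the Table 9 values to within small rounding differences, which presumably reflect rounding in the original hand calculation.

    import math

    def fisher_z(r):
        # r-to-Z transformation
        return 0.5 * math.log((1.0 + r) / (1.0 - r))

    def test_statistic(r1, r2, n):
        # Difference between transformed correlations, scaled by sqrt(1/(N-3)).
        return (fisher_z(r2) - fisher_z(r1)) / math.sqrt(1.0 / (n - 3))

    # Table 8 correlations, N = 8 subjects per class:
    print(round(test_statistic(0.55, 0.83, 8), 2))   # +16 dB -> 1.27 (Table 9: 1.26)
    print(round(test_statistic(0.72, 0.92, 8), 2))   # +24 dB -> 1.52 (Table 9: 1.51)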
The similarity of the non-speech and speech discrimination functions, together with the high correlations between the tests, tends to suggest that similar neural function is involved in both types of discrimination. This is an attractive hypothesis, but it is, at best, only partially supported by results such as those being presented. Further evidence, based on various approaches, should be at hand before a definitive statement can be made.

It is known that the neural mechanisms involved in various types of discriminations have only a degree of specificity. Some of the neural correlates seem to be especially important for judgments involving a particular type of discrimination. Many of the correlates, however, are involved in many types of discriminations.

One would expect that the same would hold for non-speech and speech discrimination. While both would involve many common mechanisms, neural correlates especially important for only one of the types of discriminations undoubtedly exist.

Correlation techniques also were employed to investigate the possible relationship between a subject's pure-tone thresholds and the spectra of the sounds which he identified correctly at each sensation level. The thresholds for each subject were determined at 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz. The threshold was established for each ear, and the threshold of the better ear used in the analysis. By means of spectral analysis, the relative intensity level of the one-third octave bands containing these frequencies was obtained for each sound. If, at a given sensation level, a particular sound was identified by a subject, the relative intensity of that sound at each frequency cited above was matched with the threshold of the subject at the same frequency. This matching between thresholds and sound spectra was made at each sensation level for all sounds and all subjects. A product-moment correlation coefficient was calculated between the resultant pairs of values. Table 10 summarizes the results of this part of the study.

As a means of interpreting the results in Table 10, ninety-five percent confidence intervals around zero correlation were calculated for each sensation level. These intervals are shown in Table 11. Comparison of Tables 10 and 11 reveals that none of the obtained correlation values for a given level exceeded the confidence interval limits for that level. Hence, no correlation was significantly different from zero.

Table 10.--Product-Moment Correlations Between the Relative Intensity Levels of the Identified Sounds at a Given Frequency and the Thresholds of the Subjects at the Same Frequency.

                                Frequency (Hz)
    Level (dB)    250    500   1000   2000   4000   6000   8000
      -8         .114   .029   .086  -.130   .058  -.043  -.115
       0         .051  -.045  -.052  -.094  -.012   .015   .062
      +8        -.008  -.025  -.032  -.027  -.031  -.047  -.034
     +16         .008  -.007  -.003  -.018  -.009  -.018  -.015
     +24         .000   .001  -.007  -.016  -.030   .003  -.007

Table 11.--Ninety-Five Percent Confidence Intervals Around Zero Correlation for the Various Sensation Levels.

    Level (dB)    95% Confidence Interval
      -8          -.268 ≤ r ≤ +.268
       0          -.118 ≤ r ≤ +.118
      +8          -.080 ≤ r ≤ +.080
     +16          -.080 ≤ r ≤ +.080
     +24          -.078 ≤ r ≤ +.078
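The matching procedure lends itself to a short computational sketch. The data structures and names below are hypothetical placeholders for the study's records; only the pairing logic follows the description above. The zero_r_interval function shows one standard construction of a confidence interval around zero correlation, which narrows as the number of matched pairs grows, consistent with the pattern in Table 11; whether it matches the thesis's exact computation cannot be confirmed from the text.

    import math
    import numpy as np

    FREQS = (250, 500, 1000, 2000, 4000, 6000, 8000)   # Hz

    def threshold_spectrum_r(identified, spectra, thresholds):
        # identified: {subject: [sounds identified at this sensation level]}
        # spectra:    {sound: {frequency: relative band level in dB}}
        # thresholds: {subject: {frequency: better-ear threshold in dB}}
        band_levels, subject_thresholds = [], []
        for subject, sounds in identified.items():
            for sound in sounds:
                for f in FREQS:
                    band_levels.append(spectra[sound][f])
                    subject_thresholds.append(thresholds[subject][f])
        return np.corrcoef(band_levels, subject_thresholds)[0, 1]

    def zero_r_interval(n_pairs):
        # Approximate 95% interval around r = 0 for n_pairs matched values.
        half_width = 1.96 / math.sqrt(n_pairs - 3)
        return -half_width, +half_width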
These results indicate that there was no relationship between the threshold configuration of a subject and the spectra of the sounds which he identified correctly. The perceptual effects of the sounds apparently were dominant in determining correct identification.

In speech discrimination tests, the words identified correctly depend somewhat on the threshold configuration of the subject. Words which contain high frequency sounds such as /s/, /v/, and /f/ are missed more frequently by patients having predominantly higher frequency losses. Given a list of words of similar familiarity, the physical characteristics of the sounds comprising the words will be of great importance in determining correct identification. In contrast, no such effects were found in the present study for non-speech sounds. This might, in part, be due to familiarity differences between the sounds. The psychological differences might have masked the physical variables.

There might be some quite basic differences in the way speech and non-speech stimuli are perceived. Words are comprised of separate speech sound entities. It is necessary to identify all, or nearly all, of these sounds correctly before the word is discriminated. Each speech sound might be identified somewhat independently of the others, with restrictions imposed by the physical influence of its sound environment in the word and limitations imposed by the probability of certain letter combinations in the language. For non-speech sounds, however, the identification is more of an all-or-none nature. The separate cues are not available, and the discrimination must be based on the long-term spectrum of the sound. Such differences could account for the apparent reversal of physical and psychological effects noted in the data.

An item analysis provided listings of the percentage of subjects correctly identifying a sound at a given sensation level. Comparison of these data with the spectra of the sounds identified did not reveal any obvious relationships. There did not appear to be any differences between the spectral configurations of the sounds identified and of the sounds missed. Again, the results lead to the conclusion that psychological variables were more important than spectral configuration in determining identification.
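An item analysis of this kind reduces to a column mean over a subjects-by-sounds response matrix. The sketch below is purely illustrative: the response matrix is randomly generated stand-in data, not the study's responses.

    import numpy as np

    # A 16-subject x 50-sound matrix of correct (1) / incorrect (0)
    # responses at one sensation level; random stand-in data.
    rng = np.random.default_rng(0)
    responses = rng.integers(0, 2, size=(16, 50))

    percent_correct = 100.0 * responses.mean(axis=0)   # one value per sound
    ranking = np.argsort(percent_correct)[::-1]        # least to most difficult

A ranking of this form, computed from the actual responses, underlies the ordering given in Appendix B.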
This was compared to the Spectral characteristics of the various sounds to deter- mine which, if any, frequency bands contributed most heavily to correct sound discrimination. The results of the study indicated that the discri- mination functions (sensation level vs. discrimination score) for the non-speech sounds very closely resembled 67 those obtained from the Speech test. The lepe of the linear portion of the nonespeech function was Slightly less than the SlOpe of the corresponding portion of the Speech function, however. A high correlation was found between the results of the speech and non-Speech tests, particularly at the higher sensation levels. No significant differences were discovered between Class I (20 dB to 40 dB loss) and Class II (40 dB to 60 dB loss) subjects with regard to correla- tions between the Speech and non-Speech tests. The non- Speech test did appear to be slightly more discriminating than the speech test (CID W—22) with respect to extent of loss. High test-retest reliability was obtained for both tests at the higher sensation levels. No Significant differences existed between the reliability coefficients of the two tests. No Significant correlations were found between the threshold configurations of the subjects and the Spectra of the sounds which were identified correctly at each sensation level. Moreover, no frequency band of the sounds appeared to be particularly important in its contributation to correct identification. 68 Conclusions Within the limits of the present study, the following conclusions appear warranted: l. The Similiarity of the discrimination functions (sensation level vs. discrimination score) and the high correlations between the speech and the non-Speech sound tests suggests that highly similar processes mediate the discriminations involving these types of stimuli. 2. The psychological effect of familiarity appears to be more important for non-speech than for Speech sound discrimination. Less redundancy in the physical character- istics of non-Speech sounds may account for this result. In addition, individual differences in familiarity may be greater for non-Speech than for Speech stimuli. 3. Individual subjects, on the basis of their listening eXperiences, learn to associate certain physical characteristics with each familiar sound. These criteria of physical normality are more important for non-Speech than for Speech sounds. Moreover, the non-Speech stimuli appear to be more susceptible to the effects of distortion. 4. There seems to be no relationship between the pure-tone audiometric configuration of a subject and the frequency Spectra of the non-Speech sounds which he dis- criminates correctly. 5. The discriminability of non-Speech stimuli does not appear to be related to the frequency composition 69 of the sounds. In addition, differential contributions of the various frequency bands of the non-Speech sounds to intelligibility were not discovered. 6. Scores obtained on the non-speech test of auditory discrimination seem to be more highly affected by extent of hearing loss than are the scores obtained from the speech test. 7. The reliability of the non-Speech test of auditory discrimination appears to be sufficiently high to warrant its use in clinical Situations. Recommendations Much work remains to be done regarding non-Speech discrimination in hearing impaired subjects. This investi- gation perhaps has raised more questions than it has answered. 
Since no attempt was made to classify type of loss, such an investigation should be conducted. Non-speech discrimination, like speech discrimination, may be quite sensitive to type of impairment.

The effects of noise masking on non-speech discrimination should be investigated. Differential effects of various types of hearing pathologies in identifying noise-masked non-speech sounds should be researched. Such presentation of stimuli might be even more discriminating with respect to extent and type of loss.

Studies should be designed to investigate the effects of various types of frequency filtering on non-speech discrimination. Such research also would provide more definitive information regarding the contributions of the various frequency bands to intelligibility.

Finally, much work remains to be done relative to the importance of psychological variables such as familiarity on non-speech discrimination. Such data could provide further information regarding the interrelationships of speech and non-speech discrimination mechanisms.

BIBLIOGRAPHY

Books

Davis, Hallowell and Silverman, S. Richard. Hearing and Deafness. New York: Holt, Rinehart and Winston, 1965.

Denes, Peter B. and Pinson, Elliot N. The Speech Chain. Baltimore: Waverly Press, Inc., 1964.

Fletcher, Harvey. Speech and Hearing in Communication. Princeton, New Jersey: D. Van Nostrand Company, Inc., 1953.

Hays, William L. Statistics for Psychologists. New York: Holt, Rinehart and Winston, 1963.

Mountcastle, V., ed. Interhemispheric Relations and Cerebral Dominance. Baltimore: Johns Hopkins Press, 1962.

Stevens, S. S., ed. Handbook of Experimental Psychology. New York: John Wiley and Sons, Inc., 1966.

Truex, Raymond C. and Carpenter, Malcolm B. Human Neuroanatomy. Baltimore: The Williams and Wilkins Company, 1964.

Wever, Ernest G. and Lawrence, Merle. Physiological Acoustics. Princeton, New Jersey: Princeton University Press, 1954.

Articles and Periodicals

Downs, Marion P. "The Familiar Sounds Test and Other Techniques for Screening Hearing," The Journal of School Health, XXVI (1956), pp. 77-87.

Egan, James P. "Articulation Testing Methods," The Laryngoscope, LVII (1948), pp. 955-991.

Fletcher, H. and Steinberg, J. C. "Articulation Testing Methods," The Bell System Technical Journal, VIII (1929), pp. 806-854.

Kimura, Doreen. "Left-Right Differences in the Perception of Melodies," The Quarterly Journal of Experimental Psychology, XVI (1964), pp. 355-358.

Kryter, Karl D. "On Predicting the Intelligibility of Speech from Acoustical Measures," The Journal of Speech and Hearing Disorders, XXI (1956), pp. 208-217.

Miller, G. R. and Tiffany, W. R. "The Effects of Group Pressure on Judgements of Speech Sounds," The Journal of Speech and Hearing Research, VI (1963), pp. 149-156.

Munson, W. A. "Relation Between the Theory of Hearing and the Interpretation of Speech Sounds," The Journal of the Acoustical Society of America, XVII (1945), p. 103.

O'Neill, John. "Recognition of Intelligibility Test Materials in Context and Isolation," The Journal of Speech and Hearing Disorders, XXII (1957), pp. 87-90.

Owens, Elmer. "Intelligibility of Words Varying in Familiarity," The Journal of Speech and Hearing Research, IV (1961), pp. 113-129.

Stone, D. R. "A Recorded Auditory Apperception Test as a New Projective Technique," The Journal of Psychology, XXIX (1950), pp. 349-353.

Wilmer, Harry. "An Auditory Sound Association Technique," Science, CXIV (1951), pp. 621-622.
APPENDICES

APPENDIX A

SUMMARY OF THE NON-SPEECH SOUND LISTS

Master List

 1. Sawing                    26. Accordion
 2. Clock ticking             27. Flute
 3. Hammering                 28. Church bell
 4. Pig                       29. Gun shots
 5. Doorbell chime            30. Siren
 6. Goat                      31. Snore
 7. Horse                     32. Cough
 8. Cat                       33. Whistling (human)
 9. Duck                      34. Footsteps
10. Cow                       35. Rooster
11. Coyote                    36. Crickets
12. Dog                       37. Bird song
13. Boat whistle              38. Music box
14. Jet airplane              39. Sneeze
15. Car horn                  40. Crow
16. Train                     41. Telegraph key
17. Dialing telephone         42. Alarm clock ringing
18. Toy whistle               43. Laugh
19. Telephone busy signal     44. Glass breaking
20. Typewriter                45. Crowd cheering
21. Telephone ringing         46. Baby crying
22. Ping-pong game            47. Squeaking door
23. Trumpet                   48. Knocking on door
24. Banjo                     49. Auto starting
25. Snare drum                50. Xylophone

APPENDIX B

SOUNDS RANKED FROM LEAST DIFFICULT TO MOST DIFFICULT

 1. Siren                     26. Baby crying
 2. Laugh                     27. Snore
 3. Dog                       28. Clock ticking
 4. Trumpet                   29. Banjo
 5. Knocking on door          30. Typewriter
 6. Bird song                 31. Coyote
 7. Church bell               32. Pig
 8. Cough                     33. Gun shots
 9. Snare drum                34. Music box
10. Telephone busy signal     35. Accordion
11. Crow                      36. Duck
12. Dialing telephone         37. Hammering
13. Auto starting             38. Xylophone
14. Squeaking door            39. Alarm clock ringing
15. Sawing                    40. Ping-pong game
16. Cat                       41. Toy whistle
17. Rooster                   42. Goat
18. Train                     43. Boat whistle
19. Cow                       44. Crowd cheering
20. Whistling (human)         45. Crickets
21. Horse                     46. Jet airplane
22. Car horn                  47. Telegraph key
23. Footsteps                 48. Sneeze
24. Telephone ringing         49. Flute
25. Doorbell chime            50. Glass breaking