
2007, 1

György Wersényi

Localization in a HRTF-based Minimum-Audible-Angle Listening test for GUIB applications

language: English

received 15.12.2006, published 16.01.2007

Download article (PDF, 280 kb, ZIP)

ABSTRACT

Listening tests were carried out to investigate the localization judgments of 40 untrained subjects listening through equalized headphones with HRTF (Head-Related Transfer Function) synthesis. The investigation builds on the former GUIB (Graphical User Interface for Blind Persons) project and aims to determine the possibilities of a 2D virtual sound screen with headphone playback. Results are presented for the minimum, maximum and average values of the subjects' discrimination skills. The measurement method uses a special 3-category forced-choice Minimum Audible Angle report on a screen-like rectangular virtual auditory surface in front of the listener. Average spatial resolutions of 7-11° in the horizontal plane and 15-24° in the median plane were measured, depending on the spectral content of the noise excitation signal. Additional signal processing is suggested to enhance the poor vertical localization performance.
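As a rough illustration (not part of the paper), the reported average resolutions bound how many separable source positions such a 2D virtual sound screen can offer. The screen extent of 60° x 40° and the simple grid model below are assumptions; only the 7-11° and 15-24° resolutions come from the abstract:

```python
# Illustrative sketch: estimate the number of distinguishable source
# positions on a 2D virtual sound screen, given average minimum-audible-
# angle resolutions. The screen angular extent is an assumed value.

def separable_positions(az_span_deg, el_span_deg, az_res_deg, el_res_deg):
    """Count grid cells of size (az_res x el_res) that fit in the screen span."""
    cols = int(az_span_deg // az_res_deg) + 1
    rows = int(el_span_deg // el_res_deg) + 1
    return cols * rows

# Assumed 60°-wide, 40°-high screen; resolutions from the abstract:
best = separable_positions(60, 40, 7, 15)    # best-case resolution -> 27
worst = separable_positions(60, 40, 11, 24)  # worst-case resolution -> 12
print(best, worst)
```

Under these assumptions the screen supports only a few dozen separable positions at best, which is why the paper suggests additional signal processing for the weak vertical dimension.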

Keywords: digital signal processing, sound source localization, user interface, blind

16 pages, 3 figures

Citation: György Wersényi. Localization in a HRTF-based Minimum-Audible-Angle Listening test for GUIB applications. Electronic Journal “Technical Acoustics”, http://www.ejta.org, 2007, 1.

REFERENCES

[1] K. Crispien, H. Petrie. Providing Access to GUI’s Using Multimedia System – Based on Spatial Audio Representation. Audio Eng. Soc. 95th Convention Preprint, New York, 1993.
[2] G. Wersényi. Localization in a HRTF-based Minimum Audible Angle Listening Test on a 2D Sound Screen for GUIB Applications. Audio Engineering Society (AES) 115th Convention Preprint, New York, USA 2003.
[3] G. Wersényi. HRTFs in Human Localization: Measurement, Spectral Evaluation and Practical Use in Virtual Audio Environment. Ph.D. dissertation, Brandenburgische Technische Universität, Cottbus, Germany, 2002.
[4] M. M. Blattner, D. A. Sumikawa, R. M. Greenberg. Earcons and Icons: their structure and common design principles. Human-Computer Interaction 1989, 4(1), 11–44.
[5] K. Crispien, K. Fellbaum. Use of Acoustic Information in Screen Reader Programs for Blind Computer Users: Results from the TIDE Project GUIB, The European Context for Assistive Technology (I. Porrero, R. Bellacasa), IOS Press Amsterdam, 1995.
[6] G. Awad. Ein Beitrag zur Mensch-Maschine-Kommunikation für Blinde und hochgradig Sehbehinderte. Ph.D. dissertation, Technical University Berlin, Berlin, 1986.
[7] D. Burger, C. Mazurier, S. Cesarano, J. Sagot. The design of interactive auditory learning tools. Non-visual Human-Computer Interactions 1993, 228, 97–114.
[8] M. O. J. Hawksford. Scalable Multichannel Coding with HRTF Enhancement for DVD and Virtual Sound Systems. J. Audio Eng. Soc. 2002, 50(11), 894–913.
[9] J. Blauert. Spatial Hearing. The MIT Press, MA, 1983.
[10] E. A. G. Shaw. Transformation of sound pressure level from the free-field to the eardrum in the horizontal plane. J. Acoust. Soc. Am. 1974, 56, 1848–1861.
[11] S. Mehrgart, V. Mellert. Transformation characteristics of the external human ear. J. Acoust. Soc. Am. 1977, 61(6), 1567–1576.
[12] D. Hammershøi, H. Møller. Free-field sound transmission to the external ear; a model and some measurement. Proc. of DAGA’91, Bochum, 1991, 473–476.
[13] A. J. Watkins. Psychoacoustical aspects of synthesized vertical locale cues. J. Acoust. Soc. Am. 1978, 63, 1152–1165.
[14] R. A. Butler, K. Belendiuk. Spectral cues utilized in the localization of sound in the median sagittal plane. J. Acoust. Soc. Am. 1977, 61, 1264–1269.
[15] S. K. Roffler, R. A. Butler. Factors that influence the localization of sound in the vertical plane. J. Acoust. Soc. Am. 1968, 43, 1255–1259.
[16] J. Blauert. Sound localization in the median plane. Acustica 1969/1970, 22, 205–213.
[17] L. R. Bernstein, C. Trahiotis, M. A. Akeroyd, K. Hartung. Sensitivity to brief changes of interaural time and interaural intensity. J. Acoust. Soc. Am. 2001, 109(4), 1604–1616.
[18] D. McFadden, E. G. Pasanen. Lateralization at high frequencies based on interaural time differences. J. Acoust. Soc. Am. 1976, 59, 634–639.
[19] C. B. Jensen, M. F. Sorensen, D. Hammershøi, H. Møller. Head-Related Transfer Functions: Measurements on 40 human subjects. Proc. of 6th Int. FASE Conference, Zürich, 1992, 225–228.
[20] H. Møller, M. F. Sorensen, D. Hammershøi, C. B. Jensen. Head-Related Transfer Functions of human subjects. J. Audio Eng. Soc. 1995, 43(5), 300–321.
[21] D. Hammershøi, H. Møller. Sound transmission to and within the human ear canal. J. Acoust. Soc. Am. 1996, 100(1), 408–427.
[22] E. M. Wenzel, M. Arruda, D. J. Kistler, F. L. Wightman. Localization using nonindividualized head-related transfer functions. J. Acoust. Soc. Am. 1993, 94(1), 111–123.
[23] J. C. Middlebrooks. Virtual localisation improved by scaling nonindividualized external-ear transfer function in frequency. J. Acoust. Soc. Am. 1999, 106(3), 1493–1510.
[24] BEACHTRON – Technical Manual, Rev.C., Crystal River Engineering, Inc 1993.
[25] M. A. Senova, K. I. McAnally, R.L. Martin. Localization of Virtual Sound as a Function of Head-Related Impulse Response Duration. J. Audio Eng. Soc. 2002, 50(1/2), 57–66.
[26] H. Møller. Fundamentals of binaural technology. Applied Acoustics 1992, 36, 171–218.
[27] H. Møller. On the quality of artificial head recording systems. Proceedings of Inter-Noise 97, Budapest, 1997, 1139–1142.
[28] P. Maijala. Better binaural recordings using the real human head. Proceedings of Inter-Noise 97, Budapest, 1997, 1135–1138.
[29] H. Møller, D. Hammershøi, C. B. Jensen, M. F. Sorensen. Evaluation of artificial heads in listening tests. J. Audio Eng. Soc. 1999, 47(3), 83–100.
[30] W. M. Hartmann. How we localize sound. Physics Today 1999, 11, 24–29.
[31] A. Härmä, J. Jakka, M. Tikander, M. Karjalainen, T. Lokki, J. Hiipakka, G. Lorho. Augmented Reality Audio for Mobile and Wearable Appliances. J. Audio Eng. Soc. 2004, 52, 618–639.
[32] H. Møller, M. F. Sorensen, C. B. Jensen, D. Hammershøi. Binaural Technique: Do We Need Individual Recordings. J. Audio Eng. Soc. 1996, 44(6), 451–469.
[33] J. C. Middlebrooks. Narrow-band sound localization related to external ear acoustics. J. Acoust. Soc. Am. 1992, 92, 2607–2624.
[34] H. Fisher, S. J. Freedman. The role of the pinna in auditory localization. J. Audiol. Research 1968, 8, 15–26.
[35] W. M. Hartmann, B. Rakerd. On the minimum audible angle – A decision theory approach. J. Acoust. Soc. Am. 1989, 85, 2031–2041.
[36] T. Z. Strybel, C. L. Manlingas, D. R. Perrott. Minimum Audible Movement Angle as a function of azimuth and elevation of the source. Human Factors 1992, 34(3), 267–275.
[37] D. R. Perrott, A. D. Musicant. Minimum auditory movement angle: binaural localization of moving sources. J. Acoust. Soc. Am. 1977, 62, 1463–1466.
[38] J. Zwislocki, R. S. Feldman. Just noticeable differences in dichotic phase. J. Acoust. Soc. Am. 1956, 28, 860–864.
[39] P. A. Campbell. Just noticeable differences of changes of interaural time differences as a function of interaural time differences. J. Acoust. Soc. Am. 1959, 31, 917–922.
[40] D. R. Begault, E. Wenzel, M. Anderson. Direct Comparison of the Impact of Head Tracking Reverberation, and Individualized Head-Related Transfer Functions on the Spatial Perception of a Virtual Speech Source. J. Audio Eng. Soc. 2001, 49(10), 904–917.
[41] A. Härmä, J. Jakka, M. Tikander, M. Karjalainen, T. Lokki, J. Hiipakka, G. Lorho. Augmented Reality Audio for Mobile and Wearable Appliances. J. Audio Eng. Soc. 2004, 52(6), 618–639.
[42] J. Blauert. Localization and the law of the first wavefront in the median plane. J. Acoust. Soc. Am. 1971, 50, 466–470.
[43] M. Cohen, E. Wenzel. The design of Multidimensional Sound Interfaces. In W. Barfield, T. A. Furness III (Editors) “Virtual Environments and Advanced Interface Design”, Oxford Univ. Press, New York, 1995, 291–346.
[44] J. Sandvad, D. Hammershøi. Binaural auralization. Comparison of FIR and IIR filter representation of HIR. Proc. of 96th Convention of the Audio Eng. Soc., Amsterdam, 1994.
[45] M. Kleiner, B. I. Dalenbäck, P. Svensson. Auralization — an overview. J. Audio Eng. Soc. 1993, 41, 861–875.
[46] K. D. Jacob, M. Jorgensen, C. B. Ickler. Verifying the accuracy of audible simulation (auralization) systems. J. Acoust. Soc. Am. 1992, 92, p. 2395.
[47] J. Blauert, H. Lehnert, J. Sahrhage, H. Strauss. An Interactive Virtual-environment Generator for Psychoacoustic Research I: Architecture and Implementation. Acustica 2000, 86, 94–102.
[48] D. R. Begault, 3-D Sound for Virtual Reality and Multimedia. Academic Press, London, UK, 1994.
[49] K. Brinkmann, U. Richter. Zur Messunsicherheit bei psychoakustischen Messungen. Proc. of DAGA’87, Aachen, 1987, 593–596.
[50] K. Crispien, H. Petrie. Providing Access to Graphical-Based User Interfaces for Blind People: Using Multimedia System Based on Spatial Audio Representation. 95th AES Convention, J. Audio Eng. Soc, (Abstracts), 1993, 41, p. 1060.
[51] E. Mynatt, W. K. Edwards. Mapping GUIs to Auditory Interfaces. Proc. ACM Symposium on User Interface Software Technology, Monterey, November 1992, 61–70.
[52] E. Mynatt, G. Weber. Nonvisual Presentation of Graphical User Interfaces: Contrasting Two Approaches. Proc. 1994 ACM Conference on Human Factors in Computing Systems, Boston, April 1994, 66–172.
[53] S. H. Foster, E. M. Wenzel. Virtual Acoustic Environments: The Convolvotron. Demo system presentation at SIGGRAPH’91, 18th ACM Conference on Computer Graphics and Interactive Techniques, Las Vegas, NV, ACM Press, New York, 1991.
[54] F. L. Wightman, D. J. Kistler. Headphone Simulation of Free-Field Listening I.–II. J. Acoust. Soc. Am. 1989, 85, 858–878.
[55] M. Matsumoto, S. Yamanaka, M. Tohyama, H. Nomura. Effect of Arrival Time Correction on the Accuracy of Binaural Impulse Response Interpolation. J. Audio Eng. Soc. 2004, 52(1/2), 56–61.
[56] F. P. Freeland, L. W. P. Biscainho, P. S. R. Diniz. Interpositional Transfer Function for 3D-Sound Generation. J. Audio Eng. Soc. 2004, 52(9), 915–930.
[57] P. Minnaar, J. Plogsties, F. Christensen. Directional Resolution of Head-Related Transfer Functions Required in Binaural Synthesis. J. Audio Eng. Soc. 2005, 53(10), 919–929.
[58] S. E. Olive. Differences in Performance and Preference of Trained versus Untrained Listeners in Loudspeaker Tests: A Case Study. J. Audio Eng. Soc. 2003, 51(9), 806–825.
[59] V. R. Algazi, C. Avendano, R. O. Duda. Estimation of a spherical-head model from anthropometry. J. Audio Eng. Soc. 2001, 49(6), 472–479.
[60] E. Zwicker, R. Feldtkeller. Das Ohr als Nachrichtenempfänger. S. Hirzel Verlag, Stuttgart, 1967, p. 181.
[61] R. A. Butler, R. F. Naunton. Role of stimulus frequency and duration in the phenomenon of localization shifts. J. Acoust. Soc. Am. 1964, 36(5), 917–922.
[62] D. R. Perrott, J. Tucker. Minimum Audible Movement angle as a function of signal frequency and the velocity of the source. J. Acoust. Soc. Am. 1988, 83, 1522–1527.
[63] A. W. Mills. On the minimum audible angle. J. Acoust. Soc. Am. 1958, 30, 237–246.
[64] M. Kinkel, B. Kollmeier. Diskrimination interauraler Parameter bei Schmalbandrauschen. Proc. of DAGA’87, Aachen, 1987, 537–540.
[65] J. L. Hall. Minimum detectable change in interaural time or intensity difference for brief impulsive stimuli. J. Acoust. Soc. Am. 1964, 36, 2411–2413.
[66] D. W. Grantham. Detection and discrimination of simulated motion of auditory targets in the horizontal plane. J. Acoust. Soc. Am. 1986, 79, 1939–1949.
[67] J. M. Chowning. The simulation of Moving Sound Sources. J. Audio Eng. Soc. 1971, 19, 2–6.
[68] S. M. Abel, C. Giguere, A. Consoli, B. C. Papsin. Front/Back Mirror Image Reversal Errors and Left/Right Asymmetry in Sound Localization. Acustica 1999, 85, 378–389.
[69] F. Chen. Localization of 3-D Sound Presented through Headphone - Duration of Sound Presentation and Localization Accuracy. J. Audio Eng. Soc. 2003, 51(12), 1163–1171.
[70] S. R. Oldfield, S. P. A. Parker. Acuity of sound localisation: a topography of auditory space I-II. Perception 1984, 13, 581–617.
[71] S. R. Oldfield, S. P. A. Parker. Acuity of sound localisation: a topography of auditory space III. Perception 1986, 15, 67–81.
[72] R. L. McKinley, M. A. Ericson. Flight Demonstration of a 3-D Auditory Display. In Binaural and Spatial Hearing in Real and Virtual Environments (edited by R.H. Gilkey and T.R. Anderson), Lawrence Erlbaum Ass., Mahwah, New Jersey, 1997, 683-699.
[73] R. O. Duda. Elevation Dependence of the Interaural Transfer Function. in Binaural and Spatial Hearing in Real and Virtual Environments (edited by R. H. Gilkey and T. R. Anderson), Lawrence Erlbaum Ass., Mahwah, New Jersey, 1997, 49–75.
[74] W. G. Gardner, 3-D Audio Using Loudspeakers. Kluwer Academic Publ., Boston, 1998.
[75] D. R. Begault, E. Wenzel, M. Anderson. Direct Comparison of the Impact of Head Tracking Reverberation, and Individualized Head-Related Transfer Functions on the Spatial Perception of a Virtual Speech Source. J. Audio Eng. Soc. 2001, 49(10), 904–917.
[76] R. L. Martin, K. I. McAnally, M. A. Senova. Free-Field Equivalent Localization of Virtual Audio. J. Audio Eng. Soc. 2001, 49(1/2), 14–22.
[77] E. M. Wenzel, S. H. Foster. Perceptual consequences of interpolating head-related transfer functions during spatial synthesis. Proceedings of the ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, New York, USA, 1993.
[78] G. Wersényi. What Virtual Audio Synthesis Could Do for Visually Disabled Humans in the New Era? Proceedings of 12th AES Regional conference, Tokyo, 2005, 180–183.


 

György Wersényi was born in Gyor, Hungary, in 1975. He received an M.Sc. degree in electrical engineering from the Technical University of Budapest in 1998. He then spent four years there as a full-time Ph.D. student at the "Békésy György" Acoustic Research Laboratory. After spending the academic year 2000-2001 on a research scholarship at the Brandenburg Technical University in Cottbus, Germany, he received a Ph.D. degree from the BTU Cottbus in 2002. He is now an associate professor at the Szechenyi Istvan University in Gyor, where he heads the multimedia laboratory and is responsible for the international relations of the Department of Telecommunications. He frequently visits the Fachhochschule Leipzig, Germany, as an invited lecturer in electroacoustics. Dr. Wersényi is a member of the Hungarian Acoustical Society (OPAKFI) and of the AES. His areas of interest include spatial hearing, acoustic measurements, psychological acoustics, auditory modeling, and the decoding and transmission of acoustical information.

e-mail: wersenyi(at)sze.hu

http://uni.sze.hu/
http://vip.tilb.sze.hu/~wersenyi/index.html