Deokki Min

I’m from South Korea. I normally do not listen to K-pop, but I really like Korean food. I like to both play and watch football and currently support Southampton FC. I also like to listen to various kinds of music, so please recommend your favourites.

Project Title: Bio-inspired Auditory Models

Supervisors: Christine Evers and Jonathon Hare

What is your PhD about?
My PhD project title is ‘Bio-inspired Auditory Models’: building AI models inspired by the human auditory system. I have three goals for this project. First, to perform better than existing acoustic AI models. Second, to be parametrically efficient by exploiting domain knowledge of the auditory system. Third, to retain interpretability by making use of the sequential processing stages of the auditory system.

Why is it important to do this research?
Most existing acoustic AI models are borrowed from other domains, such as vision (e.g. CNNs) and natural language processing (e.g. RNNs, Transformers), and therefore lack interpretability and auditory domain knowledge. What acoustic AI is trying to do is auditory scene analysis: understanding sound scenes. The auditory system performs this spontaneously, so this research can address those limitations by importing knowledge of the auditory system.

What drew you to studying this PhD?
This project is an extension and expansion of my Master’s thesis. In my Master’s, I only reflected a particular neural property of the auditory cortex in an AI model. In this PhD, I can reflect the various sequential stages of the auditory system as a whole, which made me eager to pursue this project.

What does a Sustainable Sound Future mean to you?
For me, a sustainable sound future is a society where hearing-impaired people have fair opportunities and are less isolated. I hope my understanding of the auditory system and bio-inspired AI models can contribute to mitigating those problems.

What were you doing before joining the CDT?
I was at the Korea Institute of Science and Technology (KIST) as an intern researcher. I conducted research measuring brain responses (EEG) to sound, trying to extract meaningful information, such as the listener’s attention and the sound itself, from the EEG.

What do you do on a typical PhD day so far?
I read papers related to auditory neuroscience and acoustic AI, build learnable auditory layers into AI models, and prepare for the weekly lab meeting.

Tell us a fun acoustic fact!
Human beings still don’t understand how humans understand sound scenes.