Alice Baird

Research Projects
I have been involved in various health- and audio-related research projects in recent years, including DE-ENIGMA, RADAR-CNS, and EMBOA.
Descriptions of my more recent projects are given below:

DFG AUDI0NOMOUS: Agent-based Unsupervised Deep Interactive 0-shot-learning Networks Optimising Machines’ Ontological Understanding of Sound
Runtime: 01.01.2021 – 31.12.2026
Soundscapes are a component of our everyday acoustic environment; we are always surrounded by sounds, reacting to them as well as creating them. While computer audition, the understanding of audio by machines, has primarily been driven by the analysis of speech, the understanding of soundscapes has received comparatively little attention. AUDI0NOMOUS, a long-term project based on artificially intelligent systems, aims to achieve major breakthroughs in the analysis, categorisation, and understanding of real-life soundscapes. A novel approach, based on the development of four highly cooperative and interactive intelligent agents, is proposed to achieve this highly ambitious goal. Each agent will autonomously infer a deep and holistic comprehension of sound: a Curious Agent will collect unique data from web sources and social media; an Audio Decomposition Agent will decompose overlapping sounds; a Learning Agent will recognise an unlimited number of unlabelled sounds; and an Ontology Agent will translate the soundscapes into verbal ontologies. AUDI0NOMOUS will open up an entirely new dimension of comprehensive audio understanding; such knowledge will have a high and broad impact across disciplines in both the sciences and humanities, promoting advances in health care, robotics, and smart devices and cities, amongst many others.

ZD.B Fellowship: An Embedded Soundscape System for Personalised Wellness via Multimodal Bio-Signal and Speech Monitoring
Runtime: 01.01.2017 – 31.12.2020
The soundscape (the audible components of a given environment) is omnipresent in daily life. Yet research has shown that elements of our acoustic soundscapes can negatively affect mental wellbeing. Taking a dual analysis–synthesis approach, this project will, through multimodal feedback analysis, explore the benefits of synthesised soundscape design and develop a ‘deep-listening’ personalised embedded system to improve human wellness. The project will explore questions pertaining to auditory perception and develop novel methods for soundscape generation, informed by intelligent signal-state monitoring.