Investigations in Audio Captioning: Addressing Vocabulary Imbalance and Evaluating Suitability of Language-Centric Performance Metrics

European Signal Processing Conference (EUSIPCO)

Organized by the European Association for Signal Processing (EURASIP)

Audio analytics is the broad field concerned with analyzing, processing, and extracting meaningful information from the sounds all around us. Audio captioning is a recent addition to this field: a cross-modal translation task that focuses on generating natural-language descriptions of the sound events occurring in an audio stream. In this work, we identify and address three main challenges in automated audio captioning: i) data scarcity, ii) imbalance or limitations in the audio captioning vocabulary, and iii) the choice of a performance evaluation metric that can best capture both auditory and semantic characteristics. We find that commonly adopted loss functions can result in an unfair vocabulary imbalance during model training. We propose two audio captioning augmentation methods that enrich the training dataset and enlarge the vocabulary. We further underline the need for in-domain pretraining by exploring the suitability of audio encoders previously trained on different audio tasks. Finally, we systematically explore five performance metrics borrowed from the image captioning domain and highlight their limitations for the audio domain.
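As a concrete illustration of the vocabulary-imbalance issue, the sketch below shows how a standard token-level cross-entropy loss weights every word occurrence equally, so frequent words dominate the gradient, and how a frequency-based re-weighting could counteract this. The inverse-log-frequency scheme, the toy vocabulary, and the helper `inverse_log_frequency_weights` are illustrative assumptions, not the method proposed in the paper.

```python
# Sketch (assumed scheme, not the paper's method): re-weight the
# token-level cross-entropy so that rare caption words contribute
# more to the loss and frequent words ("a", "is", ...) contribute less.
from collections import Counter

import torch
import torch.nn as nn

def inverse_log_frequency_weights(captions, vocab):
    """Per-word loss weights from training-caption word counts."""
    counts = Counter(word for caption in captions for word in caption.split())
    weights = torch.ones(len(vocab))
    for word, idx in vocab.items():
        # Frequent words get weights closer to 0, rare words weights near 1.
        weights[idx] = 1.0 / torch.log(torch.tensor(counts[word] + 2.0))
    return weights

# Toy example: a tiny vocabulary and three training captions.
vocab = {"a": 0, "dog": 1, "barks": 2, "is": 3, "barking": 4}
captions = ["a dog barks", "a dog is barking", "a dog is barking"]

weights = inverse_log_frequency_weights(captions, vocab)
criterion = nn.CrossEntropyLoss(weight=weights)  # weighted token loss

# logits: (batch * seq_len, vocab_size), targets: (batch * seq_len,)
logits = torch.randn(6, len(vocab))
targets = torch.tensor([0, 1, 2, 0, 1, 3])
print(float(criterion(logits, targets)))
```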

Figure: Scores achieved by the five metrics (x-axis) for each kind of perturbation error (semantic, temporal, spatial). The y-axis shows the percentage of captions that achieve higher scores for type-1 errors than for type-2 errors; metrics with higher values on the y-axis can be considered more suitable for audio captioning.
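To make the comparison summarized in the figure concrete, here is a minimal sketch of the perturbation test for one metric, sentence-level BLEU via NLTK. The example captions, the single-word perturbations, and the reading of type-1 as a temporal error and type-2 as a semantic error are illustrative assumptions; the paper runs this comparison systematically across five metrics and full caption sets.

```python
# Sketch of the perturbation test: score a caption against its reference
# after a temporal error (here taken as type-1) and after a semantic
# error (here taken as type-2). A metric suited to audio captioning
# should penalize the semantic error more, yet n-gram overlap metrics
# such as BLEU assign similar scores to any single-word substitution,
# regardless of which word changed.
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = "a dog barks before a car passes by".split()
temporal_error = "a dog barks after a car passes by".split()   # type-1
semantic_error = "a cat barks before a car passes by".split()  # type-2

smooth = SmoothingFunction().method1
score_type1 = sentence_bleu([reference], temporal_error,
                            smoothing_function=smooth)
score_type2 = sentence_bleu([reference], semantic_error,
                            smoothing_function=smooth)

# The figure's y-axis counts how often score_type1 > score_type2 over
# a whole caption set; higher percentages indicate a more suitable metric.
print(f"BLEU after temporal error: {score_type1:.3f}")
print(f"BLEU after semantic error: {score_type2:.3f}")
```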