Senior Lecturer, Birmingham City University, Birmingham, England, United Kingdom
Abstract: In this paper, we propose a method of spectroscopic food analysis in which audio is generated from spectra, allowing users to discriminate between two classes of a given food type. To do this, we develop a system that first extracts features and applies dimensionality reduction, then maps the reduced features to the parameters of a synthesizer. To optimise the process, we compare Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis, applied to two real-life datasets, to evaluate the performance of sonification as a method for discriminating data. The results indicate that the model provides relevant auditory information and, most importantly, allows users to consistently discriminate between two classes of spectral data.
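To make the pipeline described in the abstract concrete, the following is a minimal sketch, assuming a PCA-based dimensionality reduction and a simple two-operator FM synthesizer; the parameter ranges and the mapping from components to synthesis parameters are illustrative assumptions, not the exact configuration used in the paper.

```python
# Sketch of the sonification pipeline: spectra -> PCA -> FM synthesis parameters.
# Assumed mapping and parameter ranges; not the authors' exact configuration.
import numpy as np
from sklearn.decomposition import PCA


def spectra_to_fm_audio(spectra, sample_rate=44100, duration=1.0):
    """Map each spectrum (one row) to a one-second FM tone."""
    # Reduce each spectrum to three components used as carrier frequency,
    # modulator frequency, and modulation index.
    components = PCA(n_components=3).fit_transform(spectra)

    # Rescale each component to a perceptually useful range (assumed ranges).
    def rescale(x, lo, hi):
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        return lo + x * (hi - lo)

    carrier = rescale(components[:, 0], 220.0, 880.0)    # Hz
    modulator = rescale(components[:, 1], 50.0, 400.0)   # Hz
    mod_index = rescale(components[:, 2], 0.5, 8.0)      # dimensionless

    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    audio = [
        np.sin(2 * np.pi * fc * t + i * np.sin(2 * np.pi * fm * t))
        for fc, fm, i in zip(carrier, modulator, mod_index)
    ]
    return np.stack(audio)  # one audio clip per input spectrum


# Example: 20 synthetic spectra with 256 wavelength bins.
clips = spectra_to_fm_audio(np.random.rand(20, 256))
print(clips.shape)  # (20, 44100)
```

An AM variant would replace the phase modulation term with a multiplicative envelope on the carrier; listeners would then compare the resulting tones across the two classes.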