Classification of ecological images and audio using machine learning
Speaker: Ian Durbach (University of Cape Town)
Ecological studies often collect data in the form of images, audio, or video that must be classified into biologically meaningful categories such as species, individuals within a species, or behaviours. This can be done manually or with the assistance of models. Advances in machine learning, particularly in the area of neural networks, have greatly improved the accuracy with which many classification tasks can be carried out. This talk illustrates these methods using two applications where prediction, rather than inference or interpretability, is the focus, motivating a machine learning approach. The first uses images of otoliths – calcified structures found in the inner ear of fish – to discriminate between different pelagic fish stocks. The second uses audio recordings to discriminate between individual field crickets of the same species (Plebeiogryllus guttiventris).
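To make the classification setting concrete, the toy sketch below shows supervised classification of acoustic features in miniature. It is not the speaker's method and not a neural network: it uses a simple nearest-centroid rule on two hypothetical features (pulse rate and dominant frequency, both invented here for illustration) to assign a recording to one of two hypothetical individuals.

```python
import math

# Hypothetical training data: 2-D acoustic feature vectors
# (pulse rate, dominant frequency) for two individual crickets.
# All values are invented for illustration only.
train = {
    "cricket_A": [(4.1, 5.0), (4.3, 4.8), (3.9, 5.2)],
    "cricket_B": [(6.0, 3.1), (6.2, 2.9), (5.8, 3.3)],
}

def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

centroids = {label: centroid(pts) for label, pts in train.items()}

def classify(x):
    # Assign x to the label whose centroid is nearest in Euclidean distance.
    return min(centroids, key=lambda lab: math.dist(x, centroids[lab]))

print(classify((4.0, 5.1)))  # a recording whose features resemble cricket_A
```

A neural network replaces the hand-chosen features and distance rule with learned representations, but the supervised structure (labelled examples in, predicted category out) is the same.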