Effective biodiversity monitoring is vital for understanding and responding to the ongoing effects of climate change. Motion-activated cameras – known as camera traps – are an indispensable tool for monitoring biodiversity: they can be deployed in remote locations, are effective at monitoring rare and elusive animals, operate around the clock, and are often more cost-effective than other approaches. However, camera trapping projects then face the challenge of processing the resulting images: a single camera trap can produce thousands of images per day, and a single study commonly produces millions. Classifying these images is time-consuming and often requires expert knowledge. The need for manual classification limits the scale of camera trapping projects and can cause significant delays between data collection and any management intervention.

Deep learning can aid the timely analysis of camera trap data, enabling effective monitoring of biodiversity responses to climate change. However, image classifications collected through citizen science projects typically feature disagreement amongst volunteers, i.e. ground truth label uncertainty, which may affect the accuracy of deep learning models trained on such labels. In joint work with student Leonard Hockerts and ecologist Dr Peter Stewart, we consider combined camera trap and citizen science datasets featuring East African animals. We study the behaviour of AI models on this camera trap data under the real-world constraint of ground truth uncertainty, and reflect on various example difficulty metrics.
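To make the idea of ground truth label uncertainty concrete, one simple way to quantify volunteer disagreement on an image is the normalised entropy of its vote distribution. The sketch below is purely illustrative and not taken from the paper; the `label_entropy` helper and the vote data are hypothetical:

```python
import math
from collections import Counter

def label_entropy(votes: list[str]) -> float:
    """Normalised entropy of one image's volunteer labels:
    0 means full agreement, 1 means votes spread evenly
    across the observed classes."""
    counts = Counter(votes)
    total = len(votes)
    if len(counts) <= 1:
        return 0.0  # unanimous (or empty) vote sets carry no disagreement
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(len(counts))  # divide by the maximum possible entropy

# Hypothetical vote sets for two camera trap images:
print(label_entropy(["zebra"] * 10))                      # 0.0: a clean label
print(label_entropy(["zebra"] * 6 + ["wildebeest"] * 4))  # ~0.97: an uncertain label
```

Scores like this offer one crude difficulty signal: images whose votes are split are plausible candidates for noisy ground truth.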

If you are interested in learning more about this work, come along to the poster session at the NeurIPS 2025 Tackling Climate Change with Machine Learning workshop, or drop me an email. The full paper is currently under review.