Machine learning to classify animal species in camera trap images

Applications in ecology

Michael A. Tabak, Mohammad S. Norouzzadeh, David W. Wolfson, Steven J. Sweeney, Kurt C. Vercauteren, Nathan P. Snow, Joseph M. Halseth, Paul A. Di Salvo, Jesse Lewis, Michael D. White, Ben Teton, James C. Beasley, Peter E. Schlichting, Raoul K. Boughton, Bethany Wight, Eric S. Newkirk, Jacob S. Ivan, Eric A. Odell, Ryan K. Brook, Paul M. Lukacs, Anna K. Moeller, Elizabeth G. Mandeville, Jeff Clune & Ryan S. Miller

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Motion-activated cameras (“camera traps”) are increasingly used in ecological and management studies for remotely observing wildlife and are amongst the most powerful tools for wildlife research. However, studies involving camera traps result in millions of images that need to be analysed, typically by visually observing each image, in order to extract data that can be used in ecological analyses. We trained machine learning models using convolutional neural networks with the ResNet-18 architecture and 3,367,383 images to automatically classify wildlife species from camera trap images obtained from five states across the United States. We tested our model on an independent subset of images from the United States that was not seen during training and on an out-of-sample (or “out-of-distribution” in the machine learning literature) dataset of ungulate images from Canada. We also tested the ability of our model to distinguish empty images from those with animals in another out-of-sample dataset from Tanzania, containing a faunal community that was novel to the model. The trained model classified approximately 2,000 images per minute on a laptop computer with 16 gigabytes of RAM. It achieved 98% accuracy at identifying species in the United States, the highest accuracy of such a model to date. Out-of-sample validation on the Canadian dataset achieved 82% accuracy, and the model correctly identified 94% of images containing an animal in the dataset from Tanzania. We provide an R package (Machine Learning for Wildlife Image Classification) that allows users to (a) use the trained model presented here and (b) train their own model using classified images of wildlife from their studies. The use of machine learning to rapidly and accurately classify wildlife in camera trap images can facilitate non-invasive sampling designs in ecological studies by reducing the burden of manually analysing images. Our R package makes these methods accessible to ecologists.
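
For readers unfamiliar with the underlying technique, the sketch below shows what fine-tuning an ImageNet-pretrained ResNet-18 for species classification typically looks like in PyTorch. It is a minimal illustration of the general approach named in the abstract, not the authors' training pipeline or the R package they provide; the directory layout (one folder of images per species), the hyperparameters, and the example image path are assumptions made for the example.

# Minimal sketch (not the authors' code): fine-tune an ImageNet-pretrained
# ResNet-18 to classify camera trap images by species with PyTorch/torchvision.
import torch
import torch.nn as nn
from PIL import Image
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing: resize, convert to tensor, normalize.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: train_images/<species_name>/*.jpg (hypothetical paths).
train_data = datasets.ImageFolder("train_images", transform=preprocess)
train_loader = DataLoader(train_data, batch_size=64, shuffle=True, num_workers=4)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Start from ImageNet weights and replace the final fully connected layer
# with one output per species class found in the training folders.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Bare-bones training loop; a real run would add validation, a learning-rate
# schedule, and checkpointing.
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Inference on a single new image (hypothetical path): report the predicted species.
model.eval()
with torch.no_grad():
    img = preprocess(Image.open("new_image.jpg").convert("RGB")).unsqueeze(0).to(device)
    predicted = train_data.classes[model(img).argmax(dim=1).item()]
    print(predicted)

Because all layers are left trainable in the optimizer, this is a full fine-tune rather than feature extraction; with a training set as large as the one described in the abstract, that is the more common choice, though either can work.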

Original language: English (US)
Pages (from-to): 585-590
Number of pages: 6
Journal: Methods in Ecology and Evolution
Volume: 10
Issue number: 4
DOIs: https://doi.org/10.1111/2041-210X.13120
State: Published - Apr 1 2019

Keywords

  • artificial intelligence
  • camera trap
  • convolutional neural network
  • deep neural networks
  • image classification
  • machine learning
  • R package
  • remote sensing

ASJC Scopus subject areas

  • Ecology, Evolution, Behavior and Systematics
  • Ecological Modeling

Cite this

Tabak, M. A., Norouzzadeh, M. S., Wolfson, D. W., Sweeney, S. J., Vercauteren, K. C., Snow, N. P., ... Miller, R. S. (2019). Machine learning to classify animal species in camera trap images: Applications in ecology. Methods in Ecology and Evolution, 10(4), 585-590. https://doi.org/10.1111/2041-210X.13120
