EcoAssist

Simplifying camera trap image analysis with AI

Open Source

Every line of code is open source

User-friendly

Automated install and intuitive interface

Offline

Does not need any internet connection after installation

Video support

Works on both images and videos

Integration

Export results to the Timelapse Image Analyser

Human-in-the-loop

Option to manually verify model predictions

GPU acceleration

Runs automatically on NVIDIA and Apple Silicon GPUs

Post-process

Separate, visualise, crop, label or export results

EcoAssist is an application designed to streamline the work of ecologists dealing with camera trap images. It’s an AI platform that allows you to analyse images on your local computer and use machine learning models for automatic detection and identification, offering ecologists a way to save time and focus on conservation efforts.

It incorporates the open-source MegaDetector model, which can filter out images containing animals, people, and vehicles. You can select one of the species recognition models listed below to further identify the animals. Addax will be adding more identification models in the future. Do you have a model that you would like to make open source? Or do you want Addax to develop a model specifically for your project? Get in touch!

Species identification models

The following species models are incorporated into EcoAssist. Please keep in mind that these models were developed for a specific project. Whether or not a model may also be useful for other projects depends on factors such as the species pool, ecosystem, camera setup, and image resolution. Therefore, it is crucial to thoroughly test the models on your own data before use.

Developer

Addax Data Science

Description

Model to identify 30 species or higher-level taxa present in the desert biome of the Skeleton Coast National Park in northern Namibia. The model was trained on a set of more than 850,000 images.

Classes

  • aardwolf
  • african wild cat
  • baboon
  • bird
  • brown hyaena
  • caracal
  • cattle
  • cheetah
  • donkey
  • elephant
  • fox
  • gemsbok
  • genet
  • giraffe
  • hare
  • honey badger
  • hyrax
  • jackal
  • klipspringer
  • kudu
  • leopard
  • lion
  • mongoose
  • ostrich
  • porcupine
  • rhinoceros
  • spotted hyaena
  • springbok
  • steenbok
  • zebra

Links

Developer

The DeepFaune initiative

Model version

v1.1

Description

The DeepFaune initiative aims to develop artificial intelligence models to automatically classify species in images and videos collected using camera traps. The initiative is led by a core academic team from the French Centre National de la Recherche Scientifique (CNRS), in collaboration with more than 50 European partners involved in wildlife research, conservation, and management. The DeepFaune models can be run through custom software freely available on the website, or through other software packages and platforms such as EcoAssist. New versions of the model are published regularly, increasing classification accuracy or adding new species to the list of species that can be recognized. More information is available at https://www.deepfaune.cnrs.fr.

Classes

  • badger
  • ibex
  • red deer
  • chamois
  • cat
  • goat
  • roe deer
  • dog
  • squirrel
  • equid
  • genet
  • hedgehog
  • lagomorph
  • wolf
  • lynx
  • marmot
  • micromammal
  • mouflon
  • sheep
  • mustelid
  • bird
  • bear
  • nutria
  • fox
  • wild boar
  • cow

Links

Owner

New Zealand Department of Conservation

Developer

Addax Data Science

Description

Model to identify 17 species or higher-level taxa present in New Zealand. The model was trained on a set of approximately 2 million camera trap images from various projects across the country. These projects were run by multiple organizations and took place in a diverse range of habitats, using a variety of trail camera brands and models. The model has an overall validation accuracy, precision, and recall of 98% each. When tested on an out-of-sample test set, it scored 95%, 96%, and 94%, respectively. The model was designed to expedite the monitoring of New Zealand's invasive species (deer, possum, pig, cat, rodent, and mustelid).

Classes

  • caprid
  • cat
  • cow
  • deer
  • dog
  • hedgehog
  • kea
  • kiwi
  • lagomorph
  • mustelid
  • other bird
  • pig
  • possum
  • rodent
  • sealion
  • wallaby
  • weka

Links

Developer & Owner

AI For Good Lab, Microsoft

Description

The model is trained to classify animals into their genus-level taxonomic group. The training data were collected by the Department of Biological Sciences at Universidad de los Andes. All the images come from the 'Magdalena Medio' region in Colombia. The dataset contains 41,904 images across 36 labeled genera, split into 33,569 images in the training set and 8,335 images in the validation set. In our inference on the Amazon Rainforest dataset, we implement a 98% confidence threshold as part of a human-in-the-loop procedure. We observe that the model predicts 90% of the data with recognition confidence exceeding this threshold. Furthermore, within this high-confidence subset, the model achieves an average classification accuracy of 92%. This means that, after filtering out empty images with MegaDetector, only 10% of the detected animal objects require human validation.
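The confidence-threshold triage described above can be sketched in a few lines of Python. This is an illustrative sketch only, not EcoAssist's internal data structures: the detection dicts, field names, and helper function are hypothetical, and only the 0.98 threshold comes from the description.

```python
# Hypothetical sketch of the human-in-the-loop triage described above:
# predictions at or above a confidence threshold are auto-accepted,
# the rest are queued for human review. Field names are illustrative.

THRESHOLD = 0.98

def triage(detections, threshold=THRESHOLD):
    """Split detections into auto-accepted and needs-review lists."""
    accepted, review = [], []
    for det in detections:
        (accepted if det["confidence"] >= threshold else review).append(det)
    return accepted, review

detections = [
    {"label": "Dasyprocta", "confidence": 0.995},
    {"label": "Leopardus", "confidence": 0.91},
    {"label": "Tapirus", "confidence": 0.99},
]
accepted, review = triage(detections)
print(len(accepted), len(review))  # 2 auto-accepted, 1 for human review
```

With a threshold this strict, the reported 90%/10% split means most detections bypass manual review while the uncertain minority still gets human eyes.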

Classes

  • Dasyprocta
  • Bos
  • Pecari
  • Mazama
  • Cuniculus
  • Leptotila
  • Human
  • Aramides
  • Tinamus
  • Eira
  • Crax
  • Procyon
  • Capra
  • Dasypus
  • Sciurus
  • Crypturellus
  • Tamandua
  • Proechimys
  • Leopardus
  • Equus
  • Columbina
  • Nyctidromus
  • Ortalis
  • Emballonura
  • Odontophorus
  • Geotrygon
  • Metachirus
  • Catharus
  • Cerdocyon
  • Momotus
  • Tapirus
  • Canis
  • Furnarius
  • Didelphis
  • Sylvilagus
  • Unknown

Links

Developer

Addax Data Science

Description

Model to identify 13 species or higher-level taxa present in Iran. The model was trained on a set of approximately 1 million camera trap images. The model has an overall validation accuracy, precision, and recall of 95%, 93%, and 94%, respectively. The accuracy was not tested on an out-of-sample dataset since local images were absent. The model was designed to expedite the monitoring efforts of the Iranian Cheetah Society.

Classes

  • antilope
  • bird
  • camel
  • caracal
  • cat
  • cheetah
  • equid
  • fox
  • goat+sheep
  • hyena
  • leopard
  • porcupine
  • wolf+jackal

Links

Developer & Owner

San Diego Zoo Wildlife Alliance

Description

This model was trained by Mathias Tobler from the San Diego Zoo Wildlife Alliance. Information about the source of the training dataset and model metrics is not available.

Classes

  • Black-headed squirrel monkey
  • Brazilian rabbit
  • Brown agouti
  • Bush dog
  • Capybara
  • Coati
  • Collared anteater
  • Collared peccary
  • Common opossum
  • Crab-eating raccoon
  • Giant anteater
  • Giant armadillo
  • Gray four-eyed opossum
  • Great tinamou
  • Green acouchy
  • Grey brocket deer
  • Grey-fronted dove
  • Grison
  • Jaguar
  • Jaguarundi
  • Large-headed Capuchin
  • Long-nosed armadillo
  • Margay
  • Ocelot
  • Paca
  • Pale-winged trumpeter
  • Puma
  • Razor-billed curassow
  • Red brocket deer
  • Short-eared dog
  • Southern amazonian red squirrel
  • Southern naked-tailed armadillo
  • Spiny Rat
  • Spix's guan
  • Tapir
  • Tayra
  • Unknown bird
  • Unknown reptile
  • Unknown rodent
  • White-fronted capuchin
  • White-lipped peccary
  • Yellow-foot tortoise

Links

Developer & Owner

San Diego Zoo Wildlife Alliance

Description

This model was trained by Kyra Swanson from the San Diego Zoo Wildlife Alliance and distinguishes between 53 species native to the Peruvian Andes. The training data were collected by SDZWA and comprise 201,943 images, using a 70/20/10 train/validation/test split. The model reached an overall accuracy, precision, and recall of 88.9%, 88.6%, and 87.3%, respectively, on the test set.
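Metrics like the ones quoted for these models are computed from a held-out labeled set. As a reminder of what "overall accuracy, precision, and recall" mean, here is a minimal pure-Python sketch with made-up labels (the label strings and numbers are illustrative, not the model's real test data); precision and recall are macro-averaged over classes.

```python
# Toy sketch: overall accuracy plus macro-averaged precision and recall,
# computed from true vs. predicted labels. Labels are made up.

def metrics(y_true, y_pred):
    classes = set(y_true) | set(y_pred)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls = [], []
    for c in classes:
        tp = sum(t == p == c for t, p in zip(y_true, y_pred))
        pred_c = sum(p == c for p in y_pred)   # predicted as class c
        true_c = sum(t == c for t in y_true)   # actually class c
        precisions.append(tp / pred_c if pred_c else 0.0)
        recalls.append(tp / true_c if true_c else 0.0)
    return accuracy, sum(precisions) / len(classes), sum(recalls) / len(classes)

y_true = ["puma", "ocelot", "puma", "cow"]
y_pred = ["puma", "ocelot", "cow", "cow"]
acc, prec, rec = metrics(y_true, y_pred)
print(round(acc, 2))  # 0.75
```

Running your own labeled images through a model and computing these numbers is exactly the "test on your own data before use" step recommended above.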

Classes

  • andean bear
  • andean fox
  • andean guan
  • andean white-eared opossum
  • black agouti
  • bolivian squirrel
  • brown agouti
  • bush dog
  • collared peccary
  • common opossum
  • cow
  • domestic dog
  • dwarf brocket deer
  • empty
  • forest rabbit
  • giant anteater
  • giant armadillo
  • grison
  • highland coati
  • human
  • jaguar
  • jaguarundi
  • little red brocket deer
  • long-nosed armadillo
  • long-tailed weasel
  • lowland paca
  • lowland tapir
  • margay
  • molinas hog-nosed skunk
  • mountain paca
  • mountain tapir
  • northern pudu
  • ocelot
  • oncilla
  • other opossum
  • pacarana
  • pale-winged trumpeter
  • puma
  • razor-billed curassow
  • red brocket deer
  • reptile
  • short-eared dog
  • sickle-winged guan
  • small mammal
  • south american coati
  • southern naked-tailed armadillo
  • spixs guan
  • tamandua
  • tayra
  • tinamou
  • unknown bird
  • white-lipped peccary
  • white-tailed deer

Links

Workflow

Step 1: Import

Avoid the hassle of cloud uploads. Simply select a local folder with images and/or videos on your device.

Step 2: Analyse

Choose a MegaDetector version for detecting animals, humans, and vehicles and select a custom model for finer classification.

Step 3: Verify

Optionally, perform a human-in-the-loop session to confirm specific classes, confidence ranges, or subsets based on different selection methods. Export verified images to enlarge the training set for future model refinements.

Step 4: Post-process

Once satisfied, decide how to utilise the output. Options include sorting images into folders, cropping detections, drawing boxes, and exporting results to CSV files for further analysis. Custom features are always welcome.
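The CSV export from step 4 lends itself to quick downstream analysis with standard tools. A small sketch using only the Python standard library, counting detections per class; note that the column names ("file", "label", "confidence") are hypothetical here — check the header of your actual export before adapting this.

```python
# Sketch of downstream analysis on a CSV export like the one described
# in step 4. Column names are hypothetical; inspect your real export.
import csv
import io
from collections import Counter

# Stand-in for a real exported file, shown inline for self-containment.
sample = io.StringIO(
    "file,label,confidence\n"
    "IMG_0001.JPG,springbok,0.97\n"
    "IMG_0002.JPG,gemsbok,0.88\n"
    "IMG_0003.JPG,springbok,0.91\n"
)

counts = Counter(row["label"] for row in csv.DictReader(sample))
print(counts.most_common())  # [('springbok', 2), ('gemsbok', 1)]
```

For a real export, replace the `io.StringIO` stand-in with `open("results.csv", newline="")`.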

Users

  • 8973 downloads
  • 180 affiliations
  • 83 universities
  • 86 countries

The affiliations are solely based on user interactions. We would love to hear more about the projects you are involved in! Feel free to email us.

Install or update

Everything you need to get AI into your image workflow.

Windows

MacOS

Linux

Learn more

Navigate to the pages below for more information.

Source code

Find EcoAssist’s GitHub repository listed here

Timelapse

Discover the integration with the Timelapse image analyser

MegaDetector

Learn more about the engine behind EcoAssist

CamTrap Pro

Import your results to the CamTrap Pro application

Frequently Asked Questions

Please use the following citations if you used EcoAssist in your research.

  1. van Lunteren, P. (2023). EcoAssist: A no-code platform to train and deploy custom YOLOv5 object detection models. Journal of Open Source Software, 8(88), 5581.
  2. Beery, S., Morris, D., & Yang, S. (2019). Efficient pipeline for camera trap image review. arXiv preprint arXiv:1907.06772.
  3. Plus the citation of the species identification model used.

EcoAssist should automatically run on an NVIDIA or Apple Silicon GPU if one is available. The appropriate CUDA toolkit and cuDNN software is already included in the EcoAssist installation for Windows and Linux. If you have an NVIDIA GPU but EcoAssist doesn't recognise it, make sure you have a recent driver installed, then reboot. An MPS-compatible version of PyTorch is included in the installation for Apple Silicon users. The progress window will display whether EcoAssist is running on CPU or GPU. Email me if you need more assistance.

It's always good practice to first run EcoAssist in debug mode, where it will print its output to a console window. That output should point us in the right direction if there is an error. How to run it in debug mode depends on your operating system and can be found here. You can always email me if you need help with this.

Once you've opened EcoAssist in debug mode, you'll have to recreate the error so that the traceback will show up in the console window. You can copy-paste the output and email it to us, or raise an issue in the GitHub repository.

Interested in contributing to this project? There are always things to do. Do you feel comfortable handling one of the tasks listed here?

EcoAssist is an open-source project, so please feel free to fork the EcoAssist GitHub repository and submit fixes, improvements or add new features. For more information, see the contribution guidelines.

Previous code contributors can be found here. Thank you!

In previous versions of EcoAssist (v3.0 through v4.3) it was possible to train your own object detection models based on MegaDetector to detect your target species. Although this worked, it wasn't the best approach for developing a species recognition model: it required lots of training data, processing power, time, and electricity, and it wasn't very accurate. Experience showed that better results can be obtained with a classification model used in conjunction with the results of MegaDetector: the animals are first located by MegaDetector, then further classified by your custom model. EcoAssist v4.2 and later support deploying such a classification model alongside MegaDetector, but training one is more complicated and has not been incorporated into EcoAssist v4.4 and later.

If you still want to use the training feature of v4.3, you can download the EcoAssist v4.3 install file below. The rest of the installation will be done as usual, as is described here.

We've placed a detailed tutorial on Medium that provides a step-by-step guide on annotating, training, evaluating, deploying, and postprocessing data with EcoAssist v4.3. You can find it here.

All EcoAssist files are located in one folder, called 'EcoAssist_files'. See these instructions on where to find it. Windows users also need to remove the 'ecoassistcondaenv' conda environment. Let me know if you need help with that.

Screenshots

Uninstall

Want a clean slate? Follow the instructions below to remove all files.

Windows

MacOS

Linux