AddaxAI
Simplifying camera trap image analysis with AI
Previously known as EcoAssist
To avoid any legal concerns, we have renamed our project from EcoAssist to AddaxAI.
The project itself remains the same—only the name has changed.

Open Source
Every line of code is open source
User-friendly
Automated install and intuitive interface
Offline
Does not need any internet connection after installation
Video support
Works on both images and videos
Integration
Export results to the Timelapse Image Analyser
Human-in-the-loop
Option to manually verify model predictions
GPU acceleration
Runs automatically on NVIDIA and Apple Silicon GPUs
Post-process
Separate, visualise, crop, label, or export results

AddaxAI is an application designed to streamline the work of ecologists dealing with camera trap images. It’s an AI platform that allows you to analyse images on your local computer and use machine learning models for automatic detection and identification, offering ecologists a way to save time and focus on conservation efforts.
It incorporates the open-source MegaDetector model, which can detect animals, people, and vehicles and filter out empty images. You can select one of the species recognition models listed below to further identify the animals. Addax will be adding more identification models in the future. Do you have a model that you would like to make open-source? Or do you want Addax to develop a model specifically for your project? Get in touch!
Species identification models
The following species models are incorporated into AddaxAI. Please keep in mind that these models were developed for a specific project. Whether or not a model may also be useful for other projects depends on factors such as the species pool, ecosystem, camera setup, and image resolution. Therefore, it is crucial to thoroughly test the models on your own data before use.
Namibian Desert
Developer
Addax Data Science
Description
Model to identify 30 species or higher-level taxa present in the desert biome of the Skeleton Coast National Park, northern Namibia. The model was trained on a set of more than 850,000 images.
Classes
- aardwolf
- african wild cat
- baboon
- bird
- brown hyaena
- caracal
- cattle
- cheetah
- donkey
- elephant
- fox
- gemsbok
- genet
- giraffe
- hare
- honey badger
- hyrax
- jackal
- klipspringer
- kudu
- leopard
- lion
- mongoose
- ostrich
- porcupine
- rhinoceros
- spotted hyaena
- springbok
- steenbok
- zebra
Links
Europe
Developer
The Deepfaune initiative
Model version
v1.3
Description
The Deepfaune initiative aims to develop artificial intelligence models that automatically classify species in images and videos collected with camera traps. The initiative is led by a core academic team from the French Centre National de la Recherche Scientifique (CNRS), in collaboration with more than 50 European partners involved in wildlife research, conservation, and management. The Deepfaune models can be run through custom software freely available on the website, or through other software packages and platforms such as AddaxAI. New versions of the model are published regularly, increasing classification accuracy or adding new species to the list of species that can be recognised. More information is available at: https://www.deepfaune.cnrs.fr.
Classes
- bison
- badger
- ibex
- beaver
- red deer
- chamois
- cat
- goat
- roe deer
- dog
- fallow deer
- squirrel
- moose
- equid
- genet
- wolverine
- hedgehog
- lagomorph
- wolf
- otter
- lynx
- marmot
- micromammal
- mouflon
- sheep
- mustelid
- bird
- bear
- nutria
- raccoon
- fox
- reindeer
- wild boar
- cow
Links
New Zealand Invasives
Owner
New Zealand Department of Conservation
Developer
Addax Data Science
Description
Model to identify 17 species or higher-level taxa present in New Zealand. The model was trained on approximately 2 million camera trap images from various projects across the country. These projects were run by multiple organisations and took place in a diverse range of habitats, using a variety of trail camera brands and models. The model has an overall validation accuracy, precision, and recall of 98%. On an out-of-sample test set, it scored 95%, 96%, and 94%, respectively. The model was designed to expedite the monitoring of New Zealand's invasive species (deer, possum, pig, cat, rodent, and mustelid).
Classes
- caprid
- cat
- cow
- deer
- dog
- hedgehog
- kea
- kiwi
- lagomorph
- mustelid
- other bird
- pig
- possum
- rodent
- sealion
- wallaby
- weka
Links
Colombian Amazon
Developer & Owner
AI For Good Lab, Microsoft
Description
The model is trained to classify animals into their genus-level taxonomic group. The training data were collected by the Department of Biological Sciences at Universidad de los Andes. All images come from the 'Magdalena Medio' region in Colombia. The dataset contains 41,904 images across 36 labelled genera, split into a training set of 33,569 images and a validation set of 8,335 images. In inference on the Amazon Rainforest dataset, a 98% confidence threshold is applied as part of a human-in-the-loop procedure. The model predicts 90% of the data with recognition confidence exceeding this threshold, and within this high-confidence subset it achieves an average classification accuracy of 92%. This means that, after filtering out empty images with MegaDetector, only 10% of the detected animal objects require human validation.
Classes
- Dasyprocta
- Bos
- Pecari
- Mazama
- Cuniculus
- Leptotila
- Human
- Aramides
- Tinamus
- Eira
- Crax
- Procyon
- Capra
- Dasypus
- Sciurus
- Crypturellus
- Tamandua
- Proechimys
- Leopardus
- Equus
- Columbina
- Nyctidromus
- Ortalis
- Emballonura
- Odontophorus
- Geotrygon
- Metachirus
- Catharus
- Cerdocyon
- Momotus
- Tapirus
- Canis
- Furnarius
- Didelphis
- Sylvilagus
- Unknown
Links
Iran
Developer
Addax Data Science
Description
Model to identify 13 species or higher-level taxa present in Iran. The model was trained on a set of approximately 1 million camera trap images. The model has an overall validation accuracy, precision, and recall of 95%, 93%, and 94%, respectively. The accuracy was not tested on an out-of-sample dataset since local images were absent. The model was designed to expedite the monitoring work of the Iranian Cheetah Society.
Classes
- antilope
- bird
- camel
- caracal
- cat
- cheetah
- equid
- fox
- goat+sheep
- hyena
- leopard
- porcupine
- wolf+jackal
Links
Peruvian Amazon
Developer & Owner
San Diego Zoo Wildlife Alliance
Description
This model was trained by Mathias Tobler from the San Diego Zoo Wildlife Alliance. Information about the source of the training dataset and model metrics is not available.
Classes
- Black-headed squirrel monkey
- Brazilian rabbit
- Brown agouti
- Bush dog
- Capybara
- Coati
- Collared anteater
- Collared peccary
- Common opossum
- Crab-eating raccoon
- Giant anteater
- Giant armadillo
- Gray four-eyed opossum
- Great tinamou
- Green acouchy
- Grey brocket deer
- Grey-fronted dove
- Grison
- Jaguar
- Jaguarundi
- Large-headed Capuchin
- Long-nosed armadillo
- Margay
- Ocelot
- Paca
- Pale-winged trumpeter
- Puma
- Razor-billed curassow
- Red brocket deer
- Short-eared dog
- Southern amazonian red squirrel
- Southern naked-tailed armadillo
- Spiny Rat
- Spix's guan
- Tapir
- Tayra
- Unknown bird
- Unknown reptile
- Unknown rodent
- White-fronted capuchin
- White-lipped peccary
- Yellow-foot tortoise
Links
Peruvian Andes
Developer & Owner
San Diego Zoo Wildlife Alliance
Description
This model was trained by Kyra Swanson from the San Diego Zoo Wildlife Alliance and distinguishes between 53 species native to the Peruvian Andes. The training data was collected by SDZWA and comprises 201,943 images. They used a 70/20/10 Train/Val/Test split. The model reached an overall accuracy, precision, and recall of 88.9%, 88.6%, and 87.3% respectively on the test set.
Classes
- andean bear
- andean fox
- andean guan
- andean white-eared opossum
- black agouti
- bolivian squirrel
- brown agouti
- bush dog
- collared peccary
- common opossum
- cow
- domestic dog
- dwarf brocket deer
- empty
- forest rabbit
- giant anteater
- giant armadillo
- grison
- highland coati
- human
- jaguar
- jaguarundi
- little red brocket deer
- long-nosed armadillo
- long-tailed weasel
- lowland paca
- lowland tapir
- margay
- molinas hog-nosed skunk
- mountain paca
- mountain tapir
- northern pudu
- ocelot
- oncilla
- other opossum
- pacarana
- pale-winged trumpeter
- puma
- razor-billed curassow
- red brocket deer
- reptile
- short-eared dog
- sickle-winged guan
- small mammal
- south american coati
- southern naked-tailed armadillo
- spixs guan
- tamandua
- tayra
- tinamou
- unknown bird
- white-lipped peccary
- white-tailed deer
Links
Southwest USA
Developer & Owner
San Diego Zoo Wildlife Alliance
Description
This model distinguishes between 27 species native to the Southwest United States. The training data was collected partially by SDZWA and the California Mountain Lion Project, and includes examples from the NACTI and CCT training datasets. The training corpus comprises 91,662 images, split 70/20/10 into train/val/test sets. The model reached an overall accuracy of 88% on the test set. Created by Kyra Swanson in 2023 (tswanson@sdzwa.org).
Classes
- badger
- beaver
- bird
- boar
- bobcat
- cat
- corvid
- cougar
- cow
- coyote
- deer
- dog
- empty
- fox
- human
- opossum
- other
- owl
- rabbit
- raccoon
- raptor
- reptile
- rodent
- skunk
- squirrel
- vehicle
- weasel
Links
Kyrgyzstan
Developer
Hex Data
Owner
OSI-Panthera
Description
This model classifies the fauna of Kyrgyzstan. It was developed by Hex Data (https://hex-data.io) on behalf of OSI-Panthera (https://www.osi-panthera.org/). The model was trained on 42k images, with around 4k images per class, collected by the camera traps set up by OSI-Panthera. The class 'vide' (French for empty) is used to set aside the few false positives returned by MegaDetector; the remaining classes correspond to the scientific names of the detected families or species.
Classes
- panthera_uncia
- canidae
- ochotonidae
- vide
- aves
- ursidae
- mustelidae
- caprinae
- marmota
- muridae
- leporidae
Links
Tasmania
Developer
Barry Brook
Description
The MEWC model for Tasmania has been trained on 2.5 million labelled images from 96 classes. It is based on the EfficientNet v2 Small architecture, initialised with pre-trained ImageNet weights. The classes include all non-volant terrestrial mammals (native and introduced) found in Tasmania, along with over 50 of the most commonly observed bird species seen on camera traps. Most classes represent species, but there are also some general classes such as snake or insect, and an unknown/washed-out special class. Based on held-out test data, the overall classification accuracy and F1 scores are >99%, and for the common species, accuracy typically exceeds 99.5%. Results for the rarest taxa are lower, but still over 90% in almost all cases.
Classes
- antechinus
- australian fur seal
- australian magpie
- australian owlet nightjar
- australian pipit
- bait
- bare nosed wombat
- bassian thrush
- beautiful firetail
- bennetts wallaby
- black currawong
- black rat
- black swan
- blotched blue tongue
- brown falcon
- brown goshawk
- brown hare
- brown quail
- brush bronzewing
- brushtail possum
- cape barren goose
- cat
- cattle
- chestnut teal
- chicken
- common blackbird
- common bronzewing
- common ringtail
- crescent honeyeater
- crimson rosella
- dog
- dusky robin
- eastern barred bandicoot
- eastern bettong
- eastern quoll
- eastern rosella
- european goldfinch
- european rabbit
- european starling
- fallow deer
- flame scarlet robin
- forest raven
- forester kangaroo
- goat
- green rosella
- grey currawong
- grey fantail
- grey shrikethrush
- guinea fowl
- house mouse
- insect
- laughing kookaburra
- lewins rail
- little penguin
- long nosed potoroo
- long tailed mouse
- maned goose
- masked lapwing
- new holland honeyeater
- olive whistler
- pacific black duck
- painted buttonquail
- peafowl
- pink robin
- platypus
- purple swamphen
- pygmy possum
- rakali
- red fox
- scrubtit
- sheep
- short beaked echidna
- skink
- snake
- sooty shearwater
- southern brown bandicoot
- spotted tail quoll
- strong billed honeyeater
- sugar glider
- superb fairywren
- superb lyrebird
- swamp harrier
- swamp rat
- tasmanian boobook
- tasmanian devil
- tasmanian nativehen
- tasmanian pademelon
- tasmanian scrubwren
- thornbill
- unknown animal
- wedge tailed eagle
- white bellied sea eagle
- white faced heron
- white footed dunnart
- yellow tailed black cockatoo
- yellow throated honeyeater
Links
Workflow

Step 1: Import
Avoid the hassle of cloud uploads. Simply select a local folder with images and/or videos on your device.

Step 2: Analyse
Choose a MegaDetector version to detect animals, humans, and vehicles, and select a custom model for finer classification.

Step 3: Verify
Optionally, perform a human-in-the-loop session to confirm specific classes, confidence ranges, or subsets based on different selection methods. Export verified images to enlarge the training set for future model refinements.

Step 4: Post-process
Once satisfied, decide how to utilise the output. Options include sorting images into folders, cropping detections, drawing boxes, and exporting results to CSV files for further analysis. Custom features are always welcome.
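As an illustration of the post-processing step, the sketch below filters detection results by confidence and groups files by their top category. The file names and confidence values are made up for the example; the result structure follows the published MegaDetector batch output format (category "1" = animal, "2" = person, "3" = vehicle), but this is not AddaxAI's actual implementation.

```python
# Hypothetical MegaDetector-style results (structure per the MegaDetector
# batch output format; file names and confidences are invented).
results = {
    "images": [
        {"file": "cam1/img_001.jpg",
         "detections": [{"category": "1", "conf": 0.92, "bbox": [0.1, 0.2, 0.3, 0.4]}]},
        {"file": "cam1/img_002.jpg",
         "detections": [{"category": "2", "conf": 0.40, "bbox": [0.5, 0.1, 0.2, 0.6]}]},
        {"file": "cam1/img_003.jpg", "detections": []},
    ]
}

CATEGORY_NAMES = {"1": "animal", "2": "person", "3": "vehicle"}

def sort_by_category(results, conf_threshold=0.6):
    """Return {label: [files]}, labelling each image by its
    highest-confidence detection above the threshold."""
    folders = {}
    for image in results["images"]:
        dets = [d for d in image["detections"] if d["conf"] >= conf_threshold]
        if dets:
            top = max(dets, key=lambda d: d["conf"])
            label = CATEGORY_NAMES[top["category"]]
        else:
            label = "empty"  # no detection survives the threshold
        folders.setdefault(label, []).append(image["file"])
    return folders

print(sort_by_category(results, conf_threshold=0.6))
```

Lowering the threshold changes the grouping without rerunning the model: at 0.6 the second image counts as empty, while at 0.3 its person detection survives.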

Users

11,369
Downloads
345
Affiliations
135
Universities
108
Countries
The affiliations are solely based on user interactions. We would love to hear more about the projects you are involved in! Feel free to email us.
Open-source honesty box
AddaxAI is free and open-source because we believe conservation technology should be available to everyone, regardless of budget. But keeping it that way takes time, effort, and resources—all contributed by volunteers. If you’re using AddaxAI, consider chipping in. Think of it as an honesty box: if every user contributed just $3 per month, we could sustain development, improve features, and keep expanding the model zoo.
Let us know if you have any questions or want to receive an invoice for tax-deduction purposes.

Or choose your own amount via this link.
Install or update
Everything you need to get AI into your image workflow.
Windows
MacOS
Linux
Learn more
Navigate to the pages below for more information.
Source code
Find the AddaxAI GitHub repository listed here
Timelapse
Discover the integration with the Timelapse image analyser
MegaDetector
Learn more about the engine behind AddaxAI
CamTrap Pro
Import your results to the CamTrap Pro application
Frequently Asked Questions
How can I cite AddaxAI?
If you used AddaxAI in your research, please include the following citation, along with the citations of the models used to analyse your data.
- van Lunteren, P. (2023). AddaxAI: A no-code platform to train and deploy custom YOLOv5 object detection models. Journal of Open Source Software, 8(88), 5581, https://doi.org/10.21105/joss.05581
It doesn’t use my GPU. What can I do?
AddaxAI should automatically run on an NVIDIA or Apple Silicon GPU if one is available. The appropriate CUDA toolkit and cuDNN libraries are already included in the AddaxAI installation for Windows and Linux. If you have an NVIDIA GPU but AddaxAI doesn't recognise it, make sure you have a recent driver installed, then reboot. An MPS-compatible version of PyTorch is included in the installation for Apple Silicon users. The progress window will display whether AddaxAI is running on CPU or GPU. Email us if you need more assistance.
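If you want to check for yourself which device PyTorch sees, a minimal sketch (assuming a PyTorch environment like the one bundled with AddaxAI; the `pick_device` helper is illustrative, not part of AddaxAI) could look like this:

```python
def pick_device():
    """Report which device PyTorch-based inference would run on."""
    try:
        import torch
    except ImportError:
        return "cpu (PyTorch not installed)"
    if torch.cuda.is_available():              # NVIDIA GPU with working driver
        return "cuda"
    mps = getattr(torch.backends, "mps", None)  # present in PyTorch >= 1.12
    if mps is not None and mps.is_available():  # Apple Silicon GPU
        return "mps"
    return "cpu"

print(pick_device())
```

If this prints "cpu" on a machine with an NVIDIA GPU, the driver is the usual culprit.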
I found a bug or ran into a problem. What should I do?
To help us fix issues quickly, we need to be able to reproduce the problem on our end. A description like “I clicked some buttons and now there’s an error” doesn’t give us enough information. Instead, please follow these steps before reaching out:
1. Update AddaxAI
First, make sure you’re running the latest version. The issue may already be fixed.
2. Identify the cause
- Does the issue happen after a specific sequence of button clicks?
- Does it involve a particular folder, image, or dataset?
- Can you minimize the problem? (For example, does the error still occur with just one image or video?)
3. Run AddaxAI in debug mode
Debug mode prints detailed error messages in a console window, which helps us pinpoint the issue. Instructions for running AddaxAI in debug mode, depending on your operating system, can be found here. Once in debug mode, try to recreate the error so that a traceback appears in the console.
4. Send a detailed report
Once you’ve gathered the necessary details, email us with:
- A minimal reproducible example including required images/videos (the simplest way to trigger the error)
- Error logs from the debug console.
This will help us quickly diagnose and fix the problem.
Thanks for helping us improve AddaxAI!
How can I contribute in terms of code?
Interested in contributing to this project? There are always things to do. Do you feel comfortable handling one of the tasks listed here?
AddaxAI is an open-source project, so please feel free to fork the AddaxAI GitHub repository and submit fixes, improvements or add new features. For more information, see the contribution guidelines.
Previous code contributors can be found here. Thank you!
What is the difference between detection, classification and post-processing confidence?
Every detected object (animal, person, vehicle) has a detection confidence provided by MegaDetector. This confidence score represents how certain the model is that an object belongs to one of these categories.
If you want to further specify the species of an animal, a species identification model is used. This model processes animal detections that exceed a specified confidence threshold and classifies them into species. The model then assigns a classification confidence to each prediction. Any prediction below the classification threshold and above the detection threshold is labeled as an "unidentified animal" because we know it is an animal, but we can't determine the species with confidence.
The post-processing confidence threshold is a third level of filtering. It allows you to apply additional processing steps only to predictions above a chosen confidence level. This means you can post-process results at different thresholds without rerunning the models. For example, you might compare all results above 0.05 with only those above 0.60 to see how filtering affects the output. The detections remain the same—you’re just deciding which ones to display, export, crop, move, copy, etc.
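The interplay of the three thresholds can be sketched as follows. The threshold values and the `label_detection` helper are illustrative assumptions, not AddaxAI's actual implementation:

```python
# Hypothetical threshold values for illustration only.
DETECTION_THRESHOLD = 0.20        # passed to MegaDetector
CLASSIFICATION_THRESHOLD = 0.55   # below this -> "unidentified animal"
POST_PROCESS_THRESHOLD = 0.60     # applied afterwards, without rerunning models

def label_detection(det_conf, species, cls_conf):
    """Return the final label for an animal detection, or None if discarded."""
    if det_conf < DETECTION_THRESHOLD:
        return None                   # detection too weak: ignored entirely
    if cls_conf < CLASSIFICATION_THRESHOLD:
        return "unidentified animal"  # it is an animal, but species is uncertain
    return species

print(label_detection(0.85, "leopard", 0.90))  # "leopard"
print(label_detection(0.85, "leopard", 0.30))  # "unidentified animal"

# The post-processing threshold is applied to already-labelled results,
# so different cut-offs can be compared without rerunning the models.
labels = [("leopard", 0.90), ("caracal", 0.58)]
kept = [(s, c) for s, c in labels if c >= POST_PROCESS_THRESHOLD]
```

Here the caracal prediction survives detection and classification but is hidden by the 0.60 post-processing cut-off; lowering that cut-off brings it back instantly.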
Why do some images selected during annotation not contain any boxed items?
- The image selection criteria determine which images are included in the analysis.
- The annotation selection criteria control which annotations are displayed in those images.
These two settings are not always the same, which can lead to cases where an image is selected for review, but no annotations appear. Ideally, for a selected class, the annotation confidence range should match the image selection criteria automatically (this is on my to-do list!). However, the issue becomes more complex when dealing with multiple classes. For example, if you choose to view only images with animals, but some images also contain vehicles, which vehicle annotations should be shown?
- All detections (0.00 - 1.00)? This would include many false positives.
- Only highly confident detections (0.95 - 1.00)? This could miss many real detections.
To balance this, I've chosen a default range of 0.60 - 1.00 for unselected classes to reduce false positives while still showing relevant annotations.


What is the difference between MegaDetector 5a and 5b?
The only difference between MDv5a and MDv5b is their training data, which means each version may perform slightly better depending on your dataset. When in doubt, try both models and compare the results. If you need a general recommendation, MDv5a is usually the better starting point; if you want the best accuracy for your specific dataset, experimenting with both models is the best approach.
Screenshots
Uninstall
Want a clean slate? Follow the instructions below to remove all files.