

Simplifying camera trap image analysis with AI

  • Open source — every line of code is open source
  • Automated install and intuitive interface
  • No internet connection needed after installation
  • Video support — works on both images and videos
  • Export results to the Timelapse Image Analyser
  • Option to human-verify model predictions
  • GPU acceleration — runs automatically on NVIDIA and Apple Silicon GPUs
  • Separate, visualise, crop, label or export results

EcoAssist is an application designed to streamline the work of ecologists dealing with camera trap images. It’s an AI platform that allows you to analyse images on your local computer and use machine learning models for automatic detection and identification, offering ecologists a way to save time and focus on conservation efforts.

EcoAssist incorporates the open-source MegaDetector model, which can filter images containing animals, people, and vehicles. For species-level identification, select a species recognition model. At the moment only one is available: the Namibian Desert model, which can identify 30 African species. Addax will be adding more identification models in the future. Do you have a model that you would like to make open-source? Or do you want Addax to develop a model specifically for your project? Get in touch!


Step 1: Import

Avoid the hassle of cloud uploads. Simply select a local folder with images and/or videos on your device.

Step 2: Analyse

Choose a MegaDetector version for detecting animals, humans, and vehicles. Optionally, select a custom model for finer classification, such as categorising animals by species, people as poachers or non-poachers, or vehicles by company affiliation.

Step 3: Verify

Optionally, perform a human-in-the-loop session to confirm specific classes, confidence ranges, or subsets based on different selection methods. Export verified images to enlarge the training set for future model refinements.

Step 4: Post-process

Once satisfied, decide how to utilise the output. Options include sorting images into folders, cropping detections, drawing boxes, and exporting results to CSV files for further analysis. Custom features are always welcome.
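To give an idea of what the CSV export step produces, here is a minimal sketch that flattens MegaDetector-style batch output (one detection per row) into CSV. The `results` dict, the confidence threshold, and the column names are illustrative assumptions, not EcoAssist's actual export code.

```python
import csv
import io

# Hypothetical example of MegaDetector-style batch output
# (category "1" = animal, "2" = person, "3" = vehicle).
results = {
    "images": [
        {
            "file": "cam01/IMG_0001.JPG",
            "detections": [
                {"category": "1", "conf": 0.92, "bbox": [0.10, 0.20, 0.30, 0.40]},
                {"category": "2", "conf": 0.55, "bbox": [0.50, 0.10, 0.20, 0.35]},
            ],
        }
    ]
}

CATEGORY_NAMES = {"1": "animal", "2": "person", "3": "vehicle"}

def detections_to_csv(results, min_conf=0.2):
    """Flatten the output to one detection per CSV row,
    dropping boxes below the confidence threshold."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["file", "label", "confidence", "x", "y", "width", "height"])
    for image in results["images"]:
        for det in image.get("detections", []):
            if det["conf"] < min_conf:
                continue
            x, y, w, h = det["bbox"]
            writer.writerow([image["file"],
                             CATEGORY_NAMES.get(det["category"], "unknown"),
                             det["conf"], x, y, w, h])
    return buf.getvalue()

print(detections_to_csv(results))
```

A flat table like this loads directly into spreadsheets, R, or pandas for further analysis.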










The affiliations are solely based on user interactions. We would love to hear more about the projects you are involved in! Feel free to email us.

Install or update

Everything you need to get AI into your image workflow.





EcoAssist is a collaboration between Addax Data Science and Smart Parks to support open-source projects in nature conservation. You can contribute to the development of this initiative via the sponsor button below. By contributing, you directly support the development of EcoAssist. Your support will enable us to invest more time, expand features and reach more conservationists in need. Thank you!

Learn more

Navigate to the pages below for more information.

  • Find EcoAssist's GitHub repository here
  • Learn more about MegaDetector, the engine behind EcoAssist
  • Discover the integration with the Timelapse Image Analyser

Frequently Asked Questions

Please use the following citations if you use EcoAssist in your research.

  • van Lunteren, P. (2023). EcoAssist: A no-code platform to train and deploy custom YOLOv5 object detection models. Journal of Open Source Software, 8(88), 5581.
  • Beery, S., Morris, D., & Yang, S. (2019). Efficient pipeline for camera trap image review. arXiv preprint arXiv:1907.06772.

EcoAssist should automatically run on an NVIDIA or Apple Silicon GPU if one is available. The appropriate CUDA Toolkit and cuDNN software is already included in the EcoAssist installation for Windows and Linux. If you have an NVIDIA GPU but EcoAssist doesn't recognise it, make sure you have a recent driver installed, then reboot. For Apple Silicon users, an MPS-compatible version of PyTorch is included in the installation. The progress window will display whether EcoAssist is running on the CPU or GPU. Email us if you need more assistance.
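The device selection described above can be sketched with a standard PyTorch check. This is a generic illustration, not EcoAssist's actual startup code; it falls back to the CPU when PyTorch is not installed.

```python
def pick_device():
    """Return the best available compute device: CUDA (NVIDIA),
    MPS (Apple Silicon), or CPU as a fallback."""
    try:
        import torch
    except ImportError:
        # No PyTorch available at all -- run on CPU.
        return "cpu"
    if torch.cuda.is_available():
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"

print(pick_device())
```

If this prints "cpu" on a machine with an NVIDIA GPU, the driver (rather than EcoAssist) is usually the culprit.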

It's always good practice to first run EcoAssist in debug mode, where it prints its output to a console window. That should point us in the right direction if there is an error. How to run it in debug mode depends on your operating system and can be found here. You can always email us if you need help with this.

Once you've opened EcoAssist in debug mode, you'll have to recreate the error so that the traceback will show up in the console window. You can copy-paste the output and email it to us, or raise an issue in the GitHub repository.

EcoAssist is an open-source project, so please feel free to fork the EcoAssist GitHub repository and submit fixes, improvements or add new features. For more information, see the contribution guidelines.

Previous code contributors can be found here. Thank you!

In previous versions of EcoAssist (v3.0 through v4.3) it was possible to train your own object detection models based on MegaDetector to detect your target species. Although this worked, it wasn't the best approach to developing a species recognition model: it required lots of training data, processing power, time and electricity, and wasn't very accurate. Experience showed that better results can be obtained by using a classification model in conjunction with MegaDetector's output: the animals are located by MegaDetector and then further classified by your custom model. EcoAssist versions after v4.2 support deploying such a classification model alongside MegaDetector, but training one is more complicated and hasn't been incorporated into v4.4 or later.
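The two-stage approach works on boxes: MegaDetector returns normalised coordinates, which are converted to pixel crops and handed to the classifier. A minimal sketch of that conversion, assuming MegaDetector's bbox convention of `[x_min, y_min, width, height]` normalised to the image size (the function name is illustrative):

```python
def bbox_to_pixels(bbox, img_w, img_h):
    """Convert a MegaDetector-style normalised [x, y, w, h] box
    (top-left origin) to integer pixel coordinates
    (left, top, right, bottom), clamped to the image bounds --
    the crop region you would pass to a species classifier."""
    x, y, w, h = bbox
    left = max(0, int(round(x * img_w)))
    top = max(0, int(round(y * img_h)))
    right = min(img_w, int(round((x + w) * img_w)))
    bottom = min(img_h, int(round((y + h) * img_h)))
    return left, top, right, bottom

# A detection at (0.25, 0.50) covering 50% x 25% of a 1920x1080 frame:
print(bbox_to_pixels([0.25, 0.50, 0.50, 0.25], 1920, 1080))  # (480, 540, 1440, 810)
```

The classifier then only ever sees tight animal crops, which is a large part of why this pipeline needs far less training data than retraining a whole detector.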

If you still want to use the training feature of v4.3, you can download the EcoAssist v4.3 install file below. The rest of the installation proceeds as usual, as described here.

We've placed a detailed tutorial on Medium that provides a step-by-step guide on annotating, training, evaluating, deploying, and postprocessing data with EcoAssist v4.3. You can find it here.

All EcoAssist files are located in one folder, called 'EcoAssist_files'. See these instructions on where to find it.