Desert Lion Conservation


Automatic species identification in the Namib Desert biome

Desert Lion Conservation is a non-profit organisation dedicated to the conservation of desert-adapted lions in the Northern Namib. Its main focus is to collect baseline ecological data on the lion population and to study the lions' behaviour, biology, and adaptation to survival in the harsh environment. It uses this information to collaborate with other conservation bodies in the quest to find a solution to human-lion conflict, to elevate the tourism value of lions, and to contribute to the conservation of the species.

Addax Data Science, in partnership with Smart Parks, has been asked to develop a species recognition model to identify 30 species or higher-level taxa present in the Skeleton Coast National Park. The model was trained on a set of more than 850,000 images, of which ± 20% were provided by the client (see fig. 2). The remainder was sourced from comparable ecological projects.

Figure 2. Horizontal bar chart of the dataset used to develop the species recognition model. Bars represent classes and are divided into images taken from the project area (‘local’) and images sourced from other comparable ecological projects (‘non-local’).

As you can see in figure 2, the dataset is highly imbalanced. The largest class, ‘cattle’, accounts for more than a third of the entire dataset. Nevertheless, by combining up- and downsampling, the training process gave each class equal weight. The resulting classification model has an overall validation accuracy of 95.3%, a precision of 95.4%, and a recall of 95.3%.
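The report does not specify the exact resampling scheme, but the idea can be sketched as follows: classes larger than a chosen target are randomly downsampled, while smaller classes are upsampled by duplicating images at random. The function name, target count, and example filenames below are hypothetical.

```python
import random

def balance_classes(images_by_class, target_per_class, seed=42):
    """Balance a dataset by downsampling large classes and
    upsampling (sampling with replacement) small ones."""
    rng = random.Random(seed)
    balanced = {}
    for cls, images in images_by_class.items():
        if len(images) >= target_per_class:
            # Downsample: random subset without replacement.
            balanced[cls] = rng.sample(images, target_per_class)
        else:
            # Upsample: keep every original, then duplicate at random.
            extra = rng.choices(images, k=target_per_class - len(images))
            balanced[cls] = images + extra
    return balanced

# Hypothetical toy dataset: 'cattle' dominates, 'honey badger' is rare.
dataset = {"cattle": [f"c{i}.jpg" for i in range(300)],
           "honey badger": [f"h{i}.jpg" for i in range(40)]}
balanced = balance_classes(dataset, target_per_class=100)
# Every class now contributes exactly 100 images to training.
```

In practice, upsampling is usually paired with image augmentation (flips, crops, brightness shifts) so that duplicated images are not pixel-identical.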

Precision focuses on the reliability of positive predictions, while recall focuses on the probability of capturing all relevant positive instances. To illustrate, consider an image with 10 lions. If the model predicts 11 lions, the precision would be 91% (10/11, with one false prediction) and the recall would be 100% (10/10, correctly identifying all 10 lions). Now, envision the same image, but the model predicts 9 lions. This results in a precision of 100% (9/9, with all predictions being correct) and a recall of 90% (9/10, missing one lion).
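The worked example above can be reproduced with a few lines of code. Both metrics follow directly from the counts of true positives, false positives, and false negatives:

```python
def precision_recall(true_positives, false_positives, false_negatives):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Image with 10 lions; the model predicts 11, so one prediction is wrong.
p, r = precision_recall(true_positives=10, false_positives=1, false_negatives=0)
# p ≈ 0.909 (10/11), r = 1.0 (10/10)

# Same image; the model predicts 9, so one lion is missed.
p2, r2 = precision_recall(true_positives=9, false_positives=0, false_negatives=1)
# p2 = 1.0 (9/9), r2 = 0.9 (9/10)
```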

In most projects, both precision and recall matter, which is why accuracy is often reported as well: it balances the two. Refer to table 1 below for a detailed breakdown of the model’s class-specific metrics; a comprehensive discussion of the results follows afterwards. A large majority of the classes reach validation metrics above 90%.

Class             Precision  Recall  F1-score
african wild cat  98.9%      98.5%   98.7%
brown hyaena      96.3%      97.6%   97.0%
honey badger      70.7%      78.4%   74.4%
spotted hyaena    96.4%      92.9%   94.6%

Table 1. The model’s class-specific validation metrics (precision, recall, and their harmonic mean, the F1-score), calculated on the test set during training.

Validation metrics calculated during training, however, may not serve as a reliable guide for future predictions. To ensure that the model generalises and performs well in scenarios beyond the training dataset, it is helpful to conduct an out-of-sample validation. Out-of-sample images are those captured in contexts never encountered during the training phase, such as new locations or different seasons. In this project, we gathered an out-of-sample dataset from the previously unseen year 2023. The metrics derived from this dataset are slightly lower, with an overall accuracy of 93.4%, a precision of 93.8%, and a recall of 93.2%.

Below, you can see a visual representation of the model’s confusion matrix (fig. 3). It tabulates the predicted species against the actual species in a format that allows for a quick assessment of how well the model is performing. The matrix provides insight into which species are often confused by the model. For example, it indicates that about 5% of the foxes in the dataset are misclassified as jackals. Even though combining these animals into one group called ‘canid’ would make the model perform better, we’ve chosen to keep as many species as possible separate. This way, we provide more detailed information for the end user, even if there’s some overlap in the model’s predictions.
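A confusion matrix is simple to construct from paired lists of actual and predicted labels. The sketch below, with hypothetical fox/jackal labels chosen to mirror the 5% figure mentioned above, shows how row-normalising the matrix turns raw counts into the misclassification rates read off the figure:

```python
from collections import Counter

def confusion_matrix(actual, predicted, classes):
    """Rows are actual classes, columns are predicted classes."""
    counts = Counter(zip(actual, predicted))
    return [[counts[(a, p)] for p in classes] for a in classes]

def row_normalise(matrix):
    """Express each row as fractions, so cell (i, j) reads as
    'fraction of true class i predicted as class j'."""
    out = []
    for row in matrix:
        total = sum(row) or 1  # guard against empty rows
        out.append([count / total for count in row])
    return out

# Hypothetical labels: 1 of 20 foxes is misread as a jackal (5%).
classes = ["fox", "jackal"]
actual = ["fox"] * 20 + ["jackal"] * 10
predicted = ["fox"] * 19 + ["jackal"] * 11
cm = row_normalise(confusion_matrix(actual, predicted, classes))
# cm[0][1] == 0.05 → 5% of foxes classified as jackals
```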

Figure 3. Visual representation of the model’s confusion matrix.

Let’s delve further into the relatively low-performing classes. What causes the accuracy of certain classes to drop below 90%? Besides potential factors like the distinctiveness of the species, including morphology and behaviour, there is a clear pattern of limited availability of training data. Figure 4 illustrates that all classes with an accuracy below 90% (indicated by the vertical dashed line) also have fewer than 7,000 images available (marked by the horizontal dashed line). However, this doesn’t necessarily mean that classes with fewer than 7,000 images are destined to score below 90%: a third of the classes below that threshold still achieve above 90% accuracy.

Figure 4. Scatter plot showing the correlation between image availability and class-specific accuracy. The red dashed lines are included for illustrative purposes (refer to the accompanying text for guidance). The blue line represents a non-parametric locally estimated scatter plot smoothing (LOESS) with a 25% confidence interval.

The availability of training images is not the sole factor affecting class accuracy, but a large dataset certainly contributes to higher accuracy. In this project, all classes with more than 7,000 images performed above 90% accuracy.

The model described here is freely available through our open-source camera trap analysis platform, EcoAssist.