Expert-based ecosystem services capacity matrices: Dealing with scoring variability
Capacity matrices are widely used for the assessment of ecosystem services, especially when based on participatory approaches. A capacity matrix is essentially a look-up table that links land cover types to the ecosystem services they potentially provide. The method introduced by Burkhard et al. in 2009 has since been developed and applied in an array of case studies. Here we address some of the criticisms of the use of capacity matrices, such as expert panel size, expert confidence, and scoring variability.
Based on three case-study capacity matrices derived from expert participatory scoring, we used three approaches to estimate the score means and standard errors: usual statistics, bootstrapping, and Bayesian models. By resampling the three capacity matrices, we show that the central score stabilizes very quickly and that intersample variability shrinks after 10–15 experts, while the standard error of the scores continues to decrease as sample size increases. Compared to usual statistics, bootstrapping methods reduce the estimated standard errors only for small samples. Using the confidence scores that experts attach to their ecosystem-service scores does not change the mean scores but slightly increases the associated standard errors. Here, computations incorporating the confidence scores only marginally changed the final scores. Nevertheless, many participants felt it important to have a confidence score in the capacity matrix so that they could express uncertainty about their own knowledge. Confidence scores could therefore be treated as supplementary material in a participatory approach but need not be used to compute final scores.
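To illustrate how bootstrap and classical standard errors can be compared for a single cell of a capacity matrix, the following is a minimal Python sketch. The scores, panel size, and 0–5 scale are hypothetical, not values from the case studies:

```python
import numpy as np

# Hypothetical scores (0-5 scale) from a small panel of 8 experts
# for one land-cover / ecosystem-service cell of the matrix.
scores = np.array([3, 4, 3, 2, 4, 3, 5, 3], dtype=float)

rng = np.random.default_rng(seed=42)
n_boot = 10_000

# Resample experts with replacement and recompute the cell mean each time.
boot_means = np.array([
    rng.choice(scores, size=scores.size, replace=True).mean()
    for _ in range(n_boot)
])

central_score = boot_means.mean()      # bootstrap estimate of the central score
bootstrap_se = boot_means.std(ddof=1)  # spread of resampled means = SE estimate

# Classical (plug-in) standard error for comparison: s / sqrt(n)
classical_se = scores.std(ddof=1) / np.sqrt(scores.size)

print(central_score, bootstrap_se, classical_se)
```

Resampling experts with replacement approximates the sampling distribution of the cell mean; for small panels such as this one, the bootstrap standard error is typically slightly smaller than the plug-in s/√n, consistent with the reduction observed for small samples.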
We compared usual statistics, bootstrapping, and Bayesian models to estimate central scores and standard errors for a capacity matrix based on a panel of 30 experts, and found that the three methods give very similar results. We interpret this as a consequence of a panel size twice the minimal number of experts needed. Bayesian models provided the lowest standard errors, whereas bootstrapping with confidence scores provided the largest.
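A Bayesian estimate of a cell score can be sketched with a conjugate normal–normal model. This is a simplification for illustration only (the study's actual model may differ), using the same kind of hypothetical 8-expert scores and a weakly informative prior:

```python
import numpy as np

# Hypothetical 8-expert scores (0-5 scale) for one matrix cell.
scores = np.array([3, 4, 3, 2, 4, 3, 5, 3], dtype=float)
n, ybar = scores.size, scores.mean()
sigma = scores.std(ddof=1)  # plug-in observation SD, treated as known

# Weakly informative prior on the cell's true score:
# centred on the scale midpoint with a wide standard deviation.
mu0, tau0 = 2.5, 2.0

# Conjugate normal-normal update (known variance): the posterior
# precision is the sum of prior and data precisions.
post_prec = 1.0 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + n * ybar / sigma**2) / post_prec
post_sd = 1.0 / np.sqrt(post_prec)  # comparable to the SE of the mean

print(post_mean, post_sd)
```

Because the posterior precision adds the prior precision to the data precision, the posterior standard deviation is always below the classical s/√n, which matches the observation that the Bayesian models yielded the lowest standard errors.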
These conclusions prompt us to advocate using bootstrapping to estimate final scores and their variability when the panel size is small (fewer than 10 experts). If more than 15 experts are involved, the usual statistics are appropriate. Bayesian models are more complex to implement but can also provide more informative outputs to help analyze the results.