Possibly one of the more arcane titles in this series of blogs, but an important one nonetheless.
In an earlier blog I mentioned that there has been a resurgence of papers questioning No Net Loss (NNL) and offsets as part of the suite of delivery mechanisms for corporate landscape and site effects. Among the leaders in the field is Joe Bull at Kent, who has a knack for drilling down into the literature through large dataset searches with his support team. This included a detailed search of multiplier data. Referred to in the zu Ermgassen et al. paper, the 2017 study (Conservation Letters 10: 656-669) examined the practicality of multipliers as part of NNL.
Put simply: if you cause an impact on a site or area, how many times larger an area must you secure in order to offset that impact and deliver NNL? While theorists suggest tens or even hundreds of times, few schemes in practice come anywhere near that, and developers regularly fudge both the data and the measures used to claim effective delivery via offsets. That fudging comes down to a trait all too familiar from the datasets I've worked on: unreliable, and undeliverable in practice.
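To make the arithmetic concrete, here is a minimal illustrative sketch of how an area-based multiplier translates an impact into a required offset. The multiplier values are assumptions chosen for the example, not figures from the study:

```python
def required_offset_area(impact_area_ha: float, multiplier: float) -> float:
    """Offset area (ha) needed to compensate an impact,
    under a simple area-based multiplier."""
    return impact_area_ha * multiplier

# A hypothetical 5 ha impact under multipliers at the scales
# discussed above (illustrative values only):
for m in (2, 10, 100):
    print(f"multiplier {m:>3}: {required_offset_area(5, m):.0f} ha of offset")
```

Even this toy calculation shows why large multipliers bite: at a multiplier of 100, a 5 ha impact demands 500 ha of offset land, which is exactly the cost pressure that invites fudging.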
They also found that multipliers are often not explicitly stated for policy or project purposes, with the effect that you can't fail against a target you never declared. Outcomes are equally indecipherable. That matters: if you want to claim success, don't commit to too much up front. If you are a developer, a big multiplier costs money, and delivering outcomes costs time. Ideally, do neither. Simple really.