This section discusses the data requirements for projects, which underpin the accounting. Like all projects, LUCF projects have several stages: (1) project design and registration at a national, regional, or international center; (2) project implementation, involving monitoring and evaluation; and (3) project verification, leading to certification. This section focuses on the second stage (implementation), although selected data may need to be collected and reported at each stage.
Examples of data to be reported are given in Table 6-4. Further details appear elsewhere in this report (Table 6-3; Chapter 5) and are not discussed here.
|Table 6-4: Project-level features of LUCF projects and their data requirements.|
| Project-Level Feature | Data Requirements |
|---|---|
| 1. Baseline carbon stock | Amount of carbon stored, by carbon pool |
| 1.1. Free riders | Amount of carbon stored, by carbon pool |
| 2. Change in carbon stock due to project | Amount of carbon stored, by carbon pool |
| 2.1. Leakage | Amount of carbon lost due to leakage |
| 2.2. Positive spillover | Amount of carbon gained due to spillover |
| 2.3. Market transformation | Amount of carbon gained due to market transformation |
| 3. Additionality | Additional (net) amount of carbon stored, by carbon pool |
| 4. Data collection and analysis methods | Methods used in measuring carbon stored by the project; methods used in estimating leakage, spillover, and market transformation |
| 5. Verifiability | Criteria used to evaluate the quality of data collection and analysis methods |
| 6. Uncertainty | Quantitative and qualitative indicators of precision; discussion of sources of uncertainty |
| 7. Permanence | Assessment of the permanence of the project |
| 8. Environmental impacts | List of environmental impacts affected by the project; relationship to environmental impact statements and legislation |
| 9. Socioeconomic impacts | List of socioeconomic impacts affected by the project |
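The relationship among the first three table entries can be sketched in code: net additionality is the change in carbon stock relative to the baseline, adjusted for leakage and for off-site gains. This is a minimal illustrative sketch; the sign conventions, the `additionality` helper, and the example quantities are assumptions, not a prescribed accounting method.

```python
def additionality(project_stock, baseline_stock, leakage=0.0,
                  spillover=0.0, market_transformation=0.0):
    """Net additional carbon stored (t C), per the accounting in Table 6-4.

    Gross change (project minus baseline) is reduced by carbon lost
    to leakage and increased by gains from positive spillover and
    market transformation. All conventions here are illustrative.
    """
    gross_change = project_stock - baseline_stock
    return gross_change - leakage + spillover + market_transformation


# Hypothetical quantities (t C) for a single carbon pool
net = additionality(project_stock=180.0, baseline_stock=100.0,
                    leakage=15.0, spillover=5.0,
                    market_transformation=3.0)
print(net)  # 73.0
```

In practice, this calculation would be repeated for each carbon pool and summed, with free riders accounted for in the baseline estimate.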
The most credible project results are derived from project-specific measurements, but there is some concern that an arduous project-by-project review of the data would impose prohibitive costs. Some researchers have therefore proposed an alternative approach based on a combination of performance benchmarks and procedural guidelines tied to appropriate measures of output. In all cases, measurement and verification of the actual performance of the project are required. Performance benchmarks for new projects could be chosen to represent the high-performance end of the spectrum of current commercial practice (e.g., roughly the top 25th percentile of best performance); in this case, the benchmark serves as a goal to be achieved. In contrast, others might use benchmarks as a reference or default baseline: an extension of existing technology, not the best available technology or process.
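A benchmark at "roughly the top 25th percentile" of current practice can be derived from the observed distribution of peer-project performance. The sketch below uses hypothetical sequestration rates; real benchmarks would be drawn from a vetted registry of comparable projects.

```python
import statistics

# Hypothetical sequestration rates (t C/ha/yr) for comparable projects
rates = [1.2, 1.8, 2.1, 2.4, 2.9, 3.3, 3.5, 4.0]

# The third quartile marks the boundary of the top 25% of observed
# performance; projects above it would meet the benchmark goal.
q1, median, q3 = statistics.quantiles(rates, n=4)
benchmark = q3
print(f"benchmark: {benchmark:.2f} t C/ha/yr")
```

Using the quartile as a goal corresponds to the first reading in the paragraph above; a default-baseline reading would instead pick a central value such as the median.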