ISIN International Sustainability Indicators Network
Comparative Summary of Existing Indicator Initiatives
The Comparative Summaries are intended to provide the following information:
 
1. Overview Information
 
  a. Initiator/Institutional home, Date project established
Who initiated the project? What year was the project first launched? Which organization is currently hosting the development/reporting of the indicators?
 
  b. Scale and Aggregation method
The scale is the spatial/geographic focus of the indicators - country, region, community, or a company. Aggregation refers to the type of indicators - are they single measures of simple issues (e.g., vehicle miles traveled, energy use per capita), or are they aggregated measures (indices) combining several indicators (e.g., sense of community, employee job satisfaction, water quality, air quality)? If aggregated, what method is used (monetization, multivariable analysis, index computation)?
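The index-computation method mentioned above can be sketched in a few lines. The following is a minimal illustration, assuming min-max normalization followed by a weighted average; the indicator names, values, and weights are hypothetical, not drawn from any ISIN project:

```python
# Minimal sketch of index computation as an aggregation method:
# min-max normalize each raw indicator to the 0-1 range, then combine
# normalized scores with a weighted average. All names, values, and
# weights below are hypothetical illustrations.

def min_max_normalize(values):
    """Scale raw indicator values to 0-1 across the units being compared."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(scores, weights):
    """Weighted average of normalized indicator scores for one unit
    (e.g., one community)."""
    total = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total

# Raw vehicle-miles-traveled per capita for three hypothetical communities:
vmt = [8500, 12000, 10200]
normalized = min_max_normalize(vmt)  # first community scores 0.0, second 1.0

# Normalized scores for one community, combined into a single index
# (energy use weighted twice as heavily as the other two):
scores = {"air_quality": 0.8, "water_quality": 0.6, "energy_use": 0.4}
weights = {"air_quality": 1.0, "water_quality": 1.0, "energy_use": 2.0}
print(round(composite_index(scores, weights), 2))  # 0.55
```

The choice of normalization and weighting scheme substantially shapes the resulting index, which is why the summaries ask which aggregation method each initiative used.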
 
  c. Scope and resulting indicator categories
The scope is the areas or topics covered by the indicators (e.g., environmental, social, economic). The resulting categories are the main themes for organizing the indicators - e.g., transportation, resource conservation, economy and employment, etc. The actual indicators included should NOT be presented; rather, the list will be provided as part of the summary handout to participants.
 
  d. Goals, targets, and benchmarks
Are goals, targets or benchmarks associated with the indicators? What kind? Who creates them, through what kind of process, and who is responsible for meeting them?
 
  e. Framework/concepts
This section covers the way indicators are organized (e.g., by issue area, cause-effect, hierarchy, etc.)
 
  f. Presentation and communication
How are the indicators presented - through tables, graphs, etc.? How often are they reported - on a monthly, yearly, biennial basis, etc.? Why was that timeframe selected? How are the indicators communicated? To whom?
 
2. Indicator development process
 
  a. Purpose of and audience for indicators
What was the strategy? Who was expected to use the indicators, and why? How were the indicators intended, in theory, to create change? Examples of answers: inform the general public; provide a framework for regulatory and budgeting decisions of government agencies; control permitting levels as part of a national covenants system (à la the Netherlands); be the basis for internationally negotiated treaties; be the basis for managing relations between different levels of government (à la the US EPA's NEPPS agreements with states).
 
  b. Organizational setup and Participation
 
  • What kind of organization initiated the indicator development process and what authority did that organization have? Was there a strategic reason that made that organization the most appropriate or most likely to succeed?
  • How many and which people or groups/stakeholders participated in the development of the indicators? (An answer of government, business and NGOs is not enough. All indicator projects say this. We need to know how and at what level they participated.)
  • How high a profile did the participants and the process have? Did the heads of major corporations endorse them? The president, governor, mayor, or prime minister? The legislature, town council, general assembly?
  • How were the participants involved? Did they just show up at a meeting or series of meetings, write comments, or vote on the indicators? Did they invest time and resources? Why did they participate at that level? Was that the best they could do, or was there a strategic reason?
 
  c. Authority/Institutional arrangements for ongoing reporting/Funding
 
  • What authority was vested in the indicators and the process, either legal and formal, or moral and informal?
  • What, if anything, was said about the process or indicators in legislation/executive orders/endorsements?
  • If the indicators were 'headline' status indicators or 'system' level indicators, were they linked to performance measures and programs of specific agencies and actors?
  • Who (which organization) is in charge of the reporting, and with what frequency? Was there a strategic reason for selecting that organization? Are there any executive orders, laws, or mandates for ongoing reporting of the indicators?
  • What funding mechanism allows the reporting to continue? Where did the funding for the development process come from, and what funding is being used to continue updating and publishing the indicators?
 
3. Results
 
  a. Achievements and known impacts
 
  • This should include both the product (e.g., published report with indicators, posted on the web) and the process (e.g., has affected decision-making, brought about new programs, initiated actions to address key problems). The latter relates to the "theory of change."
  • How did the indicators effect change? For example, they informed the general public, provided a framework for regulatory and budgeting decisions of government agencies, became the basis for internationally negotiated treaties, etc.
  • Under "known impacts" one can describe the laws, rules, and programs that changed in response either to the tracking of indicators over time or to the process and enabling legislation behind the indicators' creation.
 
  b. Lessons learned
 
  • What was the most important factor that contributed to the project's achievement, that is, what made it possible?
  • What were the key success factors? What factors limited the success or impact of the indicators?
  • What were the most important barriers that hindered (or still hinder) the project's achievement and how were they overcome?
  • What would you do differently?
 
Comments or questions?
Contact contactus at sustainabilityindicators.org
Copyright © 2002,
International Sustainability Indicators Network