The Investor Confidence Project is developing its Quality Assurance (QA) Provider credential. One topic of debate has been the role of a QA Provider (or technical reviewer) with regard to “reviewing methodologies, assumptions and results to ensure that they follow best practices and are reasonable.”
So what does “reasonable” mean?
When the LEED rating system was first developed, I started supporting projects by developing Measurement and Verification (M&V) plans to satisfy the M&V Credit. I submitted about 20 of these plans for different projects, essentially following the same format every time, with variations on some of the details, depending on the measures and building systems involved. Of these plans, some were accepted outright, some were returned with a few issues, and a few were returned with many questions and items to address.
In some cases, comments I received from reviewers directly contradicted comments I had received previously! Essentially, the review process was applied with little consistency.
This experience is not limited to LEED certification projects, of course. The same thing happened on utility-sponsored programs, where calculation assumptions or model calibration values were challenged in some cases and the very same inputs were left unchallenged in others.
In the development of energy efficiency projects, there are many items that can be interpreted by reviewers as reasonable or unreasonable. These include savings calculation assumptions, energy model inputs, calibration methods, and adjustments made during the M&V process, among others.
Most reviewers we have talked to say there is little direction on how to perform a review. Moreover, there are rarely guidelines for the assumptions to use when information is not known.
So how do you determine what is reasonable? Can assumptions be standardized for savings calculations? Are there resources that have established these assumption ranges for different types of measures, systems, or regions? Were these established just for simple measures, or are there guidelines for complex measure assumptions? Are there specific calibration guidelines that can be used or referenced to limit model calibration inputs?
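On the calibration question, one published reference point does exist: ASHRAE Guideline 14 sets statistical tolerances for calibrated models, commonly cited as NMBE within ±5% and CV(RMSE) within 15% for monthly data. The sketch below is a minimal illustration, in Python, of how such thresholds could in principle be applied mechanically during a review; the variable names are hypothetical, and the degrees-of-freedom convention shown (n − 1) varies between sources.

```python
# Minimal sketch: checking a calibrated model against the commonly cited
# ASHRAE Guideline 14 monthly-data tolerances (NMBE within +/-5%,
# CV(RMSE) within 15%). Names and the (n - 1) degrees-of-freedom
# convention are illustrative assumptions, not a prescribed implementation.

import math

def nmbe(measured, simulated):
    """Normalized Mean Bias Error, as a percentage."""
    n = len(measured)
    mean_measured = sum(measured) / n
    bias = sum(m - s for m, s in zip(measured, simulated))
    return 100.0 * bias / ((n - 1) * mean_measured)

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE, as a percentage."""
    n = len(measured)
    mean_measured = sum(measured) / n
    sse = sum((m - s) ** 2 for m, s in zip(measured, simulated))
    return 100.0 * math.sqrt(sse / (n - 1)) / mean_measured

def is_calibrated_monthly(measured, simulated):
    """Apply the monthly-data criteria to a measured/simulated series."""
    return (abs(nmbe(measured, simulated)) <= 5.0
            and cv_rmse(measured, simulated) <= 15.0)

# Example: twelve months of measured vs. simulated energy use (kWh)
measured = [410, 395, 380, 350, 330, 360, 420, 430, 390, 355, 370, 400]
simulated = [400, 390, 385, 345, 340, 355, 415, 440, 385, 360, 365, 395]
print(is_calibrated_monthly(measured, simulated))  # True for this series
```

Of course, a statistical tolerance like this constrains the calibrated result, not the individual inputs used to reach it, which is precisely where reviewer judgment about what is “reasonable” comes back into play.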
How far should the review process go? At some point, review crosses into re-engineering, and its scope becomes cost-prohibitive and overly cumbersome, stifling efficient project development. Where do you draw the line? What experiences have you had, good or bad, with technical reviews?
The challenge is to define, as specifically as possible, what is reasonable, conservative, and defensible. The ICP Technical Team wants to hear what you think: can this be done, and what resources are you aware of, currently available or under development, that could help with this process? Or is “technical reviewer experience” simply the best and only option the industry can offer?
As with the protocols themselves, our intent is to develop a balanced approach to the quality assurance process that is thorough yet efficient: one that meets the needs of key stakeholders and protects their interests and investment without encumbering a project with excessive cost and complexity.
So please send us your thoughts!
ICP Technical Team