Description
Question:
Is identifying gradually worsening sensor issues in scope for PVAnalytics? For example, pyranometers with soiling or other sources of drift, or back-of-module temperature sensors that are slowly detaching:
VID_20190726_091653.mp4
Idea:
If the answer is yes, it seems that using external solar resource/weather data plus pvlib functions to estimate POA, module temperature, etc., could provide a useful reference beyond what can be done with, e.g., clear-sky models. And PVAnalytics seems like a good home for tools to compare two (or more) measurement "channels" and flag alarming deviations. This could be done to better QC historical datasets and/or to check for sensor issues that need correction in "real time" (e.g., as part of weekly or monthly plant/site maintenance work).
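As a rough sketch of how the modeled reference channels could be built with pvlib (the site parameters are hypothetical, and clear-sky output stands in for the external data source just so the example runs end to end):

```python
import pandas as pd
import pvlib

# Hypothetical site and array geometry
latitude, longitude, tz = 40.0, -105.0, 'Etc/GMT+7'
surface_tilt, surface_azimuth = 25, 180

location = pvlib.location.Location(latitude, longitude, tz=tz)
times = pd.date_range('2021-06-01', periods=24, freq='1h', tz=tz)

# Stand-in for the external reference data (satellite, reanalysis, nearby
# station): clear-sky irradiance plus constant air temperature and wind
# speed, used here only to keep the sketch self-contained.
weather = location.get_clearsky(times)  # columns: ghi, dni, dhi
weather['temp_air'] = 25.0
weather['wind_speed'] = 2.0

solar_position = location.get_solarposition(times)

# Transpose reference irradiance to the plane of array
poa = pvlib.irradiance.get_total_irradiance(
    surface_tilt, surface_azimuth,
    solar_position['apparent_zenith'], solar_position['azimuth'],
    dni=weather['dni'], ghi=weather['ghi'], dhi=weather['dhi'],
)

# Modeled module temperature as a reference for a back-of-module sensor
t_mod_reference = pvlib.temperature.faiman(
    poa['poa_global'], weather['temp_air'], weather['wind_speed']
)
```

Measured POA and back-of-module series could then be differenced against `poa['poa_global']` and `t_mod_reference` to form the deviation "channels" to trend.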
On data sources:
Some users may have access to commercial near-real-time satellite data, and ERA5 could be valuable to anyone (with as little as 5 days of lag and data going back to at least 1979). NSRDB PSM3 could work if the typical 6-18 months of lag is acceptable. Measurements from nearby PV plants or weather stations could also be an option in some cases. The tools could be agnostic to the reference data source, although results may vary by site and data source.
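As one concrete example, pvlib already ships an iotools function for NSRDB PSM3; a minimal sketch, with placeholder coordinates and credentials (an NREL API key is required in practice):

```python
import pvlib

# Fetch hourly NSRDB PSM3 data for 2020 as the independent reference
# channel. Coordinates, API key, and email below are placeholders.
data, metadata = pvlib.iotools.get_psm3(
    latitude=40.0, longitude=-105.0,
    api_key='DEMO_KEY', email='user@example.com',
    names='2020', interval=60,
    map_variables=True,  # rename columns to pvlib conventions
)
# `data` now holds ghi, dni, dhi, temp_air, wind_speed, etc., ready to
# drive the modeled-reference sketch above.
```

The ERA5/MERRA2 retrieval functions are still in open pvlib pull requests (listed below), so those would be a later option.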
Some relevant info on getting ERA5 or commercial satellite data into pvlib:
- Open pvlib pull requests with iotools functions for getting ERA5 and MERRA2 data: Add retrieval function for ERA5 reanalysis data pvlib-python#1264 and Add retrieval function for NASA MERRA2 reanalysis data pvlib-python#1274.
- Discussion on getting ERA5 data: What to do about ERA5 and MERRA2? pvlib-python#1484.
- Discussion on getting SolarAnywhere data: Should pvlib have a get_solaranywhere function for fetching weather data? pvlib-python#1310.
- Issue on getting Solcast data: Integrate Solcast API into iotools pvlib-python#1313.
Models/statistics to implement:
I don't have much to add here right now, other than some general ideas like the following (a rough sketch follows the list):
- look at "normal" historical deviations to set bounds outside of which deviations could be flagged as "abnormal"
- allow for possible "normal" seasonal biases
- start with regular statistics, although some machine learning might later prove useful
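To make the bounds idea concrete, here is a minimal sketch, assuming daily deviation values between a measured channel and its modeled reference; the synthetic series, split dates, and quantile levels are all illustrative:

```python
import numpy as np
import pandas as pd

# Placeholder daily deviations between a measured channel and its modeled
# reference, e.g. (measured POA - modeled POA) / modeled POA.
rng = np.random.default_rng(0)
idx = pd.date_range('2019-01-01', '2021-12-31', freq='1D')
deviation = pd.Series(rng.normal(0.0, 0.02, len(idx)), index=idx)

history = deviation[:'2020-12-31']   # period assumed healthy
recent = deviation['2021-01-01':]    # period being checked

# Per-month quantile bounds allow for "normal" seasonal biases
bounds = history.groupby(history.index.month).quantile([0.01, 0.99]).unstack()
lower = recent.index.month.map(bounds[0.01]).to_numpy()
upper = recent.index.month.map(bounds[0.99]).to_numpy()

abnormal = (recent < lower) | (recent > upper)
print(recent[abnormal])

# A rolling mean makes gradual drift stand out against day-to-day noise
drift = recent.rolling('30D').mean()
```

Pointwise flags catch step changes, while the rolling mean is aimed at the slow soiling/detachment drift described above.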
What else has been done here:
- Add irradiance QC algorithm from Forstinger et al. #110 and QA methods for plane-of-array irradiance #123 are relevant, but I don't think they include trending measurements against independent modeled values (or against measurements from separate but nearby sensors).
- Maybe lots of other things I haven't researched well...
Tagging @silverman since we recently discussed this.