Merge branch 'dev' into tutorial
teubert committed Nov 5, 2024
2 parents d463a93 + 5990bda commit 061322d
Showing 15 changed files with 233 additions and 54 deletions.
46 changes: 23 additions & 23 deletions .github/workflows/python-package.yml
Original file line number Diff line number Diff line change
@@ -115,7 +115,7 @@ jobs:
path: ~/.cache/pip
key: pip-cache-datadriven
- name: Update
run: pip install --upgrade --upgrade-strategy eager -e .
run: pip install --upgrade --upgrade-strategy eager -e .[datadriven]
- name: Run tests
run: python -m tests.test_data_model
test_datasets:
@@ -423,30 +423,30 @@ jobs:
path: ~/.cache/pip
key: pip-cache-datadriven
- name: Update
run: pip install --upgrade --upgrade-strategy eager -e .
run: pip install --upgrade --upgrade-strategy eager -e .[datadriven]
- name: Run tests
run: python -m tests.test_surrogates
test_tutorials:
timeout-minutes: 5
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: '3.7'
- name: Install dependencies cache
uses: actions/cache@v2
with:
path: ~/.cache/pip
key: pip-cache-datadriven
- name: Update
run: |
pip install --upgrade --upgrade-strategy eager -e .
pip install notebook
pip install testbook
- name: Run tests
run: python -m tests.test_tutorials
# test_tutorials:
# timeout-minutes: 5
# runs-on: ubuntu-latest
# steps:
# - uses: actions/checkout@v3
# - name: Set up Python
# uses: actions/setup-python@v4
# with:
# python-version: '3.7'
# - name: Install dependencies cache
# uses: actions/cache@v2
# with:
# path: ~/.cache/pip
# key: pip-cache-datadriven
# - name: Update
# run: |
# pip install --upgrade --upgrade-strategy eager -e .
# pip install notebook
# pip install testbook
# - name: Run tests
# run: python -m tests.test_tutorials
test_uav_model:
timeout-minutes: 10
runs-on: ubuntu-latest
2 changes: 1 addition & 1 deletion .github/workflows/update-cache.yml
@@ -16,7 +16,7 @@ jobs:
- name: Install dependencies
run: |
python -m pip install --upgrade pip
python -m pip install -e .
python -m pip install -e .[datadriven]
python -m pip install notebook
python -m pip install testbook
python -m pip install requests
95 changes: 95 additions & 0 deletions examples/00_Intro.ipynb
@@ -0,0 +1,95 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Welcome to ProgPy\n",
"**2024 NASA Software of the Year!**\n",
"\n",
"NASA’s ProgPy is an open-source Python package supporting research and development of prognostics, health management, and predictive maintenance tools. It implements architectures and common functionality of prognostics, supporting researchers and practitioners. The ProgPy package combines the original prog_models and prog_algs packages."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installing ProgPy\n",
"\n",
"The latest stable release of ProgPy is hosted on PyPi. For most users, this version will be adequate. To install via the command line, use the following command:\n",
"\n",
"```bash\n",
"pip install progpy\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Installing ProgPy - Prerelease"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Users who would like to contribute to ProgPy or would like to use pre-release features can do so using the ProgPy GitHub repo. This isn’t recommended for most users as this version may be unstable. To do this, use the following commands:\n",
"\n",
"```bash\n",
"git clone https://github.com/nasa/progpy\n",
"cd progpy\n",
"git checkout dev\n",
"pip install -e .\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Citing this repository\n",
"\n",
"Use the following to cite this repository in LaTeX:\n",
"\n",
"```BibTeX\n",
"@misc{2023_nasa_progpy,\n",
"author = {Christopher Teubert and Katelyn Jarvis Griffith and Matteo Corbetta and Chetan Kulkarni and Portia Banerjee and Jason Watkins and Matthew Daigle},\n",
"title = {{ProgPy Python Prognostics Packages}},\n",
"month = oct,\n",
"year = 2023,\n",
"version = {1.6},\n",
"url = {https://nasa.github.io/progpy},\n",
"doi = {10.5281/ZENODO.8097013}\n",
"}\n",
"```\n",
"The corresponding reference should look like this:\n",
"\n",
"C. Teubert, K. Jarvis Griffith, M. Corbetta, C. Kulkarni, P. Banerjee, J. Watkins, M. Daigle, ProgPy Python Prognostics Packages, v1.6, Oct 2023. URL https://nasa.github.io/progpy.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Contributing and Partnering\n",
"\n",
"ProgPy was developed by researchers of the NASA Prognostics Center of Excellence (PCoE) and Diagnostics & Prognostics Group, with assistance from our partners. We welcome contributions and are actively interested in partnering with other organizations and researchers. If interested in contributing, please email Chris Teubert at [email protected].\n",
"\n",
"A big thank you to our partners who have contributed to the design, testing, and/or development of ProgPy:\n",
"\n",
"* German Aerospace Center (DLR) Institute of Maintenance, Repair and Overhaul.\n",
"* Northrop Grumman Corporation (NGC)\n",
"* Research Institutes of Sweden (RISE)\n",
"* Vanderbilt University"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
File renamed without changes.
19 changes: 19 additions & 0 deletions examples/03_Existing Models.ipynb
@@ -0,0 +1,19 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Using Provided ProgPy Models"
]
}
],
"metadata": {
"language_info": {
"name": "python"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
@@ -16,7 +16,7 @@
},
"language_info": {
"name": "python",
"version": "3.11.0"
"version": "3.12.0"
},
"orig_nbformat": 4,
"vscode": {
File renamed without changes.
File renamed without changes.
File renamed without changes.
3 changes: 2 additions & 1 deletion pyproject.toml
@@ -14,7 +14,7 @@ dependencies = [
"fastdtw", # For DTW error calculation
"filterpy"
]
requires-python = ">=3.7, <3.12"
requires-python = ">=3.7, <3.13"
authors = [
{name = "Christopher Teubert", email = "[email protected]"},
{name = "Katelyn Griffith", email = "[email protected]"}
@@ -52,6 +52,7 @@ classifiers = [
'Programming Language :: Python :: 3.9',
'Programming Language :: Python :: 3.10',
'Programming Language :: Python :: 3.11',
'Programming Language :: Python :: 3.12',
'Programming Language :: Python :: 3 :: Only'
]

2 changes: 1 addition & 1 deletion src/progpy/data_models/lstm_model.py
@@ -585,7 +585,7 @@ def from_data(cls, inputs, outputs, event_states=None, t_met=None, **kwargs):
output_data.append(t_all)

model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mse", metrics=["mae"]*len(outputs))
model.compile(optimizer="rmsprop", loss="mse", metrics=[["mae"]]*len(outputs))

# Train model
history = model.fit(
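The one-character-looking change above matters because, for a multi-output Keras model, `metrics` is expected to be a list with one metric list per output; a flat `["mae"] * len(outputs)` has a different shape than `[["mae"]] * len(outputs)`. A minimal sketch of the two list shapes (plain Python, no Keras required; this only illustrates the shapes, not the Keras API itself):

```python
# Sketch: why [["mae"]] * n differs from ["mae"] * n for n model outputs.
n_outputs = 3

flat = ["mae"] * n_outputs      # one flat list of metric names
nested = [["mae"]] * n_outputs  # one metric *list* per output

print(flat)    # ['mae', 'mae', 'mae']
print(nested)  # [['mae'], ['mae'], ['mae']]
```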
45 changes: 37 additions & 8 deletions src/progpy/prognostics_model.py
@@ -8,7 +8,7 @@
import json
from numbers import Number
import numpy as np
from typing import List # Still needed until v3.9
from typing import List, Mapping # Still needed until v3.9
from warnings import warn

from progpy.exceptions import ProgModelStateLimitWarning, warn_once
@@ -1273,8 +1273,8 @@ def estimate_params(self, runs: List[tuple] = None, keys: List[str] = None, time
"""Estimate the model parameters given data. Overrides model parameters
Keyword Args:
keys (list[str]):
Parameter keys to optimize
keys (list[str] or list[tuple[str]]):
Parameter keys to optimize. Use tuple for nested parameters. For example, key ('x0', 'a') corresponds to m.parameters['x0']['a'].
times (list[float]):
Array of times for each sample
inputs (list[InputContainer]):
@@ -1309,8 +1309,17 @@ def estimate_params(self, runs: List[tuple] = None, keys: List[str] = None, time
raise ValueError("Cannot pass in keys as a set. Sets are unordered by construction, so bounds may be out of order.")

for key in keys:
if key not in self.parameters:
raise ValueError(f"Key '{key}' not in model parameters")
if isinstance(key, (tuple, list)):
tmp = self.parameters
keys_so_far = ''
for key_element in key:
if not isinstance(tmp, Mapping) or key_element not in tmp:
raise ValueError(f"Key '{keys_so_far}[{key_element}]' not in model parameters")
keys_so_far += f'[{key_element}]'
tmp = tmp[key_element]
else:
if key not in self.parameters:
raise ValueError(f"Key '{key}' not in model parameters")

config = {
'error_method': 'MSE',
@@ -1401,7 +1410,13 @@ def estimate_params(self, runs: List[tuple] = None, keys: List[str] = None, time

def optimization_fcn(params):
for key, param in zip(keys, params):
self.parameters[key] = param
if isinstance(key, (tuple, list)):
tmp = self.parameters
for key_element in key[:-1]:
tmp = tmp[key_element]
tmp[key[-1]] = param
else:
self.parameters[key] = param
err = 0
for run in runs:
try:
@@ -1411,15 +1426,29 @@ def optimization_fcn(params):
# If it doesn't work (i.e., throws an error), don't use it
return err

params = np.array([self.parameters[key] for key in keys])
params = []
for key in keys:
if isinstance(key, (tuple, list)):
tmp = self.parameters
for key_element in key[:-1]:
tmp = tmp[key_element]
params.append(tmp[key[-1]])
else:
params.append(self.parameters[key])

res = minimize(optimization_fcn, params, method=method, bounds=config['bounds'], options=config['options'], tol=config['tol'])

if not res.success:
warn(f"Parameter Estimation did not converge: {res.message}")

for x, key in zip(res.x, keys):
self.parameters[key] = x
if isinstance(key, (tuple, list)):
tmp = self.parameters
for key_element in key[:-1]:
tmp = tmp[key_element]
tmp[key[-1]] = x
else:
self.parameters[key] = x

# Reset noise
self.parameters['measurement_noise'] = m_noise
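The tuple-key support added to `estimate_params` above repeats the same nested-dict traversal in three places: validation, reading the initial values, and writing back the optimized values. A standalone sketch of the pattern (the `get_nested`/`set_nested` helper names are hypothetical, not in the source; the real code inlines this logic):

```python
from collections.abc import Mapping

def get_nested(params, key):
    """Resolve a plain key or a tuple key like ('x0', 'a') against a nested dict."""
    if isinstance(key, (tuple, list)):
        tmp = params
        keys_so_far = ''
        for key_element in key:
            if not isinstance(tmp, Mapping) or key_element not in tmp:
                raise ValueError(f"Key '{keys_so_far}[{key_element}]' not in parameters")
            keys_so_far += f'[{key_element}]'
            tmp = tmp[key_element]
        return tmp
    if key not in params:
        raise ValueError(f"Key '{key}' not in parameters")
    return params[key]

def set_nested(params, key, value):
    """Assign through a plain or tuple key, matching the write-back traversal above."""
    if isinstance(key, (tuple, list)):
        tmp = params
        for key_element in key[:-1]:
            tmp = tmp[key_element]
        tmp[key[-1]] = value
    else:
        params[key] = value

parameters = {'x0': {'a': 1.0}, 'g': 9.81}
set_nested(parameters, ('x0', 'a'), 2.5)   # nested write: parameters['x0']['a'] = 2.5
set_nested(parameters, 'g', 9.80665)       # plain write
print(get_nested(parameters, ('x0', 'a')))  # 2.5
```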
17 changes: 14 additions & 3 deletions src/progpy/utils/calc_error.py
@@ -139,8 +139,10 @@ def MSE(m, times: List[float], inputs: List[dict], outputs: List[dict], **kwargs
stability_tol represents the fraction of the provided argument `times` that are required to be met in simulation,
before the model goes unstable in order to produce a valid estimate of error.
If the model goes unstable before stability_tol is met, NaN is returned.
If the model goes unstable before stability_tol is met and short_sim_penalty is None, an exception is raised.
Else, if the model goes unstable before stability_tol is met and short_sim_penalty is not None, the penalty is added to the score.
Else, if the model goes unstable after stability_tol is met, the error calculated from data up to the instability is returned.
short_sim_penalty (float, optional): penalty added for simulation becoming unstable before stability_tol, added for each % below tol. If set to None, operation will return an error if simulation becomes unstable before stability_tol. Default is 100
Returns:
float: Total error
@@ -180,8 +182,17 @@
# This is true for any window-based model
if any(np.isnan(z_obs.matrix)):
if t <= cutoffThreshold:
raise ValueError(f"Model unstable- NAN reached in simulation (t={t}) before cutoff threshold. "
f"Cutoff threshold is {cutoffThreshold}, or roughly {stability_tol * 100}% of the data")
short_sim_penalty = kwargs.get('short_sim_penalty', 100)
if short_sim_penalty is None:
raise ValueError(f"Model unstable- NAN reached in simulation (t={t}) before cutoff threshold. "
f"Cutoff threshold is {cutoffThreshold}, or roughly {stability_tol * 100}% of the data")

warn(f"Model unstable- NAN reached in simulation (t={t}) before cutoff threshold. "
f"Cutoff threshold is {cutoffThreshold}, or roughly {stability_tol * 100}% of the data. Penalty added to score.")
# Return value with Penalty added
if counter == 0:
return 100*short_sim_penalty
return err_total/counter + (100-(t/cutoffThreshold)*100)*short_sim_penalty
else:
warn("Model unstable- NaN reached in simulation (t={})".format(t))
break
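The penalty branch added to `MSE` above reads as: if no valid samples were simulated at all, return the full penalty; otherwise return the mean error so far plus a penalty proportional to how far short of the cutoff threshold the simulation stopped. A standalone sketch of that arithmetic (the `unstable_score` function name is hypothetical; the real code computes this inline):

```python
def unstable_score(err_total, counter, t, cutoff_threshold, short_sim_penalty=100):
    """Score when simulation went unstable at time t, before cutoff_threshold."""
    if short_sim_penalty is None:
        # Matching the None behavior above: treat early instability as an error
        raise ValueError("Model unstable - NaN reached before cutoff threshold")
    if counter == 0:
        # No valid samples at all: pure penalty (100% short of the threshold)
        return 100 * short_sim_penalty
    # Mean error so far, plus the penalty for each % of the threshold not reached
    return err_total / counter + (100 - (t / cutoff_threshold) * 100) * short_sim_penalty

print(unstable_score(0.0, 0, 0.0, 1900.0))      # 10000 (no samples: 100 * 100)
print(unstable_score(50.0, 10, 950.0, 1900.0))  # 5005.0 (mean err 5 + 50% * 100)
```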
17 changes: 13 additions & 4 deletions tests/test_calc_error.py
@@ -111,11 +111,14 @@ def future_loading(t, x=None):

# With our current set parameters, our model goes unstable immediately
with self.assertRaises(ValueError) as cm:
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs, dt=1)
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs, dt=1, short_sim_penalty=None)
self.assertEqual(
"Model unstable- NAN reached in simulation (t=0.0) before cutoff threshold. Cutoff threshold is 1900.0, or roughly 95.0% of the data",
str(cm.exception)
)
)

# Shouldn't raise error for default case (i.e., short_sim_penalty is not None)
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs, dt=1)

# Creating duplicate model to check if consistent results occur
m1 = BatteryElectroChemEOD()
@@ -131,20 +134,26 @@ def future_loading(t, x=None):

# Checks to see if model goes unstable before default stability tolerance is met.
with self.assertRaises(ValueError) as cm:
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs, dt = 1)
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs, dt=1, short_sim_penalty=None)
self.assertEqual(
"Model unstable- NAN reached in simulation (t=1800.0) before cutoff threshold. Cutoff threshold is 1900.0, or roughly 95.0% of the data",
str(cm.exception)
)

# Shouldn't raise an error for the default case (short_sim_penalty is not None)
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs, dt=1)

# Checks to see if m1 throws the same exception.
with self.assertRaises(ValueError):
m1.calc_error(m1_sim_results.times, m1_sim_results.inputs, m1_sim_results.outputs, dt = 1)
m1.calc_error(m1_sim_results.times, m1_sim_results.inputs, m1_sim_results.outputs, dt=1, short_sim_penalty=None)
self.assertEqual(
"Model unstable- NAN reached in simulation (t=1800.0) before cutoff threshold. Cutoff threshold is 1900.0, or roughly 95.0% of the data",
str(cm.exception)
)

# Shouldn't for default case
m1.calc_error(m1_sim_results.times, m1_sim_results.inputs, m1_sim_results.outputs, dt=1)

# Checks to see if stability_tolerance throws Warning rather than an Error when the model goes unstable after threshold
with self.assertWarns(UserWarning) as cm:
m.calc_error(simulated_results.times, simulated_results.inputs, simulated_results.outputs,