
Commit b1e3d8b

Run more examples

Update transient absorption case study (from ported examples).
Update examples and integration tests.
This commit adds a (failing) integration test.
1 parent d08fc1a commit b1e3d8b

13 files changed, +721 -195 lines

Diff for: .github/workflows/integration-tests.yml (+3 -3)

@@ -24,7 +24,7 @@ jobs:
           import os
           from pathlib import Path

-          example_names=["fluorescence"]
+          example_names=["fluorescence", "transient_absorption", "transient_absorption_two_datasets", "spectral_constraints", "spectral_guidance"]
           gh_output = Path(os.getenv("GITHUB_OUTPUT", ""))
           with gh_output.open("a", encoding="utf8") as f:
               f.writelines([f"example-list={json.dumps(example_names)}"])
@@ -51,10 +51,10 @@ jobs:
           pip install git+https://github.com/glotaran/pyglotaran-extras.git@staging_support
       - name: ${{ matrix.example_name }}
         id: example-run
-        uses: glotaran/pyglotaran-examples@main
+        uses: s-weigand/pyglotaran-examples@main
         with:
           example_name: ${{ matrix.example_name }}
-          examples_branch: staging_rewrite
+          examples_branch: port-examples
       - name: Installed packages
         if: always()
         run: |
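
The changed step above writes the example list to the runner's `GITHUB_OUTPUT` file as a single `example-list=<json>` line; a later job can then expand that output into the test matrix (typically via `fromJson`, which is outside this hunk). A minimal local sketch of what the step produces, with a shortened example list for illustration:

```python
import json
import os
import tempfile
from pathlib import Path

# Shortened list, for illustration only
example_names = ["fluorescence", "transient_absorption"]

# Stand-in for the runner-provided GITHUB_OUTPUT file when running locally
gh_output = Path(os.getenv("GITHUB_OUTPUT", os.path.join(tempfile.gettempdir(), "gh_output.txt")))
with gh_output.open("a", encoding="utf8") as f:
    f.writelines([f"example-list={json.dumps(example_names)}\n"])

print(gh_output.read_text(encoding="utf8"))
# example-list=["fluorescence", "transient_absorption"]
```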

Diff for: .gitignore (+3)

@@ -189,3 +189,6 @@ comparison-results-current
 docs/source/notebooks/quickstart/quickstart_project/data
 docs/source/notebooks/quickstart/quickstart_project/results
 docs/source/notebooks/quickstart/quickstart_project/project.gta
+
+# Auto generated autocompletions files
+examples/case_studies/*/schema*.json
Diff for: transient absorption case study notebook (new file, +236 lines)
# Example Case Study 02A

## Transient absorption case study

This notebook details the (global) target analysis of a time-resolved transient absorption spectroscopy measurement on a so-called `co` compound dissolved in toluene and excited at 530 nm.

For more details see the references in [README.md](README.md) or have a look at the [inspect_data.ipynb](data/inspect_data.ipynb) notebook.
## Requirements

Be sure to have installed [pyglotaran](https://pypi.org/project/pyglotaran/) version 0.8 or greater, as well as [pyglotaran-extras](https://pypi.org/project/pyglotaran-extras/).

```shell
pip install "pyglotaran>=0.8" pyglotaran-extras
```
## Imports

Imports needed for the whole notebook

```python
# Primary imports
# For plotting
from pyglotaran_extras import plot_data_overview, plot_overview

# For backwards compatibility (with v0.7)
from pyglotaran_extras.compat import convert

from glotaran.io import load_dataset, load_parameters, load_scheme

# Optional import for schema generation
from glotaran.utils.json_schema import create_model_scheme_json_schema
```
## Load data

```python
data_path1 = "data/demo_data_Hippius_etal_JPCC2007_111_13988_Figs5_9.ascii"
dataset1 = load_dataset(data_path1)
```

```python
plot_data_overview(dataset1, linlog=True)
dataset1.data.coords.keys()
```
## Global Analysis

TODO

## Target Analysis

### Load analysis scheme and parameters
```python
parameters = load_parameters("parameters.yml")

# This generates a JSON schema file which provides autocompletion support
# for the scheme file in editors.
create_model_scheme_json_schema("schema.json", parameters)

scheme = load_scheme("scheme.yml")
```
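
To see what the generated schema actually covers, a quick inspection can help; this is only a sketch and assumes `create_model_scheme_json_schema` writes a regular JSON Schema document (the exact structure depends on the pyglotaran version):

```python
import json
from pathlib import Path

# Peek at the top-level keys the scheme file may define according to the
# generated schema (structure assumed to be standard JSON Schema).
schema = json.loads(Path("schema.json").read_text(encoding="utf8"))
print(sorted(schema.get("properties", {})))
```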
#### Load data into scheme

```python
scheme.load_data({"dataset1": dataset1})
```

## Optimization (fitting)

```python
result = scheme.optimize(parameters=parameters)
```
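
A quick sanity check after the optimization is to print the result summary; what exactly it reports (fit statistics, termination reason, parameter values) depends on the pyglotaran version:

```python
# Print the optimization summary; contents vary between pyglotaran versions.
print(result)
```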
## Visualize results (plotting)

```python
# We use `convert` for backwards compatibility (with v0.7 plotting functions)
result_plot, _ = plot_overview(convert(result.data["dataset1"]), linlog=True)
```
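
If the overview figure should be kept alongside the results, it can be written to disk. This assumes, as the tuple unpacking above suggests, that `plot_overview` returns a matplotlib figure as its first element; the file name is just an example:

```python
# Save the overview figure next to the notebook (file name is arbitrary).
result_plot.savefig("overview_dataset1.png", dpi=150, bbox_inches="tight")
```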
## Save result

```python
import tempfile
from pathlib import Path

folder_name = Path().resolve().name
temp_folder = Path(tempfile.gettempdir())
result_path = temp_folder / folder_name
result_path
```

```python
result.save(result_path / "result")
```
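
After saving, it can be useful to check what was actually written; this sketch only uses the standard library, so it makes no assumptions about the result format:

```python
# List the files written by result.save(); the exact file names depend on the
# pyglotaran version and the configured result format.
saved_files = sorted(
    str(p.relative_to(result_path)) for p in (result_path / "result").rglob("*") if p.is_file()
)
print(saved_files)
```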
## Further exploration

Some ideas for further exploration of the data/analysis:

- Compare global to target analysis
- Pre-processing of the data, e.g. baseline subtraction (a minimal sketch follows below)
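
As a starting point for the baseline-subtraction idea, here is a minimal sketch using plain xarray operations. It assumes the dataset has `time` and `spectral` coordinates (see `dataset1.data.coords` above) and that the signal well before time zero is dominated by the baseline; the cut-off value is an arbitrary example:

```python
# Estimate the baseline from spectra recorded before time zero
# (the -0.5 cut-off, in the dataset's time units, is an arbitrary example)
baseline = dataset1.data.sel(time=slice(None, -0.5)).mean(dim="time")

# Subtract the baseline and inspect the corrected data
dataset1_corrected = dataset1.copy()
dataset1_corrected["data"] = dataset1.data - baseline
plot_data_overview(dataset1_corrected, linlog=True)
```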
Notebook metadata: kernel "pyglotaran310" (Python 3), nbformat 4.2.
