@@ -115,8 +115,8 @@ The benchmark was run on my macbook (first plot), and on the Cheyenne HPC computing
 cluster interactive node (second plot), which is a shared resource consisting of
 approximately 72 cores.

-<img src="results/fluxes_60lev_uriah.png" width="700">
-<img src="results/fluxes_60lev_cheyenne4.png" width="700">
+<img src="results/fluxes_60lev_uriah_flat.png" width="700">
+<img src="results/fluxes_60lev_cheyenne4_flat.png" width="700">

 <!-- # Hybrid-to-pressure interpolation benchmarks
 I have yet to formalize this benchmark, but performed some tests for my research.
@@ -146,7 +146,7 @@ The results here were interesting. NCO is the winner for small files, but CDO be
 for large files, at which point the time required for overhead operations is negligible.
 XArray is the slowest across all file sizes.

-<img src="results/slices_60lev_uriah.png" width="700">
+<img src="results/slices_60lev_uriah_flat.png" width="700">

 empirical_orthogonal_functions.sh
 ---------------------------------
@@ -173,4 +173,4 @@ This time, NCL was the clear winner! The MetPy script was also issuing a bunch of
 warnings when it ran. Evidently, the kinks in the MetPy algorithm haven't been ironed
 out yet.

-<img src="results/isobars2isentropes_60lev_uriah.png" width="700">
+<img src="results/isobars2isentropes_60lev_uriah_flat.png" width="700">