If it is your beamtime, you can read the raw data and write to the processed directory. For the public data, you cannot write to the processed directory.
-
The paths are set up such that if you are on Maxwell, the data there is used. Otherwise, the data is downloaded to the current directory from Zenodo: https://zenodo.org/records/12609441
-
-
[2]:
-
-
-
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
    buffer_path = beamtime_dir + "/processed/tutorial/"
else:
    # data_path can be defined and used to store the data in a specific location
    dataset.get("W110")  # Put in Path to a storage of at least 10 GByte free space.
    path = dataset.dir
    buffer_path = path + "/processed/"
-
-
-
-
-
-
-
-
-INFO - Not downloading W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/W110".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/W110"
-INFO - W110 data is already present.
-
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
-
-
[3]:
-
-
-
# pick the default configuration file for hextof@FLASH
config_file = Path('../src/sed/config/flash_example_config.yaml')
assert config_file.exists()
-
-
-
-
-
[4]:
-
-
-
# here we set up a dictionary that will be used to override the path configuration
config_override = {
    "core": {
        "beamtime_id": 11019101,
        "paths": {
            "raw": path,
            "processed": buffer_path,
        },
    },
}
-
First, we take a look at our sideband measurement before any corrections. The sidebands on the W4f core levels can be used as a measure of the pump-probe cross-correlation, and hence of our temporal resolution. We plot the delay stage position vs. energy, normalized by the acquisition time.
As we see, the sidebands are quite broad. One possible reason for this is long- or short-term drift (jitter) of the FEL arrival time with respect to e.g. the optical laser, or differences in the intra-bunch arrival time. To check and correct for this, we can look at the beam arrival monitor (BAM). The BAM gives a pulse-resolved measure of the FEL arrival time with respect to a master clock.
To correct the SASE jitter using information from the bam column, and to calibrate the pump-probe delay axis, we need to shift the delay stage values so that the pump-probe temporal overlap (time zero) is centered.
-
-
[13]:
-
-
-
sp_44498.add_delay_offset(
    constant=-1448,  # time-zero position determined from the sideband fit
    flip_delay_axis=True,  # invert the direction of the delay axis
    columns=['bam'],  # use the bam values for the offset
    weights=[-0.001],  # bam is in fs, delay in ps
    preserve_mean=True,  # preserve the mean of the delay axis to keep the t0 position
)
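The arithmetic behind these settings can be sketched in plain numpy. This is a minimal sketch with made-up stage and BAM values; the exact order of operations inside add_delay_offset may differ:

```python
import numpy as np

# Hypothetical delay stage positions (ps) and per-pulse BAM values (fs)
delay_stage = np.array([-1440.0, -1444.0, -1448.0, -1452.0, -1456.0])
bam = np.array([120.0, -80.0, 40.0, -10.0, 15.0])

weight = -0.001     # converts fs to ps and flips the sign of the BAM correction
constant = -1448.0  # time-zero position from the sideband fit

# preserve_mean=True: only the per-pulse fluctuation of the BAM is applied,
# so the mean of the delay axis (and thus the t0 position) is unchanged
bam_correction = weight * (bam - bam.mean())

delay = delay_stage + constant + bam_correction
delay = -delay      # flip_delay_axis=True inverts the axis direction
```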
-
If it is your beamtime, you can read the raw data and write to the processed directory. For the public data, you cannot write to the processed directory.
-
The paths are set up such that if you are on Maxwell, the data there is used. Otherwise, the data is downloaded to the current directory from Zenodo: https://zenodo.org/records/12609441
-
-
[2]:
-
-
-
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
    buffer_path = beamtime_dir + "/processed/tutorial/"
else:
    # data_path can be defined and used to store the data in a specific location
    dataset.get("W110")  # Put in Path to a storage of at least 10 GByte free space.
    path = dataset.dir
    buffer_path = path + "/processed/"
-
-
-
-
-
-
-
-
-INFO - Not downloading W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/W110".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/W110"
-INFO - W110 data is already present.
-
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
-
-
[3]:
-
-
-
# pick the default configuration file for hextof@FLASH
config_file = Path('../src/sed/config/flash_example_config.yaml')
assert config_file.exists()
-
-
-
-
-
[4]:
-
-
-
# here we set up a dictionary that will be used to override the path configuration
config_override = {
    "core": {
        "beamtime_id": 11019101,
        "paths": {
            "raw": path,
            "processed": buffer_path,
        },
    },
}
-
We will now fit the ToF-energy relation. This is done by finding the maxima of a peak in the ToF spectrum, and then fitting the square-root relation to obtain the calibration parameters.
Visualize trXPS data binned in dldTimeSteps and the corrected delay axis to prepare for energy calibration using sidebands#
-
We now prepare for an alternative energy calibration based on the side-bands of the time-dependent dataset. This is e.g. helpful if no bias series has been obtained.
-INFO - Folder config loaded from: [/home/runner/work/sed/sed/docs/tutorial/sed_config.yaml]
-INFO - System config loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/flash_example_config.yaml]
-INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sed/config/default.yaml]
-INFO - Reading files: 0 new files of 14 total.
-loading complete in 0.08 s
-INFO - add_jitter: Added jitter to columns ['dldPosX', 'dldPosY', 'dldTimeSteps'].
-
-
-
-
-
We correct the delay stage, t0 position, and BAM (see previous tutorial)#
-
-
[13]:
-
-
-
sp_44498.add_delay_offset(
    constant=-1448,  # time-zero position determined from the sideband fit
    flip_delay_axis=True,  # invert the direction of the delay axis
    columns=['bam'],  # use the bam values for the offset
    weights=[-0.001],  # bam is in fs, delay in ps
    preserve_mean=True,  # preserve the mean of the delay axis to keep the t0 position
)
-
We will now fit the ToF-energy relation. Instead of a bias series, we use the maxima of peaks in the ToF spectrum together with their known kinetic energies: the W4f peaks (-31.4 and -33.6 eV) and their sidebands of different orders, which are spaced by the pump photon energy (1030 nm, i.e. 1.2 eV). The calibration parameters are obtained by fitting the square-root relation.
-
-
[16]:
-
-
-
### Kinetic energy of W4f peaks and their sidebands
ref_energy = -30.2
sp_44498.ec.biases = -1 * np.array([-30.2, -31.4, -32.6, -33.6, -34.8])
sp_44498.ec.peaks = np.expand_dims(data[peaks]['dldTimeSteps'].data, 1)
sp_44498.ec.tof = res_corr.dldTimeSteps.data

sp_44498.calibrate_energy_axis(
    ref_energy=ref_energy,
    method="lmfit",
    d={'value': 1.0, 'min': .8, 'max': 1.0, 'vary': True},
    t0={'value': 5e-7, 'min': 1e-7, 'max': 1e-6, 'vary': True},
    E0={'value': -100., 'min': -200, 'max': 15, 'vary': True},
)
-
While this calibration method gives a reasonable approximation to the energy axis, there are some deviations from the bias-series method, so it should be used with care.
-
-
[19]:
-
-
-
axes = ['energy']
ranges = [[-37.5, -27.5]]
bins = [200]
res_1D = sp_44498.compute(bins=bins, axes=axes, ranges=ranges)

plt.figure()
(res_ref / res_ref.max()).plot(label="bias series calibration")
(res_1D / res_1D.max()).plot(label="side band calibration")
plt.legend()
-
Binning demonstration on locally generated fake data#
-
In this example, we generate a table of random data simulating a single-event dataset. We showcase the binning method, first on a simple single table using the bin_partition method, and then with the distributed bin_dataframe method, which uses dask dataframes. The first method is rarely called directly, as it is simply the function applied by bin_dataframe to each partition of the dask dataframe.
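This equivalence (per-partition histograms summed give the full histogram) can be sketched with plain numpy; the bin_partition function below is a simplified stand-in for the sed function of the same name, and the column data are made up:

```python
import numpy as np

rng = np.random.default_rng(42)
# Fake single-event table: two columns (e.g. "x" and "y"), 100k events
data = rng.normal(size=(100_000, 2))

bins, ranges = [50, 50], [(-3, 3), (-3, 3)]

def bin_partition(part):
    """Histogram one partition, as bin_dataframe does for each dask partition."""
    hist, _ = np.histogramdd(part, bins=bins, range=ranges)
    return hist

# Binning each partition and summing the results is equivalent to
# binning the whole table in one go.
partitions = np.array_split(data, 4)
hist_summed = sum(bin_partition(p) for p in partitions)
hist_full = bin_partition(data)
```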
-
Compute distributed binning on the partitioned dask dataframe#
-
In this example, the small dataset does not show a significant speedup over the pandas implementation, at least with this number of partitions. A single partition would be faster (you can try…), but we use multiple partitions for demonstration purposes.
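As a minimal sketch of the distributed pattern (a thread pool standing in for the dask scheduler, with made-up data), each partition is binned independently and the results are reduced by summation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(0)
# Four "partitions" of a fake single-event table with two columns
parts = [rng.normal(size=(25_000, 2)) for _ in range(4)]

def bin_partition(part):
    hist, _ = np.histogramdd(part, bins=[100, 100], range=[(-3, 3), (-3, 3)])
    return hist

# Bin the partitions concurrently and reduce by summation. As noted above,
# for data this small the scheduling overhead can outweigh the parallel gain.
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(bin_partition, parts))
```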
Demonstration of the conversion pipeline using time-resolved ARPES data stored on Zenodo#
-
In this example, we pull some time-resolved ARPES data from Zenodo and load it into the sed package using functions of the mpes package. Then, we run a conversion pipeline on it, containing steps for visualizing the channels, correcting image distortions, calibrating the momentum space, correcting for energy distortions, and calibrating the energy axis. Finally, the data are binned in calibrated axes. For performance reasons, it is best to store the data on locally attached storage (not a network drive).
This can also be achieved transparently using the included MirrorUtil class.
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
data_path = dataset.dir  # This is the path to the data
scandir, caldir = dataset.subdirs  # scandir contains the data, caldir contains the calibration files
-
-
-
-
-
-
-
-
-INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
-INFO - WSe2 data is already present.
-
-
-
-
[3]:
-
-
-
# create sed processor using the config file:
sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={}, verbose=True)
-
Bin and load part of the dataframe in detector coordinates, and choose an energy plane where high-symmetry points can be well identified. Either use the interactive tool, or pre-select the range:
Next, we select a number of features corresponding to the rotational symmetry of the material, plus the center. These can either be auto-detected (for well-isolated points), or provided as a list (these can be read off the graph in the cell above). They are then symmetrized according to the rotational symmetry, and a spline-warping correction for the x/y coordinates is calculated, which corrects for any geometric distortions from the perfect n-fold rotational symmetry.
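The idea of the symmetrization can be sketched as follows: the detected landmarks (here rounded values from the commented-out features array in the next cell) are projected onto a common radius at equally spaced angles, which yields the distortion-free target positions for the spline warp. This is an illustrative sketch, not sed's actual implementation:

```python
import numpy as np

# Hypothetical landmark positions (px) with 6-fold symmetry, plus the center
center = np.array([248.3, 248.6])
landmarks = np.array([[203.2, 342.0], [299.2, 345.3], [350.3, 243.7],
                      [304.4, 149.9], [199.5, 152.5], [154.3, 242.3]])

# Symmetrize: put all points on a common (mean) radius at equally spaced
# angles, keeping the orientation of the first landmark
rel = landmarks - center
radius = np.linalg.norm(rel, axis=1).mean()
theta0 = np.arctan2(rel[0, 1], rel[0, 0])
angles = theta0 + np.arange(6) * np.pi / 3
targets = center + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
```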
-
-
[9]:
-
-
-
# features = np.array([[203.2, 341.96], [299.16, 345.32], [350.25, 243.70], [304.38, 149.88], [199.52, 152.48], [154.28, 242.27], [248.29, 248.62]])
# sp.define_features(features=features, rotation_symmetry=6, include_center=True, apply=True)
# Manual selection: use a GUI tool to select peaks:
# sp.define_features(rotation_symmetry=6, include_center=True)
# Autodetect: uses the DAOStarFinder routine to locate maxima.
# Parameters are:
#   fwhm: Full width at half maximum of the peaks.
#   sigma: Number of standard deviations above the mean value of the image that peaks must have.
#   sigma_radius: Number of standard deviations around a peak within which peaks are fitted.
sp.define_features(rotation_symmetry=6, auto_detect=True, include_center=True, fwhm=10, sigma=12, sigma_radius=4, apply=True)
-
Generate the nonlinear correction using the splinewarp algorithm. If no landmarks have been defined in the previous step, default parameters from the config are used.
-
-
[10]:
-
-
-
# Option whether a central point shall be fixed in the determination of the correction
sp.generate_splinewarp(include_center=True)
-
-
-
-
-
-
-
-
-INFO - Calculated thin spline correction based on the following landmarks:
-pouter_ord: [[203.0039765 342.99171918]
- [299.87633072 346.19427038]
- [350.95113635 244.77654127]
- [305.64228939 150.20244008]
- [199.5409951 152.78524092]
- [153.41883308 243.04327152]]
-pcent: (249.04108057657348, 249.1877259516608)
-
To adjust scaling, position and orientation of the corrected momentum space image, you can apply further affine transformations to the distortion correction field. Here, first a potential scaling is applied, next a translation, and finally a rotation around the center of the image (defined via the config). One can either use an interactive tool, or provide the adjusted values and apply them directly.
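Conceptually, these three adjustments compose into a single affine transformation. The sketch below (with assumed conventions and an assumed image center of (256, 256)) shows one possible composition, not sed's exact internals:

```python
import numpy as np

def affine(scale=1.0, translation=(0.0, 0.0), angle=0.0, center=(256, 256)):
    """Compose scale, then translation, then rotation about the image center
    into one 3x3 homogeneous matrix (conventions are illustrative)."""
    cx, cy = center
    tx, ty = translation
    s = np.array([[scale, 0, 0], [0, scale, 0], [0, 0, 1.0]])
    t = np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1.0]])
    c, si = np.cos(angle), np.sin(angle)
    # rotate about (cx, cy): shift center to origin, rotate, shift back
    r = (np.array([[1, 0, cx], [0, 1, cy], [0, 0, 1.0]])
         @ np.array([[c, -si, 0], [si, c, 0], [0, 0, 1.0]])
         @ np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1.0]]))
    return r @ t @ s

point = np.array([300.0, 256.0, 1.0])
moved = affine(angle=np.pi / 2) @ point  # rotate 90 deg about (256, 256)
```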
First, the momentum scale needs to be calibrated. One can either provide the coordinates of one point outside the center together with its distance to the Brillouin zone center (which is assumed to be located in the center of the image); specify two points on the image and their distance (where the 2nd point marks the BZ center); or provide absolute k-coordinates of two distinct momentum points.
-
If no points are provided, an interactive tool is created. Here, left mouse click selects the off-center point (brillouin_zone_centered=True) or toggle-selects the off-center and center point.
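For the single-point option, the calibration reduces to a simple scale factor, as this sketch shows (the center coordinate is assumed for illustration):

```python
import numpy as np

# Sketch of the single-point momentum calibration: the pixel distance between
# the chosen point and the assumed BZ center is mapped onto the known k-distance.
center_px = np.array([256.0, 256.0])        # assumed image/BZ center
point_a = np.array([308.0, 345.0])          # off-center point
k_distance = 2 / np.sqrt(3) * np.pi / 3.28  # known momentum distance (1/angstrom)

px_distance = np.linalg.norm(point_a - center_px)
k_per_pixel = k_distance / px_distance      # equiscale: same factor for kx and ky

# convert a detector coordinate to momentum
kx, ky = (point_a - center_px) * k_per_pixel
```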
-
-
[14]:
-
-
-
k_distance = 2 / np.sqrt(3) * np.pi / 3.28  # k-distance of the K-point in a hexagonal Brillouin zone
# sp.calibrate_momentum_axes(k_distance=k_distance)
point_a = [308, 345]
sp.calibrate_momentum_axes(point_a=point_a, k_distance=k_distance, apply=True)
# point_b = [247, 249]
# sp.calibrate_momentum_axes(point_a=point_a, point_b=point_b, k_coord_a=[.5, 1.1], k_coord_b=[0, 0], equiscale=False)
-
The purpose of the energy correction is to correct for any momentum-dependent distortion of the energy axis, e.g. from geometric effects in the flight tube or from space charge.
Here, one can select the functional form to be used and adjust its parameters. The binned data used for the momentum calibration are plotted around the Fermi energy (defined by tof_fermi), and the correction function is plotted on top. Possible correction functions are: “spherical” (parameter: diameter), “Lorentzian” (parameter: gamma), “Gaussian” (parameter: sigma), and “Lorentzian_asymmetric” (parameters: gamma, amplitude2, gamma2).
-
One can either use an interactive alignment tool, or provide parameters directly.
In a first step, the data are loaded, binned along the TOF dimension, and normalized. The used bias voltages can either be provided, or read from attributes in the source files if present.
Next, the same peak or feature needs to be selected in each curve. For this, one needs to define “ranges” for each curve, within which the peak of interest is located. One can either provide these ranges manually, or provide one range for a “reference” curve, and infer the ranges for the other curves using a dynamic time warping algorithm.
-
-
[21]:
-
-
-
# Option 1 = specify the ranges containing a common feature (e.g. an equivalent peak) for all bias scans
# rg = [(129031.03103103103, 129621.62162162163), (129541.54154154155, 130142.14214214214), (130062.06206206206, 130662.66266266267), (130612.61261261262, 131213.21321321322), (131203.20320320321, 131803.8038038038), (131793.7937937938, 132384.38438438438), (132434.43443443443, 133045.04504504506), (133105.10510510512, 133715.71571571572), (133805.8058058058, 134436.43643643643), (134546.54654654654, 135197.1971971972)]
# sp.find_bias_peaks(ranges=rg, infer_others=False)
# Option 2 = specify the range for one curve and infer the others
# This will open an interactive tool to select the correct ranges for the curves.
# IMPORTANT: Don't choose the range too narrow about a peak, and choose a refid
# somewhere in the middle or towards larger biases!
rg = (66100, 67000)
sp.find_bias_peaks(ranges=rg, ref_id=5, infer_others=True, apply=True)
-
Next, the detected peak positions and bias voltages are used to determine the calibration function. Essentially, the function Energy(TOF) is determined either by least-squares fitting of the functional form d²/(t-t₀)² via lmfit (method: “lmfit”), or by analytically obtaining a polynomial approximation (method: “lstsq” or “lsqr”). The parameter ref_energy defines the absolute energy position of the feature used for calibration on the calibrated energy scale. energy_scale can be either “kinetic” (decreasing energy with increasing TOF) or “binding” (increasing energy with increasing TOF).
-
After calculating the calibration, all traces corrected with the calibration are plotted on top of each other, and the calibration function (Energy(TOF)) together with the extracted features is plotted.
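The fitted forward model can be written out as a short sketch (illustrative parameter values, not fit results):

```python
import numpy as np

# Forward model of the calibration function fitted by the "lmfit" method,
# E(t) = E0 + (d / (t - t0))**2, with made-up parameter values
d, t0, E0 = 1.0, 8e-7, -50.0

def energy(t, energy_scale="kinetic"):
    e = (d / (t - t0)) ** 2
    # "kinetic": energy decreases with increasing TOF; "binding": it increases
    return E0 + e if energy_scale == "kinetic" else E0 - e

tof = np.linspace(4e-6, 7e-6, 5)
e_kin = energy(tof, "kinetic")
e_bin = energy(tof, "binding")
```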
-
-
[22]:
-
-
-
# Eref can be used to set the absolute energy (kinetic energy, E-EF, etc.) of the feature used for energy calibration (if known)
Eref = -1.3
# the lmfit method uses a fit of (d/(t-t0))**2 to determine the energy calibration
# limits and starting values for the fitting parameters can be provided as dictionaries
sp.calibrate_energy_axis(
    ref_energy=Eref,
    method="lmfit",
    energy_scale='kinetic',
    d={'value': 1.0, 'min': .7, 'max': 1.2, 'vary': True},
    t0={'value': 8e-7, 'min': 1e-7, 'max': 1e-6, 'vary': True},
    E0={'value': 0., 'min': -100, 'max': 0, 'vary': True},
)
-
Finally, the energy axis is added to the dataframe. Here, the applied bias voltage of the measurement is taken into account to provide the correct energy offset. If the bias cannot be read from the file, it can be provided manually.
-
-
[24]:
-
-
-
sp.append_energy_axis(bias_voltage=16.8)
-
-
-
-
-
-
-
-
-INFO - Adding energy column to dataframe:
-INFO - Using energy calibration parameters generated on 03/05/2025, 23:10:49
-INFO - Dask DataFrame Structure:
- X Y t ADC Xm Ym kx ky tm energy
-npartitions=100
- float64 float64 float64 float64 float64 float64 float64 float64 float64 float64
- ... ... ... ... ... ... ... ... ... ...
-... ... ... ... ... ... ... ... ... ... ...
- ... ... ... ... ... ... ... ... ... ...
- ... ... ... ... ... ... ... ... ... ...
-Dask Name: assign, 243 graph layers
-
The delay axis is calculated from the ADC input column based on the provided delay range. Alternatively, the delay scan range can also be extracted from attributes inside a source file, if present.
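The mapping is a simple linear rescaling from the recorded ADC range to the given delay range; the numbers below are made up for illustration:

```python
import numpy as np

# Hypothetical ADC values at the scan endpoints, and the corresponding delays
adc_range = (650.0, 6900.0)     # raw ADC values
delay_range = (-500.0, 1500.0)  # delay in fs

def adc_to_delay(adc):
    # linear map: fractional position in the ADC range -> delay range
    frac = (adc - adc_range[0]) / (adc_range[1] - adc_range[0])
    return delay_range[0] + frac * (delay_range[1] - delay_range[0])

delays = adc_to_delay(np.array([650.0, 3775.0, 6900.0]))
```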
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
data_path = dataset.dir  # This is the path to the data
scandir, _ = dataset.subdirs  # scandir contains the data, _ contains the calibration files
-
-
-
-
-
-
-
-
-INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
-INFO - WSe2 data is already present.
-
-
-
-
[3]:
-
-
-
metadata = {}
# Manual metadata. These should ideally come from an electronic lab notebook.
# General
metadata['experiment_summary'] = 'WSe2 XUV NIR pump probe data.'
metadata['entry_title'] = 'Valence Band Dynamics - 800 nm linear s-polarized pump, 0.6 mJ/cm2 absorbed fluence'
metadata['experiment_title'] = 'Valence band dynamics of 2H-WSe2'

# User
# Fill general parameters of NXuser
# TODO: discuss how to deal with multiple users?
metadata['user0'] = {}
metadata['user0']['name'] = 'Julian Maklar'
metadata['user0']['role'] = 'Principal Investigator'
metadata['user0']['affiliation'] = 'Fritz Haber Institute of the Max Planck Society'
metadata['user0']['address'] = 'Faradayweg 4-6, 14195 Berlin'
metadata['user0']['email'] = 'maklar@fhi-berlin.mpg.de'

# NXinstrument
metadata['instrument'] = {}
metadata['instrument']['energy_resolution'] = 140.
# analyzer
metadata['instrument']['analyzer'] = {}
metadata['instrument']['analyzer']['slow_axes'] = "delay"  # the scanned axes
metadata['instrument']['analyzer']['spatial_resolution'] = 10.
metadata['instrument']['analyzer']['energy_resolution'] = 110.
metadata['instrument']['analyzer']['momentum_resolution'] = 0.08
metadata['instrument']['analyzer']['working_distance'] = 4.
metadata['instrument']['analyzer']['lens_mode'] = "6kV_kmodem4.0_30VTOF.sav"

# probe beam
metadata['instrument']['beam'] = {}
metadata['instrument']['beam']['probe'] = {}
metadata['instrument']['beam']['probe']['incident_energy'] = 21.7
metadata['instrument']['beam']['probe']['incident_energy_spread'] = 0.11
metadata['instrument']['beam']['probe']['pulse_duration'] = 20.
metadata['instrument']['beam']['probe']['frequency'] = 500.
metadata['instrument']['beam']['probe']['incident_polarization'] = [1, 1, 0, 0]  # p-pol Stokes vector
metadata['instrument']['beam']['probe']['extent'] = [80., 80.]
# pump beam
metadata['instrument']['beam']['pump'] = {}
metadata['instrument']['beam']['pump']['incident_energy'] = 1.55
metadata['instrument']['beam']['pump']['incident_energy_spread'] = 0.08
metadata['instrument']['beam']['pump']['pulse_duration'] = 35.
metadata['instrument']['beam']['pump']['frequency'] = 500.
metadata['instrument']['beam']['pump']['incident_polarization'] = [1, -1, 0, 0]  # s-pol Stokes vector
metadata['instrument']['beam']['pump']['incident_wavelength'] = 800.
metadata['instrument']['beam']['pump']['average_power'] = 300.
metadata['instrument']['beam']['pump']['pulse_energy'] = metadata['instrument']['beam']['pump']['average_power'] / metadata['instrument']['beam']['pump']['frequency']  # µJ
metadata['instrument']['beam']['pump']['extent'] = [230., 265.]
metadata['instrument']['beam']['pump']['fluence'] = 0.15

# sample
metadata['sample'] = {}
metadata['sample']['preparation_date'] = '2019-01-13T10:00:00+00:00'
metadata['sample']['preparation_description'] = 'Cleaved'
metadata['sample']['sample_history'] = 'Cleaved'
metadata['sample']['chemical_formula'] = 'WSe2'
metadata['sample']['description'] = 'Sample'
metadata['sample']['name'] = 'WSe2 Single Crystal'

metadata['file'] = {}
metadata['file']["trARPES:Carving:TEMP_RBV"] = 300.
metadata['file']["trARPES:XGS600:PressureAC:P_RD"] = 5.e-11
metadata['file']["KTOF:Lens:Extr:I"] = -0.12877
metadata['file']["KTOF:Lens:UDLD:V"] = 399.99905
metadata['file']["KTOF:Lens:Sample:V"] = 17.19976
metadata['file']["KTOF:Apertures:m1.RBV"] = 3.729931
metadata['file']["KTOF:Apertures:m2.RBV"] = -5.200078
metadata['file']["KTOF:Apertures:m3.RBV"] = -11.000425

# Sample motor positions
metadata['file']['trARPES:Carving:TRX.RBV'] = 7.1900000000000004
metadata['file']['trARPES:Carving:TRY.RBV'] = -6.1700200225439552
metadata['file']['trARPES:Carving:TRZ.RBV'] = 33.4501953125
metadata['file']['trARPES:Carving:THT.RBV'] = 423.30500940561586
metadata['file']['trARPES:Carving:PHI.RBV'] = 0.99931647456264949
metadata['file']['trARPES:Carving:OMG.RBV'] = 11.002500171914066
-
-
-
-
-
[4]:
-
-
-
# create sed processor using the config file, and collect the metadata from the files:
sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={}, metadata=metadata, collect_metadata=True)
-
The paths are set up such that if you are on Maxwell, the data there is used. Otherwise, the data is downloaded to the current directory from Zenodo.
-
Generally, if it is your beamtime, you can both read the raw data and write to the processed directory. However, for the public data, you cannot write to the processed directory.
-
-
[2]:
-
-
-
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
    meta_path = beamtime_dir + "/shared"
    buffer_path = "Gd_W110/processed/"
else:
    # data_path can be defined and used to store the data in a specific location
    dataset.get("Gd_W110")  # Put in Path to a storage of at least 10 GByte free space.
    path = dataset.dir
    meta_path = path
    buffer_path = path + "/processed/"
-
-
-
-
-
-
-
-
-INFO - Not downloading Gd_W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/Gd_W110".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "Gd_W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/Gd_W110"
-INFO - Gd_W110 data is already present.
-
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
-
-
[3]:
-
-
-
# pick the default configuration file for hextof@FLASH
config_file = Path('../src/sed/config/flash_example_config.yaml')
assert config_file.exists()
-
-
-
-
The path to the processed folder can also be defined as a keyword argument later.
-
-
[4]:
-
-
-
# here we set up a dictionary that will be used to override the path configuration
config_override = {
    "core": {
        "paths": {
            "raw": path,
            "processed": buffer_path,
        },
    },
}
-
In this notebook, we will show how calibration parameters can be generated. Therefore, we want to clean the local directory of previously generated files.
-
WARNING: running the cell below will delete the “sed_config.yaml” file in the local directory. If it contains precious calibration parameters, DO NOT RUN THIS CELL.
The following extra arguments are available for the FlashLoader. None of them is required, but they are helpful to know.
-
-
force_recreate: Probably the most useful. If the config has changed, this allows re-reducing the raw h5 files to the intermediate parquet format. Otherwise, the schema of the saved dataframe and the config would differ.
-
debug: Runs the reduction process serially, making errors easier to find.
-
remove_invalid_files: Sometimes critical channels defined in the config are missing in some raw files. Setting this ensures such files are ignored.
-
filter_timed_by_electron: Defaults to True. When True, the timed dataframe will only contain data points where valid electron events were detected. When False, all timed data points are included regardless of electron detection (see OpenCOMPES/sed#307)
-
processed_dir: Location to save the reduced parquet files.
-
scicat_token: Token from your scicat account.
-
detector: e.g. the ‘1Q’ and ‘4Q’ detectors. Useful when there are separate raw files for each detector.
-
-
-
[6]:
-
-
-
sp = SedProcessor(runs=[44762], config=config_override, system_config=config_file, collect_metadata=False)
# You can set collect_metadata=True if the scicat_url and scicat_token are defined
-
-
-
-
-
-
-
-
-INFO - System config loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/flash_example_config.yaml]
-INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sed/config/default.yaml]
-INFO - Reading files: 0 new files of 1 total.
-loading complete in 0.08 s
-
In order to avoid artifacts arising from binning sizes incommensurate with those imposed during data collection, e.g. by the detector, we jitter all the digital columns.
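A quick numpy sketch shows why jittering helps: histogramming an integer-valued column with bins that are incommensurate with the step size produces stripes, while adding uniform noise of half a step spreads each discrete value evenly:

```python
import numpy as np

rng = np.random.default_rng(1)
# A digital (integer-valued) column, e.g. uncalibrated TOF steps
steps = rng.integers(100, 110, size=200_000).astype(float)

# Jitter: uniform noise of +/- half a step smears each discrete value
# evenly over its interval
jittered = steps + rng.uniform(-0.5, 0.5, size=steps.size)

# 30 bins over a span of 10 steps: bin width 1/3 step, incommensurate
# with the step size, so the raw histogram shows empty "stripe" bins
hist_raw, _ = np.histogram(steps, bins=30, range=(100, 110))
hist_jit, _ = np.histogram(jittered, bins=30, range=(99.5, 109.5))
```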
Looking at the dataframe can give quick insight into the columns loaded and the data available.
-
-
sp.dataframe shows the structure of the dataframe without computing anything. Of interest here are the columns and their types.
-
The sp.dataframe.head() function accesses the first 5 events in the dataframe, giving us a view of what the values of each column look like without computing the whole thing. sp.dataframe.tail() does the same from the end.
-
sp.dataframe.compute() will compute the whole dataframe and can take a while. We should avoid doing this.
To get a first impression of the data, and to determine binning ranges, the method sp.view_event_histogram() allows visualizing the events in one dataframe partition as histograms. Default axes and ranges are defined in the config, and show the dldPosX, dldPosY, and dldTimeSteps columns:
Here we define the parameters for binning the dataframe into an n-dimensional histogram, which we can then plot, analyze, or save.
-
If you have never seen this before: the type after : is a “hint” for what type the object on the left will have. We include them here to make sure you know what each variable should be.
-
a: int = 1      # a is an integer
b: float = 1.0  # b is a float
c: str = 1      # we hint c to be a string, but it is still an integer
-
-
-
This is totally optional, but can help you keep track of what you are doing.
-
-
[11]:
-
-
-
# the name of the axes on which we want to bin
axes: List[str] = ['dldPosY', 'dldPosX']
# the number of bins for each axis
bins: List[int] = [480, 480]
# for each axis, the range of values to consider
ranges: List[List[int]] = [[420, 900], [420, 900]]
# here we compute the histogram
res_chessy: xr.DataArray = sp.compute(bins=bins, axes=axes, ranges=ranges)
-
Here we load runs 44798 and 44799, which show the profile of the optical spot on the same spatial view as in our chessy run above. The two differ in transmission, being \(T=1.0\) and \(T=0.5\) respectively.
We now load a bias series, where the sample bias was varied, effectively shifting the energy spectra. This allows us to calibrate the conversion between the digital values of the dld and the energy.
As usual, we first apply jitter, but here we additionally align the 8 sectors of the dld in time. This is done by finding the time of the signal maximum in each sector, and then shifting each sector so that its maximum coincides with a common reference.
-
For better precision, the photon peak can be used to track the energy shift.
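The alignment logic can be sketched on synthetic per-sector signals (the actual sed routine operates on the event-level data and may differ in detail):

```python
import numpy as np

# Fake binned dldTimeSteps signal for 8 detector sectors: each a Gaussian
# peak with a small, sector-dependent time offset (made-up numbers)
t = np.arange(1000.0)
true_shift = np.array([0, 3, -2, 5, 1, -4, 2, 0])
sectors = np.stack([np.exp(-0.5 * ((t - 500 - s) / 20) ** 2)
                    for s in true_shift])

# Align: find each sector's peak position and shift it onto sector 0's peak
peak_pos = sectors.argmax(axis=1)
shifts = peak_pos[0] - peak_pos
aligned = np.stack([np.roll(sec, sh) for sec, sh in zip(sectors, shifts)])
```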
We will now fit the ToF-energy relation. This is done by finding the maxima of a peak in the ToF spectrum, and then fitting the square-root relation to obtain the calibration parameters.
plt.figure()  # if you are using interactive plots, you'll need to explicitly generate a new figure every time.
res.mean('sampleBias').plot.line(x='energy', linewidth=3)
res.plot.line(x='energy', linewidth=1, alpha=.5);
-
The energy axis is now correct, taking the sample bias of the measurement into account. Additionally, we can compensate for the photon energy (monochromatorPhotonEnergy) and the tofVoltage.
plt.figure()
ax = plt.subplot(111)
res.energy.attrs['unit'] = 'eV'  # add units to the axes
res.mean('sampleBias').plot.line(x='energy', linewidth=3, ax=ax)
res.plot.line(x='energy', linewidth=1, alpha=.5, label='all', ax=ax);
-
-INFO - Saved energy calibration parameters to "sed_config.yaml".
-INFO - Saved energy offset parameters to "sed_config.yaml".
-
-
-
A more general function, which saves the parameters for all calibrations performed. Use either the function above or the one below; they are equivalent (and overwrite each other).
-
-
[35]:
-
-
-
sp.save_workflow_params()
-
-
-
-
-
-
-
-
-INFO - Saved energy calibration parameters to "sed_config.yaml".
-INFO - Saved energy offset parameters to "sed_config.yaml".
-
To calibrate the pump-probe delay axis, we need to shift the delay stage values to center the pump-probe temporal overlap (time zero). Also, we want to correct the SASE jitter using information from the bam column.
As we have saved some calibration and correction parameters, we can now run the workflow from the config file. This is done by calling each of the correction functions with no parameters; the functions then load their parameters from the config file.
-
-
-
-
-
-
-
-
-
-
[40]:
-
-
-
sp.add_delay_offset(
    constant=-1463.7,  # this is time zero
    flip_delay_axis=True,  # invert the direction of the delay axis
    columns=['bam'],  # use the bam to offset the values
    weights=[-0.001],  # bam is in fs, delay in ps
    preserve_mean=True,  # preserve the mean of the delay axis
)
-
You may note some intensity variation along the delay axis. This comes mainly from the inhomogeneous speed of the delay stage, and thus inequivalent amounts of time spent at each delay point. It can be corrected for by normalizing the data to the acquisition time per delay point:
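A minimal sketch of this normalization (with a made-up dwell-time profile):

```python
import numpy as np

# A delay stage that moves with uneven speed spends a different acquisition
# time at each delay point (illustrative numbers, in seconds per point)
dwell = 1.0 + 0.5 * np.sin(np.linspace(0, np.pi, 21))

# For a constant photoemission rate, the raw counts simply follow the
# dwell time, faking a delay-dependent intensity ...
counts = np.round(1000 * dwell)

# ... which dividing by the acquisition time per point removes
normalized = counts / dwell
```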
Binning of temperature-dependent ARPES data using time-stamped external temperature data#
-
In this example, we pull some temperature-dependent ARPES data from Zenodo, which was recorded as a continuous temperature ramp. We then add the respective temperature information from the timestamp/temperature values to the dataframe, and bin the data as a function of temperature. For performance reasons, it is best to store the data on locally attached storage (not a network drive). This can also be achieved transparently using the included MirrorUtil class.
dataset.get("TaS2")  # Put in Path to a storage of at least 20 GByte free space.
data_path = dataset.dir
scandir, caldir = dataset.subdirs  # scandir contains the data, caldir contains the calibration files

# correct the timestamps if a wrong timezone was set
tzoffset = os.path.getmtime(scandir + '/Scan0121_1.h5') - 1594998158.0
if tzoffset:
    for file in glob.glob(scandir + '/*.h5'):
        os.utime(file, (os.path.getmtime(file) - tzoffset, os.path.getmtime(file) - tzoffset))
-
-
-
-
-
-
-
-
-INFO - Not downloading TaS2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/TaS2".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "TaS2": "/home/runner/work/sed/sed/docs/tutorial/datasets/TaS2"
-INFO - TaS2 data is already present.
-
-
-
-
[3]:
-
-
-
# create sed processor using the config file with time-stamps:
sp = sed.SedProcessor(
    folder=scandir,
    user_config="../src/sed/config/mpes_example_config.yaml",
    system_config={},
    time_stamps=True,
    verbose=True,
)
-
# Remaining fluctuations are an effect of the varying count rate throughout the scan
plt.figure()
rate, secs = sp.loader.get_count_rate()
plt.plot(secs, rate)
-
-
-
-
-
[18]:
-
-
-
-
-[<matplotlib.lines.Line2D at 0x7f3788bf26b0>]
-
-
-
-
-
-
-
-
-
-
[19]:
-
-
-
# Normalize for intensity around the Gamma point
res_norm = res.copy()
res_norm = res_norm / res_norm.loc[{'kx': slice(-.3, .3), 'ky': slice(-.3, .3)}].sum(axis=(0, 1, 2))
-
-<matplotlib.collections.QuadMesh at 0x7f3788b42380>
-
-
-
-
-
-
-
-
-
-
[21]:
-
-
-
# Lower Hubbard band intensity versus temperature
plt.figure()
res_norm.loc[{'kx': slice(-.2, .2), 'ky': slice(-.2, .2), 'energy': slice(-.6, 0.1)}].sum(axis=(0, 1, 2)).plot()
-
This example showcases how to use the distortion correction workflow with landmarks that are not at symmetry-equivalent positions, such as for orthorhombic systems with different in-plane axis parameters.
For this example, we use the WSe2 example data. Even though this system is hexagonal, it serves well for demonstration purposes.
-
-
[2]:
-
-
-
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
data_path = dataset.dir  # This is the path to the data
scandir, _ = dataset.subdirs  # scandir contains the data, _ contains the calibration files
-
-
-
-
-
-
-
-
-INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
-INFO - WSe2 data is already present.
-
-
-
-
[3]:
-
-
-
# create sed processor using the config file with time-stamps:
sp = sed.SedProcessor(
    folder=scandir,
    user_config="../src/sed/config/mpes_example_config.yaml",
    system_config={},
    time_stamps=True,
    verbose=True,
)
sp.add_jitter()
-
-
-
-
-
-
-
-
-INFO - Folder config loaded from: [/home/runner/work/sed/sed/docs/tutorial/sed_config.yaml]
-INFO - User config loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/mpes_example_config.yaml]
-INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sed/config/default.yaml]
-WARNING - Entry "KTOF:Lens:Sample:V" for channel "sampleBias" not found. Skipping the channel.
-INFO - add_jitter: Added jitter to columns ['X', 'Y', 't', 'ADC'].
-
We will describe the system with a 4-fold symmetry, and select two K points and two M points as symmetry points (as well as the Gamma point).
-
-
[5]:
-
-
-
features = np.array([[252., 355.], [361., 251.], [250., 144.], [156., 247.], [254., 247.]])
sp.define_features(features=features, rotation_symmetry=4, include_center=True, apply=True)
# Manual selection: Use a GUI tool to select peaks:
# sp.define_features(rotation_symmetry=4, include_center=True)
-
For the spline-warp generation, we need to tell the algorithm the difference in length of Gamma-K and Gamma-M. This we can do using the ascale parameter, which can either be a single number (the ratio), or a list of length rotation_symmetry defining the relative length of the respective vectors.
-
-
[6]:
-
-
-
gamma_m = np.pi / 3.28
gamma_k = 2 / np.sqrt(3) * np.pi / 3.28
# Option 1: Ratio of the two distances:
# sp.generate_splinewarp(include_center=True, ascale=gamma_k/gamma_m)
# Option 2: List of distances:
sp.generate_splinewarp(include_center=True, ascale=[gamma_m, gamma_k, gamma_m, gamma_k])
-
-
-
-
-
-
-
-
-INFO - Calculated thin spline correction based on the following landmarks:
-pouter_ord: [[252. 355.]
- [361. 251.]
- [250. 144.]
- [156. 247.]]
-pcent: (254.0, 247.0)
-
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
data_path = dataset.dir  # This is the path to the data
scandir, _ = dataset.subdirs  # scandir contains the data, _ contains the calibration files
-
-
-
-
-
-
-
-
-INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
-INFO - WSe2 data is already present.
-
-
-
-
[3]:
-
-
-
# create sed processor using the config file:
sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={})
-
After loading, the dataframe contains the four columns X, Y, t, ADC, which all have integer values. They originate from a time-to-digital converter, and correspond to digital “bins”.
-
-
[4]:
-
-
-
sp.dataframe.head()
-
-
-
-
-
[4]:
-
-
-
-
-
-
-
-
-
-
       X       Y        t     ADC
0    0.0     0.0      0.0     0.0
1  365.0  1002.0  70101.0  6317.0
2  761.0   818.0  75615.0  6316.0
3  692.0   971.0  66455.0  6317.0
4  671.0   712.0  73026.0  6317.0
-
-
-
-
-
-
Let’s bin these data along the t dimension within a small range:
We notice some oscillation on top of the data. These are re-binning artifacts, originating from a non-integer number of machine bins per bin, as we can verify by binning with a different number of steps:
To mitigate this problem, we can add some randomness to the data and re-distribute events into the gaps between bins. This is termed dithering and is known e.g. from image manipulation. The important factor is to add the right amount and right type of random distribution, so as to end up with a quasi-continuous uniform distribution without losing information.
-
We can use the add_jitter function for this. We can pass it the columns to add jitter to, and the amplitude of a uniform jitter. Importantly, this should be the very first step, before any dataframe operations are added.
This jittering fills the gaps, and produces a continuous uniform distribution. Let’s check again the longer-range binning that gave us the oscillations initially:
Now the artifacts are absent, and so will they be in any dataframe columns derived from a column jittered in this way. Note that this only applies to data present in digital (i.e. machine-binned) format, not to data that are intrinsically continuous.
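A self-contained numpy sketch (synthetic TDC values, not the tutorial data) illustrates both the artifact and its removal by dithering:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic integer "machine bins", as a time-to-digital converter would deliver
t = rng.integers(66000, 67000, size=200_000).astype(float)

# 285 bins over 1000 machine bins -> ~3.5 machine bins per bin: the bins
# alternately swallow 3 or 4 integer values, producing a sawtooth artifact
hist_raw, _ = np.histogram(t, bins=285, range=(66000, 67000))

# uniform dithering by +/- half a machine bin fills the gaps between values
t_jit = t + rng.uniform(-0.5, 0.5, size=t.size)
hist_jit, _ = np.histogram(t_jit, bins=285, range=(66000, 67000))

print(hist_raw.std() / hist_raw.mean())  # large relative spread: re-binning artifact
print(hist_jit.std() / hist_jit.mean())  # small: only counting noise remains
```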
-
Also note that too large or not well-aligned jittering amplitudes will deteriorate your resolution along the jittered axis. If the step size of digitization is different from 1, the jitter amplitude (half the distance between digitized values) can be adjusted as shown above.
-
Alternatively, normally distributed noise can also be added, which is less sensitive to using exactly the right amplitude, but will lead to mixing of neighboring voxels and thus a loss of resolution. Normally distributed noise is also substantially more computation-intensive to generate. It can nevertheless be helpful in situations where e.g. the step size is non-uniform.
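As a minimal sketch (plain numpy, with a hypothetical digitization step of 4), the two noise types compare as follows:

```python
import numpy as np

rng = np.random.default_rng(0)
adc = 4.0 * rng.integers(0, 250, size=100_000)  # column digitized in steps of 4

step = 4.0
# uniform jitter of +/- step/2 exactly fills the gaps between digitized values
adc_uniform = adc + rng.uniform(-step / 2, step / 2, size=adc.size)
# normally distributed noise is less sensitive to the exact amplitude, but
# mixes neighboring steps and is slower to generate (sigma chosen as step/2)
adc_normal = adc + rng.normal(0.0, step / 2, size=adc.size)
```

In sed, the same is achieved by passing the jitter amplitude (and, depending on version, the jitter type) to add_jitter.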
If it is your beamtime, you can read the raw data and write to the processed directory. For the public data, you cannot write to the processed directory.
-
The paths are set such that if you are on Maxwell, those are used. Otherwise, the data is downloaded into the current directory from Zenodo: https://zenodo.org/records/12609441
-
-
[2]:
-
-
-
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
    buffer_path = beamtime_dir + "/processed/tutorial/"
else:
    # data_path can be defined and used to store the data in a specific location
    dataset.get("W110")  # Put in Path to a storage of at least 10 GByte free space.
    path = dataset.dir
    buffer_path = path + "/processed/"
-
-
-
-
-
-
-
-
-INFO - Not downloading W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/W110".
-Set 'use_existing' to False if you want to download to a new location.
-INFO - Using existing data path for "W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/W110"
-INFO - W110 data is already present.
-
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
-
-
[3]:
-
-
-
# pick the default configuration file for hextof@FLASH
config_file = Path('../src/sed/config/flash_example_config.yaml')
assert config_file.exists()
-
-
-
-
-
[4]:
-
-
-
# here we set up a dictionary that will be used to override the path configuration
config_override = {
    "core": {
        "beamtime_id": 11019101,
        "paths": {
            "raw": path,
            "processed": buffer_path,
        },
    },
}
-
Instead of making a completely new energy calibration, we can take existing values from the calibration made in the previous tutorial. This allows us to calibrate the conversion between the digital values of the dld and the energy.
-
For this, we need to add all those parameters as a dictionary and use them during creation of the processor object.
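A sketch of such a dictionary (the key names follow the flash example config; all numeric values here are placeholders, not actual calibration results):

```python
# placeholder values -- use the numbers from your own calibration instead
energy_cal = {
    "energy": {
        "calibration": {
            "d": 0.8,            # drift-length parameter (placeholder)
            "t0": 4.0e-07,       # time-of-flight offset (placeholder)
            "E0": -53.0,         # energy offset (placeholder)
            "energy_scale": "kinetic",
        },
    },
}
# merge into the overrides passed when creating the processor object
config_override = {"core": {"beamtime_id": 11019101}}
config_override.update(energy_cal)
```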
We can do the SASE jitter correction using information from the bam column, and calibrate the pump-probe delay axis, for which we need to shift the delay stage values to center the pump-probe temporal overlap at time zero.
-
-
[10]:
-
-
-
sp_44498.add_delay_offset(
    constant=-1448,  # this is time zero position determined from side band fit
    flip_delay_axis=True,  # invert the direction of the delay axis
    columns=['bam'],  # use the bam to offset the values
    weights=[-0.001],  # bam is in fs, delay in ps
    preserve_mean=True,  # preserve the mean of the delay axis to keep t0 position
)
-
## EDC and integration region for XPD
plt.figure()
res_kx_ky.mean(('dldPosX', 'dldPosY')).plot()
plt.vlines([-30.3, -29.9], 0, 2.4, color='r', linestyles='dashed')
plt.vlines([-31.4, -31.2], 0, 2.4, color='orange', linestyles='dashed')
plt.vlines([-33.6, -33.4], 0, 2.4, color='g', linestyles='dashed')
plt.vlines([-37.0, -36.0], 0, 2.4, color='b', linestyles='dashed')
plt.title('EDC and integration regions for XPD')
plt.show()

## XPD plots
fig, ax = plt.subplots(2, 2, figsize=(6, 4.7), layout='constrained')
res_kx_ky.sel(energy=slice(-30.3, -29.9)).mean('energy').plot(robust=True, ax=ax[0, 0], cmap='terrain')
ax[0, 0].set_title("XPD of $1^{st}$ order sidebands")
res_kx_ky.sel(energy=slice(-31.4, -31.2)).mean('energy').plot(robust=True, ax=ax[0, 1], cmap='terrain')
ax[0, 1].set_title("XPD of W4f 7/2 peak")
res_kx_ky.sel(energy=slice(-33.6, -33.4)).mean('energy').plot(robust=True, ax=ax[1, 0], cmap='terrain')
ax[1, 0].set_title("XPD of W4f 5/2 peak")
res_kx_ky.sel(energy=slice(-37.0, -36.0)).mean('energy').plot(robust=True, ax=ax[1, 1], cmap='terrain')
ax[1, 1].set_title("XPD of W5p 3/2 peak")
-
-
-
-
-
-
-
-
-
-
-
[14]:
-
-
-
-
-Text(0.5, 1.0, 'XPD of W5p 3/2 peak')
-
-
-
-
-
-
-
-
-
As we can see, there is some structure visible, but the patterns all look very similar. We probably have to do some normalization to remove the detector structure/artefacts. The best option is to divide by a flat-field image. The flat-field image can be obtained from a sample that shows no structure under identical measurement conditions. Unfortunately, we don’t have such a flat-field image.
-
In this case, we can make a flat-field image from the actual dataset using several different approaches.
-
As a first option, we can integrate in energy over the whole region and use this image as a background. Additionally, we introduce a Gaussian blur for comparison.
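On synthetic data, the idea looks as follows (a plain numpy/scipy stand-in for the binned res_kx_ky array; the notebook performs the equivalent division with xr.apply_ufunc):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
res = rng.poisson(100, size=(20, 64, 64)).astype(float)  # stand-in for (energy, kx, ky) data

bgd = res.mean(axis=0)               # integrate over the whole energy region
bgd_blur = gaussian_filter(bgd, 15)  # Gaussian-blurred variant for comparison
res_norm = res / bgd_blur            # detector structure divides out
```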
Sometimes, after this division, you may not be happy with the intensity distribution. Another option for background correction is therefore to duplicate the XPD pattern and apply a large Gaussian blur that eliminates the fine structures in the pattern, then divide the XPD pattern by its blurred version. This process can enhance the visibility of the fine structures considerably.
-
-
[17]:
-
-
-
## XPD normalized by Gaussian-blurred background image

### Define integration regions for XPD
SB = res_kx_ky.sel(energy=slice(-30.3, -29.9)).mean('energy')
W_4f_7 = res_kx_ky.sel(energy=slice(-31.4, -31.2)).mean('energy')
W_4f_5 = res_kx_ky.sel(energy=slice(-33.6, -33.4)).mean('energy')
W_5p = res_kx_ky.sel(energy=slice(-37.0, -36.0)).mean('energy')

### Make corresponding Gaussian Blur background
SB_blur = xr.apply_ufunc(gaussian_filter, SB, 15)
W_4f_7_blur = xr.apply_ufunc(gaussian_filter, W_4f_7, 15)
W_4f_5_blur = xr.apply_ufunc(gaussian_filter, W_4f_5, 15)
W_5p_blur = xr.apply_ufunc(gaussian_filter, W_5p, 15)

### Visualize results
fig, ax = plt.subplots(2, 2, figsize=(6, 4.7), layout='constrained')
(SB / SB_blur).plot(robust=True, ax=ax[0, 0], cmap='terrain')
(W_4f_7 / W_4f_7_blur).plot(robust=True, ax=ax[0, 1], cmap='terrain')
(W_4f_5 / W_4f_5_blur).plot(robust=True, ax=ax[1, 0], cmap='terrain')
(W_5p / W_5p_blur).plot(robust=True, ax=ax[1, 1], cmap='terrain')
fig.suptitle(f'Run {run_number}: XPD patterns after Gaussian Blur normalization', fontsize='11')

### Apply Gaussian Blur to resulting images to improve contrast
SB_norm = xr.apply_ufunc(gaussian_filter, SB / SB_blur, 1)
W_4f_7_norm = xr.apply_ufunc(gaussian_filter, W_4f_7 / W_4f_7_blur, 1)
W_4f_5_norm = xr.apply_ufunc(gaussian_filter, W_4f_5 / W_4f_5_blur, 1)
W_5p_norm = xr.apply_ufunc(gaussian_filter, W_5p / W_5p_blur, 1)

### Visualize results
fig, ax = plt.subplots(2, 2, figsize=(6, 4.7), layout='constrained')
SB_norm.plot(robust=True, ax=ax[0, 0], cmap='terrain')
W_4f_7_norm.plot(robust=True, ax=ax[0, 1], cmap='terrain')
W_4f_5_norm.plot(robust=True, ax=ax[1, 0], cmap='terrain')
W_5p_norm.plot(robust=True, ax=ax[1, 1], cmap='terrain')
fig.suptitle(f'Run {run_number}: XPD patterns after Gaussian Blur normalization', fontsize='11')
-
A third option for background normalization is to use the simultaneously acquired pre-core-level region. As an example, for the W4f 7/2 peak we define a region on its high-energy side and integrate it in energy to use as a background.
-
-
[18]:
-
-
-
### Define peak and background region on the high energy side of the peak
W_4f_7 = res_kx_ky.sel(energy=slice(-31.4, -31.2)).mean('energy')
W_4f_7_bgd = res_kx_ky.sel(energy=slice(-32.0, -31.8)).mean('energy')

### Make normalization by background, add Gaussian Blur to the resulting image
W_4f_7_nrm1 = W_4f_7 / (W_4f_7_bgd + W_4f_7_bgd.max() * 0.00001)
W_4f_7_nrm1_blur = xr.apply_ufunc(gaussian_filter, W_4f_7_nrm1, 1)

### Add Gaussian Blur to the background image, normalize by it and add Gaussian Blur to the resulting image
W_4f_7_bgd_blur = xr.apply_ufunc(gaussian_filter, W_4f_7_bgd, 15)
W_4f_7_nrm2 = W_4f_7 / W_4f_7_bgd_blur
W_4f_7_nrm2_blur = xr.apply_ufunc(gaussian_filter, W_4f_7_nrm2, 1)

### Visualize all steps
fig, ax = plt.subplots(4, 2, figsize=(6, 8), layout='constrained')
W_4f_7.plot(robust=True, ax=ax[0, 0], cmap='terrain')
W_4f_7_bgd.plot(robust=True, ax=ax[0, 1], cmap='terrain')
W_4f_7_nrm1.plot(robust=True, ax=ax[1, 0], cmap='terrain')
W_4f_7_nrm1_blur.plot(robust=True, ax=ax[1, 1], cmap='terrain')
W_4f_7_bgd_blur.plot(robust=True, ax=ax[2, 0], cmap='terrain')
W_4f_7_nrm2.plot(robust=True, ax=ax[2, 1], cmap='terrain')
W_4f_7_nrm2_blur.plot(robust=True, ax=ax[3, 0], cmap='terrain')
fig.suptitle(f'Run {run_number}: XPD patterns of W4f7/2 with pre-core level normalization', fontsize='11')
-
-
-
-
-
[18]:
-
-
-
-
-Text(0.5, 0.98, 'Run 44498: XPD patterns of W4f7/2 with pre-core level normalization')
-
-
-
-
-
-
-
-
-
-
[19]:
-
-
-
fig, ax = plt.subplots(1, 3, figsize=(6, 2), layout='constrained')
(xr.apply_ufunc(gaussian_filter, res_kx_ky / bgd_blur, 1)).sel(energy=slice(-31.4, -31.2)).mean('energy').plot(robust=True, ax=ax[0], cmap='terrain')
W_4f_7_norm.plot(robust=True, ax=ax[1], cmap='terrain')
W_4f_7_nrm2_blur.plot(robust=True, ax=ax[2], cmap='terrain')
fig.suptitle(f'Run {run_number}: comparison of different normalizations\nof XPD pattern for W4f 7/2 peak with Gaussian Blur', fontsize='11')
-
-
-
-
-
[19]:
-
-
-
-
-Text(0.5, 0.98, 'Run 44498: comparison of different normalizations\nof XPD pattern for W4f 7/2 peak with Gaussian Blur')
-
-
-
-
-
-
-
-
-
similarity index 100%
rename from sed/latest/_sources/misc/contribution.md.txt
rename to sed/v1.0.0/_sources/misc/contribution.md.txt
diff --git a/sed/latest/_sources/misc/maintain.rst.txt b/sed/v1.0.0/_sources/misc/maintain.rst.txt
similarity index 100%
rename from sed/latest/_sources/misc/maintain.rst.txt
rename to sed/v1.0.0/_sources/misc/maintain.rst.txt
diff --git a/sed/latest/_sources/sed/api.rst.txt b/sed/v1.0.0/_sources/sed/api.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/api.rst.txt
rename to sed/v1.0.0/_sources/sed/api.rst.txt
diff --git a/sed/latest/_sources/sed/binning.rst.txt b/sed/v1.0.0/_sources/sed/binning.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/binning.rst.txt
rename to sed/v1.0.0/_sources/sed/binning.rst.txt
diff --git a/sed/latest/_sources/sed/calibrator.rst.txt b/sed/v1.0.0/_sources/sed/calibrator.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/calibrator.rst.txt
rename to sed/v1.0.0/_sources/sed/calibrator.rst.txt
diff --git a/sed/latest/_sources/sed/config.rst.txt b/sed/v1.0.0/_sources/sed/config.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/config.rst.txt
rename to sed/v1.0.0/_sources/sed/config.rst.txt
diff --git a/sed/latest/_sources/sed/core.rst.txt b/sed/v1.0.0/_sources/sed/core.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/core.rst.txt
rename to sed/v1.0.0/_sources/sed/core.rst.txt
diff --git a/sed/latest/_sources/sed/dataset.rst.txt b/sed/v1.0.0/_sources/sed/dataset.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/dataset.rst.txt
rename to sed/v1.0.0/_sources/sed/dataset.rst.txt
diff --git a/sed/latest/_sources/sed/dfops.rst.txt b/sed/v1.0.0/_sources/sed/dfops.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/dfops.rst.txt
rename to sed/v1.0.0/_sources/sed/dfops.rst.txt
diff --git a/sed/latest/_sources/sed/diagnostic.rst.txt b/sed/v1.0.0/_sources/sed/diagnostic.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/diagnostic.rst.txt
rename to sed/v1.0.0/_sources/sed/diagnostic.rst.txt
diff --git a/sed/latest/_sources/sed/io.rst.txt b/sed/v1.0.0/_sources/sed/io.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/io.rst.txt
rename to sed/v1.0.0/_sources/sed/io.rst.txt
diff --git a/sed/latest/_sources/sed/loader.rst.txt b/sed/v1.0.0/_sources/sed/loader.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/loader.rst.txt
rename to sed/v1.0.0/_sources/sed/loader.rst.txt
diff --git a/sed/latest/_sources/sed/metadata.rst.txt b/sed/v1.0.0/_sources/sed/metadata.rst.txt
similarity index 100%
rename from sed/latest/_sources/sed/metadata.rst.txt
rename to sed/v1.0.0/_sources/sed/metadata.rst.txt
diff --git a/sed/latest/_sources/tutorial/10_hextof_workflow_trXPS_bam_correction.ipynb.txt b/sed/v1.0.0/_sources/tutorial/10_hextof_workflow_trXPS_bam_correction.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/10_hextof_workflow_trXPS_bam_correction.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/10_hextof_workflow_trXPS_bam_correction.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/11_hextof_workflow_trXPS_energy_calibration_using_SB.ipynb.txt b/sed/v1.0.0/_sources/tutorial/11_hextof_workflow_trXPS_energy_calibration_using_SB.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/11_hextof_workflow_trXPS_energy_calibration_using_SB.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/11_hextof_workflow_trXPS_energy_calibration_using_SB.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/1_binning_fake_data.ipynb.txt b/sed/v1.0.0/_sources/tutorial/1_binning_fake_data.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/1_binning_fake_data.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/1_binning_fake_data.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/2_conversion_pipeline_for_example_time-resolved_ARPES_data.ipynb.txt b/sed/v1.0.0/_sources/tutorial/2_conversion_pipeline_for_example_time-resolved_ARPES_data.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/2_conversion_pipeline_for_example_time-resolved_ARPES_data.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/2_conversion_pipeline_for_example_time-resolved_ARPES_data.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/3_metadata_collection_and_export_to_NeXus.ipynb.txt b/sed/v1.0.0/_sources/tutorial/3_metadata_collection_and_export_to_NeXus.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/3_metadata_collection_and_export_to_NeXus.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/3_metadata_collection_and_export_to_NeXus.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/4_hextof_workflow.ipynb.txt b/sed/v1.0.0/_sources/tutorial/4_hextof_workflow.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/4_hextof_workflow.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/4_hextof_workflow.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/5_sxp_workflow.ipynb.txt b/sed/v1.0.0/_sources/tutorial/5_sxp_workflow.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/5_sxp_workflow.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/5_sxp_workflow.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/6_binning_with_time-stamped_data.ipynb.txt b/sed/v1.0.0/_sources/tutorial/6_binning_with_time-stamped_data.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/6_binning_with_time-stamped_data.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/6_binning_with_time-stamped_data.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/7_correcting_orthorhombic_symmetry.ipynb.txt b/sed/v1.0.0/_sources/tutorial/7_correcting_orthorhombic_symmetry.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/7_correcting_orthorhombic_symmetry.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/7_correcting_orthorhombic_symmetry.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/8_jittering_tutorial.ipynb.txt b/sed/v1.0.0/_sources/tutorial/8_jittering_tutorial.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/8_jittering_tutorial.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/8_jittering_tutorial.ipynb.txt
diff --git a/sed/latest/_sources/tutorial/9_hextof_workflow_trXPD.ipynb.txt b/sed/v1.0.0/_sources/tutorial/9_hextof_workflow_trXPD.ipynb.txt
similarity index 100%
rename from sed/latest/_sources/tutorial/9_hextof_workflow_trXPD.ipynb.txt
rename to sed/v1.0.0/_sources/tutorial/9_hextof_workflow_trXPD.ipynb.txt
diff --git a/sed/latest/_sources/user_guide/config.md.txt b/sed/v1.0.0/_sources/user_guide/config.md.txt
similarity index 100%
rename from sed/latest/_sources/user_guide/config.md.txt
rename to sed/v1.0.0/_sources/user_guide/config.md.txt
diff --git a/sed/latest/_sources/user_guide/index.md.txt b/sed/v1.0.0/_sources/user_guide/index.md.txt
similarity index 100%
rename from sed/latest/_sources/user_guide/index.md.txt
rename to sed/v1.0.0/_sources/user_guide/index.md.txt
diff --git a/sed/latest/_sources/user_guide/installation.md.txt b/sed/v1.0.0/_sources/user_guide/installation.md.txt
similarity index 100%
rename from sed/latest/_sources/user_guide/installation.md.txt
rename to sed/v1.0.0/_sources/user_guide/installation.md.txt
diff --git a/sed/latest/_sources/workflows/index.md.txt b/sed/v1.0.0/_sources/workflows/index.md.txt
similarity index 100%
rename from sed/latest/_sources/workflows/index.md.txt
rename to sed/v1.0.0/_sources/workflows/index.md.txt
diff --git a/sed/latest/_static/basic.css b/sed/v1.0.0/_static/basic.css
similarity index 100%
rename from sed/latest/_static/basic.css
rename to sed/v1.0.0/_static/basic.css
diff --git a/sed/latest/_static/doctools.js b/sed/v1.0.0/_static/doctools.js
similarity index 100%
rename from sed/latest/_static/doctools.js
rename to sed/v1.0.0/_static/doctools.js
diff --git a/sed/latest/_static/documentation_options.js b/sed/v1.0.0/_static/documentation_options.js
similarity index 88%
rename from sed/latest/_static/documentation_options.js
rename to sed/v1.0.0/_static/documentation_options.js
index 1cc336c..89435bb 100644
--- a/sed/latest/_static/documentation_options.js
+++ b/sed/v1.0.0/_static/documentation_options.js
@@ -1,5 +1,5 @@
const DOCUMENTATION_OPTIONS = {
- VERSION: '1.0.0a1.dev19+gf1bb527',
+ VERSION: '1.0.0',
LANGUAGE: 'en',
COLLAPSE_INDEX: false,
BUILDER: 'html',
diff --git a/sed/latest/_static/file.png b/sed/v1.0.0/_static/file.png
similarity index 100%
rename from sed/latest/_static/file.png
rename to sed/v1.0.0/_static/file.png
diff --git a/sed/latest/_static/language_data.js b/sed/v1.0.0/_static/language_data.js
similarity index 100%
rename from sed/latest/_static/language_data.js
rename to sed/v1.0.0/_static/language_data.js
diff --git a/sed/latest/_static/minus.png b/sed/v1.0.0/_static/minus.png
similarity index 100%
rename from sed/latest/_static/minus.png
rename to sed/v1.0.0/_static/minus.png
diff --git a/sed/latest/_static/nbsphinx-broken-thumbnail.svg b/sed/v1.0.0/_static/nbsphinx-broken-thumbnail.svg
similarity index 100%
rename from sed/latest/_static/nbsphinx-broken-thumbnail.svg
rename to sed/v1.0.0/_static/nbsphinx-broken-thumbnail.svg
diff --git a/sed/latest/_static/nbsphinx-code-cells.css b/sed/v1.0.0/_static/nbsphinx-code-cells.css
similarity index 100%
rename from sed/latest/_static/nbsphinx-code-cells.css
rename to sed/v1.0.0/_static/nbsphinx-code-cells.css
diff --git a/sed/latest/_static/nbsphinx-gallery.css b/sed/v1.0.0/_static/nbsphinx-gallery.css
similarity index 100%
rename from sed/latest/_static/nbsphinx-gallery.css
rename to sed/v1.0.0/_static/nbsphinx-gallery.css
diff --git a/sed/latest/_static/nbsphinx-no-thumbnail.svg b/sed/v1.0.0/_static/nbsphinx-no-thumbnail.svg
similarity index 100%
rename from sed/latest/_static/nbsphinx-no-thumbnail.svg
rename to sed/v1.0.0/_static/nbsphinx-no-thumbnail.svg
diff --git a/sed/latest/_static/plus.png b/sed/v1.0.0/_static/plus.png
similarity index 100%
rename from sed/latest/_static/plus.png
rename to sed/v1.0.0/_static/plus.png
diff --git a/sed/latest/_static/pygments.css b/sed/v1.0.0/_static/pygments.css
similarity index 100%
rename from sed/latest/_static/pygments.css
rename to sed/v1.0.0/_static/pygments.css
diff --git a/sed/latest/_static/scripts/bootstrap.js b/sed/v1.0.0/_static/scripts/bootstrap.js
similarity index 100%
rename from sed/latest/_static/scripts/bootstrap.js
rename to sed/v1.0.0/_static/scripts/bootstrap.js
diff --git a/sed/latest/_static/scripts/bootstrap.js.LICENSE.txt b/sed/v1.0.0/_static/scripts/bootstrap.js.LICENSE.txt
similarity index 100%
rename from sed/latest/_static/scripts/bootstrap.js.LICENSE.txt
rename to sed/v1.0.0/_static/scripts/bootstrap.js.LICENSE.txt
diff --git a/sed/latest/_static/scripts/bootstrap.js.map b/sed/v1.0.0/_static/scripts/bootstrap.js.map
similarity index 100%
rename from sed/latest/_static/scripts/bootstrap.js.map
rename to sed/v1.0.0/_static/scripts/bootstrap.js.map
diff --git a/sed/latest/_static/scripts/fontawesome.js b/sed/v1.0.0/_static/scripts/fontawesome.js
similarity index 100%
rename from sed/latest/_static/scripts/fontawesome.js
rename to sed/v1.0.0/_static/scripts/fontawesome.js
diff --git a/sed/latest/_static/scripts/fontawesome.js.LICENSE.txt b/sed/v1.0.0/_static/scripts/fontawesome.js.LICENSE.txt
similarity index 100%
rename from sed/latest/_static/scripts/fontawesome.js.LICENSE.txt
rename to sed/v1.0.0/_static/scripts/fontawesome.js.LICENSE.txt
diff --git a/sed/latest/_static/scripts/fontawesome.js.map b/sed/v1.0.0/_static/scripts/fontawesome.js.map
similarity index 100%
rename from sed/latest/_static/scripts/fontawesome.js.map
rename to sed/v1.0.0/_static/scripts/fontawesome.js.map
diff --git a/sed/latest/_static/scripts/pydata-sphinx-theme.js b/sed/v1.0.0/_static/scripts/pydata-sphinx-theme.js
similarity index 100%
rename from sed/latest/_static/scripts/pydata-sphinx-theme.js
rename to sed/v1.0.0/_static/scripts/pydata-sphinx-theme.js
diff --git a/sed/latest/_static/scripts/pydata-sphinx-theme.js.map b/sed/v1.0.0/_static/scripts/pydata-sphinx-theme.js.map
similarity index 100%
rename from sed/latest/_static/scripts/pydata-sphinx-theme.js.map
rename to sed/v1.0.0/_static/scripts/pydata-sphinx-theme.js.map
diff --git a/sed/latest/_static/searchtools.js b/sed/v1.0.0/_static/searchtools.js
similarity index 100%
rename from sed/latest/_static/searchtools.js
rename to sed/v1.0.0/_static/searchtools.js
diff --git a/sed/latest/_static/sphinx_highlight.js b/sed/v1.0.0/_static/sphinx_highlight.js
similarity index 100%
rename from sed/latest/_static/sphinx_highlight.js
rename to sed/v1.0.0/_static/sphinx_highlight.js
diff --git a/sed/latest/_static/styles/pydata-sphinx-theme.css b/sed/v1.0.0/_static/styles/pydata-sphinx-theme.css
similarity index 100%
rename from sed/latest/_static/styles/pydata-sphinx-theme.css
rename to sed/v1.0.0/_static/styles/pydata-sphinx-theme.css
diff --git a/sed/latest/_static/styles/pydata-sphinx-theme.css.map b/sed/v1.0.0/_static/styles/pydata-sphinx-theme.css.map
similarity index 100%
rename from sed/latest/_static/styles/pydata-sphinx-theme.css.map
rename to sed/v1.0.0/_static/styles/pydata-sphinx-theme.css.map
diff --git a/sed/latest/_static/styles/theme.css b/sed/v1.0.0/_static/styles/theme.css
similarity index 100%
rename from sed/latest/_static/styles/theme.css
rename to sed/v1.0.0/_static/styles/theme.css
diff --git a/sed/latest/_static/vendor/fontawesome/webfonts/fa-brands-400.ttf b/sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-brands-400.ttf
similarity index 100%
rename from sed/latest/_static/vendor/fontawesome/webfonts/fa-brands-400.ttf
rename to sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-brands-400.ttf
diff --git a/sed/latest/_static/vendor/fontawesome/webfonts/fa-brands-400.woff2 b/sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-brands-400.woff2
similarity index 100%
rename from sed/latest/_static/vendor/fontawesome/webfonts/fa-brands-400.woff2
rename to sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-brands-400.woff2
diff --git a/sed/latest/_static/vendor/fontawesome/webfonts/fa-regular-400.ttf b/sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-regular-400.ttf
similarity index 100%
rename from sed/latest/_static/vendor/fontawesome/webfonts/fa-regular-400.ttf
rename to sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-regular-400.ttf
diff --git a/sed/latest/_static/vendor/fontawesome/webfonts/fa-regular-400.woff2 b/sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-regular-400.woff2
similarity index 100%
rename from sed/latest/_static/vendor/fontawesome/webfonts/fa-regular-400.woff2
rename to sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-regular-400.woff2
diff --git a/sed/latest/_static/vendor/fontawesome/webfonts/fa-solid-900.ttf b/sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-solid-900.ttf
similarity index 100%
rename from sed/latest/_static/vendor/fontawesome/webfonts/fa-solid-900.ttf
rename to sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-solid-900.ttf
diff --git a/sed/latest/_static/vendor/fontawesome/webfonts/fa-solid-900.woff2 b/sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-solid-900.woff2
similarity index 100%
rename from sed/latest/_static/vendor/fontawesome/webfonts/fa-solid-900.woff2
rename to sed/v1.0.0/_static/vendor/fontawesome/webfonts/fa-solid-900.woff2
diff --git a/sed/latest/_static/webpack-macros.html b/sed/v1.0.0/_static/webpack-macros.html
similarity index 100%
rename from sed/latest/_static/webpack-macros.html
rename to sed/v1.0.0/_static/webpack-macros.html
diff --git a/sed/latest/genindex.html b/sed/v1.0.0/genindex.html
similarity index 99%
rename from sed/latest/genindex.html
rename to sed/v1.0.0/genindex.html
index 249dbf8..96fada4 100644
--- a/sed/latest/genindex.html
+++ b/sed/v1.0.0/genindex.html
@@ -7,7 +7,7 @@
- Index — SED 1.0.0a1.dev19+gf1bb527 documentation
+ Index — SED 1.0.0 documentation
@@ -37,7 +37,7 @@
-
+
@@ -46,7 +46,7 @@
@@ -54,7 +54,7 @@
-
+
@@ -116,7 +116,7 @@
-
SED 1.0.0a1.dev19+gf1bb527 documentation
+
SED 1.0.0 documentation
diff --git a/sed/latest/index.html b/sed/v1.0.0/index.html
similarity index 98%
rename from sed/latest/index.html
rename to sed/v1.0.0/index.html
index 87d9832..86d2286 100644
--- a/sed/latest/index.html
+++ b/sed/v1.0.0/index.html
@@ -9,7 +9,7 @@
- SED documentation — SED 1.0.0a1.dev19+gf1bb527 documentation
+ SED documentation — SED 1.0.0 documentation
@@ -39,7 +39,7 @@
-
+
@@ -50,7 +50,7 @@
@@ -59,7 +59,7 @@
-
+
@@ -121,7 +121,7 @@
-
SED 1.0.0a1.dev19+gf1bb527 documentation
+
SED 1.0.0 documentation
diff --git a/sed/latest/misc/contributing.html b/sed/v1.0.0/misc/contributing.html
similarity index 98%
rename from sed/latest/misc/contributing.html
rename to sed/v1.0.0/misc/contributing.html
index b011fe5..416e797 100644
--- a/sed/latest/misc/contributing.html
+++ b/sed/v1.0.0/misc/contributing.html
@@ -8,7 +8,7 @@
- Contributing to sed — SED 1.0.0a1.dev19+gf1bb527 documentation
+ Contributing to sed — SED 1.0.0 documentation
@@ -38,7 +38,7 @@
-
+
@@ -47,7 +47,7 @@
@@ -57,7 +57,7 @@
-
+
@@ -119,7 +119,7 @@
-
SED 1.0.0a1.dev19+gf1bb527 documentation
+
SED 1.0.0 documentation
diff --git a/sed/latest/misc/contribution.html b/sed/v1.0.0/misc/contribution.html
similarity index 98%
rename from sed/latest/misc/contribution.html
rename to sed/v1.0.0/misc/contribution.html
index 64c51bf..39bae27 100644
--- a/sed/latest/misc/contribution.html
+++ b/sed/v1.0.0/misc/contribution.html
@@ -8,7 +8,7 @@
- Development — SED 1.0.0a1.dev19+gf1bb527 documentation
+ Development — SED 1.0.0 documentation
@@ -38,7 +38,7 @@
-
+
@@ -47,7 +47,7 @@
@@ -57,7 +57,7 @@
-
+
@@ -119,7 +119,7 @@
-
SED 1.0.0a1.dev19+gf1bb527 documentation
+
SED 1.0.0 documentation
diff --git a/sed/latest/misc/maintain.html b/sed/v1.0.0/misc/maintain.html
similarity index 98%
rename from sed/latest/misc/maintain.html
rename to sed/v1.0.0/misc/maintain.html
index 2e8ffc5..83af02e 100644
--- a/sed/latest/misc/maintain.html
+++ b/sed/v1.0.0/misc/maintain.html
@@ -8,7 +8,7 @@
- How to Maintain — SED 1.0.0a1.dev19+gf1bb527 documentation
+ How to Maintain — SED 1.0.0 documentation
@@ -38,7 +38,7 @@
-
+
@@ -47,7 +47,7 @@
@@ -56,7 +56,7 @@
-
+
@@ -118,7 +118,7 @@
-
SED 1.0.0a1.dev19+gf1bb527 documentation
+
SED 1.0.0 documentation
diff --git a/sed/latest/objects.inv b/sed/v1.0.0/objects.inv
similarity index 99%
rename from sed/latest/objects.inv
rename to sed/v1.0.0/objects.inv
index f418b44..6ed9f01 100644
Binary files a/sed/latest/objects.inv and b/sed/v1.0.0/objects.inv differ
diff --git a/sed/latest/py-modindex.html b/sed/v1.0.0/py-modindex.html
similarity index 98%
rename from sed/latest/py-modindex.html
rename to sed/v1.0.0/py-modindex.html
index 054dbd6..0dce1ad 100644
--- a/sed/latest/py-modindex.html
+++ b/sed/v1.0.0/py-modindex.html
@@ -7,7 +7,7 @@
- Python Module Index — SED 1.0.0a1.dev19+gf1bb527 documentation
+ Python Module Index — SED 1.0.0 documentation
@@ -37,7 +37,7 @@
-
+
@@ -46,7 +46,7 @@
@@ -55,7 +55,7 @@
-
+
@@ -119,7 +119,7 @@
-
SED 1.0.0a1.dev19+gf1bb527 documentation
+
SED 1.0.0 documentation
diff --git a/sed/latest/search.html b/sed/v1.0.0/search.html
similarity index 97%
rename from sed/latest/search.html
rename to sed/v1.0.0/search.html
index b7534ed..1e19422 100644
--- a/sed/latest/search.html
+++ b/sed/v1.0.0/search.html
@@ -6,7 +6,7 @@
- Search - SED 1.0.0a1.dev19+gf1bb527 documentation
+ Search - SED 1.0.0 documentation
@@ -36,7 +36,7 @@
-
+
@@ -45,7 +45,7 @@
@@ -56,7 +56,7 @@
-
+
@@ -118,7 +118,7 @@
-
If it is your beamtime, you can read the raw data and write to the processed directory. For the public data, you cannot write to the processed directory.
+
If you are on Maxwell, the beamtime paths are used directly. Otherwise, the data is downloaded to the current directory from Zenodo: https://zenodo.org/records/12609441
+
+
[2]:
+
+
+
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
+if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
+    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
+    buffer_path = beamtime_dir + "/processed/tutorial/"
+else:
+    # data_path can be defined and used to store the data in a specific location
+    dataset.get("W110")  # Put in Path to a storage of at least 10 GByte free space.
+    path = dataset.dir
+    buffer_path = path + "/processed/"
+
+
+
+
+
+
+
+
+INFO - Not downloading W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/W110".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/W110"
+INFO - W110 data is already present.
+
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
+
+
[3]:
+
+
+
# pick the default configuration file for hextof@FLASH
+config_file = Path('../src/sed/config/flash_example_config.yaml')
+assert config_file.exists()
+
+
+
+
+
[4]:
+
+
+
# here we set up a dictionary that will be used to override the path configuration
+config_override = {
+    "core": {
+        "beamtime_id": 11019101,
+        "paths": {
+            "raw": path,
+            "processed": buffer_path,
+        },
+    },
+}
+
First, we take a look at our sideband measurement before any corrections. The sidebands on the W4f core levels can be used as a measure of the pump-probe cross-correlation, and hence of our temporal resolution. We plot the delay stage position vs. energy, normalized by acquisition time.
As we see, the sidebands are quite broad. One possible reason for this is long- or short-term drift (jitter) of the FEL arrival time with respect to e.g. the optical laser, or differences in the intra-bunch arrival time. To check and correct for this, we can look at the beam arrival monitor (BAM). The BAM gives a pulse-resolved measure of the FEL arrival time with respect to a master clock.
To correct the SASE jitter using the information from the bam column, and to calibrate the pump-probe delay axis, we need to shift the delay stage values to center time zero, the pump-probe temporal overlap.
+
+
[13]:
+
+
+
sp_44498.add_delay_offset(
+    constant=-1448,  # this is the time zero position determined from the side band fit
+    flip_delay_axis=True,  # invert the direction of the delay axis
+    columns=['bam'],  # use the bam to offset the values
+    weights=[-0.001],  # bam is in fs, delay in ps
+    preserve_mean=True,  # preserve the mean of the delay axis to keep the t0 position
+)
+
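Conceptually, the bam offset with preserve_mean=True subtracts each pulse's scaled BAM reading while keeping the mean of the delay axis fixed, so the t0 position is not shifted. A minimal numpy sketch of this arithmetic (illustrative values, not the actual sed implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
delay = rng.uniform(-1456.0, -1440.0, 1000)   # delay stage readings (ps), made up
bam = rng.normal(0.0, 60.0, 1000)             # per-pulse BAM jitter (fs), made up

offset = -0.001 * bam                         # weights=[-0.001]: fs -> ps, sign flipped
corrected = delay + (offset - offset.mean())  # preserve_mean: the axis mean is unchanged
```

Per-event, each electron's delay is thus sharpened by its own pulse's arrival-time measurement, without moving the overall delay axis.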
We will now fit the tof-energy relation. This is done by finding the maxima of a peak in the ToF spectrum, and then fitting the square-root relation to obtain the calibration parameters.
Visualize trXPS data binned in dldTimeSteps and the corrected delay axis to prepare for energy calibration using SB#
+
We now prepare for an alternative energy calibration based on the side-bands of the time-dependent dataset. This is e.g. helpful if no bias series has been obtained.
+INFO - Folder config loaded from: [/home/runner/work/sed/sed/docs/tutorial/sed_config.yaml]
+INFO - System config loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/flash_example_config.yaml]
+INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sed/config/default.yaml]
+INFO - Reading files: 0 new files of 14 total.
+loading complete in 0.08 s
+INFO - add_jitter: Added jitter to columns ['dldPosX', 'dldPosY', 'dldTimeSteps'].
+
+
+
+
+
We correct the delay stage, t0 position, and BAM (see previous tutorial)#
+
+
[13]:
+
+
+
sp_44498.add_delay_offset(
+    constant=-1448,  # this is the time zero position determined from the side band fit
+    flip_delay_axis=True,  # invert the direction of the delay axis
+    columns=['bam'],  # use the bam to offset the values
+    weights=[-0.001],  # bam is in fs, delay in ps
+    preserve_mean=True,  # preserve the mean of the delay axis to keep the t0 position
+)
+
We will now fit the tof-energy relation. This is done using the maxima of peaks in the ToF spectrum and their known kinetic energies (e.g. the W4f peaks at -31.4 and -33.6 eV) together with their sidebands of different orders, accounting for the pump photon energy (1030 nm = 1.2 eV). The calibration parameters are obtained by fitting the square-root relation.
+
+
[16]:
+
+
+
### Kinetic energy of W4f peaks and their SB
+ref_energy = -30.2
+sp_44498.ec.biases = -1 * np.array([-30.2, -31.4, -32.6, -33.6, -34.8])
+sp_44498.ec.peaks = np.expand_dims(data[peaks]['dldTimeSteps'].data, 1)
+sp_44498.ec.tof = res_corr.dldTimeSteps.data
+
+sp_44498.calibrate_energy_axis(
+    ref_energy=ref_energy,
+    method="lmfit",
+    d={'value': 1.0, 'min': .8, 'max': 1.0, 'vary': True},
+    t0={'value': 5e-7, 'min': 1e-7, 'max': 1e-6, 'vary': True},
+    E0={'value': -100., 'min': -200, 'max': 15, 'vary': True},
+)
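The idea behind this fit can be illustrated with a self-contained toy version: peak positions in time of flight follow E(t) = c/(t - t0)^2 + E0, and the calibration recovers (c, t0, E0) from known peak energies. This sketch uses dimensionless constants and scipy instead of lmfit; it is not the actual calibrate_energy_axis internals:

```python
import numpy as np
from scipy.optimize import curve_fit

def energy(t, c, t0, e0):
    # toy tof->energy relation: E(t) = c / (t - t0)**2 + E0
    return c / (t - t0) ** 2 + e0

# synthetic peak positions generated from known parameters
true = (2.0, 0.5, -100.0)
t_peaks = np.array([1.5, 2.0, 2.5, 3.0, 3.5])   # arbitrary time units
e_peaks = energy(t_peaks, *true)                # known reference energies

popt, _ = curve_fit(energy, t_peaks, e_peaks, p0=(1.0, 0.3, -90.0))
```

With more reference peaks than free parameters, the fit is over-determined and the recovered parameters define the full energy axis.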
+
While this calibration method gives a reasonable approximation of the energy axis, there are some deviations from the bias method, so it should be used with care.
+
+
[19]:
+
+
+
axes = ['energy']
+ranges = [[-37.5, -27.5]]
+bins = [200]
+res_1D = sp_44498.compute(bins=bins, axes=axes, ranges=ranges)
+
+plt.figure()
+(res_ref / res_ref.max()).plot(label="bias series calibration")
+(res_1D / res_1D.max()).plot(label="side band calibration")
+plt.legend()
+
Binning demonstration on locally generated fake data#
+
In this example, we generate a table with random data simulating a single-event dataset. We showcase the binning method, first on a simple single table using the bin_partition method, and then with the distributed bin_dataframe method, using dask dataframes. The first method is never really called directly, as it is simply the function called by bin_dataframe on each partition of the dask dataframe.
+/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/dask/dataframe/__init__.py:42: FutureWarning:
+Dask dataframe query planning is disabled because dask-expr is not installed.
+
+You can install it with `pip install dask[dataframe]` or `conda install dask`.
+This will raise in a future version.
+
+ warnings.warn(msg, FutureWarning)
+
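At its core, binning one partition of such an event table amounts to an n-dimensional histogram. A minimal sketch with numpy and pandas (the column names and bin settings are made up for illustration):

```python
import numpy as np
import pandas as pd

# fake single-event table, analogous to the randomly generated dataframe
rng = np.random.default_rng(42)
df = pd.DataFrame({
    "x": rng.uniform(0.0, 1.0, 10_000),
    "y": rng.normal(0.5, 0.15, 10_000),
})

# binning a partition boils down to an n-dimensional histogram over chosen axes
hist, edges = np.histogramdd(
    df[["x", "y"]].to_numpy(),
    bins=(50, 50),
    range=((0, 1), (0, 1)),
)
```

Events falling outside the specified ranges are simply dropped from the histogram.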
Compute distributed binning on the partitioned dask dataframe#
+
In this example, the small dataset does not give a significant improvement over the pandas implementation, at least with this number of partitions. A single partition would be faster (you can try…), but we use multiple partitions for demonstration purposes.
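The distributed scheme works because per-partition histograms can simply be summed: binning each chunk separately and reducing gives exactly the same result as binning the full table at once. A numpy-only sketch of this map-reduce step (no dask required):

```python
import numpy as np

rng = np.random.default_rng(1)
events = rng.uniform(0.0, 1.0, (100_000, 2))

# "map": split the event table into partitions and bin each one independently
parts = np.array_split(events, 8)
partial = [
    np.histogramdd(p, bins=(20, 20), range=((0, 1), (0, 1)))[0]
    for p in parts
]

# "reduce": summing the partial histograms recovers the full-table result
total = np.sum(partial, axis=0)
full, _ = np.histogramdd(events, bins=(20, 20), range=((0, 1), (0, 1)))
```

Because the reduction is an exact integer sum, the partitioned result is bit-identical to the single-pass one.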
Demonstration of the conversion pipeline using time-resolved ARPES data stored on Zenodo#
+
In this example, we pull some time-resolved ARPES data from Zenodo and load it into the sed package using functions of the mpes package. Then, we run a conversion pipeline on it, containing steps for visualizing the channels, correcting image distortions, calibrating the momentum space, correcting for energy distortions, and calibrating the energy axis. Finally, the data are binned in calibrated axes. For performance reasons, it is best to store the data on locally attached storage (not a network drive).
+This can also be achieved transparently using the included MirrorUtil class.
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
+data_path = dataset.dir  # This is the path to the data
+scandir, caldir = dataset.subdirs  # scandir contains the data, caldir contains the calibration files
+
+
+
+
+
+
+
+
+INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
+INFO - WSe2 data is already present.
+
+
+
+
[3]:
+
+
+
# create sed processor using the config file:
+sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={}, verbose=True)
+
Bin and load part of the dataframe in detector coordinates, and choose an energy plane where the high-symmetry points can be well identified. Either use the interactive tool, or pre-select the range:
Next, we select a number of features corresponding to the rotational symmetry of the material, plus the center. These can either be auto-detected (for well-isolated points) or provided as a list (these can be read off the graph in the cell above). They are then symmetrized according to the rotational symmetry, and a spline-warping correction for the x/y coordinates is calculated, which corrects for any geometric distortions from the perfect n-fold rotational symmetry.
+
+
[9]:
+
+
+
#features = np.array([[203.2, 341.96], [299.16, 345.32], [350.25, 243.70], [304.38, 149.88], [199.52, 152.48], [154.28, 242.27], [248.29, 248.62]])
+#sp.define_features(features=features, rotation_symmetry=6, include_center=True, apply=True)
+# Manual selection: Use a GUI tool to select peaks:
+#sp.define_features(rotation_symmetry=6, include_center=True)
+# Autodetect: Uses the DAOStarFinder routine to locate maxima.
+# Parameters are:
+# fwhm: Full-width at half maximum of peaks.
+# sigma: Number of standard deviations above the mean value of the image peaks must have.
+# sigma_radius: number of standard deviations around a peak within which peaks are fitted
+sp.define_features(rotation_symmetry=6, auto_detect=True, include_center=True, fwhm=10, sigma=12, sigma_radius=4, apply=True)
+
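As a side note, the symmetrization of the landmarks can be sketched in plain NumPy, independent of sed: given the center and one outer point, the remaining n-fold symmetric positions follow from rotating the offset vector about the center. The coordinates used here are illustrative only, not sed's actual algorithm.

```python
import numpy as np

def symmetric_landmarks(center, first_point, n=6):
    """Generate n rotationally symmetric points around a center."""
    cx, cy = center
    dx, dy = first_point[0] - cx, first_point[1] - cy
    angles = 2 * np.pi * np.arange(n) / n
    # rotate the offset vector by multiples of 2*pi/n
    xs = cx + dx * np.cos(angles) - dy * np.sin(angles)
    ys = cy + dx * np.sin(angles) + dy * np.cos(angles)
    return np.column_stack([xs, ys])

pts = symmetric_landmarks((249.2, 249.2), (203.0, 343.0), n=6)
```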
Generate the nonlinear correction using the splinewarp algorithm. If no landmarks have been defined in the previous step, default parameters from the config are used.
+
+
[10]:
+
+
+
# Option whether a central point shall be fixed in the determination of the correction
+sp.generate_splinewarp(include_center=True)
+
+
+
+
+
+
+
+
+INFO - Calculated thin spline correction based on the following landmarks:
+pouter_ord: [[203.00184761 342.98205366]
+ [299.87630041 346.19474964]
+ [350.95544165 244.77430106]
+ [305.63519239 150.21702617]
+ [199.37691593 152.83212495]
+ [153.41124117 243.05883096]]
+pcent: (249.23240623877487, 249.24332926024232)
+
To adjust scaling, position and orientation of the corrected momentum space image, you can apply further affine transformations to the distortion correction field. Here, first a potential scaling is applied, next a translation, and finally a rotation around the center of the image (defined via the config). One can either use an interactive tool, or provide the adjusted values and apply them directly.
First, the momentum scaling needs to be calibrated. One can either provide the coordinates of one point outside the center together with its distance to the Brillouin zone center (which is assumed to be located in the center of the image); specify two points on the image and their distance (where the 2nd point marks the BZ center); or provide absolute k-coordinates of two distinct momentum points.
+
If no points are provided, an interactive tool is created. Here, left mouse click selects the off-center point (brillouin_zone_centered=True) or toggle-selects the off-center and center point.
+
+
[14]:
+
+
+
k_distance = 2 / np.sqrt(3) * np.pi / 3.28  # k-distance of the K-point in a hexagonal Brillouin zone
+#sp.calibrate_momentum_axes(k_distance = k_distance)
+point_a = [308, 345]
+sp.calibrate_momentum_axes(point_a=point_a, k_distance=k_distance, apply=True)
+#point_b = [247, 249]
+#sp.calibrate_momentum_axes(point_a=point_a, point_b=point_b, k_coord_a=[.5, 1.1], k_coord_b=[0, 0], equiscale=False)
+
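The underlying arithmetic of this calibration is simple: the known k-distance divided by the measured pixel distance between the BZ center and the selected point gives the conversion factor. A sketch with an assumed center position (sed determines the actual center itself):

```python
import numpy as np

# K-point distance for a hexagonal lattice with a = 3.28 Angstrom
k_distance = 2 / np.sqrt(3) * np.pi / 3.28
point_a = np.array([308.0, 345.0])   # K point in pixel coordinates
center = np.array([249.2, 249.2])    # assumed BZ center (hypothetical value)

pixel_distance = np.linalg.norm(point_a - center)
k_per_pixel = k_distance / pixel_distance  # conversion factor, 1/Angstrom per pixel
```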
The purpose of the energy correction is to correct for any momentum-dependent distortion of the energy axis, e.g. from geometric effects in the flight tube, or from space charge.
Here, one can select the functional form to be used, and adjust its parameters. The binned data used for the momentum calibration is plotted around the Fermi energy (defined by tof_fermi), and the correction function is plotted on top. Possible correction functions are: “spherical” (parameter: diameter), “Lorentzian” (parameter: gamma), “Gaussian” (parameter: sigma), and “Lorentzian_asymmetric” (parameters: gamma, amplitude2, gamma2).
+
One can either use an interactive alignment tool, or provide parameters directly.
In a first step, the data are loaded, binned along the TOF dimension, and normalized. The used bias voltages can be either provided, or read from attributes in the source files if present.
Next, the same peak or feature needs to be selected in each curve. For this, one needs to define “ranges” for each curve, within which the peak of interest is located. One can either provide these ranges manually, or provide one range for a “reference” curve, and infer the ranges for the other curves using a dynamic time warping algorithm.
+
+
[21]:
+
+
+
# Option 1 = specify the ranges containing a common feature (e.g. an equivalent peak) for all bias scans
+# rg = [(129031.03103103103, 129621.62162162163), (129541.54154154155, 130142.14214214214), (130062.06206206206, 130662.66266266267), (130612.61261261262, 131213.21321321322), (131203.20320320321, 131803.8038038038), (131793.7937937938, 132384.38438438438), (132434.43443443443, 133045.04504504506), (133105.10510510512, 133715.71571571572), (133805.8058058058, 134436.43643643643), (134546.54654654654, 135197.1971971972)]
+# sp.find_bias_peaks(ranges=rg, infer_others=False)
+# Option 2 = specify the range for one curve and infer the others
+# This will open an interactive tool to select the correct ranges for the curves.
+# IMPORTANT: Don't choose the range too narrow about a peak, and choose a refid
+# somewhere in the middle or towards larger biases!
+rg = (66100, 67000)
+sp.find_bias_peaks(ranges=rg, ref_id=5, infer_others=True, apply=True)
+
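Conceptually, finding a bias peak inside a given range is just an argmax over a window. A toy single-curve version with a synthetic Gaussian peak (the range inference across curves in sed additionally uses dynamic time warping):

```python
import numpy as np

tof = np.arange(60000, 70000, dtype=float)
spectrum = np.exp(-0.5 * ((tof - 66500) / 150.0) ** 2)  # synthetic bias curve

lo, hi = 66100, 67000  # range expected to contain the feature
window = (tof >= lo) & (tof <= hi)
peak_pos = tof[window][np.argmax(spectrum[window])]
```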
Next, the detected peak positions and bias voltages are used to determine the calibration function. Essentially, the functional form Energy(TOF) is determined either by least-squares fitting of the form d^2/(t-t0)^2 via lmfit (method: “lmfit”), or by analytically obtaining a polynomial approximation (method: “lstsq” or “lsqr”). The parameter ref_energy is used to define the absolute energy position of the feature used for calibration in the calibrated energy
+scale. energy_scale can be either “kinetic” (decreasing energy with increasing TOF), or “binding” (increasing energy with increasing TOF).
+
After calculating the calibration, all traces corrected with the calibration are plotted on top of each other, and the calibration function (Energy(TOF)) is plotted together with the extracted features.
+
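For intuition, the fitted model can be evaluated directly. Below is a sketch of the functional form with made-up parameters; the sign convention shown (kinetic energy decreasing with increasing TOF) follows the description above, though sed's internal parametrization may differ in detail.

```python
import numpy as np

# hypothetical calibration parameters, in the spirit of the fit performed below
d, t0, E0 = 1.0, 8e-7, -30.0

def energy_from_tof(t, d, t0, E0, energy_scale="kinetic"):
    """Evaluate the model E(t) = E0 + sign * (d / (t - t0))**2.
    For a kinetic energy scale, energy decreases with increasing TOF."""
    sign = 1.0 if energy_scale == "kinetic" else -1.0
    return E0 + sign * (d / (t - t0)) ** 2

tof = np.array([4e-6, 5e-6, 6e-6])
E = energy_from_tof(tof, d, t0, E0)
```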
+
[22]:
+
+
+
# Eref can be used to set the absolute energy (kinetic energy, E-EF, etc.) of the feature used for energy calibration (if known)
+Eref = -1.3
+# the lmfit method uses a fit of (d/(t-t0))**2 to determine the energy calibration
+# limits and starting values for the fitting parameters can be provided as dictionaries
+sp.calibrate_energy_axis(
+    ref_energy=Eref,
+    method="lmfit",
+    energy_scale='kinetic',
+    d={'value': 1.0, 'min': .7, 'max': 1.2, 'vary': True},
+    t0={'value': 8e-7, 'min': 1e-7, 'max': 1e-6, 'vary': True},
+    E0={'value': 0., 'min': -100, 'max': 0, 'vary': True},
+)
+
Finally, the energy axis is added to the dataframe. Here, the applied bias voltage of the measurement is taken into account to provide the correct energy offset. If the bias cannot be read from the file, it can be provided manually.
+
+
[24]:
+
+
+
sp.append_energy_axis(bias_voltage=16.8)
+
+
+
+
+
+
+
+
+INFO - Adding energy column to dataframe:
+INFO - Using energy calibration parameters generated on 03/06/2025, 09:26:57
+INFO - Dask DataFrame Structure:
+ X Y t ADC Xm Ym kx ky tm energy
+npartitions=100
+ float64 float64 float64 float64 float64 float64 float64 float64 float64 float64
+ ... ... ... ... ... ... ... ... ... ...
+... ... ... ... ... ... ... ... ... ... ...
+ ... ... ... ... ... ... ... ... ... ...
+ ... ... ... ... ... ... ... ... ... ...
+Dask Name: assign, 243 graph layers
+
The delay axis is calculated from the ADC input column based on the provided delay range. Alternatively, the delay scan range can also be extracted from attributes inside a source file, if present.
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
+data_path = dataset.dir  # This is the path to the data
+scandir, _ = dataset.subdirs  # scandir contains the data, _ contains the calibration files
+
+
+
+
+
+
+
+
+INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
+INFO - WSe2 data is already present.
+
+
+
+
[3]:
+
+
+
metadata = {}
+# manual metadata. These should ideally come from an Electronic Lab Notebook.
+# General
+metadata['experiment_summary'] = 'WSe2 XUV NIR pump probe data.'
+metadata['entry_title'] = 'Valence Band Dynamics - 800 nm linear s-polarized pump, 0.6 mJ/cm2 absorbed fluence'
+metadata['experiment_title'] = 'Valence band dynamics of 2H-WSe2'
+
+# User
+# Fill general parameters of NXuser
+# TODO: discuss how to deal with multiple users?
+metadata['user0'] = {}
+metadata['user0']['name'] = 'Julian Maklar'
+metadata['user0']['role'] = 'Principal Investigator'
+metadata['user0']['affiliation'] = 'Fritz Haber Institute of the Max Planck Society'
+metadata['user0']['address'] = 'Faradayweg 4-6, 14195 Berlin'
+metadata['user0']['email'] = 'maklar@fhi-berlin.mpg.de'
+
+# NXinstrument
+metadata['instrument'] = {}
+metadata['instrument']['energy_resolution'] = 140.
+# analyzer
+metadata['instrument']['analyzer'] = {}
+metadata['instrument']['analyzer']['slow_axes'] = "delay"  # the scanned axes
+metadata['instrument']['analyzer']['spatial_resolution'] = 10.
+metadata['instrument']['analyzer']['energy_resolution'] = 110.
+metadata['instrument']['analyzer']['momentum_resolution'] = 0.08
+metadata['instrument']['analyzer']['working_distance'] = 4.
+metadata['instrument']['analyzer']['lens_mode'] = "6kV_kmodem4.0_30VTOF.sav"
+
+# probe beam
+metadata['instrument']['beam'] = {}
+metadata['instrument']['beam']['probe'] = {}
+metadata['instrument']['beam']['probe']['incident_energy'] = 21.7
+metadata['instrument']['beam']['probe']['incident_energy_spread'] = 0.11
+metadata['instrument']['beam']['probe']['pulse_duration'] = 20.
+metadata['instrument']['beam']['probe']['frequency'] = 500.
+metadata['instrument']['beam']['probe']['incident_polarization'] = [1, 1, 0, 0]  # p pol Stokes vector
+metadata['instrument']['beam']['probe']['extent'] = [80., 80.]
+# pump beam
+metadata['instrument']['beam']['pump'] = {}
+metadata['instrument']['beam']['pump']['incident_energy'] = 1.55
+metadata['instrument']['beam']['pump']['incident_energy_spread'] = 0.08
+metadata['instrument']['beam']['pump']['pulse_duration'] = 35.
+metadata['instrument']['beam']['pump']['frequency'] = 500.
+metadata['instrument']['beam']['pump']['incident_polarization'] = [1, -1, 0, 0]  # s pol Stokes vector
+metadata['instrument']['beam']['pump']['incident_wavelength'] = 800.
+metadata['instrument']['beam']['pump']['average_power'] = 300.
+metadata['instrument']['beam']['pump']['pulse_energy'] = metadata['instrument']['beam']['pump']['average_power'] / metadata['instrument']['beam']['pump']['frequency']  # µJ
+metadata['instrument']['beam']['pump']['extent'] = [230., 265.]
+metadata['instrument']['beam']['pump']['fluence'] = 0.15
+
+# sample
+metadata['sample'] = {}
+metadata['sample']['preparation_date'] = '2019-01-13T10:00:00+00:00'
+metadata['sample']['preparation_description'] = 'Cleaved'
+metadata['sample']['sample_history'] = 'Cleaved'
+metadata['sample']['chemical_formula'] = 'WSe2'
+metadata['sample']['description'] = 'Sample'
+metadata['sample']['name'] = 'WSe2 Single Crystal'
+
+metadata['file'] = {}
+metadata['file']["trARPES:Carving:TEMP_RBV"] = 300.
+metadata['file']["trARPES:XGS600:PressureAC:P_RD"] = 5.e-11
+metadata['file']["KTOF:Lens:Extr:I"] = -0.12877
+metadata['file']["KTOF:Lens:UDLD:V"] = 399.99905
+metadata['file']["KTOF:Lens:Sample:V"] = 17.19976
+metadata['file']["KTOF:Apertures:m1.RBV"] = 3.729931
+metadata['file']["KTOF:Apertures:m2.RBV"] = -5.200078
+metadata['file']["KTOF:Apertures:m3.RBV"] = -11.000425
+
+# Sample motor positions
+metadata['file']['trARPES:Carving:TRX.RBV'] = 7.1900000000000004
+metadata['file']['trARPES:Carving:TRY.RBV'] = -6.1700200225439552
+metadata['file']['trARPES:Carving:TRZ.RBV'] = 33.4501953125
+metadata['file']['trARPES:Carving:THT.RBV'] = 423.30500940561586
+metadata['file']['trARPES:Carving:PHI.RBV'] = 0.99931647456264949
+metadata['file']['trARPES:Carving:OMG.RBV'] = 11.002500171914066
+
+
+
+
+
[4]:
+
+
+
# create sed processor using the config file, and collect the meta data from the files:
+sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={}, metadata=metadata, collect_metadata=True)
+
The paths are such that if you are on Maxwell, it uses those. Otherwise, data is downloaded into the current directory from Zenodo.
+
Generally, if it is your beamtime, you can both read the raw data and write to the processed directory. However, for the public data, you cannot write to the processed directory.
+
+
[2]:
+
+
+
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
+if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
+    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
+    meta_path = beamtime_dir + "/shared"
+    buffer_path = "Gd_W110/processed/"
+else:
+    # data_path can be defined and used to store the data in a specific location
+    dataset.get("Gd_W110")  # Put in Path to a storage of at least 10 GByte free space.
+    path = dataset.dir
+    meta_path = path
+    buffer_path = path + "/processed/"
+
+
+
+
+
+
+
+
+INFO - Not downloading Gd_W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/Gd_W110".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "Gd_W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/Gd_W110"
+INFO - Gd_W110 data is already present.
+
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
+
+
[3]:
+
+
+
# pick the default configuration file for hextof@FLASH
+config_file = Path('../src/sed/config/flash_example_config.yaml')
+assert config_file.exists()
+
+
+
+
The path to the processed folder can also be defined as a keyword argument later.
+
+
[4]:
+
+
+
# here we set up a dictionary that will be used to override the path configuration
+config_override = {
+    "core": {
+        "paths": {
+            "raw": path,
+            "processed": buffer_path,
+        },
+    },
+}
+
In this notebook, we will show how calibration parameters can be generated. Therefore we want to clean the local directory of previously generated files.
+
WARNING running the cell below will delete the “sed_config.yaml” file in the local directory. If these contain precious calibration parameters, DO NOT RUN THIS CELL.
The following extra arguments are available for the FlashLoader. None of them are required, but they are helpful to know about.
+
+
force_recreate: Probably the most useful. In case the config is changed, this allows reducing the raw h5 files to the intermediate parquet format again. Otherwise, the schema between the saved dataframe and the config differs.
+
debug: Setting this runs the reduction process in serial, so the errors are easier to find.
+
remove_invalid_files: Sometimes some critical channels defined in the config are missing in some raw files. Setting this will make sure to ignore such files.
+
filter_timed_by_electron: Defaults to True. When True, the timed dataframe will only contain data points where valid electron events were detected. When False, all timed data points are included regardless of electron detection (see OpenCOMPES/sed#307)
+
processed_dir: Location to save the reduced parquet files.
+
scicat_token: Token from your scicat account.
+
detector: ‘1Q’ and ‘4Q’ detector for example. Useful when there are separate raw files for each detector.
+
+
+
[6]:
+
+
+
sp = SedProcessor(runs=[44762], config=config_override, system_config=config_file, collect_metadata=False)
+# You can set collect_metadata=True if the scicat_url and scicat_token are defined
+
+
+
+
+
+
+
+
+INFO - System config loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/flash_example_config.yaml]
+INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sed/config/default.yaml]
+INFO - Reading files: 0 new files of 1 total.
+loading complete in 0.07 s
+
In order to avoid artifacts arising from binning sizes that are incommensurate with the digital steps imposed during data collection, e.g. by the detector, we jitter all the digital columns.
Looking at the dataframe can give quick insight about the columns loaded and the data available.
+
+
sp.dataframe shows the structure of the dataframe without computing anything. Interesting here are the columns, and their type.
+
The sp.dataframe.head() function accesses the first 5 events in the dataframe, giving us a view of what the values of each column look like, without computing the whole thing. sp.dataframe.tail() does the same from the end.
+
sp.dataframe.compute() will compute the whole dataframe, and can take a while. We should avoid doing this.
For getting a first impression of the data, and to determine binning ranges, the method sp.view_event_histogram() allows visualizing the events in one dataframe partition as histograms. Default axes and ranges are defined in the config, and show the dldPosX, dldPosY, and dldTimeStep columns:
Here we define the parameters for binning the dataframe to an n-dimensional histogram, which we can then plot, analyze or save.
+
If you have never seen this before: the type after : is a “hint” to what type the object to the left will have. We include them here to make sure you know what each variable should be.
+
a: int = 1  # a is an integer
+b: float = 1.0  # b is a float
+c: str = 1  # we hint c to be a string, but it is still an integer
+
+
+
This is totally optional, but can help you keep track of what you are doing.
+
+
[11]:
+
+
+
# the name of the axes on which we want to bin
+axes: List[str] = ['dldPosY', 'dldPosX']
+# the number of bins for each axis
+bins: List[int] = [480, 480]
+# for each axis, the range of values to consider
+ranges: List[List[int]] = [[420, 900], [420, 900]]
+# here we compute the histogram
+res_chessy: xr.DataArray = sp.compute(bins=bins, axes=axes, ranges=ranges)
+
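For a small in-memory array, the binning performed by sp.compute is conceptually equivalent to numpy.histogramdd. A sketch on synthetic coordinates standing in for the dldPosX/dldPosY columns:

```python
import numpy as np

rng = np.random.default_rng(42)
# synthetic detector coordinates in place of the dldPosX/dldPosY columns
pos = rng.uniform(420, 900, size=(10000, 2))

# same bins/ranges as used for sp.compute above
hist, edges = np.histogramdd(pos, bins=[480, 480], range=[[420, 900], [420, 900]])
```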
Here we load runs 44798 and 44799, which show the profile of the optical spot on the same spatial view as in our chessy run above. The two differ in transmission, being \(T=1.0\) and \(T=0.5\) respectively.
We now load a bias series, where the sample bias was varied, effectively shifting the energy spectra. This allows us to calibrate the conversion between the digital values of the dld and the energy.
As usual, we first jitter; but here we also align the 8 sectors of the dld in time. This is done by finding the time of the maximum of the signal in each sector, and then shifting each sector by the difference between a reference time and the time of its own maximum.
+
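The alignment idea can be illustrated on toy spectra: locate the maximum in each sector and shift each sector so the maxima coincide with a reference. This pure-NumPy sketch uses integer shifts; the real implementation operates on the dataframe's per-sector time values.

```python
import numpy as np

t = np.arange(200)
# toy spectra for 3 "sectors", each a Gaussian peak with a different time offset
offsets = [0, 3, -2]
sectors = [np.exp(-0.5 * ((t - (100 + o)) / 5.0) ** 2) for o in offsets]

ref = np.argmax(sectors[0])  # reference peak position
# shift each sector so its peak lands on the reference position
aligned = [np.roll(s, ref - np.argmax(s)) for s in sectors]
```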
For better precision, the photon peak can be used to track the energy shift.
We will now fit the tof-energy relation. This is done by finding the maxima of a peak in the tof spectrum, and then fitting the square-root relation to obtain the calibration parameters.
plt.figure()  # if you are using interactive plots, you'll need to generate a new figure explicitly every time.
+res.mean('sampleBias').plot.line(x='energy', linewidth=3)
+res.plot.line(x='energy', linewidth=1, alpha=.5);
+
The energy axis is now correct, taking the sample bias of the measurement into account. Additionally, we can compensate the photon energy (monochromatorPhotonEnergy) and the tofVoltage.
plt.figure()
+ax = plt.subplot(111)
+res.energy.attrs['unit'] = 'eV'  # add units to the axes
+res.mean('sampleBias').plot.line(x='energy', linewidth=3, ax=ax)
+res.plot.line(x='energy', linewidth=1, alpha=.5, label='all', ax=ax);
+
+INFO - Saved energy calibration parameters to "sed_config.yaml".
+INFO - Saved energy offset parameters to "sed_config.yaml".
+
+
+
A more general function, which saves parameters for all the calibrations performed. Use either the above or the below function; they are equivalent (and overwrite each other).
+
+
[35]:
+
+
+
sp.save_workflow_params()
+
+
+
+
+
+
+
+
+INFO - Saved energy calibration parameters to "sed_config.yaml".
+INFO - Saved energy offset parameters to "sed_config.yaml".
+
To calibrate the pump-probe delay axis, we need to shift the delay stage values to center the pump-probe time overlap at time zero. Also, we want to correct the SASE jitter, using information from the bam column.
As we have saved some calibration and correction parameters, we can now run the workflow from the config file. This is done by calling each of the correction functions with no parameters; the functions will then load the parameters from the config file.
+<matplotlib.collections.QuadMesh at 0x7f781c920100>
+
+
+
+
+
+
+
+
+
+
[40]:
+
+
+
sp.add_delay_offset(
+    constant=-1463.7,  # this is time zero
+    flip_delay_axis=True,  # invert the direction of the delay axis
+    columns=['bam'],  # use the bam to offset the values
+    weights=[-0.001],  # bam is in fs, delay in ps
+    preserve_mean=True  # preserve the mean of the delay axis
+)
+
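The arithmetic of add_delay_offset can be mimicked on plain arrays: flip the axis around time zero, then subtract the weighted, mean-preserved bam column. The numbers are made up, and the exact order of operations inside sed may differ; the sketch only illustrates why preserve_mean keeps the average delay unchanged.

```python
import numpy as np

delay = np.array([-1465.0, -1464.0, -1463.0, -1462.0])  # raw delay stage values, ps
bam = np.array([120.0, -80.0, 40.0, -60.0])             # arrival monitor values, fs

constant = -1463.7
weight = -0.001  # converts fs to ps and flips the sign

flipped = -(delay + constant)  # flip_delay_axis=True inverts the axis around time zero
# preserve_mean=True: subtract the column mean, so only the jitter is corrected
corrected = flipped + weight * (bam - bam.mean())
```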
You may note some intensity variation along the delay axis. This comes mainly from the inhomogeneous speed of the delay stage, and thus unequal amounts of time spent at each delay point. This can be corrected for by normalizing the data to the acquisition time per delay point:
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/sed/latest/tutorial/5_sxp_workflow.html b/sed/v1.0.0/tutorial/5_sxp_workflow.html
similarity index 69%
rename from sed/latest/tutorial/5_sxp_workflow.html
rename to sed/v1.0.0/tutorial/5_sxp_workflow.html
index 4394139..84a7e5a 100644
--- a/sed/latest/tutorial/5_sxp_workflow.html
+++ b/sed/v1.0.0/tutorial/5_sxp_workflow.html
@@ -8,7 +8,7 @@
- Tutorial for binning data from the SXP instrument at the European XFEL — SED 1.0.0a1.dev19+gf1bb527 documentation
+ Tutorial for binning data from the SXP instrument at the European XFEL — SED 1.0.0 documentation
@@ -39,7 +39,7 @@
-
+
@@ -50,7 +50,7 @@
@@ -60,7 +60,7 @@
-
+
@@ -122,7 +122,7 @@
-
Binning of temperature-dependent ARPES data using time-stamped external temperature data#
+
In this example, we pull some temperature-dependent ARPES data from Zenodo, which was recorded as a continuous temperature ramp. We then add the temperature information from the respective timestamp/temperature values to the dataframe, and bin the data as a function of temperature. For performance reasons, it is best to store the data on locally attached storage (no network drive). This can also be achieved transparently using the included MirrorUtil class.
dataset.get("TaS2")  # Put in Path to a storage of at least 20 GByte free space.
+data_path = dataset.dir
+scandir, caldir = dataset.subdirs  # scandir contains the data, caldir contains the calibration files
+
+# correct timestamps if not correct timezone set
+tzoffset = os.path.getmtime(scandir + '/Scan0121_1.h5') - 1594998158.0
+if tzoffset:
+    for file in glob.glob(scandir + '/*.h5'):
+        os.utime(file, (os.path.getmtime(file) - tzoffset, os.path.getmtime(file) - tzoffset))
+
+
+
+
+
+
+
+
+INFO - Not downloading TaS2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/TaS2".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "TaS2": "/home/runner/work/sed/sed/docs/tutorial/datasets/TaS2"
+INFO - TaS2 data is already present.
+
+
+
+
[3]:
+
+
+
# create sed processor using the config file with time-stamps:
+sp = sed.SedProcessor(folder=scandir, user_config="../src/sed/config/mpes_example_config.yaml", system_config={}, time_stamps=True, verbose=True)
+
# Remaining fluctuations are an effect of the varying count rate throughout the scan
+plt.figure()
+rate, secs = sp.loader.get_count_rate()
+plt.plot(secs, rate)
+
+
+
+
+
[18]:
+
+
+
+
+[<matplotlib.lines.Line2D at 0x7f26553fcaf0>]
+
+
+
+
+
+
+
+
+
+
[19]:
+
+
+
# Normalize for intensity around the Gamma point
+res_norm = res.copy()
+res_norm = res_norm / res_norm.loc[{'kx': slice(-.3, .3), 'ky': slice(-.3, .3)}].sum(axis=(0, 1, 2))
+
+<matplotlib.collections.QuadMesh at 0x7f2655376e00>
+
+
+
+
+
+
+
+
+
+
[21]:
+
+
+
# Lower Hubbard band intensity versus temperature
+plt.figure()
+res_norm.loc[{'kx': slice(-.2, .2), 'ky': slice(-.2, .2), 'energy': slice(-.6, 0.1)}].sum(axis=(0, 1, 2)).plot()
+
This example showcases how to use the distortion correction workflow with landmarks that are not at symmetry-equivalent positions, such as for orthorhombic systems with different in-plane axis parameters.
For this example, we use the example data from WSe2. Even though the system is hexagonal, we will use it for demonstration.
+
+
[2]:
+
+
+
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
+data_path = dataset.dir  # This is the path to the data
+scandir, _ = dataset.subdirs  # scandir contains the data, _ contains the calibration files
+
+
+
+
+
+
+
+
+INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
+INFO - WSe2 data is already present.
+
+
+
+
[3]:
+
+
+
# create sed processor using the config file with time-stamps:
+sp = sed.SedProcessor(folder=scandir, user_config="../src/sed/config/mpes_example_config.yaml", system_config={}, time_stamps=True, verbose=True)
+sp.add_jitter()
+
+
+
+
+
+
+
+
+INFO - Folder config loaded from: [/home/runner/work/sed/sed/docs/tutorial/sed_config.yaml]
+INFO - User config loaded from: [/home/runner/work/sed/sed/docs/src/sed/config/mpes_example_config.yaml]
+INFO - Default config loaded from: [/opt/hostedtoolcache/Python/3.10.16/x64/lib/python3.10/site-packages/sed/config/default.yaml]
+WARNING - Entry "KTOF:Lens:Sample:V" for channel "sampleBias" not found. Skipping the channel.
+INFO - add_jitter: Added jitter to columns ['X', 'Y', 't', 'ADC'].
+
We will describe the symmetry of the system with a 4-fold symmetry, and select two K points and two M points as symmetry points (as well as the Gamma point).
+
+
[5]:
+
+
+
features = np.array([[252., 355.], [361., 251.], [250., 144.], [156., 247.], [254., 247.]])
+sp.define_features(features=features, rotation_symmetry=4, include_center=True, apply=True)
+# Manual selection: Use a GUI tool to select peaks:
+# sp.define_features(rotation_symmetry=4, include_center=True)
+
For the spline-warp generation, we need to tell the algorithm the difference in length of Gamma-K and Gamma-M. This we can do using the ascale parameter, which can either be a single number (the ratio), or a list of length rotation_symmetry defining the relative length of the respective vectors.
+
+
[6]:
+
+
+
gamma_m = np.pi / 3.28
+gamma_k = 2 / np.sqrt(3) * np.pi / 3.28
+# Option 1: Ratio of the two distances:
+#sp.generate_splinewarp(include_center=True, ascale=gamma_k/gamma_m)
+# Option 2: List of distances:
+sp.generate_splinewarp(include_center=True, ascale=[gamma_m, gamma_k, gamma_m, gamma_k])
+
+
+
+
+
+
+
+
+INFO - Calculated thin spline correction based on the following landmarks:
+pouter_ord: [[252. 355.]
+ [361. 251.]
+ [250. 144.]
+ [156. 247.]]
+pcent: (254.0, 247.0)
+
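The ratio between the two options is purely geometric: for a hexagonal lattice, the Gamma-K distance is 2/sqrt(3) times the Gamma-M distance, independent of the lattice constant:

```python
import numpy as np

a = 3.28  # in-plane lattice constant in Angstrom (value used in this tutorial)
gamma_m = np.pi / a
gamma_k = 2 / np.sqrt(3) * np.pi / a
ratio = gamma_k / gamma_m  # equals 2/sqrt(3) for any lattice constant
```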
dataset.get("WSe2")  # Put in Path to a storage of at least 20 GByte free space.
+data_path = dataset.dir  # This is the path to the data
+scandir, _ = dataset.subdirs  # scandir contains the data, _ contains the calibration files
+
+
+
+
+
+
+
+
+INFO - Not downloading WSe2 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "WSe2": "/home/runner/work/sed/sed/docs/tutorial/datasets/WSe2"
+INFO - WSe2 data is already present.
+
+
+
+
[3]:
+
+
+
# create sed processor using the config file:
+sp = sed.SedProcessor(folder=scandir, config="../src/sed/config/mpes_example_config.yaml", system_config={})
+
After loading, the dataframe contains the four columns X, Y, t, ADC, which have all integer values. They originate from a time-to-digital converter, and correspond to digital “bins”.
+
+
[4]:
+
+
+
sp.dataframe.head()
+
+
+
+
+
[4]:
+
+
+
+
+
+
+
+
+
+
+       X       Y        t     ADC
+0    0.0     0.0      0.0     0.0
+1  365.0  1002.0  70101.0  6317.0
+2  761.0   818.0  75615.0  6316.0
+3  692.0   971.0  66455.0  6317.0
+4  671.0   712.0  73026.0  6317.0
+
+
+
+
+
+
Let’s bin these data along the t dimension within a small range:
We notice some oscillation on top of the data. These are re-binning artifacts, originating from a non-integer number of machine-bins per bin, as we can verify by binning with a different number of steps:
To mitigate this problem, we can add some randomness to the data, and re-distribute events into the gaps in-between bins. This is also termed dithering, and is e.g. known from image manipulation. The important factor is to add the right amount and the right type of random distribution, to end up with a quasi-continuous uniform distribution without losing information.
+
We can use the add_jitter function for this. We can pass it the columns to add jitter to, and the amplitude of a uniform jitter. Importantly, this step should be taken at the very beginning, as the first step before any dataframe operations are added.
This jittering fills the gaps, and produces a continuous uniform distribution. Let’s check again the longer-range binning that gave us the oscillations initially:
Now, the artifacts are absent, and similarly will they be in any dataframe columns derived from a column jittered in such a way. Note that this only applies to data present in digital (i.e. machine-binned) format, and not to data that are intrinsically continuous.
+
Also note that too large or not well-aligned jittering amplitudes will deteriorate your resolution along the jittered axis.
If the step-size of digitization is different from 1, the corresponding jitter amplitude (half the distance between digitized values) can be adjusted as shown above.
+
Alternatively, normally distributed noise can also be added, which is less sensitive to the exact amplitude, but will lead to mixing of neighboring voxels, and thus loss of resolution. Normally distributed noise is also substantially more computation-intensive to generate. It can nevertheless be helpful in situations where e.g. the step size is non-uniform.
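The artifact and its cure can be reproduced in a few lines of NumPy: histogram integer "machine-binned" values into an incommensurate number of bins, then repeat after adding uniform jitter with amplitude 0.5 (half the digitization step):

```python
import numpy as np

rng = np.random.default_rng(0)
# integer "machine-binned" values, uniformly distributed
data = rng.integers(0, 100, size=200000).astype(float)

bins = np.linspace(0, 100, 71)  # 70 bins over 100 integer steps -> incommensurate
hist_raw, _ = np.histogram(data, bins=bins)
# uniform jitter of +-0.5 fills the gaps between the integer values
hist_jit, _ = np.histogram(data + rng.uniform(-0.5, 0.5, size=data.size), bins=bins)

# bin-to-bin scatter drops strongly after jittering
scatter_raw = np.std(np.diff(hist_raw.astype(float)))
scatter_jit = np.std(np.diff(hist_jit.astype(float)))
```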
If it is your beamtime, you can both read the raw data and write to the processed directory. For the public data, you cannot write to the processed directory.
+
The paths are such that if you are on Maxwell, it uses those. Otherwise, data is downloaded into the current directory from Zenodo: https://zenodo.org/records/12609441
+
+
[2]:
+
+
+
beamtime_dir = "/asap3/flash/gpfs/pg2/2023/data/11019101"  # on Maxwell
+if os.path.exists(beamtime_dir) and os.access(beamtime_dir, os.R_OK):
+    path = beamtime_dir + "/raw/hdf/offline/fl1user3"
+    buffer_path = beamtime_dir + "/processed/tutorial/"
+else:
+    # data_path can be defined and used to store the data in a specific location
+    dataset.get("W110")  # Put in Path to a storage of at least 10 GByte free space.
+    path = dataset.dir
+    buffer_path = path + "/processed/"
+
+
+
+
+
+
+
+
+INFO - Not downloading W110 data as it already exists at "/home/runner/work/sed/sed/docs/tutorial/datasets/W110".
+Set 'use_existing' to False if you want to download to a new location.
+INFO - Using existing data path for "W110": "/home/runner/work/sed/sed/docs/tutorial/datasets/W110"
+INFO - W110 data is already present.
+
Here, we get the path to the config file and set up the relevant directories. This can also be done directly in the config file.
+
+
[3]:
+
+
+
# pick the default configuration file for hextof@FLASH
+config_file = Path('../src/sed/config/flash_example_config.yaml')
+assert config_file.exists()
+
+
+
+
+
[4]:
+
+
+
# here we set up a dictionary that will be used to override the path configuration
+config_override = {
+    "core": {
+        "beamtime_id": 11019101,
+        "paths": {
+            "raw": path,
+            "processed": buffer_path,
+        },
+    },
+}
+
Instead of making a completely new energy calibration, we can take existing values from the calibration made in the previous tutorial. This allows us to calibrate the conversion between the digital values of the dld and the energy.
+
For this we need to add all those parameters as a dictionary and use them during creation of the processor object.
We can do the SASE jitter correction using information from the bam column, and calibrate the pump-probe delay axis. For this, we need to shift the delay stage values to center the pump-probe time overlap at time zero.
+
+
[10]:
+
+
+
sp_44498.add_delay_offset(
    constant=-1448,  # this is the time zero position determined from the sideband fit
    flip_delay_axis=True,  # invert the direction of the delay axis
    columns=['bam'],  # use the bam to offset the values
    weights=[-0.001],  # bam is in fs, delay in ps
    preserve_mean=True,  # preserve the mean of the delay axis to keep the t0 position
)
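The arithmetic behind this call can be sketched in plain NumPy: the per-pulse bam values (in fs) are scaled by the weight (-0.001, converting them to ps), mean-centered because of `preserve_mean=True`, added to the delay column together with the constant, and the axis is inverted by `flip_delay_axis=True`. A rough sketch with hypothetical values (the exact order of operations inside sed may differ):

```python
import numpy as np

delay = np.array([1447.5, 1448.0, 1448.5])  # delay stage values in ps (hypothetical)
bam = np.array([120.0, -80.0, 40.0])        # BAM arrival times in fs (hypothetical)

constant = -1448.0
weight = -0.001  # converts fs to ps and flips the sign of the BAM contribution

offset = weight * bam
offset -= offset.mean()  # preserve_mean=True: the delay-axis mean stays unchanged
corrected = -(delay + constant + offset)  # flip_delay_axis=True inverts the axis
```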
## EDC and integration region for XPD
plt.figure()
res_kx_ky.mean(('dldPosX', 'dldPosY')).plot()
plt.vlines([-30.3, -29.9], 0, 2.4, color='r', linestyles='dashed')
plt.vlines([-31.4, -31.2], 0, 2.4, color='orange', linestyles='dashed')
plt.vlines([-33.6, -33.4], 0, 2.4, color='g', linestyles='dashed')
plt.vlines([-37.0, -36.0], 0, 2.4, color='b', linestyles='dashed')
plt.title('EDC and integration regions for XPD')
plt.show()

## XPD plots
fig, ax = plt.subplots(2, 2, figsize=(6, 4.7), layout='constrained')
res_kx_ky.sel(energy=slice(-30.3, -29.9)).mean('energy').plot(robust=True, ax=ax[0, 0], cmap='terrain')
ax[0, 0].set_title("XPD of $1^{st}$ order sidebands")
res_kx_ky.sel(energy=slice(-31.4, -31.2)).mean('energy').plot(robust=True, ax=ax[0, 1], cmap='terrain')
ax[0, 1].set_title("XPD of W4f 7/2 peak")
res_kx_ky.sel(energy=slice(-33.6, -33.4)).mean('energy').plot(robust=True, ax=ax[1, 0], cmap='terrain')
ax[1, 0].set_title("XPD of W4f 5/2 peak")
res_kx_ky.sel(energy=slice(-37.0, -36.0)).mean('energy').plot(robust=True, ax=ax[1, 1], cmap='terrain')
ax[1, 1].set_title("XPD of W5p 3/2 peak")


[14]:

Text(0.5, 1.0, 'XPD of W5p 3/2 peak')

As we can see, some structure is visible, but the patterns all look very similar to each other. We probably have to do some normalization to remove the detector structure/artefacts. The best option is to divide by a flat-field image. The flat-field image can be obtained from a sample that shows no structure under identical measurement conditions. Unfortunately, we don’t have such a flat-field image.

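For reference, if a flat-field image were available, the correction itself would just be a pixel-wise division, rescaled by the mean so that the overall intensity scale is preserved. A minimal synthetic sketch (all arrays hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
flat = 1.0 + 0.2 * rng.random((64, 64))  # detector response measured on a structureless sample
signal = np.ones((64, 64))               # ideal, structure-free emission
measured = signal * flat                 # what the detector records

corrected = measured / flat * flat.mean()  # flat-field correction removes the detector pattern
```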
In this case, we can make a flat-field image from the actual dataset using several different approaches.

As a first option, we can integrate in energy over the whole region and use this image as a background. Additionally, we introduce a Gaussian blur for comparison.
Sometimes, after this division, you may not be happy with the intensity distribution. Another option for background correction is therefore to duplicate the XPD pattern and apply a strong Gaussian blur that eliminates its fine structures, then divide the XPD pattern by this blurred version. This process can enhance the visibility of the fine structures considerably.
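The blur-and-divide trick can be checked on synthetic data: a weak periodic modulation sitting on a strong, smooth background is hard to see in the raw image, but dividing by a heavily blurred copy removes the background and leaves the modulation oscillating around 1. A toy sketch (synthetic image, not the measured XPD data):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

y, x = np.mgrid[0:128, 0:128]
background = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 40.0 ** 2))  # smooth envelope
fine = 1.0 + 0.05 * np.sin(2 * np.pi * x / 8)                            # weak fine structure
image = background * fine

blurred = gaussian_filter(image, sigma=15)  # large blur wipes out the fine structure
normalized = image / blurred                # the ratio is dominated by the fine structure
```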


[17]:


## XPD normalized by Gaussian-blurred background image

### Define integration regions for XPD
SB = res_kx_ky.sel(energy=slice(-30.3, -29.9)).mean('energy')
W_4f_7 = res_kx_ky.sel(energy=slice(-31.4, -31.2)).mean('energy')
W_4f_5 = res_kx_ky.sel(energy=slice(-33.6, -33.4)).mean('energy')
W_5p = res_kx_ky.sel(energy=slice(-37.0, -36.0)).mean('energy')

### Make corresponding Gaussian Blur background
SB_blur = xr.apply_ufunc(gaussian_filter, SB, 15)
W_4f_7_blur = xr.apply_ufunc(gaussian_filter, W_4f_7, 15)
W_4f_5_blur = xr.apply_ufunc(gaussian_filter, W_4f_5, 15)
W_5p_blur = xr.apply_ufunc(gaussian_filter, W_5p, 15)

### Visualize results
fig, ax = plt.subplots(2, 2, figsize=(6, 4.7), layout='constrained')
(SB / SB_blur).plot(robust=True, ax=ax[0, 0], cmap='terrain')
(W_4f_7 / W_4f_7_blur).plot(robust=True, ax=ax[0, 1], cmap='terrain')
(W_4f_5 / W_4f_5_blur).plot(robust=True, ax=ax[1, 0], cmap='terrain')
(W_5p / W_5p_blur).plot(robust=True, ax=ax[1, 1], cmap='terrain')
fig.suptitle(f'Run {run_number}: XPD patterns after Gaussian Blur normalization', fontsize='11')

### Apply Gaussian Blur to resulting images to improve contrast
SB_norm = xr.apply_ufunc(gaussian_filter, SB / SB_blur, 1)
W_4f_7_norm = xr.apply_ufunc(gaussian_filter, W_4f_7 / W_4f_7_blur, 1)
W_4f_5_norm = xr.apply_ufunc(gaussian_filter, W_4f_5 / W_4f_5_blur, 1)
W_5p_norm = xr.apply_ufunc(gaussian_filter, W_5p / W_5p_blur, 1)

### Visualize results
fig, ax = plt.subplots(2, 2, figsize=(6, 4.7), layout='constrained')
SB_norm.plot(robust=True, ax=ax[0, 0], cmap='terrain')
W_4f_7_norm.plot(robust=True, ax=ax[0, 1], cmap='terrain')
W_4f_5_norm.plot(robust=True, ax=ax[1, 0], cmap='terrain')
W_5p_norm.plot(robust=True, ax=ax[1, 1], cmap='terrain')
fig.suptitle(f'Run {run_number}: XPD patterns after Gaussian Blur normalization and smoothing', fontsize='11')

A third option for background normalization is to use the simultaneously acquired pre-core-level region. As an example, for the W4f 7/2 peak we define a region on its high-energy side and integrate it in energy to use as a background.


[18]:


### Define peak and background region on the high energy side of the peak
W_4f_7 = res_kx_ky.sel(energy=slice(-31.4, -31.2)).mean('energy')
W_4f_7_bgd = res_kx_ky.sel(energy=slice(-32.0, -31.8)).mean('energy')

### Make normalization by background, add Gaussian Blur to the resulting image
W_4f_7_nrm1 = W_4f_7 / (W_4f_7_bgd + W_4f_7_bgd.max() * 0.00001)
W_4f_7_nrm1_blur = xr.apply_ufunc(gaussian_filter, W_4f_7_nrm1, 1)

### Add Gaussian Blur to the background image, normalize by it and add Gaussian Blur to the resulting image
W_4f_7_bgd_blur = xr.apply_ufunc(gaussian_filter, W_4f_7_bgd, 15)
W_4f_7_nrm2 = W_4f_7 / W_4f_7_bgd_blur
W_4f_7_nrm2_blur = xr.apply_ufunc(gaussian_filter, W_4f_7_nrm2, 1)

### Visualize all steps
fig, ax = plt.subplots(4, 2, figsize=(6, 8), layout='constrained')
W_4f_7.plot(robust=True, ax=ax[0, 0], cmap='terrain')
W_4f_7_bgd.plot(robust=True, ax=ax[0, 1], cmap='terrain')
W_4f_7_nrm1.plot(robust=True, ax=ax[1, 0], cmap='terrain')
W_4f_7_nrm1_blur.plot(robust=True, ax=ax[1, 1], cmap='terrain')
W_4f_7_bgd_blur.plot(robust=True, ax=ax[2, 0], cmap='terrain')
W_4f_7_nrm2.plot(robust=True, ax=ax[2, 1], cmap='terrain')
W_4f_7_nrm2_blur.plot(robust=True, ax=ax[3, 0], cmap='terrain')
fig.suptitle(f'Run {run_number}: XPD patterns of W4f7/2 with pre-core level normalization', fontsize='11')


[18]:

Text(0.5, 0.98, 'Run 44498: XPD patterns of W4f7/2 with pre-core level normalization')

[19]:

fig, ax = plt.subplots(1, 3, figsize=(6, 2), layout='constrained')
(xr.apply_ufunc(gaussian_filter, res_kx_ky / bgd_blur, 1)).sel(energy=slice(-31.4, -31.2)).mean('energy').plot(robust=True, ax=ax[0], cmap='terrain')
W_4f_7_norm.plot(robust=True, ax=ax[1], cmap='terrain')
W_4f_7_nrm2_blur.plot(robust=True, ax=ax[2], cmap='terrain')
fig.suptitle(f'Run {run_number}: comparison of different normalizations\nof XPD pattern for W4f 7/2 peak with Gaussian Blur', fontsize='11')

[19]:

Text(0.5, 0.98, 'Run 44498: comparison of different normalizations\nof XPD pattern for W4f 7/2 peak with Gaussian Blur')