This package runs a photon-friendly rebalance-and-smear implementation on RA2/b-style ntuples on an LPC machine.
export SCRAM_ARCH=slc7_amd64_gcc900
cmsrel CMSSW_12_2_3
cd CMSSW_12_2_3/src
cmsenv
git clone https://github.com/sbein/SusyPhotons/
cd SusyPhotons/
mkdir jobs
mkdir -p output_mopho/{smallchunks,mediumchunks,bigchunks,signals}
mkdir -p pdfs/ClosureTests
#generate file lists at FNAL
python tools/globthemfiles.py
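The sketch below illustrates what a glob-based file-list generator like globthemfiles.py typically does (the real script's input paths, naming scheme, and output format may differ; `make_filelist` and its arguments are hypothetical): it globs ntuple files under a storage directory and writes one text list per dataset keyword.

```python
# Conceptual sketch of a file-list generator (hypothetical helper, not the
# actual globthemfiles.py): glob all ROOT files matching a dataset keyword
# and write their paths, one per line, to a .txt list.
import glob
import os

def make_filelist(input_dir, keyword, output_dir="filelists"):
    """Write a text list of all ROOT files in input_dir matching keyword."""
    os.makedirs(output_dir, exist_ok=True)
    files = sorted(glob.glob(os.path.join(input_dir, "*%s*.root" % keyword)))
    listpath = os.path.join(output_dir, keyword + ".txt")
    with open(listpath, "w") as f:
        f.write("\n".join(files) + "\n")
    return listpath
```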
I'm skipping the steps needed to create the prior and smearing templates. The following command runs rebalance and smear and creates histograms for the "truth" and "method" distributions, using 10,000 events from one GJets file:
python3 tools/SkimDiphoton.py --fnamekeyword Summer16v3.GJets_DR-0p4_HT-600 --quickrun True
#OR
python3 tools/SkimMonophoton.py --fnamekeyword Summer16v3.GJets_DR-0p4_HT-600 --quickrun True
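To make the idea concrete, here is a toy illustration of the "smear" half of rebalance-and-smear. This is a schematic only: the package derives its response templates from simulation, whereas this sketch assumes a simple Gaussian jet response, and the function names are invented for illustration.

```python
# Toy smearing step (schematic; NOT the package's implementation, which
# uses simulation-derived response templates rather than a Gaussian).
# Each jet's pT is resampled from a jet-response distribution to emulate
# detector mismeasurement; averaging over many smearings gives a "method"
# prediction to compare against the "truth" distribution.
import random

def smear_jet_pt(pt_true, sigma_rel=0.15, rng=random):
    """Draw a smeared pT assuming a Gaussian response of width sigma_rel."""
    response = rng.gauss(1.0, sigma_rel)  # multiplicative response factor
    return max(0.0, pt_true * response)

def smeared_ht(jet_pts, n_smears=100, rng=random):
    """Average HT over many smearings, as a toy 'method' prediction."""
    total = 0.0
    for _ in range(n_smears):
        total += sum(smear_jet_pt(pt, rng=rng) for pt in jet_pts)
    return total / n_smears
```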
python3 tools/DrawAnalyzeSinglePho.py <output of last step>
#for example,
python3 tools/DrawAnalyzeSinglePho.py --fnamekeyword "/eos/uscms//store/group/lpcsusyphotons/SinglePhoRandS_skimsv8/*Autumn18.GJets_DR-0p4_HT-100To200*" &
Generate plots overlaying the observed and R&S histograms:
python3 tools/closurePlotter.py <output file from previous step>
This creates PDFs and a ROOT file with canvases. Your histograms will likely suffer from low statistics, which is why it's good to lean heavily on the batch system for this (iteration time is roughly 20 minutes to an hour).
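The statistics problem can be quantified with a quick sketch of the per-bin closure ratio and its Poisson uncertainty (this is illustrative; closurePlotter.py's actual error treatment may differ, and `closure_ratio` is a hypothetical helper):

```python
# Why low statistics hurt the closure test: the per-bin ratio of "method"
# to "truth" counts carries a relative Poisson uncertainty of
# sqrt(1/n_method + 1/n_truth), which blows up in sparsely populated bins.
import math

def closure_ratio(n_method, n_truth):
    """Return (ratio, absolute uncertainty) for one histogram bin."""
    if n_method <= 0 or n_truth <= 0:
        return float("nan"), float("nan")
    ratio = n_method / n_truth
    err = ratio * math.sqrt(1.0 / n_method + 1.0 / n_truth)
    return ratio, err
```

For example, 100 events in each histogram gives a ~14% uncertainty on the ratio, while 4 events in each gives ~71%.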
This script defaults to submitting one job per input file. Assuming you have a valid proxy, the following commands initiate a large submission. The first command cleans up the jobs directory; it is important to do this before you submit. The script will also prompt you to delete the output directory where your ROOT files are returned, so it can start clean.
bash tools/CleanJobs.sh
python3 tools/submitjobs.py --analyzer tools/SkimDiphoton.py --fnamekeyword Summer16v3.GJets_DR-0p4_HT --quickrun True
Setting the quickrun option to True tells the script to run over only 10,000 events per file; remove this argument when you're ready to max out your statistics. Output files matching the specified filename keyword are put in the local output/ directory. The status of the jobs can be checked with
condor_q | grep <your username>
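The one-job-per-file pattern can be sketched as follows (a hypothetical helper for illustration; submitjobs.py's actual argument handling and condor wrapper may differ):

```python
# Sketch of the one-job-per-file submission pattern: each input file in the
# list gets its own analyzer command, which a condor wrapper would then
# submit as a separate job. (Hypothetical helper, not submitjobs.py itself.)
def build_job_commands(filelist, analyzer="tools/SkimDiphoton.py", quickrun=False):
    """Return one analyzer command string per input file."""
    commands = []
    for fname in filelist:
        cmd = "python3 %s --fnamekeyword %s" % (analyzer, fname)
        if quickrun:
            cmd += " --quickrun True"
        commands.append(cmd)
    return commands
```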
Once the jobs are done, a wrapper for the hadd routine can be called which also fits a spline to each response function:
python tools/mergeHistosFinalizeWeights.py output/<folder name>
This applies the 1/N(simulated) factor on top of the lumi*xsec weights, the latter of which were applied in the analyzer script. mergeHistosFinalizeWeights.py has hard-coded keywords defining the different datasets, each corresponding to a unique cross section. If you want to run over another set of datasets, like inclusive QCD samples, you'd have to change these keywords, giving one keyword per desired unique cross section. The script creates a single file on which you can run the closurePlotter.py script.
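The combined normalization amounts to a per-event weight of lumi * xsec / N(simulated), which can be written out explicitly (the function name and the numbers in the test are illustrative, not the script's actual values):

```python
# Per-event weight normalizing a simulated sample to a target luminosity:
# the analyzer applies lumi * xsec, and the merge step divides by the total
# number of simulated events. (Illustrative sketch of the combined factor.)
def event_weight(lumi_invpb, xsec_pb, n_simulated):
    """Return lumi [1/pb] * cross section [pb] / number of simulated events."""
    return lumi_invpb * xsec_pb / n_simulated
```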