
Commit b13df1f

fix: minor fixes
1 parent 1855fef commit b13df1f

4 files changed: +76 -24 lines

_website/tutorials/components/05 - Exploring the FitResult.ipynb (+33 -9)
@@ -299,12 +299,15 @@
 "source": [
 "## Weighted uncertainties\n",
 "\n",
-"A weighted likelihood is technically not a likelihood anymore and the errors are not calculated correctly. However, the hesse method\n",
-"can be corrected for weights, which is done automatically as soon as the dataset is weighted.\n",
+"A weighted likelihood is technically not a likelihood anymore, and the errors are not calculated correctly. However, the hesse method\n",
+"can be corrected for weights, which is done automatically as soon as the dataset is weighted. The method for corrections can be specified using the `weightcorr` argument.\n",
+"There are two methods to calculate the weighted uncertainties:\n",
+" - `\"asymptotic\"` (default): The method used is the `asymptotically correct` yet computationally expensive method described in [Parameter uncertainties in weighted unbinned maximum likelihood fits](https://link.springer.com/article/10.1140/epjc/s10052-022-10254-8).\n",
+" - `\"effsize\"`: The method used is the `effective size` method scaling the covariance matrix by the effective size of the dataset. This method is computationally significantly cheaper but can be less accurate.\n",
 "\n",
-"The method used is the `asymptotically correct` yet computationally expensive method described in [Parameter uncertainties in weighted unbinned maximum likelihood fits](https://link.springer.com/article/10.1140/epjc/s10052-022-10254-8).\n",
+"To disable the corrections, set `weightcorr=False`.\n",
 "\n",
-"The computation involves the jacobian of each event that can be expensive to compute. Again, zfit offers the possibility to use the\n",
+"The `\"asymptotic\"` correction involves the calculation of the jacobian with respect to each event, which can be expensive to compute. Again, zfit offers the possibility to use the\n",
 "autograd or the numerical jacobian."
 ]
 },
@@ -342,8 +345,8 @@
 "outputs": [],
 "source": [
 "with zfit.run.set_autograd_mode(True):\n",
-" weighted_result.hesse(name=\"hesse autograd\")\n",
-" weighted_result.hesse(name=\"hesse autograd np\", method=\"hesse_np\")"
+" weighted_result.hesse(name=\"hesse autograd asy\", weightcorr=\"asymptotic\")\n",
+" weighted_result.hesse(name=\"hesse autograd np asy\", method=\"hesse_np\", weightcorr=\"asymptotic\")"
 ]
 },
 {
@@ -353,8 +356,8 @@
 "outputs": [],
 "source": [
 "with zfit.run.set_autograd_mode(False):\n",
-" weighted_result.hesse(name=\"hesse numeric\")\n",
-" weighted_result.hesse(name=\"hesse numeric np\", method=\"hesse_np\")"
+" weighted_result.hesse(name=\"hesse numeric asy\") # weightcorr=\"asymptotic\" is default\n",
+" weighted_result.hesse(name=\"hesse numeric np asy\", method=\"hesse_np\")"
 ]
 },
 {
@@ -370,7 +373,28 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As we can see, the errors are underestimated for the nuisance parameters using the minos method while the hesse method is correct."
+"As we can see, the errors are underestimated for the nuisance parameters using the minos method, while the hesse method is correct.\n",
+"\n",
+"The `hesse` method can also be used with the `\"effsize\"` correction, which is computationally much cheaper, or without any correction."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"weighted_result.hesse(name=\"hesse autograd effsize\", weightcorr=\"effsize\")\n",
+"weighted_result.hesse(name=\"hesse numeric no corr\", weightcorr=False)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(weighted_result)"
 ]
 },
 {
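
For readers skimming the diff, a minimal, self-contained sketch of what these cells exercise: a weighted unbinned fit followed by `hesse` with each `weightcorr` option. The `weightcorr` values and `hesse` calls follow the diff above; the toy model, dataset, and variable names are illustrative assumptions, not part of the commit.

```python
import numpy as np
import zfit

# Toy model: a single Gaussian over one observable (names are illustrative).
obs = zfit.Space("x", limits=(-5, 5))
mu = zfit.Parameter("mu", 0.2, -1, 1)
sigma = zfit.Parameter("sigma", 1.1, 0.1, 5)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

# Weighted dataset: per-event weights trigger the automatic hesse correction.
rng = np.random.default_rng(42)
values = rng.normal(loc=0.0, scale=1.0, size=5000)
weights = rng.uniform(0.5, 1.5, size=5000)
data = zfit.Data.from_numpy(obs=obs, array=values, weights=weights)

nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
weighted_result = zfit.minimize.Minuit().minimize(nll)

# The three weightcorr modes documented in the diff above.
weighted_result.hesse(name="asy")  # "asymptotic" is the default
weighted_result.hesse(name="effsize", weightcorr="effsize")  # cheaper, can be less accurate
weighted_result.hesse(name="nocorr", weightcorr=False)  # corrections disabled
print(weighted_result)
```

Under the default `"asymptotic"` correction, the per-event jacobian dominates the cost, which is why the notebook demonstrates both the autograd and the numerical jacobian paths.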

_website/tutorials/guides/constraints_simultaneous_fit_discovery_splot.ipynb (+5 -3)
@@ -506,14 +506,15 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from hepstats.hypotests.calculators import AsymptoticCalculator\n",
+"from hepstats.hypotests.calculators import (AsymptoticCalculator,\n",
+" FrequentistCalculator)\n",
 "\n",
 "# construction of the calculator instance\n",
-"calculator = AsymptoticCalculator(input=nll_simultaneous, minimizer=minimizer)\n",
+"calculator = FrequentistCalculator(input=nll_simultaneous, minimizer=minimizer)\n",
 "calculator.bestfit = result_simultaneous\n",
 "\n",
 "# equivalent to above\n",
-"calculator = AsymptoticCalculator(input=result_simultaneous, minimizer=minimizer)"
+"calculator = FrequentistCalculator(input=result_simultaneous, minimizer=minimizer)"
 ]
 },
 {
@@ -587,6 +588,7 @@
 "outputs": [],
 "source": [
 "calculator_low_sig = AsymptoticCalculator(input=nll_simultaneous_low_sig, minimizer=minimizer)\n",
+"calculator_low_sig = FrequentistCalculator(input=nll_simultaneous_low_sig, minimizer=minimizer)\n",
 "\n",
 "discovery_low_sig = Discovery(calculator=calculator_low_sig, poinull=sig_yield_poi)\n",
 "discovery_low_sig.result()\n",

components/05 - Exploring the FitResult.ipynb (+33 -9)
@@ -299,12 +299,15 @@
 "source": [
 "## Weighted uncertainties\n",
 "\n",
-"A weighted likelihood is technically not a likelihood anymore and the errors are not calculated correctly. However, the hesse method\n",
-"can be corrected for weights, which is done automatically as soon as the dataset is weighted.\n",
+"A weighted likelihood is technically not a likelihood anymore, and the errors are not calculated correctly. However, the hesse method\n",
+"can be corrected for weights, which is done automatically as soon as the dataset is weighted. The method for corrections can be specified using the `weightcorr` argument.\n",
+"There are two methods to calculate the weighted uncertainties:\n",
+" - `\"asymptotic\"` (default): The method used is the `asymptotically correct` yet computationally expensive method described in [Parameter uncertainties in weighted unbinned maximum likelihood fits](https://link.springer.com/article/10.1140/epjc/s10052-022-10254-8).\n",
+" - `\"effsize\"`: The method used is the `effective size` method scaling the covariance matrix by the effective size of the dataset. This method is computationally significantly cheaper but can be less accurate.\n",
 "\n",
-"The method used is the `asymptotically correct` yet computationally expensive method described in [Parameter uncertainties in weighted unbinned maximum likelihood fits](https://link.springer.com/article/10.1140/epjc/s10052-022-10254-8).\n",
+"To disable the corrections, set `weightcorr=False`.\n",
 "\n",
-"The computation involves the jacobian of each event that can be expensive to compute. Again, zfit offers the possibility to use the\n",
+"The `\"asymptotic\"` correction involves the calculation of the jacobian with respect to each event, which can be expensive to compute. Again, zfit offers the possibility to use the\n",
 "autograd or the numerical jacobian."
 ]
 },
@@ -342,8 +345,8 @@
 "outputs": [],
 "source": [
 "with zfit.run.set_autograd_mode(True):\n",
-" weighted_result.hesse(name=\"hesse autograd\")\n",
-" weighted_result.hesse(name=\"hesse autograd np\", method=\"hesse_np\")"
+" weighted_result.hesse(name=\"hesse autograd asy\", weightcorr=\"asymptotic\")\n",
+" weighted_result.hesse(name=\"hesse autograd np asy\", method=\"hesse_np\", weightcorr=\"asymptotic\")"
 ]
 },
 {
@@ -353,8 +356,8 @@
 "outputs": [],
 "source": [
 "with zfit.run.set_autograd_mode(False):\n",
-" weighted_result.hesse(name=\"hesse numeric\")\n",
-" weighted_result.hesse(name=\"hesse numeric np\", method=\"hesse_np\")"
+" weighted_result.hesse(name=\"hesse numeric asy\") # weightcorr=\"asymptotic\" is default\n",
+" weighted_result.hesse(name=\"hesse numeric np asy\", method=\"hesse_np\")"
 ]
 },
 {
@@ -370,7 +373,28 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"As we can see, the errors are underestimated for the nuisance parameters using the minos method while the hesse method is correct."
+"As we can see, the errors are underestimated for the nuisance parameters using the minos method, while the hesse method is correct.\n",
+"\n",
+"The `hesse` method can also be used with the `\"effsize\"` correction, which is computationally much cheaper, or without any correction."
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"weighted_result.hesse(name=\"hesse autograd effsize\", weightcorr=\"effsize\")\n",
+"weighted_result.hesse(name=\"hesse numeric no corr\", weightcorr=False)"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"print(weighted_result)"
 ]
 },
 {

guides/constraints_simultaneous_fit_discovery_splot.ipynb (+5 -3)
@@ -506,14 +506,15 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from hepstats.hypotests.calculators import AsymptoticCalculator\n",
+"from hepstats.hypotests.calculators import (AsymptoticCalculator,\n",
+" FrequentistCalculator)\n",
 "\n",
 "# construction of the calculator instance\n",
-"calculator = AsymptoticCalculator(input=nll_simultaneous, minimizer=minimizer)\n",
+"calculator = FrequentistCalculator(input=nll_simultaneous, minimizer=minimizer)\n",
 "calculator.bestfit = result_simultaneous\n",
 "\n",
 "# equivalent to above\n",
-"calculator = AsymptoticCalculator(input=result_simultaneous, minimizer=minimizer)"
+"calculator = FrequentistCalculator(input=result_simultaneous, minimizer=minimizer)"
 ]
 },
 {
@@ -587,6 +588,7 @@
 "outputs": [],
 "source": [
 "calculator_low_sig = AsymptoticCalculator(input=nll_simultaneous_low_sig, minimizer=minimizer)\n",
+"calculator_low_sig = FrequentistCalculator(input=nll_simultaneous_low_sig, minimizer=minimizer)\n",
 "\n",
 "discovery_low_sig = Discovery(calculator=calculator_low_sig, poinull=sig_yield_poi)\n",
 "discovery_low_sig.result()\n",
