ReleaseValidation.C
* restructure JSON output and add more fields such as the used threshold and
  the computed metric value per test and per histogram
* add reading of custom thresholds per histogram and per test
* only check once if comparable
* non-comparable (for all tests) if
  * at least one histogram is empty
  * different axes/binning
  * integral yields inf or nan
  * negative bin counts
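The non-comparability conditions above can be sketched as follows. Histograms are modelled here as plain dicts with `"edges"` and `"counts"` keys, a hypothetical stand-in for the ROOT `TH1` objects the actual `ReleaseValidation.C` macro operates on:

```python
import math

def comparable(h1, h2):
    """Check the non-comparability conditions listed above.

    Histograms are dicts with "edges" (bin edges) and "counts" (bin contents);
    this is an illustrative stand-in, not the actual ROOT-based implementation.
    """
    for h in (h1, h2):
        # at least one histogram is empty
        if not h["counts"] or all(c == 0 for c in h["counts"]):
            return False
        # negative bin counts
        if any(c < 0 for c in h["counts"]):
            return False
        # integral yields inf or nan
        integral = sum(h["counts"])
        if math.isinf(integral) or math.isnan(integral):
            return False
    # different axes/binning
    if h1["edges"] != h2["edges"]:
        return False
    return True

h_a = {"edges": [0.0, 1.0, 2.0], "counts": [3.0, 5.0]}
h_b = {"edges": [0.0, 1.0, 2.0], "counts": [4.0, 4.0]}
h_c = {"edges": [0.0, 1.0, 2.0], "counts": [-1.0, 4.0]}
print(comparable(h_a, h_b))  # True
print(comparable(h_a, h_c))  # False: negative bin count
```

Checking all conditions in one pass means the comparability verdict is established once and then reused by every test, as described above.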
o2dpg_release_validation.py
* add functionality to make a histogram from log file content
  (e.g. used to extract the number of TPC clusters from tpcreco.log)
* unify JSON output of the overall summary (including QC, MC kine etc.) and
  the single output from the ROOT macro
* add summary plots comparing metric values and thresholds per test and
  histogram
* add option --use-values-as-thresholds to inject previously computed
  metric values as new thresholds
* add comparison of 2 RelVal runs with the command
  compare -i <relval/outpath1> <relval/outpath2>
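The histogram-from-log functionality can be sketched as below. The regex and log format are invented for illustration; the actual pattern used to parse tpcreco.log is not shown here:

```python
import re
from collections import Counter

def histogram_from_log(lines, pattern=r"Found (\d+) TPC clusters", bin_width=100):
    """Bin one number per matching log line into a histogram (bin start -> count).

    The regex and log format are made up for illustration; the real script
    parses actual tpcreco.log content.
    """
    values = [int(m.group(1)) for line in lines if (m := re.search(pattern, line))]
    return Counter((v // bin_width) * bin_width for v in values)

log = [
    "[INFO] Found 250 TPC clusters",
    "[INFO] Found 263 TPC clusters",
    "[INFO] unrelated line",
    "[INFO] Found 120 TPC clusters",
]
print(histogram_from_log(log))  # Counter({200: 2, 100: 1})
```

Once the extracted numbers are binned like this, the result can be compared between two runs with the same machinery as any other histogram.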
The wrapper includes 4 different sub-commands for now
1. `rel-val` to steer the RelVal,
1. `inspect` to print histograms of specified severity (if any),
1. `compare` to compare the results of 2 RelVal runs,
1. `influx` to convert the summary into a format that can be understood by and sent to an InfluxDB instance.
### Basic usage
When the `--tags` argument is specified, these are injected as TAGS for InfluxDB.
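Conceptually, the `influx` sub-command targets InfluxDB's line protocol (`measurement,tags fields`). The sketch below uses invented measurement, tag, and field names; the real field layout of the RelVal summary is not shown here:

```python
def to_influx_line(measurement, tags, fields):
    """Assemble one InfluxDB line-protocol entry: measurement,tags fields.

    A hypothetical sketch of the kind of output the `influx` sub-command
    produces; actual measurement/tag/field names may differ.
    """
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str}"

line = to_influx_line("relval", {"version": "v1"}, {"failed": 2, "passed": 10})
print(line)  # relval,version=v1 failed=2,passed=10
```

Tags passed via `--tags` would end up in the comma-separated tag section, which InfluxDB indexes for filtering.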
There are various plots created during the RelVal run. For each compared file there are
* overlay plots (to be found in the sub directory `overlayPlots`),
* 2D plots summarising the results in a grid view (called `SummaryTests.png`),
* pie charts showing the fraction of test results per test,
* 1D plots showing the computed value and threshold per test.
## More details of the `rel-val` command

As mentioned above, the basic usage of the `rel-val` sub-command is straightforward. But there are quite a few more options available and some of them will be explained briefly below.
### Setting new thresholds from another RelVal run (towards threshold tuning)

Imagine a scenario where one has 2 outputs (either custom or full simulation output) which should be compatible. For instance, these could be 2 simulation runs with the same generator seed and reasonably high statistics, run otherwise with the same parameters.

Running the RelVal on these directories will - as usual - yield the `<parent/output/dir/SummaryGlobal.json>` as well as `<parent/output/dir/sub/dirSummary.json>`. Now assume there is another simulation output from - for instance - another software version. To check whether this is truly worse in terms of the RelVal comparison, one could compare it to one of the "baseline" runs while setting all thresholds to the computed values of the first comparison. This can be done with
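The idea behind `--use-values-as-thresholds` can be sketched as follows. The summary layout used here (`"histograms"`, `"name"`, `"value"` keys) is an assumption for illustration, not the actual schema of `SummaryGlobal.json`:

```python
def values_as_thresholds(summary):
    """Promote the computed metric values of a previous RelVal run to thresholds.

    The summary layout here is hypothetical; the real SummaryGlobal.json
    schema may differ.
    """
    return {
        hist_name: {t["name"]: t["value"] for t in tests}
        for hist_name, tests in summary["histograms"].items()
    }

# Previous "baseline vs baseline" comparison (illustrative values)
previous = {
    "histograms": {
        "hPt": [{"name": "chi2", "value": 1.7, "threshold": 1.5}],
        "hEta": [{"name": "chi2", "value": 0.9, "threshold": 1.5}],
    }
}
print(values_as_thresholds(previous))  # {'hPt': {'chi2': 1.7}, 'hEta': {'chi2': 0.9}}
```

A comparison against the new software version then fails only where it exceeds what the two compatible baseline runs already differed by, which is the essence of this threshold-tuning approach.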