experiment package

Submodules

experiment.databaseObj module

class experiment.databaseObj.Database(base=None, force_load=None, progressbar=False, subpickle=True, combinationsmatrix=None)[source]

Bases: object

Database object. Holds a list of SubDatabases and the ExpSMS map. Delegates all calls to SubDatabases.

Parameters
  • base – path to the database, or pickle file (string), or http address. If None, “official”, or “official_fastlim”, use the official database for your code version (including fastlim results, if specified). If “latest”, or “latest_fastlim”, check for the latest database. Multiple databases may be specified using ‘+’ as a delimiter.

  • force_load – force loading the text database (“txt”) or the binary database (“pcl”); do not force anything if None

  • progressbar – show a progressbar when building pickle file (needs the python-progressbar module)

  • subpickle – produce small pickle files per exp result. Should only be used when working on the database.

  • combinationsmatrix – an optional dictionary that contains info about combinable analyses, e.g. { “anaid1”: ( “anaid2”, “anaid3” ) } optionally specifying signal regions, e.g. { “anaid1:SR1”: ( “anaid2:SR2”, “anaid3” ) }
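For illustration, such a matrix can be read symmetrically: two entries count as combinable if either one lists the other. A minimal sketch of that lookup (the helper name isCombinable is an assumption, not part of the SModelS API):

```python
def isCombinable(matrix, a, b):
    # Hypothetical helper, not the SModelS implementation: two analysis
    # (or "analysis:SR") identifiers count as combinable if either one
    # lists the other in the combinations matrix.
    return b in matrix.get(a, ()) or a in matrix.get(b, ())

matrix = {"anaid1": ("anaid2", "anaid3")}
```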

clearLinksToCombinationsMatrix()[source]

clear all shallow links to the combinations matrix

createBinaryFile(filename=None)[source]

create a pcl file from all the sub-databases

createLinksToCombinationsMatrix()[source]

in all globalInfo objects, create a shallow link to the combinations matrix

property databaseParticles

Database particles, a list, one entry per sub

property databaseVersion

The version of the database, concatenation of the individual versions

property expResultList

The combined list of results, compiled from the active results in each subdatabase.

getExpResults(analysisIDs=['all'], datasetIDs=['all'], txnames=['all'], dataTypes=['all'], useNonValidated=False, onlyWithExpected=False)[source]

Selects (filters) the results within the database that satisfy the restrictions set by the arguments and returns the corresponding results.

getExpSMS()[source]

Returns all the SMS present in the selected experimental results

mergeERs(o1, r2)[source]

merge the contents of the two experimental results

mergeLists(lists)[source]

small function, merges lists of ERs

property pcl_meta

The meta info of the binary (pickle) version, a merger of the original ones

selectExpResults(analysisIDs=['all'], datasetIDs=['all'], txnames=['all'], dataTypes=['all'], useNonValidated=False, onlyWithExpected=False)[source]

Selects (filters) the results within the database that satisfy the restrictions set by the arguments and updates the centralized SMS dictionary.

Parameters
  • analysisIDs – list of analysis ids ([CMS-SUS-13-006,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>] Furthermore, the centre-of-mass energy can be chosen as suffix, e.g. “:13*TeV”. Note that the asterisk in the suffix is not a wildcard.

  • datasetIDs – list of dataset ids ([ANA-CUT0,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • txnames – list of txnames ([TChiWZ,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • dataTypes – dataType of the analysis (all, efficiencyMap or upperLimit) Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • useNonValidated – If False, the results with validated = False will not be included

  • onlyWithExpected – Return only those results that have expected values also. Note that this is trivially fulfilled for all efficiency maps.
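The shell-style wildcard semantics described above can be previewed with Python’s standard fnmatch module, which implements the same * ? [..] patterns (the ID list below is illustrative only):

```python
from fnmatch import fnmatch

# Illustrative analysis IDs; fnmatch implements the same shell-style
# wildcards (*, ?, [..]) used by the analysisIDs/datasetIDs/txnames filters.
ids = ["CMS-SUS-13-006", "CMS-SUS-16-033", "ATLAS-SUSY-2018-31"]
selected = [i for i in ids if fnmatch(i, "CMS-SUS-1[36]-*")]
```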

property txt_meta

The meta info of the text version, a merger of the original ones

class experiment.databaseObj.SubDatabase(base=None, force_load=None, progressbar=False, subpickle=True, combinationsmatrix=None)[source]

Bases: object

SubDatabase object. Holds a list of ExpResult objects.

Parameters
  • base – path to the database, or pickle file (string), or http address. If None, “official”, or “official_fastlim”, use the official database for your code version (including fastlim results, if specified). If “latest”, or “latest_fastlim”, check for the latest database. Multiple databases may be named, using “+” as a delimiter. Order matters: results with the same name will be overwritten according to the sequence.

  • force_load – force loading the text database (“txt”) or the binary database (“pcl”); do not force anything if None

  • progressbar – show a progressbar when building pickle file (needs the python-progressbar module)

  • subpickle – produce small pickle files per exp result. Should only be used when working on the database.

  • combinationsmatrix – an optional dictionary that contains info about combinable analyses, e.g. { “anaid1”: ( “anaid2”, “anaid3” ) } optionally specifying signal regions, e.g. { “anaid1:SR1”: ( “anaid2:SR2”, “anaid3” ) }

property base

This is the path to the base directory.

checkBinaryFile()[source]
checkPathName(path)[source]

checks the path name and returns the base directory and the pickle file name. If the path starts with http or ftp, fetches the description file and the database.

clearLinksToCombinationsMatrix()[source]
createBinaryFile(filename=None)[source]

create a pcl file from the text database, potentially overwriting an old pcl file.

createExpResult(root)[source]

create an ExpResult, either from a pickle file or from text files

createLinksToCombinationsMatrix()[source]

in all globalInfo objects, create links to self.combinationsmatrix

createLinksToModel()[source]

in all globalInfo objects, create links to self.databaseParticles

property databaseVersion

The version of the database, read from the ‘version’ file.

property expResultList

The list of active results.

fetchFromScratch(path, store)[source]

fetch the database from scratch, together with its description.

Parameters

store – filename in which to store the json file.

fetchFromServer(path)[source]
getExpResults(analysisIDs=['all'], datasetIDs=['all'], txnames=['all'], dataTypes=['all'], useNonValidated=False, onlyWithExpected=False)[source]

Returns a list of ExpResult objects.

Each object refers to an analysisID containing one (for UL) or more (for efficiency maps) datasets (signal regions), each dataset containing one or more TxNames. If analysisIDs is defined, returns only the results matching one of the IDs in the list. If dataTypes is defined, returns only the results matching a dataType in the list. If datasetIDs is defined, returns only the results matching one of the IDs in the list. If txnames is defined, returns only the results matching one of the Tx names in the list.

Parameters
  • analysisIDs – list of analysis ids ([CMS-SUS-13-006,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>] Furthermore, the centre-of-mass energy can be chosen as suffix, e.g. “:13*TeV”. Note that the asterisk in the suffix is not a wildcard.

  • datasetIDs – list of dataset ids ([ANA-CUT0,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • txnames – list of txnames ([TChiWZ,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • dataTypes – dataType of the analysis (all, efficiencyMap or upperLimit) Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • useNonValidated – If False, the results with validated = False will not be included

  • onlyWithExpected – Return only those results that have expected values also. Note that this is trivially fulfilled for all efficiency maps.

Returns

list of ExpResult objects or the ExpResult object if the list contains only one result

inNotebook()[source]

Are we running within a notebook? Has an effect on the progressbar we wish to use.

loadBinaryFile(lastm_only=False)[source]

Load a binary database, returning last modified, file count, database.

Parameters

lastm_only – if true, the database itself is not read.

Returns

database object, or None, if lastm_only == True.

loadDatabase()[source]

if no binary file is available, load the text database and create the binary file. If a binary file is available, check whether it needs an update and, if so, create a new binary file.

loadTextDatabase()[source]

simply loads the text database

lockFile(filename: PathLike)[source]

lock the file <filename>

needsUpdate()[source]

does the binary db file need an update?

removeLinksToModel()[source]

remove the links of globalInfo._databaseParticles to the model. Currently not used.

setActiveExpResults(analysisIDs=['all'], datasetIDs=['all'], txnames=['all'], dataTypes=['all'], useNonValidated=False, onlyWithExpected=False)[source]

Filter the experimental results and store them in activeResults.

Parameters
  • analysisIDs – list of analysis ids ([CMS-SUS-13-006,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>] Furthermore, the centre-of-mass energy can be chosen as suffix, e.g. “:13*TeV”. Note that the asterisk in the suffix is not a wildcard.

  • datasetIDs – list of dataset ids ([ANA-CUT0,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • txnames – list of txnames ([TChiWZ,…]). Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • dataTypes – dataType of the analysis (all, efficiencyMap or upperLimit) Can be wildcarded with usual shell wildcards: * ? [<letters>]

  • useNonValidated – If False, the results with validated = False will not be included

  • onlyWithExpected – Return only those results that have expected values also. Note that this is trivially fulfilled for all efficiency maps.

Returns

list of ExpResult objects or the ExpResult object if the list contains only one result

unlockFile(filename: PathLike)[source]

unlock the file <filename>

updateBinaryFile()[source]

write a binary db file, but only if necessary.

experiment.databaseObj.removeLockFiles(lockfiles)[source]

remove stale lock files

experiment.datasetObj module

class experiment.datasetObj.CombinedDataSet(expResult)[source]

Bases: object

Holds the information for a combined dataset (used for combining multiple datasets).

findType()[source]

find the type of the combined dataset

getDataSet(datasetID)[source]

Returns the dataset with the corresponding dataset ID. If the dataset is not found, returns None.

Parameters

datasetID – dataset ID (string)

Returns

DataSet object if found, otherwise None.

getID()[source]

Return the ID for the combined dataset

getIndex(dId, datasetOrder)[source]

Get the index of dataset within the datasetOrder, but allow for abbreviated names.

Parameters
  • dId – id of dataset to search for, may be abbreviated

  • datasetOrder – the ordered list of datasetIds, long form

Returns

index, or -1 if not found
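A minimal sketch of the abbreviated-name lookup described above (an illustration of the contract, not the SModelS code; here an ambiguous abbreviation also yields -1):

```python
def getIndex(dId, datasetOrder):
    # Sketch: exact match first, otherwise treat dId as an abbreviation
    # (prefix) of the long names; missing or ambiguous ids give -1.
    if dId in datasetOrder:
        return datasetOrder.index(dId)
    hits = [i for i, name in enumerate(datasetOrder) if name.startswith(dId)]
    return hits[0] if len(hits) == 1 else -1
```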

getLumi()[source]

Return the dataset luminosity. For CombinedDataSet always return the value defined in globalInfo.lumi.

getType()[source]

Return the dataset type (combined)

isCombinableWith(other)[source]

Reports whether two datasets are mutually uncorrelated, i.e. combinable. A combined dataset is combinable with “other” if all of its constituents are.

Parameters

other – datasetObj to compare self with

sortDataSets()[source]

Sort datasets according to globalInfo.datasetOrder.

class experiment.datasetObj.DataSet(path=None, info=None, createInfo=True, databaseParticles=None)[source]

Bases: object

Holds the information to a data set folder (TxName objects, dataInfo,…)

Parameters
  • path – Path to the dataset folder

  • info – globalInfo (from the ExpResult obj)

  • createInfo – If True, create object from dataset folder

  • databaseParticles – Model object holding Particle objects to be used when creating the SMS topologies in the TxNames.

checkForRedundancy(databaseParticles)[source]

In case of efficiency maps, check if any txnames have overlapping constraints. This would result in double counting, so we don’t allow it.

folderName()[source]

Name of the folder in text database.

getAttributes(showPrivate=False)[source]

Checks for all the fields/attributes it contains as well as the attributes of its objects if they belong to smodels.experiment.

Parameters

showPrivate – if True, also returns the protected fields (_field)

Returns

list of field names (strings)

getCollaboration(ds)[source]
getEfficiencyFor(txname, sms, mass)[source]

Convenience function. Get efficiency for mass assuming no lifetime rescaling. Same as self.getTxName(txname).getEfficiencyFor(sms,mass)

getID()[source]

Return the dataset ID

getLumi()[source]

Return the dataset luminosity. If not defined for the dataset, use the value defined in globalInfo.lumi.

getSRUpperLimit(expected=False)[source]

Returns the 95% upper limit on the signal*efficiency for a given dataset (signal region). Only to be used for efficiency map type results.

Parameters

expected – If True, return the expected limit ( i.e. Nobserved = NexpectedBG )

Returns

upper limit value

getTxName(txname)[source]

get one specific txName object.

getType()[source]

Return the dataset type (EM/UL)

getUpperLimitFor(sms=None, expected=False, txnames=None, compute=False, alpha=0.05, deltas_rel=0.2, mass=None)[source]

Returns the upper limit for a given SMS (or mass) and txname. If the dataset holds an efficiency-map result, the upper limit is independent of the input txname or mass. For UL results, if an SMS object is given, the corresponding upper limit is rescaled according to the lifetimes of the SMS intermediate particles. If SMS is not defined but mass is given, the UL is computed using only the mass array (no width reweighting is applied) and the mass format is assumed to be the one expected by the data.

Parameters
  • txname – TxName object or txname string (only for UL-type results)

  • sms – SMS object (only for UL-type results)

  • mass – Mass array (only for UL-type results)

  • alpha – Can be used to change the C.L. value. The default value is 0.05 (= 95% C.L.) (only for efficiency-map results)

  • deltas_rel – relative uncertainty in signal (float). Default value is 20%.

  • expected – Compute expected limit, i.e. Nobserved = NexpectedBG (only for efficiency-map results)

  • compute – If True, the upper limit will be computed from expected and observed number of events. If False, the value listed in the database will be used instead.

Returns

upper limit (Unum object)

getValuesFor(attribute)[source]

Returns a list of the possible values appearing in the ExpResult for the required attribute (sqrts, id, constraint, …). If there is a single value, returns the value itself.

Parameters

attribute – name of a field in the database (string).

Returns

list of unique values for the attribute

isCombMatrixCombinableWith_(other)[source]

Check for combinability via the combinations matrix.

isCombinableWith(other)[source]

Reports whether two datasets are mutually uncorrelated, i.e. combinable.

Parameters

other – datasetObj to compare self with

isGlobalFieldCombinableWith_(other)[source]

Check for ‘combinableWith’ fields in globalInfo, check if <other> matches. This check is at analysis level (not at dataset level).

Parameters

other – a dataset to check against

Returns

True, if pair is marked as combinable, else False

isLocalFieldCombinableWith_(other)[source]

Check for ‘combinableWith’ fields in dataInfo, check if <other> matches. This check is at dataset level (not at analysis level).

Parameters

other – a dataset to check against

Returns

True, if pair is marked as combinable, else False

longStr()[source]

Returns a long string displaying the dataset ID, the experimental result ID, the dataset type and the dataset txnames.

Returns

String

experiment.defaultFinalStates module

experiment.exceptions module

exception experiment.exceptions.DatabaseNotFoundException(value)[source]

Bases: Exception

This exception is used when the database cannot be found.

exception experiment.exceptions.SModelSExperimentError(value=None)[source]

Bases: Exception

Class to define SModelS specific errors

experiment.expAuxiliaryFuncs module

experiment.expAuxiliaryFuncs.addInclusives(massList, shapeArray)[source]

Add entries corresponding to ‘*’ in shapeArray. If shapeArray contains ‘*’ entries, the corresponding entries will be inserted into the output.

Parameters
  • massList – 1D array of floats. Its dimension should be equal to the number of non “*” items in shapeArray (e.g. [200.0,100.0])

  • shapeArray – 1D array containing the data type and ‘*’. The values of data type are ignored (e.g. [float,’*’,float,’*’,’*’]).

Returns

original array with ‘*’ inserted at the correct entries.
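The described behavior for 1D arrays can be sketched as follows (a reimplementation of the contract for illustration, not the library code):

```python
def addInclusives(massList, shapeArray):
    # Sketch: walk shapeArray, emitting '*' where it says '*' and the
    # next massList value otherwise.
    masses = iter(massList)
    return ['*' if entry == '*' else next(masses) for entry in shapeArray]
```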

experiment.expAuxiliaryFuncs.addUnit(obj, unit)[source]

Add unit to object. If the object is a nested list, adds the unit to all of its elements.

Parameters
  • obj – Object without units (e.g. [[100,100.]])

  • unit – Unit to be added to the object (Unum object, e.g. GeV)

Returns

Object with units (e.g. [[100*GeV,100*GeV]])

experiment.expAuxiliaryFuncs.bracketToProcessStr(stringSMS, finalState=None, intermediateState=None, returnNodeDict=False)[source]
Parameters
  • stringSMS – string describing the SMS in bracket notation (e.g. [[[e+],[jet]],[[e-],[jet]]])

  • finalState – list containing the final state labels for each branch (e.g. [‘MET’, ‘HSCP’] or [‘MET’,’MET’])

  • intermediateState – nested list containing intermediate state labels for each branch (e.g. [[‘gluino’], [‘gluino’]])

  • returnNodeDict – If True, return a dictionary mapping the nested bracket indices to the particle nodes ({(branchIndex,vertexIndex) : nodeIndex})

Returns

process string in new format (str) and dictionary nodes dictionary (if returnNodeDict=True)

experiment.expAuxiliaryFuncs.cGtr(weightA, weightB)[source]

Define the auxiliary greater function.

Return a number between 0 and 1 depending on how much the condition is violated (0 = A > B, 1 = A << B).

Returns

XSectionList object with the values for each label.

experiment.expAuxiliaryFuncs.cSim(*weights)[source]

Define the auxiliary similar function.

Return the maximum relative difference between any elements of the weights list, normalized to [0,1].

Returns

List of values.

experiment.expAuxiliaryFuncs.cleanWalk(topdir)[source]

perform os.walk, but ignore all hidden files and directories

experiment.expAuxiliaryFuncs.concatenateLines(oldcontent)[source]

of all lines in the list “oldcontent”, concatenate the ones that end with a line-continuation character (‘\’) or a comma (‘,’)

experiment.expAuxiliaryFuncs.flattenArray(objList)[source]

Flatten any nested list to a 1D list.

Parameters

objList – Any list or nested list of objects (e.g. [[[(100.,1e-10),100.],1.],[[200.,200.],2.],…])

Returns

1D list (e.g. [100.,1e-10,100.,1.,200.,200.,2.,..])
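A recursive sketch of the described flattening (illustration of the contract, not the library code; tuples are flattened as well, as in the example above):

```python
def flattenArray(objList):
    # Sketch: recursively flatten nested lists/tuples into a 1D list.
    flat = []
    for obj in objList:
        if isinstance(obj, (list, tuple)):
            flat.extend(flattenArray(obj))
        else:
            flat.append(obj)
    return flat
```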

experiment.expAuxiliaryFuncs.getAttributesFrom(obj, skipIDs=[])[source]

Loops over all attributes in the object and return a list of the attributes.

Parameters
  • obj – Any object with a __dict__ attribute

  • skipIDs – List of object ids. Any object which has its id on the list will be ignored (useful to avoid recursion).

Returns

List with unique attribute labels.

experiment.expAuxiliaryFuncs.getValuesForObj(obj, attribute)[source]

Loops over all attributes in the object and in its attributes and returns a list of values for the desired attribute.

Parameters
  • obj – Any object with a __dict__ attribute

  • attribute – String for the desired attribute

Returns

List with unique attribute values. If the attribute is not found, returns empty list.

experiment.expAuxiliaryFuncs.index_bisect(inlist, el)[source]

Return the index where to insert item el in inlist. inlist is assumed to be sorted and a comparison function (lt or cmp) must exist for el and the other elements of the list. If el already appears in the list, inlist.insert(el) will insert just before the leftmost el already there.
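The stated contract matches the standard library’s bisect_left, so a sketch is a thin wrapper (assuming the library version behaves like bisect_left):

```python
from bisect import bisect_left

def index_bisect(inlist, el):
    # Sketch: insertion index in a sorted list; if el is already present,
    # the index of the leftmost occurrence is returned.
    return bisect_left(inlist, el)
```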

experiment.expAuxiliaryFuncs.removeInclusives(massArray, shapeArray)[source]

Remove all entries corresponding to ‘*’ in shapeArray. If shapeArray contains ‘*’ entries, the corresponding entries will be removed from the output.

Parameters
  • massArray – Array to be formatted (e.g. [[200.,100.],[200.,100.]] or [[[200.,’*’],’*’],0.4])

  • shapeArray – Array with format info (e.g. [‘*’,[float,float]])

Returns

formatted array (e.g. [[200.,100.]] or [[[200.]],0.4])
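A recursive sketch of the described behavior (illustration only; None is used internally as a drop marker, which is an implementation choice of this sketch):

```python
def removeInclusives(massArray, shapeArray):
    # Sketch: drop every entry whose shapeArray counterpart is '*',
    # recursing through nested lists.
    if shapeArray == '*':
        return None  # marker: drop this entry
    if isinstance(shapeArray, list):
        kept = (removeInclusives(m, s) for m, s in zip(massArray, shapeArray))
        return [k for k in kept if k is not None]
    return massArray
```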

experiment.expAuxiliaryFuncs.removeUnits(value, stdUnits=[1.00E+00[GeV], 1.00E+00[fb]], returnUnit=False)[source]

Remove units from unum objects. Uses the units defined in base.physicsUnits.standardUnits to normalize the data.

Parameters
  • value – Object containing units (e.g. [[100*GeV,100.*GeV],3.*pb])

  • stdUnits – Unum unit or array of unum units used to normalize the data.

  • returnUnit – If True, also returns the unit corresponding to the returned value.

Returns

Object normalized to standard units (e.g. [[100,100],3000]). If returnUnit = True, returns a tuple with the value and its unit (e.g. 100,GeV). For unitless values return 1.0 as the unit.

experiment.expAuxiliaryFuncs.rescaleWidth(width)[source]

The function that is applied to all widths to map them into a better variable for interpolation. It grows logarithmically from zero (for width = 0.) to a large number (machine dependent) for width = infinity.

Parameters

width – Width value (in GeV) with or without units

Return x

Coordinate value (float)

experiment.expAuxiliaryFuncs.reshapeList(objList, shapeArray)[source]

Reshape a flat list according to the shape of shapeArray. The number of elements in shapeArray should equal the length of objList.

Parameters
  • objList – 1D list of objects (e.g. [200,100,’*’,50,’*’])

  • shapeArray – Nested array (e.g. [[float,float,’*’,float],’*’])

Returns

Array with elements from objList shaped according to shapeArray (e.g. [[200.,100.,’*’,50],’*’])
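The inverse operation to flattening can be sketched by consuming the flat list while mirroring the nesting of shapeArray (illustration of the contract, not the library code):

```python
def reshapeList(objList, shapeArray):
    # Sketch: consume the flat objList in order while mirroring the
    # nesting structure of shapeArray.
    flat = iter(objList)

    def build(shape):
        if isinstance(shape, list):
            return [build(entry) for entry in shape]
        return next(flat)

    return build(shapeArray)
```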

experiment.expAuxiliaryFuncs.smsInStr(instring)[source]

Parse instring and return a list of elements appearing in instring. instring can also be a list of strings.

Parameters

instring – string containing elements (e.g. “[[[‘e+’]],[[‘e-‘]]]+[[[‘mu+’]],[[‘mu-‘]]]”)

Returns

list of elements appearing in instring in string format

experiment.expAuxiliaryFuncs.sortParticleList(ptcList)[source]

sorts a list of particle or particle list objects by their label

Parameters

ptcList – list to be sorted, containing particle or particle list objects

Returns

sorted list of particles

experiment.expAuxiliaryFuncs.unscaleWidth(x)[source]

Maps a coordinate value back to width. The mapping is such that x=0->width=0 and x=very large -> width = inf.

Parameters

x – Coordinate value (float)

Return width

Width value without units
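The stated properties of rescaleWidth and unscaleWidth (zero maps to zero, logarithmic growth, mutual inverses) can be illustrated with one plausible mapping; the reference width W0 and the formula below are assumptions for illustration, not the SModelS definition:

```python
import math

W0 = 1e-30  # hypothetical reference width in GeV (an assumption)

def rescaleWidth(width):
    # Sketch: grows logarithmically, with rescaleWidth(0.) == 0.
    return math.log1p(width / W0)

def unscaleWidth(x):
    # Sketch: exact inverse of the mapping above.
    return W0 * math.expm1(x)
```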

experiment.expResultObj module

class experiment.expResultObj.ExpResult(path=None, databaseParticles=None)[source]

Bases: object

Object containing the information and data corresponding to an experimental result (experimental conference note or publication).

Parameters
  • path – Path to the experimental result folder, None means transient experimental result

  • databaseParticles – the model, i.e. the particle content

getAttributes(showPrivate=False)[source]

Checks for all the fields/attributes it contains as well as the attributes of its objects if they belong to smodels.experiment.

Parameters

showPrivate – if True, also returns the protected fields (_field)

Returns

list of field names (strings)

getDataset(dataId)[source]

retrieve dataset by dataId

getEfficiencyFor(dataID=None, txname=None, sms=None, mass=None)[source]

For an Efficiency Map type, returns the efficiency for the corresponding txname and dataset for the given dataSet ID (signal region). For an Upper Limit type, returns 1 or 0, depending on whether the SMS matches the Txname. If SMS is not defined but mass is given, returns the efficiency using only the mass array (no width reweighting is applied); the mass format is assumed to be the one expected by the data.

Parameters
  • dataID – dataset ID (string) (only for efficiency-map type results)

  • txname – TxName object or txname string (only for UL-type results)

  • sms – SMS object

  • mass – Mass array

Returns

efficiency (float)

getTxNames()[source]

Returns a list of all TxName objects appearing in all datasets.

getTxnameWith(restrDict={})[source]

Returns a list of TxName objects satisfying the restrictions. The restrictions are specified as a dictionary.

Parameters

restrDict – dictionary containing the fields and their allowed values. E.g. {‘txname’ : ‘T1’, ‘axes’ : ….} The dictionary values can be single entries or a list of values. For the fields not listed, all values are assumed to be allowed.

Returns

list of TxName objects if more than one txname matches the selection criteria or a single TxName object, if only one matches the selection.

getUpperLimitFor(dataID=None, alpha=0.05, expected=False, txname=None, sms=None, compute=False, mass=None)[source]

Computes the 95% upper limit (UL) on the signal cross section according to the type of result. For an Efficiency Map type, returns the UL for the signal*efficiency for the given dataSet ID (signal region). For an Upper Limit type, returns the UL for the signal*BR for the given mass array and Txname. If SMS is not defined but mass is given, computes the UL using only the mass array (no width reweighting is applied); the mass format is assumed to be the one expected by the data.

Parameters
  • dataID – dataset ID (string) (only for efficiency-map type results)

  • alpha – Can be used to change the C.L. value. The default value is 0.05 (= 95% C.L.) (only for efficiency-map results)

  • expected – Compute expected limit, i.e. Nobserved = NexpectedBG (only for efficiency-map results)

  • txname – TxName object or txname string (only for UL-type results)

  • sms – SMS object

  • mass – Mass array

  • compute – If True, the upper limit will be computed from expected and observed number of events. If False, the value listed in the database will be used instead.

Returns

upper limit (Unum object)

getValuesFor(attribute)[source]

Returns a list of the possible values appearing in the ExpResult for the required attribute (sqrts, id, constraint, …). If there is a single value, returns the value itself.

Parameters

attribute – name of a field in the database (string).

Returns

list of unique values for the attribute

hasCovarianceMatrix()[source]
hasJsonFile()[source]
id()[source]
isCombinableWith(other)[source]

Can this ExpResult be safely assumed to be approximately uncorrelated with “other”? “Other” is another ExpResult. Later, “other” should also be allowed to be a dataset.

writePickle(dbVersion)[source]

write the pickle file

experiment.expSMS module

class experiment.expSMS.ExpSMS[source]

Bases: GenericSMS

A class for describing Simplified Model Topologies generated by the decomposition of full BSM models.

Initialize basic attributes.

compareNodes(other, nodeIndex1, nodeIndex2)[source]

Convenience function for defining how nodes are compared within the SMS. For ExpSMS the nodes are sorted according to their inclusiveness (InclusiveNode or inclusiveList), the particle content and finally their string representation.

Parameters
  • other – ExpSMS object (if other=self compare subtrees of the same SMS).

  • nodeIndex1 – Index of first node

  • nodeIndex2 – Index of second node

Returns

1 if node1 > node2, -1 if node1 < node2, 0 if node1 == node2.

computeMatchingDict(other, n1, n2)[source]

Compare the subtrees with n1 and n2 as roots and return a dictionary with the node matchings {n1 : n2,…}. It uses the node comparison to define semantically equivalent nodes.

Parameters
  • other – TheorySMS or ExpSMS object to be compared to self.

  • n1 – Node index belonging to self

  • n2 – Node index belonging to other

Returns

None (subtrees differ) or a dictionary with the mapping of the nodes and their daughters ({n1 : n2, d1 : d2, …}).

copy(emptyNodes=False)[source]

Returns a shallow copy of self.

Parameters

emptyNodes – If True, does not copy any of the nodes from self.

Returns

ExpSMS object

classmethod from_string(stringSMS, model=None, finalState=None, intermediateState=None)[source]

Converts a string describing an SMS to an SMS object. It accepts the (old) bracket notation or the process notation. For the old notation the optional arguments finalState and intermediateState can also be defined. If the argument model is defined, the particle labels will be converted to Particle objects from the model. Otherwise the nodes will hold the particle strings.

Parameters
  • stringSMS – The process in string format (e.g. ‘(PV > gluino(1),squark(2)), (gluino(1) > MET,jet,jet), (squark(2) > HSCP,u)’ or [[[‘jet’,’jet’]],[[‘u’]]]). The particle labels should match the particles in the Model (if Model != None).

  • model – The model (Model object) to be used when converting particle labels to particle objects. If None, the nodes will only store the particle labels.

  • finalState – (optional) list containing the final state labels for each branch (e.g. [‘MET’, ‘HSCP’] or [‘MET’,’MET’])

  • intermediateState – (optional) nested list containing intermediate state labels for each branch (e.g. [[‘gluino’], [‘gluino’]])

identicalTo(other)[source]
matchesTo(other)[source]

Check if self matches other.

Parameters

other – TheorySMS or ExpSMS object to be compared to

Returns

None if objects do not match or a copy of self, but with the nodes from other.

experiment.expSMSDict module

class experiment.expSMSDict.ExpSMSDict(expResultList=[])[source]

Bases: dict

A two-way dictionary for storing the connections between unique ExpSMS and their corresponding TxNames.

Variables
  • _smsDict – Dictionary mapping the unique SMS to the TxNames ({smsUnique : {TxName : smsLabel}})

  • _txDict – Dictionary mapping the TxNames to the unique SMS ({TxName : {smsLabel : smsUnique}})

  • _nodesDict – Dictionary mapping the node numbering in unique SMS to the original numbering in the Txname SMS ({TxName : {smsLabel : nodesDict}})

Parameters

expResultList – List of ExpResult objects used to build the map
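The two-way bookkeeping described by the _smsDict/_txDict variables can be pictured with a miniature sketch (hypothetical helper, not the ExpSMSDict implementation; the real class also tracks node mappings):

```python
# Miniature sketch of the two-way bookkeeping (hypothetical helper, not
# the ExpSMSDict implementation).
smsDict = {}  # {smsUnique : {txname : smsLabel}}
txDict = {}   # {txname : {smsLabel : smsUnique}}

def register(smsUnique, txname, smsLabel):
    # Keep both directions in sync on every insertion.
    smsDict.setdefault(smsUnique, {})[txname] = smsLabel
    txDict.setdefault(txname, {})[smsLabel] = smsUnique
```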

computeDicts(expResultList)[source]

Iterates over all (active) experimental results and builds two dictionaries: one mapping TxNames and smsLabels to the unique SMS and another with the unique SMS indices as keys and a dictionary {TxName : smsLabel} as values. It also stores the mapping of the node numbering from the original Txname SMS to the unique (sorted) SMS.

copy()[source]

Create a copy of self.

Returns

the new ExpSMSDict object

filter(expResultList)[source]

Returns a copy of self containing only the mapping to the TxNames contained in expResultList.

Parameters

expResultList – List of experimental results (ExpResult obj)

Returns

A new ExpSMSDict with the selected TxNames.

getMatchesFrom(smsTopDict)[source]

Checks for all the matches between the SMS in smsTopDict (from decomposition) and the unique SMS in self. Returns a dictionary with the mapping: {unique SMS : [(matched SMS, original SMS),…]}

Parameters

smsTopDict – TopologyDict object with the TheorySMS from decomposition

Returns

Dictionary with unique SMS as keys and lists of matched SMS as values.

getSMS()[source]

Iterate over the unique ExpSMS stored in self.

getTx()[source]

Iterate over the TxNames stored in self.

setTxNodeOrdering(sms, tx, smsLabel, reverse=False)[source]

Relabel the node indices in sms according to the ExpSMS represented by smsLabel in the TxName tx (unique ExpSMS indices -> tx sms indices). If reverse=True, do the reverse labeling (tx sms indices -> unique ExpSMS indices).

Parameters
  • sms – SMS object to be relabeled

  • tx – TxName object

  • smsLabel – Label of the ExpSMS in the TxName object

  • reverse – If True, do the reverse labeling

Returns

sms with indices relabeled

experiment.graphMatching module

experiment.graphMatching.getCycle(G)[source]

Given a directed graph G, return a cycle, if one exists.

Parameters

G – Dictionary with the directed edges ({A : [B,C,D], B : [],..})

Returns

List of nodes generating the cycle. The first and last entries are the same node. ([A,C,D,A])
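A self-contained sketch of such a cycle search (iterative depth-first search over the edge dictionary; an illustration of the contract, not the library code):

```python
def getCycle(G):
    # Sketch: DFS over the directed graph; GRAY marks nodes on the
    # current path, so meeting a GRAY neighbour closes a cycle.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in G}

    def visit(start):
        stack = [start]  # current DFS path
        while stack:
            node = stack[-1]
            color[node] = GRAY
            for nxt in G.get(node, []):
                if color.get(nxt) == GRAY:  # back edge: cycle found
                    return stack[stack.index(nxt):] + [nxt]
                if color.get(nxt, WHITE) == WHITE:
                    stack.append(nxt)
                    break
            else:
                color[node] = BLACK  # fully explored, leave the path
                stack.pop()
        return None

    for n in G:
        if color[n] == WHITE:
            cycle = visit(n)
            if cycle:
                return cycle
    return None
```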

experiment.graphMatching.getDirectedEdges(edges, match)[source]

From the edges of a bipartite graph (left -> right) and a match dictionary, split the edges into left -> right (if they appear in the match) and right -> left (otherwise).

Parameters
  • edges – Dictionary with all edges between left and right nodes (left nodes as keys and right nodes as values)

  • match – Dictionary containing the edges for one perfect matching (left nodes as keys and right nodes as values)

Returns

Dictionary of directed edges between left and right nodes with edges from matchDict pointing from left to right and all the other edges pointing from right to left.
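The orientation rule above can be sketched as follows (an illustrative reimplementation, assuming edges are given as plain lists of right-node neighbors):

```python
def get_directed_edges(edges, match):
    """Orient bipartite edges: matched edges point left -> right,
    all other edges point right -> left."""
    directed = {}
    for left, rights in edges.items():
        for right in rights:
            if match.get(left) == right:
                # edge belongs to the matching: keep left -> right
                directed.setdefault(left, []).append(right)
            else:
                # unmatched edge: reverse it to right -> left
                directed.setdefault(right, []).append(left)
    return directed
```

Cycles in the resulting directed graph correspond to ways of swapping matched and unmatched edges to obtain a different perfect matching.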

experiment.graphMatching.getNewMatch(left, right, edges, match)[source]

Given a perfect match, a list of left and right nodes in a bipartite graph and their edges (left -> right), compute a new match.

Parameters
  • left – list of left nodes ([nL1,nL2,…])

  • right – list of right nodes ([nR1,nR2,..])

  • edges – Dictionary with left->right edges ({nL1 : [nR2,nR3], nL2 : [nR1],…})

  • match – Dictionary with a perfect matching ({nL1 : nR3, nL2 : nR1,…})

experiment.graphMatching.getPerfectMatchings(left, right, edges)[source]

Find all perfect matchings in an undirected bipartite graph.

Parameters
  • left – List of left nodes

  • right – List of right nodes

  • edges – Dictionary with all edges between left and right nodes (left nodes as keys and right nodes as values)

Returns

list with all matching dictionaries (left nodes as keys and right nodes as values)
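For small graphs the same result can be obtained by brute force (a sketch for illustration only; the actual function uses a more efficient enumeration based on alternating cycles):

```python
from itertools import permutations

def perfect_matchings_brute(left, right, edges):
    """Enumerate all perfect matchings of a small bipartite graph by
    trying every assignment of right nodes to left nodes."""
    matchings = []
    for perm in permutations(right, len(left)):
        # keep the assignment only if every (left, right) pair is an edge
        if all(r in edges.get(l, []) for l, r in zip(left, perm)):
            matchings.append(dict(zip(left, perm)))
    return matchings
```

This scales factorially and is only useful as a cross-check on a handful of nodes.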

experiment.graphMatching.maximal_matching(left, right, edges)[source]

Computes the maximal matching from left nodes to right nodes. The maximal matching is the maximal number of left nodes which can be connected to the right nodes without any node belonging to more than one edge. Adapted from networkx.algorithms.bipartite.matching.hopcroft_karp_matching.

Parameters
  • left – List of left nodes

  • right – List of right nodes

  • edges – Nested dictionary with left nodes as keys and matching right nodes as values (e.g. {nL1 : {nR2 : {}, nR3 : {}}, nL2 : {nR2 : {}, nR1 : {}},… })
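A simpler stand-in for the Hopcroft-Karp variant is Kuhn's augmenting-path algorithm, sketched below with plain lists of neighbors instead of the nested-dictionary format the actual function expects:

```python
def maximal_matching_sketch(left, right, edges):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    edges maps each left node to a list of right neighbors."""
    match = {}  # right node -> left node

    def try_augment(l, seen):
        for r in edges.get(l, []):
            if r in seen:
                continue
            seen.add(r)
            # take r if free, or re-route its current partner elsewhere
            if r not in match or try_augment(match[r], seen):
                match[r] = l
                return True
        return False

    for l in left:  # right nodes are implicit in edges
        try_augment(l, set())
    return {l: r for r, l in match.items()}
```

Hopcroft-Karp improves on this by augmenting along many shortest paths at once, but the fixed point is the same maximum matching size.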

experiment.graphMatching.perfectMatchingIter(left, right, edges, match, all_matches, add_e=None)[source]

Iterate over all perfect matchings.

Parameters
  • left – List of left nodes

  • right – List of right nodes

  • edges – Dictionary with all edges between left and right nodes (left nodes as keys and right nodes as values)

  • match – Dictionary with a perfect matching (left nodes as keys and right nodes as values)

  • all_matches – list with perfect matching dictionaries. Newly found matchings will be appended to this list.

  • add_e – List of tuples with the edges used to form subproblems. If not None, they will be added to each newly found matching.

Returns

all_matches, the updated list of all perfect matchings.

experiment.infoObj module

class experiment.infoObj.Info(path=None)[source]

Bases: object

Holds the meta data information contained in a .txt file (luminosity, sqrts, experimentID,…). Its attributes are generated according to the lines in the .txt file which contain “info_tag: value”.

Parameters

path – path to the .txt file
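The attribute generation from "info_tag: value" lines can be sketched as follows (a minimal illustration, assuming one tag per line and ignoring lines without a colon):

```python
class SimpleInfo:
    """Minimal sketch: turn 'tag: value' lines into object attributes,
    mirroring how Info reads its .txt metadata file."""

    def __init__(self, lines):
        for line in lines:
            if ':' not in line:
                continue  # skip lines without an info_tag
            tag, value = line.split(':', 1)
            setattr(self, tag.strip(), value.strip())
```

For instance, `SimpleInfo(["sqrts: 13*TeV", "lumi: 139/fb"])` yields an object with `sqrts` and `lumi` string attributes; the real class additionally evaluates values into typed objects.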

addInfo(tag, value)[source]

Adds the info field labeled by tag with value value to the object.

Parameters
  • tag – information label (string)

  • value – value for the field in string format

cacheJsons()[source]

If the "jsonFiles" attribute is defined, cache the corresponding jsons. Needed when pickling.

dirName(up=0)[source]

directory name of path. If up>0, we step up ‘up’ directory levels.

getInfo(infoLabel)[source]

Returns the value of info field.

Parameters

infoLabel – label of the info field (string). It must be an attribute of the GlobalInfo object

experiment.metaObj module

class experiment.metaObj.Meta(pathname, mtime=None, filecount=None, hasFastLim=None, databaseVersion=None, format_version=214, python='3.11.9 (main, Jun 18 2024, 09:40:25) [GCC 11.4.0]')[source]

Bases: object

Parameters
  • pathname – filename of pickle file, or dirname of text files

  • mtime – last modification time stamps

  • filecount – number of files

  • hasFastLim – fastlim in the database?

  • databaseVersion – version of database

  • format_version – format version of pickle file

  • python – python version

cTime()[source]
current_version = 214

The Meta object holds all meta information regarding the database, like number of analyses, last time of modification, … This info is needed to understand if we have to re-pickle.

determineLastModified(force=False)[source]

compute the last modified timestamp and count the number of files. Applies only to a text database.

getPickleFileName()[source]

get canonical pickle file name

isPickle()[source]

is this meta info from a pickle file?

lastModifiedSubDir(subdir)[source]

Return the last modified timestamp of subdir (working recursively) plus the number of files.

Parameters
  • subdir – directory name that is checked

  • lastm – the most recent timestamp so far, plus number of files

Returns

the most recent timestamp, and the number of files

needsUpdate(current)[source]

do we need an update with respect to <current>? Here <current> is the text database and <self> the pickle file.

printFastlimBanner()[source]

check if fastlim appears in data. If yes, print a statement to stdout.

sameAs(other)[source]

check if it is the same database version

versionFromFile()[source]

Retrieves the version of the database using the version file.

experiment.reweighting module

experiment.reweighting.calculateProbabilities(width, Leff_inner, Leff_outer)[source]

The fractions of long-lived, prompt and displaced decays are defined as:

F_long = exp(-totalwidth*Leff_outer/gb_outer)
F_prompt = 1 - exp(-totalwidth*Leff_inner/gb_inner)
F_displaced = 1 - F_prompt - F_long

Parameters
  • Leff_inner – is the effective inner radius of the detector, given in meters

  • Leff_outer – is the effective outer radius of the detector, given in meters

  • width – particle width for which probabilities should be calculated (in GeV)

Returns

Dictionary with the probabilities for the particle not to decay (in the detector), to decay promptly or displaced.
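These fractions translate directly into code. Note that the effective boost factors gb_inner, gb_outer and the hbar*c conversion of a width in GeV into an inverse decay length are assumptions of this sketch; the default values below are illustrative, not the ones used internally:

```python
import math

# hbar*c in GeV*m, to convert a width in GeV into an inverse decay length
HC = 1.973e-16

def calculate_probabilities(width, Leff_inner, Leff_outer,
                            gb_inner=1.3, gb_outer=1.43):
    """Fractions of decays that are long-lived (outside the detector),
    prompt (inside Leff_inner), or displaced (in between).
    gb_* are assumed effective boost factors (gamma*beta)."""
    F_long = math.exp(-width * Leff_outer / (gb_outer * HC))
    F_prompt = 1.0 - math.exp(-width * Leff_inner / (gb_inner * HC))
    F_displaced = 1.0 - F_prompt - F_long
    return {'F_long': F_long, 'F_prompt': F_prompt,
            'F_displaced': F_displaced}
```

By construction the three fractions sum to one; a broad width gives F_prompt close to 1, a tiny width gives F_long close to 1.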

experiment.reweighting.defaultEffReweight(sms=None, unstableWidths=None, stableWidths=None, Leff_inner=None, Leff_outer=None, minWeight=1e-10)[source]

Computes the lifetime reweighting factor for the SMS efficiency based on the widths of the BSM particles. The widths can be defined through the unstableWidths and stableWidths arguments or they will be extracted from the SMS. The reweighting factor corresponds to the fraction of prompt decays (for the unstableWidths) and the fractions of detector-stable decays (for the stableWidths).

Parameters
  • sms – SMS object

  • unstableWidths – List of widths for particles appearing as prompt decays

  • stableWidths – List of widths for particles appearing as stable

  • minWeight – Lower cut for the reweighting factor. Any value below this will be taken to be zero.

  • Leff_inner – is the effective inner radius of the detector, given in meters. If None, use default value.

  • Leff_outer – is the effective outer radius of the detector, given in meters. If None, use default value.

Returns

Reweight factor (float)

experiment.reweighting.defaultULReweight(sms=None, unstableWidths=None, stableWidths=None, Leff_inner=None, Leff_outer=None)[source]

Computes the lifetime reweighting factor for the SMS upper limit based on the lifetimes of all intermediate particles and the last stable odd-particle appearing in the SMS. The factor corresponds to the fraction of events with prompt decays of all intermediate BSM particles and a long-lived decay (outside the detector) of the final BSM state.

Parameters
  • sms – SMS object

  • Leff_inner – is the effective inner radius of the detector, given in meters. If None, use default value.

  • Leff_outer – is the effective outer radius of the detector, given in meters. If None, use default value.

Returns

Reweight factor (float)

experiment.reweighting.getWidthsFromSMS(sms)[source]

Extracts all the widths of unstable particles in the SMS and the widths of BSM particles appearing as final states (undecayed).

Parameters

sms – SMS object

Returns

List of unstable widths and list of stable widths

experiment.reweighting.reweightFactorFor(sms=None, resType='prompt', unstableWidths=None, stableWidths=None, Leff_inner=None, Leff_outer=None)[source]

Computes the reweighting factor for the SMS according to the experimental result type. Currently only two result types are supported: ‘prompt’ and ‘displaced’. If resType = ‘prompt’, returns the reweighting factor for all decays in the SMS to be prompt and the last odd particle to be stable. If resType = ‘displaced’, returns the reweighting factor for ANY decay in the SMS to be displaced, with no long-lived decays and the last odd particle stable. Note that the fraction of “long-lived (meta-stable) decays” is usually included in topologies where the meta-stable particle appears in the final state. Hence it should not be included in the prompt or displaced fractions.

Parameters
  • sms – SMS object

  • resType – Type of result to compute the reweight factor for (either ‘prompt’ or ‘displaced’)

  • Leff_inner – is the effective inner radius of the detector, given in meters. If None, use default value.

  • Leff_outer – is the effective outer radius of the detector, given in meters. If None, use default value.

Returns

probabilities (depending on types of decay within branch), branches (with different labels depending on type of decay)

experiment.txnameDataObj module

class experiment.txnameDataObj.Delaunay1D(data)[source]

Bases: object

Uses a 1D data array to interpolate the data. The attribute simplices is a list of N-1 pair of ints with the indices of the points forming the simplices (e.g. [[0,1],[1,2],[3,4],…]).

checkData(data)[source]

Define the simplices according to data. Compute and store the transformation matrix and the simplex points.

find_index(xlist, x)[source]

Efficient way to find x in a list. Returns the index i of xlist such that xlist[i] < x <= xlist[i+1]. If x > max(xlist), returns the length of the list. If x < min(xlist), returns 0.

Parameters
  • xlist – List of x-type objects

  • x – object to be searched for.

Returns

Index of the list such that xlist[i] < x <= xlist[i+1].
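A binary-search sketch of this lookup using the standard library (the boundary conventions below are my reading of the docstring, not a copy of the implementation):

```python
import bisect

def find_index(xlist, x):
    """Index i such that xlist[i] < x <= xlist[i+1];
    0 if x is at or below the range, len(xlist) if above."""
    if x <= xlist[0]:
        return 0
    if x > xlist[-1]:
        return len(xlist)
    # first position where xlist[i] >= x, then step back one interval
    return bisect.bisect_left(xlist, x) - 1
```

This is O(log n), which matters when the 1D grid is queried once per decomposition element.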

find_simplex(x, tol=0.0)[source]

Find 1D data interval (simplex) to which x belongs

Parameters
  • x – Point (float) without units

  • tol – Tolerance. If x is outside the data range with distance < tol, extrapolate.

Returns

simplex index (int)

class experiment.txnameDataObj.TxNameData(x, y, txdataId, accept_errors_upto=0.05)[source]

Bases: object

Holds the pre-processed data for the Txname object. It is responsible for computing the PCA transformation and interpolating. Only handles pre-processed data (1D unitless arrays, with widths rescaled).

Parameters
  • x – 2-D list of flat and unitless x-points (e.g. [ [mass1,mass2,mass3,mass4], …])

  • y – 1-D list with y-values (upper limits or efficiencies)

  • accept_errors_upto – If None, do not allow extrapolations outside of convex hull. If float value given, allow that much relative uncertainty on the upper limit / efficiency when extrapolating outside convex hull. This method can be used to loosen the equal branches assumption.

PCAtransf(point)[source]

Transform a flat/unitless point with masses/widths to the PCA coordinate space.

Parameters

point – Flat and unitless mass/rescaled width point (e.g. [mass1,mass2,width1]). Its length should be equal to self.full_dimensionality.

Returns

1D array in coordinate space

computeV(x)[source]

Compute rotation matrix _V, and triangulation self.tri

Parameters

x – 2-D array with the flatten x-points without units (e.g. [ [mass1,mass2,mass3,mass4], [mass1’,mass2’,mass3’,mass4’], …])
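A rotation matrix of this kind can be obtained via a standard PCA step, sketched here with an SVD (an illustration of the idea, not the stored `_V` itself, which also handles the triangulation):

```python
import numpy as np

def compute_rotation(x):
    """Sketch of a PCA rotation: center the points and take the right
    singular vectors; (x - mean) @ V maps points to principal-axis
    coordinates, where trailing coordinates of degenerate data vanish."""
    x = np.asarray(x, dtype=float)
    centered = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt.T  # columns are the principal directions
```

For points lying on a line in mass space, the second principal coordinate is zero, which is how the effective dimensionality of a grid can drop below the number of mass entries.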

countNonZeros(mp)[source]

count the nonzeros in a vector

getValueFor(point)[source]

Returns the UL or efficiency for the point.

Parameters

point – Flat and unitless mass/width point (e.g. [mass1,mass2,width1]). Its length should be equal to self.full_dimensionality.

Returns

Interpolated value for the grid (without units)

interpolate(point, fill_value=nan)[source]

Returns the interpolated value for the point (in coordinates)

Parameters

point – Point in coordinate space (length = self.dimensionality)

Returns

Value for point without units

inversePCAtransf(point)[source]

Transform a flat 1D point from coordinate space to a flat/unitless point with masses/rescaled widths.

Parameters

point – 1D array in coordinate space

Returns

Flat and unitless mass/rescaled width point (e.g. [mass1,mass2,width1]).

onlyZeroValues()[source]

check if the map is zeroes only

round_to_n(x, n)[source]

experiment.txnameObj module

class experiment.txnameObj.TxName(path=None, globalObj=None, infoObj=None, databaseParticles=None)[source]

Bases: object

Holds the information related to one txname in the Txname.txt file (constraint, condition,…) as well as the data.

addInfo(tag, value)[source]

Adds the info field labeled by tag with value value to the object.

Parameters
  • tag – information label (string)

  • value – value for the field in string format

checkConsistency()[source]

Checks if all the SMS in the txname have the same structure (topology/canonical name) and verifies that its constraints and conditions are valid expressions.

Returns

True if the txname is consistent. Raises an error otherwise.

convertAxes()[source]

If the axes field attribute has been defined (v2 format), convert it to a list of dictionaries with the format {arrayIndex : axesStr}, where arrayIndex refers to the index in the flat grid array (v3 format) and axesStr defines how this index should be mapped to the validation axes (e.g. {0 : ‘x’, 1 : ‘(x+y)/2’, 2 : ‘y’, …}). The axes attribute is replaced by ._axes.

convertBracketNotation()[source]

If the old bracket notation has been found in constraints or conditions, convert the strings. The _arrayToNodeDict is also defined to keep track of the matching between the original nested indices and the node indices in the new format. The old constraint and conditions are stored in self._constraint and self._conditions.

evalConditionsFor(smsList)[source]

Evaluate the conditions for a list of SMS which have been matched to the txname SMS. The SMS must have the attribute txlabel assigned to the label appearing in the constraint expression.

Parameters

smsList – List of SMS objects with txlabel and weight attributes

Returns

List of condition values.

evalConstraintFor(smsList)[source]

Evaluate the constraint function for a list of SMS which have been matched to the txname SMS. The SMS must have the attribute txlabel assigned to the label appearing in the constraint expression.

Parameters

smsList – List of SMS objects with txlabel and weight attributes

Returns

Value for the evaluated constraint, if the constraint has been defined, None otherwise.

evaluateString(value)[source]

Evaluate string.

Parameters

value – String expression.

fetchAttribute(attr, fillvalue=None)[source]

Auxiliary method to get the attribute from self. If not found, look for it in datasetInfo and if still not found look for it in globalInfo. If not found in either of the above, return fillvalue.

Parameters
  • attr – Name of attribute (string)

  • fillvalue – Value to be returned if attribute is not found.

Returns

Value of the attribute or fillvalue, if attribute was not found.

getDataEntry(arrayValue)[source]

Given an array value, extract the masses, widths and their units from the array.

Parameters

arrayValue – List with masses and/or masses and widths (e.g. [100*GeV, (50*GeV,1e-6*GeV)])

getDataFromSMS(sms)[source]
getEfficiencyFor(sms, mass=None)[source]

For upper limit results, checks if the input SMS falls inside the upper limit grid and has a non-zero reweighting factor. If it does, returns efficiency = 1, else returns efficiency = 0. For efficiency map results, returns the signal efficiency including the lifetime reweighting. If a mass array is given as input, no lifetime reweighting will be applied.

Parameters

sms – SMS object.

Returns

efficiency (float)

getInfo(infoLabel)[source]

Returns the value of info field.

Parameters

infoLabel – label of the info field (string). It must be an attribute of the TxNameInfo object

getReweightingFor(sms)[source]

Compute the lifetime reweighting for the SMS (fraction of prompt decays). If sms is a list, return 1.0.

Parameters

sms – SMS object

Returns

Reweighting factor (float)

getULFor(sms, expected=False, mass=None)[source]

Returns the upper limit (or expected upper limit) for the SMS (only for upperLimit-type results). Includes the lifetime reweighting (ul/reweight). Raises an error if called for efficiencyMap results. If SMS is not defined but mass is given, compute the UL using only the mass array (no width reweighting is applied); the mass format is assumed to follow the one expected by the data.

Parameters
  • sms – SMS object or mass array (with units)

  • expected – look in self.txnameDataExp, not self.txnameData

hasLikelihood()[source]

Can I construct a likelihood for this map? True for all efficiency maps, and for upper limit maps with expected values.

hasOnlyZeroes()[source]
hasSMSas(theorySMS, useLabel=None)[source]

Verify if any SMS in conditions or constraint matches sms. If possible, check for both branch orderings (for two-branch SMS) and return the one with the largest data/reweighting factor.

Parameters
  • theorySMS – SMS object

  • useLabel – String specifying the smsLabel to be used. If None, checks for all ExpSMS in self.smsMap.

Returns

A copy of the sms with its nodes sorted according to the matching topology in the TxName. Nodes matching InclusiveNodes or inclusiveLists are replaced.

inverseTransformPoint(xFlat)[source]

Transforms a 1D unitless array to a list of mass/width values. If self._arrayMap is defined, use it to convert to a nested bracket array format (e.g. [[mass1,(mass2,width2)],[mass3,mass4]]), otherwise convert it to a flat array (e.g. [mass1,mass2,mass3,mass4,width2]) using self.dataMap.

Parameters

xFlat – A 1D unitless array containing masses and rescaled widths

Returns

list (or nested list) with mass/width values (with units).

preProcessData(rawData)[source]

Convert input data (from the upperLimits, expectedUpperLimits or efficiencyMap fields) to a flat array without units. The output is used to construct the TxNameData object, which will further process the data and interpolate it. It also builds the dictionary for translating SMS properties to the flat data array.

Parameters

rawData – Raw data (either string or list)

Returns

Two flat lists of data, one for the model parameters and the other for the y values (UL or efficiency values)

processExpr(stringExpr, databaseParticles, checkUnique=False)[source]

Process a string expression (constraint or condition) for the SMS weights. Returns a simplified string expression, which can be readily evaluated using a dictionary mapping SMS labels to their weights. It also returns an SMS map (dictionary) with the SMS objects as keys and their labels (appearing in the simplified expression) as values.

Parameters
  • stringExpr – A mathematical expression for SMS weights (e.g. 2*([[[‘jet’]],[[‘jet’]]]))

  • databaseParticles – A Model object containing all the particle objects for the database.

  • checkUnique – If True raises an error if the SMS appearing in the expression are not unique (relevant for avoiding double counting in the expression).

Returns

simplified expression (str), smsMap (dict).

setDataMap(dataPoint)[source]

If self.dataMap has not been defined, sets the dataMap using the first sms in self.smsMap. The dataMap is a dictionary mapping the node index to a flat data array.

Parameters

dataPoint – A point with the x-values from the data grid (e.g. [[100*GeV,(50*GeV,1e-3*GeV)],[100*GeV,(50*GeV,1e-3*GeV),10*GeV]])

Returns

Dictionary with the data mapping {dataArrayIndex : (nodeNumber,attr,unit)} (e.g. {0 : (1,’mass’,GeV), 1 : (1, ‘totalwidth’,GeV),…})
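Once such a dataMap exists, building the flat grid coordinate for an SMS is a direct lookup. A minimal sketch (unit handling omitted; the {(node, attr): value} structure for the SMS properties is illustrative, not the real SMS interface):

```python
def flatten_with_datamap(sms_values, data_map):
    """Build the flat data array from a {index: (node, attr)} mapping,
    mirroring how dataMap translates SMS properties to grid coordinates.

    sms_values: {(node, attr): value} -- hypothetical flat view of an SMS.
    data_map:   {dataArrayIndex: (nodeNumber, attr)}
    """
    flat = [0.0] * len(data_map)
    for idx, (node, attr) in data_map.items():
        flat[idx] = sms_values[(node, attr)]
    return flat
```

The inverse direction (flat array back to node properties) uses the same dictionary read the other way around.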

transformData(data)[source]

Uses the information in self.dataMap (or self._arrayMap) to convert data to flat and unitless arrays. The data is split into two lists, one with the x-values (masses/widths) and another with the y-values (upper limits/efficiencies).

Parameters

data – 2-D array with the data grid ([[x-value,y-value], …]). The x-value can be a flat list (e.g. [mass1,mass2,mass3,width1]) or a nested list (e.g. [[(mass1,width1),mass2],[mass3]])

transformPoint(x)[source]

Transforms an x point (mass/width values) to a flat, unitless list. The widths are rescaled according to rescaleWidth. If x is already flat (e.g. [mass1,mass2,mass3,width3]), the transformation will use the mapping in self.dataMap. However, if x is a nested array (e.g. [[mass1,mass2],[(mass3,width3)]]), the transformation will be done according to the mapping defined in self._arrayMap.

Parameters

x – A list (or nested list) with mass/width values.

Returns

A flat and unitless list matching self.dataMap.

Module contents