# tools package¶

## tools.asciiGraph module¶

tools.asciiGraph.asciidraw(element, labels=True, html=False, border=False)[source]

Draw a simple ASCII graph on the screen.

## tools.caching module¶

class tools.caching.Cache[source]

Bases: object

a class for storing results from interpolation

static add(key, value)[source]
n_stored = 1000
static reset()[source]

completely reset the cache

static size()[source]

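The static interface above can be illustrated with a minimal, self-contained sketch (the `MiniCache` name, the `_data` store and the eviction policy are assumptions for illustration; the actual SModelS internals may differ):

```python
class MiniCache:
    """Sketch of a bounded static cache, mimicking the documented interface."""
    n_stored = 1000   # maximum number of entries kept
    _data = {}        # hypothetical backing store

    @staticmethod
    def add(key, value):
        # evict an arbitrary entry when the cache is full (assumed policy)
        if len(MiniCache._data) >= MiniCache.n_stored:
            MiniCache._data.pop(next(iter(MiniCache._data)))
        MiniCache._data[key] = value

    @staticmethod
    def size():
        return len(MiniCache._data)

    @staticmethod
    def reset():
        """Completely reset the cache."""
        MiniCache._data = {}
```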
## tools.colors module¶

class tools.colors.Colors[source]

Bases: object

blue
cyan
debug
error
green
info
magenta
red
reset
warn
yellow

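The attributes listed above are conventionally ANSI escape sequences for coloring terminal output; a hedged stand-in (the exact codes and severity mapping used by SModelS are assumptions here) might look like:

```python
class MiniColors:
    """Sketch of an ANSI color helper; the actual SModelS codes may differ."""
    red, green, yellow = "\033[31m", "\033[32m", "\033[33m"
    blue, magenta, cyan = "\033[34m", "\033[35m", "\033[36m"
    reset = "\033[0m"
    # assumed convention: severity levels mapped to colors
    error, warn, info, debug = red, yellow, green, cyan

def colorize(text, color):
    """Wrap text in a color code, restoring the default color afterwards."""
    return f"{color}{text}{MiniColors.reset}"
```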
## tools.coverage module¶

class tools.coverage.GeneralElement(el, missingX, smFinalStates, bsmFinalStates)[source]

Bases: object

This class represents a simplified (general) element which only holds information about its even particles and decay type. The even particles are replaced/grouped by the particles defined in smFinalStates.

sortBranches()[source]

Sort branches. The smallest branch is the first one. If branches are equal, sort according to decayType.

class tools.coverage.Uncovered(topoList, sqrts=None, sigmacut=0.00E+00 [fb], groupFilters={'missing (all)': <function <lambda>>, 'missing (displaced)': <function <lambda>>, 'missing (prompt)': <function <lambda>>, 'outsideGrid (all)': <function <lambda>>}, groupFactors={'missing (all)': <function <lambda>>, 'missing (displaced)': <function <lambda>>, 'missing (prompt)': <function <lambda>>, 'outsideGrid (all)': <function <lambda>>}, groupdDescriptions={'missing (all)': 'missing topologies', 'missing (displaced)': 'missing topologies with displaced decays', 'missing (prompt)': 'missing topologies with prompt decays', 'outsideGrid (all)': 'topologies outside the grid'}, smFinalStates=None, bsmFinalSates=None)[source]

Bases: object

Wrapper object for defining and holding a list of coverage groups (UncoveredGroup objects).

The class builds a series of UncoveredGroup objects and stores them.

Initialize the object.

Parameters:

• topoList – TopologyList object used to select elements from.
• sqrts – Value (with units) for the center of mass energy used to compute the missing cross sections. If not specified, the largest value available will be used.
• sigmacut – Minimum cross-section/weight value (after applying the reweight factor) for an element to be included. The value should be in fb (unitless).
• groupFilters – Dictionary containing the groups’ labels and the method for selecting elements.
• groupFactors – Dictionary containing the groups’ labels and the method for reweighting cross sections.
• groupdDescriptions – Dictionary containing the groups’ labels and strings describing the group (used for printout).
• smFinalStates – List of (inclusive) Particle or MultiParticle objects used for grouping Z2-even particles when creating GeneralElements.
• bsmFinalSates – List of (inclusive) Particle or MultiParticle objects used for grouping Z2-odd particles when creating GeneralElements.
getGroup(label)[source]

Returns the group with the required label. If not found, returns None.

Parameters: label – String corresponding to the specific group label
Returns: UncoveredGroup object which matches the label
class tools.coverage.UncoveredGroup(label, elementFilter, reweightFactor, smFinalStates, bsmFinalStates, sqrts, sigmacut=0.0)[source]

Bases: object

Holds information about a single coverage group: criteria for selecting and grouping elements, function for reweighting cross sections, etc.

Parameters:

• label – Group label.
• elementFilter – Function which takes an element as argument and returns True (False) if the element should (not) be selected.
• reweightFactor – Function which takes an element as argument and returns the reweighting factor to be applied to the element weight.
• smFinalStates – List of Particle/MultiParticle objects used to group Z2-even particles appearing in the final state.
• bsmFinalStates – List of Particle/MultiParticle objects used to group Z2-odd particles appearing in the final state.
• sqrts – Value (with units) for the center of mass energy used to compute the missing cross sections. If not specified, the largest value available will be used.
• sigmacut – Minimum cross-section/weight value (after applying the reweight factor) for an element to be included. The value should be in fb (unitless).
addToGeneralElements(el, missingX)[source]

Adds an element to the list of missing topologies = general elements. If the element contributes to a missing topology that is already in the list, add element and weight to topology. :parameter el: element to be added :parameter missingX: missing cross-section for the element (in fb)

getMissingX(element)[source]

Calculate total missing cross section of an element, by recursively checking if its mothers already appear in the list. :param element: Element object

Returns: missing cross section without units (in fb)
getToposFrom(topoList)[source]

Select the elements from topoList according to self.elementFilter and build GeneralElements from the selected elements. The GeneralElement weights correspond to the missing cross-section, with double counting from compressed elements already accounted for.

getTotalXSec(sqrts=None)[source]

Calculate total missing topology cross section at sqrts. If no sqrts is given, use self.sqrts. :param sqrts: center of mass energy value

## tools.crashReport module¶

class tools.crashReport.CrashReport[source]

Bases: object

Class that handles all crash report information.

createCrashReportFile(inputFileName, parameterFileName)[source]

Create a new SModelS crash report file.

A SModelS crash report file contains:

• a timestamp
• SModelS version
• platform information (CPU architecture, operating system, …)
• Python version
• stack trace
• input file name
• input file content
• parameter file name
• parameter file content
Parameters: inputFileName – relative location of the input file parameterFileName – relative location of the parameter file
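The report contents listed above can be assembled entirely from the standard library; the sketch below (section headers, helper name and layout are illustrative, not the exact SModelS format) shows one way to do it:

```python
import platform
import sys
import time
import traceback

def buildCrashReport(inputFileName, parameterFileName):
    """Assemble a crash-report string; the layout is illustrative only."""
    sections = [
        ("Timestamp", time.asctime()),
        ("Platform", f"{platform.machine()} / {platform.system()}"),
        ("Python version", sys.version.split()[0]),
        ("Stack trace", traceback.format_exc()),
        ("Input file name", inputFileName),
        ("Parameter file name", parameterFileName),
    ]
    return "\n".join(f"== {title} ==\n{body}" for title, body in sections)
```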
createUnknownErrorMessage()[source]

Create a message for an unknown error.

tools.crashReport.createStackTrace()[source]

Return the stack trace.

tools.crashReport.readCrashReportFile(crashReportFileName)[source]

Read a crash report file to use its input and parameter file sections for a SModelS run.

Parameters: crashReportFileName – relative location of the crash report file

## tools.databaseBrowser module¶

class tools.databaseBrowser.Browser(database, force_txt=False)[source]

Bases: object

Browses the database; exits if the given path does not point to a valid smodels-database. The Browser can be restricted to a specified run or experiment.

Parameters: force_txt – If True forces loading the text database. database – Path to the database or Database object
getAttributes(showPrivate=False)[source]

Checks for all the fields/attributes it contains as well as the attributes of its objects if they belong to smodels.experiment.

Parameters: showPrivate – if True, also returns the protected fields (_field)
Returns: list of field names (strings)
getEfficiencyFor(expid, dataset, txname, massarray)[source]

Get an efficiency for the given experimental id, the dataset name, the txname, and the massarray. Can only be used for EfficiencyMap-type experimental results. Interpolation is done, if necessary.

Parameters: expid – experimental id (string) dataset – dataset name (string) txname – txname (string) massarray – list of masses with units, e.g. [[ 400.*GeV, 100.*GeV],[400.*GeV, 100.*GeV]]
Returns: efficiency
getULFor(expid, txname, massarray, expected=False)[source]

Get an upper limit for the given experimental id, the txname, and the massarray. Can only be used for UL experimental results. Interpolation is done, if necessary.

Parameters: expid – experimental id (string) txname – txname (string); ONLY required for upper limit results massarray – list of masses with units, e.g. [[ 400.*GeV, 100.*GeV],[400.*GeV, 100.*GeV]] expected – If True, return expected upper limit, otherwise return observed upper limit.
Returns: upper limit [fb]
getULForSR(expid, datasetID)[source]

Get an upper limit for the given experimental id and dataset (signal region). Can only be used for efficiency-map results. :param expid: experimental id (string) :param datasetID: string defining the dataset id, e.g. ANA5-CUT3. :return: upper limit [fb]

getValuesFor(attribute, expResult=None)[source]

Returns a list of the possible values appearing in the database for the required attribute (sqrts,id,constraint,…).

Parameters: attribute – name of a field in the database (string). expResult – if defined, restricts the list to the corresponding expResult. Must be an ExpResult object.
Returns: list of values
loadAllResults()[source]

Saves all the results from the database to _selectedExpResults. Can be used to restore all results to _selectedExpResults.

selectExpResultsWith(**restrDict)[source]

Loads the list of experimental results (pairs of InfoFile and DataFile) satisfying the restrictions to _selectedExpResults. The restrictions are specified as a dictionary.

Parameters: restrDict – selection fields and their allowed values, e.g. lumi = [19.4/fb, 20.3/fb], txName = ‘T1’, … The values can be single entries or a list of values. For the fields not listed, all values are assumed to be allowed.
tools.databaseBrowser.main(args)[source]

IPython interface for browsing the Database.

## tools.databaseClient module¶

class tools.databaseClient.DatabaseClient(servername=None, port=None, verbose='info', rundir='./', logfile='@@rundir@@/dbclient.log', clientid=-1)[source]

Bases: object

clearCache()[source]
findServerStatus()[source]
getWaitingTime()[source]

compute a waiting time between connection attempts, based on self.ntries

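A common way to derive such a waiting time from a retry counter is exponential backoff with a cap; the formula and defaults below are assumptions for illustration, not necessarily the client's actual policy:

```python
def waitingTime(ntries, base=0.5, cap=60.0):
    """Exponential backoff sketch: base * 2**ntries seconds, capped at cap."""
    return min(base * 2 ** ntries, cap)
```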
initialize()[source]
log(*args)[source]
nameAndPort()[source]
pprint(*args)[source]
query(msg)[source]

query a certain result; msg is e.g. obs:ATLAS-SUSY-2016-07:ul:T1:[[5.5000E+02,4.5000E+02],[5.5000E+02,4.5000E+02]]

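The colon-separated query format shown in the docstring (type, analysis, data type, txname, mass array) can be split into its fields with a small helper; this parser is an illustration, not the server's actual implementation:

```python
import ast

def parseQuery(msg):
    """Split an 'obs:ANALYSIS:ul:TXNAME:[[...]]' query into its fields."""
    qtype, analysis, dtype, txname, masses = msg.split(":", 4)
    return {
        "type": qtype,            # e.g. 'obs' (assumed naming)
        "analysis": analysis,     # e.g. 'ATLAS-SUSY-2016-07'
        "dataType": dtype,        # e.g. 'ul'
        "txname": txname,         # e.g. 'T1'
        "masses": ast.literal_eval(masses),  # nested list of floats
    }
```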
saveStats()[source]
send(message, amount_expected=32)[source]

send the message. :param amount_expected: number of return bytes expected

send_shutdown()[source]

send shutdown request to server

setDefaults()[source]

put in some defaults if data members don't exist

tools.databaseClient.stresstest(args)[source]

this is one process in the stress test

## tools.databaseServer module¶

class tools.databaseServer.DatabaseServer(dbpath, servername=None, port=None, verbose='info', rundir='./', logfile='@@rundir@@/dbserver.log')[source]

Bases: object

finish()[source]
initialize()[source]
is_port_in_use(port)[source]

check if port is in use

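Such a check is typically done with the standard socket module; the sketch below (a plain connect test, which may differ from the server's implementation) illustrates the idea:

```python
import socket

def portInUse(port, host="localhost"):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```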
listen()[source]
log(*args)[source]
logServerStats()[source]

log our stats upon exit

lookUpResult(data)[source]
parseData(data)[source]

parse the data packet

pprint(*args)[source]
run(nonblocking=False)[source]

run server :param nonblocking: run in nonblocking mode (not yet implemented)

setStatus(status)[source]

servers have a status file that tells us if they are running

shutdown(fromwhere='unknown')[source]
tools.databaseServer.shutdownAll()[source]

## tools.externalPythonTools module¶

class tools.externalPythonTools.ExternalPythonTool(importname, optional=False)[source]

Bases: object

An instance of this class represents the installation of a python package. As it is python-only, we need this only for installation, not for running (contrary to nllfast or pythia).

Initializes the ExternalPythonTool object. Useful for installation. :param optional: optional package, not needed for core SModelS.

checkInstallation()[source]

The check is basically done in the constructor

compile()[source]
installDirectory()[source]

Just returns the pythonPath variable

pathOfExecutable()[source]

Just returns the pythonPath variable

## tools.inclusiveObjects module¶

class tools.inclusiveObjects.InclusiveList[source]

Bases: list

An inclusive list class. It will return True when compared to any other list object.

class tools.inclusiveObjects.InclusiveValue[source]

Bases: int

An inclusive number class. It will return True when compared to any other integer, float or Unum object.

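The "compares equal to anything of the right kind" behavior described above can be sketched by overriding `__eq__` (a minimal stand-in with hypothetical names, not the SModelS implementation, which also handles Unum objects):

```python
class AnyList(list):
    """Sketch of an inclusive list: compares equal to any other list."""
    def __eq__(self, other):
        return isinstance(other, list)
    def __ne__(self, other):
        return not self.__eq__(other)

class AnyValue(int):
    """Sketch of an inclusive number: compares equal to any int or float."""
    def __eq__(self, other):
        return isinstance(other, (int, float))
    def __ne__(self, other):
        return not self.__eq__(other)
    __hash__ = int.__hash__  # keep int's hashability despite custom __eq__
```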
## tools.interactivePlots module¶

class tools.interactivePlots.Plotter(smodelsFolder, slhaFolder, parameterFile, modelFile=None)[source]

Bases: object

A class to store the required data and produce the interactive plots

Initializes the class.

Parameters: smodelsFolder – path to the folder or tarball containing the smodels (python) output files slhaFolder – path to the folder or tarball containing the SLHA input files parameterFile – path to the file containing the plotting definitions modelFile – path to the model file, e.g smodels/share/models/mssm.py
display()[source]

display the pages, works in jupyter notebooks only

editSlhaInformation()[source]

Edits slha_hover_information, ctau_hover_information, BR_hover_information, variable_x, variable_y if they are defined as a list. The function transforms them into a dict whose keys are the object names.

fillWith(smodelsOutput, slhaData)[source]

Fill the dictionary (data_dict) with the desired data from the smodels output dictionary (smodelsDict) and the pyslha.Doc object slhaData

getParticleName(pdg)[source]

looks for the particle label in the model.py file

initializeDataDict()[source]

Initializes an empty dictionary with the plotting options.

loadData(npoints=-1)[source]

Reads the data from the smodels and SLHA folders. If npoints > 0, it will limit the number of points in the plot to npoints.

Parameters: npoints – Number of points to be plotted (int). If < 0, all points will be used.
loadModelFile()[source]

Reads the model file.

loadParameters()[source]

Reads the parameters from the plotting parameter file.

plot(outFolder, indexfile='plots.html')[source]

Uses the data in self.data_dict to produce the plots.

Parameters: outFolder – Path to the output folder. indexfile – name of entry webpage
rmFiles(flist)[source]

remove files in flist

tools.interactivePlots.main(args, indexfile='index.html')[source]

Create the interactive plots using the input from argparse

Parameters: args – argparser.Namespace object containing the options for the plotter

Main interface for the interactive-plots.

Parameters:

• smodelsFolder – Path to the folder or tarball containing the SModelS python output.
• slhaFolder – Path to the folder or tarball containing the SLHA files corresponding to the SModelS output.
• parameters – Path to the parameter file setting the options for the interactive plots.
• npoints – Number of points used to produce the plot. If -1, all points will be used.
• verbosity – Verbosity of the output (debug, info, warning, error).
• indexfile – name of the starting web page (index.html).
Returns: True if the plot creation was successful

## tools.interactivePlotsHelpers module¶

class tools.interactivePlotsHelpers.Filler(plotter, smodelsOutput, slhaData)[source]

Bases: object

A class with the functions required to fill the data dictionary to produce the plots

getBR()[source]

Gets the requested branching ratios from the slha file, that will go into the hover.

getCtau()[source]

Computes the requested ctaus, that will go into the hover.

getExpres()[source]

Extracts the Expres info from the .py output. If requested, the data will be appended to each corresponding list.

getMaxMissingTopology()[source]

Extracts the missing topology with the largest cross section

getMaxMissingTopologyXsection()[source]

Extracts the cross section of the missing topology with the largest cross section

getOutsideGrid()[source]

Extracts the outside grid info from the .py output. If requested, the data will be appended to each corresponding list.

getParticleName(pdg)[source]

looks for the particle label in the model.py file

getSlhaData(variable_x, variable_y)[source]

fills data dict with slha data

getSlhaHoverInfo()[source]

Gets the requested slha info from each slha file, to fill the hover.

getSmodelSData()[source]

fills data dict with smodels data

getTotalMissingDisplaced()[source]

Extracts the Total cross section from missing displaced topologies

getTotalMissingPrompt()[source]

Extracts the Total cross section from missing prompt topologies

getTotalMissingXsec()[source]

Extracts the total cross section from missing topologies.

getVariable(variable)[source]

Gets the variable from the slha file.

openSMParticles()[source]

Loads SMparticles.py to parse over SM pdg-labels

truncate(number)[source]

truncate float to 3 decimal places

class tools.interactivePlotsHelpers.PlotlyBackend(master, path_to_plots)[source]

Bases: object

DataFrameExcludedNonexcluded()[source]

Generate sub data frames for excluded and non-excluded points

GetXyAxis()[source]

Retrieves the names of the x and y axis variables.

SeparateContDiscPlots()[source]

Generate sub lists of plots with discrete and continuous z axis variables.

createIndexHtml()[source]

Fills the index.html file with links to the interactive plots.

fillHover()[source]

Generates the text of the hover, according to the user's requests.

makeContinuousPlots(data_frame, data_selection)[source]

Generate plots with continuous z axis variables, using all data points

makeDataFrame()[source]

Transform the main dictionary into a data frame.

makeDiscretePlots(data_frame, data_selection)[source]

Generate plots with discrete z axis variables, using all data points

makePlots(indexfile)[source]

Uses the data in self.data_dict to produce the plots.

Parameters: indexfile – name of the entry webpage
plotDescription()[source]

Generate a description for each plot.

refiningVariableNames()[source]

Redefines the output variable names to HTML format.

tools.interactivePlotsHelpers.getEntry(inputDict, *keys)[source]

Get entry key in dictionary inputDict. If a list of keys is provided, it will assume nested dictionaries (e.g. key1,key2 will return inputDict[key1][key2]).

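The nested-key lookup described above can be sketched as follows (an illustration with a hypothetical name; the actual helper's handling of missing keys may differ):

```python
def getNested(inputDict, *keys):
    """Walk nested dicts: getNested(d, 'k1', 'k2') -> d['k1']['k2'].
    Returns None if any key along the path is missing (assumed behavior)."""
    entry = inputDict
    for key in keys:
        if not isinstance(entry, dict) or key not in entry:
            return None
        entry = entry[key]
    return entry
```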
tools.interactivePlotsHelpers.getSlhaData(slhaFile)[source]

Uses pyslha to read the SLHA file. Returns a pyslha.Doc object, if successful.

tools.interactivePlotsHelpers.getSlhaFile(smodelsOutput)[source]

Returns the file name of the SLHA file corresponding to the output in smodelsDict

tools.interactivePlotsHelpers.importPythonOutput(smodelsFile)[source]

Imports the smodels output from each .py file.

tools.interactivePlotsHelpers.outputStatus(smodelsDict)[source]

Check the smodels output status in the file; if it is -1, append ‘none’ to each list in the dictionary.

## tools.ioObjects module¶

class tools.ioObjects.FileStatus[source]

Bases: object

Object to run several checks on the input file. It holds an LheStatus (SlhaStatus) object if inputType = lhe (slha)

checkFile(inputFile)[source]

Run checks on the input file.

Parameters: inputFile – path to input file
class tools.ioObjects.LheStatus(filename)[source]

Bases: object

Object to check if input lhe file contains errors.

Variables: filename – path to input LHE file
evaluateStatus()[source]

run status check

class tools.ioObjects.OutputStatus(status, inputFile, parameters, databaseVersion)[source]

Bases: object

Object that holds all status information and has a predefined printout.

Initialize output. If one of the checks failed, exit.

Parameters: status – status of input file inputFile – input file name parameters – input parameters databaseVersion – database version (string)
addWarning(warning)[source]

Append warning to warnings.

Parameters: warning – warning to be appended
updateSLHAStatus(status)[source]

Update SLHA status.

Parameters: status – new SLHA status flag
updateStatus(status)[source]

Update status.

Parameters: status – new status flag
class tools.ioObjects.SlhaStatus(filename, findMissingDecayBlocks=True, findIllegalDecays=False, checkXsec=True)[source]

Bases: object

An instance of this class represents the status of an SLHA file. The output status is:

• 0: the file is not checked
• 1: the check is ok
• -1: a physical problem, e.g. a charged LSP
• -2: formal problems, e.g. no cross sections

Parameters: filename – path to input SLHA file findMissingDecayBlocks – if True add a warning for missing decay blocks findIllegalDecays – if True check if all decays are kinematically allowed checkXsec – if True check if SLHA file contains cross sections findLonglived – if True find stable charged particles and displaced vertices
emptyDecay(pid)[source]

Check if any decay is missing for the particle with pid

Parameters: pid – PID number of particle to be checked
Returns: True if the decay block is missing or if it is empty, None otherwise
evaluateStatus()[source]

Get status summary from all performed checks.

Returns: a status flag and a message for explanation
findIllegalDecay(findIllegal)[source]

Find decays for which the sum of daughter masses exceeds the mother mass

Parameters: findIllegal – True if check should be run
Returns: status flag and message
findMissingDecayBlocks(findMissingBlocks)[source]

For all non-SM pdgs listed in the mass block, check if a decay block is written

Returns: status flag and message
hasXsec(checkXsec)[source]

Check if XSECTION table is present in the slha file.

Parameters: checkXsec – set True to run the check
Returns: status flag, message
read()[source]

Get pyslha output object.

## tools.lheChecks module¶

tools.lheChecks.main(args)[source]

## tools.modelTester module¶

tools.modelTester.checkForSemicolon(strng, section, var)[source]
tools.modelTester.getAllInputFiles(inFile)[source]

Given inFile, return list of all input files

Parameters: inFile – Path to input file or directory containing input files
Returns: List of all input files, and the directory name
tools.modelTester.getCombiner(inputFile, parameterFile)[source]

Facility for running SModelS, computing the theory predictions and returning the combination of analyses (defined in the parameterFile). Useful for plotting likelihoods. Extracts and returns the TheoryPredictionsCombiner object from the master printer, if the object is found; returns None otherwise.

Parameters: inputFile – path to the input SLHA file parameterFile – path to parameters.ini file
Returns: TheoryPredictionsCombiner object generated by running SModelS
tools.modelTester.getParameters(parameterFile)[source]

Read parameter file, exit in case of errors

Parameters: parameterFile – Path to parameter File
Returns: ConfigParser read from parameterFile
tools.modelTester.loadDatabase(parser, db)[source]

Parameters: parser – ConfigParser with path to database db – binary database object. If None, then database is loaded, according to databasePath. If True, then database is loaded, and text mode is forced.
Returns: database object, database version
tools.modelTester.loadDatabaseResults(parser, database)[source]

Load database entries specified in parser

Parameters: parser – ConfigParser, containing analysis and txnames selection database – Database object
Returns: List of experimental results
tools.modelTester.runSetOfFiles(inputFiles, outputDir, parser, databaseVersion, listOfExpRes, timeout, development, parameterFile)[source]

Loop over all input files in inputFiles with testPoint

Parameters:

• inputFiles – list of input files to be tested
• outputDir – path to directory where output is to be stored
• parser – ConfigParser storing information from parameter.ini file
• databaseVersion – Database version (printed to output file)
• listOfExpRes – list of ExpResult objects to be considered
• timeout – set a timeout for one model point (0 means no timeout)
• development – turn on development mode (e.g. no crash report)
• parameterFile – parameter file, for crash reports
Returns: printers output
tools.modelTester.runSingleFile(inputFile, outputDir, parser, databaseVersion, listOfExpRes, timeout, development, parameterFile)[source]

Call testPoint on inputFile, write crash report in case of problems

Parameters:

• inputFile – path to input file
• outputDir – path to directory where output is to be stored
• parser – ConfigParser storing information from parameter.ini file
• databaseVersion – Database version (printed to output file)
• listOfExpRes – list of ExpResult objects to be considered
• crashReport – if True, write crash report in case of problems
• timeout – set a timeout for one model point (0 means no timeout)
Returns: output of printers
tools.modelTester.setExperimentalFlag(parser)[source]

set the experimental flag, if options:experimental = True

tools.modelTester.testPoint(inputFile, outputDir, parser, databaseVersion, listOfExpRes)[source]

Test model point defined in input file (running decomposition, check results, test coverage)

Parameters:

• inputFile – path to input file
• outputDir – path to directory where output is to be stored
• parser – ConfigParser storing information from parameters file
• databaseVersion – Database version (printed to output file)
• listOfExpRes – list of ExpResult objects to be considered
Returns: dictionary with input filename as key and the MasterPrinter object as value
tools.modelTester.testPoints(fileList, inDir, outputDir, parser, databaseVersion, listOfExpRes, timeout, development, parameterFile)[source]

Loop over all input files in fileList with testPoint, using ncpus CPUs defined in parser

Parameters:

• fileList – list of input files to be tested
• inDir – path to directory where input files are stored
• outputDir – path to directory where output is stored
• parser – ConfigParser storing information from parameter.ini file
• databaseVersion – Database version (printed to output files)
• listOfExpRes – list of ExpResult objects to be considered
• timeout – set a timeout for one model point (0 means no timeout)
• development – turn on development mode (e.g. no crash report)
• parameterFile – parameter file, for crash reports
Returns: printer(s) output, if not run in parallel mode

## tools.nllFastWrapper module¶

class tools.nllFastWrapper.NllFastWrapper(sqrts, nllfastVersion, testParams, testCondition)[source]

Bases: smodels.tools.wrapperBase.WrapperBase

An instance of this class represents the installation of nllfast.

Parameters: sqrts – sqrt of s, in TeV, as an integer nllfastVersion – version of the nllfast tool testParams – what are the test params we need to run things with? testCondition – the line that should be the last output line when running the executable
Returns: the path of the source code, for compilation
getKfactorsFor(pIDs, slhafile, pdf='cteq')[source]

Read the NLLfast grid and returns a pair of k-factors (NLO and NLL) for the PIDs pair.

Returns: pair of k-factors (NLO, NLL), or None if NLLfast does not contain the process. Uses the slhafile to obtain the SUSY spectrum.
class tools.nllFastWrapper.NllFastWrapper13[source]

An instance of this class represents the installation of nllfast 13.

class tools.nllFastWrapper.NllFastWrapper7[source]

An instance of this class represents the installation of nllfast 7.

class tools.nllFastWrapper.NllFastWrapper8[source]

An instance of this class represents the installation of nllfast 8.

## tools.printer module¶

class tools.printer.BasicPrinter(output, filename)[source]

Bases: object

Super class to handle the basic printing methods

Variables: typeofexpectedvalues (str) – what type of expected values to print, apriori or posteriori
addObj(obj)[source]

Parameters: obj – An object to be printed. Must match one of the types defined in formatObj
Returns: True if the object has been added to the output. If the object does not belong to the pre-defined printing list toPrint, returns False.
filename
flush()[source]

Format the objects added to the output, print them to the screen or file and remove them from the printer.

getTypeOfExpected()[source]

tiny convenience function for what type of expected values to print, apriori (True) or posteriori

mkdir()[source]

create directory to file, if necessary

openOutFile(filename, mode)[source]

creates and opens a data sink, creates path if needed

setOptions(options)[source]

Store the printer specific options to control the output of each printer. Each option is stored as a printer attribute.

Parameters: options – a list of (option,value) for the printer.
class tools.printer.MPrinter[source]

Bases: object

Master Printer class to handle the Printers (one printer/output type)

addObj(obj)[source]

Adds the object to all its Printers:

Parameters: obj – An object which can be handled by the Printers.
flush()[source]

Ask all printers to write the output and clear their cache. If the printers return anything other than None, we pass it on.

setOutPutFiles(filename, silent=False)[source]

Set the basename for the output files. Each printer will use this file name appended with the respective extension (i.e. .py for a python printer, .smodels for a summary printer, …)

Parameters: filename – Input file name silent – don't comment on removing old files
setPrinterOptions(parser)[source]

Define the printer types and their options.

Parameters: parser – ConfigParser storing information from the parameters file
class tools.printer.PyPrinter(output='stdout', filename=None)[source]

Printer class to handle the printing of one single pythonic output

flush()[source]

Write the python dictionaries generated by the object formatting to the defined output

setOutPutFile(filename, overwrite=True, silent=False)[source]

Set the basename for the text printer. The output filename will be filename.py. :param filename: Base filename :param overwrite: If True and the file already exists, it will be removed. :param silent: don't comment on removing old files

class tools.printer.SLHAPrinter(output='file', filename=None)[source]

Printer class to handle the printing of slha format summary output. It uses the facilities of the TxTPrinter.

setOutPutFile(filename, overwrite=True, silent=False)[source]

Set the basename for the text printer. The output filename will be filename.smodels. :param filename: Base filename :param overwrite: If True and the file already exists, it will be removed. :param silent: don't comment on removing old files

class tools.printer.SummaryPrinter(output='stdout', filename=None)[source]

Printer class to handle the printing of one single summary output. It uses the facilities of the TxTPrinter.

setOutPutFile(filename, overwrite=True, silent=False)[source]

Set the basename for the text printer. The output filename will be filename.smodels. :param filename: Base filename :param overwrite: If True and the file already exists, it will be removed. :param silent: don't comment on removing old files

class tools.printer.TxTPrinter(output='stdout', filename=None)[source]

Printer class to handle the printing of one single text output

setOutPutFile(filename, overwrite=True, silent=False)[source]

Set the basename for the text printer. The output filename will be filename.log.

Parameters: filename – Base filename overwrite – If True and the file already exists, it will be removed. silent – don't comment on removing old files
class tools.printer.XmlPrinter(output='stdout', filename=None)[source]

Printer class to handle the printing of one single XML output

convertToElement(pyObj, parent, tag='')[source]

Convert a python object (list,dict,string,…) to a nested XML element tree. :param pyObj: python object (list,dict,string…) :param parent: XML Element parent :param tag: tag for the daughter element

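Converting a nested Python object into an XML element tree, as described above, can be sketched with `xml.etree.ElementTree` (a simplified stand-in for the printer's method; the tag naming is an assumption):

```python
import xml.etree.ElementTree as ET

def toElement(pyObj, parent, tag="item"):
    """Recursively attach pyObj to parent as nested XML elements."""
    node = ET.SubElement(parent, tag)
    if isinstance(pyObj, dict):
        for key, value in pyObj.items():
            toElement(value, node, tag=str(key))
    elif isinstance(pyObj, (list, tuple)):
        for value in pyObj:
            toElement(value, node, tag="item")
    else:
        node.text = str(pyObj)
```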
flush()[source]

Get the python dictionaries generated by the object formatting, convert them to XML, and write to the defined output

setOutPutFile(filename, overwrite=True, silent=False)[source]

Set the basename for the text printer. The output filename will be filename.xml. :param filename: Base filename :param overwrite: If True and the file already exists, it will be removed. :param silent: don't comment on removing old files

tools.printer.getInfoFromPython(output)[source]

Retrieves information from the python output

Parameters: output – output (dictionary)
Returns: list of r-values, r-expected and analysis IDs. None if no results are found. If there are results for combined analyses, returns the largest r-value and the corresponding r-expected from the combination.
tools.printer.getInfoFromSLHA(output)[source]

Retrieves information from the SLHA output

Parameters: output – output (string)
Returns: list of r-values, r-expected and analysis IDs. None if no results are found. If there are results for combined analyses, returns the largest r-value and the corresponding r-expected from the combination.
tools.printer.getInfoFromSummary(output)[source]

Retrieves information from the summary output

Parameters: output – output (string)
Returns: list of r-values, r-expected and analysis IDs; None if no results are found. If there are results for combined analyses, returns the largest r-value and the corresponding r-expected from the combination.
tools.printer.getSummaryFrom(output, ptype)[source]

Retrieves information about the output according to the printer type (slha,python or summary)

Parameters: output – output (dictionary for ptype=python or string for ptype=slha/summary) ptype – Printer type (slha, python or summary)
Returns: Dictionary with the output information
tools.printer.printScanSummary(outputDict, outputFile)[source]

Method for creating a simple summary of the results when running SModelS over multiple files.

Parameters: outputDict – A dictionary with filenames as keys and the master printer flush dictionary as values. outputFile – Path to the summary file to be written.

## tools.proxyDBCreator module¶

class tools.proxyDBCreator.ProxyDBCreater(inputfile, rundir, verbose='info')[source]

Bases: object

create(servername, serverport)[source]
pprint(*args)[source]
run(really)[source]

Run the server. :param really: if False, then only write out the command

store(outputfile)[source]

Store the outputfile and set a symlink from self.outputfile to default.pcl.

tools.proxyDBCreator.main(args)[source]

## tools.pyhfInterface module¶

class tools.pyhfInterface.PyhfData(nsignals, inputJsons, jsonFiles=None)[source]

Bases: object

Holds data for use in pyhf :ivar nsignals: signal predictions list divided into sublists, one for each json file :ivar inputJsons: list of json instances :ivar jsonFiles: optional list of json files :ivar nWS: number of workspaces = number of json files

checkConsistency()[source]

Check for various inconsistencies of the PyhfData attributes

Variables: zeroSignalsFlag – boolean identifying if all SRs of a single json are empty
getWSInfo()[source]

Get information from the json files

Variables: channelsInfo – list of dictionaries (one dictionary for each json file) containing useful information about the json files - :key signalRegions: list of dictionaries with ‘json path’ and ‘size’ (number of bins) of the ‘signal regions’ channels in the json files - :key otherRegions: list of strings indicating the path to the control and validation region channels
totalYield()[source]

the total yield in all signal regions

class tools.pyhfInterface.PyhfUpperLimitComputer(data, cl=0.95, includeCRs=False, lumi=None)[source]

Bases: object

Class that computes the upper limit using the json files and signal information in the data instance of PyhfData

Parameters: data – instance of PyhfData holding the signals information cl – confidence level at which the upper limit is desired to be computed data – created from :param data: nsignals – signal predictions list divided into sublists, one for each json file inputJsons – list of input json files as python json instances channelsInfo – list of channels information for the json files zeroSignalsFlag – list boolean flags in case all signals are zero for a specific json nWS – number of workspaces = number of json files patches – list of patches to be applied to the inputJsons as python dictionary instances workspaces – list of workspaces resulting from the patched inputJsons

:ivar workspaces_expected: list of patched workspaces with observation yields replaced by the expected ones :ivar cl: created from :param cl: :ivar scale: scale that is applied to the signal predictions, dynamically changes throughout the upper limit calculation :ivar alreadyBeenThere: boolean flag that identifies when nsignals accidentally passes twice at two identical values

backup()[source]
checkPyhfVersion()[source]

check the pyhf version, currently we need 0.6.1+

chi2(workspace_index=None)[source]

Returns the chi square

exponentiateNLL(twice_nll, doIt)[source]

if doIt, then compute likelihood from nll, else return nll

getBestCombinationIndex()[source]

find the index of the best expected combination

getSigmaMu(workspace)[source]

given a workspace, compute a rough estimate of sigma_mu, the uncertainty of mu_hat

getUpperLimitOnMu(expected=False, workspace_index=None)[source]
Compute the upper limit on the signal strength modifier with:
• by default, the combination of the workspaces contained into self.workspaces
• if workspace_index is specified, self.workspace[workspace_index] (useful for computation of the best upper limit)
Parameters: expected – if set to True: uses expected SM backgrounds as signals else: uses self.nsignals workspace_index – if different from None: index of the workspace to use for upper limit else: choose best combo
Returns: the upper limit at self.cl level (0.95 by default)
getUpperLimitOnSigmaTimesEff(expected=False, workspace_index=None)[source]
Compute the upper limit on the fiducial cross section sigma times efficiency:
• by default, the combination of the workspaces contained into self.workspaces
• if workspace_index is specified, self.workspace[workspace_index] (useful for computation of the best upper limit)
Parameters: expected – if set to True: uses expected SM backgrounds as signals else: uses self.nsignals workspace_index – if different from None: index of the workspace to use for upper limit else: choose best combo
Returns: the upper limit on sigma times eff at self.cl level (0.95 by default)
get_position(name, model)[source]
Parameters: name – name of the parameter one wants to increase model – the pyhf model
Returns: the position of the parameter that has to be modified in order to turn positive the negative total yield
likelihood(mu=1.0, workspace_index=None, nll=False, expected=False)[source]

Returns the value of the likelihood. Inspired by the pyhf.infer.mle module, but for non-log likelihood.

Parameters: workspace_index – supply index of workspace to use. If None, choose index of best combo nll – if true, return nll, not llhd expected – if False, compute observed values, if True, compute a priori expected, if “posteriori” compute a posteriori expected
lmax(workspace_index=None, nll=False, expected=False, allowNegativeSignals=False)[source]

Returns the negative log max likelihood.

Parameters: nll – if true, return nll, not llhd workspace_index – supply index of workspace to use. If None, choose index of best combo expected – if False, compute observed values, if True, compute a priori expected, if “posteriori” compute a posteriori expected allowNegativeSignals – if False, then negative nsigs are replaced with 0.
patchMaker()[source]

Method that creates the list of patches to be applied to the self.inputJsons workspaces, one for each region, given self.nsignals, the information available in self.channelsInfo and the content of self.inputJsons. NB: It seems we need to include the change of the “modifiers” in the patches as well

Returns: the list of patches, one for each workspace
rescale(factor)[source]

Rescales the signal predictions (self.nsignals) and processes again the patches and workspaces

Returns: updated list of patches and workspaces (self.patches, self.workspaces and self.workspaces_expected)
rescaleBgYields(init_pars, workspace, model)[source]
Parameters: init_pars – list of initial parameters values one wants to increase in order to turn positive the negative total yields workspace – the pyhf workspace model – the pyhf model
Returns: the list of initial parameters values that gives positive total yields
restore()[source]
updateWorkspace(workspace_index=None, expected=False)[source]

Small method used to return the appropriate workspace

Parameters: workspace_index – the index of the workspace to retrieve from the corresponding list expected – if False, returns the unmodified (but patched) workspace. Used for computing observed or a posteriori expected limits. if True, returns the modified (and patched) workspace, where obs = sum(bkg). Used for computing the a priori expected limit.
welcome()[source]

greet the world

wsMaker(apriori=False)[source]

Apply each region patch (self.patches) to its associated json (self.inputJsons) to obtain the complete workspaces :param apriori: - If set to True: Replace the observation data entries of each workspace by the corresponding sum of the expected yields

• Else: The observed yields put in the workspace are the ones written in the corresponding json dictionary
Returns: the list of patched workspaces
tools.pyhfInterface.getLogger()[source]

## tools.pythia6Wrapper module¶

class tools.pythia6Wrapper.Pythia6Wrapper(configFile='<install>/smodels/etc/pythia.card', executablePath='<install>/smodels/lib/pythia6/pythia_lhe', srcPath='<install>/smodels/lib/pythia6/')[source]

Bases: smodels.tools.wrapperBase.WrapperBase

An instance of this class represents the installation of pythia6. nevents keeps track of how many events we run. For each event we only allow a certain computation time: if self.secondsPerEvent * self.nevents > CPU time, we terminate Pythia.

Parameters: configFile – Location of the config file, full path; copy this file and provide tools to change its content and to provide a template executablePath – Location of executable, full path (pythia_lhe) srcPath – Location of source code
checkFileExists(inputFile)[source]

Check if file exists, raise an IOError if it does not.

Returns: absolute file name if file exists.
replaceInCfgFile(replacements={'NEVENTS': 10000, 'SQRTS': 8000})[source]

Replace strings in the config file by other strings, similar to setParameter.

This is introduced as a simple mechanism to make changes to the parameter file.

Parameters: replacements – dictionary of strings and values; the strings will be replaced with the values; the dictionary keys must be strings present in the config file
run(slhafile, lhefile=None, unlink=True)[source]

Execute pythia_lhe with n events, at sqrt(s)=sqrts.

Parameters: slhafile – input SLHA file lhefile – option to write LHE output to file; if None, do not write output to disk. If lhe file exists, use its events for xsecs calculation. unlink – Clean up temp directory after running pythia
Returns: List of cross sections
setParameter(param='MSTP(163)', value=6)[source]

Modifies the config file, similar to .replaceInCfgFile.

It will set param to value, overwriting possible old values.

Remove temporary files.

Parameters: unlinkdir – remove temp directory completely

## tools.pythia8Wrapper module¶

class tools.pythia8Wrapper.Pythia8Wrapper(configFile='<install>/smodels/etc/pythia8.cfg', executablePath='<install>/smodels/lib/pythia8/pythia8.exe', srcPath='<install>/smodels/lib/pythia8/')[source]

Bases: smodels.tools.wrapperBase.WrapperBase

An instance of this class represents the installation of pythia8.

Parameters: configFile – Location of the config file, full path; copy this file and provide tools to change its content and to provide a template executablePath – Location of executable, full path (pythia8.exe) srcPath – Location of source code
checkFileExists(inputFile)[source]

Check if file exists, raise an IOError if it does not.

Returns: absolute file name if file exists.
checkInstallInSubclass()[source]
chmod()[source]

chmod 755 on pythia executable, if it exists. Do nothing if it doesn't exist.

getPythiaVersion()[source]

obtain the pythia version we wish to use, stored in file ‘pythiaversion’

run(slhaFile, lhefile=None, unlink=True)[source]

Run pythia8.

Parameters: slhaFile – SLHA file lhefile – option to write LHE output to file; if None, do not write output to disk. If lhe file exists, use its events for xsecs calculation. unlink – clean up temporary files after run?
Returns: List of cross sections

Remove temporary files.

Parameters: unlinkdir – remove temp directory completely

## tools.reweighting module¶

tools.reweighting.calculateProbabilities(width, Leff_inner, Leff_outer)[source]

The fraction of prompt and displaced decays are defined as:

F_long = exp(-totalwidth*l_outer/gb_outer) F_prompt = 1 - exp(-totalwidth*l_inner/gb_inner) F_displaced = 1 - F_prompt - F_long

Parameters: Leff_inner – the effective inner radius of the detector, given in meters Leff_outer – the effective outer radius of the detector, given in meters width – particle width for which probabilities should be calculated (in GeV)
Returns: Dictionary with the probabilities for the particle not to decay (in the detector), to decay promptly, or to decay displaced.
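The fractions above can be sketched directly in Python. The constant ħc converts a width in GeV into an inverse decay length; the effective boost factors gb_inner and gb_outer are illustrative assumptions here, not the values hard-coded in SModelS:

```python
import math

# hbar*c in GeV*m, used to convert a width in GeV into an inverse decay length
HBARC = 1.973269804e-16

def calculate_probabilities(width, l_inner, l_outer, gb_inner=1.3, gb_outer=1.43):
    """Prompt / displaced / long-lived decay fractions as defined above.
    gb_inner and gb_outer (effective gamma*beta boost factors) are
    illustrative assumptions, not the values used internally by SModelS."""
    f_long = math.exp(-width * l_outer / (gb_outer * HBARC))
    f_prompt = 1.0 - math.exp(-width * l_inner / (gb_inner * HBARC))
    f_displaced = 1.0 - f_prompt - f_long
    return {"F_long": f_long, "F_prompt": f_prompt, "F_displaced": f_displaced}

# a width of 1e-16 GeV with Leff_inner = 1 mm and Leff_outer = 10 m
probs = calculate_probabilities(width=1e-16, l_inner=0.001, l_outer=10.0)
```

By construction the three fractions sum to one, since F_displaced is defined as the remainder.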
tools.reweighting.defaultEffReweight(element, Leff_inner=None, Leff_outer=None, minWeight=1e-10)[source]

Computes the lifetime reweighting factor for the element efficiency based on the lifetimes of all intermediate particles and the last stable odd-particle appearing in the element. The fraction corresponds to the fraction of decays corresponding to prompt decays to all intermediate BSM particles and to a long-lived decay (outside the detector) to the final BSM state.

Parameters: element – Element object or nested list of widths minWeight – Lower cut for the reweighting factor. Any value below this will be taken to be zero. Leff_inner – the effective inner radius of the detector, given in meters. If None, use default value. Leff_outer – the effective outer radius of the detector, given in meters. If None, use default value.
Returns: Reweight factor (float)
tools.reweighting.defaultULReweight(element, Leff_inner=None, Leff_outer=None)[source]

Computes the lifetime reweighting factor for the element upper limit based on the lifetimes of all intermediate particles and the last stable odd-particle appearing in the element. The fraction corresponds to the fraction of decays corresponding to prompt decays to all intermediate BSM particles and to a long-lived decay (outside the detector) to the final BSM state.

Parameters: element – Element object Leff_inner – the effective inner radius of the detector, given in meters. If None, use default value. Leff_outer – the effective outer radius of the detector, given in meters. If None, use default value.
Returns: Reweight factor (float)
tools.reweighting.reweightFactorFor(element, resType='prompt', Leff_inner=None, Leff_outer=None)[source]

Compute the reweighting factor for the element according to the experimental result type. Currently only two result types are supported: ‘prompt’ and ‘displaced’. If resultType = ‘prompt’, returns the reweighting factor for all decays in the element to be prompt and the last odd particle to be stable. If resultType = ‘displaced’, returns the reweighting factor for ANY decay in the element to be displaced, no long-lived decays, and the last odd particle to be stable. Note that the fraction of “long-lived (meta-stable) decays” is usually included in topologies where the meta-stable particle appears in the final state. Hence it should not be included in the prompt or displaced fractions.

Parameters: element – Element object resType – Type of result to compute the reweight factor for (either ‘prompt’ or ‘displaced’) Leff_inner – the effective inner radius of the detector, given in meters. If None, use default value. Leff_outer – the effective outer radius of the detector, given in meters. If None, use default value.
Returns: probabilities (depending on types of decay within branch), branches (with different labels depending on type of decay)

## tools.runSModelS module¶

tools.runSModelS.main()[source]
tools.runSModelS.run(inFile, parameterFile, outputDir, db, timeout, development)[source]

Provides a command line interface to basic SModelS functionalities.

Parameters: inFile – input file name (either a SLHA or LHE file) or directory name (path to directory containing input files) parameterFile – File containing the input parameters (default = smodels/etc/parameters_default.ini) outputDir – Output directory to write a summary of results to db – supply a smodels.experiment.databaseObj.Database object, so the database doesn’t have to be loaded anymore. Will render a few parameters in the parameter file irrelevant. If None, load the database as described in parameterFile, If True, force loading the text database. timeout – set a timeout for one model point (0 means no timeout) development – turn on development mode (e.g. no crash report)

## tools.runtime module¶

tools.runtime.experimentalFeatures()[source]

a simple boolean flag to turn experimental features on/off, can be turned on and off via options:experimental in parameters.ini.

tools.runtime.filetype(filename)[source]

obtain information about the filetype of an input file, currently only used to discriminate between slha and lhe files.

Returns: filetype as string (“slha” or “lhe”), None if file does not exist or filetype is unknown.
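A minimal content-based sketch of the slha/lhe discrimination described above; the actual SModelS implementation may use different heuristics:

```python
import os
import tempfile

def filetype(filename):
    """Sketch: discriminate SLHA from LHE by file content.
    LHE files carry an XML-like <LesHouchesEvents> root tag,
    while SLHA files are organized in BLOCK sections."""
    if not os.path.exists(filename):
        return None
    with open(filename, errors="ignore") as f:
        head = f.read(10000)
    if "<LesHouchesEvents" in head:
        return "lhe"
    if "block" in head.lower():
        return "slha"
    return None

# demo on a throwaway SLHA-like file
with tempfile.NamedTemporaryFile("w", suffix=".slha", delete=False) as tmp:
    tmp.write("BLOCK MASS\n 1000022 1.0E+02\n")
ftype = filetype(tmp.name)
os.remove(tmp.name)
```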
tools.runtime.nCPUs()[source]

obtain the number of CPU cores on the machine, for several platforms and python versions.

## tools.simplifiedLikelihoods module¶

class tools.simplifiedLikelihoods.Data(observed, backgrounds, covariance, third_moment=None, nsignal=None, name='model', deltas_rel=0.2, lumi=None)[source]

Bases: object

A very simple container to collect all the data needed to fully define a specific statistical model

Parameters: observed – number of observed events per dataset backgrounds – expected bg per dataset covariance – uncertainty in background, as a covariance matrix nsignal – number of signal events in each dataset name – give the model a name, just for convenience deltas_rel – the assumed relative error on the signal hypotheses. The default is 20%. lumi – luminosity of dataset in 1/fb, or None
convert(obj)[source]

Convert object to numpy arrays. If object is a float or int, it is converted to a one element array.

correlations()[source]

Correlation matrix, computed from covariance matrix. Convenience function.
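The conversion from covariance to correlation matrix can be sketched as follows (a sketch of the usual formula, not the SModelS source):

```python
import numpy as np

def correlations(covariance):
    """Sketch: rho_ij = cov_ij / (sigma_i * sigma_j),
    with sigma_i = sqrt(cov_ii)."""
    sigma = np.sqrt(np.diag(covariance))
    return covariance / np.outer(sigma, sigma)

cov = np.array([[4.0, 1.0],
                [1.0, 9.0]])
rho = correlations(cov)   # unit diagonal, off-diagonal 1/(2*3)
```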

diagCov()[source]

Diagonal elements of covariance matrix. Convenience function.

isLinear()[source]

Statistical model is linear, i.e. no quadratic term in poissonians

isScalar(obj)[source]

Determine if obj is a scalar (float or int)

nsignals(mu)[source]

Returns the number of expected signal events, for all datasets, given total signal strength mu.

Parameters: mu – Total number of signal events summed over all datasets.
rel_signals(mu)[source]

Returns the number of expected relative signal events, for all datasets, given total signal strength mu. For mu=1, the sum of the numbers = 1.

Parameters: mu – Total number of signal events summed over all datasets.
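The normalization stated above (relative signals summing to 1 for mu=1) can be sketched as follows, where nsignal is a hypothetical list of per-dataset signal yields:

```python
def rel_signals(nsignal, mu):
    """Sketch: distribute the total signal strength mu over the
    datasets in proportion to the per-dataset signal yields."""
    total = sum(nsignal)
    return [mu * s / total for s in nsignal]

fractions = rel_signals([2.0, 6.0], mu=1.0)   # sums to 1 for mu = 1
```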
sandwich()[source]

Sandwich product

totalCovariance(nsig)[source]

get the total covariance matrix, taking into account also signal uncertainty for the signal hypothesis <nsig>. If nsig is None, the predefined signal hypothesis is taken.

var_s(nsig=None)[source]

The signal variances. Convenience function.

Parameters: nsig – If None, it will use the model expected number of signal events, otherwise will return the variances for the input value using the relative signal uncertainty defined for the model.
zeroSignal()[source]

Is the total number of signal events zero?

class tools.simplifiedLikelihoods.LikelihoodComputer(data, toys=30000)[source]

Bases: object

Parameters: data – a Data object. toys – number of toys when marginalizing
chi2(marginalize=False)[source]

Computes the chi2 for a given number of observed events nobs given the predicted background nb, error on this background deltab, expected number of signal events nsig and the relative error on signal (deltas_rel). :param marginalize: if true, marginalize, if false, profile :param nsig: number of signal events :return: chi2 (float)

d2NLLdMu2(mu, theta_hat, allowZeroHessian=True)[source]

The Hessian of the negative log likelihood as a function of mu, evaluated at mu; this is the Fisher information, which is approximately the inverse of the covariance. :param allowZeroHessian: if false and sum(observed)==0, then replace observed with expected
d2NLLdTheta2(theta)[source]

the Hessian of nll as a function of the thetas. Makes it easier to find the maximum likelihood.

dNLLdMu(mu, theta_hat=None)[source]

d (- ln L)/d mu, if L is the likelihood. The function whose root gives us muhat, i.e. the mu that maximizes the likelihood.

Parameters: mu – total number of signal events theta_hat – array with nuisance parameters, if None then compute them
dNLLdTheta(theta)[source]

the derivative of nll as a function of the thetas. Makes it easier to find the maximum likelihood.

debug_mode = False
extendedOutput(extended_output, default=None)[source]
findAvgr(theta_hat)[source]

From the difference observed - background, get initial values for the lower and upper bounds.

findMuHat(allowNegativeSignals=False, extended_output=False, nll=False, marginalize=False)[source]

Find the most likely signal strength mu via gradient descent given the relative signal strengths in each dataset (signal region).

Parameters: allowNegativeSignals – if true, then also allow for negative values extended_output – if true, return also sigma_mu, the estimate of the error of mu_hat, and lmax, the likelihood at mu_hat nll – if true, return nll instead of lmax in the extended output
Returns: mu_hat, the maximum likelihood estimate of mu; if extended output is requested, returns mu_hat, sigma_mu (the standard deviation around mu_hat) and llhd, the likelihood at mu_hat

findMuHatViaBracketing(allowNegativeSignals=False, extended_output=False, nll=False, marginalize=False)[source]

Find the most likely signal strength mu via a brent bracketing technique given the relative signal strengths in each dataset (signal region).

Parameters: allowNegativeSignals – if true, then also allow for negative values extended_output – if true, return also sigma_mu, the estimate of the error of mu_hat, and lmax, the likelihood at mu_hat nll – if true, return nll instead of lmax in the extended output
Returns: mu_hat, the maximum likelihood estimate of mu; if extended output is requested, returns a dictionary with mu_hat, sigma_mu (the standard deviation around mu_hat) and lmax, the likelihood at mu_hat

findThetaHat(mu: float)[source]

Compute nuisance parameters theta that maximize our likelihood (poisson*gauss).

getSigmaMu(mu, theta_hat)[source]

Get an estimate for the standard deviation of mu at <mu>, from the inverse hessian

getThetaHat(nobs, nb, mu, covb, max_iterations)[source]
Compute the nuisance parameter theta that maximizes our likelihood (poisson*gauss), by setting dNLL/dTheta to zero.
Parameters: mu – signal strength
Returns: theta_hat
likelihood(mu: float, marginalize: bool = False, nll: bool = False) → float[source]

compute likelihood for mu, profiling or marginalizing the nuisances :param mu: float Parameter of interest, signal strength :param marginalize: if true, marginalize, if false, profile :param nll: return nll instead of likelihood

llhdOfTheta(theta, nll=True)[source]
Likelihood for nuisance parameters theta, given signal strength self.mu. Note that by default it returns the nll.
Parameters: theta – nuisance parameters nll – if True, compute negative log likelihood
lmax(marginalize=False, nll=False, allowNegativeSignals=False)[source]

convenience function, computes likelihood for nsig = nobs-nbg, :param marginalize: if true, marginalize, if false, profile nuisances. :param nll: return nll instead of likelihood :param allowNegativeSignals: if False, then negative nsigs are replaced with 0.

marginalizedLLHD1D(mu, nll)[source]

Return the likelihood (of 1 signal region) to observe nobs events given the predicted background nb, error on this background (deltab), signal strength of mu and the relative error on the signal (deltas_rel).

Parameters: mu – predicted signal strength (float) nobs – number of observed events (float) nb – predicted background (float) deltab – uncertainty on background (float)
Returns: likelihood to observe nobs events (float)
marginalizedLikelihood(mu, nll)[source]

compute the marginalized likelihood of observing nsig signal event

profileLikelihood(mu: float, nll: bool)[source]

compute the profiled likelihood for mu. Warning: not normalized. Returns profile likelihood and error code (0=no error)

class tools.simplifiedLikelihoods.UpperLimitComputer(ntoys: float = 30000, cl: float = 0.95)[source]

Bases: object

Parameters: ntoys – number of toys when marginalizing cl – desired quantile for limits
computeCLs(model: tools.simplifiedLikelihoods.Data, marginalize: bool = False, toys: float = None, expected: Union[bool, str] = False, trylasttime: bool = False, return_type: str = '1-CLs') → float[source]

Compute the exclusion confidence level of the model (1-CLs) :param model: statistical model :param marginalize: if true, marginalize nuisances, else profile them :param toys: specify number of toys. Uses the default if None :param expected: if false, compute observed, if true, compute a priori expected, if “posteriori”, compute a posteriori expected :param trylasttime: if True, then don't try extra :param return_type: (Text) can be “CLs-alpha”, “1-CLs”, “CLs”; CLs-alpha: returns CLs - 0.05 (alpha), 1-CLs: returns 1-CLs value, CLs: returns CLs value
debug_mode = False
getCLsRootFunc()[source]

Obtain the function “CLs-alpha[0.05]” whose root defines the upper limit, plus mu_hat and sigma_mu :param model: statistical model :param marginalize: if true, marginalize nuisances, else profile them :param toys: specify number of toys. Uses the default if None :param expected: if false, compute observed, if true, compute a priori expected, if “posteriori”, compute a posteriori expected :param trylasttime: if True, then don't try extra
Returns: mu_hat, sigma_mu, CLs-alpha
getUpperLimitOnMu(model, marginalize=False, toys=None, expected=False, trylasttime=False)[source]
Upper limit on the signal strength multiplier mu, obtained from the defined Data (using the signal prediction for each signal region/dataset), by using the q_mu test statistic from the CCGV paper (arXiv:1007.1727).

Parameters: marginalize – if true, marginalize nuisances, else profile them toys – specify number of toys. Uses the default if None expected – if false, compute observed, true: compute a priori expected, “posteriori”: compute a posteriori expected trylasttime – if True, then don't try extra
Returns: upper limit on the signal strength multiplier mu
getUpperLimitOnSigmaTimesEff(model, marginalize=False, toys=None, expected=False, trylasttime=False)[source]
Upper limit on the fiducial cross section sigma times efficiency, summed over all signal regions, i.e. sum_i xsec^prod_i eff_i, obtained from the defined Data (using the signal prediction for each signal region/dataset), by using the q_mu test statistic from the CCGV paper (arXiv:1007.1727).

Parameters: marginalize – if true, marginalize nuisances, else profile them toys – specify number of toys. Uses the default if None expected – if false, compute observed, true: compute a priori expected, “posteriori”: compute a posteriori expected trylasttime – if True, then don't try extra
Returns: upper limit on fiducial cross section

## tools.slhaChecks module¶

tools.slhaChecks.main(args)[source]

## tools.smodelsLogging module¶

class tools.smodelsLogging.ColorizedStreamHandler(stream=None)[source]

Bases: logging.StreamHandler

format(record)[source]

Format the specified record.

If a formatter is set, use it. Otherwise, use the default formatter for the module.

should_color()[source]
tools.smodelsLogging.getLogLevel(asString=False)[source]

obtain the current log level. :param asString: return string, not number.

tools.smodelsLogging.getLogger()[source]
tools.smodelsLogging.setLogLevel(level)[source]

set the log level of the central logger. can either be directly an integer ( e.g. logging.DEBUG ), or “debug”, “info”, “warning”, or “error”.
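The accepted-argument behaviour can be sketched with the standard logging module (a sketch of the described interface, not the SModelS source):

```python
import logging

def set_log_level(logger, level):
    """Sketch: accept either an integer level (e.g. logging.DEBUG)
    or a name such as "debug", "info", "warning", or "error"."""
    if isinstance(level, str):
        level = getattr(logging, level.upper())
    logger.setLevel(level)

log = logging.getLogger("smodels-demo")   # hypothetical logger name
set_log_level(log, "warning")
```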

## tools.smodelsTools module¶

tools.smodelsTools.main()[source]

## tools.srCombinations module¶

tools.srCombinations.getCombinedLikelihood(dataset, nsig, marginalize=False, deltas_rel=0.2, expected=False, mu=1.0)[source]

compute only lBSM :param nsig: predicted signal (list) :param deltas_rel: relative uncertainty in signal (float). Default value is 20%. :param expected: compute expected, not observed likelihood. If “posteriori”, compute expected posteriori. :param mu: signal strength parameter mu
tools.srCombinations.getCombinedPyhfStatistics(dataset, nsig, marginalize, deltas_rel, nll=False, expected=False, allowNegativeSignals=False)[source]
tools.srCombinations.getCombinedSimplifiedLikelihood(dataset, nsig, marginalize=False, deltas_rel=0.2, expected=False, mu=1.0)[source]

Computes the combined simplified likelihood to observe nobs events, given a predicted signal “nsig”, with nsig being a vector with one entry per dataset. nsig has to obey the datasetOrder. Deltas is the error on the signal. :param nsig: predicted signal (list) :param deltas_rel: relative uncertainty in signal (float). Default value is 20%. :param expected: compute expected likelihood, not observed :param mu: signal strength parameter mu :returns: likelihood to observe nobs events (float)

tools.srCombinations.getCombinedSimplifiedStatistics(dataset, nsig, marginalize, deltas_rel, nll=False, expected=False, allowNegativeSignals=False)[source]

compute likelihood at maximum, for simplified likelihoods only

tools.srCombinations.getCombinedStatistics(dataset, nsig, marginalize=False, deltas_rel=0.2, expected=False, allowNegativeSignals=False)[source]

compute lBSM, lmax, and LSM in a single run :param nsig: predicted signal (list) :param deltas_rel: relative uncertainty in signal (float). Default value is 20%. :param expected: compute expected values, not observed

tools.srCombinations.getCombinedUpperLimitFor(dataset, nsig, expected=False, deltas_rel=0.2)[source]

Get combined upper limit. If covariances are given in globalInfo, the simplified likelihood is used; else if json files are given, the pyhf combination is performed.

Parameters: nsig – list of signal events in each signal region/dataset. The list should obey the ordering in globalInfo.datasetOrder. expected – return expected, not observed value deltas_rel – relative uncertainty in signal (float). Default value is 20%.
Returns: upper limit on sigma*eff

## tools.statistics module¶

tools.statistics.CLsfromNLL(nllA: float, nll0A: float, nll: float, nll0: float, return_type: str = 'CLs-alpha') → float[source]

compute the CLs - alpha from the NLLs TODO: following needs explanation :param nllA: :param nll0A: :param nll: :param nll0: :param return_type: (Text) can be “CLs-alpha”, “1-CLs”, “CLs”

CLs-alpha: returns CLs - 0.05 1-CLs: returns 1-CLs value CLs: returns CLs value
class tools.statistics.TruncatedGaussians(upperLimit, expectedUpperLimit, predicted_yield, corr: Optional[float] = 0.6, cl=0.95, lumi=None)[source]

Bases: object

likelihood computer based on the truncated Gaussian approximation, see arXiv:1202.3415

Parameters: upperLimit – observed upper limit, as a yield or on xsec expectedUpperLimit – expected upper limit, also as a yield or on xsec predicted_yield – the predicted signal yield, unitless or [fb] corr – correction factor: ULexp_mod = ULexp / (1. - corr*((ULobs-ULexp)/(ULobs+ULexp))) When comparing with likelihoods constructed from efficiency maps, a factor of corr = 0.6 has been found to result in the best approximations. cl – confidence level lumi – if not None, and if the limits are in [fb], then use it to translate limits on xsecs into limits on yields internally
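The correction of the expected upper limit quoted above can be written out directly (a sketch of the stated formula, with hypothetical argument names):

```python
def modified_expected_ul(ul_obs, ul_exp, corr=0.6):
    """Sketch of the stated correction:
    ULexp_mod = ULexp / (1 - corr*((ULobs - ULexp)/(ULobs + ULexp)))"""
    return ul_exp / (1.0 - corr * (ul_obs - ul_exp) / (ul_obs + ul_exp))
```

When the observed and expected limits coincide, the correction has no effect.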
chi2(likelihood=None)[source]

compute the chi2 value from a likelihood (convenience function). :param likelihood: supply likelihood; if None, use the just calculated llhd

findYhat()[source]

find the signal yields that maximize the likelihood

find_neg_yhat(xa, xb)[source]
getSigmaY(yhat=0.0)[source]

get the standard deviation sigma, given an upper limit and a central value. assumes a truncated Gaussian likelihood

likelihood(mu: Optional[float], nll: Optional[bool] = False, allowNegativeMuhat: Optional[bool] = True, corr: Optional[float] = 0.6) → float[source]

return the likelihood, as a function of mu :param mu: number of signal events; if None, then mu = muhat :param nll: if True, return the negative log likelihood :param allowNegativeMuhat: if True, allow muhat to become negative; else demand that muhat >= 0. In the presence of underfluctuations in the data, setting this to True results in more realistic approximate likelihoods.

Returns: likelihood (float), muhat, and sigma_mu
likelihoodOfNSig(nsig: Optional[float], nll: Optional[bool] = False, allowNegativeMuhat: Optional[bool] = True, corr: Optional[float] = 0.6) → float[source]

return the likelihood, as a function of nsig :param nsig: number of signal events; if None, then nsig = muhat * predicted_yield :param nll: if True, return the negative log likelihood :param allowNegativeMuhat: if True, allow muhat to become negative; else demand that muhat >= 0. In the presence of underfluctuations in the data, setting this to True results in more realistic approximate likelihoods.

Parameters: corr – correction factor: ULexp_mod = ULexp / (1. - corr*((ULobs-ULexp)/(ULobs+ULexp))) When comparing with likelihoods constructed from efficiency maps, a factor of corr = 0.6 has been found to result in the best approximations.
Returns: likelihood (float), yhat, and sigma_y
llhd(nsig, muhat, nll)[source]
root_func(x)[source]
rvsFromLimits(n=1)[source]

Generates a sample of random variates, given expected and observed likelihoods. The likelihood is modelled as a truncated Gaussian.

Parameters: n – sample size
Returns: a sample of random variates
tools.statistics.chi2FromLmax(llhd, lmax)[source]

compute the chi2 from likelihood and lmax
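The relation behind this convenience function is the standard likelihood-ratio statistic, chi2 = -2 ln(L/Lmax). A minimal sketch (name lowercased for illustration):

```python
import math

def chi2_from_lmax(llhd, lmax):
    """chi2 = -2 ln(L / Lmax): the likelihood-ratio test statistic
    relative to the maximum likelihood."""
    return -2.0 * math.log(llhd / lmax)
```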

tools.statistics.deltaChi2FromLlhd(likelihood)[source]

compute the delta chi2 value from a likelihood (convenience function)

tools.statistics.determineBrentBracket(mu_hat, sigma_mu, rootfinder, allowNegative=True)[source]

find a, b for Brent bracketing :param mu_hat: mu that maximizes the likelihood :param sigma_mu: error on mu_hat (not too reliable) :param rootfinder: function that finds the root (usually root_func) :param allowNegative: if False, do not allow a or b to become negative :returns: the interval a, b

## tools.stringTools module¶

tools.stringTools.cleanWalk(topdir)[source]

perform os.walk, but ignore all hidden files and directories
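A sketch of the hidden-file filtering described above, using the usual os.walk idiom of pruning the directory list in place so the walk never descends into hidden directories (illustrative, not the SModelS source):

```python
import os

def clean_walk(topdir):
    """os.walk, but skipping files and directories whose names
    start with a dot."""
    for root, dirs, files in os.walk(topdir):
        # prune hidden directories in place so os.walk does not descend
        dirs[:] = [d for d in dirs if not d.startswith(".")]
        files = [f for f in files if not f.startswith(".")]
        yield root, dirs, files
```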

tools.stringTools.concatenateLines(oldcontent)[source]

of all lines in the list “oldcontent”, concatenate the ones that end with \ or ,
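A sketch of that joining rule, treating a trailing backslash or comma as a continuation marker (illustrative; the exact whitespace handling of the SModelS function is an assumption):

```python
def concatenate_lines(oldcontent):
    """Join each line onto the previous one when the previous line
    ends with a backslash or a comma."""
    joined = []
    for line in oldcontent:
        if joined and joined[-1].rstrip().endswith(("\\", ",")):
            # drop trailing whitespace and a continuation backslash,
            # keep a trailing comma
            joined[-1] = joined[-1].rstrip().rstrip("\\") + line
        else:
            joined.append(line)
    return joined
```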

## tools.theoryPredictionsCombiner module¶

class tools.theoryPredictionsCombiner.TheoryPredictionsCombiner(theoryPredictions: list, slhafile=None, marginalize=False, deltas_rel=None)[source]

Bases: object

Facility used to combine theory predictions from different analyses.

constructor. :param theoryPredictions: the list of theory predictions :param slhafile: optionally, the slhafile can be given, for debugging :param marginalize: if true, marginalize nuisances; else, profile them :param deltas_rel: relative uncertainty in signal (float). Default value is 20%.
analysisId(*args, **kwargs)[source]
chi2(*args, **kwargs)[source]
computeCLs(expected: bool = False, return_type: str = '1-CLs')[source]

Compute the exclusion confidence level of the model (1-CLs) :param expected: if False, compute observed; if True, compute a priori expected :param return_type: (Text) can be “CLs-alpha”, “1-CLs”, “CLs”

CLs-alpha: returns CLs - 0.05 1-CLs: returns 1-CLs value CLs: returns CLs value
computeStatistics(*args, **kwargs)[source]
dataId(*args, **kwargs)[source]
dataType(*args, **kwargs)[source]
describe()[source]

returns a string containing a list of all analysisId and dataIds

findMuHat(allowNegativeSignals: bool = False, expected: Union[bool, str] = False, extended_output: bool = False, nll: bool = False) → Union[Dict, float, None][source]

find muhat and lmax. :param allowNegativeSignals: if true, then also allow for negative values :param expected: if true, compute expected prior (=lsm); if “posteriori”, compute posteriori expected
Parameters: extended_output – if true, also return sigma_mu, the estimate of the error on mu_hat, and lmax, the likelihood at mu_hat nll – if true, return the negative log max likelihood instead of lmax
Returns: mu_hat, i.e. the maximum likelihood estimate of mu; if extended output is requested, returns a dictionary with mu_hat, sigma_mu – the standard deviation around mu_hat, and lmax, i.e. the likelihood at mu_hat
getCLsRootFunc(expected: bool = False) → Tuple[float, float, Callable][source]

Obtain the function “CLs-alpha[0.05]” whose root defines the upper limit, plus mu_hat and sigma_mu :param expected: if True, compute expected likelihood, else observed

getLlhds(muvals, expected=False, normalize=True)[source]

Compute the likelihoods for the individual analyses and the combined likelihood. Returns a dictionary with the analysis IDs as keys and the likelihood values as values.

Parameters: muvals – List with values for the signal strength for which the likelihoods must be evaluated. expected – If True returns the expected likelihood values. normalize – If True normalizes the likelihood by its integral over muvals.
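A sketch of how such per-analysis likelihood curves can be combined and normalised. The data layout (a hypothetical `analysis_llhds` mapping from analysis ID to a callable L(mu)) is an assumption for illustration; the real method draws the individual likelihoods from the stored theory predictions:

```python
import math

def get_llhds(muvals, analysis_llhds, normalize=True):
    """Evaluate each analysis likelihood on muvals, take the product as
    the combined likelihood, optionally normalise each curve by its
    trapezoidal integral over muvals."""
    llhds = {ana: [lfunc(mu) for mu in muvals]
             for ana, lfunc in analysis_llhds.items()}
    llhds["combined"] = [math.prod(vals) for vals in zip(*llhds.values())]
    if normalize:
        for key, curve in llhds.items():
            integral = sum(0.5 * (curve[i] + curve[i + 1]) *
                           (muvals[i + 1] - muvals[i])
                           for i in range(len(muvals) - 1))
            if integral > 0.0:
                llhds[key] = [v / integral for v in curve]
    return llhds
```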
getRValue(*args, **kwargs)[source]
getUpperLimit(*args, **kwargs)[source]
getUpperLimitOnMu(*args, **kwargs)[source]
getmaxCondition(*args, **kwargs)[source]
likelihood(*args, **kwargs)[source]
lmax(*args, **kwargs)[source]
lsm(*args, **kwargs)[source]
muhat(*args, **kwargs)[source]
classmethod selectResultsFrom(theoryPredictions, anaIDs)[source]

Select the results from theoryPrediction list which match one of the IDs in anaIDs. If there are multiple predictions for the same ID for which a likelihood is available, it gives priority to the ones with largest expected r-values.

Parameters: theoryPredictions – list of TheoryPrediction objects anaIDs – list with the analyses IDs (in string format) to be combined
Returns: a TheoryPredictionsCombiner object for the selected predictions. If no theory prediction was selected, return None.
singleDecorator()[source]

If the combiner holds a single theoryPrediction, calls the theoryPrediction function.

totalXsection(*args, **kwargs)[source]

## tools.timeOut module¶

exception tools.timeOut.NoTime(value=None)[source]

Bases: Exception

The time out exception. Raised when the running time exceeds the timeout.

class tools.timeOut.Timeout(sec)[source]

Bases: object

Timeout class using ALARM signal.

raise_timeout(*args)[source]
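The alarm-based pattern behind this class can be sketched as a context manager (illustrative, Unix-only; `NoTime` is redefined locally so the snippet is self-contained):

```python
import signal

class NoTime(Exception):
    """Raised when the running time exceeds the timeout."""

class Timeout:
    """Timeout context manager using the ALARM signal (Unix only)."""
    def __init__(self, sec):
        self.sec = sec
    def __enter__(self):
        signal.signal(signal.SIGALRM, self.raise_timeout)
        signal.alarm(self.sec)  # SIGALRM fires after sec seconds
        return self
    def __exit__(self, *args):
        signal.alarm(0)  # cancel any pending alarm
    def raise_timeout(self, *args):
        raise NoTime("time limit of %d s exceeded" % self.sec)
```

Usage: `with Timeout(5): long_running_call()` raises `NoTime` if the call takes longer than five seconds.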

## tools.toolBox module¶

class tools.toolBox.ToolBox[source]

Bases: object

A singleton-like class that keeps track of all external tools. Intended to make installation and deployment easier.

Constructor creates the singleton.
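A sketch of the singleton-like registry idea (hypothetical implementation, not the SModelS source; the `name` attribute on tool instances is an assumption):

```python
class ToolBox:
    """Singleton-style registry of external tools: every construction
    returns the same shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.tools = {}
        return cls._instance

    def add(self, instance):
        """Register a tool instance under its name attribute."""
        self.tools[instance.name] = instance

    def listOfTools(self):
        """Return a simple list with the tool names."""
        return list(self.tools)
```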

add(instance)[source]

Adds a tool by passing an instance to this method.

checkInstallation(make=False, printit=True, longL=False)[source]

Checks if all tools listed are installed properly, returns True if everything is ok, False otherwise.

compile()[source]

Tries to compile and install tools that are not yet marked as ‘installed’.

get(tool, verbose=True)[source]

Gets instance of tool from the toolbox.

initSingleton()[source]

Initializes singleton instance (done only once for the entire class).

installationOk(ok)[source]

Returns color coded string to signal installation issues.

listOfTools()[source]

Returns a simple list with the tool names.

tools.toolBox.main(args)[source]

## tools.wrapperBase module¶

class tools.wrapperBase.WrapperBase[source]

Bases: object

An instance of this class represents the installation of an external tool.

An external tool encapsulates a tool that is executed via commands.getoutput. The wrapper defines how the tool is tested for proper installation and how the tool is executed.

absPath(path)[source]

Get the absolute path of <path>, replacing <install> with the installation directory.
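The placeholder expansion described above can be sketched as follows (the function name and the exact `<install>` token handling are assumptions based on the docstring):

```python
import os

def abs_path(path, install_directory):
    """Replace an '<install>' placeholder with the installation
    directory, then return the absolute path."""
    return os.path.abspath(path.replace("<install>", install_directory))
```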

basePath()[source]

Get the base installation path.

checkInstallation(compile=True)[source]

Checks if the installation of the tool is correct by looking for the executable and executing it. If the check fails and compile is True, then try to compile it.

Returns: True, if everything is ok
chmod()[source]

chmod 755 on the executable, if it exists. Do nothing if it doesn't exist.
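The behaviour reads as a two-liner; a self-contained sketch (function name chosen for illustration):

```python
import os

def chmod_755(executable):
    """Set mode 755 (rwxr-xr-x) on the executable if it exists,
    otherwise do nothing."""
    if os.path.exists(executable):
        os.chmod(executable, 0o755)
```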

compile()[source]

Try to compile the tool.

complain()[source]
defaulttempdir = '/tmp/'
installDirectory()[source]
Returns: the installation directory of the tool
pathOfExecutable()[source]
Returns: path of executable
tempDirectory()[source]

Return the temporary directory name.

tools.wrapperBase.ok(b)[source]
Returns: ‘ok’ if b is True, else ‘error’.

## tools.xsecComputer module¶

class tools.xsecComputer.ArgsStandardizer[source]

Bases: object

simple class to collect all argument manipulators

checkAllowedSqrtses(order, sqrtses)[source]

check if the sqrtses are ‘allowed’

checkNCPUs(ncpus, inputFiles)[source]
getInputFiles(args)[source]

get the names of the SLHA files to run over

getOrder(args)[source]

retrieve the order in perturbation theory from argument list

getPythiaVersion(args)[source]
getSSMultipliers(multipliers)[source]
getSqrtses(args)[source]

extract the sqrtses from argument list

queryCrossSections(filename)[source]
writeToFile(args)[source]
class tools.xsecComputer.XSecComputer(maxOrder, nevents, pythiaVersion, maycompile=True)[source]

Bases: object

cross section computer class, what else?

Parameters: maxOrder – maximum order to compute the cross section, given as an integer: if maxOrder == LO, compute only LO pythia xsecs; if maxOrder == NLO, apply NLO K-factors from NLLfast (if available); if maxOrder == NLL, apply NLO+NLL K-factors from NLLfast (if available) nevents – number of events for pythia run pythiaVersion – pythia6 or pythia8 (integer) maycompile – if True, then tools can get compiled on-the-fly
addCommentToFile(comment, slhaFile)[source]

add the optional comment to file

addHigherOrders(sqrts, slhafile)[source]

addMultipliersToFile(ssmultipliers, slhaFile)[source]

add the signal strength multipliers to the SLHA file

addXSecToFile(xsecs, slhafile, comment=None, complain=True)[source]

Write cross sections to an SLHA file.

Parameters: xsecs – a XSectionList object containing the cross sections slhafile – target file for writing the cross sections in SLHA format comment – optional comment to be added to each cross section block complain – complain if there are already cross sections in file
applyMultipliers(xsecs, ssmultipliers)[source]

apply the given multipliers to the cross sections
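A sketch of the rescaling logic, matching each cross section's mother-pid pair against the multiplier dictionary regardless of pid ordering. The `(pids, value)` list layout is a hypothetical simplification of the XSectionList structure:

```python
def apply_multipliers(xsecs, ssmultipliers):
    """Rescale cross sections by signal strength multipliers.
    xsecs: list of (mother_pids, value) pairs (illustrative layout);
    ssmultipliers: dict of pid tuples to multipliers,
    e.g. {(1000001, 1000021): 1.1}."""
    rescaled = []
    for pids, value in xsecs:
        for key, mult in ssmultipliers.items():
            # compare pid tuples independently of ordering
            if sorted(key) == sorted(pids):
                value = value * mult
                break
        rescaled.append((pids, value))
    return rescaled
```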

compute(sqrts, slhafile, lhefile=None, unlink=True, loFromSlha=None, pythiacard=None, ssmultipliers=None)[source]

Run pythia and compute SUSY cross sections for the input SLHA file.

Parameters: sqrts – sqrt{s} to run Pythia, given as a unum (e.g. 7.*TeV) slhafile – SLHA file lhefile – LHE file. If None, do not write pythia output to file. If file does not exist, write pythia output to this file name. If file exists, read LO xsecs from this file (does not run pythia). unlink – Clean up temp directory after running pythia loFromSlha – If True, uses the LO xsecs from the SLHA file to compute the higher order xsecs pythiacard – Optional path to pythia.card. If None, uses smodels/etc/pythia.card ssmultipliers – optionally supply signal strength multipliers, given as a dictionary with the tuple of the mothers’ pids as keys and multipliers as values, e.g. { (1000001,1000021):1.1 }
Returns: XSectionList object
computeForBunch(sqrtses, inputFiles, unlink, lOfromSLHA, tofile, pythiacard=None, ssmultipliers=None)[source]

compute xsecs for a bunch of slha files

computeForOneFile(sqrtses, inputFile, unlink, lOfromSLHA, tofile, pythiacard=None, ssmultipliers=None, comment=None)[source]

Compute the cross sections for one file.

Parameters: sqrtses – list of sqrt{s} to run pythia, as a unum (e.g. [7*TeV]) inputFile – input SLHA file to compute xsecs for unlink – if False, keep temporary files lOfromSLHA – try to obtain LO xsecs from the SLHA file itself tofile – False, True, “all”: write results to file; if “all”, also write lower-order xsecs to file pythiacard – optionally supply your own runcard ssmultipliers – optionally supply signal strength multipliers, given as a dictionary with the tuple of the mothers’ pids as keys and multipliers as values, e.g. { (1000001,1000021):1.1 } comment – an optional comment that gets added to the slha file
Returns: number of xsections that have been computed
getPythia()[source]

returns the pythia tool that is configured to be used

match(pids, theorypid)[source]

do the pids given by the user match the pids of the theorypred?

xsecToBlock(xsec, inPDGs=(2212, 2212), comment=None, xsecUnit=1.00E+00 [pb])[source]

Generate a string for a XSECTION block in the SLHA format from a XSection object.

Parameters: inPDGs – defines the PDGs of the incoming states (default = 2212,2212) comment – is added at the end of the header as a comment xsecUnit – unit of cross sections to be written (default is pb). Must be a Unum unit.
tools.xsecComputer.main(args)[source]