TraceWin Error Study
--------------------

First, use the GUI to create an error study folder in your output directory
(see the functionality `Copy set of data`). You may copy as many jobs as you
want to run directly in the GUI, but this is very slow; we recommend instead
defining the number of runs in the submit command provided here. The submit
command uses hard links where possible, which also saves space. You then send
the jobs to the queue manager using the command `tracewin_errorstudy`.

The help output of this script already covers most of what you need to know::

    usage: tracewin_errorstudy [-h] [-s SEED] [-c CALC_DIR] [-n NUM_JOBS]
                               [-t STUDY] [-m MULTI] [-l LATTICE]
                               [-q SETTINGS] [-p PRIORITY]

    Simple setup script to run TraceWin on HTCondor. Default values in
    square brackets.

    optional arguments:
      -h, --help   show this help message and exit
      -s SEED      Define seed [None]
      -c CALC_DIR  Path to calculation folder [temp/]
      -n NUM_JOBS  Number of jobs [Count number of copied runs in TW]
      -t STUDY     Statistical study number [1]
      -m MULTI     Multi-dynamic study [1]
      -l LATTICE   The lattice file for the project [lattice.dat]
      -q SETTINGS  The settings for the project template (yaml) [settings.yml]
      -p PRIORITY  Job priority

A few things are worth mentioning:

- We provide the option of having the lattice file as a template, with
  settings specified in a YAML file. The template is assumed to have the
  ending *.tmp* (so for the default `lattice.dat`, the template file
  `lattice.dat.tmp` is expected to exist). Variables are then replaced as per
  str.format_ (see the sketch at the end of this section).
- The default Condor script templates should suffice for most users. However,
  we also provide the option to define your own template files, again filled
  in according to Python's str.format. If any of these files exist in the
  working directory, they are used instead of the built-in templates.

  - For standard jobs, the templates are *head.tmp* and *queue.tmp*.
  - For multi-jobs, the templates are *head.multi.tmp* and *queue.multi.tmp*.

- We provide the option to run a multi-dynamic study, which is perhaps a bit
  confusing. In this mode, it is assumed that you have already run a number
  of simulations. For each new job, we change the seed but read the same
  error table for dynamic errors. This makes it easier to see whether we are
  limited by static or dynamic tolerances.

.. _str.format: https://docs.python.org/2/library/stdtypes.html#str.format

Example 1
=========

Standard simulation of 100 machines in the folder calc::

    tracewin_errorstudy -n 100 -c calc

Example 2
=========

Simulate 50 machines, then for each of those run 20 machines with different
static errors but the same dynamic errors (1000 machines total)::

    tracewin_errorstudy -n 50 -c calc
    # wait until the simulations have finished
    # (check condor_q, condor_status, condor_history)
    tracewin_errorstudy -n 50 -m 20 -c calc
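
Template substitution
=====================

For reference, the template mechanism mentioned above amounts to a plain
str.format replacement of the settings into the *.tmp* file. The following
is a minimal sketch of that mechanism, not the script's actual
implementation; the placeholder names (``quad_grad``, ``cav_phase``) are
made up for illustration, and PyYAML is assumed to be available::

    # Minimal sketch of filling lattice.dat.tmp from settings.yml.
    # The placeholder names {quad_grad} and {cav_phase} are hypothetical;
    # use whatever keys your settings.yml actually defines.
    import yaml

    with open("settings.yml") as f:
        # e.g. {"quad_grad": 5.2, "cav_phase": -30.0}
        settings = yaml.safe_load(f)

    with open("lattice.dat.tmp") as f:
        # template containing {name} placeholders
        template = f.read()

    with open("lattice.dat", "w") as f:
        # concrete lattice file handed to TraceWin
        f.write(template.format(**settings))

The same kind of substitution applies to any custom Condor template files
you place in the working directory.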