Introduction to the VERITAS analysis with Event Display (ED)

To download the Crab run we'll use in this tutorial, simply run this script:
/a/home/tehanu/santander/reu/getCrabrun.sh
 
This should create a link in $VERITAS_USER_DATA_DIR/data/d20121013 pointing to the VBF file with the Crab data we'll be using. If we run this command:
 
> ls -l $VERITAS_USER_DATA_DIR/data/d20121013
  we should get something like:
total 0
lrwxrwxrwx 1 santander veritas 63 Jun 10 09:27 64080.cvbf -> /a/data/tehanu/santander/veritas/data/data/d20121013/64080.cvbf
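To double-check that the link resolves to a real file (and not a dangling symlink), standard shell tools are enough; for example:

> readlink -f $VERITAS_USER_DATA_DIR/data/d20121013/64080.cvbf
> ls -lL $VERITAS_USER_DATA_DIR/data/d20121013/64080.cvbf

readlink -f prints the fully resolved target path, and the -L flag makes ls follow the link, so the listing shows the size of the actual VBF file rather than of the link itself.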
 

Setting up an ED analysis

Setting up your environment

1) Log in to tehanu and execute the following command:

> cp ~santander/reu/myprofile.ED ~/.myprofile
  2) Then, you need to create some auxiliary folders for Eventdisplay to work. Executing this little script will do that for you and will show you where to find the folders it creates:
> /a/home/tehanu/santander/reu/create-aux-dirs.sh
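The script prints the exact locations of the folders it creates; if you want to look around afterwards, a plain listing works (this sketch just assumes they live under your user data directory, which may not match what the script reports):

> ls -l $VERITAS_USER_DATA_DIR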
  3) Either log out and log back in or execute:
> source ~/.bashrc
  That's it! Now, assuming steps 1-3 did not give you any trouble, let's check that ED works.

4) First, you need to let ED know which data you are going to be working with, VERITAS or CTA. This is also known as setting the observatory.

There are a couple of handy aliases in your .myprofile that you can use. This command will set CTA as your observatory:
 
> setcta
  Similarly, for VERITAS:
> setvts
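Since these are ordinary shell aliases defined in your .myprofile, you can always inspect what a given one actually does with the standard type builtin:

> type setcta setvts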
5) Check that you have access to the ED executables and print the ED version:
> evndisp -v
  This should output:
v480a
You should also run this command to make sure the Condor tools are properly set up:
> submit_dag.py
  The output should be:
usage: %prog [options] [-h] [-s SOURCE | -r RUN | -l LIST] [--rerun] [--nobdt]
                       [--runprocs PROCS] [--cuts CUTS] [--parfile PARFILE]
 This is the script we'll use to submit our jobs to the cluster.
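As the usage message shows, the script also accepts a single run directly via -r instead of a runlist; for instance:

> submit_dag.py -r 64080

In this tutorial we'll use the runlist route described below, since it scales to many runs.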

Submitting your first job to the cluster

Assuming that you have the Crab run we downloaded earlier (run #64080), prepare a runlist file using your favorite text editor, with one run ID number per line. For instance, create a file called runlist.txt that contains a single line, so that when you do:
 
> cat runlist.txt
  the output should be
64080
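Equivalently, since a runlist is just plain text, you could create this one-line file directly from the shell:

> echo 64080 > runlist.txt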
 
The next step is to submit the job to the cluster; we'll use submit_dag.py for that:
 
> submit_dag.py -l runlist.txt
  The output should look similar to this:
----------------------------------------
 >>> Condor options <<<
 

Once this is done, you should be able to check the status of your submission using condor_q:
 
> condor_q $USER
 
If you are using submit_dag.py, three jobs are submitted to the Condor system (one for each ED stage, namely evndisp, mscw, and anasum). The jobs have internal dependencies, so that when evndisp finishes, the mscw stage is started right after.
  Running
> condor_q $USER -dag
  will return
-- Submitter: tehanu.nevis.columbia.edu : <129.236.252.111:36147> : tehanu.nevis.columbia.edu
 ID      OWNER            SUBMITTED     RUN_TIME ST PRI SIZE CMD
This shows that the 'A' job depends on the DAG manager that is in charge of running the three stages for that run.

More details on Condor are available here.
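Two more standard commands come in handy while your jobs are running (watch is a common Linux utility that re-runs a command periodically):

> watch -n 30 condor_q $USER -dag    # refresh the queue view every 30 seconds
> condor_rm $USER                    # remove all of your queued jobs if something went wrong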

Combining anasum files

Once all the anasum stages have been processed, you can combine all of the anasum files using this script, which you can copy to your folder:

/a/home/tehanu/santander/reu/combine_anasum.sh

You can run it by giving a runlist and an output file name as parameters:

./combine_anasum.sh runlist.txt myoutputfile.root

Once this is done, you can copy the ROOT script here:

/a/home/tehanu/santander/reu/printResults.C

and edit it so that the filename points to the results file you just created.
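Assuming a standard ROOT setup (ROOT should already be available in your environment if ED runs), you can then execute the macro from the command line:

> root -l printResults.C

The -l flag simply skips the ROOT splash screen on startup.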

 
More information on the functions called in printResults.C is available in the ED manual wiki.
 

 