Fermi Analysis with FermiPy

Installation and Setup

The following instructions will set up an installation of the Fermi Science Tools (fermitools) and Fermipy using conda. Conda is a package manager that will automatically manage all of the dependencies needed to use the software and encapsulate them into an environment, which must be activated before use.

First, set up your environment for using the Fermipy pipeline by running the following commands:

cat /a/home/tehanu/brill/fermipipe/bash_setup.txt >> ~/.myprofile
source ~/.bashrc

In addition to initializing the conda environment for the Fermi software for your account, this provides several definitions and commands (you can view them by opening the .myprofile file in your home directory in a text editor):

  * FERMI_ANALYSIS_DIR: the directory where your analysis will occur by default
  * fermisetup: set up your environment before running analyses
  * fermianalysis="nohup python $FERMIPIPE/run_analysis.py $FERMI_ANALYSIS_DIR/pipeline_config.yml &": run an analysis

NOTE: If your home directory is on Milne or another Nevis machine other than tehanu, see the special instructions "Customizing the Installation" below. If you're unsure where your home directory is, type echo $HOME.

Next, create a directory for running your analysis, and copy the configuration file templates there:

mkdir -p $FERMI_ANALYSIS_DIR
cp $FERMIPIPE/pipeline_config.yml $FERMI_ANALYSIS_DIR

Customizing the Installation

If you'd like to perform your analysis somewhere besides the tehanu data partition, change FERMI_ANALYSIS_DIR in your .myprofile file to your preferred directory. For example, to perform the analysis in your home directory (recommended if your account is on milne), the line should read:

export FERMI_ANALYSIS_DIR=$HOME/fermi_analysis

NOTE: As of Fermipy v0.19, installation via conda is broken. See Fermipy issue #337.

If you need to recreate the conda environment from scratch, use the setup_pipeline script to download and install fermitools and fermipy into an environment named "fermi". By default, the software is installed to /a/data/tehanu/$USER/miniconda3, where $USER is your username. Run the following command from any directory:

$FERMIPIPE/setup_pipeline.sh
The script should take several minutes to run. Ignore any message saying to run source ~/.bashrc again; the script does that for you automatically.

If you'd like to change the installation path of miniconda, run the setup script using the -p flag to change the path, as follows:

$FERMIPIPE/setup_pipeline.sh -p <CONDA_PATH>

For example, to install to your home directory, use -p $HOME/miniconda3. You can also use this option to create the fermi environment in an existing conda installation, if you have one, provided it doesn't already have an environment named fermi.

Configuring Your Analysis

When running an analysis with Fermipy, all parameters are set in a configuration file. Different sections contain the parameters for the data selection, likelihood fit, source model, and so on. There are many configuration parameters you can adjust when running an analysis. In your analysis directory, you should have a file called config.yml, which you can use as a template for your Fermipy configuration file. The settings in it are meant for a standard setup and can be adjusted as needed. The Fermipy documentation lists all possible configuration settings.
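As a hedged illustration, a minimal Fermipy configuration might look like the sketch below. The section and parameter names follow the Fermipy quickstart documentation (data, binning, selection, gtlike, model); the values (target, time range, diffuse model files) are placeholders you would replace with your own:

```yaml
data:
  evfile: ft1.lst          # list of event files
  scfile: ft2.fits         # spacecraft file

binning:
  roiwidth: 10.0           # width of the ROI in degrees
  binsz: 0.1               # spatial bin size in degrees
  binsperdec: 8            # energy bins per decade

selection:
  emin: 100                # minimum energy in MeV
  emax: 100000             # maximum energy in MeV
  zmax: 90                 # zenith angle cut
  evclass: 128             # SOURCE event class
  evtype: 3                # FRONT+BACK
  tmin: 239557414          # start time (MET); placeholder value
  tmax: 428903014          # stop time (MET); placeholder value
  target: 'mkn421'         # a known source name, or use ra/dec instead

model:
  src_roiwidth: 15.0
  galdiff: 'gll_iem_v07.fits'
  isodiff: 'iso_P8R3_SOURCE_V2_v1.txt'
  catalogs: ['4FGL']
```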

To perform an analysis, you will have to set a minimum of three parameters: the start time, the stop time, and the name of the target to analyze (if your target is not a known Fermi source, you can define the target using coordinates instead). The analysis start and stop times must be provided in Fermi mission elapsed time (MET). NASA provides a handy time converter that you can use to convert between a number of calendar/time formats and MET.
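Fermi MET counts seconds from the mission reference epoch, 2001-01-01 00:00:00 UTC. For a rough conversion in Python (this sketch ignores the handful of leap seconds since 2001, which the official NASA converter handles):

```python
from datetime import datetime, timezone

# Fermi mission elapsed time (MET) is counted in seconds from this epoch.
FERMI_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def utc_to_met(dt):
    """Approximate MET for a UTC datetime (ignores leap seconds)."""
    return (dt - FERMI_EPOCH).total_seconds()

# Example: the start of 2020 in approximate MET.
met_2020 = utc_to_met(datetime(2020, 1, 1, tzinfo=timezone.utc))
print(met_2020)  # 599529600.0 (the official converter also adds leap seconds)
```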

The pipeline wrapping Fermipy is configured in a similar way. In your analysis directory, you should have a file called pipeline_config.yml, which you can use as a template for your pipeline configuration file. It sets the following parameters:

  * the name to be used for the output directory (within the fermi_analysis directory) and as a prefix for your output files; it shouldn't contain spaces
  * the name of your Fermipy config file; if null, defaults to <prefix>_config.yml
  * a list of names of sources to delete from the model
  * a list of dictionaries containing parameters defining sources to delete from the model; isodiff, galdiff, and all custom sources are automatically excluded
  * a list of names of sources to free in the model fitting
  * a list of dictionaries containing parameters defining sources to free in the model fitting

The following parameters pertain to the light curve analysis only:

  * the number of sections into which to split the light curve; if null, do not split
  * the index of the section to analyze if num_sections is not null
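Putting those parameters together, a pipeline configuration might look like the sketch below. The key names here are hypothetical guesses (only num_sections is named explicitly above); consult the pipeline_config.yml template for the actual keys:

```yaml
# Hypothetical key names, for illustration only; the actual keys are
# defined in the pipeline_config.yml template copied earlier.
prefix: my_target_2020    # output directory name and file prefix (no spaces)
fermipy_config: null      # null -> defaults to <prefix>_config.yml
delete_source_names: []   # sources to delete from the model, by name
delete_sources: []        # dictionaries defining sources to delete
free_source_names: []     # sources to free in the fit, by name
free_sources: []          # dictionaries defining sources to free

# Light curve options
num_sections: 4           # split the light curve into 4 sections
section: 0                # index of the section to analyze
```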

Running Your Analysis

First, set up your environment and activate the conda environment:

fermisetup
Now you can run an analysis using your config files:

fermianalysis
The fermianalysis command runs the analysis in the background, using nohup. This means that it will continue to run even if you close your terminal, and that instead of printing the output to the screen, it saves it to a file called nohup.out.

To generate the light curve, first run the base analysis. After that completes, run the light curve analysis by rerunning the analysis with the --lightcurve option. The pipeline can divide the light curve analysis into sections, so that an error encountered in one bin crashes only a portion of the analysis rather than the whole thing. If you use this functionality, the section may be specified either in the pipeline configuration file or on the command line with the --section option, which overrides the value set in the configuration file. After running the analysis for all sections, combine the output files.
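As a sketch of how sectioning might divide the work (this is illustrative Python, not the pipeline's actual partitioning logic; the function name and the even-split rule are assumptions):

```python
def section_bounds(tmin, tmax, binsz, num_sections, section):
    """Return the (start, stop) MET range covered by one light-curve section.

    Bins of width binsz are laid out from tmin; the bins are divided as
    evenly as possible among num_sections, with earlier sections taking
    any remainder. A sketch only; the pipeline may partition differently.
    """
    nbins = int((tmax - tmin) // binsz)
    base, extra = divmod(nbins, num_sections)
    counts = [base + (1 if i < extra else 0) for i in range(num_sections)]
    first = sum(counts[:section])
    start = tmin + first * binsz
    stop = start + counts[section] * binsz
    return start, stop

# Example: 10 daily bins split into 3 sections -> 4 + 3 + 3 bins.
print(section_bounds(0, 864000, 86400, 3, 0))  # (0, 345600)
```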

To set the bin size of your lightcurve (in units of seconds), edit the binsz parameter in the lightcurve section of your Fermipy config file.

Run the analysis for the first section of the light curve:

python $FERMIPIPE/run_analysis.py $FERMI_ANALYSIS_DIR/pipeline_config.yml --lightcurve --section 0

After running the analyses for all light curve sections, combine the output files:

python $FERMIPIPE/combine_lightcurve_results.py $FERMI_ANALYSIS_DIR/pipeline_config.yml

Jupyter Notebook

The following steps describe how to alter your SSH command to forward a port so that you can use Jupyter running on tehanu from the browser on your local computer.
Using Jupyter from tehanu:

  1. Here, ‘port’ will be a four-digit number that you choose. It must be the same in all commands where it is used. Choose something non-standard to reduce the chance that someone else is logged into tehanu using the same port. Avoid the common Linux ports listed here: https://www.techbrown.com/cheat-sheet-most-commonly-used-port-numbers-cheat-sheet-linux/
  2. ssh username@tehanu.nevis.columbia.edu -L port:localhost:port
  3. You are now in tehanu with your chosen port being forwarded to ‘localhost.’
  4. Activate your Anaconda environment and navigate to your analysis folder with the .ipynb file. (use fermisetup shown above)
  5. jupyter notebook --no-browser --port=port
  6. This will sit for a moment and print a block of text. Check that the port you specified wasn’t already in use. The printout includes a URL containing ‘localhost’; copy it into your browser and go there. You should now have access to Jupyter in your browser.
  7. Jupyter might request a token that was also in the output. If so, copy that into the appropriate box to gain access.
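Before launching Jupyter, you can check in advance whether your chosen port is already in use on tehanu. This small check uses only the Python standard library; the port number below is just an example:

```python
import socket

def port_in_use(port, host="localhost"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0

# Pick another port if this prints True.
print(port_in_use(8890))
```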
For analysis, you can use many of the commands from the run_analysis.py script, which creates all the objects necessary for GTAnalysis. Most importantly, note the calls to gta.free_source() and gta.delete_sources() for the default model selection, which live between the calls to gta.setup(), gta.optimize(), and gta.fit() and the various print statements. You can modify the default by adding or deleting sources and fixing or freeing parameters to help your fit converge, and then use plots to assess how it worked. Making these calls explicitly in your notebook avoids having to run the same script multiple times.
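As an illustration of that workflow, the sequence below mirrors the calls described above, using method names from the Fermipy API documentation. It is a sketch run inside the fermi environment, not the contents of run_analysis.py; the deleted source name is hypothetical, and the distance cut is an example value:

```python
from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yml', logging={'verbosity': 3})
gta.setup()      # prepare the data products for the ROI
gta.optimize()   # coarse initial fit of all sources

# Adjust the model by hand instead of rerunning the script:
gta.delete_source('3FGL J0000.0+0000')  # hypothetical source name
gta.free_source('galdiff')              # free the Galactic diffuse model
gta.free_sources(distance=3.0)          # free sources within 3 deg (example)

fit_result = gta.fit()                  # full likelihood fit
print(fit_result['fit_quality'])
```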


Troubleshooting

Fermipy says "WARNING: version mismatch between CFITSIO header (v3.43) and linked library (v3.41)": This should not cause any issues with the analysis.

My analysis finished without reporting any errors, but I don't see any plots.
This sometimes occurs. Simply rerun the analysis; it will pick up from where it terminated and produce the plots.

Useful links



Fermipy documentation

Fermipy Tutorials from Fermi Summer School

NASA mission time converter

Managing conda environments

Fermi-LAT Summer School presentations:

  1. 2012 complete agenda
    1. Useful slides with good tips on getting an analysis to converge; slide 28 is best
  2. 2013 complete agenda
  3. 2014 complete agenda
  4. 2015 complete agenda
  5. 2016 complete agenda
  6. 2017 complete agenda
  7. 2018 complete agenda
    1. This Python notebook walks through a full analysis with Fermipy, very similar to what run_analysis.py does.
  8. 2019 complete agenda

-- Ari Brill - 2020-02-17


Topic attachments
  * FermiPy_Analysis_Tips.pptx (PowerPoint, 850.3 K, 2020-07-21, GwenLaPlante)
  * Introduction_to_Fermipy_Analysis_at_Nevis.pdf (PDF, 2684.5 K, 2020-06-01, AriBrill)
Topic revision: r17 - 2020-08-21 - AriBrill