Pablo Drake Research Journal (Summer '22)



  • Plotted several graphs of blazar population data from both TeVCat and the 4LAC Fermi-LAT catalogue. In particular, I completed two skymaps of blazars, two pie charts of blazar classifications, and a scatterplot of the Fermi blazars' photon index vs. peak frequency, colored by classification. All the data was retrieved from the catalogues and the plotting was carried out in Jupyter notebooks using matplotlib. This work is to be included in some of Professor Mukherjee's slides.
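A skymap of the kind described above can be sketched as follows. This is a minimal illustration, not the actual notebook code: the coordinates are placeholder values, and the real positions come from the TeVCat/4LAC catalogues.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

def wrap_ra(ra_deg):
    """Wrap RA from [0, 360) to [-180, 180) for an Aitoff projection."""
    ra = np.asarray(ra_deg, dtype=float)
    return np.where(ra > 180.0, ra - 360.0, ra)

# Placeholder blazar positions in degrees (the real values come from the catalogues).
ra = np.array([10.0, 200.0, 350.0])
dec = np.array([-30.0, 15.0, 60.0])

fig = plt.figure(figsize=(8, 4))
ax = fig.add_subplot(111, projection="aitoff")
# The aitoff axes expect coordinates in radians.
ax.scatter(np.radians(wrap_ra(ra)), np.radians(dec), s=10, label="blazars")
ax.grid(True)
ax.legend(loc="lower right")
fig.savefig("blazar_skymap.png")
```

Coloring the markers by classification (as in the 4LAC pie charts) would just mean passing a per-source color array to `scatter`.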
  • Read Observing the Energetic Universe at Very High Energies with the VERITAS Gamma Ray Observatory by Prof. Mukherjee. Wrote a set of reference notes.
  • Read the CERN Courier article on the Cherenkov Telescope Array (CTA). Wrote a set of reference notes.
  • Attended Daniela's VEGAS tutorial.
  • Started the Crab Analysis tutorial with VEGAS
  • Continued with the Crab Analysis (stage 6)
  • Read Exploring the high-energy gamma-ray spectra of TeV blazars by Dr. Qi Feng. Wrote a set of reference notes.
  • Centered all my work on the analysis of the May 2022 flare of the FSRQ 4FGL J1350.8+3033. The analysis is structured as follows: carry out a fermipy ROI analysis, plot the lightcurve of the source, plot the lightcurve of the flare, carry out a spectral analysis during the flare's buildup, maximum, and decay (and potentially also during the quiescent state), and carry out a VEGAS spectral analysis of the source.
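The ROI analysis step above starts from a fermipy YAML configuration. The sketch below shows the usual shape of such a file; the file names, energy range, and binning values are illustrative placeholders, not the settings I actually used.

```yaml
data:
  evfile: ft1_filelist.txt        # placeholder list of LAT event files
  scfile: spacecraft.fits         # placeholder spacecraft file
binning:
  roiwidth: 10.0                  # degrees
  binsz: 0.1
  binsperdec: 8
selection:
  target: 4FGL J1350.8+3033
  emin: 100                       # MeV
  emax: 100000
  zmax: 90
  evclass: 128
  evtype: 3
gtlike:
  edisp: true
  irfs: P8R3_SOURCE_V3
model:
  src_roiwidth: 15.0
  galdiff: gll_iem_v07.fits
  isodiff: iso_P8R3_SOURCE_V3_v1.txt
```

With a config like this, the pipeline essentially runs `GTAnalysis(config)`, then `setup()`, `optimize()`, and `fit()`.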
  • Completed the VEGAS tutorial
  • Started working again with fermipy. An error popped up when attempting to carry out the ROI analysis of the FSRQ 4FGL J1350.8+3033:
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
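A missing `encodings` module at startup usually means the interpreter was launched with environment variables pointing at a different Python installation. A quick stdlib-only diagnostic, a sketch that assumes nothing about the specific setup:

```python
import os
import sys

def env_report():
    """Collect the interpreter settings that commonly cause the
    'init_fs_encoding' / missing 'encodings' failure when two software
    stacks (e.g. VEGAS and fermipy) are mixed in one shell session."""
    return {
        "executable": sys.executable,
        "prefix": sys.prefix,
        "PYTHONHOME": os.environ.get("PYTHONHOME"),
        "PYTHONPATH": os.environ.get("PYTHONPATH"),
    }

report = env_report()
for key, value in report.items():
    print(f"{key}: {value}")

# If PYTHONHOME is set but does not match sys.prefix, the interpreter
# looks for the standard library (including 'encodings') in the wrong tree.
mismatch = (report["PYTHONHOME"] is not None
            and report["PYTHONHOME"] != report["prefix"])
print("possible PYTHONHOME mismatch:", mismatch)
```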


  • Started writing a fermipy log/manual document, noting down the different errors that appear.
  • Found out the error was caused by simultaneously sourcing the VEGAS and fermipy setup code through my .myprofile file, possibly because of a conflict between subpackages. Another error appeared:
Traceback (most recent call last):
  File "./", line 348, in <module>
    free_sources=pipeline_config.get('free_sources'))
  File "./", line 50, in run_analysis
    gta.setup()
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 1082, in setup
    c.setup(overwrite=overwrite)
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 5121, in setup
    self._ltc = LTCube.create(self.files['ltcube'])
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 192, in create
    ltc = cls.create_from_fits(files[0])
IndexError: list index out of range
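The `IndexError` above comes from `LTCube.create` being handed an empty file list, i.e. no livetime-cube file was found. A small stdlib check before launching the analysis can surface this with a readable message; the ltcube filename pattern here is an assumption, not necessarily what fermipy writes in every setup.

```python
import glob
import os
import tempfile

def find_ltcube(workdir, pattern="ltcube*.fits"):
    """Return ltcube files under workdir, or raise a clear error instead
    of letting the analysis fail later with 'list index out of range'."""
    matches = sorted(glob.glob(os.path.join(workdir, pattern)))
    if not matches:
        raise FileNotFoundError(
            f"No ltcube matching {pattern!r} in {workdir}; "
            "rerun the setup stage or check the config paths.")
    return matches

# Example with a temporary directory standing in for the analysis workdir.
workdir = tempfile.mkdtemp()
open(os.path.join(workdir, "ltcube_00.fits"), "w").close()
print(find_ltcube(workdir))
```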
  • Read EGRET (GeV) Blazars by Prof. Mukherjee. Wrote a set of reference notes.



  • Worked on adjusting the legends and colors of the skymaps and pie charts that I coded last week. Also adjusted the sizes so that they look better in the slides. Finally, researched and included a paragraph on the blazar populations known at present.
  • Reinstalled fermipy using the files provided by Massimo. The ROI analysis then ran without any errors.
  • As per Prof. Mukherjee's suggestion, I started working on a matplotlib data-plotting manual, using the code created for the TeVCat/4LAC graphs. Also continued working on the fermipy guide.
  • Completed the ROI analysis. Redid the analysis so that the graphs would be created (sometimes the plots aren't created automatically when the run finishes).
  • Read The future of gamma-ray astronomy by Jürgen Knödlseder. Wrote a set of notes on it.
  • Started working on a full-dataset lightcurve. The following error appeared:
Traceback (most recent call last):
  File "/nevis/milne/files/pd2629/fermi_analysis/run_analysis_std.py", line 349, in <module>
    run_lightcurve(fermipy_config, prefix, num_sections, section)
  File "/nevis/milne/files/pd2629/fermi_analysis/run_analysis_std.py", line 253, in run_lightcurve
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 3600, in load_roi
    roi_data = utils.load_data(infile, workdir=self.workdir)
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 79, in load_data
    raise Exception('Input file does not exist.')
Exception: Input file does not exist.
  • Found out that the error described by this traceback had to do with a naming mismatch in Ari Brill's pipeline code. In particular, the input file for the lightcurve analysis had to be referred to as _initial.npy, as this was the output of the baseline analysis.
  • Regarding the full dataset lightcurve, the following error appeared:
Traceback (most recent call last):
  File "/nevis/milne/files/pd2629/fermi_analysis/run_analysis_std.py", line 350, in <module>
    run_lightcurve(fermipy_config, prefix, num_sections, section)
  File "/nevis/milne/files/pd2629/fermi_analysis/run_analysis_std.py", line 261, in run_lightcurve
    gta.lightcurve(fermipy_config['selection']['target'], make_plots=True)
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 289, in lightcurve
    o = self._make_lc(name, **config)
  File "/a/share/ged/src/miniconda3/envs/fermipy-1.0.1/lib/python3.7/site-packages/fermipy/", line 452, in _make_lc
    if not next_fit['fit_success']:
KeyError: 'fit_success'
  • This error happened during the computation of the first time bin of the lightcurve analysis, so I tried changing that bin's TS requirements and length, to no avail; the error persisted.



  • Solved the previous fit_success error by updating fermipy to v1.1. Found out that the issue had already been raised on the official fermipy GitHub (which often happens), and a fix had been implemented earlier in 2022.
  • However, another issue started popping up. The lightcurve analysis did start, with the gtcube process finishing, but when trying to fit the different time bins, the following sequence appeared (as can be checked in the log file):
Analysis failed in time range 675480000 675566400
<class 'TypeError'>
Analysis failed in time range 675566400 675652800
<class 'TypeError'>
Analysis failed in time range 675652800 675739200
<class 'TypeError'>
Analysis failed in time range 675739200 675825600
<class 'TypeError'>
Analysis failed in time range 675825600 675912000
<class 'TypeError'>
  • In fact, a FITS file would result from this analysis, but it wouldn't include any data points.
  • Contacted Dr. Qi Feng about the problem. He commented that a potential reason could be the generally low flux levels of 4FGL J1350.8, guessing that the fit error was a convergence failure caused by those low flux levels. As a solution, he proposed either significantly longer time bins or a series of very short ones: either make the flux per bin larger by adding time, or let most bins hold very little flux while the few around the flare converge thanks to their elevated flux. In a conversation with Massimo Capasso, we also decided to modify the ROI in order to potentially increase the baseline flux (even if it didn't come from our source).
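Custom time bins like the ones Qi suggested can be handed to fermipy explicitly as a list of Mission Elapsed Time (MET) edges. A small helper to build them; the MET values below simply reproduce the one-day (86400 s) ranges that appear in the failing log above.

```python
def met_bin_edges(start_met, stop_met, binsz):
    """Build lightcurve time-bin edges in Fermi Mission Elapsed Time
    (seconds since 2001-01-01). The last bin is truncated at stop_met
    if the interval does not divide evenly."""
    edges = list(range(int(start_met), int(stop_met), int(binsz)))
    edges.append(int(stop_met))
    return edges

# One-day bins over the five days that failed in the log.
edges = met_bin_edges(675480000, 675912000, 86400)
print(edges)
```

These edges could then be passed to the lightcurve step through fermipy's `time_bins` option instead of a fixed `binsz`.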



  • Carried out several of the analyses suggested by Qi and Massimo. For example, I tried 3-month and 5-month time bins, as well as a series of 1-day bins for the week around the May 2022 flare. None of these changes had any effect; the problem persisted.
06/28 and 06/29
  • Carried out several analyses with an increased ROI, up to 40 degrees. Tried several combinations of ROI radius and time bins, to no avail.
  • Read Variability and Spectral Characteristics of Three Flaring Gamma-ray Quasars Observed by VERITAS and Fermi-LAT by C. B. Adams. Wrote a set of notes on it.
  • Contacted Isabella Guilherme, an alum of the VERITAS group who had carried out a series of lightcurve analyses last year, to enquire about my problem. They showed me several lightcurve analyses that had been fully completed with a previous version of fermipy. I therefore tried to repeat the same process for their source.
  • This attempt didn't result in any changes. It doesn't seem especially advisable to return to a previous version of fermipy without redoing all analyses, as it might interfere with some of the analysis files.
  • Read Photopion production in black-hole jets and flat-spectrum radio quasars as PeV neutrino sources by Charles D. Dermer. Wrote a set of notes on it.



  • As per Deivid's suggestion, I attempted the lightcurve analysis while freeing lower-TS sources within a larger radius. I applied this modification to the whole baseline analysis, to the 1-week timeframe around the flare, and to the source J0658. The fit still wouldn't converge, and judging by the residmap there is no abnormal source, nothing unaccounted for by the model.
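The freeing options described above map onto fermipy's `lightcurve` configuration section. A sketch of that fragment; the specific values are illustrative, not the ones I ran with.

```yaml
lightcurve:
  binsz: 86400            # bin length in seconds (1 day here)
  free_radius: 5.0        # also free sources within this radius (deg)
  free_params: [norm]     # only free normalizations within each bin
  shape_ts_threshold: 16  # keep shape parameters free only for bright sources
  use_local_ltcube: true
  multithread: true
```

The same options can be passed as keyword arguments to `gta.lightcurve(...)` instead of being set in the config file.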
  • I had a Zoom chat with Janeth Valverde to consult on the lightcurve problem. Janeth pointed out that the problem might have to do with some of the Fermitools not running properly for a source this faint. Instead of using Ari's pipeline, she recommended checking this by running the Fermitools directly on the command line. She also recommended focusing solely on the last 6 months, and doing an unbinned analysis.
  • Janeth also suggested trying an adaptive-binning strategy (in parallel with the other analyses), and shared an example as a demonstration of the usefulness of adaptive binning.
  • Contacted Ari Brill regarding the problem. He asked the following questions, as a sanity check (which can be useful for others with similar problems):

    "1) Did the baseline analysis using the entire dataset converge successfully? Were you able to look at the validation plots and see that the results seemed reasonable?

    2) To follow up on what Qi was saying, if you look at the monthly lightcurve: you can see that there are years-long stretches in between the flares when the source isn't detected at all. So there are parts of the lightcurve that might not be detected even with a 12-week time bin. That shouldn't cause the fit to fail though. It should converge and report an upper limit.

    3) Are you able to see the error message that's printed out? Just to make sure, do you have enough space in your output directory to store all of the output files?"

  • In my response, I indicated that the baseline analysis did converge successfully and the resulting plots were reasonable. As Dr. Brill stated, the fit shouldn't fail to converge; that is the worrying part. I also cleared up some space in my directory, but the problem persisted.
  • In conversation with Dr. Brill, he directed me to a GitHub issue explaining that the fitting problem is caused by an incompatibility between fermipy and the newest version of Astropy. This seems to be a recurring problem having to do with the timing of fermipy and Astropy updates (in this case, the problem affected fermipy v1.1). As a solution, we decided to update fermipy again, to v1.4 (which had been released only two weeks before).
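Since version-dependent bugs like this keep recurring, a tiny guard at the top of the pipeline script could fail fast on a known-bad fermipy version. This is a hypothetical helper with deliberately simplified parsing (dotted integers only, no pre-release tags):

```python
def version_tuple(version):
    """Parse a simple dotted version string like '1.0.1' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def require_at_least(installed, minimum):
    """Return True if installed >= minimum. Tuple comparison handles
    cases like '1.10' > '1.4' correctly, unlike plain string comparison."""
    return version_tuple(installed) >= version_tuple(minimum)

# The fit_success bug was fixed after v1.0.1; the Astropy clash affected v1.1.
print(require_at_least("1.1", "1.0.1"))   # True
print(require_at_least("1.1", "1.4"))     # False
```

In practice the installed version would come from `fermipy.__version__`, and a failed check would raise an error telling the user to update the conda environment.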
  • We started following Prof. Valverde's instructions on how to carry out the ROI analysis and the lightcurve analysis directly with the Fermitools. She had sent a txt file named complete_unbinned_analysis.txt, which can be found at /nevis/milne/files/pd2629/fermitools/, with detailed instructions that I had to adapt to my source.
  • After updating the fermipy version on the server, the previous error was resolved. I then set out to produce a lightcurve of the full dataset, in order to gain a general understanding of the evolution of the source's flux. I set the time-bin size to 2 months, resulting in more than 100 measurements. To carry out that analysis in an orderly manner, I used Ari Brill's pipeline with the sections method. This is quite a useful pipeline, although Dr. Brill warned me in his emails that it can quickly become outdated due to the constant updating of fermipy. From what I have seen, the pipeline doesn't present any trouble right now.



Brookhaven Laboratory Visit


  • Completed the whole-dataset analysis. I then turned to the problem of defining flaring states.
-- Massimo Capasso - 2022-06-16


Topic revision: r13 - 2022-08-26 - PabloDrake