Claudia Tang Research Journal (Summer '22)
06/06-06/10
06/06
06/07
06/13-06/17
--
Massimo Capasso - 2022-06-16
Starting Monday June 27th 2022:
6/27/22: Attempted to apply various spectral fits to my current source's data using XSPEC. I was unsuccessful, and realized after group meeting that I had used the wrong region (due to complications when converting sky coordinates to detector coordinates). I couldn't immediately fix the problem because the XMM extended analysis is a very lengthy process, but I downloaded the data and did the base work to set myself up for further analysis the next day.
6/28/22: I reran the entire XMM extended analysis on my data for the region suspected to have diffuse emission. The fit was much better than before and made more sense. Kaya said the spectrum looked promising and told me to run the same extended analysis for a random background region, apply the same spectral fit, and later compare it to the actual diffuse emission region's spectrum.
6/29/22: Tried to both create the background and soft-proton contamination images of the diffuse emission and run a full analysis on the background region. Because my computer accidentally turned off while the main extended analysis step (mos-spectra) was running, I had to redo the process a few times, which took up a large amount of time. I also only later realized that I was accidentally analyzing the entire XMM field of view instead of a specific region (which was a goal of mine for tomorrow anyhow). Had an odd naming issue after adapt ran, where the file comes out as 100-**** instead of 100-10000.fits, so I reran this quite lengthy process multiple times. I'm still not sure what caused the issue, but since the image still opens in ds9, I moved on for the time being.
6/30/22-7/1/22: Redid the XMM extended analysis for more regions to make sure the results were consistent with what I expect (a diffuse emission). Because some processes have long run times, this took quite a while. I ran the process once more for the region of the suspected diffuse emission, and for a few different background regions to compare against the diffuse source. In the meantime, I ran XSPEC on each region and tried to apply a proper fit. I ran into an odd issue where MOS1 had significantly more counts than MOS2 (roughly twice as many), which made the constant factor very bad (usually below 0.5). This issue still hasn't been resolved; I spent most of Friday trying to pinpoint plausible causes. I believe it might come from the initial mos-spectra command (if not from the initial setup of my data), since there was no issue with the counts when I used the data for the normal point-source analysis. However, since the work I'm doing on this source now is mainly preparation for when more data comes in, we decided to set the issue aside and begin SED modeling next week.
7/5/22: Had the VERITAS and NuSTAR team meetings. Talked with Jooyun about SED modeling and how to get started with Naima. Began reading the official Naima documentation and learning about the different models and the terminology used to plot SEDs.
7/6/22: Went through all the models and took notes on the main ones and the parameters they need. Reproduced the previous power-law SED model for my source made by Sarah (the person who worked on this source before me) and retyped everything line by line to get a better understanding of exactly how SED modeling works and what I was doing. Still have some questions about certain commands, such as how to find the amplitude in 1/eV units and how the "for __ in zip" style loops work. Goal for tomorrow is to play around with the parameters and see how different values alter the SED.
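For my own reference, the "for __ in zip" pattern just walks through several lists in lockstep; a minimal sketch with made-up energy and flux values:

```python
# zip pairs up entries from several lists so one loop can step through
# corresponding values together. All numbers here are placeholders, not
# real data from the SED scripts.
energies = [1.0, 2.0, 5.0]            # keV, made-up values
fluxes = [3.2e-12, 1.8e-12, 6.5e-13]  # erg/cm^2/s, made-up values

for energy, flux in zip(energies, fluxes):
    print(f"E = {energy} keV -> flux = {flux:.2e}")
```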
7/7/22: Found out that, for the most part, the amplitude is just guessed and varied manually until a good fit is found. I used XSPEC to extract the absorbed and unabsorbed flux values from the data after fitting a standard con*pow*tbabs model, and used one of Sarah's old correction codes to approximate the data before absorption in order to fit the PowerLaw to the data. For some reason, some points had very high flux errors, so, assuming they were just bad data where the background was close to dominating (or dominated), I removed those points so I could at least fit the SED to the majority of them. Earlier in the day I also played around with a plain PowerLaw to see how the shape changed as different parameters varied. Found that, most likely, a power law by itself will not fit the data: the magnetic field required to match the synchrotron component was below 1 microgauss, which is unfeasible as it's below the ISM value, so I will probably have to try different broken power-law models. The goal is to get a good model fit sometime tomorrow or Monday. I also want to check that the methodology used to convert the .dat files to .ecsv files was correct (since I just reused the ones Sarah had made for the other telescopes).
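The gist of the correction idea, as I understand it, is to scale the observed points by the ratio of unabsorbed to absorbed model flux from XSPEC. A minimal sketch with placeholder numbers (in reality the absorption is energy dependent, so a single ratio is only a rough approximation, and Sarah's code does more than this):

```python
# Hedged sketch of the flux correction: scale the absorbed data points by
# the ratio of unabsorbed to absorbed model flux reported by XSPEC.
# All flux values below are placeholders, not real measurements.
absorbed_model_flux = 2.0e-12    # erg/cm^2/s, from the XSPEC flux command
unabsorbed_model_flux = 3.0e-12  # erg/cm^2/s, with the tbabs factor removed

correction = unabsorbed_model_flux / absorbed_model_flux

observed_points = [1.0e-12, 8.0e-13, 5.0e-13]  # placeholder binned fluxes
corrected_points = [f * correction for f in observed_points]
```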
7/8/22: Did more SED fitting, specifically for the power law. Most of my time went into redoing the XMM data extraction properly, because a lot of the data had really bad errors. Realized that my correction was wrong: I was refitting after setting nH to 0 (to represent the unabsorbed flux), which changes the parameters and therefore makes the correction method ineffective. I also learned that when using wdata to extract the binned data from the spectrum, the "NO NO NO NO NO" line marks the dividing point between the MOS1 and MOS2 data. Since I hadn't accounted for this in what I did yesterday, I had to redo the process and replot the data. Next I plan to apply the exponential cutoff power-law model and then learn to use MCMC to check whether a model fits the data well.
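So I don't make the same mistake again: the dumped wdata text can be split at the "NO NO NO NO NO" line to separate the detectors. A minimal sketch (the data lines below are made up, and real files have more columns):

```python
# Split wdata-style dumped text into per-detector blocks. In the dump, a
# line consisting only of "NO" tokens separates the MOS1 block from the
# MOS2 block. Example lines are made up.
lines = [
    "1.0 0.1 3.2e-3 4.0e-4",
    "2.0 0.1 1.1e-3 2.0e-4",
    "NO NO NO NO NO",
    "1.0 0.1 2.9e-3 3.5e-4",
]

blocks, current = [], []
for line in lines:
    tokens = line.split()
    if tokens and set(tokens) == {"NO"}:  # divider line: close this block
        blocks.append(current)
        current = []
    else:
        current.append([float(v) for v in tokens])
blocks.append(current)

mos1, mos2 = blocks  # first block is MOS1, second is MOS2
```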
7/11/22: BNL trip
7/12/22: Had the NuSTAR team meeting and worked on more SED model fitting while learning the MCMC process. I wrote and debugged a code and incorporated the G32.64 data into it. Since it took a long time to run, I had to find a way to run the Python code so that it would keep running even after the computer turned off. By the end of the day I got the code running inside a screen -S session, but something went wrong with how I used it, and no output file appeared the next day.
7/13/22: Switched to running the Python code with nohup in the terminal, and debugged a bit more, since the nohup.out file let me see the runtime errors occurring in the code. While the code ran, I read up more on pyplot plotting and the MCMC model, and learned more about the MCMC process by studying Markov chains, the Monte Carlo method, and accept-reject sampling.
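A note on accept-reject sampling, since it is one of the building blocks behind MCMC: draw candidates from an easy envelope distribution and keep each with probability p(x)/(M·q(x)). A toy stdlib sketch for the target density p(x) = 2x on [0, 1] under a uniform envelope (not the Naima sampler, just the idea):

```python
import random

# Toy accept-reject sampler: the target density is p(x) = 2x on [0, 1],
# the envelope q is uniform on [0, 1], and M = 2 bounds p from above, so
# each candidate is accepted with probability p(x) / (M * q(x)) = x.
def accept_reject(n_samples, seed=0):
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        x = rng.random()          # candidate from the uniform envelope
        u = rng.random()          # uniform draw for the accept test
        if u < (2.0 * x) / 2.0:   # accept with probability p(x)/M
            samples.append(x)
    return samples

samples = accept_reject(1000)
```

The accepted samples follow p(x) = 2x, so their mean should be near the distribution's expected value of 2/3.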
7/14/22: Ran another (shorter) MCMC while waiting for the longer one to finish, in hopes of getting at least one to output the desired files and results before the weekend. Had the VERITAS team meeting and did some more SED fitting using the results from the first few completed MCMC runs. In the meantime, I worked on my VERITAS/CTA presentation and the SRI poster.
7/15/22: Finalized MCMC results and ran another short trial with slightly different parameters for Eemax. Got the image/output files that I had hoped to get.
7/18/22: Read Yosi's paper on his PWN evolution SED model and began trying out different runs of it to get a feel for it. Worked on my presentation for the NuSTAR team meeting.
7/19/22: Started to play around with Yosi's model. Since each run took 10+ minutes, progress was slow when changing just one parameter at a time. The main goal is to find the parameters that let the model fit the data.
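The one-parameter-at-a-time scan amounts to stepping a parameter over a grid, scoring the fit at each step, and keeping the best value. A toy sketch of that loop; the model function and data points below are stand-ins, not Yosi's code:

```python
# Toy parameter scan: step one model parameter over a grid, evaluate a
# chi-square-like score at each step, and keep the best value. The
# stand-in model is a straight line; each call replaces one slow model run.
data = [(1.0, 2.1), (2.0, 4.2), (3.0, 5.8)]  # (x, y) placeholder points

def model(x, slope):
    return slope * x  # stand-in for one run of the real model

best = None
for slope in [1.5, 1.8, 2.0, 2.2]:
    chi2 = sum((y - model(x, slope)) ** 2 for x, y in data)
    if best is None or chi2 < best[1]:
        best = (slope, chi2)
```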
7/20/22: Read Yosi's paper again and kept working on getting the model to fit. Started to learn about supernovae, the SNR envelope, and reverse shocks. Downloaded the archived Geminga NuSTAR data in order to start a new project with Jooyun to determine whether Geminga is a potential TeV halo.
7/21/22: Learned how to use the NuSTAR tools to process the Geminga data and continued SED modeling. Worked on my symposium presentation while the codes were running.
7/22/22: Symposium talk. Did more SED modeling.
7/25/22: Day-long NuSTAR meeting with collaborator Shuo Zhang. Did more SED modeling with Yosi's model.
7/26/22: While waiting for more runs of Yosi's model to finish, I started a new project on Geminga. Did a point-source analysis on one of the NuSTAR data sets, the one with the longest exposure time, and started learning about timing analysis.
7/27/22: Did more work on the Geminga analysis. Extracted source and background regions for all 15 observations and edited the run_nuproducts.py code to handle multiple observation IDs and to match my folder layout.
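The gist of the run_nuproducts.py edit is a loop over observation IDs that builds one nuproducts command per obsid and focal plane module. A rough sketch; the observation IDs, directory layout, and region file names below are hypothetical, not my actual setup:

```python
# Sketch of looping nuproducts over multiple NuSTAR observations: build
# one command per observation ID and module (FPMA/FPMB). The obsids,
# folder layout, and region file names are hypothetical placeholders.
obsids = ["30001029002", "30001029004"]  # placeholder observation IDs
commands = []
for obsid in obsids:
    for module in ("A", "B"):
        commands.append(
            "nuproducts "
            f"indir=./{obsid}/event_cl "
            f"instrument=FPM{module} "
            f"steminputs=nu{obsid} "
            f"outdir=./{obsid}/products "
            f"srcregfile=./{obsid}/src{module}.reg "
            f"bkgregfile=./{obsid}/bkg{module}.reg"
        )
```

Each command string could then be handed to the shell (e.g. via subprocess) once the real paths are filled in.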
7/28/22: Full-day PWN meeting of the NuSTAR group with collaborators. Ran more modeling, and ran the Python code to filter all the data so it would be ready for grouppha and then for spectral modeling in XSPEC. Later I plan to fit all the data together to verify the results from Mori et al.'s Geminga paper.
7/29/22: Worked on the Geminga source; did a point-source spectral analysis for Geminga.
8/1/22: Having finished the cleanup, compilation, and spectral analysis of the Geminga point source, I plan to begin looking for an X-ray halo. Also got the MCMC for Yosi's model working, so I started playing around with it and running more runs on G32's SED. Had NuSTAR and VERITAS group meetings.
8/2/22: Had to remake and re-edit many of the codes I was using to run Yosi's model because the remote computer I was using was unplugged. Ran into a puzzling issue where the results kept changing each time the file was read in, even though nothing was deliberately overwriting it.
8/3/22: SRI poster meeting. Did more fitting. Realized that the MCMC I was running was overwriting the output files.
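One simple guard against the overwrite problem is to tag each run's output files with a unique run identifier instead of reusing one hardcoded name. A small sketch; the file name pattern and prefix are hypothetical, not the actual MCMC code:

```python
import time

# Tag every run's output files with a unique run ID so that a new MCMC
# run can never clobber an earlier one. The prefix and naming pattern
# here are placeholders, not the real code's file names.
def output_name(prefix, run_id=None):
    if run_id is None:
        # default: timestamp of when the run started
        run_id = time.strftime("%Y%m%d-%H%M%S")
    return f"{prefix}_{run_id}.dat"

chain_file = output_name("g32_chain", run_id="run01")
```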
8/4/22: Another meeting with Joseph Gelfand. Talked with his postdoc about fixing the MCMC (which had a lot of hardcoded values).
8/5/22: Worked more on the MCMC code to automate more steps and make it more user-friendly.