Difference: ClaudiaTang (8 vs. 9)

Revision 9: 2022-07-14 - ClaudiaTang

Daily Journals

Starting Monday, June 27th, 2022:

6/27/22: Attempted to apply various spectral fits to my current source's data using xspec. I was unsuccessful, and realized after the group meeting that I had used the wrong region (due to complications when converting sky coordinates to detector coordinates). I was unable to fix the problem immediately because the XMM extended analysis is a very lengthy process, but I downloaded the data and did the base work to set myself up for further analysis the next day.

6/28/22: I reran the entire XMM extended analysis process on my data for the region that is suspected to have diffuse emission. The fit was much better than before and made more sense. According to Kaya, the spectrum looked promising; she told me to run the same extended analysis on a random background region and apply the same spectral fit, to later compare against the actual diffuse emission region's spectrum.

6/29/22: Tried to both create the background and soft-proton contamination image of the diffuse emission and run a full analysis on the background region. Because my computer accidentally turned off while the main extended analysis process (mos-spectra) was running, I had to redo the process a few times, which took up a large amount of time. I also only later realized that I had accidentally been analyzing the entire XMM field of view instead of a specific region (which was a goal of mine for tomorrow anyhow). I had an odd naming issue after adapt ran, where the file came out as 100-**** instead of 100-10000.fits, so I reran this quite lengthy process multiple times. I'm still not sure what caused the issue, but since the image still opens in ds9, I moved on for the time being.

6/30/22-7/1/22: Tried to redo the XMM extended analysis for more regions to make sure I got results consistent with what I expect (diffuse emission). Because of the long run times of some processes, this spanned quite a while. I ran the process once more for the region of the suspected diffuse emission, and also for a few different background regions to compare against the diffuse source. In the meantime, I ran xspec on each of the regions and tried to apply a proper fit. I ran into an odd issue where MOS1 seemed to have significantly more counts than MOS2 (roughly 2x), which made the constant factor very bad (usually below 0.5). The issue still hasn't been resolved, and I spent most of Friday trying to pinpoint plausible causes. I believe it might come from the initial mos-spectra command (if not from the initial set-up of my data), as there was no issue with the counts when I used the same data for the normal point-source analysis. However, since the work I'm doing for the source now is mainly preparation for when more data comes in, we decided to set this issue aside and begin working on SED modeling next week.

7/5/22: Had the VERITAS and NuSTAR team meetings. Talked with Jooyun about SED modeling and how to get started with understanding Naima. Began reading the official Naima documentation and learning about the different models and the terminology used to plot SEDs.
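As a minimal sketch of the terminology those models share (plain Python for illustration, not Naima's actual API; all names and numbers here are made up): a power-law particle spectrum is set by an amplitude, a spectral index, and a reference energy, and the SED is the E^2 dN/dE form of it.

```python
def powerlaw_sed(E_TeV, amplitude, alpha, e0_TeV=1.0):
    """E^2 dN/dE for dN/dE = amplitude * (E/e0)**(-alpha) (arbitrary units)."""
    dNdE = amplitude * (E_TeV / e0_TeV) ** (-alpha)
    return E_TeV ** 2 * dNdE

# With index alpha = 2, E^2 dN/dE is flat: every energy returns the amplitude
sed_points = [powerlaw_sed(E, amplitude=1e-12, alpha=2.0) for E in (0.1, 1.0, 10.0)]
```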

7/6/22: Went through all the models and took notes on the main ones and the parameters they need. Reproduced the previous power-law SED model for my source made by Sarah (the person who worked on this source before me), retyping everything line by line to better understand exactly how SED modeling works and what I was doing. I still have some questions about certain commands, such as how to find the amplitude in 1/eV units and how the "for __ in zip" style loops work. The goal is to play around with the parameters tomorrow and see how they alter the SED.
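For the record, the "for __ in zip" construct just walks several sequences in lockstep, one item from each per iteration; a tiny sketch with made-up data, roughly how it might pair datasets with labels inside a plotting loop:

```python
energies = [[0.5, 1.0, 2.0], [0.6, 1.2]]   # one list per instrument (made-up values)
labels = ["MOS1", "MOS2"]

pairs = []
for e_vals, label in zip(energies, labels):
    # each pass unpacks one element from `energies` and one from `labels`
    pairs.append((label, len(e_vals)))
```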

7/7/22: Found out that, for the most part, the amplitude is just guessed and varied manually until a good fit is found. I used xspec to extract the absorbed and unabsorbed flux values from the data after fitting a standard con*pow*tbabs model, and used one of Sarah's old correction codes to approximate the data before absorption, in order to fit the power law to the data. For some reason, some points had very large flux errors; assuming it was just bad data where the background was close to dominating (or dominated), I removed those points so I could at least fit the SED to the majority of them. Earlier in the day I also played around with a plain power law to see how the shape changes as different parameters vary. I found that a power law by itself most likely would not fit the data, because the magnetic field required to match the synchrotron component was below 1 microgauss, which is unfeasible since it's below the typical ISM field; so I will most likely have to try different broken power-law models. The goal is to get a good model fit sometime tomorrow or Monday. I also want to verify that the methodology used to convert the .dat files to ecsv files was correct (since I just reused the ones Sarah had made for the other telescopes).
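The correction code amounts to scaling each data point by the ratio of the model's unabsorbed to absorbed flux; a sketch of that arithmetic with placeholder numbers (these are not my actual flux values):

```python
# Fluxes as reported by fitting in xspec (placeholder numbers)
absorbed_flux = 2.0e-13     # erg/cm^2/s with tbabs absorption included
unabsorbed_flux = 5.0e-13   # erg/cm^2/s with the absorption removed

correction = unabsorbed_flux / absorbed_flux

# Apply the same multiplicative correction to each binned (absorbed) data point
binned = [1.0e-13, 1.4e-13, 0.8e-13]
corrected = [f * correction for f in binned]
```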

7/8/22: Did more SED fitting today, specifically for the power law. Most of my time went into redoing the XMM data extraction properly, because a lot of the data had very large errors. I realized my correction was wrong because I was refitting after setting nH to 0 (to represent the unabsorbed flux), which changes the parameters and therefore makes the correction method ineffective. I also learned that when using wdata to extract the binned data from the spectrum, the "NO NO NO NO NO" line marks the divide between the MOS1 and MOS2 data. Since I hadn't accounted for this yesterday, I had to redo the process and replot the data. Next I plan to apply the exponential power-law model and then learn to use MCMC to check whether a model fits the data well.
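A sketch of how such a wdata text file could be split back into per-instrument blocks on the "NO NO NO NO NO" divider (the sample rows are invented, and the real column layout depends on the plot settings):

```python
def split_wdata(lines):
    """Split wdata-style rows into groups at each 'NO NO ...' divider line."""
    groups, current = [], []
    for line in lines:
        tokens = line.split()
        if tokens and all(tok == "NO" for tok in tokens):
            # divider reached: close out the current group and start a new one
            if current:
                groups.append(current)
            current = []
        elif tokens:
            current.append([float(tok) for tok in tokens])
    if current:
        groups.append(current)
    return groups

sample = [
    "1.0 0.1 3.2e-2 1e-3",
    "2.0 0.1 1.1e-2 1e-3",
    "NO NO NO NO NO",
    "1.0 0.1 2.9e-2 1e-3",
]
mos1, mos2 = split_wdata(sample)
```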

7/11/22: BNL trip

7/12/22: Had the NuSTAR team meeting and worked on more SED model fitting while trying to learn the MCMC process. I was able to write and debug a code and incorporate the G32.64 data into it. Since it took a long time to run, I had to find a way to run the Python code so that it would keep running even after the computer turned off. By the end of the day I got the code to run, but because I couldn't get the screen -S method working properly (even though I ran the code inside the screen session), no output file had been produced by the next day.

7/13/22: Started using the nohup method in the terminal to run the Python code, and debugged a bit more, since with the nohup.out file I could see the runtime errors occurring in the code. While I waited for the code to run, I read up more on pyplot plotting and the MCMC model, along with learning more about the MCMC process by studying Markov chains, the Monte Carlo method, and accept-reject sampling.
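The accept-reject idea at the heart of Metropolis-style MCMC: propose a random step and accept it with probability min(1, p(new)/p(old)); a toy sketch targeting a standard Gaussian (just the concept, not Naima's actual sampler):

```python
import math
import random

def metropolis(log_p, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: accept a proposal with prob min(1, p'/p)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        # accept-reject step, done with log-densities to avoid over/underflow
        if math.log(rng.random()) < log_p(x_new) - log_p(x):
            x = x_new
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 (up to a constant)
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
```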

 
