
Source Code for Module ivs.sed.builder

   1  # -*- coding: utf-8 -*- 
   2  """ 
   3  SED builder program. 
   4   
   5  To construct an SED, use the SED class. The functions defined in this module 
   6  are mainly convenience functions specifically for that class, but can be used 
   7  outside of the SED class if you know what you're doing. 
   8   
   9  Table of contents: 
  10   
  11      1. Retrieving and plotting photometry of a target 
  12      2. Where/what is my target? 
  13      3. SED fitting using a grid based approach 
  14          - Binary star ### W.I.P ###
  15          - Saving SED fits 
  16          - Loading SED fits ### BROKEN ### 
  17      4. Accessing the best fitting full SED model 
  18      5. Radii, distances and luminosities 
  19          - Relations between quantities 
  20          - Parallaxes 
  21          - Seismic constraints 
  22          - Reddening constraints 
  23          - Evolutionary constraints 
  24   
  25  Section 1. Retrieving and plotting photometry of a target 
  26  ========================================================= 
  27   
  28  >>> mysed = SED('HD180642') 
  29  >>> mysed.get_photometry() 
  30  >>> mysed.plot_data() 
  31   
  32  and call Pylab's C{show} function to display the result on screen:
  33   
  34  ]]include figure]]ivs_sed_builder_example_photometry.png] 
  35   
  36  Catch C{IndexError} and C{TypeError} in case no photometry is found.
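
A minimal sketch of such a guard (assuming, as stated above, that these are the
exceptions C{get_photometry} raises when nothing is found):

>>> try:
...     mysed.get_photometry()
... except (IndexError, TypeError):
...     print("no photometry found for this target")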
  37   
  38  You can give a B{search radius} to C{get_photometry} via the keyword C{radius}. 
  39  The default value is 10 arcseconds for stars dimmer than 6th magnitude, and 60 
  40  arcseconds for brighter stars. The best value of course depends on the density 
  41  of the field. 
  42   
  43  >>> mysed.get_photometry(radius=5.) 
  44   
  45  If your star's name is not recognised by any catalog, you can give coordinates 
  46  to look for photometry. In that case, the ID of the star given in the C{SED} 
  47  command will not be used to search photometry (only to save the phot file): 
  48   
  49  >>> mysed.get_photometry(ra=289.31167983,dec=1.05941685) 
  50   
  51  Note that C{ra} and C{dec} are given in B{degrees}. 
  52   
  53  It is best to B{switch on the logger} (see L{ivs.aux.loggers.get_basic_logger}) to follow the progress:
  54  access to catalogs can sometimes take a long time (the GATOR sources are
  55  typically slow). If one of the C{gator}, C{vizier} or C{gcpd} services is impossibly slow
  56  or the site is down, you can B{include/exclude these sources} via the keywords
  57  C{include} or C{exclude}, which take a list of strings (choose from C{gator},
  58  C{vizier} and/or C{gcpd}). For VizieR, there is an extra option to change to
  59  another mirror site via 
  60   
  61  >>> vizier.change_mirror() 
  62   
  63  The L{vizier.change_mirror} function cycles through all the mirrors continuously, 
  64  so sooner or later you will end up with the default one and repeat the cycle. 
  65   
  66  >>> mysed.get_photometry(exclude=['gator']) 
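
To switch on basic logging as mentioned above, something along these lines
should work (a sketch; the exact arguments accepted by C{get_basic_logger} may
differ, see L{ivs.aux.loggers}):

>>> from ivs.aux import loggers
>>> logger = loggers.get_basic_logger()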
  67   
  68  The results will be written to the file B{HD180642.phot}. An example content is:: 
  69   
  70    #  meas    e_meas flag unit photband          source                        _r   _RAJ2000    _DEJ2000   cwave       cmeas     e_cmeas cunit       color include 
  71    #float64   float64 |S20 |S30 |S30              |S50                     float64    float64     float64 float64     float64     float64 |S50         bool    bool 
  72      7.823      0.02 nan  mag  WISE.W3           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05  123337 4.83862e-17 8.91306e-19 erg/s/cm2/AA     0       1 
  73      7.744     0.179 nan  mag  WISE.W4           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05  222532 4.06562e-18 6.70278e-19 erg/s/cm2/AA     0       1 
  74       7.77     0.023 nan  mag  WISE.W1           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05 33791.9   6.378e-15  1.3511e-16 erg/s/cm2/AA     0       1 
  75      7.803      0.02 nan  mag  WISE.W2           wise_prelim_p3as_psd    0.112931  2.667e-05   2.005e-05   46293 1.82691e-15 3.36529e-17 erg/s/cm2/AA     0       1 
  76      8.505     0.016 nan  mag  TYCHO2.BT         I/259/tyc2                 0.042   7.17e-06    1.15e-06  4204.4 2.76882e-12 4.08029e-14 erg/s/cm2/AA     0       1 
  77      8.296     0.013 nan  mag  TYCHO2.VT         I/259/tyc2                 0.042   7.17e-06    1.15e-06 5321.86 1.93604e-12 2.31811e-14 erg/s/cm2/AA     0       1 
  78       8.27       nan nan  mag  JOHNSON.V         II/168/ubvmeans             0.01    1.7e-07    3.15e-06 5504.67 1.80578e-12 1.80578e-13 erg/s/cm2/AA     0       1 
  79       0.22       nan nan  mag  JOHNSON.B-V       II/168/ubvmeans             0.01    1.7e-07    3.15e-06     nan     1.40749    0.140749 flux_ratio      1       0 
  80      -0.66       nan nan  mag  JOHNSON.U-B       II/168/ubvmeans             0.01    1.7e-07    3.15e-06     nan     1.21491    0.121491 flux_ratio      1       0 
  81       8.49       nan nan  mag  JOHNSON.B         II/168/ubvmeans             0.01    1.7e-07    3.15e-06 4448.06 2.54162e-12 2.54162e-13 erg/s/cm2/AA     0       1 
  82       7.83       nan nan  mag  JOHNSON.U         II/168/ubvmeans             0.01    1.7e-07    3.15e-06 3641.75 3.08783e-12 3.08783e-13 erg/s/cm2/AA     0       1 
  83      2.601       nan nan  mag  STROMGREN.HBN-HBW J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan     1.66181    0.166181 flux_ratio      1       0 
  84     -0.043       nan nan  mag  STROMGREN.M1      J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan    0.961281   0.0961281 flux_ratio      1       0 
  85      8.221       nan nan  mag  STROMGREN.Y       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685 5477.32 1.88222e-12 1.88222e-13 erg/s/cm2/AA     0       1 
  86      0.009       nan nan  mag  STROMGREN.C1      J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan     0.93125    0.093125 flux_ratio      1       0 
  87      0.238       nan nan  mag  STROMGREN.B-Y     J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685     nan     1.28058    0.128058 flux_ratio      1       0 
  88      8.459       nan nan  mag  STROMGREN.B       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685  4671.2 2.41033e-12 2.41033e-13 erg/s/cm2/AA     0       1 
  89      8.654       nan nan  mag  STROMGREN.V       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685 4108.07 2.96712e-12 2.96712e-13 erg/s/cm2/AA     0       1 
  90      8.858       nan nan  mag  STROMGREN.U       J/A+A/528/A148/tables       0.54 -5.983e-05 -0.00013685 3462.92 3.40141e-12 3.40141e-13 erg/s/cm2/AA     0       1 
  91       7.82      0.01 nan  mag  JOHNSON.J         J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06 12487.8 2.36496e-13 2.36496e-15 erg/s/cm2/AA     0       1 
  92       7.79      0.01 nan  mag  JOHNSON.K         J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06 21951.2 3.24868e-14 3.24868e-16 erg/s/cm2/AA     0       1 
  93       7.83      0.04 nan  mag  JOHNSON.H         J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06 16464.4 8.64659e-14 3.18552e-15 erg/s/cm2/AA     0       1 
  94     8.3451    0.0065 nan  mag  HIPPARCOS.HP      I/239/hip_main             0.036   2.17e-06    1.15e-06 5275.11 1.91003e-12 1.91003e-14 erg/s/cm2/AA     0       1 
  95      8.525     0.011 nan  mag  TYCHO2.BT         I/239/hip_main             0.036   2.17e-06    1.15e-06  4204.4 2.71829e-12   2.754e-14 erg/s/cm2/AA     0       1 
  96      8.309     0.012 nan  mag  TYCHO2.VT         I/239/hip_main             0.036   2.17e-06    1.15e-06 5321.86   1.913e-12 2.11433e-14 erg/s/cm2/AA     0       1 
  97       8.02     0.057 nan  mag  COUSINS.I         II/271A/patch2               0.2 -4.983e-05   2.315e-05 7884.05 7.51152e-13 3.94347e-14 erg/s/cm2/AA     0       1 
  98      8.287     0.056 nan  mag  JOHNSON.V         II/271A/patch2               0.2 -4.983e-05   2.315e-05 5504.67 1.77773e-12 9.16914e-14 erg/s/cm2/AA     0       1 
  99       8.47       nan nan  mag  USNOB1.B1         I/284/out                  0.026   7.17e-06     1.5e-07 4448.06 2.57935e-12 7.73805e-13 erg/s/cm2/AA     0       1 
 100       8.19       nan nan  mag  USNOB1.R1         I/284/out                  0.026   7.17e-06     1.5e-07 6939.52 1.02601e-12 3.07803e-13 erg/s/cm2/AA     0       1 
 101      8.491     0.012 nan  mag  JOHNSON.B         I/280B/ascc                0.036   2.17e-06    2.15e-06 4448.06 2.53928e-12 2.80651e-14 erg/s/cm2/AA     0       1 
 102      8.274     0.013 nan  mag  JOHNSON.V         I/280B/ascc                0.036   2.17e-06    2.15e-06 5504.67 1.79914e-12 2.15419e-14 erg/s/cm2/AA     0       1 
 103      7.816     0.023 nan  mag  2MASS.J           II/246/out                 0.118 -2.783e-05   1.715e-05 12412.1 2.28049e-13 4.83093e-15 erg/s/cm2/AA     0       1 
 104      7.792     0.021 nan  mag  2MASS.KS          II/246/out                 0.118 -2.783e-05   1.715e-05 21909.2 3.26974e-14 6.32423e-16 erg/s/cm2/AA     0       1 
 105      7.825     0.042 nan  mag  2MASS.H           II/246/out                 0.118 -2.783e-05   1.715e-05 16497.1 8.48652e-14 3.28288e-15 erg/s/cm2/AA     0       1 
 106      8.272     0.017 nan  mag  GENEVA.V          GCPD                         nan        nan         nan  5482.6 1.88047e-12 2.94435e-14 erg/s/cm2/AA     0       1 
 107        1.8     0.004 nan  mag  GENEVA.G-B        GCPD                         nan        nan         nan     nan    0.669837  0.00669837 flux_ratio      1       0 
 108      1.384     0.004 nan  mag  GENEVA.V1-B       GCPD                         nan        nan         nan     nan    0.749504  0.00749504 flux_ratio      1       0 
 109       0.85     0.004 nan  mag  GENEVA.B1-B       GCPD                         nan        nan         nan     nan     1.05773   0.0105773 flux_ratio      1       0 
 110      1.517     0.004 nan  mag  GENEVA.B2-B       GCPD                         nan        nan         nan     nan    0.946289  0.00946289 flux_ratio      1       0 
 111      0.668     0.004 nan  mag  GENEVA.V-B        GCPD                         nan        nan         nan     nan    0.726008  0.00726008 flux_ratio      1       0 
 112      0.599     0.004 nan  mag  GENEVA.U-B        GCPD                         nan        nan         nan     nan     1.13913   0.0113913 flux_ratio      1       0 
 113      7.604 0.0174642 nan  mag  GENEVA.B          GCPD                         nan        nan         nan 4200.85 2.59014e-12 4.16629e-14 erg/s/cm2/AA     0       1 
 114      9.404 0.0179165 nan  mag  GENEVA.G          GCPD                         nan        nan         nan 5765.89 1.73497e-12   2.863e-14 erg/s/cm2/AA     0       1 
 115      8.988 0.0179165 nan  mag  GENEVA.V1         GCPD                         nan        nan         nan 5395.63 1.94132e-12 3.20351e-14 erg/s/cm2/AA     0       1 
 116      8.454 0.0179165 nan  mag  GENEVA.B1         GCPD                         nan        nan         nan 4003.78 2.73968e-12 4.52092e-14 erg/s/cm2/AA     0       1 
 117      9.121 0.0179165 nan  mag  GENEVA.B2         GCPD                         nan        nan         nan 4477.56 2.45102e-12  4.0446e-14 erg/s/cm2/AA     0       1 
 118      8.203 0.0179165 nan  mag  GENEVA.U          GCPD                         nan        nan         nan 3421.62 2.95052e-12 4.86885e-14 erg/s/cm2/AA     0       1 
 119       8.27       nan nan  mag  JOHNSON.V         GCPD                         nan        nan         nan 5504.67 1.80578e-12 1.80578e-13 erg/s/cm2/AA     0       1 
 120       0.22       nan nan  mag  JOHNSON.B-V       GCPD                         nan        nan         nan     nan     1.40749    0.140749 flux_ratio      1       0 
 121      -0.66       nan nan  mag  JOHNSON.U-B       GCPD                         nan        nan         nan     nan     1.21491    0.121491 flux_ratio      1       0 
 122       8.49       nan nan  mag  JOHNSON.B         GCPD                         nan        nan         nan 4448.06 2.54162e-12 2.54162e-13 erg/s/cm2/AA     0       1 
 123       7.83       nan nan  mag  JOHNSON.U         GCPD                         nan        nan         nan 3641.75 3.08783e-12 3.08783e-13 erg/s/cm2/AA     0       1 
 124     -0.035       nan nan  mag  STROMGREN.M1      GCPD                         nan        nan         nan     nan    0.954224   0.0954224 flux_ratio      1       0 
 125      0.031       nan nan  mag  STROMGREN.C1      GCPD                         nan        nan         nan     nan     0.91257    0.091257 flux_ratio      1       0 
 126      0.259       nan nan  mag  STROMGREN.B-Y     GCPD                         nan        nan         nan     nan     1.25605    0.125605 flux_ratio      1       0 
 127     -0.009 0.0478853 nan  mag  2MASS.J-H         II/246/out                 0.118 -2.783e-05   1.715e-05     nan     2.68719    0.118516 flux_ratio      1       0 
 128     -0.033 0.0469574 nan  mag  2MASS.KS-H        II/246/out                 0.118 -2.783e-05   1.715e-05     nan    0.385286   0.0166634 flux_ratio      1       0 
 129      -0.01 0.0412311 nan  mag  JOHNSON.J-H       J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06     nan     2.73514    0.103867 flux_ratio      1       0 
 130      -0.04 0.0412311 nan  mag  JOHNSON.K-H       J/PASP/120/1128/catalog     0.02  1.017e-05    3.15e-06     nan    0.375718    0.014268 flux_ratio      1       0 
 131      0.209 0.0206155 nan  mag  TYCHO2.BT-VT      I/259/tyc2                 0.042   7.17e-06    1.15e-06     nan     1.43014    0.027155 flux_ratio      1       0 
 132   
 133  Once a .phot file is written and L{get_photometry} is called again for the same 
 134  target, the script will B{not retrieve the photometry from the internet again}, 
 135  but will use the contents of the file instead. The purpose is to minimize network
 136  traffic and maximize speed. If you want to refresh the search, simply delete
 137  the .phot file manually or set C{force=True} when calling L{get_photometry}.
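
For example, to force a fresh download:

>>> mysed.get_photometry(force=True)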
 138   
 139  The content of the .phot file is most easily read using the L{ivs.inout.ascii.read2recarray} 
 140  function. Be careful, as it contains both absolute fluxes and flux ratios.
 141   
 142  >>> data = ascii.read2recarray('HD180642.phot') 
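
Since the file mixes absolute fluxes and flux ratios (colors), you will often
want to separate them. The boolean C{color} column shown above can be used for
that (a sketch, assuming the columns are read in with the types given in the
file header):

>>> absolute = data[~data['color']]
>>> colors = data[data['color']]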
 143   
 144  Notice that a C{comments} column is also added to the C{.phot} files. It can
 145  contain a translation of some of the flags (e.g. upper limit, extended source),
 146  or simply additional remarks on variability etc. Not all catalogs have
 147  this feature implemented, so you are still responsible yourself for checking
 148  the quality of the photometry.
 149   
 150  The references to each source are given in the C{bibtex} column. Simply call 
 151   
 152  >>> mysed.save_bibtex() 
 153   
 154  to convert those bibcodes to a C{.bib} file. 
 155   
 156  Using L{SED.plot_MW_side} and L{SED.plot_MW_top}, you can make a picture of where 
 157  your star is located with respect to the Milky Way and the Sun. With L{SED.plot_finderchart}, 
 158  you can check the location of your photometry, and also see if proper motions 
 159  etc are available. 
 160   
 161  Section 2. Where/what is my target? 
 162  =================================== 
 163   
 164  To give you some visual information on the target, the following plotting 
 165  procedure might be of some help. 
 166   
 167  To check whether the downloaded photometry really belongs to the target,
 168  instead of some neighbouring star (don't forget to set C{radius} when looking 
 169  for photometry!), you can generate a finderchart with the location of the 
 170  downloaded photometry overplotted. On top of that, proper motion data is added 
 171  when available, as well as radial velocity data. When a distance is available, 
 172  the proper motion velocity will be converted to a true tangential velocity. 
 173   
 174  >>> p = pl.figure();mysed.plot_finderchart(window_size=1) 
 175   
 176  ]]include figure]]ivs_sed_builder_finderchart.png] 
 177   
 178  To know the location of your target with respect to the Milky Way (assuming your target is
 179  in the Milky Way), you can call
 180   
 181  >>> p = pl.figure();mysed.plot_MW_side() 
 182  >>> p = pl.figure();mysed.plot_MW_top() 
 183   
 184  ]]include figure]]ivs_sed_builder_MWside.png] 
 185   
 186  ]]include figure]]ivs_sed_builder_MWtop.png] 
 187   
 188  Section 3. SED fitting using a grid based approach 
 189  ================================================== 
 190   
 191  Subsection 3.1 Single star 
 192  -------------------------- 
 193   
 194  We make an SED of HD180642 by simply B{exploring a whole grid of Kurucz models} 
 195  (constructed via L{fit.generate_grid}, iterated over via L{fit.igrid_search} and 
 196  evaluated with L{fit.stat_chi2}). The model with the best parameters is picked 
 197  out and we make a full SED with those parameters. 
 198   
 199  >>> mysed = SED('HD180642') 
 200  >>> mysed.get_photometry() 
 201   
 202  Now we have collected B{all fluxes and colors}, but we do not want to use them 
 203  all: first, B{fluxes and colors are not independent}, so you probably want to use 
 204  either only absolute fluxes or only colors (plus perhaps one absolute flux per 
 205  system to compute the angular diameter) (see L{SED.set_photometry_scheme}). Second, 
 206  B{some photometry is better suited for fitting an SED than others}; typically IR 
 207  photometry does not add much to the fitting of massive stars, or it can be 
 208  contaminated with circumstellar material. Third, B{some photometry is not so 
 209  reliable}, i.e. the measurements can have large systematic uncertainties due to 
 210  e.g. calibration effects, which are typically not included in the error bars. 
 211   
 212  Currently, four standard schemes are implemented, which you can set via L{SED.set_photometry_scheme}: 
 213   
 214      1. C{absolute}: use only absolute values 
 215      2. C{colors}: use only colors (no angular diameter values calculated) 
 216      3. C{combo}: use all colors and one absolute value per photometric system 
 217      4. C{irfm}: (infrared flux method) use colors for passbands blueward of the
 218      infrared, and absolute values for systems in the infrared. The
 219      infrared is defined as wavelengths longer than 1 micron, but this can be
 220      customized with the keyword C{infrared=(value,unit)} in 
 221      L{SED.set_photometry_scheme}. 
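
For instance, to move the colour/absolute-flux boundary of the C{irfm} scheme
to 2.2 micron (a sketch; the exact unit string accepted by the C{infrared}
keyword is an assumption):

>>> mysed.set_photometry_scheme('irfm',infrared=(2.2,'micron'))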
 222   
 223  Here, we chose to use colors and one absolute flux per system, but exclude IR 
 224  photometry (wavelength range above 2.5 micron), and some systems and colors which 
 225  we know are not so trustworthy: 
 226   
 227  >>> mysed.set_photometry_scheme('combo') 
 228  >>> mysed.exclude(names=['STROMGREN.HBN-HBW','USNOB1','SDSS','DENIS','COUSINS','ANS','TD1'],wrange=(2.5e4,1e10)) 
 229   
 230  You can L{include}/L{exclude} photometry based on name, wavelength range, source and index,
 231  and select only absolute photometry or colors (L{include_abs},L{include_colors}).
 232  When working in interactive mode, the index in particular is useful. Print the
 233  current set of photometry to the screen with
 234   
 235  >>> print(photometry2str(mysed.master,color=True,index=True)) 
 236   
 237  and you will see the included photometry in green and the excluded photometry in red.
 238  Each line is preceded by an index; you can use these indices
 239  to select/deselect the photometry.
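
For example, after inspecting the printed list you can deselect individual
measurements by their index (the indices below are purely illustrative):

>>> mysed.exclude(indices=[3,7,12])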
 240   
 241  Speed up the fitting process by copying the model grids to the scratch disk 
 242   
 243  >>> model.copy2scratch(z='*') 
 244   
 245  Start the grid based fitting process and show some plots. We use 100000 randomly 
 246  distributed points over the grid: 
 247   
 248  >>> mysed.igrid_search(points=100000) 
 249   
 250  Delete the model grids from the scratch disk 
 251   
 252  >>> model.clean_scratch() 
 253   
 254  and make the plot 
 255   
 256  >>> p = pl.figure() 
 257  >>> p = pl.subplot(131);mysed.plot_sed() 
 258  >>> p = pl.subplot(132);mysed.plot_grid(limit=None) 
 259  >>> p = pl.subplot(133);mysed.plot_grid(x='ebv',y='z',limit=None) 
 260   
 261  ]]include figure]]ivs_sed_builder_example_fitting01.png] 
 262   
 263  The grid is a bit too coarse for our liking around the minimum, so we zoom in on 
 264  the results: 
 265   
 266  >>> teffrange = mysed.results['igrid_search']['CI']['teffL'],mysed.results['igrid_search']['CI']['teffU'] 
 267  >>> loggrange = mysed.results['igrid_search']['CI']['loggL'],mysed.results['igrid_search']['CI']['loggU'] 
 268  >>> ebvrange = mysed.results['igrid_search']['CI']['ebvL'],mysed.results['igrid_search']['CI']['ebvU'] 
 269  >>> mysed.igrid_search(points=100000,teffrange=teffrange,loggrange=loggrange,ebvrange=ebvrange) 
 270   
 271  and repeat the plot 
 272   
 273  >>> p = pl.figure() 
 274  >>> p = pl.subplot(131);mysed.plot_sed(plot_deredded=True) 
 275  >>> p = pl.subplot(132);mysed.plot_grid(limit=None) 
 276  >>> p = pl.subplot(133);mysed.plot_grid(x='ebv',y='z',limit=None) 
 277   
 278  ]]include figure]]ivs_sed_builder_example_fitting02.png] 
 279   
 280  You can automatically make plots of the following (most plotting functions take C{colors=True/False}
 281  as an argument, so you can make e.g. the 'color' SED and the 'absolute value' SED):
 282   
 283      1. the grid (see L{SED.plot_grid}) 
 284      2. the SED (see L{SED.plot_sed}) 
 285      3. the fit statistics (see L{SED.plot_chi2}) 
 286   
 287  To change the grid, load the L{ivs.sed.model} module and call 
 288  L{ivs.sed.model.set_defaults} with appropriate arguments. See that module for 
 289  conventions on the grid structure when adding custom grids. 
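
A sketch of such a call (the exact keywords accepted by C{set_defaults} are an
assumption here; check L{ivs.sed.model} for the real signature):

>>> model.set_defaults(grid='kurucz',z=0.0)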
 290   
 291  To add arrays manually, i.e. not from the predefined set of internet catalogs, 
 292  use the L{SED.add_photometry_fromarrays} function. 
 293   
 294  B{Warning}: Be careful when interpreting the Chi2 results. In order to always have a
 295  solution, the chi2 is rescaled so that the minimum equals 1, in case the
 296  probability of the best chi2 model is zero. The Chi2 rescaling factor I{f} mimics
 297  a rescaling of all error bars with a factor I{sqrt(f)}, and does not discriminate
 298  between systems (i.e., B{all} errors are blown up). If the error bars are
 299  underestimated, the rescaling factor may also be wrong, which means
 300  that the true probability region can be larger or smaller!
 301   
 302  Subsection 3.2 Binary star - ### W.I.P ### 
 303  ------------------------------------------ 
 304   
 305  The SED class can create SEDs for multiple stars as well. There are two options
 306  available: the multiple SED fit, which in theory can handle any number of stars,
 307  and the binary SED fit, which is for binaries only and uses the masses of both
 308  components to restrict the radii when combining two model SEDs.
 309   
 310  As an example we take the system PG1104+243, which consists of a subdwarf B star
 311  and a G2-type main-sequence star. The photometry from the standard catalogues that
 312  are built into this class is of too low quality, so we use photometry obtained from
 313  U{the subdwarf database<http://catserver.ing.iac.es/sddb/>}. 
 314   
 315  >>> mysed = SED('PG1104+243') 
 316   
 317  >>> meas, e_meas, units, photbands, source = ascii.read2array('pg1104+243_sddb.phot', dtype=str) 
 318  >>> meas = np.array(meas, dtype=float) 
 319  >>> e_meas = np.array(e_meas, dtype=float) 
 320  >>> mysed.add_photometry_fromarrays(meas, e_meas, units, photbands, source) 
 321   
 322  We use only the absolute fluxes 
 323   
 324  >>> mysed.set_photometry_scheme('abs') 
 325   
 326  For the main-sequence component we use Kurucz models with solar metallicity, and
 327  for the sdB component TMAP models. We copy the model grids to the scratch disk
 328  to speed up the process:
 329   
 330  >>> grid1 = dict(grid='kurucz',z=+0.0) 
 331  >>> grid2 = dict(grid='tmap') 
 332  >>> model.set_defaults_multiple(grid1,grid2) 
 333  >>> model.copy2scratch() 
 334   
 335  The actual fitting. The second fit starts from the 95% probability intervals of 
 336  the first fit. 
 337   
 338  >>> teff_ms = (5000,7000) 
 339  >>> teff_sdb = (25000,45000) 
 340  >>> logg_ms = (4.00,4.50) 
 341  >>> logg_sdb = (5.00,6.50) 
 342  >>> mysed.igrid_search(masses=(0.47,0.71), teffrange=(teff_ms,teff_sdb), loggrange=(logg_ms,logg_sdb), ebvrange=(0.00,0.02), zrange=(0,0), points=2000000, type='binary')
 343  >>> mysed.igrid_search(masses=(0.47,0.71), points=2000000, type='binary')
 344   
 345  Delete the used models from the scratch disk 
 346   
 347  >>> model.clean_scratch() 
 348   
 349  Plot the results 
 350   
 351  >>> p = pl.figure() 
 352  >>> p = pl.subplot(131); mysed.plot_sed(plot_deredded=False) 
 353  >>> p = pl.subplot(132); mysed.plot_grid(x='teff', y='logg', limit=0.95) 
 354  >>> p = pl.subplot(133); mysed.plot_grid(x='teff-2', y='logg-2', limit=0.95) 
 355   
 356  ]]include figure]]ivs_sed_builder_example_fittingPG1104+243.png] 
 357   
 358  Subsection 3.3 Saving SED fits 
 359  ------------------------------ 
 360   
 361  You can save all the data to a multi-extension FITS file via 
 362   
 363  >>> mysed.save_fits() 
 364   
 365  This FITS file then contains all B{measurements} (it includes the .phot file), 
 366  the B{resulting SED}, the B{confidence intervals of all parameters} and also the 
 367  B{whole fitted grid}: in the above case, the extensions of the FITS file contain 
 368  the following information (we print part of each header):: 
 369   
 370      EXTNAME = 'DATA    ' 
 371      XTENSION= 'BINTABLE'           / binary table extension 
 372      BITPIX  =                    8 / array data type 
 373      NAXIS   =                    2 / number of array dimensions 
 374      NAXIS1  =                  270 / length of dimension 1 
 375      NAXIS2  =                   67 / length of dimension 2 
 376      TTYPE1  = 'meas    ' 
 377      TTYPE2  = 'e_meas  ' 
 378      TTYPE3  = 'flag    ' 
 379      TTYPE4  = 'unit    ' 
 380      TTYPE5  = 'photband' 
 381      TTYPE6  = 'source  ' 
 382      TTYPE7  = '_r      ' 
 383      TTYPE8  = '_RAJ2000' 
 384      TTYPE9  = '_DEJ2000' 
 385      TTYPE10 = 'cwave   ' 
 386      TTYPE11 = 'cmeas   ' 
 387      TTYPE12 = 'e_cmeas ' 
 388      TTYPE13 = 'cunit   ' 
 389      TTYPE14 = 'color   ' 
 390      TTYPE15 = 'include ' 
 391      TTYPE16 = 'synflux ' 
 392      TTYPE17 = 'mod_eff_wave' 
 393      TTYPE18 = 'chi2    ' 
 394   
 395      EXTNAME = 'MODEL   ' 
 396      XTENSION= 'BINTABLE'           / binary table extension 
 397      BITPIX  =                    8 / array data type 
 398      NAXIS   =                    2 / number of array dimensions 
 399      NAXIS1  =                   24 / length of dimension 1 
 400      NAXIS2  =                 1221 / length of dimension 2 
 401      TTYPE1  = 'wave    ' 
 402      TUNIT1  = 'A       ' 
 403      TTYPE2  = 'flux    ' 
 404      TUNIT2  = 'erg/s/cm2/AA' 
 405      TTYPE3  = 'dered_flux' 
 406      TUNIT3  = 'erg/s/cm2/AA' 
 407      TEFFL   =    23000.68377498454 
 408      TEFF    =    23644.49138963689 
 409      TEFFU   =    29999.33189058337 
 410      LOGGL   =    3.014445328877565 
 411      LOGG    =    4.984788506855546 
 412      LOGGU   =    4.996095055525759 
 413      EBVL    =   0.4703171900728142 
 414      EBV     =   0.4933185398871652 
 415      EBVU    =   0.5645144879211454 
 416      ZL      =   -2.499929434601112 
 417      Z       =   0.4332336811329144 
 418      ZU      =   0.4999776537652627 
 419      SCALEL  = 2.028305798068863E-20 
 420      SCALE   = 2.444606991671813E-20 
 421      SCALEU  = 2.830281842143698E-20 
 422      LABSL   =    250.9613352757437 
 423      LABS    =     281.771013453664 
 424      LABSU   =    745.3149766772975 
 425      CHISQL  =    77.07958742733673 
 426      CHISQ   =    77.07958742733673 
 427      CHISQU  =    118.8587169011471 
 428      CI_RAWL =   0.9999999513255379 
 429      CI_RAW  =   0.9999999513255379 
 430      CI_RAWU =   0.9999999999999972 
 431      CI_RED  =   0.5401112973063139 
 432      CI_REDL =   0.5401112973063139 
 433      CI_REDU =   0.9500015229597392 
 434   
 435      EXTNAME = 'IGRID_SEARCH' 
 436      XTENSION= 'BINTABLE'           / binary table extension 
 437      BITPIX  =                    8 / array data type 
 438      NAXIS   =                    2 / number of array dimensions 
 439      NAXIS1  =                   80 / length of dimension 1 
 440      NAXIS2  =                99996 / length of dimension 2 
 441      TTYPE1  = 'teff    ' 
 442      TTYPE2  = 'logg    ' 
 443      TTYPE3  = 'ebv     ' 
 444      TTYPE4  = 'z       ' 
 445      TTYPE5  = 'chisq   ' 
 446      TTYPE6  = 'scale   ' 
 447      TTYPE7  = 'escale ' 
 448      TTYPE8  = 'Labs    ' 
 449      TTYPE9  = 'CI_raw  ' 
 450      TTYPE10 = 'CI_red  ' 
 451   
 452   
 453  Subsection 3.4 Loading SED fits  ### BROKEN ### 
 454  ----------------------------------------------- 
 455   
 456  Unfortunately this is not yet working properly! 
 457   
 458  Once saved, you can load the contents of the FITS file again into an SED object 
 459  via 
 460   
 461  >>> mysed = SED('HD180642') 
 462  >>> mysed.load_fits() 
 463   
 464  and then you can build all the plots again easily. You can of course use the 
 465  predefined plotting scripts to start a plot, and then later on change the 
 466  properties of the labels, legends etc... for higher quality plots or to better 
 467  suit your needs. 
 468   
 469  Section 4. Accessing the best fitting full SED model 
 470  ==================================================== 
 471   
 472  You can access the full SED model that matches the parameters found by the 
 473  fitting routine via: 
 474   
 475  >>> wavelength,flux,deredded_flux = mysed.get_best_model() 
 476   
 477  Note that this model is retrieved after fitting, and was not in any way used 
 478  during the fitting. As a consequence, there could be small differences between 
 479  synthetic photometry calculated from this returned model and the synthetic 
 480  fluxes stored in C{mysed.results['igrid_search']['synflux']}, which is the 
 481  synthetic photometry obtained by interpolating in the grid of precomputed synthetic
 482  photometry. See the documentation of L{SED.get_model} for more information. 
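
A quick way to visualise the returned best-fitting model (a minimal sketch; any
further styling is up to you):

>>> wavelength,flux,deredded_flux = mysed.get_best_model()
>>> p = pl.figure()
>>> p = pl.loglog(wavelength,flux)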
 483   
 484  Section 5. Radii, distances and luminosities 
 485  ============================================ 
 486   
 487  Subsection 5.1. Relations between quantities
 488  -------------------------------------------- 
 489   
 490  Most SED grids don't have the radius as a tunable model parameter. The scale
 491  factor, which is available for all fitted models in the grid when at least one
 492  absolute photometric point is included, is directly related to the angular
 493  diameter. The following relations hold::
 494   
 495      >> distance = radius / np.sqrt(scale) 
 496      >> radius = distance * np.sqrt(scale) 
 497   
 498  Where C{radius} and C{distance} have equal units. The true absolute luminosity 
 499  (solar units) is related to the absolute luminosity from the SED (solar units) 
 500  via the radius (solar units):: 
 501   
 502      >> L_abs_true = L_abs * radius**2 
 503   
 504  Finally, the angular diameter can be computed via:: 
 505   
 506      >> 2*conversions.convert('sr','mas',scale) 
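
Putting these relations together, a worked sketch (the variable names and the
numerical distance are purely illustrative; C{scale} and C{L_abs} are assumed
to come from the fit results)::

    >> distance_pc = 230.
    >> distance = conversions.convert('pc','Rsol',distance_pc)
    >> radius = distance*np.sqrt(scale)                    # radius in solar radii
    >> L_abs_true = L_abs*radius**2                        # true luminosity in solar units
    >> ang_diam = 2*conversions.convert('sr','mas',scale)  # angular diameter in mas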
 507   
 508  Subsection 5.2. Seismic constraints
 509  ----------------------------------- 
 510   
 511  If the star shows clear solar-like oscillations, you can use nu_max to give an
 512  independent constraint on the surface gravity, given the effective temperature of
 513  the model (you are free to give errors or not; just keep in mind that if you do,
 514  you will get Uncertainties objects rather than plain floats)::
 515   
 516      >> teff = 4260.,'K' 
 517      >> nu_max = 38.90,0.86,'muHz' 
 518      >> logg_slo = conversions.derive_logg_slo(teff,nu_max,unit='[cm/s2]')
 519   
 520  If you have the large separation (l=0 modes) and the nu_max, you can get an 
 521  estimate of the radius:: 
 522   
 523      >> Deltanu0 = 4.80,0.02,'muHz' 
 524      >> R_slo = conversions.derive_radius_slo(nu_max,Deltanu0,teff,unit='Rsol')
 525   
 526  Then from the radius, you can get both the absolute luminosity and distance to 
 527  your star. 
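
For instance, combining this radius with the fitted scale factor gives a
distance estimate via the relation from Subsection 5.1 (a sketch; C{scale} is
assumed to come from the fit results)::

    >> R_pc = conversions.convert('Rsol','pc',R_slo)
    >> distance = R_pc/np.sqrt(scale)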
 528   
 529  Subsection 5.3. Parallaxes
 530  -------------------------- 
 531   
 532  From the parallax, you can get an estimate of the distance. This is, however, 
 533  dependent on some prior assumptions such as the shape of the galaxy and the 
 534  distribution of stars. To estimate the probability density value due to 
 535  a measured parallax of a star at a particular distance, you can call L{distance.distprob}::
 536   
    >> d = np.logspace(-1,np.log10(25000),10000)   # a grid of trial distances in parsec, for example
 537      >> gal_latitude = 0.5
 538      >> plx = 3.14,0.5
 539      >> d_prob = distance.distprob(d,gal_latitude,plx)
 540   
 541  Using this distance, you can get an estimate for the radius of your star, and 
 542  thus also the absolute luminosity. 
 543   
 544  Subsection 5.4. Reddening constraints
 545  ------------------------------------- 
 546   
 547  An estimate of the reddening of a star at a particular distance may be obtained 
 548  with the L{extinctionmodels.findext} function. There are currently three 
 549  reddening maps available: Drimmel, Marshall and Arenou:: 
 550   
 551      >> lng,lat = 120.,-0.5 
 552      >> dist = 100. # pc 
 553      >> Rv = 3.1 
 554      >> EBV = extinctionmodels.findext(lng,lat,model='drimmel',distance=dist)/Rv 
 555      >> EBV = extinctionmodels.findext(lng,lat,model='marshall',distance=dist)/Rv 
 556      >> EBV = extinctionmodels.findext(lng,lat,model='arenou',distance=dist)/Rv 
 557   
 558   
 559  """ 
 560  import re 
 561  import sys 
 562  import time 
 563  import os 
 564  import logging 
 565  import itertools 
 566  import glob 
 567  import json 
 568   
 569  import pylab as pl 
 570  from matplotlib import mlab 
 571  from PIL import Image 
 572  import numpy as np 
 573  import scipy.stats 
 574  from scipy.interpolate import Rbf 
 575  import astropy.io.fits as pf 
 576   
 577  from ivs import config 
 578  from ivs.aux import numpy_ext 
 579  from ivs.aux import termtools 
 580  from ivs.aux.decorators import memoized,clear_memoization 
 581  from ivs.inout import ascii 
 582  from ivs.inout import fits 
 583  from ivs.inout import hdf5 
 584  from ivs.sed import model 
 585  from ivs.sed import filters 
 586  from ivs.sed import fit 
 587  from ivs.sed import distance 
 588  from ivs.sed import extinctionmodels 
 589  from ivs.sed.decorators import standalone_figure 
 590  from ivs.spectra import tools 
 591  from ivs.catalogs import crossmatch 
 592  from ivs.catalogs import vizier 
 593  from ivs.catalogs import mast 
 594  from ivs.catalogs import sesame 
 595  from ivs.catalogs import corot 
 596  from ivs.units import conversions 
 597  from ivs.units import constants 
 598  from ivs.units.uncertainties import unumpy,ufloat 
 599  from ivs.units.uncertainties.unumpy import sqrt as usqrt 
 600  from ivs.units.uncertainties.unumpy import tan as utan 
 601  from ivs.sigproc import evaluate 
 602  try: 
 603      from ivs.stellar_evolution import evolutionmodels  # This module has been removed; it may be re-implemented in the future if there is demand
 604  except ImportError: 
 605      print("Warning: The ivs.stellar_evolution module has been removed from the repository as of 03.11.2017.") 
 606      print("         Stellar evolution models are no longer available to use.") 
 607   
 608  logger = logging.getLogger("SED.BUILD") 
609 #logger.setLevel(10) 610 611 -def fix_master(master,e_default=None):
612 """ 613 Clean/extend/fix record array received from C{get_photometry}. 614 615 This function does a couple of things: 616 617 1. Adds common but uncatalogized colors like 2MASS.J-H if not already 618 present. WARNING: these colors can mix values from the same system 619 but from different catalogs! 620 2. Removes photometry for which no calibration is available 621 3. Adds a column 'color' with flag False denoting absolute flux measurement 622 and True denoting color. 623 4. Adds a column 'include' with flag True meaning the value will be 624 included in the fit 625 5. Sets some default errors to photometric values, for which we know 626 that the catalog values are not trustworthy. 627 6. Sets a lower limit to the allowed errors of 1%. Errors below this 628 value are untrostworthy because the calibration error is larger than that. 629 7. USNOB1 and ANS photometry are set the have a minimum error of 30% 630 due to uncertainties in response curves. 631 632 @param master: record array containing photometry. This should have the fields 633 'meas','e_meas','unit','photband','source','_r','_RAJ2000','DEJ2000', 634 'cmeas','e_cmeas','cwave','cunit' 635 @type master: record array 636 @param e_default: default error for measurements without errors 637 @type e_default: float 638 @return: record array extended with fields 'include' and 'color', and with 639 rows added (see above description) 640 @rtype: numpy recard array 641 """ 642 #-- we recognize uncalibrated stuff as those for which no absolute flux was 643 # obtained, and remove them: 644 master = master[~np.isnan(master['cmeas'])] 645 cats = np.array([ph.split('.')[0] for ph in master['source']]) 646 set_cats = sorted(set(cats)) 647 #-- add common uncatalogized colors: 648 columns = list(master.dtype.names) 649 #-- if the separate bands are available (and not the color itself), 650 # calculate the colors here. We need to do this for every separate 651 # system! 652 add_rows = [] 653 for cat in set_cats: 654 master_ = master[cats==cat] 655 656 for color in ['2MASS.J-H','2MASS.KS-H','TD1.1565-1965','TD1.2365-1965','TD1.2365-2740', 657 'JOHNSON.J-H','JOHNSON.K-H','JOHNSON.B-V','JOHNSON.U-B','JOHNSON.R-V','JOHNSON.I-V', 658 'GALEX.FUV-NUV','TYCHO2.BT-VT','WISE.W1-W2','WISE.W3-W2','WISE.W4-W3', 659 'SDSS.U-G','SDSS.G-R','SDSS.R-I','SDSS.R-Z', 660 'UVEX.U-G','UVEX.G-R','UVEX.R-I','UVEX.HA-I', 661 'DENIS.I-J','DENIS.KS-J', 662 'ANS.15N-15W','ANS.15W-18','ANS.18-22','ANS.22-25','ANS.25-33']: 663 #-- get the filter system and the separate bands for these colors 664 system,band = color.split('.') 665 band0,band1 = band.split('-') 666 band0,band1 = '%s.%s'%(system,band0),'%s.%s'%(system,band1) 667 668 if band0 in master_['photband'] and band1 in master_['photband'] and not color in master_['photband']: 669 #-- where are the bands located? 670 index0 = list(master_['photband']).index(band0) 671 index1 = list(master_['photband']).index(band1) 672 673 #-- start a new row to add the color 674 row = list(master_[index1]) 675 row[columns.index('photband')] = color 676 row[columns.index('cwave')] = np.nan 677 678 #-- it could be a magnitude difference 679 if master_['unit'][index0]=='mag': 680 row[columns.index('meas')] = master_['meas'][index0]-master_['meas'][index1] 681 row[columns.index('e_meas')] = np.sqrt(master_['e_meas'][index0]**2+master_['e_meas'][index1]**2) 682 #-- error will not always be available... 
683 try: 684 row[columns.index('cmeas')],row[columns.index('e_cmeas')] = conversions.convert('mag_color','flux_ratio',row[columns.index('meas')],row[columns.index('e_meas')],photband=color) 685 except AssertionError: 686 row[columns.index('cmeas')] = conversions.convert('mag_color','flux_ratio',row[columns.index('meas')],photband=color) 687 row[columns.index('e_cmeas')] = np.nan 688 #-- or it could be a flux ratio 689 else: 690 row[columns.index('meas')] = master_['meas'][index0]/master_['meas'][index1] 691 row[columns.index('e_meas')] = np.sqrt(((master_['e_meas'][index0]/master_['meas'][index0])**2+(master_['e_meas'][index1]/master_['meas'][index1])**2)*row[columns.index('meas')]**2) 692 row[columns.index('cmeas')],row[columns.index('e_cmeas')] = row[columns.index('meas')],row[columns.index('e_meas')] 693 row[columns.index('cunit')] = 'flux_ratio' 694 add_rows.append(tuple(row)) 695 master = numpy_ext.recarr_addrows(master,add_rows) 696 697 #-- add an extra column with a flag to distinguish colors from absolute 698 # fluxes, and a column with flags to include/exclude photometry 699 # By default, exclude photometry if the effective wavelength is above 700 # 150 mum or if the measurement is nan 701 if not 'color' in master.dtype.names: 702 extra_cols = [[filters.is_color(photband) for photband in master['photband']], 703 [(cwave<150e4) or np.isnan(meas) for cwave,meas in zip(master['cwave'],master['cmeas'])]] 704 dtypes = [('color',np.bool),('include',np.bool)] 705 master = numpy_ext.recarr_addcols(master,extra_cols,dtypes) 706 else: 707 iscolor = [filters.is_color(photband) for photband in master['photband']] 708 master['color'] = iscolor 709 #-- add an extra column with indices to distinguish different data sets, e.g. based 710 # on photometric phase. 711 if not 'phase' in master.dtype.names: 712 extra_cols = [[0]*len(master['meas'])] 713 dtypes = [('phase',np.int)] 714 master = numpy_ext.recarr_addcols(master,extra_cols,dtypes) 715 #-- set default errors if no errors are available and set really really 716 # small errors to some larger default value 717 if e_default is not None: 718 no_error = np.isnan(master['e_cmeas']) 719 master['e_cmeas'][no_error] = np.abs(e_default*master['cmeas'][no_error]) 720 small_error = master['e_cmeas']<(0.01*np.abs(master['cmeas'])) 721 master['e_cmeas'][small_error] = 0.01*np.abs(master['cmeas'][small_error]) 722 723 #-- the measurements from USNOB1 are not that good because the response 724 # curve is approximated by the JOHNSON filter. Set the errors to min 30% 725 # The same holds for ANS: here, the transmission curves are very uncertain 726 for i,photband in enumerate(master['photband']): 727 if 'USNOB1' in photband or 'ANS' in photband: 728 master['e_cmeas'][i] = max(0.30*master['cmeas'][i],master['e_cmeas'][i]) 729 730 #-- remove negative fluxes 731 master = master[master['cmeas']>0] 732 733 return master
734
735 736 -def decide_phot(master,names=None,wrange=None,sources=None,indices=None,ptype='all',include=False):
737 """ 738 Exclude/include photometric passbands containing one of the strings listed in 739 photbands. 740 741 This function will set the flags in the 'include' field of the extended 742 master record array. 743 744 Colours also have a wavelength in this function, and it is defined as the 745 average wavelength of the included bands (doesn't work for stromgren C1 and M1). 746 747 include=False excludes photometry based on the input parameters. 748 include=True includes photometry based on the input parameters. 749 750 Some examples: 751 752 1. Exclude all measurements:: 753 754 >> decide_phot(master,wrange=(-np.inf,+np.inf),ptype='all',include=False) 755 756 2. Include all TD1 fluxes and colors:: 757 758 >> decide_phot(master,names=['TD1'],ptype='all',include=True) 759 760 3. Include all V band measurements from all systems (but not the colors):: 761 762 >> decide_phot(master,names=['.V'],ptype='abs',include=True) 763 764 4. Include all Geneva colors and exclude Geneva magnitudes:: 765 766 >> decide_phot(master,names=['GENEVA'],ptype='col',include=True) 767 >> decide_phot(master,names=['GENEVA'],ptype='abs',include=False) 768 769 5. Exclude all infrared measurements beyond 1 micron:: 770 771 >> decide_phot(master,wrange=(1e4,np.inf),ptype='all',include=False) 772 773 6. Include all AKARI measurements below 10 micron:: 774 775 >> decide_phot(master,names=['AKARI'],wrange=(-np.inf,1e5),ptype='all',include=True) 776 777 @param master: record array containing all photometry 778 @type master: numpy record array 779 @param names: strings excerpts to match filters 780 @type names: list of strings 781 @param wrange: wavelength range (most likely angstrom) to include/exclude 782 @type wrange: 2-tuple (start wavelength,end wavelength) 783 @param sources: list of sources 784 @type sources: list of strings 785 @param indices: list of indices (integers) 786 @type indices: list of integers 787 @param ptype: type of photometry to include/exclude: absolute values, colors 788 or both 789 @type ptype: string, one of 'abs','col','all' 790 @param include: flag setting exclusion or inclusion 791 @type include: boolean 792 @return: master record array with adapted 'include' flags 793 @rtype: numpy record array 794 """ 795 #-- exclude/include passbands based on their names 796 if names is not None: 797 logger.info('%s photometry based on photband containining one of %s'%((include and 'Include' or "Exclude"),names)) 798 for index,photband in enumerate(master['photband']): 799 for name in names: 800 if name in photband: 801 if ptype=='all' or (ptype=='abs' and -master['color'][index]) or (ptype=='col' and master['color'][index]): 802 master['include'][index] = include 803 break 804 #-- exclude/include colors based on their wavelength 805 if wrange is not None: 806 logger.info('%s photometry based on photband wavelength between %s'%((include and 'Include' or "Exclude"),wrange)) 807 for index,photband in enumerate(master['photband']): 808 system,color = photband.split('.') 809 if not '-' in color or ptype=='abs': 810 continue 811 band0,band1 = color.split('-') 812 cwave0 = filters.eff_wave('%s.%s'%(system,band0)) 813 cwave1 = filters.eff_wave('%s.%s'%(system,band1)) 814 if (wrange[0]<cwave0<wrange[1]) or (wrange[0]<cwave0<wrange[1]): 815 master['include'][index] = include 816 #-- exclude/include passbands based on their wavelength 817 if not ptype=='col': 818 master['include'][(wrange[0]<master['cwave']) & (master['cwave']<wrange[1])] = include 819 #-- exclude/include passbands based on their source 820 if sources 
is not None: 821 logger.info('%s photometry based on source catalog in %s'%((include and 'Include' or "Exclude"),sources)) 822 for index,msource in enumerate(master['source']): 823 for source in sources: 824 if source==msource: 825 if ptype=='all' or (ptype=='abs' and -master['color'][index]) or (ptype=='col' and master['color'][index]): 826 master['include'][index] = include 827 break 828 #-- exclude/include passbands based on their index 829 if indices is not None: 830 logger.info('%s photometry based on index'%((include and 'Include' or "Exclude"))) 831 indices = np.array(indices,int) 832 if not indices.shape: indices = indices.reshape(1) 833 master['include'][indices] = include
834
835 836 -def photometry2str(master,comment='',sort='photband',color=False,index=False):
837 """ 838 String representation of master record array. 839 840 Sorting is disabled when C{index=True}. 841 842 @param master: master record array containing photometry 843 @type master: numpy record array 844 """ 845 if sort and not index: 846 master = master[np.argsort(master[sort])] 847 848 templateh = '{:20s} {:>12s} {:>12s} {:12s} {:>10s} {:>12s} {:>12s} {:12s} {:30s}' 849 templated = '{:20s} {:12g} {:12g} {:12s} {:10.0f} {:12g} {:12g} {:12s} {:30s}' 850 header = ['PHOTBAND','MEAS','E_MEAS','UNIT','CWAVE','CMEAS','E_CMEAS','CUNIT','SOURCE'] 851 if 'comments' in master.dtype.names: 852 templateh += ' {:s}' 853 templated += ' {:s}' 854 header += ['COMMENTS'] 855 if index: 856 templateh = '{:3s} '+templateh 857 templated = '{:3d} '+templated 858 header = ['NR']+header 859 860 txt = [comment+templateh.format(*header)] 861 txt.append(comment+'='*170) 862 columns = [master[col.lower()] for col in header if not col=='NR'] 863 for nr,contents in enumerate(zip(*columns)): 864 contents = list(contents) 865 if 'comments' in master.dtype.names: 866 contents[-1] = contents[-1].replace('_',' ') 867 if index: 868 contents = [nr] + contents 869 line = templated.format(*contents) 870 if color: 871 mycolor = termtools.green if master['include'][nr] else termtools.red 872 line = mycolor(line) 873 txt.append(comment + line) 874 return "\n".join(txt)
875
876 #@memoized 877 #def get_schaller_grid(): 878 #""" 879 #Download Schaller 1992 evolutionary tracks and return an Rbf interpolation 880 #function. 881 882 #@return: Rbf interpolation function 883 #@rtype: Rbf interpolation function 884 #""" 885 ##-- translation between table names and masses 886 ##masses = [1,1.25,1.5,1.7,2,2.5,3,4,5,7,9,12,15,20,25,40,60][:-1] 887 ##tables = ['table20','table18','table17','table16','table15','table14', 888 ## 'table13','table12','table11','table10','table9','table8', 889 ## 'table7','table6','table5','table4','table3'][:-1] 890 ##-- read in all the tables and compute radii and luminosities. 891 #data,comms,units = vizier.search('J/A+AS/96/269/models') 892 #all_teffs = 10**data['logTe'] 893 #all_radii = np.sqrt((10**data['logL']*constants.Lsol_cgs)/(10**data['logTe'])**4/(4*np.pi*constants.sigma_cgs)) 894 #all_loggs = np.log10(constants.GG_cgs*data['Mass']*constants.Msol_cgs/(all_radii**2)) 895 #all_radii /= constants.Rsol_cgs 896 ##-- remove low temperature models, the evolutionary tracks are hard to 897 ## interpolate there. 898 #keep = all_teffs>5000 899 #all_teffs = all_teffs[keep] 900 #all_radii = all_radii[keep] 901 #all_loggs = all_loggs[keep] 902 ##-- make linear interpolation model between all modelpoints 903 #mygrid = Rbf(np.log10(all_teffs),all_loggs,all_radii,function='linear') 904 #logger.info('Interpolation of Schaller 1992 evolutionary tracks to compute radii') 905 #return mygrid 906 907 908 909 910 911 912 913 #def get_radii(teffs,loggs): 914 #""" 915 #Retrieve radii from stellar evolutionary tracks from Schaller 1992. 916 917 #@param teffs: model effective temperatures 918 #@type teffs: numpy array 919 #@param loggs: model surface gravities 920 #@type loggs: numpy array 921 #@return: model radii (solar units) 922 #@rtype: numpy array 923 #""" 924 #mygrid = get_schaller_grid() 925 #radii = mygrid(np.log10(teffs),loggs) 926 #return radii 927 928 929 #def calculate_distance(plx,gal,teffs,loggs,scales,n=75000): 930 #""" 931 #Calculate distances and radii of a target given its parallax and location 932 #in the galaxy. 933 934 935 #""" 936 ##-- compute distance up to 25 kpc, which is about the maximum distance from 937 ## earth to the farthest side of the Milky Way galaxy 938 ## rescale to set the maximum to 1 939 #d = np.logspace(np.log10(0.1),np.log10(25000),100000) 940 #if plx is not None: 941 #dprob = distance.distprob(d,gal[1],plx) 942 #dprob = dprob / dprob.max() 943 #else: 944 #dprob = np.ones_like(d) 945 ##-- compute the radii for the computed models, and convert to parsec 946 ##radii = np.ones(len(teffs[-n:])) 947 #radii = get_radii(teffs[-n:],loggs[-n:]) 948 #radii = conversions.convert('Rsol','pc',radii) 949 #d_models = radii/np.sqrt(scales[-n:]) 950 ##-- we set out of boundary values to zero 951 #if plx is not None: 952 #dprob_models = np.interp(d_models,d[-n:],dprob[-n:],left=0,right=0) 953 #else: 954 #dprob_models = np.ones_like(d_models) 955 ##-- reset the radii to solar units for return value 956 #radii = conversions.convert('pc','Rsol',radii) 957 #return (d_models,dprob_models,radii),(d,dprob) 958 959 960 961 -class SED(object):
962 """ 963 Class that facilitates the use of the ivs.sed module. 964 965 This class is meant to be an easy interface to many of the ivs.sed module's 966 functionality. 967 968 The most important attributes of SED are: 969 970 1. C{sed.ID}: star's identification (str) 971 2. C{sed.photfile}: name of the file containing all photometry (str) 972 3. C{sed.info}: star's information from Simbad (dict) 973 4. C{sed.master}: photometry data (record array) 974 5. C{sed.results}: results and summary of the fitting process (dict) 975 976 After fitting, e.g. via calling L{igrid_search}, you can call L{get_model} 977 to retrieve the full SED matching the best fitting parameters (or, rather, 978 closely matching them, see the documentation). 979 980 """
981 - def __init__(self,ID=None,photfile=None,plx=None,load_fits=True,load_hdf5=True,label=''):
982 """ 983 Initialize SED class. 984 985 Different ways to initialize: 986 987 B{1. If no previous data are saved:} 988 989 >>> mysed = SED('HD129929') 990 991 B{2. If previous data exists:} 992 993 >>> #mysed = SED(photfile='HD129929.phot') # will set ID with 'oname' field from SIMBAD 994 >>> #mysed = SED(ID='bla', photfile='HD129929.phot') # Sets custom ID 995 996 The C{ID} variable is used internally to look up data, so it should be 997 something SIMBAD understands and that designates the target. 998 999 @param plx: parallax (and error) of the object 1000 @type plx: tuple (plx,e_plx) 1001 """ 1002 self.ID = ID 1003 self.label = label 1004 self.info = {} 1005 #-- the file containing photometry should have the following name. We 1006 # save photometry to a file to avoid having to download photometry 1007 # each time from the internet 1008 if photfile is None: 1009 self.photfile = '%s.phot'%(ID).replace(' ','_') 1010 #-- keep information on the star from SESAME, but override parallax with 1011 # the value from Van Leeuwen's new reduction. Set the galactic 1012 # coordinates, which are not available from SESAME. 1013 else: 1014 self.photfile = photfile 1015 1016 #-- load information from the photometry file if it exists 1017 if not os.path.isfile(self.photfile): 1018 try: 1019 self.info = sesame.search(os.path.basename(ID),fix=True) 1020 except KeyError: 1021 logger.warning('Star %s not recognised by SIMBAD'%(os.path.basename(ID))) 1022 try: 1023 self.info = sesame.search(os.path.basename(ID),db='N',fix=True) 1024 logger.info('Star %s recognised by NED'%(os.path.basename(ID))) 1025 except KeyError: 1026 logger.warning('Star %s not recognised by NED'%(os.path.basename(ID))) 1027 #-- final attempt: if it's a CoRoT star, try the catalog 1028 if 'corot' in ID.lower() and not 'galpos' in self.info: 1029 corot_id = int("".join([char for char in ID if char.isdigit()])) 1030 try: 1031 self.info['jradeg'],self.info['jdedeg'] = ra,dec = corot.resolve(corot_id) 1032 gal = conversions.convert('equatorial','galactic',(ra,dec),epoch='2000') 1033 self.info['galpos'] = float(gal[0])/np.pi*180,float(gal[1])/np.pi*180 1034 logger.info('Resolved position via CoRoT EXO catalog') 1035 except: 1036 logger.warning("Star %s not recognised by CoRoT"%(os.path.basename(ID))) 1037 1038 if plx is not None: 1039 if not 'plx' in self.info: 1040 self.info['plx'] = {} 1041 self.info['plx']['v'] = plx[0] 1042 self.info['plx']['e'] = plx[1] 1043 else: 1044 self.load_photometry() 1045 logger.info('Photometry loaded from file') 1046 #-- if no ID was given, set the official name as the ID. 1047 if self.ID is None: 1048 self.ID = os.path.splitext(os.path.basename(self.photfile))[0]#self.info['oname'] 1049 logger.info('Name from file used to set ID of object') 1050 #--load information from the FITS file if it exists 1051 self.results = {} 1052 self.constraints = {} 1053 if load_fits: 1054 self.load_fits() 1055 if load_hdf5: 1056 self.load_hdf5() 1057 1058 #-- prepare for information on fitting processes 1059 self.CI_limit = 0.95
1060
1061 - def __repr__(self):
1062 """ 1063 Machine readable string representation of an SED object. 1064 """ 1065 return "SED('{}')".format(self.ID)
1066
1067 - def __str__(self):
1068 """ 1069 Human readable string representation of an SED object. 1070 """ 1071 txt = [] 1072 #-- object designation 1073 txt.append("Object identification: {:s}".format(self.ID)) 1074 if hasattr(self,'info') and self.info and 'oname' in self.info: 1075 txt.append("Official designation: {:s}".format(self.info['oname'])) 1076 #-- additional info 1077 for key in sorted(self.info.keys()): 1078 if isinstance(self.info[key],dict): 1079 txt.append(" {:10s} = ".format(key)+", ".join(["{}: {}".format(i,j) for i,j in self.info[key].iteritems()])) 1080 else: 1081 txt.append(" {:10s} = {}".format(key,self.info[key])) 1082 1083 #txt.append("Included photometry:") 1084 #if hasattr(self,'master') and self.master is not None: 1085 #include_grid = self.master['include'] 1086 #txt.append(photometry2str(self.master[include_grid])) 1087 #txt.append("Excluded photometry:") 1088 #if hasattr(self,'master') and self.master is not None: 1089 #include_grid = self.master['include'] 1090 #txt.append(photometry2str(self.master[-include_grid])) 1091 if hasattr(self,'master'): 1092 txt.append(photometry2str(self.master,color=True)) 1093 return "\n".join(txt)
1094 1095 #{ Handling photometric data
1096 - def get_photometry(self,radius=None,ra=None,dec=None, 1097 include=None,exclude=None,force=False, 1098 units='erg/s/cm2/AA'):
1099 """ 1100 Search photometry on the net or from the phot file if it exists. 1101 1102 For bright stars, you can set radius a bit higher... 1103 1104 @param radius: search radius (arcseconds) 1105 @type radius: float. 1106 """ 1107 if radius is None: 1108 if 'mag.V.v' in self.info and self.info['mag.V.v']<6.: 1109 radius = 60. 1110 else: 1111 radius = 10. 1112 if not os.path.isfile(self.photfile) or force: 1113 #-- get and fix photometry. Set default errors to 1%, and set 1114 # USNOB1 errors to 3% 1115 if ra is None and dec is None: 1116 master = crossmatch.get_photometry(ID=os.path.basename(self.ID),radius=radius, 1117 include=include,exclude=exclude,to_units=units, 1118 extra_fields=['_r','_RAJ2000','_DEJ2000']) # was radius=3. 1119 else: 1120 master = crossmatch.get_photometry(ID=os.path.basename(self.ID),ra=ra,dec=dec,radius=radius, 1121 include=include,exclude=exclude,to_units=units, 1122 extra_fields=['_r','_RAJ2000','_DEJ2000']) # was radius=3. 1123 if 'jradeg' in self.info: 1124 master['_RAJ2000'] -= self.info['jradeg'] 1125 master['_DEJ2000'] -= self.info['jdedeg'] 1126 1127 #-- fix the photometry: set default errors to 2% and print it to the 1128 # screen 1129 self.master = fix_master(master,e_default=0.1) 1130 logger.info('\n'+photometry2str(master)) 1131 1132 #-- write to file 1133 self.save_photometry()
1134
1135 - def get_spectrophotometry(self,directory=None,force_download=False):
1136 """ 1137 Retrieve and combine spectra. 1138 1139 B{WARNING:} this function creates FUSE and DIR directories! 1140 """ 1141 if directory is None and os.path.dirname(self.ID) == '': 1142 directory = os.getcwd() 1143 elif os.path.dirname(self.ID) != '': 1144 directory = os.path.dirname(self.ID) 1145 1146 if not os.path.isdir(directory): 1147 os.mkdir(directory) 1148 1149 #-- add spectrophotometric filters to the set 1150 photbands = filters.add_spectrophotometric_filters(R=200,lambda0=950,lambdan=3350) 1151 if hasattr(self,'master') and self.master is not None and not force_download: 1152 if any(['BOXCAR' in photband for photband in self.master['photband']]): 1153 return None 1154 #-- FUSE spectra 1155 fuse_direc = os.path.join(directory,'FUSE') 1156 iue_direc = os.path.join(directory,'IUE') 1157 if not os.path.isdir(fuse_direc) or force_download: 1158 out1 = mast.get_FUSE_spectra(ID=os.path.basename(self.ID),directory=fuse_direc,select=['ano']) 1159 if out1 is None: 1160 out1 = [] 1161 else: 1162 out1 = glob.glob(fuse_direc+'/*') 1163 #-- IUE spectra 1164 if not os.path.isdir(iue_direc) or force_download: 1165 out2 = vizier.get_IUE_spectra(ID=os.path.basename(self.ID),directory=iue_direc,select='lo') 1166 if out2 is None: 1167 out2 = [] 1168 else: 1169 out2 = glob.glob(iue_direc+'/*') 1170 #-- read them in to combine 1171 list_of_spectra = [fits.read_fuse(ff)[:3] for ff in out1] 1172 list_of_spectra+= [fits.read_iue(ff)[:3] for ff in out2] 1173 #-- combine 1174 wave,flux,err,nspec = tools.combine(list_of_spectra) 1175 #-- add to the master 1176 N = len(flux) 1177 units = ['erg/s/cm2/AA']*N 1178 source = ['specphot']*N 1179 self.add_photometry_fromarrays(flux,err,units,photbands,source,flags=nspec) 1180 1181 #-- write to file 1182 self.master = fix_master(self.master,e_default=0.1) 1183 logger.info('\n'+photometry2str(self.master)) 1184 self.save_photometry()
1185 1186 1187 1188
1189 - def exclude(self,names=None,wrange=None,sources=None,indices=None):
1190 """ 1191 Exclude (any) photometry from fitting process. 1192 1193 If called without arguments, all photometry will be excluded. 1194 """ 1195 if names is None and wrange is None and sources is None and indices is None: 1196 wrange = (-np.inf,np.inf) 1197 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=False,ptype='all')
1198
1199 - def exclude_colors(self,names=None,wrange=None,sources=None,indices=None):
1200 """ 1201 Exclude (color) photometry from fitting process. 1202 """ 1203 if names is None and wrange is None and sources is None and indices is None: 1204 wrange = (-np.inf,0) 1205 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=False,ptype='col')
1206
1207 - def exclude_abs(self,names=None,wrange=None,sources=None,indices=None):
1208 """ 1209 Exclude (absolute) photometry from fitting process. 1210 """ 1211 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=False,ptype='abs')
1212 1213
1214 - def include(self,names=None,wrange=None,sources=None,indices=None):
1215 """ 1216 Include (any) photometry in fitting process. 1217 """ 1218 if names is None and wrange is None and sources is None and indices is None: 1219 wrange = (-np.inf,np.inf) 1220 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=True,ptype='all')
1221
1222 - def include_colors(self,names=None,wrange=None,sources=None,indices=None):
1223 """ 1224 Include (color) photometry in fitting process. 1225 """ 1226 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=True,ptype='col')
1227
1228 - def include_abs(self,names=None,wrange=None,sources=None,indices=None):
1229 """ 1230 Include (absolute) photometry in fitting process. 1231 """ 1232 decide_phot(self.master,names=names,wrange=wrange,sources=sources,indices=indices,include=True,ptype='abs')
1233
1234 - def set_photometry_scheme(self,scheme,infrared=(1,'micron')):
1235 """ 1236 Set a default scheme of colors/absolute values to fit the SED. 1237 1238 Possible values: 1239 1240 1. scheme = 'abs': means excluding all colors, including all absolute values 1241 2. scheme = 'color': means including all colors, excluding all absolute values 1242 3. scheme = 'combo': means inculding all colors, and one absolute value per 1243 system (the one with the smallest relative error) 1244 4. scheme = 'irfm': means mimic infrared flux method: choose absolute values 1245 in the infrared (define with C{infrared}), and colours in the optical 1246 1247 @param infrared: definition of start of infrared for infrared flux method 1248 @type infrared: tuple (value <float>, unit <str>) 1249 """ 1250 #-- only absolute values: real SED fitting 1251 if 'abs' in scheme.lower(): 1252 self.master['include'][self.master['color']] = False 1253 self.master['include'][-self.master['color']] = True 1254 logger.info('Fitting procedure will use only absolute fluxes (%d)'%(sum(self.master['include']))) 1255 #-- only colors: color fitting 1256 elif 'col' in scheme.lower(): 1257 self.master['include'][self.master['color']] = True 1258 self.master['include'][-self.master['color']] = False 1259 logger.info('Fitting procedure will use only colors (%d)'%(sum(self.master['include']))) 1260 #-- combination: all colors and one absolute value per system 1261 elif 'com' in scheme.lower(): 1262 self.master['include'][self.master['color'] & self.master['include']] = True 1263 systems = np.array([photband.split('.')[0] for photband in self.master['photband']]) 1264 set_systems = sorted(list(set(systems))) 1265 for isys in set_systems: 1266 keep = -self.master['color'] & (isys==systems) & self.master['include'] 1267 if not sum(keep): continue 1268 index = np.argmax(self.master['e_cmeas'][keep]/self.master['cmeas'][keep]) 1269 index = np.arange(len(self.master))[keep][index] 1270 self.master['include'][keep] = False 1271 self.master['include'][index] = True 1272 logger.info('Fitting procedure will use colors + one absolute flux for each system (%d)'%(sum(self.master['include']))) 1273 #-- infrared flux method scheme 1274 elif 'irfm' in scheme.lower(): 1275 #-- check which measurements are in the infrared 1276 for i,meas in enumerate(self.master): 1277 #-- if measurement is in the infrared and it is a color, remove it (only use absolute values in IR) 1278 #-- for colors, make sure *both* wavelengths are checked 1279 if meas['color']: 1280 bands = filters.get_color_photband(meas['photband']) 1281 eff_waves = filters.eff_wave(bands) 1282 is_infrared = any([(conversions.Unit(*infrared)<conversions.Unit(eff_wave,'AA')) for eff_wave in eff_waves]) 1283 else: 1284 is_infrared = conversions.Unit(*infrared)<conversions.Unit(meas['cwave'],'AA') 1285 if is_infrared and meas['color']: 1286 self.master['include'][i] = False 1287 #-- if measurement is in the infrared and it is not color, keep it (only use absolute values in IR) 1288 elif is_infrared: 1289 self.master['include'][i] = True 1290 #-- if measurement is not in the infrared and it is a color, keep it (only use colors in visible) 1291 elif not is_infrared and meas['color']: 1292 self.master['include'][i] = True 1293 #-- if measurement is not in the infrared and it is not a color, remove it (only use colors in visible) 1294 elif not is_infrared: 1295 self.master['include'][i] = False 1296 use_colors = sum(self.master['include'] & self.master['color']) 1297 use_abs = sum(self.master['include'] & ~self.master['color']) 1298 logger.info('Fitting procedure will use 
colors (%d) < %.3g%s < absolute values (%d)'%(use_colors,infrared[0],infrared[1],use_abs))
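#-- A minimal usage sketch: fit on absolute fluxes only, or mimic the infrared
#   flux method with the infrared starting at 1 micron.
#   >>> mysed.set_photometry_scheme('abs')
#   >>> mysed.set_photometry_scheme('irfm', infrared=(1., 'micron'))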
1299 1300
1301 - def add_photometry_fromarrays(self,meas,e_meas,units,photbands,source,flags=None,phases=None):
1302 """ 1303 Add photometry from numpy arrays 1304 1305 By default unflagged, and at the ra and dec of the star. No color and 1306 included. 1307 1308 @param meas: original measurements (fluxes, magnitudes...) 1309 @type meas: array 1310 @param e_meas: error on original measurements in same units as C{meas} 1311 @type e_meas: array 1312 @param units: units of original measurements 1313 @type units: array of strings 1314 @param photbands: photometric passbands of original measurements 1315 @type photbands: array of strings 1316 @param source: source of original measurements 1317 @type source: array of strings 1318 """ 1319 if flags is None: 1320 flags = np.nan*np.ones(len(meas)) 1321 if phases is None: 1322 phases = np.zeros(len(meas)) 1323 #-- if master record array does not exist, make a new one 1324 if not hasattr(self,'master') or self.master is None: 1325 dtypes = [('meas','f8'),('e_meas','f8'),('flag','S20'),('unit','S30'),('photband','S30'),('source','S50'),('_r','f8'),('_RAJ2000','f8'),\ 1326 ('_DEJ2000','f8'),('cwave','f8'),('cmeas','f8'),('e_cmeas','f8'),('cunit','S50'),('color',bool),('include',bool),\ 1327 ('phase',int),('bibcode','S20'),('comments','S200')] 1328 logger.info('No previous measurements available, initialising master record') 1329 self.master = np.rec.fromarrays(np.array([ [] for i in dtypes]), dtype=dtypes) 1330 _to_unit = 'erg/s/cm2/AA' 1331 else: 1332 _to_unit = self.master['cunit'][0] 1333 1334 extra_master = np.zeros(len(meas),dtype=self.master.dtype) 1335 1336 for i,(m,e_m,u,p,s,f) in enumerate(zip(meas,e_meas,units,photbands,source,flags)): 1337 photsys,photband = p.split('.') 1338 if filters.is_color(p): 1339 to_unit = 'flux_ratio' 1340 color = True 1341 else: 1342 to_unit = _to_unit 1343 color = False 1344 if e_m>0: 1345 cm,e_cm = conversions.convert(u,to_unit,m,e_m,photband=p) 1346 else: 1347 cm,e_cm = conversions.convert(u,to_unit,m,photband=p),np.nan 1348 eff_wave = filters.eff_wave(p) 1349 extra_master['cmeas'][i] = cm 1350 extra_master['e_cmeas'][i] = e_cm 1351 extra_master['cwave'][i] = eff_wave 1352 extra_master['cunit'][i] = to_unit 1353 extra_master['color'][i] = color 1354 extra_master['include'][i] = True 1355 extra_master['meas'][i] = meas[i] 1356 extra_master['e_meas'][i] = e_meas[i] 1357 extra_master['unit'][i] = units[i] 1358 extra_master['photband'][i] = photbands[i] 1359 extra_master['source'][i] = source[i] 1360 extra_master['flag'][i] = flags[i] 1361 extra_master['phase'][i] = phases[i] 1362 if 'bibcode' in extra_master.dtype.names: 1363 extra_master['bibcode'][i] = '-' 1364 if 'comments' in extra_master.dtype.names: 1365 extra_master['comments'][i] = '-' 1366 1367 logger.info('Original measurements:\n%s'%(photometry2str(self.master))) 1368 logger.info('Appending:\n%s'%(photometry2str(extra_master))) 1369 self.master = fix_master(np.hstack([self.master,extra_master]),e_default=0.1) 1370 #self.master = np.hstack([self.master,extra_array]) 1371 logger.info('Final measurements:\n%s'%(photometry2str(self.master)))
1372 1373 #} 1374 #{ Additional information 1375
1376 - def is_target(self,name):
1377 """ 1378 Check if this SED represents the object `name'. 1379 1380 Purpose: solve alias problems. Maybe the ID is 'HD129929', and you are 1381 checking for "V* V836 Cen", which is the same target. 1382 1383 @param name: object name 1384 @type name: str 1385 @return: True if this object instance represent the target "name". 1386 @rtype: bool 1387 """ 1388 try: 1389 info = sesame.search(name) 1390 oname = info['oname'] 1391 except: 1392 logger.warning('Did not find {:s} on Simbad}'.format(name)) 1393 return False 1394 if oname==self.info['oname']: 1395 return True
1396
1397 - def has_photfile(self):
1398 """ 1399 Check if this SED has a phot file. 1400 1401 @return: True if this object instance has a photfile 1402 @rtype: bool 1403 """ 1404 return os.path.isfile(self.photfile)
1405
1406 - def get_distance_from_plx(self,plx=None,lutz_kelker=True,unit='pc'):
1407 """ 1408 Get the probability density function for the distance, given a parallax. 1409 1410 If no parallax is given, the catalogue value will be used. If that one 1411 is also not available, a ValueError will be raised. 1412 1413 Parallax must be given in mas. 1414 1415 If the parallax is a float without uncertainty, an uncertainty of 0 will 1416 be assumed. 1417 1418 If C{lutz_kelker=True}, a probability density function will be given for 1419 the distance, otherwise the distance is computed from simply inverting 1420 the parallax. 1421 1422 Distance is returned in parsec (pc). 1423 1424 @return: distance 1425 @rtype: (float,float) 1426 """ 1427 #-- we need parallax and galactic position 1428 if plx is None and 'plx' in self.info: 1429 plx = self.info['plx']['v'],self.info['plx']['e'] 1430 elif plx is None: 1431 raise ValueError('distance cannot be computed from parallax (no parallax)') 1432 if not 'galpos' in self.info: 1433 raise ValueError('distance cannot be computed from parallax (no position)') 1434 gal = self.info['galpos'] 1435 #-- if the parallax has an uncertainty and the Lutz-Kelker bias needs 1436 # to be taken into account, compute probability density function 1437 if lutz_kelker and hasattr(plx,'__iter__'): 1438 d = np.logspace(np.log10(0.1),np.log10(25000),100000) 1439 dprob = distance.distprob(d,gal[1],plx) 1440 dprob = dprob / dprob.max() 1441 logger.info('Computed distance to target with Lutz-Kelker bias') 1442 return d,dprob 1443 #-- just invert (with or without error) 1444 else: 1445 dist = (conversions.Unit(plx,'kpc-1')**-1).convert(unit) 1446 logger.info('Computed distance to target via parallax inversion: {:s}'.format(dist)) 1447 return dist.get_value()
1448
1449 - def get_interstellar_reddening(self,distance=None, Rv=3.1):
1450 """ 1451 Under construction. 1452 """ 1453 gal = self.info['galpos'] 1454 if distance is None: 1455 distance = self.get_distance_from_plx(lutz_kelker=False,unit='pc')[0] 1456 output = {} 1457 for model in ['arenou','schlegel','drimmel','marshall']: 1458 ext = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance) 1459 if ext is not None: 1460 output[model] = ext/Rv 1461 return output
1462
1463 - def get_angular_diameter(self):
1464 """ 1465 Under construction. 1466 """ 1467 raise NotImplementedError
1468
1469 - def compute_distance(self,mtype='igrid_search'):
1470 """ 1471 Compute distance from radius and angular diameter (scale factor). 1472 1473 The outcome is added to the C{results} attribute:: 1474 1475 distance = r/sqrt(scale) 1476 1477 This particularly useful when you added constraints from solar-like 1478 oscillations (L{add_constraint_slo}). 1479 1480 @return: distance,uncertainty (pc) 1481 @rtype: (float,float) 1482 """ 1483 grid = self.results[mtype]['grid'] 1484 radius = grid['radius'] 1485 e_radius = grid['e_radius'] 1486 scale = grid['scale'] 1487 e_scale = grid['escale'] 1488 1489 radius = conversions.unumpy.uarray([radius,e_radius]) 1490 scale = conversions.unumpy.uarray([scale,e_scale]) 1491 1492 distance = radius/scale**0.5 1493 distance = conversions.convert('Rsol','pc',distance) 1494 distance,e_distance = conversions.unumpy.nominal_values(distance),\ 1495 conversions.unumpy.std_devs(distance) 1496 self.results[mtype]['grid'] = pl.mlab.rec_append_fields(grid,\ 1497 ['distance','e_distance'],[distance,e_distance]) 1498 d,e_d = distance.mean(),distance.std() 1499 logger.info("Computed distance from R and scale: {0}+/-{1} pc".format(d,e_d)) 1500 return d,e_d
1501 1502 1503 1504 1505 #} 1506
1507 - def generate_ranges(self,start_from='igrid_search',distribution='uniform',**kwargs): #type='single',
1508 """ 1509 Generate sensible search range for each parameter. 1510 """ 1511 limits = {} 1512 exist_previous = (start_from in self.results and 'CI' in self.results[start_from]) 1513 1514 #-- run over all parnames, and generate the parameters ranges for each 1515 # component: 1516 # (1) if a previous parameter search is done and the user has not set 1517 # the parameters range, derive a sensible parameter range from the 1518 # previous results. 1519 # (2) if no previous search is done, use infinite upper and lower bounds 1520 # (3) if ranges are given, use those. Cycle over them, and use the 1521 # strategy from (1) if no parameter ranges are given for a specific 1522 # component. 1523 for par_range_name in kwargs: 1524 #-- when "teffrange" is given, par_range will be the value of the 1525 # keyword "teffrange", and parname will be "teff". 1526 parname = par_range_name.rsplit('range')[0] 1527 parrange = kwargs.get(par_range_name,None) 1528 #-- the three cases described above: 1529 if exist_previous and parrange is None: 1530 lkey = parname+'_l' 1531 ukey = parname+'_u' 1532 #-- if the parameters was not used in the fit, stick to the 1533 # default given value 1534 if not lkey in self.results[start_from]['CI']: 1535 parrange = kwargs[par_range_name] 1536 #-- else we can derive a better parameter range 1537 else: 1538 parrange = (self.results[start_from]['CI'][lkey], 1539 self.results[start_from]['CI'][ukey]) 1540 elif parrange is None: 1541 parrange = (-np.inf,np.inf) 1542 1543 #-- now, the ranges are (lower,upper) for the uniform distribution, 1544 # and (mu,scale) for the normal distribution 1545 if distribution=='normal': 1546 parrange = ((i[1]-i[0])/2.,(i[1]-i[0])/6.) 1547 elif distribution!='uniform': 1548 raise NotImplementedError, 'Any distribution other than "uniform" and "normal" has not been implemented yet!' 1549 limits[par_range_name] = parrange 1550 #-- this returns the kwargs but with filled in limits, and confirms 1551 # the type if it was given, or gives the type when it needed to be derived 1552 logger.info('Parameter ranges calculated starting from {0:s} and using distribution {1:s}.'.format(start_from,distribution)) 1553 return limits
1554 1555 #{ Fitting routines 1556
1557 - def clip_grid(self,mtype='igrid_search',CI_limit=None):
1558 """ 1559 Clip grid on CI limit, to save memory. 1560 1561 @param mtype: type or results to clip 1562 @type mtype: str 1563 @param CI_limit: confidence limit to clip on 1564 @type CI_limit: float (between 0 (clips everything) and 1 (clips nothing)) 1565 """ 1566 if CI_limit is None: 1567 CI_limit = self.CI_limit 1568 new_grid = self.results[mtype]['grid'] 1569 new_grid = new_grid[new_grid['ci_red']<=CI_limit] 1570 self.results[mtype]['grid'] = new_grid 1571 logger.info("Clipped grid at {:.6f}%".format(CI_limit*100))
1572
1573 - def collect_results(self, grid=None, fitresults=None, mtype='igrid_search', selfact='chisq', **kwargs):
1574 """creates record array(s) of all fit results and removes the failures""" 1575 1576 gridnames = sorted(grid.keys()) 1577 fitnames = sorted(fitresults.keys()) 1578 pardtypes = [(name,'f8') for name in gridnames]+[(name,'f8') for name in fitnames] 1579 pararrays = [grid[name] for name in gridnames]+[fitresults[name] for name in fitnames] 1580 1581 grid_results = np.rec.fromarrays(pararrays,dtype=pardtypes) 1582 1583 #-- exclude failures 1584 failures = np.isnan(grid_results[selfact]) 1585 if sum(failures): 1586 logger.info('Excluded {0} failed results (nan)'.format(sum(failures))) 1587 grid_results = grid_results[~failures] 1588 1589 #-- make room for chi2 statistics 1590 grid_results = mlab.rec_append_fields(grid_results, 'ci_raw', np.zeros(len(grid_results))) 1591 grid_results = mlab.rec_append_fields(grid_results, 'ci_red', np.zeros(len(grid_results))) 1592 1593 #-- take the previous results into account if they exist: 1594 if not mtype in self.results: 1595 self.results[mtype] = {} 1596 elif 'grid' in self.results[mtype]: 1597 logger.info('Appending previous results ({:d}+{:d})'.format(len(self.results[mtype]['grid']),len(grid_results))) 1598 ex_names = grid_results.dtype.names 1599 ex_grid = np.rec.fromarrays([self.results[mtype]['grid'][exname] for exname in ex_names], 1600 names=ex_names) 1601 grid_results = np.hstack([ex_grid,grid_results]) 1602 1603 #-- inverse sort according to chisq: this means the best models are at the end 1604 # (mainly for plotting reasons, so that the best models are on top). 1605 sa = np.argsort(grid_results[selfact])[::-1] 1606 grid_results = grid_results[sa] 1607 1608 self.results[mtype]['grid'] = grid_results 1609 1610 logger.info('Total of %d grid points, best chisq=%s'%(len(grid_results), grid_results[-1]))
1611
1612 - def calculateDF(self, **ranges):
1613 """ Calculates the degrees of freedom from the given ranges""" 1614 1615 df, df_info = 1, ['theta'] 1616 for range_name in ranges: 1617 if re.search('ebv\d?range$', range_name): 1618 if not 'ebv' in df_info: 1619 df += 1 1620 df_info.append('ebv') 1621 else: 1622 continue 1623 elif not np.allclose(ranges[range_name][0],ranges[range_name][1]): 1624 df += 1 1625 df_info.append(range_name[0:-5]) 1626 df_info.sort() 1627 logger.info('Degrees of freedom = {} ({})'.format(df,', '.join(df_info))) 1628 1629 return df, df_info
1630
1631 - def calculate_statistics(self, df=None, ranges=None, mtype='igrid_search', selfact='chisq'):
1632 """ 1633 Calculates the Chi2 and reduced Chi2 based on nr of observations and degrees of 1634 freedom. If df is not provided, tries to calculate df from the fitting ranges. 1635 """ 1636 1637 #-- If nessessary calculate degrees of freedom from the ranges 1638 if df == None and ranges != None: 1639 df,df_info = self.calculateDF(**ranges) 1640 elif df == None: 1641 logger.warning('Cannot compute degrees of freedom!!! CHI2 might not make sense. (using df=5)') 1642 df = 5 1643 1644 #-- Do the statistics 1645 N = sum(self.master['include']) 1646 k = N-df 1647 if k<=0: 1648 logger.warning('Not enough data to compute CHI2: it will not make sense') 1649 k = 1 1650 logger.info('Calculated statistics based on df={0} and Nobs={1}'.format(df,N)) 1651 1652 #-- Rescale if needed and compute confidence intervals 1653 results = self.results[mtype]['grid'] 1654 factor = max(results[selfact][-1]/k,1) 1655 logger.warning('CHI2 rescaling factor equals %g'%(factor)) 1656 results['ci_raw'] = scipy.stats.distributions.chi2.cdf(results[selfact],k) 1657 results['ci_red'] = scipy.stats.distributions.chi2.cdf(results[selfact]/factor,k) 1658 1659 #-- Store the results 1660 self.results[mtype]['grid'] = results 1661 self.results[mtype]['factor'] = factor
1662
1663 - def calculate_confidence_intervals(self,mtype='igrid_search',chi2_type='red',CI_limit=None):
1664 """ 1665 Compute confidence interval of all columns in the results grid. 1666 1667 @param mtype: type of results to compute confidence intervals of 1668 @type mtype: str 1669 @param chi2_type: type of chi2 (raw or reduced) 1670 @type chi2_type: str ('raw' or 'red') 1671 @param CI_limit: confidence limit to clip on 1672 @type CI_limit: float (between 0 (clips everything) and 1 (clips nothing)) 1673 """ 1674 #-- get some info 1675 grid_results = self.results[mtype]['grid'] 1676 if CI_limit is None or CI_limit > 1.0: 1677 CI_limit = self.CI_limit 1678 #-- the chi2 grid is ordered: where is the value closest to the CI limit? 1679 region = self.results[mtype]['grid']['ci_'+chi2_type]<=CI_limit 1680 if sum(region)==0: 1681 raise ValueError("No models in the sample have a chi2_{} below the limit {}. Try increasing the number of models, increasing the CI_limit or choose different photometry.".format(chi2_type,CI_limit)) 1682 #-- now compute the confidence intervals 1683 cilow, cihigh, value = [],[],[] 1684 for name in grid_results.dtype.names: 1685 cilow.append(grid_results[name][region].min()) 1686 value.append(grid_results[name][-1]) 1687 cihigh.append(grid_results[name][region].max()) 1688 1689 return dict(name=grid_results.dtype.names, value=value, cilow=cilow, cihigh=cihigh)
1690
1691 - def store_confidence_intervals(self, mtype='igrid_search', name=None, value=None, cilow=None, \ 1692 cihigh=None, **kwargs):
1693 """ 1694 Saves the provided confidence intervals in the result dictionary of self. 1695 The provided confidence intervals will be saved in a dictionary: 1696 self.results[mtype]['CI'] with as key for the value the name, for cilow: 1697 name_l and for cihigh: name_u. 1698 1699 @param mtype: the search type 1700 @type mtype: str 1701 @param name: names of the parameters 1702 @type name: array 1703 @param value: best fit values 1704 @type value: array 1705 @param cilow: lower CI limit 1706 @type cilow: array 1707 @param cihigh: upper CI limit 1708 @type cihigh: array 1709 """ 1710 if not 'CI' in self.results[mtype]: 1711 self.results[mtype]['CI'] = {} 1712 for name, val, cil, cih in zip(name, value, cilow, cihigh): 1713 self.results[mtype]['CI'][name+'_l'] = cil 1714 self.results[mtype]['CI'][name] = val 1715 self.results[mtype]['CI'][name+'_u'] = cih 1716 try: 1717 logger.info('CI %s: %g <= %g <= %g'%(name,cil,val,cih)) 1718 except Exception: 1719 logger.info('CI %s: nan <= %g <= nan'%(name,val))
1720 1721
1722 - def igrid_search(self,points=100000,teffrange=None,loggrange=None,ebvrange=None, 1723 zrange=(0,0),rvrange=(3.1,3.1),vradrange=(0,0), 1724 df=None,CI_limit=None,set_model=True,exc_interpolpar=[],**kwargs):
1725 """ 1726 Fit fundamental parameters using a (pre-integrated) grid search. 1727 1728 If called consecutively, the ranges will be set to the CI_limit of previous 1729 estimations, unless set explicitly. 1730 1731 If called for the first time, the ranges will be +/- np.inf by defaults, 1732 unless set explicitly. 1733 1734 There is an option to exclude a certain parameter from the interpolation. This is done by 1735 including it in a list attached to the keyword 'exc_interpolpar', e.g. exc_interpolpar = ['z']. 1736 """ 1737 if CI_limit is None or CI_limit > 1.0: 1738 CI_limit = self.CI_limit 1739 #-- set defaults limits 1740 ranges = self.generate_ranges(teffrange=teffrange,\ 1741 loggrange=loggrange,ebvrange=ebvrange,\ 1742 zrange=zrange,rvrange=rvrange,vradrange=vradrange) 1743 #-- grid search on all include data: extract the best CHI2 1744 include_grid = self.master['include'] 1745 logger.info('The following measurements are included in the fitting process:\n%s'%(photometry2str(self.master[include_grid]))) 1746 #-- an initial check on the conditions for exclusion of a parameter from interpolation 1747 if not exc_interpolpar == None: 1748 for var in exc_interpolpar: 1749 if ranges[var+'range'][0] != ranges[var+'range'][1]: 1750 raise IOError, 'Exclusion of parameters from interpolation is only possible if the lower and upper ranges of those ranges are equal to an actual grid point.' 1751 #-- build the grid, run over the grid and calculate the CHI2 1752 pars = fit.generate_grid_pix(self.master['photband'][include_grid],points=points,**ranges) 1753 pars['exc_interpolpar'] = exc_interpolpar 1754 #for var in ['teff','logg','ebv','z','rv','vrad']: 1755 #if ranges[var+'range'][0] == ranges[var+'range'][1]: 1756 #insertpars[var] = ranges[var+'range'][0] 1757 1758 chisqs,scales,e_scales,lumis = fit.igrid_search_pix(self.master['cmeas'][include_grid], 1759 self.master['e_cmeas'][include_grid], 1760 self.master['photband'][include_grid],**pars) 1761 fitres = dict(chisq=chisqs, scale=scales, escale=e_scales, labs=lumis) 1762 pars.pop('exc_interpolpar') 1763 #-- collect all the results in a record array 1764 self.collect_results(grid=pars, fitresults=fitres, mtype='igrid_search') 1765 #-- do the statistics 1766 self.calculate_statistics(df=df, ranges=ranges, mtype='igrid_search') 1767 #-- compute the confidence intervals 1768 ci = self.calculate_confidence_intervals(mtype='igrid_search',chi2_type='red',CI_limit=CI_limit) 1769 self.store_confidence_intervals(mtype='igrid_search', **ci) 1770 #-- remember the best model 1771 if set_model: self.set_best_model()
1772 - def generate_fit_param(self, start_from='igrid_search', **pars):
1773 """ 1774 generates a dictionary with parameter information that can be handled by fit.iminimize 1775 """ 1776 result = dict() 1777 for key in pars.keys(): 1778 if re.search("range$", key): 1779 result[key[0:-5]+"_min"] = pars[key][0] 1780 result[key[0:-5]+"_max"] = pars[key][1] 1781 #-- if min == max: fix the parameter and force value = min 1782 if np.allclose([pars[key][0]], [pars[key][1]]): 1783 result[key[0:-5]+"_vary"] = False 1784 result[key[0:-5]+"_value"] = pars[key][0] 1785 else: 1786 result[key[0:-5]+"_vary"] = True 1787 else: 1788 #-- Store the startvalue. If None, look in start_from for a new startvalue. 1789 if pars[key] != None and not key+"_value" in result: 1790 result[key+"_value"] = pars[key] 1791 elif pars[key] == None and not key+"_value" in result: 1792 result[key+"_value"] = self.results[start_from]['CI'][key] 1793 1794 return result
1795
1796 - def calculate_iminimize_CI(self, mtype='iminimize', CI_limit=0.66, **kwargs):
1797 """Compute and store confidence intervals on the parameters of an 'iminimize' fit.""" 1798 #-- Get the best fit parameters and ranges 1799 pars = {} 1800 skip = ['scale', 'chisq', 'nfev', 'labs', 'ci_raw', 'ci_red', 'scale', 'escale'] 1801 for name in self.results[mtype]['CI'].keys(): 1802 name = re.sub('_[u|l]$', '', name) 1803 if not name in pars and not name in skip: 1804 pars[name] = self.results[mtype]['CI'][name] 1805 pars[name+"range"] = [self.results[mtype]['CI'][name+"_l"],\ 1806 self.results[mtype]['CI'][name+"_u"]] 1807 pars.update(kwargs) 1808 pars = self.generate_fit_param(**pars) 1809 1810 #-- calculate the confidence intervals 1811 include_grid = self.master['include'] 1812 ci = fit.calculate_iminimize_CI(self.master['cmeas'][include_grid], 1813 self.master['e_cmeas'][include_grid], 1814 self.master['photband'][include_grid], 1815 CI_limit=CI_limit,constraints=self.constraints, **pars) 1816 1817 #-- Add the scale factor 1818 ci['name'] = np.append(ci['name'], 'scale') 1819 ci['value'] = np.append(ci['value'], self.results[mtype]['grid']['scale'][-1]) 1820 ci['cilow'] = np.append(ci['cilow'], min(self.results[mtype]['grid']['scale'])) 1821 ci['cihigh'] = np.append(ci['cihigh'], max(self.results[mtype]['grid']['scale'])) 1822 1823 logger.info('Calculated %s%% confidence intervals for all parameters'%(CI_limit)) 1824 1825 self.store_confidence_intervals(mtype='iminimize', **ci)
1826
1827 - def calculate_iminimize_CI2D(self,xpar, ypar, mtype='iminimize', limits=None, res=10, **kwargs):
1828 1829 #-- get the best fitting parameters 1830 pars = {} 1831 skip = ['scale', 'chisq', 'nfev', 'labs', 'ci_raw', 'ci_red', 'scale', 'escale'] 1832 for name in self.results[mtype]['CI'].keys(): 1833 name = re.sub('_[u|l]$', '', name) 1834 if not name in pars and not name in skip: 1835 pars[name] = self.results[mtype]['CI'][name] 1836 pars[name+"range"] = [self.results[mtype]['CI'][name+"_l"],\ 1837 self.results[mtype]['CI'][name+"_u"]] 1838 pars.update(kwargs) 1839 pars = self.generate_fit_param(**pars) 1840 1841 logger.info('Calculating 2D confidence intervalls for %s--%s'%(xpar,ypar)) 1842 1843 #-- calculate the confidence intervalls 1844 include_grid = self.master['include'] 1845 x,y,chi2, raw, red = fit.calculate_iminimize_CI2D(self.master['cmeas'][include_grid], 1846 self.master['e_cmeas'][include_grid], 1847 self.master['photband'][include_grid], 1848 xpar, ypar, limits=limits, res=res, 1849 constraints=self.constraints, **pars) 1850 1851 #-- store the CI 1852 if not 'CI2D' in self.results[mtype]: self.results[mtype]['CI2D'] = {} 1853 self.results[mtype]['CI2D'][xpar+"-"+ypar] = dict(x=x, y=y, ci_chi2=chi2,\ 1854 ci_raw=raw, ci_red=red)
1855 1856
1857 - def _get_imin_ci(self, mtype='iminimize',**ranges):
1858 """ returns ci information for store_confidence_intervals """ 1859 names, values, cil, cih = [],[],[],[] 1860 for key in ranges.keys(): 1861 name = key[0:-5] 1862 names.append(name) 1863 values.append(self.results[mtype]['grid'][name][-1]) 1864 cil.append(ranges[key][0]) 1865 cih.append(ranges[key][1]) 1866 names.append('scale') 1867 values.append(self.results[mtype]['grid']['scale'][-1]) 1868 cil.append(min(self.results[mtype]['grid']['scale'])) 1869 cih.append(max(self.results[mtype]['grid']['scale'])) 1870 return dict(name=names, value=values, cilow=cil, cihigh=cih)
1871
1872 - def iminimize(self, teff=None, logg=None, ebv=None, z=0, rv=3.1, vrad=0, teffrange=None, 1873 loggrange=None, ebvrange=None, zrange=None, rvrange=None, vradrange=None, 1874 points=None, distance=None, start_from='igrid_search',df=None, CI_limit=None, 1875 calc_ci=False, set_model=True, **kwargs):
1876 """ 1877 Basic minimizer method for SED fitting implemented using the lmfit library from sigproc.fit 1878 """ 1879 1880 #-- set defaults limits and starting values 1881 ranges = self.generate_ranges(teffrange=teffrange,loggrange=loggrange,\ 1882 ebvrange=ebvrange,zrange=zrange,rvrange=rvrange,\ 1883 vradrange=vradrange, start_from=start_from) 1884 pars = self.generate_fit_param(teff=teff, logg=logg, ebv=ebv, z=z, \ 1885 rv=rv, vrad=vrad, start_from=start_from, **ranges) 1886 fitkws = dict(distance=distance) if distance != None else dict() 1887 1888 #-- grid search on all include data: extract the best CHI2 1889 include_grid = self.master['include'] 1890 logger.info('The following measurements are included in the fitting process:\n%s'% \ 1891 (photometry2str(self.master[include_grid]))) 1892 1893 #-- pass all ranges and starting values to the fitter 1894 grid, chisq, nfev, scale, lumis = fit.iminimize(self.master['cmeas'][include_grid], 1895 self.master['e_cmeas'][include_grid], 1896 self.master['photband'][include_grid], 1897 fitkws=fitkws, points=points,**pars) 1898 1899 logger.info('Minimizer Succes with startpoints=%s, chi2=%s, nfev=%s'%(len(chisq), chisq[0], nfev[0])) 1900 #-- handle the results 1901 fitres = dict(chisq=chisq, nfev=nfev, scale=scale, labs=lumis) 1902 self.collect_results(grid=grid, fitresults=fitres, mtype='iminimize', selfact='chisq') 1903 1904 #-- Do the statistics 1905 self.calculate_statistics(df=5, ranges=ranges, mtype='iminimize', selfact='chisq') 1906 1907 #-- Store the confidence intervals 1908 ci = self._get_imin_ci(mtype='iminimize',**ranges) 1909 self.store_confidence_intervals(mtype='iminimize', **ci) 1910 if calc_ci: 1911 self.calculate_iminimize_CI(mtype='iminimize', CI_limit=CI_limit) 1912 1913 #-- remember the best model 1914 if set_model: self.set_best_model(mtype='iminimize')
1915 1916
1917 - def imc(self,teffrange=None,loggrange=None,ebvrange=None,zrange=None,start_from='igrid_search',\ 1918 distribution='uniform',points=None,fitmethod='fmin',disturb=True):
1919 1920 limits = self.generate_ranges(teffrange=teffrange,loggrange=loggrange,\ 1921 ebvrange=ebvrange,zrange=zrange,distribution=distribution,\ 1922 start_from=start_from) 1923 1924 #-- grid search on all include data: extract the best CHI2 1925 include = self.master['include'] 1926 meas = self.master['cmeas'][include] 1927 if disturb: 1928 emeas = self.master['e_cmeas'][include] 1929 else: 1930 emeas = self.master['e_cmeas'][include]*1e-6 1931 photbands = self.master['photband'][include] 1932 logger.info('The following measurements are included in the fitting process:\n%s'%(photometry2str(self.master[include]))) 1933 1934 #-- generate initial guesses 1935 teffs,loggs,ebvs,zs,radii = fit.generate_grid(self.master['photband'][include],type='single',points=points+25,**limits) 1936 NrPoints = len(teffs)>points and points or len(teffs) 1937 firstoutput = np.zeros((len(teffs)-NrPoints,9)) 1938 output = np.zeros((NrPoints,9)) 1939 1940 #-- fit the original data a number of times 1941 for i,(teff,logg,ebv,z) in enumerate(zip(teffs[NrPoints:],loggs[NrPoints:],ebvs[NrPoints:],zs[NrPoints:])): 1942 try: 1943 fittedpars,warnflag = fit.iminimize2(meas,emeas,photbands,teff=teff,logg=logg,ebv=ebv,z=z,fitmethod=fitmethod) 1944 firstoutput[i,1:] = fittedpars 1945 firstoutput[i,0] = warnflag 1946 except IOError: 1947 firstoutput[i,0] = 3 1948 1949 logger.info("{0}/{1} fits on original data failed (max func call)".format(sum(firstoutput[:,0]==1),firstoutput.shape[0])) 1950 logger.info("{0}/{1} fits on original failed (max iter)".format(sum(firstoutput[:,0]==2),firstoutput.shape[0])) 1951 logger.info("{0}/{1} fits on original data failed (outside of grid)".format(sum(firstoutput[:,0]==3),firstoutput.shape[0])) 1952 1953 #-- retrieve the best fitting result and make it the first entry of output 1954 keep = (firstoutput[:,0]==0) & (firstoutput[:,1]>0) 1955 best = firstoutput[keep,-2].argmin() 1956 output[-1,:] = firstoutput[keep][best,:] 1957 1958 # calculate the factor with which to multiply the scale 1959 #factor = np.sqrt(output[-1,5]/len(meas)) 1960 #print factor 1961 1962 #-- now do the actual Monte Carlo simulation 1963 for i,(teff,logg,ebv,z) in enumerate(zip(teffs[:NrPoints-1],loggs[:NrPoints-1],ebvs[:NrPoints-1],zs[:NrPoints-1])): 1964 newmeas = meas + np.random.normal(scale=emeas) #*factor) 1965 try: 1966 fittedpars,warnflag = fit.iminimize2(newmeas,emeas,photbands,teff,logg,ebv,z,fitmethod=fitmethod) 1967 output[i,1:] = fittedpars 1968 output[i,0] = warnflag 1969 except IOError: 1970 output[i,0] = 3 1971 1972 logger.info("{0}/{1} MC simulations failed (max func call)".format(sum(output[:,0]==1),NrPoints)) 1973 logger.info("{0}/{1} MC simulations failed (max iter)".format(sum(output[:,0]==2),NrPoints)) 1974 logger.info("{0}/{1} MC simulations failed (outside of grid)".format(sum(output[:,0]==3),NrPoints)) 1975 1976 #-- remove nonsense results 1977 keep = (output[:,0]==0) & (output[:,1]>0) 1978 output = output[keep] 1979 output = np.rec.fromarrays(output[:,1:-1].T,names=['teff','logg','ebv','z','labs','chisq','scale']) 1980 #-- derive confidence intervals and median values 1981 #print np.median(output[:,1:],axis=0) 1982 1983 self.results['imc'] = {} 1984 self.results['imc']['CI'] = {} 1985 self.results['imc']['grid'] = output 1986 CI_limit = 0.95 1987 for name in output.dtype.names: 1988 sortarr = np.sort(output[name]) 1989 trimarr = scipy.stats.trimboth(sortarr,(1-CI_limit)/2.) 
# trim 2.5% of top and bottom, to arrive at 95% CI 1990 self.results['imc']['CI'][name+'_l'] = trimarr.min() 1991 self.results['imc']['CI'][name] = output[name][-1]#np.median(output[name]) 1992 self.results['imc']['CI'][name+'_u'] = trimarr.max() 1993 logger.info('%i%% CI %s: %g <= %g <= %g'%(CI_limit*100,name,self.results['imc']['CI'][name+'_l'], 1994 self.results['imc']['CI'][name], 1995 self.results['imc']['CI'][name+'_u'])) 1996 self.set_best_model(mtype='imc')
1997 1998 1999 #} 2000 2001 #{ Interfaces 2002
2003 - def clear(self):
2004 """ 2005 Clear the results. 2006 """ 2007 self.results = {}
2008
2009 - def set_best_model(self,mtype='igrid_search',law='fitzpatrick2004',**kwargs):
2010 """ 2011 Get reddenend and unreddened model 2012 """ 2013 logger.info('Interpolating approximate full SED of best model') 2014 2015 #-- synthetic flux 2016 include = self.master['include'] 2017 synflux = np.zeros(len(self.master['photband'])) 2018 keep = (self.master['cwave']<1.6e6) | np.isnan(self.master['cwave']) 2019 keep = keep & include 2020 2021 if ('igrid_search' in mtype) | ('iminimize' in mtype): #mtype in ['igrid_search', 'iminimize']: 2022 #-- get the metallicity right 2023 files = model.get_file(z='*') 2024 if type(files) == str: files = [files] #files needs to be a list! 2025 metals = np.array([pf.getheader(ff)['Z'] for ff in files]) 2026 metals = metals[np.argmin(np.abs(metals-self.results[mtype]['CI']['z']))] 2027 scale = self.results[mtype]['CI']['scale'] 2028 #-- get (approximated) reddened and unreddened model 2029 wave,flux = model.get_table(teff=self.results[mtype]['CI']['teff'], 2030 logg=self.results[mtype]['CI']['logg'], 2031 ebv=self.results[mtype]['CI']['ebv'], 2032 z=metals, 2033 law=law) 2034 wave_ur,flux_ur = model.get_table(teff=self.results[mtype]['CI']['teff'], 2035 logg=self.results[mtype]['CI']['logg'], 2036 ebv=0, 2037 z=metals, 2038 law=law) 2039 #-- get synthetic photometry 2040 synflux_,Labs = model.get_itable(teff=self.results[mtype]['CI']['teff'], 2041 logg=self.results[mtype]['CI']['logg'], 2042 ebv=self.results[mtype]['CI']['ebv'], 2043 z=self.results[mtype]['CI']['z'], 2044 photbands=self.master['photband'][keep]) 2045 2046 flux,flux_ur = flux*scale,flux_ur*scale 2047 2048 synflux[keep] = synflux_ 2049 2050 #synflux,Labs = model.get_itable(teff=self.results[mtype]['CI']['teff'], 2051 # logg=self.results[mtype]['CI']['logg'], 2052 # ebv=self.results[mtype]['CI']['ebv'], 2053 # photbands=self.master['photband']) 2054 synflux[~self.master['color']] *= scale 2055 chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2 2056 #-- calculate effective wavelengths of the photometric bands via the model 2057 # values 2058 eff_waves = filters.eff_wave(self.master['photband'],model=(wave,flux)) 2059 self.results[mtype]['model'] = wave,flux,flux_ur 2060 self.results[mtype]['synflux'] = eff_waves,synflux,self.master['photband'] 2061 self.results[mtype]['chi2'] = chi2
2062
2063 - def set_model(self,wave,flux,label='manual',unique_phase=None):
2064 """ 2065 Manually set the best SED model. 2066 2067 This is particularly useful when working with calibrators for which there 2068 is an observed SED. 2069 2070 The label will be used to store the model in the C{results} attribute. 2071 2072 @param wave: wavelength array (angstrom) 2073 @type wave: ndarray 2074 @param flux: flux array (erg/s/cm2/AA) 2075 @type flux: ndarray 2076 @param label: key used to store the model and synthetic photometry in C{results} 2077 @type label:s str 2078 """ 2079 #-- necessary information 2080 photbands = self.master['photband'] 2081 is_color = self.master['color'] 2082 if unique_phase != None: 2083 include = self.master['include'] & (self.master['phase'] == unique_phase) 2084 else: 2085 include = self.master['include'] 2086 synflux = np.zeros(len(photbands)) 2087 2088 #-- compute synthetic photometry 2089 synflux[(~is_color) & include] = model.synthetic_flux(wave,flux,photbands[(~is_color) & include]) 2090 synflux[is_color & include] = model.synthetic_color(wave,flux,photbands[is_color & include]) 2091 chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2 2092 eff_waves = filters.eff_wave(photbands) 2093 2094 if not label in self.results: 2095 self.results[label] = {} 2096 self.results[label]['model'] = wave,flux,flux 2097 self.results[label]['synflux'] = eff_waves,synflux,photbands 2098 self.results[label]['chi2'] = chi2 2099 logger.debug('Stored model SED in {}'.format(label))
2100
2101 - def get_model(self,label='igrid_search'):
2102 """ 2103 Retrieve the best SED model. 2104 2105 B{Warning}: the grid search interpolation is also done with interpolation 2106 in metallicity, while this is not the case for the best full SED model. 2107 Interpolation of the full SED is only done in teff/logg, and the 2108 metallicity is chosen to be equal to the closest grid point. On the 2109 other hand, while the reddening value is interpolated in the grid search, 2110 this is not the case for the best model (the model is simply reddened 2111 according the found best value). So don't be surprised if you find 2112 differences between the values of C{self.results[label]['synflux']}, 2113 which are the value sfor the interpolated photometric points of the 2114 best model, and the synthetic photometry obtained by manual integration 2115 of the returned full SED model. Those differences should be incredible 2116 small and mainly due to the metallicity. 2117 """ 2118 wave,flux,urflux = self.results[label]['model'] 2119 return wave,flux,urflux
2120
2121 - def sample_gridsearch(self,mtype='igrid_search',NrSamples=1,df=None,selfact='chisq'):
2122 """ 2123 Retrieve an element from the results of a grid search according to the derived probability. 2124 2125 An array of length "NrSamples" containing 'grid-indices' is returned, so the actual parameter values 2126 of the corresponding model can be retrieved from the results dictionary. 2127 2128 @param NrSamples: the number of samples you wish to draw 2129 @type NrSamples: int 2130 """ 2131 #-- this function is only checked to work with the results of an igrid_search 2132 if not 'igrid_search' in mtype: 2133 return None 2134 2135 ranges = self.generate_ranges(teffrange=None,loggrange=None,ebvrange=None,\ 2136 zrange=None,rvrange=None,vradrange=None,start_from=mtype) 2137 2138 #-- If nessessary calculate degrees of freedom from the ranges 2139 if df == None and ranges != None: 2140 df,df_info = self.calculateDF(**ranges) 2141 elif df == None: 2142 logger.warning('Cannot compute degrees of freedom!!! CHI2 might not make sense. (using df=5)') 2143 df = 5 2144 2145 #-- Do the statistics 2146 N = sum(self.master['include']) 2147 k = N-df 2148 if k<=0: 2149 logger.warning('Not enough data to compute CHI2: it will not make sense') 2150 k = 1 2151 logger.info('Statistics based on df={0} and Nobs={1}'.format(df,N)) 2152 factor = max(self.results[mtype]['grid'][selfact][-1]/k,1) 2153 2154 #-- Compute the pdf and cdf 2155 probdensfunc = scipy.stats.distributions.chi2.pdf(self.results[mtype]['grid'][selfact]/factor,k) 2156 cumuldensfunc = probdensfunc.cumsum() 2157 2158 #-- Uniformly sample the cdf, to get a sampling according to the pdf 2159 sample = pl.uniform(cumuldensfunc[0],cumuldensfunc[-1],NrSamples) 2160 indices = np.zeros(NrSamples,int) 2161 for i in xrange(NrSamples): 2162 indices[i] = (abs(cumuldensfunc-sample[i])).argmin() 2163 return indices
2164
2165 - def chi2(self,select=None,reduced=False,label='igrid_search'):
2166 """ 2167 Calculate chi2 of best model. 2168 2169 TEMPORARY!!! API WILL CHANGE!!! 2170 2171 This function is not called anywhere in the builder. 2172 """ 2173 #-- select photometry 2174 master = self.master.copy() 2175 keep = np.ones(len(master),bool) 2176 if isinstance(select,dict): 2177 for key in select: 2178 keep = keep & (master[key]==select[key]) 2179 master = master[keep] 2180 #-- compute synthetic phtoometry 2181 photbands = master['photband'] 2182 is_color = master['color'] 2183 synflux = np.zeros(len(photbands)) 2184 2185 wave,flux,urflux = self.get_model(label=label) 2186 synflux[~is_color] = model.synthetic_flux(wave,flux,photbands[~is_color]) 2187 synflux[is_color] = model.synthetic_color(wave,flux,photbands[is_color]) 2188 x2 = (master['cmeas']-synflux)**2/master['e_cmeas']**2 2189 x2 = x2.mean() 2190 return x2
2191 2192 #} 2193 #{ Add constraints 2194
2195 - def add_constraint_distance(self,distance=None,mtype='igrid_search',**kwargs):
2196 """ 2197 Use the distance to compute additional information. 2198 2199 Compute radii, absolute luminosities and masses, and add them to the 2200 results. 2201 2202 Extra kwargs go to L{get_distance_from_plx}. 2203 2204 B{Warning:} after calling this function, the C{labs} column in the grid 2205 is actually absolutely calibrated and reflects the true absolute 2206 luminosity instead of the absolute luminosity assuming 1 solar radius. 2207 2208 @param distance: distance in solar units and error 2209 @type distance: tuple (float,float) 2210 @param mtype: type of results to add the information to 2211 @type mtype: str 2212 """ 2213 if distance is None: 2214 kwargs['lutz_kelker'] = False # we can't handle asymmetric error bars 2215 kwargs['unit'] = 'Rsol' 2216 distance = self.get_distance_from_plx(**kwargs) 2217 2218 #-- compute the radius, absolute luminosity and mass: note that there is 2219 # an uncertainty on the scaling factor *and* on the distance! 2220 scale = conversions.unumpy.uarray([self.results[mtype]['grid']['scale'],\ 2221 self.results[mtype]['grid']['escale']]) 2222 distance = conversions.ufloat(distance) 2223 radius = distance*np.sqrt(self.results[mtype]['grid']['scale']) # in Rsol 2224 labs = self.results[mtype]['grid']['labs']*radius**2 2225 mass = conversions.derive_mass((self.results[mtype]['grid']['logg'],'[cm/s2]'),\ 2226 (radius,'Rsol'),unit='Msol') 2227 2228 #-- store the results in the grid 2229 mass,e_mass = conversions.unumpy.nominal_values(mass),\ 2230 conversions.unumpy.std_devs(mass) 2231 radius,e_radius = conversions.unumpy.nominal_values(radius),\ 2232 conversions.unumpy.std_devs(radius) 2233 labs,e_labs = conversions.unumpy.nominal_values(labs),\ 2234 conversions.unumpy.std_devs(labs) 2235 self.results[mtype]['grid']['labs'] = labs # Labs is already there, overwrite 2236 #-- check if the others are already in there or not: 2237 labels,data = ['e_labs','radius','e_radius','mass','e_mass'],\ 2238 [e_labs,radius,e_radius,mass,e_mass] 2239 if 'e_labs' in self.results[mtype]['grid'].dtype.names: 2240 for idata,ilabel in zip(data,labels): 2241 self.results[mtype]['grid'][ilabel] = idata 2242 else: 2243 self.results[mtype]['grid'] = pl.mlab.rec_append_fields(self.results[mtype]['grid'],\ 2244 labels,data) 2245 2246 #-- update the confidence intervals 2247 self.calculate_confidence_intervals(mtype=mtype) 2248 logger.info('Added constraint: distance (improved luminosity, added info on radius and mass)')
2249
2250 - def add_constraint_slo(self,numax,Deltanu0,mtype='igrid_search',chi2_type='red'):
2251 """ 2252 Use diagnostics from solar-like oscillations to put additional constraints on the parameters. 2253 2254 If the results are constrained with the distance before L{add_constraint_distance}, 2255 then these results are combined with the SLO constraints. 2256 2257 @param numax: frequency of maximum amplitude 2258 @type numax: 3-tuple (value,error,unit) 2259 @param Deltanu0: large separation (l=0) 2260 @type Deltanu0: 3-tuple (value,error,unit) 2261 @param chi2_type: type of chi2 (raw or reduced) 2262 @type chi2_type: str ('raw' or 'red') 2263 """ 2264 grid = self.results[mtype]['grid'] 2265 #-- we need the teffs, so that we can compute the logg and radius using 2266 # the solar-like oscillations information. 2267 teff = grid['teff'] 2268 logg = grid['logg'] 2269 pvalues = [1-grid['ci_'+chi2_type]] 2270 names = ['ci_'+chi2_type+'_phot'] 2271 2272 #-- we *always* have a logg, that's the way SEDs work: 2273 logg_slo = conversions.derive_logg_slo((teff,'K'),numax) 2274 logg_slo,e_logg_slo = conversions.unumpy.nominal_values(logg_slo),\ 2275 conversions.unumpy.std_devs(logg_slo) 2276 # compute the probabilities: scale to standard normal 2277 logg_prob = scipy.stats.distributions.norm.sf(abs(logg_slo-logg)/e_logg_slo) 2278 2279 pvalues.append(logg_prob) 2280 names.append('ci_logg_slo') 2281 2282 #-- but we only sometimes have a radius (viz. when the distance is 2283 # known). There is an uncertainty on that radius! 2284 radius_slo = conversions.derive_radius_slo(numax,Deltanu0,(teff,'K'),unit='Rsol') 2285 radius_slo,e_radius_slo = conversions.unumpy.nominal_values(radius_slo),\ 2286 conversions.unumpy.std_devs(radius_slo) 2287 if 'radius' in grid.dtype.names: 2288 radius = grid['radius'] 2289 e_radius = grid['e_radius'] 2290 #total_error = np.sqrt( (e_radius_slo**2+e_radius**2)) 2291 total_error = np.sqrt( (e_radius_slo**2+e_radius**2)/2. + (radius-radius_slo)**2/4.) 2292 radius_prob = scipy.stats.distributions.norm.sf(abs(radius_slo-radius)/total_error) 2293 pvalues.append(radius_prob) 2294 names.append('ci_radius_slo') 2295 #-- combined standard deviation and mean for two populations with 2296 # possibly zero intersection (wiki: standard deviation) 2297 #total_error = np.sqrt( (e_radius_slo**2+e_radius**2)/2. + (radius-radius_slo)**2/4.) 2298 total_mean = (radius+radius_slo)/2. 
2299 grid['radius'] = total_mean 2300 grid['e_radius'] = total_error 2301 logger.info('Added contraint: combined existing radius estimate with slo estimate') 2302 # otherwise, we just add info on the radius 2303 else: 2304 labs = grid['labs']*conversions.unumpy.uarray([radius_slo,e_radius_slo])**2 2305 labs,e_labs = conversions.unumpy.nominal_values(labs),\ 2306 conversions.unumpy.std_devs(labs) 2307 grid['labs'] = labs 2308 mass = conversions.derive_mass((self.results[mtype]['grid']['logg'],'[cm/s2]'),\ 2309 (radius_slo,e_radius_slo,'Rsol'),unit='Msol') 2310 mass,e_mass = conversions.unumpy.nominal_values(mass),\ 2311 conversions.unumpy.std_devs(mass) 2312 #-- now we can also derive the distance: 2313 scale = conversions.unumpy.uarray([self.results[mtype]['grid']['scale'],\ 2314 self.results[mtype]['grid']['escale']]) 2315 distance = radius_slo/conversions.sqrt(scale) 2316 distance,e_distance = conversions.unumpy.nominal_values(distance),\ 2317 conversions.unumpy.std_devs(distance) 2318 grid = pl.mlab.rec_append_fields(grid,['radius','e_radius','e_labs','mass','e_mass','distance','e_distance'],\ 2319 [radius_slo,e_radius_slo,e_labs,mass,e_mass,distance,e_distance]) 2320 logger.info('Added constraint: {0:s} via slo, improved luminosity'.format(', '.join(['radius','e_radius','e_labs','mass','e_mass']))) 2321 2322 #-- combine p values using Fisher's method 2323 combined_pvals = evaluate.fishers_method(pvalues) 2324 2325 #-- add the information to the grid 2326 self.results[mtype]['grid'] = pl.mlab.rec_append_fields(grid,\ 2327 names,pvalues) 2328 2329 #-- and replace the original confidence intervals, and re-order 2330 self.results[mtype]['grid']['ci_'+chi2_type] = 1-combined_pvals 2331 sa = np.argsort(self.results[mtype]['grid']['ci_'+chi2_type])[::-1] 2332 self.results[mtype]['grid'] = self.results[mtype]['grid'][sa] 2333 2334 #-- update the confidence intervals 2335 self.calculate_confidence_intervals(mtype=mtype) 2336 logger.info('Added constraint: {0:s} via slo and replaced ci_{1:s} with combined CI'.format(', '.join(names),chi2_type))
2337
2338 - def add_constraint_reddening(self,distance=None,ebv=None,e_ebv=0.1,Rv=3.1,\ 2339 model=None,mtype='igrid_search',chi2_type='red',upper_limit=False):
2340 """ 2341 Use reddening maps to put additional constraints on the parameters. 2342 2343 This constraint assumes that, if C{upper_limit=False}, given the 2344 distance to target, the reddening from the reddening maps is similar to 2345 the one derived from the SED. All models where this is not the case will 2346 be deemed improbable. 2347 2348 If C{upper_limit=True}, all models with an E(B-V) value above C{ebv} 2349 will be considered improbable. 2350 2351 When you don't set C{ebv}, the value will, by default, be derived from 2352 Drimmel maps if C{upper_limit=False} and from Schlegel if 2353 C{upper_limit=True}. You can change this behaviour by setting C{model} 2354 manually. 2355 2356 @param distance: distance and uncertainty in parsec 2357 @type distance: tuple (float,float) 2358 @param ebv: E(B-V) reddening in magnitude 2359 @type ebv: float 2360 @param e_ebv: error on the reddening in percentage 2361 @type e_ebv: float 2362 @param model: model reddening maps 2363 @type model: str 2364 @param mtype: type of results to add the information to 2365 @type mtype: str 2366 @param upper_limit: consider the E(B-V) value as an upper limit 2367 @type upper_limit: bool 2368 """ 2369 #-- for upper limits on E(B-V), we best use Schlegel maps by default, 2370 # otherwise we best use Drimmel maps. 2371 if model is None and upper_limit: 2372 model = 'schlegel' 2373 elif model is None: 2374 model = 'drimmel' 2375 #-- if I need to figure out the reddening myself, I need the distance! 2376 if ebv is None and distance is None: 2377 distance = self.get_distance_from_plx(lutz_kelker=False,unit='pc') 2378 #-- let me figure out the reddening if you didn't give it: 2379 if ebv is None: 2380 gal = self.info['galpos'] 2381 ebv = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance[0])/Rv 2382 ebv_u = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance[0]-distance[1])/Rv 2383 ebv_l = extinctionmodels.findext(gal[0], gal[1], model=model, distance=distance[0]+distance[1])/Rv 2384 e_ebv = max(ebv-ebv_l,ebv_u-ebv) 2385 else: 2386 e_ebv = 0.1*ebv 2387 grid = self.results[mtype]['grid'] 2388 ebvs = grid['ebv'] 2389 #-- probabilities are differently calculated depending on upper limit. 2390 if not upper_limit: 2391 ebv_prob = scipy.stats.distributions.norm.cdf(abs(ebv-ebvs)/e_ebv) 2392 #-- combine p values using Fisher's method 2393 combined_pvals = evaluate.fishers_method([1-grid['ci_'+chi2_type],ebv_prob]) 2394 grid['ci_'+chi2_type] = 1-combined_pvals 2395 else: 2396 grid['ci_'+chi2_type] = np.where(ebvs<=ebv,grid['ci_'+chi2_type],1.) 2397 2398 #-- and replace the original confidence intervals, and re-order 2399 sa = np.argsort(grid['ci_'+chi2_type])[::-1] 2400 self.results[mtype]['grid'] = grid[sa] 2401 2402 #-- update the confidence intervals 2403 self.calculate_confidence_intervals(mtype=mtype) 2404 logger.info('Added constraint: E(B-V)={0}+/-{1}'.format(ebv,e_ebv))
2405
2406 - def add_constraint_angular_diameter(self,angdiam):
2407 raise NotImplementedError
2408
2409 - def add_constraint_mass(self,mass,mtype='igrid_search',chi2_type='red'):
2410 """ 2411 Add constraints on the mass. 2412 2413 C{mass} must be a tuple, if the second element is smaller than the first, 2414 it is assumed to be (mu,sigma) from a normal distribution. If the second 2415 element is larger than the first, it is assumed to be (lower, upper) 2416 from a uniform distribution. 2417 """ 2418 normal = mass[0]>mass[1] 2419 grid = self.results[mtype]['grid'] 2420 masses = grid['mass'] 2421 if normal: 2422 mass_prob = scipy.stats.distributions.norm.cdf(abs(mass[0]-masses)/mass[1]) 2423 #-- combine p values using Fisher's method 2424 combined_pvals = evaluate.fishers_method([1-grid['ci_'+chi2_type],mass_prob]) 2425 grid['ci_'+chi2_type] = 1-combined_pvals 2426 else: 2427 grid['ci_'+chi2_type] = np.where((masses<=mass[0]) | (mass[1]<=masses),1.,grid['ci_'+chi2_type]) 2428 2429 #-- and replace the original confidence intervals, and re-order 2430 sa = np.argsort(grid['ci_'+chi2_type])[::-1] 2431 self.results[mtype]['grid'] = grid[sa] 2432 2433 #-- update the confidence intervals 2434 self.calculate_confidence_intervals(mtype=mtype) 2435 logger.info('Added constraint: mass={0}+/-{1}'.format(*mass))
2436
2437 - def add_constraint_evolution_models(self,models='siess2000',\ 2438 ylabels=['age','labs','radius'],e_y=None, 2439 function='linear',mtype='igrid_search',chi2_type='red'):
2440 """ 2441 Use stellar evolutionary models to put additional constraints on the parameters. 2442 """ 2443 grid = self.results[mtype]['grid'] 2444 2445 #-- make sure y's and ylabels are iterable 2446 if isinstance(ylabels,str): 2447 ylabels = [ylabels] 2448 #-- cycle over all yvalues and compute the pvalues 2449 pvalues = [1-grid['ci_'+chi2_type]] 2450 add_info = [] 2451 #-- create the evolutionary grid and interpolate the stellar evolutinary 2452 # grid on the SED-integrated grid points 2453 output = np.array([evolutionmodels.get_itable(iteff,ilogg,iz) for iteff,ilogg,iz in zip(grid['teff'],grid['logg'],grid['z'])]) 2454 output = np.rec.fromarrays(output.T,names=['age','labs','radius']) 2455 for label in ylabels: 2456 y_interpolated = output[label] 2457 add_info.append(y_interpolated) 2458 #-- only add the contraint when it is possible to do so. 2459 if not label in grid.dtype.names: 2460 logger.info("Cannot put constraint on {} (not a parameter)".format(label)) 2461 continue 2462 y_computed = grid[label] 2463 #-- error on the y-value: either it is computed, it is given or it is 2464 # assumed it is 10% of the value 2465 if e_y is None and ('e_'+label) in grid.dtype.names: 2466 e_y = np.sqrt(grid['e_'+label]**2+(0.1*y_interpolated)**2) 2467 elif e_y is None: 2468 e_y = np.sqrt((0.1*y_computed)**2+(0.1*y_interpolated)**2) 2469 y_prob = scipy.stats.distributions.norm.sf(abs(y_computed-y_interpolated)/e_y) 2470 pl.figure() 2471 pl.subplot(221) 2472 pl.title(label) 2473 pl.scatter(y_computed,y_interpolated,c=y_prob,edgecolors='none',cmap=pl.cm.spectral) 2474 pl.plot([pl.xlim()[0],pl.xlim()[1]],[pl.xlim()[0],pl.xlim()[1]],'r-',lw=2) 2475 pl.xlim(pl.xlim()) 2476 pl.ylim(pl.xlim()) 2477 pl.xlabel('Computed') 2478 pl.ylabel('Interpolated') 2479 pl.colorbar() 2480 pl.subplot(223) 2481 pl.title('y_interpolated') 2482 pl.scatter(grid['teff'],grid['logg'],c=y_interpolated,edgecolors='none',cmap=pl.cm.spectral) 2483 pl.colorbar() 2484 pl.subplot(224) 2485 pl.title('y_computed') 2486 pl.scatter(grid['teff'],grid['logg'],c=y_computed,edgecolors='none',cmap=pl.cm.spectral) 2487 pl.colorbar() 2488 pvalues.append(y_prob) 2489 2490 pl.figure() 2491 pl.subplot(221) 2492 pl.title('p1') 2493 sa = np.argsort(pvalues[0]) 2494 pl.scatter(grid['labs'][sa],grid['radius'][sa],c=pvalues[0][sa],edgecolors='none',cmap=pl.cm.spectral) 2495 pl.colorbar() 2496 pl.xlabel('labs') 2497 pl.ylabel('radius') 2498 pl.subplot(222) 2499 pl.title('p2') 2500 sa = np.argsort(pvalues[1]) 2501 pl.scatter(grid['labs'][sa],grid['radius'][sa],c=pvalues[1][sa],edgecolors='none',cmap=pl.cm.spectral) 2502 pl.colorbar() 2503 pl.xlabel('labs') 2504 pl.ylabel('radius') 2505 pl.subplot(223) 2506 pl.title('p3') 2507 sa = np.argsort(pvalues[2]) 2508 pl.scatter(grid['labs'][sa],grid['radius'][sa],c=pvalues[2][sa],edgecolors='none',cmap=pl.cm.spectral) 2509 pl.colorbar() 2510 pl.xlabel('labs') 2511 pl.ylabel('radius') 2512 2513 #-- combine p values using Fisher's method 2514 combined_pvals = evaluate.fishers_method(pvalues) 2515 pl.subplot(224) 2516 pl.title('pcombined') 2517 sa = np.argsort(combined_pvals) 2518 pl.scatter(grid['labs'][sa],grid['radius'][sa],c=combined_pvals[sa],edgecolors='none',cmap=pl.cm.spectral) 2519 pl.colorbar() 2520 pl.xlabel('labs') 2521 pl.ylabel('radius') 2522 pl.show() 2523 2524 #-- add the information to the grid (assume that if there one label 2525 # not in there already, none of them are 2526 if not 'c_'+ylabels[0] in grid.dtype.names: 2527 self.results[mtype]['grid'] = pl.mlab.rec_append_fields(grid,\ 2528 
['c_'+ylabel for ylabel in ylabels],add_info) 2529 #-- if they are already in there, overwrite! 2530 else: 2531 for info,ylabel in zip(add_info,ylabels): 2532 self.results[mtype]['grid']['c_'+ylabel] = add_info 2533 2534 #-- and replace the original confidence intervals, and re-order 2535 self.results[mtype]['grid']['ci_'+chi2_type] = 1-combined_pvals 2536 2537 sa = np.argsort(self.results[mtype]['grid']['ci_'+chi2_type])[::-1] 2538 self.results[mtype]['grid'] = self.results[mtype]['grid'][sa] 2539 2540 #-- update the confidence intervals 2541 self.calculate_confidence_intervals(mtype=mtype) 2542 logger.info('Added constraint: {0:s} via stellar models and replaced ci_{1:s} with combined CI'.format(', '.join(ylabels),chi2_type))
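        #-- Illustrative usage sketch: constrain the grid with the Siess (2000) tracks,
        #   comparing the SED-derived age, luminosity and radius with the values
        #   interpolated in the evolutionary grid (the arguments below are the defaults):
        #
        #   >>> mysed.add_constraint_evolution_models(models='siess2000',
        #   ...                                       ylabels=['age','labs','radius'])
        #
        #   Several diagnostic figures are produced, and the combined p-values replace
        #   the original C{ci_red} confidence levels.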
2543 2544 2545 #} 2546 2547 #{ Plotting routines 2548
2549 - def _label_dict(self, param):
2550 """ 2551 Returns the label belonging to a certain parameter 2552 2553 If the label is not present, the function will just return param. 2554 """ 2555 #split parameter in param name and componentent number 2556 param, component = re.findall('(.*?)(\d?$)', param)[0] 2557 2558 ldict = dict(teff='Effective temperature [K]',\ 2559 z='log (Metallicity Z [$Z_\odot$]) [dex]',\ 2560 logg=r'log (surface gravity [cm s$^{-2}$]) [dex]',\ 2561 ebv='E(B-V) [mag]',\ 2562 ci_raw='Raw probability [%]',\ 2563 ci_red='Reduced probability [%]',\ 2564 #labs=r'log (Absolute Luminosity [$L_\odot$]) [dex]',\ 2565 labs=r'Absolute Luminosity [$L_\odot$]',\ 2566 radius=r'Radius [$R_\odot$]',\ 2567 mass=r'Mass [$M_\odot$]', 2568 mc=r'MC [Nr. points in hexagonal bin]', 2569 rv=r'Extinction parameter $R_v$') 2570 if param in ldict: 2571 param = ldict[param] 2572 if component != '': 2573 return param #+ " - " + component 2574 else: 2575 return param
2576 2577 @standalone_figure
2578 - def plot_grid(self,x='teff',y='logg',ptype='ci_red',mtype='igrid_search',limit=0.95,d=None,**kwargs):
2579 """ 2580 Plot grid as scatter plot 2581 2582 PrameterC{ptype} sets the colors of the scattered points (e.g., 'ci_red','z','ebv'). 2583 2584 Example usage: 2585 2586 First set the SED: 2587 2588 >>> mysed = SED('HD180642') 2589 >>> mysed.load_fits() 2590 2591 Then make the plots: 2592 2593 >>> p = pl.figure() 2594 >>> p = pl.subplot(221);mysed.plot_grid(limit=None) 2595 >>> p = pl.subplot(222);mysed.plot_grid(x='ebv',y='z',limit=None) 2596 >>> p = pl.subplot(223);mysed.plot_grid(x='teff',y='ebv',limit=None) 2597 >>> p = pl.subplot(224);mysed.plot_grid(x='logg',y='z',limit=None) 2598 2599 ]]include figure]]ivs_sed_builder_plot_grid_01.png] 2600 2601 """ 2602 #-- if no distance is set, derive the most likely distance from the plx: 2603 #if d is None: 2604 #try: 2605 #d = self.get_distance_from_plx() 2606 #except ValueError: 2607 #d = None 2608 #logger.info('Distance to {0} unknown'.format(self.ID)) 2609 #if isinstance(d,tuple): 2610 #d = d[0][np.argmax(d[1])] 2611 ##-- it's possible that we still don't have a distance 2612 #if d is not None: 2613 #logger.info('Assumed distance to {0} = {1:.3e} pc'.format(self.ID,d)) 2614 #radius = d*np.sqrt(self.results[mtype]['grid']['scale']) 2615 #radius = conversions.convert('pc','Rsol',radius) # in Rsol 2616 #labs = np.log10(self.results[mtype]['grid']['labs']*radius**2) # in [Lsol] 2617 #mass = conversions.derive_mass((self.results[mtype]['grid']['logg'].copy(),'[cm/s2]'),\ 2618 #(radius,'Rsol'),unit='Msol') 2619 #-- compute angular diameter 2620 theta = 2*conversions.convert('sr','mas',self.results[mtype]['grid']['scale']) 2621 2622 if limit is not None: 2623 region = self.results[mtype]['grid']['ci_red']<=limit 2624 else: 2625 region = self.results[mtype]['grid']['ci_red']<np.inf 2626 #-- get the colors and the color scale 2627 if d is not None and ptype=='labs': 2628 colors = locals()[ptype][region] 2629 elif ptype in self.results[mtype]['grid'].dtype.names: 2630 colors = self.results[mtype]['grid'][ptype][region] 2631 elif isinstance(ptype,str): 2632 colors = locals()[ptype][region] 2633 else: 2634 colors = ptype[region] 2635 ptype = 'custom variable' 2636 2637 if 'ci_' in ptype.lower(): 2638 colors *= 100. 2639 vmin = colors.min() 2640 vmax = 95. 2641 else: 2642 vmin = kwargs.pop('vmin',colors.min()) 2643 vmax = kwargs.pop('vmax',colors.max()) 2644 2645 #-- define abbrevation for plotting 2646 if x in self.results[mtype]['grid'].dtype.names: 2647 X = self.results[mtype]['grid'][x] 2648 else: 2649 X = locals()[x] 2650 2651 if y in self.results[mtype]['grid'].dtype.names: 2652 Y = self.results[mtype]['grid'][y] 2653 else: 2654 Y = locals()[y] 2655 2656 #-- make the plot 2657 if mtype == 'imc': 2658 pl.hexbin(X,Y,mincnt=1,cmap=pl.cm.spectral) #bins='log' 2659 ptype = 'mc' 2660 2661 #-- set the limits 2662 pl.xlim(X.max(),X.min()) 2663 pl.ylim(Y.max(),Y.min()) 2664 cbar = pl.colorbar() 2665 else: 2666 #if limit is not None: 2667 #region = self.results[mtype]['grid']['ci_red']<limit 2668 #else: 2669 #region = self.results[mtype]['grid']['ci_red']<np.inf 2670 ##-- get the colors and the color scale 2671 #if d is not None and ptype=='labs': 2672 #colors = locals()[ptype][region] 2673 #elif ptype in self.results[mtype]['grid'].dtype.names: 2674 #colors = self.results[mtype]['grid'][ptype][region] 2675 #else: 2676 #colors = locals()[ptype][region] 2677 2678 #if 'ci' in ptype: 2679 #colors *= 100. 2680 #vmin = colors.min() 2681 #vmax = 95. 
2682 #else: 2683 #vmin = kwargs.pop('vmin',colors.min()) 2684 #vmax = kwargs.pop('vmax',colors.max()) 2685 2686 #-- grid scatter plot 2687 pl.scatter(X[region],Y[region], 2688 c=colors,edgecolors='none',cmap=pl.cm.spectral,vmin=vmin,vmax=vmax) 2689 #-- set the limits to only include the 95 interval 2690 pl.xlim(X[region].max(),X[region].min()) 2691 pl.ylim(Y[region].max(),Y[region].min()) 2692 cbar = pl.colorbar() 2693 2694 #-- mark best value 2695 pl.plot(X[-1],Y[-1],'r+',ms=40,mew=3) 2696 2697 pl.xlabel(self._label_dict(x)) 2698 pl.ylabel(self._label_dict(y)) 2699 cbar.set_label(self._label_dict(ptype)) 2700 2701 logger.info('Plotted %s-%s diagram of %s'%(x,y,ptype))
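        #-- Besides a column name, C{ptype} can also be an array with one value per grid
        #   point. An illustrative sketch, colouring the grid with the angular diameter
        #   derived from the scale factor:
        #
        #   >>> theta = 2*conversions.convert('sr','mas',mysed.results['igrid_search']['grid']['scale'])
        #   >>> mysed.plot_grid(x='teff',y='logg',ptype=theta)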
2702 2703 @standalone_figure
2704 - def plot_CI2D(self,xpar='teff',ypar='logg',mtype='iminimize', ptype='ci_red',**kwargs):
2705 """ 2706 Plot a 2D confidence intervall calculated using the CI2D computation 2707 from calculate_iminimize_CI2D. Make sure you first calculate the grid 2708 you want using the calculate_iminimize_CI2D function. 2709 """ 2710 2711 grid = self.results[mtype]['CI2D'][xpar+"-"+ypar][ptype] 2712 x = self.results[mtype]['CI2D'][xpar+"-"+ypar]['x'] 2713 y = self.results[mtype]['CI2D'][xpar+"-"+ypar]['y'] 2714 2715 if ptype == 'ci_red' or ptype == 'ci_raw': 2716 grid = grid*100.0 2717 levels = np.linspace(0,100,25) 2718 ticks = [0,20,40,60,80,100] 2719 elif ptype == 'ci_chi2': 2720 grid = np.log10(grid) 2721 levels = np.linspace(np.min(grid), np.max(grid), 25) 2722 ticks = np.round(np.linspace(np.min(grid), np.max(grid), 11), 2) 2723 2724 pl.contourf(x,y,grid,levels,**kwargs) 2725 pl.xlabel(self._label_dict(xpar)) 2726 pl.ylabel(self._label_dict(ypar)) 2727 cbar = pl.colorbar(fraction=0.08,ticks=ticks) 2728 cbar.set_label(ptype!='ci_chi2' and r'Probability' or r'$^{10}$log($\chi^2$)')
2729 2730 2731 2732 2733 2734 @standalone_figure
2735 - def plot_data(self,colors=False, plot_unselected=True, 2736 unit_wavelength='angstrom',unit_flux=None,**kwargs):
2737 """ 2738 Plot only the SED data. 2739 2740 Extra kwargs are passed to plotting functions. 2741 2742 The decorator provides an keyword C{savefig}. When set to C{True}, an 2743 image name is generated automatically (very long!), and the figure is 2744 closed. When set to a string, the image is saved with that name, and 2745 the figure is closed. When C{savefig} is not given, the image will 2746 stay open so that the user can still access all plot elements and do 2747 further enhancements. 2748 2749 @param colors: if False, plot absolute values, otherwise plot colors 2750 (flux ratios) 2751 @type colors: boolean 2752 @param plot_unselected: if True, all photometry is plotted, otherwise 2753 only those that are selected 2754 """ 2755 if not plot_unselected: 2756 master = self.master[self.master['include']] 2757 else: 2758 master = self.master 2759 if unit_flux is None: 2760 unit_flux = master['cunit'][0] 2761 wave,flux,e_flux = master['cwave'],master['cmeas'],master['e_cmeas'] 2762 sources = master['source'] 2763 iscolor = np.array(master['color'],bool) 2764 photbands = master['photband'] 2765 indices = np.arange(len(master)) 2766 2767 allsystems = np.array([i.split('.')[0] for i in photbands]) 2768 systems = sorted(set(allsystems)) 2769 color_cycle = [pl.cm.spectral(j) for j in np.linspace(0, 1.0, len(systems))] 2770 2771 if not colors: 2772 color_cycle = itertools.cycle(color_cycle) 2773 pl.gca().set_xscale('log',nonposx='clip') 2774 pl.gca().set_yscale('log',nonposy='clip') 2775 wave = conversions.convert('angstrom',unit_wavelength,wave) 2776 flux,e_flux = conversions.convert(master['cunit'][0],unit_flux,flux,e_flux,wave=(wave,unit_wavelength)) 2777 mf = [] 2778 # plot photometric systems in different colors 2779 for system in systems: 2780 keep = (allsystems==system) & ~iscolor 2781 if keep.sum(): 2782 # plot each photometric points separately, so that we could 2783 # use it interactively. Label them all with a unique ID 2784 # and make them pickable. 
2785 color = color_cycle.next() 2786 #for i in range(sum(keep)): 2787 #label = system if i==0 else '_nolegend_' 2788 #pltlin,caplins,barlincs = pl.errorbar(wave[keep][i],flux[keep][i],yerr=e_flux[keep][i],fmt='o',label=label,ms=7,picker=5,color=color,**kwargs) 2789 #pltlin.sed_index = indices[keep][i] 2790 #caplins[0].sed_index = indices[keep][i] 2791 #caplins[1].sed_index = indices[keep][i] 2792 #barlincs[0].sed_index = indices[keep][i] 2793 pl.errorbar(wave[keep],flux[keep],yerr=e_flux[keep],fmt='o',label=system,ms=7,color=color,**kwargs) 2794 mf.append(flux[keep]) 2795 if keep.sum(): 2796 pl.ylabel(conversions.unit2texlabel(unit_flux,full=True)) 2797 pl.xlabel('Wavelength [{0}]'.format(conversions.unit2texlabel(unit_wavelength))) 2798 #-- scale y-axis (sometimes necessary for data with huge errorbars) 2799 mf = np.log10(np.hstack(mf)) 2800 lmin,lmax = np.nanmin(mf),np.nanmax(mf) 2801 lrange = np.abs(lmin-lmax) 2802 pl.ylim(10**(lmin-0.1*lrange),10**(lmax+0.1*lrange)) 2803 else: 2804 pl.gca().set_color_cycle(color_cycle) 2805 names = [] 2806 start_index = 1 2807 for system in systems: 2808 keep = (allsystems==system) & iscolor 2809 if keep.sum(): 2810 pl.errorbar(range(start_index,start_index+keep.sum()),flux[keep],yerr=e_flux[keep],fmt='o',label=system,ms=7,**kwargs) 2811 names += [ph.split('.')[1] for ph in photbands[keep]] 2812 start_index += keep.sum() 2813 pl.xticks(range(1,len(names)+1),names,rotation=90) 2814 pl.ylabel(r'Flux ratio') 2815 pl.xlabel('Index') 2816 leg = pl.legend(prop=dict(size='small'),loc='best',fancybox=True) #,numpoints=1) #prop=dict(size='small'),loc='best',fancybox=True) 2817 leg.get_frame().set_alpha(0.5) 2818 pl.grid() 2819 pl.title(self.ID)
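        #-- Illustrative usage sketch: plot the absolute photometry in other units and
        #   save the figure via the decorator's C{savefig} keyword (the unit strings and
        #   file name are assumptions; they must be known to the conversions module):
        #
        #   >>> mysed.plot_data(unit_wavelength='micron',unit_flux='Jy',
        #   ...                 savefig='HD180642_data.png')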
2820 2821 2822 @standalone_figure
2823 - def plot_sed(self,colors=False,mtype='igrid_search',plot_redded=True,plot_deredded=False, 2824 plot_unselected=True,wave_units='AA',flux_units='erg/s/cm2',**kwargs):
2825 """ 2826 Plot a fitted SED together with the data. 2827 2828 Example usage: 2829 2830 First set the SED: 2831 2832 >>> mysed = SED('HD180642') 2833 >>> mysed.load_fits() 2834 2835 Then make the plots: 2836 2837 >>> p = pl.figure() 2838 >>> p = pl.subplot(121) 2839 >>> mysed.plot_sed(colors=False) 2840 >>> p = pl.subplot(122) 2841 >>> mysed.plot_sed(colors=True) 2842 2843 ]]include figure]]ivs_sed_builder_plot_sed_01.png] 2844 """ 2845 annotation = kwargs.pop('annotation',True) 2846 2847 def plot_sed_getcolors(master,color_dict=None): 2848 myphotbands = [iphotb.split('.')[1] for iphotb in master['photband'][master['color']]] 2849 if not myphotbands: #-- If there are no colours none can be returned (added by joris 30-01-2012) 2850 return [],[],[],None 2851 if color_dict is None: 2852 color_dict = {myphotbands[0]:0} 2853 for mycol in myphotbands[1:]: 2854 if not mycol in color_dict: 2855 max_int = max([color_dict[entry] for entry in color_dict]) 2856 color_dict[mycol] = max_int+1 2857 x = [color_dict[mycol] for mycol in myphotbands] 2858 y = master['cmeas'] 2859 e_y = master['e_cmeas'] 2860 return x,y,e_y,color_dict
2861 2862 2863 x__,y__,e_y__,color_dict = plot_sed_getcolors(self.master) 2864 2865 #-- get the color cycle 2866 systems = np.array([system.split('.')[0] for system in self.master['photband']],str) 2867 set_systems = sorted(list(set(systems))) 2868 if ('absolutesymbolcolor' in kwargs) and (kwargs.pop('absolutesymbolcolor') == True): 2869 sortedphotsystems,plotcolorvalues = filters.get_plotsymbolcolorinfo() 2870 selectedcolorinds = np.array([np.where(sortedphotsystems == system)[0][0] for system in set_systems]) 2871 color_cycle = itertools.cycle([pl.cm.spectral(j) for j in plotcolorvalues[selectedcolorinds]]) 2872 else: 2873 color_cycle = itertools.cycle([pl.cm.spectral(j) for j in np.linspace(0, 1.0, len(set_systems))]) 2874 2875 #-- for plotting reasons, we translate every color to an integer 2876 for system in set_systems: 2877 color = color_cycle.next() 2878 keep = (systems==system) & (self.master['color']==colors) 2879 if not plot_unselected: 2880 keep = keep & self.master['include'] 2881 #-- synthetic: 2882 if sum(keep) and mtype in self.results and 'synflux' in self.results[mtype]: 2883 if colors: 2884 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 2885 y = self.results[mtype]['synflux'][1][keep] 2886 else: 2887 x = self.results[mtype]['synflux'][0][keep] 2888 y = self.results[mtype]['synflux'][1][keep] 2889 #-- convert to correct units 2890 y = conversions.convert('erg/s/cm2/AA',flux_units,y,wave=(x,'AA')) 2891 x = conversions.convert('AA',wave_units,x) 2892 pl.plot(x,y,'x',ms=10,mew=2,alpha=0.75,color=color,**kwargs) 2893 #-- include: 2894 keep = (systems==system) & (self.master['color']==colors) & self.master['include'] 2895 if sum(keep): 2896 if colors: 2897 #-- translate every color to an integer 2898 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 2899 else: 2900 if mtype in self.results and 'synflux' in self.results[mtype]: 2901 x = self.results[mtype]['synflux'][0][keep] 2902 for ind in range(len(x)): 2903 if np.isnan(x[ind]): 2904 try: 2905 x[ind] = self.master['cwave'][keep][ind] 2906 except: 2907 pass 2908 else: 2909 x = self.master['cwave'][keep] 2910 y = self.master['cmeas'][keep] 2911 e_y = self.master['e_cmeas'][keep] 2912 y,e_y = conversions.convert('erg/s/cm2/AA',flux_units,y,e_y,wave=(x,'AA')) 2913 x = conversions.convert('AA',wave_units,x) 2914 2915 p = pl.errorbar(x,y,yerr=e_y,fmt='o',label=system, 2916 capsize=10,ms=7,color=color,**kwargs) 2917 2918 #-- exclude: 2919 label = np.any(keep) and '_nolegend_' or system 2920 keep = (systems==system) & (self.master['color']==colors) & ~self.master['include'] 2921 if sum(keep) and plot_unselected: 2922 if colors: 2923 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 2924 else: 2925 x = self.results[mtype]['synflux'][0][keep] 2926 if np.any(np.isnan(x)): 2927 x = self.master['cwave'][keep] 2928 y = self.master['cmeas'][keep] 2929 e_y = self.master['e_cmeas'][keep] 2930 y,e_y = conversions.convert('erg/s/cm2/AA',flux_units,y,e_y,wave=(x,'AA')) 2931 x = conversions.convert('AA',wave_units,x) 2932 pl.errorbar(x,y,yerr=e_y,fmt='o',label=label, 2933 capsize=10,ms=7,mew=2,color=color,mfc='1',mec=color,**kwargs) 2934 2935 #-- only set logarithmic scale if absolute fluxes are plotted 2936 # and only plot the real model then 2937 if not colors: 2938 pl.gca().set_xscale('log',nonposx='clip') 2939 pl.gca().set_yscale('log',nonposy='clip') 2940 pl.gca().set_autoscale_on(False) 2941 2942 #-- the model 2943 if mtype in self.results and 'model' in self.results[mtype]: 
2944 wave,flux,flux_ur = self.results[mtype]['model'] 2945 2946 flux = conversions.convert('erg/s/cm2/AA',flux_units,flux,wave=(wave,'AA')) 2947 flux_ur = conversions.convert('erg/s/cm2/AA',flux_units,flux_ur,wave=(wave,'AA')) 2948 wave = conversions.convert('AA',wave_units,wave) 2949 2950 if plot_redded: 2951 pl.plot(wave,flux,'r-',**kwargs) 2952 if plot_deredded: 2953 pl.plot(wave,flux_ur,'k-',**kwargs) 2954 pl.ylabel(conversions.unit2texlabel(flux_units,full=True)) 2955 pl.xlabel('wavelength [{0}]'.format(conversions.unit2texlabel(wave_units))) 2956 else: 2957 xlabels = color_dict.keys() 2958 xticks = [color_dict[key] for key in xlabels] 2959 pl.xticks(xticks,xlabels,rotation=90) 2960 pl.ylabel(r'Flux ratio') 2961 pl.xlabel('Color name') 2962 pl.xlim(min(xticks)-0.5,max(xticks)+0.5) 2963 2964 pl.grid() 2965 if colors: 2966 leg = pl.legend(loc='best',prop=dict(size='x-small'),numpoints=1) 2967 else: 2968 leg = pl.legend(loc='upper right',prop=dict(size='x-small'),numpoints=1) 2969 leg.get_frame().set_alpha(0.5) 2970 loc = (0.35,0.05) 2971 if mtype in self.results and 'grid' in self.results[mtype] and annotation: 2972 teff = self.results[mtype]['grid']['teff'][-1] 2973 logg = self.results[mtype]['grid']['logg'][-1] 2974 ebv = self.results[mtype]['grid']['ebv'][-1] 2975 met = self.results[mtype]['grid']['z'][-1] 2976 scale = self.results[mtype]['grid']['scale'][-1] 2977 angdiam = 2*conversions.convert('sr','mas',scale) 2978 try: 2979 teff2 = self.results[mtype]['CI']['teff2'] 2980 logg2 = self.results[mtype]['CI']['logg2'] 2981 radii = self.results[mtype]['CI']['rad2']/self.results[mtype]['CI']['rad'] 2982 pl.annotate('Teff=%i %i K\nlogg=%.2f %.2f cgs\nE(B-V)=%.3f mag\nr2/r1=%.2f\n$\Theta$=%.3g mas'%(teff,teff2,logg,logg2,ebv,radii,angdiam), 2983 loc,xycoords='axes fraction') 2984 except: 2985 pl.annotate('Teff=%d K\nlogg=%.2f cgs\nE(B-V)=%.3f mag\nlogZ=%.3f dex\n$\Theta$=%.3g mas'%(teff,logg,ebv,met,angdiam), 2986 loc,xycoords='axes fraction') 2987 pass 2988 logger.info('Plotted SED as %s'%(colors and 'colors' or 'absolute fluxes')) 2989 teff = "%d" % teff 2990 logg = "%.2f" % logg 2991 ebv = "%.3f" % ebv 2992 metallicity = "%.3f" % met 2993 #wave,flux = model.get_table(teff=teff,logg=logg,ebv=ebv,z=z,grid='kurucz2') 2994 '''f = open('/home/anae/python/SEDfitting/Bastars_info.txt', 'a') 2995 f.writelines(str(teff)+'\t'+str(logg)+'\t'+str(ebv)+'\t'+str(metallicity)+'\n') 2996 f.closed''' 2997 2998 2999 3000 @standalone_figure
3001 - def plot_chi2(self,colors=False,mtype='igrid_search',**kwargs):
3002 """ 3003 Plot chi2 statistic for every datapoint included in the fit. 3004 3005 To plot the statistic from the included absolute values, set 3006 C{colors=False}. 3007 3008 To plot the statistic from the included colors, set C{colors=True}. 3009 3010 Example usage: 3011 3012 First set the SED: 3013 3014 >>> mysed = SED('HD180642') 3015 >>> mysed.load_fits() 3016 3017 Then make the plots: 3018 3019 >>> p = pl.figure() 3020 >>> p = pl.subplot(121) 3021 >>> mysed.plot_chi2(colors=False) 3022 >>> p = pl.subplot(122) 3023 >>> mysed.plot_chi2(colors=True) 3024 3025 ]]include figure]]ivs_sed_builder_plot_chi2_01.png] 3026 3027 @param colors: flag to distinguish between colors and absolute values 3028 @type colors: boolean 3029 """ 3030 if 'phase' in kwargs: 3031 uniquephase = kwargs.pop('phase') 3032 phase = True 3033 else: 3034 uniquephase = None 3035 phase = False 3036 3037 include_grid = self.master['include'] 3038 systems = np.array([system.split('.')[0] for system in self.master['photband'][include_grid]],str) 3039 set_systems = sorted(list(set(systems))) 3040 color_cycle = itertools.cycle([pl.cm.spectral(i) for i in np.linspace(0, 1.0, len(set_systems))]) 3041 if mtype in self.results: 3042 eff_waves,synflux,photbands = self.results[mtype]['synflux'] 3043 chi2 = self.results[mtype]['chi2'] 3044 for system in set_systems: 3045 color = color_cycle.next() 3046 keep = (systems==system) 3047 if phase: 3048 keep = keep & (self.master['phase'][include_grid] == uniquephase) 3049 if sum(keep) and not colors: 3050 try: 3051 pl.loglog(eff_waves[include_grid][keep],chi2[include_grid][keep],'o',label=system,color=color) 3052 except: 3053 logger.critical('Plotting of CHI2 of absolute values failed') 3054 elif sum(keep) and colors: 3055 pl.semilogy(range(len(eff_waves[include_grid][keep])),chi2[include_grid][keep],'o',label=system,color=color) 3056 pl.legend(loc='upper right',prop=dict(size='x-small')) 3057 pl.grid() 3058 pl.annotate('Total $\chi^2$ = %.1f'%(self.results[mtype]['grid']['chisq'][-1]),(0.59,0.120),xycoords='axes fraction',color='r') 3059 pl.annotate('Total Reduced $\chi^2$ = %0.2f'%(sum(chi2[include_grid][keep])),(0.59,0.075),xycoords='axes fraction',color='r') 3060 if 'factor' in self.results[mtype]: 3061 pl.annotate('Error scale = %.2f'%(np.sqrt(self.results[mtype]['factor'])),(0.59,0.030),xycoords='axes fraction',color='k') 3062 xlims = pl.xlim() 3063 pl.plot(xlims,[self.results[mtype]['grid']['chisq'][-1],self.results[mtype]['grid']['chisq'][-1]],'r-',lw=2) 3064 pl.xlim(xlims) 3065 pl.xlabel('wavelength [$\AA$]') 3066 pl.ylabel(r'Reduced ($\chi^2$)') 3067 logger.info('Plotted CHI2 of %s'%(colors and 'colors' or 'absolute fluxes')) 3068 else: 3069 logger.info('%s not in results, no plot could be made.'%(mtype))
3070 3071 3072 @standalone_figure
3073 - def plot_distance(self,mtype='igrid_search'):
3074 try: 3075 #-- necessary information 3076 (d_models,d_prob_models,radii) = self.results['igrid_search']['d_mod'] 3077 (d,dprob) = self.results['igrid_search']['d'] 3078 3079 ax_d = pl.gca() 3080 3081 gal = self.info['galpos'] 3082 #-- the plot 3083 dzoom = dprob>1e-4 3084 pl.plot(d,dprob,'k-') 3085 pl.grid() 3086 pl.xlabel('Distance [pc]') 3087 pl.ylabel('Probability [unnormalized]') 3088 pl.xlim(d[dzoom].min(),d[dzoom].max()) 3089 xlims = pl.xlim() 3090 pl.twiny(ax_d) 3091 pl.xlim(xlims) 3092 xticks = pl.xticks() 3093 pl.xticks(xticks[0],['%.2f'%(conversions.convert('pc','Rsol',np.sqrt(self.results['igrid_search']['grid']['scale'][-1])*di)) for di in xticks[0]]) 3094 pl.xlabel('Radius [$R_\odot$]') 3095 pl.twinx(ax_d) 3096 res = 100 3097 d_keep = (xlims[0]<=d[::res]) & (d[::res]<=xlims[1]) 3098 if len(self.results['igrid_search']['drimmel']): 3099 pl.plot(d[::res][d_keep],self.results['igrid_search']['drimmel'].ravel()[d_keep],'b-',label='Drimmel') 3100 if len(self.results['igrid_search']['marshall']): 3101 pl.plot(d[::res][d_keep],self.results['igrid_search']['marshall'].ravel()[d_keep],'b--',label='Marshall') 3102 ebv = self.results[mtype]['grid']['ebv'][-1] 3103 pl.plot(xlims,[ebv*3.1,ebv*3.1],'r--',lw=2,label='measured') 3104 pl.ylabel('Visual extinction $A_v$ [mag]') 3105 pl.legend(loc='lower right',prop=dict(size='x-small')) 3106 pl.xlim(xlims) 3107 logger.info('Plotted distance/reddening') 3108 except KeyError: 3109 logger.info('No distance/reddening plotted due to KeyError.')
3110 3111 @standalone_figure
3112 - def plot_grid_model(self,ptype='prob'):
3113 """ 3114 Grid of models 3115 """ 3116 if 'spType' in self.info: 3117 pl.title(self.info['spType']) 3118 cutlogg = (self.results['igrid_search']['grid']['logg']<=4.4) & (self.results['igrid_search']['grid']['ci_red']<=0.95) 3119 (d_models,d_prob_models,radii) = self.results['igrid_search']['d_mod'] 3120 (d,dprob) = self.results['igrid_search']['d'] 3121 gal = self.info['galpos'] 3122 3123 n = 75000 3124 region = self.results['igrid_search']['grid']['ci_red']<0.95 3125 total_prob = 100-(1-self.results['igrid_search']['grid']['ci_red'][cutlogg][-n:])*d_prob_models*100 3126 tp_sa = np.argsort(total_prob)[::-1] 3127 if ptype=='prob': 3128 pl.scatter(self.results['igrid_search']['grid']['teff'][cutlogg][-n:][tp_sa],self.results['igrid_search']['grid']['logg'][cutlogg][-n:][tp_sa], 3129 c=total_prob[tp_sa],edgecolors='none',cmap=pl.cm.spectral, 3130 vmin=total_prob.min(),vmax=total_prob.max()) 3131 elif ptype=='radii': 3132 pl.scatter(self.results['igrid_search']['grid']['teff'][cutlogg][-n:][tp_sa],self.results['igrid_search']['grid']['logg'][cutlogg][-n:][tp_sa], 3133 c=radii,edgecolors='none',cmap=pl.cm.spectral, 3134 vmin=radii.min(),vmax=radii.max()) 3135 pl.xlim(self.results['igrid_search']['grid']['teff'][region].max(),self.results['igrid_search']['grid']['teff'][region].min()) 3136 pl.ylim(self.results['igrid_search']['grid']['logg'][region].max(),self.results['igrid_search']['grid']['logg'][region].min()) 3137 cbar = pl.colorbar() 3138 pl.xlabel('log (effective temperature [K]) [dex]') 3139 pl.ylabel(r'log (surface gravity [cm s$^{-2}$]) [dex]') 3140 3141 if ptype=='prob': 3142 cbar.set_label('Probability (incl. plx) [%]') 3143 elif ptype=='radii': 3144 cbar.set_label('Model radii [$R_\odot$]') 3145 logger.info('Plotted teff-logg diagram of models (%s)'%(ptype))
3146 3147 3148 @standalone_figure
3149 - def plot_MW_side(self):
3150 im = Image.open(config.get_datafile('images','NRmilkyway.tif')) 3151 left,bottom,width,height = 0.0,0.0,1.0,1.0 3152 startm,endm = 183,-177 3153 startv,endv = -89,91 3154 3155 xwidth = startm-endm 3156 ywidth = 90. 3157 ratio = ywidth/xwidth 3158 3159 gal = list(self.info['galpos']) 3160 if gal[0]>180: 3161 gal[0] = gal[0] - 360. 3162 #-- boundaries of ESO image 3163 pl.imshow(im,extent=[startm,endm,startv,endv],origin='lower') 3164 pl.plot(gal[0],gal[1],'rx',ms=15,mew=2) 3165 pl.xlim(startm,endm) 3166 pl.ylim(startv,endv) 3167 pl.xticks([]) 3168 pl.yticks([])
3169 3170 @standalone_figure
3171 - def plot_MW_top(self):
3172 im = Image.open(config.get_datafile('images','topmilkyway.jpg')) 3173 pl.imshow(im,origin='lower') 3174 pl.box(on=False) 3175 pl.xticks([]) 3176 pl.yticks([]) 3177 xlims = pl.xlim() 3178 ylims = pl.ylim() 3179 gal = self.info['galpos'] 3180 pl.plot(2800,1720,'ro',ms=10) 3181 pl.plot([2800,-5000*np.sin(gal[0]/180.*np.pi)+2800],[1720,5000*np.cos(gal[0]/180.*np.pi)+1720],'r-',lw=2) 3182 3183 #-- necessary information 3184 if 'igrid_search' in self.results and 'd_mod' in self.results['igrid_search']: 3185 (d_models,d_prob_models,radii) = self.results['igrid_search']['d_mod'] 3186 (d,dprob) = self.results['igrid_search']['d'] 3187 else: 3188 d = np.linspace(0,1000,2) 3189 dprob = np.zeros(len(d)) 3190 3191 x = d/10.*1.3 3192 y = dprob*1000. 3193 theta = gal[0]/180.*np.pi + np.pi/2. 3194 x_ = np.cos(theta)*x - np.sin(theta)*y + 2800 3195 y_ = np.sin(theta)*x + np.cos(theta)*y + 1720 3196 3197 pl.plot(x_,y_,'r-',lw=2) 3198 index = np.argmax(y) 3199 pl.plot(np.cos(theta)*x[index] + 2800,np.sin(theta)*x[index] + 1720,'rx',ms=15,mew=2) 3200 3201 pl.xlim(xlims) 3202 pl.ylim(ylims)
3203 3204 @standalone_figure
3205 - def plot_finderchart(self,cmap_photometry=pl.cm.spectral,window_size=5.):
3206 """ 3207 Size is x and y width in arcminutes 3208 """ 3209 try: 3210 dec = 'jdedeg' in self.info and self.info['jdedeg'] or None 3211 ra = 'jradeg' in self.info and self.info['jradeg'] or None 3212 data,coords,size = mast.get_dss_image(self.ID,ra=ra,dec=dec) 3213 pl.imshow(data[::-1],extent=[-size[0]/2*60,size[0]/2*60, 3214 -size[1]/2*60,size[1]/2*60],cmap=pl.cm.RdGy_r)#Greys 3215 pl.xlim(-window_size/2.,+window_size/2.) 3216 pl.ylim(-window_size/2.,+window_size/2.) 3217 xlims,ylims = pl.xlim(),pl.ylim() 3218 keep_this = -self.master['color'] & (self.master['cmeas']>0) 3219 toplot = self.master[keep_this] 3220 systems = np.array([system.split('.')[0] for system in toplot['photband']],str) 3221 set_systems = sorted(list(set(systems))) 3222 color_cycle = itertools.cycle([cmap_photometry(j) for j in np.linspace(0, 1.0, len(set_systems))]) 3223 for system in set_systems: 3224 color = color_cycle.next() 3225 keep = systems==system 3226 if sum(keep): 3227 pl.plot(toplot['_RAJ2000'][keep][0]*60, 3228 toplot['_DEJ2000'][keep][0]*60,'x',label=system, 3229 mew=2.5,ms=15,alpha=0.5,color=color) 3230 leg = pl.legend(numpoints=1,prop=dict(size='x-small'),loc='best',fancybox=True) 3231 leg.get_frame().set_alpha(0.75) 3232 pl.xlim(xlims) 3233 pl.ylim(ylims) 3234 pl.xlabel(r'Right ascension $\alpha$ [arcmin]') 3235 pl.ylabel(r'Declination $\delta$ [arcmin]') 3236 except: 3237 logger.warning('No image found of %s'%(self.ID)) 3238 pass 3239 3240 if 'pm' in self.info: 3241 logger.info("Found proper motion info") 3242 ppm_ra,ppm_de = (self.info['pm']['pmRA'],self.info['pm']['epmRA']),(self.info['pm']['pmDE'],self.info['pm']['epmDE']) 3243 pl.annotate('',xy=(ppm_ra[0]/50.,ppm_de[0]/50.), 3244 xycoords='data',xytext=(0,0),textcoords='data',color='red', 3245 arrowprops=dict(facecolor='red', shrink=0.05), 3246 horizontalalignment='right', verticalalignment='top') 3247 pl.annotate('pmRA: %.1f $\pm$ %.1f mas/yr\npmDE: %.1f $\pm$ %.1f mas/yr'%(ppm_ra[0],ppm_ra[1],ppm_de[0],ppm_de[1]), 3248 xy=(0.05,0.25),xycoords='axes fraction',color='red') 3249 if 'igrid_search' in self.results and 'd' in self.results['igrid_search']: 3250 (d,dprob) = self.results['igrid_search']['d'] 3251 max_distance = d[np.argmax(dprob)] 3252 e_max_distance = abs(max_distance - d[np.argmin(np.abs(dprob-0.5*max(dprob)))]) 3253 elif 'plx' in self.info and 'v' in self.info['plx'] and 'e' in self.info['plx']: 3254 plx,eplx = self.info['plx']['v'],self.info['plx']['e'] 3255 dist = 1000./ufloat((plx,eplx)) 3256 max_distance,e_max_distance = dist.nominal_value,dist.std_dev() 3257 else: 3258 max_distance = 1000. 3259 e_max_distance = 100. 3260 3261 tang_velo = 'Tan. vel. at %.0f+/-%.0f pc: '%(max_distance,e_max_distance) 3262 3263 max_distance = conversions.convert('pc','km',max_distance,e_max_distance) 3264 ppm_ra = conversions.convert('mas/yr','rad/s',*ppm_ra) 3265 ppm_de = conversions.convert('mas/yr','rad/s',*ppm_de) 3266 max_distance = unumpy.uarray([max_distance[0],max_distance[1]]) 3267 x = unumpy.uarray([ppm_ra[0],ppm_ra[1]]) 3268 y = unumpy.uarray([ppm_de[0],ppm_de[1]]) 3269 velocity = max_distance*utan( usqrt(x**2+y**2)) 3270 3271 3272 pl.annotate(tang_velo + '%s km/s'%(velocity),xy=(0.05,0.2),xycoords='axes fraction',color='red') 3273 if 'Vel' in self.info and 'v' in self.info['Vel']: 3274 rad_velo = 'Rad. vel.: %.1f'%(self.info['Vel']['v']) 3275 if 'e' in self.info['Vel']: 3276 rad_velo += '+/-%.1f'%(self.info['Vel']['e']) 3277 pl.annotate(rad_velo+' km/s',xy=(0.05,0.15),xycoords='axes fraction',color='red')
3278 3279
3280 - def make_plots(self):
3281 """ 3282 Make all available plots 3283 """ 3284 pl.figure(figsize=(22,12)) 3285 rows,cols = 3,4 3286 pl.subplots_adjust(left=0.04, bottom=0.07, right=0.97, top=0.96, 3287 wspace=0.17, hspace=0.24) 3288 pl.subplot(rows,cols,1);self.plot_grid(ptype='ci_red') 3289 pl.subplot(rows,cols,2);self.plot_grid(ptype='ebv') 3290 pl.subplot(rows,cols,3);self.plot_grid(ptype='z') 3291 pl.subplot(rows,cols,4);self.plot_distance() 3292 3293 pl.subplot(3,2,3);self.plot_sed(colors=False) 3294 pl.subplot(3,2,5);self.plot_sed(colors=True) 3295 3296 pl.subplot(rows,cols,7);self.plot_chi2(colors=False) 3297 pl.subplot(rows,cols,11);self.plot_chi2(colors=True) 3298 3299 #pl.subplot(rows,cols,8);self.plot_grid_model(ptype='prob') 3300 #pl.subplot(rows,cols,12);self.plot_grid_model(ptype='radii') 3301 3302 pl.figure(figsize=(12,12)) 3303 pl.axes([0,0.0,1.0,0.5]);self.plot_MW_side() 3304 pl.axes([0,0.5,0.5,0.5]);self.plot_MW_top() 3305 pl.axes([0.5,0.5,0.5,0.5]);self.plot_finderchart()
3306 3307 #} 3308 3309 #{Input and output 3310
3311 - def save_photometry(self,photfile=None):
3312 """ 3313 Save master photometry to a file. 3314 3315 @param photfile: name of the photfile. Defaults to C{starname.phot}. 3316 @type photfile: str 3317 """ 3318 #-- write to file 3319 if photfile is not None: 3320 self.photfile = photfile 3321 logger.info('Save photometry to file %s'%(self.photfile)) 3322 #-- add some comments 3323 if self.ID: 3324 if not 'bibcode' in self.master.dtype.names: 3325 self.master = crossmatch.add_bibcodes(self.master) 3326 if not 'comments' in self.master.dtype.names: 3327 self.master = vizier.quality_check(self.master,self.ID) 3328 ascii.write_array(self.master,self.photfile,header=True,auto_width=True,use_float='%g',comments=['#'+json.dumps(self.info)])
3329
3330 - def load_photometry(self,photfile=None):
3331 """ 3332 Load the contents of the photometry file to the master record. 3333 3334 @param photfile: name of the photfile. Defaults to the value of C{self.photfile}. 3335 @type photfile: str 3336 """ 3337 if photfile is not None: 3338 self.photfile = photfile 3339 logger.info('Load photometry from file %s'%(self.photfile)) 3340 self.master,comments = ascii.read2recarray(self.photfile,return_comments=True) 3341 #self.info = json.loads(comments[-3]) #### HE COMENTADO ESTO PARA QUE NO SE ATASQUE 3342 3343 ## to make the builder backwards-compatible with files made with an older version that did not implement a 'phases' column yet 3344 if 'phase' not in self.master.dtype.names: 3345 extra_cols = [[0]*len(self.master['meas'])] 3346 extradtype = [('phase',np.int)] 3347 names = list(self.master.dtype.names) 3348 lastnames = [] 3349 if 'bibcode' in names: 3350 lastnames.append('bibcode') 3351 names.remove('bibcode') 3352 if 'comments' in names: 3353 lastnames.append('comments') 3354 names.remove('comments') 3355 3356 lastrecords = self.master[lastnames] 3357 self.master = numpy_ext.recarr_addcols(self.master[names],extra_cols,extradtype) 3358 self.master = numpy_ext.recarr_join(self.master,lastrecords)
3359 3360
3361 - def save_fits(self,filename=None,overwrite=True):
3362 """ 3363 Save content of SED object to a FITS file. 3364 3365 The .fits file will contain the following extensions if they are present in the object: 3366 1) data (all previously collected photometric data = content of .phot file) 3367 2) model_igrid_search (full best model SED table from grid_search) 3368 3) igrid_search (CI from grid search = actual grid samples) 3369 4) synflux_igrid_search (integrated synthetic fluxes for best model from grid_search) 3370 5) model_imc (full best model SED table from monte carlo) 3371 6) imc (CI from monte carlo = actual monte carlo samples) 3372 7) synflux_imc (integrated synthetic fluxes for best model from monte carlo) 3373 3374 @param filename: name of SED FITS file 3375 @type filename: string 3376 @param overwrite: overwrite old FITS file if true 3377 @type overwrite: boolean 3378 3379 Example usage: 3380 3381 >>> #mysed.save_fits() 3382 >>> #mysed.save_fits(filename='myname.fits') 3383 """ 3384 if filename is None: 3385 filename = str(os.path.splitext(self.photfile)[0]+'.fits') 3386 if overwrite: 3387 if os.path.isfile(filename): 3388 os.remove(filename) 3389 logger.info('Old FITS file removed') 3390 3391 #-- write primary header 3392 #prim_header = {} 3393 #for key in self.info: 3394 #if not (isinstance(self.info[key],float) or isinstance(self.info[key],str)): 3395 #continue 3396 #prim_header[key] = self.info[key] 3397 #fits.write_recarray(np.array([[0]]),filename,header_dict=prim_header,ext=0) 3398 3399 #-- write master data 3400 master = self.master.copy() 3401 fits.write_recarray(master,filename,header_dict=dict(extname='data')) 3402 3403 #-- write the rest 3404 for mtype in self.results:#['igrid_search','imc']: 3405 eff_waves,synflux,photbands = self.results[mtype]['synflux'] 3406 chi2 = self.results[mtype]['chi2'] 3407 3408 results_modeldict = dict(extname='model_'+mtype) 3409 results_griddict = dict(extname=mtype) 3410 keys = sorted(self.results[mtype]) 3411 for key in keys: 3412 if 'CI' in key: 3413 for ikey in self.results[mtype][key]: 3414 if '_l' not in ikey and '_u' not in ikey and ikey != 'chisq': 3415 results_modeldict[ikey] = self.results[mtype][key][ikey] 3416 results_griddict[ikey] = self.results[mtype][key][ikey] 3417 if key=='factor': 3418 results_griddict[key] = self.results[mtype][key] 3419 3420 fits.write_array(list(self.results[mtype]['model']),filename, 3421 names=('wave','flux','dered_flux'), 3422 units=('AA','erg/s/cm2/AA','erg/s/cm2/AA'), 3423 header_dict=results_modeldict) 3424 3425 index = 1 3426 while index > 0: 3427 headerdict = results_modeldict.copy() 3428 key = 'model{}'.format(index) 3429 headerdict['extname'] = key+'_'+mtype 3430 if key in self.results[mtype].keys(): 3431 fits.write_array(list(self.results[mtype][key]),filename, 3432 names=('wave','flux','dered_flux'), 3433 units=('AA','erg/s/cm2/AA','erg/s/cm2/AA'), 3434 header_dict=headerdict) 3435 index += 1 3436 else: 3437 index = 0 3438 3439 if 'grid' in self.results[mtype]: 3440 fits.write_recarray(self.results[mtype]['grid'],filename,header_dict=results_griddict) 3441 3442 results = np.rec.fromarrays([synflux,eff_waves,chi2],dtype=[('synflux','f8'),('mod_eff_wave','f8'),('chi2','f8')]) 3443 3444 fits.write_recarray(results,filename,header_dict=dict(extname='synflux_'+mtype)) 3445 3446 logger.info('Results saved to FITS file: %s'%(filename))
3447 3448 3449 #def load_fits2(self,filename=None): 3450 #""" 3451 #Load a previously made SED FITS file. Only works for SEDs saved with 3452 #the save_fits function after 14.06.2012. 3453 3454 #The .fits file can contain the following extensions: 3455 #1) data (all previously collected photometric data = content of .phot file) 3456 #2) model_igrid_search (full best model SED table from grid_search) 3457 #3) igrid_search (CI from grid search = actual grid samples) 3458 #4) synflux_igrid_search (integrated synthetic fluxes for best model from grid_search) 3459 #5) model_imc (full best model SED table from monte carlo) 3460 #6) imc (CI from monte carlo = actual monte carlo samples) 3461 #7) synflux_imc (integrated synthetic fluxes for best model from monte carlo) 3462 3463 #@param filename: name of SED FITS file 3464 #@type filename: string 3465 #@rtype: bool 3466 #@return: true if Fits file could be loaded 3467 #""" 3468 #if filename is None: 3469 #filename = os.path.splitext(self.photfile)[0]+'.fits' 3470 #if not os.path.isfile(filename): 3471 #logger.warning('No previous results saved to FITS file {:s}'.format(filename)) 3472 #return False 3473 #ff = pf.open(filename) 3474 3475 ##-- observed photometry 3476 #fields = ff['data'].columns.names 3477 #master = np.rec.fromarrays([ff['data'].data.field(field) for field in fields],names=','.join(fields)) 3478 ##-- remove the whitespace in columns with strings added by fromarrays 3479 #for i,name in enumerate(master.dtype.names): 3480 #if master.dtype[i].str.count('S'): 3481 #for j,element in enumerate(master[name]): 3482 #master[name][j] = element.strip() 3483 #self.master = master 3484 3485 ##-- add dictionary that will contain the results 3486 #if not hasattr(self,'results'): 3487 #self.results = {} 3488 3489 ##-- grid search and MC results 3490 #mtypes = [ext.header['extname'] for ext in ff[1:]] 3491 #mtypes = list(set(mtypes) - set(['data'])) 3492 3493 #for mtype in mtypes: 3494 #if mtype.startswith('i'): 3495 #continue 3496 #else: 3497 #mtype = mtype.lower().split('_',1) #lstrip('synflux_').lstrip('model_') 3498 #prefix,mtype = splitted[0],splitted[1] 3499 #if not mtype in self.results: 3500 #self.results[mtype] = {} 3501 #self.results[mtype][prefix] = np.array(ff[prefix+'_'+mtype].data.field('wave'),dtype='float64'),np.array(ff[prefix+'_'+mtype].data.field('flux'),dtype='float64'),np.array(ff[prefix+'_'+mtype].data.field('dered_flux'),dtype='float64') 3502 #self.results[mtype]['chi2'] = np.array(ff['synflux_'+mtype].data.field('chi2'),dtype='float64') 3503 #self.results[mtype]['synflux'] = np.array(ff['synflux_'+mtype].data.field('mod_eff_wave'),dtype='float64'),np.array(ff['synflux_'+mtype].data.field('synflux'),dtype='float64'),self.master['photband'] 3504 3505 #searchmtypes = [] 3506 #for mtype in mtypes: 3507 #if 'igrid_search' in mtype: 3508 #searchmtypes.append(mtype) 3509 #elif 'iminimize' in mtype: 3510 #searchmtypes.append(mtype) 3511 #elif 'imc' in mtype: 3512 #searchmtypes.append(mtype) 3513 3514 #for mtype in searchmtypes: #['igrid_search','iminimize','imc']: 3515 #try: 3516 #fields = ff[mtype].columns.names 3517 #master = np.rec.fromarrays([ff[mtype].data.field(field) for field in fields],names=','.join(fields)) 3518 #if not mtype in self.results: 3519 #self.results[mtype] = {} 3520 #self.results[mtype]['grid'] = master 3521 #if 'factor' in ff[mtype].header: 3522 #self.results[mtype]['factor'] = np.array([ff[mtype].header['factor']])[0] 3523 3524 #headerkeys = ff[mtype].header.ascardlist().keys() 3525 #for key in 
headerkeys[::-1]: 3526 #for badkey in ['xtension','bitpix','naxis','pcount','gcount','tfields','ttype','tform','tunit','factor','extname']: 3527 #if key.lower().count(badkey): 3528 #headerkeys.remove(key) 3529 #continue 3530 #self.results[mtype]['CI'] = {} 3531 #for key in headerkeys: 3532 ##-- we want to have the same types as the original: numpy.float64 --> np.array([..])[0] 3533 #self.results[mtype]['CI'][key.lower()] = np.array([ff[mtype].header[key]])[0] 3534 #except KeyError: 3535 #continue 3536 3537 ##self.results['igrid_search'] = {} 3538 ##fields = ff['igrid_search'].columns.names 3539 ##master = np.rec.fromarrays([ff['igrid_search'].data.field(field) for field in fields],names=','.join(fields)) 3540 ##self.results['igrid_search']['grid'] = master 3541 ##self.results['igrid_search']['factor'] = ff['igrid_search'].header['factor'] 3542 3543 ##self.results['model'] = ff[2].data.field('wave'),ff[2].data.field('flux'),ff[2].data.field('dered_flux') 3544 ##self.results['chi2'] = ff[4].data.field('chi2') 3545 ##self.results['synflux'] = ff[4].data.field('mod_eff_wave'),ff[4].data.field('synflux'),ff[1].data.field('photband') 3546 3547 #ff.close() 3548 3549 #logger.info('Loaded previous results from FITS file: %s'%(filename)) 3550 #return filename 3551
3552 - def load_fits(self,filename=None):
3553 """ 3554 Load a previously made SED FITS file. Only works for SEDs saved with 3555 the save_fits function after 14.06.2012. 3556 3557 The .fits file can contain the following extensions: 3558 1) data (all previously collected photometric data = content of .phot file) 3559 2) model_igrid_search (full best model SED table from grid_search) 3560 3) igrid_search (CI from grid search = actual grid samples) 3561 4) synflux_igrid_search (integrated synthetic fluxes for best model from grid_search) 3562 5) model_imc (full best model SED table from monte carlo) 3563 6) imc (CI from monte carlo = actual monte carlo samples) 3564 7) synflux_imc (integrated synthetic fluxes for best model from monte carlo) 3565 3566 @param filename: name of SED FITS file 3567 @type filename: string 3568 @rtype: bool 3569 @return: true if Fits file could be loaded 3570 """ 3571 if filename is None: 3572 filename = os.path.splitext(self.photfile)[0]+'.fits' 3573 if not os.path.isfile(filename): 3574 logger.warning('No previous results saved to FITS file {:s}'.format(filename)) 3575 return False 3576 ff = pf.open(filename) 3577 3578 #-- observed photometry 3579 fields = ff['data'].columns.names 3580 master = np.rec.fromarrays([ff['data'].data.field(field) for field in fields],names=','.join(fields)) 3581 3582 #-- remove the whitespace in columns with strings added by fromarrays 3583 for i,name in enumerate(master.dtype.names): 3584 if master.dtype[i].str.count('S'): 3585 for j,element in enumerate(master[name]): 3586 master[name][j] = element.strip() 3587 self.master = master 3588 3589 #-- add dictionary that will contain the results 3590 if not hasattr(self,'results'): 3591 self.results = {} 3592 3593 #-- grid search and MC results 3594 mtypes = [ext.header['extname'] for ext in ff[1:]] 3595 mtypes = list(set(mtypes) - set(['data','DATA'])) 3596 3597 for mtype in mtypes: 3598 mtype = mtype.lower() 3599 if mtype.startswith('i'): 3600 if not mtype in self.results: 3601 self.results[mtype] = {} 3602 try: 3603 fields = ff[mtype].columns.names 3604 master = np.rec.fromarrays([ff[mtype].data.field(field) for field in fields],names=','.join(fields)) 3605 if not mtype in self.results: 3606 self.results[mtype] = {} 3607 self.results[mtype]['grid'] = master 3608 if 'factor' in ff[mtype].header: 3609 self.results[mtype]['factor'] = np.array([ff[mtype].header['factor']])[0] 3610 3611 headerkeys = ff[mtype].header.keys() #ascardlist().keys() 3612 for key in headerkeys[::-1]: 3613 for badkey in ['xtension','bitpix','naxis','pcount','gcount','tfields','ttype','tform','tunit','factor','extname']: 3614 if key.lower().count(badkey): 3615 headerkeys.remove(key) 3616 continue 3617 self.results[mtype]['CI'] = {} 3618 for key in headerkeys: 3619 #-- we want to have the same types as the original: numpy.float64 --> np.array([..])[0] 3620 self.results[mtype]['CI'][key.lower()] = np.array([ff[mtype].header[key]])[0] 3621 except KeyError,msg: 3622 print msg 3623 continue 3624 else: 3625 splitted = mtype.lower().split('_',1) #lstrip('synflux_').lstrip('model_') 3626 prefix,mtype = splitted[0],splitted[1] 3627 if not mtype in self.results: 3628 self.results[mtype] = {} 3629 if 'model' in prefix: 3630 self.results[mtype][prefix] = np.array(ff[prefix+'_'+mtype].data.field('wave'),dtype='float64'),np.array(ff[prefix+'_'+mtype].data.field('flux'),dtype='float64'),np.array(ff[prefix+'_'+mtype].data.field('dered_flux'),dtype='float64') 3631 elif 'synflux' in prefix: 3632 self.results[mtype]['chi2'] = 
np.array(ff[prefix+'_'+mtype].data.field('chi2'),dtype='float64') 3633 self.results[mtype][prefix] = np.array(ff[prefix+'_'+mtype].data.field('mod_eff_wave'),dtype='float64'),np.array(ff[prefix+'_'+mtype].data.field('synflux'),dtype='float64'),self.master['photband'] 3634 3635 ff.close() 3636 3637 logger.info('Loaded previous results from FITS file: %s'%(filename)) 3638 return filename
3639
3640 - def save_hdf5(self, filename=None, update=True):
3641 """ 3642 Save content of SED object to a HDF5 file. (HDF5 is the successor of FITS files, 3643 providing a clearer structure of the saved content.) 3644 This way of saving is more thorough that save_fits(), fx. the CI2D confidence 3645 intervals are not save to a fits file, but are saved to a hdf5 file. 3646 Currently the following data is saved to HDF5 file: 3647 - sed.master (photometry) 3648 - sed.results (results from all fitting methods) 3649 - sed.constraints (extra constraints on the fits) 3650 3651 @param filename: name of SED FITS file 3652 @type filename: string 3653 @param update: if True, an existing file will be updated with the current information, if 3654 False, an existing fill be overwritten 3655 @type update: bool 3656 @return: the name of the output HDF5 file. 3657 @rtype: string 3658 """ 3659 3660 if filename is None: 3661 filename = str(os.path.splitext(self.photfile)[0]+'.hdf5') 3662 3663 data = dict() 3664 data['master'] = self.master 3665 data['results'] = self.results 3666 data['constraints'] = self.constraints 3667 3668 hdf5.write_dict(data, filename, update=update) 3669 3670 logger.info('Results saved to HDF5 file: %s'%(filename)) 3671 return filename
3672
3673 - def load_hdf5(self,filename=None):
3674 """ 3675 Load a previously made SED from HDF5 file. 3676 3677 @param filename: name of SED FITS file 3678 @type filename: string 3679 @return: True if HDF5 file could be loaded 3680 @rtype: bool 3681 """ 3682 if filename is None: 3683 filename = os.path.splitext(self.photfile)[0]+'.hdf5' 3684 if not os.path.isfile(filename): 3685 logger.warning('No previous results saved to HFD5 file {:s}'.format(filename)) 3686 return False 3687 3688 data = hdf5.read2dict(filename) 3689 3690 self.master = data.get('master', {}) 3691 self.results = data.get('results', {}) 3692 self.constraints = data.get('constraints', {}) 3693 3694 logger.info('Loaded previous results from HDF5 file: %s'%(filename)) 3695 logger.debug('Loaded following datasets from HDF5 file:\n %s'%(data.keys())) 3696 return True
3697
3698 - def save_bibtex(self):
3699 """ 3700 Convert the bibcodes in a phot file to a bibtex file. 3701 3702 The first line in the bibtex file contains a \citet command citing 3703 all photometry. 3704 """ 3705 filename = os.path.splitext(self.photfile)[0]+'.bib' 3706 crossmatch.make_bibtex(self.master,filename=filename)
3707
3708 - def save_summary(self,filename=None,CI_limit=None,method='igrid_search',chi2type='ci_red'):
3709 """ 3710 Save a summary of the results to an ASCII file. 3711 """ 3712 #-- open the summary file to write the results 3713 if filename is None: 3714 filename = os.path.splitext(self.photfile)[0]+'.sum' 3715 3716 if CI_limit is None: 3717 CI_limit = self.CI_limit 3718 3719 #-- gather the results: 3720 grid_results = self.results[method]['grid'] 3721 start_CI = np.argmin(np.abs(grid_results[chi2type]-CI_limit)) 3722 factor = self.results[method]['factor'] 3723 names = ['factor','chi2_type','ci_limit'] 3724 results = [factor,chi2type,CI_limit*100] 3725 for name in grid_results.dtype.names: 3726 lv,cv,uv = grid_results[name][start_CI:].min(),\ 3727 grid_results[name][-1],\ 3728 grid_results[name][start_CI:].max() 3729 names += [name+'_l',name,name+'_u'] 3730 results += [lv,cv,uv] 3731 #-- write the used photometry to a file 3732 include_grid = self.master['include'] 3733 photbands = ":".join(self.master[include_grid]['photband']) 3734 references = ",".join(self.master[include_grid]['bibcode']) 3735 used_photometry = photometry2str(self.master[include_grid],comment='#') 3736 used_atmosphere = '#'+model.defaults2str()+'\n' 3737 used_photbands = '#'+photbands+'\n' 3738 used_references = '#'+references 3739 comments = used_photometry+used_atmosphere+used_photbands+used_references 3740 3741 contents = np.array([results]).T 3742 contents = np.rec.fromarrays(contents,names=names) 3743 ascii.write_array(contents,filename,auto_width=True,header=True, 3744 comments=comments.split('\n'),mode='a',use_float='%g') 3745 3746 logger.info('Saved summary to {0}'.format(filename))
3747 3748
3749 - def save_important_info(self,filename=None,CI_limit=None,method='igrid_search',chi2type='ci_red'):
3750 """ 3751 Save a summary of the results to an ASCII file. 3752 """ 3753 #-- open the summary file to write the results 3754 if filename is None: 3755 filename = os.path.splitext(self.photfile)[0]+'.sum' 3756 3757 if CI_limit is None: 3758 CI_limit = self.CI_limit 3759 3760 #-- gather the results: 3761 grid_results = self.results[method]['grid'] 3762 start_CI = np.argmin(np.abs(grid_results[chi2type]-CI_limit)) 3763 factor = self.results[method]['factor'] 3764 #names = ['scaling_factor','chi2_type','ci_limit'] 3765 results = [factor,chi2type,CI_limit*100] 3766 print 'Metallicity:' 3767 print grid_results['z'][-1] 3768 wanted_names = ['ebv','logg','teff','z','chisq'] 3769 for name in wanted_names: 3770 lv,cv,uv = grid_results[name][start_CI:].min(),\ 3771 grid_results[name][-1],\ 3772 grid_results[name][start_CI:].max() 3773 #names += [name+'_l',name,name+'_u'] 3774 results += ["%.3f"%lv,"%.3f"%cv,"%.3f"%uv] 3775 3776 contents = np.array([results]).T 3777 contents = np.rec.fromarrays(contents) 3778 ascii.write_array(contents,filename,auto_width=True,mode='a',use_float='%g') 3779 3780 logger.info('Saved summary to {0}'.format(filename))
3781
3782 3783 #} 3784 3785 -class BinarySED(SED):
3786
3787 - def __init__(self,ID=None,photfile=None,plx=None,load_fits=True,load_hdf5=True,label='', **kwargs):
3788 """ 3789 Setup the Binary sed in the same way as a normal SED. 3790 The masses of both components can be provided, and will then be used in igrid_search, 3791 iminimize, and while calculating CI and CI2D confidence intervalls 3792 """ 3793 super(BinarySED, self).__init__(ID=ID,photfile=photfile,plx=plx,label=label,\ 3794 load_fits=load_fits,load_hdf5=load_hdf5) 3795 3796 self.set_constraints(**kwargs)
3797 3798
3799 - def set_constraints(self, **kwargs):
3800 """ 3801 Add constraints that are used when fitting the Binary SED. Up till now the following 3802 contraints are supported: 3803 - masses (in Msol) 3804 - distance (in Rsol) 3805 TODO: This function should in the future accept Units. 3806 """ 3807 3808 if 'masses' in kwargs: 3809 self.constraints['masses'] = kwargs['masses'] 3810 if 'distance' in kwargs: 3811 self.constraints['distance'] = kwargs['distance']
3812
3813 - def constraints2str(self):
3814 """ 3815 Summarizes all constraints in a string. 3816 """ 3817 res = "" 3818 for key in self.constraints.keys(): 3819 res += "Using constraint: %s = %s\n"%(key, self.constraints[key]) 3820 res = res[:-1] 3821 return res
3822
3823 - def igrid_search(self,points=100000,teffrange=None,loggrange=None,ebvrange=None,\ 3824 zrange=None,rvrange=((3.1,3.1),(3.1,3.1)),vradrange=((0,0),(0,0)),\ 3825 radrange=(None,None),compare=True,df=None,CI_limit=None,\ 3826 set_model=True, distance=None,**kwargs):
3827 """ 3828 Fit fundamental parameters using a (pre-integrated) grid search. 3829 3830 If called consecutively, the ranges will be set to the CI_limit of previous 3831 estimations, unless set explicitly. 3832 3833 If called for the first time, the ranges will be +/- np.inf by defaults, 3834 unless set explicitly. 3835 """ 3836 3837 if CI_limit is None or CI_limit > 1.0: 3838 CI_limit = self.CI_limit 3839 3840 #-- set defaults limits 3841 ranges = self.generate_ranges(teffrange=teffrange[0],\ 3842 loggrange=loggrange[0],ebvrange=ebvrange[0],\ 3843 zrange=zrange[0],rvrange=rvrange[0],vradrange=vradrange[0], 3844 radrange=radrange[0],teff2range=teffrange[1],\ 3845 logg2range=loggrange[1],ebv2range=ebvrange[1],\ 3846 z2range=zrange[1],rv2range=rvrange[1],vrad2range=vradrange[1], 3847 rad2range=radrange[1]) 3848 3849 #-- grid search on all include data: extract the best CHI2 3850 include_grid = self.master['include'] 3851 logger.info('The following measurements are included in the fitting process:\n%s'%\ 3852 (photometry2str(self.master[include_grid]))) 3853 3854 logger.info('The following constraints are included in the fitting process:\n%s'%\ 3855 (self.constraints2str())) 3856 3857 #-- build the grid, run over the grid and calculate the CHI2 3858 masses = self.constraints.get('masses', None) 3859 pars = fit.generate_grid_pix(self.master['photband'][include_grid], masses=masses, points=points, **ranges) 3860 3861 chisqs,scales,escales,lumis = fit.igrid_search_pix(self.master['cmeas'][include_grid], 3862 self.master['e_cmeas'][include_grid], 3863 self.master['photband'][include_grid], 3864 model_func=model.get_itable_pix,constraints=self.constraints, 3865 **pars) 3866 fitres = dict(chisq=chisqs, scale=scales, escale=escales, labs=lumis) 3867 3868 #-- collect all the results in a record array 3869 self.collect_results(grid=pars, fitresults=fitres, mtype='igrid_search') 3870 3871 #-- do the statistics 3872 self.calculate_statistics(df=df, ranges=ranges, mtype='igrid_search') 3873 3874 #-- compute the confidence intervals 3875 ci = self.calculate_confidence_intervals(mtype='igrid_search',chi2_type='red',\ 3876 CI_limit=CI_limit) 3877 self.store_confidence_intervals(mtype='igrid_search', **ci) 3878 3879 #-- remember the best model 3880 if set_model: self.set_best_model()
3881
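A sketch of a binary grid search (the target and all ranges are hypothetical; photometry must already be available). Each range is a tuple with the interval for the primary and the secondary component, matching the call signature above:

>>> mysed = BinarySED('HD180642',masses=(11.5,3.2))
>>> mysed.igrid_search(points=50000,
...                    teffrange=((20000,28000),(5000,8000)),
...                    loggrange=((3.5,4.5),(3.5,4.5)),
...                    ebvrange=((0.0,0.2),(0.0,0.2)),
...                    zrange=((0,0),(0,0)),
...                    radrange=((0.1,10.),(0.1,10.)))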
3882 - def generate_fit_param(self, start_from='igrid_search', **pars):
3883 """ 3884 Generates a dictionary with parameter information that can be handled by fit.iminimize. 3885 """ 3886 masses = self.constraints.get('masses', None) 3887 result = super(BinarySED, self).generate_fit_param(start_from=start_from, **pars) 3888 3889 #-- Couple ebv and rv of both components 3890 result['ebv2_expr'] = 'ebv' 3891 result['rv2_expr'] = 'rv' 3892 3893 if masses is not None: 3894 #-- Couple the radii to the masses 3895 G, Msol, Rsol = constants.GG_cgs, constants.Msol_cgs, constants.Rsol_cgs 3896 result['rad_value'] = np.sqrt(G*masses[0]*Msol/10**result['logg_value']) 3897 result['rad2_value'] = np.sqrt(G*masses[1]*Msol/10**result['logg2_value']) 3898 result['rad_expr'] = 'sqrt(%0.5f/10**logg) * 165.63560394542122'%(masses[0]) 3899 result['rad2_expr'] = 'sqrt(%0.5f/10**logg2) * 165.63560394542122'%(masses[1]) 3900 result['rad_min'], result['rad_max'] = None, None 3901 result['rad2_min'], result['rad2_max'] = None, None 3902 result['rad_vary'], result['rad2_vary'] = False, False 3903 else: 3904 result['rad_value'] = self.results[start_from]['CI']['rad'] 3905 result['rad2_value'] = self.results[start_from]['CI']['rad2'] 3906 3907 return result
3908
3909 - def iminimize(self, teff=(None,None), logg=(None,None), ebv=(None,None), z=(None,None), 3910 rv=(None,None), vrad=(0,0), teffrange=(None,None), loggrange=(None,None), 3911 ebvrange=(None,None), zrange=(None,None), rvrange=(None,None), 3912 vradrange=(None,None), radrange=(None,None), compare=True, df=None, 3913 distance=None, start_from='igrid_search', points=None, CI_limit=None, 3914 calc_ci=True, set_model=True, **kwargs):
3915 """ Binary minimizer """ 3916 3917 ranges = self.generate_ranges(teffrange=teffrange[0],\ 3918 loggrange=loggrange[0],ebvrange=ebvrange[0],\ 3919 zrange=zrange[0],rvrange=rvrange[0],vradrange=vradrange[0], 3920 radrange=radrange[0],teff2range=teffrange[1],\ 3921 logg2range=loggrange[1],ebv2range=ebvrange[1],\ 3922 z2range=zrange[1],rv2range=rvrange[1],vrad2range=vradrange[1], 3923 rad2range=radrange[1]) 3924 pars = self.generate_fit_param(teff=teff[0],logg=logg[0],ebv=ebv[0],z=z[0],\ 3925 rv=rv[0], vrad=vrad[0],teff2=teff[1],logg2=logg[1],\ 3926 ebv2=ebv[1],z2=z[1],rv2=rv[1],vrad2=vrad[1], \ 3927 start_from=start_from, **ranges) 3928 3929 #-- Print the used data and constraints 3930 include_grid = self.master['include'] 3931 logger.info('The following measurements are included in the fitting process:\n%s'%\ 3932 (photometry2str(self.master[include_grid]))) 3933 logger.info('The following constraints are included in the fitting process:\n%s'%\ 3934 (self.constraints2str())) 3935 3936 #-- pass all ranges and starting values to the fitter 3937 kick_list = ['teff', 'logg', 'teff2', 'logg2', 'ebv'] 3938 grid, chisq, nfev, scale, lumis = fit.iminimize(self.master['cmeas'][include_grid], 3939 self.master['e_cmeas'][include_grid], self.master['photband'][include_grid], 3940 points=points, kick_list=kick_list, constraints=self.constraints, **pars) 3941 3942 logger.info('Minimizer success with startpoints=%s, chi2=%s, nfev=%s'%(len(chisq), chisq[0], nfev[0])) 3943 #-- handle the results 3944 fitres = dict(chisq=chisq, nfev=nfev, scale=scale, labs=lumis) 3945 self.collect_results(grid=grid, fitresults=fitres, mtype='iminimize', selfact='chisq') 3946 3947 #-- Do the statistics 3948 self.calculate_statistics(df=5, ranges=ranges, mtype='iminimize', selfact='chisq') 3949 3950 #-- Store the confidence intervals 3951 ci = self._get_imin_ci(mtype='iminimize',**ranges) 3952 self.store_confidence_intervals(mtype='iminimize', **ci) 3953 if calc_ci: 3954 self.calculate_iminimize_CI(mtype='iminimize', CI_limit=CI_limit) 3955 3956 #-- remember the best model 3957 if set_model: self.set_best_model(mtype='iminimize')
3958 3959
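A sketch of refining a previous grid search with the minimizer (hypothetical call). By default the fit starts from the C{igrid_search} solution and, when masses are available as a constraint, the component radii are coupled to the surface gravities as shown in C{generate_fit_param}:

>>> mysed.iminimize(start_from='igrid_search',points=10,calc_ci=True)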
3960 - def calculate_iminimize_CI(self, mtype='iminimize', CI_limit=0.66, **kwargs):
3961 """ 3962 Calculate the confidence intervals for each parameter using the lmfit 3963 confidence-interval method. 3964 3965 The calculated confidence intervals are stored in the results['CI'] 3966 dictionary. If the method fails, or if the requested CI falls outside the 3967 provided ranges, those ranges are stored as the CI. 3968 3969 This method works in the same way as for a single SED, but it adds 3970 the mass as an extra constraint if the masses of both components are stored. 3971 """ 3972 masses = self.constraints.get('masses',None) 3973 super(BinarySED, self).calculate_iminimize_CI(mtype=mtype, CI_limit=CI_limit,\ 3974 masses=masses, constraints=self.constraints,\ 3975 **kwargs)
3976
3977 - def calculate_iminimize_CI2D(self,xpar, ypar, mtype='iminimize', limits=None, res=10, **kwargs):
3978 """ 3979 Calculate two-dimensional confidence intervals for the given parameters, 3980 using lmfit methods. 3981 3982 The calculated confidence intervals are stored in the results['CI2D'] 3983 dictionary. 3984 3985 This method works in the same way as for a single SED, but it adds 3986 the mass as an extra constraint if the masses of both components are stored. 3987 """ 3988 masses = self.constraints.get('masses',None) 3989 super(BinarySED, self).calculate_iminimize_CI2D(xpar, ypar, mtype=mtype,\ 3990 limits=limits, res=res, masses=masses, **kwargs)
3991
3992 - def set_best_model(self,mtype='igrid_search',law='fitzpatrick2004', **kwargs):
3993 """ 3994 Get the reddened and unreddened model. 3995 """ 3996 logger.info('Interpolating approximate full SED of best model') 3997 3998 #-- synthetic flux 3999 include = self.master['include'] 4000 synflux = np.zeros(len(self.master['photband'])) 4001 keep = (self.master['cwave']<1.6e6) | np.isnan(self.master['cwave']) 4002 keep = keep & include 4003 4004 if mtype in ['igrid_search', 'iminimize']: 4005 scale = self.results[mtype]['CI']['scale'] 4006 4007 #-- get (approximated) reddened and unreddened model 4008 wave,flux = model.get_table_multiple(teff=(self.results[mtype]['CI']['teff'],self.results[mtype]['CI']['teff2']), 4009 logg=(self.results[mtype]['CI']['logg'],self.results[mtype]['CI']['logg2']), 4010 ebv=(self.results[mtype]['CI']['ebv'],self.results[mtype]['CI']['ebv2']), 4011 radius=(self.results[mtype]['CI']['rad'],self.results[mtype]['CI']['rad2']), 4012 law=law) 4013 wave_ur,flux_ur = model.get_table_multiple(teff=(self.results[mtype]['CI']['teff'],self.results[mtype]['CI']['teff2']), 4014 logg=(self.results[mtype]['CI']['logg'],self.results[mtype]['CI']['logg2']), 4015 ebv=(0,0), 4016 radius=(self.results[mtype]['CI']['rad'],self.results[mtype]['CI']['rad2']), 4017 law=law) 4018 #-- get synthetic photometry 4019 pars = {} 4020 for key in self.results[mtype]['CI'].keys(): 4021 if not key[-2:] == '_u' and not key[-2:] == '_l': 4022 pars[key] = self.results[mtype]['CI'][key] 4023 synflux_,pars = model.get_itable(photbands=self.master['photband'][keep], **pars) 4024 flux,flux_ur = flux*scale,flux_ur*scale 4025 4026 synflux[keep] = synflux_ 4027 4028 synflux[-self.master['color']] *= scale 4029 chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2 4030 #-- calculate effective wavelengths of the photometric bands via the model 4031 # values 4032 eff_waves = filters.eff_wave(self.master['photband'],model=(wave,flux)) 4033 self.results[mtype]['model'] = wave,flux,flux_ur 4034 self.results[mtype]['synflux'] = eff_waves,synflux,self.master['photband'] 4035 self.results[mtype]['chi2'] = chi2
4036
4037 4038 -class PulsatingSED(SED):
4039
4040 - def __init__(self,ID=None,photfile=None,plx=None,load_fits=True,load_hdf5=True,label='', **kwargs):
4041 """ 4042 Set up the pulsating SED in the same way as a normal SED. 4043 Temperature and gravity offsets between the phases (deltaTeff, deltaLogg) can be provided 4044 as keyword arguments; they are stored as constraints and used in igrid_search. 4045 """ 4046 super(PulsatingSED, self).__init__(ID=ID,photfile=photfile,plx=plx,label=label,\ 4047 load_fits=load_fits,load_hdf5=load_hdf5) 4048 4049 self.set_constraints(**kwargs)
4050 4051
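A setup sketch for a pulsating star (the target and offsets are hypothetical). One tuple of signed lower/upper limits is given per phase in addition to the zeroth phase:

>>> mysed = PulsatingSED('HD180642',deltaTeff=[(-250.,250.)],deltaLogg=[(-0.05,0.05)])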
4052 - def set_constraints(self, **kwargs):
4053 """ 4054 Add constraints that are used when fitting the pulsating SED. Currently the following 4055 constraints are supported (and in fact required for the fit): 4056 - deltaTeff (in K) 4057 - deltaLogg (in dex) 4058 For each 'phase' in addition to the zeroth phase, a tuple containing a lower and an upper 4059 limit needs to be added to the deltaTeff and deltaLogg lists. These limits carry a sign! 4060 4061 TODO: This function should in the future accept units. 4062 """ 4063 4064 #if 'mass' in kwargs: 4065 # self.constraints['mass'] = kwargs['mass'] 4066 #if 'distance' in kwargs: 4067 #self.constraints['distance'] = kwargs['distance'] 4068 if 'deltaTeff' in kwargs: 4069 deltaTeff = kwargs.get('deltaTeff') 4070 logger.debug('deltaTeff constraints: %s'%(str(deltaTeff))) 4071 for i in range(len(deltaTeff)): 4072 self.constraints['delta{}Teff'.format(i+1)] = deltaTeff[i] 4073 if 'deltaLogg' in kwargs: 4074 deltaLogg = kwargs.get('deltaLogg') 4075 logger.debug('deltaLogg constraints: %s'%(str(deltaLogg))) 4076 for i in range(len(deltaLogg)): 4077 self.constraints['delta{}Logg'.format(i+1)] = deltaLogg[i]
4078
4079 - def constraints2str(self):
4080 """ 4081 Summarizes all constraints in a string. 4082 """ 4083 res = "" 4084 for key in self.constraints.keys(): 4085 res += "Using constraint: %s = %s\n"%(key, self.constraints[key]) 4086 res = res[:-1] 4087 return res
4088
4089 - def igrid_search(self,points=100000,teffrange=None,loggrange=None,ebvrange=None,\ 4090 zrange=None,rvrange=(3.1,3.1),vradrange=None,df=None,CI_limit=None,\ 4091 set_model=True, distance=None,**kwargs):
4092 """ 4093 Fit fundamental parameters at various phases simultaneously, using a (pre-integrated) grid search. 4094 4095 Parameter ranges only need to be provided for the first 'phase', but be aware that 4096 these ranges should encompass the parameters of your star at all phases! The part of the 4097 parameter space that falls outside the box defined for the first 'phase' is not 4098 probed at other phases, even though the defined constraints would in principle allow it. 4099 4100 """ 4101 4102 if CI_limit is None or CI_limit > 1.0: 4103 CI_limit = self.CI_limit 4104 4105 ##-- the data sets 4106 include_grid = self.master['include'] 4107 logger.info('The following measurements are included in the fitting process:\n%s'%\ 4108 (photometry2str(self.master[include_grid]))) 4109 4110 phase_indices = self.master['phase'] 4111 unique_phases = np.unique(phase_indices) 4112 logger.info('Data sets with the following phase indices are included in the fitting process:\n%s'%\ 4113 (str(unique_phases))) 4114 4115 ##-- set defaults limits 4116 ranges = self.generate_ranges(teffrange=teffrange,loggrange=loggrange,ebvrange=ebvrange,zrange=zrange,rvrange=rvrange,vradrange=vradrange) 4117 4118 ##-- the constraints used in the fit 4119 logger.info('The following constraints are included in the fitting process:\n%s'%\ 4120 (self.constraints2str())) 4121 4122 ##-- build the grid, run over the grid and calculate the CHI2 4123 newkwargs = ranges.copy() 4124 newkwargs.update(self.constraints) 4125 pars = fit.generate_grid_pix_pulsating2(self.master['photband'][include_grid],\ 4126 points=points, unique_phases=unique_phases,\ 4127 **newkwargs) 4128 4129 chisqs,scales,escales,lumis = fit.igrid_search_pix2(self.master['cmeas'][include_grid], 4130 self.master['e_cmeas'][include_grid],\ 4131 self.master['photband'][include_grid],phases=self.master['phase'][include_grid],\ 4132 unique_phases=unique_phases,model_func=model.get_itable_pix,\ 4133 constraints=self.constraints,stat_func=fit.stat_chi2b,\ 4134 **pars) 4135 4136 ##-- collect all the results in record arrays, remove the points that fall 4137 # out of the model grid (the nan in chisqs), do the statistics and 4138 # compute the confidence intervals 4139 for i in range(len(unique_phases)): 4140 fitres = dict(chisq=chisqs[:,i], scale=scales[:,i], escale=escales[:,i], labs=lumis[:,i]) 4141 partpars = pars.fromkeys(pars.keys()) 4142 for key in pars.keys(): 4143 partpars[key] = pars[key][:,i] 4144 self.collect_results(grid=partpars, fitresults=fitres, mtype='igrid_search_{}'.format(unique_phases[i])) 4145 4146 ##-- do the statistics 4147 self.calculate_statistics(df=df, ranges=ranges, mtype='igrid_search_{}'.format(unique_phases[i])) 4148 4149 ##-- compute the confidence intervals 4150 ci = self.calculate_confidence_intervals(mtype='igrid_search_{}'.format(unique_phases[i]),\ 4151 chi2_type='red',CI_limit=CI_limit) 4152 self.store_confidence_intervals(mtype='igrid_search_{}'.format(unique_phases[i]), **ci) 4153 4154 ##-- remember the best model 4155 if set_model: self.set_best_model(mtype='igrid_search_{}'.format(unique_phases[i])) 4156 4157 ##-- collect the results of the all-inclusive fit 4158 index = len(unique_phases) 4159 fitres = dict(chisq=chisqs[:,index], scale=scales[:,index], escale=escales[:,index], labs=lumis[:,index]) 4160 allpars = pars.fromkeys(pars.keys()) 4161 for key in pars.keys(): 4162 allpars[key] = pars[key][:,0] 4163 for i in range(len(unique_phases)-1): 4164 fitres['scale{}'.format(i+1)] = scales[:,index+i+1] 4165 
fitres['escale{}'.format(i+1)] = escales[:,index+i+1] 4166 fitres['labs{}'.format(i+1)] = lumis[:,index+i+1] 4167 allpars['teff{}'.format(i+1)] = pars['teff'][:,i+1] 4168 allpars['logg{}'.format(i+1)] = pars['logg'][:,i+1] 4169 ranges['teff{}range'.format(i+1)] = self.constraints['delta{}Teff'.format(i+1)] 4170 ranges['logg{}range'.format(i+1)] = self.constraints['delta{}Logg'.format(i+1)] 4171 4172 self.collect_results(grid=allpars, fitresults=fitres, mtype='igrid_search_all') 4173 4174 ##-- do the statistics of the all-inclusive fit 4175 self.calculate_statistics(df=df, ranges=ranges, mtype='igrid_search_all') 4176 4177 ##-- compute the confidence intervals of the all-inclusive fit 4178 ci = self.calculate_confidence_intervals(mtype='igrid_search_all',\ 4179 chi2_type='red',CI_limit=CI_limit) 4180 self.store_confidence_intervals(mtype='igrid_search_all', **ci) 4181 4182 ##-- remember the best model 4183 if set_model: self.set_best_model(mtype='igrid_search_all')
4184
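A sketch of the phased grid search (all ranges are hypothetical). The ranges refer to the zeroth phase and should be wide enough to contain the star at all phases; results are stored per phase under C{'igrid_search_<phase index>'} and combined under C{'igrid_search_all'}:

>>> mysed.igrid_search(points=50000,
...                    teffrange=(6000,8000),
...                    loggrange=(3.5,4.5),
...                    ebvrange=(0.0,0.1),
...                    zrange=(0,0))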
4185 - def generate_fit_param(self, start_from='igrid_search', **pars):
4186 """ 4187 Generates a dictionary with parameter information that can be handled by fit.iminimize. 4188 """ 4189 raise NotImplementedError
4190
4191 - def calculate_iminimize_CI(self, mtype='iminimize', CI_limit=0.66, **kwargs):
4192 raise NotImplementedError
4193
4194 - def calculate_iminimize_CI2D(self, xpar, ypar, mtype='iminimize', limits=None, res=10, **kwargs):
4195 raise NotImplementedError
4196
4197 - def iminimize(self, teff=None, logg=None, ebv=None, z=0, rv=3.1, vrad=0, teffrange=None, 4198 loggrange=None, ebvrange=None, zrange=None, rvrange=None, vradrange=None, 4199 points=None, distance=None, start_from='igrid_search',df=None, CI_limit=None, 4200 calc_ci=False, set_model=True, **kwargs):
4201 raise NotImplementedError
4202
4203 - def imc(self,teffrange=None,loggrange=None,ebvrange=None,zrange=None,start_from='igrid_search',\ 4204 distribution='uniform',points=None,fitmethod='fmin',disturb=True):
4205 raise NotImplementedError
4206
4207 - def set_best_model(self,mtype='igrid_search',law='fitzpatrick2004',**kwargs):
4208 """ 4209 Get the reddened and unreddened model. 4210 """ 4211 super(PulsatingSED, self).set_best_model(mtype=mtype,law=law,**kwargs) 4212 4213 if '_all' in mtype: 4214 logger.info('Interpolating approximate full SED of best model at additional phases.') 4215 4216 #-- synthetic flux 4217 include = self.master['include'] 4218 phases = self.master['phase'] 4219 unique_phases = np.unique(phases) 4220 synflux = self.results[mtype]['synflux'][1] 4221 keep = (self.master['cwave']<1.6e6) | np.isnan(self.master['cwave']) 4222 keep = keep & include 4223 4224 if ('igrid_search' in mtype) | ('iminimize' in mtype): #mtype in ['igrid_search', 'iminimize']: 4225 #-- get the metallicity right 4226 files = model.get_file(z='*') 4227 if type(files) == str: files = [files] #files needs to be a list! 4228 metals = np.array([pf.getheader(ff)['Z'] for ff in files]) 4229 metals = metals[np.argmin(np.abs(metals-self.results[mtype]['CI']['z']))] 4230 4231 for i in range(len(unique_phases)-1): 4232 keep_i = keep & (phases == unique_phases[i+1]) #-- restrict to this phase only, do not accumulate over phases 4233 scale = self.results[mtype]['CI']['scale{}'.format(i+1)] 4234 #-- get (approximated) reddened and unreddened model 4235 wave,flux = model.get_table(teff=self.results[mtype]['CI']['teff{}'.format(i+1)], 4236 logg=self.results[mtype]['CI']['logg{}'.format(i+1)], 4237 ebv=self.results[mtype]['CI']['ebv'], 4238 z=metals, 4239 law=law) 4240 wave_ur,flux_ur = model.get_table(teff=self.results[mtype]['CI']['teff{}'.format(i+1)], 4241 logg=self.results[mtype]['CI']['logg{}'.format(i+1)], 4242 ebv=0, 4243 z=metals, 4244 law=law) 4245 #-- get synthetic photometry 4246 synflux_,Labs = model.get_itable(teff=self.results[mtype]['CI']['teff{}'.format(i+1)], 4247 logg=self.results[mtype]['CI']['logg{}'.format(i+1)], 4248 ebv=self.results[mtype]['CI']['ebv'], 4249 z=self.results[mtype]['CI']['z'], 4250 photbands=self.master['photband'][keep_i]) 4251 4252 flux,flux_ur = flux*scale,flux_ur*scale 4253 4254 synflux[keep_i] = synflux_ 4255 4256 #synflux,Labs = model.get_itable(teff=self.results[mtype]['CI']['teff'], 4257 # logg=self.results[mtype]['CI']['logg'], 4258 # ebv=self.results[mtype]['CI']['ebv'], 4259 # photbands=self.master['photband']) 4260 synflux[-self.master['color'] & keep_i] *= scale 4261 chi2 = (self.master['cmeas']-synflux)**2/self.master['e_cmeas']**2 4262 #-- calculate effective wavelengths of the photometric bands via the model 4263 # values 4264 eff_waves = filters.eff_wave(self.master['photband'],model=(wave,flux)) 4265 self.results[mtype]['model{}'.format(i+1)] = wave,flux,flux_ur 4266 self.results[mtype]['synflux'] = eff_waves,synflux,self.master['photband'] 4267 self.results[mtype]['chi2'] = chi2
4268 4269 @standalone_figure
4270 - def plot_sed(self,colors=False,mtype='igrid_search',plot_redded=True,plot_deredded=False, 4271 plot_unselected=True,wave_units='AA',flux_units='erg/s/cm2',**kwargs):
4272 """ 4273 Plot a fitted SED together with the data. 4274 4275 Example usage: 4276 4277 First set the SED: 4278 4279 >>> mysed = SED('HD180642') 4280 >>> mysed.load_fits() 4281 4282 Then make the plots: 4283 4284 >>> p = pl.figure() 4285 >>> p = pl.subplot(121) 4286 >>> mysed.plot_sed(colors=False) 4287 >>> p = pl.subplot(122) 4288 >>> mysed.plot_sed(colors=True) 4289 4290 ]]include figure]]ivs_sed_builder_plot_sed_01.png] 4291 """ 4292 if 'phase' in kwargs: 4293 uniquephase = kwargs.pop('phase') 4294 phase = True 4295 else: 4296 uniquephase = None 4297 phase = False 4298 annotation = kwargs.pop('annotation',True) 4299 4300 def plot_sed_getcolors(master,color_dict=None): 4301 myphotbands = [iphotb.split('.')[1] for iphotb in master['photband'][master['color']]] 4302 if not myphotbands: #-- If there are no colours none can be returned (added by joris 30-01-2012) 4303 return [],[],[],None 4304 if color_dict is None: 4305 color_dict = {myphotbands[0]:0} 4306 for mycol in myphotbands[1:]: 4307 if not mycol in color_dict: 4308 max_int = max([color_dict[entry] for entry in color_dict]) 4309 color_dict[mycol] = max_int+1 4310 x = [color_dict[mycol] for mycol in myphotbands] 4311 y = master['cmeas'] 4312 e_y = master['e_cmeas'] 4313 return x,y,e_y,color_dict
4314 4315 4316 x__,y__,e_y__,color_dict = plot_sed_getcolors(self.master) 4317 4318 #-- get the color cycle 4319 systems = np.array([system.split('.')[0] for system in self.master['photband']],str) 4320 set_systems = sorted(list(set(systems))) 4321 4322 if ('absolutesymbolcolor' in kwargs) and (kwargs.pop('absolutesymbolcolor') == True): 4323 sortedphotsystems,plotcolorvalues = filters.get_plotsymbolcolorinfo() 4324 selectedcolorinds = np.array([np.where(sortedphotsystems == system)[0][0] for system in set_systems]) 4325 color_cycle = itertools.cycle([pl.cm.spectral(j) for j in plotcolorvalues[selectedcolorinds]]) 4326 else: 4327 color_cycle = itertools.cycle([pl.cm.spectral(j) for j in np.linspace(0, 1.0, len(set_systems))]) 4328 4329 #-- for plotting reasons, we translate every color to an integer 4330 for system in set_systems: 4331 color = color_cycle.next() 4332 keep = (systems==system) & (self.master['color']==colors) 4333 if not plot_unselected: 4334 keep = keep & self.master['include'] 4335 if phase: 4336 keep = keep & (self.master['phase'] == uniquephase) 4337 #-- synthetic: 4338 if sum(keep) and mtype in self.results and 'synflux' in self.results[mtype]: 4339 if colors: 4340 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 4341 y = self.results[mtype]['synflux'][1][keep] 4342 else: 4343 x = self.results[mtype]['synflux'][0][keep] 4344 y = self.results[mtype]['synflux'][1][keep] 4345 #-- convert to correct units 4346 y = conversions.convert('erg/s/cm2/AA',flux_units,y,wave=(x,'AA')) 4347 x = conversions.convert('AA',wave_units,x) 4348 pl.plot(x,y,'x',ms=10,mew=2,alpha=0.75,color=color,**kwargs) 4349 #-- include: 4350 keep = (systems==system) & (self.master['color']==colors) & self.master['include'] & (self.master['phase'] == uniquephase) 4351 if sum(keep): 4352 if colors: 4353 #-- translate every color to an integer 4354 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 4355 else: 4356 if mtype in self.results and 'synflux' in self.results[mtype]: 4357 x = self.results[mtype]['synflux'][0][keep] 4358 else: 4359 x = self.master['cwave'][keep] 4360 y = self.master['cmeas'][keep] 4361 e_y = self.master['e_cmeas'][keep] 4362 y,e_y = conversions.convert('erg/s/cm2/AA',flux_units,y,e_y,wave=(x,'AA')) 4363 x = conversions.convert('AA',wave_units,x) 4364 4365 4366 p = pl.errorbar(x,y,yerr=e_y,fmt='o',label=system, 4367 capsize=10,ms=7,color=color,**kwargs) 4368 4369 #-- exclude: 4370 label = np.any(keep) and '_nolegend_' or system 4371 keep = (systems==system) & (self.master['color']==colors) & -self.master['include'] 4372 if phase: 4373 keep = keep & (self.master['phase'] == uniquephase) 4374 if sum(keep) and plot_unselected: 4375 if colors: 4376 x,y,e_y,color_dict = plot_sed_getcolors(self.master[keep],color_dict) 4377 else: 4378 x = self.results[mtype]['synflux'][0][keep] 4379 if np.any(np.isnan(x)): 4380 x = self.master['cwave'][keep] 4381 y = self.master['cmeas'][keep] 4382 e_y = self.master['e_cmeas'][keep] 4383 y,e_y = conversions.convert('erg/s/cm2/AA',flux_units,y,e_y,wave=(x,'AA')) 4384 x = conversions.convert('AA',wave_units,x) 4385 pl.errorbar(x,y,yerr=e_y,fmt='o',label=label, 4386 capsize=10,ms=7,mew=2,color=color,mfc='1',mec=color,**kwargs) 4387 4388 #-- only set logarithmic scale if absolute fluxes are plotted 4389 # and only plot the real model then 4390 if not colors: 4391 pl.gca().set_xscale('log',nonposx='clip') 4392 pl.gca().set_yscale('log',nonposy='clip') 4393 pl.gca().set_autoscale_on(False) 4394 4395 #-- the model 4396 if 
phase & (uniquephase != 0): 4397 modelname = 'model{}'.format(uniquephase) 4398 else: 4399 modelname = 'model' 4400 4401 if mtype in self.results and modelname in self.results[mtype]: 4402 wave,flux,flux_ur = self.results[mtype][modelname] 4403 4404 flux = conversions.convert('erg/s/cm2/AA',flux_units,flux,wave=(wave,'AA')) 4405 flux_ur = conversions.convert('erg/s/cm2/AA',flux_units,flux_ur,wave=(wave,'AA')) 4406 wave = conversions.convert('AA',wave_units,wave) 4407 4408 if plot_redded: 4409 pl.plot(wave,flux,'r-',**kwargs) 4410 if plot_deredded: 4411 pl.plot(wave,flux_ur,'k-',**kwargs) 4412 pl.ylabel(conversions.unit2texlabel(flux_units,full=True)) 4413 pl.xlabel('wavelength [{0}]'.format(conversions.unit2texlabel(wave_units))) 4414 else: 4415 xlabels = color_dict.keys() 4416 xticks = [color_dict[key] for key in xlabels] 4417 pl.xticks(xticks,xlabels,rotation=90) 4418 pl.ylabel(r'Flux ratio') 4419 pl.xlabel('Color name') 4420 pl.xlim(min(xticks)-0.5,max(xticks)+0.5) 4421 4422 pl.grid() 4423 if colors: 4424 leg = pl.legend(loc='best',prop=dict(size='x-small')) 4425 else: 4426 leg = pl.legend(loc='upper right',prop=dict(size='x-small')) 4427 leg.get_frame().set_alpha(0.5) 4428 loc = (0.45,0.05) 4429 if mtype in self.results and 'grid' in self.results[mtype] and annotation: 4430 if phase & (uniquephase != 0): 4431 exten = uniquephase 4432 else: 4433 exten = '' 4434 teff = self.results[mtype]['grid']['teff{}'.format(exten)][-1] 4435 logg = self.results[mtype]['grid']['logg{}'.format(exten)][-1] 4436 ebv = self.results[mtype]['grid']['ebv'][-1] 4437 met = self.results[mtype]['grid']['z'][-1] 4438 scale = self.results[mtype]['grid']['scale{}'.format(exten)][-1] 4439 angdiam = 2*conversions.convert('sr','mas',scale) 4440 pl.annotate('Teff=%d K\nlogg=%.2f cgs\nE(B-V)=%.3f mag\nlogZ=%.3f dex\n$\Theta$=%.3g mas'%(teff,logg,ebv,met,angdiam), 4441 loc,xycoords='axes fraction') 4442 logger.info('Plotted SED as %s'%(colors and 'colors' or 'absolute fluxes')) 4443 teff = "%d" % teff 4444 logg = "%.2f" % logg 4445 ebv = "%.3f" % ebv 4446 metallicity = "%.3f" % met 4447 '''f = open('/home/anae/python/SEDfitting/Bastars_info.txt', 'a') 4448 f.writelines(str(teff)+'\t'+str(logg)+'\t'+str(ebv)+'\n'+'\t'+str(metallicity)+'\n') 4449 f.closed'''
4450
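A plotting sketch (the phase index is hypothetical), assuming a phased grid search has been run; the C{phase} keyword selects which per-phase model is drawn instead of the zeroth-phase one:

>>> p = pl.figure()
>>> mysed.plot_sed(mtype='igrid_search_all',phase=1)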
4451 4452 -class Calibrator(SED):
4453 """ 4454 Convenience class for a photometric standard star or calibrator. 4455 """
4456 - def __init__(self,ID=None,photfile=None,plx=None,load_fits=False,label='',library='calspec'):
4457 names,fits_files,phot_files = model.read_calibrator_info(library=library) 4458 index = names.index(ID) 4459 #--retrieve fitsfile information 4460 if library in ['ngsl','calspec']: 4461 fits_file = pf.open(fits_files[index]) 4462 wave = fits_file[1].data.field('wavelength') 4463 flux = fits_file[1].data.field('flux') 4464 fits_file.close() 4465 elif library=='stelib': 4466 wave,flux = fits.read_spectrum(fits_files[index]) 4467 #--photfile: 4468 photfile = phot_files[index] 4469 super(Calibrator,self).__init__(photfile=photfile,plx=plx,load_fits=load_fits,label=label) 4470 self.set_model(wave,flux)
4471
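A sketch of loading a spectrophotometric standard (the name is hypothetical and must occur in the chosen library's calibrator list):

>>> calibrator = Calibrator(ID='GD71',library='calspec')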
4472 4473 -class SampleSEDs(object):
4474 """ 4475 Class representing a list of SEDs. 4476 """
4477 - def __init__(self,targets,**kwargs):
4478 """ 4479 Initialize a sample. 4480 4481 This can be done either with a list of IDs or with a list of SED 4482 instances. The SED must exist! That is, for each ID, there must be a 4483 phot or FITS file. 4484 """ 4485 #-- we don't want to load the FITS files by default because they 4486 # can eat up memory 4487 kwargs.setdefault('load_fits',False) 4488 self.targets = [] 4489 self.seds = [] 4490 #-- create all SEDs 4491 for target in targets: 4492 #-- perhaps we gave an ID: in that case, create the SED instance 4493 if not isinstance(target,SED): 4494 mysed = SED(target,**kwargs) 4495 if not mysed.has_photfile(): 4496 raise ValueError("No phot file found for {}".format(target)) 4497 #-- perhaps already an SED object: then do nothing 4498 else: 4499 mysed = target 4500 self.seds.append(mysed) 4501 self.targets.append(mysed.ID) 4502 logger.info("Loaded {} SEDs".format(len(self.seds)))
4503
4504 - def __iter__(self):
4505 """ 4506 Allow iteration over the SED instances. 4507 """ 4508 for sed in self.seds: 4509 yield sed
4510
4511 - def __len__(self):
4512 """ 4513 The length of a SampleSEDs instance is the number of SED instances. 4514 """ 4515 return len(self.seds)
4516
4517 - def __getitem__(self,key):
4518 """ 4519 Implements various ways to get individual seds. 4520 4521 Allows integer indexing, slicing, indexing with integer and boolean arrays. 4522 """ 4523 #-- via slicing 4524 if isinstance(key,slice): 4525 return SampleSEDs([self[ii] for ii in range(*key.indices(len(self)))]) 4526 #-- via an integer 4527 elif isinstance(key,int): 4528 return self.seds[key] 4529 else: 4530 #-- try to make the input an array 4531 try: 4532 key = np.array(key) 4533 except: 4534 raise TypeError("Cannot use instance of type {} for indexing".format(type(key))) 4535 #-- integer array slicing 4536 if key.dtype==np.dtype(int): 4537 return SampleSEDs([self[ii] for ii in key]) 4538 #-- boolean array slicing 4539 elif key.dtype==np.dtype(bool): 4540 return SampleSEDs([self[ii] for ii in range(len(key)) if key[ii]]) 4541 #-- that's all I can come up with 4542 else: 4543 raise TypeError("Cannot use arrays of type {} for indexing".format(key.dtype))
4544
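A short sketch of building and indexing a sample (the target names are hypothetical; a phot file must already exist for each target):

>>> sample = SampleSEDs(['HD180642','HD163506'])
>>> first = sample[0]       # integer indexing returns an SED instance
>>> subset = sample[:1]     # slicing returns a new SampleSEDs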
4545 - def summarize(self):
4546 #-- collect the names of all the different sources 4547 sources = {} 4548 for sed in self.seds: 4549 #-- collect the different sources 4550 these_sources = list(set(sed.master['source'])) 4551 for source in these_sources: 4552 if not source in sources: 4553 sources[source] = [] 4554 #-- now for each source, collect the names of the passbands 4555 keep = sed.master['source']==source 4556 sources[source] += list(set(sed.master[keep]['photband'])) 4557 sources[source] = sorted(list(set(sources[source]))) 4558 #-- next step: for each source, create a record array where the columns 4559 # are the different photbands. Fill in the values for the photbands 4560 # for each target when possible. 4561 summary = [] 4562 source_names = sorted(sources.keys()) 4563 for source in source_names: 4564 #-- create the record array 4565 data = np.zeros((len(self.targets),2*len(sources[source]))) 4566 # but remember to have errors with the photbands 4567 names = [] 4568 for photband in sources[source]: 4569 names += [photband,'e_'+photband] 4570 data = np.rec.fromarrays(data.T,names=names) 4571 #-- fill in the values 4572 for i,sed in enumerate(self.seds): 4573 for photband in sources[source]: 4574 keep = (sed.master['source']==source) & (sed.master['photband']==photband) 4575 #-- fill in nans for value and error when not present 4576 if not sum(keep): 4577 data[photband][i] = np.nan 4578 data['e_'+photband][i] = np.nan 4579 #-- otherwise give the first value you've found 4580 else: 4581 data[photband][i] = sed.master[keep]['cmeas'][0] 4582 data['e_'+photband][i] = sed.master[keep]['e_cmeas'][0] 4583 #if not sum(keep): logger.warning('multiple defined photband ({}) and source ({})'.format(photband,source)) 4584 summary.append(data) 4585 dtypes = [(source,summary[i].dtype) for i,source in enumerate(source_names)] 4586 output = np.zeros(len(self.targets),dtype=np.dtype(dtypes)) 4587 for i,name in enumerate(output.dtype.names): 4588 output[name] = summary[i] 4589 return output
4590
4591 - def get_data(self,source,photband,label=None):
4592 """ 4593 Get all data on a particular passband from a particular source. 4594 4595 If label is not None, synthetic flux from a model will be added to the 4596 columns. 4597 """ 4598 records = [] 4599 if label is not None: 4600 synflux_label = [('synflux','f8')] 4601 else: 4602 synflux_label = [] 4603 for targetname,sed in zip(self.targets,self.seds): 4604 #-- for the source, collect the names of the passbands 4605 keep = (sed.master['source']==source) & (sed.master['photband']==photband) 4606 #-- is there a model SED for which we want to retrieve synthetic photometry? 4607 if label is not None: 4608 if sum(keep)==0: 4609 synflux = [0] 4610 else: 4611 synflux = [sed.results[label]['synflux'][1][keep][0]] 4612 else: 4613 synflux = [] 4614 #-- if no data on the matter is present, put zero values everywhere 4615 if sum(keep)==0: 4616 records.append([targetname]+list(np.zeros(len(sed.master.dtype.names)))+synflux) 4617 else: 4618 records.append([targetname]+list(sed.master[keep][0])+synflux) 4619 #-- make the thing into a record array 4620 dtypes = np.dtype([('targetname','|S25')] + sed.master.dtype.descr + synflux_label) 4621 output = np.rec.fromrecords(records,names=dtypes.names) 4622 output = np.array(output,dtype=dtypes) 4623 return output
4624
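A sketch of extracting all measurements of one passband from one source (the source and passband names are hypothetical; they must match entries in the master records):

>>> data = sample.get_data('GENEVA','GENEVA.V')
>>> data = sample.get_data('GENEVA','GENEVA.V',label='igrid_search')  # adds a 'synflux' column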
4625 - def get_confidence_interval(self,parameter='teff',mtype='igrid_search'):
4626 values = np.zeros((len(self),3)) 4627 for i,sed in enumerate(self): 4628 sed.load_fits() 4629 values[i,0] = sed.results[mtype]['CI'][parameter+'_l'] 4630 values[i,1] = sed.results[mtype]['CI'][parameter] 4631 values[i,2] = sed.results[mtype]['CI'][parameter+'_u'] 4632 sed.clear() 4633 return values
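A sketch of collecting fitted effective temperatures for the whole sample (assumes each SED has igrid_search results saved to its FITS file); each row holds the lower limit, the value and the upper limit:

>>> teffs = sample.get_confidence_interval(parameter='teff',mtype='igrid_search')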
4634 4635 4636 4637 4638 if __name__ == "__main__": 4639 import sys 4640 import doctest 4641 import pprint 4642 from ivs.aux import loggers 4643 4644 if not sys.argv[1:]: 4645 doctest.testmod() 4646 pl.show() 4647 else: 4648 name = " ".join([string for string in sys.argv[1:] if not '=' in string]) 4649 units = [string.split('=')[1] for string in sys.argv[1:] if 'units=' in string] 4650 if not units: 4651 units = 'erg/s/cm2/AA' 4652 else: 4653 units = units[0] 4654 logger = loggers.get_basic_logger("") 4655 mysed = SED(name) 4656 pprint.PrettyPrinter(indent=4).pprint(mysed.info) 4657 mysed.get_photometry(units=units) 4658 mysed.plot_data() 4659 pl.show() 4660 answer = raw_input('Keep photometry file %s (y/N)'%(mysed.photfile)) 4661 if not 'y' in answer.lower(): 4662 os.unlink(mysed.photfile) 4663 logger.info('Removed %s'%(mysed.photfile)) 4664 raise SystemExit 4665 4666 #-- clean up 4667 if os.path.isfile('HD180642.fits'): 4668 os.remove('HD180642.fits') 4669 raise SystemExit 4670 #-- PCA analysis 4671 master['include'] = True 4672 exclude(master,names=['STROMGREN.HBN-HBW','USNOB1'],wrange=(1.5e4,np.inf)) 4673 do_pca = False 4674 include_pca = master['include'] 4675 4676 if sum(keep)>2: 4677 do_pca = True 4678 logger.info("Start of PCA analysis to find fundamental parameters") 4679 colors,index = np.unique1d(master['photband'][include_pca],return_index=True) 4680 A,grid = fit.get_PCA_grid(colors,ebvrange=(0,0.5),res=10) 4681 P,T,(means,stds) = fit.get_PCA(A) 4682 calib = fit.calibrate_PCA(T,grid,function='linear') 4683 obsT,e_obsT = master['cmeas'][include_pca][index], master['e_cmeas'][include_pca][index] 4684 pars = fit.get_PCA_parameters(obsT,calib,P,means,stds,e_obsT=e_obsT,mc=mc) 4685 teff,logg,ebv = pars[0] 4686 logger.info("PCA result: teff=%.0f, logg=%.2f, E(B-V)=%.3f"%(teff,logg,ebv)) 4687 4688 #-- find angular diameter 4689 logger.info('Estimation of angular diameter') 4690 iflux = model.get_itable(teff=teff,logg=logg,ebv=ebv,photbands=master['photband'][include_grid]) 4691 scale_pca = np.median(master['cmeas'][include_grid]/iflux) 4692 angdiam = 2*np.arctan(np.sqrt(scale_pca))/np.pi*180*3600*1000 4693 logger.info('Angular diameter = %.3f mas'%(angdiam)) 4694 4695 #-- get model 4696 wave_pca,flux_pca = model.get_table(teff=teff,logg=logg,ebv=ebv,law='fitzpatrick1999') 4697 wave_ur_pca,flux_ur_pca = model.get_table(teff=teff,logg=logg,ebv=0,law='fitzpatrick1999') 4698 4699 logger.info('...brought to you in %.3fs'%(time.time()-c0)) 4700 4701 pl.figure() 4702 pl.title(ID) 4703 toplot = master[-master['color']] 4704 systems = np.array([system.split('.')[0] for system in toplot['photband']],str) 4705 set_systems = sorted(list(set(systems))) 4706 pl.gca().set_color_cycle([pl.cm.spectral(i) for i in np.linspace(0, 1.0, len(set_systems))]) 4707 for system in set_systems: 4708 keep = systems==system 4709 pl.errorbar(master['cwave'][keep],master['cmeas'][keep], 4710 yerr=master['e_cmeas'][keep],fmt='o',label=system) 4711 pl.gca().set_xscale('log',nonposx='clip') 4712 pl.gca().set_yscale('log',nonposy='clip') 4713 pl.gca().set_autoscale_on(False) 4714 pl.plot(wave_pca,flux_pca*scale_pca,'r-') 4715 pl.plot(wave_ur_pca,flux_ur_pca*scale_pca,'k-') 4716 pl.grid() 4717 pl.legend(loc='lower left',prop=dict(size='x-small')) 4718 pl.annotate('Teff=%d K\nlogg=%.2f cgs\nE(B-V)=%.3f mag\n'%(teff,logg,ebv)+r'$\theta$=%.3f mas'%(angdiam),(0.75,0.80),xycoords='axes fraction') 4719 pl.xlabel('wavelength [$\mu$m]') 4720 pl.ylabel(r'$F_\nu$ [Jy]') 4721 #### -- END SOME FIGURES -- #### 4722 4723 4724 if mc 
and do_pca: 4725 pl.figure() 4726 pl.subplot(131) 4727 pl.hexbin(pars[:,0],pars[:,1]) 4728 pl.xlim(pars[:,0].max(),pars[:,0].min()) 4729 pl.ylim(pars[:,1].max(),pars[:,1].min()) 4730 pl.colorbar() 4731 pl.subplot(132) 4732 pl.hexbin(pars[:,0],pars[:,2]) 4733 pl.xlim(pars[:,0].max(),pars[:,0].min()) 4734 pl.ylim(pars[:,2].min(),pars[:,2].max()) 4735 pl.colorbar() 4736 pl.subplot(133) 4737 pl.hexbin(pars[:,1],pars[:,2]) 4738 pl.xlim(pars[:,1].max(),pars[:,1].min()) 4739 pl.ylim(pars[:,2].min(),pars[:,2].max()) 4740 pl.colorbar() 4741 pl.show() 4742 4743 4744 4745 4746 4747 4748 4749 4750 4751 4752 ################################ 4753