Stars spin – and the rate at which they spin has a profound effect on their health. So measuring the rotation of different types of stars is an important pastime for astronomers who like that kind of thing (i.e. stars).
Fortunately, stellar rotation is something that can be measured directly from a star's time-dependent brightness: stars don't have smooth surfaces; like our own Sun, they are spotted with activity. When we observe them, this non-uniformity shows up in the data as a quasi-periodic variation in their brightness.
However, the time-dependent variability in the emission from spotted, rotating stars is often not perfectly periodic. Spots on stellar surfaces have a tendency to move, and to appear and disappear. This means that fitting a straightforward sinusoidal model to the data is not very reliable – and fitting a full physical model is extremely complicated.
One solution? To use Gaussian Process Modelling with a quasi-periodic covariance kernel…
[Wondering what the hell GPM is? You probably want to take a look at my previous posts here and here.]
Kepler Data
I’m going to use the data presented in this paper for a simple demonstration of GPM applied to stellar brightness measurements from the Kepler satellite. The data are publicly available here. I’m going to be looking at Kepler object KIC1430163 (just enter the numerical part as the Kepler ID).
There’s a bunch of data for this object – I just selected the same data points that can be seen in Figure 7 of the paper, which looks like this:

These can be found in this bit of the overall dataset:
If you “download the table of plotted data”, it puts the visible data into a text file (with the extension “.tbl”), which you can read directly into Python.
But first some libraries…
import numpy as np
import pylab as pl
This example uses the v0.2.1 george GPM library and the emcee package.
To install george v0.2.1 (rather than 0.3.0) use:
pip install george==0.2.1
Then we import the george library, along with its kernels module:
import george
from george import kernels
Then we can specify our input data file:
filename="KIC1430163.tbl" datafile = open(filename,'r')
…and read the data from it:
# read the data, skipping the header lines
# (which begin with a "|" character):
time = []; value = []
while True:
    line = datafile.readline()
    if not line:
        break
    items = line.split()
    if (items[0][0] != '|'):
        time.append(float(items[1]))
        value.append(float(items[2]))

time = np.array(time)
value = np.array(value)
If we plotted it just like this it wouldn’t look exactly like the plot from the paper. Those data have had a mean subtracted and they’ve been normalised:
mean = np.mean(value)
value -= mean
norm = np.max(value)
value /= norm
Also, the time scale in the paper is measured relative to the first day, so let’s do that too:
day1 = time[0]
time -= day1
How does it look?
pl.subplot(111)
pl.scatter(time, value, s=0.2)
pl.axis([0., 60., -1., 1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
Covariance Kernels
The paper suggests two kernels for modelling the covariance of the Kepler data (Eqs. 55 & 56). The authors fit Eq. 56; here we are going to fit Eq. 55, which is the more standard approach for modelling rotation (as described in a second paper here).
We can do this using a combination of kernels from the george library.
Exponential Squared Kernel:

$$k_1(x_i, x_j) = h^2 \exp\left( -\frac{(x_i - x_j)^2}{2\sigma^2} \right)$$

Exp-Sine-Squared Kernel:

$$k_2(x_i, x_j) = \exp\left( -\Gamma \sin^2\left[ \frac{\pi}{P} |x_i - x_j| \right] \right)$$

By multiplying the periodic kernel by the exponential kernel we allow for a decay away from exact periodicity in the data. Our combined kernel is therefore:

$$k_{\rm QP}(x_i, x_j) = h^2 \exp\left( -\frac{(x_i - x_j)^2}{2\sigma^2} \right) \exp\left( -\Gamma \sin^2\left[ \frac{\pi}{P} |x_i - x_j| \right] \right),$$

where $h$ is the amplitude, $\sigma$ is the timescale over which the periodicity decays, $\Gamma$ sets the weight of the periodic component, and $P$ is the rotation period.

However, following the second paper, we are also going to add in a white noise kernel:

$$k_{\rm WN}(x_i, x_j) = c\,\delta_{ij}.$$

So our final kernel will be:

$$k(x_i, x_j) = k_{\rm QP}(x_i, x_j) + k_{\rm WN}(x_i, x_j).$$
Let’s set this up in Python, with initial guesses at the parameter values. I don’t have any idea what these should be, so I’m going to start with simple round numbers (and a small variance for the white noise).
# h = 1.0; sigma = 1.0; Gamma = 2.0/1.0^2; P = 1.0
k1 = 1.0**2 * kernels.ExpSquaredKernel(1.0**2) \
   * kernels.ExpSine2Kernel(2.0 / 1.0**2, 1.0)
k2 = kernels.WhiteKernel(0.01)
kernel = k1 + k2
Once we’ve set up our kernel we can pass it to the george library and compute the covariance matrix:
# first we feed our combined kernel to the george library:
gp = george.GP(kernel, mean=0.0)
# then we compute the covariance matrix:
gp.compute(time)
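A quick aside: george stores the kernel hyper-parameters internally in logarithmic space (which is why the prior bounds further down are specified as logs). If you want to check what the current values are, you can inspect the parameter vector directly:

# the hyper-parameters as george stores them (log space):
print(gp.kernel.vector)
# ...and converted back to linear space:
print(np.exp(gp.kernel.vector))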
From the covariance matrix we can then predict the posterior mean and variance at a range of test points:
t = np.arange(np.min(time), np.max(time), 0.1)
# calculate expectation and variance at each point:
mu, cov = gp.predict(value, t)
std = np.sqrt(np.diag(cov))
Let’s see how our prediction compares with our measured data.
ax = pl.subplot(111)
pl.plot(t, mu)
ax.fill_between(t, mu - std, mu + std, facecolor='lightblue',
                lw=0, interpolate=True)
pl.scatter(time, value, s=2)
pl.axis([0., 60., -1., 1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
Optimizing the Hyper-parameters
We now need to work out the best values for the hyper-parameters rather than just guessing. The paper says:
As with the earlier examples, we start by estimating the MAP parameters using L-BFGS-B
So let’s do that. We’ll use the scipy optimiser, which requires us to define a function that returns the negative log-likelihood:
def nll(p):
    # update the kernel parameters:
    gp.kernel[:] = p
    # compute the log-likelihood:
    ll = gp.lnlikelihood(value, quiet=True)
    # the scipy optimizer doesn't play well with infinities:
    return -ll if np.isfinite(ll) else 1e25
…and a function for its gradient:
def grad_nll(p):
    # update the kernel parameters:
    gp.kernel[:] = p
    # compute the gradient of the log-likelihood:
    gll = gp.grad_lnlikelihood(value, quiet=True)
    return -gll
I’m going to set bounds on the available parameter space, i.e. our prior volume, using the ranges taken from Table 4 of the second paper:
import scipy.optimize as op

# extract our initial guess at the parameters from the
# george kernel and put it in a vector:
p0 = gp.kernel.vector

# set the prior ranges;
# note that these are in *logarithmic* space:
bnds = ((-20., 0.), (2., 8.), (0., 3.),
        (np.log(0.5), np.log(100.)), (-20., 0.))

# run the optimization:
results = op.minimize(nll, p0, method='L-BFGS-B',
                      jac=grad_nll, bounds=bnds)
Let’s print out the optimized parameter values:
# print the values of the optimised parameters:
print(np.exp(results.x))
[ 6.80152691e-02 7.38905610e+00 1.05905027e+01 3.89480586e+00
1.33416450e-03]
The key parameter here is the period, which is the fourth number along. We expect this to be about 3.9 and… that’s the kind of value it’s coming out with. ☺
From the paper:
This star has a published rotation period of 3.88 ± 0.58 days, measured using traditional periodogram and autocorrelation function approaches applied to Kepler data from Quarters 0–16 (Mathur et al. 2014), covering about four years.
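If you’d rather pull the period out of the results vector by index than read it off by eye, you can do something like this (the index simply reflects the order in which we built up the kernel):

# parameter order follows the kernel construction:
# [h^2, sigma^2, Gamma, P, white noise variance]
period = np.exp(results.x)[3]
print("Rotation period: {:.2f} days".format(period))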
Let’s now pass these optimised parameters to george:
# pass the optimised parameters to the george kernel:
gp.kernel[:] = results.x
…and recompute our prediction:
t = np.arange(np.min(time), np.max(time), 0.1)
# recalculate expectation and variance at each point:
mu, cov = gp.predict(value, t)
std = np.sqrt(np.diag(cov))
The updated prediction looks much cleaner:
ax = pl.subplot(111)
pl.plot(t, mu)
ax.fill_between(t, mu - std, mu + std, facecolor='lightblue',
                lw=0, interpolate=True)
pl.scatter(time, value, s=2)
pl.axis([0., 60., -1., 1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
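Since I mentioned the emcee package at the start, here’s a minimal sketch of how you might go beyond the MAP solution and sample the posterior of the hyper-parameters. The flat priors over the optimizer bounds and the walker initialisation are my own choices for illustration, not taken from the paper:

import emcee

def lnprob(p):
    # flat priors: reject anything outside our bounds
    # (which are in logarithmic space, like p itself):
    for val, (lo, hi) in zip(p, bnds):
        if not lo < val < hi:
            return -np.inf
    # update the kernel and return the log-likelihood:
    gp.kernel[:] = p
    return gp.lnlikelihood(value, quiet=True)

ndim = len(results.x)
nwalkers = 32
# start the walkers in a tight ball around the MAP solution:
p0 = results.x + 1e-4 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob)
sampler.run_mcmc(p0, 1000)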
Of course, if you’ve been reading the paper I originally started with then you’ll have seen that we could run a much faster optimization using the new celerite library.
I’ll be using celerite in my next post.
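In the meantime, here’s a minimal sketch of what the celerite interface looks like (assuming the v0.3 API). Note that I’m using a single SHOTerm purely to illustrate the calls; the rotation kernel in the paper (Eq. 56) is built from a combination of such terms:

import celerite
from celerite import terms

# a stochastically-driven damped simple harmonic oscillator,
# with the frequency initialised near the fitted period:
w0 = 2.0 * np.pi / 3.89
kernel = terms.SHOTerm(log_S0=0.0, log_Q=0.0, log_omega0=np.log(w0))

gp = celerite.GP(kernel, mean=0.0)
gp.compute(time)  # this scales linearly with the number of data points
print(gp.log_likelihood(value))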