Behavioral Change Point Analysis


PLEASE NOTE: THIS PAGE IS UNDER CONSTRUCTION!

[[image:FurSealBCPA.png]]

The Behavioral Change Point Analysis (BCPA) is a method for identifying hidden shifts in the underlying parameters of a time series, developed specifically for application to irregularly sampled animal movement data.

The original paper on which it is based is: E. Gurarie, R. Andrews and K. Laidre (2009) A novel method for identifying behavioural changes in animal movement data. Ecology Letters 12(5): 395-408. Most of the material is also presented in Chapter 5 of my PhD dissertation.

I have received numerous requests for the code behind the BCPA, so (after sending out more or less the same email with the same attachments several dozen times) I have decided that it might be more efficient to post a sample work flow of using the BCPA on this wiki.

Please send all comments, questions, critiques, possible improvements, or (heaven forbid!) identifications of errors via e-mail to eli.gurarie (at) noaa.gov.


Brief Introduction

[Image: Likelihood of autocorrelation parameter for randomly subsampled process.]
[[image:BCPA_MLCPLikelihood.png | right | 200 px | thumb | Identifying MLCP from likelihood profile.]]

Briefly, there are several hierarchically layered parts to the method. For location and time data <math>\{Z, T\}</math>, the analysis is performed on the velocity components <math>V \cos(\theta)</math> and <math>V \sin(\theta)</math>, where the velocity is estimated as <math>V = \Delta Z / \Delta T</math>. These components are (importantly) assumed to be observations from a continuous-time Gaussian process (i.e. an [[Wikipedia: Ornstein-Uhlenbeck process|Ornstein-Uhlenbeck process]]) with mean <math>\mu(t)</math>, variance <math>\sigma^2(t)</math> and autocorrelation <math>\rho(t)</math>. The values of these parameters are assumed to change gradually or abruptly. The purpose of the BCPA is to identify the locations where changes are abrupt (assumed to correspond to discrete changes in an animal's behavior).
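For concreteness, here is a minimal sketch (not part of the original code) of how the velocity components might be computed, assuming locations are stored as a complex vector Z = x + iy and times as a numeric vector T; the function name GetVT and the use of interval midpoints as the component times are my own choices:

# Hypothetical helper: velocity components from a track {Z, T}
GetVT <- function(Z, T)
{
  dZ <- diff(Z)               # displacement over each interval
  dT <- diff(T)               # (possibly irregular) time intervals
  V <- Mod(dZ)/dT             # estimated speed V = |dZ|/dT
  theta <- Arg(dZ)            # direction of movement
  # the two velocity components analyzed by the BCPA, with midpoint times
  data.frame(VC = V*cos(theta), VS = V*sin(theta),
             T = (T[-1] + T[-length(T)])/2)
}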

The distribution function of this process is given by:

<math> f(X_i|X_{i-1}) = \frac{1}{\sigma\sqrt{2 \pi (1-\rho^{2\tau_i})}} \exp \left( -\frac{\left(X_i - \mu - \rho^{\tau_i} (X_{i-1}-\mu)\right)^2}{2\sigma^2 (1 - \rho^{2\tau_i})} \right) </math>
First, we identify a likelihood function for <math>\rho</math>, given estimates <math>\hat{\mu} = \overline{V_i}</math> and <math>\hat{\sigma}^2 = S_V^2</math>, in a behaviorally homogeneous region, using the distribution function above:

<math> L(\rho|X,T) = \prod_{i=1}^n f(X_i|X_{i-1},\tau_i,\rho) </math>

Second, we identify a "most likely change point" (MLCP) in a time series where the behavior may have changed, by taking the product of the likelihoods of the estimates to the left and to the right of all possible change points. We identify which of the parameters (if any) have changed by comparing the BIC of eight possible models: M0 - no changes in any parameter, M1 - change in <math>\mu</math> only, M2 - change in <math>\sigma</math> only, M3 - change in <math>\rho</math> only ... etc. ... M7 - change in all three parameters.

Third, we sweep the MLCP analysis across the complete data set, recording at every point the parameter values to the left and right of each MLCP under the model with the lowest BIC.

Fourth, we somehow present this mass of analysis.

I present here the code for all of these steps, as they were applied in the original paper, and apply them to a simulated dataset. Naturally, implementation of this type of analysis should be tailored to the specific application.
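For reference, the simulated dataset used in the examples below can be generated with something like the following sketch (the function name and parameter values are arbitrary). It draws an irregularly sampled Gaussian series from the conditional distribution above, with an abrupt change in all three parameters at the midpoint:

# Simulate a gappy autocorrelated series with one abrupt change point
SimulateSeries <- function(n = 200, mu = c(0, 2), s = c(1, 2), rho = c(0.8, 0.4))
{
  t <- cumsum(rexp(n))             # irregular sampling times
  phase <- rep(1:2, each = n/2)    # behavioral phase changes at the midpoint
  x <- numeric(n)
  x[1] <- rnorm(1, mu[1], s[1])
  for (i in 2:n) {
    p <- phase[i]
    dt <- t[i] - t[i-1]
    # conditional Gaussian step, matching the distribution function above
    x[i] <- rnorm(1, mean = mu[p] + rho[p]^dt * (x[i-1] - mu[p]),
                  sd = s[p] * sqrt(1 - rho[p]^(2*dt)))
  }
  data.frame(t, x)
}

sim <- SimulateSeries()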

Code pieces

Likelihood of ρ parameter

Description: This function works by first estimating the mean and standard deviation directly from x, plugging these estimates into the likelihood, and then optimizing the likelihood over the value of rho. The equation for the likelihood is given above (and in equations 10 and 11 in the BCPA paper).

Usage: GetRho(x, t)

Value: Returns a vector with two values (not a list or data frame, for greater speed). The first value is the "rho" estimate; the second is the negative log-likelihood at that estimate.

Arguments

  • x Values of time series.
  • t Times of measurements associated with x.
 
GetRho <- function (x, t) 
{
    # quantities that do not depend on rho are computed once
    dt <- diff(t)
    s <- sd(x)
    mu <- mean(x)
    x.plus <- x[-1]
    x.minus <- x[-length(x)]

    # negative log-likelihood of rho, from the conditional density above
    getL <- function(rho) {
        Likelihood <- dnorm(x.plus, mean = mu + (rho^dt) * (x.minus - 
            mu), sd = s * sqrt(1 - rho^(2 * dt)))
        logL <- sum(log(Likelihood))
        if (!is.finite(logL)) 
            logL <- -10^10  # guard against zero likelihoods at the boundaries
        return(-logL)
    }

    # minimize the negative log-likelihood over rho in [0, 1]
    o <- optimize(getL, lower = 0, upper = 1, tol = 1e-04)
    return(c(o$minimum, o$objective))
}
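For example, applying it to the (behaviorally homogeneous) first half of the hypothetical simulated series from above:

GetRho(sim$x[1:100], sim$t[1:100])   # c(rho estimate, negative log-likelihood)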

Total likelihood within a behaviorally homogenous section

Description: Returns the negative log-likelihood of a given parameter set for a gappy time series.

Usage: GetLL(x, t, mu, s, rho)

Value: Returns the value of the negative log-likelihood.

Arguments:

  • x Time series data.
  • t Times at which data is obtained
  • mu Mean estimate
  • s Sigma (standard deviation) estimate (>0)
  • rho Rho estimate (between 0 and 1)
GetLL <- function (x, t, mu, s, rho) 
{
    dt <- diff(t)
    x.plus <- x[-1]
    x.minus <- x[-length(x)]
    # conditional density of each observation given the previous one
    Likelihood <- dnorm(x.plus, mean = mu + (rho^dt) * (x.minus - 
        mu), sd = s * sqrt(1 - rho^(2 * dt)))
    # note: this is the NEGATIVE log-likelihood
    LL <- -sum(log(Likelihood))
    return(LL)
}
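For example, evaluating the first half of the hypothetical simulated series at its true parameter values:

GetLL(sim$x[1:100], sim$t[1:100], mu = 0, s = 1, rho = 0.8)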

Likelihood of single change point

Description: Takes a time series with values "x" obtained at times "t" and a break point "tbreak", and returns the estimates of "mu", "sigma" and "rho" as well as the log-likelihood of those estimates (given the data) both before and after the break.

Usage: GetDoubleL(x, t, tbreak)

Value: Returns a labeled matrix (more efficient than a data frame) with columns "mu", "sigma", "rho" and "LL" corresponding to the estimates, with two rows, one for each side of the break point.

Arguments:

  • x Values of time series.
  • t Times of measurements associated with x.
  • tbreak Breakpoint (in terms of the INDEX within "t" and "x", not actual time value).
GetDoubleL <- function(x,t,tbreak)
  {  
    # split the series at the break (the break point is shared by both sides)
    x1 <- x[1:tbreak]  
    x2 <- x[tbreak:length(x)]
    
    t1 <- t[1:tbreak]
    t2 <- t[tbreak:length(t)]

    # estimate rho (and obtain the negative log-likelihood) on each side
    o1 <- GetRho(x1,t1)
    o2 <- GetRho(x2,t2)
    
    mu1 <- mean(x1)
    sigma1 <- sd(x1)
    rho1 <- o1[1]
    
    mu2 <- mean(x2)
    sigma2 <- sd(x2)
    rho2 <- o2[1]
  
    # flip sign so that LL1 and LL2 are log-likelihoods
    LL1 <- -o1[2]
    LL2 <- -o2[2]

    m <- matrix(c(mu1,mu2,sigma1,sigma2,rho1,rho2,LL1,LL2),2,4)
    colnames(m) <- c("mu","sigma","rho","LL")
    
    return(m)
  }
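A quick sanity check on the hypothetical simulated series, whose true change point is at index 100:

GetDoubleL(sim$x, sim$t, tbreak = 100)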

Sweeping breaks

Description: Finds a single change point within a time series.

Usage: SweepBreaks(x, t, range=0.6)

Arguments:

  • x Values of time series.
  • t Times of measurements associated with x.
  • range Range of possible breaks. Default (0.6) runs approximately from 1/5th to 4/5ths of the total length of the time series.

Value: Returns a matrix (not a data.frame for greater speed) with column headings: "breaks","tbreaks","mu1","sigma1","rho1","LL1","mu2","sigma2","rho2","LL2","LL". This is calculated for every possible break - which extends from 0.2l to 0.8l (where l is the length of the time series). The output of this function feeds WindowSweep.

SweepBreaks <- function(x,t,range=0.6)
    {
      n <- length(t)
      # candidate breaks span the middle "range" of the series,
      # e.g. range=0.6 gives breaks from 0.2n to 0.8n
      start <- (1-range)/2
      breaks <- round((start*n):((1-start)*n))
      l <- length(breaks)
      
      BreakParams <- matrix(NA,l,8)
      
      for(i in 1:l)
      {
        myDoubleL <- GetDoubleL(x,t,breaks[i])
        BreakParams[i,] <- c(myDoubleL[1,],myDoubleL[2,])
      }
      
      # total log-likelihood of each break is LL1 + LL2 (columns 4 and 8)
      BreakMatrix <- cbind(breaks, t[breaks], BreakParams, 
                           BreakParams[,4]+BreakParams[,8])
      
      colnames(BreakMatrix) <- c("breaks","tbreaks","mu1","sigma1","rho1","LL1","mu2","sigma2","rho2","LL2","LL")
      
      return(BreakMatrix[2:nrow(BreakMatrix),])
    }
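The MLCP is the candidate break with the highest total log-likelihood; for example, on the hypothetical simulated series:

bp <- SweepBreaks(sim$x, sim$t)
bp[which.max(bp[,"LL"]), ]   # estimates at the most likely change point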

Choosing a model

There are eight model functions, named M0 to M7, with somewhat redundant code, which I have placed here: BCPA/Model Specification. Each of these functions takes data "x", at times "t", with breakpoint "tbreak", and returns a named data frame of parameters, log-likelihoods and BIC values: "data.frame(LL,bic,mu1,s1,rho1,mu2,s2,rho2)".
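For illustration only, a minimal sketch of the simplest of these, M0 (no change in any parameter), might look as follows. The exact BIC formula shown here (a penalty of K per parameter per log(n)) is an assumption; the real definitions are at BCPA/Model Specification:

# Sketch of M0: fit a single (mu, s, rho) to the whole series;
# three estimated parameters enter the BIC penalty
M0 <- function(x, t, tbreak, K = 2)
{
  o <- GetRho(x, t)     # o[1] = rho estimate, o[2] = negative log-likelihood
  mu1 <- mean(x); s1 <- sd(x); rho1 <- o[1]
  LL <- -o[2]           # log-likelihood
  bic <- -2*LL + K*3*log(length(x))   # assumed penalized-likelihood form
  data.frame(LL, bic, mu1, s1, rho1, mu2 = mu1, s2 = s1, rho2 = rho1)
}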

The output of all the models is obtained using the following function, which returns a data.frame with a "Model" column followed by the estimate output of each model function M0-M7 (i.e. data.frame(LL,bic,mu1,s1,rho1,mu2,s2,rho2)):

 
GetModels <- function(x,t,tbreak,K=2)
{
  for(i in 0:7)
  {
    # fetch model function M0 ... M7 by name and apply it to the data
    f <- get(paste("M",i,sep=""))
    myr <- data.frame(Model=i, f(x,t,tbreak,K))
    if(i==0) r <- myr else r <- rbind(r,myr)
  }
  return(r)
}
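For example (assuming all eight model functions from BCPA/Model Specification have been defined):

models <- GetModels(sim$x, sim$t, tbreak = 100)
models[which.min(models$bic), ]   # the best model has the lowest BIC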

Window sweeping

This is the key mother-function: it sweeps the analysis above across the series in windows of size windowsize, selects the best model according to BIC at each step, and records the estimated parameters.

WindowSweep <- function (x, t, windowsize = 50, windowstep = 1, sine = 0, K = 2, 
    plotme = 1) 
{
    # left and right edges of each analysis window
    low <- seq(1, (length(t) - windowsize), windowstep)
    hi <- low + windowsize
    for (i in 1:length(low)) {
        myx <- x[low[i]:hi[i]]
        myt <- t[low[i]:hi[i]]
        # locate the most likely change point within this window
        bp <- SweepBreaks(myx, myt)
        myestimate <- bp[bp[, 11] == max(bp[, 11]), ]
        breakpoint <- myestimate[1]
        tbreak <- myestimate[2]
        # compare the eight models at that change point ...
        if (sine) 
            allmodels <- GetModelsSin(myx, myt, breakpoint, K)
        else allmodels <- GetModels(myx, myt, breakpoint, K)
        # ... and keep the one with the lowest BIC
        mymodel <- allmodels[allmodels$bic == min(allmodels$bic), ]
        mymodel <- data.frame(mymodel, Break = tbreak)
        if (i == 1) 
            estimates <- mymodel
        else estimates <- rbind(estimates, mymodel)
        if (plotme) {
            # show the full series, the current window (green) and its MLCP
            plot.ts(t, x, type = "l", col = "grey")
            lines(t, x, type = "l")
            lines(myt, myx, col = "green")
            abline(v = tbreak)
            print(estimates[i, ])
        }
    }
    return(data.frame(estimates))
}
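A hypothetical call on the simulated series, with plotting turned off:

ws <- WindowSweep(sim$x, sim$t, windowsize = 50, windowstep = 1, plotme = 0)
head(ws)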

Partitioning parameters

Finally, the output of the WindowSweep function (which we call "ws") is converted to parameter estimates over the entire complex time series using the following function:

PartitionParameters <- function(ws, t, windowsize = 50, windowstep = 1)
{
	n.col <- length(t)
	n.row <- dim(ws)[1]

	# one row per window; each row holds that window's estimates
	# spread over the time points it covers
	mu.M <- matrix(NA, n.row, n.col)
	s.M <- matrix(NA, n.row, n.col)
	rho.M <- matrix(NA, n.row, n.col)

	for(i in 1:n.row)
	{
		myws <- ws[i,]
		# index of the time point closest to this window's break
		dts <- abs(t - myws$Break)
		tbreak <- match(min(dts), dts)

		max <- min(n.col, i + windowsize)

		# estimates to the left and right of the break
		mu.M[i, i:tbreak] <- myws$mu1
		mu.M[i, (tbreak+1):max] <- myws$mu2
		s.M[i, i:tbreak] <- myws$s1
		s.M[i, (tbreak+1):max] <- myws$s2
		rho.M[i, i:tbreak] <- myws$rho1
		rho.M[i, (tbreak+1):max] <- myws$rho2
	}

	# average the estimates across all windows that cover each time point
	adjust <- colSums(!is.na(mu.M))

	mu.hat <- colSums(mu.M, na.rm = TRUE)/adjust
	s.hat <- colSums(s.M, na.rm = TRUE)/adjust
	rho.hat <- colSums(rho.M, na.rm = TRUE)/adjust

	return(data.frame(mu.hat, s.hat, rho.hat))
}
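Continuing the hypothetical example, the window-averaged parameter estimates can then be extracted and inspected:

pp <- PartitionParameters(ws, sim$t)
plot(sim$t, pp$mu.hat, type = "l")   # window-averaged estimate of mu(t)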

Plotting output

Sample work flow
