Likelihood

By default, the likelihood is determined by the type of transient/data being used. However, users can choose a different likelihood. There is typically only one correct choice of likelihood, but edge cases such as errors in time, non-detections, or uncertain y errors may require a different one. Redback includes many likelihoods, from simple to more complicated; these should cover most cases seen in transient data, but if not, users can write their own. We encourage users to contribute such likelihoods to redback.

Regular likelihoods

  • Gaussian likelihood - a general Gaussian likelihood

  • GRB Gaussian likelihood - a GRB-specific Gaussian likelihood

  • Poisson likelihood - for a Poisson process
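At its core, a Gaussian likelihood is the sum of noise-weighted squared residuals plus a normalisation term. A minimal sketch with made-up toy data (not redback's actual implementation):

```python
import numpy as np

def gaussian_log_likelihood(y, model, sigma):
    # Noise-weighted residuals plus the Gaussian normalisation term
    res = y - model
    return -0.5 * np.sum((res / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

y = np.array([1.0, 2.0, 3.0])  # toy observations
sigma = 0.5                    # known noise level

ll_good = gaussian_log_likelihood(y, y, sigma)        # perfect fit
ll_bad = gaussian_log_likelihood(y, y + 1.0, sigma)   # offset model
# A better-fitting model gives a higher log-likelihood: ll_good > ll_bad
```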

More advanced likelihoods

  • Gaussian likelihood with additional noise - When you want to estimate some additional uncertainty on your model

  • Gaussian likelihood with uniform x errors - When you have x errors that are bin widths

  • Gaussian likelihood with non detections - A general Gaussian likelihood with upper limits on some data points

  • Gaussian likelihood with non detections and quadrature noise - Same as above but with an additional noise source added in quadrature
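To illustrate how upper limits typically enter a likelihood: detected points contribute the usual Gaussian term, while each non-detection contributes the log-probability that the model falls below the reported limit (a Gaussian CDF). The sketch below shows this common convention, not necessarily redback's exact implementation; the function name and data are hypothetical:

```python
import numpy as np
from scipy.stats import norm

def log_likelihood_with_upper_limits(y, model, sigma, detected):
    det = np.asarray(detected, dtype=bool)
    # Detected points: standard Gaussian log-likelihood
    res = y[det] - model[det]
    ll_det = -0.5 * np.sum((res / sigma[det]) ** 2
                           + np.log(2 * np.pi * sigma[det] ** 2))
    # Non-detections: y holds the upper limit; integrate the Gaussian
    # up to that limit (probability the true value lies below it)
    ll_lim = np.sum(norm.logcdf(y[~det], loc=model[~det], scale=sigma[~det]))
    return ll_det + ll_lim

y = np.array([1.0, 2.0, 3.0])        # last point is an upper limit
sigma = np.array([0.5, 0.5, 0.5])
detected = np.array([1, 1, 0])

model_ok = np.array([1.0, 2.0, 1.0])   # stays below the limit
model_bad = np.array([1.0, 2.0, 5.0])  # violates the limit
# model_ok is strongly favoured over model_bad
```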

Write your own likelihood

If you don’t like the likelihoods implemented in redback, you can write your own by subclassing the redback likelihood. For example:

import inspect

import numpy as np


class GaussianLikelihoodKnownNoise(redback.Likelihood):
    def __init__(self, x, y, sigma, function, kwargs):
        """
        A general Gaussian likelihood - the parameters are inferred from the
        arguments of function

        Parameters
        ----------
        x, y: array_like
            The data to analyse
        sigma: float
            The standard deviation of the noise
        function:
            The python function to fit to the data. Note, this must take the
            dependent variable as its first argument. The other arguments
            will require a prior and will be sampled over (unless a fixed
            value is given).
        kwargs: dict
            Dictionary of additional keyword arguments passed to the model
        """
        self.x = x
        self.y = y
        self.sigma = sigma
        self.N = len(x)
        self.function = function
        self.kwargs = kwargs

        # Infer the model parameters from the signature of the provided
        # function; the first argument is the dependent variable, so drop it
        parameters = inspect.getfullargspec(function).args
        parameters.pop(0)
        super().__init__(parameters=dict.fromkeys(parameters))

    def log_likelihood(self):
        res = self.y - self.function(self.x, **self.parameters, **self.kwargs)
        return -0.5 * (np.sum((res / self.sigma)**2)
                       + self.N * np.log(2 * np.pi * self.sigma**2))
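The parameter-inference step above relies only on standard-library introspection. A self-contained illustration with a hypothetical model function:

```python
import inspect
import math

def exponential_decay(time, amplitude, tau):
    """Hypothetical model: the first argument is the dependent variable."""
    return amplitude * math.exp(-time / tau)

# Same introspection as in the likelihood class: read the function's
# argument names and drop the dependent variable
parameters = inspect.getfullargspec(exponential_decay).args
parameters.pop(0)
print(parameters)                 # ['amplitude', 'tau']
print(dict.fromkeys(parameters))  # {'amplitude': None, 'tau': None}
```

The resulting dictionary is exactly what the sampler needs: one entry per free parameter, each of which must be covered by a prior.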