Finding and playing with peaks in RMINC

So, peaks. When producing a statistical map, it’s good to get a report of the peaks (i.e. the most significant findings). RMINC has had this support for a while now, though it has remained somewhat hidden. Here’s a bit of an intro, then. I will walk through the example we used at the Mouse Imaging Summer School in 2017, which is data from this paper: de Guzman AE, Gazdzinski LM, Alsop RJ, Stewart JM, Jaffray DA, Wong CS, Nieman BJ.

Bayesian Model Selection with PSIS-LOO

Pitch

In this post I’d like to provide an overview of Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO) and how it can be used for Bayesian model selection. Everything I discuss regarding this technique can be found in more detail in Vehtari, Gelman, and Gabry (2016). To lead up to PSIS-LOO, I will first introduce Akaike’s Information Criterion (AIC) to lay the foundation for model selection in general, then cover the expected log predictive density, the cornerstone of Bayesian model selection.
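Before getting to PSIS-LOO, here is a minimal sketch of AIC-based model selection in base R, using simulated data (the data, variable names, and models are illustrative assumptions, not from the original post):

```r
# Simulated data: y depends on x but not on z
set.seed(1)
n <- 100
x <- rnorm(n)
z <- rnorm(n)
y <- 2 * x + rnorm(n)

m1 <- lm(y ~ x)      # model with the relevant predictor only
m2 <- lm(y ~ x + z)  # model with an extra, irrelevant predictor

# AIC = -2 * log-likelihood + 2 * (number of parameters);
# lower is better, and the parameter penalty discourages overfitting
AIC(m1, m2)
```

Here `AIC()` will typically favour `m1`, since the improved fit from the spurious predictor in `m2` does not offset the penalty for the extra parameter.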

Linear Models

Preamble

The purpose of this post is to elucidate some of the concepts associated with statistical linear models. Let’s start by loading some libraries.

library(ggplot2)
library(datasets)

Background Theory

The basic idea is as follows: given two variables, \(x\) and \(y\), for which we’ve measured a set of data points \(\{x_i, y_i\}\) with \(i = 1, \ldots, n\), we want to estimate a function \(f(x)\) such that \[y_i = f(x_i) + \epsilon_i\]
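As a concrete sketch of estimating such an \(f(x)\), here is a simple linear fit in R using the built-in cars data from the datasets package (the choice of dataset and variables is an illustrative assumption, not from the original post):

```r
library(ggplot2)
library(datasets)

# Fit a linear model: stopping distance as a linear function of speed,
# i.e. f(x) = beta_0 + beta_1 * x, with residuals playing the role of epsilon_i
fit <- lm(dist ~ speed, data = cars)
summary(fit)$coefficients  # estimated intercept and slope

# Plot the data with the estimated f(x) overlaid
ggplot(cars, aes(x = speed, y = dist)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)
```

The residuals of the fit, `residuals(fit)`, are the estimates of the \(\epsilon_i\) terms in the equation above.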