A follow-up to the post on sampling from the multivariate normal, describing the case when the covariance or precision matrices are singular (positive semi-definite rather than positive definite). In these cases the eigendecomposition provides a means of calculating the matrix square root.

# Statistics @ Home

A quick note on sampling from both the precision and covariance parameterizations of the multivariate normal. This post was written to highlight an error that is easy to make.
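As a rough sketch of the two parameterizations (my own illustration, not the post's code): with a covariance Σ = L Lᵀ you multiply by L, but with a precision Λ = L Lᵀ you must *solve* Lᵀx = z. The easy mistake is to multiply by the Cholesky factor of the precision as well, which yields the wrong covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

def rmvnorm_cov(n, mean, cov):
    # Covariance parameterization: x = mean + L z, with cov = L L^T
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((len(mean), n))
    return (mean[:, None] + L @ z).T

def rmvnorm_prec(n, mean, prec):
    # Precision parameterization: with prec = L L^T, solve L^T x = z so
    # that Cov(x) = L^{-T} L^{-1} = prec^{-1}.  Using L z here instead
    # is the easy-to-make error the post warns about.
    L = np.linalg.cholesky(prec)
    z = rng.standard_normal((len(mean), n))
    return (mean[:, None] + np.linalg.solve(L.T, z)).T

cov = np.array([[2.0, 0.6], [0.6, 1.0]])
X1 = rmvnorm_cov(20000, np.zeros(2), cov)
X2 = rmvnorm_prec(20000, np.zeros(2), np.linalg.inv(cov))
```

Both samplers should recover the same empirical covariance.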

My parents like to play Bridge. It’s a great card game and well worth the time it takes to learn all the rules (and there are many). My parents play on opposing teams, since my mom is of the opinion that, for the sake of the marriage, one should never partner up with one’s actual partner. Which is saying something about being Bridge teammates: by playing on opposite teams, when you lose, your partner wins.

Bayesian Decision Theory is a wonderfully useful tool that provides a formalism for decision making under uncertainty. It is used in a diverse range of applications including, but definitely not limited to, finance (guiding investment strategies) and engineering (designing control systems). In what follows I hope to distill a few of the key ideas in Bayesian decision theory. In particular, I will give examples that rely on simulation rather than analytical closed-form solutions to global optimization problems. My hope is that such a simulation-based approach will provide a gentler introduction while allowing readers to solve more difficult problems right from the start.
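A generic illustration of the simulation-based approach (this example is mine, not taken from the post): given posterior draws of an unknown quantity, estimate the expected loss of each candidate action by Monte Carlo and pick the minimizer. Under squared loss this recovers the posterior mean; under absolute loss, the posterior median.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pretend these are posterior samples of an unknown quantity theta
theta = rng.gamma(shape=3.0, scale=2.0, size=50000)

def expected_loss(action, samples, loss):
    # Monte Carlo estimate of E[loss(action, theta) | data]
    return loss(action, samples).mean()

squared = lambda a, t: (a - t) ** 2
absolute = lambda a, t: np.abs(a - t)

# Grid search over candidate actions, no closed form needed
actions = np.linspace(0, 20, 2001)
best_sq = actions[np.argmin([expected_loss(a, theta, squared) for a in actions])]
best_abs = actions[np.argmin([expected_loss(a, theta, absolute) for a in actions])]
```

The appeal of this approach is that swapping in an asymmetric or otherwise awkward loss function changes one line, whereas the analytical route would require a new derivation each time.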

For users of PhILR (Paper, R Package), and also for users of the ILR transform who want to make use of the awesome plotting functions in R: I wanted to share a function for plotting a sequential binary partition on a tree using the ggtree package. I recently wrote this for a manuscript but figured it might be of more general use to others as well. In its simplest form, a sequential binary partition can be represented as a binary tree.
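Equivalently, a sequential binary partition can be written as a sign matrix, with one row per internal node of the binary tree. A small sketch of that encoding and the standard balance formula for turning it into an orthonormal ILR contrast matrix (my own illustration in Python; the post itself works in R):

```python
import numpy as np

# A sequential binary partition over 4 parts, encoded as a sign matrix:
# each row is one internal node of the binary tree; +1 / -1 mark the two
# child groups and 0 marks parts not involved at that split.
sbp = np.array([
    [ 1,  1, -1, -1],   # root: {1,2} vs {3,4}
    [ 1, -1,  0,  0],   # split of {1,2}
    [ 0,  0,  1, -1],   # split of {3,4}
])

def sbp_basis(sbp):
    """Turn a sign matrix into an orthonormal ILR contrast matrix using
    the usual balance weights sqrt(s/(r(r+s))) and -sqrt(r/(s(r+s)))."""
    V = np.zeros(sbp.shape, dtype=float)
    for i, row in enumerate(sbp):
        r, s = (row == 1).sum(), (row == -1).sum()
        a = np.sqrt(s / (r * (r + s)))
        b = -np.sqrt(r / (s * (r + s)))
        V[i] = np.where(row == 1, a, 0.0) + np.where(row == -1, b, 0.0)
    return V

V = sbp_basis(sbp)
```

The rows of `V` are orthonormal and each sums to zero, which is exactly what makes the resulting coordinates an isometric log-ratio basis.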

Lately I have been working on figures for a manuscript. In this process I created a few visualizations that I thought might help others understand the Multinomial distribution. I will focus on describing how counting processes introduce uncertainty into estimates of relative abundances, and I will end with a discussion of how understanding the Multinomial has impacted my view of analyses of sequence count data (e.g., data from microbiome surveys, RNA-seq, and more).
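The core point about counting uncertainty can be simulated directly (a toy sketch of my own, not the post's figures): measure the same community at two sequencing depths and compare the spread of the estimated proportions.

```python
import numpy as np

rng = np.random.default_rng(3)

# True relative abundances of 3 taxa
p = np.array([0.70, 0.25, 0.05])

# Counting at two depths: the same community, 50 vs 5000 total counts,
# replicated 10000 times; divide by depth to get estimated proportions
shallow = rng.multinomial(50,   p, size=10000) / 50
deep    = rng.multinomial(5000, p, size=10000) / 5000

# The estimates are unbiased at either depth, but their spread shrinks
# with depth: sd of p_hat_i is sqrt(p_i * (1 - p_i) / n)
```

The rare taxon (true abundance 0.05) is the striking case: at a depth of 50 counts it is often observed zero times, which is one reason zero handling looms so large in sequence count analyses.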

First things first, Gauss is our dog. Since I am able to work from home, my dog Gauss and I spend a lot of time together. As a result, I like to think I know why he does what he does. But of course I will never really know - though, it’s nice to think that I do. Both of us being creatures of habit, we have fallen into a nice routine during the day - one where he sleeps the day away and comes to get me around 4pm for some outdoor training/playing. I have noticed that whenever I do anything interesting or out of the norm, he is right there, waiting to see if he can benefit from the activity. Most remarkably, it feels like whenever we are in the kitchen, he sits down right in the middle of everything, waiting for scraps and food that drop on the floor. I know Gauss loves me, but I wonder if I am more valuable to him in certain rooms? Does he “love” me more in the kitchen?...

In this post I describe an algorithm for clustering regression data that is loosely based on K-means. I cooked it up yesterday when looking over Cross Validated questions. A very smart professor at Duke has since informed me that this is basically a mixture of regressions model (or a mixture of experts). So, don't I feel silly about the title for this post. Still, I left it in to grab the reader's attention! (Is it working?)
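A hard-assignment version of the idea can be sketched briefly (my own sketch of the K-means-style scheme, not the post's code): alternate between assigning each point to the regression line that fits it best and refitting each line by least squares.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two noisy lines mixed together
x = rng.uniform(-3, 3, 400)
labels_true = rng.integers(0, 2, 400)
y = np.where(labels_true == 0, 2.0 * x + 1.0, -1.5 * x + 4.0) \
    + rng.normal(0, 0.3, 400)

def kmeans_regression(x, y, k=2, iters=50, seed=0):
    """K-means-style clustering for regression data (hard-assignment
    mixture of regressions): assign each point to the line with the
    smallest squared residual, refit each line by least squares, repeat."""
    r = np.random.default_rng(seed)
    X = np.column_stack([x, np.ones_like(x)])
    assign = r.integers(0, k, len(x))
    for _ in range(iters):
        betas = []
        for j in range(k):
            m = assign == j
            betas.append(np.linalg.lstsq(X[m], y[m], rcond=None)[0]
                         if m.sum() >= 2 else r.normal(size=2))
        resid = np.column_stack([(y - X @ b) ** 2 for b in betas])
        assign = resid.argmin(axis=1)
    loss = resid[np.arange(len(x)), assign].sum()
    return np.array(betas), assign, loss

# Like K-means, the result depends on initialization, so use a few
# restarts and keep the fit with the lowest total squared residual
betas, assign, _ = min((kmeans_regression(x, y, seed=s) for s in range(5)),
                       key=lambda t: t[2])
```

With well-separated slopes the scheme typically recovers both lines; points near the lines' intersection are the ambiguous ones, which is exactly where a soft (mixture-model) assignment would help.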

Following up on a recent post on limitations of the ALR and softmax transforms, I wanted to briefly show how we can derive an Isometric Log-Ratio (ILR) transform from the Additive Log-Ratio (ALR) transform.
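The key relationship can be checked numerically (a sketch of my own, not the post's derivation): ALR coordinates map linearly to CLR coordinates by padding with the implicit zero for the reference part and centering, and ILR coordinates are then an orthonormal rotation of CLR.

```python
import numpy as np

def alr(x):
    # Additive log-ratio with the last part as reference
    return np.log(x[:-1] / x[-1])

def clr(x):
    # Centered log-ratio: log relative to the geometric mean
    lx = np.log(x)
    return lx - lx.mean()

D = 4
# clr as a linear map of alr: pad alr with the implicit log(x_D/x_D) = 0,
# then center with the projector C
F = np.eye(D)[:, :D - 1]                 # appends the zero for the reference
C = np.eye(D) - np.ones((D, D)) / D      # centering projector
# An orthonormal basis for the clr plane (rows of V): ilr(x) = V @ clr(x)
V = np.linalg.svd(C)[0][:, :D - 1].T

x = np.array([0.1, 0.3, 0.4, 0.2])
ilr_direct   = V @ clr(x)
ilr_from_alr = V @ C @ F @ alr(x)        # same coordinates, starting from alr
```

Any orthonormal basis of the CLR plane works here; a particular sequential binary partition just picks one such basis with interpretable coordinates.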

A short post describing one of the key limitations of the additive log-ratio (ALR) transform (which is essentially the same as the softmax transform).
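One such limitation is easy to demonstrate numerically (my own toy example): the ALR is a bijection but not an isometry, so Euclidean distances between ALR coordinates disagree with the Aitchison distance between the underlying compositions (which equals the Euclidean distance between their CLR coordinates).

```python
import numpy as np

def alr(x):
    return np.log(x[:-1] / x[-1])

def clr(x):
    lx = np.log(x)
    return lx - lx.mean()

# Aitchison distance between compositions = Euclidean distance in clr coords
x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])
d_aitchison = np.linalg.norm(clr(x) - clr(y))
d_alr = np.linalg.norm(alr(x) - alr(y))
# The two generally disagree: alr distorts distances and angles on the
# simplex, which is what motivates the isometric (ILR) transform.
```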