- Sat 11 October 2025
- Statistics
- #rust, #mcmc, #bayesian-inference, #nuts, #hmc
Overview
mini-mcmc is a Rust library for Markov Chain Monte Carlo providing
NUTS, HMC, Metropolis–Hastings, and Gibbs samplers in a single, consistent framework.
It’s designed to be:
- Simple to use – clear API with minimal boilerplate
- Unified – one library for multiple MCMC algorithms
- Consistent – use `ndarray` for gradient-free methods and `burn` for gradient-based ones
- Deterministic – explicit seeds and reproducible chains
Built on `burn`, it supports autodiff, parallel chains, and optional GPU acceleration.
Installation
Add it to your Rust project:
```sh
cargo add mini-mcmc
```
Then import it:
```rust
use mini_mcmc::*;
```
Full examples are in the GitHub repo.
Core idea
Define your log density, choose an MCMC method, and run. For gradient-based methods (HMC, NUTS), autodiff computes gradients automatically — no need to derive them by hand.
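To make "define your log density" concrete: for a standard 2D Gaussian, the unnormalized log density is just a plain function of the state. This sketch is independent of mini-mcmc's actual trait definitions — it only illustrates what the quantity you supply looks like:

```rust
/// Unnormalized log density of a standard 2D Gaussian:
/// log p(x) = -0.5 * (x1^2 + x2^2) + const.
fn log_density(x: &[f64; 2]) -> f64 {
    -0.5 * (x[0] * x[0] + x[1] * x[1])
}

fn main() {
    // The mode (0, 0) has the highest log density.
    assert!(log_density(&[0.0, 0.0]) > log_density(&[1.0, 1.0]));
    println!("log p(0,0) = {}", log_density(&[0.0, 0.0]));
}
```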
Each sampler provides:
- Samples across chains
- Diagnostics (acceptance rate, R-hat, ESS)
- Optional live progress display
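As a rough illustration of what the R-hat diagnostic measures (this is not mini-mcmc's own implementation), the basic Gelman–Rubin statistic compares between-chain and within-chain variance — chains that have mixed give values near 1:

```rust
/// Basic (non-split) Gelman–Rubin R-hat over equal-length chains.
/// Values near 1 indicate the chains agree; values well above 1
/// suggest they have not mixed.
fn rhat(chains: &[Vec<f64>]) -> f64 {
    let m = chains.len() as f64;
    let n = chains[0].len() as f64;

    let chain_means: Vec<f64> = chains
        .iter()
        .map(|c| c.iter().sum::<f64>() / n)
        .collect();
    let grand_mean = chain_means.iter().sum::<f64>() / m;

    // Between-chain variance B and mean within-chain variance W.
    let b = n / (m - 1.0)
        * chain_means.iter().map(|x| (x - grand_mean).powi(2)).sum::<f64>();
    let w = chains
        .iter()
        .zip(&chain_means)
        .map(|(c, mu)| c.iter().map(|x| (x - mu).powi(2)).sum::<f64>() / (n - 1.0))
        .sum::<f64>()
        / m;

    // Pooled variance estimate, then the ratio to within-chain variance.
    let var_plus = (n - 1.0) / n * w + b / n;
    (var_plus / w).sqrt()
}

fn main() {
    let mixed = vec![vec![0.0, 1.0, 0.0, 1.0], vec![1.0, 0.0, 1.0, 0.0]];
    let stuck = vec![vec![0.0, 1.0, 0.0, 1.0], vec![10.0, 11.0, 10.0, 11.0]];
    println!("mixed R-hat: {:.3}", rhat(&mixed)); // near (or below) 1
    println!("stuck R-hat: {:.3}", rhat(&stuck)); // far above 1
}
```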
Quick example — 2D Gaussian via Metropolis–Hastings
```rust
use mini_mcmc::core::{init_det, ChainRunner};
use mini_mcmc::distributions::{Gaussian2D, IsotropicGaussian};
use mini_mcmc::metropolis_hastings::MetropolisHastings;
use ndarray::{arr1, arr2};

fn main() {
    // Target: a 2D standard Gaussian
    let target = Gaussian2D {
        mean: arr1(&[0.0, 0.0]),
        cov: arr2(&[[1.0, 0.0], [0.0, 1.0]]),
    };
    let proposal = IsotropicGaussian::new(1.0);

    // Create an MH sampler with 4 parallel chains in 2 dimensions
    let mut mh = MetropolisHastings::new(target, proposal, init_det(4, 2));

    // Run each chain for 1,100 steps, discarding the first 100 as burn-in
    let sample = mh.run(1000, 100).unwrap();

    // We keep 4 chains × 1,000 post-burn-in steps = 4,000 observations
    assert_eq!(sample.shape()[0], 4);
    assert_eq!(sample.shape()[1], 1000);
}
```
Gradient-based samplers
For HMC and NUTS, you only need to specify \(\log p(\theta)\) —
autodiff (via burn) handles \(\nabla_\theta \log p(\theta)\).
Switch between CPU and GPU with one line:
```rust
type Backend = burn::backend::Autodiff<burn::backend::NdArray>;
// or: burn::backend::Autodiff<burn::backend::Wgpu>
```
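For intuition about what the gradient buys you: HMC and NUTS simulate Hamiltonian dynamics with a leapfrog integrator, which nearly conserves the Hamiltonian \(H(q, p) = -\log p(q) + p^2/2\). A minimal 1D sketch for a standard Gaussian target, using the analytic gradient instead of autodiff (not mini-mcmc's implementation):

```rust
/// One leapfrog trajectory for a 1D standard Gaussian target,
/// where U(q) = q^2 / 2 and grad U(q) = q.
fn leapfrog(mut q: f64, mut p: f64, eps: f64, steps: usize) -> (f64, f64) {
    let grad_u = |q: f64| q;
    p -= 0.5 * eps * grad_u(q); // initial half-step for momentum
    for i in 0..steps {
        q += eps * p; // full position step
        if i + 1 < steps {
            p -= eps * grad_u(q); // full momentum step (two fused half-steps)
        }
    }
    p -= 0.5 * eps * grad_u(q); // final half-step for momentum
    (q, p)
}

fn hamiltonian(q: f64, p: f64) -> f64 {
    0.5 * q * q + 0.5 * p * p
}

fn main() {
    let (q0, p0) = (1.0, 1.0);
    let (q1, p1) = leapfrog(q0, p0, 0.01, 100);
    // Leapfrog is symplectic: the energy error stays O(eps^2),
    // which is what keeps HMC acceptance rates high.
    println!(
        "energy drift: {:.2e}",
        (hamiltonian(q1, p1) - hamiltonian(q0, p0)).abs()
    );
}
```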