Metrics are a powerful and cost-efficient tool for understanding the health and performance of your code in production. But it's hard to decide what metrics to track and even harder to write queries to understand the data.
Autometrics provides a macro that makes it trivial to instrument any function with the most useful metrics: request rate, error rate, and latency. It standardizes these metrics and then generates powerful Prometheus queries based on your function details to help you quickly identify and debug issues in production.
- ✨ `#[autometrics]` macro adds useful metrics to any function or `impl` block, without you thinking about what metrics to collect
- 💡 Generates powerful Prometheus queries to help quickly identify and debug issues in production
- 🔗 Injects links to live Prometheus charts directly into each function's doc comments
- 📊 Grafana dashboards work without configuration to visualize the performance of functions & SLOs
- 🔍 Correlates your code's version with metrics to help identify commits that introduced errors or latency
- 📏 Standardizes metrics across services and teams to improve debugging
- ⚖️ Function-level metrics provide useful granularity without exploding cardinality
- ⚡ Minimal runtime overhead
- 🚨 Define alerts using SLO best practices directly in your source code
- 📍 Attach exemplars automatically to connect metrics with traces
- ⚙️ Configurable metric collection library (`opentelemetry`, `prometheus`, `prometheus-client`, or `metrics`)
See autometrics.dev for more details on the ideas behind autometrics.
Autometrics isn't tied to any web framework, but this shows how you can use the library in an Axum server.
```rust
use autometrics::{autometrics, prometheus_exporter};
use axum::{routing::*, Router};
use std::error::Error;
use std::net::Ipv4Addr;
use tokio::net::TcpListener;

// Instrument your functions with metrics
#[autometrics]
pub async fn create_user() -> Result<(), ()> {
    Ok(())
}

// Export the metrics to Prometheus
#[tokio::main]
pub async fn main() -> Result<(), Box<dyn Error + Send + Sync>> {
    prometheus_exporter::init();

    let app = Router::new()
        .route("/users", post(create_user))
        .route(
            "/metrics",
            get(|| async { prometheus_exporter::encode_http_response() }),
        );
    let listener = TcpListener::bind((Ipv4Addr::from([127, 0, 0, 1]), 0)).await?;
    axum::serve(listener, app).await?;
    Ok(())
}
```
Because Autometrics combines a macro and a library, and supports multiple underlying metrics libraries, different settings are configured in different places. See the `settings` module docs.
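For instance, a minimal sketch of initializing library-side settings at startup; the builder methods shown here (`service_name`, `init`) are assumptions based on the `settings` module rather than an exhaustive configuration:

```rust
use autometrics::settings::AutometricsSettings;

fn init_metrics() {
    // Sketch: set library-wide settings once, before recording any metrics.
    AutometricsSettings::builder()
        .service_name("my-api") // hypothetical service name
        .init();
}
```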
Autometrics produces a `build_info` metric and writes queries that make it easy to correlate production issues with the commit or version that may have introduced bugs or latency (see this blog post for details).

The `version` label is set automatically based on the version in your crate's `Cargo.toml` file.

You can set the `commit` and `branch` labels using the `AUTOMETRICS_COMMIT` and `AUTOMETRICS_BRANCH` environment variables, or you can use the `vergen` crate to attach them automatically:
```sh
cargo add vergen --features git,gitcl
```
```rust
// build.rs

pub fn main() {
    vergen::EmitBuilder::builder()
        .git_sha(true)
        .git_branch()
        .emit()
        .expect("Unable to generate build info");
}
```
The Autometrics macro inserts Prometheus query links into function documentation. By default, the links point to `http://localhost:9090`, but you can configure it to use a custom URL using a compile-time environment variable in your `build.rs` file:
```rust
// build.rs

pub fn main() {
    // Reload Rust Analyzer after changing the Prometheus URL to regenerate the links
    let prometheus_url = "https://your-prometheus-url.example";
    println!("cargo:rustc-env=PROMETHEUS_URL={prometheus_url}");
}
```
If you do not want Autometrics to insert Prometheus query links into the function documentation, set the `AUTOMETRICS_DISABLE_DOCS` compile-time environment variable:
```rust
// build.rs

pub fn main() {
    println!("cargo:rustc-env=AUTOMETRICS_DISABLE_DOCS=1");
}
```
- `prometheus-exporter` - exports a Prometheus metrics collector and exporter. This is compatible with any of the metrics backends and uses `prometheus-client` by default if none are explicitly selected.
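Assuming you want the built-in exporter, enabling the feature can be as simple as:

```sh
cargo add autometrics --features prometheus-exporter
```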
Easily push collected metrics to an OpenTelemetry collector or other compatible software. Combine one of the transport feature flags with your runtime feature flag:
Transport feature flags:

- `otel-push-exporter-http` - metrics sent over HTTP(s) using `hyper`
- `otel-push-exporter-grpc` - metrics sent over gRPC using `tonic`

Runtime feature flags:

- `otel-push-exporter-tokio` - tokio
- `otel-push-exporter-tokio-current-thread` - tokio with `flavor = "current_thread"`
- `otel-push-exporter-async-std` - async-std
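For example, assuming a multi-threaded tokio runtime and an HTTP(s) collector endpoint, the two flags might be combined like this (adjust to your setup):

```sh
cargo add autometrics --features otel-push-exporter-http,otel-push-exporter-tokio
```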
If you require more customization than these feature flags offer, enable just `otel-push-exporter` and follow the example.
If you are exporting metrics yourself rather than using the `prometheus-exporter`, you must ensure that you are using the exact same version of the metrics library as `autometrics` (and it must come from crates.io rather than git or another source). If not, the autometrics metrics will not appear in your exported metrics.
- `opentelemetry-0_24` - use the `opentelemetry` crate for producing metrics
- `metrics-0_24` - use the `metrics` crate for producing metrics
- `prometheus-0_13` - use the `prometheus` crate for producing metrics
- `prometheus-client-0_22` - use the official `prometheus-client` crate for producing metrics
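For example, if you pick the `prometheus-0_13` backend and export metrics yourself, the dependency versions need to line up; a sketch, with illustrative version numbers:

```toml
[dependencies]
# The backend feature name encodes the metrics-library version autometrics expects,
# so the `prometheus` dependency below must resolve to that same crates.io version.
autometrics = { version = "2", features = ["prometheus-0_13"] }
prometheus = "0.13"
```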
See the `exemplars` module docs for details about these features. Exemplars are currently only supported with the `prometheus-client` backend.
- `exemplars-tracing` - extract arbitrary fields from `tracing::Span`s
- `exemplars-tracing-opentelemetry-0_25` - extract the `trace_id` and `span_id` from the `opentelemetry::Context`, which is attached to `tracing::Span`s by the `tracing-opentelemetry` crate
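A minimal sketch of the `exemplars-tracing` setup, assuming the extractor layer exposed by the `exemplars::tracing` module (`AutometricsExemplarExtractor::from_fields`) and a hypothetical `trace_id` span field:

```rust
use autometrics::exemplars::tracing::AutometricsExemplarExtractor;
use tracing_subscriber::prelude::*;

fn init_tracing() {
    // Sketch: register a subscriber layer that copies the listed span fields
    // onto recorded metrics as exemplars.
    tracing_subscriber::registry()
        .with(AutometricsExemplarExtractor::from_fields(&["trace_id"]))
        .init();
}
```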
By default, Autometrics supports a fixed set of percentiles and latency thresholds for objectives. Use these features to enable custom values:
- `custom-objective-latency` - enable this to use custom latency thresholds. Note, however, that the custom latency must match one of the buckets configured for your histogram, or the queries, recording rules, and alerts will not work.
- `custom-objective-percentile` - enable this to use custom objective percentiles. Note, however, that using custom percentiles requires generating a different recording and alerting rules file using the CLI + Sloth (see here).
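For reference, a sketch of attaching an objective to a function using the built-in (non-custom) variants; the features above let you substitute your own threshold and percentile values:

```rust
use autometrics::autometrics;
use autometrics::objectives::{Objective, ObjectiveLatency, ObjectivePercentile};

// An SLO for the "api" objective: 99.9% of calls succeed and
// 99% of calls complete within 250ms, using the built-in variants.
const API_SLO: Objective = Objective::new("api")
    .success_rate(ObjectivePercentile::P99_9)
    .latency(ObjectiveLatency::Ms250, ObjectivePercentile::P99);

#[autometrics(objective = API_SLO)]
pub async fn get_user() -> Result<(), ()> {
    Ok(())
}
```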