Alloy OpenTelemetry Collector Configuration
Grafana Alloy is Grafana's vendor-neutral, OpenTelemetry-compatible telemetry collector and the successor to Grafana Agent. It provides a unified pipeline for collecting traces, metrics, logs, and profiles and forwarding them to any compatible backend, including Grafana Cloud and self-hosted Loki, Tempo, and Mimir (the LGTM stack). This guide covers installing Alloy on Linux, configuring pipelines, setting up receivers and exporters, and integrating with the LGTM stack.
Prerequisites
- Ubuntu 20.04+ / Debian 11+ or CentOS 8+ / Rocky Linux 8+
- 256 MB RAM minimum (scales with collection volume)
- Root or sudo access
- A destination: Grafana Cloud, or self-hosted Loki/Tempo/Mimir
Installing Grafana Alloy
# Ubuntu/Debian
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update && sudo apt-get install -y alloy
# CentOS/Rocky Linux
sudo tee /etc/yum.repos.d/grafana.repo > /dev/null << 'EOF'
[grafana]
name=grafana
baseurl=https://rpm.grafana.com
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://rpm.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt
EOF
sudo dnf install -y alloy
# Verify installation
alloy --version
# Enable and start the service
sudo systemctl enable --now alloy
sudo systemctl status alloy
The default configuration lives at /etc/alloy/config.alloy. The Alloy UI is available at http://localhost:12345 (debug port).
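On Debian-based installs, extra command-line flags for the packaged service are typically read from /etc/default/alloy (on RPM-based systems, /etc/sysconfig/alloy). For example, to make the UI reachable from other hosts — a sketch; verify the variable name your packaged unit actually uses:

```shell
# /etc/default/alloy
CUSTOM_ARGS="--server.http.listen-addr=0.0.0.0:12345"
```

Restart the service afterwards with sudo systemctl restart alloy. Only expose the UI beyond localhost on trusted networks.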
Alloy Configuration Language (River)
Alloy is configured with a declarative language — originally called River in Grafana Agent Flow, now usually referred to as Alloy configuration syntax — built from blocks and attributes:
// Basic structure:
// component_name "label" {
// attribute = value
// nested_block {
// attribute = value
// }
// }
// Components are wired together by referencing their outputs:
// prometheus.scrape "my_scraper" {
// targets = discovery.kubernetes.pods.targets
// forward_to = [prometheus.remote_write.mimir.receiver]
// }
Components are organized by function:
- prometheus.scrape - Prometheus-style metric scraping
- loki.source.* - log collection sources
- otelcol.* - OpenTelemetry components
- discovery.* - service discovery
- prometheus.remote_write - send metrics via remote write
- loki.write - send logs to Loki
Collecting Metrics
// /etc/alloy/config.alloy
// Scrape local node exporter
prometheus.scrape "node_exporter" {
targets = [{"__address__" = "localhost:9100"}]
forward_to = [prometheus.remote_write.mimir.receiver]
scrape_interval = "15s"
}
// Scrape metrics from all local Docker containers
discovery.docker "containers" {
host = "unix:///var/run/docker.sock"
}
prometheus.scrape "docker" {
targets = discovery.docker.containers.targets
forward_to = [prometheus.remote_write.mimir.receiver]
scrape_interval = "30s"
}
// Kubernetes pod discovery
discovery.kubernetes "pods" {
role = "pod"
namespaces {
names = ["default", "production"]
}
}
discovery.relabel "pod_metrics" {
targets = discovery.kubernetes.pods.targets
rule {
source_labels = ["__meta_kubernetes_pod_annotation_prometheus_io_scrape"]
action = "keep"
regex = "true"
}
rule {
source_labels = ["__address__", "__meta_kubernetes_pod_annotation_prometheus_io_port"]
regex = "([^:]+)(?::\\d+)?;(\\d+)"
target_label = "__address__"
replacement = "$1:$2"
}
}
prometheus.scrape "kubernetes_pods" {
targets = discovery.relabel.pod_metrics.output
forward_to = [prometheus.remote_write.mimir.receiver]
}
// Expose Alloy's own metrics
prometheus.exporter.self "alloy" {}
prometheus.scrape "alloy_self" {
targets = prometheus.exporter.self.alloy.targets
forward_to = [prometheus.remote_write.mimir.receiver]
}
// Remote write destination
prometheus.remote_write "mimir" {
endpoint {
url = "http://mimir:9009/api/v1/push"
// For Grafana Cloud:
// url = "https://prometheus-prod-01-eu-west-0.grafana.net/api/prom/push"
// basic_auth {
// username = "your-username"
// password = env("GRAFANA_CLOUD_API_KEY")
// }
}
external_labels = {
cluster = "production",
env = "prod",
}
}
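Under sustained load, the remote-write queue can be tuned per endpoint. A hedged sketch — the values below are illustrative, not recommendations:

```alloy
prometheus.remote_write "mimir_tuned" {
  endpoint {
    url = "http://mimir:9009/api/v1/push"
    queue_config {
      // Samples buffered per shard before they are dropped
      capacity             = 10000
      max_shards           = 10
      max_samples_per_send = 2000
    }
  }
}
```

Raising capacity and shard counts trades memory for resilience to backend slowness; watch Alloy's own WAL metrics before and after changing them.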
Collecting Logs
// Collect logs from systemd journal
loki.source.journal "systemd" {
forward_to = [loki.write.loki.receiver]
labels = {
job = "systemd",
host = env("HOSTNAME"),
}
}
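The journal source exposes systemd metadata as __journal_* internal labels, which are dropped unless promoted via relabel rules. A sketch that keeps the unit name (loki.source.journal takes its rules from a loki.relabel component via the relabel_rules argument):

```alloy
loki.relabel "journal" {
  // Rules-only component: nothing is forwarded directly
  forward_to = []
  rule {
    source_labels = ["__journal__systemd_unit"]
    target_label  = "unit"
  }
}

loki.source.journal "systemd_units" {
  relabel_rules = loki.relabel.journal.rules
  forward_to    = [loki.write.loki.receiver]
  labels        = { job = "systemd" }
}
```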
// Tail log files
local.file_match "app_logs" {
path_targets = [{"__path__" = "/var/log/app/*.log"}]
}
loki.source.file "app_logs" {
targets = local.file_match.app_logs.targets
forward_to = [loki.process.app_logs.receiver]
}
// Process logs: parse and add labels
loki.process "app_logs" {
forward_to = [loki.write.loki.receiver]
// Extract JSON fields as labels
stage.json {
expressions = {
level = "level",
app = "service",
}
}
stage.labels {
values = {
level = "level",
app = "app",
}
}
// Drop debug logs in production
stage.drop {
expression = ".*DEBUG.*"
drop_counter_reason = "debug_log"
}
// Extract metrics from logs (log-to-metric)
stage.metrics {
metric.counter {
name = "log_lines_total"
description = "Total log lines processed"
prefix = "alloy_"
labels = ["level", "app"]
}
}
}
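If applications emit multi-line entries such as stack traces, each continuation line would otherwise be ingested as a separate log entry. A sketch using the multiline stage — the firstline regex assumes entries start with an ISO date, so adjust it to your log format:

```alloy
loki.process "multiline_app" {
  forward_to = [loki.write.loki.receiver]
  stage.multiline {
    // Lines not matching firstline are appended to the previous entry
    firstline     = "^\\d{4}-\\d{2}-\\d{2}"
    max_wait_time = "3s"
  }
}
```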
// Loki destination
loki.write "loki" {
endpoint {
url = "http://loki:3100/loki/api/v1/push"
}
external_labels = {
cluster = "production",
}
}
Collecting Traces
// Receive traces via OTLP (from your applications)
otelcol.receiver.otlp "default" {
grpc {
endpoint = "0.0.0.0:4317"
}
http {
endpoint = "0.0.0.0:4318"
}
output {
// Route through the attributes processor first, then batch, then export
traces = [otelcol.processor.attributes.add_env.input]
}
}
// Batch traces before sending
otelcol.processor.batch "default" {
timeout = "5s"
send_batch_size = 1000
output {
traces = [
otelcol.exporter.otlp.tempo.input,
otelcol.connector.spanmetrics.default.input,
]
}
}
// Add a deployment.environment attribute to all spans
otelcol.processor.attributes "add_env" {
action {
key = "deployment.environment"
value = "production"
action = "upsert"
}
output {
traces = [otelcol.processor.batch.default.input]
}
}
// Send to Grafana Tempo
otelcol.exporter.otlp "tempo" {
client {
endpoint = "http://tempo:4317"
tls {
insecure = true
}
}
}
// Also export metrics from trace spans
otelcol.connector.spanmetrics "default" {
histogram {
explicit {
buckets = ["2ms", "4ms", "6ms", "8ms", "10ms", "50ms", "100ms", "200ms", "400ms", "800ms", "1s", "1400ms", "2s", "5s", "10s", "15s"]
}
}
output {
metrics = [otelcol.exporter.prometheus.spanmetrics.input]
}
}
otelcol.exporter.prometheus "spanmetrics" {
forward_to = [prometheus.remote_write.mimir.receiver]
}
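For OTLP pipelines under bursty load, the usual collector convention is to put a memory limiter ahead of batching so Alloy applies backpressure instead of growing without bound. A hedged sketch — the limits are illustrative:

```alloy
otelcol.processor.memory_limiter "default" {
  check_interval = "1s"
  limit          = "512MiB"
  output {
    traces = [otelcol.processor.batch.default.input]
  }
}
```

To use it, point the OTLP receiver's output at otelcol.processor.memory_limiter.default.input.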
Forwarding to Grafana Cloud
// /etc/alloy/config.alloy for Grafana Cloud
// Metrics
prometheus.remote_write "grafana_cloud" {
endpoint {
url = "https://prometheus-prod-01-eu-west-0.grafana.net/api/prom/push"
basic_auth {
username = env("GRAFANA_CLOUD_METRICS_USER")
password = env("GRAFANA_CLOUD_API_KEY")
}
}
}
// Logs
loki.write "grafana_cloud" {
endpoint {
url = "https://logs-prod-eu-west-0.grafana.net/loki/api/v1/push"
basic_auth {
username = env("GRAFANA_CLOUD_LOGS_USER")
password = env("GRAFANA_CLOUD_API_KEY")
}
}
}
// Traces
otelcol.exporter.otlp "grafana_cloud" {
client {
endpoint = "tempo-eu-west-0.grafana.net:443"
auth = otelcol.auth.basic.grafana_cloud.handler
}
}
otelcol.auth.basic "grafana_cloud" {
username = env("GRAFANA_CLOUD_TRACES_USER")
password = env("GRAFANA_CLOUD_API_KEY")
}
Set environment variables:
# /etc/alloy/alloy.env (referenced by systemd unit)
GRAFANA_CLOUD_API_KEY=glc_your_key_here
GRAFANA_CLOUD_METRICS_USER=123456
GRAFANA_CLOUD_LOGS_USER=789012
GRAFANA_CLOUD_TRACES_USER=345678
# Wire the env file in with a systemd drop-in (don't edit the packaged unit directly):
sudo systemctl edit alloy
# Add under [Service]:
#   EnvironmentFile=/etc/alloy/alloy.env
sudo systemctl restart alloy
Forwarding to Self-Hosted LGTM Stack
LGTM = Loki + Grafana + Tempo + Mimir:
// /etc/alloy/config.alloy for self-hosted LGTM
// === METRICS to Mimir ===
prometheus.remote_write "mimir" {
endpoint {
url = "http://mimir:9009/api/v1/push"
headers = {
"X-Scope-OrgID" = "default",
}
}
}
// === LOGS to Loki ===
loki.write "loki" {
endpoint {
url = "http://loki:3100/loki/api/v1/push"
}
}
// === TRACES to Tempo ===
otelcol.exporter.otlp "tempo" {
client {
endpoint = "tempo:4317"
tls { insecure = true }
}
}
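The Tempo exporter above only sends what something feeds it; in a self-contained config, pair it with an OTLP receiver for application traces (ports are the OTLP defaults):

```alloy
otelcol.receiver.otlp "apps" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }
  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}
```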
// === Scrape node metrics ===
prometheus.exporter.unix "node" {
set_collectors = ["cpu", "diskstats", "filesystem", "meminfo", "netdev", "loadavg"]
}
prometheus.scrape "node" {
targets = prometheus.exporter.unix.node.targets
forward_to = [prometheus.remote_write.mimir.receiver]
}
Troubleshooting
Alloy UI for debugging:
# Access the Alloy debug UI
curl http://localhost:12345
# Shows component graph, health status, and live logs
Validate configuration:
alloy fmt /etc/alloy/config.alloy # Format; exits non-zero on syntax errors
alloy run /etc/alloy/config.alloy # Run in foreground for debugging
Component not receiving data:
# Check component status in the UI
# Or via API
curl http://localhost:12345/api/v0/web/components
# Enable debug logging by adding a logging block to config.alloy:
# logging {
#   level = "debug"
# }
sudo systemctl restart alloy
sudo journalctl -u alloy -f
Metrics not appearing in Mimir/Prometheus:
# Check remote write WAL
ls /var/lib/alloy/data/
# WAL stores data during connectivity issues
# Check remote write errors in logs
journalctl -u alloy | grep -i "remote_write\|err"
High memory usage:
# Alloy exposes its own metrics
curl http://localhost:12345/metrics | grep alloy_wal
# Reduce scrape frequency or drop unused metrics
# Use relabeling to drop high-cardinality metrics:
prometheus.relabel "drop_cardinality" {
rule {
source_labels = ["__name__"]
regex = "go_gc_.*|process_.*"
action = "drop"
}
forward_to = [prometheus.remote_write.mimir.receiver]
}
Conclusion
Grafana Alloy unifies metric, log, and trace collection in a single agent, and its declarative configuration makes it straightforward to build complex pipelines without external dependencies. Because it speaks both OTLP and Prometheus scrape semantics, it covers legacy and modern workloads simultaneously. For production deployments: use the Alloy UI to inspect pipeline health, manage secrets via an EnvironmentFile, and rely on the WAL (write-ahead log) to ride out short connectivity interruptions to backend services.