Node Exporter Installation for Prometheus
Introduction
Prometheus has become the de facto standard for metrics collection and monitoring in modern infrastructure, particularly in cloud-native and Kubernetes environments. Node Exporter is a crucial component of the Prometheus ecosystem, designed to expose hardware and operating system-level metrics from Linux servers in a format that Prometheus can scrape and process.
Node Exporter transforms your Linux servers into observable endpoints by collecting hundreds of system metrics including CPU usage, memory consumption, disk I/O, network statistics, filesystem usage, and much more. Unlike agent-based monitoring solutions, Node Exporter runs as a lightweight daemon that simply exposes metrics via HTTP, following Prometheus's pull-based architecture.
This comprehensive guide walks you through the complete process of installing, configuring, and securing Node Exporter on Linux systems. You'll learn how to deploy Node Exporter, configure custom collectors, integrate with Prometheus, secure the metrics endpoint, and implement best practices for production monitoring. Whether you're building a new monitoring infrastructure or migrating from legacy tools, Node Exporter provides the foundation for comprehensive system observability.
Prerequisites
Before installing Node Exporter, ensure you have:
- A Linux server (Ubuntu 20.04/22.04, Debian 10/11, CentOS 7/8, Rocky Linux 8/9, or similar)
- Root or sudo access for installation and configuration
- Basic understanding of Prometheus architecture
- Firewall access to allow metrics collection (default port 9100)
- A Prometheus server for scraping metrics (can be configured later)
Recommended Knowledge:
- Understanding of systemd service management
- Basic knowledge of HTTP and REST APIs
- Familiarity with metrics and time-series data
- Understanding of monitoring concepts
System Requirements:
- Minimum RAM: ~50 MB
- CPU: negligible overhead
- Disk space: ~20 MB for the binary
Understanding Node Exporter
What is Node Exporter?
Node Exporter is a Prometheus exporter for hardware and OS metrics exposed by *NIX kernels. It collects metrics primarily from the /proc and /sys filesystems and exposes them via an HTTP endpoint for Prometheus to scrape.
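For example, the load averages the kernel reports in /proc/loadavg appear on the metrics endpoint as the node_load1, node_load5, and node_load15 series (assuming the exporter is already running on its default port):
# Kernel view
cat /proc/loadavg
# The same data exposed as Prometheus metrics
curl -s http://localhost:9100/metrics | grep '^node_load'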
Key Features:
- Dozens of built-in collectors for system metrics
- Minimal resource footprint
- No external dependencies
- Configurable collectors
- Textfile collector for custom metrics
- Support for multiple architectures
Metrics Exposed
Node Exporter provides metrics for:
- CPU: Usage, idle time, I/O wait
- Memory: Available, used, cached, buffered
- Disk: I/O statistics, space usage
- Network: Bandwidth, errors, packets
- Filesystem: Mount points, usage
- Load: System load averages
- Network Statistics: TCP/UDP connections
- Time: System time and NTP sync
- Hardware: Temperature, RAID status (with additional collectors)
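A few commonly queried series names from these categories (the exact set depends on which collectors are enabled):
node_cpu_seconds_total            # CPU time per core and mode
node_memory_MemAvailable_bytes    # memory available to applications
node_filesystem_avail_bytes       # free space per mount point
node_disk_read_bytes_total        # cumulative disk read throughput
node_network_receive_bytes_total  # cumulative network receive throughput
node_load1                        # 1-minute load average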
Installing Node Exporter
Method 1: Binary Installation (Recommended)
This method installs the official pre-compiled binary from the Prometheus GitHub repository.
Step 1: Download Node Exporter
# Create user for Node Exporter
sudo useradd --no-create-home --shell /bin/false node_exporter
# Download Node Exporter (v1.7.0 shown here; check https://prometheus.io/download/ for the current release)
cd /tmp
wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/node_exporter-1.7.0.linux-amd64.tar.gz
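# (Optional) Verify the download against the sha256sums.txt file published
# with each release on the GitHub releases page; adjust the URL if you
# downloaded a different version.
wget https://github.com/prometheus/node_exporter/releases/download/v1.7.0/sha256sums.txt
grep node_exporter-1.7.0.linux-amd64.tar.gz sha256sums.txt | sha256sum -c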
# Extract archive
tar xvfz node_exporter-1.7.0.linux-amd64.tar.gz
# Move binary to /usr/local/bin
sudo cp node_exporter-1.7.0.linux-amd64/node_exporter /usr/local/bin/
# Set ownership
sudo chown node_exporter:node_exporter /usr/local/bin/node_exporter
# Clean up
rm -rf node_exporter-1.7.0.linux-amd64*
Step 2: Verify Installation
# Check version
/usr/local/bin/node_exporter --version
# Test run (Ctrl+C to stop)
/usr/local/bin/node_exporter
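While the test instance is running, you can confirm from a second terminal that metrics are being served (the exact count varies by system and enabled collectors):
curl -s http://localhost:9100/metrics | grep -c '^node_'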
Method 2: Package Manager Installation
On Ubuntu/Debian:
# Note: May not be latest version
sudo apt update
sudo apt install prometheus-node-exporter -y
# Check status
sudo systemctl status prometheus-node-exporter
On CentOS/Rocky Linux (EPEL Repository):
# Enable EPEL repository
sudo yum install epel-release -y
# Install Node Exporter
sudo yum install golang-github-prometheus-node-exporter -y
# Check status
sudo systemctl status node_exporter
Method 3: Docker Installation
# Run Node Exporter as Docker container
docker run -d \
--name node-exporter \
--net="host" \
--pid="host" \
-v "/:/host:ro,rslave" \
quay.io/prometheus/node-exporter:latest \
--path.rootfs=/host
# Verify container is running
docker ps | grep node-exporter
# View metrics
curl http://localhost:9100/metrics
Creating Systemd Service
For binary installation, create a systemd service for automatic startup and management.
Create service file:
sudo nano /etc/systemd/system/node_exporter.service
Service configuration:
[Unit]
Description=Node Exporter
Documentation=https://prometheus.io/docs/guides/node-exporter/
Wants=network-online.target
After=network-online.target
[Service]
User=node_exporter
Group=node_exporter
Type=simple
Restart=on-failure
RestartSec=5s
ExecStart=/usr/local/bin/node_exporter
[Install]
WantedBy=multi-user.target
Enable and start service:
# Reload systemd daemon
sudo systemctl daemon-reload
# Enable Node Exporter to start on boot
sudo systemctl enable node_exporter
# Start Node Exporter
sudo systemctl start node_exporter
# Check status
sudo systemctl status node_exporter
# View logs
sudo journalctl -u node_exporter -f
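The exporter should now be listening on port 9100; a quick sanity check before moving on:
# First few lines of output (Go runtime metrics appear before the node_* series)
curl -s http://localhost:9100/metrics | head -n 5
# Confirm the listener
sudo ss -tlnp | grep 9100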
Configuring Node Exporter
Basic Configuration
Node Exporter is configured via command-line flags. Modify the systemd service file to add configuration options.
Edit service file:
sudo nano /etc/systemd/system/node_exporter.service
Add configuration flags:
[Service]
ExecStart=/usr/local/bin/node_exporter \
--web.listen-address=:9100 \
--collector.filesystem.mount-points-exclude='^/(dev|proc|sys|var/lib/docker/.+|var/lib/kubelet/.+)($|/)' \
--collector.filesystem.fs-types-exclude='^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$'
Common configuration flags:
# Change listen address and port
--web.listen-address=:9100
# Specify which collectors to enable
--collector.systemd
--collector.processes
# Disable specific collectors
--no-collector.hwmon
--no-collector.nfs
# Set textfile collector directory
--collector.textfile.directory=/var/lib/node_exporter/textfile_collector
# Limit the number of parallel scrape requests (0 disables the limit)
--web.max-requests=40
# Enable TLS
--web.config.file=/etc/node_exporter/web-config.yml
Reload and restart:
sudo systemctl daemon-reload
sudo systemctl restart node_exporter
sudo systemctl status node_exporter
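To see which collectors actually ran on the last scrape, Node Exporter reports a per-collector success metric:
curl -s http://localhost:9100/metrics | grep node_scrape_collector_success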
Enabling Specific Collectors
View available collectors:
/usr/local/bin/node_exporter --help | grep collector
Enable systemd collector (disabled by default):
# Edit service file
sudo nano /etc/systemd/system/node_exporter.service
# Add to ExecStart
--collector.systemd
# Reload and restart
sudo systemctl daemon-reload
sudo systemctl restart node_exporter
Enable process collector:
# Add to ExecStart
--collector.processes
sudo systemctl daemon-reload
sudo systemctl restart node_exporter
Textfile Collector for Custom Metrics
The textfile collector allows you to expose custom metrics by writing them to files.
Setup textfile collector:
# Create directory for textfile collector
sudo mkdir -p /var/lib/node_exporter/textfile_collector
sudo chown node_exporter:node_exporter /var/lib/node_exporter/textfile_collector
# Enable in service configuration
sudo nano /etc/systemd/system/node_exporter.service
Add to ExecStart:
--collector.textfile.directory=/var/lib/node_exporter/textfile_collector
Create custom metrics script:
#!/bin/bash
# /usr/local/bin/custom_metrics.sh
OUTPUT_FILE="/var/lib/node_exporter/textfile_collector/custom_metrics.prom"
{
# Custom metric: count of logged-in users
USER_COUNT=$(who | wc -l)
echo "# HELP logged_in_users Number of logged in users"
echo "# TYPE logged_in_users gauge"
echo "logged_in_users $USER_COUNT"
# Custom metric: failed login attempts
FAILED_LOGINS=$(grep "Failed password" /var/log/auth.log 2>/dev/null | wc -l)
echo "# HELP failed_login_attempts Total failed login attempts"
echo "# TYPE failed_login_attempts counter"
echo "failed_login_attempts $FAILED_LOGINS"
# Custom metric: running processes
PROCESS_COUNT=$(ps aux | wc -l)
echo "# HELP total_processes Total number of processes"
echo "# TYPE total_processes gauge"
echo "total_processes $PROCESS_COUNT"
} > "$OUTPUT_FILE.$$"
# Atomic move to avoid partial reads
mv "$OUTPUT_FILE.$$" "$OUTPUT_FILE"
Make executable and schedule:
sudo chmod +x /usr/local/bin/custom_metrics.sh
# Add to crontab
sudo crontab -e
# Run every 5 minutes
*/5 * * * * /usr/local/bin/custom_metrics.sh
Restart Node Exporter:
sudo systemctl daemon-reload
sudo systemctl restart node_exporter
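Before waiting for cron, you can run the script once by hand and check that the generated file looks right (the promtool lint step is optional and assumes promtool is installed alongside Prometheus):
sudo /usr/local/bin/custom_metrics.sh
cat /var/lib/node_exporter/textfile_collector/custom_metrics.prom
# Optional: lint the exposition format
cat /var/lib/node_exporter/textfile_collector/custom_metrics.prom | promtool check metrics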
Accessing Metrics
View Metrics Endpoint
# View all metrics
curl http://localhost:9100/metrics
# Filter specific metrics
curl http://localhost:9100/metrics | grep node_cpu
# View filesystem metrics
curl http://localhost:9100/metrics | grep node_filesystem
# View memory metrics
curl http://localhost:9100/metrics | grep node_memory
# Check textfile collector metrics
curl http://localhost:9100/metrics | grep custom_
Understanding Metrics Format
Metrics are exposed in Prometheus text format:
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 123456.78
node_cpu_seconds_total{cpu="0",mode="system"} 1234.56
node_cpu_seconds_total{cpu="0",mode="user"} 2345.67
Components:
- # HELP: Description of the metric
- # TYPE: Metric type (counter, gauge, histogram, summary)
- Metric name: node_cpu_seconds_total
- Labels: {cpu="0",mode="idle"}
- Value: 123456.78
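As a worked example, the following PromQL expression turns the idle-time counter above into a per-instance CPU utilization percentage (the same expression is reused in the alert rules later in this guide):
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)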
Testing Metrics Collection
# Test script to verify metrics
cat > /tmp/test_metrics.sh <<'EOF'
#!/bin/bash
echo "Testing Node Exporter Metrics..."
echo ""
# Test connection
if curl -s http://localhost:9100/metrics > /dev/null; then
echo "✓ Node Exporter is accessible"
else
echo "✗ Node Exporter is not accessible"
exit 1
fi
# Count total metrics
METRIC_COUNT=$(curl -s http://localhost:9100/metrics | grep -c "^node_")
echo "✓ Total metrics: $METRIC_COUNT"
# Check CPU metrics
if curl -s http://localhost:9100/metrics | grep -q "node_cpu_seconds_total"; then
echo "✓ CPU metrics available"
fi
# Check memory metrics
if curl -s http://localhost:9100/metrics | grep -q "node_memory_MemTotal_bytes"; then
echo "✓ Memory metrics available"
fi
# Check disk metrics
if curl -s http://localhost:9100/metrics | grep -q "node_disk_io_time_seconds_total"; then
echo "✓ Disk metrics available"
fi
# Check filesystem metrics
if curl -s http://localhost:9100/metrics | grep -q "node_filesystem_avail_bytes"; then
echo "✓ Filesystem metrics available"
fi
# Check network metrics
if curl -s http://localhost:9100/metrics | grep -q "node_network_receive_bytes_total"; then
echo "✓ Network metrics available"
fi
echo ""
echo "Node Exporter is working correctly!"
EOF
chmod +x /tmp/test_metrics.sh
/tmp/test_metrics.sh
Configuring Prometheus to Scrape Node Exporter
Add Scrape Configuration
Edit Prometheus configuration:
sudo nano /etc/prometheus/prometheus.yml
Add scrape job:
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets: ['localhost:9100']
        labels:
          instance: 'server1'
          environment: 'production'
Multiple servers:
scrape_configs:
  - job_name: 'node_exporter'
    static_configs:
      - targets:
          - 'server1.example.com:9100'
          - 'server2.example.com:9100'
          - 'server3.example.com:9100'
        labels:
          environment: 'production'
With service discovery (file-based):
scrape_configs:
  - job_name: 'node_exporter'
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/node_exporter_*.yml'
        refresh_interval: 5m
Create target file:
sudo mkdir -p /etc/prometheus/targets
sudo nano /etc/prometheus/targets/node_exporter_prod.yml
- targets:
    - 'server1.example.com:9100'
    - 'server2.example.com:9100'
  labels:
    environment: 'production'
    region: 'us-east'
- targets:
    - 'server3.example.com:9100'
  labels:
    environment: 'production'
    region: 'us-west'
Reload Prometheus:
# Check configuration
promtool check config /etc/prometheus/prometheus.yml
# Reload Prometheus
sudo systemctl reload prometheus
# Or send SIGHUP
sudo killall -HUP prometheus
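After the reload, confirm the new targets are healthy either on the Targets page or via the Prometheus HTTP API (assuming Prometheus listens on its default port 9090):
# Scrape health of all node_exporter targets (1 = up)
curl -s http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="node_exporter"}'
# Or inspect target state directly
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'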
Securing Node Exporter
Firewall Configuration
Using UFW (Ubuntu/Debian):
# Allow from specific Prometheus server
sudo ufw allow from 192.168.1.100 to any port 9100
# Allow from subnet
sudo ufw allow from 192.168.1.0/24 to any port 9100
# Check rules
sudo ufw status
Using firewalld (CentOS/Rocky Linux):
# Add rich rule for specific IP
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.100" port port="9100" protocol="tcp" accept'
# Add rich rule for subnet
sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.1.0/24" port port="9100" protocol="tcp" accept'
# Reload firewall
sudo firewall-cmd --reload
# List rules
sudo firewall-cmd --list-all
Using iptables:
# Allow from specific IP
sudo iptables -A INPUT -p tcp -s 192.168.1.100 --dport 9100 -j ACCEPT
# Allow from subnet
sudo iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 9100 -j ACCEPT
# Drop all other connections to port 9100
sudo iptables -A INPUT -p tcp --dport 9100 -j DROP
# Save rules
sudo netfilter-persistent save
TLS/HTTPS Configuration
Create TLS certificates:
# Create directory for certificates
sudo mkdir -p /etc/node_exporter
# Generate self-signed certificate (or use Let's Encrypt)
sudo openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 \
-keyout /etc/node_exporter/node_exporter.key \
-out /etc/node_exporter/node_exporter.crt \
-subj "/CN=node-exporter"
# Set permissions
sudo chown -R node_exporter:node_exporter /etc/node_exporter
sudo chmod 400 /etc/node_exporter/node_exporter.key
Create web configuration file:
sudo nano /etc/node_exporter/web-config.yml
tls_server_config:
  cert_file: /etc/node_exporter/node_exporter.crt
  key_file: /etc/node_exporter/node_exporter.key
Update systemd service:
sudo nano /etc/systemd/system/node_exporter.service
[Service]
ExecStart=/usr/local/bin/node_exporter \
--web.config.file=/etc/node_exporter/web-config.yml
sudo systemctl daemon-reload
sudo systemctl restart node_exporter
Test HTTPS endpoint:
curl -k https://localhost:9100/metrics
Basic Authentication
Create password file:
# Install htpasswd
sudo apt install apache2-utils -y # Ubuntu/Debian
sudo yum install httpd-tools -y # CentOS/Rocky
# Generate password hash
htpasswd -nBC 12 "" | tr -d ':\n'
# Enter password when prompted
# Copy the hash output
Update web configuration:
sudo nano /etc/node_exporter/web-config.yml
tls_server_config:
  cert_file: /etc/node_exporter/node_exporter.crt
  key_file: /etc/node_exporter/node_exporter.key
basic_auth_users:
  prometheus: $2y$12$HASH_FROM_HTPASSWD_COMMAND
sudo systemctl restart node_exporter
Test authentication:
# Without auth (should fail)
curl -k https://localhost:9100/metrics
# With auth (should succeed)
curl -k -u prometheus:password https://localhost:9100/metrics
Update Prometheus configuration:
scrape_configs:
  - job_name: 'node_exporter'
    scheme: https
    tls_config:
      insecure_skip_verify: true
    basic_auth:
      username: 'prometheus'
      password: 'password'
    static_configs:
      - targets: ['localhost:9100']
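To keep the credential out of prometheus.yml, the basic_auth block also accepts a password_file instead of an inline password; a minimal sketch (the file path is an example):
basic_auth:
  username: 'prometheus'
  password_file: /etc/prometheus/node_exporter.password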
Monitoring and Alerting
Prometheus Alert Rules
Create alert rules file:
sudo nano /etc/prometheus/rules/node_exporter_alerts.yml
groups:
  - name: node_exporter_alerts
    interval: 30s
    rules:
      # CPU alerts
      - alert: HighCPUUsage
        expr: (100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)) > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
          description: "CPU usage is above 80% (current value: {{ $value }}%)"
      # Memory alerts
      - alert: HighMemoryUsage
        expr: (1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100 > 85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage on {{ $labels.instance }}"
          description: "Memory usage is above 85% (current value: {{ $value }}%)"
      # Disk space alerts
      - alert: LowDiskSpace
        expr: (node_filesystem_avail_bytes{fstype!="tmpfs"} / node_filesystem_size_bytes{fstype!="tmpfs"}) * 100 < 15
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Low disk space on {{ $labels.instance }}:{{ $labels.mountpoint }}"
          description: "Disk space is below 15% (current value: {{ $value }}%)"
      # Node down alert
      - alert: NodeDown
        expr: up{job="node_exporter"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Node Exporter down on {{ $labels.instance }}"
          description: "Node Exporter has been down for more than 1 minute"
      # High load average
      - alert: HighLoadAverage
        expr: node_load15 / count without (cpu, mode) (node_cpu_seconds_total{mode="system"}) > 2
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High load average on {{ $labels.instance }}"
          description: "Load average is above 2.0 per CPU (current value: {{ $value }})"
Include rules in Prometheus:
sudo nano /etc/prometheus/prometheus.yml
rule_files:
  - "/etc/prometheus/rules/node_exporter_alerts.yml"
# Validate configuration
promtool check config /etc/prometheus/prometheus.yml
# Reload Prometheus
sudo systemctl reload prometheus
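The rule file itself can also be validated before reloading:
promtool check rules /etc/prometheus/rules/node_exporter_alerts.yml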
Troubleshooting
Common Issues
Issue 1: Node Exporter not starting
# Check service status
sudo systemctl status node_exporter
# View logs
sudo journalctl -u node_exporter -n 50
# Check if binary exists
ls -l /usr/local/bin/node_exporter
# Verify permissions
ls -l /usr/local/bin/node_exporter
# Should be owned by node_exporter user
# Test manual start
sudo -u node_exporter /usr/local/bin/node_exporter
Issue 2: Metrics endpoint not accessible
# Check if service is listening
sudo ss -tlnp | grep 9100
# Test local connection
curl http://localhost:9100/metrics
# Check firewall rules
sudo iptables -L -n | grep 9100
sudo ufw status
# Verify listen address
ps aux | grep node_exporter
Issue 3: Missing metrics
# Count exposed metric families (a rough check that collectors are reporting)
curl http://localhost:9100/metrics | grep "# HELP" | wc -l
# Verify specific collector is enabled
ps aux | grep node_exporter | grep collector
# Check for errors in logs
sudo journalctl -u node_exporter | grep -i error
Issue 4: Prometheus not scraping
# Check Prometheus targets
# Visit: http://prometheus-server:9090/targets
# Verify scrape configuration
cat /etc/prometheus/prometheus.yml | grep -A 5 node_exporter
# Check Prometheus logs
sudo journalctl -u prometheus | grep node_exporter
# Test connectivity from Prometheus server
curl http://node-exporter-host:9100/metrics
Conclusion
Node Exporter is an essential component for monitoring Linux servers with Prometheus, providing comprehensive system metrics with minimal overhead. By following this guide, you've learned how to install, configure, secure, and integrate Node Exporter into your monitoring infrastructure.
Key takeaways:
- Lightweight monitoring - Node Exporter adds minimal overhead while providing extensive metrics
- Flexible configuration - Enable only the collectors you need for your use case
- Custom metrics - Use textfile collector to expose application-specific metrics
- Security - Implement TLS, authentication, and firewall rules to protect the metrics endpoint
- Integration - Seamlessly integrates with Prometheus for alerting and visualization
Best practices for production deployment:
- Use systemd for reliable service management
- Implement firewall rules to restrict access
- Enable TLS and basic authentication for security
- Configure appropriate alert rules in Prometheus
- Use textfile collector for custom application metrics
- Monitor Node Exporter itself to ensure metrics collection
- Document your collector configuration and custom metrics
- Regularly update to the latest stable version
Node Exporter, combined with Prometheus and Grafana, forms a powerful monitoring stack that scales from single servers to large distributed systems. The metrics collected by Node Exporter provide the foundation for understanding system performance, capacity planning, and proactive issue detection in modern infrastructure environments.


