Identifying Bottlenecks with perf

Introduction

Linux perf, the performance analysis toolkit, is a powerful profiling framework that provides deep insight into application and system performance. Unlike strace, which traces system calls, perf uses hardware counters and kernel instrumentation to identify CPU bottlenecks, memory access patterns, cache misses, and performance hotspots with minimal overhead. It is the gold standard for performance analysis on Linux systems.

perf answers critical performance questions: Which functions consume the most CPU? Where do cache misses occur? What causes context switches? Which code paths are hot? By sampling the execution stack at regular intervals, perf builds a statistical profile showing exactly where time is spent, enabling targeted optimization.

This guide covers perf installation, usage patterns, flame graph generation, bottleneck identification techniques, and real-world optimization examples. You will learn how to systematically find and eliminate performance bottlenecks in applications and systems.

Understanding perf

What perf Measures

CPU Performance:

  • Function execution time
  • Instruction counts
  • CPU cycles
  • Branch mispredictions

Memory Performance:

  • Cache misses (L1, L2, L3)
  • Memory loads/stores
  • TLB misses
  • Page faults

System Events:

  • Context switches
  • CPU migrations
  • System calls
  • Interrupts
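All three categories can be counted directly with perf stat, as the sketch below shows (event names vary by CPU model; run perf list to see what your hardware exposes):

# Count representative events from each category in a single run
perf stat -e cycles,instructions \
          -e cache-misses,page-faults \
          -e context-switches,cpu-migrations \
          ./app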

How perf Works

  1. Sampling: periodically interrupts the CPU to record a stack trace
  2. Hardware Counters: uses the CPU's performance monitoring units (PMUs)
  3. Kernel Events: traces kernel-level activity
  4. Symbol Resolution: maps addresses to function names
  5. Statistical Analysis: aggregates samples to identify hotspots
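Before profiling anything, it is worth a quick sanity check that hardware counters are actually available (a minimal sketch; `true` is just a trivial workload):

# If the counters print "<not supported>", the PMU is unavailable
# (common inside VMs) and perf falls back to software events
perf stat -e cycles,instructions true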

Installation

# Ubuntu/Debian
apt-get install linux-tools-common linux-tools-generic linux-tools-$(uname -r) -y

# CentOS/Rocky Linux
dnf install perf -y

# Verify installation
perf --version

# Check available events
perf list

# Output shows hundreds of events:
# cpu-cycles OR cycles                               [Hardware event]
# instructions                                        [Hardware event]
# cache-references                                    [Hardware event]
# cache-misses                                        [Hardware event]
# branch-instructions OR branches                     [Hardware event]
# branch-misses                                       [Hardware event]
# ...

Basic Usage

CPU Profiling

# Profile application
perf record ./application

# Creates perf.data file
# View report
perf report

# Interactive TUI showing:
# Overhead  Command   Shared Object      Symbol
#  45.23%   app       app                [.] compute_results
#  18.67%   app       libc-2.31.so       [.] __memcpy_avx_unaligned
#  12.34%   app       app                [.] process_data
#   8.92%   app       app                [.] parse_input
#   ...

# Functions consuming most CPU time are at top

Real-Time Monitoring

# Top-like interface for system-wide profiling
perf top

# Shows:
# Overhead  Shared Object       Symbol
#   8.23%  [kernel]            [k] _raw_spin_lock
#   5.67%  libc-2.31.so        [.] __memcpy_avx_unaligned
#   4.12%  nginx               [.] ngx_http_process_request
#   3.45%  [kernel]            [k] copy_user_enhanced_fast_string
#   ...

# Updates in real-time
# Press 'h' for help, 'q' to quit

System-Wide Profiling

# Profile entire system for 30 seconds
perf record -a -g sleep 30

# -a: all CPUs
# -g: capture call graphs (stack traces)

# Generate report
perf report

# Shows all processes and their CPU usage

Advanced Profiling Techniques

Call Graph Profiling

# Record with call graphs
perf record -g ./application

# Or with more detail
perf record -g --call-graph dwarf ./application

# View hierarchical call graph
perf report -g

# Output shows call chains:
# Children  Self  Command  Shared Object  Symbol
#  45.23%  0.00%  app      app            [.] main
#     |
#     ---main
#        |
#        |--30.45%--compute_results
#        |    |
#        |    |--20.12%--heavy_calculation
#        |    |
#        |    --10.33%--data_processing
#        |
#        --14.78%--parse_input

# Identifies call paths consuming most time

Sampling Frequency

# Default: 4000 Hz (4000 samples per second)
perf record ./app

# Higher frequency (more accurate, more overhead)
perf record -F 9999 ./app  # 9999 Hz

# Lower frequency (less overhead, less accurate)
perf record -F 99 ./app    # 99 Hz

# Odd frequencies (99, 999, 9999) avoid sampling in lockstep with
# periodic timers, which would bias the profile

# Let perf use the maximum rate the kernel allows
perf record -F max ./app
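To confirm a run captured enough samples, check the summary line perf record prints when it finishes (the count below is illustrative):

perf record -F 999 ./app
# [ perf record: Captured and wrote 0.512 MB perf.data (4321 samples) ]

# A very low sample count makes the percentages in perf report noisy;
# raise -F or profile a longer run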

Profiling Specific Events

# Cache miss profiling
perf record -e cache-misses ./app
perf report

# Branch misprediction profiling
perf record -e branch-misses ./app

# Multiple events
perf record -e cpu-cycles,cache-misses,branch-misses ./app

# View the report; with multiple events recorded, --stdio prints one
# section per event (the TUI lets you switch between them with Tab)
perf report --stdio | head -50

Process-Specific Profiling

# Attach to running process
perf record -p <PID> -g sleep 30

# Profile specific command with arguments
perf record ./benchmark --iterations=1000

# -p already follows every thread of the target process; to profile a
# single thread, pass its TID instead
perf record -t <TID> -g sleep 30

Performance Analysis Workflow

Step 1: Identify Hotspots

# Profile application
perf record -g ./application

# Generate basic report
perf report --stdio > perf-report.txt

# Show the top functions
head -50 perf-report.txt

# Example output:
# Overhead  Command  Shared Object      Symbol
#  45.23%   app      app                [.] md5_transform
#  18.67%   app      libc-2.31.so       [.] memcpy
#  12.34%   app      app                [.] json_parse
#   8.92%   app      app                [.] string_concat
#   5.45%   app      libz.so.1          [.] inflate
#   ...

# Interpretation:
# md5_transform consuming 45% of CPU - optimization target!

Step 2: Analyze Call Chains

# View call graph for specific function
perf report -g 'graph,0.5,caller' --no-children

# Focus on a specific symbol
perf report --symbols md5_transform

# Shows:
# Overhead  Command  Symbol
#  45.23%   app      [.] md5_transform
#     |
#     ---md5_transform
#        |
#        |--35.12%--compute_hash
#        |          |
#        |          ---process_file
#        |             |
#        |             ---main
#        |
#        --10.11%--verify_checksum
#                  |
#                  ---validate_data
#                     |
#                     ---main

# Shows md5_transform called from two paths
# compute_hash path is hottest (35.12%)

Step 3: Annotate the Source Code

# Show assembly and source with performance data
perf annotate md5_transform

# Output shows instruction-level hotspots:
#       :      Disassembly of md5_transform:
#  0.00 :      push   %r12
#  0.00 :      push   %rbx
#  0.00 :      mov    %rsi,%r12
# 12.34 :      mov    0x10(%rsi),%eax    # <-- 12% of time here
# 18.92 :      mov    0x14(%rsi),%ebx    # <-- 19% of time here
#  8.45 :      rol    $0x7,%eax           # <-- 8% of time here
#  ...

# Identifies specific instructions causing bottlenecks

Real-World Case Studies

Case Study 1: Slow Image Processing

Problem: An image processing application is slow, and it is not clear why

# Profile application
perf record -g ./process-images input/*.jpg

# View report
perf report --stdio | head -30

# Results:
# Overhead  Symbol
#  67.23%   [.] resize_bilinear
#  12.45%   [.] convert_colorspace
#   8.92%   [.] save_jpeg
#   5.34%   [.] load_jpeg
#   ...

# Bottleneck identified: resize_bilinear consuming 67% CPU

# Analyze call graph
perf report -g --symbols resize_bilinear

# Shows:
# resize_bilinear
#   |
#   |--45.67%--interpolate_pixel    # Main bottleneck
#   |
#   --21.56%--calculate_weights

# Annotate to see hotspot code
perf annotate interpolate_pixel

# Shows tight loop with expensive divisions:
# 23.45%:  divsd  %xmm1,%xmm0    # Division in hot loop!
# 19.87%:  mulsd  %xmm2,%xmm0
# 15.23%:  addsd  %xmm3,%xmm0

# Root cause: Division operation in inner loop
# Solution: Precompute division results
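# A sketch of the fix (illustrative names, not the project's real code):
#   double inv_scale = 1.0 / scale;  /* division hoisted out of the loop */
#   pixel = value * inv_scale;       /* mulsd replaces divsd */
# A multiply costs a few cycles on most x86 cores while a divide costs
# well over ten, so hoisting the division out of the hot loop pays off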

# After optimization:
# Overhead  Symbol
#  28.12%   [.] resize_bilinear  # 58% reduction!
#  15.34%   [.] convert_colorspace
#  12.45%   [.] save_jpeg
#   ...

# Total speedup: 2.4x faster image processing

Case Study 2: High CPU Usage Mystery

Problem: A web server is using 80% CPU on an apparently idle system

# System-wide profiling
sudo perf record -a -g sleep 30

# View report
sudo perf report

# Results show:
# Overhead  Command   Symbol
#  32.45%   php-fpm   [.] php_json_decode
#  18.23%   php-fpm   [.] zend_hash_find
#  12.67%   nginx     [.] ngx_http_process_request
#   8.92%   php-fpm   [.] pcre_compile
#   ...

# json_decode consuming significant CPU despite "idle" system

# Examine call graph
sudo perf report -g --symbols php_json_decode

# Shows:
# php_json_decode
#   |
#   ---zif_json_decode
#      |
#      ---execute_ex
#         |
#         ---zend_execute
#            |
#            ---check_session.php:35

# Traced to check_session.php line 35

# Investigate source:
# while (true) {
#     $session_data = json_decode(file_get_contents('/tmp/session'));
#     usleep(1000);  # Check every 1ms!
# }

# Root cause: Polling loop decoding JSON 1000 times per second
# Solution: Use proper session handling, reduce polling frequency

# After fix:
# Overhead  Symbol
#   5.12%   [.] php_json_decode  # 84% reduction
#   4.23%   [.] ngx_http_process_request
#   ...

# CPU usage dropped to 12% (85% reduction)

Case Study 3: Memory Access Patterns

Problem: An application runs slower than expected despite optimized algorithms

# First quantify the problem with counters
perf stat -e cache-misses,cache-references ./app

# Shows:
# cache-misses: 234,567,890
# cache-references: 1,234,567,890
# Cache miss rate: 19% (very high!)

# Then record which functions take the most cache misses
perf record -e cache-misses -g ./app
perf report --stdio

# Results:
# Overhead  Symbol
#  42.34%   [.] process_matrix
#  23.45%   [.] sort_array
#  15.67%   [.] search_tree
#   ...

# Analyze process_matrix
perf annotate process_matrix

# Shows column-major access on row-major data:
# for (int col = 0; col < COLS; col++) {
#     for (int row = 0; row < ROWS; row++) {
#         result += matrix[row][col];  # Cache-unfriendly!
#     }
# }

# Root cause: Poor cache locality due to access pattern
# Solution: Change to row-major access
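# The cache-friendly version (same result, sequential memory access):
# for (int row = 0; row < ROWS; row++) {
#     for (int col = 0; col < COLS; col++) {
#         result += matrix[row][col];  # walks memory in order
#     }
# }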

# After optimization:
# cache-misses: 12,345,678 (95% reduction!)
# cache-references: 1,234,567,890
# Cache miss rate: 1% (excellent)

# Speedup: 4.2x faster execution

Flame Graphs

Generating Flame Graphs

# Install FlameGraph tools
git clone https://github.com/brendangregg/FlameGraph
cd FlameGraph

# Record perf data
perf record -F 99 -a -g -- sleep 30

# Generate flame graph
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl > flamegraph.svg

# Open in web browser
firefox flamegraph.svg

# Flame graph visualization:
# - Width: proportion of time
# - Height: call stack depth
# - Color: random by default (helps distinguish adjacent frames)
# - Wide "plateaus": performance bottlenecks

Interpreting Flame Graphs

                        [main 100%]
            ____________/          \____________
           /                                    \
  [process_data 60%]                  [handle_requests 40%]
      /         \                         /          \
[parse 30%]  [compute 30%]       [read_db 25%]  [respond 15%]
     |             |                   |              |
[json 30%]  [heavy_calc 30%]     [mysql 25%]    [format 15%]

# Interpretation:
# - heavy_calc is 30% wide = 30% of CPU time
# - mysql operations are 25% = database bottleneck
# - json parsing is 30% = parsing bottleneck

# Optimization targets:
# 1. heavy_calc (30%) - largest single function
# 2. mysql (25%) - database optimization potential
# 3. json (30%) - parsing optimization
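Because folded stacks are plain text, you can also render a flame graph for just one subsystem by filtering before the final stage (a sketch; "mysql" is an example pattern, and --title is a standard flamegraph.pl option):

perf script | ./stackcollapse-perf.pl > out.folded
grep mysql out.folded | ./flamegraph.pl --title "mysql paths only" > mysql.svg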

Differential Flame Graphs

# Compare before and after optimization

# Before optimization
perf record -F 99 -a -g -o perf-before.data ./app
perf script -i perf-before.data | ./stackcollapse-perf.pl > before.folded

# After optimization
perf record -F 99 -a -g -o perf-after.data ./app
perf script -i perf-after.data | ./stackcollapse-perf.pl > after.folded

# Generate differential flame graph
./difffolded.pl before.folded after.folded | ./flamegraph.pl > diff.svg

# Visualization:
# - Red: increased CPU time (regressions)
# - Blue: decreased CPU time (improvements)
# - Gray: unchanged

# Shows impact of optimizations visually

Specialized Profiling

Off-CPU Analysis

# Profile scheduler events (why processes block)
perf record -e sched:sched_switch -e sched:sched_stat_sleep -a -g sleep 30

# Generate off-CPU flame graph
perf script | ./stackcollapse-perf.pl | ./flamegraph.pl --color=io > off-cpu.svg

# Shows:
# - I/O waits
# - Lock contention
# - Network delays
# - Sleep/yield calls

# Complements CPU profiling (total time = on-CPU time + off-CPU time)

Memory Profiling

# Profile page faults
perf record -e page-faults -g ./app

# Profile memory loads/stores
perf record -e cpu/mem-loads/ -e cpu/mem-stores/ -g ./app

# Analyze memory access patterns
perf mem record ./app
perf mem report

# Shows:
# - Load/store distribution
# - NUMA node access patterns
# - TLB miss rates

Context Switch Analysis

# Profile context switches
perf record -e context-switches -g -a sleep 30

# View report
perf report

# High context switches indicate:
# - Lock contention
# - Too many threads
# - I/O blocking
# - Improper sleep/wake patterns

# Example output:
# Overhead  Command  Symbol
#  23.45%   app      [k] schedule
#  18.92%   app      [k] futex_wait_queue_me
#  12.34%   app      [.] pthread_mutex_lock
#   ...

# Shows mutex contention causing context switches
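Before digging into stacks, interval counting is a cheap way to see whether the rate itself is abnormal (a sketch; -I prints running totals every 1000 ms):

perf stat -e context-switches,cpu-migrations -I 1000 -a sleep 10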

Performance Counters

Hardware Counter Analysis

# List available hardware counters
perf list hardware

# Profile multiple counters
perf stat -e cycles,instructions,cache-misses,branch-misses ./app

# Output:
Performance counter stats for './app':

    2,345,678,901      cycles
    3,456,789,012      instructions    #  1.47  insn per cycle
       45,678,901      cache-misses    #  1.95% of all cache refs
        8,901,234      branch-misses   #  0.73% of all branches

      2.345678 seconds time elapsed

# Interpretation:
# IPC (insn per cycle): 1.47 is good (> 1.0)
# Cache miss rate: 1.95% is excellent (< 5%)
# Branch miss rate: 0.73% is good (< 2%)
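For scripted comparisons, perf stat -x emits machine-readable CSV on stderr. A sketch that recomputes IPC from the raw counters (field 1 is the value, field 3 the event name):

perf stat -x, -e cycles,instructions ./app 2> stats.csv
CYCLES=$(awk -F, '$3 == "cycles" {print $1}' stats.csv)
INSNS=$(awk -F, '$3 == "instructions" {print $1}' stats.csv)
echo "IPC: $(echo "scale=2; $INSNS / $CYCLES" | bc)"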

Custom Event Groups

# Define event group
perf stat -e '{cycles,instructions,cache-misses}' ./app

# Profile with detailed breakdown
perf stat -d ./app

# Output shows:
# - L1-dcache-loads
# - L1-dcache-load-misses
# - LLC-loads
# - LLC-load-misses
# - dTLB-loads
# - dTLB-load-misses

# Comprehensive cache hierarchy analysis

Optimization Workflow

1. Baseline Measurement

# Establish baseline performance (perf stat writes to stderr)
perf stat -r 10 ./app 2> baseline.txt

# -r 10: run 10 times, report mean and deviation
# Gives a consistent baseline before optimizing

# Results:
Performance counter stats for './app' (10 runs):

      2,345.67 msec task-clock       #  0.998 CPUs utilized  ( +-  0.23% )
    2,345,678,901 cycles             #  1.000 GHz            ( +-  0.18% )
    3,456,789,012 instructions       #  1.47  insn per cycle ( +-  0.12% )

      2.3512 +- 0.0054 seconds time elapsed  ( +-  0.23% )

# baseline.txt is kept for comparison after optimization

2. Identify Hotspots

# Profile with call graphs
perf record -g -F 999 ./app

# Generate annotated report
perf report -g --stdio > hotspots.txt

# Extract top functions
grep "^#.*Overhead" -A 30 hotspots.txt > top-functions.txt

# Prioritize based on:
# 1. High overhead (> 5%)
# 2. Accessible code (can modify)
# 3. Optimization potential (algorithm vs library)
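A small sketch that applies the first criterion automatically: data lines in --stdio output begin with a percentage, so awk can list every function above 5% (the threshold is arbitrary):

perf report --stdio | awk '$1 ~ /%$/ && $1+0 > 5.0 {print $1, $NF}'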

3. Optimize and Validate

# After making an optimization (again capture stderr)
perf stat -r 10 ./app 2> after-optimization.txt

# Compare with baseline
echo "=== Before ==="
grep "seconds time elapsed" baseline.txt

echo "=== After ==="
grep "seconds time elapsed" after-optimization.txt

# Calculate improvement
# Before: 2.3512 seconds
# After: 1.1234 seconds
# Improvement: 52% faster (2.09x speedup)
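The same arithmetic in two lines of shell (values taken from the runs above):

BEFORE=2.3512; AFTER=1.1234
echo "speedup: $(echo "scale=2; $BEFORE / $AFTER" | bc)x"                        # 2.09x
echo "improvement: $(echo "scale=1; ($BEFORE - $AFTER) * 100 / $BEFORE" | bc)%"  # 52.2%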

4. Iterative Optimization

#!/bin/bash
# optimize-iteratively.sh

APP="./app"
ITERATIONS=10

echo "Iteration,Time(s),Cycles,Instructions,IPC" > optimization-log.csv

for i in {1..10}; do
    echo "Optimization iteration $i"

    # Measure performance (perf stat reports on stderr)
    STATS=$(perf stat -r $ITERATIONS $APP 2>&1)
    TIME=$(echo "$STATS" | grep "seconds time elapsed" | awk '{print $1}')
    CYCLES=$(echo "$STATS" | grep -m1 -w "cycles" | awk '{print $1}' | tr -d ',')
    INSNS=$(echo "$STATS" | grep -m1 -w "instructions" | awk '{print $1}' | tr -d ',')
    IPC=$(echo "scale=2; $INSNS / $CYCLES" | bc)

    echo "$i,$TIME,$CYCLES,$INSNS,$IPC" >> optimization-log.csv

    # Identify the next hotspot: the first data line of the report
    # contains a percentage, and its last field is the symbol name
    perf record -g $APP
    HOTSPOT=$(perf report --stdio | grep -m1 '%' | awk '{print $NF}')

    echo "Top hotspot: $HOTSPOT"
    echo "Press Enter to optimize and continue..."
    read

    # User optimizes code, then continues loop
done

echo "Optimization complete. Review optimization-log.csv for progress."

Best Practices

1. Reduce Noise

# Bad: Profile with other processes running
perf record ./app  # System under load

# Good: Minimize background activity
# Stop unnecessary services
systemctl stop unnecessary-service

# Set CPU governor to performance
for cpu in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    echo performance > $cpu
done

# Run as root so perf has full access to kernel symbols and all events
sudo perf record ./app
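For even more stable numbers, pinning the workload to a single core with taskset removes migration noise (core 3 is arbitrary):

taskset -c 3 perf stat -r 10 ./app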

2. Adequate Sample Counts

# Bad: short system-wide session
perf record -F 99 -a sleep 1   # Too few samples

# Good: longer profiling for statistical significance
perf record -F 99 -a sleep 30  # 30 seconds ≈ 3,000 samples per CPU

# Better: profile a realistic workload from start to finish
perf record -F 99 ./run-benchmark

3. Symbol Resolution

# Ensure debug symbols available
# Install debug packages
apt-get install libc6-dbg

# Compile with debug symbols
gcc -g -O2 app.c -o app

# Verify symbols present
nm app | grep function_name

# Without symbols, perf shows addresses:
# 0x00007f8b4c2d1234
# With symbols:
# compute_results

4. Focused Analysis

# Don't profile everything at once
# Focus on one aspect at a time:

# CPU bottlenecks:
perf record -e cycles -g ./app

# Memory issues:
perf record -e cache-misses -g ./app

# Branch prediction:
perf record -e branch-misses -g ./app

# Targeted analysis is more actionable

Troubleshooting

Permission Issues

# Error: Permission denied
# Solution 1: Run as root
sudo perf record ./app

# Solution 2: Lower the paranoid level
# (2 = user-space only, 1 = + kernel profiling of own processes,
#  0 = + CPU-wide events, -1 = no restrictions)
sudo sysctl -w kernel.perf_event_paranoid=-1
# Or permanently:
echo "kernel.perf_event_paranoid = -1" | sudo tee -a /etc/sysctl.conf

Missing Symbols

# Functions shown as addresses
# Solution: Install debug symbols
apt-get install <package>-dbg

# For kernel symbols
apt-get install linux-image-$(uname -r)-dbg

High Overhead

# Profiling is slowing the application down
# Reduce the sampling frequency
perf record -F 99 ./app  # Lower frequency

# Or skip call-graph capture, which dominates recording overhead
perf record ./app        # Without -g

Conclusion

perf is the most powerful performance analysis tool on Linux:

Key Capabilities:

  • CPU profiling with minimal overhead
  • Hardware counter analysis
  • Memory access pattern analysis
  • Call graph generation
  • Flame graph visualization
  • System-wide or per-process profiling

Common Bottlenecks It Identifies:

  • CPU-intensive functions (70-90% of optimization targets)
  • Cache misses (2-5x speedups often possible)
  • Branch mispredictions (10-30% improvements)
  • Memory access patterns (2-10x with optimization)
  • Lock contention (via context switch analysis)

Typical Results:

  • Function-level hotspots identified: 10-50x speedup potential
  • Algorithm optimizations: 2-100x improvements
  • Cache optimization: 2-5x faster
  • Overall application speedup: commonly 1.5-10x with systematic optimization

Workflow:

  1. Baseline measurement (perf stat)
  2. Identify hotspots (perf record + perf report)
  3. Analyze call chains (flame graphs)
  4. Annotate the source (perf annotate)
  5. Optimize the target functions
  6. Validate improvements (compare metrics)
  7. Iterate until the goals are met

Mastering perf gives you the ability to systematically find and eliminate performance bottlenecks, turning slow applications into highly optimized, efficient systems. The data-driven approach ensures optimization effort goes to real bottlenecks rather than guesses, maximizing the return on that effort.