Temporal Workflow Engine Installation

Temporal is a durable workflow execution platform that persists workflow state to a database, enabling long-running, fault-tolerant business processes that survive server restarts and network failures. With worker pools, activity-level retries, and rich visibility features, Temporal is the foundation for reliable microservice orchestration on Linux.

Prerequisites

  • Docker and Docker Compose installed
  • At least 4 GB RAM for the server stack
  • PostgreSQL, MySQL, or Cassandra (the Docker setup provisions one in dev)
  • A programming language SDK: Go, Python, Java, TypeScript, or .NET

Docker Compose Server Setup

# Clone Temporal's Docker Compose configurations
git clone https://github.com/temporalio/docker-compose.git temporal-docker
cd temporal-docker

# Start the default development setup (PostgreSQL backend)
docker compose up -d

# Variant compose files for other backends (MySQL, Cassandra) live in the
# same repository. Note that all of these target development and testing;
# for an ephemeral single-binary dev server (SQLite), use the Temporal
# CLI's `temporal server start-dev` instead.

# Verify services are running
docker compose ps

# Expected services:
# temporal (server)
# temporal-ui (web dashboard)
# temporal-admin-tools (CLI tools)
# postgresql (or cassandra)
# elasticsearch (for advanced visibility)

Access the Temporal Web UI at http://localhost:8080.

Production Docker Compose with PostgreSQL

# docker-compose.yml (production-oriented)

services:
  postgresql:
    image: postgres:15-alpine
    environment:
      POSTGRES_USER: temporal
      POSTGRES_PASSWORD: temporal-db-password
    volumes:
      - pg_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U temporal"]
      interval: 10s
      retries: 10

  temporal:
    image: temporalio/auto-setup:latest  # pin a specific version in production
    depends_on:
      postgresql:
        condition: service_healthy
    ports:
      - "7233:7233"  # gRPC frontend
    environment:
      DB: postgresql
      DB_PORT: 5432
      POSTGRES_USER: temporal
      POSTGRES_PWD: temporal-db-password
      POSTGRES_SEEDS: postgresql
      DYNAMIC_CONFIG_FILE_PATH: /etc/temporal/config/dynamicconfig/development-sql.yaml
    volumes:
      - ./dynamicconfig:/etc/temporal/config/dynamicconfig

  temporal-ui:
    image: temporalio/ui:latest
    depends_on:
      - temporal
    ports:
      - "8080:8080"
    environment:
      TEMPORAL_ADDRESS: temporal:7233
      TEMPORAL_CORS_ORIGINS: https://temporal.yourdomain.com

volumes:
  pg_data:

# The auto-setup image creates and migrates the schema automatically on
# first start. With the plain temporalio/server image, initialize it
# manually with temporal-sql-tool (setup-schema, then update-schema):
docker compose exec temporal temporal-sql-tool \
  --plugin postgres12 \
  --ep postgresql \
  --db temporal \
  setup-schema -v 0.0

Namespace Management

Namespaces provide isolation between different environments or teams:

# Get a shell to the admin tools container
docker compose exec temporal-admin-tools bash

# Or use the tctl CLI (installed in admin-tools)
alias tctl="docker compose exec temporal-admin-tools tctl"

# Create a namespace
tctl --namespace production namespace register \
  --global_namespace false \
  --retention 7 \
  --description "Production namespace"

# Create a development namespace with shorter retention
tctl --namespace development namespace register \
  --retention 1 \
  --description "Development namespace"

# List namespaces
tctl namespace list

# Update namespace retention
tctl --namespace production namespace update --retention 14

# Describe namespace details
tctl --namespace production namespace describe

Worker Configuration

Workers poll Temporal for workflow and activity tasks. Configure them in your application:

Go Worker Example

// worker/main.go
package main

import (
    "go.temporal.io/sdk/client"
    "go.temporal.io/sdk/worker"
    "log"
)

func main() {
    // Connect to Temporal server
    c, err := client.Dial(client.Options{
        HostPort:  "temporal:7233", // use "localhost:7233" from outside the compose network
        Namespace: "production",
    })
    if err != nil {
        log.Fatalf("Unable to connect: %v", err)
    }
    defer c.Close()

    // Create worker polling "order-processing" task queue
    w := worker.New(c, "order-processing", worker.Options{
        MaxConcurrentActivityExecutionSize: 10,
        MaxConcurrentWorkflowTaskExecutionSize: 5,
    })

    // Register workflows and activities
    w.RegisterWorkflow(OrderWorkflow)
    w.RegisterActivity(&Activities{})

    // Start worker (blocks until shutdown)
    if err := w.Run(worker.InterruptCh()); err != nil {
        log.Fatalf("Worker failed: %v", err)
    }
}

Python Worker Example

# worker.py
import asyncio
from temporalio.client import Client
from temporalio.worker import Worker
from workflows import OrderWorkflow
from activities import charge_card, send_confirmation, update_inventory

async def main():
    client = await Client.connect("temporal:7233", namespace="production")

    # Start worker
    async with Worker(
        client,
        task_queue="order-processing",
        workflows=[OrderWorkflow],
        activities=[charge_card, send_confirmation, update_inventory],
        max_concurrent_activities=10,
    ):
        print("Worker started, polling for tasks...")
        await asyncio.Future()  # Run forever

if __name__ == "__main__":
    asyncio.run(main())

Activity Design and Retry Policies

Activities are the individual units of work. Configure retry policies per activity:

Go Activities with Retry Policy

// activities.go
package main

import (
    "context"
    "fmt"
    "time"
    "go.temporal.io/sdk/activity"
    "go.temporal.io/sdk/temporal"
)

type Activities struct{}

func (a *Activities) ChargeCard(ctx context.Context, orderID string, amount float64) error {
    logger := activity.GetLogger(ctx)
    logger.Info("Charging card", "orderID", orderID, "amount", amount)

    // Activity heartbeat for long-running operations
    activity.RecordHeartbeat(ctx, "charging")

    // Call your payment API (paymentService is your own client, shown as a placeholder)
    err := paymentService.Charge(orderID, amount)
    if err != nil {
        // Wrap as application error to control retry behavior
        return temporal.NewApplicationError(
            fmt.Sprintf("payment failed: %v", err),
            "PaymentFailed",
            err,
        )
    }
    return nil
}

// In workflow code (which imports go.temporal.io/sdk/workflow), set a retry policy per activity
retryPolicy := &temporal.RetryPolicy{
    InitialInterval:    time.Second,
    BackoffCoefficient: 2.0,
    MaximumInterval:    time.Minute * 5,
    MaximumAttempts:    5,
    NonRetryableErrorTypes: []string{"InvalidCardError"},
}

activityOptions := workflow.ActivityOptions{
    StartToCloseTimeout: time.Minute * 10,
    RetryPolicy: retryPolicy,
}
ctx = workflow.WithActivityOptions(ctx, activityOptions)

Workflow Implementation

Durable Workflow Example

// workflow.go
package main

import (
    "time"
    "go.temporal.io/sdk/workflow"
    "go.temporal.io/sdk/temporal"
)

// OrderWorkflow orchestrates the order processing pipeline
func OrderWorkflow(ctx workflow.Context, order Order) error {
    logger := workflow.GetLogger(ctx)
    logger.Info("Starting order workflow", "orderID", order.ID)

    // Activities run with automatic retry and state persistence
    var activities *Activities

    // Step 1: Validate inventory
    ctx1 := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
        StartToCloseTimeout: time.Minute,
    })
    if err := workflow.ExecuteActivity(ctx1, activities.CheckInventory, order).Get(ctx1, nil); err != nil {
        return err
    }

    // Step 2: Charge payment (with custom retry policy)
    retryPolicy := &temporal.RetryPolicy{
        MaximumAttempts: 3,
        InitialInterval: time.Second * 30,
    }
    ctx2 := workflow.WithActivityOptions(ctx, workflow.ActivityOptions{
        StartToCloseTimeout: time.Minute * 5,
        RetryPolicy: retryPolicy,
    })
    if err := workflow.ExecuteActivity(ctx2, activities.ChargeCard, order.ID, order.Amount).Get(ctx2, nil); err != nil {
        // Compensate: release the inventory reservation (best-effort; error ignored)
        _ = workflow.ExecuteActivity(ctx1, activities.ReleaseInventory, order).Get(ctx1, nil)
        return err
    }

    // Step 3: Create the fulfillment order
    if err := workflow.ExecuteActivity(ctx1, activities.CreateFulfillment, order).Get(ctx1, nil); err != nil {
        return err
    }

    // Wait for shipping signal (durable wait - survives restarts)
    signalChan := workflow.GetSignalChannel(ctx, "order-shipped")
    var trackingNumber string
    signalChan.Receive(ctx, &trackingNumber)

    // Step 4: Send confirmation with tracking
    if err := workflow.ExecuteActivity(ctx1, activities.SendConfirmation, order, trackingNumber).Get(ctx1, nil); err != nil {
        return err
    }

    return nil
}

Starting Workflows

// Start a workflow from your application
workflowRun, err := c.ExecuteWorkflow(context.Background(),
    client.StartWorkflowOptions{
        ID:        "order-" + orderID,
        TaskQueue: "order-processing",
        // Workflow-level timeout
        WorkflowExecutionTimeout: time.Hour * 24 * 7,
    },
    OrderWorkflow,
    order,
)

# Or use tctl to start a workflow
tctl --namespace production workflow start \
  --taskqueue order-processing \
  --workflow_type OrderWorkflow \
  --workflow_id "order-12345" \
  --input '{"id":"12345","amount":99.99}'

# Signal a running workflow
tctl --namespace production workflow signal \
  --workflow_id "order-12345" \
  --name "order-shipped" \
  --input '"TRACK123456"'

Visibility and Monitoring

# List running workflows
tctl --namespace production workflow list

# List workflows with a query
tctl --namespace production workflow list \
  --query 'WorkflowType="OrderWorkflow" AND ExecutionStatus="Running"'

# Get workflow details
tctl --namespace production workflow describe \
  --workflow_id "order-12345"

# View workflow history
tctl --namespace production workflow show \
  --workflow_id "order-12345"

The Temporal Web UI at http://localhost:8080 provides:

  • Real-time workflow execution timeline
  • Activity input/output inspection
  • Retry attempt history
  • Workflow search with filters

Production Deployment Considerations

# Run multiple workers for high availability (assumes you have added a
# "worker" service for your own worker image to the compose file)
docker compose up -d --scale worker=3

# Configure worker resource limits via application-defined environment
# variables (these names are conventions for your own worker process,
# not built-in Temporal SDK settings; map them to worker options yourself):
TEMPORAL_MAX_CONCURRENT_WORKFLOW_TASKS=100
TEMPORAL_MAX_CONCURRENT_ACTIVITIES=50
TEMPORAL_WORKER_MAX_HEARTBEAT_THROTTLE=60s

# Set up Temporal server metrics (Prometheus)
# temporal server configuration:
global:
  metrics:
    prometheus:
      listenAddress: "0.0.0.0:9090"
      timerType: histogram

Troubleshooting

Worker not picking up tasks:

# Check worker is polling the correct task queue
tctl --namespace production taskqueue describe --taskqueue order-processing

# Verify namespace exists
tctl namespace list

# Check server logs
docker compose logs temporal --tail 50

Workflow stuck in "Running" state:

# View workflow history to see where it's stuck
tctl --namespace production workflow show --workflow_id "order-12345"

# Check if it is waiting for a signal: a received signal appears in the
# history as a "WorkflowExecutionSignaled" event

# Terminate stuck workflow
tctl --namespace production workflow terminate \
  --workflow_id "order-12345" \
  --reason "Manual termination for investigation"

Database connection errors:

# Check PostgreSQL is running
docker compose ps postgresql

# Test database connectivity from inside the PostgreSQL container
# (the temporal image does not ship the psql client)
docker compose exec postgresql psql -U temporal -c "SELECT 1"

Conclusion

Temporal transforms complex, multi-step business processes into durable, self-healing workflows by persisting every state transition to a database. Its separation of orchestration logic from execution code, combined with per-activity retry policies and signal-based event handling, enables building workflows that can span hours or days while surviving infrastructure failures. Starting with the Docker Compose setup provides a production-ready foundation that scales to multi-worker deployments as workload grows.