Modern applications require comprehensive observability to maintain optimal performance and reliability. This tutorial demonstrates how to deploy a complete observability stack using OpenTelemetry Collector, Prometheus, Loki, Tempo, and Grafana on Ubuntu 24.04 LTS. You’ll learn to collect metrics, logs, and traces from your applications in a unified monitoring platform.
Prerequisites
Before starting this observability deployment, ensure you have:
- Ubuntu 24.04 LTS VPS with minimum 4GB RAM and 2 vCPUs
- Root or sudo access to the server
- Docker and Docker Compose installed
- At least 20GB available storage space
- Basic understanding of containerized applications
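You can quickly confirm the hardware and OS side of these requirements with standard tools:
free -h          # total memory (4 GB or more)
nproc            # number of vCPUs
df -h /          # available disk space (20 GB or more)
lsb_release -ds  # Ubuntu release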
If you haven’t installed Docker yet, run these commands:
sudo apt update
sudo apt install -y docker.io docker-compose-v2
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
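The usermod change only takes effect in new login sessions, so log out and back in (or run newgrp docker) before continuing. You can then confirm that both the engine and the Compose plugin are available:
docker --version
docker compose version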
Step-by-Step Tutorial
Step 1: Create Project Structure
Create a dedicated directory for your observability stack:
mkdir ~/observability-stack
cd ~/observability-stack
mkdir -p configs/{prometheus,loki,tempo,otel-collector,grafana}
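A quick check confirms the layout was created as expected:
find configs -type d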
Step 2: Configure Prometheus
Create the Prometheus configuration file:
cat > configs/prometheus/prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'otel-collector'
    static_configs:
      - targets: ['otel-collector:8888']
  # Metrics exported by the collector's Prometheus exporter (configured in Step 5)
  - job_name: 'otel-collector-exporter'
    static_configs:
      - targets: ['otel-collector:8889']
  - job_name: 'grafana'
    static_configs:
      - targets: ['grafana:3000']
EOF
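If you want to catch syntax errors before deployment, the prom/prometheus image ships the promtool utility, which can lint this file in place (the image tag below matches the one used later in the Compose file):
docker run --rm \
  -v "$PWD/configs/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro" \
  --entrypoint /bin/promtool \
  prom/prometheus:v2.48.0 check config /etc/prometheus/prometheus.yml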
Step 3: Configure Loki for Log Aggregation
Set up Loki configuration for centralized log management:
cat > configs/loki/loki.yml << 'EOF'
auth_enabled: false
server:
  http_listen_port: 3100
  grpc_listen_port: 9096
common:
  instance_addr: 127.0.0.1
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    kvstore:
      store: inmemory
query_range:
  results_cache:
    cache:
      embedded_cache:
        enabled: true
        max_size_mb: 100
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: http://localhost:9093
EOF
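Loki can also check its own configuration: recent releases (including 2.9.x) accept a -verify-config flag that loads the file and exits, which you can run from the same image before the stack is deployed:
docker run --rm \
  -v "$PWD/configs/loki/loki.yml:/etc/loki/local-config.yaml:ro" \
  grafana/loki:2.9.2 -config.file=/etc/loki/local-config.yaml -verify-config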
Step 4: Configure Tempo for Distributed Tracing
Create Tempo configuration for trace collection:
cat > configs/tempo/tempo.yml << 'EOF'
server:
  http_listen_port: 3200
  grpc_listen_port: 9095
distributor:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
ingester:
  max_block_duration: 5m
compactor:
  compaction:
    block_retention: 1h
storage:
  trace:
    backend: local
    local:
      path: /tmp/tempo/traces
    wal:
      path: /tmp/tempo/wal
EOF
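One setting worth calling out: block_retention: 1h keeps traces for only one hour, which is convenient for a first test but likely too short otherwise. Raising it only requires editing the compactor block, for example for two days of retention:
compactor:
  compaction:
    block_retention: 48h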
Step 5: Configure OpenTelemetry Collector
Set up the OpenTelemetry Collector as the central telemetry data processing hub:
cat > configs/otel-collector/otel-collector.yml << 'EOF'
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']
processors:
  batch:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
  loki:
    endpoint: http://loki:3100/loki/api/v1/push
  otlp/tempo:
    endpoint: http://tempo:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/tempo]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [prometheus]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [loki]
EOF
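Once the stack is running (Step 7), applications instrumented with OpenTelemetry SDKs can be pointed at the collector through the standard OTLP environment variables. The service name below is only a placeholder, and your-vps-ip should be replaced with your server's address:
export OTEL_EXPORTER_OTLP_ENDPOINT=http://your-vps-ip:4318   # OTLP over HTTP (port 4318)
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_SERVICE_NAME=my-app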
Step 6: Create Docker Compose Configuration
Deploy the complete observability stack with Docker Compose:
cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  # OpenTelemetry Collector
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.91.0
    container_name: otel-collector
    command: ["--config=/etc/otelcol-contrib/otel-collector.yml"]
    volumes:
      - ./configs/otel-collector/otel-collector.yml:/etc/otelcol-contrib/otel-collector.yml
    ports:
      - "4317:4317"  # OTLP gRPC receiver
      - "4318:4318"  # OTLP HTTP receiver
      - "8888:8888"  # Prometheus metrics
      - "8889:8889"  # Prometheus exporter
    depends_on:
      - tempo
      - loki

  # Prometheus for metrics
  prometheus:
    image: prom/prometheus:v2.48.0
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./configs/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'

  # Loki for logs
  loki:
    image: grafana/loki:2.9.2
    container_name: loki
    ports:
      - "3100:3100"
    volumes:
      - ./configs/loki/loki.yml:/etc/loki/local-config.yaml
      - loki_data:/tmp/loki
    command: -config.file=/etc/loki/local-config.yaml

  # Tempo for traces
  tempo:
    image: grafana/tempo:2.3.0
    container_name: tempo
    ports:
      - "3200:3200"
      # Port 4317 is not published here: it is already bound on the host by
      # otel-collector, which forwards traces to Tempo over the Compose network.
    volumes:
      - ./configs/tempo/tempo.yml:/etc/tempo/tempo.yml
      - tempo_data:/tmp/tempo
    command: ["-config.file=/etc/tempo/tempo.yml"]

  # Grafana for visualization
  grafana:
    image: grafana/grafana:10.2.0
    container_name: grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin123
    volumes:
      - grafana_data:/var/lib/grafana
    depends_on:
      - prometheus
      - loki
      - tempo

volumes:
  prometheus_data:
  loki_data:
  tempo_data:
  grafana_data:
EOF
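Before launching anything, you can optionally let Compose validate the file; the command below prints nothing when the configuration parses cleanly and reports the problem otherwise:
docker compose config --quiet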
Step 7: Deploy the Observability Stack
Launch all services with a single command:
docker compose up -d
Verify all containers are running:
docker compose ps
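For a deeper check than container status, each component exposes a readiness or health endpoint you can query from the server:
curl -s http://localhost:9090/-/ready      # Prometheus
curl -s http://localhost:3100/ready        # Loki
curl -s http://localhost:3200/ready        # Tempo
curl -s http://localhost:3000/api/health   # Grafana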
Step 8: Configure Grafana Data Sources
Access Grafana at http://your-vps-ip:3000, log in with the admin user and the admin123 password set in docker-compose.yml, then add these data sources under Connections → Data sources (or provision them automatically, as shown after the list):
- Prometheus: http://prometheus:9090
- Loki: http://loki:3100
- Tempo: http://tempo:3200
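If you prefer configuration as code, Grafana can provision the same data sources from a YAML file at startup. Here is a minimal sketch, assuming you also add the volume mount ./configs/grafana:/etc/grafana/provisioning/datasources to the grafana service in docker-compose.yml:
cat > configs/grafana/datasources.yml << 'EOF'
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
  - name: Tempo
    type: tempo
    access: proxy
    url: http://tempo:3200
EOF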
Best Practices
Performance Optimization:
- Configure retention policies to manage storage usage effectively (see the example after this list)
- Use appropriate scrape intervals based on your monitoring requirements
- Implement log filtering to reduce noise in your observability pipeline
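As a concrete example of the retention point above, Prometheus retention is controlled by startup flags you can append to its command list in docker-compose.yml; the 15-day and 10GB limits below are illustrative values, not recommendations:
      - '--storage.tsdb.retention.time=15d'
      - '--storage.tsdb.retention.size=10GB'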
Security Considerations:
- Change default Grafana credentials immediately
- Implement network segmentation using Docker networks
- Consider enabling authentication for production deployments
- Use CrowdSec protection for external-facing services
Monitoring Best Practices:
- Set up alerting rules in Prometheus for critical metrics
- Create comprehensive dashboards covering infrastructure and application metrics
- Implement distributed tracing in your applications using OpenTelemetry SDKs
Conclusion
You’ve successfully deployed a production-ready observability stack featuring metrics collection with Prometheus, log aggregation through Loki, distributed tracing via Tempo, and unified visualization in Grafana. This comprehensive monitoring solution provides the foundation for maintaining application performance, troubleshooting issues, and ensuring system reliability.
The OpenTelemetry Collector serves as your telemetry data processing hub, enabling seamless integration with your existing applications. For high-performance deployment of observability stacks, consider our Singapore VPS solutions with dedicated resources and enterprise-grade storage.
To further enhance your infrastructure, explore our guides on deploying production-ready Kubernetes clusters and implementing automated backup strategies for your monitoring data.