Prometheus for system metrics (cross-platform)
A practical foundation for scraping targets and validating stack health across Windows, Linux, macOS, and Docker.
Deploy Prometheus and validate operational metrics with a reproducible workflow, independent of your operating system.
Created: April 5, 2026
Published: April 5, 2026
Docker
This guide runs as a Docker-based lab. Use the Compose and configuration snippets in the article as the primary source of truth, and do not assume native OS support unless it is explicitly documented.
Minimal lab architecture
- One Prometheus container with a config file mounted from disk.
- An initial target. This can be your app or an exporter such as node-exporter.
- Basic persistence so you do not lose series on every restart.
This guide is not about high availability. The goal is a reliable local foundation for querying metrics and understanding the end-to-end flow.
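The whole lab fits in two files. A layout like the following, matching the paths used in the snippets below, is enough:

```
.
├── docker-compose.yml
└── prometheus/
    └── prometheus.yml
```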
Create the stack with Docker Compose
```yaml
services:
  prometheus:
    image: prom/prometheus:v2.54.1
    container_name: prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.path=/prometheus
      - --web.enable-lifecycle
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus

  node-exporter:
    image: prom/node-exporter:v1.8.2
    container_name: node-exporter
    ports:
      - "9100:9100"

volumes:
  prometheus_data:
```

Prometheus scrapes a simple exporter first so you can validate the pipeline before pointing it at your application.
Define the scrape file
Create a `prometheus/` directory next to `docker-compose.yml`.
Use a short `scrape_interval` for the lab and add the exporter target so ingestion is easy to validate.
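To see what a 15-second interval implies for storage, a back-of-envelope estimate helps. This is plain arithmetic; the series count below is a hypothetical placeholder, since real node-exporter counts vary by host:

```python
# Rough ingestion estimate for a 15s scrape interval.
scrape_interval_s = 15
seconds_per_day = 24 * 60 * 60

samples_per_series_per_day = seconds_per_day // scrape_interval_s
print(samples_per_series_per_day)  # 5760

# With a hypothetical ~1500 series from node-exporter:
series = 1500
print(samples_per_series_per_day * series)  # 8640000 samples/day
```

Numbers like these are why production setups often use longer intervals; for a local lab, 15s keeps feedback fast.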
```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["prometheus:9090"]

  - job_name: "node"
    static_configs:
      - targets: ["node-exporter:9100"]
```

Start the stack
Run `docker compose up -d` and wait until both Prometheus and node-exporter are running.

```shell
docker compose up -d
docker compose ps
```

Open `http://localhost:9090/targets` and verify both jobs show as UP.

Validate that metrics are queryable
Do not call the deployment finished until you run specific queries. The Targets page tells you scraping works; the Graph page proves you already have useful data.
- Query `up` to list active targets.
- Query `rate(node_cpu_seconds_total[5m])` to validate exporter series.
- Check `prometheus_tsdb_head_series` to understand how much data you are already storing.
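To build intuition for what `rate()` returns, here is a simplified sketch of the underlying calculation. The sample values are invented, and real Prometheus additionally handles counter resets and extrapolates to the window boundaries:

```python
# Simplified sketch of rate() over a counter inside a 5m window.
# (timestamp_seconds, counter_value) pairs; values are hypothetical.
samples = [
    (0, 1000.0),
    (60, 1240.0),
    (120, 1480.0),
    (180, 1720.0),
    (240, 1960.0),
]

def simple_rate(samples):
    """Per-second increase between the first and last sample in the window."""
    (t0, v0) = samples[0]
    (t1, v1) = samples[-1]
    return (v1 - v0) / (t1 - t0)

print(simple_rate(samples))  # 4.0 increments per second
```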
Checklist before moving beyond the lab
- ✓ Prometheus starts with a valid config and no parse errors.
- ✓ Targets stay UP consistently.
- ✓ Persistence is mounted so series survive restarts.
- ○ You already have a plan for auth or secrets on sensitive endpoints.
Add your first real service
Once the lab works, replace the initial exporter or add a new job for your application. If you use OpenTelemetry Collector, Prometheus can scrape the collector endpoint or service-specific exporters.
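As a sketch, a new job for your own service could look like the fragment below. The job name, service name, and port are placeholders to replace with your application's actual metrics endpoint:

```yaml
scrape_configs:
  # ...existing prometheus and node jobs...
  - job_name: "my-app"            # placeholder job name
    metrics_path: /metrics        # the default path, shown for clarity
    static_configs:
      - targets: ["my-app:8080"]  # Compose service name + metrics port
```

The target must be reachable on the Compose network, so use the service name rather than a host address.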
Do I need service discovery for a first test?
No. Start with `static_configs` until the model is clear. Dynamic discovery becomes useful once the environment is less manual.
What if every target is DOWN?
Check networking and DNS names inside Docker Compose first. In local labs the most common mistake is using `localhost` instead of the service name between containers.
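A minimal illustration of that mistake, using the node-exporter job from this lab:

```yaml
scrape_configs:
  - job_name: "node"
    static_configs:
      # Wrong: inside the Prometheus container, localhost is Prometheus itself
      # - targets: ["localhost:9100"]
      # Right: the Compose service name resolves via the internal DNS
      - targets: ["node-exporter:9100"]
```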
The natural next step is wiring this Prometheus instance into Grafana and building a minimal host or application health dashboard.