OpenSearch for centralized logs (cross-platform)
A lab for indexing, searching, and exploring events across Windows, Linux, macOS, and Docker.
Configure OpenSearch and Dashboards, load initial documents, and validate operational log-search workflows on any platform.
Created: April 6, 2026
Published: April 6, 2026
Docker
This guide runs as a Docker-based lab. Use the Compose and configuration snippets in the article as the primary source of truth, and do not assume native OS support unless it is explicitly documented.
What this guide brings up
- A single OpenSearch node with bounded memory for development.
- OpenSearch Dashboards to inspect indexes and queries.
- An initial index with JSON events to validate searches and filters.
This deployment is for learning and pipeline validation. Do not reuse it as-is in production without reviewing security, persistence, and sizing.
Base stack with Docker Compose
```yaml
services:
  opensearch:
    image: opensearchproject/opensearch:2.17.1
    container_name: opensearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - plugins.security.disabled=true
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
    volumes:
      - opensearch_data:/usr/share/opensearch/data
  dashboards:
    image: opensearchproject/opensearch-dashboards:2.17.1
    container_name: opensearch-dashboards
    environment:
      - OPENSEARCH_HOSTS=["http://opensearch:9200"]
      # The Dashboards image ships the security plugin; disable it to match
      # the node, otherwise Dashboards fails to connect over plain HTTP.
      - DISABLE_SECURITY_DASHBOARDS_PLUGIN=true
    ports:
      - "5601:5601"
    depends_on:
      - opensearch

volumes:
  opensearch_data:
```

Single-node mode and disabled security reduce friction for a reproducible local lab.
Start and verify the node
Run `docker compose up -d` and wait until OpenSearch answers on port 9200.
Use `curl http://localhost:9200` to confirm the cluster is reachable and reports a healthy single-node status.
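Rather than polling by hand, the readiness check can be scripted. This is a sketch assuming the lab defaults from the Compose file above (plain HTTP, port 9200, security disabled); it uses the cluster health API's `wait_for_status` parameter, which blocks until the node reaches at least yellow status or the timeout expires:

```shell
#!/usr/bin/env sh
# Wait up to ~60s for the single node to report yellow or green health.
for i in $(seq 1 12); do
  if curl -fsS "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=5s" >/dev/null 2>&1; then
    echo "OpenSearch is ready"
    exit 0
  fi
  sleep 5
done
echo "OpenSearch did not become ready in time" >&2
exit 1
```

A single-node cluster normally reports yellow rather than green, because replica shards cannot be allocated anywhere; for this lab, yellow is healthy.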
Load initial events
Before integrating a real shipper, index a few documents manually. This validates mappings, timestamps, and searches before adding Fluent Bit, Vector, or Beats to the flow.
```shell
# Create the index explicitly, then add one test document.
curl -X PUT "http://localhost:9200/app-logs"

curl -X POST "http://localhost:9200/app-logs/_doc" \
  -H "Content-Type: application/json" \
  -d '{
    "@timestamp": "2026-04-06T08:30:00Z",
    "service.name": "checkout-api",
    "log.level": "error",
    "message": "payment provider timeout",
    "trace.id": "9f4a62"
  }'
```

- Create an explicit index to keep bootstrap and real data separate.
- Include `@timestamp`, `service.name`, `log.level`, and `trace.id` so correlation stays possible later.
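An explicit mapping makes those field types deterministic before any shipper writes to the index. A minimal sketch, assuming the same field names as the sample document and the `app-logs` index name used above:

```shell
# Create app-logs with an explicit mapping: @timestamp as a date,
# correlation fields as keywords for exact-match filtering.
curl -X PUT "http://localhost:9200/app-logs" \
  -H "Content-Type: application/json" \
  -d '{
    "mappings": {
      "properties": {
        "@timestamp":   { "type": "date" },
        "service.name": { "type": "keyword" },
        "log.level":    { "type": "keyword" },
        "message":      { "type": "text" },
        "trace.id":     { "type": "keyword" }
      }
    }
  }'
```

If `app-logs` already exists from the earlier step, delete it first with `curl -X DELETE "http://localhost:9200/app-logs"`, since the type of an existing field cannot be changed in place.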
Explore the logs in Dashboards
Configure the data view
Open `http://localhost:5601`, create a data view for `app-logs*`, and select `@timestamp` as the time field.
Run a search for `service.name: checkout-api` and then filter further with `log.level: error`.
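The same search can be reproduced outside Dashboards with the `_search` API, which is handy for scripted checks. A sketch assuming the field names from the sample document:

```shell
# Find error-level events from checkout-api, newest first.
curl -s -X POST "http://localhost:9200/app-logs/_search" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "bool": {
        "filter": [
          { "term": { "service.name": "checkout-api" } },
          { "term": { "log.level": "error" } }
        ]
      }
    },
    "sort": [ { "@timestamp": "desc" } ]
  }'
```

Note that `term` queries expect exact values: if these fields were dynamically mapped as `text` rather than `keyword`, use `service.name.keyword` and `log.level.keyword` in the filter clauses instead.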
Minimum checklist for a healthy log pipeline
- ✓ All documents carry a consistent timestamp.
- ✓ Service names are stable and repeatable.
- ✓ At least one field exists for trace correlation.
- ○ A retention and rollover policy is already defined.
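The last checklist item can be covered with OpenSearch's Index State Management (ISM) plugin. A minimal sketch, assuming the ISM policy endpoint and with illustrative thresholds (the `min_index_age` values are placeholders to tune):

```shell
# Roll indexes over daily and delete them after 7 days.
curl -X PUT "http://localhost:9200/_plugins/_ism/policies/app-logs-retention" \
  -H "Content-Type: application/json" \
  -d '{
    "policy": {
      "description": "Rollover daily, delete after 7 days",
      "default_state": "hot",
      "states": [
        {
          "name": "hot",
          "actions": [ { "rollover": { "min_index_age": "1d" } } ],
          "transitions": [
            { "state_name": "delete", "conditions": { "min_index_age": "7d" } }
          ]
        },
        {
          "name": "delete",
          "actions": [ { "delete": {} } ],
          "transitions": []
        }
      ],
      "ism_template": [
        { "index_patterns": ["app-logs*"], "priority": 100 }
      ]
    }
  }'
```

Rollover additionally requires a write alias and a matching index template; the ISM documentation covers that full setup, which is out of scope for this lab.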
Why use OpenSearch Dashboards in the guide?
Because it gives immediate feedback on whether the index and fields make sense before you introduce more complex ingestion pipelines.
What if OpenSearch takes too long to start?
Check available RAM and memory limits first. In local labs the most common issue is insufficient memory or reusing broken volumes from previous tests.
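These checks usually localize the problem quickly. A sketch of diagnostic commands, assuming the container names from the Compose file above:

```shell
# Tail the node's logs for OOM kills or bootstrap errors.
docker compose logs --tail=50 opensearch

# Snapshot memory and CPU usage of the lab containers.
docker stats --no-stream opensearch opensearch-dashboards

# If a previous run left a broken data volume, reset the lab from scratch.
docker compose down -v && docker compose up -d
```

Note that `docker compose down -v` deletes the `opensearch_data` volume along with any indexed documents, so only use it when you intend to start clean.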
Once the index is validated, connect OpenSearch as a datasource in Grafana or add a real shipper from your application containers.