
OpenSearch for centralized logs (cross-platform)

A lab for indexing, searching, and exploring events across Windows, Linux, macOS, and Docker.

Configure OpenSearch and Dashboards, load initial documents, and validate operational log-search workflows on any platform.

Created: April 6, 2026

Published: April 6, 2026

Estimated time: 18 min
Level: Beginner
Before you start: Docker and Docker Compose installed.
Platforms: Docker

Docker

This guide runs as a Docker-based lab. Use the Compose and configuration snippets in the article as the primary source of truth, and do not assume native OS support unless it is explicitly documented.

  • Docker and Docker Compose installed.
  • At least 4 GB of free RAM for the local stack.
  • Ports 9200 and 5601 free for OpenSearch and Dashboards.
Lab bootstrap
docker compose config

Stack start
docker compose up -d --build

Validation
docker compose ps
OpenSearch is a solid option when you want to store and query logs without depending on a managed service from day one. In local environments, the main goal is lower friction: single-node mode, minimal lab security, and a UI where searches can be validated quickly.

What this guide sets up

  • A single OpenSearch node with bounded memory for development.
  • OpenSearch Dashboards to inspect indexes and queries.
  • An initial index with JSON events to validate searches and filters.
Lab scope

This deployment is for learning and pipeline validation. Do not reuse it as-is in production without reviewing security, persistence, and sizing.

Base stack with Docker Compose

docker-compose.yml
services:
  opensearch:
    image: opensearchproject/opensearch:2.17.1
    container_name: opensearch
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - plugins.security.disabled=true
      - OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "9200:9200"
    volumes:
      - opensearch_data:/usr/share/opensearch/data

  dashboards:
    image: opensearchproject/opensearch-dashboards:2.17.1
    container_name: opensearch-dashboards
    environment:
      - OPENSEARCH_HOSTS=["http://opensearch:9200"]
      # Required when the OpenSearch node runs with plugins.security.disabled=true;
      # otherwise Dashboards tries to authenticate against a security plugin that is off.
      - DISABLE_SECURITY_DASHBOARDS_PLUGIN=true
    ports:
      - "5601:5601"
    depends_on:
      - opensearch

volumes:
  opensearch_data:

Single-node mode and disabled security reduce friction for a reproducible local lab.

Step 1: Start and verify the node

Run `docker compose up -d` and wait until OpenSearch answers on port 9200.

Use `curl http://localhost:9200` to confirm the node is reachable; the root endpoint returns the node name and version banner, while cluster health comes from the `_cluster/health` API.

command
docker compose up -d
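Once the container is up, the cluster health API is the quickest way to confirm the node reached a usable state. A minimal sketch, assuming the stack from the Compose file above is running on localhost:

```shell
# Wait until the HTTP API answers, then ask for cluster health.
# On a single-node lab, expect "green", or "yellow" if an index has replicas
# that cannot be allocated on a lone node.
until curl -s "http://localhost:9200" >/dev/null; do
  sleep 2
done

curl -s "http://localhost:9200/_cluster/health?pretty"
```

A `status` of `red` at this point usually means the node is still bootstrapping or a data volume from a previous run is corrupt.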

Load initial events

Before integrating a real shipper, index a few documents manually. This validates mappings, timestamps, and searches before adding Fluent Bit, Vector, or Beats to the flow.

seed-logs.sh
curl -X PUT "http://localhost:9200/app-logs"

curl -X POST "http://localhost:9200/app-logs/_doc" \
  -H "Content-Type: application/json" \
  -d '{
    "@timestamp": "2026-04-06T08:30:00Z",
    "service.name": "checkout-api",
    "log.level": "error",
    "message": "payment provider timeout",
    "trace.id": "9f4a62"
  }'
  • Create an explicit index to keep bootstrap and real data separate.
  • Include `@timestamp`, `service.name`, `log.level`, and `trace.id` so correlation stays possible later.
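Before opening Dashboards, you can validate the seeded document directly against the search API. The snippet below is a sketch that reuses the index and field names from the seed script; with dynamic mapping, string fields get a `.keyword` subfield suitable for exact `term` filters.

```shell
# Force a refresh so the freshly indexed document is visible to search.
curl -s -X POST "http://localhost:9200/app-logs/_refresh"

# Query for error-level events from the checkout-api service.
curl -s -X GET "http://localhost:9200/app-logs/_search" \
  -H "Content-Type: application/json" \
  -d '{
    "query": {
      "bool": {
        "filter": [
          { "term": { "service.name.keyword": "checkout-api" } },
          { "term": { "log.level.keyword": "error" } }
        ]
      }
    }
  }'
```

If `hits.total.value` comes back as 0, check the index name and whether the refresh ran before the search.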

Explore the logs in Dashboards

Step 2: Configure the data view

Open `http://localhost:5601`, create a data view for `app-logs*`, and select `@timestamp` as the time field.

Run a search for `service.name: checkout-api` and then filter further with `log.level: error`.

Minimum checklist for a healthy log pipeline

  • All documents carry a consistent timestamp.
  • Service names are stable and repeatable.
  • At least one field exists for trace correlation.
  • A retention and rollover policy is already defined.
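The retention item on the checklist can be covered in OpenSearch with the Index State Management (ISM) plugin. The policy below is an illustrative sketch: the policy name, seven-day age, and index pattern are assumptions for this lab, not recommended production values.

```shell
# Create an ISM policy that deletes app-logs indexes after seven days.
# ism_template auto-attaches the policy to any new index matching app-logs*.
curl -s -X PUT "http://localhost:9200/_plugins/_ism/policies/app-logs-retention" \
  -H "Content-Type: application/json" \
  -d '{
    "policy": {
      "description": "Lab retention: delete log indexes after 7 days (illustrative)",
      "default_state": "hot",
      "states": [
        {
          "name": "hot",
          "actions": [],
          "transitions": [
            { "state_name": "delete", "conditions": { "min_index_age": "7d" } }
          ]
        },
        {
          "name": "delete",
          "actions": [ { "delete": {} } ],
          "transitions": []
        }
      ],
      "ism_template": [
        { "index_patterns": ["app-logs*"], "priority": 100 }
      ]
    }
  }'
```

Policies attach to indexes created after the policy exists; for the already-created `app-logs` index you would attach it explicitly via the ISM API.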
Why use OpenSearch Dashboards in the guide?

Because it gives immediate feedback on whether the index and fields make sense before you introduce more complex ingestion pipelines.

What if OpenSearch takes too long to start?

Check available RAM and memory limits first. In local labs the most common issue is insufficient memory or reusing broken volumes from previous tests.
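These checks map directly to Docker commands. A quick triage sequence for a node that will not come up:

```shell
# Inspect startup logs for memory-lock failures or bootstrap-check errors.
docker compose logs --tail=100 opensearch

# Snapshot current memory and CPU usage of the lab containers.
docker stats --no-stream

# Last resort: stop the stack and drop its volumes to discard a broken data dir.
docker compose down -v
```

Note that `docker compose down -v` deletes the `opensearch_data` volume, so any seeded documents are lost.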

Recommended next step

Once the index is validated, connect OpenSearch as a datasource in Grafana or add a real shipper from your application containers.
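For the Grafana route, a datasource can be provisioned from a file instead of clicking through the UI. This is a sketch under the assumption that the Grafana OpenSearch plugin (`grafana-opensearch-datasource`) is installed and Grafana runs on the same Docker network as the lab; verify field names against the plugin's provisioning documentation before relying on them.

```yaml
# provisioning/datasources/opensearch.yml (hypothetical path)
apiVersion: 1

datasources:
  - name: OpenSearch (lab)
    type: grafana-opensearch-datasource
    access: proxy
    url: http://opensearch:9200
    jsonData:
      database: "app-logs*"      # index pattern from this guide
      timeField: "@timestamp"    # time field used when seeding events
      flavor: "opensearch"
      version: "2.17.1"
```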
