Thank You, Grafana: How Beyla Helped Us, and How You Can Use It Too!

Severin Neumann

October 7, 2025

This post is the first in what we hope will become a series where we pause to say thanks to the projects and communities that move us forward. Today it’s Beyla’s turn. By open-sourcing eBPF-based auto-instrumentation and then donating it to become OpenTelemetry eBPF Instrumentation (OBI), Grafana Labs didn’t just release code; they lowered the barrier to entry for observability.

That matters to us because Causely distills your observability data into insights, reasoning over cause and effect to put teams in control. The faster you get baseline, trustworthy signals, the faster we can do that work.

Why Beyla Helps 

Teams come to us in two states. Some already have a healthy OpenTelemetry footprint; for them, we plug Causely in and get to causal reasoning quickly. Others are still early: traces are patchy, metrics are inconsistent, logs are everywhere. Beyla makes that second state less painful. You turn it on and a picture forms: service boundaries, dependency maps, request paths. Suddenly they’re no longer guessing. Beyla provides a consistent baseline of telemetry they can trust, and our engine can begin attributing symptoms to specific causes with confidence.

The effect shows up in the first week: fewer blind spots, clearer causal analytics, and the ability to move from “seeing an error” to “knowing why it happened and what to change.” Beyla helps create the raw material; Causely turns it into decisions. 

What the OBI Donation Signals 

Donating Beyla to the OTel ecosystem signals a commitment to standards and longevity. It’s a move toward shared building blocks rather than one-off integrations. We value that because our promise — autonomous reliability without drama — depends on predictable inputs and open interfaces. The more OTel wins, the healthier the whole stack becomes. 

How This Looks With Causely 

Our agents roll out Beyla by default. Setup is minimal: deploy the agents, let Beyla establish a baseline of traces and metrics, and causal reasoning begins immediately. Beyla provides out-of-the-box visibility; Causely layers causal inference, risk scoring, and proposed actions.

You get: 

  • A consistent baseline of traces/metrics powered by eBPF auto-instrumentation. 
  • High-confidence root cause detection that doesn’t drown you in correlations. 
  • Guardrails that respect zero-trust boundaries — you keep your data; Causely reasons over signals, not secrets. 

It adds up to a simple promise:

Faster time to control, not just faster dashboards.  

Try Beyla on Your Own 

If you’d like to see what we’re talking about, the easiest path is to stand up a tiny environment locally and watch Beyla fill in the picture. The steps below take you from a single service to a small conversation between services, and then into traces and metrics you can actually explore. 

Single Service + Beyla (Docker) 

Pick a simple HTTP service; the example below instruments a demo app listening on port 5678 and prints spans to the console, so no OTLP endpoint is required to get started.

# Terminal 1: start a simple Go echo service on port 5678
docker run --rm --name demo -p 5678:5678 golang:1.23 go run github.com/hashicorp/http-echo@latest -text=hello

# Terminal 2: run Beyla next to it, sharing the demo container's PID namespace (console output only)
docker run --rm \
  --name beyla \
  --privileged \
  --pid="container:demo" \
  -e BEYLA_OPEN_PORT=5678 \
  -e BEYLA_TRACE_PRINTER=text \
  grafana/beyla:latest

Open the app in your browser (http://localhost:5678), click around to generate traffic, and watch spans print in your Beyla terminal. Each line is a request Beyla reconstructed purely from eBPF events, with no code changes and no restart of the demo app.
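If you’d rather script the traffic than click around, a short loop gives Beyla a steady stream of requests to print. A minimal sketch, assuming the demo service from Terminal 1 is still listening on port 5678:

# Fire 50 quick requests at the echo service
for i in $(seq 1 50); do
  curl -s http://localhost:5678/ > /dev/null
done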


If you want to send data to an OpenTelemetry Collector, Tempo, or Jaeger, add the following to the Beyla container, pointing the endpoint at wherever your receiver is reachable from inside the container:

  -e OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf \ 
  -e OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 
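If you don’t have a collector handy, Jaeger v2 ingests OTLP natively and makes a convenient local target. A sketch of a throwaway instance (the image tag matches the Compose example below):

# Start Jaeger v2 with its UI (16686) and OTLP/HTTP ingest (4318) exposed
docker run --rm -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  cr.jaegertracing.io/jaegertracing/jaeger:2.10.0

One caveat: each container has its own network namespace, so http://localhost:4318 inside the Beyla container will not reach Jaeger. Put both containers on a shared Docker network, or use the Compose setup below, where service names resolve directly.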

A Small Web of Services (Docker Compose) 

To make this concrete, the snippet below starts two services that talk to each other (the frontend here is a small Caddy reverse proxy that forwards every request to the backend), runs Beyla against the frontend, and wires up local backends: Jaeger receiving Beyla’s traces over OTLP, and Prometheus scraping Beyla’s metrics.

services:
  # Frontend: a Caddy reverse proxy, so every incoming request produces a call to the backend
  frontend:
    image: caddy:latest
    command: ["caddy", "reverse-proxy", "--from", ":5678", "--to", "backend:9090"]
    ports:
      - "5678:5678"

  # Backend: a simple echo server the frontend forwards to
  backend:
    image: ealen/echo-server:latest
    environment:
      - PORT=9090
    expose:
      - "9090"

  # Beyla: shares the frontend's PID namespace and instruments port 5678 via eBPF
  beyla:
    image: grafana/beyla:latest
    privileged: true
    pid: "service:frontend"
    environment:
      - BEYLA_OPEN_PORT=5678
      - OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318
      - BEYLA_PROMETHEUS_PORT=8999
      - BEYLA_TRACE_PRINTER=disabled  # traces go to Jaeger, so console printing stays off
    depends_on:
      - frontend
      - jaeger

  jaeger:
    image: cr.jaegertracing.io/jaegertracing/jaeger:2.10.0
    ports:
      - "16686:16686"  # Jaeger UI
      - "4318:4318"    # OTLP/HTTP ingest (native in v2)

  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    ports:
      - "9090:9090"

Create the Prometheus config next to your compose file: 

prometheus.yml 

global: 
  scrape_interval: 5s 
scrape_configs: 
  - job_name: "beyla" 
    static_configs: 
      - targets: ["beyla:8999"] 

Bring it up: 

docker compose up -d 
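Before generating traffic, a quick sanity check saves debugging time later:

# All five services should be listed as running
docker compose ps
# If traces never show up, Beyla's log is the first place to look
docker compose logs beyla | tail -n 20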

Hit the frontend to generate traffic: 

curl -s http://localhost:5678/ | head -n1

When the stack is up, open the Jaeger UI at http://localhost:16686 and search for the frontend service to browse traces (Beyla derives the service name from the instrumented executable, so look for the proxy process if “frontend” doesn’t appear). For metrics, visit Prometheus at http://localhost:9090 and try queries like http_server_request_duration_seconds_count or http_client_request_duration_seconds_count to see call patterns emerge.
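You can also confirm from the command line that traces arrived before opening the UI; the Jaeger query service answers the same API the UI uses:

# List the service names Jaeger has received traces for
curl -s http://localhost:16686/api/services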

What to Look for 

Generate a little pressure (for example, hey -z 30s http://localhost:5678/). In Jaeger, follow a slow trace end-to-end and note where p95 shifts between the frontend and the backend call. In Prometheus, line up the client and server RED metrics to see where latency actually accumulates; it’s a simple way to separate symptoms from causes.
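To make that comparison concrete, the two queries below plot server-side versus client-side p95 latency from Beyla’s duration histograms; where the curves diverge is where time is being spent. A sketch, assuming Beyla’s default Prometheus metric names:

# p95 of requests as served by the frontend
histogram_quantile(0.95, sum by (le) (rate(http_server_request_duration_seconds_bucket[1m])))

# p95 of the frontend's outbound calls to the backend
histogram_quantile(0.95, sum by (le) (rate(http_client_request_duration_seconds_bucket[1m])))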

Where to Read More 

If you want the exact steps, the canonical sources are the Beyla documentation and the OBI pages on the OpenTelemetry website.

Our own docs show how Beyla and the Causely agents fit together in a few minutes of setup. 

Closing the Loop 

Beyla is the right kind of infrastructure: minimal friction, maximal signal, and donated to the place where open standards live.

If you’re ready to move from “seeing it” to “controlling it,” we’d be happy to show how Causely turns that signal into confident action.

Ready when you are. 

Ready to Move from Reactive to Autonomous?

See why engineering teams trust Causely to deliver reliable digital experiences without the firefighting.