
I am running Grafana Tempo with the example docker-compose local configuration from https://github.com/grafana/tempo/tree/main/example/docker-compose/local. To keep the trace data clean, I disabled the k6-tracing mock container. I also changed the log level to debug and enabled log_received_spans and log_discarded_spans in the distributor config to confirm that the spans are being ingested.
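
For reference, the overrides I added on top of the example tempo.yaml look roughly like this (key names as I understand them from the Tempo configuration docs; everything else is unchanged from the example):

server:
  log_level: debug

distributor:
  log_received_spans:
    enabled: true
  log_discarded_spans:
    enabled: true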

Whenever I create a custom span manually with OpenTelemetry in Python, the span is ingested according to the logs, but I then fail to query it from the bundled Grafana.

from datetime import datetime, timedelta
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

resource = Resource(attributes={"service.name": "Manual Environment Tracer"})
traceProvider = TracerProvider(resource=resource)

# Export to the local Tempo OTLP/HTTP receiver and to the console for comparison.
processor = BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
traceProvider.add_span_processor(processor)
traceProvider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(traceProvider)
tracer = trace.get_tracer("simple_tracer")

# Backdate the span: start 15 minutes ago, event 10 minutes ago, end 5 minutes ago
# (all timestamps are nanoseconds since the epoch).
span = tracer.start_span(
    f"mock-span-created-at-{datetime.now().isoformat()}",
    start_time=int((datetime.now() - timedelta(minutes=15)).astimezone().timestamp() * 1e9),
)
span.add_event(
    "mock-event",
    timestamp=int((datetime.now() - timedelta(minutes=10)).astimezone().timestamp() * 1e9),
)
span.set_status(trace.StatusCode.OK)
span.end(end_time=int((datetime.now() - timedelta(minutes=5)).astimezone().timestamp() * 1e9))
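
A note on batching: BatchSpanProcessor exports asynchronously, so the script has to stay alive long enough for the batch to go out. Flushing explicitly before exit rules that out (force_flush and shutdown are standard SDK TracerProvider methods):

# Flush both batch processors before the process exits so the export
# definitely happens (not strictly needed if the script keeps running).
traceProvider.force_flush()
traceProvider.shutdown()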

The console exporter output confirms the span is created correctly:

{
    "name": "mock-span-created-at-2025-02-25T15:46:50.076520",
    "context": {
        "trace_id": "0x0196f88425f031e0bcc5864972373416",
        "span_id": "0x87fe780fb4fee2b1",
        "trace_state": "[]"
    },
    "kind": "SpanKind.INTERNAL",
    "parent_id": null,
    "start_time": "2025-02-25T14:31:50.076534Z",
    "end_time": "2025-02-25T14:41:50.076724Z",
    "status": {
        "status_code": "OK"
    },
    "attributes": {},
    "events": [
        {
            "name": "mock-event",
            "timestamp": "2025-02-25T14:36:50.076675Z",
            "attributes": {}
        }
    ],
    "links": [],
    "resource": {
        "attributes": {
            "service.name": "Manual Environment Tracer"
        },
        "schema_url": ""
    }
}

And it is ingested correctly according to the distributor logs:

level=info ts=2025-02-25T14:46:51.083035496Z caller=distributor.go:1013 msg=received spanid=87fe780fb4fee2b1 traceid=0196f88425f031e0bcc5864972373416
level=debug ts=2025-02-25T14:46:51.084111933Z caller=server.go:1203 method=/tempopb.Pusher/PushBytesV2 duration=21.859µs msg="gRPC (success)"

But when I query in Grafana, the result is exactly 0 traces. The query covers a 2-day range with no other restrictions. Sample URL: http://localhost:3000/a/grafana-exploretraces-app/explore?primarySignal=all_spans&from=now-2d&to=now&timezone=browser&var-ds=tempo&var-filters=&var-metric=rate&var-groupBy=resource.service.name&var-latencyThreshold=&var-partialLatencyThreshold=&actionView=traceList&selection=%7B%22query%22:%22status%20%3D%20error%22,%22type%22:%22auto%22%7D

Sometimes refreshing the query makes these traces appear and then disappear again; I am not sure whether this indicates an issue in the query engine.
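
To take the Grafana app out of the picture, the same check can be run against Tempo's HTTP API directly. A minimal sketch (endpoint paths as I understand them from the Tempo HTTP API docs; the trace ID is the one from the console output above):

import json
import time
import urllib.error
import urllib.parse
import urllib.request

TEMPO = "http://localhost:3200"  # Tempo's HTTP port in the docker-compose example

# 1. Look the trace up by ID (taken from the console exporter output above).
trace_id = "0196f88425f031e0bcc5864972373416"
try:
    with urllib.request.urlopen(f"{TEMPO}/api/traces/{trace_id}") as resp:
        print("lookup by id:", resp.status)
except urllib.error.HTTPError as err:
    print("lookup by id failed:", err.code)

# 2. TraceQL search over the last two days, matching all spans.
end = int(time.time())
start = end - 2 * 24 * 3600
q = urllib.parse.quote("{}")  # TraceQL query that matches everything
with urllib.request.urlopen(f"{TEMPO}/api/search?q={q}&start={start}&end={end}") as resp:
    print("search hits:", json.loads(resp.read()).get("traces"))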

By contrast, when I create the span as the current span via OpenTelemetry, as below, the trace appears in queries immediately:

with tracer.start_as_current_span("instrumented-span") as span:
    # do some work that 'span' will track
    print("doing some work...")

These are the relevant pip package versions:

opentelemetry-api==1.29.0
opentelemetry-exporter-otlp==1.29.0
opentelemetry-exporter-otlp-proto-common==1.29.0
opentelemetry-exporter-otlp-proto-grpc==1.29.0
opentelemetry-exporter-otlp-proto-http==1.29.0
opentelemetry-proto==1.29.0
opentelemetry-sdk==1.29.0
opentelemetry-semantic-conventions==0.50b0

And these are the containers running:

CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS                                                                                                                                                                                                                                           NAMES
08d1e47b0f71   grafana/tempo:latest     "/tempo -config.file…"   18 minutes ago   Up 18 minutes   0.0.0.0:3200->3200/tcp, :::3200->3200/tcp, 0.0.0.0:4317-4318->4317-4318/tcp, :::4317-4318->4317-4318/tcp, 0.0.0.0:9095->9095/tcp, :::9095->9095/tcp, 0.0.0.0:9411->9411/tcp, :::9411->9411/tcp, 0.0.0.0:14268->14268/tcp, :::14268->14268/tcp   local-tempo-1
17a3427dcd77   grafana/grafana:11.2.0   "/run.sh"                18 minutes ago   Up 18 minutes   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp                                                                                                                                                                                                       local-grafana-1
52b36ee3ce93   memcached:1.6.29         "docker-entrypoint.s…"   18 minutes ago   Up 18 minutes   0.0.0.0:11211->11211/tcp, :::11211->11211/tcp                                                                                                                                                                                                   memcached
a864f2472d27   prom/prometheus:latest   "/bin/prometheus --c…"   18 minutes ago   Up 18 minutes   0.0.0.0:9090->9090/tcp, :::9090->9090/tcp                                                                                                                                                                                                       local-prometheus-1
