OpenTelemetry and Kubernetes

Forward OpenTelemetry traces, metrics, and logs from your Kubernetes applications to Velocity to enable AI-powered investigations, automatic service maps, and cross-source correlation

Velocity's AI models become dramatically more effective when they understand how services talk to each other and what happens inside those calls. By forwarding OpenTelemetry data you unlock:

  • Automatic service map – real, high-cardinality topology derived from traces

  • Cross-source correlation – traces enrich your existing Datadog/Grafana/Coralogix dashboards inside Velocity, eliminating context-switching

  • Faster investigation – anomaly detection, error clustering, and "likely root cause" suggestions are trace-aware

Overview

This document is for platform engineers and SREs who already instrument their workloads with OpenTelemetry (OTel SDKs or an OTel-compatible framework) and want to make their incident investigations faster and more accurate with Velocity.

You will learn how to ship traces (and optionally metrics & logs) from your cluster to Velocity in < 10 minutes using a hardened Helm chart built on the OpenTelemetry Collector.

The Velocity Collector is a thin, production-ready wrapper around the upstream OpenTelemetry Collector. It gives you:

  • A single Helm chart with sane defaults

  • A secure (TLS everywhere), stateless deployment that's easy to upgrade

  • Full control – the chart is open-source, CI-built, and uses standard OTel components

Note: this collector forwards OpenTelemetry data from your apps to Velocity, so if nothing is instrumented, there's nothing to forward. Haven't instrumented yet? You're missing out. OpenTelemetry is worth the effort! 😉

Prerequisites
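
To follow this guide you'll need:

  • A Kubernetes cluster with kubectl access

  • Helm 3

  • A Velocity API key (step 1 of the Quick Start covers how to get one)

  • Applications instrumented with OpenTelemetry (OTel SDKs or an OTel-compatible framework)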

Quick Start

Let's get you up and running in under 5 minutes.

1. Get your API key

Log into your Velocity account. Contact us at [email protected] to get your API key.

2. Add the Helm repository
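
A minimal sketch, assuming the chart is published in a Velocity-hosted Helm repository; the repository URL below is illustrative, so use the one from your onboarding instructions:

```bash
# Add the Velocity Helm repository (URL is illustrative) and refresh the local chart index
helm repo add velocity https://charts.velocity.example.com
helm repo update
```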

3. Create a secret with your API key
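
The namespace, secret name, and key below match the ones referenced later in Troubleshooting:

```bash
# Create a namespace for the collector and store the API key under the "apiKey" key
kubectl create namespace velocity
kubectl create secret generic velocity-collector-secret \
  --namespace velocity \
  --from-literal=apiKey=<YOUR_VELOCITY_API_KEY>
```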

4. Install the collector
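
An illustrative install command; the chart name is an assumption, while `global.velocity.apiKey.existingSecret` is the value the chart requires (see Troubleshooting):

```bash
# Install the collector and point it at the secret created in the previous step
helm install velocity-collector velocity/velocity-collector \
  --namespace velocity \
  --set global.velocity.apiKey.existingSecret=velocity-collector-secret
```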

5. Verify it's running
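
List the pods in the collector's namespace:

```bash
kubectl get pods -n velocity
```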

You should see something like:
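
(The pod name below is illustrative; the exact name depends on your Helm release name.)

```
NAME                                                  READY   STATUS    RESTARTS   AGE
velocity-collector-opentelemetry-collector-7d9f8b5c   1/1     Running   0          45s
```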

6. Point your apps at the collector

Update your application's OpenTelemetry configuration:
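
A sketch using the standard OTLP exporter environment variables; the in-cluster Service hostname is an assumption, so check `kubectl get svc -n velocity` for the actual name:

```bash
# OTLP over HTTP on port 14318 (use 14317 for gRPC)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```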

7. Verify data flow

Check the collector logs:
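
The label selector below is the same one used in Troubleshooting:

```bash
kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector --tail=50
```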

Then head to your Velocity dashboard to see traces flowing in.

Architecture

The Velocity Collector runs alongside your existing observability infrastructure, in parallel with your current telemetry pipeline, so there's no need to rip and replace anything.

Deployment Patterns

Velocity Collector supports different deployment patterns depending on your needs.

I. Application Telemetry (Default)

The default setup deploys a single collector that receives telemetry from your applications. This is perfect for:

  • Microservices sending traces

  • Applications emitting custom metrics

  • Services forwarding structured logs

II. Kubernetes Infrastructure Monitoring

Want to monitor your Kubernetes infrastructure too? Deploy additional collectors:

DaemonSet (one per node) collects:

  • Node metrics: CPU, memory, disk, network

  • Pod/Container metrics: Resource usage and limits

  • Kubernetes metadata enrichment

Deployment (cluster-wide) collects:

  • Cluster metrics: Node conditions, resource allocation

  • Kubernetes events: Scheduling, failures, warnings

III. Service Graph Connector

Coming Soon: Privacy-preserving abstraction that builds relationship graphs from traces without exposing trace details.

For examples and detailed configuration, check out our GitHub repository.

Configuration

Protocol Selection: HTTP vs gRPC

The Velocity Collector uses HTTP protocol (OTLP/HTTP) for sending telemetry data to Velocity's cloud endpoints. This choice provides:

  • Maximum compatibility – Works through firewalls, proxies, and load balancers

  • Easy debugging – Use standard HTTP tools like curl for troubleshooting (see the example after this list)

  • Internet-friendly – Reliable transmission across internet boundaries

  • Custom port – Uses port 14318 (instead of standard 4318) to avoid conflicts with other collectors
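
For instance, a quick reachability check against the collector's OTLP/HTTP receiver (the Service hostname is an assumption; an empty request body is valid OTLP and should return HTTP 200):

```bash
curl -i -X POST \
  -H "Content-Type: application/json" \
  -d '{}' \
  http://velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14318/v1/traces
```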

While gRPC offers somewhat better performance (on the order of 10–20% faster), HTTP's compatibility advantages make it the better choice for internet-facing endpoints like Velocity's ingestion service.

Note:

  • The collector accepts incoming data using either protocol (gRPC on port 14317 or HTTP on port 14318)

  • When forwarding to Velocity's cloud service, it uses HTTP

  • For cluster-internal communication between collectors, gRPC (port 14317) is fine and may offer better performance

Basic Configuration

First, create a Kubernetes Secret containing your API key:
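
This is the same secret as in the Quick Start; the key name `apiKey` is the one the chart reads:

```bash
kubectl create secret generic velocity-collector-secret \
  --namespace velocity \
  --from-literal=apiKey=<YOUR_VELOCITY_API_KEY>
```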

Then reference it in your Helm values:
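
A minimal values sketch; `global.velocity.apiKey.existingSecret` is the key named in Troubleshooting, while the surrounding file layout is an assumption:

```yaml
# values.yaml
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret
```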

Advanced Options

For production deployments, you might want to tune these settings:
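
The value names below are illustrative assumptions (upstream OpenTelemetry Collector charts expose similar knobs); check the chart's values reference for the exact keys:

```yaml
# values.yaml (illustrative)
replicaCount: 2          # run more than one collector instance for availability
resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 1Gi
```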

Clone Traffic from Existing Collector

Already have an OpenTelemetry Collector? Add Velocity as an additional exporter to try it out without disrupting your current setup:
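
A sketch of adding Velocity as an extra `otlphttp` exporter in an upstream OpenTelemetry Collector configuration; the ingestion endpoint and API-key header are assumptions, so substitute the values Velocity provides:

```yaml
exporters:
  otlphttp/velocity:
    endpoint: https://ingest.velocity.example.com    # replace with Velocity's ingestion endpoint
    headers:
      x-api-key: ${env:VELOCITY_API_KEY}             # header name and auth scheme are assumptions

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/current, otlphttp/velocity]   # keep whatever exporter you already use, add Velocity
```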

This approach lets you evaluate Velocity while keeping your existing observability tools running.

Filtering PII and Internal Data

Need to filter sensitive data? The collector includes a purpose-built redaction processor:
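
A sketch of the upstream redaction processor's configuration; whether the Velocity chart exposes it under a `config:` values key is an assumption, and the patterns below are only examples:

```yaml
processors:
  redaction:
    allow_all_keys: true                  # keep attribute keys, but scrub values matching blocked patterns
    blocked_values:
      - '4[0-9]{12}(?:[0-9]{3})?'         # example: Visa card numbers
      - '[\w.+-]+@[\w-]+\.[\w.]+'         # example: email addresses
    summary: debug                        # record which attributes were redacted
```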

The redaction processor automatically scans all span attributes, resource attributes, and span names for sensitive patterns. For more options, see the redaction processor documentation.

Security Considerations

The Velocity Collector is built with enterprise security requirements in mind. It's a stateless forwarder based on the widely-audited OpenTelemetry Collector: no data persistence, no local caching, no attack surface beyond standard HTTPS egress. Your telemetry data flows directly from your cluster to Velocity's TLS-secured endpoints without intermediate storage. The collector runs with minimal permissions, requires only outbound HTTPS (port 443), and can be deployed in locked-down environments with egress proxies.

This architecture means your compliance team can treat it like any other observability agent: same security posture, familiar operational model.

Key Security Features:

  • File-based authentication – API keys are mounted as files, not environment variables

  • No plaintext secrets – All sensitive data stored in Kubernetes Secrets

  • Minimal permissions – Runs as non-root with read-only filesystem

  • TLS everywhere – All communication encrypted in transit

API Key Management

The Velocity Collector uses file-based authentication for enhanced security. Your API key from Velocity is ready to use as-is: store it in a Kubernetes Secret and the collector handles authentication. Create the secret before installing:
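
This is identical to the Quick Start secret; the key name `apiKey` is what the collector mounts:

```bash
kubectl create secret generic velocity-collector-secret \
  --namespace velocity \
  --from-literal=apiKey=<YOUR_VELOCITY_API_KEY>
```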

Then reference it in your Helm values:
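
As in Basic Configuration, only the `existingSecret` key is confirmed by this guide; the surrounding layout is an assumption:

```yaml
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret
```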

For production environments, consider using:

  • AWS Secrets Manager with External Secrets Operator

  • HashiCorp Vault

  • Sealed Secrets

  • Azure Key Vault

Network Policies

If you're using network policies, ensure the collector can (a minimal example follows this list):

  • Receive traffic from your application pods (ports 14317/14318)

  • Send traffic to Velocity's ingestion endpoint
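
A sketch of a NetworkPolicy for the collector's namespace; the pod selector and peer selectors are assumptions, so tighten them to match your environment (you may also need to allow DNS egress):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: velocity-collector
  namespace: velocity
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-collector
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - namespaceSelector: {}   # allow application pods from any namespace; restrict as needed
      ports:
        - port: 14317             # OTLP gRPC
        - port: 14318             # OTLP HTTP
  egress:
    - ports:
        - port: 443               # outbound HTTPS to Velocity's ingestion endpoint
```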

Best Practices

  • Start simple – Deploy basic configuration first, add complexity incrementally

  • Monitor the monitor – Set alerts for collector health metrics

  • Resource allocation – Begin with conservative limits, scale based on observed usage

Troubleshooting

Common Issues

Error: "global.velocity.apiKey.existingSecret is required"

  • You must specify the secret name when installing: --set global.velocity.apiKey.existingSecret=your-secret-name

  • The secret must exist before installing the chart

Collector pod is not starting

  • Check the secret exists: kubectl get secret velocity-collector-secret -n velocity

  • Verify the secret has the correct key: kubectl get secret velocity-collector-secret -n velocity -o jsonpath='{.data.apiKey}' | base64 -d

  • Check pod logs: kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector

No data appearing in Velocity

  • Verify your applications are sending data to the correct endpoint

  • Check the collector logs for authentication errors

  • Ensure network policies allow outbound HTTPS to Velocity's ingestion endpoint

Next Steps
