OpenTelemetry and Kubernetes
Forward OpenTelemetry traces, metrics, and logs from your Kubernetes applications to Velocity to enable AI-powered investigations, automatic service maps, and cross-source correlation
Velocity's AI models become dramatically more effective when they understand how services talk to each other and what happens inside those calls. By forwarding OpenTelemetry data you unlock:
Automatic service map – real, high-cardinality topology derived from traces
Cross-source correlation – traces enrich your existing Datadog/Grafana/Coralogix dashboards inside Velocity, eliminating context-switching
Faster investigations – anomaly detection, error clustering, and "likely root cause" suggestions are trace-aware
Overview
This document is for platform engineers and SREs who already instrument their workloads with OpenTelemetry (OTel SDKs or an OTel-compatible framework) and want to make their incident investigations faster and more accurate with Velocity.
You will learn how to ship traces (and optionally metrics and logs) from your cluster to Velocity in under 10 minutes using a hardened Helm chart built on the OpenTelemetry Collector.
The Velocity Collector is a thin, production-ready wrapper around the upstream OpenTelemetry Collector. It gives you:
A single Helm chart with sane defaults
Secure defaults – TLS everywhere, stateless, and easy to upgrade
Full control – the chart is open source, CI-built, and uses standard OTel components
Prerequisites
A Velocity account
An application instrumented with OpenTelemetry SDKs
Kubernetes 1.24+ (see OpenTelemetry Helm chart requirements)
Helm 3.17.1+
Admin access to the Kubernetes cluster
Quick Start
Let's get you up and running in under 5 minutes.
Get your API key
Log into your Velocity account. Contact us at [email protected] to get your API key.
Add the Helm repository
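Add the chart repository to Helm. The repository name and URL below are illustrative; use the ones from your Velocity onboarding instructions:

```shell
# Hypothetical repo URL - substitute the one Velocity provides
helm repo add velocity https://charts.velocity.example.com
helm repo update
```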
Create a secret with your API key
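The troubleshooting section below expects a secret named `velocity-collector-secret` in the `velocity` namespace, with the key `apiKey`. A matching command would be:

```shell
kubectl create namespace velocity
kubectl create secret generic velocity-collector-secret \
  --namespace velocity \
  --from-literal=apiKey='YOUR_VELOCITY_API_KEY'
```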
Install the collector
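A sketch of the install command. The chart name `velocity/velocity-collector` is an assumption; the `existingSecret` value matches the secret created in the previous step:

```shell
helm install velocity-collector velocity/velocity-collector \
  --namespace velocity \
  --set global.velocity.apiKey.existingSecret=velocity-collector-secret
```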
Verify it's running
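Check the pod status (assuming the chart was installed into the `velocity` namespace):

```shell
kubectl get pods -n velocity
```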
You should see something like:
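For example (the exact pod name depends on your release name):

```
NAME                                                 READY   STATUS    RESTARTS   AGE
velocity-collector-opentelemetry-collector-xxxxx     1/1     Running   0          45s
```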
Point your apps at the collector
Update your application's OpenTelemetry configuration:
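For SDKs configured through the standard OTLP environment variables, point them at the collector's in-cluster service. The service name below is an assumption based on the chart's opentelemetry-collector naming; run `kubectl get svc -n velocity` to find the actual name:

```yaml
# Pod spec fragment - OTLP over gRPC on port 14317
# (use port 14318 with OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf for OTLP/HTTP)
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14317"
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: "grpc"
```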
Verify data flow
Check the collector logs:
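Using the pod label from the troubleshooting section:

```shell
kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector --tail=50
```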
Then head to your Velocity dashboard to see traces flowing in.
Architecture
Here's how Velocity Collector fits alongside your existing observability infrastructure:
The Velocity Collector runs in parallel with your existing telemetry pipeline – no need to rip and replace.
Deployment Patterns
Velocity Collector supports different deployment patterns depending on your needs.
I. Application Telemetry (Default)
The default setup deploys a single collector that receives telemetry from your applications. This is perfect for:
Microservices sending traces
Applications emitting custom metrics
Services forwarding structured logs
II. Kubernetes Infrastructure Monitoring
Want to monitor your Kubernetes infrastructure too? Deploy additional collectors:
DaemonSet (one per node) collects:
Node metrics: CPU, memory, disk, network
Pod/Container metrics: Resource usage and limits
Kubernetes metadata enrichment
Deployment (cluster-wide) collects:
Cluster metrics: Node conditions, resource allocation
Kubernetes events: Scheduling, failures, warnings
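If the chart exposes toggles for these additional collectors, enabling them might look like the following; the value names here are hypothetical, so check the chart's `values.yaml` for the real flags:

```yaml
# Hypothetical values - consult the chart's values.yaml
daemonset:
  enabled: true        # per-node collector: node, pod, and container metrics
clusterCollector:
  enabled: true        # cluster-wide collector: cluster metrics and events
```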
III. Service Graph Connector
Coming Soon: Privacy-preserving abstraction that builds relationship graphs from traces without exposing trace details.
For examples and detailed configuration, check out our GitHub repository.
Configuration
Protocol Selection: HTTP vs gRPC
The Velocity Collector uses OTLP over HTTP (OTLP/HTTP) when sending telemetry data to Velocity's cloud endpoints. This choice provides:
Maximum compatibility – works through firewalls, proxies, and load balancers
Easy debugging – use standard HTTP tools like curl for troubleshooting
Internet-friendly – reliable transmission across internet boundaries
Custom port – uses port 14318 (instead of the standard 4318) to avoid conflicts with other collectors
While gRPC offers slightly better performance (~10-20% faster), HTTP's compatibility advantages make it the better choice for internet-facing endpoints like Velocity's ingestion service.
Note:
The collector accepts incoming data using either protocol (gRPC on port 14317 or HTTP on port 14318)
When forwarding to Velocity's cloud service, it uses HTTP
For cluster-internal communication between collectors, gRPC (port 14317) is fine and may offer better performance
Basic Configuration
First, create a Kubernetes Secret containing your API key:
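For example, matching the secret name and key used in the troubleshooting section:

```shell
kubectl create secret generic velocity-collector-secret \
  --namespace velocity \
  --from-literal=apiKey='YOUR_VELOCITY_API_KEY'
```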
Then reference it in your Helm values:
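Based on the `global.velocity.apiKey.existingSecret` value shown in the troubleshooting section, the values.yaml fragment looks like:

```yaml
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret
```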
Advanced Options
For production deployments, you might want to tune these settings:
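A sketch of production tuning using standard OpenTelemetry Collector building blocks (resource requests plus the `memory_limiter` and `batch` processors); the exact value paths depend on the chart, so treat this as a starting point:

```yaml
resources:
  requests:
    cpu: 250m
    memory: 512Mi
  limits:
    memory: 1Gi
config:
  processors:
    # Drop data before the collector is OOM-killed
    memory_limiter:
      check_interval: 1s
      limit_percentage: 80
      spike_limit_percentage: 25
    # Batch exports to reduce outbound request volume
    batch:
      send_batch_size: 8192
      timeout: 5s
```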
Clone Traffic from Existing Collector
Already have an OpenTelemetry Collector? Add Velocity as an additional exporter to try it out without disrupting your current setup:
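In an existing upstream Collector, this means adding an `otlphttp` exporter alongside your current ones. The ingestion endpoint URL and header name below are assumptions; use the values from your Velocity onboarding:

```yaml
exporters:
  otlphttp/velocity:
    endpoint: https://ingest.velocity.example.com   # hypothetical endpoint
    headers:
      x-api-key: ${env:VELOCITY_API_KEY}            # hypothetical header name
service:
  pipelines:
    traces:
      # Keep your existing exporter and fan out to Velocity as well
      exporters: [otlp/existing, otlphttp/velocity]
```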
This approach lets you evaluate Velocity while keeping your existing observability tools running.
Filtering PII and Internal Data
Need to filter sensitive data? The collector includes a purpose-built redaction processor:
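The upstream redaction processor can be configured along these lines; the regex patterns shown are illustrative:

```yaml
processors:
  redaction:
    allow_all_keys: true
    blocked_values:
      # illustrative patterns: Visa card numbers and email addresses
      - "4[0-9]{12}(?:[0-9]{3})?"
      - "[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+"
    # Attach a summary of redactions to each span for auditing
    summary: debug
```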
The redaction processor automatically scans all span attributes, resource attributes, and span names for sensitive patterns. For more options, see the redaction processor documentation.
Security Considerations
The Velocity Collector is built with enterprise security requirements in mind. It's a stateless forwarder based on the widely audited OpenTelemetry Collector: no data persistence, no local caching, no attack surface beyond standard HTTPS egress. Your telemetry data flows directly from your cluster to Velocity's TLS-secured endpoints without intermediate storage. The collector runs with minimal permissions, requires only outbound HTTPS (port 443), and can be deployed in locked-down environments with egress proxies.
This architecture means your compliance team can treat it like any other observability agent – same security posture, familiar operational model.
Key Security Features:
File-based authentication – API keys are mounted as files, not environment variables
No plaintext secrets – all sensitive data stored in Kubernetes Secrets
Minimal permissions – runs as non-root with a read-only filesystem
TLS everywhere – all communication encrypted in transit
API Key Management
The Velocity Collector uses file-based authentication for enhanced security. Your API key from Velocity is ready to use as-is: simply store it in a Kubernetes Secret and the collector handles authentication. Create the secret before installing:
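Using the secret name and key expected by the troubleshooting examples:

```shell
kubectl create secret generic velocity-collector-secret \
  --namespace velocity \
  --from-literal=apiKey='YOUR_VELOCITY_API_KEY'
```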
Then reference it in your Helm values:
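As in Basic Configuration, point `existingSecret` at the secret you created:

```yaml
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret
```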
For production environments, consider using:
AWS Secrets Manager with External Secrets Operator
HashiCorp Vault
Sealed Secrets
Azure Key Vault
Network Policies
If you're using network policies, ensure the collector can:
Receive traffic from your application pods (ports 14317/14318)
Send traffic to Velocity's ingestion endpoint
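A sketch of a NetworkPolicy satisfying both rules; the pod selector label is taken from the troubleshooting section, and you should adjust the ingress sources and namespace to match your setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: velocity-collector
  namespace: velocity
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-collector
  policyTypes: [Ingress, Egress]
  ingress:
    - ports:
        - port: 14317   # OTLP/gRPC from application pods
        - port: 14318   # OTLP/HTTP from application pods
  egress:
    - ports:
        - port: 443     # HTTPS to Velocity's ingestion endpoint
    - to:
        - namespaceSelector: {}   # allow DNS resolution
      ports:
        - port: 53
          protocol: UDP
```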
Best Practices
Start simple – deploy the basic configuration first, then add complexity incrementally
Monitor the monitor – set alerts on the collector's own health metrics
Resource allocation – begin with conservative limits and scale based on observed usage
Troubleshooting
Common Issues
Error: "global.velocity.apiKey.existingSecret is required"
You must specify the secret name when installing:
```shell
--set global.velocity.apiKey.existingSecret=your-secret-name
```
The secret must exist before installing the chart.
Collector pod is not starting
Check the secret exists:
```shell
kubectl get secret velocity-collector-secret -n velocity
```
Verify the secret has the correct key:
```shell
kubectl get secret velocity-collector-secret -n velocity -o jsonpath='{.data.apiKey}' | base64 -d
```
Check pod logs:
```shell
kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector
```
No data appearing in Velocity
Verify your applications are sending data to the correct endpoint
Check the collector logs for authentication errors
Ensure network policies allow outbound HTTPS to Velocity's ingestion endpoint
Next Steps
Explore advanced examples in our GitHub repository
Review the OpenTelemetry documentation for instrumentation best practices
Check out the OpenTelemetry Collector documentation for advanced configuration options