OpenTelemetry and Kubernetes
Forward OpenTelemetry traces, metrics, and logs from your Kubernetes applications to Velocity to enable AI-powered investigations, automatic service maps, and cross-source correlation
Velocity's AI models become dramatically more effective when they understand how services talk to each other and what happens inside those calls. By forwarding OpenTelemetry data you unlock:
Automatic service map – real, high-cardinality topology derived from traces
Cross-source correlation – traces enrich your existing Datadog/Grafana/Coralogix dashboards inside Velocity, eliminating context-switching
Faster investigation – anomaly detection, error clustering, and "likely root cause" suggestions are trace-aware
Overview
This document is for platform engineers and SREs who already instrument their workloads with OpenTelemetry (OTel SDKs or an OTel-compatible framework) and want to make their incident investigations faster and more accurate with Velocity.
You will learn how to ship traces (and optionally metrics & logs) from your cluster to Velocity in < 10 minutes using a hardened Helm chart built on the OpenTelemetry Collector.
The Velocity Collector is a thin, production-ready wrapper around the upstream OpenTelemetry Collector. It gives you:
A single Helm chart with sane defaults
A secure (TLS everywhere), stateless deployment that's easy to upgrade
Full control – the chart is open-source, CI-built, and uses standard OTel components
Prerequisites
A Velocity account
An application instrumented with OpenTelemetry SDKs
Kubernetes 1.24+ (see OpenTelemetry Helm chart requirements)
Helm 3.17.1+
Admin access to the Kubernetes cluster
Quick Start
Let's get you up and running in under 5 minutes.
Get your API key
Log into your Velocity account. Contact us at [email protected] to get your API key.
Add the Helm repository
helm repo add velocity https://techvelocity.github.io/helm-charts
helm repo update
Create a secret with your API key
kubectl create secret generic velocity-collector-secret \
--from-literal=apiKey=YOUR_API_KEY \
--namespace velocity \
--create-namespace
Install the collector
helm install velocity-collector velocity/velocity-collector \
--set global.velocity.apiKey.existingSecret=velocity-collector-secret \
--namespace velocity
Verify it's running
kubectl get pods -n velocity
You should see something like:
NAME                                                 READY   STATUS    RESTARTS   AGE
velocity-collector-opentelemetry-collector-abc123    1/1     Running   0          30s
Point your apps at the collector
Update your application's OpenTelemetry configuration:
OTEL_EXPORTER_OTLP_ENDPOINT=http://velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14318
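If you'd rather set this in your Kubernetes manifests than in application config, a minimal sketch looks like the following; the Deployment name, labels, and image are placeholders, and OTEL_EXPORTER_OTLP_PROTOCOL is only needed if your SDK doesn't already default to OTLP over HTTP:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # placeholder
          env:
            # Send OTLP over HTTP to the in-cluster Velocity Collector
            - name: OTEL_EXPORTER_OTLP_ENDPOINT
              value: http://velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14318
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: http/protobuf
            # Name the service so it shows up cleanly in the service map
            - name: OTEL_SERVICE_NAME
              value: my-app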
Verify data flow
Check the collector logs:
kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector
Then head to your Velocity dashboard to see traces flowing in.
Architecture
Here's how the Velocity Collector fits alongside your existing observability infrastructure:
The Velocity Collector runs in parallel with your existing telemetry pipeline, so there's no need to rip and replace anything.
Deployment Patterns
Velocity Collector supports different deployment patterns depending on your needs.
I. Application Telemetry (Default)
The default setup deploys a single collector that receives telemetry from your applications. This is perfect for:
Microservices sending traces
Applications emitting custom metrics
Services forwarding structured logs
II. Kubernetes Infrastructure Monitoring
Want to monitor your Kubernetes infrastructure too? Deploy additional collectors, as sketched in the values example after this list:
DaemonSet (one per node) collects:
Node metrics: CPU, memory, disk, network
Pod/Container metrics: Resource usage and limits
Kubernetes metadata enrichment
Deployment (cluster-wide) collects:
Cluster metrics: Node conditions, resource allocation
Kubernetes events: Scheduling, failures, warnings
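As a sketch of the two patterns above, assuming the chart passes these values straight through to the upstream opentelemetry-collector subchart and its built-in presets (check the chart's values.yaml for the exact keys), you would install two extra releases with values along these lines:
# values-daemonset.yaml – one collector per node
opentelemetry-collector:
  mode: daemonset
  presets:
    hostMetrics:
      enabled: true          # node CPU, memory, disk, network
    kubeletMetrics:
      enabled: true          # pod/container resource usage
    kubernetesAttributes:
      enabled: true          # enrich telemetry with Kubernetes metadata

# values-cluster.yaml – a single collector for cluster-level data
opentelemetry-collector:
  mode: deployment
  replicaCount: 1
  presets:
    clusterMetrics:
      enabled: true          # node conditions, resource allocation
    kubernetesEvents:
      enabled: true          # scheduling, failures, warnings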
III. Service Graph Connector
Coming Soon: Privacy-preserving abstraction that builds relationship graphs from traces without exposing trace details.
For examples and detailed configuration, check out our GitHub repository.
Configuration
Protocol Selection: HTTP vs gRPC
The Velocity Collector sends telemetry to Velocity's cloud endpoints over OTLP/HTTP. This choice provides:
Maximum compatibility – Works through firewalls, proxies, and load balancers
Easy debugging – Use standard HTTP tools like curl for troubleshooting (see the example below)
Internet-friendly – Reliable transmission across internet boundaries
Custom port – Uses port 14318 (instead of standard 4318) to avoid conflicts with other collectors
While gRPC offers slightly better performance (~10-20% faster), HTTP's compatibility advantages make it the better choice for internet-facing endpoints like Velocity's ingestion service.
Note:
The collector accepts incoming data using either protocol (gRPC on port 14317 or HTTP on port 14318)
When forwarding to Velocity's cloud service, it uses HTTP
For cluster-internal communication between collectors, gRPC (port 14317) is fine and may offer better performance
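Because the collector accepts plain OTLP/HTTP, you can sanity-check it from inside the cluster with curl; an empty OTLP payload is enough to confirm the endpoint responds (the throwaway debug pod below is just one way to do this):
kubectl run otlp-check --rm -i --restart=Never --image=curlimages/curl --command -- \
  curl -s -X POST \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[]}' \
  http://velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14318/v1/traces
# An empty JSON response (e.g. {} or {"partialSuccess":{}}) means the collector accepted the request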
Basic Configuration
First, create a Kubernetes Secret containing your API key:
apiVersion: v1
kind: Secret
metadata:
  name: velocity-collector-secret
  namespace: velocity
type: Opaque
stringData:
  apiKey: YOUR_API_KEY
Then reference it in your Helm values:
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret  # Name of your Kubernetes secret
      key: apiKey                                # Key within the secret (default: apiKey)
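If you keep these settings in a values file (values.yaml below is just a placeholder name), install or upgrade with it instead of repeating --set flags:
helm upgrade --install velocity-collector velocity/velocity-collector \
  --namespace velocity \
  -f values.yaml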
Advanced Options
For production deployments, you might want to tune these settings:
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret

opentelemetry-collector:
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 10
  resources:
    requests:
      memory: 512Mi
      cpu: 200m
    limits:
      memory: 2Gi
      cpu: 1000m
  # ... rest of values
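With autoscaling enabled, you can confirm the HorizontalPodAutoscaler exists and is tracking load; note that a metrics source such as metrics-server must be running for CPU and memory targets to resolve:
kubectl get hpa -n velocity
kubectl describe hpa -n velocity | grep -A3 "Metrics:"   # current vs. target utilization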
Clone Traffic from Existing Collector
Already have an OpenTelemetry Collector? Add Velocity as an additional exporter to try it out without disrupting your current setup:
# In your existing collector's config:
exporters:
  # Your existing exporters stay unchanged
  jaeger:
    endpoint: jaeger-collector:14250
  prometheus:
    endpoint: 0.0.0.0:8889

  # Add Velocity as a new exporter
  otlp/velocity:
    endpoint: velocity-collector-opentelemetry-collector.velocity.svc.cluster.local:14317
    tls:
      insecure: true  # within-cluster traffic

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters:
        - jaeger
        - otlp/velocity  # added to the existing pipeline
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters:
        - prometheus
        - otlp/velocity  # added to the existing pipeline
This approach lets you evaluate Velocity while keeping your existing observability tools running.
Filtering PII and Internal Data
Need to filter sensitive data? The configuration below combines the redaction processor (masks attribute values that match blocked patterns), the attributes processor (drops specific keys), and the filter processor (removes entire spans):
opentelemetry-collector:
  config:
    processors:
      # Mask attribute values that match these patterns (redacted values are replaced with "****")
      redaction:
        allow_all_keys: true
        blocked_values:
          # Credit card numbers
          - '\b4[0-9]{12}(?:[0-9]{3})?\b'
          # Email addresses
          - '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b'
          # SSNs
          - '\b\d{3}-\d{2}-\d{4}\b'
          # API keys
          - '\b(vli_|org_)[A-Za-z0-9:_]+\b'
      # Or drop specific keys entirely
      attributes/drop_pii:
        actions:
          - key: user.email
            action: delete
          - key: customer.ssn
            action: delete
          - key: payment.card_number
            action: delete
      # Additionally remove entire sensitive spans
      filter:
        error_mode: ignore
        traces:
          span:
            - 'attributes["http.url"] == "/api/v1/credit-cards"'
            - 'name == "SELECT * FROM payment_methods"'
    service:
      pipelines:
        traces:
          processors: [attributes/drop_pii, redaction, filter, batch]
          # ... rest of pipeline
The redaction processor scans span and resource attributes and masks any values that match the blocked patterns, while the filter processor drops whole spans. For more options, see the redaction processor documentation.
Security Considerations
The Velocity Collector is built with enterprise security requirements in mind. It's a stateless forwarder based on the widely-audited OpenTelemetry Collector—no data persistence, no local caching, no attack surface beyond standard HTTPS egress. Your telemetry data flows directly from your cluster to Velocity's TLS-secured endpoints without intermediate storage. The collector runs with minimal permissions, requires only outbound HTTPS (port 443), and can be deployed in locked-down environments with egress proxies.
This architecture means your compliance team can treat it like any other observability agent—same security posture, familiar operational model.
Key Security Features:
File-based authentication – API keys are mounted as files, not environment variables
No plaintext secrets – All sensitive data stored in Kubernetes Secrets
Minimal permissions – Runs as non-root with read-only filesystem
TLS everywhere – All communication encrypted in transit
API Key Management
The Velocity Collector uses file-based authentication for enhanced security. Your API key from Velocity is ready to use as-is: store it in a Kubernetes Secret and the collector handles the authentication. Create the secret before installing:
# Using kubectl
kubectl create secret generic velocity-collector-secret \
--from-literal=apiKey=YOUR_API_KEY \
-n velocity
# Using environment variable
kubectl create secret generic velocity-collector-secret \
--from-literal=apiKey=$VELOCITY_API_KEY \
-n velocity
Then reference it in your Helm values:
global:
  velocity:
    apiKey:
      existingSecret: velocity-collector-secret
      key: apiKey  # Key within the secret
For production environments, consider using:
AWS Secrets Manager with External Secrets Operator (example manifest below)
HashiCorp Vault
Sealed Secrets
Azure Key Vault
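As one example, with External Secrets Operator the secret can be synced from AWS Secrets Manager instead of created by hand; this is only a sketch, and the store name and remote secret path are placeholders for your own setup:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: velocity-collector-secret
  namespace: velocity
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager        # your (Cluster)SecretStore
    kind: ClusterSecretStore
  target:
    name: velocity-collector-secret  # Secret the Helm chart references
  data:
    - secretKey: apiKey              # key inside the Kubernetes Secret
      remoteRef:
        key: prod/velocity/api-key   # name of the secret in AWS Secrets Manager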
Network Policies
If you're using network policies, ensure the collector can:
Receive traffic from your application pods (ports 14317/14318)
Send traffic to Velocity's ingestion endpoint (an example policy follows below)
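Here is a minimal sketch of such a policy; the pod labels and the wide-open ingress/egress rules are assumptions, so tighten them to match your chart's labels and your egress controls:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: velocity-collector
  namespace: velocity
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: opentelemetry-collector
  policyTypes: [Ingress, Egress]
  ingress:
    # Allow OTLP in on the collector ports (no source restriction here;
    # add a "from" selector to scope it to your application namespaces)
    - ports:
        - protocol: TCP
          port: 14317   # OTLP/gRPC
        - protocol: TCP
          port: 14318   # OTLP/HTTP
  egress:
    # HTTPS out to Velocity's ingestion endpoint
    - ports:
        - protocol: TCP
          port: 443
    # DNS resolution
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53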
Best Practices
Start simple – Deploy basic configuration first, add complexity incrementally
Monitor the monitor – Set alerts for collector health metrics (see the spot check after this list)
Resource allocation – Begin with conservative limits, scale based on observed usage
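For "monitor the monitor", the upstream collector exposes its own Prometheus metrics; assuming the default internal telemetry port 8888 and a Deployment named velocity-collector-opentelemetry-collector (adjust to whatever kubectl get deploy -n velocity shows), a quick spot check looks like this (exact metric names vary by collector version):
# In one terminal: expose the collector's internal metrics port
kubectl port-forward -n velocity deploy/velocity-collector-opentelemetry-collector 8888:8888

# In another terminal: check export success/failure counters
curl -s localhost:8888/metrics | grep -E "otelcol_exporter_sent|otelcol_exporter_send_failed"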
Troubleshooting
Common Issues
Error: "global.velocity.apiKey.existingSecret is required"
You must specify the secret name when installing:
--set global.velocity.apiKey.existingSecret=your-secret-name
The secret must exist before installing the chart
Collector pod is not starting
Check the secret exists:
kubectl get secret velocity-collector-secret -n velocity
Verify the secret has the correct key:
kubectl get secret velocity-collector-secret -n velocity -o jsonpath='{.data.apiKey}' | base64 -d
Check pod logs:
kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector
No data appearing in Velocity
Verify your applications are sending data to the correct endpoint
Check the collector logs for authentication errors (see the snippet below)
Ensure network policies allow outbound HTTPS to Velocity's ingestion endpoint
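A quick way to surface authentication or export failures in the collector logs (the grep pattern is just a heuristic):
kubectl logs -n velocity -l app.kubernetes.io/name=opentelemetry-collector --tail=200 \
  | grep -iE "error|unauthorized|permission|retry"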
Next Steps
Explore advanced examples in our GitHub repository
Review the OpenTelemetry documentation for instrumentation best practices
Check out the OpenTelemetry Collector documentation for advanced configuration options