
Creating a Blueprint

In this tutorial, we'll onboard a single application flow from the sample app in GCP's microservices demo project.
You can find the fully onboarded application here.

Isolate an application flow

Let's imagine we need to debug some interaction between the frontend and the cart service in our sample app.
Notice that the cart service interacts directly with two others -- the frontend and the Redis cache. So, in order to develop the cart service in a production-like setting, we'll need to onboard all three services that make up this data flow.

Onboard the Redis cache

Generally speaking, a data flow within an app will end at some variety of datastore. Here, that datastore is our Redis instance.
We'll onboard that service first, because it doesn't depend on any others. That is, Redis doesn't need to connect to any other services in order to run, which means that we don't need to make any changes to the manifest.
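For reference, the Redis manifest we're deploying looks roughly like this — an abridged sketch based on the microservices demo (image tag, probes, and resource settings omitted):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cart
spec:
  selector:
    matchLabels:
      app: redis-cart
  template:
    metadata:
      labels:
        app: redis-cart
    spec:
      containers:
        - name: redis
          image: redis:alpine
          ports:
            - containerPort: 6379
```

No Velocity annotations are needed here, since nothing upstream of Redis has to be resolved.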
Let's confirm that it works by running the following:
veloctl env create -f https://raw.githubusercontent.com/techvelocity/velocity-blueprints/main/getting-started/onboarding-example/1_redis.yaml
NOTE: this is the original, unchanged file.
We should see the following:
And when we follow the provided link, we should see:
Congratulations! You just onboarded your first Velocity Service!
Learn about seeding databases with production-like data here.

Onboard the cart service

Next, we'll onboard our cart service, because it depends directly on Redis in order to run. To do so, we'll need to make some changes to the cart service's original YAML definition.
Specifically, we'll need to add Velocity Annotations and Templates, like so:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    velocity.tech.v1/id: cart
    velocity.tech.v1/dependsOn: redis-cart
  name: cartservice
spec:
  selector:
    matchLabels:
      app: cartservice
  template:
    metadata:
      labels:
        app: cartservice
    spec:
      containers:
        - env:
            - name: REDIS_ADDR
              value: '{velocity.v1:redis-cart.exposures(port=tls-redis).host}:{velocity.v1:redis-cart.exposures(port=tls-redis).port}'
View the full original and updated files.

Notice that we made three changes to this manifest:

First, we added the velocity.tech.v1/id: cart annotation, which we can use to reference this service in others that we onboard.
Second, we added the velocity.tech.v1/dependsOn: redis-cart annotation (a reference to the name of the Redis deployment). Alternatively, we could have added a velocity.tech.v1/id annotation to the Redis deployment definition, and used that value here instead.
This does two things:
  1. It ensures that Redis spins up before the cart service.
  2. It allows us to dynamically reference connectivity details related to the Redis K8s service via Velocity Templates.
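If we went with that alternative, the Redis manifest would carry its own id annotation, and the cart service would reference that id in its dependsOn annotation instead. A sketch (the id value redis here is hypothetical, chosen for illustration):

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    velocity.tech.v1/id: redis   # hypothetical id, for illustration only
  name: redis-cart
```

The cart service's annotation would then become velocity.tech.v1/dependsOn: redis, and the templates would reference redis rather than redis-cart.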
And third, we updated the REDIS_ADDR environment variable to the following Velocity Templates, which dynamically resolve to the host and port of the deployed Redis instance:
'{velocity.v1:redis-cart.exposures(port=tls-redis).host}:{velocity.v1:redis-cart.exposures(port=tls-redis).port}'
Notice that our template refers to redis-cart (the name of the Redis deployment) and tls-redis (the port.name associated with Redis' ClusterIP Service).
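For context, here's an abridged sketch of where both of those names come from — the Redis ClusterIP Service, as defined in the microservices demo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-cart
spec:
  type: ClusterIP
  selector:
    app: redis-cart
  ports:
    - name: tls-redis
      port: 6379
      targetPort: 6379
```

The exposures(port=tls-redis) selector matches the ports entry by its name field, so the template resolves to this Service's host and port 6379.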
Let's confirm that it works by running the following:
veloctl env update -f https://raw.githubusercontent.com/techvelocity/velocity-blueprints/main/getting-started/onboarding-example/2_redis_cart.yaml

Onboard the frontend service

Finally, to complete the onboarding of this data flow, let's add the frontend service. Again, we'll need to make some changes to the original K8s Deployment definition.
Notice that we are again adding id and dependsOn annotations, as well as replacing the hard-coded CART_SERVICE_ADDR with Velocity Templates much like we did above.
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    velocity.tech.v1/id: frontend
    velocity.tech.v1/dependsOn: cart
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - env:
            - name: PORT
              value: "8080"
            - name: CART_SERVICE_ADDR
              value: '{velocity.v1:cart.exposures(port=grpc).host}:{velocity.v1:cart.exposures(port=grpc).port}'
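As with tls-redis earlier, port=grpc works because the cart service's ClusterIP Service names its port grpc. An abridged sketch (the port number 7070 comes from the microservices demo and may differ in your setup):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cartservice
spec:
  type: ClusterIP
  selector:
    app: cartservice
  ports:
    - name: grpc
      port: 7070
      targetPort: 7070
```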
We'll also have to substitute the provided Load Balancer with an Ingress, like so:
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-external
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
    - name: http
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  rules:
    - host: frontend-{velocity.v1.domainSuffix}
      http:
        paths:
          - backend:
              service:
                name: frontend
                port:
                  number: 8080
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - frontend-{velocity.v1.domainSuffix}
      secretName: wildcard-cert
Notice that the host definition in our Ingress includes a {velocity.v1.domainSuffix} template, which will dynamically resolve to the domain name associated with our Velocity account.
So, we'll be able to navigate to https://frontend-<domain>.com in our browser to view the live app.
Learn more about Velocity Templates here.
Let's confirm that it works by running:
veloctl env update -f https://raw.githubusercontent.com/techvelocity/velocity-blueprints/main/getting-started/onboarding-example/3_redis_cart_frontend.yaml

The rest of the demo application's services are onboarded in a similar way. To see the end result, run:

veloctl env update -f https://raw.githubusercontent.com/techvelocity/getting-started-app/main/sample.yaml
And... tada! We have a running app.

Summary

We've seen how we can take any existing application, onboard it to Velocity, and with a click of a button (or CLI command :wink:) create a reproducible isolated remote environment.

What's next?