Develop one or more microservices
Velocity enables you to work on one or more components of your environment without having to run everything locally on your computer. Instead, you develop only what you need, while the rest of your environment's dependencies run in the cloud.
In this tutorial, we will start by developing the backend locally, and then run and debug the worker service as well.


If you already followed the Setup, feel free to skip directly to Creating your environment.
# via Homebrew
brew install techvelocity/tap/veloctl
# or with a script
curl -fsSL | sh -s
# via Snapcraft
sudo snap install veloctl --classic
# or with a script
curl -fsSL | sh -s

Authenticate to Velocity

veloctl auth login
Attempting to automatically open the login page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
and verify the following code appears:
* Fetching underlying Kubernetes cluster configuration...
* Stored context 'velocity-VELOACME' locally (`/Users/Marty/.kube/config`).
Welcome Marty McFly ([email protected])!

Creating your environment

In this tutorial, we will be developing two services - backend and worker.
Use veloctl env create to ask Velocity to create an isolated environment for you with these services inside:
veloctl env create --service backend --service worker
Watching environment exciting-friday-64 status... /
Point in time: 2015-10-21T19:28:00Z
Service Status Version Public URI
database In Progress mysql:5.7
backend Pending ...6c6ed6e8d52388190 (pending)
worker Pending ...91698f97536e95126
Overall status: In Progress
Once the environment is ready, the CLI will show its up-to-date status:
Watching environment exciting-friday-64 status...
Point in time: 2015-10-21T19:28:00Z
Service Status Version Public URI
database Ready mysql:5.7
backend Ready ...6c6ed6e8d52388190
worker Ready ...91698f97536e95126
Overall status: Ready
Your environment is ready! To develop on your services with command <CMD>, please run:
veloctl env develop --service backend -- <CMD>
veloctl env develop --service worker -- <CMD>
Where is the website?
Velocity automatically resolves the dependencies of your services, so you don't have to worry about the other components required for your app to work.
In the current tutorial, neither the backend nor the worker require the website, so it wasn't created.
At this point, you can work directly against the backend created in your environment by using its new URL -

Running the backend

The environment was created in order to develop a new feature, and we want to run the backend service locally. Using veloctl env develop, we can wrap the backend's run command to start developing. Below are a few examples to give you the gist of how to wrap your run command:
Go
veloctl env develop --service backend -- go run .
Node.js (NPM)
veloctl env develop --service backend -- npm start
Ruby (Rails)
veloctl env develop --service backend -- rails server
Python (Flask)
veloctl env develop --service backend -- flask run
Docker
veloctl env develop --service backend -- docker run -v `pwd`:/app -t my-image
The double dash (--) is important: it separates your command and its arguments from the flags passed to veloctl. But don't worry, we'll remind you if you forget it!
veloctl env develop --service backend -v -- go run .
[VELOCTL] Developing service 'backend' in environment 'exciting-friday-64'...
[VELOCTL] Forwarding: localhost:50271 => database:3306
[VELOCTL] Executing command: go run .
[VELOCTL] Service 'backend' is also accessible via
Connecting to the database (`DATABASE_URL` env): mysql://RA6su4rv:[email protected]:50271/app
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.
[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)
[GIN-debug] GET /ping --> main.main.func1 (3 handlers)
[GIN-debug] Environment variable PORT="8080"
[GIN-debug] Listening and serving HTTP on :8080
The backend is now available locally (on port 8080) and also via the Public URI mentioned before. That URI is accessible to any of your coworkers, and it will proxy the incoming traffic to your local process!
What is this "Forwarding" line?
To let your locally run process access internal resources (like the database), we bind a random port on your computer and forward it to the resource. In this case, the MySQL database is reachable through the forwarded local port, and Velocity automatically stored the full connection string in the DATABASE_URL environment variable the backend expects.
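To illustrate what the backend does with that injected value, here is a minimal sketch of splitting a mysql:// connection string like the one shown above into its parts. parseDatabaseURL is a hypothetical helper for illustration only; many database drivers accept the URL form directly:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseDatabaseURL breaks a mysql:// connection string (the shape Velocity
// stores in DATABASE_URL) into the pieces a database driver needs.
func parseDatabaseURL(raw string) (user, host, dbname string, err error) {
	u, err := url.Parse(raw)
	if err != nil {
		return "", "", "", err
	}
	return u.User.Username(), u.Host, strings.TrimPrefix(u.Path, "/"), nil
}

func main() {
	// Placeholder credentials in the same shape as the tutorial output.
	user, host, db, err := parseDatabaseURL("mysql://RA6su4rv:secret@localhost:50271/app")
	if err != nil {
		panic(err)
	}
	fmt.Printf("user=%s host=%s db=%s\n", user, host, db)
	// → user=RA6su4rv host=localhost:50271 db=app
}
```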

Running the worker

See logs from the environment

When we created the environment, the worker service was created in the cloud. We can see the output from it by using the veloctl env logs command:
veloctl env logs --service worker
worker-cfdb86554-v8dfl worker 2015-10-21T19:28:02Z info: Queue backend: MySQL (mysql://RA6su4rv:[email protected]:3306/app)
worker-cfdb86554-v8dfl worker 2015-10-21T19:28:03Z info: 0 jobs in queue

Run the worker locally

Now that we have seen the worker is running in the environment, we might want to develop it locally too. Just as we started the previous service, we will use veloctl env develop to start the worker:
veloctl env develop --service worker -- yarn start
[VELOCTL] Developing service 'worker' in environment 'exciting-friday-64'...
[VELOCTL] Forwarding: localhost:50271 => database:3306
[VELOCTL] Executing command: yarn start
yarn run v1.22.10
$ node .
2015-10-21T19:28:22Z info: Queue backend: MySQL (mysql://RA6su4rv:[email protected]:50271/app)
2015-10-21T19:28:23Z info: 0 jobs in queue

Cleaning up

When you are done developing and ready to clean up the resources, use:
veloctl env destroy
? Are you sure you want to destroy the 'exciting-friday-64' environment? Yes
Destroying environment exciting-friday-64... \
Environment successfully destroyed

What's next?
