HashiCorp Nomad

Shashwot Risal
Jan 27, 2022

Run Java Spring Boot Application Using Nomad with Mysql

Nomad is an orchestration tool that enables us to deploy and manage any containerized or legacy application using a single, unified workflow. Nomad can run a diverse set of workloads: Docker containers, non-containerized applications, microservices, and batch jobs.

Intro to HashiCorp Nomad

In this example we will see how to run a containerized Spring Boot application using Nomad.

Nomad Installation

Ubuntu

$ curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
$ sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
$ sudo apt-get update && sudo apt-get install nomad

Centos

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
$ sudo yum -y install nomad

Fedora

$ sudo dnf install -y dnf-plugins-core
$ sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
$ sudo dnf -y install nomad

Verify Installation

$ nomad
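If the installation succeeded, this prints the list of available Nomad subcommands. You can also check the installed version explicitly:

$ nomad version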

Nomad with Consul Connect Integration

Nomad integrates with Consul to provide secure service-to-service communication between Nomad jobs and task groups. In order to support Consul Connect, Nomad adds a new networking mode for jobs that enables tasks in the same task group to share their networking stack.
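As a rough sketch (not used in the example below), a Connect-enabled group declares bridge networking and a sidecar for its service; the group and service names here are hypothetical:

group "api" {
  network {
    mode = "bridge"
  }
  service {
    name = "api"
    port = "8080"
    connect {
      sidecar_service {}
    }
  }
}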

Consul Installation

$ mkdir nomad
$ cd nomad
$ wget https://releases.hashicorp.com/consul/1.8.5/consul_1.8.5_linux_amd64.zip
$ unzip consul_1.8.5_linux_amd64.zip

Run Consul

$ ./consul agent -server -client 127.0.0.1 -advertise 127.0.0.1 -data-dir /tmp/consul -ui -bootstrap

Verify that Consul is up and running by opening the UI in your browser:

http://127.0.0.1:8500

This shows an overview of running services and should so far only include Consul itself.
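If you prefer the command line, the Consul catalog API should report the same thing; with a fresh agent it should return only the consul service, roughly:

$ curl http://127.0.0.1:8500/v1/catalog/services
{"consul":[]}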

Running Nomad Cluster

Next, in another terminal, we run Nomad in both server and client mode, which together form a (single-node) cluster.

You can read the official docs on writing these HCL configuration files.

server.hcl

log_level= "DEBUG"bind_addr = "0.0.0.0"
data_dir= "/tmp/server"
server {
enabled = true
bootstrap_expect=1
}
consul {
address= "127.0.0.1:8500"
}

client.hcl

# Increase log verbosity
log_level = "DEBUG"

# Setup data dir
data_dir = "/tmp/client"

# Give the agent a unique name. Defaults to hostname
name = "client"

# Enable the client
client {
  enabled = true

  # For demo assume we are talking to server. For production,
  # this should be like "nomad.service.consul:4647" and a system
  # like Consul used for service discovery.
  servers = ["127.0.0.1:4647"]
}

# Modify our port to avoid a collision with server
ports {
  http = 6666
}

# Disable the dangling container cleanup to avoid interaction with other clients
plugin "docker" {
  config {
    gc {
      dangling_containers {
        enabled = false
      }
    }
  }
}

Now let's run the configurations:

For Server:

$ nomad agent -config server.hcl

For Client:

$ nomad agent -config client.hcl

Check the Nomad web UI from the browser at:

http://127.0.0.1:4646

We can now go back to the Consul UI and see that both nomad (the server component, which keeps state and makes scheduling decisions) and nomad-client (the component running the actual jobs) have been registered.

Our Nomad cluster is ready to accept jobs now. Jobs are specified in .nomad files and sent to the Nomad server to be placed in the cluster.
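You can also confirm this from the CLI before submitting anything; both commands talk to the server API on port 4646:

$ nomad server members
$ nomad node status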

In this example we will run a Spring Boot application with:

  • MySQL
  • The Spring Boot application talking to the database (via Consul service discovery)

Nomad Job

The Nomad job specification defines the schema for Nomad jobs. A job is broken down into smaller pieces with the following hierarchy:

job
 |_ group
     |_ task

Each job file has only a single job, however a job may have multiple groups, and each group may have multiple tasks. Groups contain a set of tasks that are co-located on a machine.
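In HCL this hierarchy translates into nested blocks, roughly like the following skeleton (the names here are placeholders; the full job file for this example follows below):

job "example" {
  group "example-group" {
    task "example-task" {
      # driver, config, resources, ...
    }
  }
}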

Configuration of the job uses the following syntax:

springbootapp.nomad

job "springboot" {
datacenters = ["dc1"]
group "mysql-server" {
count = 1
task "mysql-server" {
driver = "docker"
env {
MYSQL_ROOT_PASSWORD = "P@ssw0rd"
MYSQL_DATABASE = "task"
}
config {
image = "mysql:8"
port_map {
db = 3306
}
}
resources {
memory = 1024
cpu = 1000
network {
port "db" {}
}
}
service {
name = "db"
port = "db"
}
}
}
group "springboot" {
count = 2
task "serverapp" {
driver = "docker"
config {
image = "shashwot/springboot-app:200919-eb762c8"
port_map {
app = 8090
}
}
env{
SPRING_DATASOURCE_USERNAME= "root"
SPRING_DATASOURCE_PASSWORD= "P@ssw0rd"
}
template{
data = <<EOH
SPRING_DATASOURCE_URL= "jdbc:mysql://{{ range service "db" }}{{ .Address }}:{{ .Port }}{{ end }}/task?allowPublicKeyRetrieval=true"
EOH
destination = "mysql-server.env"
env = true
}
resources {
memory = 1024
cpu = 1000
network {
port "app" {}
}
}
}
}
}
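The template stanza above queries Consul for the db service at render time. Assuming the MySQL allocation were reachable at 172.17.0.2 on dynamic port 20987 (made-up values for illustration), Nomad would render mysql-server.env roughly as:

SPRING_DATASOURCE_URL= "jdbc:mysql://172.17.0.2:20987/task?allowPublicKeyRetrieval=true"

and load it into the task's environment because env = true.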

Running the job

We are now ready to schedule jobs on our local scheduler. Nomad will register these services with Consul.

$ nomad run springbootapp.nomad

Or, if you are targeting a Nomad server on a different host:

$ nomad run -address=http://<ip>:4646 springbootapp.nomad
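Once the job is submitted, placement and logs can be inspected with the usual Nomad commands (replace <alloc-id> with an allocation ID from the status output):

$ nomad status springboot
$ nomad alloc status <alloc-id>
$ nomad alloc logs <alloc-id> serverapp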

Production Settings

This is an example only, and when running a similar setup in production, the following issues should be addressed:

  • Consul should be a cluster (i.e. three or five instances)
  • Nomad should be a cluster (i.e. three or five server instances and a dynamic number of client instances)
  • Networking should be planned with respect to both security and routing/proxying
  • Backup for MySQL data (before considering any kind of container deployment for MySQL, we strongly recommend having both backup and restore processes in place)
  • Proper risk assessment and evaluation of your storage/volume requirements (a minimal persistent-volume sketch follows this list)
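For the storage point, one minimal sketch (assuming Nomad 0.10+ host volumes, with paths and names chosen for this example) is to expose a directory from the client and mount it into the MySQL task so the data survives container restarts:

# client.hcl (inside the client block)
client {
  enabled = true
  host_volume "mysql-data" {
    path      = "/opt/mysql/data"
    read_only = false
  }
}

# springbootapp.nomad (inside the mysql-server group and task)
group "mysql-server" {
  volume "mysql-data" {
    type      = "host"
    source    = "mysql-data"
    read_only = false
  }
  task "mysql-server" {
    volume_mount {
      volume      = "mysql-data"
      destination = "/var/lib/mysql"
    }
  }
}

This only keeps data on a single client node; a production setup would still need off-node backups as noted above.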
