Spring Boot Microservices Deployment on Kubernetes
Hello! I would like to show you how to launch a local Kubernetes cluster (minikube), develop an app using the Spring Boot framework, and deploy it as a container on Kubernetes.
The app lets you:
- Find a customer's information (GET)
- Add a customer's information (POST)
- Update a customer's information (PUT)
- Remove a customer's information (DELETE)
Getting started with Spring Boot initialization
You will build a simple web application with Spring Boot and add some useful services to it.
First we need to create a Spring Boot application, which can be done in a number of ways.
Using the Spring Initializr website
1. Visit https://start.spring.io and choose the Java language and Maven project.
2. Enter the following coordinates:
- Artifact: jpa_project
- Packaging: Jar
- Java: 8
3. Add the following dependencies:
- Spring Web: Build web, including RESTful, applications using Spring MVC. Uses Apache Tomcat as the default embedded container.
- Spring Boot DevTools: Provides fast application restarts, LiveReload, and configurations for enhanced development experience.
- Spring Data JPA: Persist data in SQL stores with Java Persistence API using Spring Data and Hibernate.
- Lombok: Java annotation library which helps to reduce boilerplate code.
- MySQL Driver: MySQL JDBC and R2DBC driver.
- H2 Database: Provides a fast in-memory database that supports JDBC API and R2DBC access. Supports embedded and server modes as well as a browser based console application.
4. Click “Generate project”
- The .zip file contains a standard project in the root directory, so you might want to create an empty directory before you unpack it.
Using IntelliJ IDEA
Spring Initializr is also integrated into IntelliJ IDEA and lets you create and import a new project without leaving the IDE for the command line or the web UI.
Saving and retrieving data
When the main page of your application loads, two things happen:
- It tells you whether it has connected to the database successfully
- It returns all customer records registered in the database
First, you should create a Customer class that holds the customer's details.
import java.util.Date;

import javax.persistence.*;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;

@Entity
@Table(name = "CUSTOMER")
@Data
@NoArgsConstructor
@AllArgsConstructor
@ToString
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "ID")
    private Long id;

    @Column(name = "NAME", length = 25)
    private String name;

    @Column(name = "BIRTH")
    @Temporal(TemporalType.DATE)
    private Date birthDate;

    @Column(name = "SALARY")
    private int salary;
}
- @Data is an annotation that automatically generates the getters, setters, hashCode(), and equals() methods.
- @NoArgsConstructor adds a default constructor with no arguments.
public Customer(){}
- Conversely, @AllArgsConstructor adds a constructor that takes all of the class's fields as arguments.
public Customer(Long id, String name, Date birthDate, int salary) {
    this.id = id;
    this.name = name;
    this.birthDate = birthDate;
    this.salary = salary;
}
- @Id marks this field as the primary key.
- @Column(name = "ID") defines the name of the database column in which this field's data is stored.
Besides our Customer class, we also need a repository interface in order to store the Customer data.
public interface CustomerRepository extends JpaRepository<Customer, Long> {}
Pay attention to how Customer entities are persisted in the database:
- The type of the entity is Customer
- The ID of the Customer is of type Long
You should notice these two types in the interface signature JpaRepository<Customer, Long>.
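JpaRepository<Customer, Long> already provides the standard CRUD methods used later in this guide (findAll, save, deleteById, findById, and so on). If you need extra lookups, Spring Data can derive them from the method name. The findByName method below is only an illustration and is not part of the original application:
import java.util.List;

import org.springframework.data.jpa.repository.JpaRepository;

public interface CustomerRepository extends JpaRepository<Customer, Long> {

    // Derived query: Spring Data generates the implementation from the method name.
    // Illustrative only; the original interface declares no extra methods.
    List<Customer> findByName(String name);
}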
Spring Boot gives us access to the repository by auto-wiring it with @Autowired, which means it generates a class that implements the interface and injects an instance of it.
Next, create a new class annotated with @RestController that maps the RESTful methods (GET, POST, PUT, DELETE) of your application to the different URL requests.
import java.util.List;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

@RestController
public class CustomerController {
    @Autowired
    private CustomerRepository customerRepository;

    @GetMapping("/customers")
    public List<Customer> customers() {
        return customerRepository.findAll();
    }

    @PostMapping("/")
    public Customer saveCustomer(final @RequestBody Customer customer) {
        return customerRepository.save(customer);
    }

    @DeleteMapping("/customers/{id}")
    public void deleteCustomer(@PathVariable Long id) {
        customerRepository.deleteById(id);
    }
}
When a user accesses the /customers route, they should see all customers.
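The introduction also lists updating a customer (PUT), which the controller above does not yet implement. Here is a minimal sketch of what such an endpoint could look like inside CustomerController; the /customers/{id} path and the field-by-field copy are my own choices, not part of the original code:
// Requires: import org.springframework.http.HttpStatus;
//           import org.springframework.web.server.ResponseStatusException;
@PutMapping("/customers/{id}")
public Customer updateCustomer(@PathVariable Long id, @RequestBody Customer customer) {
    // Load the existing customer, copy the new values onto it, and save it back.
    return customerRepository.findById(id)
            .map(existing -> {
                existing.setName(customer.getName());
                existing.setBirthDate(customer.getBirthDate());
                existing.setSalary(customer.getSalary());
                return customerRepository.save(existing);
            })
            .orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND));
}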
Your application is now functional, although not yet complete. We can already run it at this stage, but to do so we also need a running MySQL instance.
To test your application and check that it works properly, I suggest using a temporary in-memory SQL database, which requires no more than one line in application.properties:
spring.datasource.url=jdbc:h2:mem:DB_CUSTOMER
Here DB_CUSTOMER is the name of your database. After running your application, you can access the H2 web console by visiting http://localhost:8080/h2-console (adjust the port if you changed the server port).
Don't forget to set the JDBC URL in the login form to your own database name. By default the user name is 'sa' and the password is blank.
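If the console does not come up, or you prefer to make the settings explicit, here is a minimal sketch of the H2-related entries in application.properties, assuming the default 'sa' user with an empty password:
spring.datasource.url=jdbc:h2:mem:DB_CUSTOMER
spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.username=sa
spring.datasource.password=
spring.h2.console.enabled=true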
If you log in with the correct credentials, you will see the CUSTOMER table.
Sooner or later, though, it will be necessary to connect to a real MySQL server. We have two ways to provide the connection settings.
- Declare all parameters in the application.properties file:
spring.datasource.url=jdbc:mysql://localhost:3702/DB_CUSTOMER
spring.datasource.username=root
spring.datasource.password=password
- Add the environment variables manually or declare them in YAML files. Spring Boot automatically detects these environment variables and treats them as part of application.properties:
SPRING_DATASOURCE_URL corresponds to spring.datasource.url
SPRING_DATASOURCE_USERNAME corresponds to spring.datasource.username
SPRING_DATASOURCE_PASSWORD corresponds to spring.datasource.password
Now you're ready to build the Docker image.
- A Docker image can be built from a Dockerfile:
FROM openjdk:11
WORKDIR /opt
ENV PORT 8082
EXPOSE 8082
COPY target/*.jar /opt/app.jar
ENTRYPOINT exec java $JAVA_OPTS -jar app.jar
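To build the image from this Dockerfile, assuming it sits in the project root and the jar has already been produced with ./mvnw package, you can run:
docker build -t jpa_project .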
- A Docker image can also be built with the Fabric8 Maven plugin (docker-maven-plugin). Add it to your pom.xml:
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <images>
            <image>
                <name>jpa_project</name>
            </image>
        </images>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>build</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Run ./mvnw clean install to build your Docker image in your local environment.
You can then run the container with port 8080 published, so that you can open http://localhost:8080 in your browser:
docker run -p 8080:8080 jpa_project
Upload the container image to a container registry
You may be familiar with how Docker pulls images from the internet. You can also create your own Docker Hub repository with your own images and push them to the internet.
To use Docker Hub, you first have to create a Docker ID.
A Docker ID is your Docker Hub username.
Before uploading your image, note that there is one last thing to do. Images uploaded to Docker Hub must have a name of the form username/image:tag, where:
- username is your Docker ID
- image is the name of the image (you can look it up on your local machine with the command line: docker image ls)
- tag is an optional attribute used to indicate the version of the image (for example image_name:1.0.0, where 1.0.0 is the tag, i.e. the version of the image)
Let’s upload our image:
- Rename your image according to this format:
docker tag <image_name> <username>/<image_name>:<version>
- Upload your image to Docker Hub:
docker push <username>/<image_name>:<version>
Your image is now available as <username>/<image_name>:<version> on your Docker Hub, and everybody can download and run it if you make it a public image instead of a private one.
Because an official MySQL image is already available on Docker Hub, we don't need to upload any MySQL image to our own Docker Hub.
Deploying your application on Kubernetes
Container orchestrators are designed to run complex applications with large numbers of scalable components.
They work by inspecting the underlying infrastructure and determining the best server to run each container.
They can scale to thousands of computers and tens of thousands of containers and still work efficiently and reliably.
As a microservice, your application may face a sudden surge of requests from the internet. That's why a load balancer is becoming more and more indispensable.
So in this chapter, I will show you how to create replicated Pods for your Spring Boot service, bind them to an external load balancer, and connect them to a MySQL Pod as well.
You will use minikube to run your application's cluster on your local machine.
Minikube creates a single-node Kubernetes cluster running in a virtual machine. It is intended only for testing purposes, not for production.
Before you install minikube, you have to install kubectl.
kubectl is the primary Kubernetes CLI — you use it for all interactions with a Kubernetes cluster, no matter how the cluster was created.
Defining a Deployment
First of all, create a folder named k8s in your application directory:
mkdir k8s
The purpose of this folder is to hold all the Kubernetes YAML files that you will create.
A Deployment creates and runs containers and keeps them alive.
Here is the Deployment definition for your Spring Boot application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot
  namespace: springboot-project
  labels:
    app: springboot
spec:
  replicas: 3
  selector:
    matchLabels:
      app: springboot
  template:
    metadata:
      labels:
        app: springboot
    spec:
      containers:
        - name: springboot
          image: byckles/jpa_project:2.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8082
          env:
            - name: SPRING_DATASOURCE_URL
              valueFrom:
                configMapKeyRef:
                  name: mysql-configmap
                  key: database_url
            - name: SPRING_DATASOURCE_USERNAME
              valueFrom:
                secretKeyRef:
                  key: mysql-user-username
                  name: mysql-secret
            - name: SPRING_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-user-password
                  name: mysql-secret
That looks complicated, but we will break it down and explain it in detail.
- apiVersion: the API version of this resource type.
- kind: the type of resource.
- metadata.name: the name of this specific resource.
- spec.replicas: the number of replicas (copies) of your container.
- template.metadata.labels: the label of the Pods that wrap your Spring Boot container.
- selector.matchLabels: selects the Pods with a specific label as belonging to this Deployment resource.
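Note that the Deployment reads its database settings from a ConfigMap named mysql-configmap and a Secret named mysql-secret, which are not shown among this guide's manifests. Here is a minimal sketch of what they could look like; the database_url assumes the MySQL Service defined later (mysql-service on port 3306, database db_customers), and the credentials are placeholders:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-configmap
  namespace: springboot-project
data:
  database_url: jdbc:mysql://mysql-service:3306/db_customers
---
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
  namespace: springboot-project
type: Opaque
stringData:
  mysql-user-username: customer_user   # placeholder
  mysql-user-password: customer_pass   # placeholder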
Defining a Service
A Service resource makes Pods accessible to other Pods or users outside the cluster.
Without a Service, a Pod cannot be accessed at all.
A service forwards requests to a set of Pods.
In this regard, a Service works much like the load balancers we commonly talk about outside Kubernetes.
Here is the definition of a Service that makes your application Pods accessible from outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: springboot-service
  namespace: springboot-project
spec:
  selector:
    app: springboot
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8082
      targetPort: 8082
      nodePort: 30001
The Service looks for all Pods with the labels declared in spec.selector. In our case, it matches all Pods that have the label app: springboot and establishes a bridge between them and the outside of the cluster.
The next important part is the ports. In this case, the Service listens for requests on port 8082 and forwards them to port 8082 of the target Pods. Port 30001 is exposed on the cluster nodes, so we can reach the Service directly from outside the cluster on that port.
Defining the database
In principle, a MySQL Pod can be deployed similarly to your app, that is, by defining a Deployment and a Service resource.
However, deploying MySQL needs some additional configuration if you want to persist your data.
MySQL needs persistent storage.
This storage must not be affected by whatever happens to the MySQL Pod: if the MySQL Pod is deleted accidentally, the storage should still exist, and if the MySQL Pod is moved to another node, the storage must still be available.
Consequently, the description of your database components should consist of four resource definitions:
- Deployment
- Service
- PersistentVolumeClaim
- PersistentVolume
Here are the Service and Deployment definitions for MySQL.
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
  namespace: springboot-project
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  namespace: springboot-project
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_DATABASE
              value: db_customers
            - name: MYSQL_USER
              valueFrom:
                secretKeyRef:
                  key: mysql-user-username
                  name: mysql-secret
            - name: MYSQL_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-user-password
                  name: mysql-secret
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: mysql-user-password
                  name: mysql-secret
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
The volumes field defines a storage volume, which references the PersistentVolumeClaim.
The volumeMounts field mounts the referenced volume at the specified path in the container, in this case /var/lib/mysql. The data written to /var/lib/mysql is therefore stored in /mnt/data on the node's filesystem.
Here are the PersistentVolume and PersistentVolumeClaim definitions for MySQL.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  namespace: springboot-project
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  namespace: springboot-project
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
- PersistentVolumeClaim: it requests 20Gi of persistent storage (you can change the size according to your needs).
- Service: the Service is similar to the one we defined for the application component. By default its type is ClusterIP, so it can only be accessed from inside the cluster, which prevents the data from being modified from outside the cluster without going through our application.
There's one more important thing to note.
You need to declare all the necessary environment variables inside your Pods. Otherwise, Spring Boot cannot connect to MySQL and the MySQL container itself cannot start properly. So declare them in your YAML files.
- MYSQL_DATABASE: specifies the name of a database to be created on image startup.
- MYSQL_USER: the user name of a custom account that is granted superuser permissions.
- MYSQL_PASSWORD: the password of that custom account.
- MYSQL_ROOT_PASSWORD: the password of the root account.
env:
  - name: MYSQL_DATABASE
    value: xxx
  - name: MYSQL_USER
    value: xxx
  - name: MYSQL_PASSWORD
    value: xxx
  - name: MYSQL_ROOT_PASSWORD
    value: xxx
Deploying the application
So far, you have created all the YAML files with the resource definitions.
Let’s submit your resource definitions to Kubernetes.
First of all, make sure that all the YAML files are on your local machine. Also, make sure that the minikube cluster is running:
minikube start
Then submit your resource definitions to Kubernetes with the following command.
kubectl apply -f any_file.yaml
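Since all the manifests in this guide live in the k8s folder and reference the springboot-project namespace, you can also create the namespace once and then apply the whole folder in one go (assuming the namespace does not exist yet):
kubectl create namespace springboot-project
kubectl apply -f k8s/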
As soon as Kubernetes receives your resources, it creates the Pods. You can watch them come up with:
kubectl get pods --watch
When you try to find the external IP address, you will notice that the application Service stays pending with no result, even after a few minutes:
kubectl get services
Minikube doesn't support LoadBalancer Services, so the Service will never get an external IP. But you can access the Service anyway through its node port. You can get the IP and port by running:
minikube service <service_name>
Now we can do everything the API allows. For example, we can add new data, get all data, and remove data from the database.
Here I use Postman to call the service's URL.
- Add our first customer to MySQL
- Get all customers from MySQL
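If you prefer the command line, the same calls can be made with curl. Here <service_url> is a placeholder for the URL and port reported by minikube service, and the paths follow the controller defined earlier:
# Add a customer (POST to the path mapped by @PostMapping("/"))
curl -X POST <service_url>/ \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice", "salary": 3000}'

# List all customers
curl <service_url>/customers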
Since MySQL is attached to a PersistentVolume, we can expect the data to persist even if the MySQL Pods are removed. Let's check it out.
First of all, I delete the MySQL Pod on my local machine. Another MySQL Pod is created automatically, but with a different ID.
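A minimal way to reproduce this check, assuming the Pod name is the one reported by kubectl get pods:
# Delete the running MySQL Pod; the Deployment recreates it automatically
kubectl delete pod <mysql_pod_name>

# Watch the replacement Pod come up
kubectl get pods --watch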
When I GET the data one more time, right after the previous MySQL Pod has been removed and replaced by the newly generated one, the data still persists and is returned correctly.
So far, you've learnt how to:
- Develop a customer application that stores its data in MySQL
- Package it as a Docker container
- Deploy it in a local Kubernetes cluster
All files have already been pushed to my GitHub. 😜