
Connect a Front End to a Back End Using a Service

This task shows how to create a frontend and a backend microservice. The backend microservice is a hello greeter. The frontend and backend are connected using a Kubernetes Service object.

Objectives

Create and run a sample hello backend microservice using a Deployment object.
Use a Service object to send traffic to the backend microservice's multiple replicas.
Create and run an nginx frontend microservice, also using a Deployment object.
Configure the frontend microservice to send traffic to the backend microservice.
Use a Service object of type LoadBalancer to expose the frontend microservice outside the cluster.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. Because the frontend Service uses an external load balancer, this task also requires a supported cloud provider.

To check the version, enter kubectl version.

Creating the backend using a Deployment

The backend is a simple hello greeter microservice. Here is the configuration file for the backend Deployment:

hello.yaml docs/tasks/access-application-cluster
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  selector:
    matchLabels:
      app: hello
      tier: backend
      track: stable
  replicas: 7
  template:
    metadata:
      labels:
        app: hello
        tier: backend
        track: stable
    spec:
      containers:
        - name: hello
          image: "gcr.io/google-samples/hello-go-gke:1.0"
          ports:
            - name: http
              containerPort: 80

Create the backend Deployment:

kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello.yaml
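
If you want to wait for the rollout to finish before continuing (an optional check, not part of the original steps), you can ask kubectl to watch the Deployment's progress:

kubectl rollout status deployment/hello

The command returns once all 7 replicas are available.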

View information about the backend Deployment:

kubectl describe deployment hello

The output is similar to this:

Name:                           hello
Namespace:                      default
CreationTimestamp:              Mon, 24 Oct 2016 14:21:02 -0700
Labels:                         app=hello
                                tier=backend
                                track=stable
Annotations:                    deployment.kubernetes.io/revision=1
Selector:                       app=hello,tier=backend,track=stable
Replicas:                       7 desired | 7 updated | 7 total | 7 available | 0 unavailable
StrategyType:                   RollingUpdate
MinReadySeconds:                0
RollingUpdateStrategy:          1 max unavailable, 1 max surge
Pod Template:
  Labels:       app=hello
                tier=backend
                track=stable
  Containers:
   hello:
    Image:              "gcr.io/google-samples/hello-go-gke:1.0"
    Port:               80/TCP
    Environment:        <none>
    Mounts:             <none>
  Volumes:              <none>
Conditions:
  Type          Status  Reason
  ----          ------  ------
  Available     True    MinimumReplicasAvailable
  Progressing   True    NewReplicaSetAvailable
OldReplicaSets:                 <none>
NewReplicaSet:                  hello-3621623197 (7/7 replicas created)
Events:
...

Creating the backend Service object

The key to connecting a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses selector labels to find the Pods that it routes traffic to.

First, explore the Service configuration file:

hello-service.yaml docs/tasks/access-application-cluster
kind: Service
apiVersion: v1
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http

In the configuration file, you can see that the Service routes traffic to Pods that have the labels app: hello and tier: backend, matching the labels on the Pods created by the backend Deployment.
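
Because selection is purely label based, you can preview which Pods the Service will route to by querying with the same labels (a quick optional check, assuming the backend Deployment above is running):

kubectl get pods -l app=hello,tier=backend

The output should list the 7 Pods created by the hello Deployment.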

Create the hello Service:

kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/hello-service.yaml

At this point, you have a backend Deployment running, and you have a Service that can route traffic to it.
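
To confirm that the Service has found the backend Pods (another optional check), you can list its endpoints:

kubectl get endpoints hello

The ENDPOINTS column lists the IP address and port of each backend Pod.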

Creating the frontend

Now that you have your backend, you can create a frontend that connects to the backend. The frontend connects to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is “hello”, which is the value of the name field in the preceding Service configuration file.
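
If you want to confirm that this name resolves inside the cluster (an optional check using a throwaway busybox Pod, assuming you are working in the same namespace as the hello Service), you can run:

kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup hello

The lookup should return the cluster IP of the hello Service.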

The Pods in the frontend Deployment run an nginx image that is configured to find the hello backend Service. Here is the nginx configuration file:

frontend/frontend.conf docs/tasks/access-application-cluster/frontend
upstream hello {
    server hello;
}

server {
    listen 80;

    location / {
        proxy_pass http://hello;
    }
}

Similar to the backend, the frontend has a Deployment and a Service. The configuration for the Service has type: LoadBalancer, which means that the Service uses the default load balancer of your cloud provider.

frontend.yaml docs/tasks/access-application-cluster
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: hello
    tier: frontend
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: hello
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
        tier: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "gcr.io/google-samples/hello-frontend:1.0"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]

Create the frontend Deployment and Service:

kubectl create -f https://k8s.io/docs/tasks/access-application-cluster/frontend.yaml

The output verifies that both resources were created:

deployment "frontend" created
service "frontend" created

Note: The nginx configuration is baked into the container image. A better way to do this would be to use a ConfigMap, so that you can change the configuration more easily.
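
A minimal sketch of the ConfigMap approach (not part of this task; the ConfigMap name and mount path are only examples, the mount path assumes the image reads extra server configuration from /etc/nginx/conf.d as the stock nginx image does, and other fields such as the preStop hook are omitted for brevity): first create a ConfigMap from the nginx configuration file.

kubectl create configmap frontend-nginx-conf --from-file=frontend.conf

Then mount it in the frontend Deployment's Pod template so that it replaces the baked-in file:

      containers:
      - name: nginx
        image: "gcr.io/google-samples/hello-frontend:1.0"
        volumeMounts:
        - name: nginx-conf
          mountPath: /etc/nginx/conf.d
      volumes:
      - name: nginx-conf
        configMap:
          name: frontend-nginx-conf

With this layout, changing the nginx configuration means updating the ConfigMap and restarting the frontend Pods instead of rebuilding the image.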

Interact with the frontend Service

Once you’ve created a Service of type LoadBalancer, you can use this command to find the external IP:

kubectl get service frontend

The EXTERNAL-IP field may take some time to populate. Until the load balancer has been provisioned, the external IP is listed as <pending>.

NAME       CLUSTER-IP      EXTERNAL-IP   PORT(S)  AGE
frontend   10.51.252.116   <pending>     80/TCP   10s

Repeat the command until it shows an external IP address:

NAME       CLUSTER-IP      EXTERNAL-IP        PORT(S)  AGE
frontend   10.51.252.116   XXX.XXX.XXX.XXX    80/TCP   1m
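
Instead of re-running the command manually, you can let kubectl watch the Service for changes:

kubectl get service frontend --watch

Press Ctrl+C to stop watching once the external IP appears.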

Send traffic through the frontend

The frontend and backend are now connected. You can reach the endpoint by using the curl command with the external IP of your frontend Service.

curl http://<EXTERNAL-IP>

The output shows the message generated by the backend:

{"message":"Hello"}

What's next

Learn more about Services.
Learn more about ConfigMaps.
