Introducing the Kubernetes service
Kubernetes deployments create and destroy pods dynamically. In a typical three-tier web architecture, this is a problem when the frontend and backend run in different pods: frontend pods cannot reliably connect to backend pods whose IP addresses keep changing. The service abstraction in Kubernetes resolves this problem.
A Kubernetes service enables network access to a logical set of pods, which is usually defined using labels. When a network request is made to a service, the service selects all the pods carrying the given label and forwards the request to one of them.
A Kubernetes service is defined using a YAML Ain't Markup Language (YAML) file, as follows:
apiVersion: v1
kind: Service
metadata:
  name: service-1
spec:
  type: NodePort
  selector:
    app: app-1
  ports:
  - nodePort: 29763
    protocol: TCP
    port: 80
    targetPort: 9376
In this YAML file, the following applies:
- The type property defines how the service is exposed to the network.
- The selector property defines the label used to select the pods that back the service.
- The port property defines the port on which the service is exposed inside the cluster.
- The targetPort property defines the port on which the container is listening.
- The nodePort property defines the static port on which the service is exposed on every node (used only for NodePort services).
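For illustration, the following is a minimal sketch of a pod that this selector would match. The pod name and image are hypothetical, but the app: app-1 label and the container port 9376 correspond to the selector and targetPort values in the service definition:
apiVersion: v1
kind: Pod
metadata:
  name: app-1-pod                     # illustrative name
  labels:
    app: app-1                        # matched by the selector of service-1
spec:
  containers:
  - name: app
    image: example.com/app-1:latest   # hypothetical image listening on port 9376
    ports:
    - containerPort: 9376             # the port referenced by targetPort
Any pod carrying the app: app-1 label becomes a backend for service-1, and requests arriving at the service are forwarded to port 9376 of one of those pods.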
Services are usually defined with a selector, a label attached to the pods that should be part of the service. A service can also be defined without a selector; this is usually done to access external services or services in a different namespace. Services without selectors are mapped to a network address and a port using an Endpoints object, as follows:
apiVersion: v1
kind: Endpoints
metadata:
  name: external-service   # illustrative name; must match the selector-less service
subsets:
- addresses:
  - ip: 192.123.1.22
  ports:
  - port: 3909
This Endpoints object attaches to the service with the same name and routes traffic destined for that service to 192.123.1.22:3909.
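For the Endpoints object to take effect, a service with the same name and no selector must also exist. As a sketch, using the illustrative name external-service from the preceding Endpoints object, such a service could look like this:
apiVersion: v1
kind: Service
metadata:
  name: external-service   # same name as the Endpoints object
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3909
With this pair in place, traffic sent to external-service on port 80 inside the cluster is forwarded to 192.123.1.22:3909.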
Service discovery
To find Kubernetes services, developers either use environment variables or the Domain Name System (DNS), detailed as follows:
- Environment variables: When a service is created, a set of environment variables of the form [NAME]_SERVICE_HOST and [NAME]_SERVICE_PORT is injected into pods that start after the service is created. These environment variables can be used by other pods or applications to reach the service, as illustrated in the following code snippet:
DB_SERVICE_HOST=192.122.1.23
DB_SERVICE_PORT=3909
- DNS: The DNS service is added to Kubernetes as an add-on. Kubernetes supports two DNS add-ons: CoreDNS and Kube-DNS. The DNS service maintains a mapping of service names to IP addresses, which pods and applications use to connect to services.
Clients can locate the service IP from environment variables as well as through a DNS query, and there are different types of services to serve different types of client.
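As a sketch of DNS-based discovery, assuming the service-1 service from earlier lives in the default namespace, a pod can resolve it using a cluster DNS name of the form <service>.<namespace>.svc.cluster.local:
apiVersion: v1
kind: Pod
metadata:
  name: dns-client           # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client
    image: busybox
    # The cluster DNS add-on resolves the service name to its cluster IP
    command: ["nslookup", "service-1.default.svc.cluster.local"]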
Service types
A service can have four different types, as follows:
- ClusterIP: This is the default value. This service is only accessible within the cluster. A Kubernetes proxy can be used to access ClusterIP services externally. Using kubectl proxy is acceptable for debugging but is not recommended for production services, as it requires kubectl to run as an authenticated user.
- NodePort: This service is accessible via a static port on every node. A NodePort exposes one service per port, and changes to node IP addresses have to be managed manually, which makes NodePort unsuitable for production environments.
- LoadBalancer: This service is accessible via a load balancer. Provisioning a load balancer per service is usually an expensive option.
- ExternalName: This service maps to a Canonical Name (CNAME) record, which is used to access the service, as illustrated in the code block after this list.
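As an example of the last type, the following is a minimal sketch of an ExternalName service; the names external-db and db.example.com are illustrative:
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  # DNS queries for external-db return a CNAME record pointing to db.example.com
  externalName: db.example.com
Pods in the cluster can then reach the external database simply by using the external-db name.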
There are thus a few types of service to choose from, and they all work at Layer 3 and Layer 4 of the OSI model; none of them can route a network request at Layer 7. For routing requests to applications by path or hostname, it would be ideal if the Kubernetes service supported such a feature. Let's see, then, how an ingress object can help here.
Ingress for routing external requests
Ingress is not a type of service but is worth mentioning here. Ingress is a smart router that provides external HTTP and HTTPS (HyperText Transfer Protocol Secure) access to a service in a cluster. Services that use protocols other than HTTP/HTTPS can only be exposed using the NodePort or LoadBalancer service types. An Ingress resource is defined using a YAML file, like this:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: service-1
          servicePort: 80
This minimal Ingress spec forwards all traffic for the /testpath path to the service-1 service.
Ingress objects have five different variations, listed as follows:
- Single-service Ingress: This exposes a single service by specifying a default backend and no rules, as illustrated in the following code block:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  backend:
    serviceName: service-1
    servicePort: 80
This ingress exposes a dedicated IP address for service-1.
- Simple fanout: A fanout configuration routes traffic from a single IP to multiple services based on the Uniform Resource Locator (URL), as illustrated in the following code block:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service-1
          servicePort: 8080
      - path: /bar
        backend:
          serviceName: service-2
          servicePort: 8080
This configuration allows requests to foo.com/foo to reach service-1 and requests to foo.com/bar to reach service-2.
- Name-based virtual hosting: This configuration uses multiple hostnames for a single IP to reach out to different services, as illustrated in the following code block:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  rules:
  - host: foo.com
    http:
      paths:
      - backend:
          serviceName: service-1
          servicePort: 80
  - host: bar.com
    http:
      paths:
      - backend:
          serviceName: service-2
          servicePort: 80
This configuration allows requests to foo.com to connect to service-1 and requests to bar.com to connect to service-2. Both hostnames are served from the same IP address in this case.
- Transport Layer Security (TLS): A secret can be added to the ingress spec to secure the endpoints, as illustrated in the following code block:
apiVersion: extensions/v1beta1
kind: Ingress
spec:
  tls:
  - hosts:
    - ssl.foo.com
    secretName: secret-tls
  rules:
  - host: ssl.foo.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service-1
          servicePort: 443
With this configuration, the secret-tls secret provides the private key and certificate for the endpoint.
- Load balancing: A load balancing ingress provides a load balancing policy, which includes the load balancing algorithm and weight scheme for all ingress objects.
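The load balancing policy itself is not part of the Ingress spec; it is configured on the ingress controller. As a sketch, assuming the NGINX ingress controller is in use, a controller-specific annotation such as nginx.ingress.kubernetes.io/load-balance can select the balancing algorithm:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: lb-ingress             # illustrative name
  annotations:
    # controller-specific; assumes the NGINX ingress controller
    nginx.ingress.kubernetes.io/load-balance: "round_robin"
spec:
  backend:
    serviceName: service-1
    servicePort: 80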
In this section, we introduced the basic concept of the Kubernetes service, including Ingress objects. These are all Kubernetes objects. However, the actual network communication magic is done by several components, such as kube-proxy. Next, we will introduce the CNI and CNI plugins, which form the foundation of network communication in a Kubernetes cluster.