Kubernetes Networking
Kubernetes networking is the way that containers and services within a Kubernetes cluster communicate with each other. The network infrastructure of a Kubernetes cluster is responsible for routing traffic between different components of the cluster, such as pods, nodes, and services.
There are several different networking options available for Kubernetes:
Kubernetes Service: This is the simplest networking option. It gives a set of pods a single, stable IP address and port that other components can use to reach them.
Container Network Interface (CNI): This is a plugin-based system that allows Kubernetes to support a variety of networking technologies, such as flannel, Calico, and Weave Net.
Ingress: Ingress is a Kubernetes resource that allows external traffic to access services within the cluster. It provides an HTTP(S) endpoint for external traffic and routes requests to the appropriate backend service within the cluster.
Network Policies: These are rules that define how traffic is allowed to flow within a Kubernetes cluster. They can be used to restrict traffic between pods and services, and to define which pods are allowed to communicate with each other.
1. Kubernetes Service:
Several types of Services can be used to expose applications running in a cluster to other components of the cluster or external clients. The type of Service that you choose depends on the use case and the requirements of your application. Here are the most common types of Services in Kubernetes:
ClusterIP:
a) This is the default type of Service in Kubernetes.
b) It provides a stable IP address that can be used by other components of the cluster to access the Service.
c) The Service is only accessible from within the cluster.
d) You can optionally set a fixed cluster IP in the service definition file (see the example below).
A typical use case is inter-service communication within the cluster, for example between the front-end and back-end components of an application.
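As a reference, a minimal ClusterIP service manifest might look like the following; the name my-clusterip-service, the label app: my-app, the ports, and the clusterIP value are example values only:
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  type: ClusterIP            # ClusterIP is the default type, so this field can be omitted
  clusterIP: 10.96.0.100     # optional: pin a specific IP from the cluster's service CIDR (example value)
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
Other pods in the cluster can then reach the backing pods through my-clusterip-service on port 80 (or via the assigned cluster IP), but the service is not reachable from outside the cluster.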
NodePort:
NodePort is a service that exposes a set of pods to the outside world by allocating a static port on each node in the cluster. This means that each node in the cluster will listen on the same port for incoming traffic, and any traffic that is received on that port will be forwarded to the appropriate pod.
a) This type of Service exposes the Service on a specific port on each node in the cluster.
b) The Service can be accessed using the IP address of any node in the cluster, along with the specified port number.
c) NodePort Services can be used to provide external access to a Service, but they do not provide load-balancing capabilities.
To create a NodePort service in Kubernetes, you can create a service manifest file and specify the type field as NodePort. Here is an example:
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30001
In the above example we are creating a NodePort service named my-nodeport-service that selects pods with the label app=my-app. The service exposes port 80 and forwards traffic to port 8080 on the pods. The nodePort field is set to 30001, which means that any traffic received on port 30001 on any node in the cluster will be forwarded to the service.
To access the service externally, you can use the IP address of any node in the cluster, along with the NodePort number. For example, if you have a node with IP address 10.0.0.1 and the NodePort is 30001, you can access the service at http://10.0.0.1:30001.
LoadBalancer:
This type of Service provisions a load balancer for the Service in cloud environments that support load balancers. The load balancer distributes traffic to the pods running the Service, providing external access and load-balancing capabilities.
A LoadBalancer is a service that distributes traffic across a set of pods using a load-balancing algorithm. When you create a LoadBalancer service, Kubernetes automatically provisions a load balancer in the cloud provider's infrastructure (such as AWS or Google Cloud) and assigns it a public IP address. This public IP address can be used to access the service from outside the cluster.
To create a LoadBalancer service in Kubernetes, you can create a service manifest file and specify the type field as LoadBalancer. Here is an example:
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
Here we are creating a LoadBalancer service named my-loadbalancer-service that selects pods with the label app=my-app. The service exposes port 80 and forwards traffic to port 8080 on the pods.
Once you apply this manifest file, Kubernetes will create a load balancer in the cloud provider's infrastructure, assign it a public IP address, and route traffic to the pods based on the load balancing algorithm.
To access the service externally, you can use the public IP address of the load balancer. Depending on the cloud provider, the IP address may take some time to become available after the service is created. You can check the status of the service by running kubectl get svc. Once the EXTERNAL-IP field shows a valid IP address, you can use that IP address to access the service.
ExternalName:
ExternalName is a Kubernetes Service type that allows you to create a virtual service that points to an external DNS name instead of an IP address. This is useful when you want to expose a service that is running outside of your Kubernetes cluster, such as a database or an API hosted on a different server.
When you create an ExternalName Service, Kubernetes creates a CNAME record in the cluster's DNS that points to the external DNS name. When a client within the cluster sends a request to the ExternalName Service, Kubernetes resolves the CNAME record to the external DNS name and returns it to the client.
Here's an example of how you can create an ExternalName Service in Kubernetes:
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: my-external-service.example.com
In this example, my-external-service is the name of the Kubernetes Service, and my-external-service.example.com is the external DNS name that the Service points to. Clients within the cluster can access the external service using the my-external-service name.
2. Container Network Interface (CNI):
The Container Network Interface (CNI) is a specification for network plugins in Kubernetes that allow containers to communicate with each other and with the outside world. CNI defines a standard API for network plugins to configure network interfaces for containers and define how containers are connected to the network.
In Kubernetes, CNI plugins are used to create virtual networks that enable communication between containers running on different nodes in the cluster. When a pod is scheduled to a node, the CNI plugin on that node is responsible for setting up the network interface for the pod and connecting it to the virtual network.
Some popular CNI plugins for Kubernetes include:
Calico: a CNI plugin that provides advanced networking and network security features for Kubernetes, including network policy enforcement and distributed firewalling.
Weave Net: a CNI plugin that provides a simple and easy-to-use networking solution for Kubernetes, with built-in support for DNS and service discovery.
When deploying Kubernetes clusters, it's important to choose a CNI plugin that fits your specific networking requirements and use case. CNI plugins can have a significant impact on cluster performance and scalability, so it's important to carefully evaluate and test different options before making a decision.
3. Ingress
An Ingress is an API object that provides a way to expose HTTP and HTTPS routes from outside the cluster to services within the cluster.
An Ingress is typically used to route traffic to different services based on the URL path or host name.
An Ingress controller is a software component that runs in the cluster and listens for requests to the Ingress API object. When a request is received, the Ingress controller uses the rules defined in the Ingress resource to route the request to the appropriate service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: myapp.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  name: http
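In this example, requests sent to the host myapp.com (on any path, since pathType is Prefix with path /) are routed to the my-service service on its port named http. Keep in mind that an Ingress resource only takes effect if an Ingress controller (for example, the NGINX Ingress Controller) is running in the cluster.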
4. Network Policies
Network Policies provide a way to enforce security and segmentation in your Kubernetes environment, allowing you to control access to sensitive data and resources.
To use Network Policies in Kubernetes, you must have a CNI plugin that supports the NetworkPolicy API. Some popular CNI plugins that support Network Policies include Calico, Weave Net, and Cilium.
Here's an example of how you can define a simple Network Policy in Kubernetes:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-network-policy
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: my-other-app
      ports:
        - protocol: TCP
          port: 80
In this example, the Network Policy allows traffic to flow from pods labeled with app: my-other-app to pods labeled with app: myapp on port 80 using the TCP protocol. All other ingress traffic to the selected pods is denied.
Some of the key features and benefits of using Kubernetes Network Policies include:
Fine-grained control over network traffic: Network Policies allow you to define complex rules that restrict traffic based on a wide range of attributes, including labels, namespaces, and protocols (see the sketch after this list).
Improved security: By controlling the flow of network traffic between pods, Network Policies can help improve the security of your Kubernetes environment and prevent unauthorized access to sensitive data and resources.
Simplified network management: With Network Policies, you can define your networking rules in code and apply them consistently across your Kubernetes cluster, making it easier to manage and troubleshoot your network configuration.
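As a sketch of the namespace-based control mentioned above, the following policy allows ingress to pods labeled app: myapp only from pods running in namespaces that carry a given label; the namespace label team: monitoring and port 8080 are hypothetical values chosen for the example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-monitoring
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:        # match source namespaces by label instead of source pods
            matchLabels:
              team: monitoring
      ports:
        - protocol: TCP
          port: 8080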