Kubernetes Networking
Services in Kubernetes:
The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a network. Each Service object defines a logical set of endpoints (usually these endpoints are Pods) along with a policy about how to make those pods accessible.
For example, consider a stateless image-processing backend which is running with 3 replicas. Those replicas are fungible—frontends do not care which backend they use. While the actual Pods that compose the backend set may change, the frontend clients should not need to be aware of that, nor should they need to keep track of the set of backends themselves.
The Service abstraction enables this decoupling.
The set of Pods targeted by a Service is usually determined by a selector that you define. To learn about other ways to define Service endpoints, see Services without selectors.
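For instance, a minimal Service manifest with a selector could look like this (my-service and the app: MyApp label are placeholders for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Route traffic to any Pod carrying this label.
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80         # port the Service listens on
      targetPort: 9376 # port the selected Pods listen on
```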
If your workload speaks HTTP, you might choose to use an Ingress to control how web traffic reaches that workload. Ingress is not a Service type, but it acts as the entry point for your cluster. An Ingress lets you consolidate your routing rules into a single resource, so that you can expose multiple components of your workload, running separately in your cluster, behind a single listener.
The Gateway API for Kubernetes provides extra capabilities beyond Ingress and Service. You can add Gateway to your cluster - it is a family of extension APIs, implemented using CustomResourceDefinitions - and then use these to configure access to network services that are running in your cluster.
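As a rough sketch, assuming the Gateway API CRDs are installed and a Gateway named example-gateway already exists (both hypothetical), an HTTPRoute could direct traffic to a Service like this:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  # Attach this route to an existing Gateway (hypothetical name).
  parentRefs:
    - name: example-gateway
  rules:
    # Send all matching traffic to this backend Service (hypothetical name).
    - backendRefs:
        - name: example-svc
          port: 80
```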
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default that is used if you don't explicitly specify a type for a Service. You can expose the Service to the public with an Ingress or the Gateway API.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP. If you set the type field to NodePort, the Kubernetes control plane allocates a port from a range specified by the --service-node-port-range flag (default: 30000-32767). Each node proxies that port (the same port number on every Node) into your Service. Your Service reports the allocated port in its .spec.ports[*].nodePort field. Using a NodePort gives you the freedom to set up your own load-balancing solution, to configure environments that are not fully supported by Kubernetes, or even to expose one or more nodes' IP addresses directly.
In my demo, I created a Deployment and exposed it with a Service of type: NodePort on port 30008 (see the example manifest under "Choosing your own port" below).
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. Note: You need either kube-dns version 1.7 or CoreDNS version 0.0.8 or higher to use the ExternalName type.
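For example, a minimal ExternalName Service could look like this (my-database is a placeholder name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  # Cluster DNS answers lookups for this Service with a CNAME to this name.
  externalName: foo.bar.example.com
```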
Choosing your own port:
If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
Here is an example manifest for a Service of type: NodePort that specifies a NodePort value (30008, in this example).
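(A minimal sketch; my-service and the app: MyApp selector are placeholder names.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    - port: 80
      targetPort: 80
      # Optional field; if omitted, the control plane allocates a port
      # from the configured range (default: 30000-32767).
      nodePort: 30008
```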
Ingress:
FEATURE STATE: Kubernetes v1.19 [stable]
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
Terminology:
For clarity, this guide defines the following terms:
Node: A worker machine in Kubernetes, part of a cluster.
Cluster: A set of Nodes that run containerized applications managed by Kubernetes. For this example, and in most common Kubernetes deployments, nodes in the cluster are not part of the public internet.
Edge router: A router that enforces the firewall policy for your cluster. This could be a gateway managed by a cloud provider or a physical piece of hardware.
Cluster network: A set of links, logical or physical, that facilitate communication within a cluster according to the Kubernetes networking model.
Service: A Kubernetes Service that identifies a set of Pods using label selectors. Unless mentioned otherwise, Services are assumed to have virtual IPs only routable within the cluster network.
What is Ingress?
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
Here is a simple example where an Ingress sends all its traffic to one Service:
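(A minimal sketch; the Service name test is a placeholder. An Ingress with no rules sends everything to its defaultBackend.)

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  # With no rules defined, all traffic is routed to this default backend.
  defaultBackend:
    service:
      name: test
      port:
        number: 80
```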
Default Ingress Class:
You can mark a particular IngressClass as default for your cluster. Setting the ingressclass.kubernetes.io/is-default-class annotation to true on an IngressClass resource will ensure that new Ingresses without an ingressClassName field specified will be assigned this default IngressClass.
Caution: If you have more than one IngressClass marked as the default for your cluster, the admission controller prevents creating new Ingress objects that don't have an ingressClassName specified. You can resolve this by ensuring that at most 1 IngressClass is marked as default in your cluster.
There are some ingress controllers that work without the definition of a default IngressClass. For example, the Ingress-NGINX controller can be configured with the flag --watch-ingress-without-class. It is recommended, though, to specify a default IngressClass:
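(A minimal sketch; the nginx-example name and the k8s.io/ingress-nginx controller string are illustrative and depend on the controller you run.)

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-example
  annotations:
    # Marks this class as the cluster default for new Ingresses
    # that don't set ingressClassName.
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
```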
Network Policies:
There are many situations where you need to permit or deny traffic to or from specific sources. Network Policies are the Kubernetes mechanism for specifying how groups of Pods are allowed to communicate with one another and with external endpoints.
By default, all Pods in a cluster can communicate with each other; they are non-isolated. A Pod becomes isolated once a NetworkPolicy in its namespace selects it: from that point on, only the traffic that the policy explicitly allows can reach or leave the Pod.
As an example, consider a set of policies for a WordPress application with a MySQL backend. The strategy involves the following conditions (a sketch of such a policy appears after this list):
- Blocking the default Kubernetes behavior that allows all traffic.
- Ensuring that only Pods labeled wordpress in the default namespace can communicate with one another.
- Allowing incoming (ingress) connectivity from the public internet on ports 80/443 and from the 172.16.0.0 IP range.
- Only allowing the Pods labeled wordpress to connect to the MySQL database Pods on port 3306.
- Always allowing connectivity to the Kubernetes DNS service (port 53).
- Blocking all outgoing connectivity outside of the cluster.
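Here is a minimal sketch of a single policy covering most of these conditions. It assumes hypothetical app: wordpress and app: mysql labels and a /16 mask on the 172.16.0.0 range (the prefix length isn't stated above):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wordpress-policy
  namespace: default
spec:
  # Selecting these Pods isolates them: only the traffic allowed
  # below can reach or leave them.
  podSelector:
    matchLabels:
      app: wordpress   # hypothetical label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow web traffic from the public internet on ports 80/443.
    - from:
        - ipBlock:
            cidr: 0.0.0.0/0
      ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
    # Allow traffic from the 172.16.0.0/16 range (prefix length assumed).
    - from:
        - ipBlock:
            cidr: 172.16.0.0/16
    # Allow the wordpress Pods to communicate with one another.
    - from:
        - podSelector:
            matchLabels:
              app: wordpress
  egress:
    # Only allow connections to the MySQL Pods on port 3306.
    - to:
        - podSelector:
            matchLabels:
              app: mysql   # hypothetical label
      ports:
        - protocol: TCP
          port: 3306
    # Always allow DNS lookups (port 53).
    - ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```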
DNS:
Kubernetes, also known as K8s, is an open-source orchestration system for automating the deployment, scaling, and management of containerized applications. Its portability, flexibility, and automatic scaling capabilities make it an extensively used system. Among its standout features, the ability to create DNS records for Services and Pods sets it apart from other orchestration systems. The Kubernetes DNS service allows you to contact Services with consistent DNS names instead of IP addresses.
The Domain Name System (DNS) is a mechanism for linking various sorts of information, such as IP addresses, with easy-to-remember names. Using a DNS system to translate request names into IP addresses makes it easy for end-users to reach their target services. Most Kubernetes clusters include an internal DNS service configured by default to offer a lightweight approach to service discovery. Even as Pods and Services are created, deleted, or shifted between nodes, built-in service discovery makes it simple for applications to identify and communicate with services in the cluster.
Before the version 1.11 release, the Kubernetes DNS service was based on kube-dns. However, security and stability remained a concern, so the Kubernetes community introduced CoreDNS in version 1.11 to address those kube-dns issues.
No matter which software version you are using to handle DNS records, kube-dns and CoreDNS function in a similar way:
A kube-dns service and one or more pods are created.
The kube-dns service monitors the Kubernetes API for service and endpoint events and changes its DNS entries as appropriate. These events are triggered automatically when you create, edit, or delete Services and their related Pods.
The kubelet sets the cluster IP of the kube-dns service as the nameserver option in every new Pod's /etc/resolv.conf, along with suitable search settings to allow for shorter hostnames:
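(A typical example, assuming the common cluster.local domain and a cluster DNS Service IP of 10.96.0.10; your cluster's values may differ.)

```
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```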
Container Network Interface (CNI) Plugins:
Kubernetes 1.27 supports Container Network Interface (CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your cluster and that suits your needs. Different plugins are available (both open- and closed-source) in the wider Kubernetes ecosystem.
A CNI plugin is required to implement the Kubernetes network model.
You must use a CNI plugin that is compatible with the v0.4.0 or later releases of the CNI specification. The Kubernetes project recommends using a plugin that is compatible with the v1.0.0 CNI specification (plugins can be compatible with multiple spec versions).
Network Plugin Requirements
For plugin developers and users who regularly build or deploy Kubernetes, the plugin may also need specific configuration to support kube-proxy. The iptables proxy depends on iptables, and the plugin may need to ensure that container traffic is made available to iptables. For example, if the plugin connects containers to a Linux bridge, the plugin must set the net/bridge/bridge-nf-call-iptables sysctl to 1 to ensure that the iptables proxy functions correctly. If the plugin does not use a Linux bridge, but uses something like Open vSwitch or some other mechanism instead, it should ensure container traffic is appropriately routed for the proxy.
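On nodes where the plugin uses a Linux bridge, that typically comes down to something like this (a sketch; run as root on the node):

```
sysctl net.bridge.bridge-nf-call-iptables=1
```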
By default, if no kubelet network plugin is specified, the noop plugin is used, which sets net/bridge/bridge-nf-call-iptables=1 to ensure simple configurations (like Docker with a bridge) work correctly with the iptables proxy.
Thank you for reading!! Hope you find this helpful.
#KubeWeekChallenge #TrainWithShubham #90DaysOfDevOps #DevOpsCommunity
Always open for suggestions..!!
Thank you, Shubham Londhe!!