Table of contents
- 1. What is Kubernetes and why is it important?
- 2. What is the difference between Docker Swarm and Kubernetes?
- 3. How does Kubernetes handle network communication between containers?
- 4. How does Kubernetes handle scaling of applications?
- 5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
- 6. Can you explain the concept of rolling updates in Kubernetes?
- 7. How does Kubernetes handle network security and access control?
- 8. Can you give an example of how Kubernetes can be used to deploy a highly available application?
- 9. What is a namespace in Kubernetes? Which namespace does a pod take if we don't specify any namespace?
- 10. How does Ingress help in Kubernetes?
- 11. Explain the different types of services in Kubernetes.
- 12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
- 13. How does Kubernetes handle storage management for containers?
- 14. What are the access modes available for a PersistentVolume (PV)?
- 15. How does the NodePort service work?
- 16. What are a multi-node cluster and a single-node cluster in Kubernetes?
- Single cluster
- Multiple clusters
- 17. What is the difference between create and apply in Kubernetes?
- Example of kubectl apply
- Example of kubectl create
- 18. What is NFS in Kubernetes?
- 19. What are the various things that can be done to increase Kubernetes security?
- 20. How to run a Pod on a particular node?
1. What is Kubernetes and why is it important?
Ans: Kubernetes (also known as k8s or “Kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications.
One of the benefits of Kubernetes is that it makes building and running complex applications much simpler. Here’s a handful of the many Kubernetes features:
Standard services like local DNS and basic load-balancing that most applications need, and are easy to use.
Standard behaviors (e.g., restart this container if it dies) that are easy to invoke, and do most of the work of keeping applications running, available, and performant.
A standard set of abstract “objects” (called things like “pods,” “replicasets,” and “deployments”) that wrap around containers and make it easy to build configurations around collections of containers.
A standard API that applications can call to easily enable more sophisticated behaviors, making it much easier to create applications that manage other applications.
The simple answer to "what is Kubernetes used for" is that it saves developers and operators a great deal of time and effort, and lets them focus on building features for their applications, instead of figuring out and implementing ways to keep their applications running well, at scale.
By keeping applications running despite challenges (e.g., failed servers, crashed containers, traffic spikes, etc.), Kubernetes also reduces business impacts, reduces the need for fire drills to bring broken applications back online, and protects against other liabilities, like the costs of failing to comply with Service Level Agreements (SLAs).
2. What is the difference between Docker Swarm and Kubernetes?
Below are the main differences between Kubernetes and Docker Swarm:
The installation procedure for K8s is quite involved, but once it is installed, the cluster is robust. The Docker Swarm installation process, on the other hand, is very simple, but the cluster is not nearly as robust.
Kubernetes can auto-scale pods based on incoming load, while Docker Swarm cannot.
Kubernetes is a full-fledged framework. Since it maintains cluster state more consistently, scaling is not as fast as in Docker Swarm.
| Point of comparison | Kubernetes | Docker Swarm |
| --- | --- | --- |
| Main selling point | A complete container orchestration solution with advanced automation features and high customization | An emphasis on ease of use and a seamless fit with other Docker products |
| Installation | Somewhat complex, as you need to install (and learn to use) kubectl | Quick and easy setup (if you already run Docker) |
| Learning curve | High learning curve (but more features) | Lightweight and easy to use (but limited functionality) |
| GUI | Detailed native dashboards | No out-of-the-box dashboards (but you can integrate a third-party tool) |
| Cluster setup | Difficult to start a cluster (but the cluster is very strong once set up) | Easy to start a cluster |
| Availability features | Self-healing, intelligent scheduling, and replication features | Availability controls and service duplication |
| Scalability | All-in-one scaling based on traffic | Values scaling quickly (approx. 5x faster than K8s) over scaling automatically |
| Horizontal auto-scaling | Yes | No |
| Monitoring capabilities | Built-in monitoring and logging | Basic server log and event tools, but needs a third-party tool for advanced monitoring |
| Load balancing | No built-in mechanism for automatic load balancing | Internal load balancing |
| Security features | Supports multiple security protocols (RBAC authorization, TLS/SSL, secrets management, policies, etc.) | Relies on transport layer security (TLS) and access-control-related tasks |
| CLI | Needs a separate CLI (kubectl) | Integrated Docker CLI, which can limit functionality in some use cases |
| Community | Huge and active community | Reasonably popular, but the user base has been shrinking since the Mirantis acquisition |
| Optimal use case | High-demand apps with a complex configuration | Simple apps that are quick to deploy and easy to manage |
3. How does Kubernetes handle network communication between containers?
Kubernetes defines a flat network model in which every pod gets its own IP address, and delegates the implementation to network plugins via the Container Network Interface (CNI). The network plugin is responsible for allocating IP addresses to pods and enabling pods to communicate with each other within the Kubernetes cluster.
4. How does Kubernetes handle scaling of applications?
Autoscaling is when you configure your application to automatically adjust the number of pods based on current demand and resource availability. For example, if too few pods are running, the system can automatically create more pods to meet demand. Conversely, it can also scale down if there are too many pods. There are three types of autoscaling in Kubernetes: the Horizontal Pod Autoscaler, the Vertical Pod Autoscaler, and the Cluster Autoscaler. Let's dive in and understand each option in detail.
Three autoscaling tools you can use:
HPA (Horizontal Pod Autoscaler): increases or decreases the number of pods based on CPU utilization (or other metrics).
VPA (Vertical Pod Autoscaler): automatically sets container resource requests and limits based on usage.
Cluster Autoscaler: increases or decreases the size of the Kubernetes cluster itself based on the presence of pending pods and various node utilization metrics.
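An HPA is itself a Kubernetes object. A minimal sketch of one (the Deployment name `web` and the thresholds here are illustrative assumptions, not from a real cluster):

```yaml
# Hypothetical HPA: keep a Deployment named "web" between 2 and 10
# replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The same effect can be achieved imperatively with `kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=70`.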
5. What is a Kubernetes Deployment and how does it differ from a ReplicaSet?
Deployment and ReplicaSet are used to manage the lifecycle of pods in Kubernetes. Deployment provides higher-level abstractions and additional features such as rolling updates, rollbacks, and versioning of the application. ReplicaSet is a lower-level abstraction that provides basic scaling mechanisms. When choosing between Deployment and ReplicaSet, consider the level of control and features required for the application.
| Deployments | ReplicaSet |
| --- | --- |
| A high-level abstraction that manages ReplicaSets. It provides additional features such as rolling updates, rollbacks, and versioning of the application. | A lower-level abstraction that manages the desired number of replicas of a pod. It provides basic scaling and self-healing mechanisms. |
| A Deployment manages a template of pods and uses ReplicaSets to ensure that the specified number of replicas of the pod is running. | A ReplicaSet only manages the desired number of replicas of a pod. |
| A Deployment provides a mechanism for rolling updates and rollbacks of the application, enabling seamless updates and reducing downtime. | Applications must be manually updated or rolled back. |
| It provides versioning of the application, allowing us to manage multiple versions of the same application and easily roll back to a previous version if necessary. | A ReplicaSet doesn't provide this feature. |
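A minimal Deployment manifest illustrates the relationship (names and image tag are illustrative): applying it causes the Deployment controller to create a ReplicaSet, which in turn keeps three pods running.

```yaml
# Illustrative Deployment; it creates and manages a ReplicaSet
# that keeps 3 nginx pods running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

You can see the generated ReplicaSet with `kubectl get rs`; its name is derived from the Deployment's name plus a hash of the pod template.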
6. Can you explain the concept of rolling updates in Kubernetes?
Users expect applications to be available all the time, and developers are expected to deploy new versions of them several times a day. In Kubernetes this is done with rolling updates. Rolling updates allow a Deployment's update to take place with zero downtime by incrementally replacing Pod instances with new ones. The new Pods are scheduled on Nodes with available resources.
These are the important terms in rolling updates:
maxUnavailable: specifies the maximum number of pods that can be unavailable during an update. Optional; can be specified as a percentage or an absolute number.
maxSurge: specifies the maximum number of pods that can be created beyond the desired state during the upgrade. Optional; can be specified as a percentage or an absolute number.
timeoutSeconds: the time (in seconds) to wait for the rolling event before timing out. On reaching the specified time, it automatically rolls back to the previous deployment. Optional; if left blank, the default value is 600 seconds.
intervalSeconds: specifies the time gap in seconds after an update. Optional; if left blank, the default value is 1 second.
updatePeriodSeconds: the time to wait between individual pod migrations or updates. Optional; if left blank, the default value is 1 second.
Note that maxUnavailable and maxSurge are fields of a Deployment's RollingUpdate strategy; the last three come from the legacy kubectl rolling-update mechanism for ReplicationControllers.
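In a Deployment, the rolling-update behavior is configured under spec.strategy. A sketch (the replica count and limits are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod may be down during the update
      maxSurge: 1         # at most 1 extra pod above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

With these settings, updating the image (e.g., `kubectl set image deployment/web web=nginx:1.26`) replaces pods one at a time, keeping at least 3 of the 4 replicas serving traffic throughout.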
7. How does Kubernetes handle network security and access control?
Kubernetes uses RBAC (Role-Based Access Control) and a set of network policies to handle access control and network security. Network policies are defined to limit the traffic allowed to reach specific pods. Access control policies restrict unwanted users and allow only users with specific permissions.
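A network policy sketch (all labels and the port are hypothetical): only pods labeled app=frontend may reach pods labeled app=backend on TCP port 8080; all other ingress traffic to the backend pods is dropped.

```yaml
# Hypothetical NetworkPolicy restricting ingress to backend pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend      # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that network policies are only enforced if the cluster's CNI plugin supports them (e.g., Calico or Cilium).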
8. Can you give an example of how Kubernetes can be used to deploy a highly available application?
Kubernetes provides high availability of applications in a cluster by using a number of features and components, including:
Replication Controllers: These ensure that a specified number of replicas of a given pod are running at all times. If a pod goes down, the replication controller automatically creates a new one to take its place.
Services: Services provide a stable endpoint for pods, regardless of their underlying IP address. This means that if a pod goes down and is replaced by a new one with a different IP address, the service will still be able to route traffic to it.
Health checks: Kubernetes includes built-in health checks for pods, which can be used to determine whether or not a pod is functioning correctly. If a pod fails a health check, it can be replaced automatically by the replication controller.
Self-healing: Kubernetes includes self-healing capabilities that can automatically detect and recover from failures in the system, such as a node going down or a pod crashing.
All of these features work together to provide a robust and highly available environment for running applications in a Kubernetes cluster. Additionally, Kubernetes also provides features such as automatic scaling and load balancing, which can be used to optimize the performance and availability of applications.
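Health checks are typically declared as probes on a container. A minimal sketch (the HTTP paths and timings are illustrative assumptions about the application):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:          # restart the container if this check fails
        httpGet:
          path: /healthz
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:         # remove the pod from Service endpoints while failing
        httpGet:
          path: /ready
          port: 80
        periodSeconds: 5
```

The liveness probe drives the self-healing behavior described above, while the readiness probe keeps traffic away from pods that are up but not yet able to serve.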
9. What is a namespace in Kubernetes? Which namespace does a pod take if we don't specify any namespace?
You can think of a namespace as a virtual cluster inside your Kubernetes cluster. You can have multiple namespaces inside a single Kubernetes cluster, and they are all logically isolated from each other. They can help you and your teams with organization, security, and even performance!
Kubernetes starts with a few initial namespaces:
default: the namespace for objects with no other namespace. If you don't specify a namespace, a pod is created in the default namespace.
kube-system: the namespace for objects created by the Kubernetes system.
kube-public: created automatically and readable by all users; it is mostly reserved for cluster usage.
10. How does Ingress help in Kubernetes?
Ingress is an API object that provides routing rules to manage external users' access to the services in a Kubernetes cluster, typically over HTTP/HTTPS. With Ingress, you can easily set up rules for routing traffic without creating a bunch of load balancers or exposing each service on the node.
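An Ingress sketch (the host, Service name, and ingress class are assumptions; an ingress controller such as ingress-nginx must be running in the cluster for the rules to take effect):

```yaml
# Illustrative Ingress: route traffic for example.com to a Service
# named "web" on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Adding more rules lets a single load balancer fan traffic out to many services by host or path.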
11. Explain the different types of services in Kubernetes.
ClusterIP. Exposes a service which is only accessible from within the cluster.
NodePort. Exposes a service via a static port on each node’s IP.
LoadBalancer. Exposes the service via the cloud provider’s load balancer.
ExternalName. Maps a service to a predefined externalName field by returning a value for the CNAME record.
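A sketch of the default type, ClusterIP (the labels and ports are illustrative): it gives pods labeled app=web a single stable virtual IP and DNS name inside the cluster.

```yaml
# Illustrative ClusterIP Service: in-cluster clients reach "web" on
# port 80; traffic is forwarded to matching pods on port 8080.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the container listens on
```

Changing `spec.type` to NodePort or LoadBalancer layers external access on top of this same selector mechanism.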
12. Can you explain the concept of self-healing in Kubernetes and give examples of how it works?
Kubernetes ensures that the actual state of the cluster and the desired state of the cluster are always in sync. This is made possible through continuous monitoring within the Kubernetes cluster. Whenever the state of the cluster drifts from what has been defined, the various components of Kubernetes work to bring it back to its defined state. This automated recovery is often referred to as self-healing.
So, let's take one of the pods mentioned in the prerequisite and see what happens when we delete it:
kubectl delete pod nginx-deployment-example-f4cd8584-f494x
13. How does Kubernetes handle storage management for containers?
Containers are ephemeral: they don't write data permanently to any storage location, so when a container is deleted, all the data generated during its lifetime is deleted with it. This gives rise to two problems.
First, files are lost when the container crashes; second, files can't be shared between containers. That's where Kubernetes volumes come into the picture. At its core, a volume is just a directory, possibly with some data in it, which is accessible to the containers in a pod. It solves both of these problems.
In Kubernetes persistent storage, a PersistentVolume (PV) is a piece of storage within the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. The main feature of a PV is that it has an independent life cycle which Kubernetes manages, and it continues to live when the pods accessing it get deleted.
A PersistentVolumeClaim (PVC) is a request for storage by a user. The claim can include specific storage parameters required by the application, for example, an amount of storage or a specific type of access (RWO – ReadWriteOnce, ROX – ReadOnlyMany, RWX – ReadWriteMany, etc.).
Kubernetes looks for a PV that meets the criteria defined in the user's PVC, and if there is one, it matches the claim to the PV and binds the PV to that PVC.
The first component, PersistentVolume, is a cluster resource, like CPU or RAM, which is created and provisioned by administrators. The second component, PersistentVolumeClaim, is a user's or pod's request for a persistent volume. With the third component, StorageClass, you can dynamically provision PersistentVolumes and thus automate the storage provisioning process.
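A PV/PVC pair might look like the following sketch (the hostPath, names, and sizes are assumptions; hostPath is only suitable for single-node test clusters):

```yaml
# Illustrative statically provisioned PV...
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# ...and a PVC that Kubernetes will bind to it, since the access mode
# matches and the requested size fits.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A pod then consumes the storage by referencing `pvc-demo` under `spec.volumes` with a `persistentVolumeClaim` entry.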
14. What are the access modes available for a PersistentVolume (PV)?
ReadWriteOnce (RWO): the volume can be mounted as read-write by a single node.
ReadOnlyMany (ROX): the volume can be mounted read-only by many nodes.
ReadWriteMany (RWX): the volume can be mounted as read-write by many nodes.
15. How does the NodePort service work?
A NodePort service is the most primitive way to get external traffic directly to your service. NodePort, as the name implies, opens a specific port on all the Nodes (the VMs), and any traffic that is sent to this port is forwarded to the service.
Port range: 30000–32767 (the default NodePort range).
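A NodePort sketch (labels, ports, and the chosen nodePort are illustrative): traffic to any node's IP on port 30080 is forwarded to matching pods.

```yaml
# Illustrative NodePort Service: <any-node-IP>:30080 → pods labeled
# app=web on container port 8080.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80          # ClusterIP port (still usable inside the cluster)
      targetPort: 8080  # container port
      nodePort: 30080   # must fall within 30000–32767
```

If `nodePort` is omitted, Kubernetes picks a free port from the range automatically.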
16. What are a multi-node cluster and a single-node cluster in Kubernetes?
Single cluster
A single cluster is a basic infrastructure platform. Everything that needs to be executed is deployed to the current Kubernetes cluster.
Pros:
Easy administration
Cheap
Efficient use of resources
Multiple clusters
Multiple Kubernetes clusters make up a Kubernetes multi-cluster environment. They can be set up in various ways – within the confines of a single physical host, with a variety of hosts in the same data center, or using the same cloud provider in multiple geographies.
Pros:
Improves scalability
Apps isolation
Effective cluster management
As apps are segregated, there are fewer security issues
17. What is the difference between create and apply in Kubernetes?
The key difference between kubectl apply and create is that apply creates Kubernetes objects through a declarative syntax, while the create command is imperative.
Example of kubectl apply
Write a deployment.yaml file and run: kubectl apply -f deployment.yaml
Example of kubectl create
Now, let's use kubectl create to try to create a deployment imperatively, like so:
kubectl create deployment mydeployment --image=nginx
18. What is NFS in Kubernetes?
Ans: One of the most useful types of volumes in Kubernetes is NFS.
NFS stands for Network File System – it’s a shared filesystem that can be accessed over the network. The NFS must already exist – Kubernetes doesn’t run the NFS; pods just access it.
An NFS volume is useful for two reasons.
One, what’s already stored in the NFS is not deleted when a pod is destroyed. Data is persistent.
Two, an NFS can be accessed from multiple pods at the same time. An NFS can be used to share data between pods!
This is useful for running applications that need a filesystem that’s shared between multiple application servers. You can use an NFS to run WordPress on Kubernetes!
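A pod mounting an existing NFS export might look like this sketch (the server address and export path are assumptions; remember that Kubernetes does not run the NFS server itself):

```yaml
# Illustrative pod mounting a pre-existing NFS export.
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: shared-data
      nfs:
        server: 10.0.0.5     # assumed NFS server IP
        path: /exports/web   # assumed export path
```

Because the export lives outside the cluster, several pods can mount the same volume simultaneously and share its contents.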
19. What are the various things that can be done to increase Kubernetes security?
By default, a POD can communicate with any other POD; we can set up network policies to limit this communication between PODs.
RBAC (Role-based access control) to narrow down the permissions.
Use namespaces to establish security boundaries.
Set the admission control policies to avoid running the privileged containers.
Turn on audit logging.
20. How to run a POD on a particular node?
Various methods are available to achieve this.
nodeName: specify the name of a node in the POD spec configuration; the scheduler will try to run the POD on that specific node.
nodeSelector: assign a specific label to the node that has special resources, and use the same label in the POD spec so that the POD runs only on that node.
nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution and preferredDuringSchedulingIgnoredDuringExecution are hard and soft requirements for running the POD on specific nodes. Node affinity is a more expressive mechanism intended to eventually replace nodeSelector, and it likewise depends on node labels.
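A nodeSelector sketch (the disktype=ssd label is hypothetical; you would first apply it with `kubectl label nodes <node-name> disktype=ssd`):

```yaml
# Illustrative pod pinned to nodes carrying the label disktype=ssd.
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd       # only nodes with this label are eligible
  containers:
    - name: app
      image: nginx:1.25
```

If no node carries the label, the pod stays Pending until one does; nodeAffinity's "preferred" rules avoid that by treating the constraint as a soft preference instead.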
Thank you for reading!! Hope you find this helpful.
#day37 #90daysofdevops #devopscommunity
Always open for suggestions..!!