The videos I had been watching for uni don't even come close to what I learned in this video about On-Premise, IaaS, PaaS, and SaaS. To sum up, what I understood is: with On-Premise the company handles everything on its own; with IaaS the company manages the infrastructure but doesn't own it; with PaaS the company doesn't own the platform either, and it is handled by third-party vendors; and SaaS is ready-made software, such as the Microsoft Office suite.
Thank you, it's clear for me now 🤗. I have one question: once the hotfix branch changes are done, we need to merge that branch into the develop branch too, right, so the latest changes are present in all branches? Please clarify this for me.
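Yes: in Git Flow the hotfix is merged back into both the production branch and develop, so the fix survives into the next release. A minimal sketch in a throwaway repo (branch, path, and file names are illustrative, not from the video):

```shell
# Minimal Git Flow hotfix demo in a throwaway repo.
set -e
repo=/tmp/hotfix-demo && rm -rf "$repo" && mkdir -p "$repo" && cd "$repo"
git init -q
git checkout -qb main
git config user.email dev@example.com
git config user.name Dev

echo "v1" > app.txt
git add app.txt && git commit -qm "release v1"
git branch develop                       # ongoing development continues here

git checkout -qb hotfix/fix-crash main   # hotfix is cut from production (main)
echo "v1-fixed" > app.txt
git commit -qam "hotfix: fix production crash"

# Merge the fix into BOTH main and develop so no branch loses it.
git checkout -q main    && git merge -q --no-ff -m "merge hotfix into main" hotfix/fix-crash
git checkout -q develop && git merge -q --no-ff -m "merge hotfix into develop" hotfix/fix-crash
git branch -d hotfix/fix-crash           # safe to delete once merged everywhere
git log --oneline develop
```

On the next release, develop (which now contains the fix) flows into master as usual, so nothing is lost.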
Feature branch (bug fixing): once the fix is developed, it has to be tested on the staging environment and approved. Suppose the staging environment is already testing another feature branch at the same time; how would you make the decision?
Great learning video. At 34 min, can you please answer the question you asked: if there is a bug in production only, what branching strategy would we adopt to resolve it?
Lifecycle in Terraform? Copy-pasted from the Terraform documentation: lifecycle is a nested block that can appear within a resource block. The lifecycle block and its contents are meta-arguments, available for all resource blocks regardless of type. The arguments available within a lifecycle block are create_before_destroy, prevent_destroy, ignore_changes, and replace_triggered_by.
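Those four meta-arguments look like this in practice (a sketch; the resource names and values are illustrative, not from the video):

```hcl
resource "null_resource" "config_version" {
  triggers = { version = "v2" }
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"
  instance_type = "t3.micro"

  lifecycle {
    create_before_destroy = true   # spin up the replacement before destroying the old one
    prevent_destroy       = false  # set true to make any destroy plan fail for this resource
    ignore_changes        = [tags] # out-of-band tag edits won't show up as drift
    replace_triggered_by  = [null_resource.config_version] # replace when that resource changes
  }
}
```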
No, not that much programming, but it is an advantage. It's mostly tools-based, though in a few companies DevOps and SRE engineers spend about 40% of their overall work coding.
Why scan the Docker image? Security and vulnerability checks should be performed before building the Docker image. This seems more like an interview for a DevOps engineer with less than 8 years of experience; I didn't notice any DevOps architect skills or discussion during the interview from either side.
Before going ahead with this exam, is it required to learn anything first, or is this documentation/study material itself sufficient for someone like me who has no prior knowledge of Google Cloud? Your opinion on this?
QUESTIONS WITH ANSWERS (with discussion)

## CI/CD Pipeline and Kubernetes 🌟
The CI/CD pipeline uses Jenkins with a Git Flow branching strategy. Developers create their own feature branches, which are merged into the development branch. When a release is ready, a new release branch is created from the development branch and then merged into the master branch.

| Branch | Description |
| --- | --- |
| Feature | Developer-created branches for new features |
| Development | Main branch for ongoing development |
| Release | Branch created for a release |
| Master | Production-ready code |

Jenkins stages: SCM checkout, compile, build, test, code coverage, code quality analysis, security scan, vulnerability scan, delivery, deployment.

## Java-Based Web Application 📊
The application is written in Java and runs on Kubernetes, which the candidate has experience with.

## Rolling Updates 🔄
To perform a rolling update:
1. Create a deployment YAML file specifying the desired state of the application.
2. Mention the API version, kind, and metadata in the file.
3. Define the replicas, selectors, and pod template.
4. Specify the container image, ports, and other details.
5. Specify the rolling update strategy in the file.
6. Apply the file using `kubectl apply`.

To create a new version of the Docker image:
1. Create a Dockerfile.
2. Build an image from the Dockerfile.
3. Use the new image for the pods.

## Horizontal Pod Autoscaling (HPA) ⚖
"HPA is a resource that automatically scales the number of pods in a deployment, replica set, or stateful set based on metrics."

Setting up HPA:
1. Enable the metrics server in the cluster.
2. Create a horizontal pod autoscaler YAML file specifying the scale target reference, metrics, and other details.
3. Apply the file using `kubectl apply`.
4. Monitor the HPA using `kubectl get hpa` and `kubectl describe hpa`.

Custom metrics can also be used to drive HPA.
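The rolling-update and HPA steps above can be sketched as minimal manifests (names, image, and thresholds are placeholders, not from the interview):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the rollout
      maxUnavailable: 1    # at most one pod down at a time
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:2.0   # new image version triggers the rollout
          ports:
            - containerPort: 8080
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Applied with `kubectl apply -f`; `kubectl rollout status deployment/web-app` then tracks the update.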
To use custom metrics:
1. Enable the metrics server in the cluster.
2. Define custom metrics using Prometheus or other tools.

## Persistent Storage 📁
"Persistent storage refers to the ability to store and retain data beyond the lifetime of a single container. It allows data to be saved and accessed even if pods are rescheduled, terminated, or moved to different nodes within the cluster."

Setting up persistent storage:
1. Create a storage class specifying the type of storage.
2. Create a persistent volume claim (PVC) requesting storage resources based on the storage class.
3. Mount the PVC in the application.
4. Deploy the application.

| Term | Description |
| --- | --- |
| PV | Persistent Volume: a piece of storage in the cluster provisioned by an administrator |
| PVC | Persistent Volume Claim: a request for storage resources by a user or application |

## Multicontainer Pods 🚀
The candidate has experience with multicontainer pods, including sidecar containers:
- Use sidecar containers for logging or monitoring.
- Deploy a pod with multiple containers, each running a different application or service.

## Managing Secrets 🔒
Kubernetes Secrets store sensitive information; values are base64-encoded by default, and encryption at rest can be enabled on the cluster. We store sensitive information in Secrets and use them to manage confidential data in the cluster.

## Encryption and Secret Management 💻
We store sensitive information, such as API keys and database passwords, in a secret file; the key values are stored in HashiCorp Vault, organized by environment. We integrate Vault with Jenkins, which retrieves the passwords and secrets stored in the Vault.

## Service Mesh in Kubernetes 🌐
We use a service mesh, specifically Istio, for load balancing and traffic management.
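The persistent-storage steps above can be sketched as a PVC plus a pod mounting it (names are placeholders; this assumes a `standard` storage class exists in the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard     # assumed to exist
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /var/lib/data   # data here survives pod restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc
```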
Istio also provides mutual TLS, with client and server certificate verification.

## Pod Security Policies in Kubernetes 🔒
Pod security policies allow administrators to define rules controlling security configurations for pods. We implement them using the PodSecurity admission controller:
1. Enable the PodSecurity admission controller.
2. Write an admission configuration file listing the enabled admission plugins.
3. Define a constraint template with API version, kind, and spec.
4. Apply the constraint to enforce the host path policy.

## Custom Resource Definitions (CRDs) in Kubernetes 📚
CRDs allow us to define and use custom resources within our Kubernetes cluster. They extend the Kubernetes API with custom objects for specific applications. Example: we used CRDs with operators and controllers such as Prometheus, creating custom resources for managing Prometheus instances.

## Network Policies in Kubernetes 🔗
Network policies provide fine-grained control of network traffic within the cluster. We define rules and policies for controlling communication between pods:
- Ingress and egress rules
- A pod selector field specifying which pods the policy applies to
- Name and namespace scope
- Policies are additive: traffic is allowed if any applicable policy allows it

Main use case: implementing security and compliance by restricting access to sensitive data and services.

## Resource Quotas and Limits in Kubernetes ⚖
Resource quotas and limits ensure fair resource allocation and prevent resource exhaustion.
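A minimal network policy matching that description (labels and namespace are illustrative): it allows ingress to backend pods only from frontend pods, on one port.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: prod              # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: backend             # the policy applies to backend pods
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend    # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```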
Factors to consider:
- Resource requirements of the application
- Critical applications requiring guaranteed resources
- Less critical applications with burstable resources
- Namespace isolation and capacity planning

Implementation:
1. Define a resource quota based on CPU, memory, and other resources.
2. Attach the resource quota to a specific namespace.

## Horizontal Pod Scaling Based on Custom Metrics 📈
We implement horizontal pod scaling based on business-specific metrics using a monitoring tool and a custom metric provider:
1. Deploy a metrics server in the Kubernetes cluster.
2. Implement a custom metric provider exposing application-specific metrics.
3. Deploy and configure the Kubernetes custom metrics API server.
4. Use the custom metrics API with the Horizontal Pod Autoscaler (HPA).

Example: we used Prometheus as the monitoring tool and the Prometheus adapter for Kubernetes to expose custom metrics.

## Resource Management in Kubernetes 📈
To ensure that critical parts of the system receive the resources they need, we assign them the highest priority. This ensures they continue to function reliably even in emergency scenarios where resources are scarce. Kubernetes provides a field called `priorityClassName`, which allows us to assign a priority to pods; it is part of the Priority and Preemption feature.

## Kubernetes Job vs. Cron Job

| | Kubernetes Job | Cron Job |
| --- | --- | --- |
| Purpose | Run a single task to completion | Run a task periodically |
| Design | Designed for short-lived tasks | Designed for recurring tasks |
| Example | Batch processing, data migration | Backups, data synchronization |
| Restart | Pods can be automatically restarted if they fail | Automatically manages scheduling and execution |

Definition: a Kubernetes Job is a resource used to run a single task to completion. It is designed for short-lived tasks and can be used for batch processing or data migration.
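The quota and priority mechanisms described above can be sketched as minimal manifests (namespace, names, and values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a            # quota applies to this namespace only
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-priority
value: 1000000                 # higher value schedules first and can preempt lower-priority pods
globalDefault: false
description: "For critical workloads"
```

Pods opt in to the priority class via `priorityClassName: critical-priority` in their spec.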
Definition: a Cron Job is a resource used to run a task on a schedule. It is designed for recurring tasks and can be used for backups or data synchronization.

## Stateful Sets and Deployments

| | Stateful Sets | Deployments |
| --- | --- | --- |
| Purpose | Manage stateful applications | Manage stateless applications |
| Design | Designed for applications with ordered initialization requirements | Designed for horizontal scaling |
| Example | Databases, distributed systems | Web servers, microservices |
| Volume claims | Automatically manage PVCs for each pod | Do not manage PVCs |

Definition: a StatefulSet is a resource used to manage stateful applications, such as databases or distributed systems. It provides stable network identities and can automatically manage Persistent Volume Claims (PVCs) for each pod.

Definition: a Deployment is a resource used to manage stateless applications, such as web servers or microservices. It is designed for horizontal scaling and does not manage PVCs.

## Changing Replica Counts
There are two ways to change the replica count of a running replica set:
1. Edit the replica set manifest and apply it.
2. Use the imperative command `kubectl scale` with the desired replica count.

If the replica count is not changing as expected, check for:
- Error messages or warnings in the output logs
- Resource constraints specified in the replica set
- Pod termination delays
- Pod disruption settings, such as `minReadySeconds`

By checking these areas, you can identify and resolve issues preventing the replica count from changing.
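A minimal CronJob matching the description above (name, schedule, and image are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"        # every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure   # rerun the container if the backup fails
          containers:
            - name: backup
              image: registry.example.com/backup:1.0
              args: ["--target", "s3://backups"]   # illustrative arguments
```

For the replica-count section, the imperative path would be e.g. `kubectl scale replicaset my-rs --replicas=5` (replica set name illustrative).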
@LogicOpsLab But you can still install and bring up SonarQube on an EC2 machine and do some little tests. By your logic, Jenkins is not open source either, since it has a paid version, CloudBees, with more features and support. Terraform would be the same.
Hey, about the question you asked on whether a node is tainted: the answer is yes, we handle it by adding a toleration to the pod specification to allow it to be scheduled onto the tainted node.
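As a sketch, assuming a node was tainted with e.g. `kubectl taint nodes node1 dedicated=gpu:NoSchedule` (names illustrative), the matching toleration in the pod spec would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"     # must match the taint's effect
  containers:
    - name: worker
      image: registry.example.com/worker:1.0
```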
- How much experience do you have in Cloud?
- How much experience do you have in DevOps?
- Can you walk me through the current project you are working on and what exactly you do in it?
- For the application you folks are working on, what is the tech stack and what language is it written in?
- Can you walk me through the pipeline in this project, all the way to production?
- How does Amazon CloudTrail differ from CloudWatch Logs?
- Do you know how Amazon CloudTrail works?
- What exactly are you doing with Amazon ECS?
- Can you explain what Amazon ECS is and how it works?
- How does auto scaling work in ECS?
- Can you share a real-time scenario where you would choose AWS Fargate over the EC2 launch type?
- Can you explain Amazon RDS and how it differs from a traditional database hosting solution?
- What database engines are supported by Amazon RDS?
- Can you explain multi-AZ deployments in RDS? What is the difference between active-active and active-passive setups?
- Why should we use read replicas in Amazon RDS?
- What is the difference between Amazon RDS and Amazon Aurora?
- How are you securing data in RDS?
- Are you doing any automated backups or retention policies in RDS?
- How do you control connection limits, timeouts, and cache sizes in RDS?
- Is there a maintenance window concept in RDS?
- Can you describe a scenario in which you would choose Amazon RDS over running your own database on an EC2 instance?
- What source code management tool do you use?
- Can you explain a merge conflict in Git and how to resolve it?
- What is Git stash?
- What is the difference between git fetch and git pull?
- How do you resolve a detached HEAD state in Git?
- Have you ever done cherry-picking in Git?
- How do you think EKS differs from self-managed Kubernetes clusters?
- Can you explain what exactly you are doing with EKS in your current project?
- How would you set up auto-scaling for EKS cluster worker nodes for peak hours?
- How can you perform a Kubernetes version update with minimal disruption to services?
- How can you configure application scaling to handle increased load automatically in EKS?
- How do you integrate EBS volumes with your pods in EKS?
- How do you deploy your application on EKS across multiple AWS regions?
- Who has done all the setup for your current project?
I love your channel. I think the interviewee sounds a little off, like he's reading from a script rather than thinking through the questions. Hopefully this was not a real interview for a real job, or else it will give us all a bad name, like we're all fakes.
Could you help with a query? Say I have completed and earned some skill badges from Google Cloud Skills Boost on my journey toward the GCP ACE exam in 2024 (planning to take it around June; preparation is ongoing). Can I mention these badges on my resume, and in which section? I want to keep applying to jobs while working toward certification. I have 2 years of work experience in IT and am trying to switch to the DevOps/Cloud domain alongside preparing for cloud certifications like GCP ACE; as of now I don't have any certs.