Wednesday, November 6, 2024

Azure DevOps Challenging Questions & Answers

1. CI/CD Pipeline Design and Optimization

How do you set up a CI/CD pipeline in Azure DevOps from scratch, and what are key components of the pipeline?

To set up a CI/CD pipeline in Azure DevOps:

1. Create a new project and repository.


2. Define a YAML pipeline file in the repository, specifying stages like build, test, and deploy.


3. Add triggers to automate builds upon code changes.


4. Configure agent pools for different environments.


5. Set up environments for dev, test, and production with required approvals.


6. Key components include Triggers, Jobs, Tasks, Stages, Environments, and Artifacts.
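A minimal azure-pipelines.yml sketch tying these components together (stage, job, and environment names are illustrative placeholders):

trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildJob
        steps:
          - script: mkdir -p out && echo "build output" > out/app.txt
            displayName: Build and package
          - publish: out
            artifact: drop
  - stage: Deploy
    dependsOn: Build
    jobs:
      - deployment: DeployDev
        environment: dev   # approvals and checks are configured on the environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploying artifact 'drop'"
                  displayName: Deploy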




What are some best practices to optimize CI/CD pipelines in Azure DevOps?

Best practices include:

Using parallel jobs to speed up execution.

Defining reusable templates to avoid redundancy.

Setting up caching for dependencies.

Automating testing early in the pipeline.

Enabling resource governance to control costs.

Using Azure DevTest Labs for quick provisioning of testing environments.
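For example, the caching and template practices above might look like this sketch (assuming an npm project and a templates/build-steps.yml file in the repository):

steps:
  - template: templates/build-steps.yml    # reusable steps template (assumed to exist)
  - task: Cache@2                           # cache npm downloads between runs
    inputs:
      key: 'npm | "$(Agent.OS)" | package-lock.json'
      path: $(Pipeline.Workspace)/.npm
  - script: npm ci --cache $(Pipeline.Workspace)/.npm
    displayName: Install dependencies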



How would you set up multi-stage deployments in Azure DevOps pipelines?

In the YAML pipeline, define multiple stages for each environment (e.g., Dev, QA, Prod). Use environments with appropriate approvals and checks for controlled rollouts. Each stage should include relevant jobs for that environment (e.g., deploy-to-dev, deploy-to-qa, deploy-to-prod).
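A sketch of such a layout (the Dev, QA, and Prod environments are assumed to already exist with their approvals configured):

stages:
  - stage: DeployDev
    jobs:
      - deployment: deploy_to_dev
        environment: Dev
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to Dev"
  - stage: DeployQA
    dependsOn: DeployDev
    jobs:
      - deployment: deploy_to_qa
        environment: QA      # an approval check pauses the run here
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to QA"
  - stage: DeployProd
    dependsOn: DeployQA
    jobs:
      - deployment: deploy_to_prod
        environment: Prod
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to Prod"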



2. Containerization and Kubernetes

How would you deploy an application using Azure Kubernetes Service (AKS) via Azure DevOps?

Create a CI/CD pipeline where:

1. CI builds and pushes the Docker image to Azure Container Registry (ACR).


2. CD pulls the image and deploys it to AKS using kubectl or Helm.


3. AKS is configured with service accounts and roles for secure deployments.
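A sketch of the CI and CD steps using the Docker and KubernetesManifest tasks (the service connection names my-acr-connection and my-aks-connection, plus the image and manifest paths, are placeholders):

steps:
  - task: Docker@2                      # CI: build the image and push it to ACR
    inputs:
      command: buildAndPush
      containerRegistry: 'my-acr-connection'
      repository: 'myapp'
      dockerfile: '**/Dockerfile'
      tags: '$(Build.BuildId)'
  - task: KubernetesManifest@0          # CD: deploy the manifests to AKS
    inputs:
      action: deploy
      kubernetesServiceConnection: 'my-aks-connection'
      manifests: 'k8s/deployment.yaml'
      containers: 'myregistry.azurecr.io/myapp:$(Build.BuildId)'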




Explain how you would integrate Helm charts with Azure DevOps to manage Kubernetes deployments.

In the pipeline, add a Helm install/upgrade task. Ensure the pipeline has access to Helm charts stored in a repository. Use Helm values files for different environments and use Helm lifecycle hooks for controlled rollouts.
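A sketch of the Helm upgrade task (the service connection, resource group, cluster, and chart paths are placeholders):

- task: HelmDeploy@0
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscription: 'my-azure-connection'
    azureResourceGroup: 'my-rg'
    kubernetesCluster: 'my-aks'
    command: 'upgrade'
    chartType: 'FilePath'
    chartPath: 'charts/myapp'
    releaseName: 'myapp'
    valueFile: 'charts/myapp/values-dev.yaml'   # per-environment values file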


What are some challenges you might face when scaling AKS with Azure DevOps, and how would you overcome them?

Challenges include resource limitations, scaling lag, and traffic spikes. To address these:

Configure autoscaling in AKS.

Use Azure Monitor and Alerts to detect issues early.

Use deployment strategies like blue-green or canary to avoid downtime during high traffic.




3. Infrastructure as Code (IaC)

How do you implement Infrastructure as Code in Azure DevOps, and what tools would you use?

Use tools like ARM templates, Terraform, or Bicep. Create a YAML pipeline to manage IaC, defining stages for validating, applying, and destroying resources.
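For instance, a Terraform flow might be staged like the sketch below (assuming Terraform is installed on the agent and the backend is already configured):

stages:
  - stage: Validate
    jobs:
      - job: tf_validate
        steps:
          - script: terraform init -input=false && terraform validate
            displayName: Init and validate
  - stage: Apply
    dependsOn: Validate
    jobs:
      - deployment: tf_apply
        environment: infra      # an approval on this environment gates the apply
        strategy:
          runOnce:
            deploy:
              steps:
                - checkout: self
                - script: terraform init -input=false && terraform plan -out=tfplan && terraform apply -input=false tfplan
                  displayName: Plan and apply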


How would you handle secrets and sensitive information in ARM templates or Terraform scripts in Azure DevOps?

Use Azure Key Vault to store secrets. Access secrets through service connections in Azure DevOps or integrate directly in ARM templates or Terraform scripts.


What’s your approach to managing IaC for a multi-environment setup in Azure DevOps?

Create separate parameter files or Terraform workspaces for each environment. Use different Azure resource groups and configure pipelines with environment-specific values.



4. Source Control and Branching Strategies

Which branching strategies work best for Azure DevOps in a team environment?

GitFlow or GitHub Flow are common approaches. Feature branches, release branches, and hotfix branches keep the codebase organized and streamline collaboration.


How would you configure branch policies in Azure Repos to ensure code quality and security?

Enable policies like pull request approvals, build validation, and minimum reviewer count. Add policies for comment resolution and protected branches to avoid accidental pushes.


What’s your approach to handling large pull requests and code reviews in Azure DevOps?

Use feature toggles to split large pull requests. Encourage frequent, smaller pull requests. Enable pull request templates to guide reviews and standardize review quality.



5. Monitoring and Logging

How do you implement monitoring and alerting for applications deployed via Azure DevOps?

Use Azure Monitor and Application Insights. Configure alerts for metrics like CPU, memory, and HTTP failures. Set up alerts in Azure DevOps to trigger notifications or rollback if issues arise.


Describe the setup of Application Insights or Log Analytics for a CI/CD pipeline in Azure DevOps.

Install Application Insights SDK in the application. Add telemetry logging to track performance. Use Log Analytics workspaces to centralize and analyze logs, and set up pipeline tasks to check metrics.


What is your approach to monitoring the health and performance of services deployed on Azure?

Use Azure Monitor, Application Insights, and Log Analytics. Set up dashboards and alerts. Use Azure Cost Management to monitor cost metrics.



6. Security and Compliance

How would you implement DevSecOps practices in an Azure DevOps pipeline?

Integrate security scanning tools like SonarQube, WhiteSource, or Aqua. Use Azure Security Center to enforce policies. Add security validation steps in the CI/CD pipeline.


What are some strategies for securing the CI/CD pipelines in Azure DevOps?

Use service principals for deployment permissions. Restrict access to pipelines through role-based access control (RBAC). Use Azure Key Vault for secrets management.


How would you manage compliance requirements, such as GDPR or HIPAA, in an Azure DevOps setup?

Implement audit logging and access control. Use Azure Policy to enforce compliance standards and track compliance using Azure Compliance Manager.



7. Automated Testing and Quality Gates

How would you implement automated testing in Azure DevOps, and what types of tests would you include?

Use unit, integration, and UI tests. Integrate tests using frameworks like Selenium, NUnit, or JUnit. Set up test tasks in the pipeline and configure test summaries and reports.
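For example, running and publishing tests might look like this sketch (assuming a .NET project; swap the test task for your stack):

steps:
  - task: DotNetCoreCLI@2              # run unit/integration tests
    inputs:
      command: test
      projects: '**/*Tests.csproj'
  - task: PublishTestResults@2         # publish results even when tests fail
    condition: succeededOrFailed()
    inputs:
      testResultsFormat: 'VSTest'
      testResultsFiles: '**/*.trx'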


Explain quality gates and how they can be configured in Azure DevOps.

Quality gates use metrics like code coverage and defect density. Tools like SonarQube define gates, and Azure DevOps can block deployments if the code fails to meet gate criteria.


What’s your approach to managing flaky tests in an Azure DevOps CI/CD pipeline?

Identify flaky tests using a test dashboard. Mark them for retry or isolate them. Schedule regular analysis of test results to address underlying issues.



8. Release Management and Rollback Strategies

Explain how you would set up deployment slots in Azure App Service and leverage them in Azure DevOps.

Create staging slots in App Service. Deploy to the staging slot, validate, and swap with the production slot when ready.
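A sketch of deploying to a staging slot and then swapping (the connection, app, and resource group names are placeholders):

steps:
  - task: AzureWebApp@1                # deploy the package to the staging slot
    inputs:
      azureSubscription: 'my-azure-connection'
      appName: 'my-webapp'
      deployToSlotOrASE: true
      resourceGroupName: 'my-rg'
      slotName: 'staging'
      package: '$(Pipeline.Workspace)/drop/*.zip'
  - task: AzureAppServiceManage@0      # swap staging into production after validation
    inputs:
      azureSubscription: 'my-azure-connection'
      Action: 'Swap Slots'
      WebAppName: 'my-webapp'
      ResourceGroupName: 'my-rg'
      SourceSlot: 'staging'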


What’s your approach to implementing a blue-green deployment or canary release using Azure DevOps?

Set up two environments (blue and green) in App Services or Kubernetes. Deploy to the secondary environment, perform tests, and switch traffic when validated.


How would you set up rollback strategies for failed deployments in Azure DevOps?

Use release approvals and deployment history to revert to previous versions. Implement manual or automatic rollback steps in the pipeline.



9. Scaling and Load Testing

How would you perform load testing on an application using Azure DevOps tools?

Use Azure Load Testing or integrate third-party tools like JMeter. Automate load testing as part of the pipeline with pre-set thresholds.
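A hedged sketch using the Azure Load Testing task (the resource names are placeholders, and a JMeter-based test configuration is assumed to exist in the repo):

- task: AzureLoadTest@1
  inputs:
    azureSubscription: 'my-azure-connection'
    loadTestConfigFile: 'loadtest/config.yaml'
    loadTestResource: 'my-load-test-resource'
    resourceGroup: 'my-rg'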


Explain autoscaling in Azure and how it would work with Azure DevOps pipelines.

Configure VM scale sets or AKS autoscaling. Use Azure Monitor to trigger scaling based on metrics like CPU or memory, and Azure DevOps can deploy or scale resources as needed.


How would you handle scaling a CI/CD pipeline to manage increased demand or large repositories?

Optimize with build agents, caching, and parallel jobs. Use agent pools for heavy workloads and minimize dependencies to improve efficiency.



10. Configuration Management and Secrets Handling

What strategies would you use to manage configuration for multiple environments in Azure DevOps?

Use variable groups for shared configurations and parameter files for specific environments. Store configurations in Azure Key Vault for secure access.
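For example, shared and environment-specific configuration might be wired up like this (the group and template names are placeholders):

variables:
  - group: shared-config           # variable group defined under Pipelines > Library
  - template: vars/dev-vars.yml    # environment-specific variables file in the repo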


How do you handle secrets management in Azure DevOps pipelines?

Store secrets in Azure Key Vault and retrieve them using service connections. Use secret pipeline variables for values that require secure handling.


Explain the use of Azure Key Vault in Azure DevOps pipelines.

Add a Key Vault task in the pipeline to retrieve secrets at runtime. Ensure the pipeline's service connection has Get and List permissions on the vault.
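A minimal sketch of that task (the service connection and vault names are placeholders):

steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: 'my-azure-connection'
      KeyVaultName: 'my-keyvault'
      SecretsFilter: '*'          # or a comma-separated list of secret names
      RunAsPreJob: true
  - script: echo "fetched secrets are now available as secret pipeline variables"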

Sunday, October 13, 2024

How Does Longhorn Use Kubernetes Worker Node Storage as PV?

Longhorn installs as a set of microservices within a Kubernetes cluster and treats each worker node as a potential storage provider. It uses disk paths available on each node to create storage pools and allocates storage from these pools to dynamically provision Persistent Volumes (PVs) for applications. By default, Longhorn uses /var/lib/longhorn/ on each node, but you can specify custom paths if you have other storage paths available.

Configuring Longhorn to Use a Custom Storage Path

To configure Longhorn to use existing storage paths on the nodes (e.g., /mnt/disks), follow these steps:

1. Install Longhorn in the Kubernetes Cluster:

Install Longhorn using Helm or the Longhorn YAML manifest:

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

You can also install Longhorn from the Kubernetes marketplace or directly from the Longhorn UI.

2. Access the Longhorn UI:

Once installed, access the Longhorn UI to configure and manage your Longhorn setup.

By default, Longhorn is accessible through a Service of type ClusterIP, but you can change it to NodePort or LoadBalancer if needed.


kubectl get svc -n longhorn-system


3. Add a New Storage Path on Each Node:

Before configuring Longhorn, ensure that the desired storage paths are created and available on each node. For example, you might want to use /mnt/disks as your custom storage directory:

mkdir -p /mnt/disks

You may want to mount additional disks or directories to this path for greater storage capacity.

4. Configure Longhorn to Use the New Storage Path:

Open the Longhorn UI (<Longhorn-IP>:<Port>) and navigate to Node settings.

Select the node where you want to add a new disk path.

Click Edit Node and Disks, and then Add Disk.

Specify the Path (e.g., /mnt/disks) and Tags (optional).

Set the Storage Allow Scheduling option to true to enable Longhorn to schedule storage volumes on this disk.

Repeat this process for each node in the cluster that should contribute storage.

5. Verify Storage Path Configuration:

After adding the new storage paths, Longhorn will automatically create storage pools based on these paths. Check the Nodes section in the Longhorn UI to see the updated disk paths and available storage.

6. Create a Persistent Volume (PV) Using Longhorn:

Now that Longhorn is using your custom storage paths, you can create Persistent Volumes that utilize this storage.

Either create a new PersistentVolumeClaim (PVC) that dynamically provisions a PV using the Longhorn StorageClass or use the Longhorn UI to manually create volumes.
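A minimal PVC sketch that dynamically provisions a Longhorn-backed PV (assuming the default longhorn StorageClass created by the installation):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi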

Example: Configuring a Node's Storage for Longhorn

Below is an example YAML configuration for adding a disk path (/mnt/disks) to a node, which can also be done through the UI:

apiVersion: longhorn.io/v1beta1
kind: Node
metadata:
  name: <node-name>
  namespace: longhorn-system
spec:
  disks:
    disk-1:
      path: /mnt/disks
      allowScheduling: true
      storageReserved: 0
  tags: []

path: Specifies the custom path on the node where Longhorn will allocate storage.

allowScheduling: Enables Longhorn to schedule volumes on this disk.

storageReserved: (Optional) Specifies the amount of storage to be reserved and not used for Longhorn volumes.


Important Considerations When Using Node Storage for Longhorn:

1. Data Redundancy and Availability:

Longhorn provides replication for data redundancy. When using node-local storage, ensure that you have sufficient replicas configured (e.g., 3 replicas for high availability) so that data remains safe even if one node goes down.

This means you need enough storage capacity across multiple nodes to accommodate these replicas.

2. Storage Path Consistency:

Ensure that the same storage path (/mnt/disks) is present on each node where you want Longhorn to store data.

If a node does not have the specified path, Longhorn will not be able to use it, leading to scheduling failures.

3. Handling Node Failures:

If the node with the custom storage path fails or becomes unavailable, the volumes stored on that node may be temporarily inaccessible.

Consider setting up anti-affinity rules and replication strategies in Longhorn to handle such scenarios gracefully.

4. Storage Permissions:

Make sure the Kubernetes worker node's storage directory has the appropriate permissions for Longhorn to read/write data.

5. Longhorn's Built-in Backup and Restore:

Utilize Longhorn’s built-in backup and restore capabilities to safeguard data if you are using node-local storage paths, as this storage may not be as reliable as network-based or cloud-backed storage solutions.

How to create a Kubernetes Operator?

Creating a Kubernetes operator involves building a controller that watches Kubernetes resources and takes action based on their state. The common approach is to use the kubebuilder framework or the Operator SDK, but you can also build a custom operator directly with the Kubernetes API client.

Below, I'll show an example of a simple operator using the client-go library, which is the official Kubernetes client for Go. This operator will watch a custom resource called Foo and log whenever a Foo resource is created, updated, or deleted.

Prerequisites

Go programming language installed.

Kubernetes cluster and kubectl configured.

client-go and apimachinery libraries installed.


To install these dependencies, run:

go get k8s.io/client-go@v0.27.1
go get k8s.io/apimachinery@v0.27.1

Step 1: Define a Custom Resource Definition (CRD)

Create a foo-crd.yaml file to define a Foo custom resource:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.samplecontroller.k8s.io
spec:
  group: samplecontroller.k8s.io
  versions:
    - name: v1
      served: true
      storage: true
      schema:                 # apiextensions.k8s.io/v1 requires a schema per version
        openAPIV3Schema:
          type: object
          x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
    shortNames:
      - fo

Apply this CRD to the cluster:

kubectl apply -f foo-crd.yaml

Step 2: Create a Go File for the Operator

Create a new Go file named main.go:

package main

import (
	"flag"
	"fmt"
	"log"
	"os"
	"os/signal"
	"syscall"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the Kubernetes configuration from ~/.kube/config
	kubeconfig := flag.String("kubeconfig", clientcmd.RecommendedHomeFile, "Path to the kubeconfig file")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatalf("Error building kubeconfig: %v", err)
	}

	// Create a dynamic client (Foo is a CRD, so there is no typed client for it)
	dynClient, err := dynamic.NewForConfig(config)
	if err != nil {
		log.Fatalf("Error creating dynamic client: %v", err)
	}

	// Define the GVR (GroupVersionResource) for the Foo custom resource
	gvr := schema.GroupVersionResource{
		Group:    "samplecontroller.k8s.io",
		Version:  "v1",
		Resource: "foos",
	}

	// Build a dynamic informer factory and get an informer that watches
	// Foo resources in all namespaces
	factory := dynamicinformer.NewDynamicSharedInformerFactory(dynClient, 0)
	informer := factory.ForResource(gvr).Informer()

	// Register handlers for Add, Update, and Delete events
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			foo := obj.(*unstructured.Unstructured)
			fmt.Printf("New Foo Added: %s\n", foo.GetName())
		},
		UpdateFunc: func(oldObj, newObj interface{}) {
			foo := newObj.(*unstructured.Unstructured)
			fmt.Printf("Foo Updated: %s\n", foo.GetName())
		},
		DeleteFunc: func(obj interface{}) {
			// The object may arrive wrapped in a tombstone on deletion
			if foo, ok := obj.(*unstructured.Unstructured); ok {
				fmt.Printf("Foo Deleted: %s\n", foo.GetName())
			}
		},
	})

	// Run the informer until a stop signal is received
	stopCh := make(chan struct{})
	go informer.Run(stopCh)

	// Wait for a signal to stop the operator
	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
	<-sigCh
	close(stopCh)
	fmt.Println("Stopping the Foo operator...")
}

Step 3: Running the Operator

1. Build and run the Go program:

go run main.go


2. Create a sample Foo resource to test:

# Save this as foo-sample.yaml
apiVersion: samplecontroller.k8s.io/v1
kind: Foo
metadata:
  name: example-foo

Apply this resource:

kubectl apply -f foo-sample.yaml

Step 4: Check the Output

You should see logs in the terminal indicating when Foo resources are added, updated, or deleted:

New Foo Added: example-foo
Foo Updated: example-foo
Foo Deleted: example-foo

Explanation

1. Dynamic Client: The operator uses the dynamic client to interact with the custom resource since Foo is a CRD.


2. Informer: A dynamic shared informer factory (NewDynamicSharedInformerFactory) watches for changes in Foo resources.


3. Event Handlers: Handlers are registered for Add, Update, and Delete events on the Foo resource.


4. Signal Handling: It gracefully shuts down on receiving a termination signal.



Further Enhancements

Use a code generation framework like kubebuilder or Operator SDK for complex operators.

Implement reconcile logic to manage the desired state.

Add leader election for high availability.


This example demonstrates the basic structure of an operator using the Kubernetes API. For production-grade operators, using a dedicated framework is recommended.

The need for ExternalName service type in Kubernetes

In Kubernetes, the Service resource defines a way to expose applications running in pods. There are several Service types (ClusterIP, NodePort, LoadBalancer, etc.), and one of them is ExternalName. This service type is unique because it maps a service name to an external DNS name instead of providing access to an IP address.

Understanding Service Type: ExternalName

The ExternalName service allows Kubernetes to proxy traffic to an external service using a DNS name. It doesn't create a typical cluster-internal IP and doesn't expose the service using ClusterIP or any other method. Instead, it returns a CNAME record with the value specified in the externalName field.

Use Case

The ExternalName type is used when you want Kubernetes to act as a DNS alias for services that are external to the cluster (e.g., a service running outside of Kubernetes, in another cluster, or even a third-party service).

Example Configuration

Here’s a sample YAML for a service of type ExternalName:

apiVersion: v1
kind: Service
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: example.com

Key Fields Explained:

1. type: ExternalName: Specifies that the service type is ExternalName.

2. externalName: example.com: This is the external DNS name that the service will map to. Any requests to my-external-service within the cluster will be redirected to example.com.

How It Works

When pods within the same namespace try to access my-external-service (e.g., via my-external-service:port), Kubernetes will resolve this to the example.com address. It acts like a DNS CNAME record, and no cluster IP or load balancer is created.

Limitations

This service type does not provide load balancing.

There’s no IP address or port assignment.

It only supports DNS name resolution.

Cannot be used for connecting to IP addresses directly—only valid DNS names.

This type is primarily used for use cases where external dependencies need to be aliased using the Kubernetes DNS system.

Data Flow for Kubernetes Service Networking

1. External Traffic Ingress:

If you are using a LoadBalancer service or NodePort, traffic from the External Network first hits the load balancer or the node’s external IP at the specified port.

The external traffic is directed to the Kubernetes Service (via the LoadBalancer or NodePort).

2. Service:

The Service uses a ClusterIP to expose an internal, stable endpoint for communication within the cluster.

The service acts as a load balancer that forwards requests to the correct Pods. The Kube Proxy ensures that traffic gets routed correctly.

3. Kube Proxy:

The Kube Proxy running on each node maintains IP tables or network rules to ensure that traffic destined for a particular service (i.e., its ClusterIP) is routed to the corresponding Pods.

It balances requests between different Pods based on the service’s configuration.

4. Pod Communication:

Inside the cluster, Pods communicate with each other using the ClusterIP. The service ensures that traffic is routed to the appropriate Pods, which may be distributed across different nodes.

The Kube Proxy facilitates this internal communication between services and Pods within the cluster.

Example Traffic Flow:

An external user makes a request from the External Network (e.g., via a browser or API).

If the service is of type LoadBalancer or NodePort, the request enters the cluster via the load balancer or node port.

The service routes the request to the appropriate Pods using its ClusterIP, with the Kube Proxy forwarding the traffic to the specific Pods based on the current Pod status.

The Pod processes the request, and the response is sent back to the user through the same path.

This architecture allows for seamless load balancing, internal Pod communication, and external access depending on the service type, all managed through the Kubernetes network infrastructure.

Key Components in Kubernetes Service Networking

1. External Network: This is any external user or system that wants to access your Kubernetes services (e.g., via a browser or API request).


2. Service: A Kubernetes resource that defines a stable endpoint to expose your application, abstracting the underlying Pods. There are different service types: ClusterIP (default, internal only), NodePort, and LoadBalancer (for external access).


3. Pods: The smallest deployable units in Kubernetes, hosting one or more containers. Each Pod has its own IP address, but it’s ephemeral, meaning it can change when Pods are recreated.


4. Kube Proxy: A component running on each node that ensures proper routing of traffic between services and Pods. It watches the Kubernetes API for new services and endpoints and maintains the network rules.


5. ClusterIP: The internal IP address assigned to a service. It acts as a virtual IP that directs traffic from the service to the appropriate Pods.


6. LoadBalancer/NodePort: These expose the service to external networks:

LoadBalancer: Automatically provisions an external load balancer (typically on cloud platforms) and assigns it a public IP.

NodePort: Opens a specific port on each node to allow external traffic to enter.
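Putting these components together, here is a sketch of a LoadBalancer Service (assuming Pods labeled app: web that listen on port 8080):

apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web            # routes to Pods carrying this label
  ports:
    - port: 80          # port exposed by the service / external load balancer
      targetPort: 8080  # containerPort on the Pods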

Why Kubernetes: The Need for Container Orchestration

In recent years, Kubernetes has become one of the most popular tools in the tech industry, especially when it comes to managing containerized applications. But why exactly has Kubernetes gained such a significant foothold? What problem does it solve, and why do so many organizations choose it as their go-to platform for container orchestration? Let’s explore the reasons behind Kubernetes' growing adoption.

1. The Rise of Containers

Before understanding why Kubernetes is important, it's essential to grasp the role of containers in modern software development. Containers package an application and its dependencies into a single, lightweight unit that can run reliably across different computing environments. They are portable, fast to start, and require fewer resources than traditional virtual machines (VMs).

While containers provide flexibility, scaling, and isolation, managing them across large, distributed environments becomes increasingly complex as more containers are deployed. This is where Kubernetes comes in.

2. Automation at Scale

In dynamic production environments, manually managing and scaling containers isn’t feasible. Kubernetes automates this process, making it possible to manage and orchestrate hundreds or even thousands of containers efficiently.

Kubernetes handles:

Automated scheduling: It decides which servers (nodes) should run which containers based on resource availability and performance requirements.

Scaling: As the demand for an application increases or decreases, Kubernetes automatically scales containers up or down to meet performance goals without wasting resources.

Self-healing: If a container or node fails, Kubernetes automatically replaces it, ensuring your application remains available with minimal disruption.


3. Portability and Multi-Cloud Compatibility

One of Kubernetes' most powerful features is its ability to run across different cloud environments and on-premises infrastructure, making it a true multi-cloud solution. You are no longer tied to a single cloud provider or limited by your on-premises hardware. This portability allows organizations to avoid vendor lock-in, migrate workloads between clouds, or adopt hybrid cloud strategies easily.

4. Microservices Architecture

Kubernetes is a natural fit for applications following a microservices architecture, where each component of an application (e.g., user authentication, database, front-end) runs as a separate service. In such architectures, Kubernetes simplifies managing these services, orchestrating how they communicate with one another, handling load balancing, and providing mechanisms to manage network traffic between services.

This microservices model is essential for modern, scalable applications, and Kubernetes is the go-to platform to manage such environments.

5. DevOps and Continuous Deployment

Kubernetes works seamlessly with DevOps practices and CI/CD pipelines. It allows developers to:

Rapidly deploy new code updates.

Automate testing, integration, and deployment processes.

Roll back to previous versions easily in case of failure.


With Kubernetes, you can set up automated deployment strategies like blue-green deployments or canary releases, ensuring that new features are rolled out smoothly without downtime.

6. Community Support and Ecosystem

Kubernetes benefits from an enormous open-source community backed by major players like Google, Red Hat, IBM, Microsoft, and others. This means that there is a wealth of resources, tools, and plugins available for integration. The ecosystem surrounding Kubernetes, from monitoring tools like Prometheus to service meshes like Istio, is vast and continues to grow, allowing you to extend Kubernetes in numerous ways based on your needs.

7. Flexibility and Extensibility

Kubernetes provides flexibility by supporting a wide variety of workloads and programming languages. Whether you’re running stateless or stateful applications, batch processing, or streaming data, Kubernetes can handle it. Additionally, with its custom resource definitions (CRDs) and operators, Kubernetes is highly extensible, allowing you to automate even more advanced use cases and integrate it with other tools.

Conclusion

Kubernetes is more than just a container orchestrator; it is a key enabler of modern cloud-native applications. Its automation capabilities, scalability, portability, and alignment with microservices architectures make it an essential tool for organizations that want to innovate quickly and manage their infrastructure efficiently. As the industry continues to shift toward containerization and multi-cloud strategies, Kubernetes will likely remain at the forefront of container orchestration for years to come.

By adopting Kubernetes, organizations can reduce operational overhead, scale efficiently, and ensure that their applications are ready for the challenges of today’s complex, distributed environments.

Wednesday, March 13, 2024

HorizontalPodAutoscaler in OpenShift/k8s

 1. HorizontalPodAutoscaler in OpenShift/k8s

Declarative:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: hpa-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

Equivalent kubectl command: kubectl -n hpa-test autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

2. Ingress Controller

- An ingress controller acts as a reverse proxy and load balancer. It implements a Kubernetes Ingress. The ingress controller adds a layer of abstraction to traffic routing, accepting traffic from outside the Kubernetes platform and load balancing it to Pods running inside the platform.

- Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster

3. Private endpoint. 

- A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service that's powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network


CIDR:

Classless Inter-Domain Routing (CIDR) addresses use variable-length subnet masking (VLSM) to alter the ratio between the network and host address bits in an IP address. A subnet mask is a set of identifiers that returns the network address's value from the IP address by turning the host address into zeroes. For example, in 10.0.0.0/22 the first 22 bits identify the network, leaving 10 host bits, i.e., 1,024 addresses.

Sunday, March 10, 2024

Redhat Openshift

What are openshift operators?

Red Hat OpenShift Operators automate the creation, configuration, and management of instances of Kubernetes-native applications. Operators provide automation at every level of the stack—from managing the parts that make up the platform all the way to applications that are provided as a managed service.

What is Secret Store CSI?

CSI - Container Storage Interface

The Kubernetes Secrets Store CSI is a storage driver that allows you to mount secrets from external secret management systems like HashiCorp Vault and AWS Secrets Manager. It comes in two parts: the Secrets Store CSI driver and a secrets provider (for example, the Azure Key Vault provider).

What is configmap?

ConfigMap is similar to secrets, but designed to more conveniently support working with strings that do not contain sensitive information

The ConfigMap API object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers

How to create ConfigMap ?

oc create configmap my-key-vals --from-literal=db-user=user1 --from-literal=db-password=db-password1

OR from yaml

------------------------------

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: my-project
data:
  db-user: user1
  db-password: db-password1

------------------------------

How do pods consume envs?

apiVersion: v1
kind: Pod
metadata:
  name: my-project
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: DB_USER
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: db-user
        - name: DB_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: db-password
  restartPolicy: Never

(Note: the env names use underscores because names like DB-USER are rejected by Kubernetes' environment variable validation on most versions.)

What is difference between Deployments and DeploymentConfig?

DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted.


Saturday, February 24, 2024

Terraform Clarification

 1. How to use an existing resource group in Terraform?

terraform import azurerm_resource_group.rg /subscriptions/<sub_id>/resourceGroups/<rg_name>


Microsoft Azure Guest User

How to give access to Azure guest user?
Business scenario:
Let's say a guest user creates file storage in a Microsoft storage account and creates a private endpoint. The URL of the private endpoint will not be accessible to the guest user who created the endpoint.
Solution:
To gain access to the private endpoint, the guest user has to create a support ticket. An administrator will review and grant access to the private endpoint based on business need.





Thursday, February 22, 2024

How to automate creating Linux Virtual Machine in Azure using Terraform

Complete code:  https://github.com/MaheshMagadum/cloudops/tree/main/terraform-02

terraform {
  required_version = ">=1.0.0"
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "~>1.5"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.0.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~>3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "dev-rg"
  location = var.location
}

resource "azurerm_virtual_network" "azure_vnet" {
  resource_group_name = azurerm_resource_group.rg.name
  name                = "aro-vnet"
  location            = azurerm_resource_group.rg.location
  address_space       = ["10.0.4.0/25"]
}

resource "azurerm_subnet" "azure_subnet" {
  name                 = var.subnet_name
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.azure_vnet.name
  address_prefixes     = ["10.0.4.0/29"]
}

# Create public IPs
resource "azurerm_public_ip" "public_IP" {
  name                = "public_IP"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Dynamic"
}

resource "azurerm_network_interface" "azure_ni" {
  name                = azurerm_virtual_network.azure_vnet.name
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "my_azure_ni"
    subnet_id                     = azurerm_subnet.azure_subnet.id
    private_ip_address_allocation = var.private_ip_allocation
    public_ip_address_id          = azurerm_public_ip.public_IP.id
  }
}

resource "azurerm_network_security_group" "nsg" {
  name                = "myNetworkSecurityGroup"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# Connect the security group to the network interface
resource "azurerm_network_interface_security_group_association" "namehere" {
  network_interface_id      = azurerm_network_interface.azure_ni.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

resource "azurerm_linux_virtual_machine" "azure_vm" {
  name                  = var.vm_name
  resource_group_name   = azurerm_resource_group.rg.name
  location              = var.location
  network_interface_ids = [azurerm_network_interface.azure_ni.id]
  size                  = "Standard_B2s"

  os_disk {
    name                 = "myOsDisk"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts-gen2"
    version   = "latest"
  }

  computer_name  = var.hostname
  admin_username = var.username

  admin_ssh_key {
    username   = var.username
    public_key = jsondecode(azapi_resource_action.ssh_public_key_gen.output).publicKey
  }
}


Wednesday, February 21, 2024

Infrastructure as code (Terraform) - Automate creating an Azure resource using terraform

Create an Azure Resource Group using Terraform:

main.tf 

terraform {
  required_version = ">=1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "location" {
  type    = string
  default = "East US"
}

resource "azurerm_resource_group" "rg" {
  name     = "dev-rg"
  location = var.location
}

Execute the commands below to install the provider (Azure) plugin, create the resource, and destroy it

>terraform init -upgrade

>terraform validate

>terraform plan

>terraform apply

>terraform destroy

Tuesday, February 20, 2024

How to access Azure KeyVault in RedHat OpenShift 4 cluster

 Ref: https://learn.microsoft.com/en-us/azure/openshift/howto-use-key-vault-secrets

oc login https://api.alstt7ftx43328907e.eastus.aroapp.io:6443/ -u kubeadmin -p g9i2H-KVUqo-7SjUm-UthrL

oc new-project k8s-secrets-store-csi

oc adm policy add-scc-to-user privileged system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts

helm repo update

helm install -n k8s-secrets-store-csi csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --version v1.3.1 --set  "linux.providersDir=/var/run/secrets-store-csi-providers"


Next,

helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts

helm repo update


Next,

helm install -n k8s-secrets-store-csi azure-csi-provider csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --set linux.privileged=true --set secrets-store-csi-driver.install=false --set "linux.providersDir=/var/run/secrets-store-csi-providers" --version=v1.4.1

oc adm policy add-scc-to-user privileged system:serviceaccount:k8s-secrets-store-csi:csi-secrets-store-provider-azure

Next (Create key vault and a secret)

oc new-project my-application

az keyvault create -n ${KEYVAULT_NAME} -g ${KEYVAULT_RESOURCE_GROUP} --location ${KEYVAULT_LOCATION}

az keyvault secret set --vault-name secret-store-oljy7AQDbV --name secret1 --value "Hello"

export SERVICE_PRINCIPAL_CLIENT_SECRET="ces8Q~kBm~YYJTPLDOSsqrbLT0yDFWcil7r-XbbB"

export SERVICE_PRINCIPAL_CLIENT_ID="e8d92000-2a2c-4581-890f-6fb611717706"

az keyvault set-policy -n secret-store-oljy7AQDbV --secret-permissions get --spn ${SERVICE_PRINCIPAL_CLIENT_ID}

kubectl create secret generic secrets-store-creds --from-literal clientid=${SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=${SERVICE_PRINCIPAL_CLIENT_SECRET} 

kubectl -n my-application label secret secrets-store-creds secrets-store.csi.k8s.io/used=true

kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/


Monday, February 19, 2024

Warning: would violate PodSecurity "restricted:v1.24": unrestricted capabilities

Error from server (Forbidden): error when creating "<pod>.yaml": pods "busybox-secrets-store-inline" is forbidden: busybox-secrets-store-inline uses an inline volume provided by CSIDriver secrets-store.csi.k8s.io and namespace my-application has a pod security enforce level that is lower than privileged

ISSUE THIS COMMAND:

kubectl label --overwrite ns my-application pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/enforce-version=v1.29

AND HAVE A POD YAML:
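A sketch of the pod spec in question, reconstructed from the referenced Microsoft walkthrough (the SecretProviderClass name azure-kvname is an assumption):

kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store-inline
  namespace: my-application
spec:
  containers:
    - name: busybox
      image: k8s.gcr.io/e2e-test-images/busybox:1.29
      command: ["/bin/sleep", "10000"]
      volumeMounts:
        - name: secrets-store-inline
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: azure-kvname
        nodePublishSecretRef:
          name: secrets-store-creds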






Sunday, February 18, 2024

How to connect to Redhat Openshift 4 cluster's api server using Openshift CLI oc

 Find address of the API Server

apiServer=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)

ex: apiServer=$(az aro show -g aro_group -n arocluster --query apiserverProfile.url -o tsv)

oc login $apiServer -u kubeadmin -p <kubeadmin password>


C:\Users\santosh>helm install my-kong2 kong/kong -n kong --values ./full-k4k8s-with-kong-enterprise.conf.txt

coalesce.go:289: warning: destination for kong.proxy.stream is a table. Ignoring non-table value ([])

coalesce.go:289: warning: destination for kong.proxy.stream is a table. Ignoring non-table value ([])

NAME: my-kong2

LAST DEPLOYED: Sun Feb 18 15:16:30 2024

NAMESPACE: kong

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

To connect to Kong, please execute the following commands:


HOST=$(kubectl get svc --namespace kong my-kong2-kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

PORT=$(kubectl get svc --namespace kong my-kong2-kong-proxy -o jsonpath='{.spec.ports[0].port}')

export PROXY_IP=${HOST}:${PORT}

curl $PROXY_IP


Once installed, please follow along the getting started guide to start using

Kong: https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/getting-started/

Article: https://arifkiziltepe.medium.com/kong-installation-on-openshift-3eb3291d3998


Install Kong API Gateway in Azure RedHat OpenShift 4 cluster

 todo

Saturday, February 17, 2024

Kubernetes Terminologies

 CRI - Container Runtime Interface

OCI - Open Container Initiative (has imagespec, runtimespec)

containerd - comes with the CLI ctr (docker vs containerd?)

kubelet - [Registers node, creates Pods, monitors node & Pods]. Registers the node to the Kubernetes cluster. It requests the container runtime engine, like docker, to pull container images onto the node to run Pods. Monitors the node & Pods.

Kube-Proxy - Network traffic between PODs

Services - Enable Kubernetes apps to be accessible within or outside the cluster. Each service has an IP address. Common types:

NodePort: Maps a port on the node to the Port on the pod

ClusterIP: Communication between pods within the cluster. Apps->DBs

LoadBalancer - Kube native load balancer

CNI - Container Network Interface







Thursday, February 15, 2024

How to create ARO(Azure Redhat OpenShift) private cluster

Register to azure redhat 

az account set --subscription 7ce96666-9c91-4251-a956-c0bbc4617409 
az provider register -n Microsoft.RedHatOpenShift --wait 
az provider register -n Microsoft.Compute --wait 
az provider register -n Microsoft.Network --wait 
az provider register -n Microsoft.Storage --wait 

Environment variables:

LOCATION=eastus # the location of your cluster 
RESOURCEGROUP="v4-$LOCATION" # the name of the resource group where you want to create your cluster 
CLUSTER=aro-cluster # the name of your cluster 

 az group create --name $RESOURCEGROUP --location $LOCATION 

 Create a virtual network.

 az network vnet create --resource-group $RESOURCEGROUP --name aro-vnet --address-prefixes 10.0.0.0/22 

 Add an empty subnet for the master nodes. 

 az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet --name master-subnet --address-prefixes 10.0.0.0/23 --service-endpoints Microsoft.ContainerRegistry 

 Add an empty subnet for the worker nodes. 
 az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet --name worker-subnet --address-prefixes 10.0.2.0/23 --service-endpoints Microsoft.ContainerRegistry 

 Disable subnet private endpoint policies on the master subnet. This is required to be able to connect and manage the cluster. 

 az network vnet subnet update --name master-subnet --resource-group $RESOURCEGROUP --vnet-name aro-vnet --disable-private-link-service-network-policies true

Check accessibility via the links below.

Links:
https://docs.openshift.com/container-platform/4.8/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.html
https://learn.microsoft.com/en-us/answers/questions/1165736/exposing-aro-cluster-application-on-internet

Thursday, February 8, 2024

Python Client to send messages to Azure EventHub with ConnectionString

Run this Python code to send messages both in JSON format and as plain strings; both are sent successfully. Note that messages sent in non-JSON format will not be viewable in the Azure UI, but the count increases; messages sent in JSON format can be viewed from the Azure UI. A consumer can read all message formats.

import asyncio

from azure.eventhub import EventData
from azure.eventhub.aio import EventHubProducerClient

EVENT_HUB_CONNECTION_STR = (
    "Endpoint=sb://genericeventhub.servicebus.windows.net/;"
    "SharedAccessKeyName=RootManageSharedAccessKey;"
    "SharedAccessKey=4cI6t0fjJwhf8i0ZKIjJ+uww27yCsBtnf+AEhIiC9xQ="
)
EVENT_HUB_NAME = "javaappeh"

async def run():
    # Create a producer client to send messages to the event hub.
    # Specify a connection string to your event hubs namespace and
    # the event hub name.
    producer = EventHubProducerClient.from_connection_string(
        conn_str=EVENT_HUB_CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
    )
    async with producer:
        # Create a batch.
        event_data_batch = await producer.create_batch()
        # Add events to the batch.
        messagebody = '{"KEY20":"VALUE20","KEY21":"VALUE21","timestamp":"2024-05-17T01:17:00Z"}'
        event_data_batch.add(EventData(messagebody))
        #event_data_batch.add(EventData("Second event"))
        await producer.send_batch(event_data_batch)

asyncio.run(run())



Dockerfile for Java Spring Boot Application in RHEL8

FROM registry.access.redhat.com/ubi8/ubi:8.1

WORKDIR /app

RUN yum update -y && yum install -y wget
RUN pwd
RUN cd /opt && wget --no-check-certificate https://downloads.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz
RUN cd /opt && tar -xvf apache-maven-3.8.8-bin.tar.gz
RUN cd /opt && ln -s apache-maven-3.8.8 maven
RUN yum -y install java-17-openjdk

COPY pom.xml .
COPY src ./src

ENV M2_HOME /opt/maven
ENV PATH ${M2_HOME}/bin:${PATH}:/usr/bin

RUN echo $PATH
RUN mvn -version
RUN mvn install -DskipTests
RUN ls target

EXPOSE 8080
ENV PORT 8080

CMD ["java", "-Dserver.port=8080", "-jar", "/app/target/test-service-0.0.1-SNAPSHOT.jar"]

Azure Red Hat OpenShift cluster

ARO (cluster):

When Ingress visibility is set to Private, routes default to the internal load balancer; when set to Public, routes default to the public standard load balancer. The default virtual network traffic routing can be changed. Refer to this link: https://learn.microsoft.com/en-au/azure/virtual-network/virtual-networks-udr-overview

Create Private ARO: https://learn.microsoft.com/en-au/azure/openshift/howto-create-private-cluster-4x

Ingress Controllers for ARO reference: https://www.redhat.com/en/blog/a-guide-to-controller-ingress-for-azure-red-hat-openshift

Azure KeyVault Access and how to use it in Azure Kubernetes Service

 KeyVault Access:

With public access disabled for Azure key vaults, private endpoints (privatelink.vaultcore.azure.net) can be created for them; the vault is then accessible from VMs within the same virtual network and subnet as the key vault private link.

Azure Key Vault can be accessed publicly (outside Azure) when public network access is enabled under Networking.

An Azure key vault can also be restricted so it is accessible only from specified virtual networks, by allowing public access solely from those networks.

Links: https://learn.microsoft.com/en-gb/azure/key-vault/general/private-link-service?tabs=portal

Kubernetes & KeyVault:

Azure keyvault can be accessed in kubernetes cluster by configuring "Enable secret store CSI driver" present in "Advanced" tab while creating AKS. After enabling this, you can define azure keyvault in the network accessible by the cluster.

https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/demos/standard-walkthrough/


Thursday, January 11, 2024

How to deploy ReactApp in Microsoft Azure Storage account

1. Create a storage account in Microsoft Azure.

2. From Settings of created storage account, select 'configuration', a) select 'Disabled' for 'Secure transfer required' -> this enables http. b) select 'Enabled' for 'Allow Blob anonymous access'

3. Save the changes

4. Under 'Data Storage', click on Containers, create a container with access level as Blob(anonymous read access for blobs only)

5. create a 'Front Door and CDN' and do this, search for 'Front Door and CDN profiles', and click +Create button, select Explore other offerings and Azure CDN Standard from Microsoft (classic). Provide resource group name and cdn profile name (ex: cdn-profile)

6. Next, create an Endpoint. To do this, click on cdn-profile you created and click on +Endpoint, provide Endpoint name (Ex: myreactapp), and choose Storage under Origin type. Select Origin hostname from dropdown (ex: mystorageaccount.blob.core.windows.net)

7. Now, select the Endpoint you created and click on 'Rule Engine' that appears in Settings

8. choose Add rules button under EndPoint. Now give a name to rules and Click on Add Condition and select 'URL File Extension', choose Operator 'less than' and Extension as 1 and case transform as No transform. Then click on the “Add action” button and select “URL rewrite” action. 

9. Click on Save button.

10. Now select the container that you created under the storage account, click the Upload button, and upload index.html OR your reactjs app after running npm build

11. Now click on the Endpoint you created and click the Endpoint hostname (ex: https://reactjsapp.azureedge.net) to open the hosted/deployed app in the browser.
