Wednesday, March 13, 2024

HorizontalAutoScaler in OpenShift/k8s

 1. HorizontalPodAutoscaler in OpenShift/k8s

Declarative:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: hpa-test
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

kubectl command (imperative equivalent): kubectl -n hpa-test autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
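The CPU target drives the controller's replica count. A rough sketch of the simplified HPA scaling rule (ignoring readiness, tolerance windows, and stabilization; the numbers are hypothetical):

```python
import math

def desired_replicas(current_replicas, current_cpu_pct, target_cpu_pct,
                     min_replicas=1, max_replicas=10):
    # Simplified HPA rule: desired = ceil(current * currentMetric / targetMetric),
    # clamped to the [minReplicas, maxReplicas] range from the spec.
    desired = math.ceil(current_replicas * current_cpu_pct / target_cpu_pct)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(2, 100, 50))  # 2 pods at 100% CPU vs a 50% target -> 4
print(desired_replicas(4, 10, 50))   # well under target -> scales down to 1
```

So pods running hot relative to the 50% target are scaled out proportionally, and idle pods are scaled back in, never past the min/max bounds.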

2. Ingress Controller

- An ingress controller acts as a reverse proxy and load balancer. It implements a Kubernetes Ingress. The ingress controller adds a layer of abstraction to traffic routing, accepting traffic from outside the Kubernetes platform and load balancing it to Pods running inside the platform.

- Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster

3. Private endpoint. 

- A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service that's powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network


CIDR:

Classless Inter-Domain Routing (CIDR) addresses use variable-length subnet masking (VLSM) to alter the ratio between the network and host address bits in an IP address. A subnet mask yields the network address from the IP address by turning the host address bits into zeroes.
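The mask arithmetic can be seen with Python's standard ipaddress module (the address and prefix below are just illustrative):

```python
import ipaddress

# A /26 mask keeps 26 network bits and zeroes the remaining 6 host bits.
iface = ipaddress.ip_interface("192.168.10.77/26")

print(iface.netmask)                  # 255.255.255.192
print(iface.network.network_address)  # 192.168.10.64 (host bits zeroed)
print(iface.network.num_addresses)    # 64 addresses in this subnet
```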

Sunday, March 10, 2024

Redhat Openshift

What are openshift operators?

Red Hat OpenShift Operators automate the creation, configuration, and management of instances of Kubernetes-native applications. Operators provide automation at every level of the stack—from managing the parts that make up the platform all the way to applications that are provided as a managed service.

What is Secret Store CSI?

CSI - Container Storage Interface

The Kubernetes Secret Store CSI is a storage driver that allows you to mount secrets from external secret management systems like HashiCorp Vault and AWS Secrets Manager. It comes in two parts: the Secret Store CSI driver and a secret provider driver.

What is configmap?

ConfigMap is similar to secrets, but designed to more conveniently support working with strings that do not contain sensitive information

The ConfigMap API object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers

How to create ConfigMap ?

oc create configmap my-key-vals --from-literal=db-user=user1 --from-literal=db-password=db-password1

OR from yaml

------------------------------

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-config
  namespace: my-project
data:
  db-user: user1
  db-password: db-password1

------------------------------

How do pods consume envs?

apiVersion: v1
kind: Pod
metadata:
  name: my-project
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: DB_USER
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: db-user
        - name: DB_PASSWORD
          valueFrom:
            configMapKeyRef:
              name: env-config
              key: db-password
  restartPolicy: Never
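Inside the container, the application just reads ordinary environment variables; it never knows a ConfigMap supplied them. A minimal sketch (underscore names like DB_USER are assumed here, since hyphenated variables cannot be expanded by /bin/sh; the values mirror the env-config ConfigMap above):

```python
import os

# Simulate what the kubelet injects from the ConfigMap before the process starts.
os.environ["DB_USER"] = "user1"
os.environ["DB_PASSWORD"] = "db-password1"

# Application code: read configuration like any other environment variable,
# with a fallback for local development outside the cluster.
db_user = os.environ.get("DB_USER", "fallback-user")
print(db_user)  # user1
```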

What is difference between Deployments and DeploymentConfig?

DeploymentConfig objects prefer consistency, whereas Deployments objects take availability over consistency. For DeploymentConfig objects, if a node running a deployer pod goes down, it will not get replaced. The process waits until the node comes back online or is manually deleted.


Saturday, February 24, 2024

Terraform Clarification

 1. How to use existing resource group in terraform?

terraform import azurerm_resource_group.rg /subscriptions/<sub_id>/resourceGroups/<rg_name>

2. Microsoft Azure Guest User

How to give access to an Azure guest user?
Business scenario:
Say a guest user creates file storage in a Microsoft storage account and creates a private endpoint. The URL of the private endpoint will not be accessible, even to the guest user who created it.
Solution:
To gain access to the private endpoint, the guest user has to create a support ticket with some tool. An administrator will review it and grant access to the private endpoint based on business need.





Thursday, February 22, 2024

How to automate creating Linux Virtual Machine in Azure using Terraform

Complete code:  https://github.com/MaheshMagadum/cloudops/tree/main/terraform-02

terraform {
  required_version = ">=1.0.0"
  required_providers {
    azapi = {
      source  = "azure/azapi"
      version = "~>1.5"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.0.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~>3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "dev-rg"
  location = var.location
}

resource "azurerm_virtual_network" "azure_vnet" {
  resource_group_name = azurerm_resource_group.rg.name
  name                = "aro-vnet"
  location            = azurerm_resource_group.rg.location
  address_space       = ["10.0.4.0/25"]
}

resource "azurerm_subnet" "azure_subnet" {
  name                 = var.subnet_name
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.azure_vnet.name
  address_prefixes     = ["10.0.4.0/29"]
}

# Create public IPs
resource "azurerm_public_ip" "public_IP" {
  name                = "public_IP"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  allocation_method   = "Dynamic"
}

resource "azurerm_network_interface" "azure_ni" {
  name                = azurerm_virtual_network.azure_vnet.name
  location            = var.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "my_azure_ni"
    subnet_id                     = azurerm_subnet.azure_subnet.id
    private_ip_address_allocation = var.private_ip_allocation
    public_ip_address_id          = azurerm_public_ip.public_IP.id
  }
}

resource "azurerm_network_security_group" "nsg" {
  name                = "myNetworkSecurityGroup"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  security_rule {
    name                       = "SSH"
    priority                   = 1001
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

# Connect the security group to the network interface
resource "azurerm_network_interface_security_group_association" "namehere" {
  network_interface_id      = azurerm_network_interface.azure_ni.id
  network_security_group_id = azurerm_network_security_group.nsg.id
}

resource "azurerm_linux_virtual_machine" "azure_vm" {
  name                  = var.vm_name
  resource_group_name   = azurerm_resource_group.rg.name
  location              = var.location
  network_interface_ids = [azurerm_network_interface.azure_ni.id]
  size                  = "Standard_B2s"

  os_disk {
    name                 = "myOsDisk"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts-gen2"
    version   = "latest"
  }

  computer_name  = var.hostname
  admin_username = var.username

  admin_ssh_key {
    username   = var.username
    public_key = jsondecode(azapi_resource_action.ssh_public_key_gen.output).publicKey
  }
}
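The address ranges in this configuration are worth sanity-checking: the subnet's /29 prefix must sit inside the vnet's /25 address space. A quick check with Python's ipaddress module:

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.4.0/25")    # address_space of aro-vnet
subnet = ipaddress.ip_network("10.0.4.0/29")  # address_prefixes of the subnet

print(subnet.subnet_of(vnet))  # True: the subnet fits inside the vnet
# A /29 gives 8 addresses; Azure reserves 5 per subnet, leaving 3 usable IPs,
# so this subnet has room for very few NICs.
print(subnet.num_addresses)    # 8
```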


Wednesday, February 21, 2024

Infrastructure as code (Terraform) - Automate creating an Azure resource using terraform

Create an Azure Resource Group using Terraform:

main.tf 

terraform {
  required_version = ">=1.0.0"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~>3.0.0"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "location" {
  type    = string
  default = "East US"
}

resource "azurerm_resource_group" "rg" {
  name     = "dev-rg"
  location = var.location
}

Execute the commands below to install the provider (Azure) plugin, create the resource, and destroy it:

>terraform init -upgrade

>terraform validate

>terraform plan

>terraform apply

>terraform destroy

Tuesday, February 20, 2024

How to access Azure KeyVault in RedHat OpenShift 4 cluster

 Ref: https://learn.microsoft.com/en-us/azure/openshift/howto-use-key-vault-secrets

oc login https://api.alstt7ftx43328907e.eastus.aroapp.io:6443/ -u kubeadmin -p g9i2H-KVUqo-7SjUm-UthrL

oc new-project k8s-secrets-store-csi

oc adm policy add-scc-to-user privileged system:serviceaccount:k8s-secrets-store-csi:secrets-store-csi-driver

helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts

helm repo update

helm install -n k8s-secrets-store-csi csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --version v1.3.1 --set  "linux.providersDir=/var/run/secrets-store-csi-providers"


Next,

helm repo add csi-secrets-store-provider-azure https://azure.github.io/secrets-store-csi-driver-provider-azure/charts

helm repo update


Next,

helm install -n k8s-secrets-store-csi azure-csi-provider csi-secrets-store-provider-azure/csi-secrets-store-provider-azure --set linux.privileged=true --set secrets-store-csi-driver.install=false --set "linux.providersDir=/var/run/secrets-store-csi-providers" --version=v1.4.1

oc adm policy add-scc-to-user privileged system:serviceaccount:k8s-secrets-store-csi:csi-secrets-store-provider-azure

Next (Create key vault and a secret)

oc new-project my-application

az keyvault create -n ${KEYVAULT_NAME} -g ${KEYVAULT_RESOURCE_GROUP} --location ${KEYVAULT_LOCATION}

az keyvault secret set --vault-name secret-store-oljy7AQDbV --name secret1 --value "Hello"

export SERVICE_PRINCIPAL_CLIENT_SECRET="ces8Q~kBm~YYJTPLDOSsqrbLT0yDFWcil7r-XbbB"

export SERVICE_PRINCIPAL_CLIENT_ID="e8d92000-2a2c-4581-890f-6fb611717706"

az keyvault set-policy -n secret-store-oljy7AQDbV --secret-permissions get --spn ${SERVICE_PRINCIPAL_CLIENT_ID}

kubectl create secret generic secrets-store-creds --from-literal clientid=${SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=${SERVICE_PRINCIPAL_CLIENT_SECRET} 

kubectl -n my-application label secret secrets-store-creds secrets-store.csi.k8s.io/used=true

kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/


Monday, February 19, 2024

Warning: would violate PodSecurity "restricted:v1.24": unrestricted capabilities

Error from server (Forbidden): error when creating "<pod>.yaml": pods "busybox-secrets-store-inline" is forbidden: busybox-secrets-store-inline uses an inline volume provided by CSIDriver secrets-store.csi.k8s.io and namespace my-application has a pod security enforce level that is lower than privileged

ISSUE THIS COMMAND:

kubectl label --overwrite ns my-application pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/enforce-version=v1.29

AND HAVE POD YAML:






Sunday, February 18, 2024

How to connect to Redhat Openshift 4 cluster's api server using Openshift CLI oc

 Find address of the API Server

apiServer=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)

ex: apiServer=$(az aro show -g aro_group -n arocluster --query apiserverProfile.url -o tsv)

oc login $apiServer -u kubeadmin -p <kubeadmin password>


C:\Users\santosh>helm install my-kong2 kong/kong -n kong --values ./full-k4k8s-with-kong-enterprise.conf.txt

coalesce.go:289: warning: destination for kong.proxy.stream is a table. Ignoring non-table value ([])

coalesce.go:289: warning: destination for kong.proxy.stream is a table. Ignoring non-table value ([])

NAME: my-kong2

LAST DEPLOYED: Sun Feb 18 15:16:30 2024

NAMESPACE: kong

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

To connect to Kong, please execute the following commands:


HOST=$(kubectl get svc --namespace kong my-kong2-kong-proxy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

PORT=$(kubectl get svc --namespace kong my-kong2-kong-proxy -o jsonpath='{.spec.ports[0].port}')

export PROXY_IP=${HOST}:${PORT}

curl $PROXY_IP


Once installed, please follow along the getting started guide to start using

Kong: https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/getting-started/

Article: https://arifkiziltepe.medium.com/kong-installation-on-openshift-3eb3291d3998


Install Kong API Gateway in Azure RedHat OpenShift 4 cluster

 todo

Saturday, February 17, 2024

Kubernetes Terminologies

 CRI - Container Runtime Interface

OCI - Open Container Initiative (has imagespec, runtimespec)

containerd - comes with the CLI ctr (docker vs containerd?)

kubelet - (registers node, creates Pods, monitors node & Pods). Registers the node to the Kubernetes cluster. It requests the container runtime engine, such as Docker, to pull container images onto the node to run Pods. Monitors node & Pods.

kube-proxy - network traffic between Pods

Services - enable Kubernetes apps to be accessible outside the cluster; they have an IP address. Service types:

NodePort: maps a port on the node to the port on the pod

ClusterIP: communication between pods within the cluster. Apps->DBs

LoadBalancer - Kubernetes-native load balancer

CNI - Container Network Interface






Reactjs Application Deployment In Azure storage Account

 1. Create an Azure storage account

2. From Settings of created storage account, select 'configuration', a) select 'Disabled' for 'Secure transfer required' -> this enables http. b) select 'Enabled' for 'Allow Blob anonymous access'

3. Save the changes

4. Under 'Data Storage', click on Containers, create a container with access level as Blob(anonymous read access for blobs only)

5. Using Upload under container, upload a file called index.html which I have attached/shared.

6. create a 'Front Door and CDN' and do this, search for 'Front Door and CDN profiles', and click +Create button, select Explore other offerings and Azure CDN Standard from Microsoft (classic). Provide resource group name and cdn profile name (ex: cdn-profile)

7. Next, create an Endpoint. To do this, click on cdn-profile you created and click on +Endpoint, provide Endpoint name (Ex: myreactapp), and choose Storage under Origin type. Select Origin hostname from dropdown (ex: mystorageaccount.blob.core.windows.net)

8. Now, select the Endpoint you created and click on 'Rule Engine' that appears in Settings

9. choose Add rules button under EndPoint. Now give a name to rules and Click on Add Condition and select 'URL File Extension', choose Operator 'less than' and Extension as 1 and case transform as No transform. Then click on the “Add action” button and select “URL rewrite” action. 

10. Click on Save button.

11. Now select the container that you created under the storage account, click Upload, and upload index.html OR your reactjs app after running npm build

12. Now click on the Endpoint created and click the Endpoint hostname (ex: https://reactjsapp.azureedge.net) to open the hosted/deployed app in a browser.

Thursday, February 15, 2024

How to create ARO(Azure Redhat OpenShift) private cluster

Register to azure redhat 

az account set --subscription 7ce96666-9c91-4251-a956-c0bbc4617409 
az provider register -n Microsoft.RedHatOpenShift --wait 
az provider register -n Microsoft.Compute --wait 
az provider register -n Microsoft.Network --wait 
az provider register -n Microsoft.Storage --wait 

 Environment variables: 

LOCATION=eastus # the location of your cluster 
RESOURCEGROUP="v4-$LOCATION" # the name of the resource group where you want to create your cluster 
CLUSTER=aro-cluster # the name of your cluster 

 az group create --name $RESOURCEGROUP --location $LOCATION 

 Create a virtual network.

 az network vnet create --resource-group $RESOURCEGROUP --name aro-vnet --address-prefixes 10.0.0.0/22 

 Add an empty subnet for the master nodes. 

 az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet --name master-subnet --address-prefixes 10.0.0.0/23 --service-endpoints Microsoft.ContainerRegistry 

 Add an empty subnet for the worker nodes. 
 az network vnet subnet create --resource-group $RESOURCEGROUP --vnet-name aro-vnet --name worker-subnet --address-prefixes 10.0.2.0/23 --service-endpoints Microsoft.ContainerRegistry 

 Disable subnet private endpoint policies on the master subnet. This is required to be able to connect and manage the cluster. 

 az network vnet subnet update --name master-subnet --resource-group $RESOURCEGROUP --vnet-name aro-vnet --disable-private-link-service-network-policies true

Check accessibility.

Links:
https://docs.openshift.com/container-platform/4.8/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.html
https://learn.microsoft.com/en-us/answers/questions/1165736/exposing-aro-cluster-application-on-internet

Thursday, February 8, 2024

Python Client to send messages to Azure EventHub with ConnectionString

 Run this Python code to send messages both in JSON format and as plain strings; both are sent successfully. Note that messages sent in non-JSON format will not be visible in the Azure UI, although the count increases; messages sent in JSON format can be viewed from the Azure UI. A consumer can read all message formats.

import asyncio
from azure.eventhub import EventData
from azure.eventhub.aio import EventHubProducerClient

EVENT_HUB_CONNECTION_STR = "Endpoint=sb://genericeventhub.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=4cI6t0fjJwhf8i0ZKIjJ+uww27yCsBtnf+AEhIiC9xQ="
EVENT_HUB_NAME = "javaappeh"

async def run():
    # Create a producer client to send messages to the event hub.
    # Specify a connection string to your event hubs namespace and
    # the event hub name.
    producer = EventHubProducerClient.from_connection_string(
        conn_str=EVENT_HUB_CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
    )
    async with producer:
        # Create a batch.
        event_data_batch = await producer.create_batch()

        # Add events to the batch.
        messagebody = '{"KEY20":"VALUE20","KEY21":"VALUE21","timestamp":"2024-05-17T01:17:00Z"}'
        event_data_batch.add(EventData(messagebody))
        #event_data_batch.add(EventData("Second event"))

        await producer.send_batch(event_data_batch)

asyncio.run(run())



Dockerfile for Java Spring Boot Application in RHEL8

FROM registry.access.redhat.com/ubi8/ubi:8.1

WORKDIR /app

RUN yum update -y && yum install -y wget
RUN pwd
RUN cd /opt && wget --no-check-certificate https://downloads.apache.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz
RUN cd /opt && tar -xvf apache-maven-3.8.8-bin.tar.gz
RUN cd /opt && ln -s apache-maven-3.8.8 maven
RUN yum -y install java-17-openjdk

COPY pom.xml .
COPY src ./src

ENV M2_HOME /opt/maven
ENV PATH ${M2_HOME}/bin:${PATH}:/usr/bin
RUN echo $PATH
RUN mvn -version
RUN mvn install -DskipTests
RUN ls target

EXPOSE 8080
ENV PORT 8080

CMD ["java", "-Dserver.port=8080", "-jar", "/app/target/test-service-0.0.1-SNAPSHOT.jar"]

Azure Red Hat OpenShift cluster

 ARO cluster:

When Ingress visibility is set to Private, routes default to the internal load balancer; when set to Public, routes default to the public standard load balancer. The default virtual network traffic routing can be changed. Refer to this link: https://learn.microsoft.com/en-au/azure/virtual-network/virtual-networks-udr-overview 

Create Private ARO: https://learn.microsoft.com/en-au/azure/openshift/howto-create-private-cluster-4x

Ingress Controllers for ARO reference: https://www.redhat.com/en/blog/a-guide-to-controller-ingress-for-azure-red-hat-openshift

Azure KeyVault Access and how to use it in Azure Kubernetes Service

 KeyVault Access:

With public access disabled for Azure Key Vaults, private endpoints (privatelink.vaultcore.azure.net) can be created for them; the key vault is then accessible from VMs within the same virtual network and subnet as the key vault private link.

Azure KeyVault can be accessed publicly (outside azure) with allow public networks enabled under networking.

An Azure key vault can also be restricted to certain virtual networks, with public network access allowed only from the specified virtual networks.

Links: https://learn.microsoft.com/en-gb/azure/key-vault/general/private-link-service?tabs=portal

Kubernetes & KeyVault:

Azure keyvault can be accessed in kubernetes cluster by configuring "Enable secret store CSI driver" present in "Advanced" tab while creating AKS. After enabling this, you can define azure keyvault in the network accessible by the cluster.

https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/demos/standard-walkthrough/


Thursday, January 11, 2024

How to deploy ReactApp in Microsoft Azure Storage account

1. Create a storage account in Microsoft Azure.

2. From Settings of created storage account, select 'configuration', a) select 'Disabled' for 'Secure transfer required' -> this enables http. b) select 'Enabled' for 'Allow Blob anonymous access'

3. Save the changes

4. Under 'Data Storage', click on Containers, create a container with access level as Blob(anonymous read access for blobs only)

5. create a 'Front Door and CDN' and do this, search for 'Front Door and CDN profiles', and click +Create button, select Explore other offerings and Azure CDN Standard from Microsoft (classic). Provide resource group name and cdn profile name (ex: cdn-profile)

6. Next, create an Endpoint. To do this, click on cdn-profile you created and click on +Endpoint, provide Endpoint name (Ex: myreactapp), and choose Storage under Origin type. Select Origin hostname from dropdown (ex: mystorageaccount.blob.core.windows.net)

7. Now, select the Endpoint you created and click on 'Rule Engine' that appears in Settings

8. choose Add rules button under EndPoint. Now give a name to rules and Click on Add Condition and select 'URL File Extension', choose Operator 'less than' and Extension as 1 and case transform as No transform. Then click on the “Add action” button and select “URL rewrite” action. 

9. Click on Save button.

10. Now select the container that you created under the storage account, click Upload, and upload index.html OR your reactjs app after running npm build

11. Now click on the Endpoint created and click the Endpoint hostname (ex: https://reactjsapp.azureedge.net) to open the hosted/deployed app in a browser.
