Posts

jre docker image

I think the best Docker image for a Java JRE or JDK that I can find is eclipse-temurin:21-jre or one of its other variants. Everything else has been a confusing nightmare. You can use this to build your Dockerfile:

FROM eclipse-temurin:17-jre-alpine
RUN apk add --no-cache curl unzip

ENV JMETER_VERSION=5.6.3
ENV JMETER_URL=https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-${JMETER_VERSION}.tgz

RUN curl -Ls "${JMETER_URL}" | tar xz -C /opt && \
    ln -s "/opt/apache-jmeter-${JMETER_VERSION}" /opt/jmeter

ENV PATH="/opt/jmeter/bin:${PATH}"
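To try it out, a minimal sketch (the jmeter:5.6.3 image tag is an assumption):

# Build the image from the Dockerfile above and confirm JMeter is on the PATH
docker build -t jmeter:5.6.3 .
docker run --rm jmeter:5.6.3 jmeter --version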

pyspark - building a local docker image with jupyter notebook

We can easily use Docker to build ourselves a PySpark image with Jupyter installed. This is what our Dockerfile looks like:

# 1. Use a modern, slim Python image
FROM python:3.12-slim-bookworm

# 2. Install the latest JRE using the default meta-package.
# This ensures we get the latest Java version available in the standard Bookworm repos (often OpenJDK 17 or 21).
RUN apt-get update && \
    apt-get install -y default-jre-headless && \
    rm -rf /var/lib/apt/lists/*

# 3. Install PySpark (which includes the necessary Spark binaries)
# Using a modern PySpark version (~=4.0.0)
RUN pip install pyspark~=4.0.0 jupyterlab

# 4. Set environment variables
# Note: JAVA_HOME is often automatically detected with the default package.
# Setting it explicitly here for robustness. We'll use the default symlink path.
ENV JAVA_HOME="/usr/lib/jvm/default-java"
ENV SPARK_HOME="/usr/local/lib/python3.12/site-packages/pyspark"
ENV PATH="$PATH:$SPARK_HOME/bin"
#...
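To build and launch it, a rough sketch (the pyspark-jupyter tag and the JupyterLab flags are assumptions, not part of the Dockerfile above):

# Build the image, then start JupyterLab listening on all interfaces
docker build -t pyspark-jupyter .
docker run --rm -p 8888:8888 pyspark-jupyter jupyter lab --ip=0.0.0.0 --no-browser --allow-root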

dotnet - running cli results in error - opt/app-root/locals read only

When running the dotnet command in a Docker container, you may encounter the error that /opt/app-root/locals is read only. To resolve this, you can just set HOME to a writable location like /tmp. In your Dockerfile, you can set:

FROM your-image
ENV HOME=/tmp

And that's it! :)
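If you can't rebuild the image, a quick alternative sketch is to override HOME at run time (your-image is a placeholder):

# Point HOME at a writable location without touching the Dockerfile
docker run --rm -e HOME=/tmp your-image dotnet --info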

istio - monitoring istiod key metrics

Monitoring the health and status of istiod and its key metrics can be quite beneficial to ensure our production system runs smoothly. To get started, istiod exposes metrics. To view them we can simply do this:

kubectl -n istio-system get pods -l app=istiod
kubectl -n istio-system port-forward istiod-86db895df-ltll2 15014:15014

And then we can view them by hitting this URL: http://localhost:15014/metrics. You will probably end up with a screen full of raw metrics.

The key metrics for our istiod are:

- pilot_xds_push_time - time spent pushing discovery service (xDS) updates
- memory usage:
  go_memstats_heap_alloc_bytes
  go_memstats_heap_inuse_bytes

Let's go and get these into Prometheus and Grafana. First we need to expose our istiod locally via kubectl. To do that we will use this command:

kubectl port-forward --address 0.0.0.0 svc/istiod 15041:15014 -n istio-system

Setting up Prometheus

Let's create a pm.yaml file to host all our configuration. Please note...
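Before wiring up Prometheus, a quick way to sanity-check the key metrics from the port-forward (a sketch; assumes the forward above is running):

# Pull only the metrics we care about from the istiod endpoint
curl -s http://localhost:15014/metrics | grep -E 'pilot_xds_push_time|go_memstats_heap_(alloc|inuse)_bytes'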

keyvault - checking if we are hitting request limits

Resource limits can catch us by surprise, so it is always handy to have a script we can run to see how things are going:

AzureDiagnostics
| where ResourceType == "VAULTS"
| summarize count() by bin(TimeGenerated, 10s), OperationName

To check for throttling:

AzureDiagnostics
| where ResourceType == "VAULTS"
| where ResultType == "429"

Or to have a more global view:

AzureDiagnostics
| where ResultType == "429"
| summarize Throttles = count() by Resource
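These queries can also be run from the shell; a sketch using the Azure CLI (the workspace GUID is a placeholder for the Log Analytics workspace receiving the Key Vault diagnostics):

az monitor log-analytics query \
  -w 00000000-0000-0000-0000-000000000000 \
  --analytics-query 'AzureDiagnostics | where ResultType == "429" | summarize Throttles = count() by Resource'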

postgres - list and terminate active connections to the postgres server.

Sometimes we may need to check for active connections and, if there are any, terminate or cancel them. Here are some SQL commands that might assist:

SELECT
    pid,
    usename,
    datname,
    application_name,
    client_addr,
    backend_start,
    state,
    query_start,
    query
FROM pg_stat_activity
ORDER BY query_start;

-- Kill a specific backend process by its PID
SELECT pg_terminate_backend(73);

-- Cancel a specific backend process by its PID
SELECT pg_cancel_backend(64);

-- Kill all other backend processes except the current one
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid <> pg_backend_pid();
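From the shell, a one-liner sketch to clear every connection to a particular database, handy before a DROP DATABASE (host, user, and the mydb name are assumptions):

# Terminate all sessions connected to mydb except our own
psql -h localhost -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'mydb' AND pid <> pg_backend_pid();"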

envoy - using lua with http requests

Here we will use an Envoy configuration that allows us to run Lua scripts to intercept HTTP requests. This is the static configuration:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - ...
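To try the configuration locally, a sketch (the envoy.yaml filename and the image tag are assumptions):

# Run Envoy with the config mounted in, then send a request through the listener
docker run --rm -d --name envoy-lua -p 8080:8080 -v $(pwd)/envoy.yaml:/etc/envoy/envoy.yaml envoyproxy/envoy:v1.31-latest
curl -v http://localhost:8080/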

terraform state - removing resources from a state file

Let's say we would like to remove a resource from our Terraform state file. First, let's list what's in it:

terraform state list

null_resource.server["web1"]
null_resource.server["web2"]

Let's remove web1 from the state. The command differs between Command Prompt and PowerShell due to how each interprets quotes.

Command prompt:

terraform state rm "null_resource.server[\"web1\"]"

Powershell:

terraform state rm 'null_resource.server[\"web1\"]'

And you can see the resource being removed.
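For reference, on a Linux or macOS shell no special escaping is needed beyond single quotes; a sketch:

# Single quotes protect both the brackets and the inner double quotes
terraform state rm 'null_resource.server["web1"]'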

terraform statefile when it drifted .....

When working with a Terraform state file, there is a scenario where someone changes the Azure resource manually, then they update the tf code to match. The state file becomes stale, so when we run terraform plan/apply it says no changes are detected. What is the best approach to get the state updated? Use the refresh-only option:

terraform apply -refresh-only

Something I tried to avoid was using terraform import to update the state file.
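A sketch of the full workflow, assuming you want to review the drift before accepting it:

# Preview what the refresh would write into state, without touching infrastructure
terraform plan -refresh-only
# Accept the refreshed values into the state file
terraform apply -refresh-only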

postgres transaction log or write ahead log full

To check our WAL settings and usage we can run the following commands (note that pg_settings exposes name/setting columns, so we filter on name):

SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE 'wal%';

SELECT pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), '0/00000000')::bigint) AS wal_size;

We can also check where the WAL is by running these commands:

-- Current WAL position
SELECT pg_current_wal_lsn();

-- How many WAL files are there
SELECT count(*) FROM pg_ls_waldir();

Then at the OS level we can see how big these files are:

du -sh /var/lib/postgresql/data/pg_wal

We should set up archiving for our Postgres database by having the following configuration in our postgresql.conf:

archive_mode = on
archive_command = 'cp %p /path/to/archive/%f'

To clear it, we just have to force a checkpoint like so:

CHECKPOINT;
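A quick shell-level health check combining the above; a sketch (connection details are assumptions):

# Count WAL segments and their total size via pg_ls_waldir()
psql -U postgres -c "SELECT count(*) AS wal_files, pg_size_pretty(sum(size)) AS total_size FROM pg_ls_waldir();"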

sqlserver transaction log full

To see what's going on with the transaction log for our database, run the following command:

DBCC SQLPERF(LOGSPACE);

To resolve this we should always try to back up our transaction log using:

BACKUP LOG MyDB TO DISK = 'C:\Backups\MyDB_Log.trn';

If we don't care about the transaction log then:

ALTER DATABASE MyDB SET RECOVERY SIMPLE;
DBCC SHRINKFILE(MyDB_Log, 1); -- shrink to 1 MB or desired size
ALTER DATABASE MyDB SET RECOVERY FULL; -- if you want to return to full recovery
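To check log space from the shell, a sketch using sqlcmd (the server address and credentials are placeholders):

# DBCC SQLPERF(LOGSPACE) reports percent-full for every database's log
sqlcmd -S localhost -U sa -P 'YourStrong!Pass' -Q "DBCC SQLPERF(LOGSPACE);"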

terraform merge

Merge

merge combines many first-level maps into one. However, merge is terrible with nested maps: it is a shallow merge, so when two maps share a top-level key, the later value replaces the earlier one entirely instead of being deep-merged. That's something important to keep in mind before trying to use merge, because you can silently lose nested data, as the example below shows.
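A quick demonstration sketch using terraform console (run from any Terraform working directory; no providers needed for a pure expression):

# The nested keys x and y are lost; "a" is replaced wholesale, not deep-merged
echo 'merge({a = {x = 1, y = 2}}, {a = {z = 3}})' | terraform console
# prints something like: { a = { z = 3 } }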

infinispan 15 - setting up persistent store quick and dirty way

Infinispan data can be persisted into a data store. In this setup, we are going to use Postgres as the datastore. First we need the following XML, which we'll call is-db.xml:

<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:15.0 https://infinispan.org/schemas/infinispan-config-15.0.xsd
                        urn:infinispan:server:15.0 https://infinispan.org/schemas/infinispan-server-15.0.xsd"
    xmlns="urn:infinispan:config:15.0"
    xmlns:server="urn:infinispan:server:15.0">
  <cache-container name="default" statistics="true">
    <transport cluster="${infinispan.cluster.name:cluster}" stack="${infinispan.cluster.stack:tcp}" node-name="${infinispan.node.name:}"/>
    <security>
...
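To spin this up quickly, a sketch (image tags, ports, and credentials are assumptions):

# Start Postgres, then start Infinispan Server pointing at our custom config
docker run -d --name is-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16
docker run --rm -p 11222:11222 \
  -v $(pwd)/is-db.xml:/opt/infinispan/server/conf/is-db.xml \
  -e USER=admin -e PASS=admin \
  quay.io/infinispan/server:15.0 -c is-db.xml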

keycloak - viewing metrics via prometheus and grafana

First of all let's start Keycloak:

docker run --rm -p 127.0.0.1:8080:8080 -p 9000:9000 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin -e KC_METRICS_ENABLED=true quay.io/keycloak/keycloak:26.4.4 start-dev --metrics-enabled=true --http-enabled=true --hostname-strict=false

Next we will set up Prometheus to scrape the Keycloak metrics:

docker run --rm --name prometheus -p 9090:9090 -v ${pwd}/pm.yaml:/etc/prometheus/prometheus.yml prom/prometheus

The content of prometheus.yml is as follows (since I am using Docker, my target is host.docker.internal:9000):

global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'keycloak'
    metrics_path: '/metrics'
    static_configs:
      - targets: ['host.docker.internal:9000']  # hostname or IP of your Keycloak instance

And then we will set up Grafana:

docker run --rm --name grafana -p 3000:3000 grafana/grafana

Once everything is set...
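Before pointing Prometheus at it, a quick sanity check that the management port is actually serving metrics; a sketch:

# Keycloak exposes metrics on the management interface (port 9000 here)
curl -s http://localhost:9000/metrics | head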

keycloak debugging user sessions - querying offline sessions generated for a user, sorted by creation time

The purpose of this query is to list the offline sessions created for a user, sorted by creation time. In Keycloak, the previous offline session gets invalidated when a new token is generated. So if each previous session matches, we know the user or app followed a proper sequential pattern to get a new token. However, if an app misbehaves, this sequential pattern will not show up in the table results.

SELECT
    r.name AS realm_name,
    u.username,
    c.client_id,
    ous.id AS offline_user_session_id,
    ocs.id AS offline_client_session_id,
    ocs.offline_flag,
    ocs.offline_token,
    to_timestamp(ocs.created_on) AS created_at,
    LAG(ocs.offline_token) OVER (PARTITION BY u.id ORDER BY ocs.created_on) AS previous_token,
    LAG(to_timestamp(ocs.created_on)) OVER (PARTITION BY u.id ORDER BY ocs.created_on) AS previous_created_at,
    (ocs.created_on - LAG(ocs.c...
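To run it against Keycloak's backing database, a sketch (host, database name, and credentials are assumptions):

# Assumes the query above is saved to offline_sessions.sql
psql -h localhost -U keycloak -d keycloak -f offline_sessions.sql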

keycloak logging in using kcadm.sh

We can obtain info from Keycloak by using the kcadm.sh command, for example. First we have to log in:

./kcadm.sh config credentials --server http://localhost:8080 --realm master --user admin

To get some values from Keycloak: please note this is an admin user, and we can run the following commands to get details about the admin user and their sessions.

./kcadm.sh get users/c0fbada7-fa66-417e-b882-10263ac25d27 --realm master
./kcadm.sh get users/c0fbada7-fa66-417e-b882-10263ac25d27/sessions --realm master
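If you don't know the user's ID yet, a sketch to look it up first (the --fields filter just trims the output):

# List users with just their IDs and usernames
./kcadm.sh get users --realm master --fields id,username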

istioctl experimental describe pod intro

The istioctl command has a really awesome subcommand that you can use to examine a pod, to see if the virtual service or destination rule are configured correctly. It gives you the key perspective required when troubleshooting issues with a pod's setup. You can do something like:

istioctl experimental describe pod httpbin-686d6fc899-gdsb6

And it will show you the virtual service configuration, the ports used, and the service.

Let's look at the bookinfo sample app. If we have not deployed the bookinfo-gateway, we won't see a virtual service. So let's deploy it:

kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml

And configure the service:

kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

If we re-run the command, we will notice there's a warning telling us that the DestinationRule is not configured yet, and this will fail our routing. And when we run the istioctl experimental describe pod command again, we will see tha...
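The usual fix for that warning is to apply the sample destination rules; a sketch, assuming the standard bookinfo samples layout:

# Defines the subsets (v1, v2, v3) that virtual-service-all-v1.yaml routes to
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml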

istio service entry example

A ServiceEntry allows us to register a service with Istio so that Istio is able to route to it correctly. In this example, we will take a request from a client and forward it to an external service entry called "portquiz.net". So let's create a service entry, virtual service and ingress gateway:

apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: portquiz-net
spec:
  hosts:
  - portquiz.net
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
---
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  selector:
    istio: ingressgateway # use the default Istio ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1
...
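Once applied, a sketch for testing the route through the ingress gateway (assumes the gateway service has a LoadBalancer IP and the virtual service matches on the portquiz.net host):

INGRESS_HOST=$(kubectl -n istio-system get svc istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -H "Host: portquiz.net" http://$INGRESS_HOST/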

istio - enable and show kiali dashboard

We can easily install and view the Kiali dashboard by running the following commands:

kubectl apply -f samples/addons/kiali.yaml
istioctl dashboard kiali
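Kiali needs Prometheus to show traffic data, so if the dashboard comes up empty, a sketch (assumes the same Istio samples folder):

# Install the Prometheus addon that Kiali reads its telemetry from
kubectl apply -f samples/addons/prometheus.yaml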

istio - enabling debugging

Normally we would be able to do it using the following YAML:

apiVersion: install.istio.io/v1
kind: IstioOperator
metadata:
  name: istio-default
  namespace: istio-system
spec:
  meshConfig:
    accessLogFile: /dev/stdout

But sometimes we don't have the Istio operator CRDs installed on our local testing cluster. So to quickly enable logging, we can set accessLogFile to /dev/stdout directly in the mesh config, as shown here:

kubectl -n istio-system edit configmap istio

And ensure we set:

apiVersion: v1
data:
  mesh: |-
    defaultConfig:
      discoveryAddress: istiod.istio-system.svc:15012
    defaultProviders:
      metrics:
      - prometheus
    enablePrometheusMerge: true
    rootNamespace: istio-system
    trustDomain: cluster.local
    accessLogFile: /dev/stdout
    accessLogFormat: |
        [%START_TIME%] "%REQ(:METHOD)% ...
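To confirm the access logs are flowing, a sketch (the pod name is a placeholder):

# Envoy access logs show up on the sidecar's stdout
kubectl logs -f <your-pod> -c istio-proxy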