Upgrade Connectware from 1.x to 1.5 (Kubernetes Guide)

This Cybus guide describes how to adjust the permissions of persistent volume content to ensure a smooth upgrade to Connectware 1.5.0 in Kubernetes environments.

Important: Connectware 1.5.0 introduces a significant change regarding container privileges: containers now start without root privileges. Files that were written to persistent volumes by a different user than the one now accessing them can therefore no longer have their ownership and permissions changed from within the containers. As a result, this upgrade requires the admin to update the permissions manually.

Prerequisites

  • Connectware version 1.3.0 or higher is installed on Kubernetes
  • Administrative rights to run privileged workloads in your Kubernetes cluster
  • kubectl configured with access to the target installation
  • Helm version 3 installed on your system
  • Cybus Helm repository added for access to:
    • Connectware Helm chart connectware in version 1.5.0.
    • Connectware Agent Helm chart connectware-agent in version 1.1.0.
  • Access to the values.yaml file specific to your installation
  • Knowledge of the name and namespace of your Connectware installation
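
If the Cybus Helm repository has not been added yet, this can be done with standard Helm commands. The repository name and URL below are placeholders; use the values provided by Cybus:

helm repo add <repo-name> <cybus-helm-repository-url>
helm repo update
helm search repo <repo-name>/connectware --versions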

Overview of the Upgrade

Before you begin the upgrade process, ensure that you have made the necessary preparations and that all relevant stakeholders are involved.

The following steps give you an overview of what you need to do. See below for detailed step-by-step instructions.

  1. Create the YAML snippets: Create YAML snippets for each PersistentVolumeClaim used by Connectware. This step is essential for identifying the volumes that need permission adjustments.
  2. Filter volumes: Carefully review the created snippets to ensure that only related volumes are included. This is critical to avoid unnecessary modifications and potential disruptions.
  3. Update the Kubernetes job file: Integrate the snippets into the kubernetes-job.yml file as described.
  4. Shut down Connectware: Shut down all Connectware and agent pods to prevent access conflicts during the permissions update. This step is crucial to ensure that the permission changes are applied without interference.
  5. Run the Kubernetes job: Run the job to adjust permissions on all relevant volumes. This action addresses the core challenge posed by the upgrade and ensures that the new, non-root containers can access the files they need.
  6. Delete Kubernetes services: Remove all services used by Connectware in preparation for the new version. This step is necessary to accommodate updates that are incompatible with the existing service configurations.
  7. Apply Helm upgrade: Proceed with the Helm upgrade to Connectware version 1.5.0. This step completes the upgrade process and implements the new security measures and permissions model.

Important Notes

  • Permissions on NFS-based storage: NFS-based storage, such as AWS EFS, may not allow permission changes through Kubernetes. For such storage, adjust the permissions directly on the storage backend if needed so that the required volumes are accessible to the new container users (see the example commands after this list). The permissions are as follows:
    • For all volumes except postgresql, set ownership to 1000:0 and permissions to 770.
    • For postgresql, set ownership to 70:0 and permissions to 770.
  • Downtime for Connectware services: This process involves downtime for Connectware services. Schedule the upgrade accordingly to minimize the impact on your operations.
  • Deleting Kubernetes services: Deleting the Kubernetes services before the upgrade is critical. It is necessary to convert the existing headless services into ClusterIP services. Transitioning to ClusterIP services enables port translation, so you can keep connecting to the pre-upgrade service ports (80, 443) while also being able to connect to the new service ports (8080, 8443). This ensures uninterrupted service access and compatibility after the upgrade.
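
A minimal sketch of the manual adjustment mentioned above for NFS-based storage, assuming the volume data sits under hypothetical export paths on the storage backend (replace them with your actual paths):

# Hypothetical export paths; adjust them to your storage backend.
chown -R 1000:0 /exports/connectware_certs
chmod -R 770 /exports/connectware_certs
# The postgresql volume uses a different owner.
chown -R 70:0 /exports/connectware_postgresql-postgresql-0
chmod -R 770 /exports/connectware_postgresql-postgresql-0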

Upgrading Procedure

1. Creating YAML Snippets for Relevant PersistentVolumeClaim Names

  • Create a script named create-pvc-yaml.sh containing the following:
#!/usr/bin/env bash
#
# Creates a Kubernetes resource list of volumeMounts and volumes for Connectware (agent) deployments
#
function usage {
    printf "Usage:n"
    printf "$0 [--kubeconfig|-kc <kubeconfig_file>] [--context|-ctx <target_context>] [--namespace|-n <target_namespace>] [--no-external-agents] [--no-connectware]n"
    exit 1
}

function argparse {

  while [ $# -gt 0 ]; do
    case "$1" in
      --kubeconfig|-kc)
        # a kube_config file for the Kubernetes cluster access
        export KUBE_CONFIG="${2}"
        shift
        ;;
      --context|-ctx)
        # a KUBE_CONTEXT for the Kubernetes cluster access
        export KUBE_CONTEXT="${2}"
        shift
        ;;
      --namespace|-n)
        # the Kubernetes cluster namespace to operate on
        export NAMESPACE="${2}"
        shift
        ;;
      --no-external-agents)
        export NO_EXTERNAL_AGENTS=true
        shift
        ;;
      --no-connectware)
        export NO_CONNECTWARE=true
        shift
        ;;
      *)
        printf "ERROR: Parameters invalidn"
        usage
    esac
    shift
  done
}

#
# init
export NO_EXTERNAL_AGENTS=false
export NO_CONNECTWARE=false
export KUBECTL_BIN=$(command -v kubectl)

argparse "$@"

shopt -s expand_aliases
# Check for kubectl parameters and construct the ${KUBECTL_CMD} command to use
if [ -n "${KUBE_CONFIG}" ]; then
  KUBECONFIG_FILE_PARAM=" --kubeconfig=${KUBE_CONFIG}"
fi
if [ -n "${KUBE_CONTEXT}" ]; then
  KUBECONFIG_CTX_PARAM=" --context=${KUBE_CONTEXT}"
fi
if [ -n "${NAMESPACE}" ]; then
  NAMESPACE_PARAM=" -n${NAMESPACE}"
fi

KUBECTL_CMD=${KUBECTL_BIN}${KUBECONFIG_FILE_PARAM}${KUBECONFIG_CTX_PARAM}${NAMESPACE_PARAM}

volumes=$(${KUBECTL_CMD} get pvc -o name | sed -e 's/persistentvolumeclaim\///g' )
valid_pvcs=""
volume_yaml=""
volumemounts_yaml=""

if [[ "${NO_CONNECTWARE}" == "false" ]]; then
  valid_pvcs=$(cat << EOF
system-control-server-data
certs
brokerdata-*
brokerlog-*
workbench
postgresql-postgresql-*
service-manager
protocol-mapper-*
EOF
)
fi

if [[ "${NO_EXTERNAL_AGENTS}" == "false" ]]; then
  # Add volumes from agent Helm chart
  pvc_volumes=$(${KUBECTL_CMD} get pvc -o name -l connectware.cybus.io/service-group=agent | sed -e 's/persistentvolumeclaim\///g' )
  valid_pvcs="${valid_pvcs}
  ${pvc_volumes}"
fi

# Collect volumeMounts
for pvc in $volumes; do
  for valid_pvc in $valid_pvcs; do
      if [[ "$pvc" =~ $valid_pvc ]]; then
        volumemounts_yaml="${volumemounts_yaml}
          - name: $pvc
            mountPath: /mnt/connectware_$pvc"
        break
      fi
  done
done

# Collect volumes
for pvc in $volumes; do
  for valid_pvc in $valid_pvcs; do
      if [[ "$pvc" =~ $valid_pvc ]]; then
        volume_yaml="${volume_yaml}
        - name: $pvc
          persistentVolumeClaim:
            claimName: $pvc"
          break
      fi
  done
done


# print volumeMounts
echo
echo "Copy this as the "volumeMounts:" section:"
echo "######################################"
echo -n "        volumeMounts:"
echo "$volumemounts_yaml"

# print volumes
echo
echo "Copy this as the "volumes:" section:"
echo "######################################"
echo -n "      volumes:"
echo "$volume_yaml"
  • Select the target environment:
kubectl config use-context <my-cluster>
kubectl config set-context --current --namespace <my-connectware-namespace>
  • Make the script executable and run it:
chmod u+x create-pvc-yaml.sh
./create-pvc-yaml.sh
  • Review the output to confirm only relevant volumes are listed.

Example output:

Copy this as the "volumeMounts:" section:
######################################
        volumeMounts:
          - name: brokerdata-broker-0
            mountPath: /mnt/connectware_brokerdata-broker-0
          - name: brokerdata-control-plane-broker-0
            mountPath: /mnt/connectware_brokerdata-control-plane-broker-0
          - name: brokerlog-broker-0
            mountPath: /mnt/connectware_brokerlog-broker-0
          - name: brokerlog-control-plane-broker-0
            mountPath: /mnt/connectware_brokerlog-control-plane-broker-0
          - name: certs
            mountPath: /mnt/connectware_certs
          - name: postgresql-postgresql-0
            mountPath: /mnt/connectware_postgresql-postgresql-0
          - name: protocol-mapper-agent-001-0
            mountPath: /mnt/connectware_protocol-mapper-agent-001-0
          - name: service-manager
            mountPath: /mnt/connectware_service-manager
          - name: system-control-server-data
            mountPath: /mnt/connectware_system-control-server-data
          - name: workbench
            mountPath: /mnt/connectware_workbench

Copy this as the "volumes:" section:
######################################
      volumes:
        - name: brokerdata-broker-0
          persistentVolumeClaim:
            claimName: brokerdata-broker-0
        - name: brokerdata-control-plane-broker-0
          persistentVolumeClaim:
            claimName: brokerdata-control-plane-broker-0
        - name: brokerlog-broker-0
          persistentVolumeClaim:
            claimName: brokerlog-broker-0
        - name: brokerlog-control-plane-broker-0
          persistentVolumeClaim:
            claimName: brokerlog-control-plane-broker-0
        - name: certs
          persistentVolumeClaim:
            claimName: certs
        - name: postgresql-postgresql-0
          persistentVolumeClaim:
            claimName: postgresql-postgresql-0
        - name: protocol-mapper-agent-001-0
          persistentVolumeClaim:
            claimName: protocol-mapper-agent-001-0
        - name: service-manager
          persistentVolumeClaim:
            claimName: service-manager
        - name: system-control-server-data
          persistentVolumeClaim:
            claimName: system-control-server-data
        - name: workbench
          persistentVolumeClaim:
            claimName: workbench

2. Updating and Applying the Kubernetes Job

  • Create a file named kubernetes-job.yml containing the following:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: connectware-fix-permissions
  labels:
    app: connectware-fix-permissions
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
      - name: cybus-docker-registry
      containers:
      - image: registry.cybus.io/cybus/connectware-fix-permissions:1.5.0
        securityContext:
          runAsUser: 0
        imagePullPolicy: Always
        name: connectware-fix-permissions
        # Insert the volumeMounts section here
        # volumeMounts:
        # - name: brokerdata-broker-0
        #   mountPath: /mnt/connectware_brokerdata-broker-0
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
      # Insert the volumes section here
      # volumes:
      # - name: brokerdata-broker-0
      #   persistentVolumeClaim:
      #     claimName: brokerdata-broker-0


  • Open kubernetes-job.yml and integrate the output of the create-pvc-yaml.sh script.
  • Shut down all Connectware and agent pods:
kubectl scale sts,deploy -lapp.kubernetes.io/instance=<connectware-installation-name> --replicas 0
kubectl scale sts -lapp.kubernetes.io/instance=<connectware-agent-installation-name> --replicas 0
  • Apply the job to change permissions:
kubectl apply -f kubernetes-job.yml
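  • Optionally, wait for the job to complete before checking its logs (a standard kubectl command; the job name comes from the manifest above):
kubectl wait --for=condition=complete --timeout=10m job/connectware-fix-permissions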

3. Verifying and Cleaning Up

  • Check the job logs for confirmation:
kubectl logs -f job/connectware-fix-permissions

Example output:

Found directory: connectware_brokerdata-broker-0. Going to change permissions
Found directory: connectware_brokerdata-broker-1. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-1. Going to change permissions
Found directory: connectware_brokerlog-broker-0. Going to change permissions
Found directory: connectware_brokerlog-broker-1. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-1. Going to change permissions
Found directory: connectware_certs. Going to change permissions
Found directory: connectware_postgresql-postgresql-0. Going to change permissions
Postgresql volume identified, using postgresql specific permissions
Found directory: connectware_service-manager. Going to change permissions
Found directory: connectware_system-control-server-data. Going to change permissions
Found directory: connectware_workbench. Going to change permissions
All done. Found 13 volumes.
  • Delete the Kubernetes job afterward:
kubectl delete -f kubernetes-job.yml
  • Delete outdated Kubernetes services:
kubectl delete svc -l app.kubernetes.io/instance=<connectware-installation-name>
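  • Optionally, confirm that no Connectware services remain before upgrading (same label selector as above):
kubectl get svc -l app.kubernetes.io/instance=<connectware-installation-name>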

4. Upgrading Connectware

  • Run the Helm upgrade command to upgrade Connectware to 1.5.0:
helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version 1.5.0 -f values.yaml
  • If you have any connectware-agent Helm chart installations, update them to Helm chart version 1.1.0.
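For example, assuming the agent installation has its own values file (the file name below is a placeholder), the upgrade command mirrors the one above:
helm upgrade -n <namespace> <agent-installation-name> <repo-name>/connectware-agent --version 1.1.0 -f values-agent.yaml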

Result: You have successfully upgraded Connectware to 1.5.0.

Functionality of Provided Files

create-pvc-yaml.sh

This script is designed to automatically generate Kubernetes resource lists, specifically volumeMounts and volumes, for Connectware deployments on Kubernetes. It simplifies the preparation process for upgrading Connectware by:

  • Identifying PersistentVolumeClaims (PVCs) used by Connectware and optionally by external agents.
  • Creating the YAML snippets required for mounting these volumes into the Kubernetes job designed to fix permissions.
  • Allowing customization through parameters such as --kubeconfig, --context, and --namespace to specify the Kubernetes cluster and namespace.
  • Providing options to exclude volumes related to Connectware or external agents, if necessary.

The script outputs sections that can be directly copied and pasted into the kubernetes-job.yml file, ensuring that the correct volumes are mounted with the appropriate permissions for the upgrade process.
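
For example, a run that targets a specific cluster and namespace and excludes external agent volumes might look like this (all options as defined in the script above):

./create-pvc-yaml.sh --kubeconfig ~/.kube/config --context <my-cluster> --namespace <my-connectware-namespace> --no-external-agents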

kubernetes-job.yml

This YAML file defines a Kubernetes job responsible for adjusting the permissions of the volumes identified by the create-pvc-yaml.sh script. The job is tailored to run with root privileges, enabling it to modify ownership and permissions of files and directories within PVCs that are otherwise inaccessible due to the permission changes introduced in Connectware 1.5.0.

The job’s purpose is to ensure that all persistent volumes used by Connectware are accessible to the new, non-root container users, addressing the core upgrade challenge while keeping the Connectware deployment itself free of root containers.
