
Uninstalling Connectware

To uninstall Connectware, use the helm uninstall command on your Connectware on Kubernetes installation:

helm uninstall -n <namespace> <installation-name>
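For example, for a hypothetical installation named connectware in the namespace cybus (adjust both to your setup):

helm uninstall -n cybus connectware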

Cleaning up Leftover Resources

Some resources are intentionally not removed, such as some PersistentVolumeClaims, and potentially ConfigMaps and Secrets manually created for mTLS.

If you want to completely clean up, for example for a fresh install, use this command to identify the resources:

kubectl -n <namespace> get all,cm,secret,pvc

Keep in mind that some resources are part of a standard Kubernetes namespace, for example configmap/kube-root-ca.crt or service/kubernetes. After identifying the resources for cleanup, use this command to remove them:

kubectl -n <namespace> delete <resource-1> <resource-2> <resource-n> 

Example:

kubectl -n <namespace> delete persistentvolumeclaim/brokerdata-broker-0 \
  persistentvolumeclaim/brokerdata-broker-1 configmap/cw-mtls-ca-cert \
  secret/cw-mtls-welder-robots \
  persistentvolumeclaim/brokerdata-control-plane-broker-0 \
  persistentvolumeclaim/brokerlog-control-plane-broker-0 \
  persistentvolumeclaim/postgresql-postgresql-0 \
  persistentvolumeclaim/certs

Hint: If you plan a fresh installation in the same location it is especially important to remove persistentvolumeclaim/postgresql-postgresql-0 and persistentvolumeclaim/certs.

Resizing Broker Volumes in Kubernetes

The disk space needed for our brokers depends on the customer's use case, especially with regard to the use of QoS > 0, retained messages, and message size. We can therefore not perfectly predict the necessary disk space, which can lead to situations in which existing volumes need to be resized.

This procedure can be used to increase the available disk space for broker volumes with as little service interruption as possible. Currently this means pod restarts, which require clients to reconnect.

This guide can be used for control-plane-broker by replacing any mention of broker with control-plane-broker.

Important: This procedure depends on removing the StatefulSet. This leaves the broker cluster open to failures caused by cluster events or human error. Therefore this should be executed with great care and only in a stable cluster!

Prerequisites

  1. kubectl access to the necessary installation and the current context namespace set to the target namespace (kubectl config set-context --current --namespace <target-namespace>)
  2. A StorageClass that supports volume resizing (run kubectl get sc and check that ALLOWVOLUMEEXPANSION shows true for the StorageClass used for the volumes)

Instructions

Prepare the broker cluster

  1. Make sure that you have a healthy broker cluster of at least two pods (kubectl get sts broker shows READY 2/2 or higher, but same number on both sides of the slash)
  2. If you only have a single broker, scale the StatefulSet to two:
kubectl scale sts broker --replicas 2
  3. Export the StatefulSet definition to a local file:
kubectl get sts broker -o yaml > broker.yaml

Resizing Volumes

Repeat this part for each broker cluster pod you have!

  1. Delete the broker StatefulSet while leaving the pods as orphans:
kubectl delete sts broker --cascade=orphan
  2. Set the variable $broker to the pod name of the broker you want to resize (e.g. broker-0):
broker=broker-0
  3. Delete the broker pod:
kubectl delete pod $broker
  4. Increase the PVC size (replace <size> with the Kubernetes quantity for the volume, e.g. 5Gi):
kubectl patch pvc brokerdata-$broker --patch '{"spec": { "resources": {"requests": {"storage": "<size>"}}}}'
  5. Wait until the PVC shows the correct CAPACITY:
kubectl get pvc brokerdata-$broker
  6. Recreate the StatefulSet:
kubectl apply -f broker.yaml
  7. Wait for the StatefulSet to recreate the missing pod and become ready (kubectl get sts broker shows READY 2/2 or higher, with the same number on both sides of the slash).
  8. Verify that all cluster members show the same cluster state:
kubectl get pod -lapp=broker -o name | xargs -I % kubectl exec % -- vmq-admin cluster show
Defaulted container "broker" out of: broker, wait-for-k8s (init)
+-------------------------------------------------+---------+
| Node                                            | Running |
+-------------------------------------------------+---------+
| VerneMQ@broker-0.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
| VerneMQ@broker-1.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
Defaulted container "broker" out of: broker, wait-for-k8s (init)
+-------------------------------------------------+---------+
| Node                                            | Running |
+-------------------------------------------------+---------+
| VerneMQ@broker-0.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
| VerneMQ@broker-1.broker.cybus.svc.cluster.local | true    |
+-------------------------------------------------+---------+
  9. Repeat this section for the next broker, until all volumes are resized.

Persisting through Helm values

Once you are done you should adjust your Helm values for Connectware to reflect the changes.

Update the following fields based on what volumes you resized:

PVC Name | Helm value
brokerdata-broker-* | global.broker.storage.data.size
brokerlog-broker-* | global.broker.storage.log.size
brokerdata-control-plane-broker-* | global.controlPlaneBroker.storage.data.size
brokerlog-control-plane-broker-* | global.controlPlaneBroker.storage.log.size
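For example, if you resized the broker data volumes to 5Gi, your Helm values should reflect that (a minimal sketch; merge this into your existing values.yaml):

global:
  broker:
    storage:
      data:
        size: 5Gi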

Setting up Agents inside Connectware

Connectware protocol-mapper agents are additional components of Connectware that can be deployed and started individually.


Adding Agents inside Your Connectware Installation

You can add additional agents to your Connectware installation via the Connectware Helm chart. The section to add agents is commented out by default.

  1. In the values.yaml file, search for #protocolMapperAgents within the global context and remove the #.
  2. Add the agents. The minimum configuration requires you to add the agent name.
    Note: You cannot change the name of an agent after creating it.
  3. Add configurations to each agent. For example, define the storage size or the CPU and memory resources of each agent. 

Example

In this example, two agents are added to the protocolMapperAgents section of the values.yaml file.

protocolMapperAgents:
  - name: welder-robots
  - name: bender-robots

Specifying the Storage Size for Agents (Optional)

Agents require a persistent volume to store their data. The default storage size is 40Mi (40 mebibytes).

Note: You cannot change the storage size of an agent after creating it.

Example

protocolMapperAgents:
  - name: bender-robots
    storageSize: 1Gi


Specifying a StorageClass for Agents (Optional)

Agents require a persistent volume to store their data. By default, the agents use the default storage class of the Kubernetes cluster. You can specify any Kubernetes StorageClass that offers the ReadWriteOnce access mode and is available in your Kubernetes cluster.

Example

protocolMapperAgents:
  - name: bender-robots
    storageClassName: longhorn


Specifying CPU and Memory Resources for Agents

You can use the Kubernetes resource requests and limits to specify CPU and memory resources for agents.

Depending on their role and workload, agents can consume varying amounts of CPU and memory resources. We recommend that you use the Kubernetes metrics-server to identify the resource requirements of each agent and adjust the configuration if necessary.
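For example, with the Kubernetes metrics-server installed, you can inspect the current consumption of your agent pods with kubectl top. The label selector shown here is the one used in the match-expression example later in this guide; verify that it matches your setup:

kubectl -n <namespace> top pod -l app.kubernetes.io/component=protocol-mapper-agent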

Important: Adjusting CPU and memory resources can impact the performance and availability of Connectware. When you customize the settings for CPU and memory resources, make sure that you monitor the performance and make adjustments if necessary.

Example

protocolMapperAgents:
  - name: bender-robots
    resources:
      requests:
        cpu: 1000m
        memory: 1000Mi
      limits:
        cpu: 2000m
        memory: 2000Mi


Specifying Additional Environment Variables for Agents

You can specify a YAML array of objects to add additional environment variables for agents.

Note: Do not specify the following environment variables as they are already used by the Helm chart of Connectware:

Example

protocolMapperAgents:
  - name: bender-robots
    env:
      - name: CYBUS_HOSTNAME_INGRESS
        value: connectware
      - name: SOME_OTHER_VARIABLE
        value: bar

Directly Targeting the MQTT Broker

Agents target the MQTT broker of Connectware through an Ingress proxy via the Kubernetes LoadBalancer Service. In your new Connectware installation, the LoadBalancer is named connectware. However, you can bypass the Ingress proxy. This allows you to reduce the number of services that move data, increase throughput, or reduce load.

Note: Only target the MQTT broker directly if you have identified a specific need to do so.

  1. To directly target the MQTT broker, do one of the following in the values.yaml file:
    1. If you are using a separate control-plane broker, set the Helm value mqttHost to control-plane-broker and mqttDataHost to broker.
    2. Otherwise, set the Helm values mqttHost and mqttDataHost to broker.
  2. Set the environment variable CYBUS_HOSTNAME_INGRESS to connectware.

Example

Directly target the MQTT broker:

protocolMapperAgents:
  - name: bender-robots
    mqttHost: broker
    mqttDataHost: broker
    env:
      - name: CYBUS_HOSTNAME_INGRESS
        value: connectware

Directly target the MQTT broker while using a separate control-plane broker:

protocolMapperAgents:
  - name: bender-robots
    mqttHost: control-plane-broker
    mqttDataHost: broker
    env:
      - name: CYBUS_HOSTNAME_INGRESS
        value: connectware

Defining Kubernetes Labels and Annotations for Agents

You can define sets of labels and annotations that are added to the pod and controller resources of your agents. The following Helm values are available:

Helm value | Applied to
labels | Controller (StatefulSet), Pod, Service, PersistentVolumeClaim
service.labels | Service
podLabels | Pod
annotations | Controller (StatefulSet)
podAnnotations | Pod
service.annotations | Service

Example

protocolMapperAgents:
  - name: bender-robots
    labels:
      tld.mycompany/robots: benders # label is common to all resources
    podLabels:
      pod: only # label is only on pods
    annotations:
      controller: only # annotation is only on StatefulSet controller
    podAnnotations:
      pod: only # annotation is only on pods
    service:
      labels:
        service: only # label is only on the service
      annotations:
        service: only # annotation is only on the service

Enabling mutual Transport Layer Security (mTLS)

As an alternative to password-based authentication, you can use mutual TLS (mTLS) as the authentication mechanism for Connectware. mTLS is an X.509 certificate-based authentication and provides better performance compared to password-based authentication. We recommend using mTLS when handling a large number of agents.

Important: When mTLS is activated, password authentication is no longer possible when using encrypted connections to the Connectware broker (Port TCP/8883 by default).

  1. To activate mTLS authentication, set the Helm value authentication.mTLS.enabled within the global context to true.
authentication:
  mTLS:
    enabled: true
  2. Apply the configuration changes via helm upgrade. For more information, see Applying Helm configuration changes.

Configuring podAntiAffinity to spread workloads

Kubernetes podAntiAffinity is used to ensure replicas of the same workload are not running on the same Kubernetes node to ensure redundancy. All Connectware workloads that support scaling use soft podAntiAffinity by default. The following behaviors can be configured:

Mode | Behavior of pods of the same workload (for example: broker)
soft (default) | Pods will be spread over different Kubernetes cluster nodes, but may be on the same node
hard | Pods will be spread over different Kubernetes cluster nodes, or will fail to be scheduled
none | No podAntiAffinity scheduling requirements will be used

Additionally, you can define a topology key, which is a node label that all Kubernetes nodes need to have for podAntiAffinity to work correctly. By default, the label kubernetes.io/hostname is used.

To change the podAntiAffinity behavior, use the Helm values podAntiAffinity and podAntiAffinityTopologyKey in the respective service's Helm value block. For this example we will use the broker workload:

broker:
  podAntiAffinity: hard
  podAntiAffinityTopologyKey: kubernetes.io/os

Apply the configuration changes via helm upgrade. For more information, see Applying Helm configuration changes.

Configure storage volume size for the control-plane-broker

Procedure

Note that the size of existing volumes cannot be changed through this procedure. Use Resizing Broker Volumes in Kubernetes to resize existing volumes, and return to this procedure for the final step of that guide.

The Connectware control-plane-broker uses two volumes; the size of each can be configured through Helm values:

Volume | Purpose | Helm value
data | Stores retained messages, offline queues, and cluster metadata | global.controlPlaneBroker.storage.data.size
log | Stores log files | global.controlPlaneBroker.storage.log.size

These values take a Kubernetes quantity specifying the volume size, for example, 5Gi for a volume of 5 GiB.

Example

global:
  controlPlaneBroker:
    storage:
      data:
        size: 5Gi
      log:
        size: 500Mi

Apply the configuration changes via helm upgrade. For more information, see Applying Helm configuration changes.

Configure storage volume size for the MQTT broker

Procedure

Note that the size of existing volumes cannot be changed through this procedure. Use Resizing Broker Volumes in Kubernetes to resize existing volumes, and return to this procedure for the final step of that guide.

The Connectware MQTT broker uses two volumes; the size of each can be configured through Helm values:

Volume | Purpose | Helm value
data | Stores retained messages, offline queues, and cluster metadata | global.broker.storage.data.size
log | Stores log files | global.broker.storage.log.size

These values take a Kubernetes quantity specifying the volume size, for example, 5Gi for a volume of 5 GiB.

Example

global:
  broker:
    storage:
      data:
        size: 5Gi
      log:
        size: 500Mi

Apply the configuration changes via helm upgrade. For more information, see Applying Helm configuration changes.

Configuring podAntiAffinity

The connectware-agent Helm chart uses Kubernetes inter-pod anti-affinity to distribute configured agents across different Kubernetes nodes. The chart offers three modes of anti-affinity which you can choose with the podAntiAffinity value inside the agent’s entry in the protocolMapperAgents context of your values.yaml file:

Mode | Effect
soft (default) | Will try to schedule agent pods on different Kubernetes nodes, but will schedule them on the same node if not possible otherwise.
hard | Will schedule agent pods only on different nodes. If there are not enough matching nodes available, agents will not be scheduled.
none | Will not add anti-affinity rules to the agents.

Example

protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
  podAntiAffinity: hard # agents will only be scheduled on different Kubernetes nodes
protocolMapperAgents:
  - name: bender-robots
  - name: welder-robots # will not be scheduled on the same Kubernetes node as bender-robots agent

(Advanced) Overriding podAntiAffinity Options

If you want to configure very specific pod anti-affinity rules to match your Kubernetes cluster setup, you can use the values of the podAntiAffinityOptions section inside the agent’s entry in the protocolMapperAgents section of your values.yaml file.

Configuring podAntiAffinity Topology Key

To change the topology key used for the agent's pod anti-affinity, specify the topology key in the podAntiAffinityOptions.topologyKey value inside the agent's entry in the protocolMapperAgents context of your values.yaml file.

Example

protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
  podAntiAffinity: hard # agents will only be scheduled on different availability zones
  podAntiAffinityOptions:
    topologyKey: topology.kubernetes.io/zone
protocolMapperAgents:
  - name: bender-robots
  - name: welder-robots # will not be scheduled on the same availability zone as bender-robots agent

Configuring podAntiAffinity Match Expression

To change the match expression used for the agent's pod anti-affinity, specify the values podAntiAffinityOptions.key, podAntiAffinityOptions.operator, and podAntiAffinityOptions.value inside the agent's entry in the protocolMapperAgents section of your values.yaml file.

Example

protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
  # Agents will not be scheduled on the same Kubernetes nodes as other agents.
  # This is also true for agents installed through other instances of this Helm chart.
  podAntiAffinity: hard
  podAntiAffinityOptions:
    key: app.kubernetes.io/component
    operator: In
    value: protocol-mapper-agent
protocolMapperAgents:
  - name: bender-robots
  - name: welder-robots # will not be scheduled on the same Kubernetes node as bender-robots agent

You can access the Connectware Admin UI through the Kubernetes LoadBalancer Service. In your new Connectware installation, the LoadBalancer is named connectware. How to access the LoadBalancer depends on which LoadBalancer provider your cluster offers.

  1. To check if your load balancer provider has connected to the connectware service, enter the following command:
kubectl -n <namespace> get svc/connectware
  2. Depending on the result, do one of the following:
    1. If your IP address or hostname is displayed in the EXTERNAL-IP column, you can access the Connectware Admin UI through it.
    2. If no load balancer provider is available in your cluster, you can add an external load balancer.
  3. To verify that the installation was successful, enter the following command to forward the service to your local machine through kubectl:
kubectl -n <namespace> port-forward svc/connectware 10443:443
  4. Enter https://localhost:10443 to access the Connectware Admin UI. By default, Connectware rolls out its own PKI infrastructure.
  5. Confirm the certificate warning in your browser.
  6. Log in with the following default credentials:
    1. Username: admin
    2. Password: admin
      Important: After you log in for the first time, immediately change the username and password.
  7. Click CHANGE PASSWORD and change the default credentials.
  8. Select System > Status and verify that all components have the status RUNNING.

Result: Your Connectware on Kubernetes installation is now ready.


By default, agents use a password for authentication. As an alternative to password-based authentication you can use mutual TLS (mTLS) as the authentication mechanism for agents. mTLS is an X.509 certificate-based authentication and provides better performance compared to password-based authentication. We recommend using mTLS when handling a large number of agents.


Procedure

To configure agents for mTLS, do the following:

  1. Extracting Certificate Authority key pairs
  2. Signing certificate key pairs for your agents
  3. Configuring agents for key pairs
  4. Configuring your agent to use the Certificate Authority
  5. Activating mTLS in Connectware
  6. Enabling mTLS for the agent

Extracting Certificate Authority Key Pairs

In order to use mTLS authentication, you must extract the Certificate Authority (CA) that Connectware uses to sign certificates that are created for your agents. You can extract the certificate from Connectware or replace it with a CA certificate that you already have. In both cases, you must extract the truststore that Connectware uses, as well as the affiliated private key.

Note: The steps in this section are executed in your Connectware installation, not your connectware-agent installation.

Note: For production setups, we recommend that you replace the Public Key Infrastructure (PKI) that is generated during the Connectware installation with a PKI that is managed and approved by your company.

Extracting the CA Key Pair of Connectware

To extract the existing CA key pair, use kubectl cp to copy the certificate from the running auth-server pod via the following commands. Make sure to specify the Kubernetes namespace that contains the Connectware installation.

namespace=<namespace>
pod=$(kubectl -n ${namespace} get pod -o name -lapp.kubernetes.io/name=auth-server | head -1 | sed 's|pod/||')
kubectl -n ${namespace} cp -c auth-server $pod:/connectware_certs/cybus_ca.key cybus_ca.key
kubectl -n ${namespace} cp -c auth-server $pod:/connectware_certs/cybus_ca.crt cybus_ca.crt

Result

The files cybus_ca.crt and cybus_ca.key are created in your current directory.
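You can optionally inspect the extracted certificate with openssl to confirm that you copied the right CA:

openssl x509 -in cybus_ca.crt -noout -subject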

Using a Custom Certificate Authority (Optional)

For production setups, we recommend that you use a Certificate Authority (CA) that is managed and approved by your company. You can append the certificate of your CA or a valid intermediate CA to the certificate truststore that Connectware uses.

Prerequisites

The following files are available:

  • ca-chain.pem: the certificate chain of your CA, including any intermediate certificates
  • server.crt: your new server certificate
  • server.key: your new server key
  • cybus_ca.crt: the CA truststore extracted from Connectware (see Extracting the CA Key Pair of Connectware)

Procedure

  1. Append your ca-chain.pem to cybus_ca.crt:
cat ca-chain.pem >> cybus_ca.crt
  2. Upload the following files to Connectware. Make sure to specify the Connectware namespace:
    • Modified cybus_ca.crt
    • Your new server certificate
    • Your new server key
namespace=<namespace>
pod=$(kubectl -n ${namespace} get pod -o name -lapp.kubernetes.io/name=auth-server | head -1 | sed 's|pod/||')
kubectl -n ${namespace} cp -c auth-server cybus_ca.crt $pod:/connectware_certs/cybus_ca.crt
kubectl -n ${namespace} cp -c auth-server server.crt $pod:/connectware_certs/cybus_server.crt
kubectl -n ${namespace} cp -c auth-server server.key $pod:/connectware_certs/cybus_server.key
kubectl -n ${namespace} exec $pod -c auth-server -- chown -R root:root /connectware_certs
kubectl -n ${namespace} exec $pod -c auth-server -- chmod 664 /connectware_certs/cybus_ca.crt
kubectl -n ${namespace} exec $pod -c auth-server -- chmod 664 /connectware_certs/cybus_ca.key
kubectl -n ${namespace} exec $pod -c auth-server -- chmod 664 /connectware_certs/cybus_server.crt
kubectl -n ${namespace} exec $pod -c auth-server -- chmod 664 /connectware_certs/cybus_server.key
  3. To apply the new server certificate, restart the deployment of the Connectware Ingress proxy:
namespace=<namespace>
kubectl -n ${namespace} rollout restart deployment connectware

Signing Certificate Key Pairs for Your Agents

Every agent needs a certificate key pair that is signed by the Certificate Authority (CA) that you want to use. We assume that the CA files are named cybus_ca.crt and cybus_ca.key and are in your current directory.

Note: If you have already signed certificates for your agents, skip this task and continue with Configuring the agent to use your key pair.

The exact parameters for the key pair are subject to your preferences and security requirements. The commands used here are meant as an example.

  1. To generate a new key for your agent, enter the following command (do not set a password for the key):
openssl genrsa -out tls.key 4096
  2. To generate a Certificate Signing Request (CSR), enter the following command (see the non-interactive variant after this list):
openssl req -new -key tls.key -out tls.csr
  3. Fill out the details for the certificate. Make sure to set Common Name (e.g. server FQDN or YOUR name) to the exact name of the agent that you are generating a certificate for.
  4. To sign the CSR, use the following command:
openssl x509 -req -in tls.csr -CA cybus_ca.crt -CAkey cybus_ca.key -CAcreateserial -out tls.crt -days 365 -sha256

If you are using file names other than the ones used in this documentation, make sure to adjust the commands accordingly.
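As an alternative to the interactive prompts in steps 2 and 3, you can pass the subject directly. This is a sketch for a hypothetical agent named bender-robots; the CN must match the agent name exactly:

openssl req -new -key tls.key -out tls.csr -subj "/CN=bender-robots"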

Result

The certificate is created and is valid for one year. Make sure to create new certificates before the old ones expire to avoid impacting the operation of the corresponding agents.
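You can check the expiry date of an existing certificate with openssl:

openssl x509 -in tls.crt -noout -enddate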

Key Pairs for Agents

You can configure key pairs for agents in your values.yaml file or you can create a Kubernetes Secret before you deploy your agent.

Each method has its advantages. However, the most important difference is that a key is considered private data, like a password. If you do not want to store this information in your unencrypted values.yaml file, we recommend that you use the Kubernetes Secret.

Configuring the Agent to Use your Key Pair

You can configure the key pair in the mTLS.keyPair section inside the agent's entry in the protocolMapperAgents context of your values.yaml file. Alternatively, you can reference a Kubernetes Secret that you created before deploying your agent. As noted above, if you do not want to store the private key in your plain-text values.yaml file, use the Kubernetes Secret.

Configuring Key Pair via Helm Values

To add the key pair to your Helm values, add the respective files as literal block scalars to these Helm values inside the agent's entry in the protocolMapperAgents context of your values.yaml file:

Value | Content
mTLS.keyPair.cert | The certificate you generated for the agent (tls.crt), as a literal block scalar
mTLS.keyPair.key | The key you generated for the agent (tls.key), as a literal block scalar

Make sure to follow the YAML indentation rules: indent the certificate and key by two spaces relative to cert/key (see the example).

Example

protocolMapperAgents:
  - name: bender-robots
    mTLS:
      keyPair:
        cert: |
          -----BEGIN CERTIFICATE-----
          IIEgTCCAmkCFCN+Wi9RpeajIunZnxdIhvdZep6ZMA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          sD9hY3o=
          -----END CERTIFICATE-----
        key: |
          -----BEGIN PRIVATE KEY-----
          IIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCg+mC1iGmz+qCO
          [skipped for brevity - include whole private key]
          nJn5oNH9lcodhXcuOYVg3kQ=
          -----END PRIVATE KEY-----

Configuring Key Pair via Kubernetes Secret

Creating the Kubernetes secret

If you want to manually manage the certificate and key as a Kubernetes secret, you will need to create it before configuring your agent. You can use any process you like, as long as the result is a Kubernetes secret that:

  • is of type kubernetes.io/tls
  • contains the agent certificate under the key tls.crt and the private key under the key tls.key

We will demonstrate how to create this secret through kubectl. Please ensure that your agent certificate is stored in a file named tls.crt and your key is stored in a file named tls.key. 

Define a name for your secret. You will need this name later to configure the agent, so keep it at hand. We recommend choosing a name following the scheme “<chart-name>-<release-name>-<agent-name>-mtls”. Some characters, most prominently the “.” character will be replaced by “-” characters in the agent name.

For example if your agent is named bender-robots and you named your installation “connectware-agent” as used in our docs, the name should be “connectware-agent-bender-robots-mtls”. If you follow this naming convention you will need to include less configuration in your values.yaml. 

If you are unsure how to name the secret, you can first deploy your agent without the mTLS configuration, and check the name of its StatefulSet using kubectl get sts. The secret should have the same name with the suffix “-mtls”.

Example

kubectl -n <namespace> create secret tls <secret-name> --key="./tls.key" --cert="./tls.crt"
Configuring the agent through Helm values

If you followed the naming convention of “<chart-name>-<release-name>-<agent-name>-mtls”, you don’t need to configure the name of the secret, as this name will be assumed. If you have chosen a different name, you need to specify it in the Helm value mTLS.keyPair.existingSecret inside the agent's entry in the protocolMapperAgents context of your values.yaml file.

Example

protocolMapperAgents:
  - name: bender-robots
    mTLS:
      keyPair:
        existingSecret: <secret-name>

Certificate Authority for Agents

There are two ways you can configure your agent to use the Certificate Authority (CA):

  • via Helm values
  • via a manually created Kubernetes ConfigMap

Configuring Certificate Authority for Agents via Helm Values

To add the CA certificate to your Helm values, add the file as a literal block scalar to the Helm value mTLS.caChain.cert inside the agent's entry in the protocolMapperAgents section of your values.yaml file.

Pay attention to the indentation of your CA certificate: it must be indented by two spaces relative to cert, consistently across all lines (see the example).

If you configure more than one agent, it is recommended to provide the CA certificate through protocolMapperAgentDefaults instead of protocolMapperAgents, because it should be the same for all agents.

Example

protocolMapperAgents:
  - name: bender-robots
    mTLS:
      caChain:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIFpTCCA40CFGFL86145m7JIg2RaKkAVCOV1H71MA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          SKnBS1Y1Dn2e
          -----END CERTIFICATE-----

Configuring Certificate Authority for Agents via Manual Kubernetes ConfigMap

Alternatively, you can provide the CA certificate as a Kubernetes ConfigMap.

Creating the Kubernetes ConfigMap

If you want to manually manage the CA certificate as a Kubernetes ConfigMap, you will need to create it before configuring your agent. You can use any process you like, as long as the result is a Kubernetes ConfigMap that:

  • contains the CA certificate chain under the key ca-chain.pem

We will demonstrate how to create this ConfigMap through kubectl. Please ensure that your CA certificate is stored in a file named ca-chain.pem. Because the CA certificate extracted from Connectware is named cybus_ca.crt, we create a copy in our example.

Define a name for your ConfigMap. You will need this name later to configure the agent, so keep it at hand. We recommend choosing a name following the scheme “<chart-name>-<release-name>-<agent-name>-mtls-ca-cert”. Some characters, most prominently the “.” character will be replaced by “-” characters in the agent name.

For example if you named your installation “connectware-agent” as used in our docs, the name should be “connectware-agent-mtls-ca-cert”. If you follow this naming convention you will need to include less configuration in your values.yaml. 

Example

cp cybus_ca.crt ca-chain.pem
kubectl create configmap <configmap-name> --from-file ca-chain.pem
Configuring the agent through Helm values

If you followed the naming convention of “<chart-name>-<release-name>-<agent-name>-mtls-ca-cert”, you don’t need to configure the name of the ConfigMap, as this name will be assumed. If you have chosen a different name, you need to specify it in the Helm value mTLS.caChain.existingConfigMap inside the agent's entry in the protocolMapperAgents context of your values.yaml file.

If you configure more than one agent, it is recommended to provide the CA certificate through protocolMapperAgentDefaults instead of protocolMapperAgents, because it should be the same for all agents.

Example

protocolMapperAgents:
  - name: bender-robots
    mTLS:
      caChain:
        existingConfigMap: <configmap-name>

Enabling mTLS for the Agent

Finally, you will need to enable mTLS for the agent. To do this, set the Helm value mTLS.enabled to true inside the agent's entry in the protocolMapperAgents section of your values.yaml file.

Example

protocolMapperAgents:
  - name: bender-robots
    mTLS:
      enabled: true

To apply this configuration to your agent, run helm upgrade on your connectware-agent installation with the same parameters that you originally used.

Example

helm upgrade connectware-agent cybus/connectware-agent -f values.yaml -n <namespace>

Replacing mTLS Certificates and Keys for the connectware-agent Helm Chart

If you want to replace certificates or keys, follow the same steps as when adding them. However, the agents will not automatically start using the new certificates. You need to manually restart the Kubernetes StatefulSets associated with the agents for which you replaced certificates. The StatefulSet is named “<chart-name>-<release-name>-<agent-name>”. Some characters, most prominently the “.” character, will be replaced by “-” characters in the agent name.

If you followed the recommendations in these docs the first two parts are abbreviated to “connectware-agent”. An agent named “bender-robots” for example would then be deployed as a StatefulSet named “connectware-agent-bender-robots”.

kubectl -n <namespace> rollout restart sts <statefulset-name>

This will restart the agent and apply the new certificates/key.

If you want to restart all agents from your installation, you can use this command in combination with the name you gave to your agent deployment:

kubectl -n <namespace> rollout restart sts -l app.kubernetes.io/instance=<release-name>

If you followed the recommendations in these docs, <release-name> is “connectware-agent”.

Full mTLS Examples for the connectware-agent Helm Chart

Two Agents with Manually Created Kubernetes Secrets/Configmap with Default Names

This example assumes:

  • mTLS is activated in Connectware
  • a Kubernetes TLS secret with the default name exists for each agent (connectware-agent-bender-robots-mtls and connectware-agent-welder-robots-mtls)
  • a Kubernetes ConfigMap with the default name connectware-agent-mtls-ca-cert exists and contains the CA chain under the key ca-chain.pem

licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus
  mTLS:
    enabled: true
protocolMapperAgents:
  - name: bender-robots
  - name: welder-robots

Two Agents with Manually Created Kubernetes Secrets/Configmap with Custom Names

This example assumes:

  • mTLS is activated in Connectware
  • a Kubernetes ConfigMap named my-ca-cert exists and contains the CA chain under the key ca-chain.pem
  • Kubernetes TLS secrets named mtls-keypair-1 and mtls-keypair-2 exist and contain the agents' key pairs

licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus
  mTLS:
    enabled: true
    caChain:
      existingConfigMap: my-ca-cert
protocolMapperAgents:
  - name: bender-robots
    mTLS:
      keyPair:
        existingSecret: mtls-keypair-1
  - name: welder-robots
    mTLS:
      keyPair:
        existingSecret: mtls-keypair-2

Two Agents with Manually Created CA Certificate Configmap but Key Pair in Helm Values

This example assumes:

  • mTLS is activated in Connectware
  • a Kubernetes ConfigMap with the default name connectware-agent-mtls-ca-cert exists and contains the CA chain under the key ca-chain.pem
  • the agents' key pairs are provided directly in the Helm values

licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus
  mTLS:
    enabled: true
protocolMapperAgents:
  - name: bender-robots
    mTLS:
      keyPair:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIEgTCCAmkCFCN+Wi9RpeajIunZnxdIhvdZep6ZMA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          sD9hY3o=
          -----END CERTIFICATE-----
        key: |
          -----BEGIN PRIVATE KEY-----
          MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCg+mC1iGmz+qCO
          [skipped for brevity - include whole private key]
          nJn5oNH9lcodhXcuOYVg3kQ=
          -----END PRIVATE KEY-----
  - name: welder-robots
    mTLS:
      keyPair:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIFcjCCA1oCFFgO7SgdLBuU6YBOuZxhQg0eW5f+MA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          VM6E0Lqy
          -----END CERTIFICATE-----
        key: |
          -----BEGIN PRIVATE KEY-----
          MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDvmp+v3+x1am6m
          [skipped for brevity - include whole private key]
          Y6vWPIuRCwum9DxjrdIva6Z6Pqkdyed9
          -----END PRIVATE KEY-----

Two Agents with CA Certificate and Key Pairs in Helm Values

This example assumes:

  • mTLS is activated in Connectware
  • both the CA chain and the agents' key pairs are provided directly in the Helm values

licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus
  mTLS:
    enabled: true
    caChain:
      cert: |
        -----BEGIN CERTIFICATE-----
        MIIFpTCCA40CFGFL86145m7JIg2RaKkAVCOV1H71MA0GCSqGSIb3DQEBCwUAMIGN
        [skipped for brevity - include whole certificate]
        SKnBS1Y1Dn2e
        -----END CERTIFICATE-----
protocolMapperAgents:
  - name: bender-robots
    mTLS:
      keyPair:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIEgTCCAmkCFCN+Wi9RpeajIunZnxdIhvdZep6ZMA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          sD9hY3o=
          -----END CERTIFICATE-----
        key: |
          -----BEGIN PRIVATE KEY-----
          MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCg+mC1iGmz+qCO
          [skipped for brevity - include whole private key]
          nJn5oNH9lcodhXcuOYVg3kQ=
          -----END PRIVATE KEY-----
  - name: welder-robots
    mTLS:
      keyPair:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIFcjCCA1oCFFgO7SgdLBuU6YBOuZxhQg0eW5f+MA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          VM6E0Lqy
          -----END CERTIFICATE-----
        key: |
          -----BEGIN PRIVATE KEY-----
          MIIJRAIBADANBgkqhkiG9w0BAQEFAASCCS4wggkqAgEAAoICAQDvmp+v3+x1am6m
          [skipped for brevity - include whole private key]
          Y6vWPIuRCwum9DxjrdIva6Z6Pqkdyed9
          -----END PRIVATE KEY-----

Checking Pod state

As with any workload in Kubernetes, Connectware needs to have all its pods in the STATUS Running and with all containers READY. You can see this by them showing the same number left and right of the / when running kubectl get pods in your Connectware installation's namespace:

$ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
admin-web-app-8649f98fc6-sktb7           1/1     Running   0          3m1s
auth-server-5f46964984-5rwvc             1/1     Running   0          2m39s
broker-0                                 1/1     Running   0          2m11s
broker-1                                 1/1     Running   0          2m50s
connectware-5b948ffdff-tj2x9             1/1     Running   0          2m41s
container-manager-5f5678657c-94486       1/1     Running   0          2m46s
control-plane-broker-0                   1/1     Running   0          2m4s
control-plane-broker-1                   1/1     Running   0          2m48s
doc-server-6b486bb5cb-fkpdb              1/1     Running   0          3m
ingress-controller-85fffdcb4b-m8kpm      1/1     Running   0          2m37s
postgresql-0                             1/1     Running   0          2m58s
protocol-mapper-69f59f7dd4-6xhkf         1/1     Running   0          2m42s
service-manager-6b5fffd66d-gt584         1/1     Running   0          2m52s
system-control-server-bd486f5bd-2mkxz    1/1     Running   0          2m45s
welder-robots-0                          1/1     Running   0          2m59s
workbench-57d4b59fbb-gqwnb               1/1     Running   0          2m38s

You can identify an unhealthy pod by it displaying a clear error state, or by being stuck in a transitory state for too long. For example, this pod is unable to start:

NAME                          READY   STATUS     RESTARTS   AGE
auth-server-b4b69ccfd-fvsmz   0/1     Init:0/1   0          8m

To see the reason for a pod's problem, use the kubectl describe pod <podname> command and check the events section at the bottom of the output. In this case, the pod wants to use a volume that the Kubernetes cluster cannot provide:

Warning  FailedMount  7m4s kubelet Unable to attach or mount volumes: unmounted volumes=[testfail], unattached volumes=[certs testfail kube-api-access-52xmc]: timed out waiting for the condition

To repair a situation like this, you need to resolve the underlying issue. The possible causes are a wide array of things beyond the scope of the Connectware documentation and are generally covered by the Kubernetes documentation.

If there is no clear reason visible for a problem, you should check the logs next, which might give you an indicator of the problem. Checking the logs is covered in the next section.

It is generally a good rule of thumb that issues that exist right after an upgrade or reconfiguration of Connectware are often related to misconfiguration within the Helm values, while problems that start and persist later are connected to cluster infrastructure.

Should you be unable to identify or fix the root cause you might have to involve your support contact.

Checking logs using kubetail

We recommend using the tool kubetail to easily follow logs of multiple pods (https://github.com/johanhaleby/kubetail).

This tool is a small wrapper around kubectl that allows you to see multiple logs at the same time. If you want to use kubetail, follow the instructions on the project's GitHub page to install it. By default kubetail will always follow the logs like kubectl logs -f would.

Here are a few examples of how you can use it, but make sure to check kubetail --help too:

Display the logs of a whole namespace

kubetail -n <namespace>

Display logs of pods that match a search term

kubetail broker

Display logs for pods that match a regular expression

kubetail '(service-manager|protocol-mapper)' -e regex

Display logs from the past

You can combine the parameter -s <timeframe> with any other command to display logs from the past up to now:

kubetail broker -s 10m

Display logs of a terminated container of a pod

kubetail broker --previous

Displaying timestamps

If the logs you are viewing are missing timestamps, you can use the --timestamps parameter for kubetail to add timestamps to each log line:

kubetail broker --timestamps

Checking logs using kubectl

If you don’t want to use kubetail as suggested in the previous chapter, you can use kubectl to read logs.

Here are a few examples of how you can use it:

Display and tail the logs of a pod

kubectl logs -f <podname> 

Display and tail logs for all pods with a label

kubectl logs -f -l app=broker

Display logs of a terminated container of a pod

kubectl logs --previous <podname>

Display logs from the past

You can combine the parameter --since <timeframe> with any other command to display logs from the past up to now:

kubectl logs -f --since 10m <podname>

Displaying timestamps

If the logs you are viewing are missing timestamps, you can use the --timestamps parameter for kubectl to add timestamps to each log line:

kubectl logs -f --timestamps <podname>

Removing unhealthy pods

When a pod is in an unhealthy state, as covered in Checking Pod state, or identified through the logs, a good first step is to collect the current state into an archive using the collect_debug.sh script from the Connectware Kubernetes Toolkit (see Collecting Debug Information).

Following that you should simply remove this pod using the kubectl delete pod <podname> command. This will cause the owning controller of this pod to create a new instance, which often already solves many issues. Do not be afraid to delete pods when they are unhealthy, as this does not delete any persisted data.

Pay special attention to any pod that does not contain a randomly generated id, but ends in a simple number, for example broker-0. These pods are part of a StatefulSet, which often is treated differently by Kubernetes than most workloads. One of the differences is that an unhealthy pod is not replaced by a newer version automatically, which means you cannot fix a configuration mistake on a StatefulSet without manually deleting the pod. This behavior is meant to protect StatefulSets from automatic processes as they often contain workloads that handle stateful data.

For Connectware, StatefulSets include broker, control-plane-broker, postgresql, and any protocol-mapper agents you defined.
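If you are unsure which controller owns a pod, you can query its owner reference with a standard kubectl jsonpath expression (a sketch; adjust the pod name):

kubectl get pod <podname> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'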

Restarting Connectware

In order to restart Connectware completely you will need to scale all Controller resources to 0 before scaling them back up.

The following guides explain how to do this for different versions of Connectware. Please read the guides carefully to avoid accidentally impacting other workloads on your cluster.

Any Connectware Version

This procedure is meant to work for any version of Connectware on Kubernetes.

Prerequisites

This procedure will scale everything that is currently deployed in the target namespace!

If you have any workload besides Connectware Core Services in this namespace they will be restarted too!

# Set the target namespace
CONNECTWARE_NS=<namespace here>

# Review the StatefulSets and Deployments that will be scaled
kubectl get sts,deployment -n $CONNECTWARE_NS

# Save the current replica counts of the broker StatefulSets
BROKER_REPLICAS=$(kubectl get --no-headers -o custom-columns=":spec.replicas" sts broker -n $CONNECTWARE_NS)
CONTROL_PLANE_REPLICAS=$(kubectl get --no-headers -o custom-columns=":spec.replicas" sts control-plane-broker -n $CONNECTWARE_NS)

# Scale all Deployments and StatefulSets down to 0 replicas
kubectl get deploy,sts -n $CONNECTWARE_NS -o name | xargs -I % kubectl scale -n $CONNECTWARE_NS % --replicas 0

# Watch until all pods are terminated, then stop the loop with Ctrl+C
while [ True ]; do clear; kubectl get pod -n $CONNECTWARE_NS ; sleep 5; done

# Scale everything back up, restoring the saved broker replica counts
kubectl get deploy,sts -n $CONNECTWARE_NS -o name | xargs -I % kubectl scale -n $CONNECTWARE_NS % --replicas 1
kubectl -n $CONNECTWARE_NS scale sts broker --replicas $BROKER_REPLICAS
kubectl -n $CONNECTWARE_NS scale sts control-plane-broker --replicas $CONTROL_PLANE_REPLICAS

# Watch until all pods are Running and ready, then stop the loop with Ctrl+C
while [ True ]; do clear; kubectl get pod -n $CONNECTWARE_NS ; sleep 5; done

Collecting Debug Information

The Connectware Kubernetes Toolkit includes a script named collect_debug.sh which should be used to collect debug information of Connectware’s current state whenever a problem is identified. It is highly recommended to run this tool prior to any attempts to fix a problem.


Download the collect_debug.sh script

You can download the script from https://download.cybus.io/connectware-k8s-toolkit/latest/collect_debug.sh.

Example:

wget https://download.cybus.io/connectware-k8s-toolkit/latest/collect_debug.sh
chmod u+x ./collect_debug.sh

Running the collect_debug.sh script

The script takes parameters to target the correct Kubernetes namespace holding a Connectware installation:

Parameter | Value | Description
-n | namespace | The Kubernetes namespace to use
-k | path to kubeconfig file | A kubeconfig file to use other than the default (~/.kube/config)
-c | name of kubeconfig context | The name of a kubeconfig context other than the currently selected one

If your kubectl command is already configured to point at the correct cluster you can use the script by just specifying the namespace:

./collect_debug.sh -n <namespace>

The script will collect logs and state information and create a compressed Tar archive that you can easily archive and send to your support contact.

If you are collecting pod logs in a central log aggregator, please also include these logs for the relevant timeframe.

Common Problems

This section covers commonly occurring problems that often come from small mistakes in the configuration.

Protocol-Mapper Agents

Problems related to the usage of Protocol-Mapper agents:

Symptom: Agent with mTLS enabled not connecting to broker.
Agent log shows:
Reconnecting to mqtts://connectware:8883
Broker log shows:
[warning] can't authenticate client {"ssl",<<"someName">>} from someIp due to <<"Authentication denied">>
Caused by: mTLS not enabled in Connectware.
Solution: Enable mTLS in Connectware as described in Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart.

Symptom: Agent not connecting to broker when mTLS in Connectware is enabled.
Agent log shows:
VRPC agent connection to broker lost
Reconnecting to mqtts://localhost:8883
Caused by: mTLS not enabled in the agent.
Solution: Enable mTLS in the agent as described in Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart.

Symptom: Agent with mTLS enabled does not connect to broker.
Agent log shows:
Error: Client network socket disconnected before secure TLS connection was established
Caused by: Agent is connecting to the wrong MQTTS port in the broker.
Solution: Verify that the parameters mqttPort and mqttDataPort within the agent's configuration in the protocolMapperAgents section of your Helm values.yaml file are set to the correct ports. If you are not using a modified setup, these values are set correctly by default and can be removed from the Helm values.

Symptom: Agent with mTLS enabled does not connect to broker.
Agent log shows:
Failed to read certificates during mTLS setup please check the configuration
Caused by: The certificates provided to the agent are either not found or faulty.
Solution: Verify that your certificates are generated and configured as described in Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart. One common mistake is to not generate the Kubernetes objects from files with the names ca-chain.pem, tls.crt, and tls.key; these names are adopted in the Kubernetes objects, and files with other names are subsequently not found by the agent.

Symptom: Allowing an mTLS enabled agent in the Connectware Client Registry fails with the message “An Error has occurred – Registration failed”.
auth-server log shows:
Unable to process request: 'POST /api/client-registry/confirm', because: Certificate Common Name does not match the username. CN: someCN, username: agentName
Caused by: Certificate invalid.
Solution: Verify that the certificate's Common Name (CN) is identical to the name you configured in the Helm value name for this agent.

Symptom: Agent with mTLS enabled does not connect to broker.
Agent log shows:
Can not register protocol-mapper agent, because: socket hang up
Caused by: Certificate invalid.
Solution: Verify that the agent's certificate was signed by the correct Certificate Authority (CA).

Symptom: Agent with mTLS enabled does not connect to broker.
Agent log shows:
Failed to register agent. Response: 409 Conflict. A conflicting registration might be pending, or a user with the same username <agent-name> is already existing (which you must delete first).
Caused by: The username of this agent is already taken. Every agent needs a user with the username of the value configured in the Helm value name for this agent.
Solution: Verify that the agent's name is unique, and verify that there is no old agent with the same name. If there is:
– Delete the agent using the Systems => Agents UI
– Delete the user using the User Management => Users and Roles UI
If you created a user with the agent's name for something else, you have to choose a different name for the agent.

Symptom: Agent pod enters state CrashLoopBackOff.
Agent log shows:
{"level":30,"time":1670940068658,"pid":8,"hostname":"welder-robots-0","service":"protocol-mapper","msg":"Re-starting using cached credentials"}
{"level":50,"time":1670940068759,"pid":8,"hostname":"welder-robots-0","service":"protocol-mapper","msg":"Failed to query license at https://connectware/api/system/info probably due to authentication": 401 Unauthorized."}
{"level":50,"time":1670940068759,"pid":8,"hostname":"welder-robots-0","service":"protocol-mapper","msg":"No valid license file available. Protocol-mapper will stop."}
Caused by: The agent's credentials are no longer correct.
Solution: The agent needs to be re-registered:
– Delete the agent using the Systems => Agents UI
– Delete the user using the User Management => Users and Roles UI
– Delete the agent's StatefulSet: kubectl -n <namespace> delete sts <agent-name>
– Delete the agent's PersistentVolumeClaim: kubectl -n <namespace> delete pvc protocol-mapper-<agent-name>-0
– Re-apply your configuration through helm upgrade as described in Applying Helm configuration changes.

Important: Connectware currently does not support hit-less upgrades. You may experience a service degradation during upgrading. Make sure to take an appropriate maintenance window into account when upgrading.

Prerequisites for upgrading Connectware

Before upgrading Connectware, make sure that you meet the following prerequisites:

Connectware version number that you want to upgrade to

Make sure that you know the exact version number of the Connectware version that you want to upgrade to.

For the code examples in this documentation, we use the variable <target-version> to refer to the Connectware version that you want to upgrade to.
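If you are not sure which version you are currently running, helm list shows the deployed chart and app version of your installation:

helm list -n <namespace>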

Pulling updated Helm information

You must update the Helm repository cache to make sure that you receive the latest Connectware version.
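A typical sequence, assuming your repository alias is <repo-name> as used in the commands below:

helm repo update
helm search repo <repo-name>/connectware --versions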

Reviewing the Connectware changelog

Before you upgrade to a new Connectware version, we recommend that you read the changelog to find out about new features, bug fixes, and changes of the Connectware version that you want to upgrade to.

Reviewing the readme file

Before you upgrade to a new Connectware version, read the readme file of the Connectware version that you want to upgrade to for additional upgrade instructions.

helm show readme <repo-name>/connectware --version <target-version>

Comparing Helm configurations between Connectware versions

With a new Connectware version, there might be changes to the default Helm configuration values. We recommend that you compare the default Helm values of your current Connectware version with the default Helm values of your target Connectware version.

helm show values <repo-name>/connectware --version <target-version>
diff <(helm show values <repo-name>/connectware --version <current-version>) <(helm show values <repo-name>/connectware --version <target-version>)

Example

diff <(helm show values cybus/connectware --version 1.1.0) <(helm show values cybus/connectware --version 1.1.1)
83c83
<     version: 1.1.0
---
>     version: 1.1.1

In this example, only the image version has changed. However, if any of the Helm value changes are relevant to your setup, make the appropriate changes.

Adjusting Helm values

When you have reviewed the necessary information, adjust the configuration in your values.yaml file. Not every upgrade requires adjustments.

If you specified which image version of Connectware to use by setting the Helm value global.image.version, you will need to update it to <target-version>. Unless you have a specific reason to pin a specific image version, we recommend not setting this Helm value.
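For example, if your values.yaml pins the image version, the relevant section looks like this (a sketch; only needed if you set the value at all):

global:
  image:
    version: <target-version>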

Verifying your backups

Make sure that you store backups of your setup. This allows you to restore a previous state if necessary.

Your backups must consist of the following files:

Depending on your local infrastructure, it may be necessary to back up additional files.

Starting the Connectware upgrade

Once you have all the information that you need to upgrade your Connectware, you can start the upgrade process. The following sections will guide you through monitoring the upgrade, as well as what to do on failed upgrades.

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <target-version> -f <values.yaml>

Optionally, you can use the --atomic --timeout 10m command line options, which cause Helm to wait for the result of your upgrade and perform a rollback when it fails. We recommend setting the timeout value to at least 10 minutes, but because the time it takes to complete an upgrade strongly depends on your infrastructure and configuration, you might have to increase it further.

Result: The newly generated workload definitions are applied to your Kubernetes cluster and your Connectware pods are replaced.

Verifying the Connectware upgrade

You can monitor the Connectware upgrade progress to verify that everything runs smoothly, to know when the installation is successful, or to investigate potential issues.

Monitoring the Connectware upgrade

The Connectware upgrade can take a few minutes. To monitor the upgrade process, do one of the following:

kubectl get pods -n <namespace>
Code-Sprache: YAML (yaml)
while true; do clear; kubectl get pod -n <namespace>; sleep 5; done
Code-Sprache: YAML (yaml)

Pod stages during the Connectware upgrade

During the Connectware upgrade, the pods go through the following stages:

When pods reach the STATUS Running, they go through their individual startup before reporting as Ready. To be fully functional, all pods must reach the STATUS Running and report all their containers as ready. This is indicated by them showing the same number on both sides of the / in the column READY.

Example

$ kubectl get pod -n <namespace>
Code-Sprache: YAML (yaml)
NAME                                     READY   STATUS    RESTARTS   AGE
admin-web-app-7cd8ccfbc5-bvnzx           1/1     Running   0          3h44m
auth-server-5b8c899958-f9nl4             1/1     Running   0          3m3s
broker-0                                 1/1     Running   0          3h44m
broker-1                                 1/1     Running   0          2m1s
connectware-7784b5f4c5-g8krn             1/1     Running   0          21s
container-manager-558d9c4cbf-m82bz       1/1     Running   0          3h44m
doc-server-55c77d4d4c-nwq5f              1/1     Running   0          3h44m
ingress-controller-6bcf66495c-l5dpk      1/1     Running   0          18s
postgresql-0                             1/1     Running   0          3h44m
protocol-mapper-67cfc6c848-qqtx9         1/1     Running   0          3h44m
service-manager-f68ccb767-cftps          1/1     Running   0          3h44m
system-control-server-58f47c69bf-plzt5   1/1     Running   0          3h44m
workbench-5c69654659-qwhgc               1/1     Running   0          15s

At this point Connectware is upgraded and started. You can now make additional configurations or verify the upgrade status in the Connectware Admin UI.

For more information on the Connectware Admin UI, see the Connectware documentation.

Troubleshooting pod stages

If a pod is in an unexpected state or is stuck at a certain stage for more than three minutes, there might be an issue. To investigate, display the details of the pod:

kubectl describe pod <pod-name>
Code-Sprache: YAML (yaml)

For help on solving issues, see Troubleshooting Connectware on Kubernetes.

Rolling back the Helm upgrade

If the Helm upgrade fails, and it is not possible to immediately identify and fix the problem, you can roll back Helm upgrades using the helm rollback command.

To perform the rollback, you need to know the current REVISION of your installation. Use the command helm list -n <namespace> and note down the REVISION value displayed in the row of your Connectware installation. You will use this value decremented by one to restore the previous revision. For example, if the REVISION displayed is 8, you use 7 in the helm rollback command:

helm rollback --wait -n <namespace> <installation-name> <REVISION - 1>
Code-Sprache: YAML (yaml)
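For example, if helm list reports REVISION 8 for your installation, the complete sequence looks like this:

helm list -n <namespace>
# REVISION column shows 8, so roll back to revision 7:
helm rollback --wait -n <namespace> <installation-name> 7
Code-Sprache: YAML (yaml)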

Note: This rolls back your Helm upgrade and starts your previous version of Connectware. If the attempted upgrade already made modifications, you might additionally need to restore Connectware from a backup.

When you have chosen between Connectware LTS and the regular Connectware release, you are ready to install it in your target cluster.

To install Connectware on Kubernetes, you must complete the following tasks:

  1. Add the Helm chart repository.
  2. Create a values.yaml file.
  3. Install Connectware.
  4. Verify the installation.
  5. Log in for the first time.

Prerequisites for installing Connectware on Kubernetes

Before you start with the Connectware installation, make sure that you meet the following prerequisites:

Adding the Helm chart repository

To use the Connectware Helm chart, add the Connectware Helm chart repository.

Example

helm repo add <local-repo> https://repository.cybus.io/repository/connectware-helm
Code-Sprache: YAML (yaml)

Setting up the values.yaml file

The values.yaml file is the configuration file for an application that is deployed through Helm. It allows you to configure your Connectware installation. For example, you can edit deployment parameters, manage resources, and upgrade Connectware to a new version.

In this documentation, we will focus on a basic Kubernetes configuration and commonly used parameters.

Note: We recommend that you store the values.yaml file in a version control system.

Creating a copy of the default values.yaml file

A Helm chart contains a default configuration. It is likely that you only need to customize some of the configuration parameters. We recommend that you create a copy of the default values.yaml file named default-values.yaml and a new, empty values.yaml file to customize specific parameters.

Example

helm show values cybus/connectware > default-values.yaml
Code-Sprache: YAML (yaml)

Creating a values.yaml file

When you have created the default-values.yaml file, you can create the values.yaml file to add your custom configuration parameters.

  1. Enter the following command. Substitute the editor vi with your preferred editor.

Example

vi values.yaml
Code-Sprache: YAML (yaml)

Specifying the license key

To install Connectware, you need a valid license key.

  1. In the values.yaml file, specify the license key in the Helm value global.licenseKey.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
Code-Sprache: YAML (yaml)

Specifying the broker cluster secret

You must specify a secret for the broker cluster. The cluster secret value is used to secure your broker cluster, just like a password.

Important: Treat the broker cluster secret with the same level of care as a password.

  1. In the values.yaml file, specify the broker cluster secret in the Helm value global.broker.clusterSecret.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
Code-Sprache: YAML (yaml)

Allowing immutable labels

For a fresh Connectware installation, we recommend that you set best-practice labels on immutable workload objects like StatefulSet volumes.

  1. In the values.yaml file, set the Helm value global.setImmutableLabels to true.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
Code-Sprache: YAML (yaml)

Specifying the broker cluster replica count (optional)

By default, Connectware uses three nodes for the broker cluster that moves your data. You can specify a custom number of broker nodes. For example, increase the number of broker nodes to handle higher data loads, or decrease it for a testing environment.

  1. In the values.yaml file, specify the number of broker nodes in the Helm value global.broker.replicaCount.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
    replicaCount: 5
  setImmutableLabels: true
Code-Sprache: YAML (yaml)

Activating a separate control-plane broker (optional)

By default, Connectware uses the same broker for data payload processing and control-plane communication. You can use a separate control-plane broker instead. This is useful for production environments because it provides higher resilience and better manageability in cases where the data broker becomes slow to respond under high load.

  1. In the values.yaml file, set the Helm value global.controlPlaneBroker.enabled to true.
  2. Specify a broker cluster secret in the Helm value global.controlPlaneBroker.clusterSecret.

Important: Treat the broker cluster secret with the same level of care as a password.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  controlPlaneBroker:
    enabled: true
    clusterSecret: ahciaruighai_t2G # example value
Code-Sprache: YAML (yaml)

Tip: You can activate/deactivate this option within a scheduled maintenance window.

Specifying which StorageClass Connectware should use (optional)

A Kubernetes cluster can offer several StorageClasses. You can specify which StorageClass Connectware should use.

  1. In the values.yaml file, specify the StorageClass in the Helm value global.storage.storageClassName.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  storage:
    storageClassName: gp2 # example value
Code-Sprache: YAML (yaml)

There are several configuration parameters to control the StorageClass of each volume that Connectware uses.

Specifying CPU and memory resources (optional)

By default, Connectware is configured for high-performance systems, following the guaranteed Quality of Service (QoS) class. However, you can use the Kubernetes resource management values requests and limits to specify the CPU and memory resources that Connectware is allowed to use.

Important: Adjusting CPU and memory resources can impact the performance and availability of Connectware. When you customize the settings for CPU and memory resources, make sure that you monitor the performance and make adjustments if necessary.

  1. In the values.yaml file, specify the CPU and memory limits and requests in the Helm value global.podResources. Specify the limits and requests as Kubernetes quantities.
  2. You can use the default values shipped with Connectware as a starting point. You can find them in the default-values.yaml file that you created earlier.

Example

global:
  licenseKey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  podResources:
    distributedProtocolMapper:
      limits:
        cpu: 2000m
        memory: 3000Mi
      requests:
        cpu: 1500m
        memory: 1500Mi
Code-Sprache: YAML (yaml)

Related links

Starting the Connectware installation

When you are done customizing your installation through your Helm values, you can deploy Connectware onto your Kubernetes cluster.

  1. Enter the helm install command.
  2. Specify the installation name. For example, connectware.
  3. Specify the target namespace. For example, cybus.

Example

helm install <installation-name> cybus/connectware -f ./values.yaml -n <namespace> --create-namespace
Code-Sprache: YAML (yaml)

This deploys Connectware according to your kubectl configuration.

Verifying the Connectware installation

You can monitor the Connectware installation progress to verify that everything runs smoothly, to know when the installation is successful, or to investigate potential issues.

Monitoring the Connectware installation progress

The Connectware installation can take a few minutes. To monitor the installation process, do one of the following:

kubectl get pods -n <namespace>
Code-Sprache: YAML (yaml)
while true; do clear; kubectl get pod -n <namespace>; sleep 5; done
Code-Sprache: YAML (yaml)

Pod stages during the Connectware installation

During the Connectware installation, the pods go through the following stages:

When pods reach the STATUS Running, they go through their individual startup before reporting as Ready. To be fully functional, all pods must reach the STATUS Running and report all their containers as ready. This is indicated by them showing the same number on both sides of the / in the column READY.

Example

$ kubectl get pod -n <namespace>
Code-Sprache: YAML (yaml)
NAME                                     READY   STATUS    RESTARTS   AGE
admin-web-app-7cd8ccfbc5-bvnzx           1/1     Running   0          3h44m
auth-server-5b8c899958-f9nl4             1/1     Running   0          3m3s
broker-0                                 1/1     Running   0          3h44m
broker-1                                 1/1     Running   0          2m1s
connectware-7784b5f4c5-g8krn             1/1     Running   0          21s
container-manager-558d9c4cbf-m82bz       1/1     Running   0          3h44m
doc-server-55c77d4d4c-nwq5f              1/1     Running   0          3h44m
ingress-controller-6bcf66495c-l5dpk      1/1     Running   0          18s
postgresql-0                             1/1     Running   0          3h44m
protocol-mapper-67cfc6c848-qqtx9         1/1     Running   0          3h44m
service-manager-f68ccb767-cftps          1/1     Running   0          3h44m
system-control-server-58f47c69bf-plzt5   1/1     Running   0          3h44m
workbench-5c69654659-qwhgc               1/1     Running   0          15s

At this point Connectware is installed and started. You can now make additional configurations or verify the installation status in the Connectware Admin UI.

For more information on the Connectware Admin UI, see the Connectware documentation.

Troubleshooting pod stages

If a pod is in an unexpected state or is stuck at a certain stage for more than three minutes, there might be an issue. To investigate, display the details of the pod:

kubectl describe pod <pod-name>
Code-Sprache: YAML (yaml)

For help on solving issues, see Troubleshooting Connectware on Kubernetes.

Obtaining the local name of your Connectware Helm repository

The local name of your Connectware Helm repository is the name that you assigned when you added the following repository URL: https://repository.cybus.io/repository/connectware-helm

For the code examples in this documentation, we use the variable <local-repo> to refer to the local name of your Connectware Helm repository.
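If you are unsure of the local name, list your configured Helm repositories and look for the entry that points to the URL above:

helm repo list
Code-Sprache: YAML (yaml)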

Obtaining the name, namespace, and version of your Connectware installation

If you want to upgrade and configure Connectware, you must know the name, namespace, and version of your Connectware installation.

Prerequisites

Procedure
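For example, you can list the Helm installations in all namespaces that you can access:

helm list --all-namespaces
Code-Sprache: YAML (yaml)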

Result

The name, namespace, and version number of your Connectware installation are displayed in the NAME, NAMESPACE, and APP VERSION columns. If you have trouble locating your Connectware installation in the list, look for connectware in the CHART column.

NAME          NAMESPACE     REVISION   UPDATED                                   STATUS     CHART               APP VERSION
connectware   connectware   4          2022-12-01 17:04:16.664663648 +0100 CET   deployed   connectware-1.1.0   1.1.0

For the code examples in this documentation, we use the following variables:

Extracting the values.yaml file

The Helm configuration that Connectware uses is stored in the values.yaml file. You can extract the values.yaml file from your installation.

Prerequisites

Procedure

helm get values <installation-name> -n <namespace> -o yaml > values.yaml
Code-Sprache: YAML (yaml)

For the code examples in this documentation, we use the variable <values.yaml> to refer to the currently used Helm values.

Related links

Applying Helm configuration changes

When you have changed the Helm configuration in your values.yaml file, you must apply the changes via a Helm upgrade.

Important: When you apply the changes that you have made to the values.yaml file, the former configuration is overwritten. We recommend that you apply configuration changes during planned maintenance times.

Prerequisites

Procedure

  1. In the values.yaml file, edit the configuration parameters.
    • Note: Make sure to stick to the YAML indentation rules.
  2. To apply the changed configuration parameters, enter the following command:
helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version <current-version> -f values.yaml
Code-Sprache: YAML (yaml)

Prerequisites

Follow this guide if you want to install agents using the connectware-agent Helm chart without providing your license key.

You have two options to achieve this: use a manually created pull secret, or provide the image through a custom registry.

Either way, the agents will still verify a valid license for your Connectware once you register them.

Installing Using a Manually Created Pull Secret

If you don’t provide a license key, you can still install the agent by referencing an already existing Kubernetes secret of type kubernetes.io/dockerconfigjson.

Connectware creates such a secret named cybus-docker-registry. If you install agents in the same namespace as Connectware itself, you can simply reuse this secret by listing it in the protocolMapperAgentDefaults.image.pullSecrets list of your values.yaml file:

# not needed when supplying another pullSecret
# licenseKey:
protocolMapperAgentDefaults:
  image:
    pullSecrets:
      - name: cybus-docker-registry
Code-Sprache: YAML (yaml)

If your Connectware installation is in a different namespace, you can copy the secret from your Connectware namespace to the target namespace by using this command:

kubectl get secret cybus-docker-registry --namespace=<connectware-namespace> -o yaml | sed 's/namespace: .*/namespace: <agent-namespace>/' | kubectl apply -f -
Code-Sprache: YAML (yaml)

Example

kubectl get secret cybus-docker-registry --namespace=connectware -o yaml | sed 's/namespace: .*/namespace: connectware-agent/' | kubectl apply -f -
Code-Sprache: YAML (yaml)

If you need to copy between Kubernetes clusters, you can use the --context parameter of kubectl to target your local contexts.
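A sketch of such a cross-cluster copy; <source-context> and <target-context> are placeholders for your own kubectl context names:

kubectl --context <source-context> get secret cybus-docker-registry --namespace=<connectware-namespace> -o yaml | sed 's/namespace: .*/namespace: <agent-namespace>/' | kubectl --context <target-context> apply -f -
Code-Sprache: YAML (yaml)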

Of course, you can also use a completely manually created secret, as long as it provides access to the registry used to pull the agent's protocol-mapper image.

Installing Using a Custom Registry

You can also use a custom registry to provide the protocol-mapper image for the agent.

In this case, set the registry in the image.registry value inside the protocolMapperAgentDefaults section of your values.yaml file.

Example

# not needed when supplying another image registry
# licenseKey:
protocolMapperAgentDefaults:
  image:
    registry: registry.company.tld/cybus
Code-Sprache: YAML (yaml)

If your custom registry requires authentication, you must also provide a manually created kubernetes.io/dockerconfigjson secret in the protocolMapperAgentDefaults.image.pullSecrets list of your values.yaml file.

Example

# not needed when supplying another image registry
# licenseKey:
protocolMapperAgentDefaults:
  image:
    registry: registry.company.tld/cybus
    pullSecrets:
      - name: my-company-pull-secret
Code-Sprache: YAML (yaml)

Hint: kubernetes.io/dockerconfigjson secrets can be created with this command:

kubectl create secret docker-registry <secret-name> --docker-server=<registry-address> --docker-username=<username> --docker-password=<password> --docker-email=<user-email>
Code-Sprache: YAML (yaml)
