This Cybus Kubernetes guide provides a detailed procedure to help admins adjust the persistent volume content permissions to ensure a smooth upgrade to Connectware 1.5.0 in Kubernetes environments.
Important: Connectware 1.5.0 introduces a significant change regarding container privileges. Containers are now started without root privileges. As a result, files that were persisted on volumes by a different user than the one now accessing them cannot easily have their permissions changed. This upgrade therefore requires the admin to manually update the permissions.
Prerequisites:

- kubectl configured with access to the target installation
- The connectware Helm chart in version 1.5.0
- The connectware-agent Helm chart in version 1.1.0
- The values.yaml file specific to your installation

Before you begin the upgrade process, ensure that you have made the necessary preparations and that all relevant stakeholders are involved.
The following steps give you an overview of what you need to do. See below for detailed step-by-step instructions.

- Generate the volume lists for your installation with the create-pvc-yaml.sh script and add them to the kubernetes-job.yml file as described.
- Run the Kubernetes job. For most volumes, it sets the ownership to 1000:0 and the permissions to 770.
- For the PostgreSQL volume, the job sets the ownership to 70:0 and the permissions to 770.

Create a file named create-pvc-yaml.sh containing the following:

#!/usr/bin/env bash
#
# Creates a Kubernetes resource list of volumeMounts and volumes for Connectware (agent) deployments
#
function usage {
printf "Usage:n"
printf "$0 [--kubeconfig|-kc <kubeconfig_file>] [--context|-ctx <target_context>] [--namespace|-n <target_namespace>] [--no-external-agents] [--no-connectware]n"
exit 1
}
function argparse {
while [ $# -gt 0 ]; do
case "$1" in
--kubeconfig|-kc)
# a kube_config file for the Kubernetes cluster access
export KUBE_CONFIG="${2}"
shift
;;
--context|-ctx)
# a KUBE_CONTEXT for the Kubernetes cluster access
export KUBE_CONTEXT="${2}"
shift
;;
--namespace|-n)
# the Kubernetes cluster namespace to operate on
export NAMESPACE="${2}"
shift
;;
--no-external-agents)
export NO_EXTERNAL_AGENTS=true
shift
;;
--no-connectware)
export NO_CONNECTWARE=true
shift
;;
*)
printf "ERROR: Parameters invalidn"
usage
esac
shift
done
}
#
# init
export NO_EXTERNAL_AGENTS=false
export NO_CONNECTWARE=false
export KUBECTL_BIN=$(command -v kubectl)
argparse $*
shopt -s expand_aliases
# Check for kubectl parameters and construct the ${KUBECTL_CMD} command to use
if [ ! -z $KUBE_CONFIG ]; then
KUBECONFIG_FILE_PARAM=" --kubeconfig=${KUBE_CONFIG}"
fi
if [ ! -z $KUBE_CONTEXT ]; then
KUBECONFIG_CTX_PARAM=" --context=${KUBE_CONTEXT}"
fi
if [ ! -z $NAMESPACE ]; then
NAMESPACE_PARAM=" -n${NAMESPACE}"
fi
KUBECTL_CMD=${KUBECTL_BIN}${KUBECONFIG_FILE_PARAM}${KUBECONFIG_CTX_PARAM}${NAMESPACE_PARAM}
volumes=$(${KUBECTL_CMD} get pvc -o name | sed -e 's/persistentvolumeclaim\///g' )
valid_pvcs=""
volume_yaml=""
volumemounts_yaml=""
if [[ "${NO_CONNECTWARE}" == "false" ]]; then
valid_pvcs=$(cat << EOF
system-control-server-data
certs
brokerdata-*
brokerlog-*
workbench
postgresql-postgresql-*
service-manager
protocol-mapper-*
EOF
)
fi
if [[ "${NO_EXTERNAL_AGENTS}" == "false" ]]; then
# Add volumes from agent Helm chart
pvc_volumes=$(${KUBECTL_CMD} get pvc -o name -l connectware.cybus.io/service-group=agent | sed -e 's/persistentvolumeclaim\///g' )
valid_pvcs="${valid_pvcs}
${pvc_volumes}"
fi
# Collect volumeMounts
for pvc in $volumes; do
for valid_pvc in $valid_pvcs; do
if [[ "$pvc" =~ $valid_pvc ]]; then
volumemounts_yaml="${volumemounts_yaml}
- name: $pvc
mountPath: /mnt/connectware_$pvc"
break
fi
done
done
# Collect volumes
for pvc in $volumes; do
for valid_pvc in $valid_pvcs; do
if [[ "$pvc" =~ $valid_pvc ]]; then
volume_yaml="${volume_yaml}
- name: $pvc
  persistentVolumeClaim:
    claimName: $pvc"
break
fi
done
done
# print volumeMounts
echo
echo "Copy this as the "volumeMounts:" section:"
echo "######################################"
echo -n " volumeMounts:"
echo "$volumemounts_yaml"
# print volumes
echo
echo "Copy this as the "volumes:" section:"
echo "######################################"
echo -n " volumes:"
echo "$volume_yaml"
kubectl config use-context <my-cluster>
kubectl config set-context --current --namespace <my-connectware-namespace>
chmod u+x create-pvc-yaml.sh
./create-pvc-yaml.sh
Example output:
Copy this as the "volumeMounts:" section:
######################################
volumeMounts:
- name: brokerdata-broker-0
  mountPath: /mnt/connectware_brokerdata-broker-0
- name: brokerdata-control-plane-broker-0
  mountPath: /mnt/connectware_brokerdata-control-plane-broker-0
- name: brokerlog-broker-0
  mountPath: /mnt/connectware_brokerlog-broker-0
- name: brokerlog-control-plane-broker-0
  mountPath: /mnt/connectware_brokerlog-control-plane-broker-0
- name: certs
  mountPath: /mnt/connectware_certs
- name: postgresql-postgresql-0
  mountPath: /mnt/connectware_postgresql-postgresql-0
- name: protocol-mapper-agent-001-0
  mountPath: /mnt/connectware_protocol-mapper-agent-001-0
- name: service-manager
  mountPath: /mnt/connectware_service-manager
- name: system-control-server-data
  mountPath: /mnt/connectware_system-control-server-data
- name: workbench
  mountPath: /mnt/connectware_workbench

Copy this as the "volumes:" section:
######################################
volumes:
- name: brokerdata-broker-0
  persistentVolumeClaim:
    claimName: brokerdata-broker-0
- name: brokerdata-control-plane-broker-0
  persistentVolumeClaim:
    claimName: brokerdata-control-plane-broker-0
- name: brokerlog-broker-0
  persistentVolumeClaim:
    claimName: brokerlog-broker-0
- name: brokerlog-control-plane-broker-0
  persistentVolumeClaim:
    claimName: brokerlog-control-plane-broker-0
- name: certs
  persistentVolumeClaim:
    claimName: certs
- name: postgresql-postgresql-0
  persistentVolumeClaim:
    claimName: postgresql-postgresql-0
- name: protocol-mapper-agent-001-0
  persistentVolumeClaim:
    claimName: protocol-mapper-agent-001-0
- name: service-manager
  persistentVolumeClaim:
    claimName: service-manager
- name: system-control-server-data
  persistentVolumeClaim:
    claimName: system-control-server-data
- name: workbench
  persistentVolumeClaim:
    claimName: workbench
Create a file named kubernetes-job.yml containing the following:

---
apiVersion: batch/v1
kind: Job
metadata:
  name: connectware-fix-permissions
  labels:
    app: connectware-fix-permissions
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
      - name: cybus-docker-registry
      containers:
      - image: registry.cybus.io/cybus/connectware-fix-permissions:1.5.0
        securityContext:
          runAsUser: 0
        imagePullPolicy: Always
        name: connectware-fix-permissions
        # Insert the volumeMounts section here
        # volumeMounts:
        # - name: brokerdata-broker-0
        #   mountPath: /mnt/connectware_brokerdata-broker-0
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
      # Insert the volumes section here
      # volumes:
      # - name: brokerdata-broker-0
      #   persistentVolumeClaim:
      #     claimName: brokerdata-broker-0
Edit kubernetes-job.yml and integrate the output of the create-pvc-yaml.sh script.

Scale down your Connectware and Connectware agent workloads:

kubectl scale sts,deploy -lapp.kubernetes.io/instance=<connectware-installation-name> --replicas 0
kubectl scale sts -lapp.kubernetes.io/instance=<connectware-agent-installation-name> --replicas 0
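Optionally, you can check that no Connectware or agent pods are left running before you continue. This is a quick verification that uses the same labels as the scale commands above:

kubectl get pods -l app.kubernetes.io/instance=<connectware-installation-name>
kubectl get pods -l app.kubernetes.io/instance=<connectware-agent-installation-name>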
Apply the job to start fixing the volume permissions:

kubectl apply -f kubernetes-job.yml
Follow the job logs to monitor its progress:

kubectl logs -f job/connectware-fix-permissions
Example output:
Found directory: connectware_brokerdata-broker-0. Going to change permissions
Found directory: connectware_brokerdata-broker-1. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerdata-control-plane-broker-1. Going to change permissions
Found directory: connectware_brokerlog-broker-0. Going to change permissions
Found directory: connectware_brokerlog-broker-1. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-0. Going to change permissions
Found directory: connectware_brokerlog-control-plane-broker-1. Going to change permissions
Found directory: connectware_certs. Going to change permissions
Found directory: connectware_postgresql-postgresql-0. Going to change permissions
Postgresql volume identified, using postgresql specific permissions
Found directory: connectware_service-manager. Going to change permissions
Found directory: connectware_system-control-server-data. Going to change permissions
Found directory: connectware_workbench. Going to change permissions
All done. Found 13 volumes.
Once the job has completed, delete it:

kubectl delete -f kubernetes-job.yml
Delete the existing Connectware services:

kubectl delete svc -l app.kubernetes.io/instance=<connectware-installation-name>
Upgrade Connectware using Helm:

helm upgrade -n <namespace> <installation-name> <repo-name>/connectware --version 1.5.0 -f values.yaml
Result: You have successfully upgraded Connectware to 1.5.0.
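If you want to double-check the result, you can watch the upgraded pods come up and confirm the deployed chart version. This optional check only uses standard kubectl and Helm commands:

kubectl get pods -n <namespace> --watch
helm list -n <namespace>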
create-pvc-yaml.sh
This script automatically generates Kubernetes resource lists, specifically volumeMounts and volumes, for Connectware deployments on Kubernetes. It simplifies the preparation for upgrading Connectware by:

- Detecting the persistent volume claims used by Connectware and, optionally, by agents deployed through the connectware-agent Helm chart.
- Accepting the optional parameters --kubeconfig, --context, and --namespace to specify the Kubernetes cluster and namespace.

The script outputs sections that can be directly copied and pasted into the kubernetes-job.yml file, ensuring that the correct volumes are mounted with the appropriate permissions for the upgrade process.
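For example, if you did not deploy any agents through the connectware-agent Helm chart, an invocation could look like this (an illustrative call using the script's own flags; adjust the namespace to your installation):

./create-pvc-yaml.sh --namespace <my-connectware-namespace> --no-external-agents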
kubernetes-job.yml
This YAML file defines a Kubernetes job responsible for adjusting the permissions of the volumes identified by the create-pvc-yaml.sh
script. The job is tailored to run with root privileges, enabling it to modify ownership and permissions of files and directories within PVCs that are otherwise inaccessible due to the permission changes introduced in Connectware 1.5.0.
The job’s purpose is to ensure that all persistent volumes used by Connectware are accessible by the new, non-root container user, addressing the core upgrade challenge without compromising on security by avoiding the use of root containers in the Connectware deployment itself.
If your LDAP directory uses a property other than “cn” as the username, you can specify this property in the Helm value userRdn in the global.authentication.ldap context.
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
      userRdn: SN
To use TLS for LDAP you only need to set a valid ldaps://
URL for the Helm value url
in the global.authentication.ldap
context. Remember to also adjust the TCP port number. By default LDAPS uses port 636.
Connectware will verify that the LDAP server presents a valid certificate before using it as authentication backend. Unless you have a certificate for your LDAP server that is signed by a valid root CA, you will need to provide the CA certificate that signed your LDAP server’s certificate. Alternatively you can disable certificate validation.
You can simply provide the CA certificate in the Helm value caChain.cert
in the global.authentication.ldap
context. Provide the complete certificate chain necessary to validate the LDAP server’s certificate.
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636
      caChain:
        cert: |
          -----BEGIN CERTIFICATE-----
          MIIFpTCCA40CFGFL86145m7JIg2RaKkAVCOV1H71MA0GCSqGSIb3DQEBCwUAMIGN
          [skipped for brevity - include whole certificate]
          SKnBS1Y1Dn2e
          -----END CERTIFICATE-----
As an alternative, you can provide the CA certificate through a manually created Kubernetes ConfigMap.
To provide the CA certificate necessary to validate the certificate used by your LDAP server, you can manually create a Kubernetes ConfigMap that contains the certificate as a file named ca.crt. You will then provide the name of that ConfigMap in the Helm value caChain.existingConfigMap
in the global.authentication.ldap
context.
Example
Create the Kubernetes ConfigMap from a file named ca.crt in your current directory:
kubectl -n <namespace> create cm cw-ldap-ca-cert --from-file ca.crt
Specify the name of the ConfigMap:
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636
      caChain:
        existingConfigMap: cw-ldap-ca-cert
While we do not recommend skipping certificate validation for production use, it is possible to tell Connectware to accept any certificate the LDAP server presents. To do so, simply set the Helm value caChain.trustAllCertificates
in the global.authentication.ldap
context to true
.
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldaps://my-dc.company.tld:636
      caChain:
        trustAllCertificates: true
If you don’t want to provide the bind user for LDAP authentication through the Helm values bindDn
and bindPassword
within the global.authentication.ldap
context, you can also manually create a Kubernetes secret in Connectware’s namespace through your preferred method of managing secrets in Kubernetes. You will then need to provide the name of this secret in the Helm value existingBindSecret
.
This secret needs to contain two keys, bindDn
and bindPassword
, containing the parameters you did not specify directly as Helm values. If you want to use different keys, you can customize these as shown below.
Example
Create your Kubernetes secret:
kubectl -n <namespace> create secret generic my-ldap-user --from-literal=bindDn="CN=Bind User,CN=Users,DC=company,DC=tld" --from-literal=bindPassword="S3cretPassword"
Specify the name of the Secret:
global:
  authentication:
    ldap:
      enabled: true
      existingBindSecret: my-ldap-user
      searchBase: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
If you want to customize the keys used in the Kubernetes secret, you can do so and specify the keys you want to use instead in the Helm values existingBindSecretDnKey
and existingBindSecretPasswordKey
within the global.authentication.ldap
context.
Example
Create your Kubernetes secret:
kubectl -n <namespace> create secret generic custom-ldap-user --from-literal=username="CN=Bind User,CN=Users,DC=company,DC=tld" --from-literal=password="S3cretPassword"
Specify the name of the Secret in your values.yaml file:
global:
  authentication:
    ldap:
      enabled: true
      existingBindSecret: custom-ldap-user
      existingBindSecretDnKey: username
      existingBindSecretPasswordKey: password
      searchBase: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
When configuring LDAP authentication, you need to match Connectware’s settings to the capabilities of your LDAP server. There are two fundamental decisions to make: which LDAP mode to use, and whether to use a bind user.

Connectware offers two modes for LDAP authentication: group mode, in which LDAP groups are mapped to Connectware roles, and attribute mode, in which an LDAP attribute of the user specifies the Connectware role. You can read about them in the Connectware documentation. By default, “group” mode is activated.

A bind user is common in LDAP setups that use a more complicated directory structure. It is a limited user that you create in your LDAP directory, usually a read-only user with the permission to search through the LDAP directory tree.

It is used when users don’t share a single LDAP base DN (e.g. they are not in the same group). If your users are spread across the directory tree, you will likely want to use a bind user.
To enable the LDAP feature in Connectware, you need to set the Helm value global.authentication.ldap.enabled
to true
.
Additionally, you always need to provide these Helm values within the global.authentication.ldap
context:
Value | Example | Description |
---|---|---|
bindDn | CN=Users,DC=example,DC=org | bindDN contains either the LDAP base DN of users logging in, or the DN of a dedicated bind user that is able to search for the user trying to log in within the search base. |
url | ldap://dc.mycompany.tld:389 | URL of the LDAP server in format schema://hostname:port |
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
If you are using a bind user to search through the directory tree, you must specify the full DN of the bind user as bindDn
, and also need to provide these values:
Value | Example | Description |
---|---|---|
bindPassword | ANc97WCO"!xcC=( | bindPassword contains the password for the bind user as defined in your LDAP server. |
searchBase | CN=Users,DC=example,DC=org | searchBase contains the base DN shared by all users, acting as the root from which the search for the user logging in is performed. |
url | ldap://dc.mycompany.tld:389 | URL of the LDAP server in format schema://hostname:port |
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=connectwarebinduser,CN=Users,DC=company,DC=tld
      bindPassword: SuperS3cret!
      url: ldap://my-dc.company.tld:389
      searchBase: CN=Users,DC=company,DC=tld
If you don’t want to provide the bind user and its password through your Helm values, for example because you follow a GitOps approach for your Connectware deployment, you can also provide the bind user through a manually created Kubernetes secret that is specified in existingBindSecret
. You can find detailed instructions in this article.
By providing a bindPassword
through one of these mechanisms, the nature of bindDn
changes from being a single base DN that contains all users that are allowed to log into Connectware, to containing the DN of a single user – the bind user. In this scenario, searchBase
takes the role of containing the base DN which all users share, acting as the root from which a search for valid users will be performed.
To configure Connectware to use LDAP in group mode, you need to specify the LDAP attribute of your users that specifies which LDAP groups they are part of. This is done through the Helm value memberAttribute
within the global.authentication.ldap
context. Additionally, mode
must be set to group
.
The default value of memberOf
is often the correct choice, but you may have to adapt this to your LDAP server.
These LDAP groups are then mapped to Connectware roles using the Connectware UI as described in the Connectware docs.
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
      mode: group
      memberAttribute: memberOf
To configure Connectware to use LDAP in attribute mode, you need to specify the LDAP attribute of your users that specifies the Connectware role associated with the user. This is done through the Helm value rolesAttribute
within the global.authentication.ldap
context. Additionally, mode
must be set to attribute
.
The default value of employeeType
is often the correct choice, but you may have to adapt this to your LDAP server.
Example
global:
  authentication:
    ldap:
      enabled: true
      bindDn: CN=Users,DC=company,DC=tld
      url: ldap://my-dc.company.tld:389
      mode: attribute
      rolesAttribute: employeeType
Connectware supports connecting to LDAP servers that offer Transport Layer Security. You can find out how to configure this in this article.
You can provide the bind user through a manually created Kubernetes secret that is specified in existingBindSecret
. You can find detailed instructions in this article.
By default, the username trying to log in acts as the search filter, but there may be advanced situations where this is not enough, for example when it matches multiple users. Visit this article to learn how to customize the search filter.
The user RDN describes what LDAP attribute contains the username. By default this uses cn
, but if this is not correct for your LDAP setup, you can customize this using the userRdn
Helm value. Find out more in this article.
There are scenarios where it is useful to extend the default search filter of Connectware, for example when multiple directory entries would otherwise match the same username.
The filter that will be used by Connectware is (<userRdn>=<username>), where userRdn is defined in your values.yaml and username is the name the user enters during login.
Any extension will result in a filter of the following format:

(&(<userRdn>=<username>)(<your extension>))
Info: You can test the filter by performing a request with ldapsearch on your terminal (this may require additional packages to be installed).
Example:
ldapsearch -L -b "dc=example,dc=org" -D "cn=admin,dc=example,dc=org" -w admin_pass "(&(cn=User 1)(objectclass=iNetOrgPerson))"
Example
In the following example, we have two entries with an RDN cn=a.smith
.
dc=example,dc=org
├ cn=customers
│ └ cn=a.smith
└ cn=employees
  └ cn=a.smith
Both users are named a.smith, but they are different entries. In a case like this you would use cn=employees,dc=example,dc=org as the search base and would not actually have a problem. But let’s use dc=example,dc=org in order to create a simple example case for the filter extension.
We want to modify the filter in order to search only for entries that have cn=employees
in their DN.
The search command to test on the terminal for the employee a.smith looks like this:
ldapsearch -L -b "dc=example,dc=org" -D "cn=admin,dc=example,dc=org" -w admin_pass "(&(cn=a.smith)(cn:dn:=employees))"
To modify Connectware, we only add the extension itself (cn:dn:=employees) to the configuration:
global:
authentication:
ldap:
enabled: true
existingBindSecret: my-ldap-user
searchBase: CN=Users,DC=company,DC=tld
searchFilter: cn:dn:=employees
userRdn: cn
url: ldap://my-dc.company.tld:389
Code-Sprache: YAML (yaml)
Important: Be aware that no surrounding brackets are used for the additional expression. Brackets within your expression can be used, e.g. &(objectClass=iNetOrgPerson)(cn:dn:=employees)
In this documentation, we use different variables in the code examples to explain the installation and configuration of Connectware. When you install and configure your Connectware, you can create your own variables.
The following variables are used in this documentation:
Name | Variable |
---|---|
Name of the Connectware installation | <installation-name> |
Namespace of the Connectware installation | <namespace> |
values.yaml file | <values.yaml> |
Version number of the Connectware installation | <current-version> |
Version number of the Connectware version that you want to upgrade to | <target-version> |
Local name of the Connectware Helm repository | <local-repo> |
Example
diff <(helm show values <local-repo>/connectware --version <current-version>) <(helm show values <local-repo>/connectware --version <target-version>)
The values.yaml file is the configuration file for an application that is deployed through Helm. The values.yaml file allows you to configure your Connectware installation. For example, edit deployment parameters, manage resources, and update your Connectware to a new version.
In this documentation, we will focus on a basic Kubernetes configuration and commonly used parameters.
Note: We recommend that you store the values.yaml file in a version control system.
A Helm chart contains a default configuration. It is likely that you only need to customize some of the configuration parameters. We recommend that you create a copy of the default values.yaml file named default-values.yaml and a new, empty values.yaml file to customize specific parameters.
Example
helm show values cybus/connectware > default-values.yaml
When you have created the default-values.yaml file, you can create the values.yaml file to add your custom configuration parameters.
Example
vi values.yaml
To install Connectware, you need a valid license key. Specify the license key in the Helm value global.licenseKey.

Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
You must specify a secret for the broker cluster. The cluster secret value is used to secure your broker cluster, just like a password.
Important: Treat the broker cluster secret with the same level of care as a password.
Specify the broker cluster secret in the Helm value global.broker.clusterSecret.

Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
For a fresh Connectware installation, we recommend that you set best-practice labels on immutable workload objects like StatefulSet volumes.
To do this, set the Helm value global.setImmutableLabels to true.

Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
By default, Connectware uses three nodes for the broker cluster that moves data. You can specify a custom number of broker nodes. For example, increase the broker nodes to handle higher data loads or decrease the broker nodes for a testing environment.
To do this, set the Helm value global.broker.replicaCount to the number of broker nodes that you want to use.

Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
    replicaCount: 5
  setImmutableLabels: true
By default, Connectware uses the same broker for data payload processing and control-plane communication. You can use a separate control-plane broker. This might be useful for production environments, as it provides higher resilience and better manageability in cases of the data broker becoming slow to respond due to high load.
To do this, set the Helm value global.controlPlaneBroker.enabled to true and specify a cluster secret in global.controlPlaneBroker.clusterSecret.

Important: Treat the broker cluster secret with the same level of care as a password.
Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  controlPlaneBroker:
    enabled: true
    clusterSecret: ahciaruighai_t2G # example value
Tip: You can activate/deactivate this option within a scheduled maintenance window.
A broker cluster can contain several Kubernetes StorageClasses. You can specify which StorageClass Connectware should use.
To do this, set the Helm value global.storage.storageClassName to the name of the StorageClass that you want to use.

Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  storage:
    storageClassName: gp2 # example value
There are several configuration parameters to control the StorageClass of each volume that Connectware uses.
By default, Connectware is configured for high-performance systems and according to the guaranteed Quality of Service (QoS) class. However, you can use the Kubernetes resource management values requests and limits to specify the CPU and memory resources that Connectware is allowed to use.
Important: Adjusting CPU and memory resources can impact the performance and availability of Connectware. When you customize the settings for CPU and memory resources, make sure that you monitor the performance and make adjustments if necessary.
Example
global:
  licensekey: cY9HiVZJs8aJHG1NVOiAcrqC_ # example value
  broker:
    clusterSecret: Uhoo:RahShie6goh # example value
  setImmutableLabels: true
  podResources:
    distributedProtocolMapper:
      limits:
        cpu: 2000m
        memory: 3000Mi
      requests:
        cpu: 1500m
        memory: 1500Mi
Remote agents are Connectware agents that are not directly integrated into the Connectware installation, but standalone deployments that are managed separately.
A common use case for remote agents is using a target infrastructure that is closer to the shop floor than the Connectware installation, but from a Kubernetes point of view they can also be deployed in the same namespace.
Cybus offers a Helm chart to conveniently deploy remote agents.
Please review the requirements for your Kubernetes cluster in Kubernetes cluster requirements for the connectware-agent Helm chart before proceeding with Installing Connectware agents using the connectware-agent Helm chart or Installing Connectware agents without a license key using the connectware-agent Helm chart.
Make sure that your Kubernetes cluster satisfies these requirements before installing the connectware-agent
Helm chart.
- Persistent volumes supporting the ReadWriteOnce access mode
- 2000m CPU
- 2000Mi memory
- kubectl installed (Install Tools)
- kubectl configured with the current context pointing to your target cluster (Configure Access to Multiple Clusters)

You can configure your Connectware agents deployed through the connectware-agent Helm chart by adjusting Helm values in your values.yaml file and re-applying it using a Helm upgrade.
Example:
helm upgrade -i connectware-agent cybus/connectware-agent -f values.yaml -n <namespace>
If you need help starting out with a values.yaml file, follow the Installing Connectware agents using the connectware-agent Helm chart article.
In our examples we will explain the parameters in the protocolMapperAgents
Helm context, but unless otherwise noted they are also available to configure through protocolMapperAgentDefaults
as mentioned in Configuration principles for the connectware-agent Helm chart.
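As an illustration of this principle, the following sketch sets connectwareHost once for all agents and overrides the image version for a single agent. The agent names and the version number are assumptions, not part of the official example:

protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
protocolMapperAgents:
- name: bender-robots
- name: welder-robots
  image:
    version: 1.5.0 # hypothetical override for this agent only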
Connectware’s agents are part of a Kubernetes StatefulSet. If any of them are not in the state “running” when you execute helm upgrade, you will need to manually delete the pod afterwards so that an updated pod is scheduled, as shown in the sketch below.
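A minimal sketch of that manual deletion; the pod name is a placeholder that depends on your installation:

# List the agent pods and check their status
kubectl get pods -n <namespace>

# Delete a pod that is not in the "Running" state so that the StatefulSet
# reschedules it with the updated configuration
kubectl delete pod <agent-pod-name> -n <namespace>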
These values are on the root level of your values.yaml file.
Helm value | Description | Discussed in |
---|---|---|
licenseKey | A valid license for Cybus Connectware | Installing Connectware agents using the connectware-agent Helm chart |
protocolMapperAgentDefaults | This set of configuration values is applied to all agents, unless they override specific values | Configuration principles for the connectware-agent Helm chart |
protocolMapperAgents | A collection of Connectware agents to be deployed. Each collection entry can contain configuration to override the defaults | Configuration principles for the connectware-agent Helm chart |
fullnameOverride | Override the full name of this installation, which is used as a name prefix. Use "" to remove prefixing | Controlling the name of Kubernetes objects for the connectware-agent Helm chart |
nameOverride | Override the chart name of this installation, which is used as part of the name prefix | Controlling the name of Kubernetes objects for the connectware-agent Helm chart |
These values are within the protocolMapperAgentDefaults
section and control the behavior of all deployed agents.
Helm value | Description | Discussed in |
---|---|---|
connectwareHost | DNS name under which the Connectware installation is available to the agent | Configuring target Connectware for the connectware-agent Helm chart |
controlPlaneBrokerEnabled | Define if the Connectware installation uses the separate control-plane-broker feature | Configuring target Connectware for the connectware-agent Helm chart |
image.name | The name of the container image used for the agent | Configuring image name and version for the connectware-agent Helm chart |
image.version | Container version or tag used for the agent | Configuring image name and version for the connectware-agent Helm chart |
image.registry | Container image registry to be used for the agent. Set to "" to not specify a registry | Using a custom image registry for the connectware-agent Helm chart |
image.pullPolicy | Kubernetes imagePullPolicy used for the agent. One of: Always, Never, IfNotPresent | Configuring image pull policy for the connectware-agent Helm chart |
image.pullSecrets | A collection of objects containing Kubernetes imagePullSecrets with a name attribute, to be used by the agent | Using a custom image registry for the connectware-agent Helm chart |
mTLS.enabled | Define if mTLS (Certificate Authentication) is enabled | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart |
mTLS.caChain.cert | The Certificate Authority certificate chain as a literal PEM encoded string | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart |
mTLS.caChain.existingConfigMap | An existing Kubernetes ConfigMap containing the Certificate Authority certificate chain in a file named ca-chain.pem | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart |
mqtt.tls | Define if TLS (Transport Encryption) is enabled | Configuring target Connectware for the connectware-agent Helm chart |
mqtt.controlHost | Override the default host for the control-plane MQTT connection | Configuring target Connectware for the connectware-agent Helm chart |
mqtt.dataHost | Override the default host for the data MQTT connection | Configuring target Connectware for the connectware-agent Helm chart |
mqtt.controlPort | Override the default port for the control-plane MQTT connection | Configuring target Connectware for the connectware-agent Helm chart |
mqtt.dataPort | Override the default port for the data MQTT connection | Configuring target Connectware for the connectware-agent Helm chart |
persistence.accessMode | The Kubernetes AccessMode to request for the persistent volume. One of: ReadWriteOnce, ReadWriteMany, ReadWriteOncePod | Configuring agent persistence for the connectware-agent Helm chart |
persistence.size | A Kubernetes Quantity to request as size for the persistent volume | Configuring agent persistence for the connectware-agent Helm chart |
persistence.storageClassName | The name of the Kubernetes StorageClass to request for the persistent volume | Configuring agent persistence for the connectware-agent Helm chart |
podAntiAffinity | Define what type of podAntiAffinity to use for the agent. One of: none, soft, hard | Configuring podAntiAffinity for the connectware-agent Helm chart |
podAntiAffinityOptions | Define configuration values specific to podAntiAffinity | Configuring podAntiAffinity for the connectware-agent Helm chart |
resources.requests.cpu | Kubernetes Quantity that describes the agent's CPU requests | Configuring compute resources for the connectware-agent Helm chart |
resources.requests.memory | Kubernetes Quantity that describes the agent's memory requests | Configuring compute resources for the connectware-agent Helm chart |
resources.limits.cpu | Kubernetes Quantity that describes the agent's CPU limits | Configuring compute resources for the connectware-agent Helm chart |
resources.limits.memory | Kubernetes Quantity that describes the agent's memory limits | Configuring compute resources for the connectware-agent Helm chart |
env | A collection of objects with name and value describing environment variables passed to the agent | Configuring environment variables for the connectware-agent Helm chart |
annotations | A set of Kubernetes annotations to be added to all agent resources | Configuring environment variables for the connectware-agent Helm chart |
labels | A set of Kubernetes labels to be added to all agent resources | Configuring labels and annotations for the connectware-agent Helm chart |
podAnnotations | A set of Kubernetes annotations to be added to the agent pod only | Configuring labels and annotations for the connectware-agent Helm chart |
podLabels | A set of Kubernetes labels to be added to the agent pod only | Configuring labels and annotations for the connectware-agent Helm chart |
nodeSelector | A set of Kubernetes labels a node must have for the agent to be scheduled on it | Assigning agents to Kubernetes nodes for the connectware-agent Helm chart |
securityContext | Define the Kubernetes SecurityContext for the agent | Configuring security context for the connectware-agent Helm chart |
podSecurityContext | Define the Kubernetes SecurityContext for the agent's pod | Configuring security context for the connectware-agent Helm chart |
service.annotations | A set of Kubernetes annotations to be added to the agent's service only | Configuring labels and annotations for the connectware-agent Helm chart |
service.labels | A set of Kubernetes labels to be added to the agent's service only | Configuring labels and annotations for the connectware-agent Helm chart |
These values are within the protocolMapperAgents
section, which is a list of agents you want to deploy and are configured per entry for an agent. See Configuration principles for the connectware-agent Helm chart for details. Additionally to the values listed here, all values under protocolMapperAgentDefaults
are available per-agent.
Helm value | Description | Discussed in |
---|---|---|
name | The name of the Connectware agent. If you use mTLS, this must match the certificate's CN/SAN | Installing Connectware agents using the connectware-agent Helm chart |
mTLS.keyPair.cert | The mTLS certificate chain as a literal PEM encoded string | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart |
mTLS.keyPair.key | The mTLS private key as a literal PEM encoded string | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart |
mTLS.existingSecret | An existing Kubernetes Secret containing the mTLS certificate and key in files named tls.crt and tls.key | Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart |
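For illustration, a minimal per-agent entry that combines values from this table could look as follows. The agent name bender-robots and the Secret name bender-robots-mtls are assumptions, and the referenced Secret must contain tls.crt and tls.key as described above:

protocolMapperAgents:
- name: bender-robots # must match the certificate CN/SAN when mTLS is used
  mTLS:
    existingSecret: bender-robots-mtls # existing Kubernetes Secret with tls.crt and tls.key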
You can adjust the image pull policy used by your agents to any valid value supported by Kubernetes.
To change the image pull policy for the agent, specify the pull policy you want in the image.pullPolicy
value inside the agent's entry in the protocolMapperAgents
context of your values.yaml file.
Example
protocolMapperAgents:
- name: bender-robots
  connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
  image:
    pullPolicy: Always