Connectware agents use a PersistentVolumeClaim to persist data between restarts.
Persistence configuration parameters for existing agents can't be changed. To change them, uninstall and reinstall the agent.
By default, agents use the Kubernetes cluster's default storage class. To use a different StorageClass, set the Helm value persistence.storageClassName inside the agent's entry in the protocolMapperAgents context of your values.yaml file to the name of your StorageClass.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgents:
- name: bender-robots
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
persistence:
storageClassName: nfs-client # use the actual StorageClass name
Hint: If you are unsure which StorageClasses are available in your cluster, you can view them with the kubectl get sc command.
If your cluster does not define a default StorageClass and you don't configure this parameter, the PersistentVolumeClaim can't be fulfilled and the agent can't start.
To specify a size for the PersistentVolume used by the agent, set the Helm value persistence.size inside the agent's entry in the protocolMapperAgents context of your values.yaml file to a valid Kubernetes Quantity.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgents:
- name: bender-robots
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
persistence:
size: 500Mi
To specify an AccessMode for the PersistentVolume used by the agent, set the Helm value persistence.accessMode inside the agent's entry in the protocolMapperAgents context of your values.yaml file to a valid Kubernetes volume AccessMode.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgents:
- name: bender-robots
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
persistence:
accessMode: ReadWriteMany
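All three persistence parameters can also be combined in a single agent entry. A sketch (the StorageClass name, size, and access mode are placeholders you must adapt to your cluster):

```yaml
licenseKey: <your-connectware-license-key>
protocolMapperAgents:
  - name: bender-robots
    connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
    persistence:
      storageClassName: nfs-client # use the actual StorageClass name
      size: 500Mi                  # a valid Kubernetes Quantity
      accessMode: ReadWriteMany    # a valid Kubernetes volume AccessMode
```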
Prerequisites:
- kubectl installed (K8s Install Tools).
- kubectl configured with the current context pointing to your target cluster (Configure Access to Multiple Clusters).
When having problems with agents installed using the connectware-agent Helm chart, the first step is usually to delete any pod stuck in a state other than Running and Ready. This can easily happen because the agents are StatefulSets, which are not automatically rescheduled if they are unhealthy when their controller is updated, so they need manual intervention.
Use the kubectl get pod -l app.kubernetes.io/component=protocol-mapper-agent command to display all agent pods, then delete any pod that is in a faulty state using the kubectl delete pod <podname> command.
Example
kubectl get pod -l app.kubernetes.io/component=protocol-mapper-agent -n <namespace>
kubectl -n <namespace> delete pod <podname>
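To spot faulty pods quickly, you can filter the kubectl get pod output for any status other than Running. The snippet below sketches the filtering logic against simulated output (the pod names and states are made up for illustration); in practice you would pipe the real kubectl command into the same awk filter:

```shell
# Simulated output of:
#   kubectl get pod -l app.kubernetes.io/component=protocol-mapper-agent -n <namespace>
pods='NAME               READY   STATUS             RESTARTS   AGE
bender-robots-0    1/1     Running            0          2d
welder-robots-0    0/1     CrashLoopBackOff   12         2d
painter-robots-0   0/1     Pending            0          10m'

# Print the names of all pods whose STATUS column is not "Running";
# these are the candidates for "kubectl delete pod <podname>".
echo "$pods" | awk 'NR > 1 && $3 != "Running" { print $1 }'
```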
If this does not help, you need to look at the faulty pod's events and logs to check for helpful error messages.
Depending on the pod's state, you should look at different detail information to find the issue.
Pod state | Kind of problem | Where to check |
---|---|---|
Pending, ContainerCreating | Kubernetes is trying to create the pod. | Pod events or description (see Checking pod state). |
Running, but not ready or not behaving as expected. | Pod unready, application not working correctly. | Current pod logs (see Checking agent pod logs). |
Unknown | Pod status is unknown, Kubernetes cluster problem. | Kubernetes cluster state and events (see https://kubernetes.io/docs/tasks/debug/debug-cluster/). |
ImagePullBackOff | Image for the pod can’t be pulled. | Helm value configuration (see Verifying container image configuration). |
CrashLoopBackOff | Application is crashing. | Previous pod logs (see Checking agent pod logs). |
When a pod is not being scheduled, there can be different reasons, which fall into two categories: issues with your Kubernetes cluster, and issues with your configuration.
For both categories, you will find events detailing the problem associated with the pod. We assume that you have already identified the pod through the previous steps in this article. You will need to know the name and namespace of the pod you are trying to debug.
Use the following command to display events associated with your pod:
kubectl get event -n <namespace> --field-selector involvedObject.name=<podname>
Info: You can also view the events at the end of the output of kubectl describe pod <podname>.
Issues with your Kubernetes cluster can take many forms and are beyond the scope of this article, but you can use Debug Pods as a starting point to debug any events that indicate a problem with your Kubernetes cluster.
The following are a few common scenarios involving issues with your configuration, and how to address them.
Event mentions | Likely problem | Likely solution |
---|---|---|
FailedScheduling, Insufficient cpu, Insufficient memory | You specified CPU and memory resources for your agents, that your Kubernetes cluster can’t provide. | Review Configuring compute resources for the connectware-agent Helm chart and adjust the configured resources to something that is available in your Kubernetes cluster. |
FailedScheduling, didn’t match pod anti-affinity rules | There are no available Kubernetes nodes that can schedule the agent because of podAntiAffinity rules. | Review Configuring podAntiAffinity for the connectware-agent Helm chart and adjust your settings, or add additional nodes to your Kubernetes cluster. |
FailedMount in combination with the names you chose as mTLS secret or CA chain, or "mtls-agent-keypair" / "mtls-ca-chain" | You enabled mTLS for an agent without providing the necessary ConfigMap and Secret for CA chain and key pair. | Review Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart and adjust your configuration accordingly. |
FailedMount in combination with the names of volumes (starting with "data-") | The currently used storage provider is unable to provide the necessary volumes. | Review Configuring agent persistence for the connectware-agent Helm chart and choose a Kubernetes StorageClass that can provide the necessary volumes. |
When your pods are scheduled but don't work the way you expect, are unready, or keep crashing (status "CrashLoopBackOff"), you need to check the logs of the pod for details.
For pods that are ready or unready, check the current logs. For pods in status "CrashLoopBackOff", check the logs of the previous container to see why it crashed.
To check the current logs of your pod, use the kubectl logs command with the pod name, and look for error messages.
kubectl logs -n <namespace> <podname>
To check the logs of a previous container, follow Checking current pod logs, but add the parameter --previous to the command:
kubectl logs -n <namespace> <podname> --previous
Event mentions | Likely problem | Likely solution |
---|---|---|
Agent with mTLS enabled not connecting to broker Agent log shows: Reconnecting to mqtts://connectware:8883 Broker log shows: [warning] can't authenticate client {"ssl",<<"someName">>} from someIp due to <<"Authentication denied">> | mTLS not enabled in Connectware. | Enable mTLS in Connectware. Set Helm value global.authentication.mTLS.enabled to true. |
Agent not connecting to broker when mTLS in Connectware is enabled Agent log shows: VRPC agent connection to broker lost Reconnecting to mqtts://someIp:8883 | mTLS enabled in Connectware, but not in agent. | Enable mTLS in agent as described in Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart. |
Agent with mTLS enabled does not connect to broker Agent log showsError: Client network socket disconnected before secure TLS connection was established | Agent is connecting to the wrong MQTTS port in broker. | If your setup requires manual configuration due to additional NAT or something similar, review Configuring target Connectware for the connectware-agent Helm chart and adjust your configuration accordingly.If you are not aware of any special requirements of your environment, try removing all advanced MQTT target parameters. |
Agent with mTLS enabled does not connect to broker Agent log shows Failed to read certificates during mTLS setup please check the configuration | The certificates provided to the agent are either not found or faulty. | Review Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart and Full mTLS Examples for the connectware-agent Helm chart, and make sure your mTLS certificates fulfill the requirements. |
Allowing an mTLS enabled agent in Connectware Client Registry fails with the message "An Error has occurred – Registration failed" auth-server logs show: Unable to process request: 'POST /api/client-registry/confirm', because: Certificate Common Name does not match the username. CN: someCN, username: agentName | Agent's certificate invalid. | Review Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart and Full mTLS Examples for the connectware-agent Helm chart, and make sure your mTLS certificate CN matches the name of the agent. |
Agent with mTLS enabled does not connect to broker Agent log shows: Can not register protocol-mapper agent, because: socket hang up | Agent’s certificate invalid. | Review Using Mutual Transport Layer Security (mTLS) for agents with the connectware-agent Helm chart and Full mTLS Examples for the connectware-agent Helm chart, and make sure your mTLS certificate is signed by the correct Certificate Authority (CA). |
Agent with mTLS enabled does not connect to broker Agent log shows: Failed to register agent. Response: 409 Conflict. A conflicting registration might be pending, or a user with the same username | The username of the agent is already taken. | Every agent needs a user whose username matches the Helm value name of this agent. Verify that the agent's name is unique. Verify that there is no old agent with the same name; if there is, delete the agent using the Systems => Agents UI and delete the user using the User Management => Users and Roles UI. If you created a user with the agent's name for something else, you have to choose a different name for the agent. |
Agent pod enters state CrashLoopBackOff Agent log shows: {"level":30,"time":1670940068658,"pid":8,"hostname":"welder-robots-0","service":"protocol-mapper","msg":"Re-starting using cached credentials"} {"level":50,"time":1670940068759,"pid":8,"hostname":"someName","service":"protocol-mapper","msg":"Failed to query license at https://someIp/api/system/info probably due to authentication: 401 Unauthorized."} | The agent's credentials are no longer correct. | The agent needs to be re-registered: Delete the agent using the Systems => Agents UI. Delete the user using the User Management => Users and Roles UI. Delete the agent's StatefulSet: kubectl -n <namespace> delete sts <release-name>-<chart-name>-<agent-name>. Delete the agent's PersistentVolumeClaim: kubectl -n <namespace> delete pvc data-<release-name>-<chart-name>-<agent-name>-0. Re-apply your configuration through helm upgrade as described in Configuring agents with the connectware-agent Helm chart. |
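The StatefulSet and PersistentVolumeClaim names used in the re-registration steps follow the pattern <release-name>-<chart-name>-<agent-name>. A small sketch that assembles these names (the release and agent names below are placeholder assumptions, not values from your cluster):

```shell
release="my-release"      # placeholder: your Helm release name
chart="connectware-agent" # the chart name
agent="welder-robots"     # placeholder: the affected agent's name

sts="${release}-${chart}-${agent}"
pvc="data-${sts}-0"

# Print the deletion commands you would then run against your cluster:
echo "kubectl -n <namespace> delete sts ${sts}"
echo "kubectl -n <namespace> delete pvc ${pvc}"
```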
When an agent pod is in the status "ImagePullBackOff", Kubernetes is unable to pull the container image required for this agent.
By default, Connectware agents use the official protocol-mapper image from Cybus' official container registry. This requires a valid secret of the type kubernetes.io/dockerconfigjson, but you have different ways of achieving this. Another option is to provide the images through a mirror, or even to use custom images.
This leaves a lot of options to control the image, for which you have to find the right combination for your use case. How to configure these parameters is discussed in these articles:
To see the effect of your settings, you need to inspect the complete image definition of your agent pods.
To do so, you can use this command:
kubectl -n <namespace> get pod -l app.kubernetes.io/component=protocol-mapper-agent -o custom-columns="NAME:metadata.name,IMAGE:spec.containers[0].image"
Example
In this example you can see that agent "painter-robots" is trying to use an invalid image name, which needs to be corrected using the image.name Helm value inside the agent's entry in the protocolMapperAgents section of the Helm values.
Prerequisites:
- kubectl installed (Install Tools).
- kubectl configured with the current context pointing to your target cluster (Configure Access to Multiple Clusters).
Add the Cybus connectware-helm repository to your local Helm installation to use the connectware-agent Helm chart to install Connectware agents in Kubernetes:
helm repo add cybus https://repository.cybus.io/repository/connectware-helm
To verify that the Helm chart is available you can execute a Helm search:
helm search repo connectware-agent
NAME | CHART VERSION | APP VERSION | DESCRIPTION |
---|---|---|---|
cybus/connectware-agent | 1.0.0 | 1.1.5 | Cybus Connectware standalone agents |
As with all Helm charts, the connectware-agent chart is configured using a YAML file. This file can have any name, but we will refer to it as the values.yaml file.
Create this file to start configuring your agent installation by using your preferred editor:
vi values.yaml
To quickly install a single agent, you only need to add your Connectware license key to your values.yaml file as the Helm value licenseKey:
licenseKey: <your-connectware-license-key>
You can now use this file to deploy your Connectware agent in a Kubernetes namespace of your choice:
helm upgrade -i connectware-agent cybus/connectware-agent -f values.yaml -n <namespace>
Output
Release "connectware-agent" does not exist. Installing it now.
NAME: connectware-agent
LAST DEPLOYED: Mon Mar 13 14:31:39 2023
NAMESPACE: connectware
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for using the Cybus Connectware agent Helm chart!
For additional information visit https://cybus.io/
Number of agents: 1
--------------------
- agent
If any of these agents are new, please remember to visit Connectware's client registry to set up the connection to Connectware.
Hint: If you have agents stuck in a status other than "Running", you need to delete the stuck pods before a pod with your new configuration will be created.
This will start a single Connectware agent named “agent”, which will connect to a Connectware installation deployed in the same namespace. Unlock the Client Registry in your Connectware admin UI to connect this agent. Refer to Client Registry — Connectware documentation to learn how to use the Client Registry to connect agents.
You can repeat the same command to apply any changes to your values.yaml file configuration in the future.
If you are not deploying the agent in the same Kubernetes namespace, or even inside the same Kubernetes cluster, you need to specify the hostname under which Connectware is reachable for this agent.
In the default configuration, the following network ports on Connectware must be reachable for the agent:
Specify the hostname of Connectware to which the agent connects by setting the Helm value connectwareHost inside the protocolMapperAgentDefaults context of your values.yaml file. For Connectware deployments in a different Kubernetes namespace, this is "connectware.<namespace>".
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
To connect to a Connectware installation that uses the separate control-plane-broker, set the Helm value controlPlaneBrokerEnabled to true inside the protocolMapperAgentDefaults section of your values.yaml file.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
controlPlaneBrokerEnabled: true
Note: This adds TCP/1884 to required network ports.
You can use the agent chart to install multiple Connectware agents. Every agent you configure needs to be named using the Helm value name in a collection entry inside the protocolMapperAgents context. This way, the default name "agent" is replaced by the name you give the agent.
Example
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
protocolMapperAgents:
- name: bender-robots
- name: welder-robots
This will deploy two Connectware agents, named "bender-robots" and "welder-robots", both of which will contact the Client Registry of Connectware inside the Kubernetes namespace "cybus", as described in Client Registry — Connectware documentation.
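Defaults from protocolMapperAgentDefaults and settings in an agent's own entry can be combined; values set per agent typically take precedence over the defaults. A sketch (the hostname and the per-agent persistence size are illustrative assumptions):

```yaml
licenseKey: <your-connectware-license-key>
protocolMapperAgentDefaults:
  connectwareHost: connectware.cybus # adjust to actual hostname of Connectware
protocolMapperAgents:
  - name: bender-robots
  - name: welder-robots
    persistence:
      size: 500Mi # this agent gets its own persistence size
```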
This quick start guide describes the steps to install the Cybus Connectware onto a Kubernetes cluster.
Please consult the article Installing Cybus Connectware for the basic requirements to run the software, like having access to the Cybus Portal to acquire a license key.
The following topics are covered by this article:
We assume that you are already familiar with the Cybus Portal and that you have obtained a license key or license file. Also see the prerequisites in the article Installing Cybus Connectware.
This guide does not introduce Kubernetes, Docker, containerization, or tooling knowledge. We expect the system administrator to know their respective Kubernetes environment, which, besides well-known standards, brings a certain specific custom complexity (e.g. the choice of load balancers, the management environment, storage classes, and the like). These choices are up to the customer's operations team and should not affect the reliability of Cybus Connectware deployed there, provided the requirements are met.
Besides a Kubernetes cluster the following tools and resources are required:
To get started with Cybus Connectware on a Kubernetes cluster, use the prepared Helm chart and follow these steps:
helm repo add cybus https://repository.cybus.io/repository/connectware-helm
helm repo update
helm search repo connectware [-l]
Create a file called values.yaml. This file will be used to configure your installation of Connectware. Initially fill this file with this YAML content:
global:
  licensekey: <YOUR-CONNECTWARE-LICENSE-KEY>
  setImmutableLabels: true
  broker:
    clusterSecret: <SOME-RANDOM-SECRET-STRING>
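The clusterSecret should be a random string that is unique to this installation. One way to generate one, assuming openssl is available on your machine:

```shell
# Generate a 32-character hex string to use as broker.clusterSecret
secret=$(openssl rand -hex 16)
echo "clusterSecret: ${secret}"
```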
If your cluster does not provide a default StorageClass that supports the ReadWriteOnce and ReadWriteMany access modes, please also set the value global.storage.storageClassName to a StorageClass that does:
global:
  storage:
    storageClassName: "san-storage" # example value
You can review all available configuration options by saving the chart's default values to a file called default-values.yaml and checking if you want to make further adjustments. It is, for example, possible that you need to adjust the resource request/limit values for smaller test clusters. In this case, copy and adjust the global.podResources section from default-values.yaml to values.yaml.
helm show values cybus/connectware > default-values.yaml
To validate your configuration, first run the installation as a dry run:
helm install <YOURDEPLOYMENTNAME> cybus/connectware -f ./values.yaml --dry-run --debug -n <YOURNAMESPACE> --create-namespace
Example
helm install connectware cybus/connectware -f ./values.yaml --dry-run --debug -ncybus --create-namespace
Then install Connectware:
helm install <YOURDEPLOYMENTNAME> cybus/connectware -f ./values.yaml -n <YOURNAMESPACE> --create-namespace
To apply later changes to your configuration, run a Helm upgrade:
helm upgrade <YOURDEPLOYMENTNAME> cybus/connectware -f ./values.yaml -n <YOURNAMESPACE>
When taking a look at the default-values.yaml file, you should check out these important values within the global section:
- licensekey: the license key of the Connectware installation. This needs to be a production license key. This parameter is mandatory unless you set licenseFile.
- licenseFile: used to activate Connectware in offline mode. The content of a license file downloaded from the Cybus Portal has to be set (this is a single line of a base64-encoded JSON object).
- image: specify the image source and version using the image section.
- broker: specifies MQTT broker related settings:
  - broker.clusterSecret: the authentication secret for the MQTT broker cluster. Note: The cluster secret for the broker is not a security feature. It is rather a cluster ID so that nodes do not connect to different clusters that might be running on the same network. Make sure that controlPlaneBroker.clusterSecret is different from broker.clusterSecret.
  - broker.replicaCount: the number of broker instances.
- controlPlaneBroker: specifies control plane MQTT broker related settings:
  - controlPlaneBroker.clusterSecret: the authentication secret for the control plane MQTT broker cluster. Note: The cluster secret for the broker is not a security feature. It is rather a cluster ID so that nodes do not connect to different clusters that might be running on the same network. Make sure that controlPlaneBroker.clusterSecret is different from broker.clusterSecret.
  - controlPlaneBroker.replicaCount: the number of broker instances.
  - controlPlaneBroker is optional. To activate it, set controlPlaneBroker.enabled: true. This creates a second broker cluster that handles only internal communication within Connectware.
- loadBalancer: allows pre-configuration for a specific load balancer.
- podResources: allows you to configure the CPU and memory resources per pod. By default, some starting-point values are set, but depending on the particular use case they need to be tuned in relation to the expected load in the system, or reduced for test setups.
- protocolMapperAgents: allows you to configure additional protocol-mapper instances in agent mode. See the documentation below for more details.
Helm allows setting values by both specifying a values file (using -f or --values) and the --set flag. When upgrading this chart to newer versions, you should use the same arguments for the helm upgrade command to avoid conflicting values being set for the chart. This is especially important for the value of global.broker.clusterSecret: if it is not set to the same value used during install or upgrade, the broker nodes will not form the cluster correctly.
For more information about value merging, see the respective Helm documentation.
After following all the steps above, Cybus Connectware is now installed. You can access the Admin UI by opening your browser and entering the Kubernetes application URL https://<external-ip>
with the initial login credentials:
Username: admin
Password: admin
To determine the external IP, the following kubectl command can be used:
kubectl get svc connectware --namespace=<YOURNAMESPACE> -o jsonpath={.status.loadBalancer.ingress}
Should this value be empty, your Kubernetes cluster's load balancer might need further configuration, which is beyond the scope of this document. However, you can take a first look at Connectware by port-forwarding to your local machine:
kubectl --namespace=<YOURNAMESPACE> port-forward svc/connectware 10443:443 1883:1883 8883:8883
You can now access the admin UI at: https://localhost:10443/
If you would like to learn more about how to use Connectware, check out our docs at https://docs.cybus.io/ or see more guides here.
The Kubernetes version of Cybus Connectware comes with a Helm Umbrella chart, describing the instrumentation of the Connectware images for deployment in a Kubernetes cluster.
It is publicly available in the Cybus Repository for download or direct use with Helm.
Cybus Connectware expects a regular Kubernetes cluster and was tested for Kubernetes 1.22 or higher.
This cluster needs to be able to provide load-balancer ingress functionality and persistent volumes in ReadWriteOnce
and ReadWriteMany
access modes provided by a default StorageClass unless you specify another StorageClass using the global.storage.storageClassName
Helm value.
For Kubernetes 1.25 and above, Connectware needs a privileged namespace or a namespace with PodSecurityAdmission labels for warn mode. In case of specific boundary conditions and requirements in customer clusters, a system specification should be shared to evaluate them for secure and stable Cybus Connectware operations.
Connectware specifies default limits for CPU and memory in its Helm values that need to be at least fulfilled by the Kubernetes cluster for production use. Variations need to be discussed with Cybus, depending on the specific demands and requirements in the customer environment, e.g., the size of the broker cluster for the expected workload with respect to the available CPU cores and memory.
Smaller resource values are often enough for test or POC environments and can be adjusted using the global.podResources
section of the Helm values.
In order to run Cybus Connectware in Kubernetes clusters, two new RBAC roles are deployed through the Helm chart and will provide Connectware with the following namespace permissions:
resource(/subresource)/action | permission |
---|---|
pods/list | list all containers; get status of all containers |
pods/get, pods/watch | inspect containers |
statefulsets/list | list all StatefulSets; get status of all StatefulSets |
statefulsets/get, statefulsets/watch | inspect StatefulSets |
resource(/subresource)/action | permission |
---|---|
pods/list | list all containers; get status of all containers |
pods/get, pods/watch | inspect containers |
pods/log/get, pods/log/watch | inspect containers; get a stream of container logs |
deployments/create | create Deployments |
deployments/delete | delete Deployments |
deployments/update, deployments/patch | restart containers (since we rescale deployments) |
The system administrator needs to be aware of certain characteristics of the Connectware deployment:
- Connectware can be activated in offline mode using a license file (see licenseFile above).
- For MetalLB load balancers, the address pool can be selected using global.loadBalancer.addressPoolName or by setting the metallb.universe.tf/address-pool annotation using the global.ingress.service.annotations Helm value.
The default-values.yaml file contains a protocolMapperAgents section representing a list of Connectware agents to deploy. The general configuration for these agents is the same as described in the Connectware documentation.
You can copy this section to your local values.yaml file to easily add agents to your Connectware installation.
The only required property of the list items is name; if only this property is specified, the chart assumes some defaults:
- The agent is named after the name property.
- connectwareHost defaults to connectware, which is the DNS name of Connectware.
- storageSize is set to 40 MB by default. The agents use some local storage, which needs to be configured based on each use case. If a larger number of services is going to be deployed, this value should be specified and set to a bigger value.
You can check out the comments of that section in default-values.yaml to see further configuration options.
You can find further information in the general Connectware Agent documentation.
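As a sketch, adding such an agent with a larger storage size to the values.yaml of the Connectware Helm chart could look like this (the agent name and size are placeholder assumptions; verify the exact nesting against your default-values.yaml):

```yaml
global:
  licensekey: <YOUR-CONNECTWARE-LICENSE-KEY>
  broker:
    clusterSecret: <SOME-RANDOM-SECRET-STRING>
  protocolMapperAgents:
    - name: welder-robots
      storageSize: 100Mi # assumption: a Kubernetes Quantity larger than the 40 MB default
```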