Working with Operators

What is an Operator?

An Operator is an application-specific controller that extends and uses the Kubernetes API to install, configure, and manage instances of an application.

Why is it needed and how is it useful?

Kubernetes standard controllers such as Deployments/ReplicaSets can manage stateless applications, such as HTTP servers like Nginx, in a limited and generic way (for example, recreating failed instances or scaling the number of instances up and down), but they can't really manage stateful applications because doing so usually requires application-specific knowledge. Operators implement such domain-specific knowledge in custom controllers, for example MongoDB or etcd controllers, allowing them to manage stateful applications such as databases or clustered applications that maintain state during their lifecycle. Because they are application-specific, they allow for automation of application-specific operations such as non-disruptive database upgrades, addition/removal of cluster member nodes, application-consistent backups, etc.

How are Operators built and distributed?

Operators are typically built with the operator-sdk. They usually consist of an operator image and Kubernetes deployment manifests for registering the custom resource types (CRDs) (see the following section for details about an operator's structure). They can be deployed using standard Kubernetes CLI tools such as kubectl or oc, or via the OperatorHub available in OpenShift's web console.

How does it differ from existing packaging/deployment tools such as Helm?

Helm is primarily a tool for packaging, managing, and installing a set of related Kubernetes resources, which is only a small part of an Operator's capabilities. It is rather complementary to the Operator Framework; for example, the Operator Framework allows re-using existing Helm charts to implement the provisioning and initial configuration of the managed application.

Structure of an Operator

Note: For simplicity and brevity, the code examples are based on creating an Operator for managing instances of an Nginx application. This might not be a realistic use of an operator, since Nginx is a stateless application; however, the concepts illustrated apply equally to stateful applications.

At a minimum, a functional Kubernetes operator consists of the following:
1. Custom Resource Definition(s) (CRD) – provided as yaml file(s) defining the new Kubernetes resource type(s) to be managed by the operator, for example:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: nginxes.example.com
spec:
  group: example.com
  names:
    kind: Nginx
    listKind: NginxList
    plural: nginxes
    singular: nginx
  scope: Namespaced
  subresources:
    status: {}
  validation:
    openAPIV3Schema:
      type: object
      x-kubernetes-preserve-unknown-fields: true
  versions:
    - name: v1alpha1
      served: true
      storage: true
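With this CRD registered, users can create instances of the new Nginx type. A minimal custom resource might look like the following; the spec content (here a replicaCount field) is hypothetical and depends on what the operator's controller or Helm chart actually supports:
apiVersion: example.com/v1alpha1
kind: Nginx
metadata:
  name: example-nginx
spec:
  # Hypothetical field; valid spec fields depend on the controller
  replicaCount: 2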
2. Controller – the controller can be implemented from scratch using a programming language such as Go (among others), or it can be built from an existing Helm chart or even Ansible playbooks. Once implemented, the controller is built and packaged into a container image. Typically, the container image is hosted in a container registry that is referenced in the operator deployment manifest as part of the pod template spec, for example:
containers:
  - name: nginx-operator
    # Replace this with the built image name
    image: quay.io/adamiak/nginx-operator:v0.0.1
    imagePullPolicy: Always
This container image needs to be "pullable" from the cluster during the installation of the operator. If it is hosted in a private registry requiring authentication, registry credentials must be provided (typically via a Kubernetes Secret). Alternatively, the controller image can be downloaded from an alternate location and pushed either to the user's own registry or to the cluster's local registry.
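For example, such a pull secret could be created with oc and then referenced from the deployment's imagePullSecrets; the registry and credentials below are placeholders, and the secret name matches the imagePullSecrets entry in the deployment manifest shown in the next section:
# Create a docker-registry secret holding the private registry credentials
oc create secret docker-registry registry-pull-secret \
  --docker-server=quay.io \
  --docker-username=<user> \
  --docker-password=<password>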
3. Operator deployment manifest – provided as a standard Kubernetes yaml Deployment specification. This deployment yaml references the container image implementing the controller described above, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-operator
  template:
    metadata:
      labels:
        name: nginx-operator
    spec:
      serviceAccountName: nginx-operator
      containers:
        - name: nginx-operator
          # Replace this with the built image name
          image: quay.io/adamiak/nginx-operator:v0.0.1
          imagePullPolicy: Always
          env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: "nginx-operator"
      imagePullSecrets:
        - name: registry-pull-secret
Operator deployment files are usually accompanied by yaml definitions for additional Kubernetes resources required for the operator to run, such as service accounts, roles, and role bindings.
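As a minimal sketch of these supporting resources (the Role's rules are placeholders; a real operator should grant only the API groups, resources, and verbs its controller actually needs):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-operator
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-operator
rules:
  # Placeholder rules covering the operand and the custom resource
  - apiGroups: ["", "apps", "example.com"]
    resources: ["pods", "services", "deployments", "nginxes"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-operator
subjects:
  - kind: ServiceAccount
    name: nginx-operator
roleRef:
  kind: Role
  name: nginx-operator
  apiGroup: rbac.authorization.k8s.io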
These three components, i.e. the CRD(s), the controller's container image, and the operator's Kubernetes deployment yaml(s), are sufficient to manually deploy an operator into an OpenShift cluster. (Note: installation, updating, and overall management of operators in the cluster can be greatly simplified by taking advantage of the Operator Lifecycle Manager (OLM), which is available out of the box in OpenShift clusters v4.3+. However, to take advantage of OLM, the operator needs to provide additional metadata, which is described in a following section.)
Manual deployment of an operator in the cluster
1. Register the Custom Resource Definition(s) for the custom resources to be managed by the operator:
oc create -f <your-crd-yaml(s)>
2. Deploy the operator manifest along with any other cluster resources required by the operator, such as the service account, role, and role binding:
oc create -f <service-account.yaml>
oc create -f <role.yaml>
oc create -f <role-binding.yaml>
oc create -f <operator.yaml>
As described above, operator.yaml references the controller image implementing the main functionality of the operator. Upon deployment of operator.yaml, a new operator pod will be created.
3. With the above completed, we should have a functional operator running in the cluster, "watching" for events related to the custom resources it manages, i.e. instances of the custom resource definition(s) installed with this operator. A Helm-based operator uses a watches.yaml file (a sample is sketched below) to:
  • determine which custom resources it needs to watch for
  • specify which Helm chart to deploy to fulfill provisioning and upgrade requests
  • globally override Helm variables defined in the chart's values.yaml file; this can be used, for example, to specify an alternative location for the managed application's images, which may be required for off-line installations
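A minimal watches.yaml for the Nginx example might look like the following; the group/version/kind match the CRD above, while the chart path and the override value are illustrative placeholders:
- group: example.com
  version: v1alpha1
  kind: Nginx
  # Path to the Helm chart bundled into the operator image
  chart: helm-charts/nginx
  # Optional global overrides of the chart's values.yaml, e.g. to point
  # at a mirrored image location for off-line installations
  overrideValues:
    image.repository: registry.example.com/nginx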
Deployment of an operator via OperatorHub in OpenShift web console
Operators available in Red Hat's Operator Marketplace can also be deployed via the OperatorHub UI available in the OpenShift web console. OperatorHub provides a convenient UI allowing users to:
  • search for Red Hat certified and community-provided operators from Red Hat and ISV partners published in the Red Hat Marketplace (Note: OpenShift cluster administrators can limit, or remove altogether, access to the default operator registries, so operator availability in OperatorHub can vary)
  • install the operator, which also registers any Custom Resource Definition(s) managed by the operator
  • specify the operator update channel (stable, beta, etc.) and upgrade strategy (manual, automatic)
  • configure, provision and manage instances of the applications managed by the operator
  • delete the managed instances
  • delete the operator itself and de-register all CRDs owned by the operator
How to deploy operators in restricted (aka "air-gapped") environments
One challenge with Operators is deploying them in restricted environments, where the OpenShift cluster may run with no connectivity to the public internet, so the registries hosting the operators' images and the images of their operands are not directly accessible from the cluster during installation or upgrades.
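A common approach, sketched here assuming a cluster-reachable mirror registry at registry.example.com, is to mirror the operator image (and the images of its operands) into that registry, then point the operator's deployment manifest and any watches.yaml overrides at the mirrored locations:
# Mirror the controller image to a registry reachable from the cluster
oc image mirror quay.io/adamiak/nginx-operator:v0.0.1 \
  registry.example.com/adamiak/nginx-operator:v0.0.1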
How to create an Operator with Operator SDK
One of the tools included in the Operator Framework is the Operator SDK, a development kit that provides high-level APIs and project scaffolding for building operators.
The Operator SDK supports building three main types of operators (a sample of the generated project layout follows the list):
  • Helm-based operators:
    operator-sdk new nginx-operator --api-version=example.com/v1alpha1 --kind=Nginx --type=helm
  • Ansible-based operators:
    operator-sdk new memcached-operator --api-version=cache.example.com/v1alpha1 --kind=Memcached --type=ansible
  • Golang-based operators:
    operator-sdk new memcached-operator --repo=github.com/example-inc/memcached-operator
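For the Helm-based command above, the generated scaffolding looks roughly like the following (the exact layout varies between operator-sdk versions):
nginx-operator/
├── build/Dockerfile        # builds the operator image
├── deploy/
│   ├── crds/               # CRD and example custom resource yaml
│   ├── operator.yaml       # operator Deployment manifest
│   ├── role.yaml
│   ├── role_binding.yaml
│   └── service_account.yaml
├── helm-charts/nginx/      # the chart the operator deploys
└── watches.yaml            # maps the Nginx kind to the chart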
The Helm- and Ansible-based options allow you to quickly create operators by reusing existing Helm charts or Ansible playbooks. While a Helm-based operator can reuse an existing Helm chart as-is (in many cases without any modifications), it is limited in functionality by the inherent limitations of Helm, which does not provide enough flexibility to implement application-specific logic in charts. Therefore, Helm-based operators mainly support provisioning, upgrades, and simple configuration.
Ansible- and Golang-based operators allow implementing all levels of operator functionality.