Service Integration

This section provides instructions on how to implement many of the integration points.

Listed on the CP4D Community Page

For the product/service to be listed on the IBM Cloud Pak for Data Community page, an add-on tile needs to be created for the "Partners Catalog." Work with the Partner Onboarding ecosystem team to define the information for your Partner Tile following a template provided. Once the product’s add-on tile is created and its content verified, it can be "graduated" for inclusion on the CP4D Community page.

CPD Punch-Out Experience

Punch-out experience refers to the experience the user has when the launching product's user interface interacts with the service being offered (assuming such a UI is offered and available). For the standalone and pop-out integration levels, the UI is launched in a separate tab rather than being embedded into CPD's own UI. Once the service offering is installed, the "Deploy" link in the add-on tile should be replaced by the "Open" link to take the user to the partner's product.
Instructions for replacing the "Deploy" link with the "Open" link during the installation/deployment are provided in the Config Map section of this document.

Security Hardened for RHOS Cluster Deployment

Following are the security hardening requirements that need to be met for RHOS cluster deployment.
1. No cluster-admin privileges - it should be assumed that there will be no cluster-admin privileges during deployment and installation. Any actions requiring elevated cluster-admin privileges, such as listing and creating namespaces/projects, creating service accounts, and binding security policies, must be explicitly called out and documented in the deployment instructions as required prerequisites, so that they can be assessed and performed by the cluster administrator upon customer approval.
2. No direct (i.e. SSH) access to cluster nodes
No SSH access for installation; deployment and installation steps are expected to be implemented via Helm or Operator utilities.
3. No outbound connectivity - there should be no assumption that outbound connectivity via the public network is available.
It should be taken into account that the solution may be deployed in an "air-gapped" environment without direct outbound connectivity and it should be designed to work in such an environment. For the deployment and installation, it should be possible for the project/cluster admin to download any required images and pre-load them into local registry.
4. Use HTTPS/TLS for inter (micro)services communications
Communication between pods/services needs to use the secured HTTPS/TLS protocol.
5. Run within the default "restricted" security context constraint (SCC).
Any exception to the above needs to be explicitly called out for approval by the customer and implemented by the cluster admin.
7. Run as non-root - specify an explicit USER directive (with a non-root user) when building Docker images to ensure container processes are executed under a non-privileged user account. For example:
FROM alpine:latest
RUN adduser -D myuser
USER myuser
ENTRYPOINT ["..."]
CMD ["..."]
7. No host access or host mounts
8. Deploy within a specific project (namespace). No access to other namespaces or ability to create a new namespace
9. Use OpenShift routes for exposing services.
oc expose service <service-name> [--name=<route-name>]
By default the above makes the service externally available at https://<route-name>-<project-name>.<cluster-name>, where <cluster-name> is the public hostname of the cluster.
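Note that oc expose by itself creates a route without TLS termination. To serve the route over HTTPS (in line with requirement 4 above), an edge-terminated route can be created instead; a minimal sketch, with placeholder names:
oc create route edge <route-name> --service=<service-name>
The router then terminates TLS and forwards traffic to the service inside the cluster.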

RHOS Image Certification (For containerized solutions)

This certification focuses on the container images. Certification of container images is completed on the Red Hat portal. It validates that images are built using a supported RHEL base image that has not been manipulated, and scans them for vulnerabilities.
To complete Red Hat's certification you will need to register on the Red Hat portal and follow the official process: https://connect.redhat.com/explore/red-hat-container-certification
Here are some of the requirements that need to be fulfilled to complete the certification:
  • Vulnerability scans - deployed images need to be free of any open vulnerabilities
  • Use of Red Hat approved base images
  • Use Red Hat's "Universal Base Image" (UBI) as the base layer of the container
For a description of, and details on, the available UBI images see: https://catalog.redhat.com/software/container-stacks/detail/5ec53f50ef29fd35586d9a56
Make sure Docker is installed on the system used to build the images. Further information on building the images for certification is available through the Red Hat certification process referenced above.
  • No modification to image's base layers
  • Use Docker labels to properly describe container images:
name: Name of the image
vendor: Company name
version: Version of the image
release: Number identifying specific build for this image
summary: A short overview of this image
description: A long description of this image
  • Include product license(s) in the /license sub directory
  • Ensure container process runs as non-root
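For illustration, a minimal Dockerfile sketch that pulls several of the requirements above together (the base image tag, label values, paths, and user ID are placeholders, not prescribed values):
# Build on a Red Hat UBI base image (per the requirements above)
FROM registry.access.redhat.com/ubi8/ubi:latest
# Required descriptive labels
LABEL name="my-image" \
      vendor="My Company" \
      version="1.0.0" \
      release="1" \
      summary="Short overview of this image" \
      description="Longer description of this image"
# Product license(s)
COPY license/ /license
# Application binary (placeholder path)
COPY my-app /usr/local/bin/my-app
# Run the container process as a non-root user (numeric UID)
USER 1001
ENTRYPOINT ["/usr/local/bin/my-app"]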

Cloud Pak (Certification) requirements

  1. Security
    1. Use of HTTPS/TLS for inter (micro)services communication
    2. Ensure container processes run as non-root
    3. Use Service Accounts & RBAC
  2. Best Practices
    1. Implement a readiness probe (see the sketch after this list)
    2. Implement a liveness probe
  3. Test/verify helm charts with the helm linter
  4. Specify resource limits
  5. Metering (see the Metering section below)
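A minimal sketch of readiness/liveness probes and resource limits in a Deployment's container spec (the endpoint paths, port, and values are placeholders):
containers:
- name: my-app
  image: <image-registry>/<repository>:<tag>
  readinessProbe:
    httpGet:
      path: /readyz
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 10
  livenessProbe:
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 20
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: "1"
      memory: 512Mi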

Tested on RHOS Cluster with Recent CPD Release

The Partner's development team will be responsible for testing and for verifying that the offered solution works on the most recent release of the CPD platform.
As documented under "Infrastructure" in the "Partner Onboarding Process" section of this guide, you may request short-term access to an IBM-supplied cluster to test and verify the add-on tile as well as the deployment/functionality of the actual application.

Multi namespace tenancy model

This is also known as the "Dedicated Namespace Tenancy" model, where multiple instances of the service are provisioned, each in its own namespace/project.
Installation/deployment of the service should allow for creating multiple instances of the service, each running in its own namespace/project. This could be achieved, for example, by performing a separate installation in each project (see the sketch below).
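For example, if the service is installed via a Helm chart, two independent instances might be created as follows (release, chart, and project names are placeholders, and the projects are assumed to already exist):
# Instance 1 in project team-a
helm install my-service-a ./my-service-chart -n team-a
# Instance 2 in project team-b
helm install my-service-b ./my-service-chart -n team-b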

Serviceability (Logging & Diagnostics)

Ensure all of your logs are sent to stdout/stderr.

Support installation in “air-gapped” environments

The service/product being offered must be installable in restricted (also known as "air-gapped") environments that may have no outbound connectivity to the public internet. In such environments, pulling images from public registries such as Red Hat's container registries or other public registries is not possible. Therefore, your deployment documentation needs to provide sufficient information on how to accomplish installation in such restricted environments.

This installation requirement for restricted/air-gapped environments applies to all types of deployments, regardless of whether your deployment is manual via Kubernetes manifests, Helm-based, or performed via an Operator. In addition to supporting installation in the restricted environment, the deployed application must also be able to function in such an environment without any dependency on off-cluster resources, such as call-backs to license/entitlement servers or API calls to external services.

If you use existing operators already available via OperatorHub/Red Hat Marketplace, you will need to ensure they can be deployed and used in restricted/air-gapped environments. Note that to support deployments of operators in restricted environments it is necessary to support both the off-line deployment of the operator itself and the off-line deployment of the operator's operands, i.e. any applications managed by the operator.
The typical way to accomplish and validate an "air-gapped" installation involves downloading the required artifacts, such as application and operator container images, from the public registries, pushing them into an alternate container registry of the customer's choosing, and providing a way to specify those alternate image locations during deployment/installation. To simplify the documentation, the deployment instructions and examples can be based on OpenShift's internal container registry, but it should be assumed that the customer may use another compatible registry, so the installation method should provide an easy way of overriding registry references in the deployment manifests during the installation.

For example, the host name used to log in to OpenShift's internal registry from outside of the cluster can be obtained from its default route and used with the podman command:
HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false $HOST

Here is a simple flow illustrating the concept of supporting installation in an "air-gapped" environment:
  • download any required artifacts such as application (and if applicable operator) container images to a local host
  • tag the images with the cluster’s internal registry url, for example:
podman tag my-app default-route-openshift-image-registry.apps.<cluster-name>.<domain-name>.com/myproject/my-app
podman push default-route-openshift-image-registry.apps.<cluster-name>.<domain-name>.com/myproject/my-app

After the successful push, the image is available in the cluster's internal registry and can be referenced from within the cluster by the cluster-internal registry URL, for example:
image-registry.openshift-image-registry.svc:5000/myproject/my-app

Once the images are pushed to the customer's registry or the cluster's internal registry, the deployment documentation should provide instructions on how to modify/override references to the image(s) in the deployment manifests to ensure images are pulled during the deployment from those registries instead of the public ones. Following are some best-practice recommendations on how this could be accomplished.

Helm-based deployments: Helm deployment manifests should only reference the image location via a parameter defined in values.yaml, preferably a single parameter that defines the image's registry, repository and tag. For example:
deployment.yaml
containers:
- name: {{ template "etcd.fullname" . }}
  image: "{{ .Values.image.image }}"
values.yaml
image:
  image: <image-registry>/<repository>:<tag>
With the above structure, the user can override the default image location either by modifying the parameter in values.yaml or, more conveniently, by using a command-line option supplied when invoking the helm install command.
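For example, a sketch of overriding the image at install time, assuming the values.yaml structure above and placeholder release/chart names:
helm install my-release ./my-chart \
  --set image.image=image-registry.openshift-image-registry.svc:5000/myproject/my-app:1.0.0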
Operator deployments: To support deployments of operators in restricted environments it is necessary to support both the off-line deployment of the operator itself and the off-line deployment of the operator's operands. The off-line deployment of the operator itself can be accomplished similarly to the Helm-based deployment described above, that is, by allowing the cluster administrator to override the location of the operator's image during the operator installation, either using a command-line parameter or by modifying the operator's deployment manifests. However, allowing the operator to pull the operands' images from an alternate container registry requires a couple of additional steps that differ slightly based on the operator's type (Helm-based vs. Ansible-based vs. Go-based). Detailed instructions on enabling each type of operator for off-line installation can be found here: https://redhat-connect.gitbook.io/certified-operator-guide/appendix/offline-enabled-operators
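One common pattern, shown here only as an illustrative sketch and assuming the operator is written to read operand image references from environment variables (as is typical for offline-enabled operators), is to expose those references on the operator's own Deployment so the cluster administrator can point them at a mirrored registry; the operator, operand, and registry names below are placeholders:
# Fragment of the operator's Deployment manifest
    spec:
      containers:
      - name: my-operator
        image: <mirror-registry>/myproject/my-operator:1.0.0
        env:
        # Operand image reference read by the operator at runtime
        - name: RELATED_IMAGE_MY_APP
          value: <mirror-registry>/myproject/my-app:1.0.0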

Verify product works with CPD supported storage classes

Cloud Pak for Data supports and is optimized for the following storage providers:
  • Network file system (NFS) (default)
  • Portworx (A free entitled Portworx instance is available with Cloud Pak for Data)
Reference: https://www.ibm.com/support/producthub/icpdata/docs/content/SSQNUZ_current/cpd/plan/storage_considerations.html
CP4D add-ons that utilize cluster-provided storage, either for their own deployment and/or for their user data storage requirements, need to be validated as supporting and working with, at minimum, the two above-mentioned supported storage options.
If you are using your own cluster, you will need to ensure these two storage options are available on it. Please refer to the Reference document for instructions on how to set up these storage classes. Each add-on will need to be verified as working and compatible with both of the supported storage options. Each add-on will need to:
  • be able to be deployed/installed using both of these options
  • be verified using both of the supported options if your product requires cluster storage for its user data
The default storage class is managed-nfs-storage. To use and validate against the other storage option you will need to explicitly use that option in your deployment manifests or override it during the deployment/installation.
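For illustration, a minimal PersistentVolumeClaim sketch (the claim name, access mode, and size are placeholders); storageClassName would be switched, or overridden at install time, to validate against each supported storage class:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: managed-nfs-storage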

Support for FIPS cryptography

CPD add-ons need to be able to run in a FIPS-enabled environment, as FIPS is critical for federal accounts and the "government" cloud. Running CPD in a FIPS-enabled cluster is a supported environment, and CPD add-ons need to conform to that requirement.
Reference: https://docs.openshift.com/container-platform/4.6/installing/installing-fips.html
If you are using your own cluster, you will need to ensure it conforms to the FIPS requirements as described in the Reference document. Note that in general it may not be possible to "convert" a non-FIPS cluster to a FIPS-enabled cluster; clusters need to be built from the beginning with FIPS enabled to ensure only FIPS-validated packages are used during the installation.
Product installations consisting of containerized deployments need to have their deployments and operations validated against CPD instances running on FIPS-enabled OpenShift clusters. In addition, products using cryptographic libraries will need to ensure any libraries used are FIPS compliant.
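As described in the OpenShift documentation referenced above, FIPS mode is selected at cluster installation time; a minimal install-config.yaml excerpt (other fields omitted, names are placeholders):
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
# Enable FIPS mode for the cluster at install time
fips: true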

Integrated for Metering

The Metering and License services are used to help track and manage software usage. You can use the Metering service to view and download detailed usage metrics for the applications and the cluster.

Metering and license tracking are enabled by adding annotations to any Kubernetes resource that deploys a container, for example Deployments, StatefulSets, Jobs, Pods, etc. The license tracking annotations build on the existing metering annotations and allow for tracking of licenses at a single-product level or as a Cloud Pak. Products that are part of a Cloud Pak must report licensing metrics as a single Cloud Pak entity. Products that track licenses via the IBM Common Services License Service do so via chart annotations. These annotations include the existing metering annotation names along with new licensing annotation names.

When to include metering annotations: base metering annotations are a requirement for all certified containers.

When to include licensing annotations: charts that track licenses using one of the core IBM metrics, such as processor value units (PVU), resource value units (RVU MAPC), virtual processor cores (VPC), or install-based metrics, must include these annotations to track license metric utilization with the License Service. If your product packages or requires IBM Common Services, then it is highly recommended that you include the License Service annotations. Cloud Paks and products that are part of a Cloud Pak are required to include license tracking annotations in their product.

Metering and Licensing annotations

The following annotations are used by the License Service to track license usage. The first three are the same annotations that are specified for metering annotations and will likely already be a part of your chart.
| Annotation name | Required | Meaning and comments | Software resource metering use case | License tracking use case |
| --- | --- | --- | --- | --- |
| productName | Yes | Name of the product as listed in the readiness package in the ReadyTool | x | x |
| productID | Yes | ISVs that want to use Metering and Licensing are not registered with the IBM Software Catalog and must do what they can to ensure that their GUID is unique. For these products, we recommend the following strategy: use four fields to define your product ID, separated with an underscore (_) character: productName_productVersion_licenseType_uniqueKey. For example: ISVProductName_1009_perpetual_00000. These rules are designed to accomplish two goals: ensure that each product on the platform has a unique productID label, and ensure that self-defined productIDs have an obvious visual meaning. | x | x |
| productVersion | Yes | Version of the product | x | |
| productMetric | Yes | The license metric of the product. Allowed value FREE if the software is free. Otherwise, see the List of Charge Metric Codes wiki. | | x |
| productChargedContainers | No | Semicolon-separated list of containers to be tracked for license usage. Defaults to none if not specified. Use the value All for all containers. Values are case-insensitive. | | x |
| cloudpakName | Yes - for Cloud Paks | Cloud Pak name | x | x |
| cloudpakId | Yes - for Cloud Paks | Use "eb9998dcc5d24e3eb5b6fb488f750fe2" for Cloud Pak for Data | x | x |
| cloudpakVersion | Yes - for Cloud Paks | Cloud Pak version | x | |
| productCloudpakRatio | No | For Cloud Paks, the entitlement conversion ratio. See the "Entitlement Conversion Details" section of the License Information document for your Cloud Pak (linked below). Defaults to 1:1 if not specified. | | x |
Add licensing annotations to any chart/operator/etc. that creates a container (Deployment, StatefulSet, Pod, Job, etc.). Metering annotation values should be specified in the spec.template.metadata.annotations section of the Kubernetes resource. For example:
spec:
  template:
    metadata:
      annotations:
        cloudpakId: "eb9998dcc5d24e3eb5b6fb488f750fe2"
        cloudpakName: "IBM Cloud Pak for Data"
        cloudpakVersion: "3.0"
        productID: "<product-id>"
        productName: "<Name of the Addon/Service>"
        productVersion: "<Product version>"
        productMetric: "VPC"
        productChargedContainers: "All"
All pods are required to have an app label.
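For example, a sketch of the app label in a pod template (the label value is a placeholder):
spec:
  template:
    metadata:
      labels:
        app: my-application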
Once the annotations and labels are defined as instructed above a background process should automatically capture usage information for all the pods belonging to your deployment and group them under the "My Product Name" category in the "Administer->Manage platform" view.
Drilling down on "My Product Name -> my-application" should reveal usage stats for all pods associated with "My Product Name" and "my-application". Each listed pod in the view should have a context menu (revealed by hovering over the end of each entry, on the far right side under the "..." icon) with the following actions:
  • Details - see config details
  • Restart - restart the pod
  • View Logs - see logs for the associated pod
IMPORTANT NOTE: Metrics collection showing actual usage vs. requested resources is not enabled by default in CPD. It requires the "Kubernetes Metrics Server" to be installed in the cluster, as documented in CPD's prerequisite documentation (https://www.ibm.com/support/producthub/icpdata/docs/content/SSQNUZ_latest/sys-reqs/software-reqs.html).

Partner Solution Minimum Required Resources Defined

  1. All content charts should specify a resource Request and Limit for memory, ideally with Limit == Request for memory ("Guaranteed" mode).
  2. It is recommended that all content charts also specify a CPU Request; this assists Kubernetes in scheduling the pod to a proper node. If a CPU Limit is not specified, Kubernetes will allow CPU usage to go above the requested amount when the product is under load and there are CPU resources available on the node.
    NOTE: Kubernetes will scale back down to the requested value if other pods need to be scheduled.
  3. Resource limits and requests for CPU and memory, when exposed, must be set to reasonable defaults that allow the workload to run while not over-requesting resources.
  4. Resource Requests/Limits SHOULD be customizable via values.yaml (see the sketch after this list). This allows the customer to allocate more memory/CPU to a pod in a controllable way, but will not make much sense if there are throttles lower down (e.g. maximum Java stack/heap settings).
  5. Each onboarded product/service needs to provide deployment documentation with install and setup instructions. The required content to be included in this documentation is described in the Deployment documentation section.
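A minimal sketch of exposing resource Requests/Limits via values.yaml, assuming a Helm-based chart (parameter names and default values are placeholders):
values.yaml
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 512Mi
deployment.yaml (inside the container spec; indentation adjusted to match the template's nesting)
resources:
{{ toYaml .Values.resources | indent 2 }}
With this structure the customer can override individual values at install time, for example: helm install my-release ./my-chart --set resources.limits.memory=1Gi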

IBM Design/Content Reviews with Partners

The CPD Offering Team will schedule and conduct design and content review meetings with your team.