global:
  imageRegistryUsername: {SERVICE_ACCOUNT_USERNAME}
  imageRegistryPassword: {SERVICE_ACCOUNT_PASSWORD}
  sharedStorage:
    bucketName:
    s3Url: https://s3.{S3_REGION}.amazonaws.com
    s3AccessKey:
    s3SecretKey:
sde:
  jwtSecret:
  secretKey:
  superuserEmail:
  superuserPassword:
  defaultFromEmail: "{SENDER_NAME} <{FROM_EMAIL}>"
sc-mail:
  mailFrom: {FROM_EMAIL}
postgresql:
  auth:
    username: sde
    password:
sc-datastore:
  clientPassword:
rabbitmq:
  auth:
    erlangCookie:
    password:
Install SD Elements
1. Prerequisites
Review the SD Elements deployment requirements before proceeding further.
- Access to the Security Compass artifact store. Instructions for requesting a service account can be found here.
- An existing Kubernetes cluster that aligns with a compatible Kubernetes platform and version (a quick check is sketched below).
- Access to an object storage solution.
- Cluster DNS service.
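As a quick sanity check of the cluster and DNS prerequisites, the commands below can confirm API access and the presence of a cluster DNS service. They assume kubectl is already configured against the target cluster and that DNS runs as CoreDNS labelled k8s-app=kube-dns in the kube-system namespace, which is typical but not universal:
$ kubectl cluster-info
$ kubectl get deployment,svc -n kube-system -l k8s-app=kube-dns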
2. Create a custom values file
Create a custom values file (e.g., values.custom.yaml). A sample file has been provided below, which includes required or otherwise important values for a standard installation of SD Elements.
Additional configuration options and values can be found in the Configuration section and on the Chart Values reference page.
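For example, a populated custom values file might look like the following. The angle-bracket placeholders and the your-* strings are stand-ins only and must be replaced with your own storage details and credentials:
global:
  imageRegistryUsername: <SERVICE_USERNAME>
  imageRegistryPassword: <SERVICE_PASSWORD>
  sharedStorage:
    bucketName: <BUCKET_NAME>
    s3Url: https://s3.<S3_REGION>.amazonaws.com
    s3AccessKey: <S3_ACCESS_KEY>
    s3SecretKey: <S3_SECRET_KEY>
sde:
  jwtSecret: your-jwt-secret
  secretKey: your-secret-key
  superuserPassword: your-superuser-password
  systemAdminEmail: your-sysadmin-email@yourdomain
postgresql:
  auth:
    username: sde
    password: your-database-password
sc-datastore:
  clientPassword: your-datastore-password
rabbitmq:
  auth:
    erlangCookie: your-erlang-cookie
    password: your-broker-password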
Keys, secrets, and passwords can be random, unique strings. Manage these credentials carefully, as they encrypt or otherwise protect the data stored in SD Elements. Changing them after installation could have negative consequences, including SD Elements becoming unable to decrypt its data. For the list of values that should not be changed, see Immutable Values.
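As an illustration only, random values suitable for jwtSecret, secretKey, and the various passwords can be generated on the command line; any tool that produces strong random strings works equally well:
$ openssl rand -hex 32     # e.g. for sde.jwtSecret or sde.secretKey
$ openssl rand -base64 24  # e.g. for database, datastore, and broker passwords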
2.1. (Optional) Installing in OpenShift
This section is only applicable for installations of SD Elements 2023.2 or later deployed in an OpenShift cluster.
The SD Elements Helm chart disables incompatible configurations (e.g., PodSecurityContext) when OpenShift compatibility has been enabled.
2.1.1. Prerequisites
- The OpenShift Container Platform Ingress Operator is enabled in the target OpenShift cluster.
2.1.2. Required Values for OpenShift
SD Elements requires the values below to run in OpenShift, in addition to the base values provided in Create a custom values file. In the event of conflicts, use the OpenShift-specific values (below) instead of the base values (above).
global:
  openshift:
    enabled: true
web:
  ingressClassName: openshift-default
rabbitmq:
  podSecurityContext:
    enabled: false
  containerSecurityContext:
    enabled: false
postgresql:
  primary:
    containerSecurityContext:
      enabled: false
      runAsUser: null
      allowPrivilegeEscalation: false
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
      capabilities:
        drop:
          - 'ALL'
    podSecurityContext:
      enabled: false
      runAsUser: null
      runAsGroup: null
      fsGroup: null
      seccompProfile:
        type: RuntimeDefault
  volumePermissions:
    enabled: false
  shmVolume:
    enabled: false
We recommend using the OpenShift Container Platform Ingress Operator. The default IngressClassName is openshift-default, but this value may differ in your environment.
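To confirm which ingress class name applies in your cluster, list the available classes with a standard Kubernetes query (this is a generic check, not an SD Elements-specific command):
$ kubectl get ingressclass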
2.2. (Optional) Using Minio Tenant in OpenShift
When using Minio Tenant with OpenShift, securityContext, containerSecurityContext, and volumeClaimTemplate for each pool have to be updated as illustrated in the following example:
minio-tenant:
  enabled: true
  tenant:
    pools:
      - servers: 1
        name: pool-0
        volumesPerServer: 1
        size: 100Gi
        securityContext:
          runAsUser: null
          runAsGroup: null
          fsGroup: null
        containerSecurityContext:
          runAsUser: null
          runAsGroup: null
        volumeClaimTemplate:
          metadata:
            name: data
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
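After installing with these values, a quick way to confirm that the tenant pool started with usable storage is to list its pods and persistent volume claims; {NAMESPACE} follows the placeholder convention used later in this guide:
$ kubectl get pods,pvc -n {NAMESPACE}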
3. Install SD Elements
This section will use {PLACEHOLDER} to indicate places where information will need to be provided. A list of these placeholders is below.
Placeholder | Description
{LOCAL_REPO_NAME} | Determines the name under which the Security Compass artifact store will be referenced on your machine.
{RELEASE_NAME} | The Helm release name for the SD Elements instance. Primarily, the name is used as a prefix for Kubernetes resources within the cluster.
{SDE_VERSION} | The SD Elements version to use when installing the Helm chart. The latest SD Elements version is always recommended for new instances.
{NAMESPACE} | The Kubernetes namespace where SD Elements will be installed. A unique and descriptive namespace name is recommended.
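One optional convenience is to record your choices as shell variables before you begin; the values below are purely illustrative, and each {PLACEHOLDER} in the later commands still needs to be substituted (for example, with the corresponding $VARIABLE) when you run them:
$ export LOCAL_REPO_NAME=sde-repo
$ export RELEASE_NAME=sde
$ export NAMESPACE=sdelements
$ export SDE_VERSION=2023.2.0   # example only; use the version you intend to install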
3.1. Add the SD Elements Helm Repository
$ helm repo add {LOCAL_REPO_NAME} https://repository.securitycompass.com/artifactory/sde-helm-prod \
--username {SERVICE_USERNAME} \
--password {SERVICE_PASSWORD}
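After adding the repository, it is worth refreshing the local index and confirming that the sde chart is visible; helm search repo also lists the chart versions that can be passed to the --version flag mentioned later (shown here with the same placeholder convention):
$ helm repo update
$ helm search repo {LOCAL_REPO_NAME}/sde --versions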
3.2. Confirm the functionality of the Kubernetes cluster
Verify all nodes are in a Ready status using kubectl get nodes.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 23h v1.21.0
master2 Ready control-plane,master 23h v1.21.0
master3 Ready control-plane,master 23h v1.21.0
worker1 Ready <none> 23h v1.21.0
worker2 Ready <none> 23h v1.21.0
worker3 Ready <none> 23h v1.21.0
3.3. Install the SD Elements Helm chart
See the upstream Helm docs for usage instructions for the helm install command.
$ helm install {RELEASE_NAME} {LOCAL_REPO_NAME}/sde --namespace {NAMESPACE} --values {PATH_TO_CUSTOM_VALUES_FILE}
Add --version {SDE_VERSION} to the helm install command to install a specific SD Elements version.
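To preview what the chart will create before installing, Helm's dry-run mode renders the release without applying it; this optional check uses the same placeholders as above:
$ helm install {RELEASE_NAME} {LOCAL_REPO_NAME}/sde --namespace {NAMESPACE} --values {PATH_TO_CUSTOM_VALUES_FILE} --dry-run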
3.4. Verify successful installation
3.4.1. Confirm the Helm chart has been installed
Use the helm list command to show a list of current Helm releases (i.e., installed Helm charts). Confirm the release is in a deployed status.
$ helm list -n {NAMESPACE}
NAME             NAMESPACE     REVISION   UPDATED                                   STATUS     CHART               APP VERSION
{RELEASE_NAME}   {NAMESPACE}   1          2023-03-22 17:22:18.453487279 -0400 EDT   deployed   sde-{SDE_VERSION}   {SDE_VERSION}
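If the release shows any status other than deployed, helm status provides a more detailed view of the same release (last deployment time and any chart-provided notes):
$ helm status {RELEASE_NAME} -n {NAMESPACE}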
3.4.2. Validate the state of the pods
List SD Elements pods using kubectl get pods. Confirm all pods are in a Running or Completed status.
SD Elements may take up to 15 minutes to start, even if the Helm release status is deployed.
During this time, attempts to access SD Elements will return a 503 error.
$ kubectl get pods -n {NAMESPACE}
NAME                                          READY   STATUS      RESTARTS   AGE
{RELEASE_NAME}-broker-0                       1/1     Running     0          1m
{RELEASE_NAME}-data-store-XXXX-XX             1/1     Running     0          1m
{RELEASE_NAME}-database-0                     1/1     Running     0          1m
{RELEASE_NAME}-job-database-backup-XXXX-XX    1/1     Completed   0          1m
{RELEASE_NAME}-mailer-XXXX-XX                 1/1     Running     0          1m
{RELEASE_NAME}-reporting-XXXX-XX              2/2     Running     0          1m
{RELEASE_NAME}-web-XXXX-XX                    1/1     Running     0          1m
{RELEASE_NAME}-worker-10-XXXX-XX              1/1     Running     0          1m
{RELEASE_NAME}-worker-15-sde-medium-XXXX-XX   1/1     Running     0          1m
{RELEASE_NAME}-worker-17-XXXX-XX              1/1     Running     0          1m
{RELEASE_NAME}-worker-18-sde-low-XXXX-XX      1/1     Running     0          1m
If pods are in another status (e.g., Pending or ContainerCreating), wait and check back in a few minutes. If it has been longer than 30 minutes, please contact SD Elements Support at sdesupport@securitycompass.com.
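Before contacting Support, it can help to gather basic diagnostics for the affected pod; these are generic Kubernetes troubleshooting commands, with {POD_NAME} standing in for the stuck pod:
$ kubectl describe pod {POD_NAME} -n {NAMESPACE}
$ kubectl logs {POD_NAME} -n {NAMESPACE}
$ kubectl get events -n {NAMESPACE} --sort-by=.metadata.creationTimestamp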
4. Access SD Elements
Use the Superuser username and password provided in the custom values file to access SD Elements. If the custom values file is not accessible, see Retrieve Superuser Credentials for an Existing Instance.
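To find the URL at which the instance is exposed, listing the ingress resources in the release namespace is usually sufficient; the exact hostname depends on the ingress values in your custom values file, so treat this as a general pointer rather than an SD Elements-specific step:
$ kubectl get ingress -n {NAMESPACE}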