global:
  storageClass: glusterfs-storage
sde:
  superuserEmail: sde-superuser@acme.com
  superuserPassword: thePasswordForTheDefaultWebSuperUser
  defaultFromEmail: "ACME Corp. <noreply@acme.com>"
  serverEmail: host@acme.com
  defaultOrg: default
  feedbackEmail: sde-feedback@acme.com
  supportEmail: sde-admin@acme.com
Helm configuration
Advanced settings
The following are examples of advanced optional settings. Please review values.yaml in the SD Elements Helm Chart for the full list of options and comments. If in doubt, contact sdesupport@securitycompass.com.
If you use advanced settings, put them in the values.custom.yaml file, as you did with the settings used to deploy SD Elements.
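For example, an overlay carrying an advanced setting alongside existing deployment settings might look like this (the values shown are illustrative, combining keys used in examples elsewhere on this page):

```yaml
# values.custom.yaml -- illustrative overlay: deployment settings
# plus one advanced setting (networkIsolation, covered later on this page)
global:
  storageClass: glusterfs-storage
  networkIsolation: "ingress"
sde:
  defaultOrg: default
```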
Configuring an external database
- When using an external database, set the internal database subchart's enabled value to false and set values for external-database.
- The external database should be PostgreSQL 12.x.
sc-database:
  enabled: false
external-database:
  host: dbhost
  user: dbuser
  password: dbpwd
Enabling OpenShift compatibility
This configuration is only compatible with SD Elements versions 2022.2 or newer.
When enabling OpenShift compatibility, the Helm chart disables incompatible configurations (e.g. PodSecurityContext).
Prerequisites:
Configuration:
To enable OpenShift compatibility, add the following configuration to values.custom.yaml:
global:
  openshift:
    enabled: true
web:
  ingressClassName: openshift-default
We recommend using the OpenShift Container Platform Ingress Operator. The default IngressClassName is openshift-default, but this value may differ in your environment.
New Image Registry for Older Versions
This section applies only to SD Elements versions older than 2023.1.
To pull from the new image registry on an older version, add the following to your existing configuration:
global:
  imageRegistry: repository.securitycompass.com/sde-docker-prod
  imageRegistryFormat: "%s/%s_%s/%s:%s"
  imageOrganization: prod
Common customizations
Parameter | Comments | Default
global.storageClass | Sets the default storageclass for all persistent volumes | (unset)
| Use |
| Sets the number of latest SDE release databases to keep and prunes the rest (value cannot be less than 2) |
global.ipFamilies | List of supported IP protocols. Valid options are IPv4 and IPv6 |
| Set to |
| Set to |
| Sets the storageclass for the database data volume, overrides global.storageClass | (unset)
| Sets the size of the database data volume |
sde.defaultFromEmail | The default FROM address to send regular email as |
sde.defaultOrg | The default organization to create SD Elements users under | default
| Set to 'true' to enable JITT (additional license required) |
sde.feedbackEmail | E-mail address to which user feedback will be sent |
| Set your site hostname |
sde.serverEmail | The email address that error messages come from |
sde.supportEmail | E-mail address to direct in-app support requests to |
| The user session inactivity timeout (seconds) |
sde.superuserEmail | The default admin user email address |
| Adjust the SD Elements application logging level |
| Adjust the log level of the admin email process |
| Adjust the wsgi/apache process logging level |
Jobs
Asynchronous jobs are defined in values.yaml. You can remove default jobs and add new custom jobs.
The jobs must be included under the specifications section and in map format.
The following is an example of a custom job added under specifications:
job:
  specifications:
    custom_job:
      schedule: "01 1 * * *"
      niceness: 15
      command: ["/bin/sde.sh"]
      args:
        - "custom_management_command"
      failedJobsHistoryLimit: 1
      successfulJobsHistoryLimit: 1
      concurrencyPolicy: Forbid
      restartPolicy: OnFailure
      volumeWritePermission: false
      env:
        - name: ENV_VAR_NAME
          value: ENV_VAR_VALUE
      resources:
        requests:
          cpu: 1
          memory: 1024Mi
        limits:
          cpu: 2
          memory: 2048Mi
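To remove one of the chart's default jobs, you can rely on Helm's standard null-override behaviour in the overlay. The key default_job_name below is a placeholder, not an actual job name from the chart; substitute the name of a job defined under specifications in values.yaml:

```yaml
job:
  specifications:
    # Setting an existing job key to null removes the default job
    # defined in the chart's values.yaml (standard Helm behaviour).
    default_job_name: null
```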
Shared Object Storage
SD Elements uses shared object storage, via AWS S3 or an S3-compatible object storage API, to share files between SD Elements microservices.
Requirements
- An existing S3 bucket
- An AWS IAM service account that has read/write access to the S3 bucket
- The Access Key and Secret Key for the IAM service account
See Amazon S3: Allows read and write access to objects in an S3 Bucket for details on IAM policy configuration.
If you do not have access to AWS S3, see Alternative Configuration
below for details.
S3 configuration
SD Elements can be configured to use S3 by modifying the following section in your values.yaml overlay:
global:
  sharedStorage:
    bucketName: my-s3-bucket-name
    s3Url: https://s3.us-east-1.amazonaws.com
    s3AccessKey: AwsServiceAccountAccessKey
    s3SecretKey: AwsServiceAccountSecretKey
Enabling S3 Transfer Acceleration
To enable the use of S3 Transfer Acceleration in SD Elements when performing S3 operations, add the following environment variable in your values.yaml overlay:
worker:
  extraEnvVars:
    - name: S3_USE_ACCELERATE_ENDPOINT
      value: "true"
Alternative S3 Configuration
s3Url must be formatted as an Amazon S3 path-style URL.
You may wish to set up an IAM policy to restrict the service account to the specific S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::my-s3-bucket-name",
        "arn:aws:s3:::my-s3-bucket-name/*"
      ]
    }
  ]
}
If you are deploying in an environment without AWS S3 object storage, an alternative option is to enable the MinIO subchart within SD Elements, which provides an S3-compatible API service as a replacement. To use MinIO, configure both the global.sharedStorage and minio sections in your values.yaml overlay and ensure certain properties match.
MinIO bucket naming conventions are the same as those of Amazon S3. See Amazon S3 bucket naming rules for more information.
MinIO secretKey values must be at least 8 characters in length.
global:
  sharedStorage:
    bucketName: my-bucket-name # If using MinIO, ensure value matches a bucket in `minio` section
    s3Url: http://{release_name}-minio:9000
    s3AccessKey: AccessKeyGoesHere # If using MinIO, ensure value matches `accessKey` in `minio` section
    s3SecretKey: SecretKeyGoesHere # If using MinIO, ensure value matches `secretKey` in `minio` section
minio:
  enabled: true
  rootUser: admin
  rootPassword: Password
  persistence:
    storageClass: myStorageclassName
  buckets:
    - name: my-bucket-name # should match global.sharedStorage.bucketName
      policy: none
      purge: false
  users:
    - accessKey: AccessKeyGoesHere # should match global.sharedStorage.s3AccessKey
      secretKey: SecretKeyGoesHere # should match global.sharedStorage.s3SecretKey
      policy: readwrite
  imagePullSecrets:
    - name: "security-compass-secret"
TLS can be enabled for MinIO by providing the name of the secret containing the certificate and private key:
minio:
  ...
  tls:
    enabled: true
    certSecret: my-secret-name
    publicCrt: "tls.crt"
    privateKey: "tls.key"
If you do not have an external certificate secret, you may choose to use the self-signed certificate provided by the Helm chart. In this configuration, SD Elements needs to disable checking S3 certificate validity.
The name of the self-signed certificate is formatted based on the release name.
In versions older than 2023.1, set AWS_S3_USE_SSL instead of AWS_S3_VERIFY to False.
minio:
  ...
  tlsCreateSelfSigned: true
  tls:
    enabled: true
    certSecret: {release_name}-minio-server-tls-secrets
worker:
  extraEnvVars:
    - name: AWS_S3_VERIFY
      value: "False"
Network isolation
Every SD Elements component can be configured to filter out unauthorized network communications. This is accomplished by utilizing NetworkPolicy resources that Helm automatically generates when the global.networkIsolation option is set to a value other than its default (none).
Requirements
This feature requires a Container Network Interface (CNI) provider such as Cilium for Kubernetes or OVN-Kubernetes for OpenShift. OpenShift SDN is also supported for the namespace and ingress network isolation levels.
Acceptable values for global.networkIsolation
- none: disables this feature, allowing each pod to freely communicate with all other pods and resources located within or outside the cluster.
- namespace: permits pod-to-pod communication within the current Kubernetes namespace while blocking most communications initiated by pods and resources in other namespaces. All outbound traffic is permitted.
- ingress: each component can only receive network connections from pods and resources that have been expressly authorized. All outbound traffic is permitted. We feel that this option provides the ideal balance between security and usability, as customers may utilize the default ruleset without performing extensive tuning, and the likelihood of lateral movement in the event of a compromise is significantly reduced.
- full: SDE components can only establish network connections with pods and resources that have been expressly authorized. By default, all outbound traffic is prohibited. Substantial tuning is required for this choice, which may necessitate the assistance of the SDE Support group.
Configuration examples
Add one of the following configurations to an overlay file (e.g. values.custom.yaml) according to the desired security level.
Disable network isolation
global:
  networkIsolation: "none"
Enable ingress-level network isolation
global:
  networkIsolation: "ingress"
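The namespace and full levels described above are set the same way.
Enable namespace-level network isolation

```yaml
global:
  networkIsolation: "namespace"
```

Enable full network isolation (this level requires substantial tuning, as noted above)

```yaml
global:
  networkIsolation: "full"
```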
Additional rules
When global.networkIsolation is set to either ingress or full, extra rules may be required to facilitate connections between pods that are not ordinarily supposed to exchange data.
In this example, we will add a rule to allow NGINX ingress pods with label app.kubernetes.io/name: ingress-nginx from the Kubernetes namespace ingress-nginx to access the MinIO console UI on port 9001.
To accomplish this, we will copy the networkPolicies section that refers to MinIO, and all its rules, from values.yaml to values.custom.yaml. After that, we will simply append a new selectors block like the following one:
networkPolicies:
  minio:
    podSelector:
      matchLabels:
        app: minio
    ingress:
      - selectors:
          ...omitted for brevity...
      - selectors:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: "ingress-nginx"
            podSelector:
              matchLabels:
                app.kubernetes.io/name: ingress-nginx
        ports:
          - port: '9001'
Once this configuration is applied, we can run kubectl get netpol -o custom-columns=:metadata.name | grep np-minio | xargs kubectl get netpol -o yaml to see the changes reflected in MinIO’s NetworkPolicy. Specifically, the following block should appear:
- from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
  ports:
    - port: 9001
      protocol: TCP
IPv6 support
IPv6 is the most recent version of the Internet Protocol. It offers several improvements over IPv4, including a larger address space and greater scalability and network efficiency in Kubernetes.
SD Elements can be configured to work with standard IPv4 addressing (the default) or in IPv6-only fashion by tuning the value of global.ipFamilies. Although it is possible to use SD Elements in a dual-stack (IPv4 + IPv6) configuration, this mode is unsupported and its adoption is discouraged due to the increased infrastructure network complexity.
Requirements
This feature requires an IPv6-enabled Kubernetes version greater than or equal to 1.24.
If the Kubernetes cluster is not configured to use NAT64 and DNS64, external addresses that don’t expose IPv6 endpoints will be unreachable.
Configuration examples
By default, SD Elements runs in IPv4-only mode. To configure it to run in IPv6-only mode, add the following configuration to an overlay (e.g. values.custom.yaml).
General IPv6-only configuration
global:
  ipFamilies:
    - IPv6
In addition, it may be necessary to adjust the configurations of other components, as discussed in the following sections.
AWS S3 endpoint configuration
When using AWS S3 as shared storage in IPv6-only mode, the value for global.sharedStorage.s3Url must be adjusted to use dual-stack endpoints following the format s3.dualstack.AWS-REGION.amazonaws.com.
For instance, to access S3 in the us-east-1 region, the endpoint URL should be updated to https://s3.dualstack.us-east-1.amazonaws.com.
global:
  sharedStorage:
    s3Url: https://s3.dualstack.us-east-1.amazonaws.com
Mailer service configuration
When using an external service to complete message delivery, verify that its endpoint is also reachable via IPv6.
The configuration for the SD Elements mailer also needs to be adjusted so that it relays messages for the cluster’s CIDR, ensuring that notifications are correctly delivered. In the following snippet, we assume that fd00:c00b:1::/112 is the cluster’s IPv6 CIDR for our Kubernetes installation:
sc-mail:
  config:
    relayNetworks: "fd00:c00b:1::/112"
Nginx Ingress controller configuration
When using a dedicated NGINX ingress controller, you must also set the IPv6 value in ingress-nginx.controller.service.ipFamilies.
ingress-nginx:
  enabled: true
  controller:
    service:
      ipFamilies:
        - IPv6
Example with all the IPv6 configurations above
global:
  ipFamilies:
    - IPv6
  sharedStorage:
    s3Url: https://s3.dualstack.us-east-1.amazonaws.com
  ...
sc-mail:
  config:
    relayNetworks: "fd00:c00b:1::/112"
ingress-nginx:
  controller:
    service:
      ipFamilies:
        - IPv6
  ...