global:
  storageClass: glusterfs-storage
sde:
  superuserEmail: sde-superuser@acme.com
  superuserPassword: thePasswordForTheDefaultWebSuperUser
  defaultFromEmail: "ACME Corp. <noreply@acme.com>"
  serverEmail: host@acme.com
  defaultOrg: default
  feedbackEmail: sde-feedback@acme.com
  supportEmail: sde-admin@acme.com
Helm configuration
Advanced settings
The following are examples of advanced optional settings. Review values.yaml in the SD Elements Helm Chart for the full list of options and comments. If in doubt, contact sdesupport@securitycompass.com.
If you use advanced settings, put them in the values.custom.yaml file, as you did with the settings used to deploy SD Elements.
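An overlay file only needs to contain the keys you are overriding. As a sketch, using keys from the basic configuration above (all values here are illustrative placeholders):

```yaml
# values.custom.yaml -- only overridden keys are required; all values are placeholders
global:
  storageClass: my-storage-class
sde:
  supportEmail: helpdesk@example.com
```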
Configuring an external database
- When using an external database, set the internal database subchart's enabled value to false and set values for external-database.
- The external database should be PostgreSQL 12.x.

sc-database:
  enabled: false
external-database:
  host: dbhost
  user: dbuser
  password: dbpwd
Enabling OpenShift compatibility
This configuration is only compatible with SD Elements versions 2022.2 or newer. When OpenShift compatibility is enabled, the Helm chart disables incompatible configurations (e.g. PodSecurityContext).
To enable OpenShift compatibility, add the following configuration to values.custom.yaml:
global:
  openshift:
    enabled: true
web:
  ingressClassName: openshift-default
We recommend using the OpenShift Container Platform Ingress Operator. The default IngressClassName is openshift-default, but this value may differ in your environment.
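If your cluster uses a differently named IngressClass, override the value accordingly. The class name below is a hypothetical example; use the IngressClass actually defined in your cluster:

```yaml
web:
  ingressClassName: my-openshift-ingress-class  # hypothetical name; list IngressClass resources in your cluster to find the real one
```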
Common customizations
| Parameter | Comments | Default |
| --- | --- | --- |
| global.storageClass | Sets the default storageclass for all persistent volumes | (unset) |
| (see values.yaml) | Sets the storageclass for the database data volume; overrides global.storageClass | (unset) |
| (see values.yaml) | Sets the size of the database data volume | (see values.yaml) |
| sde.defaultFromEmail | The default FROM address to send regular email as | (see values.yaml) |
| sde.defaultOrg | The default organization to create SD Elements users under | default |
| (see values.yaml) | Set to 'true' to enable JITT (additional license required) | (see values.yaml) |
| sde.feedbackEmail | E-mail address to which user feedback will be sent | (see values.yaml) |
| (see values.yaml) | Set your site hostname | (see values.yaml) |
| sde.serverEmail | The email address that error messages come from | (see values.yaml) |
| sde.supportEmail | E-mail address to direct in-app support requests to | (see values.yaml) |
| (see values.yaml) | The user session inactivity timeout (seconds) | (see values.yaml) |
| sde.superuserEmail | The default admin user email address | (see values.yaml) |
| (see values.yaml) | Adjust the SD Elements application logging level | (see values.yaml) |
| (see values.yaml) | Adjust the log level of the admin email process | (see values.yaml) |
| (see values.yaml) | Adjust the wsgi/apache process logging level | (see values.yaml) |
Jobs
Asynchronous jobs are defined in values.yaml. You can remove default jobs and add new custom jobs. The jobs must be included under the specifications section, in map format.
The following is an example of a custom job added under specifications:
job:
  specifications:
    custom_job:
      schedule: "01 1 * * *"
      niceness: 15
      command: ["/bin/sde.sh"]
      args:
        - "custom_management_command"
      failedJobsHistoryLimit: 1
      successfulJobsHistoryLimit: 1
      concurrencyPolicy: Forbid
      restartPolicy: OnFailure
      volumeWritePermission: false
      env:
        - name: ENV_VAR_NAME
          value: ENV_VAR_VALUE
      resources:
        requests:
          cpu: 1
          memory: 1024Mi
        limits:
          cpu: 2
          memory: 2048Mi
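To remove one of the default jobs, Helm's standard null-override mechanism can be used in the overlay. The job name below is a hypothetical placeholder for a job key that actually exists under specifications in values.yaml:

```yaml
job:
  specifications:
    some_default_job: null  # hypothetical job name; setting a key to null removes the chart's default entry
```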
Shared Object Storage
SD Elements uses shared object storage, via AWS S3 or an S3-compatible object storage API, to share files between SD Elements microservices.
Requirements
- An existing S3 bucket
- An AWS IAM service account that has read/write access to the S3 bucket
- The Access Key and Secret Key for the IAM service account

See Amazon S3: Allows read and write access to objects in an S3 Bucket for details on IAM policy configuration.
If you do not have access to AWS S3, see Alternative Configuration below for details.
S3 configuration
SD Elements can be configured to use S3 by modifying the following section in your values.yaml overlay:
global:
  sharedStorage:
    bucketName: my-s3-bucket-name
    s3Url: https://s3.us-east-1.amazonaws.com
    s3AccessKey: AwsServiceAccountAccessKey
    s3SecretKey: AwsServiceAccountSecretKey
Enabling S3 Transfer Acceleration
To enable the use of S3 Transfer Acceleration in SD Elements when performing S3 operations, add the following environment variable in your values.yaml overlay:
worker:
  extraEnvVars:
    - name: S3_USE_ACCELERATE_ENDPOINT
      value: "true"
Alternative S3 Configuration
The s3Url must be formatted as an Amazon S3 path-style URL.
You may wish to set up an IAM Policy to restrict the service account to the specific S3 bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": [
        "arn:aws:s3:::my-s3-bucket-name",
        "arn:aws:s3:::my-s3-bucket-name/*"
      ]
    }
  ]
}
If you are deploying in an environment without AWS S3 object storage, an alternative option is to enable the MinIO subchart within SD Elements, which provides an S3-compatible API service as a replacement. To use MinIO, configure both the global.sharedStorage and minio sections in your values.yaml overlay and ensure certain properties match.
MinIO bucket naming conventions are the same as those of Amazon S3. See Amazon S3 bucket naming rules for more information.
MinIO secretKey values must be at least 8 characters in length.
global:
  sharedStorage:
    bucketName: my-bucket-name # If using MinIO, ensure value matches a bucket in `minio` section
    s3Url: http://{namespace}-minio:9000
    s3AccessKey: AccessKeyGoesHere # If using MinIO, ensure value matches `accessKey` in `minio` section
    s3SecretKey: SecretKeyGoesHere # If using MinIO, ensure value matches `secretKey` in `minio` section
minio:
  enabled: true
  rootUser: admin
  rootPassword: Password
  persistence:
    storageClass: myStorageclassName
  tls:
    enabled: false
  buckets:
    - name: my-bucket-name # should match global.sharedStorage.bucketName
      policy: none
      purge: false
  users:
    - accessKey: AccessKeyGoesHere # should match global.sharedStorage.s3AccessKey
      secretKey: SecretKeyGoesHere # should match global.sharedStorage.s3SecretKey
      policy: readwrite
  imagePullSecrets:
    - name: "security-compass-secret"
Network isolation
Every SD Elements component can be configured to filter out unauthorized network communications. This is accomplished by utilizing NetworkPolicy resources that Helm automatically generates when the global.networkIsolation option is set to a value other than its default (none).
Requirements
This feature requires a Container Network Interface (CNI) provider such as Cilium for Kubernetes, or OVN-Kubernetes for OpenShift. OpenShift SDN is also supported for the namespace and ingress network isolation levels.
Acceptable values for global.networkIsolation:
- none: disables this feature, allowing each pod to freely communicate with all other pods and resources located within or outside the cluster.
- namespace: permits pod-to-pod communication within the current Kubernetes namespace while blocking most communications initiated by pods and resources in other namespaces. All outbound traffic is permitted.
- ingress: each component can only receive network connections from pods and resources that have been expressly authorized. All outbound traffic is permitted. We feel this option provides the ideal balance between security and usability: customers may use the default ruleset without extensive tuning, and the likelihood of lateral movement in the event of a compromise is significantly reduced.
- full: SDE components can only establish network connections with pods and resources that have been expressly authorized. By default, all outbound traffic is prohibited. This option requires substantial tuning and may necessitate the assistance of the SDE Support group.
Configuration examples
Add one of the following configurations to an overlay file (e.g. values.custom.yaml) according to the desired security level.
Disable network isolation
global:
  networkIsolation: "none"
Enable ingress-level network isolation
global:
  networkIsolation: "ingress"
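The remaining levels are set the same way. For example, to enable the strictest level, which by design requires substantial additional tuning:

```yaml
global:
  networkIsolation: "full"
```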
Additional rules
When global.networkIsolation is set to either ingress or full, extra rules may be required to facilitate connections between pods that are not ordinarily supposed to exchange data.
In this example, we add a rule to allow NGINX ingress pods with the label app.kubernetes.io/name: ingress-nginx, from the Kubernetes namespace ingress-nginx, to access the MinIO console UI on port 9001.
To accomplish this, copy the networkPolicy section that refers to MinIO, with all its rules, from values.yaml to values.custom.yaml. Then append a new selectors block like the following:
networkPolicies:
  minio:
    podSelector:
      matchLabels:
        app: minio
    ingress:
      - selectors:
          ...omitted for brevity...
      - selectors:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: "ingress-nginx"
            podSelector:
              matchLabels:
                app.kubernetes.io/name: ingress-nginx
        ports:
          - port: '9001'
Once this configuration is applied, we can run kubectl get netpol -o custom-columns=:metadata.name | grep np-minio | xargs kubectl get netpol -o yaml to see the changes reflected in MinIO's NetworkPolicy. Specifically, the following block should appear:
- from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
  ports:
    - port: 9001
      protocol: TCP