global:
storageClass: glusterfs-storage
sde:
superuserEmail: sde-superuser@example.com
superuserPassword: thePasswordForTheDefaultWebSuperUser
defaultFromEmail: "ACME Corp. <noreply@example.com>"
serverEmail: host@example.com
defaultOrg: default
feedbackEmail: sde-feedback@example.com
supportEmail: sde-admin@example.com
systemAdminEmail: it-support@example.com
Helm configuration
Advanced settings
The following are examples of advanced optional settings. Please review values.yaml in the SD Elements Helm Chart for the full list of options and comments. If in doubt, contact sdesupport@securitycompass.com.
If you use advanced settings, put them in the values.custom.yaml file, as you did with the settings used to deploy SD Elements.
Upgrading from any version older than 2023.2 requires the following values to be adjusted in the custom values.yaml file:
postgresql:
primary:
initdb:
user: sde
auth:
username: sde
password: <replace with previous sc-database.clientPassword>
# to enable metrics
metrics:
enabled: true
serviceMonitor:
enabled: true
rabbitmq:
auth:
erlangCookie: your-erlang-cookie
password: <replace with previous sc-broker.clientPassword>
# to enable metrics
metrics:
enabled: true
serviceMonitor:
enabled: true
These changes are required due to our introduction of the Bitnami-managed charts postgresql and rabbitmq, which replace the previous charts sc-database and sc-broker respectively.
CPU and memory pod requests and limits
To configure CPU and memory requests and limits for a pod, use the following example for a PostgreSQL resource:
postgresql:
...
primary:
resources:
requests:
cpu: 1
memory: 2048Mi
limits:
cpu: 4
memory: 8192Mi
The requests field defines the minimum amount of CPU and memory that the container requires, while the limits field establishes the maximum resources that the container is allowed to consume.
Workers example
worker:
wsgiProcesses: 6
synchronous:
lowPriority:
resources:
limits:
cpu: 6
highPriority:
resources:
limits:
cpu: 6
Configuring an external database
- When using an external database, set the internal database subchart to false and set values for external-database.
- The external database should be PostgreSQL 12.x.
postgresql:
  enabled: false
external-database:
  host: dbhost
  user: dbuser
  password: dbpwd
Enabling OpenShift compatibility
This configuration is only compatible with SD Elements versions 2023.2 or newer.
When enabling OpenShift compatibility, the Helm chart disables incompatible configurations (e.g. PodSecurityContext).
Pre-requisites:
Configuration:
To enable OpenShift compatibility, add the following configuration to values.custom.yaml:
global:
openshift:
enabled: true
web:
ingressClassName: openshift-default
rabbitmq:
podSecurityContext:
enabled: false
containerSecurityContext:
enabled: false
postgresql:
primary:
containerSecurityContext:
enabled: false
runAsUser: null
allowPrivilegeEscalation: false
runAsNonRoot: true
seccompProfile:
type: RuntimeDefault
capabilities:
drop:
- 'ALL'
podSecurityContext:
enabled: false
runAsUser: null
runAsGroup: null
fsGroup: null
seccompProfile:
type: RuntimeDefault
volumePermissions:
enabled: false
shmVolume:
enabled: false
We recommend using the OpenShift Container Platform Ingress Operator. The default IngressClassName is openshift-default, but this value may differ in your environment.
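If your cluster uses a different IngressClass, you can override the value in the same overlay. A minimal sketch, reusing the web.ingressClassName key shown in the configuration above (the class name here is a placeholder, not a chart default):

```yaml
web:
  # Replace with an IngressClass that actually exists in your cluster;
  # "my-ingress-class" is a placeholder.
  ingressClassName: my-ingress-class
```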
New Image Registry for Older Versions
This section applies only to SD Elements versions later than 2023.1.
To pull images from the new image registry on an older version, add the following to your existing configuration:
global:
imageRegistry: repository.securitycompass.com/sde-docker-prod
imageRegistryFormat: "%s/%s_%s/%s:%s"
imageOrganization: prod
Pod replacement strategy
This section applies only to SD Elements versions later than 2023.2.
Since version 2023.4, the deployment strategy used to replace old Pods with new ones has been updated to Recreate for the following components:
- MinIO
It is possible to revert to the previous strategy by adding the following configuration to an overlay (e.g. values.custom.yaml).
minio:
deploymentUpdate:
type: RollingUpdate
Since version 2023.3, the deployment strategy used to replace old Pods with new ones has been updated to RollingUpdate for the following components:
- Web
- JITT
- Workers
- Reporting
It is possible to revert to the previous strategy by adding the following configuration to an overlay (e.g. values.custom.yaml).
web:
updateStrategy:
type: Recreate
sc-jitt:
updateStrategy:
type: Recreate
worker:
updateStrategy:
type: Recreate
reporting:
updateStrategy:
type: Recreate
Common customizations
| Parameter | Comments | Default |
| global.storageClass | Sets the default storageclass for all persistent volumes | (unset) |
|  | Use |  |
|  | Sets the number of latest SDE release databases to keep and prunes the rest (value cannot be less than 2) |  |
| global.ipFamilies | List of supported IP protocols. Valid options are |  |
|  | Set to |  |
|  | Set to |  |
|  | Sets the size of the postgresql database data volume |  |
| sde.defaultFromEmail | The default FROM address to send regular email as |  |
| sde.defaultOrg | The default organization to create SD Elements users under | default |
|  | Set to 'true' to enable JITT (additional license required) |  |
|  | E-mail address to which user feedback will be sent |  |
|  | Set your site hostname |  |
| sde.serverEmail | The email address that error messages come from |  |
| sde.supportEmail | E-mail address to direct in-app support requests to |  |
| sde.systemAdminEmail | The email address to which technical failure notifications will be sent | (unset) |
|  | The user session inactivity timeout (seconds) |  |
| sde.superuserEmail | The default admin user email address |  |
|  | Adjust the SD Elements application logging level |  |
|  | Adjust the log level of the admin email process |  |
|  | Adjust the wsgi/apache process logging level |  |
Refer to values.yaml in the SD Elements Helm Chart for the complete list of parameter names.
Jobs
Asynchronous jobs are defined in values.yaml. You can remove default jobs and add new custom jobs.
The jobs must be included under the specifications section and in map format. The following are examples of custom jobs added under specifications:
job:
specifications:
custom_job:
schedule: "01 1 * * *"
niceness: 15
command: ["/bin/sde.sh"]
args:
- "custom_management_command"
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 1
concurrencyPolicy: Forbid
restartPolicy: OnFailure
volumeWritePermission: false
env:
- name: ENV_VAR_NAME
value: ENV_VAR_VALUE
resources:
requests:
cpu: 1
memory: 1024Mi
limits:
cpu: 2
memory: 2048Mi
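The section above also notes that default jobs can be removed. One way to do this from an overlay is to set the job's key to null, which Helm deletes when merging values. A sketch under that assumption; some_default_job is a placeholder, not an actual job name from values.yaml:

```yaml
job:
  specifications:
    # Setting a map entry to null removes it when Helm coalesces overlay values.
    # "some_default_job" is a placeholder; check values.yaml for the real job names.
    some_default_job: null
```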
Shared Object Storage
SD Elements uses shared object storage, via AWS S3 or an S3-compatible object store, for sharing files between SD Elements microservices.
Requirements
- An existing S3 bucket
- An AWS IAM service account that has read/write access to the S3 bucket
- The Access Key and Secret Key for the IAM service account
See Amazon S3: Allows read and write access to objects in an S3 Bucket for details on IAM policy configuration.
If you do not have access to AWS S3, see Alternative Configuration
below for details.
S3 configuration
SD Elements can be configured to use S3 by modifying the following section in your values.yaml overlay:
global:
sharedStorage:
bucketName: my-s3-bucket-name
s3Url: https://s3.us-east-1.amazonaws.com
s3AccessKey: AwsServiceAccountAccessKey
s3SecretKey: AwsServiceAccountSecretKey
Enabling S3 Transfer Acceleration
To enable the use of S3 Transfer Acceleration in SD Elements when performing S3 operations, add the following environment variable in your values.yaml overlay:
worker:
extraEnvVars:
- name: S3_USE_ACCELERATE_ENDPOINT
value: "true"
Alternative S3 Configuration
s3Url must be formatted as an Amazon S3 path-style URL.
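For reference, a path-style endpoint keeps the bucket name out of the hostname. A sketch reusing the values from the S3 configuration section above:

```yaml
global:
  sharedStorage:
    # Path-style: the bucket name is appended to the endpoint path,
    # e.g. https://s3.us-east-1.amazonaws.com/my-s3-bucket-name,
    # rather than a virtual-hosted hostname such as
    # my-s3-bucket-name.s3.us-east-1.amazonaws.com.
    s3Url: https://s3.us-east-1.amazonaws.com
    bucketName: my-s3-bucket-name
```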
You may wish to set up an IAM policy restricting the service account to the specific S3 bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": [
"arn:aws:s3:::my-s3-bucket-name",
"arn:aws:s3:::my-s3-bucket-name/*"
]
}
]
}
If you are deploying in an environment without AWS S3 object storage, an alternative option is to enable the MinIO subchart within SD Elements, which provides an S3-compatible API service as a replacement. In order to use MinIO, you should configure both the global.sharedStorage and minio sections in your values.yaml overlay and ensure certain properties match.
MinIO bucket naming conventions are the same as those of Amazon S3. See Amazon S3 bucket naming rules for more information.
MinIO secretKey values must be at least 8 characters in length.
global:
sharedStorage:
bucketName: my-bucket-name # If using MinIO, ensure value matches a bucket in `minio` section
s3Url: http://{release_name}-minio:9000
s3AccessKey: AccessKeyGoesHere # If using MinIO, ensure value matches `accessKey` in `minio` section
s3SecretKey: SecretKeyGoesHere # If using MinIO, ensure value matches `secretKey` in `minio` section
minio:
enabled: true
rootUser: admin
rootPassword: Password
persistence:
storageClass: myStorageclassName
buckets:
- name: my-bucket-name # should match global.sharedStorage.bucketName
policy: none
purge: false
users:
- accessKey: AccessKeyGoesHere # should match global.sharedStorage.s3AccessKey
secretKey: SecretKeyGoesHere # should match global.sharedStorage.s3SecretKey
policy: readwrite
imagePullSecrets:
- name: "security-compass-secret"
TLS can be enabled for MinIO by providing the name of the secret containing the certificate and private key.
minio:
...
tls:
enabled: true
certSecret: my-secret-name
publicCrt: "tls.crt"
privateKey: "tls.key"
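The certSecret above refers to a standard Kubernetes TLS secret holding the certificate and key under the tls.crt and tls.key fields. A minimal sketch; the secret name and data values are placeholders:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret-name          # must match minio.tls.certSecret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>  # placeholder
  tls.key: <base64-encoded private key>  # placeholder
```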
If you do not have an external certificate secret, you may choose to use the self-signed certificate provided by the Helm chart. In this configuration, SD Elements must be configured to trust third-party CA certificates, with the certificate added to the trust store.
The name of the self-signed certificate secret is derived from the release name.
global:
thirdPartyCACertificates:
enabled: true
minioSelfSignedCertSecret: {release_name}-minio-server-tls-secrets
minio:
...
tlsCreateSelfSigned: true
tls:
enabled: true
certSecret: {release_name}-minio-server-tls-secrets
Alternatively, S3 certificate validation may be disabled.
worker:
extraEnvVars:
- name: AWS_S3_VERIFY
value: "False"
In versions older than 2023.1, replace AWS_S3_VERIFY with AWS_S3_USE_SSL.
Network isolation
Every SD Elements component can be configured to filter out unauthorized network communications. This is accomplished by utilizing NetworkPolicy resources that Helm automatically generates when the global.networkIsolation option is set to a value other than its default (none).
Requirements
This feature requires a Container Network Interface (CNI) provider such as Cilium for Kubernetes or OVN-Kubernetes for OpenShift. OpenShift SDN is also supported for the namespace and ingress network isolation levels.
Acceptable values for global.networkIsolation
- none: disables this feature, allowing each pod to freely communicate with all other pods and resources located within or outside the cluster.
- namespace: permits pod-to-pod communication within the current Kubernetes namespace while blocking most communications initiated by pods and resources in other namespaces. All outbound traffic is permitted.
- ingress: each component can only receive network connections from pods and resources that have been expressly authorized. All outbound traffic is permitted. We feel that this option provides the ideal balance between security and usability, as customers may utilize the default ruleset without performing extensive tuning, and the likelihood of lateral movement in the event of a compromise is significantly reduced.
- full: SDE components can only establish network connections with pods and resources that have been expressly authorized. By default, all outbound traffic is prohibited. Substantial tuning is required for this choice, which may necessitate the assistance of the SDE Support group.
Configuration examples
Add one of the following configurations to an overlay (e.g. values.custom.yaml) file according to the desired security level.
Disable network isolation
global:
networkIsolation: "none"
Enable ingress-level network isolation
global:
networkIsolation: "ingress"
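Enable full network isolation
Following the same pattern, the full level described above can be enabled; as noted, expect to add further egress rules afterwards:

```yaml
global:
  networkIsolation: "full"
```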
Additional rules
When global.networkIsolation is set to either ingress or full, extra rules may be required to facilitate connections between pods that are not ordinarily supposed to exchange data.
In this example, we will add a rule to allow NGINX ingress pods with the label app.kubernetes.io/name: ingress-nginx from the Kubernetes namespace ingress-nginx to access the MinIO console UI on port 9001.
To accomplish this, we will copy the networkPolicy section that refers to MinIO, and all its rules, from values.yaml to values.custom.yaml. After that, we will simply append a new selectors block like the following one:
networkPolicies:
minio:
podSelector:
matchLabels:
app: minio
ingress:
- selectors:
...omitted for brevity...
- selectors:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: "ingress-nginx"
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
ports:
- port: '9001'
Once this configuration is applied, we can run kubectl get netpol -o custom-columns=:metadata.name | grep np-minio | xargs kubectl get netpol -o yaml to see the changes reflected in MinIO’s NetworkPolicy. Specifically, the following block should appear:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: ingress-nginx
podSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
ports:
- port: 9001
protocol: TCP
IPv6 support
IPv6 is the most recent version of the Internet Protocol. It offers several improvements over IPv4, including a larger address space, and greater scalability and network efficiency in Kubernetes.
SD Elements can be configured to work with standard IPv4 addressing (the default) or in IPv6-only mode by tuning the value of global.ipFamilies. Although it is possible to use SD Elements in a dual-stack (IPv4 + IPv6) configuration, this mode is unsupported and its adoption is discouraged due to the increased infrastructure network complexity.
Requirements
This feature requires an IPv6-enabled Kubernetes version greater than or equal to 1.24.
If the Kubernetes cluster is not configured to use NAT64 and DNS64, external addresses that don’t expose IPv6 endpoints will be unreachable.
Configuration examples
By default, SD Elements runs in IPv4-only mode. To configure it to run in IPv6-only mode, add the following configuration to an overlay (e.g. values.custom.yaml).
General IPv6-only configuration
global:
ipFamilies:
- IPv6
In addition, it may be necessary to adjust the configurations of other components, as discussed in the following sections.
AWS S3 endpoint configuration
When using AWS S3 as shared storage in IPv6-only mode, the value for global.sharedStorage.s3Url must be adjusted to use dual-stack endpoints following the format s3.dualstack.AWS-REGION.amazonaws.com.
For instance, to access S3 in the us-east-1 region, the endpoint URL should be updated to https://s3.dualstack.us-east-1.amazonaws.com.
global:
sharedStorage:
s3Url: https://s3.dualstack.us-east-1.amazonaws.com
Mailer service configuration
When using an external service to complete message delivery, verify that its endpoint is also reachable via IPv6.
The configuration for the SD Elements mailer also needs to be adjusted so it can relay messages for the cluster’s CIDR, ensuring that notifications are correctly relayed. In the following snippet, we assume that fd00:c00b:1::/112 is the cluster’s IPv6 CIDR for our Kubernetes installation:
sc-mail:
config:
relayNetworks: "fd00:c00b:1::/112"
Nginx Ingress controller configuration
When using a dedicated Nginx ingress controller, you must also set the IPv6 value in ingress-nginx.controller.service.ipFamilies.
ingress-nginx:
enabled: true
controller:
service:
ipFamilies:
- IPv6
Example with all the IPv6 configurations above
global:
ipFamilies:
- IPv6
sharedStorage:
s3Url: https://s3.dualstack.us-east-1.amazonaws.com
...
sc-mail:
config:
relayNetworks: "fd00:c00b:1::/112"
ingress-nginx:
controller:
service:
ipFamilies:
- IPv6
...
Trend Reporting
SD Elements 2023.2 introduces a daily cronjob that takes snapshots and generates data for trend reporting. Set the following in values.custom.yaml to:
- Change the schedule:
job:
  specifications:
    trend-reporting:
      schedule: "35 3 * * *"
- Disable the feature:
trendReporting:
  enabled: false
Mail Configuration
By default, the sc-mail deployment will deliver email directly to the internet. It can also be configured as a mail relay for the following upstream services:
- Regular SMTP relay (SMARTHOST)
- AWS SES
- Gmail
SMTP relay
An example of relaying mail to a regular SMTP server:
sc-mail:
config:
smartHostType: SMARTHOST
smartHostAddress: smtp.example.com
smartHostPort: 25
SES relay
An example of relaying mail to AWS SES:
sc-mail:
config:
smartHostType: SES
smartHostRegion: us-east-1 # required if using SES type
smartHostUser: AKIAIOSFODNN7EXAMPLE # IAM smtp user access key
smartHostPassword: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY # IAM smtp user secret key
smartHostPort: 587
Gmail relay
An example of relaying email to Gmail:
sc-mail:
config:
smartHostType: GMAIL
smartHostUser: username
smartHostPassword: password
smartHostPort: 587
Soft Maintenance Mode
Soft Maintenance Mode restricts access to SD Elements during maintenance activities. When enabled, the login page redirects to a soft error page notifying users that scheduled maintenance is taking place. This allows administrators to complete maintenance without concern that data integrity will be compromised or that users will contact internal support teams. Users included in the whitelist can access SD Elements regardless. Follow the steps below to configure worker pods for whitelisted users and enable Soft Maintenance Mode.
Configure Soft Maintenance Mode
Add the following environment variable to the worker-10 pod configuration in values.yaml, with a list of users that are in the allow list:
worker:
extraEnvVars:
- name: SDE_SOFT_MAINTENANCE_WHITELISTED_USERS
value: user1@email.domain.com,user2@email.domain.com
Enable, Disable and check Status of Soft Maintenance Mode
- To run Soft Maintenance Mode commands, first exec into the worker-10 pod by running the following:
# Get the worker-10 pod name
POD_NAME=$(kubectl get pods -o custom-columns=":metadata.name" | grep worker-10)
# Exec into the pod
kubectl exec -it ${POD_NAME} -- /bin/bash
- To enable Soft Maintenance Mode, run the following in the worker-10 pod:
/bin/sde.sh soft-maintenance on
- To disable Soft Maintenance Mode, run the following in the worker-10 pod:
/bin/sde.sh soft-maintenance off
- To check the status of Soft Maintenance Mode, run the following in the worker-10 pod:
/bin/sde.sh soft-maintenance