Troubleshooting

This section covers common issues that may arise while managing SD Elements and how to address them.

Event 1: Unable to download helm charts

Contact SD Elements support: sdesupport@securitycompass.com
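
Before contacting support, a minimal sketch for reproducing the failure; the repository name, URL, and chart name below are placeholders, so substitute the registry details from your SD Elements installation instructions:

    # Add (or refresh) the chart repository and attempt to pull the chart
    $ helm repo add sde-charts https://charts.example.com
    $ helm repo update
    $ helm pull sde-charts/<CHART_NAME> --version <CHART_VERSION>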

Event 2: Helm install fails

Refer to the current software requirements for helm.
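
For example, you can compare the installed helm client version against those requirements:

    # Print the helm client version
    $ helm version --short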

Review the error message. The following are some common errors and their immediate solutions:

Unexpected files exceeding the request size limit
Error: create: failed to create: Request entity too large: limit is 3145728

Ensure that the path where the helm chart resides does not contain large files that are not part of the helm chart templates; the request size limit is 3 MiB (3145728 bytes).
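
One way to locate oversized files and exclude them from the chart, assuming the chart directory is ./sdelements (a placeholder path):

    # List files larger than 1 MiB under the chart directory
    $ find ./sdelements -type f -size +1M -exec ls -lh {} \;
    # Example: exclude packaged archives from the chart via .helmignore
    $ echo "*.tgz" >> ./sdelements/.helmignore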

If this does not resolve the issue, contact SD Elements support.

When contacting support, be prepared to share information about your helm, Kubernetes, and SD Elements helm chart versions.
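
For example, in addition to the helm version command shown earlier, the following collects the remaining details (the namespace is a placeholder):

    # Kubernetes client and server versions
    $ kubectl version
    # Deployed SD Elements release and helm chart version
    $ helm list -n <NAMESPACE>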

Event 3: Helm install has succeeded but one or more pods show statuses such as 'Error' or 'CrashLoopBackOff'

  • Review Events for the pod in question:

    $ kubectl describe pod {problem_pod_name}
  • Confirm that persistent volume claims are all bound:

    $ kubectl get pvc
    NAME                          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
    sde1-datastore-volume-claim   Bound    pvc-0d13b63a-9f71-47e0-baba-a0b37af04336   10Gi       RWO            microk8s-hostpath   16s
    sde1-db-volume-claim          Bound    pvc-59126ddc-b697-438b-a6cf-482aba8498f1   30Gi       RWO            microk8s-hostpath   16s
    • If any have a Status that isn’t 'Bound' or 'Pending', check the Events section for messages that may indicate the cause:

      $ kubectl describe pvc {problem_pvc_name}
  • Review the logs of the problematic pod (for pods in 'CrashLoopBackOff', also see the sketch after this list):

    $ kubectl logs -f {problem_pod_name}
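
For pods in 'CrashLoopBackOff', the container may restart before you can read its output; a sketch of two additional checks:

    # Logs from the previously terminated container instance
    $ kubectl logs --previous {problem_pod_name}
    # Recent events in the namespace, oldest first
    $ kubectl get events --sort-by=.metadata.creationTimestamp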

Event 4: Cron job pod fails with CreateContainerConfigError

  • Review container status for the pod in question:

    $ kubectl get pod <PROBLEMATIC_POD> -o jsonpath={.status.containerStatuses}
    • If the container state is 'waiting' and the message indicates that a key could not be found in the secret (see the sketch after this list to confirm which keys the secret contains):

      # Get the corresponding job name for the pod
      $ kubectl get pod <PROBLEMATIC_POD> -o jsonpath={.metadata.labels.job-name}
      # Delete the job corresponding to the problematic pod
      $ kubectl delete job <JOB_NAME>
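
To confirm which keys the referenced secret actually contains (the secret name is a placeholder; it normally appears in the container status message):

    # List the key names and value sizes stored in the secret
    $ kubectl describe secret <SECRET_NAME>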
