Enforce Secure Images |
Build |
|
Dockerfile Linting |
Build |
Use specific tags- Lint Dockerfiles to ensure base images are pulled at a specific tag, not just the generic “latest” tag
- Good Dockerfile linters include hadolint and dockerfilelint
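In hadolint this check is built in (rules DL3006/DL3007); as a minimal sketch of the same idea, a script like the following (file names are illustrative) fails any Dockerfile whose FROM line has no explicit tag or uses :latest:

```shell
#!/bin/sh
# Sketch of a tag-pinning check (not hadolint itself): reject FROM lines
# that omit a tag or pin to :latest.
check_tags() {
  awk 'toupper($1) == "FROM" {
    img = $2
    if (img !~ /:/ || img ~ /:latest$/) { print "unpinned base image: " img; bad = 1 }
  }
  END { exit bad }' "$1"
}
printf 'FROM ubuntu:22.04\nRUN echo ok\n' > Dockerfile.good
printf 'FROM ubuntu\n' > Dockerfile.bad
check_tags Dockerfile.good && echo "Dockerfile.good: pinned"
check_tags Dockerfile.bad || echo "Dockerfile.bad: rejected"
```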
|
Dockerfile Linting |
Build |
Whitelist base images- Run a script that accepts only Dockerfiles whose FROM directives point to images on a smart whitelist of base images
- Reject Dockerfiles that pull arbitrary, untrusted images
- The whitelist is smart because, in addition to a specific list of approved values, it has resolution strategies for when a value is not in the list
- One of these is to check whether the base image is an official Docker standard library image
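A minimal sketch of such a check (the registry names are assumptions): accept the image if it is explicitly approved, otherwise fall back to accepting bare names, which resolve to official Docker library images:

```shell
#!/bin/sh
# "Smart" whitelist sketch: explicit allowlist plus an official-library fallback.
ALLOWED="registry.example.com/base/alpine registry.example.com/base/ubuntu"
check_base() {
  img="${1%%:*}"                       # drop the tag for comparison
  for a in $ALLOWED; do
    [ "$img" = "$a" ] && return 0      # explicitly approved
  done
  case "$img" in
    */*) return 1 ;;                   # namespaced/third-party image: reject
    *)   return 0 ;;                   # bare name => official Docker library image
  esac
}
check_base "registry.example.com/base/alpine:3.19" && echo approved
check_base "alpine:3.19" && echo "official library image"
check_base "randomuser/sketchy:1.0" || echo rejected
```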
|
Dockerfile Linting |
Build |
Linux Package Managers- Use Linux distributions whose package managers verify the integrity of packages with built-in security features (e.g., GPG signature checks)
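apt, rpm, and apk verify package signatures automatically; anything fetched outside the package manager should get the same treatment by hand. A minimal sketch with sha256sum (the file name is illustrative):

```shell
#!/bin/sh
# Checksum verification for an artifact downloaded outside the package manager.
echo "example artifact" > tool.tar.gz           # stand-in for a downloaded file
sha256sum tool.tar.gz > tool.tar.gz.sha256      # in practice, ship the expected hash
sha256sum -c tool.tar.gz.sha256 && echo "integrity verified"
```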
|
Dockerfile Linting |
Build |
Sensitive Volumes- Don’t allow images to be built whose Dockerfile specifies a sensitive host path as a volume mount (scan the Dockerfile for volume mounts like
/proc or / ) - If such a container were built and deployed into a Kubernetes cluster, there is an increased possibility that, if compromised, it could be used to expose information about the host and aid further penetration of the entire infrastructure
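A sketch of that scan (the sensitive-path list here is illustrative, not exhaustive):

```shell
#!/bin/sh
# Reject Dockerfiles whose VOLUME directives name sensitive host-like paths.
check_volumes() {
  awk -v sensitive="/ /proc /sys /var/run/docker.sock" '
    toupper($1) == "VOLUME" {
      gsub(/[\[\]",]/, "", $2)                 # tolerate JSON-array form
      n = split(sensitive, s, " ")
      for (i = 1; i <= n; i++)
        if ($2 == s[i]) { print "sensitive volume: " $2; exit 1 }
    }' "$1"
}
printf 'FROM alpine:3.19\nVOLUME /proc\n' > Dockerfile.bad
check_volumes Dockerfile.bad || echo rejected
```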
|
Dockerfile Linting |
Build |
Block root users- Use linters and static code analysis to reject images whose Dockerfile specifies the root user (or no user at all, which defaults to root) as the one that will execute the program inside the container
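As a sketch, the check reduces to looking at the last USER directive in the Dockerfile (file names are illustrative; a missing USER is treated as root, since that is Docker's default):

```shell
#!/bin/sh
# Reject a Dockerfile whose effective user is root (explicitly, or by default).
check_user() {
  last=$(awk 'toupper($1) == "USER" { u = $2 } END { print u }' "$1")
  [ -n "$last" ] && [ "$last" != "root" ] && [ "$last" != "0" ]
}
printf 'FROM alpine:3.19\nUSER app\n' > Dockerfile.good
printf 'FROM alpine:3.19\nUSER root\n' > Dockerfile.bad
check_user Dockerfile.good && echo "runs as app"
check_user Dockerfile.bad || echo rejected
```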
|
Dockerfile Linting |
Build |
Squash Images- Sometimes during image construction you will need a private key or credentials to download the associated resources required (e.g., private Ruby gems)
- Unfortunately, when those keys or secrets are put into the container at build time, they remain hidden in the filesystem layers even when they aren’t needed at runtime
- When we absolutely need something like this, we use the Docker COPY directive to bring it into the container, then, after it has been used, RUN rm … to remove it
- When you follow that procedure with --squash added to the docker build command, the just-deleted key will not end up in any layer of the final built image
- This means that the key or secret which was in the container previously is now permanently gone from all layers
- Once you push the squashed image, it is free of those files you would rather keep secret
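The COPY-then-rm flow above can be sketched as the following Dockerfile (the credential path and gem setup are illustrative; note that --squash requires the Docker daemon's experimental mode):

```dockerfile
FROM ruby:3.2-slim
WORKDIR /app
COPY Gemfile Gemfile.lock ./
# Bring the credential in only for the step that needs it...
COPY gem_credentials /root/.gem/credentials
RUN bundle install \
 && rm /root/.gem/credentials    # ...and remove it in the same build
COPY . .
CMD ["bundle", "exec", "ruby", "app.rb"]
# Build with: docker build --squash -t app:1.0 .
# --squash collapses the layers, so the deleted credential is not recoverable
# from any intermediate layer of the pushed image.
```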
|
Dockerfile Linting |
Build |
Inspect Containers- We are very explicit about the packages inside the container
- We run scripts on the build server that list everything inside the container. This forces developers to see what’s inside their containers and helps them detect packages they don’t need
- For example, to display all packages installed on an RPM-based distro:
docker exec $INSTANCE_ID rpm -qa
(on a Debian-based distro, docker exec $INSTANCE_ID dpkg -l is the equivalent)
|
Image Scanning |
Build |
- Container images are typically built using orchestration tools, like Jenkins
- An image scanning tool needs to be part of the build process to scan each layer used in a container for vulnerabilities
- Clair is an open source image scanner, and CNCF backed image registries like Harbor use Clair to automatically scan all images
- Scan for malware and vulnerable packages, and reject images with high vulnerabilities
- One tool we use runs within the build and will stop the pipeline → this lets us prevent a bad container from deploying when a CVE affecting it is detected and published that same day
- Another tool we use passively scans all images stored in the image registry → it alerts us when our images develop issues as the CVE databases are updated over time (i.e., the image had no known CVE at shipping time but gained one later)
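As a sketch, the in-build gate described above can be a single pipeline step; the source names Clair, while this example uses Trivy's CLI for brevity (the job name and variable are assumptions):

```yaml
# Hypothetical CI job: fail the build when HIGH/CRITICAL CVEs are found,
# so a bad image never reaches the registry.
scan-image:
  stage: build
  script:
    - trivy image --severity HIGH,CRITICAL --exit-code 1 "$IMAGE_TAG"
```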
|
Image Provenance |
Operate |
- While image scanning ensures that the images you build are safe, image provenance ensures that the images you run are the ones you scanned and approved
- It is a way to ensure that only scanned and approved images are run in your clusters
- One way of doing that is to provide a list of trusted image registries and use a cluster-wide policy management tool to ensure that images from non-trusted registries are not allowed
- Using Docker Content Trust, the build pipeline cryptographically signs the metadata of a pushed image, so that when the image is pulled later, if its metadata doesn’t match the signed metadata held by the Notary server, the image is rejected at runtime
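The trusted-registries approach reduces to a simple admission check; a minimal sketch (registry names are illustrative, and a real cluster would enforce this with a policy engine rather than a script):

```shell
#!/bin/sh
# Admit a workload only if its image reference comes from an approved registry.
TRUSTED_REGISTRIES="registry.example.com harbor.internal.example"
image_trusted() {
  registry="${1%%/*}"                  # host part of the image reference
  for r in $TRUSTED_REGISTRIES; do
    [ "$registry" = "$r" ] && return 0
  done
  return 1                             # bare or third-party references are rejected
}
image_trusted "registry.example.com/team/app:1.0" && echo admitted
image_trusted "docker.io/random/app:1.0" || echo rejected
```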
|
Secrets Management |
Operate |
- Secrets are sensitive data, like passwords and keys, required by your application
- The best practice for managing secrets is to use “late-binding” and defer the loading of secrets from a secrets store to the application run-time — typically the initialization phase of the pod
- Example: Hashicorp Vault
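A sketch of late-binding at pod initialization (the secret path is an assumption): the secret is injected at pod start, e.g. by a Vault agent sidecar writing to a tmpfs file, and read at startup; it never exists in the image:

```shell
#!/bin/sh
# Read the credential at initialization time, not at build time.
start_app() {
  secret_file="${SECRET_PATH:-/var/run/secrets/db_password}"
  if [ -r "$secret_file" ]; then
    DB_PASSWORD=$(cat "$secret_file")
    echo "starting with runtime-injected credential"
  else
    echo "no credential injected; refusing to start" >&2
    return 1
  fi
}
# Simulate the injection that Vault/Kubernetes would do at pod start:
echo "s3cret" > /tmp/db_password
SECRET_PATH=/tmp/db_password start_app
```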
|
Namespaces |
Operate |
- Kubernetes Namespaces allow logical segmentation and isolation of resources, basically allowing one physical cluster to appear as several virtual clusters
- Whenever possible, applications should be isolated to their own namespaces
- This is important as several other Kubernetes features, such as RBAC, Resource Quotas, etc. can be applied at the namespace level
- However, it is important to note that namespaces do not automatically provide network isolation — this requires configuration of Network Policies
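For example, a ResourceQuota is one of those namespace-scoped controls (the namespace name and limits here are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "4"
    requests.memory: 8Gi
```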
|
Network Policies |
Operate |
- A Kubernetes Network Policy is like a firewall rule that allows fine-grained control of ingress and egress traffic to each application component, i.e. a pod
- Kubernetes network policies should be configured at a Namespace level, for defaults, and at a workload level for each component
- Simply configuring Network Policies does nothing — a CNI that can enforce network policy rules, like Calico, is also needed
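A minimal namespace-level default of the kind described above (the namespace name is illustrative): deny all ingress to every pod, which workload-level policies can then selectively open up:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}        # selects every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed => all ingress denied
```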
|