Pod or Container Without ResourceQuota
- Query id: 48a5beba-e4c0-4584-a2aa-e6894e4cf424
- Query name: Pod or Container Without ResourceQuota
- Platform: Kubernetes
- Severity: Low
- Category: Insecure Configurations
- CWE: 400
- URL: Github
Description
Each namespace should have an associated ResourceQuota policy to limit the total amount of resources that the Pods, Containers, and PersistentVolumeClaims in that namespace can consume.
Documentation
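As a reference for remediation, the sketch below shows a minimal ResourceQuota of the kind this query expects to find alongside a namespace's workloads. The object name, namespace, and limit values are illustrative assumptions, not taken from the test cases below:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota            # hypothetical name
  namespace: my-namespace        # hypothetical namespace the quota applies to
spec:
  hard:
    requests.cpu: "1"            # total CPU requests allowed across the namespace
    requests.memory: 1Gi         # total memory requests allowed across the namespace
    limits.cpu: "2"              # total CPU limits allowed across the namespace
    limits.memory: 2Gi           # total memory limits allowed across the namespace
    persistentvolumeclaims: "5"  # maximum number of PersistentVolumeClaims

Once such a quota exists, the API server rejects any Pod or PersistentVolumeClaim that would push the namespace past the stated totals, which is why the query checks that every namespace used by a workload also defines a ResourceQuota.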
Code samples
Code samples with security vulnerabilities
Positive test num. 1 - yaml file
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: myNewPod
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    securityContext:
      allowPrivilegeEscalation: false
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Positive test num. 2 - yaml file
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    securityContext:
      allowPrivilegeEscalation: false
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
Positive test num. 3 - yaml file
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: my-kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
Code samples without security vulnerabilities
Negative test num. 1 - yaml file
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  namespace: myNewPod2
spec:
  containers:
  - name: app
    image: images.my-company.example/app:v4
    securityContext:
      allowPrivilegeEscalation: false
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
  namespace: myNewPod2
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]
Negative test num. 2 - yaml file
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: my-kube-system2
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pods-high
  namespace: my-kube-system2
spec:
  hard:
    cpu: "1000"
    memory: 200Gi
    pods: "10"
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values: ["high"]