
Set resource limit for addon containers #10653

Merged · 4 commits · Jul 2, 2015
Changes from all commits
@@ -22,6 +22,10 @@ spec:
   containers:
   - image: gcr.io/google_containers/heapster:v0.15.0
     name: heapster
+    resources:
+      limits:
+        cpu: 100m
+        memory: 200Mi
     command:
     - /heapster
     - --source=kubernetes:''
@@ -22,6 +22,10 @@ spec:
   containers:
   - image: gcr.io/google_containers/heapster:v0.15.0
     name: heapster
+    resources:
+      limits:
+        cpu: 100m
+        memory: 200Mi
     command:
     - /heapster
     - --source=kubernetes:''
@@ -22,6 +22,10 @@ spec:
   containers:
   - image: gcr.io/google_containers/heapster:v0.15.0
     name: heapster
+    resources:
+      limits:
+        cpu: 100m
+        memory: 200Mi
     command:
     - /heapster
     - --source=kubernetes:''
@@ -22,13 +22,21 @@ spec:
   containers:
   - image: gcr.io/google_containers/heapster_influxdb:v0.3
     name: influxdb
+    resources:
+      limits:
+        cpu: 100m
+        memory: 200Mi
     ports:
     - containerPort: 8083
       hostPort: 8083
     - containerPort: 8086
       hostPort: 8086
   - image: gcr.io/google_containers/heapster_grafana:v0.7
     name: grafana
+    resources:
+      limits:
+        cpu: 100m
+        memory: 100Mi
     env:
     - name: INFLUXDB_EXTERNAL_URL
       value: /api/v1beta3/proxy/namespaces/default/services/monitoring-influxdb:api/db/
@@ -22,6 +22,10 @@ spec:
   containers:
   - image: gcr.io/google_containers/heapster:v0.15.0
     name: heapster
+    resources:
+      limits:
+        cpu: 100m
+        memory: 200Mi
     command:
     - /heapster
     - --source=kubernetes:''
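The limits above use Kubernetes quantity notation: `100m` is 100 millicores (one tenth of a CPU) and `200Mi` is 200 mebibytes. A minimal Python sketch of how these suffixes decode — illustrative only, not the actual Kubernetes parser, which handles many more suffixes:

```python
def parse_cpu(q: str) -> float:
    """Decode a CPU quantity: '100m' -> 0.1 cores, '2' -> 2.0 cores."""
    if q.endswith("m"):
        return int(q[:-1]) / 1000.0
    return float(q)

def parse_memory(q: str) -> int:
    """Decode a memory quantity into bytes: '200Mi' -> 200 * 1024**2."""
    suffixes = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suf, mult in suffixes.items():
        if q.endswith(suf):
            return int(q[: -len(suf)]) * mult
    return int(q)  # plain integer: already bytes

print(parse_cpu("100m"))      # 0.1
print(parse_memory("200Mi"))  # 209715200
```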
12 changes: 12 additions & 0 deletions cluster/addons/dns/skydns-rc.yaml.in
@@ -22,6 +22,10 @@ spec:
   containers:
   - name: etcd
     image: gcr.io/google_containers/etcd:2.0.9
+    resources:
+      limits:
+        cpu: 100m
+        memory: 50Mi
     command:
     - /usr/local/bin/etcd
     - -listen-client-urls
@@ -32,11 +36,19 @@ spec:
     - skydns-etcd
   - name: kube2sky
     image: gcr.io/google_containers/kube2sky:1.10
+    resources:
+      limits:
+        cpu: 100m
+        memory: 50Mi
     args:
     # command = "/kube2sky"
     - -domain={{ pillar['dns_domain'] }}
   - name: skydns
     image: gcr.io/google_containers/skydns:2015-03-11-001
+    resources:
+      limits:
+        cpu: 100m
+        memory: 50Mi
     args:
     # command = "/skydns"
     - -machines=http://localhost:4001
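The DNS pod above runs three containers (etcd, kube2sky, skydns), each capped at 100m CPU and 50Mi memory, so the pod's aggregate cap is the per-container sum. A quick sketch of that arithmetic:

```python
# Per-container limits taken from the skydns-rc.yaml.in diff above.
containers = {
    "etcd":     {"cpu_m": 100, "mem_mi": 50},
    "kube2sky": {"cpu_m": 100, "mem_mi": 50},
    "skydns":   {"cpu_m": 100, "mem_mi": 50},
}

total_cpu_m = sum(c["cpu_m"] for c in containers.values())
total_mem_mi = sum(c["mem_mi"] for c in containers.values())
print(f"pod limit: {total_cpu_m}m CPU, {total_mem_mi}Mi memory")  # 300m CPU, 150Mi memory
```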
3 changes: 3 additions & 0 deletions cluster/addons/fluentd-elasticsearch/es-controller.yaml
@@ -22,6 +22,9 @@ spec:
   containers:
   - image: gcr.io/google_containers/elasticsearch:1.4
     name: elasticsearch-logging
+    resources:
+      limits:
+        cpu: 100m
     ports:
     - containerPort: 9200
       name: es-port
3 changes: 3 additions & 0 deletions cluster/addons/fluentd-elasticsearch/kibana-controller.yaml
@@ -22,6 +22,9 @@ spec:
   containers:
   - name: kibana-logging
     image: gcr.io/google_containers/kibana:1.3
+    resources:
+      limits:
+        cpu: 100m
     env:
     - name: "ELASTICSEARCH_URL"
       value: "http://elasticsearch-logging:9200"
1 change: 1 addition & 0 deletions cluster/saltbase/salt/fluentd-gcp/fluentd-gcp.yaml
@@ -9,6 +9,7 @@ spec:
     resources:
       limits:
         cpu: 100m
+        memory: 200Mi
Contributor

Isn't 100Mi more than enough, given that in all of Saad's tests it never used that much?

Member Author

No, Saad's report in that issue covers only a couple of hours; the limit I chose here is based on more than two days of data. Both of us are running the soak test against our default configurations.

Member

To be clear, my report was over the course of 24 hours. It looks like the fluentd container with the elasticsearch plugin has a memory leak, because usage continues to grow; after two days it hit 151 MB. See #10335 (comment).

Member

Why is the memory limit set for fluentd-gcp but not for fluentd-es?

     env:
     - name: "FLUENTD_ARGS"
       value: "-qq"
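The disagreement in the review thread comes down to unit arithmetic. Assuming the reported 151 MB is decimal megabytes, a quick sketch (the numbers here only restate the review comments):

```python
MIB = 1024**2  # mebibyte, the unit behind the "Mi" suffix
MB = 1000**2   # decimal megabyte, as in the "151 MB" report

observed_bytes = 151 * MB  # fluentd usage after two days, from the thread

for limit_mi in (100, 200):
    limit_bytes = limit_mi * MIB
    verdict = "would be exceeded" if observed_bytes > limit_bytes else "leaves headroom"
    print(f"{limit_mi}Mi = {limit_bytes / MB:.1f} MB: {verdict}")
# 100Mi = 104.9 MB: would be exceeded
# 200Mi = 209.7 MB: leaves headroom
```

This supports the author's choice of 200Mi over 100Mi for a container whose memory keeps growing over multi-day runs.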