kubelet access to API server should be subdivided #40476
Comments
cc @kubernetes/sig-auth-feature-requests @kubernetes/sig-node-feature-requests
Do we have any existing APIs that filter list and watch results like this? Would it be reasonable to implement restrictions for gets on {secrets, configmaps, persistentvolumeclaim}, and updates on pods, then defer further restrictions on {list,watch} operations until we figure out filtering? Or do nodes {list,watch} secrets today?
they do not
Yes, label and field selection filters can be applied and they are available (and restrictable) on an http request, so it would be consistent to enforce it with an authorizer.
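To illustrate the idea, here is a minimal sketch of enforcing a field selector in an authorizer-style check: the node's list/watch is allowed only if its `fieldSelector` pins `spec.nodeName` to the requesting node. The function name and the raw-URL input are hypothetical simplifications; a real authorizer would see parsed request attributes, not a query string.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// authorizeNodeList decides whether a pod list/watch from a node should be
// allowed. It requires a fieldSelector term pinning spec.nodeName to the
// node's own name, so a node can only watch pods bound to it.
func authorizeNodeList(nodeName, rawQuery string) bool {
	q, err := url.ParseQuery(rawQuery)
	if err != nil {
		return false
	}
	// fieldSelector may contain several comma-separated terms.
	for _, sel := range strings.Split(q.Get("fieldSelector"), ",") {
		if sel == "spec.nodeName="+nodeName {
			return true
		}
	}
	return false
}

func main() {
	// Selector matches the requesting node: allowed.
	fmt.Println(authorizeNodeList("node-1", "fieldSelector=spec.nodeName%3Dnode-1&watch=true"))
	// No selector at all: denied.
	fmt.Println(authorizeNodeList("node-1", "watch=true"))
}
```

Enforcing the selector rather than silently injecting it keeps the semantics explicit: an unfiltered request fails instead of returning a surprisingly narrowed result set.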
I'm assuming the motivation here is to prevent a node compromise from being equivalent to a cluster compromise, by limiting the potential damage to only resources scheduled (transitively) to the node. Given this goal, what's to stop an attacker from simply updating a pod spec to reference the resource they want access to? An obvious solution is to prevent the node from posting anything other than status updates, but this would break the implementation of static pods on the node (via mirror pods). Are there any other motivations to this proposal, or do we need to tackle these issues first?
Correct.
Pod spec is immutable. I would see nodes as being limited to:
Mirror pods are not allowed to reference secrets
Cool, good to see this is already in place. Perhaps we should extend the limits to prevent certain volume types as well?
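A rough sketch of what such a restriction could look like as an admission-style check: reject mirror pods whose volumes reference secrets. The types and function name here are hypothetical stand-ins, not the actual Kubernetes API types.

```go
package main

import "fmt"

// Volume is a pared-down stand-in for a Kubernetes volume; only the fields
// needed for this sketch are modeled.
type Volume struct {
	Name      string
	SecretRef string // non-empty if the volume is backed by a secret
}

// Pod is a pared-down stand-in for a Kubernetes pod.
type Pod struct {
	MirrorOfNode string // non-empty for mirror pods
	Volumes      []Volume
}

// validateMirrorPod rejects mirror pods that reference secrets, so a
// compromised node cannot use a static pod to pull arbitrary secrets
// through the kubelet's credentials.
func validateMirrorPod(p Pod) error {
	if p.MirrorOfNode == "" {
		return nil // not a mirror pod; no extra restriction here
	}
	for _, v := range p.Volumes {
		if v.SecretRef != "" {
			return fmt.Errorf("mirror pod may not reference secret %q", v.SecretRef)
		}
	}
	return nil
}

func main() {
	err := validateMirrorPod(Pod{
		MirrorOfNode: "node-1",
		Volumes:      []Volume{{Name: "creds", SecretRef: "db-password"}},
	})
	fmt.Println(err) // rejected: mirror pod references a secret
}
```

The same loop could be extended to other volume sources (e.g. hostPath) if further volume-type limits were added.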
Would this allow something like "watch all secrets in all namespaces" to server-side filter and return only secrets on this node? Or would it be an error, and we would have to set up N individual watches ourselves?
For lists/watches, the request for ACL-filtered secrets would likely need to be explicit (possibly something explicitly requesting "secrets for pods for node X", which is gorpy to express, or bespoke resource endpoints for nodes, which is gorpy to add for just one type of user). Adding ACL filtering to existing list/watch APIs would be... unexpected. We'd like to find a way to let nodes watch the secrets they're supposed to have access to instead of making them poll or manage lots of individual watches, but even protecting the existing GET requests based on the node->pod->secret relationship would be an improvement.
I agree. Though, since we would like to change the kubelet to actually "list+watch" secrets, configmaps, etc., I'm going to put together a proposal for that part once I'm done with 1.6 work. I will focus more on that part, but it will be closely related to this one.
So I created the initial draft of my proposal here: kubernetes/community#443
Can we dedupe this with kubernetes/enhancements#279?
closing in favor of kubernetes/enhancements#279 |
In order to obtain the resources needed to run pods, Kubelets currently have broad access to the API:
To properly secure individual nodes and limit the ability of a particular node to access the cluster, the kubelet should only be allowed to retrieve data from the apiserver for resources associated with it. This essentially means that it should only be able to get pods scheduled to it, along with any related items, such as secrets, configmaps, pvcs, etc. This filtering applies to get/list/watch calls as well.
We will not be able to use the generic policy engine for this, as it's not possible to express rules for this scenario there. Instead, we'll likely need to create a new authorizer with hard-coded rules. We can make this work by building a ref-counter-based summary of referenceable objects, maintained from a watch of a subset of resources.
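The ref-counter idea above can be sketched as follows: a small in-memory summary, updated from pod watch events, that records how many pods on each node reference each secret, and an authorize call that consults it. Names and structure are hypothetical; this is a toy of the approach, not the actual authorizer.

```go
package main

import "fmt"

// refGraph is a reference-counted summary of which secrets are used by pods
// bound to each node, maintained from a pod watch.
type refGraph struct {
	// secretRefs[node][secret] = number of pods on node referencing secret
	secretRefs map[string]map[string]int
}

func newRefGraph() *refGraph {
	return &refGraph{secretRefs: map[string]map[string]int{}}
}

// AddPod is called on pod add events from the watch.
func (g *refGraph) AddPod(node string, secrets []string) {
	if g.secretRefs[node] == nil {
		g.secretRefs[node] = map[string]int{}
	}
	for _, s := range secrets {
		g.secretRefs[node][s]++
	}
}

// DeletePod is called on pod delete events; an entry disappears when the
// last referencing pod leaves the node.
func (g *refGraph) DeletePod(node string, secrets []string) {
	if g.secretRefs[node] == nil {
		return
	}
	for _, s := range secrets {
		if g.secretRefs[node][s]--; g.secretRefs[node][s] <= 0 {
			delete(g.secretRefs[node], s)
		}
	}
}

// Authorize answers "may this node get this secret?" from the summary.
func (g *refGraph) Authorize(node, secret string) bool {
	return g.secretRefs[node][secret] > 0
}

func main() {
	g := newRefGraph()
	g.AddPod("node-1", []string{"db-password"})
	fmt.Println(g.Authorize("node-1", "db-password")) // true: a pod on node-1 uses it
	fmt.Println(g.Authorize("node-2", "db-password")) // false: not referenced on node-2
	g.DeletePod("node-1", []string{"db-password"})
	fmt.Println(g.Authorize("node-1", "db-password")) // false once the pod is gone
}
```

Reference counting (rather than a plain set) matters because several pods on the same node can reference the same secret: access must survive until the last referencing pod is deleted.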
Authorizer/Admission:
New API access patterns needed by the kubelet (only needed if we want to allow list/watch of secrets): moved to kubernetes/community#443
- {list,watch,get} {secrets,configmaps,persistentvolumeclaims,...} used by pods bound to the node