
Secret distribution in docker/k8s #2030

Closed · stp-ip opened this issue Oct 28, 2014 · 26 comments

Labels: area/api, area/app-lifecycle, area/security, kind/design, priority/awaiting-more-evidence

Comments

stp-ip (Member) commented Oct 28, 2014

One missing piece of both guidelines and k8s is a way to distribute secrets.

There are multiple ways to go about it, and I want to start the discussion not only to suggest a guideline for secret distribution, but also to make it easier to integrate into k8s. We could start by discussing the various approaches and including them in the documentation, so that these "best practices" are available and work well with k8s. As a second step we should try to find one common way of doing it and integrate it into k8s to make it as seamless as possible.
There are two aspects of secret distribution I find worth discussing. There are certainly more, as secret distribution is one of the big problems; this is not an attempt at the perfect solution yet, but a start to make it easier for people to adopt k8s, or to use it as a reference for their docker installations.

The first issue concerns PWs and credentials for services: for example database PWs, AWS credentials, etc.

The second kind of problem arises with key/file distribution: multiple secret files need to be distributed, mainly keys for de/encryption, SSL keys, VPN/SSH keys, etc.

Option 1: ENV

Pros:

  • Easy to use and implement

Cons:

  • Leakage of ENVs in various places -> seed secret needs to be shared anyway
  • Files/keys can't be distributed (correct me, if there is a way)
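
For illustration, a minimal Go sketch of option 1, assuming the orchestrator injects the credentials as environment variables; the variable names are made up:

```go
package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	// Credentials injected by the orchestrator as environment variables.
	// The variable names here are purely illustrative.
	dbPassword := os.Getenv("DB_PASSWORD")
	awsKey := os.Getenv("AWS_ACCESS_KEY_ID")

	if dbPassword == "" || awsKey == "" {
		log.Fatal("expected DB_PASSWORD and AWS_ACCESS_KEY_ID to be set")
	}

	// The values are now visible to anything that can inspect the process
	// environment (docker inspect, /proc/<pid>/environ, linked containers),
	// which is the leakage concern listed above.
	fmt.Println("credentials loaded from environment")
}
```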

Option 2: LDAP

Pros:

  • Central control of credentials and secret files
  • Could be used with ssh auth against LDAP for hosts
  • Most environments run LDAP already
  • Fine-grained control of secrets -> easily create one account per service/app

Cons:

  • Seed secret needed (username + pw for LDAP)
  • Relies on a central ldap service to retrieve runtime files (SPOF)
  • Custom logic to retrieve secrets from LDAP
  • Reliance on LDAP in all environments
  • No versioning (no rollback etc.)

Option 3a: Data Volume Container

Pros:

  • Versioned
  • Easy to distribute -> use docker registry
  • Easy to link to other containers

Cons:

  • Leakage of secrets -> worst case to public docker registry
  • Security need for intermediary services between build and runtime

Option 3b: Data Volume Container (encrypted)

Pros:

  • Versioned
  • Easy to distribute -> use docker registry
  • Easy to link to other containers
  • Throw away mentality due to encryption -> "unreadable" without seed secret
  • Can be switched with unencrypted containers in test/dev environments
  • Could be built in a separate "closed down" build environment

Cons:

  • Needs seed secret
  • Adds a dependency on decryption and seed secret distribution
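
A minimal Go sketch of the decryption step for option 3b, assuming the encrypted bundle sits at a hypothetical path inside the data volume container and the seed secret arrives via an environment variable. The thread mentions gpg; AES-GCM from the standard library is used here purely to keep the sketch short:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/sha256"
	"log"
	"os"
)

func main() {
	// Seed secret handed to the pod at deploy time (the variable name and
	// the paths below are hypothetical; the proposal leaves the mechanism open).
	seed := os.Getenv("SECRET_SEED")
	if seed == "" {
		log.Fatal("SECRET_SEED not set")
	}
	key := sha256.Sum256([]byte(seed)) // derive a 32-byte AES key from the seed

	// Encrypted payload shipped inside the data volume container.
	ciphertext, err := os.ReadFile("/secrets-encrypted/bundle.bin")
	if err != nil {
		log.Fatal(err)
	}

	block, err := aes.NewCipher(key[:])
	if err != nil {
		log.Fatal(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		log.Fatal(err)
	}
	if len(ciphertext) < gcm.NonceSize() {
		log.Fatal("ciphertext too short")
	}
	nonce, body := ciphertext[:gcm.NonceSize()], ciphertext[gcm.NonceSize():]
	plaintext, err := gcm.Open(nil, nonce, body, nil)
	if err != nil {
		log.Fatal("decryption failed: ", err)
	}

	// Write the decrypted material to a volume shared with the app container,
	// ideally tmpfs-backed so nothing touches disk.
	if err := os.WriteFile("/secrets/bundle", plaintext, 0o600); err != nil {
		log.Fatal(err)
	}
}
```

Swapping the unencrypted variant back in for test/dev, as listed above, only changes this decryption step.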

Option 4a: etcd/consul

Pros:

  • Distributed -> no SPOF
  • etcd already running with k8s

Cons:

  • No ACL yet -> will change (layer could be added from k8s for now)
  • Access to etcd of k8s could be a security risk -> separate etcd service?
  • No versioning
  • Issues with bigger files in a key/value store

Option 4b: etcd/consul (encrypted via crypt)

Pros:

  • Distributed -> no SPOF
  • Seed secret needed to have access to files/secrets stored
  • etcd already running with k8s

Cons:

  • No ACL yet -> will change (layer could be added from k8s for now)
  • Access to etcd of k8s could be a security risk -> separate etcd service?
  • No versioning
  • Issues with bigger files in a key/value store
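
A rough Go sketch of the encrypt-before-store pattern behind option 4b (the idea behind crypt): the secret is sealed client-side, so the key/value store only ever holds ciphertext. The kv interface is a hypothetical stand-in for an etcd or consul client, and symmetric AES-GCM is used for brevity, whereas crypt itself works with gpg keypairs:

```go
// Package secrets sketches client-side encryption in front of a key/value store.
package secrets

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// kv is a stand-in for whatever etcd/consul client is in use; only the
// encrypt-before-write pattern matters here.
type kv interface {
	Put(key string, value []byte) error
	Get(key string) ([]byte, error)
}

// seal encrypts a secret locally so the store never sees plaintext.
func seal(key [32]byte, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}

// StoreSecret writes the ciphertext under a conventional key prefix.
func StoreSecret(store kv, key [32]byte, name string, secret []byte) error {
	ct, err := seal(key, secret)
	if err != nil {
		return err
	}
	return store.Put(fmt.Sprintf("/secrets/%s", name), ct)
}
```

Note that the ACL and large-file concerns from the cons list remain; encryption only hides the values from whoever can read the store.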

My favourite method would be to use encrypted Data Volume Containers (Option 3b). This fits best with the modularity and reproducibility of docker and makes it easy to switch out secret distribution later on. For example, encrypted volumes could be based on Ceph volumes instead of docker containers.
I'm looking forward to your input.

stp-ip (Member, Author) commented Oct 28, 2014

On a related note some things were discussed in #1553.
cc. @bgrant0607 @bketelsen @thockin

stp-ip (Member, Author) commented Oct 28, 2014

As we partially discussed @jbeda

jbeda (Contributor) commented Oct 29, 2014

Very cool! Thanks for writing this up. I love the idea of an encrypted volume -- lots of decisions there though. Might be worthwhile to raise this with the docker folks too.

bketelsen (Contributor) commented:

Well thought out. Here are my ramblings:

From a data security perspective I need to rotate my keys pretty frequently. It would be nice if the k8s implementation took re-encrypting secrets with new keys into account, via an API call that lets me post new keys. I don't want my private keys stored in k8s; rather, I'd like to embed them in my docker containers at deploy time (which is what I do now from my CI server), or make them available via a storage container I deploy with the pod. If k8s controls both public and private keys for the secrets, people will have a hard time getting through audits. We built crypt specifically for use in Kubernetes so that we could support a model where each environment (QA/Production) had its own set of keys, and keys could be rotated easily -- by making a new deployment of our pods. Then I took it a step further by building crypt support into spf13/viper (PR pending) so that configs can come from env, config files, command line flags, or etcd/consul (both encrypted and plain text). Having that level of flexibility built into k8s would be great.

For now, I would settle for any mechanism that allows me to rotate keys. Start small.
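
A minimal Go sketch of the rotation step described above: the stored ciphertext is decrypted with the key being retired and re-encrypted under its replacement, e.g. as part of a new deployment. crypt actually uses gpg keypairs; symmetric AES-GCM is used here only to keep the sketch self-contained:

```go
// Package secrets sketches re-encrypting a stored secret under a new key.
package secrets

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

func newGCM(key [32]byte) (cipher.AEAD, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	return cipher.NewGCM(block)
}

// Rotate decrypts ciphertext with the key being retired and re-encrypts it
// under the replacement key.
func Rotate(oldKey, newKey [32]byte, ciphertext []byte) ([]byte, error) {
	oldAEAD, err := newGCM(oldKey)
	if err != nil {
		return nil, err
	}
	if len(ciphertext) < oldAEAD.NonceSize() {
		return nil, fmt.Errorf("ciphertext too short")
	}
	nonce, body := ciphertext[:oldAEAD.NonceSize()], ciphertext[oldAEAD.NonceSize():]
	plaintext, err := oldAEAD.Open(nil, nonce, body, nil)
	if err != nil {
		return nil, err
	}

	newAEAD, err := newGCM(newKey)
	if err != nil {
		return nil, err
	}
	freshNonce := make([]byte, newAEAD.NonceSize())
	if _, err := rand.Read(freshNonce); err != nil {
		return nil, err
	}
	return newAEAD.Seal(freshNonce, freshNonce, plaintext, nil), nil
}
```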

stp-ip (Member, Author) commented Oct 29, 2014

@bketelsen key rotation could be done with Option 3b (encrypted Data Volume Containers) too. Additionally, keys wouldn't be stored in k8s but in the build environment, which could be kept separate from the non-secret build environment. So key rotation could easily be supported.

With the LDAP option (Option 2), key rotation would be even easier, as there is a central place to make the change and no rebuild is necessary, merely a redeploy (which could even be avoided by watching for updates).

How do you inject the key at deploy time? I assume keys are never committed to the actual images, as one would hope.

A few unclear things with viper and especially crypt:
Is storing files reasonable with this method? As viper seems to be centered around configs, are bigger files even supported? Would one just read in a pseudo config file and spit out the actual file from the received byte string?
How about the limitations of crypt and etcd/consul? Is it mostly used for credentials and PWs or could/is it used for actual file distribution? Keys are stored separately in etcd/consul, but encrypted/decrypted with the same asymmetric keys, right?

For all of the above methods, key distribution, or rather seed secret distribution (be it a gpg key or something else), needs to be done either via k8s or via separate tooling. Either way the decryption key will be exposed to k8s. With crypt and Option 3b the encrypted data is also accessible from k8s (etcd/consul or container registry). With LDAP we would at least have a minimal separation there. But when we consider k8s compromised, we need to worry about other attack vectors anyway.

Sidenote: I'm going to write up a similar issue on dynamic config generation and the available possibilities, etc.

bgrant0607 added the area/app-lifecycle and kind/design labels Oct 29, 2014
stp-ip (Member, Author) commented Oct 30, 2014

Another way to go about it is to separate keys and credentials. For keys and file based secrets we could use the encrypted Data Volume Containers (de/encrypted via gpg) and for PWs/credentials we would use de/encrypted strings either from ENVs, etcd or consul.
The question is whether that makes sense, as ENVs could just be added to the Data Volume Containers, for example as a bash script run at runtime. It would make changing values a bit more difficult, but on the other hand we wouldn't need another tool.

My opinion would probably be to use no additional tooling and add PWs and credentials to the encrypted Data Volume Container.
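
A small Go sketch of that idea, assuming the (decrypted) data volume container ships a hypothetical /secrets/credentials.env file with one KEY=VALUE pair per line that the application loads at startup, instead of baking the values into the image or the pod definition:

```go
package main

import (
	"bufio"
	"log"
	"os"
	"strings"
)

func main() {
	// Hypothetical credentials file provided by the secrets volume.
	f, err := os.Open("/secrets/credentials.env")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		key, value, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		// Export the credential to this process only, rather than to the
		// container definition where linked containers could see it.
		os.Setenv(key, value)
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```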

stp-ip (Member, Author) commented Oct 30, 2014

Concerning the integration into k8s: I think some sort of key rotation and seed secret injection at runtime would be the first step, once we have agreed on a method for secret distribution.

erictune (Member) commented Nov 4, 2014

It's hard to evaluate these alternatives without a threat model and without stating some assumptions about how kubernetes is used.

Some questions:

  • Who deploys the kubernetes apiserver and has root on the nodes? Are we trying to protect the secrets from them (hard, and not going to be solved merely by encrypting the secrets)?
  • Who are the set of people who run pods? Are we worried about them escaping from a container and gaining root access to a machine? Are secrets and attackers in the same nodes or are they somehow constrained to be on different nodes?

etc...

stp-ip (Member, Author) commented Nov 4, 2014

@erictune I agree that it is harder without a specific threat model, but with a few generic threat models and best practices, it is easier to move forward into specifics in my opinion.
To answer your questions:

  • We should assume that the apiserver and the basic k8s nodes are controlled by a trusted party, so protecting secrets from the apiserver/infrastructure is not necessary. If we were to try that, I think it would fail: it is too complicated an issue to start with, and when you can't trust your admin, especially with the processes running on their hardware, it's already a lost battle most of the time. (If there are techniques that actually prevent leaking of data, secrets, etc. when running on insecure hardware - other than encrypted data processing methods, which would be too far-fetched for now - I'm all for it.)
  • The second question is the one my proposal is more concerned about. Preventing "bad neighbors" from intercepting or decrypting secrets is the second step I would love to address. For now this should only be prevented in the sense of: "Do not allow any pod/service access to secrets without intentional permission by the creating party". This would include ACLs for etcd/consul, if they were used, along with methods to encapsulate web traffic (especially for run requests that use seed secret parameters).

So my proposal is to start with the most common issue.
It assumes that the hardware and the k8s components can be trusted (for now) and that we are only trying to prevent bad-neighbor behaviour (for now not including illegitimately gained root access) and secret leakage through registries and code repositories in the organisation.
Example:

  • User A doesn't have access to the secrets of User B
  • MySQL doesn't have access to frontend secrets, if it isn't necessary for operation
  • Private/public registry can store images without leaking secrets
  • Code updates can be deployed (no change in secrets assumed) without fear of leaking secrets
  • Separation of concerns - deploys not involving secrets don't need decrypted data, etc.

These are a few examples to emphasize my view on the first step.

I'm eager to hear other views and will update my "resulting solution" in another comment.

stp-ip (Member, Author) commented Nov 4, 2014

My "solution" would be tied to #2068 (comment):
Within the directory structure of /custom/mount/ there would be a secret(s) subdirectory. This is either not mounted (then the base image should default to /custom/default/secret(s)) or is available and contains the DECRYPTED data.
This enables the base container to not worry about decryption and key distribution itself. It is merely provided as an in-pod "service" via a mounted volume.

This volume can then in turn be provided through a number of different services. The image providing the data could be started with a seed (gpg key etc.) and use crypt, stocker, LDAP or some custom de/encryption of an encrypted Data Volume Container.

My personal preference right now is to use something like:
Base image <- decrypt image via seed ENV <- encrypted Data Volume Container distributed through registry
Another variant, which would remove the necessity for a seed ENV, would be to use a key file mounted at /custom/kubernetes_key or /custom/key (mounted into the secrets container):
Base image <- decrypt image via mounted /custom/key <- encrypted Data Volume Container distributed through registry.

This is only a suggestion for a best practice guideline for now, but it could easily be integrated into k8s, with the help of a secrets volume of sorts similar to git-based volumes (#1945). This could then support key rotation, seed injection and ACLs, and store secrets in etcd or whatever is chosen.

I think basing secrets, configuration and data on volumes will make this best practice much more modular and easier to substitute with a homegrown solution or one from k8s later.
Additionally, people who use crypt with etcd, for example, can just run one container providing both /secret(s) and /configuration.

Keep the thoughts coming. \o/
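
To make the consumer side of this concrete, a minimal Go sketch of an application reading already-decrypted secrets from the mounted directory and falling back to the image defaults when the volume is not mounted. The paths follow the comment above; everything else is an assumption:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// secretsDir returns the mounted secrets directory if present, otherwise the
// image's built-in defaults, mirroring the fallback described above.
func secretsDir() string {
	if info, err := os.Stat("/custom/mount/secrets"); err == nil && info.IsDir() {
		return "/custom/mount/secrets"
	}
	return "/custom/default/secrets"
}

func main() {
	dir := secretsDir()
	entries, err := os.ReadDir(dir)
	if err != nil {
		log.Fatal(err)
	}
	secrets := map[string][]byte{}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Each file is one already-decrypted secret; the application never
		// needs to know how the providing container obtained it.
		data, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			log.Fatal(err)
		}
		secrets[e.Name()] = data
	}
	fmt.Printf("loaded %d secrets from %s\n", len(secrets), dir)
}
```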

erictune (Member) commented Nov 4, 2014

Okay, I think we agree that we have to trust:

  • kubernetes apiserver
  • kubelet
  • docker daemon
  • whatever docker registry / image repository one chooses to use (the official one or some other)

Given that we have to trust those components, I'd tend to favor a solution that reuses one of those components, over one that introduces a new component that needs to be locked down.
So that counts against option 2, in my estimation. It is likely that we will soon stop using etcd as the means to distribute state to kubelets, so that makes me disfavor 4a/4b too.

I'm not sure what the current best practices are for distributing secrets along with docker containers. I'm assuming it is something like 3a or 3b. So, we should definitely support that.
The "Leakage of secrets -> worst case to public docker registry" concern can be mitigated by running ones own registry, if one is paranoid.

stp-ip (Member, Author) commented Nov 4, 2014

@erictune Yeah, I agree with at least basing the initial idea on top of 3a/3b and then using this base to let users choose the tool they want to work with, be it etcd/consul/manual files/ENVs.
I tried to outline most of this idea in #2068 (comment) and #2030 (comment). If anything there is unclear, I'm happy to reiterate and clarify.

The thing is, running a private docker registry is mostly a good way to prevent leaking of such images, but it isn't always enough.
Reusing images is part of the docker ecosystem, which means you are pulling and pushing from both your private registry and the public one. Encrypting the production data images then becomes an additional failsafe against accidental pushes and gives you a lot more time to fix such issues.
Additionally, not everyone with access to the registry has access to the secrets, only the people with the decryption key, which is another plus.

smarterclayton (Contributor) commented:

Coming in late to this - may have missed it in the thread:

  • our threat model assumes individual nodes are potentially compromisable and that secrets should be partitioned to the nodes that need them
  • for 3b there are further advantages to allowing the private key to be part of an image but the password to be part of the pod definition. Depending on the 3b implementation that may very well be what you had in mind, but I should be able to rely on the trust in the integrity of the apiserver to limit the potential for a third party to access my key

smarterclayton (Contributor) commented:

(Our = Openshift)

stp-ip (Member, Author) commented Nov 6, 2014

@smarterclayton

  • I think this threat model is included in my proposal/solution building on 3b. At least to the point where root access is gained on a node. Then any encryption, key rotation etc. is worthless as network traffic, memory, processes can be intercepted etc.
  • The thought of binding the key into a special volume such as /custom/kubernetes_key is interesting. This could be integrated into k8s and remove the necessity to use ENVs for token/key/seed distribution. On the other hand it means that we now have 4 volumes per container (data, config, secrets, k8s_key). The other idea was to just use ENVs to distribute the k8s_key, but I think I would favor the private-key-as-volume idea. Will add that suggestion to my "solution" post.

Looking forward to more ideas. I'm already in the process of setting up a "production" image for these ideas. Configuration and data are done, but secrets and keys are still in the works.

stp-ip (Member, Author) commented Nov 6, 2014

This is the structure I imagined for secrets and volume standardisation in general:
(image: structure_k8s diagram)

(Disclaimer: Was also posted in #2068.)

jbeda mentioned this issue Nov 6, 2014
bgrant0607 added the priority/backlog label Dec 4, 2014
simo5 (Contributor) commented Dec 11, 2014

Sorry for coming late here,

I've seen all the proposals in this discussion and find them all lacking. The shared data volume comes close, but it looks too static to me: you have to interact with a file system to affect the secrets, and it also means secrets are available all the time through this volume.
It is also pretty inflexible if you want to share a subset of secrets between multiple pods while a bunch of others are specific to a pod or a single docker container, as it would require generating many volumes and copying secrets around (additional risk of leakage, synchronization issues). The same goes for secrets that expire pretty quickly, like OTPs.

I have been musing for a while that a local socket with a standardized protocol (protobuf or whatever you like), terminated by a proxy service on the host, could be a better model. The proxy service can host secrets itself (for development, toy deployments) or forward the containers' requests to a centralized, redundant service.

In the simplest implementation you have no chicken-and-egg problem with credentials needed to access the secrets distribution service, because the host vouches for the containers, yet you can add per-container auth/encryption on top of it if so desired.

The only thing that needs to be standardized here is the API, not a filesystem layout (much easier, and less static). It can support multiple versions at the same time (future proof), as well as additional ACL constraints based on external events and whatnot, plus better auditing if desired.

Secrets can then be fetched on the fly.
The good thing about an API is that it can be reused outside of Kubernetes (it could be implemented in docker or whatever) and even become a standard fixture of common Linux operating system distributions, if the abstraction is done right, making it a potentially widely adoptable secret distribution method.

At the same time this pushes part of the security out of the equation by delegating it to the specific implementation (local security is done via simple permissions on unix sockets and whether/how they are bind-mounted in the container).

Ideally the proxy service (when a central secret sharing service is used) would then use a TLS connection (or other equivalent secure channel) with either certificates or some bearer token/shared secret based authentication mechanism. That is up to the implementation (and potentially to be standardized too), but it does not directly affect the API exposed to the applications/containers and is therefore easier to change behind the scenes, without affecting the applications.

Dynamic rerouting based on the specific secrets being requested can also be achieved. And finally, in the long term, if someone really likes the idea, it can be extended all the way to offer a PKCS#11 compatible/compliant API so that HSMs can be used (with the proper routing).

I think a unix socket + API based design offers significant advantages over a volume based design and should be considered.

HTH,
Simo.
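
As a rough illustration of the socket idea (the wire protocol and socket path below are assumptions, not part of the proposal), a Go sketch of a container-side client fetching a secret from the host proxy over a unix socket, here simply speaking HTTP across it:

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	// Hypothetical unix socket bind-mounted into the container by the host.
	const socketPath = "/run/secrets-agent.sock"

	client := &http.Client{
		Transport: &http.Transport{
			// Route every request over the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}

	// The host end of the socket knows which container is asking, so no
	// bootstrap credential is needed ("the host vouches for the containers").
	resp, err := client.Get("http://secrets/v1/secret/db-password")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	secret, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("fetched a %d-byte secret on the fly\n", len(secret))
}
```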

stp-ip (Member, Author) commented Dec 12, 2014

The thing with your proposal, in my opinion, is that it adds yet another API standard/implementation. There are already quite a few such APIs one could use: LDAP, crypt, consul and a bunch of enterprise tools. The difference with this proposal, or actually with its latest iteration (moby/moby#9277), is that you don't care about the actual implementation as long as the app container maintainer and the volume provider agree on a directory and file layout, which can be quite dynamic. One could iterate through the /con/secret directory, for example, or grep for an AWS key, etc.
The filesystem/volume is only the interface. One can use static volumes such as Host Volumes or Data Volume Containers, but what I envisioned is more dynamic and starts with the idea of using Side Containers. These expose a volume while actually generating the needed secrets (see the sketch at the end of this comment). This could be done continuously to update secrets, but that in turn would remove versioning. As I favor versioned deploys, I would say redeploy if secrets changed, but that's just a recommendation.

Furthermore, with the final proposal to use Volumes as a Service in Kubernetes, there is a way to make most of the features you want available. A Volume as a Service could do a lot more than just be a static fs. It could and should be available per container, per pod or per service and therefore be quite tailored. Additionally, short-lived tokens, or removing secrets after a successful start, could be supported. Not to mention that these volumes should live in memory only.

So instead of introducing yet another API, which needs to be spoken to and requires tools inside the container, plain reading of files is already supported, especially considering that one could use /con/secret/ENV and just generate ENVs to be used. Moving from Host Volumes to git-based volumes to a fully integrated secret distribution method in kubernetes only requires changing the volume type, not a full reconfiguration to a new set of APIs.
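
A minimal Go sketch of such a Side Container: it obtains the secret material however it likes (crypt/etcd, LDAP, a decrypted bundle) and writes one file per secret into the shared /con/secret volume for the app container to read. The fetchSecrets contents are placeholders:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// fetchSecrets stands in for however the side container obtains the material:
// crypt/etcd, LDAP, or decrypting a shipped bundle. Values are placeholders.
func fetchSecrets() map[string][]byte {
	return map[string][]byte{
		"db-password": []byte("placeholder"),
		"tls.key":     []byte("-----BEGIN PRIVATE KEY-----\n..."),
	}
}

func main() {
	// Shared (ideally in-memory) volume that the app container mounts
	// read-only; the path follows the /con/secret convention above.
	const dir = "/con/secret"
	if err := os.MkdirAll(dir, 0o700); err != nil {
		log.Fatal(err)
	}
	for name, value := range fetchSecrets() {
		if err := os.WriteFile(filepath.Join(dir, name), value, 0o600); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("secrets written to %s; the app container can now start", dir)
}
```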

simo5 (Contributor) commented Dec 12, 2014

While you could use the file system as an API, I do not see any good reason to do that. In theory, yes, you could have dynamic content appearing and disappearing from a filesystem, but file system interfaces are not built for that; it would be a quite fragile construction with corner cases and odd/unexpected behaviors.

You can describe conventions, but they would still be something on top and separate from a real filesystem interface, so you would have to retrain developers and programs to behave differently when accessing that specific portion of the filesystem.

What is the advantage of doing that? Why should you have to build a whole filesystem driver in order to add the more dynamic use cases, instead of a single, clean and well-defined (semantics- and behavior-wise) API?

File systems also have very many layers, including caching in the page cache (unless you use O_DIRECT, which IIRC is not supported by things like FUSE), so you end up having these secrets copied in quite a few places. It is true that you must trust the host, but that doesn't mean you should increase the attack surface unnecessarily :)

When it comes to using an API, you could certainly use LDAP; a good LDAP server has all the properties you need. However, it is also a lot of work to change the behavior in some cases. For some of the things you may want to do you'd have to build plugins/controls/extended operations (I have multi-year experience doing just that in 389ds and, to a lesser extent, OpenLDAP).

If I had to choose between LDAP and a File System interface, LDAP would probably be a better idea, especially if you need search (as you mentioned with the 'grep' comment). Yet it feels like LDAP is a lot of baggage: you'll probably not use most of it, and you'd have to do a lot of configuration and ACI work (ACIs are not standard in LDAP, so you would rapidly go down the path of marrying a specific implementation) to properly protect secrets, then have to build bind plugins to be able to properly represent containers (which rapidly leads to having to dynamically create entries for each container/pod that is being launched by Kubernetes and having to replicate a lot of "ownership" knowledge in the LDAP tree, or heavily hack the ACI engine to use external information).

Note that using LDAP as storage is quite clearly a possibility, but using it as an API to expose to the containers is not necessarily a good idea.

In general it looks to me like you are trying to propose the use of existing tools to save some initial development/standardization work, but I feel these tools are a poor fit for the job (if they weren't, I bet we'd already have an existing product/project that uses one of them to solve the exact problem we are facing, and we wouldn't be having this conversation :).

The most concerning aspect is that by using these tools (filesystem/LDAP/consul) it becomes a lot more difficult to guarantee the security properties you want, as these are generic tools not built with protecting information as their first and foremost task. It becomes quite easy to misconfigure/misuse them and end up leaking secrets.

The filesystem interface is particularly prone to abuse if the application running in the container is not well built, and it may make it much easier to exfiltrate secrets by tricking the application into reading the "wrong" files. I think this alone should be a strong concern and a red flag when thinking of using the filesystem as an interface to provide secrets to applications.

For the most advanced cases (like the idea of ending up with a PKCS#11 interface), bending a filesystem or LDAP interface to that would be quite unnatural too, so I think you would end up limiting progress because the cost of adding those features would be too high to be justifiable.

stp-ip (Member, Author) commented Jan 9, 2015

Sorry for the long wait on an answer, but I got stuck with other things.

I agree that using the filesystem as an API is not perfect for dynamic content, but that is not the main advantage. In my opinion, with containers and versioned infrastructure in general (even versioning data and config), you do not want dynamic content appearing or disappearing without some sort of checkpoint (a "docker/git commit" kind of thing). These non-dynamic use cases are the focus here; fully dynamic configuration is only one edge case I was thinking about.

The main point I wanted to make was that for general supporting data/config, a common interface is essential. Mounting your (persistent) storage at /con/*{data,log,configuration} on each container is easier to work with, as the different locations can be abstracted away by the application/container builder, who most likely knows the actual implementation better and therefore has the knowledge to abstract it away. In a one-process-per-container system, a single data and configuration location is possible and, in my opinion, one of the ways to simplify deployment. Additionally it separates orchestration details (mount points) from the actual storage location used by an application/container.

Sure, conventions are only on top, but standards should not be forced upon people; they should be agreed on. I see more value in the discussion than in my specific implementation.
That being said, I am using my proposal from moby/moby#9277 on various images and it is used by others too. It noticeably simplified orchestration and deploys.

The advantage of using the filesystem is that applications can work with it now. It is easier to let them use what they know for now and therefore provide a filesystem implementation, which could be backed by an underlying API. As seen in this proposal, a lot of different tools came up and no standard API could be agreed upon. On the other hand, most of these tools provide a way to export more or less the same config files, or at least a SideContainer could provide this export without adding custom code to the application container. Separating concerns is one thing I value in the container approach.

One thing considered was using tmpfs to minimize the attack surface of secrets lying around in the system.

LDAP was just one suggested solution, and in both my view and yours it is not worth it. Additionally, file distribution over it is not optimal, and seeding the LDAP credentials needs yet another solution.

It is not about choosing existing tools over new implementations. It is more about understanding existing tools and enabling a more generalized solution for a broad usage area. The proposed tools are partly used in existing products/projects; that is one reason they were included. Additionally, I am not a fan of proposing a new standard that just ends up increasing fragmentation in the ecosystem. I wanted to propose a standard that a wide range of tools and applications can work with, and that in the same breath simplifies orchestration and usage of containers, moving the application details to the image builders and simplifying the orchestration details.

I have to disagree that using LDAP/consul or my filesystem proposal at moby/moby#9277 makes it more difficult to protect information. Right now most container users either use ENVs, which show up in their bash history and the process environment and are available to linked containers etc., or embed their credentials inside their container image, which easily gets pushed to a public registry or is accessible in their private registry without any further ACL.
Using LDAP/consul or a dedicated solution shows at least some thought about secret protection. Additionally, I cannot agree that generic tools are more insecure by default; it depends on how they are used. Leaking secrets is one important consideration, and letting users split config and secrets is the first step, which will enable moving secrets to more secure tooling later on.

When someone can make the application read the "wrong" files, one can also assume that they could inject code, rewrite API endpoints and use "wrong" ENVs. Making the application do malicious things is not a problem of the filesystem interface specifically, but a general issue to be considered.

One idea here was to make it possible to move from beginner cases to advanced cases easily: the backend implementation can be switched, but the interface stays the same (see the sketch below). Right now the filesystem is one of the most used interfaces for applications, but it lacks standardization inside docker and kubernetes. Moving to another specific implementation for advanced cases is something for the future, and I would love to see secret distribution within kubernetes that does not rely on bending tools, but that does not mean kubernetes cannot provide a common interface via the filesystem too, at least until applications can actually use the proposed API natively. Adding glue code to each application just moves the tool bending to another level, which I actually dislike even more.
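
A small Go sketch of the "stable interface, swappable backend" point; the Source interface and both implementations are illustrative, not an existing API:

```go
// Package secrets sketches a stable secret-lookup interface with pluggable backends.
package secrets

import (
	"os"
	"path/filepath"
)

// Source is the interface the application codes against; the backing store
// (a mounted volume today, something k8s-native later) can change without
// touching callers.
type Source interface {
	Get(name string) ([]byte, error)
}

// DirSource reads one file per secret from a mounted directory,
// e.g. /con/secret as proposed above.
type DirSource struct{ Dir string }

func (d DirSource) Get(name string) ([]byte, error) {
	return os.ReadFile(filepath.Join(d.Dir, name))
}

// EnvSource falls back to environment variables for simple setups.
type EnvSource struct{}

func (EnvSource) Get(name string) ([]byte, error) {
	return []byte(os.Getenv(name)), nil
}
```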

smarterclayton added a commit to smarterclayton/kubernetes that referenced this issue Feb 12, 2015
This proposed update to docs/design/security.md includes proposals
on how to ensure containers have consistent Linux security behavior
across nodes, how containers authenticate and authorize to the master
and other components, and how secret data could be distributed to
pods to allow that authentication.

References concepts from kubernetes#3910, kubernetes#2030, and kubernetes#2297 as well as upstream issues
around the Docker vault and Docker secrets.
bgrant0607 (Member) commented:

Status update @pmorie or @erictune?

Should this be closed, or should more specific issues be filed for whatever remains unresolved?

bgrant0607 added the priority/awaiting-more-evidence and area/api labels and removed the priority/backlog label Feb 28, 2015
erictune (Member) commented Mar 3, 2015

I will document secrets and then close this with a link to that doc.

erictune self-assigned this Mar 3, 2015
erictune (Member) commented:

We now have secrets, which are distributed via a volume. There is also an option to pass them in env vars. This issue is still good reading on the design space, but it doesn't need to be an open issue.

tamsky (Contributor) commented Oct 8, 2015

@erictune what is the status of secrets, and are they documented?

> I will document secrets and then close this with a link to that doc.

erictune (Member) commented Oct 8, 2015

They are implemented and documented here:
http://kubernetes.io/v1.0/docs/user-guide/secrets.html

tamsky (Contributor) commented Oct 9, 2015

Thanks!

xingzhou pushed a commit to xingzhou/kubernetes that referenced this issue Dec 15, 2016