
Proposal for new kubecfg design (kubectl) #1325

Merged
merged 2 commits into from Oct 15, 2014

Conversation

ghodss
Contributor

@ghodss ghodss commented Sep 16, 2014

_UPDATE: This proposal was reworked many times throughout this PR. I am keeping this parent message the same for historical purposes, but you can find the closest description of the final design in this comment (with the exception that inspect became describe)._

Intent

The intent of this pull request is to propose a design for the present and future of kubecfg. The conversation from #1167 was the primary input into this design. The idea is to have kubecfg solve two use cases:

  1. Provide a great user experience to view the current status of a Kubernetes cluster and to execute simple commands on the cluster.
  2. Provide a tool which will accept configuration with generic resources to submit changes to and diff against a Kubernetes cluster.

The problem with the current design is that it is too generic to effectively support (1). The proposed redesign enables custom commands per resource (e.g. kubecfg pods, kubecfg services, etc.) but generic subcommands to satisfy (2) (kubecfg submit, kubecfg diff, kubecfg reconcile, etc.).

Proposed set of subcommands

Only a subset of these have been implemented in the attached commit.

  • kubecfg2 - Parent command.
    • pods
      • list [<id>] - List pods in a column-oriented view (see Examples below). By default, the first argument passed in acts as a filter by ID.
      • get <id> - List info about a pod in a row-oriented view. Designed to show more metadata about a given pod, and over time can be enhanced to make navigating the cluster from a given pod very easy. Currently I'm adding a list of any replication controllers whose selectors match (and therefore who control) the selected pod.
      • delete <id> - Delete a pod.
      • create [-f <filename>] - In the future we can accept directories or even data from stdin.
    • rc - For replication controllers. Happy to use a different abbreviation but replicationControllers seemed too long.
      • list [<id>]
      • get <id> - RC's get command could become extremely rich. For now I've added Pods Status which queries for pods under this replication controller and shows counts for running, waiting and terminated, but in the future you could do diffs against the controller's PodTemplate and show counts for matching and non-matching pods. You could also show the first 3-4 pod ID's per category and then show <X more ...>, making debugging even easier.
      • create [-f <filename>]
      • update [-f <filename>]
      • delete <id> - I have removed the delete/stop/rm replication controller confusion in favor of just delete and resize, where delete takes an optional --force flag if replicas > 0.
      • resize <id> <replicas>
      • Removed rollingupdate <controller> <image> <time> - I've included this here for parity but I would like to propose taking it out. This is the one command that I don't think belongs in kubecfg ... it's too long-running, fraught with edge cases, and in reality would probably want to take more into consideration (like fluctuations in metrics, % of pods that come up successfully, etc.). I think this level of functionality belongs in an independent client that uses the API that can look at more components in the infrastructure than just kubernetes.
    • minions
      • list [<id>]
      • get <id>
    • services
      • list [<id>]
      • get <id>
      • create [-f <filename>]
      • update [-f <filename>]
      • delete <id>
      • Postponed resolve <id> - Just an idea, but it could give you a random IP/port combination to connect to. May be useful for quickly locating a node to test for a given service.
    • Postponed submit [<filename>...] - This command exists to satisfy the second use case above. Reconcile and submit any changes from a given set of config(s), from files or from stdin. If you had an entire directory tree of files that had configs that represented your cluster state, you could use this command to submit them all to either create or update your cluster.
    • Postponed diff [<filename>...] - A dry-run version of submit.
    • run [-p <port spec>] <image> <replicas> <controller> - Same as today.
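As a rough illustration of the tree above, here is a stdlib-only Go sketch of the resource/verb dispatch (the actual branch uses spf13/cobra; all handler bodies and names here are hypothetical placeholders):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// handler runs one subcommand (e.g. "pods list") with its remaining args.
type handler func(args []string) error

// commands maps "resource verb" to a handler, mirroring the proposed
// tree: kubecfg2 <resource> <subcommand> [args...].
var commands = map[string]handler{
	"pods list":   func(args []string) error { fmt.Println("would list pods, filter:", args); return nil },
	"pods get":    func(args []string) error { fmt.Println("would get pod", args[0]); return nil },
	"pods delete": func(args []string) error { fmt.Println("would delete pod", args[0]); return nil },
	"rc resize":   func(args []string) error { fmt.Println("would resize", args[0], "to", args[1]); return nil },
}

func dispatch(args []string) error {
	if len(args) < 2 {
		return fmt.Errorf("usage: kubecfg2 <resource> <subcommand> [args...]")
	}
	h, ok := commands[strings.Join(args[:2], " ")]
	if !ok {
		return fmt.Errorf("unknown command %q", strings.Join(args[:2], " "))
	}
	return h(args[2:])
}

func main() {
	// Demo invocation; a real CLI would pass os.Args[1:].
	if err := dispatch([]string{"pods", "list"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

cobra layers help text, flag parsing, and usage output on top of this basic shape.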

Examples

Here is what the current commit produces:

These examples are out of date - see this comment for the latest.

$ kubecfg2
kubecfg2 controls the Kubernetes cluster manager.

Find more information at https://github.com/GoogleCloudPlatform/kubernetes.

Usage:
  kubecfg2 [command]

Available Commands:
  version                   Print version of client and server
  pods                      List, inspect and modify pods
  rc                        List, inspect and modify replication controllers
  help [command]            Help about any command

 Available Flags:
  -a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
      --help=false: help for kubecfg2
  -h, --host="": Kubernetes apiserver to connect to
  -m, --match-version=false: Require server version to match client version

Use "kubecfg2 help [command]" for more information about that command.

$ kubecfg2 help pods list
List one or more pods. Pass in an ID to filter.

Usage:
  kubecfg2 pods list [<id>] [flags]

 Available Flags:
  -a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
      --help=false: help for list
  -h, --host="": Kubernetes apiserver to connect to
  -m, --match-version=false: Require server version to match client version
  -l, --selector="": Selector (label query) to filter on


Use "kubecfg2 help [command]" for more information about that command.

$ kubecfg2 pods list
ID                                      IMAGE(S)                HOST            LABELS                                  STATUS
4893ea09-3d0d-11e4-b3ad-0800279696e1    dockerfile/nginx        127.0.0.1/      replicationController=MyNginxController Running
48943371-3d0d-11e4-b3ad-0800279696e1    dockerfile/nginx        127.0.0.1/      replicationController=MyNginxController Running
$ kubecfg2 pods list 4893ea09-3d0d-11e4-b3ad-0800279696e1
ID                                      IMAGE(S)                HOST            LABELS                                  STATUS
4893ea09-3d0d-11e4-b3ad-0800279696e1    dockerfile/nginx        127.0.0.1/      replicationController=MyNginxController Running
$ kubecfg2 pods get 4893ea09-3d0d-11e4-b3ad-0800279696e1
ID:                             4893ea09-3d0d-11e4-b3ad-0800279696e1
Image(s):                       dockerfile/nginx
Host:                           127.0.0.1/
Labels:                         replicationController=MyNginxController
Status:                         Running
Replication Controllers:        MyNginxController (2/2 replicas created)
$ kubecfg2 rc list
ID                      IMAGE(S)                SELECTOR                                REPLICAS
MyNginxController       dockerfile/nginx        replicationController=MyNginxController 2
$ kubecfg2 rc get MyNginxController
ID:             MyNginxController
Image(s):       dockerfile/nginx
Selector:       replicationController=MyNginxController
Labels:         <none>
Replicas:       2 current / 2 desired
Pods Status:    2 running / 0 waiting / 0 terminated

Other notes

  • I am currently using spf13/cobra but am also considering codegangsta/cli. The latter may allow us to write out the command definitions in a cleaner way and support bash autocompletion, but the former allows persistent flags. Open to using either one.
  • Some of these commands (especially get commands that show a lot of info) may result in lots of queries to the server. There are several ways to handle this: (1) don't consider it an issue, (2) offer a "succinct mode" that tries to do as little as possible and (3) over time backport functionality that is particularly useful directly into apiserver as new API endpoints. Especially with (3), you could consider kubecfg a proving ground for new convenience functionality to see whether it's worth implementing in apiserver directly.

File layout

  • cmd/kubecfg2/kubecfg2.go - All cobra and CLI-related code remains in this file/package.
  • pkg/kubecfg2/<subcommand>.go - All logic per subcommand lives in one file per subcommand.
  • pkg/kubecfg2/kubecfg2.go - Common functions used by cmd/kubecfg2 and pkg/kubecfg2 packages.

Open Questions

  • Can we leave rollingupdate out of this version for the merge? If it's needed for demo purposes, can it be reimplemented somewhere outside of this tool?
  • @erictune pointed out that get is not the best name for above. info and details are other options but ideas here are welcome.

@jbeda
Contributor

jbeda commented Sep 16, 2014

Looks awesome.

  • For chatty commands to the API -- we should use this to shape the API. So -- option (3). Common things should be doable without tons of API calls.
  • We should think soon about machine readable output from various commands. It is super common for folks to script this.
  • Some documentation on the file formats would be useful. I assume that these are just serializations of the API payloads, but we should figure out how that maps -- especially around versions and update.
  • I like submit -- it basically says -- take this file/directory, figure out what it is and make it happen. This will grow into a rudimentary config system. This can be a can of worms and we will have to think a little here before it grows too big too fast too unstructured. Specifically:
    • Why do I have to create lots of files? Why not just a file that has subsections for various resources.
    • If I have an omnibus file, can I just select certain things out of it?
    • What about parameterization of that file.

@erictune
Member

Sam, this is exciting stuff. Comments to follow as I read through more closely.

@erictune
Member

I like your idea that kubecfg2 pods get and kubecfg2 rc get will try to join information about a pod or rc from several REST calls.

  1. Do you expect that this command will try to gather a "consistent snapshot of state", or just join information from several calls done close together in time?
  2. The name get doesn't evoke what you are trying to do. Any other thoughts?

@erictune
Member

Wondering if you had thoughts on how submit might interact with version control.
Do I commit my desired state, after peer review, to git and then use kubecfg2 to submit that to k8s?

And, if there is parameterization of config, as Joe suggested, does parameter substitution happen before commit or before pushing to k8s?

Not stuff that this PR needs to address, but would welcome your thoughts.

@ghodss
Contributor Author

ghodss commented Sep 16, 2014

We should think soon about machine readable output from various commands. It is super common for folks to script this.

I am aiming towards having all the output of the resource commands be parseable in the sed/awk style, e.g. with consistent and parseable whitespace, but in general I believe the kubecfg resource commands (pods, rc, etc) should primarily optimize for human readability and convenience. The REST API is very usable and scriptable with curl, but not very friendly to people, so kubecfg should fill the niche of a person trying to interact with a k8s cluster. However, commands like submit and services resolve can have more of a focus of easily parseable output, since those are likely to be scripted.

Some documentation on the file formats would be useful. I assume that these are just serializations of the API payloads, but we should figure out how that maps -- especially around versions and update.

I think today kubecfg can parse and export both JSON and YAML, so I'd be fine to keep that. Did you mean something else?

I like submit -- it basically says -- take this file/directory, figure out what it is and make it happen. This will grow into a rudimentary config system. This can be a can of worms and we will have to think a little here before it grows too big too fast too unstructured. Specifically:

  • Why do I have to create lots of files? Why not just a file that has subsections for various resources.
  • If I have an omnibus file, can I just select certain things out of it?
  • What about parameterization of that file.

I think we can make it really flexible - you can use one file, many files, recursive directories, do selective updates, etc. Mostly will come down to which style we want to prioritize implementing first.

Glad to see that I'm heading in the right direction!

@ghodss
Contributor Author

ghodss commented Sep 16, 2014

I like your idea that kubecfg2 pods get and kubecfg2 rc get will try to join information about a pod or rc from several REST calls.

Do you expect that this command will try to gather a "consistent snapshot of state", or just join information from several calls done close together in time?

Probably the latter. If anything looks off or doesn't make sense I would expect the user to just run the command again, just as if they were navigating around the REST API.

The name get doesn't evoke what you are trying to do. Any other thoughts?

info or details also work for me. Maybe info because it's shorter.

@ghodss
Contributor Author

ghodss commented Sep 16, 2014

Wondering if you had thoughts on how submit might interact with version control.

Do I commit my desired state, after peer review, to git and then use kubecfg2 to submit that to k8s?

And, if there is parameterization of config, as Joe suggested, does parameter substitution happen before commit or before pushing to k8s?

Not stuff that this PR needs to address, but would welcome your thoughts.

I think we could support many different workflows. Some people may prefer to have a git repo with all their config and have a git hook on receive which runs kubecfg automatically to synchronize the config state with the cluster. Other people may prefer to run all the synchronizing commands manually but keep a record of all changes in a repo somewhere. I think that if we provide the primitive of "take this file and sync it up with the cluster," people can create their own ways of using it that fit them best. For parameter substitution, we can see what needs there are for templating and see if that's best done in kubecfg or directly in the REST API itself.

@jbeda
Contributor

jbeda commented Sep 16, 2014

wrt machine readable output: while it is possible to hit the k8s API directly from curl, I'd still recommend we (probably via an option) provide easily parseable output:

  • Auth gets tricky from curl. If this (or another?) utility does nothing more than be curl+auth, that would be super useful.
  • Many (most?) users will learn and know the system through tools like this and won't want to learn anything about the API. They'll want to take a set of manual actions and automate them. We shouldn't force them to climb a complexity cliff to start doing automation.

We can add it later, of course, but I'd love to keep the awk/sed/grep/cut to a minimum.

@ghodss
Contributor Author

ghodss commented Sep 16, 2014

@jbeda Fair enough, that makes sense. We can plan to add something like a --parseable flag later.

@lavalamp
Member

If people are running their clusters with duct tape like sed and awk then we're doing something wrong. kubecfg has a nice -template parameter, we should retain that.
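For reference, the -template style of output lavalamp mentions boils down to Go's text/template; a minimal sketch (the render helper and pod struct are invented for illustration, not kubecfg's actual code):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// pod mirrors a few fields of the API Pod object for illustration.
type pod struct {
	ID     string
	Image  string
	Status string
}

// render applies a user-supplied Go template (what a -template flag
// would carry) to one object and returns the formatted string.
func render(tmpl string, v interface{}) string {
	t := template.Must(template.New("out").Parse(tmpl))
	var b strings.Builder
	if err := t.Execute(&b, v); err != nil {
		return err.Error()
	}
	return b.String()
}

func main() {
	p := pod{ID: "4893ea09", Image: "dockerfile/nginx", Status: "Running"}
	fmt.Print(render("{{.ID}}\t{{.Status}}\n", p))
}
```

Because the template is chosen by the caller, scripts can pull exactly the fields they need without awk/sed post-processing.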

},
}

podsListCmd := &cobra.Command{
Member

cobra.Command looks pretty sweet, but I think we can generate these default subcommands programmatically given a few strings?

Contributor Author

I presume you are referring to the List/Get/Create/Delete subcommands, but each resource (Pod/RC/Minion/Service) may want to customize some subset of the subcommand fields (Use/Short/Long/Run/Flags), and if you parameterize all of them I think you end up just as well off writing out the full subcommands.

Member

I don't know for sure, but I suspect all the fancy custom things we might want to do ought to be part of config, which has yet to be written. Not duplicating code seems to be my pet issue :)
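The generation being discussed might look roughly like this sketch, where generic verbs are stamped out per resource and overridden only where a resource needs something richer (all names here are hypothetical):

```go
package main

import "fmt"

// action performs one REST-style verb against a resource; a real
// implementation would call the apiserver instead of returning a string.
type action func(resource string, args []string) string

// defaultActions are the generic verbs every resource gets for free.
var defaultActions = map[string]action{
	"list":   func(r string, args []string) string { return fmt.Sprintf("GET /%s", r) },
	"get":    func(r string, args []string) string { return fmt.Sprintf("GET /%s/%s", r, args[0]) },
	"delete": func(r string, args []string) string { return fmt.Sprintf("DELETE /%s/%s", r, args[0]) },
}

// commandsFor generates the default subcommands for a resource, then
// applies any per-resource overrides (e.g. a richer "get" for rc).
func commandsFor(resource string, overrides map[string]action) map[string]func(args []string) string {
	cmds := map[string]func(args []string) string{}
	for verb, act := range defaultActions {
		act := act // capture per iteration
		cmds[verb] = func(args []string) string { return act(resource, args) }
	}
	for verb, act := range overrides {
		act := act
		cmds[verb] = func(args []string) string { return act(resource, args) }
	}
	return cmds
}

func main() {
	pods := commandsFor("pods", nil)
	fmt.Println(pods["get"]([]string{"pod-1"})) // GET /pods/pod-1
}
```

This also leans toward lavalamp's later point about plugins: resources not compiled in could be registered at runtime with only their name and any custom verbs.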

@lavalamp
Member

Overall comments: the cobra command stuff looks pretty awesome, +1 to that. I do not like the List, Get, etc functions in pkg/kubecfg2, though.

@brendandburns
Contributor

I'm not super excited about 'rc' as the replica controller abbreviation. It's hard to parse for a first-time user. Perhaps we could support both "short" ('rc', 'pd', ...) and long ('replicaController', 'pod', ...) names?

@brendandburns
Contributor

wrt to 'get' vs 'info', I think I'd prefer 'get' and other restful verbs, because it will help people get used to the restful API, but I can be convinced otherwise.

I'm happy to pull rollingupdate out into its own binary.

@thockin
Member

thockin commented Sep 17, 2014

I'll make comments on the text before I go look at code.

On Mon, Sep 15, 2014 at 5:48 PM, Sam Ghods notifications@github.com wrote:

Intent

The intent of this pull request is to propose a design for the present and
future of kubecfg. The conversation from #1167
#1167 was the
primary input into this design. The idea is to have kubecfg solve two use
cases:

  1. Provide a great user experience to view the current status of a
    Kubernetes cluster and to execute simple commands on the cluster.
  2. Provide a tool which will accept configuration with generic
    resources to submit changes to and diff against a Kubernetes cluster.

The problem with the current design is that it is too generic to
effectively support (1). The proposed redesign enables custom commands per
resource (e.g. kubecfg pods, kubecfg services, etc.) but generic
subcommands to satisfy (2) (kubecfg submit, kubecfg diff, kubecfg
reconcile, etc.).
Proposed set of subcommands

Note that only a few have been implemented so far in the attached commit.

  • kubecfg2 - Parent command.
    • pods

Would be nice if "pod" was a magic alias to "pods" so that both work.

      • list [<id>] - List pods in a column-oriented view (see Examples
        below). By default, the first argument passed in acts as a filter by ID.

Regarding output: Something that has served me well in the past with CLIs
was to make a set of top-level flags (or a single flag) that sets the
output mode for commands. Internally, the app collects data to be printed
in a slightly more abstract structure, and when it comes time to print,
apply the global formatting.

For concrete example, assume 3 output modes: table, key-value, json"

$ clitool -t something
ID | Name | Value
123 | CharlieBrown | yellow
456 | Linus | blue

$ clitool -k something
id="123" name="CharlieBrown" value="yellow"
id="456" name="Linus" value="blue"

$ clitool -j something
[
{ "id": "123", "name": "CharlieBrown", "value": "yellow" },
{ "id": "456", "name": "Linus", "value": "blue"}
]

Internally the data can all be stored as key-value, but the varying output
lets people use the data in different ways. An idea.

      • get <id> - List info about a pod in a row-oriented view.
        Designed to show more metadata about a given pod, and over time can be
        enhanced to make navigating the cluster from a given pod very easy.
        Currently I'm adding a list of any replication controllers whose selectors
        match (and therefore who control) the selected pod.

I think we should strive for mostly well-defined, limited commands that
can be composed. Be wary of jamming everything into a single command.

      • delete <id> - Delete a pod.
      • create [-f <filename>] - In theory this can come from stdin as well.

Please handle file, stdin, and if possible an inline json string (as the
next arg).

What about a generic "select" operation that evaluates a selector? Like:

$ kubecfg -k pods select "user in (thockin)"
uid="12093-1022-123893-1231-3123123" name="tims-nifty"
uid="64564-4564-456466-4634-3453453" name="testpod3"

This pattern could apply to all REST resource types.
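A client-side sketch of evaluating the single "key in (values)" clause from the example (real label selectors support more operators; matchesIn is an invented helper):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesIn evaluates one "key in (v1, v2, ...)" clause against a label
// map. Only the "in" operator is handled here.
func matchesIn(selector string, labels map[string]string) bool {
	open := strings.Index(selector, "(")
	end := strings.LastIndex(selector, ")")
	if open < 0 || end < open {
		return false
	}
	// "user in (" -> key "user"
	key := strings.TrimSpace(strings.TrimSuffix(strings.TrimSpace(selector[:open]), "in"))
	for _, v := range strings.Split(selector[open+1:end], ",") {
		if labels[key] == strings.TrimSpace(v) {
			return true
		}
	}
	return false
}

func main() {
	pod := map[string]string{"user": "thockin"}
	fmt.Println(matchesIn("user in (thockin)", pod)) // true
}
```

In practice the filtering would happen server-side via the selector query parameter, but the semantics are the same.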

      • rc - For replication controllers. Happy to use a different
        abbreviation but replicationControllers seemed too long.

If it is too long, it is too long everywhere - let's not abbreviate in some
places.

      • list [<id>]
      • get <id> - RC's get command could become extremely rich. For
        now I've added Pods Status which queries for pods under this replication
        controller and shows counts for running, waiting and terminated, but in the
        future you could do diffs against the controller's PodTemplate and show
        counts for matching and non-matching pods. You could also show the first
        3-4 pod ID's per category and then show <X more ...>, making debugging even easier.
      • delete <id> - Straight REST delete without resizing first, as
        per current behavior.
      • create [-f <filename>]
      • update [-f <filename>]
      • set - A slightly more generic version of
        resize.
      • stop - First sets replicationController to 0, then
        deletes it, like the current kubecfg behavior.
      • rollingupdate <controller> <image> <time> - I've included this
        for parity but this is the one command that I don't think belongs in
        kubecfg ... it's too long-running, fraught with edge cases, and in reality
        would probably want to take more into consideration (like fluctuations in
        metrics, % of pods that come up successfully, etc.). I think this level of
        functionality belongs in an independent client that uses the API or kubecfg
        that can look at more components in the infrastructure than just kubernetes.

I agree - I would be OK to lose this into a higher-level tool, or even just
a shell script that wraps kubecfg

      • minions
      • list [<id>]
      • get <id>
    • services
      • list [<id>]
      • get <id>
      • resolve <id> - Just an idea, but it could give you a random
        IP/port combination to connect to. May be useful for quickly locating a
        node to test for a given service.

I would call resolve 'endpoints' and just return the endpoints of a
service - let the user pass it to "| head -1" or something.

What about creating and updating a service?

      • submit [<filename>...] - This command exists to satisfy the
        second use case above. Reconcile and submit any changes from a given set of
        config(s), from files or from stdin. If you had an entire directory tree of
        files that had configs that represented your cluster state, you could use
        this command to submit them all to either create or update your cluster.
    • diff [<filename>...] - A dry-run version of submit.

Examples

Here is what the current commit produces:

$ kubecfg2
kubecfg2 controls the Kubernetes cluster manager.

Find more information at https://github.com/GoogleCloudPlatform/kubernetes.

Usage:
kubecfg2 [command]

Available Commands:
version Print version of client and server
pods List, inspect and modify pods
rc List, inspect and modify replication controllers
help [command] Help about any command

Available Flags:
-a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
--help=false: help for kubecfg2
-h, --host="": Kubernetes apiserver to connect to
-m, --match-version=false: Require server version to match client version

Use "kubecfg2 help [command]" for more information about that command.

Will you cache things like KUBERNETES_MASTER, for example, so I don't have
to set them?

$ kubecfg2 help pods list
List one or more pods. Pass in an ID to filter.

Usage:
kubecfg2 pods list [<id>] [flags]

Available Flags:
-a, --auth="/Users/sam/.kubernetes_auth": Path to the auth info file. If missing, prompt the user. Only used if doing https.
--help=false: help for list
-h, --host="": Kubernetes apiserver to connect to
-m, --match-version=false: Require server version to match client version
-l, --selector="": Selector (label query) to filter on

How does flag parsing work? Can I say

kubecfg -a="/tmp/foo" pods list -l "foo in (bar)" or do flags have to come
at the end? I think that doing what the user intuits around flags is
something that can make or break a CLI experience.

Use "kubecfg2 help [command]" for more information about that command.

$ kubecfg2 pods list
ID IMAGE(S) HOST LABELS STATUS
4893ea09-3d0d-11e4-b3ad-0800279696e1 dockerfile/nginx 127.0.0.1/ replicationController=MyNginxController Running
48943371-3d0d-11e4-b3ad-0800279696e1 dockerfile/nginx 127.0.0.1/ replicationController=MyNginxController Running

$ kubecfg2 pods list 4893ea09-3d0d-11e4-b3ad-0800279696e1
ID IMAGE(S) HOST LABELS STATUS
4893ea09-3d0d-11e4-b3ad-0800279696e1 dockerfile/nginx 127.0.0.1/ replicationController=MyNginxController Running

$ kubecfg2 pods get 4893ea09-3d0d-11e4-b3ad-0800279696e1
ID: 4893ea09-3d0d-11e4-b3ad-0800279696e1
Image(s): dockerfile/nginx
Host: 127.0.0.1/
Labels: replicationController=MyNginxController
Status: Running
Replication Controllers: MyNginxController (2/2 replicas created)

$ kubecfg2 rc list
ID IMAGE(S) SELECTOR REPLICAS
MyNginxController dockerfile/nginx replicationController=MyNginxController 2

$ kubecfg2 rc get MyNginxController
ID: MyNginxController
Image(s): dockerfile/nginx
Selector: replicationController=MyNginxController
Labels:
Replicas: 2 current / 2 desired
Pods Status: 2 running / 0 waiting / 0 terminated

Other notes

  • I am currently using spf13/cobra https://github.com/spf13/cobra
    but am also considering codegangsta/cli
    https://github.com/codegangsta/cli. The latter may allow us to write
    out the command definitions in a cleaner way and support bash
    autocompletion, but the former allows persistent flags. Open to using
    either one.
  • Some of these commands (especially get commands that show a lot of
    info) may result in lots of queries to the server. There are several ways
    to handle this: (1) don't consider it an issue, (2) offer a "succinct mode"
    that tries to do as little as possible and (3) over time backport
    functionality that is particularly useful directly into apiserver as new
    API endpoints. Especially with (3), you could consider kubecfg a proving
    ground for new convenience functionality to see whether it's worth
    implementing in apiserver directly.

File layout

  • cmd/kubecfg2/kubecfg2.go - All cobra and CLI-related code remains in
    this file/package.
  • pkg/kubecfg2/<subcommand>.go - All logic per subcommand lives in one file per
    subcommand.
  • pkg/kubecfg2/kubecfg2.go - Common functions used by cmd/kubecfg2 and
    pkg/kubecfg2 packages.

Commit Summary

  • Initial commit of kubecfg rewrite to support more customized commands

out = new(tabwriter.Writer)
out.Init(os.Stdout, 0, 8, 1, '\t', 0)

kubecfg2Cmd := &cobra.Command{
Member

nit: just call this "cmds" or "cmdTree" :)

Contributor Author

FTR it will look less ugly when there isn't a 2 in there... but I changed it to cmds anyways. :)

@lavalamp
Member

One additional argument for generating the rest interaction code programmatically: I want to add support for k8s plugins. Therefore, kubecfg needs to be able to deal with resource types that weren't compiled into it.

@ghodss
Contributor Author

ghodss commented Sep 19, 2014

A lot of really good points have been brought up in this thread. I want to specifically try and resolve one of them: how kubecfg reconciles with the REST data model.

(I am only talking about read operations here (like listing, querying and filtering), not updating config - I think config updates and diffs should match 1:1 with the REST API data model.)

If I try and look at the raw JSON dump of a Pod (using the latest v1beta3 spec), I get something like this:

{
    "apiVersion": "v1beta3",
    "kind": "Pod",
    "metadata": {
        "creationTimestamp": null,
        "labels": {
            "label1": "value1",
            "label2": "value2"
        },
        "name": "pod-1",
        "resourceVersion": "456",
        "uid": "123"
    },
    "spec": {
        "containers": [
            {
                "image": "dockerfile/nginx",
                "name": "nginx"
            }
        ],
        "restartpolicy": {
            "always": {}
        },
        "volumes": null
    },
    "status": {
        "host": "minion-1",
        "hostIP": "10.10.10.10",
        "podIP": "10.10.10.11",
        "status": "Running"
    }
}

OTOH, if I wanted to do something focused on an easy-to-read output for kubecfg pods get <id>, I would like to print out something like this:

Kind:                   Pod
Labels:                 label1=value1, label2=value2
Name:                   pod-1
Container Images:       dockerfile/nginx
Restart Policy:         Always
Status:                 Running
Replication Controller: MyNginxController

Basically, to increase readability and usability, I’m doing a number of things to the original parameters:

  1. I’m shuffling them around (e.g. moving Name to be a first-level key).
  2. I’m converting groups of maps or structs into strings (e.g. Labels or RestartPolicy).
  3. I’m creating new keys by pulling in data from other REST calls (to get "Replication Controller” I query for any controllers that have selectors that match my labels).

But all told, what I’m really doing is creating a new Pod data model, which we can call KubecfgPod for convenience, that has all this information in a more user-friendly format.

Now, if all I’m doing is printing this out, this is fine. I could print out the above in the key-value textual output I have above, or I can print it in YAML or JSON, no big deal.

But if I want kubecfg to become more advanced with querying and filtering (as has been mentioned by @thockin, @bgrant0607 and @smarterclayton) or even more tightly integrated into config, this becomes problematic. What should people refer to, KubecfgPod fields or Pod fields? If they refer to KubecfgPod fields, we would have to publish it and evolve it side by side with Pod fields, and force people to potentially learn two conflicting models, making config management confusing. If we go with Pod fields, then what they see printed out and how they are querying are at odds.

The question is, how do we reconcile ease of use with the kubecfg tool compared to what the API is actually outputting and how config is structured?
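One concrete way to frame the question: the view model is a pure function of the API object plus extra lookups. A sketch (apiPod, displayPod, and toDisplay are invented names; the real schema has many more fields):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// apiPod holds the REST Pod fields used below (a small subset of the
// real v1beta3 schema).
type apiPod struct {
	Name   string
	Labels map[string]string
	Images []string
	Status string
}

// displayPod is the kubecfg-side view model ("KubecfgPod"): flattened,
// stringified, and joined with data from other calls.
type displayPod struct {
	Name       string
	Labels     string // label map collapsed to "k=v, k=v"
	Images     string
	Status     string
	Controller string // filled in from a separate RC lookup
}

func toDisplay(p apiPod, controller string) displayPod {
	pairs := make([]string, 0, len(p.Labels))
	for k, v := range p.Labels {
		pairs = append(pairs, k+"="+v)
	}
	sort.Strings(pairs) // stable output for maps
	return displayPod{
		Name:       p.Name,
		Labels:     strings.Join(pairs, ", "),
		Images:     strings.Join(p.Images, ", "),
		Status:     p.Status,
		Controller: controller,
	}
}

func main() {
	p := apiPod{
		Name:   "pod-1",
		Labels: map[string]string{"label1": "value1", "label2": "value2"},
		Images: []string{"dockerfile/nginx"},
		Status: "Running",
	}
	fmt.Println("Labels:", toDisplay(p, "MyNginxController").Labels)
}
```

If querying and filtering operate on the apiPod side and only rendering touches displayPod, users would never need to learn the second model, which is one possible answer to the conflict described above.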

@thockin
Member

thockin commented Sep 19, 2014

Ease of use is choosing the 5 or 6 most important fields and showing those.

You can't be full featured and easy to use.

@fabianofranz
Contributor

@ghodss This is a very nice proposal and thread. In OpenShift we are in the process of writing the very first skeleton proposals for our end-user CLI[1] and we would like to stay in conformance with the design and technology choices of kubecfg.

We are also considering codegangsta/cli for cleaner command definitions, specifically the clear separation of arguments and flags, default values and so on. It does appear to support global flags through func (c *Context) GlobalBool(name string) bool and similar, although I didn't test it. spf13/cobra, on the other hand, has a very nice sample project that serves as a reference implementation: spf13/hugo.

One feature I didn't find in any of them is flags that accept a list of predefined values, for example something like: [--format=raw|json|yaml].
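Neither library ships an enum flag out of the box, but both build on a Value interface (the stdlib flag.Value; pflag's is the same plus a Type method), so a restricted-values flag can be approximated with a small custom type. A sketch using only the standard flag package; the enumValue name is mine, not from either library:

```go
package main

import (
	"flag"
	"fmt"
)

// enumValue restricts a flag to a fixed set of allowed strings,
// approximating a --format=raw|json|yaml style option.
type enumValue struct {
	allowed []string
	value   string
}

func (e *enumValue) String() string { return e.value }

// Set rejects any value outside the allowed list, so validation
// happens during flag parsing rather than in command logic.
func (e *enumValue) Set(s string) error {
	for _, a := range e.allowed {
		if s == a {
			e.value = s
			return nil
		}
	}
	return fmt.Errorf("must be one of %v, got %q", e.allowed, s)
}

func main() {
	format := &enumValue{allowed: []string{"raw", "json", "yaml"}, value: "raw"}
	flag.Var(format, "format", "output format: raw|json|yaml")
	flag.Parse()
	fmt.Println("format:", format.value)
}
```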

For terminal colors I'm considering fatih/color which allows you to integrate nicely with fmt, like: fmt.Printf("this %s rocks!\n", green("package")).

From @thockin's comments, the idea of output formats looks really powerful, especially for scripting.

[1] openshift/origin#112
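fatih/color's Sprint-style helpers are essentially closures over ANSI escape sequences, which is what makes the fmt integration above work. A rough stdlib-only sketch of the same pattern (colorize is a hand-rolled stand-in, not the library's API):

```go
package main

import "fmt"

// colorize returns a function that wraps its arguments in an ANSI
// color escape sequence; fatih/color's SprintFunc helpers produce
// closures of roughly this shape.
func colorize(code string) func(a ...interface{}) string {
	return func(a ...interface{}) string {
		return "\x1b[" + code + "m" + fmt.Sprint(a...) + "\x1b[0m"
	}
}

func main() {
	green := colorize("32") // 32 is the ANSI code for green foreground
	fmt.Printf("this %s rocks!\n", green("package"))
}
```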

@smarterclayton
Contributor

I thought about --master but I don't know if that's the right word for an end-user command. --server seems more appropriate to me.

----- Original Message -----

Quick note - in the new version of cobra, -h is reserved for help (which I
think is the right behavior). Unfortunately this conflicts with our -h host
param, so I changed it to --server|-s. Happy to take other naming
suggestions as long as they aren't -h.



@ghodss
Contributor Author

ghodss commented Oct 9, 2014

Unless anyone disagrees, I am following @thockin's plan but without adding back run, stop, resize or rollingupdate (as per @bgrant0607's comments).

  1. resolve comments and add TODOs - DONE
  2. rename to kubectl (it's a better name anyway) - DONE
  3. commit - PENDING
  4. convert as many examples as possible to kubectl
  5. iterate until examples and docs are all converted
  6. announce removal of kubecfg
  7. remove kubecfg

I anticipate I may have missed something in how to package a brand new binary (kubectl), so I would appreciate comments on that. But otherwise this should be pretty close to merge.

@ghodss ghodss changed the title Proposal for new kubecfg design (kubecfg2) Proposal for new kubecfg design (kubecfg2/kubectl) Oct 10, 2014
@ghodss ghodss changed the title Proposal for new kubecfg design (kubecfg2/kubectl) Proposal for new kubecfg design (kubectl) Oct 10, 2014
	return nil, fmt.Errorf("Minion %s not found", id)
}

// Get all replication controllers whose selectors would would match a given
Contributor

nit: double would

Contributor Author

Thanks! Done

@thockin
Member

thockin commented Oct 13, 2014

last comment - we've had a lot of "good idea, do it later" stuff - are you keeping track of it all? Can you file issues? We can make a new category for CLI.

@ghodss
Contributor Author

ghodss commented Oct 14, 2014

Yes - once this is merged, I will go through the entire thread and create issues for anything relevant and outstanding.

@ghodss
Contributor Author

ghodss commented Oct 15, 2014

Okay, I think I've addressed all current comments.


func NewCmdDelete(out io.Writer) *cobra.Command {
	cmd := &cobra.Command{
		Use: "delete ([-f filename] | (<resource> <id>))",
Member

sub "kind" for "resource" - since "kind" is the JSON field? And if so, same with other commands.

@thockin
Member

thockin commented Oct 15, 2014

LGTM. Fire in the hole!

thockin added a commit that referenced this pull request Oct 15, 2014
Proposal for new kubecfg design (kubectl)
@thockin thockin merged commit c88537b into kubernetes:master Oct 15, 2014
@j3ffml j3ffml mentioned this pull request Dec 9, 2014
soltysh pushed a commit to soltysh/kubernetes that referenced this pull request Aug 23, 2022
…-release-4.9

[release-4.9] Bug 2106655: UPSTREAM: 109103: cpu/memory manager containerMap memory leak