
Add simple online signer store (wip) #427

Closed

Conversation

@lukpueh (Collaborator, Author) commented Dec 4, 2023

Let's use this PR to discuss the comments I left in #419 (comment) and inline (see FIXME in the diff view).

lukpueh mentioned this pull request Dec 4, 2023

@MVrachev (Member) left a comment

I will add some questions and enumerate them so you can address them:

  1. Can you describe why this is a better implementation than what we currently do - one child class per signer?
  2. The next important question is what it will take to use this new approach to add support for a new signer, as in my case for AWSSigner.
     What I see is that I will need to add a section in __init__ specific to AWSSigner, where I will need to import my AWSKey, import the signer, use it, and make sure it's working.
  3. How do we make sure that the private keys we have loaded through from_priv_key_uri work with the public key that has been used to load that private key?
     I ask as I don't see such code in securesystemslib.

@lukpueh (Collaborator, Author) commented Dec 5, 2023

> I will add some questions and enumerate them so you can address them:

Great questions!

> 1. Can you describe why this is a better implementation than what we currently do - one child class per signer?

We currently have a redundant abstraction layer over individual signer implementations. Why implement a LocalKeyVault class, if securesystemslib already gives you a CryptoSigner class?

Moreover, the IKeyVault interface is not as powerful as Signer. The latter gives you a method to transparently load any signer based on a configured URI. This means you don't have to deal with specific signer implementations in code at all.
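
To illustrate, a minimal sketch of that transparent loading with the securesystemslib Signer API (the URI and `public_key` are placeholder values; the key would be loaded elsewhere, e.g. from root metadata):

```python
from securesystemslib.signer import Signer

# `public_key` is a securesystemslib Key object loaded elsewhere, e.g. from
# root metadata. The URI scheme ("awskms" here; "gcpkms", "file", etc. work
# the same way) transparently selects the concrete Signer implementation.
signer = Signer.from_priv_key_uri("awskms:<aws keyid>", public_key)
signature = signer.sign(b"payload")
```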

Maybe the advantage becomes more apparent below:

> 2. The next important question is what it will take to use this new approach to add support for a new signer, as in my case for AWSSigner.
>    What I see is that I will need to add a section in __init__ specific to AWSSigner, where I will need to import my AWSKey, import the signer, use it, and make sure it's working.

The work we currently do in __init__ (and _parse_local) is just a bridge to convert the existing config to a URI, which is compatible with the Signer API.

If we'd directly configure <keyid>=<signer uri> pairs, SignerStore.get would just work for any signer implementation known to the securesystemslib Signer API, without prior case handling.
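
A hypothetical sketch of such a generic SignerStore (class layout and attribute names are illustrative, not the actual implementation):

```python
from securesystemslib.signer import Key, Signer

class SignerStore:
    def __init__(self, uris: dict[str, str]):
        self._uris = uris  # configured <keyid> -> <signer uri> pairs
        self._signers: dict[str, Signer] = {}  # loaded signer cache

    def get(self, key: Key) -> Signer:
        # Lazily load and cache: the URI scheme selects the implementation,
        # so no signer-specific case handling is needed here.
        if key.keyid not in self._signers:
            self._signers[key.keyid] = Signer.from_priv_key_uri(
                self._uris[key.keyid], key
            )
        return self._signers[key.keyid]
```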

For AWS this means:

  1. user writes <local keyid>=awskms:<aws keyid> and aws envvars to online key signer config
  2. worker exposes aws envvars in signer environment: this could be done in SignerStore.__init__ but also independently
  3. SignerStore caches configured <local keyid>=awskms:<aws keyid> pairs on init: if all signers are configured that way, parsing becomes generic
  4. caveat: in my current implementation, we also create signer-class-specific secrets handlers. IMO online signing shouldn't need a secrets handler at all (but we can discuss this separately)
  5. Finally, SignerStore.get transparently returns a specific signer for the passed key (keyid).
> 3. How do we make sure that the private keys we have loaded through from_priv_key_uri work with the public key that has been used to load that private key?
>    I ask as I don't see such code in securesystemslib.

The most reliable way to see if they belong together is to create a signature and verify it, which we'll do anyway when bootstrapping/updating metadata. Otherwise, it should be okay to just rely on the mapping by keyid.
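
A minimal sketch of that round-trip check, reusing the illustrative `signer` and `public_key` from above:

```python
from securesystemslib.exceptions import UnverifiedSignatureError

sig = signer.sign(b"test payload")
try:
    public_key.verify_signature(sig, b"test payload")
except UnverifiedSignatureError:
    print("private key does not belong to this public key")
```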

@kairoaraujo (Member) commented

I understand that the significant change here is that we no longer need to create the IKeyVault Service.

The background here, @MVrachev, is that we started with securesystemslib (sslib) in RSTUF when it was on version 0.23.0.

At that time, sslib supported only files for signing, so the idea was to use it only for the IKeyVault service LocalKeyVault, and I assumed that we should create our own signers for HSM, cloud services, etc.

sslib has greatly improved and now supports all these signers; we can simplify our code and rely on securesystemslib.
For any future implementation, such as HashiCorp Vault, we should contribute to sslib.

In RSTUF, we need to keep the user experience of configuring the containers easy: implementing the environment variables (RSTUF_*), supporting secrets, custom configurations, etc.

@lukpueh (Collaborator, Author) commented Dec 6, 2023

> In RSTUF, we need to keep the user experience of configuring the containers easy: implementing the environment variables (RSTUF_*), supporting secrets, custom configurations, etc.

Is this a response to something I said? Regardless, I have some comments... :)

Re: container config UX...

Agreed, this should be super easy. Asking the user to come up with config like <long hex string>=<custom uri format> is not great. I suggested automating this. E.g. when the user adds the public key to root metadata via the RSTUF admin CLI, we can ask them to provide the necessary information for the tool to construct the URI, e.g. "Enter the path to the online private key in the container:". Or, the other way around, we tell them where to write the key to, e.g. "Make sure the key is available at /run/secrets/KEYID in the container!".
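
An illustrative sketch of that prompt flow (variable names and prompt text are assumptions; the URI uses securesystemslib's CryptoSigner "file:" scheme for an unencrypted key):

```python
# Hypothetical CLI step: build the signer URI from user input.
keyid = "<long hex string>"  # keyid of the online public key added above
path = input("Enter the path to the online private key in the container: ")
uri = f"file:{path}?encrypted=false"
print(f"Add this line to the worker config: {keyid}={uri}")
```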

Then the CLI could print the constructed <keyid>=<uri> config entry and instruct the user to include it in the worker config. But this is still not great. The user already has to configure the key itself; also having to configure its location does not seem user friendly.

Alternatively, and this is what tuf-on-ci does, we could just include the uri with the public key in tuf metadata. Then the user does not have to configure the key information at all, just the key itself.
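
For illustration, the key could carry its URI roughly like this (the "x-rstuf-online-key-uri" field name is the one adopted later in this thread; all key values are placeholders):

```python
from securesystemslib.signer import SSlibKey

# Public key as it would appear in root metadata, carrying its private key
# URI in a custom field, so the worker needs no separate key-location config.
key = SSlibKey(
    keyid="<keyid>",
    keytype="ed25519",
    scheme="ed25519",
    keyval={"public": "<hex public key>"},
    unrecognized_fields={
        "x-rstuf-online-key-uri": "file:/run/secrets/<keyid>?encrypted=false"
    },
)
```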

I think we should go for that.

Re: environment variables (RSTUF_*)
We discussed this with Jussi, but I can't remember why it is important that all environment variables have an RSTUF_ prefix. If we are dealing with a 3rd party env var, which RSTUF just "relays", why not use the original name? E.g. what's the benefit of asking the user to write AWS_ACCESS_KEY_ID as RSTUF_AWS_ACCESS_KEY_ID, just so that RSTUF internally has to strip the prefix before rewriting it to the environment?
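
A minimal sketch of the relaying in question, assuming the prefix convention (the AWS filter is just an example):

```python
import os

# Strip the RSTUF_ prefix and re-export, e.g.
# RSTUF_AWS_ACCESS_KEY_ID -> AWS_ACCESS_KEY_ID, so that 3rd party
# libraries (boto3 in the AWS case) find the variables they expect.
for name, value in list(os.environ.items()):
    if name.startswith("RSTUF_AWS_"):
        os.environ[name.removeprefix("RSTUF_")] = value
```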

Re: supporting secrets, custom configurations, etc.
Yeah, about that, I wonder if we really need to support multiple different ways of configuring local online keys. Especially in light of other signers in securesystemslib, which seem better suited for online signing (gcp, aws, azure).

Currently, you can configure a private key via RSTUF_LOCAL_KEYVAULT_KEYS as one of:

  • path to file with key data
  • path to file with path to file with key data
  • key data

I suggest allowing either a path to a file with key data, or the key data itself.

@kairoaraujo (Member) commented

>> In RSTUF, we need to keep the user experience of configuring the containers easy: implementing the environment variables (RSTUF_*), supporting secrets, custom configurations, etc.

> Is this a response to something I said? Regardless, I have some comments... :)

> Re: container config UX...

> Agreed, this should be super easy. Asking the user to come up with config like <long hex string>=<custom uri format> is not great. I suggested automating this. E.g. when the user adds the public key to root metadata via the RSTUF admin CLI, we can ask them to provide the necessary information for the tool to construct the URI, e.g. "Enter the path to the online private key in the container:". Or, the other way around, we tell them where to write the key to, e.g. "Make sure the key is available at /run/secrets/KEYID in the container!".

I'm not clear here, but I believe you're referring to the use case for the LocalKeyVault type, which we agree shouldn't be the most used in production deployments (but we will have it anyway).
We need to make it easy, and I agree we can simplify it if there are ways.
That is the case for supporting multiple options here: while deploying the container, there are a lot of chicken-and-egg problems.
Specifically, LocalKeyVault is the first experience of a user trying RSTUF.

> Then the CLI could print the constructed <keyid>=<uri> config entry and instruct the user to include it in the worker config. But this is still not great. The user already has to configure the key itself; also having to configure its location does not seem user friendly.
>
> Alternatively, and this is what tuf-on-ci does, we could just include the uri with the public key in tuf metadata. Then the user does not have to configure the key information at all, just the key itself.
>
> I think we should go for that.

Should we obligate the user to use the CLI? Right now, it is more of an interface, and an example interface, for the RSTUF API.
But it is a decision we should make.
I like the idea that an organization can create its own interface for the RSTUF API, including its own process.

Remember that the RSTUF deployment can have a different setup, as it uses containers. There is a difference from tuf-on-ci here.
The users must deal with deployment, which means configuring containers using compose files and manifests, depending on the strategy and environment.

> Re: environment variables (RSTUF_*)
> We discussed this with Jussi, but I can't remember why it is important that all environment variables have an RSTUF_ prefix. If we are dealing with a 3rd party env var, which RSTUF just "relays", why not use the original name? E.g. what's the benefit of asking the user to write AWS_ACCESS_KEY_ID as RSTUF_AWS_ACCESS_KEY_ID, just so that RSTUF internally has to strip the prefix before rewriting it to the environment?

Here, I disagree.
First, because we are telling the user to go to securesystemslib, which will tell them to go to boto3 and use its environment variables. It is a bit of a distributed experience. Imagine doing that for SQLAlchemy and Redis; we use all those libraries to build the service.

Secondly, it is a standard. Here are some examples (see the environment variables):
https://hub.docker.com/_/postgres
https://hub.docker.com/r/bitnami/kafka
https://hub.docker.com/_/mysql

> Re: supporting secrets, custom configurations, etc.
> Yeah, about that, I wonder if we really need to support multiple different ways of configuring local online keys. Especially in light of other signers in securesystemslib, which seem better suited for online signing (gcp, aws, azure).

> Currently, you can configure a private key via RSTUF_LOCAL_KEYVAULT_KEYS as one of:
>
>   • path to file with key data
>   • path to file with path to file with key data
>   • key data
>
> I suggest allowing either a path to a file with key data, or the key data itself.

It sounds strange, but I think we support the two common use cases for containers, with the constraint that there is always a volume:

  • path to the key: covers a file that is already available or is stored in a secret
  • base64-encoded key content: covers a file that will be saved in the volume
    • it can also be given as a secret (base64)

@lukpueh (Collaborator, Author) commented Dec 6, 2023

> While deploying the container, there are a lot of chicken-and-egg problems. Specifically, LocalKeyVault is the first experience of a user trying RSTUF.

What problems do you mean and how are they related to my proposal?

> Should we obligate the user to use the CLI?

No, and we don't need to. An organisation can very well build their own tooling to create TUF metadata, with private key uris included in public keys.

> The users must deal with deployment, which means configuring containers using compose files and manifests, depending on the strategy and environment.

Sure, that doesn't change with my proposal. As before, the user still needs to get the private key into the system somehow. The difference is that now the user only needs to configure the key (e.g. via mounting a volume with the key file), and the information about where to get the key from is included transparently via TUF metadata. Whereas previously the user had to configure both the key and the information about where to load it from.

> Here, I disagree. First, because we are telling the user to go to securesystemslib, which will tell them to go to boto3 and use its environment variables. It is a bit of a distributed experience. Imagine doing that for SQLAlchemy and Redis; we use all those libraries to build the service.

I would argue that it is already a distributed experience, because the user will likely manage their aws key via aws tooling.

It seems okay to say in user docs something like, "if you are using aws, make sure to configure these aws env vars: [link to aws env var docs]".

Alternatively, you'd have to say something like, "if you are using aws, make sure to configure these rstuf env vars, which correspond to these aws env vars but w/o the RSTUF_ prefix: [link to aws env var docs]". :P

But yeah, I won't insist. It's okay with me to just re-export them.

>   • path to file with key data
>   • path to file with path to file with key data
>   • key data
>
> I suggest allowing either a path to a file with key data, or the key data itself.

> It sounds strange, but I think we support the two common use cases for containers, with the constraint that there is always a volume:
>
>   • path to the key: covers a file that is already available or is stored in a secret
>   • base64-encoded key content: covers a file that will be saved in the volume
>     • it can also be given as a secret (base64)

Do you disagree that we currently support the 3 ways of configuring local keys above? Also, my question was whether the user really needs all that flexibility.

@kairoaraujo (Member) commented Dec 6, 2023

> What problems do you mean and how are they related to my proposal?

> No, and we don't need to. An organisation can very well build their own tooling to create TUF metadata, with private key uris included in public keys.

👍

> Sure, that doesn't change with my proposal. As before, the user still needs to get the private key into the system somehow. The difference is that now the user only needs to configure the key (e.g. via mounting a volume with the key file), and the information about where to get the key from is included transparently via TUF metadata. Whereas previously the user had to configure both the key and the information about where to load it from.

What I was trying to elaborate on is that when you deploy RSTUF, you don't yet have the TUF metadata (the user will bootstrap it). The RSTUF Worker will not have this information and cannot access the key. I thought it could be a problem. 🤔
Unless we don't try to access the key at the start of the container; then the user will notice the error only when executing the bootstrap, for example (during signing).

> Do you disagree that we currently support the 3 ways of configuring local keys above? Also, my question was whether the user really needs all that flexibility.

I'm not saying that I disagree here.
I'm saying they are common use cases during container deployment.
While deploying my RSTUF in my Kubernetes cluster, I decided to use an API key service with certificates, and I got the same options to set up:
https://github.com/Kong/kong/blob/89e62669ea26fea36f66bcab092fd507a9abd326/kong.conf.default#L843C1-L845C76

https://github.com/kairoaraujo/rstuf.kairo.dev/blob/e74a352bd097acd9c73d26fa2c59282de43d1485/deployment.yml#L263-L266

If you just try to deploy your environment using, for example, Docker, the volume is not created until you deploy. Only after it starts are you able to add a key inside the volume. Here is the tricky part: to add a key, you must do it through a container that mounts this volume.
But it is useful when using a secret, which is a combination of volume + file.
That is why we give the option to use the key data (base64), so you create the key in the volume during the start.
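
An illustrative sketch of that startup step (treating the variable as a single base64 blob for simplicity; the target path is an assumption):

```python
import base64
import os
import pathlib

# Decode base64 key data from the env var (or secret) and write it into the
# mounted volume at startup, so a file-based signer can load it later.
data = base64.b64decode(os.environ["RSTUF_LOCAL_KEYVAULT_KEYS"])
pathlib.Path("/var/rstuf/keys/online.key").write_bytes(data)
```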

lukpueh force-pushed the simple-signer-store branch 2 times, most recently from 68ae260 to 3d798f8, December 11, 2023 15:39

@lukpueh (Collaborator, Author) commented Dec 11, 2023

@kairoaraujo, @MVrachev, I updated the PR to implement what we discussed:

d079add rewrites the signer store to remove any custom key config parsing and instead uses a key URI, which can be attached to the public key. This solves the problem of mapping public keys to private keys at signing time, works for any registered Signer implementation, and makes the signing code extremely simple.

The other two commits (39ea16c, 68ae260) implement two custom signer loaders: one for relative path URIs, where a base path is expected as an environment variable (as you requested), and one for URIs that point to environment variable names from which the key can be loaded.

As you can see, they only need to be registered in the Signer API's dispatch table for SignerStore.get to use them. No other change is needed.
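
For illustration, registering such a loader could look roughly like this (the "envvar" scheme and EnvVarSigner class are stand-ins modeled on the commits above, not their actual code):

```python
import os

from cryptography.hazmat.primitives.serialization import load_pem_private_key
from securesystemslib.signer import SIGNER_FOR_URI_SCHEME, CryptoSigner

class EnvVarSigner(CryptoSigner):
    """Illustrative "envvar:<NAME>" loader: reads an unencrypted PEM key
    from the environment variable named in the URI."""

    @classmethod
    def from_priv_key_uri(cls, priv_key_uri, public_key, secrets_handler=None):
        _, name = priv_key_uri.split(":", 1)
        private_key = load_pem_private_key(
            os.environ[name].encode(), password=None
        )
        return cls(private_key, public_key)

# Registering the loader in the dispatch table is the only change needed
# for SignerStore.get to resolve "envvar:<NAME>" URIs.
SIGNER_FOR_URI_SCHEME["envvar"] = EnvVarSigner
```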

Btw. I mostly removed the config parsing code for emphasis (to show how the code becomes even simpler). If we want to stay backwards compatible we can easily revert that and make the SignerStore use the old-style config if it exists, but expect URIs on keys otherwise.
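
A hedged sketch of that backwards-compatible variant of SignerStore.get (the old-style config lookup is abbreviated as a hypothetical _legacy_get helper):

```python
from securesystemslib.signer import Key, Signer

def get(self, key: Key) -> Signer:
    # Prefer a URI attached to the public key; fall back to the old-style
    # LOCAL_KEY_VAULT config if no URI is present.
    uri = key.unrecognized_fields.get("x-rstuf-online-key-uri")
    if uri is None:
        return self._legacy_get(key)  # stand-in for old-style config loading
    return Signer.from_priv_key_uri(uri, key)
```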

@MVrachev (Member) commented

> @kairoaraujo, @MVrachev, I updated the PR to implement what we discussed:
>
> d079add rewrites the signer store to remove any custom key config parsing and instead uses a key URI, which can be attached to the public key. This solves the problem of mapping public keys to private keys at signing time, works for any registered Signer implementation, and makes the signing code extremely simple.
>
> The other two commits (39ea16c, 68ae260) implement two custom signer loaders: one for relative path URIs, where a base path is expected as an environment variable (as you requested), and one for URIs that point to environment variable names from which the key can be loaded.
>
> As you can see, they only need to be registered in the Signer API's dispatch table for SignerStore.get to use them. No other change is needed.
>
> Btw. I mostly removed the config parsing code for emphasis (to show how the code becomes even simpler). If we want to stay backwards compatible we can easily revert that and make the SignerStore use the old-style config if it exists, but expect URIs on keys otherwise.

I like what you did here, but I will feel more confident when we have the other part, which is in the CLI.

lukpueh added a commit to lukpueh/repository-service-tuf-cli that referenced this pull request Dec 13, 2023
Re-implement key loading and signing parts of ceremony cli, with the
following goals:
- configure online signer via uri attached to public key
  (for repository-service-tuf/repository-service-tuf-worker#427)
- use state-of-the-art securesystemslib API only
- make re-usable for similar cli
- simplify (e.g. avoid custom/redundant abstractions over
  python-tuf/securesystemslib Metadata API)

Signed-off-by: Lukas Puehringer <[email protected]>
Can be used to load/cache/use any signer from securesystemslib Signer
API, including custom signers.

Should replace IKeyVault and related classes.

Signed-off-by: Lukas Puehringer <[email protected]>
- remove all config parsing (revert this for backwards compatibility
  with LOCAL_KEY_VAULT config style)
- use x-rstuf-online-key-uri in public key (requires adoption in CLI)

Signed-off-by: Lukas Puehringer <[email protected]>
The base path is provided via environment variable.

Signed-off-by: Lukas Puehringer <[email protected]>
lukpueh force-pushed the simple-signer-store branch from 3d798f8 to b02b4f6, January 22, 2024 09:59
lukpueh added a commit to lukpueh/repository-service-tuf-worker that referenced this pull request Feb 5, 2024
implements option 2 from repository-service-tuf/repository-service-tuf#580
supersedes repository-service-tuf#427 (does not include a custom "relative file path signer",
can be added in a follow-up PR)

--

Change `SignerStore.get` to load non-cached signers from private key uri
configured on the passed public key in an "x-rstuf-online-key-uri" field.

If the public key does not include a uri, RSTUF_KEYVAULT_BACKEND is used
as fallback.

securesystemslib.signer.CryptoSigner is "registered" to load signers
from private key files. No key-specific secrets handling is added. This
means the keys must be stored unencrypted, preferably using the secrets
handling of the deployment platform (e.g. docker secrets).

Default schemes in `securesystemslib.signer.SIGNER_FOR_URI_SCHEME` can
be used but are untested.

**Tests**
Add test to load actual signer from private key file.

Uses new unencrypted ed25519 private key copied from:
secure-systems-lab/securesystemslib@7952c3f

Public key stubs in other tests are updated, because signer store now
reads the `unrecognized_fields` attribute, which is mandatory in Key
objects.

--

implements option 2 described in
repository-service-tuf/repository-service-tuf#580

mostly supersedes repository-service-tuf#427 (does not include a custom "relative file path
signer")

Signed-off-by: Lukas Puehringer <[email protected]>

@lukpueh (Collaborator, Author) commented Feb 7, 2024

will be superseded by #451 and #452

lukpueh added a commit that referenced this pull request Feb 7, 2024
(same commit message as above)

@lukpueh (Collaborator, Author) commented Feb 12, 2024

Closing in favour of #455

lukpueh closed this Feb 12, 2024