feature request: support custom CAs and ca-bundle from openshift Cluster Proxy Config in AWS S3 connectivity e.g. model serving puller #113
Comments
Conceptually related to opendatahub-io/kubeflow#105.
@shalberd I think this issue could serve as a workaround for the AWS_CA_BUNDLE issue. In addition, we are discussing this cert issue in that ticket, so please check it out.
Hello, there are at least 3 issues which seem to be aiming for the same result. Could you please review them and close the duplicates? @Jooho @shalberd
Would this be a replacement for, or an extension to, the work done in opendatahub-io/data-science-pipelines-operator#362? I was just noting that in data science pipelines, the approach of manually adding an OpenShift secret and referencing it in the DataSciencePipelinesApplication CR is OK, but that an approach where the operator handles secret creation and secret referencing automatically should be preferred. See comment opendatahub-io/data-science-pipelines-operator#440 (comment). It looks like modelmesh-serving is taking a similar approach to pipelines, with its own added secrets: https://github.com/opendatahub-io/odh-model-controller/pull/62/files#diff-0314b35123f848e2abc6db2e83259740f923292de02159b3734223bfbbb59e81 @Jooho @HumairAK @gregsheremeta My point is: why not let either the data science pipelines operator or the modelmesh controller handle the secret creation and mounting, as the odh notebooks controller does? Environment variables pointing to the CA bundle location could then point to a standardized location where the content of the ConfigMap is mounted. The advantage would be that I only have to add custom trusted CAs and self-signed certs in PEM format once, at the central cluster Proxy config via a secret in the namespace openshift-config, which is much more streamlined and less distributed. The bundle would then be available automatically via ConfigMap content and mount, without me referencing a secret by name, adding it to a config CR, or creating that secret myself. Trusted CA and self-signed cert info should be kept out of a config-type secret that also carries other aspects like bucket name, host name, and so on.
Long term, we are pursuing a global cluster-level approach. Whether that uses the Proxy or not is TBD. A future global cluster-level approach does not obviate the need for component-level control as well.
Agreed. See my thoughts elsewhere.
@shalberd is there an outstanding issue specifically for us, since this is more a platform-level issue?
Hi, is this still an issue?
In ODH Dashboard Model Serving, it is possible to define access (URL, credentials, and so on) to S3-compatible storage buckets for model files (described in the ODH Dashboard as a Data Connection).
If those files are located on a server whose certificates are based on custom / private PKI CAs, there can be SSL trust validation issues when connecting to e.g. Ceph or IBM object storage via HTTPS.
If the model serving puller uses boto3 in the background, this approach would be feasible:
Allow mounting the trusted-ca-bundle cert trust into modelmesh serving and pointing the AWS_CA_BUNDLE env var at it, in addition to the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY info from the OCP cluster Proxy resource.
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#environment-variable-configuration
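For the boto3 path, a minimal sketch of how the puller could pick up the bundle might look like the following. This is only an illustration under the assumption that the puller is boto3-based; the endpoint URL, bucket, object key, and fallback mount path are hypothetical placeholders, not the puller's actual configuration.

```python
# Sketch: a boto3-based puller honouring a custom CA bundle. botocore reads
# AWS_CA_BUNDLE automatically; passing `verify=` explicitly has the same effect.
import os
import boto3

# Path is an assumption: wherever the cluster trusted-ca-bundle gets mounted.
ca_bundle = os.environ.get(
    "AWS_CA_BUNDLE",
    "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",
)

s3 = boto3.client(
    "s3",
    endpoint_url="https://ceph.example.internal",  # placeholder S3-compatible endpoint
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    verify=ca_bundle,  # custom CA bundle instead of the default certifi store
)

# Download a model artifact over HTTPS backed by a private PKI CA (placeholder names).
s3.download_file("models", "onnx/model.onnx", "/models/model.onnx")
```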
If the code for S3 connectivity is based not on Python but on Go (aws-sdk-go), an approach for custom CA / system CA bundle support, instead of the AWS_CA_BUNDLE env var, could be similar to the notebooks effort, i.e. putting the CA bundle at
/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
(the default system root CA bundle), as also works with the Go HTTPS manifests download code in the opendatahub operator. HTTP_PROXY, HTTPS_PROXY, and NO_PROXY support should not be an issue either, library-wise.
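For the Go path, a minimal sketch assuming aws-sdk-go v1 is shown below; the endpoint, region, and bucket are placeholders. Note that if the bundle is mounted at the default system path, Go's standard TLS stack already picks it up via the system cert pool, so the explicit loading here only makes the behaviour visible.

```go
// Sketch: trust a cluster CA bundle mounted at the RHEL system path when
// talking to S3-compatible storage with aws-sdk-go v1. Credentials come from
// the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	pem, err := os.ReadFile("/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(pem) {
		log.Fatal("no certificates could be parsed from the bundle")
	}

	httpClient := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	sess, err := session.NewSession(&aws.Config{
		Endpoint:         aws.String("https://ceph.example.internal"), // placeholder endpoint
		Region:           aws.String("us-east-1"),                     // placeholder region
		S3ForcePathStyle: aws.Bool(true),
		HTTPClient:       httpClient,
	})
	if err != nil {
		log.Fatal(err)
	}

	// List objects just to verify TLS trust against the private-PKI endpoint.
	out, err := s3.New(sess).ListObjectsV2(&s3.ListObjectsV2Input{Bucket: aws.String("models")})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d objects", len(out.Contents))
}
```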
I am unsure whether this feature request belongs here or at modelmesh-runtime-adapter; I cannot create a feature request there: https://github.com/opendatahub-io/modelmesh-runtime-adapter/tree/main/model-serving-puller
Describe alternatives you have considered
Additional context
opendatahub-io/odh-dashboard#1381 (comment)
Similar to this effort in notebooks, though most likely with a different place to inject:
opendatahub-io/kubeflow#43
A ticket created by @codificat at https://issues.redhat.com/browse/RHODS-8813 describes the general wishes, both for secure cluster-internal services like Ceph and for cluster-external HTTPS locations based on custom, non-publicly-trusted corporate CAs.