Replies: 2 comments
---
To provide an example, I think it should be possible to write a template like this:

```yaml
basedOn: template://k3s
provision:
- file: template://install/helm
- script: |
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update bitnami
    helm install wordpress bitnami/wordpress \
      --set service.type=NodePort \
      --set volumePermissions.enabled=true \
      --set mariadb.volumePermissions.enabled=true
```

This will only work if the provisioning script from the `template://k3s` base template runs before the scripts of this template. I know that there are workarounds, like relying on the fact that …
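For the example above to work, the combined `provision` list would need to come out in this order (just a sketch; the actual k3s script content is elided):

```yaml
provision:
- script: |
    # k3s installation from the base template has to run first
    # ...
- file: template://install/helm
- script: |
    # the helm commands from this template run last
    # ...
```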
---
I realized that `mounts[].location` is not really a unique key: the same location can be mounted on multiple mount points, but each mount point can only be used once. This means combining multiple mounts of the same location is currently impossible. So I propose switching to using `mounts[].mountPoint` as the shared key instead. The only issue is that somebody might have used the current algorithm to move the default mount location for the home directory to a different mount point in `override.yaml`:

```yaml
mounts:
- location: '~'
  mountPoint: /home/guest
```

Right now this would modify the existing mount. With the proposed change, this would create an additional mount. While not ideal, I don't think this should break anything (famous last words alert!). So I guess I have a 3rd question:

3. Can we switch the shared key for `mounts` from `location` to `mountPoint`?
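To spell out the difference (assuming the default `~` mount comes from the default template):

```yaml
# default template
mounts:
- location: '~'

# override.yaml
mounts:
- location: '~'
  mountPoint: /home/guest

# current algorithm (shared key mounts[].location): still a single mount
mounts:
- location: '~'
  mountPoint: /home/guest

# proposed algorithm (shared key mounts[].mountPoint): an additional mount
mounts:
- location: '~'
- location: '~'
  mountPoint: /home/guest
```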
---
I've been working on the `basedOn` feature that I've discussed previously at #2520 (reply in thread).

A template can take a list of other (base) templates to provide default settings:
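For example (the base template names here are just illustrative):

```yaml
basedOn:
- template://docker
- template://my-common-settings
```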
Each base template can recursively be `basedOn` additional templates.

It already works quite nicely, maintaining YAML comments from both the instance and the base templates as appropriate.
I do want to use the same mechanism during instance start for merging `defaults.yaml` and `override.yaml` (the `basedOn` mechanism is only executed during instance create, and the assembled template is then stored in the instance directory).

The existing merge algorithm is basically:

- scalar values from the higher priority template override lower priority ones
- maps are merged recursively
- lists are concatenated, with the higher priority entries coming first
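As a made-up illustration of these rules:

```yaml
# lower priority (e.g. defaults.yaml)
cpus: 2
memory: 4GiB
provision:
- script: echo "from defaults"

# higher priority (e.g. the instance template)
cpus: 4
provision:
- script: echo "from the instance"

# merged result
cpus: 4               # higher priority scalar wins
memory: 4GiB          # filled in from the lower priority template
provision:            # lists concatenated, higher priority first
- script: echo "from the instance"
- script: echo "from defaults"
```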
There are some exceptions to that (e.g. `dns` lists work like scalar values and are not appended).

Both `mounts` and `networks` are combined in reverse order (lowest to highest). I believe I did this because both use a shared key (`mounts[].location` and `networks[].interface`) to update the lower priority settings with higher priority ones later in the list (which are then discarded)[^1].
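For example, a higher priority entry for the same location updates the lower priority one instead of adding a second mount:

```yaml
# lower priority template
mounts:
- location: '~'
  writable: false

# higher priority template
mounts:
- location: '~'
  writable: true

# combined result: matched on the shared mounts[].location key
mounts:
- location: '~'
  writable: true
```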
Otherwise the order of `mounts` and `networks` shouldn't really matter (except maybe for the buggy behaviour of overlapping reverse-sshfs mounts).

I had assumed that we also concatenated the `provision` and `probes` lists in reverse order, so that the highest level scripts run last and can adapt to the lower level ones running before. But we don't actually do so.

Do you think anyone relies on a provisioning script from `override.yaml` to run before the provisioning scripts of the regular template? I can't think of any of our bundled scripts that would be configurable by another script running first.

So here are my questions:
1. Can we change the order of `mounts` and `networks`, as long as the combining mechanism on the shared key continues to work the same way?
2. Should we reverse the order of combined `provision` and `probes` scripts?
[^1]: I noticed that for consistency `additionalDisks` should probably be treated the same way, with `additionalDisks[].name` being the shared key, even though I don't really see much of a use case for it.