Validate target environments #2806

Open · itowlson wants to merge 3 commits into main from validate-target-environment

Conversation

@itowlson (Collaborator) commented Sep 6, 2024

EXTREMELY WIP.

There are plenty of outstanding questions here (e.g. environment definition, Redis trigger, hybrid componentisation), and reintegration work (e.g. composing dependencies before validating) (ETA: component dependencies now work). In its current state, the PR is more for visibility than out of any sense of readiness. Also, it has not had any kind of tidying pass, so a lot of naming is not great.

But it does work, sorta kinda, some of the time. So we progress.

cc @tschneidereit

@itowlson force-pushed the validate-target-environment branch 6 times, most recently from 30aa1fc to 3b25dad on September 9, 2024 at 22:10
@itowlson force-pushed the validate-target-environment branch 2 times, most recently from e74895b to 828cb70 on September 17, 2024 at 00:57
@itowlson (Author)

Checking environments adds an appreciable bump to the time to run spin build. I looked into caching the environments, but a robust cache didn't really help: the bulk of the time, in my unscientific test, was in the initial registry request, with retrieval of the Wasm bytes (for the Spin environment) taking about a quarter of the time. (In my debug build, from a land far away from Fermyon Cloud and GHCR where my test registry is hosted: approx 2500ms to get the digest, then 600ms to get the bytes. The cache eliminated only the 600ms.)

We could, of course, assume that environments are immutable, and key the cache by package reference instead of digest. But that would certainly be an assumption and not guaranteed to be true.

@itowlson (Author)

This is now sort of a thing and can possibly be looked at.

Outstanding questions:

  • Where shall we publish the initial set of environments?
    • The wkg configuration will need to reflect this. At the moment it uses the user's default registry. Yeah nah.
  • Where shall we maintain the environment WITs?
    • Separate repo in the Fermyon org?
  • What does that initial set contain?
    • Currently I've created a Spin CLI "2.5" world with just the HTTP trigger (WASI 0.2 and WASI RC).
  • Testing

Possibly longer term questions:

  • How can we manage environments where the set of triggers is variable - specifically the CLI with trigger plugins?
  • How to avoid a lengthy network round-trip on every build?
  • Better error reporting for environments where a trigger supports multiple worlds (like, y'know, the Spin CLI).

If folks want to play with this, add the following to your favourite spin.toml:

[application]
targets = ["spin:[email protected]"]

and set your wkg config (~/.config/wasm-pkg/config.toml) to:

default_registry = "registrytest-vfztdiyy.fermyon.app"

[registry."registrytest-vfztdiyy.fermyon.app"]
type = "oci"

(No, that is not the cat walking across the keyboard... this is my test registry which backs onto my ghcr.)

@itowlson marked this pull request as ready for review on September 18, 2024 at 23:56
@lann (Collaborator) commented Sep 19, 2024

> Where shall we publish the initial set of environments?

fermyon.com?

> Where shall we maintain the environment WITs?

I'd suggest "next to the code that implements them"; ideally generated by that code.

let dt = deployment_targets_from_manifest(&manifest);
Ok((bc, dt, Ok(manifest)))
}
Err(e) => {
Collaborator

It's not obvious to me what's happening here. It reads like we're trying to get build and deployment configs even when we can't parse the manifest? I think some terse one-line comments on each branch of this match would go a long way to helping this be a bit more understandable.

Collaborator

Reading the code more, I'm unsure why we go through all these great lengths to read the targets config here when later on we only seem to run the targets check if the manifest was successfully parsed. Won't this information just be thrown away?

@itowlson (Author)

@rylev You are absolutely right - this was a holdover from an earlier iteration before I realised I was going to need the full manifest - thanks for catching it. I've pared this back to "has deployment targets", which I think is worth keeping so we can warn if the manifest errors have caused us to bypass checking.
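
For the record, a minimal sketch of what the pared-back fallback could look like (the function name and details here are illustrative, not the PR's exact code):

// Hypothetical sketch: on a manifest parse failure, scrape the raw TOML only
// to learn *whether* a targets list exists, so the caller can warn that
// target validation was bypassed.
fn fallback_has_deployment_targets(manifest_text: &str) -> anyhow::Result<bool> {
    let table: toml::value::Table = toml::from_str(manifest_text)?;
    Ok(table
        .get("application")
        .and_then(|a| a.as_table())
        .map(|t| t.contains_key("targets"))
        .unwrap_or(false))
}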

Ok((bc, dt, Ok(manifest)))
}
Err(e) => {
let bc = fallback_load_build_configs(&manifest_file).await?;
Collaborator

It seems like there's some need for better error messages here through liberal use of .context. As the code stands now, the component_build_configs function might return an error saying only "expected table, found some other type", which would be very confusing.

@itowlson (Author)

On reflection these should preserve (and immediately return) the original manifest load error rather than blatting it with whatever went awry during the fallback attempt.
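
As a sketch of both points (all names hypothetical, not the PR's actual code): attach a .context to the fallback parse so errors name the file, and prefer the original manifest error when the fallback also fails.

use std::path::Path;
use anyhow::Context;

// Hypothetical sketch: read and parse the manifest as raw TOML for fallback
// purposes. Read errors get context naming the file; if the fallback parse
// itself fails, the original (more relevant) manifest load error is returned.
async fn fallback_table(
    manifest_file: &Path,
    original_error: anyhow::Error,
) -> anyhow::Result<toml::value::Table> {
    let text = tokio::fs::read_to_string(manifest_file)
        .await
        .with_context(|| format!("failed to read manifest at {}", manifest_file.display()))?;
    match toml::from_str(&text) {
        Ok(table) => Ok(table),
        Err(_) => Err(original_error), // don't blat the real error with the fallback's
    }
}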

Comment on lines 85 to 96
let table: toml::value::Table = toml::from_str(&manifest_text)?;
let target_environments = table
.get("application")
.and_then(|a| a.as_table())
.and_then(|t| t.get("targets"))
.and_then(|arr| arr.as_array())
.map(|v| v.as_slice())
.unwrap_or_default()
.iter()
.filter_map(|t| t.as_str())
.map(|s| s.to_owned())
.collect();
Collaborator

Would deserializing to a type through serde::Deserialize make this easier to read?
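
Perhaps something like this (type names invented for the sketch; note one behaviour difference: serde would reject a non-table [application], where the hand-rolled chain silently ignores it):

use serde::Deserialize;

// Hypothetical sketch of the serde alternative: deserialize just the corner
// of the manifest we care about, defaulting to empty when keys are absent.
#[derive(Deserialize, Default)]
struct ManifestHead {
    #[serde(default)]
    application: ApplicationHead,
}

#[derive(Deserialize, Default)]
struct ApplicationHead {
    #[serde(default)]
    targets: Vec<String>,
}

fn target_environments(manifest_text: &str) -> anyhow::Result<Vec<String>> {
    let head: ManifestHead = toml::from_str(manifest_text)?;
    Ok(head.application.targets)
}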

@@ -57,6 +75,30 @@ async fn fallback_load_build_configs(
})
}

async fn fallback_load_deployment_targets(
Collaborator

Nit: is "deployment targets" the right nomenclature? That sounds like what you would be targeting for spin deploy and not the environment you're targeting. Perhaps we could play with the wording here?

@itowlson (Author)

@rylev I'm very open to naming on "deployment targets." The current name emerged in a woolly manner from a sense of "these are possible targets for deployment" or "the target environments we want to be able to deploy to." I do feel, albeit not strongly, that it's worth specifying 'deployment' (cf. e.g. 'compilation target') - but for sure let's improve this!

Collaborator

"runtime targets"?

Comment on lines +91 to +97
async fn load_component_source(&self, source: &Self::Component) -> anyhow::Result<Vec<u8>>;
async fn load_dependency_source(&self, source: &Self::Dependency) -> anyhow::Result<Vec<u8>>;
Collaborator

Are we implementing ComponentSourceLoader more than once? I'm a bit lost why we need the flexibility of defining the type of dependency and component instead of hard coding. Can you explain what this buys us?

@itowlson (Author)

The current build system doesn't create a lockfile. The current composition implementation depends on the locked types (because it runs at trigger time). This allows us to implement composition on the raw AppManifest as well as on the LockedApp.
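
Roughly this shape (a simplified sketch, not the PR's exact trait or impls):

// Sketch: one composition algorithm, two source-loading strategies. At build
// time sources come from the raw manifest; at trigger time, from the locked
// app's content-addressed entries.
#[async_trait::async_trait]
trait ComponentSourceLoader {
    type Component;
    type Dependency;
    async fn load_component_source(&self, source: &Self::Component) -> anyhow::Result<Vec<u8>>;
    async fn load_dependency_source(&self, source: &Self::Dependency) -> anyhow::Result<Vec<u8>>;
}

// Hypothetical impls: a build-time loader over spin.toml component sources...
struct ManifestSourceLoader;
// ...and a trigger-time loader over LockedApp entries.
struct LockedSourceLoader;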

"#
);

let doc = wac_parser::Document::parse(&wac_text)
Collaborator

We really should implement a programmatic API in wac for this so that we don't need to manipulate a wac script.

@itowlson (Author)

Yeah! I tried constructing a wac document AST, which seemed cleaner than parsing a document, but I foundered on - if memory serves - something as trivial as the miette spans. I tried it with dummy spans but something internal seems to have been trying to use the length of the name in error or something and anyway there was anguish. Using the underlying model would have been awesome, but was (for me) hard to understand and to line up inputs for.

}

pub async fn load_and_resolve_all<'a>(
app: &'a spin_manifest::schema::v2::AppManifest,
Collaborator

Nit: seems like we pass app and resolution_context around a lot. It might be nicer to put those into a Resolver struct and implement these functions as methods on that struct.

@itowlson (Author)

I had an explore of this, and have pushed something like it as a separate commit for easy reversion. I ended up naming the proposed struct ApplicationToValidate because otherwise I ended up with no "application" visible in a module that was ostensibly all about validating applications. I still have mixed feelings but there's definitely some nice aspects to encapsulating the component loading stuff - I was able to tuck some more clutter away and split things up in a way that's hopefully more readable...
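
For concreteness, the rough shape (field and method names illustrative; ResolutionContext and ResolvedComponent are stand-ins, not the PR's actual types):

use spin_manifest::schema::v2::AppManifest;

// Stand-ins for types referenced in the discussion.
struct ResolutionContext;
struct ResolvedComponent;

// The manifest and resolution context travel together, and the former free
// functions become methods, tucking the component-loading clutter away.
struct ApplicationToValidate {
    app: AppManifest,
    resolution_context: ResolutionContext,
}

impl ApplicationToValidate {
    async fn load_and_resolve_all(&self) -> anyhow::Result<Vec<ResolvedComponent>> {
        // component loading and composition details live here
        todo!()
    }
}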

@tschneidereit (Contributor)

> We could, of course, assume that environments are immutable, and key the cache by package reference instead of digest. But that would certainly be an assumption and not guaranteed to be true.

@itowlson I think that's an okay assumption to make, personally. Perhaps paired with a way to flush the local cache explicitly?

@tschneidereit (Contributor)

We could also consider introducing something like target-environments.lock which would be stored next to Spin.toml, and which would contain the environments. Then that explicit way to flush the cache would become "remove the file".
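
For illustration, such a lockfile's contents might reduce to something this small (a sketch; the file name comes from the comment above, the field names are placeholders):

use serde::{Deserialize, Serialize};
use std::collections::BTreeMap;

// Hypothetical shape for a target-environments lockfile stored next to
// spin.toml: environment reference -> content digest. Later builds resolve
// the digest locally and hit the existing by-digest Wasm cache; deleting the
// file is the explicit "flush" gesture.
#[derive(Serialize, Deserialize, Default)]
struct TargetEnvironmentLock {
    environments: BTreeMap<String, String>, // e.g. "spin:up@..." -> "sha256:..."
}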

@itowlson force-pushed the validate-target-environment branch from 4870bba to 531e4fb on September 24, 2024 at 00:27
@itowlson (Author)

Okay I have pencilled in fermyon.com as the default registry for environment definitions (with !NEW! the option to override it or use a local path !NEW!). One thing that occurred to me though was that, at the moment, an environment reference is a package name. Are we okay with environments occupying the same namespace as worlds and components? Or should we adopt an implicit prefix e.g. instead of spin:[email protected] have users write [email protected] and translate this to e.g. environment:[email protected]? I'm not sure I have an opinion, but hopefully someone else does...

@tschneidereit (Contributor)

> One thing that occurred to me though was that, at the moment, an environment reference is a package name.

Aren't environments also just packages though? Ones that we apply some additional semantics to, yes, but not ones that are somehow non-standard.

Which actually raises the question: could we use the exact packages to generate bindings for Spin and the SDKs as well? (Not necessarily as part of this effort, but as a future thing.)

@itowlson force-pushed the validate-target-environment branch from 51a09f4 to 76cb8af on September 24, 2024 at 21:08
@itowlson (Author) commented Oct 4, 2024

Implemented caching on the basis of immutability as per Till's comment #2806 (comment) (and double checked on Windows).

I'll have a think about the lockfile idea. A lockfile mapping environment IDs to digests would allow us to reuse the existing Wasm by-digest caching infra instead of having something custom for name-based lookup. If we generated it into .spin then it would get picked up by .gitignore which is probably desirable, although it would be less discoverable for deletion purposes (.spin already contains a bunch of other stuff of varying deletability).

ETA: I implemented Till's lockfile-based cache strategy. It does make some things cleaner for sure. I've done it as a separate commit for now, but will squash once we agree on it.

@itowlson force-pushed the validate-target-environment branch 5 times, most recently from b93f02e to 651aca5 on May 5, 2025 at 02:53
@itowlson force-pushed the validate-target-environment branch from 651aca5 to 8427a07 on May 5, 2025 at 03:31
@itowlson force-pushed the validate-target-environment branch from ae77cd9 to 86f61b8 on May 5, 2025 at 04:01
@itowlson (Author) commented May 5, 2025

I did some unscientific performance testing with this. Validation for local or HTTP components is quick once registry targets are cached. If they have local dependencies, then composition before validation is also quick. However, if a component has a registry dependency, then that seems to be not cacheable, and that does introduce a perceptible delay. I recall in the previous work on this I tested whether I could speed it up by checking the registry digest and re-downloading only if changed, but it turned out that even just fetching the digest was slow. So I am not sure of the way round this.

(ETA: for clarity: the initial download of a registry env is a perceptible delay; and we do not currently check for updates (because of the perf hit). You need to delete the lockfile to force re-download. So environments once published must be stable.)

@itowlson (Author) commented May 6, 2025

@tschneidereit

> Aren't environments also just packages though? Ones that we apply some additional semantics to, yes, but not ones that are somehow non-standard.

Yes and no. I found that the canonical spin:up package from the Spin WIT folder didn't accept the fileserver component (at least the version I was testing), because the fileserver exported an RC version of wasi:http/incoming-handler - which Spin supports, but which is not listed in the top-level Spin world. Maybe that's an error in the spin:up package in that it fails to represent all worlds accepted by spin up? But it's not necessarily one we want to remedy in Spin "canon," because the RC world is for back compat with Spin 2, not one we want to view as a primary entry point going forwards.

We may also find it tricky to manage trigger plugins if we try to unify. E.g. would we want the world in the Spin repo to list the cron trigger export? But the target environments thing is going to have to allow for it I expect.

@itowlson (Author) commented May 6, 2025

The environment package I am using for testing:

https://github.com/itowlson/env-wits

The deps directory is identical to the Spin main deps directory, but as noted above I had to make some tweaks to world.wit.

(It is published at { registry = "registrytest-vfztdiyy.fermyon.app", package = "spin:[email protected]"} and can be used from there - should be public now.)

@itowlson (Author) commented May 6, 2025

Okay another nasty with the "environments are just packages" thought process is that environments do a bit more than "world" packages, in that they have to map triggers to worlds. This bit me while trying to do a wasi-http environment. There's nothing in the vanilla wasi:http package that says "this world handles the HTTP trigger." At the moment I use a naming convention for worlds to say "this world is supported by that trigger," but that's not going to work for a world called wasi:http/proxy.

A possible approach is to search through all worlds in the package and see if the component matches any of them, but you often find worlds in the deps, and that doesn't mean Spin supports them. For example, the Spin deps tree contains the wasi:keyvalue/watch-service world; but a component that conforms to wasi:keyvalue/watch-service is not runnable by Spin and certainly not by the HTTP trigger. So I am a bit suspicious of this.

On the other hand it would be really nice to unify because it would be really nice to identify target environments with packages rather than having to do some evasive importing and dodgy namespacing (cf. https://github.com/itowlson/env-wits/blob/main/wasi-http-0.2/world.wit which is pretty obnoxious). So I am very much open to ideas here - this may be yet another time I have gotten too close to the problem and can't see the obvious solution!

(ETA: although I am not sure wasi:http is a useful world for this because Rust at least seems unable to elide its wasi:environment import so 🤷; perhaps it would be better to define the minimal world as wasmtime:serve.)

@tschneidereit (Contributor)

> At the moment I use a naming convention for worlds to say "this world is supported by that trigger," but that's not going to work for a world called wasi:http/proxy.

I think that's not going to work, and that we'll instead have to have some kind of mapping. Even if we were okay with requiring trigger and/or world naming schemes that would make such a mapping viable, it'd not account for versions: Imagine a situation where people start generating WASIp3 HTTP components, and then try to run them in a version of Spin that doesn't yet support p3: a name based mapping would make the check pass, whereas in reality we couldn't run the component.

Would it be possible to change the definitions of triggers to include a set of target world names and versions? I can't really think of another viable solution, and this would also address the issue with the fileserver component you mentioned above.
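
A sketch of the suggested direction (a hypothetical API, not an existing Spin trait):

// Hypothetical: each trigger declares the exact world references (versions
// included) it can instantiate, so validation maps trigger type -> acceptable
// worlds without naming conventions, and a p3 component fails cleanly against
// a trigger that only lists p2 worlds.
trait TriggerWorlds {
    /// e.g. "wasi:http/proxy" at the specific versions this trigger supports.
    fn supported_worlds(&self) -> Vec<String>;
}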

@tschneidereit (Contributor)

> We may also find it tricky to manage trigger plugins if we try to unify. E.g. would we want the world in the Spin repo to list the cron trigger export? But the target environments thing is going to have to allow for it I expect.

If we allow triggers to say which target worlds they support, then we'd presumably also allow trigger plugins to do so. I had always assumed that we'd support fetching target world definitions from more than a single place, so my expectation would be that the plugin would say which world(s) it supports, and where to find their definitions.

@tschneidereit (Contributor)

> However, if a component has a registry dependency, then that seems to be not cacheable, and that does introduce a perceptible delay.

Can you say more about why that's not cacheable? Shouldn't we cache registry dependencies entirely independently from target worlds, because you absolutely don't want to refetch them all the time?

> I recall in the previous work on this I tested whether I could speed it up by checking the registry digest and re-downloading only if changed, but it turned out that even just fetching the digest was slow. So I am not sure of the way round this.

What's different about how we use dependencies over how something like npm or cargo do? For registry dependencies we have a version number, and I think it's more than reasonable to not check whether the registry changed its mind and decided to send something different for the same version.

@itowlson (Author) commented May 6, 2025

> Would it be possible to change the definitions of triggers to include a set of target world names and versions?

I'm not sure what you're envisaging. Are you suggesting Spin should have hardwired knowledge of all trigger type names and what worlds they map to?

@itowlson (Author) commented May 6, 2025

> Can you say more about why that's not cacheable?

Sorry, it is cacheable and indeed cached, but it is cached by digest. So until we know the digest...

> What's different about how we use dependencies over how something like npm or cargo do?

I dunno about npm but Cargo versions are (as you know) immutable. Once you've got serde 1.2.3, you can rely on it never changing, never needing to be re-downloaded. OCI registries don't offer that guarantee.

> For registry dependencies we have a version number, and I think it's more than reasonable to not check whether the registry changed its mind and decided to send something different for the same version.

This would be a change of behaviour for composition, and given that registry tags are mutable, could lead to undesired outcomes if the registry does change its mind. I guess the user could delete their cache. I can do this if people favour it but it is not what composition has done in the past.

@itowlson (Author) commented May 6, 2025

> I had always assumed that we'd support fetching target world definitions from more than a single place

I feel like we've had a lot of design discussions before and this has not been expressed. I'm sorry I didn't communicate what I'd been building more clearly but I did think we had been over the "what does an environment look like and why is it not a world" topic, and I thought I'd described how I proposed to tackle it, but maybe that was to someone other than you. We had previously talked about identifying an environment by a string such as spin:[email protected] or fermyon:cloud and I understood you wanted those to be WIT package references rather than somehow aggregating from multiple sources. So it would be super useful to retrench and gain a shared understanding, and this would certainly be an opportune moment since I'm still refamiliarising myself with the work!

@itowlson (Author) commented May 6, 2025

> Would it be possible to change the definitions of triggers to include a set of target world names and versions?

> I'm not sure what you're envisaging. Are you suggesting Spin should have hardwired knowledge of all trigger type names and what worlds they map to?

Oh, is the idea that Spin should invoke the trigger binary with a --gimme-the-world flag and it will return its world (or world package reference) over stdout kind of thing?

@tschneidereit (Contributor)

> I'm not sure what you're envisaging. Are you suggesting Spin should have hardwired knowledge of all trigger type names and what worlds they map to?

Not Spin, but a trigger implementation should. And does, in fact, because it contains bindings generated from WIT worlds. I don't want to trivialize the effort required to make this information available in validation, but I do think that this isn't anything fundamentally novel.

@tschneidereit (Contributor)

> Oh, is the idea that Spin should invoke the trigger binary with a --gimme-the-world flag and it will return its world (or world package reference) over stdout kind of thing?

Ah, serves me right for replying before reading all comments, sorry. That seems like a viable option, yes. Perhaps done only once when installing a trigger plugin, instead of at validation time.
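
A sketch of how install-time querying might look (everything here is hypothetical; the flag is the joke name from the comment above, not a real Spin flag):

use std::path::Path;
use std::process::Command;

// Hypothetical: at plugin install time, ask the trigger binary once for the
// world(s) it supports and record the answer in the plugin's metadata, so
// validation never has to shell out per build.
fn query_trigger_worlds(trigger_bin: &Path) -> anyhow::Result<Vec<String>> {
    let out = Command::new(trigger_bin).arg("--gimme-the-world").output()?;
    anyhow::ensure!(out.status.success(), "trigger failed to report its worlds");
    Ok(String::from_utf8(out.stdout)?
        .lines()
        .map(str::trim)
        .filter(|l| !l.is_empty())
        .map(String::from)
        .collect())
}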

@tschneidereit (Contributor)

> So it would be super useful to retrench and gain a shared understanding, and this would certainly be an opportune moment since I'm still refamiliarising myself with the work!

We absolutely should, and I agree that now seems to be a very good time to do so.

To the particular point of target worlds: I don't see how we could centralize them without also centralizing trigger plugins, which doesn't seem desirable.

I'm actually pretty sure that I undercommunicated about this, and want to apologize for that.

@itowlson (Author) commented May 6, 2025

> To the particular point of target worlds: I don't see how we could centralize them without also centralizing trigger plugins, which doesn't seem desirable.

Centralisation would be undesirable for sure, but I'm not talking about centralisation. E.g. the current implementation allows you to specify a registry-package pair so anyone can host an environment definition. But I am struggling to understand how we reconcile "an environment is a package" with "fetch definitions from multiple places." Sorry for the lack of clarity.

@tschneidereit (Contributor)

> Once you've got serde 1.2.3, you can rely on it never changing, never needing to be re-downloaded. OCI registries don't offer that guarantee.

Does all tooling call home for all artifacts all the time, then? That seems very unfortunate if so. And if not, are there patterns we could follow?

But also, doesn't this mean we have to do the digest check when running spin up in any case?

@itowlson (Author) commented May 6, 2025

I believe we do do the digest check at spin up. Although I haven't read that code for a while so I may be misunderstanding / misremembering!

@tschneidereit (Contributor)

In that case, is there any potential for combining these checks? Alternatively we could consider caching the results at least for a few minutes, so we'd effectively combine them in most cases: spin up would usually hit the cache in a build/test cycle, as obviously would spin watch and spin build --up.

@itowlson (Author) commented May 6, 2025

So this takes us back into "what is spin build" territory.

There is certainly scope in the spin up --build and spin watch cases (we may need to do some significant refactoring, but it's possible). The trouble is the spin build (time passes) spin up case, because in that case spin up has no knowledge that the check has already been done.

As a related (but different) example, #3117 asks us to validate component composition as part of spin build and proposes caching the composition result so that spin up doesn't have to recreate it. And this runs into kind of the same problem - either spin build is now the way you build Spin applications, or spin up needs to re-perform the checks/composition/whatever because it doesn't know you already did them.

In a sense the trouble is that getting from source to running in Spin has (at least) three phases:

  • Build the source code (done by external tools such as Cargo) - roughly, spin build
  • Load the resulting binaries and manifest and munge them into a runnable state (composition, world validation, downloading, locking, etc.) - roughly, spin up
  • Execute the results of the shenanigans - roughly spin trigger ...

And right now, we strongly couple the load-and-munge step with the execute step, to the point where you cannot do the load-and-munge step without doing the execute step. (Well, you can. spin registry push does, and that may be a useful reference.) But now we want to do some of the static analysis parts of load-and-munge separately from the execute step, correlating them more with the build step. And if we want to avoid repeating the load-and-munge work when we execute, then maybe we need to think about capturing the output of load-and-munge and running the execute step directly against that rather than requiring it to restart at spin up.

And I have no idea what that looks like, yet. But that's where the desire for pre-up validation seems to lead us in my mind.

Sorry, I know this seems like a bit of a wild ride from a simple "hey let's cache things", but hopefully it gives some sense of what's going through my mind when people talk about doing more in build. And these are early thoughts and very much in flux, so yeah, not sure about anything really.

@itowlson (Author) commented May 6, 2025

It occurs to me that the long ramble above also ties in with some of the discussion/concerns around the multiple build profiles SIP.

@itowlson (Author) commented May 6, 2025

Okay, on the subject of centralising and decentralising and getting WITs from different sources:

My original stab at this (okay, one of my original stabs at this) was that an "environment" was not a WIT package but a document which mapped triggers to possible WIT worlds. This allowed for the "I want to target Spin 3.2" declaration while not requiring worlds to adhere to any convention or be hosted in the same place or whatever. E.g.

# spin:[email protected] target (example only!), hosted on e.g. spinframework.dev
# you'd reference this as `targets = ["spin:[email protected]"]`

http = ["spin:up/[email protected]", { registry = "fermyon.com", world = "fermyon:spin/[email protected]" }] # any of these is acceptable
redis = ["fermyon:spin/inbound-redis"]
default = { source = "trigger" } # runs e.g. spin trigger whatever --world-me-up for the WIT

# wasi:[email protected] target (example only!), hosted on e.g. wasi.dev
# you'd reference this as `targets = [{ registry = "wasi.dev", name = "wasi:[email protected]" }]`

http = [{ registry = "bytecodealliance.org", package = "wasi:http/[email protected]" }]

We moved away from this because of the wish for the environment to be itself a WIT package, allowing it to be used for bindings etc.

But that puts us into the sticky situation of needing to infer a relationship between triggers and worlds. It's not enough to say "here is the complete spin up 3.2 WIT, you can use any of these worlds" because:

  1. You need to know which worlds are actually runnable, e.g. spin:up/http-trigger is runnable but wasi:keyvalue/watch-service isn't.
  2. You need to know which worlds correspond to which triggers, e.g. a component that implements fermyon:spin/inbound-redis isn't valid for a redis trigger. (At least this is my assumption. Maybe this is out of scope. But it would feel kind of weird not to validate it.)

The current PR (at time of writing) uses a naming convention to express this relationship, but this is problematic (shorthand for "I found it inconvenient" or "Till recoiled in horror", take your pick). We really just want to be able to point to the wasi:http/proxy world or the spin:up/http-trigger world and say "that one, validate against that one" rather than having to repackage it into a magically named world.

There are a couple of places where I don't think we can do the mappings:

  • Don't do them in the manifest. The user should not have to correctly remember which worlds are in Spin 3.7. They should say "I want to target Spin 3.7 and SpinKube 2.9", and we should figure out how that translates into WITs.
  • Don't do them in triggers, except for the CLI environment. The MQTT trigger, for example, should not carry knowledge about which versions of SpinKube support which versions of its world, or whether it is supported in Fermyon Cloud. Trigger plugins knowing their worlds works okay for the CLI experience because the version you are validating against is the version that you're asking.

Again sorry for the rambly post, I am just trying to get down the forces at work and the thought processes and options which I've explored.
