Closed: @chlunde closed this issue 1 week ago.
@chlunde thank you for the report. I assume we could make tolerant parsing an option to be set.
@saschagrunert the main issue is that two different parsing functions are used, and the strict one runs after the filter. I could of course add an option, but I'm not sure that's required here.
Would it be OK to set tolerant as the default (to match what is allowed through the filter now), and then add a flag if there's a use case to run only with the strict one?
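For context, the mismatch can be sketched roughly as follows, assuming `github.com/blang/semver/v4` (which the `parseTolerant` naming later in this thread suggests); `matchesFilter` here is a hypothetical stand-in for the tolerant tag filter, not zeitgeist's actual code:

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

// matchesFilter is a hypothetical stand-in for the tag filter: tolerant,
// so it accepts shapes like "37.198" that lack a patch component.
func matchesFilter(tag string) bool {
	_, err := semver.ParseTolerant(tag)
	return err == nil
}

func main() {
	tag := "37.198"

	// The filter lets the tag through...
	fmt.Println("filter accepts:", matchesFilter(tag))

	// ...but the strict parse that runs afterwards rejects it.
	if _, err := semver.Parse(tag); err != nil {
		fmt.Println("strict parse fails:", err)
	}
}
```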
Hmm, tough call, I agree we should have consistent handling (either all strict or all tolerant).

Potential way forward:
- Set the container match to default to `strict` (potentially a breaking change, but as seen above the functionality is already broken)
- Add an option on each dependency for whether the semver should be tolerant, defaulting to `false`

wdyt?
I'd prefer that solution. :+1:
> Add an option on each dependency for whether the semver should be tolerant, defaulting to `false`

Hmmm, there's already `scheme: semver`. Would `scheme: semver-tolerant` (I don't like the name, but I don't have anything better) instead of a new flag make sense?
Actually, there are some options per flavour already, like:

```go
// GitLab upstream representation
type GitLab struct {
	Base `mapstructure:",squash"`
	...
	// Optional: semver constraints, e.g. < 2.0.0
	// Will have no effect if the dependency does not follow Semver
	Constraints string
}
```

```go
func latestGitLabRelease(upstream *GitLab) (string, error) {
	...
	semverConstraints := upstream.Constraints
	if semverConstraints == "" {
		// If no range is passed, just use the broadest possible range
		semverConstraints = DefaultSemVerConstraints
	}
	...
}
```
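For reference, a constraint string like the `< 2.0.0` example above is typically applied via blang/semver's `ParseRange`; here is a rough, hypothetical sketch (not zeitgeist's actual code) of how such a constraint filters candidate versions:

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

func main() {
	// "<2.0.0" mirrors the example in the Constraints field comment above.
	constraints, err := semver.ParseRange("<2.0.0")
	if err != nil {
		panic(err)
	}

	// Candidate tags as an upstream might return them.
	for _, tag := range []string{"1.4.0", "1.9.3", "2.1.0"} {
		v, err := semver.ParseTolerant(tag)
		if err != nil {
			continue // skip tags that are not semver at all
		}
		if constraints(v) {
			fmt.Println("acceptable version:", v)
		}
	}
}
```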
> Actually, there are some options per flavour already
Yeah, that's what I had in mind 👍 adding it as an extra option there.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@Pluies: Reopened this issue.
Will try and have a look at this soon!
After starting implementation for the solution drafted above (with a switch etc.), I've changed my mind: I now think it makes more sense to use `parseTolerant` everywhere.

We already effectively use `parseTolerant` in the offending code in `dependency/version.go`, as we remove the leading `v` there. And adding the whole strict/tolerant distinction to each upstream's code introduces extra complexity and cognitive load for users.

Let's fix the current bug and use `parseTolerant` going forward 👍
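To make the difference concrete, here is a small sketch of the two parse functions against the tag shapes from the report below, again assuming `github.com/blang/semver/v4`:

```go
package main

import (
	"fmt"

	"github.com/blang/semver/v4"
)

func main() {
	// Tag shapes from the original report: GHCR returns both 37.198 and 37.198.0.
	for _, tag := range []string{"37.198", "37.198.0", "v37.198.0"} {
		if _, err := semver.Parse(tag); err != nil {
			fmt.Printf("Parse(%q) fails: %v\n", tag, err)
		}
		if v, err := semver.ParseTolerant(tag); err == nil {
			fmt.Printf("ParseTolerant(%q) = %s\n", tag, v)
		}
	}
}
```

Only `37.198.0` survives the strict path, while `ParseTolerant` normalizes all three to `37.198.0`.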
What happened:
Our `zeitgeist validate` workflow randomly stopped today with the error:

What you expected to happen:
Result:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
This didn't happen last week, so I guess it occurs randomly depending on the order of tags returned by the GHCR API (for example, whether 37.198 is returned before or after 37.198.0).
Environment:
N/A