pyblish / pyblish-qml

Pyblish QML frontend for Maya 2013+, Houdini 11+, Nuke 8+ and more
GNU Lesser General Public License v3.0

Prompt failed message at the end if any error occurred after validation (default test) #344

Closed davidlatwe closed 5 years ago

davidlatwe commented 5 years ago

This PR resolves #343: the footer will now show a failed message if any instance errors during extraction or integration.


davidlatwe commented 5 years ago

This "Publish failed" is only on errors after Validation, correct?

I actually missed that case, but it is fixed now in 79bfbc0 :)

davidlatwe commented 5 years ago

Note that I added a new flag, "onPublished". I hope this makes sense to you.

Without this flag, the alternative would be to iterate over the plug-ins whose order comes after api.ValidationOrder, check whether any of them has an error, and then prompt the message. I prefer not to implement it that way because I think checking on instances makes more sense in this scenario.
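For reference, the alternative described here could be sketched like this. This is a hedged, self-contained illustration: the order constants follow pyblish's convention (Collector=0, Validator=1, Extractor=2, Integrator=3), but the result dicts are simplified stand-ins for what pyblish records in context.data["results"], not the library's actual schema.

```python
# Order constant mirroring pyblish's convention; the real constant
# would come from pyblish.api. The result dicts below are simplified
# stand-ins for context.data["results"].
VALIDATION_ORDER = 1

def publish_failed(results):
    """Return True if any post-validation plug-in raised an error."""
    return any(
        result["error"] is not None and result["order"] > VALIDATION_ORDER
        for result in results
    )

results = [
    {"plugin": "ValidateNames", "order": 1, "error": None},
    {"plugin": "ExtractAlembic", "order": 2, "error": RuntimeError("disk full")},
    {"plugin": "IntegrateAsset", "order": 3, "error": None},
]
print(publish_failed(results))  # True: an extractor errored after validation
```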

mottosso commented 5 years ago

Note that I added a new flag "onPublished", I hope this make sense to you.

Hmm, no sorry, this doesn't make sense. onSomething is event-handler grammar, something that happens in response to something. Here, it's a boolean, which I can't wrap my head around.

From how you are using it, would data["failed"] be better suited? Except, there already is a concept of failure. Is data["anyFailed"] what you are using it for? If any process failed, regardless of the test or order?

davidlatwe commented 5 years ago

Thanks!

The use case was to know that the process is inside the range between extraction and integration, so the publish-failed message won't override the message emitted when validation fails during a publish, here.

Maybe rename it to "isPublishing"?

davidlatwe commented 5 years ago

Ah, maybe "testPassed" is a better fit. The range between extraction and integration is actually the range that has passed the test. So if the test passed, any error after that point would fit the publish-failed case.
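A minimal sketch of the "testPassed" idea, with illustrative names rather than the PR's actual attributes: the test passing at the end of validation is recorded, and only an error after that point turns the footer into "Publish failed".

```python
# Illustrative footer logic, assuming a boolean recorded when the
# default test passes at the end of validation. Not the PR's code.
def footer_message(test_passed, error_after_validation):
    if not test_passed:
        return "Validation failed"   # the test stopped the publish
    if error_after_validation:
        return "Publish failed"      # extraction/integration errored
    return "Published"

print(footer_message(True, True))    # Publish failed
print(footer_message(False, False))  # Validation failed
```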

davidlatwe commented 5 years ago

Hope this makes much more sense! I will do more testing tomorrow :)

davidlatwe commented 5 years ago

Okay, I've tested this and it works as expected.


mottosso commented 5 years ago

Sorry, I'm still not sure it's any better to say publishing has failed if one plug-in fails. That's what the custom test is for; if extraction is critical to the success of a publish, then it should catch that and stop.

Also, what does retroactively fixing an extractor mean? Since integration has already run, isn't it the case that the files have already been uploaded and registered in whatever database it lives in from then on?

Would a custom test solve this problem?

BigRoy commented 5 years ago

Since integration has already run, isn't it the case that the files have already been uploaded and registered in whatever database it lives in from then on?

What if the Integrator failed? In that case you'd still want it to be crystal clear to the user that something happened.

Artists here were applauding this change. Not necessarily for the Integrator failing, but more for some weird Extractor acting up (Maya sometimes randomly doesn't want to export a thing; usually it's local disk space related, though). Since our list of plug-ins is so long, you don't necessarily see the bottom ones like the Integrator. As such, extra clarification that something happened down there is, I think, a good help.

I'd be happy with either "Publish failed" or "Errors occurred". I think the first one is clearer to the artist, but it might be technically wrong in technical speak, because the publish could potentially have succeeded and only failed on cleanup or some other plug-in.

mottosso commented 5 years ago

It sounds like both of you need a test to stop publishing if any plug-in fails? That would both print the right message and do the right thing?
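A custom test along these lines might look like the following. This is a hedged sketch: it mimics the keyword-argument signature of pyblish's default test (which receives variables such as "ordersWithError"), where returning a string halts publishing and returning None lets it continue; in a real pipeline it would be registered via pyblish.api.register_test(), and the exact variable names should be checked against the pyblish docs.

```python
# Hedged sketch of a custom pyblish test that stops publishing if any
# plug-in order has recorded an error, regardless of which stage it is.
# The signature mimics pyblish's default test; names are assumptions.
def stop_on_any_error(**vars):
    """Halt publishing as soon as any plug-in order has an error."""
    if vars["ordersWithError"]:
        return "failed at order %s" % min(vars["ordersWithError"])
    return None  # no errors: keep processing

# No errors yet: publishing continues (prints None).
print(stop_on_any_error(nextOrder=2, ordersWithError=set()))
# An extractor (order 2) errored: publishing stops with a message.
print(stop_on_any_error(nextOrder=3, ordersWithError={2}))  # failed at order 2
```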

tokejepsen commented 5 years ago

A feature of Pype's version of pyblish-lite is that once a step in the publishing process (collection/validation/extraction/integration) completes without any errors, that section collapses. For pipelines with long lists of plug-ins, this is helpful since it focuses users on where the issue is.

davidlatwe commented 5 years ago

Also, what does retroactively fixing an extractor mean?

Sorry for the confusion, I was just testing that the actions' messages don't get changed after this PR :)

It sounds like both of you need a test to stop publishing if any plug-in fails?

That actually won't work for me here, at least for now. I have some plug-ins that run at the end of integration to roll back any changes made to the scene if an error occurred, like unlocking the Maya scene. If the publish stopped, it could leave a dirty scene for the artist.

Plus, I think the only reasonable stop point is at the end of validation; if the flow isn't stopped there, it should keep moving. One more stop point means more situations to handle, I suppose.

A feature from Pype's version of pyblish-lite is that once a step in the publishing (collection/validation/extraction/integration) completes without any errors, that sections collapses.

That does sound like a good UX implementation :o

mottosso commented 5 years ago

Ok, I think we're getting closer to the real issue.

What if there were a class OnFailure(...) plug-in that got run whenever the test didn't pass? By default, this would run on failed validation, but if you had a custom test that fails even during extraction, it would run there instead.

Like actions, we could introduce an on = "failure" flag for plug-ins, defaulting to on = "process".
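A hypothetical sketch of that proposal; none of these class or attribute names are real pyblish API. Plug-ins carry an `on` attribute defaulting to "process", and an "on failure" plug-in only runs when the test did not pass.

```python
# Hypothetical dispatch for the proposed `on` flag; illustrative only.
class Plugin(object):
    on = "process"  # default: run as part of a normal publish

    def process(self, context):
        pass

class ExtractScene(Plugin):
    def process(self, context):
        context.append("extracted")

class UnlockSceneOnFailure(Plugin):
    on = "failure"  # only runs when the test did not pass

    def process(self, context):
        context.append("unlocked scene after failure")

def run(plugins, context, test_passed):
    wanted = "process" if test_passed else "failure"
    for cls in plugins:
        if cls.on == wanted:
            cls().process(context)
    return context

print(run([ExtractScene, UnlockSceneOnFailure], [], test_passed=False))
# ['unlocked scene after failure']
```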

BigRoy commented 5 years ago

Ok, I think we're getting closer to the real issue.

Aside from the "core issue" at hand, doesn't it make sense at the very least to notify the user of errors if they occurred? (Let's leave aside whether extraction/integration would ever need to fail in the middle.) At the very least, I think "Published, with errors.." would still be a good hint, even if integration worked fine and the "pipeline publish" succeeded but a plug-in failed. Or is there a good reason to let that fly by the user?

I'll let others reply to the OnFailure class idea; however, my only note would be that at first sight it seems overly complicated. If you always want to do something on failure, isn't there a callback you could hook into? It wouldn't be a plug-in, but I think you'd still be able to perform any cleanup if you have to.
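The callback route could be sketched with a tiny stand-in emitter like the one below. pyblish itself exposes pyblish.api.register_callback(signal, callback), but the exact signal name for a failing plug-in should be verified against the pyblish docs; "pluginFailed" here is an assumption, and the emitter is a simplified stand-in, not pyblish's implementation.

```python
# Tiny stand-in signal emitter illustrating the callback approach.
# The "pluginFailed" signal name is assumed, not confirmed pyblish API.
_callbacks = {}

def register_callback(signal, callback):
    _callbacks.setdefault(signal, []).append(callback)

def emit(signal, **kwargs):
    for callback in _callbacks.get(signal, []):
        callback(**kwargs)

def cleanup(plugin, error):
    # e.g. unlock the Maya scene, remove temp files, notify the artist
    print("cleaning up after %s: %s" % (plugin, error))

register_callback("pluginFailed", cleanup)
emit("pluginFailed", plugin="ExtractAlembic", error="disk full")
```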

mottosso commented 5 years ago

"Published, with errors.."

Sounds good to me.

davidlatwe commented 5 years ago

The teardown plug-in does sound interesting; I need some time to think about the relationship between the teardown plug-in and the process-continuing tests.

Yeah, the teardown plug-in is potentially the proper solution for this, I suppose. This PR should be considered a hotfix, while the teardown feature may cover this issue.

If this is the right perspective, maybe I should add a NOTE comment in the code?

"Published, with errors.."

Sounds good to me, too!

mottosso commented 5 years ago

If this is the right perspective to see this, maybe I add a NOTE comment in code ?

That's alright, I've made an issue for it that's been linked to this issue (see above).

With that message, this PR is good to merge!