Open Lypsolon opened 1 month ago
sooo here is a small scene to play around with.
it's important to know that currently publishing only works via the render button on the ROP. you cannot trigger the setup from the Publisher UI.
also, all files need unique file extensions per ayon_publish ROP node.
@Lypsolon I checked out your PR, decent start. I did some tweaking as I tested it: https://github.com/ynput/ayon-houdini/pull/127
The PR also fixes:
find out how to correctly split the .bgeo.sc extension from a Houdini file name like UT_PDG_Pub_PDG_GetAllData_v030.geometry1.1001.bgeo.sc
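As a sketch of that split: plain `os.path.splitext` only strips the last suffix, so `.bgeo.sc` comes back as just `.sc`. One option is a small list of known compound extensions; the list and helper name here are illustrative, not the actual PR code:

```python
import os

# Extensions that are really two suffixes; os.path.splitext() would
# only strip the last one. (Hypothetical list for this sketch.)
COMPOUND_EXTS = (".bgeo.sc", ".bgeo.gz")

def split_extension(filename):
    """Split a Houdini output filename into (base, extension),
    treating compound extensions like '.bgeo.sc' as one unit."""
    for ext in COMPOUND_EXTS:
        if filename.endswith(ext):
            return filename[: -len(ext)], ext
    return os.path.splitext(filename)

base, ext = split_extension("UT_PDG_Pub_PDG_GetAllData_v030.geometry1.1001.bgeo.sc")
print(base)  # UT_PDG_Pub_PDG_GetAllData_v030.geometry1.1001
print(ext)   # .bgeo.sc
```

Any extension not in the compound list falls through to the normal `splitext` behavior.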
We'll need to keep in mind some other goals to clean things up down the line:

* `node.addError`?
soo a bit later we got a little system that exports an advanced character via Deadline. i will append a scene that i used for experimentation.
it's important to know that this system does not support AYON's submit-to-farm right now, as submit-to-farm does not generate the dependent jobs.
also, the system will run jobs for all the nodes in the graph (as Deadline usually does). this is nice because upstream nodes are executed, so if we do things like run node A -> load the output -> export the loaded data, this should work. but i have the feeling that things are run multiple times (i have no proof for this, but that's what i think)
1. This HDA file `ayon-houdini\dell\ynput.dev.ayon_publish.1.6.hda`, I think you can remove it from this PR. You can keep it on your side if you are using it, but can just exclude it when using `git add`:
```
git add .
git restore --staged dell/ynput.dev.ayon_publish.1.6.hda
```
forgot to remove it. did.
2. when saving/creating an HDA for ayon-houdini, we:
* set the menu entry to `AYON`
* set the author to `ayon`
* set the version to `1.0`. currently, we don't change the version unless we have made non backward compatible changes. (actual version control is done explicitly with git)
* don't use branches or add them to the HDA name.
* add the Houdini context to the node name. so, the name should be `ayon_rop_publish`
done.
Hey, could you add some additional info to the PR description for future reference?
e.g.
Please, correct me.
SOHO describes what happens when the render button is hit. SOHO is invoked through a SOHO output driver, which is an OTL with some specific parameters created to control the behavior of the output driver.

SOHO comes with standard callback functions, or hooks: `pre-render`, `pre-frame`, `post-render` and `post-frame`.

For this prototype, we are using SOHO to leverage the power of these parameters: currently, we trigger/run the AYON publish logic (`hou.phm().run()`) using the `pre-render` script.

Yes, I've re-arranged the parameters to take this screenshot.

For more info, please refer to [SOHO | Houdini Docs](https://www.sidefx.com/docs/houdini/render/soho.html) and [SOHO | HDK Docs](https://www.sidefx.com/docs/hdk/_h_d_k__s_o_h_o.html)
I wonder if we could have a `Driver` tab in compliance with the mantra node, which can be used to perform headless publishes? I'm not sure if it's a good idea, but just sharing food for thought.
I gave it a simple test, this one works. (but it only publishes the first frame)
and this one raises an error.
ayon_core.pipeline.publish.publish_plugins.KnownPublishError: Transfer to destination is already in queue: \\storage\work\ayon\RnD\Ideas\work\Generic_Publisher\geo\rnd_Ideas_Generic_Publisher_v005.Cube.1001.bgeo.sc -> \\storage\work\ayon\RnD\Ideas\publish\pub\pubTestCube\v007\rnd_Ideas_pubTestCube_v007.bgeo.sc. It's not allowed to be replaced by a new transfer from \\storage\work\ayon\RnD\Ideas\work\Generic_Publisher\geo\rnd_Ideas_Generic_Publisher_v005.Sphere.1001.bgeo.sc
so, a question: are multiple inputs considered different products? or should they be considered one product with multiple representations?
The current status of this PR: we have one product: `product_type` -> `pub` and `product_name` -> `{product_type}_{variant_name}`.
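As a trivial illustration of that template (a hypothetical helper, just to show how the product name derives from the product type and variant; the real logic lives in the creator code):

```python
def product_name(product_type, variant_name):
    """Build the product name from the {product_type}_{variant_name}
    template described above. (Sketch only.)"""
    return f"{product_type}_{variant_name}"

print(product_name("pub", "testCube"))  # pub_testCube
```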
i'm gonna start with this one.
I gave it a simple test, this one works. (but it only publishes the first frame)
this only publishes the first frame because the default behavior of the /out context is to use whatever is set in the last node. aka if your AyonPub node is set to a single frame, everything above will be a single frame. (you can test this behavior with multiple geo publishes in a row: set the upper ones to a frame range and the last one to a single frame)
this behavior does not apply to null nodes or nodes that don't have a frame range to set. but those nodes also don't have a pre-render script.
it's also important to know that when you use the default Deadline submitter, it allows you to export the frame ranges from the different nodes. we currently support this by checking the outer bounds of what a node can publish and then checking how much of this is actually on disk. this runs the risk that if something did not export correctly, our node will ignore it instead of erroring, but we can't get the correct range from the node because this can be changed at runtime.
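The "outer bounds vs. what is actually on disk" check could look roughly like this; the helper name and file pattern are made up for illustration, and note how a frame that failed to export is silently skipped rather than raising, which is exactly the risk described above:

```python
import os
import tempfile

def frames_on_disk(directory, pattern, start, end):
    """Return the frames in [start, end] whose files exist on disk.
    `pattern` contains a {frame} placeholder, e.g. 'geo.{frame:04d}.bgeo.sc'.
    (Hypothetical helper; the real node checks the ROP's outer frame bounds.)"""
    present = []
    for frame in range(start, end + 1):
        path = os.path.join(directory, pattern.format(frame=frame))
        if os.path.isfile(path):
            present.append(frame)
    return present

# Simulate an export where frame 1003 failed to write.
with tempfile.TemporaryDirectory() as tmp:
    for frame in (1001, 1002, 1004):
        open(os.path.join(tmp, f"geo.{frame:04d}.bgeo.sc"), "w").close()
    found = frames_on_disk(tmp, "geo.{frame:04d}.bgeo.sc", 1001, 1004)
    print(found)  # [1001, 1002, 1004] -- the missing frame is silently skipped
```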
and this one raises an error.
ayon_core.pipeline.publish.publish_plugins.KnownPublishError: Transfer to destination is already in queue: \\storage\work\ayon\RnD\Ideas\work\Generic_Publisher\geo\rnd_Ideas_Generic_Publisher_v005.Cube.1001.bgeo.sc -> \\storage\work\ayon\RnD\Ideas\publish\pub\pubTestCube\v007\rnd_Ideas_pubTestCube_v007.bgeo.sc. It's not allowed to be replaced by a new transfer from \\storage\work\ayon\RnD\Ideas\work\Generic_Publisher\geo\rnd_Ideas_Generic_Publisher_v005.Sphere.1001.bgeo.sc
so, a question: are multiple inputs considered different products? or should they be considered one product with multiple representations?
The current status of this PR: we have one product: `product_type` -> `pub` and `product_name` -> `{product_type}_{variant_name}`.
yes, this errors because every publish node you place is its own product, and the inputs get to be representations.
this is just the way this feature was planned, because it makes control over the node graph easier.
Hey, could you add some additional info to the PR description for future reference? e.g.
* References like [SOHO | Houdini Docs](https://www.sidefx.com/docs/houdini/render/soho.html), [SOHO | HDK Docs](https://www.sidefx.com/docs/hdk/_h_d_k__s_o_h_o.html).
* Also, mention the benefits of using SOHO.
Please, correct me.
Additional Info: About SOHO
SOHO describes what happens when the render button is hit. SOHO is invoked through a SOHO output driver, which is an OTL with some specific parameters created to control the behavior of the output driver.

SOHO comes with standard callback functions, or hooks: `pre-render`, `pre-frame`, `post-render` and `post-frame`.

For this prototype, we are using SOHO to leverage the power of these parameters: currently, we trigger/run the AYON publish logic (`hou.phm().run()`) using the `pre-render` script.

Yes, I've re-arranged the parameters to take this screenshot. For more info, please refer to SOHO | Houdini Docs and SOHO | HDK Docs
so having a lot of documentation about SOHO is a bit over the top if you ask me.
hmm, we can add a bit if it's needed, sure, but i use the description for to-dos and keeping it short makes my life easier, so let's keep it in a comment.
also, you talk a lot about SOHO, but we only need it because we have an output driver.
the content of the SOHO script is just `pass`, and we use the pre-render callback to call a python function in ayon-houdini.
I wonder if we could have a `Driver` tab in compliance with the mantra node, which can be used to perform headless publishes? I'm not sure if it's a good idea, but just sharing food for thought.
hmm, i don't know if we need this tbh. the main target that roy gave me is to work with the Deadline driver, and this thing already publishes in the background.
also, we need the node graph to be intact for our operation. aka a .hip file needs to be somewhere so we can get the current state of the node graph, while the mantra node only needs the geo scene and can just load it into memory.
with that in mind, i will place it later onto the wanted list?
@BigRoy and i talked last week about the question whether dependent network publishes might be useful. i said yes, he said show me, so here we go.
let's say you have a pipeline like this: you have Houdini artists that work mainly in SOPs (because that's nice), but your pipeline for shading, lighting etc. is in USD. now you have a simulation, let's say sand or stones on the ground, and you have a second artist that will scatter plants on this sim (or simulate the impact of the moving stones on those plants). for this it's great not to deal with USD, because your USD file will probably use an instancer to store only points with the sim data on top, and de-referencing USD in SOPs is heavy if you have an instancer.
with that in mind, we don't want to simulate 2 times, so having the sim triggered before we trigger the USD export is efficient. we could probably use a file cache, but this setup is nice and small.
hmm, we can add a bit if it's needed, sure, but i use the description for to-dos and keeping it short makes my life easier, so let's keep it in a comment.
Personally, I'm fine as long as the info is reachable. Maybe we can adjust the PR later to be consistent with other PRs, where we have some sections like: changelog description, how the feature works, demo, testing notes. this is important for future reference.
e.g. this falls under how the feature works
also, you talk a lot about SOHO, but we only need it because we have an output driver.
the content of the SOHO script is just `pass`, and we use the pre-render callback to call a python function in ayon-houdini.
hmm, i don't know if we need this tbh. the main target that roy gave me is to work with the Deadline driver, and this thing already publishes in the background.
also, we need the node graph to be intact for our operation. aka a .hip file needs to be somewhere so we can get the current state of the node graph, while the mantra node only needs the geo scene and can just load it into memory.
with that in mind, i will place it later onto the wanted list?
Personally, I imagine something like splitting the export and publish jobs, where we have AYON, the publish info and a list of files to publish. I think some people will complain about needing a Houdini license just for running a python script. anyways, this is just food for thought, let's keep it in mind.
Personally, I'm fine as long as the info is reachable. Maybe we can adjust the PR later to be consistent with other PRs, where we have some sections like: changelog description, how the feature works, demo, testing notes. this is important for future reference.
e.g. this falls under how the feature works
also, you talk a lot about SOHO, but we only need it because we have an output driver. the content of the SOHO script is just `pass`, and we use the pre-render callback to call a python function in ayon-houdini.
yes, we should add it up there, but not while it's in Draft. currently this PR is just a work space. let's stop treating it as something meant for reference; let's treat it like a branch with a notepad.
let's not get caught up in the idea that this PR is gonna get testing notes or a changelog right now. we don't know what we will change; this is a Draft, not a PR.
also, i don't treat a Draft PR like a post or documentation draft. a Draft PR for me is a place to work on a feature that will be as dirty as it needs to be, and when the feature is cleaned up and it's decided this PR is gonna go live, then and only then will i clean up the description and add testing notes and a changelog. if i did this right now, i would need to add to the changelog every time i push, and with every idea and feature that we change, the testing notes would need to change. this would take forever.
yes, this errors because every publish node you place is its own product, and the inputs get to be representations.
like this?
there was a debate about how to handle temp data.
this is an image where both sides first need to export some temp data and then publish a variation of that.
the question is what side is nicer to use.
left side:
right side:
There is an issue in your explanation of the left side because it reads like it only ever uses one input, but instead it publishes all direct inputs (not just the first direct input - it just doesn't recurse up the graph further and only considers direct input).
I think the right is visually nicer - but it also adds more complexity in us needing to 'manage' the dependency tree more. Plus I'm worried about the cases where a publisher breaker is in there, but a connection accidentally also 'passes' it around in complex graphs. As soon as we get to more complex graphs I think it may be visually nicer to just see whatever the inputs to the publish ROP itself are - without needing to care about anything else. It makes our ROP more stupid and only care about its direct input, which I think from a coding perspective is actually a good thing too. (Plus it requires less custom "layer break" nodes and whatnot)
TL;DR - my vote goes for left.
@antirotor @MustafaJafar on those two graphs, thoughts?
It makes our ROP more stupid and only care about its direct input, which I think from a coding perspective is actually a good thing too. (Plus it requires less custom "layer break" nodes and whatnot)
I agree with you, the one on the left should be simpler. But I still have some problems understanding the intended flow, e.g. how to interpret the graph?
As far as I understood, the publisher node resembles a product, and its inputs resemble representations.
e.g. this should be my `testCube` product, its type is `pointcache`. it has two representations: `bgeo` and `abc`. also, it has an additional review representation.
In this context, how can I interpret the graph on the left? knowing that the graph is equivalent to:
So, how can we tell the difference between:

* A ROP that I want to run at a particular time within the node graph.
* Representations to publish.
* Auxiliary files that should be included in a representation as they are referenced by that particular representation. (but they are generated by a separate ROP)

Also, when computing the dependency graph, I'd like to stick to the default Houdini behavior, without adding any additional logical relations to it. So, these two graphs should give the same results.
So, how can we tell the difference between:

* A ROP that I want to run at a particular time within the node graph.
* Representations to publish.
* Auxiliary files that should be included in a representation as they are referenced by that particular representation. (but they are generated by a separate ROP)
let me try to answer this. the order of compute will always be handled by Houdini. aka in my left example only the direct inputs are exported, but because the USD export is dependent on the geo export, the geo export will be run first and not in parallel. so the answer to Q1 is: Houdini does it for us if we trigger .render() on a downstream node.
Q2: the representations to publish are decided by the file extension. aka you will get a publish product (that currently has no validation etc.) and every node gets to be a representation, and if you did 2 nodes with the same file type you get an error. i believe this is in line with the design of AYON.
ps: i don't think there is a good way to say "if 2 nodes have the same file type we add it to the same representation"; i imagine this to be very confusing.
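A minimal sketch of that "one representation per file extension" rule (illustrative names only, not the actual plugin code):

```python
def build_representations(output_files):
    """Map each input ROP's output file to a representation keyed by
    extension; two ROPs producing the same extension raise an error,
    since each file type may only occur once per publish node."""
    representations = {}
    for path in output_files:
        ext = path.rsplit(".", 1)[-1]
        if ext in representations:
            raise ValueError(
                f"Duplicate file type '{ext}': {path} collides with "
                f"{representations[ext]}"
            )
        representations[ext] = path
    return representations

reps = build_representations(["cube.1001.bgeo", "cube.abc"])
print(sorted(reps))  # ['abc', 'bgeo']
```

With two `.bgeo` inputs on one node this raises, which matches the "Transfer to destination is already in queue" failure seen earlier in the thread.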
Q3: with this one i'm unsure what you mean tbh, so i can't fully answer it right now, sorry.
Also, when computing the dependency graph, I'd like to stick to the default Houdini behavior, without adding any additional logical relations to it. So, these two graphs should give the same results.
the thing is, those 2 graphs don't give you the same result, because the calculation is ordered differently.
on the left: the USD publish can run in parallel with the geo publish (in Deadline it would; in Houdini it depends on how parallel things get handled)
on the right: the USD publish can only run after the geo publish.
this is because Houdini does not see the graph from the last node but from a dependency standpoint. aka on the right the USD ROP is dependent on the geo ROP, so it gets blocked until the geo ROP is done. on the left it's not, so it can run when it wants. this is a problem if your USD export imports the geo export.
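To illustrate the ordering difference, here is a tiny stand-in model of dependency evaluation (not Houdini's or Deadline's actual scheduler): nodes run in "waves", and a node may run once all of its dependencies have run:

```python
def evaluation_waves(deps):
    """Group nodes into waves that could run in parallel. `deps` maps
    each node to the set of nodes it depends on. (Illustrative model.)"""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = sorted(n for n, d in deps.items()
                      if n not in done and d <= done)
        if not wave:
            raise ValueError("cycle in dependency graph")
        waves.append(wave)
        done.update(wave)
    return waves

# Left graph: both ROPs are only inputs of the publish ROP itself.
left = {"geo_rop": set(), "usd_rop": set(),
        "publish": {"geo_rop", "usd_rop"}}
# Right graph: the USD ROP is wired after the geo ROP, so it is blocked.
right = {"geo_rop": set(), "usd_rop": {"geo_rop"},
         "publish": {"usd_rop"}}
print(evaluation_waves(left))   # [['geo_rop', 'usd_rop'], ['publish']]
print(evaluation_waves(right))  # [['geo_rop'], ['usd_rop'], ['publish']]
```

In the left graph the two exports share a wave, so the USD export may start before the geo files exist on disk, which is the problem if the USD export imports the geo export.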
There is an issue in your explanation of the left side because it reads like it only ever uses one input, but instead it publishes all direct inputs (not just the first direct input - it just doesn't recurse up the graph further and only considers direct input).
you're right, my bad.
I think the right is visually nicer - but it also adds more complexity in us needing to 'manage' the dependency tree more. Plus I'm worried about the cases where a publisher breaker is in there, but a connection accidentally also 'passes' it around in complex graphs. As soon as we get to more complex graphs I think it may be visually nicer to just see whatever the inputs to the publish ROP itself are - without needing to care about anything else. It makes our ROP more stupid and only care about its direct input, which I think from a coding perspective is actually a good thing too. (Plus it requires less custom "layer break" nodes and whatnot)
TL;DR - my vote goes for left.
honestly, after playing around a bit, i think the left is a bit simpler; not having an extra node to handle is a good thing. but i also think it might be more complex to explain? but this is just a feeling; i find the right one simpler to understand (but this might be because i implemented it)
with that in mind, if the left one actually covers all grounds and we don't forget something, i will vote for it too. it will make the code a lot simpler (but we will need to rewrite / reduce a good few functions)
Auxiliary files that should be included in a representation as they are referenced by that particular representation. (but they are generated by a separate ROP)
This is a good question. But:
[Just For Reference] we took the decision that the publish node should only publish direct inputs.
here are the changes that are gonna happen.
soo, a new commit has been done and now we only accept direct inputs. i removed everything we don't need (maybe, that is)
A follow-up comment with 2 test scenes: 1 for the more "complex" test roy gave me, and 1 for the 3 simple ones.
Regarding raising errors on the node:
in Houdini, if you want to raise an error on a node, you do `raise hou.NodeError()`.
this works only if we are in an execution that is on the node.
but with pyblish we are not on the node anymore. furthermore, errors are shadowed/handled by pyblish, so they never get to the node.
i assume we would need some kind of a callback or similar to get back into the node's unit of execution, to raise the error in the correct place to achieve this.
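One possible shape of that callback idea, sketched in plain Python with a stand-in node object (inside Houdini this would go through `hou`, e.g. `node.addError()`; nothing here is the actual ayon-houdini code): collect the per-plugin errors during publish, then report them on the node afterwards:

```python
class FakeNode:
    """Stand-in for a hou.Node; inside Houdini you would call
    node.addError() (or raise hou.NodeError) instead."""
    def __init__(self):
        self.errors = []
    def add_error(self, message):
        self.errors.append(message)

def publish_with_node_errors(node, plugins):
    """Run publish-like callables, catching their exceptions the way
    pyblish records plugin errors, then surface them on the node
    afterwards, back in the node's own unit of execution. (Sketch.)"""
    collected = []
    for plugin in plugins:
        try:
            plugin()
        except Exception as exc:  # pyblish would record this per plugin
            collected.append(f"{plugin.__name__}: {exc}")
    for message in collected:  # report back on the node
        node.add_error(message)
    return not collected  # True if the publish succeeded

def ok(): pass
def bad(): raise RuntimeError("missing output file")

node = FakeNode()
success = publish_with_node_errors(node, [ok, bad])
print(success)      # False
print(node.errors)  # ['bad: missing output file']
```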
Changelog Description
Implements a special AYON Publish ROP node that publishes a product from any of the files of its input ROPs. This way, any ROP can easily be published into a product type of the user's choice, and it can act as a generic building block to allow publishing from within custom studio HDAs.
Additional review information

This PR implements a General Publish node into the `/out` context. The idea is that you will be able to publish everything you want.

The approach

A new AYON Publish ROP will publish any of its direct inputs as representations to a new product defined by the AYON Publish ROP's attributes.

With systems like Deadline's own submitter from Houdini you can then publish complex graphs to Deadline without requiring AYON logic to do the farm submission itself.
Implementation

* `ayon_publish()` function in the ayon_houdini module, to keep most code in the AYON codebase instead of the HDA. The `ayon_publish` function calls publish with a selection of pyblish plugins.
* Publishing calls the `render()` function on the given `ayon_publish` node. In case we publish from a node, we set a variable to bypass the `.render()` call, because the nodes will call it themselves.

Known issues

* The `$HIP` variable gets out of sync, because upstream nodes will export with a different file version than downstream nodes.
* Rendering to `ip` or `md` does not work, because it writes no files.

Testing notes:
Local publish

Farm publish

It should also work completely on the farm when customizing some logic in Deadline's submitter, see the changes here.

Complex dependency graphs

A complex dependency graph in `/out` should also work fine. See the example files shared here.

TODO

* Raise errors via `node.addError`?
* `model` validations if publishing a `model` product type (this may require a separate PR to differentiate the relevant validators for certain ROPs to target the ROPs instead, once https://github.com/ynput/ayon-core/pull/994 is merged).
* Don't store `AYON_productName` on the instance, but compute it at runtime from creator + variant, etc.