ioangatop opened 3 months ago
I think this would not be possible. If you run `inspect.signature(torch.argmax)`, it just fails, and this wouldn't even be possible to fix on the PyTorch side. The problem is that `torch.argmax` has multiple signatures (see `help(torch.argmax)`), which is something that native Python functions don't support.
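For reference, the same limitation can be reproduced without torch: several CPython builtins also have multiple call signatures and no single introspectable one. Here `max` stands in for `torch.argmax` purely as an analogy:

```python
import inspect

# `max` accepts several distinct call signatures, so CPython cannot
# build a single Signature object for it, just like torch.argmax
try:
    inspect.signature(max)
    print("signature found")
except ValueError as err:
    print("no signature:", err)
```

On CPython this raises `ValueError`, which is the same failure mode jsonargparse hits when it tries to introspect `torch.argmax`.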
I will keep this in mind in case some better idea comes up. But now I think a wrapper class is the best option. Possibly a single class which gets the torch function name so that there is no need for one class for each function.
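A minimal sketch of that single-wrapper idea (hypothetical class name; `math` stands in for `torch` here so the snippet runs on its own — in practice the lookup would be `getattr(torch, name)`):

```python
import math
from typing import Any, Callable


class FunctionWrapper:
    """One callable class for any function, resolved by name (sketch only)."""

    def __init__(self, name: str, **kwargs: Any) -> None:
        # math stands in for torch; real code would do getattr(torch, name)
        self.fn: Callable[..., Any] = getattr(math, name)
        self.kwargs = kwargs

    def __call__(self, *args: Any) -> Any:
        return self.fn(*args, **self.kwargs)


sqrt = FunctionWrapper("sqrt")
print(sqrt(9.0))  # 3.0
```

Because the class itself has a plain, introspectable `__init__`, jsonargparse can type-check it even though the wrapped function cannot be introspected.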
Thanks for the fast response!
> Possibly a single class which gets the torch function name so that there is no need for one class for each function.
This is also what I did, but I also came up with a different, somewhat hacky idea: pass it as a dict and parse it later into a partial function. For example:
```python
import functools
from typing import Any, Callable, Dict
import jsonargparse
import torch

class SomeClass:
    def __init__(self, process: Callable[..., torch.Tensor] | Dict[str, Any]) -> None:
        self.process = self.parse(process) if isinstance(process, dict) else process

    def parse(self, item: Dict[str, Any]) -> functools.partial:
        return functools.partial(
            jsonargparse._util.import_object(item["class_path"]), **item.get("init_args", {})
        )
```
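The same parse-to-partial idea can be sketched with only the standard library, using `importlib` in place of `jsonargparse._util.import_object` (a private helper, so relying on it is fragile). `operator.mul` is used here just as a stand-in for a torch function:

```python
import functools
import importlib
from typing import Any, Dict


def import_object(path: str) -> Any:
    # minimal stand-in for jsonargparse._util.import_object:
    # resolve a dotted "module.attr" path to the object it names
    module_path, _, attr = path.rpartition(".")
    return getattr(importlib.import_module(module_path), attr)


def parse(item: Dict[str, Any]) -> functools.partial:
    # bind any init_args now so the result is a plain callable
    return functools.partial(import_object(item["class_path"]), **item.get("init_args", {}))


fn = parse({"class_path": "operator.mul"})
print(fn(6, 7))  # 42
```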
If you have any other idea how to improve it, please do let me know 🙏
🚀 Feature request
Hi! I would like to use torch functions straight from the `yaml` file, for example:

However, I have not had success, as the typing check fails, as with the following:
The way around it is to wrap them in callable classes, but it would be great to support them directly, like the dotted imports for the torch optimizers, so I don't have to duplicate them or create a wrapper class.
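The concrete example from the original post did not survive in this thread, but the kind of config being asked about would look roughly like this (hypothetical keys, shown only to illustrate the request):

```yaml
# hypothetical sketch: naming a torch function straight from yaml
model:
  init_args:
    process: torch.argmax   # fails today, since torch.argmax cannot be introspected
```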