freud14 opened this issue 3 years ago
How about adding metric learning models like ArcFace, CosFace, etc.? If so, I can create a pull request and implement them.
Hi @Atharva-Phatak, thank you for the suggestion, I think it's a really good idea. However, before you implement this feature, I would like to know more about how you're planning to implement it. For instance, do you plan to have a unified interface which would be used by all learned metrics? If so, what would this interface look like? Do you plan on providing utilities for re-training them or just pre-trained learned metrics?
Well, modules like ArcFace and CosFace have different logic (some calculations differ), so I was thinking of implementing them as simple classes which inherit from nn.Module and have their own forward method.
There is no point in developing a unified interface because the parameters and core logic they require are different.
Also, ArcFace and CosFace are more like heads which are used on top of pretrained backbones instead of simple linear layers; they are used extensively in face recognition.
Let me give you an example.
class ArcFace(nn.Module):
    def __init__(self, some_params):
        # initialize those params
        ...

    def forward(self, x, targets):
        # core logic of ArcFace
        ...

# Then you can create an ArcFace head like this:
arcface = ArcFace(some_params)

# Then you can add this ArcFace layer/module on top of any model:
res18 = torchvision.models.resnet18(pretrained=True)
res18.head = ArcFace(some_params)
And voilà, you can now train your own metric learning model.
I hope I was clear enough. If you have further doubts, I can definitely clarify further.
Hi,
I don't know if you've seen, but in Poutyne we split the network part and the loss function part. Now, as I understand it, ArcFace would be used as a loss function. So either the user would have to do manipulations so that the parameters of the loss function are passed to the optimizer, or we do it ourselves in Model when the user passes a string for the optimizer. So I'm just wondering how you see the integration of this with Poutyne. Could you give me an example (maybe starting from your current example?) of how you see this integration?
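To make the concern above concrete, here is a rough, runnable sketch of the extra wiring the user would currently need. The ToyLearnedLoss class is a toy stand-in for a learned-metric loss with its own parameters; it is not an existing Poutyne or ArcFace class.

import torch
import torch.nn as nn
import torch.nn.functional as F
from poutyne import Model

class ToyLearnedLoss(nn.Module):
    def __init__(self, embedding_size, num_classes):
        super().__init__()
        # The "loss" has learnable weights of its own, like a learned metric would.
        self.classifier = nn.Linear(embedding_size, num_classes)

    def forward(self, embeddings, targets):
        return F.cross_entropy(self.classifier(embeddings), targets)

network = nn.Sequential(nn.Linear(20, 512), nn.ReLU())  # toy embedding network
loss_fn = ToyLearnedLoss(embedding_size=512, num_classes=10)

# Because the loss function has its own parameters, the user currently has to
# pass them to the optimizer explicitly along with the network's parameters.
optimizer = torch.optim.SGD(list(network.parameters()) + list(loss_fn.parameters()), lr=0.01)

model = Model(network, optimizer, loss_fn)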
Thank you.
Hi,
So the paper presents it as a loss function, but in implementation it is actually a series of steps (called margin products) which help you obtain the logits, and once you get the logits you can apply any loss function on them (same as classification).
So let's consider the example.
class ArcFace(nn.Module):
    def __init__(self, some_params):
        # initialize those params
        ...

    def forward(self, x, targets):
        # core logic of ArcFace; returns the logits
        ...

# Then you can create an ArcFace head like this:
arcface = ArcFace(some_params)

# Then you can add this ArcFace layer/module on top of any model:
res18 = torchvision.models.resnet18(pretrained=True)
res18.head = ArcFace(some_params)

logits = res18(inputs, targets)  # while training, ArcFace requires the targets (to compute the margins)
loss = loss_fn(logits, targets)
The above steps are the same as what Poutyne does: when creating the model, the user just defines the ArcFace or ArcFaceHead (names can be discussed later) in the model, and then the loss can be computed in the standard way Poutyne does it.
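To make the schematic example above concrete, here is a minimal self-contained sketch of one common way such a head is implemented and attached to a backbone. The class internals, hyperparameter values and wiring are illustrative, not an existing Poutyne API, and, unlike the snippet above, the backbone and the head are called separately, since torchvision's ResNet forward does not accept targets.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class ArcFace(nn.Module):
    """Additive angular margin head: maps embeddings to logits on which any
    standard classification loss (e.g. cross-entropy) can be applied."""

    def __init__(self, embedding_size, num_classes, scale=64.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_size))
        nn.init.xavier_uniform_(self.weight)
        self.scale = scale
        self.margin = margin

    def forward(self, x, targets):
        # Cosine similarity between L2-normalized embeddings and class weights.
        cosine = F.linear(F.normalize(x), F.normalize(self.weight))
        # Add the angular margin only to the target-class angles.
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        target_cosine = torch.cos(theta + self.margin)
        one_hot = F.one_hot(targets, num_classes=cosine.size(1)).float()
        return self.scale * (one_hot * target_cosine + (1 - one_hot) * cosine)

# Usage with a torchvision backbone as a feature extractor.
backbone = torchvision.models.resnet18(pretrained=True)
backbone.fc = nn.Identity()            # keep the 512-d embeddings
head = ArcFace(embedding_size=512, num_classes=10)

inputs, targets = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
logits = head(backbone(inputs), targets)   # targets are needed to compute the margins
loss = F.cross_entropy(logits, targets)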
I hope this helps.
Hi, sorry for the delay. Could you make a complete working/runnable example with Poutyne just so I can see how everything fits together? It can be with a dummy learned metric to make it simpler.
Like in scikit-learn, the model should deduce the number of input features from the training input data. This is an issue also for other similar libraries like skorch.
Hi @AlekseyFedorovich, I am not sure of the need to deduce these. Could you elaborate? By the way, we already deduce the number of training samples so that the aggregation of the batch metrics is exact.
@freud14 If you put your predictor in a pipeline with other transformations, it is very unhandy to pass the input dimensions to the predictor inside the pipeline (in many cases you don't know the dimensions a priori, like in feature selection based on statistics). Furthermore, from the "Do not Repeat Yourself" point of view, this is redundant since the dimension of the input X can be deduced from X itself during training.
Hi, Poutyne does not create the architecture of the network for you and thus cannot modify your network in that way. However, it might be a good idea to make some kind of class that calls a user-defined function to instantiate the network with some input size. I've never really used ML pipelines (I suppose à la sklearn pipelines?) but I would be open to a contribution in that direction. Let me know if you have any plan of doing so.
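As a side note, PyTorch's lazy modules already cover part of this use case on the network side, without any change to Poutyne:

import torch.nn as nn

# nn.LazyLinear infers in_features from the first batch it sees, so the input
# dimension does not have to be known when the network is built.
network = nn.Sequential(
    nn.LazyLinear(64),   # in_features deduced on the first forward pass
    nn.ReLU(),
    nn.LazyLinear(2),
)

One caveat: the lazy parameters are only materialized after a first forward pass, so a dummy batch would have to go through the network before handing its parameters to the optimizer.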
When you put your model in production you must have some sort of pipeline, because data is never in a form that can feed the predictor directly. Yes, I normally use the sklearn Pipeline functionality for that. I'd love to contribute to an open-source project like this, but I lack the skills and the time for that, I'm sorry. I appreciated the request for improvements in this issue and I wanted to bring my point of view.
Alright. Anyway, if you have an idea of what an API for that would look like, let me know.
Great work with the library. Is there any plan to support TPU training or creating an abstraction to allow swapping devices with minimal changes to the code?
Hi @adnan33, you can already change from CPU to GPU or GPU to another GPU easily. For TPUs, I never really used them (and don't know anyone who does) so I don't really know the technical implications of integrating them into Poutyne. However, I would be really interested in adding the feature to Poutyne if you think you (or anyone else) can make the contribution.
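For reference, a minimal sketch of the kind of device switching mentioned above, assuming the Model's to()/cpu()/cuda() methods and a string optimizer:

import torch
import torch.nn as nn
from poutyne import Model

network = nn.Linear(10, 2)
model = Model(network, 'sgd', nn.CrossEntropyLoss())

model.to(torch.device('cuda:0'))   # train on the first GPU
model.to(torch.device('cuda:1'))   # or switch to another GPU
model.cpu()                        # or move back to the CPU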
I would like to see better integration with MLflow and the ability to use the high-level mlflow API.
Hi @ShawnSchaerer, if you haven't seen, we already have an MLFlowLogger. Let me know if you think there are missing features in it.
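For context, a rough sketch of how the existing callback is used; the MLFlowLogger constructor arguments below are assumptions from memory and may not match the exact signature:

import torch.nn as nn
from poutyne import Model, MLFlowLogger

# MLFlowLogger is a Poutyne callback; the parameter names here are assumptions.
mlflow_logger = MLFlowLogger(experiment_name='my-experiment',
                             tracking_uri='http://localhost:5000')

model = Model(network, 'sgd', nn.CrossEntropyLoss())
model.fit_generator(train_loader, valid_loader, epochs=5, callbacks=[mlflow_logger])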
It looks like if you use the mlflow client then the mlflow tags are not set and you need to do that manually. If you use the higher-level MLflow API directly then the tags are filled in automatically. I am filling them in manually, but they are reserved and I'm not sure if that is an issue or not. You could use the MLflow API directly instead of the lower-level Tracking API.
"It looks like if you use the mlflow client then the mlflow tags are not set and you need to do that manually." Do you have a code example? I'm not sure I follow what you mean.
"If you use the higher level MLflow API directly then the tags are filled in automatically." Not sure what you mean here.
"I am filling them in manually but they are reserved and not sure if that is an issue or not." Not sure I follow on that one. If you mean default tags like this, it is not an error. Otherwise, I will need more explanation.
Also, at first, the idea of using the MLFlow client class was to be able to save the model directly into the artifacts, but I was not able to manage the path properly since MLFlow is broken (I mean by that that I was able to save it locally OR save it on a remote, but not both). In the end, we chose to remove that part since it was not working.
I would like for there to be a Dockerfile accompanying poutyne. I usually run training jobs on VMs using the NVIDIA Container Toolkit. Having a Docker image for this package would be really helpful for experimentation and building on top of poutyne.
More than happy to work on this.
@ShawnSchaerer Good idea! What image do you want to base it on?
I think the base image should have the latest stable versions of PyTorch, CUDA and cuDNN. If size is an issue, I'm sure we can reduce it by using multi-stage builds and wheels for Python packages. What do you think?
Sounds about right. Is there already one that exists? Because the one on the PyTorch repo uses the nightly build.
How about pytorch/pytorch:1.10.1-cuda11.3-cudnn8-runtime?
Sounds good. Just open a PR when you're ready.
I was thinking of linking the Experiment object to an Experiment of the Determined AI framework (https://www.determined.ai/). The idea would be to train the models on Determined AI managed resources if you choose device="detai" or something similar.
I'm thinking of investigating this idea in the next few days. If it makes sense, I will make a PR for that.
Hi @gbmarc1, this looks interesting indeed. If you can make a PR for that, it would be nice. However, we have to take care not to overcomplicate the code base, since this seems to impact all of the Model class code. Let me know of your exact plan for how you think you can integrate this so that we do not lose each other's time. If you wish, we can have a Zoom call about it.
Good idea. I will do a little design first, and then we could have that meeting. I will come back to you when I am ready.
Ideally, I think the design should allow using a more generic interface for cloud computing. I'm pretty sure there are other similar solutions out there. Thus, if the proposed approach only allows the use of Determined AI, it complicates the codebase and binds Poutyne to this particular solution.
I think the best approach would be to keep the two as decoupled as possible. Poutyne does not need a cloud computing solution to run; it only needs a "device", whatever type that is (CPU, GPU, or a cloud service). Thus, I think we should inject a dependency rather than add a dependency in the Poutyne Experiment interface, something like device=InterfaceObjectForcloudComputing(). I don't know if it is possible due to networking and all.
My two cents on that matter.
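To illustrate the injection idea, here is a sketch; everything below is hypothetical, nothing like it exists in Poutyne today.

# Hypothetical interface a cloud-computing backend would implement; Poutyne would
# only see this abstraction, not Determined AI (or any other provider) directly.
class CloudComputeDevice:
    def submit_training(self, network, optimizer, loss_function, train_data):
        raise NotImplementedError

class DeterminedAIDevice(CloudComputeDevice):
    def __init__(self, master_url):
        self.master_url = master_url

    def submit_training(self, network, optimizer, loss_function, train_data):
        # Package the training loop and submit it to the Determined AI cluster.
        ...

# Hypothetical usage, following the device=InterfaceObjectForcloudComputing() idea above:
# experiment = Experiment('./logs', network, optimizer='sgd',
#                         loss_function='cross_entropy',
#                         device=DeterminedAIDevice(master_url='https://...'))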
@davebulaval do you have another solution for cloud computing that I could refer to help me generalize the design?
@freud14 , I am hitting some roadblocks and I think it would be good to brainstorm with someone else before going further. I added you on LinkedIn to organize the zoom meeting when you are available!
ONNX/general deployment support? Is there a good way
@redthing1 IMO ONNX is a pain in the ass to use since you need to specify the batch size beforehand.
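For what it's worth, torch.onnx.export has a dynamic_axes argument that can mark the batch dimension as variable, which works around the fixed-batch-size concern; a minimal sketch outside of Poutyne (the network here is just a placeholder):

import torch
import torch.nn as nn

network = nn.Linear(10, 2)            # placeholder for a trained network
network.eval()

dummy_input = torch.randn(1, 10)
torch.onnx.export(
    network,
    dummy_input,
    'network.onnx',
    input_names=['input'],
    output_names=['output'],
    # Mark the batch dimension as dynamic so the exported model accepts any batch size.
    dynamic_axes={'input': {0: 'batch'}, 'output': {0: 'batch'}},
)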
Thanks for the suggestion. I've never used ONNX so I'm not sure of the pitfalls and how easy it actually is to work with. If you were able to make some pseudocode/code of how you would like to use it with poutyne, I would be happy.
Any interest in supporting closure for the optimizer step? https://pytorch.org/docs/stable/optim.html#optimizer-step-closure
Hi @atremblay, I would be interested, yes, but that would require quite a bit of refactoring given the current state of the code base. Also, since these approaches compute the predictions and the loss multiple times, I'm not sure how we would integrate that with the metrics we return. I never actually used these optimizers, so I'm not sure what people do in practice.
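For reference, this is roughly what the closure-based step from the linked PyTorch documentation looks like in a plain training loop (not Poutyne); optimizers such as LBFGS may call the closure several times per step, recomputing the predictions and the loss each time:

import torch
import torch.nn as nn

network = nn.Linear(10, 1)
loss_fn = nn.MSELoss()
optimizer = torch.optim.LBFGS(network.parameters())

inputs, targets = torch.randn(32, 10), torch.randn(32, 1)

def closure():
    # Called by the optimizer, possibly multiple times per step.
    optimizer.zero_grad()
    loss = loss_fn(network(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure)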
We are looking for ideas to improve Poutyne. So, if you have any, you can provide them by replying to this issue.