Closed — Odedbenc closed this issue 1 year ago
Link for tflite base model: https://drive.google.com/file/d/1EU6StFZXE2ViJlOH0wKrChBduCCl_DGt/view?usp=sharing
Hi my friend, you sent me the tflite model, but I was talking about the .h5 model. I want to use it as a base for transfer learning and fine-tune it to my task. If it is still available, I would be very grateful. Thanks, Oded
On Tue, 21 Mar 2023 at 19:36, Aman Rangapur @.***> wrote:
Closed #27 https://github.com/Ant-Brain/EfficientWord-Net/issues/27 as completed.
By the way, you did a great job with the model, and the paper is well written and very interesting.
Hey Oded, unfortunately I couldn't find the .h5 base model. This project was developed while I was an undergrad, and all the models and checkpoints were saved in my .edu drive, but my university has since deactivated the account.
Ohh, that's a big bummer 😞. So I cannot recreate the Keras base model from the tflite one, right? Thanks for your response.
You can't. But we are working on a new version of this project and will publish the .h5 file. If you want to recreate the model, I can share the dataset.
I would really appreciate it. If you could send a WeTransfer link or a link to an S3 bucket, that would be great!
Thank you very much, I highly appreciate it 🙏
On Sun, 26 Mar 2023, 18:00 Aman Rangapur, @.***> wrote:
Reopened #27 https://github.com/Ant-Brain/EfficientWord-Net/issues/27.
Hi again, can you please send me the logmelcalc.tflite converter?
Sorry. I found it.
Nice, I really appreciate it. How do you synthesize the data?
On Fri, Apr 14, 2023, 14:43 TheSeriousProgrammer @.***> wrote:
We have trained a newer model lately; kindly test it out. We're just tidying up the train flow, and the training code will be released soon too.
The newer model was trained on a dataset derived from the People's Speech dataset from MLCommons. The original dataset has a lot of imbalances and isn't pre-split into train and test, so we had to create a more polished derivative of it; we will share it soon. For now, you can have a look at the uncleaned training code if you want:
https://github.com/Ant-Brain/EfficientWord-Net-Trainer.git
simple_trainer.py is the entry point
I also trained a model using MLCommons. I wonder what kind of metric you used to eliminate similar words; I used a combination of Jaccard similarity and Levenshtein distance.
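For reference, a minimal sketch of what such a combined filter might look like. This is not the exact metric used in the thread; the bigram granularity, thresholds, and the `or`-combination are illustrative choices:

```python
# Hypothetical near-duplicate word filter combining Jaccard similarity
# (over character bigrams) with normalized Levenshtein distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                   # deletion
                            curr[j - 1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))     # substitution
        prev = curr
    return prev[-1]

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over character bigrams."""
    ga = {a[i:i + 2] for i in range(len(a) - 1)}
    gb = {b[i:i + 2] for i in range(len(b) - 1)}
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

def too_similar(a: str, b: str, jac_thresh=0.5, lev_thresh=0.6) -> bool:
    """Flag a pair when either similarity exceeds its (assumed) threshold."""
    lev_sim = 1.0 - levenshtein(a, b) / max(len(a), len(b), 1)
    return jaccard(a, b) >= jac_thresh or lev_sim >= lev_thresh
```

One could then drop every transcript whose keyword is `too_similar` to an already-kept keyword.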
And by the way, I played with some other versions of EfficientNet, and they gave better results.
In this derivative we didn't eliminate similar words; if we keep close-enough words, the model tends to learn better. Even the newer model struggles to differentiate phrases like "lights on" and "lights off", but keeping them in the dataset has certainly increased the discriminative power of the model.
Moreover, we used ArcLoss instead of triplet loss (the current state-of-the-art face recognition papers use the same), and that too has played quite a role.
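For readers unfamiliar with ArcLoss: the core idea of the ArcFace-style additive angular margin is to add a fixed angle m to the target class before the softmax cross-entropy, which forces embeddings of the same class to cluster more tightly. A pure-numpy sketch of the logit computation (the `s` and `m` values are typical paper defaults, not necessarily what this project used):

```python
import numpy as np

def arcface_logits(embeddings, weights, labels, s=30.0, m=0.5):
    """ArcFace-style additive angular margin logits.

    embeddings: (batch, dim) raw embeddings
    weights:    (num_classes, dim) class-centre weights
    labels:     (batch,) integer class ids
    s, m:       scale and angular margin hyperparameters
    """
    # L2-normalize both sides so the dot product equals cos(theta)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                   # (batch, num_classes)
    theta = np.arccos(np.clip(cos, -1 + 1e-7, 1 - 1e-7))
    # Add the margin m only to the target-class angle, making the
    # target logit harder to maximize than in plain softmax.
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m
    return s * np.cos(theta + margin)

def softmax_xent(logits, labels):
    """Cross-entropy over the margin-adjusted logits."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()
```

During training the loss is `softmax_xent(arcface_logits(...), labels)`; at inference time only the normalized embeddings are kept and compared by cosine similarity.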
Interesting, I would really like to have a look at it. By the way, the link you sent me is broken (404).
Sorry for all the questions, but I really want to understand more about the MFCC conversion you did. Does tflite not work with the simpler tf functions for creating MFCCs? Is it possible to attach the MFCC conversion to the model as a layer in the network and wrap it all up into one tflite model?
The repo was private; can you check again? Also, logmelcalc.tflite still works. I did some tests on the older model too, to check whether the API changes were working.
The problem with adding logmelcalc.tflite to the model itself is that the converter generator has a tiny bug which makes it crash when adding the batch dimension. It works in the inference pipeline because there has been no need for batch inference yet.
The newer model doesn't use it anymore; it relies on pure numpy operators instead. Moreover, the new preprocessor can't be added to the model itself because some of the operators involved can't be exported to ONNX.
(About the preprocessing being part of the model itself:) I am not able to dive too deep into it due to time constraints, but it should be theoretically possible. Kindly share with us if you make any progress on that front :)
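To make the "pure numpy operators" remark concrete, here is a generic log-mel spectrogram sketch in plain numpy. All parameters (sample rate, FFT size, hop, mel bins) are assumed defaults for illustration, not the project's actual configuration:

```python
import numpy as np

def mel_filterbank(n_mels=64, n_fft=512, sr=16000, fmin=0.0, fmax=8000.0):
    """Triangular mel filterbank of shape (n_mels, n_fft // 2 + 1)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, mid, hi = bins[i], bins[i + 1], bins[i + 2]
        if mid > lo:  # rising edge of the triangle
            fb[i, lo:mid] = (np.arange(lo, mid) - lo) / (mid - lo)
        if hi > mid:  # falling edge
            fb[i, mid:hi] = (hi - np.arange(mid, hi)) / (hi - mid)
    return fb

def log_mel(audio, sr=16000, n_fft=512, hop=160, n_mels=64):
    """Frame -> Hann window -> |rFFT|^2 -> mel filterbank -> log."""
    n_frames = 1 + (len(audio) - n_fft) // hop
    frames = np.stack([audio[i * hop:i * hop + n_fft]
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-6)            # (n_frames, n_mels)
```

A preprocessor like this avoids the tflite converter entirely, at the cost (as noted above) of not being embeddable in the exported model.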
Of course, thank you very much for sharing 🙏
Currently closing this issue as the major questions have been answered.
Hi, I am playing with the model and wanted to train the existing model on new data. Since I cannot retrieve the model's weights from the tflite model, is there any chance of getting the .h5 model from you?