Yeementh24 opened this issue 8 months ago
Hello! Without Flair NER, the model requires about 3 GB of RAM. If you don't have a GPU and use NER, it requires about 4.5 GB of RAM. If you use a GPU with NER, it requires about 8 GB of RAM, plus 2 GB of RAM for NER. Memory use also depends on the number of texts, because of batch encoding. I can help you solve the problem if you send the error logs. We also have an example in Google Colab; maybe it can help you: https://colab.research.google.com/drive/15YcTW9KPSWesZ6_L4BUayqW_omzars0l?usp=sharing
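The dependence on the number of texts can be made concrete with a rough, hedged calculation. The shapes and dtypes below assume padding every text to `max_length=512` with `int64` tensors, as in the tokenizer snippet later in this thread; the point is that the tokenized inputs themselves are small, and it is the transformer activations (which also scale with batch size) that exhaust RAM:

```python
# Rough estimate of the tokenized input size for one big batch.
# The encoder activations, which dominate memory, are NOT counted here.
n_texts = 514          # number of sentences reported in this thread
seq_len = 512          # max_length used for padding
bytes_per_token = 8    # int64 token ids

# input_ids + attention_mask tensors
input_bytes = 2 * n_texts * seq_len * bytes_per_token
print(f"tokenized inputs: {input_bytes / 1e6:.1f} MB")
```

Since the inputs are only a few megabytes, the out-of-memory errors come from the forward pass itself, which is why running smaller batches (and disabling gradient tracking) is the usual fix.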
Actually, we are using an Amazon SageMaker notebook instance (ml.g4dn.4xlarge) with around 514 sentences. Whenever we run them through the model, the kernel dies; the same happens in Google Colab, where it runs out of RAM. We then tried batches of 100 sentences, and also decreased that to 50 sentences, but the kernel dies every time.
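The "run it in batches" approach can be sketched with a small helper that yields fixed-size slices of the text list, so only one batch is tokenized and run through the model at a time (a minimal sketch; `run_model` is a hypothetical placeholder for the tokenize-and-forward call, not a function from ESGify):

```python
def chunked(items, batch_size):
    """Yield consecutive slices of at most batch_size items."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Usage sketch:
# all_results = []
# for batch in chunked(texts, 16):
#     all_results.extend(run_model(batch))
```

If the kernel still dies with small batches, the batch size is probably not the issue: memory may be leaking across batches (e.g. results kept on the GPU, or gradients being tracked), which the `model.eval()` / `torch.no_grad()` advice below addresses.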
I recommend you add these two lines to your block of code: `model = model.eval()` and `with torch.no_grad():`

```python
model = model.eval()  # inference mode: disables dropout etc.

with torch.no_grad():  # skip building the autograd graph, saving memory
    to_model = tokenizer.batch_encode_plus(
        texts,
        add_special_tokens=True,
        max_length=512,
        return_token_type_ids=False,
        padding="max_length",
        truncation=True,
        return_attention_mask=True,
        return_tensors='pt',
    )
    results = model(**to_model)
```
Hey sb-ai-lab, I am writing to express my sincere gratitude for your amazing work on creating the ML model that solved our working problem statement. You have demonstrated exceptional skill and creativity in developing such a powerful and elegant solution. I appreciate your dedication, passion, and innovation. Thank you for sharing your brilliant work with us. I hope to learn from you and collaborate with you in the future. Sincerely, Yeementh Virutkar, Machine Learning Engineer at Quantiphi
Hi sb-ai-lab, we are facing a problem deploying the ESGify model on Amazon SageMaker. Could you provide a script for deploying the ESGify model?
@Yeementh24, please explain what you mean by a deployment script. Is a requirements file enough for you, or do you want something else?
Hi sb-ai-lab/ESGify, I want to deploy the ESGify model in the Amazon SageMaker environment as an endpoint, so that I don't need to load it in the notebook, where it takes up memory.
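For reference, hosting a Hugging Face model as a SageMaker endpoint usually goes through the SageMaker Python SDK's `HuggingFaceModel`. The sketch below only collects plausible parameters; the Hub model id, framework versions, and instance type are all assumptions, not tested values, and since ESGify uses a custom model class, a custom `inference.py` entry point may also be required:

```python
# Hedged sketch of a SageMaker endpoint configuration. Nothing here is a
# tested deployment; every value below is an assumption to be checked.
endpoint_config = {
    "env": {
        "HF_MODEL_ID": "ai-lab/ESGify",   # assumed Hugging Face Hub id
        "HF_TASK": "text-classification",
    },
    "transformers_version": "4.26",       # assumed supported version
    "pytorch_version": "1.13",            # assumed supported version
    "py_version": "py39",
}
deploy_config = {
    "initial_instance_count": 1,
    "instance_type": "ml.g4dn.xlarge",    # assumed instance type
}

# The actual calls would look roughly like:
# from sagemaker.huggingface import HuggingFaceModel
# hf_model = HuggingFaceModel(role=role, **endpoint_config)
# predictor = hf_model.deploy(**deploy_config)
```

Once deployed, the notebook would only call `predictor.predict(...)` against the endpoint, so the model weights never occupy the notebook instance's memory.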
Hi, I am facing a problem with the memory requirements for running the ESGify model. Can you guide me on the resources required for the ESGify model?