Closed: hellodario closed this issue 3 years ago
I can also see that other users found it useful to check the custom handler; please find it attached. This material was developed by mukut03, and all credit goes to him!
It is also worth noting that other users are experiencing the same issue, as flagged in the reply above.
Thanks! Dario
Hello, did you manage to start the mukut03 model with this new handler? I copy-pasted the custom one that you provided, but the backend worker still dies...
Hi there, thanks for the response.
No; mukut03 did both the model and the handler.
@hellodario The error that I see in your logs is an indentation error (if you open the logs and search for it, you will find it). Apart from the indentation errors, if you are using the latest TorchServe version, the handler needs some changes to adopt the BaseHandler and other recent changes. I have attached the modified handler that worked on my side, along with the server-side logs. Please let me know if you are still facing the same issue.
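For readers landing here without the attachment: below is a minimal sketch (not the attached handler, which is not reproduced in this thread) of what a custom text-classification handler extending TorchServe's `BaseHandler` can look like. The class name and the Hugging Face `transformers` loading are assumptions.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler


class ESGBertHandler(BaseHandler):
    """Sketch of a custom TorchServe handler built on BaseHandler."""

    def initialize(self, context):
        # model_dir is where TorchServe extracts the .mar contents
        model_dir = context.system_properties.get("model_dir")
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        self.model = AutoModelForSequenceClassification.from_pretrained(model_dir)
        self.model.to(self.device).eval()
        self.initialized = True

    def preprocess(self, data):
        # TorchServe delivers each request as a dict with "data" or "body"
        text = data[0].get("data") or data[0].get("body")
        if isinstance(text, (bytes, bytearray)):
            text = text.decode("utf-8")
        return self.tokenizer(text, return_tensors="pt", truncation=True).to(self.device)

    def inference(self, inputs):
        with torch.no_grad():
            logits = self.model(**inputs).logits
        return logits.argmax(dim=-1)

    def postprocess(self, preds):
        # TorchServe expects one response entry per request
        return [preds.tolist()]
```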
Thanks, Hamid, it worked!!
One last question, if I may: I am now trying to predict multiple sentences in the same file. How should I modify the handler to iterate through the rows of predict.txt?
I will post your solution in my other thread.
Thanks, Dario
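One server-side possibility (a sketch only; Dario's eventual fix later in this thread was client-side instead) is to split the request body into lines inside `preprocess` and tokenize them as a single batch, building on the handler sketched above:

```python
# Sketch: batch every line of predict.txt inside preprocess.
# One request in, one response (a list of predictions) out.
def preprocess(self, data):
    raw = data[0].get("data") or data[0].get("body")
    if isinstance(raw, (bytes, bytearray)):
        raw = raw.decode("utf-8")
    sentences = [line.strip() for line in raw.splitlines() if line.strip()]
    return self.tokenizer(sentences, return_tensors="pt",
                          padding=True, truncation=True).to(self.device)
```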
@hellodario What's the output that you are getting? When I run the sentences from the .txt file, I just get 12, 14, 25, but not the actual class names. Can you share a bit of detail on that? Really appreciated.
Hey man,
That's weird, as I do not get label_dict.txt created at all. The output that you provided as a screenshot, I saw it on mukut03's account.
But when I run the curl command I get the following:
INPUT: `squeezee@SQ-MBP ESG_NLP % curl -X POST http://127.0.0.1:8080/predictions/bert -T predict.txt`
OUTPUT: `16%`
That's it.
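For anyone hitting the same "numbers instead of names" output: the model returns class indices, and turning them into names requires a mapping such as the label_dict.txt mentioned above. A sketch follows; the file format (one `index label` pair per line) is purely an assumption and should be adjusted to whatever mapping file actually ships with the model:

```python
import os


def load_label_map(model_dir, filename="label_dict.txt"):
    # The "index label" per-line format here is an assumption; adapt it to
    # the mapping file packaged with your model.
    label_map = {}
    with open(os.path.join(model_dir, filename), encoding="utf-8") as f:
        for line in f:
            idx, name = line.strip().split(maxsplit=1)
            label_map[int(idx)] = name
    return label_map


# Inside the handler: load once in initialize(), translate in postprocess():
#     self.label_map = load_label_map(model_dir)
#
#     def postprocess(self, preds):
#         return [[self.label_map[i] for i in preds.tolist()]]
```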
I solved this by sending the requests through Python + pandas `apply`. Thanks!
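For reference, a sketch of that client-side approach; the column name and file layout (one sentence per row) are assumptions:

```python
import pandas as pd
import requests

# Endpoint taken from the curl command earlier in this thread
URL = "http://127.0.0.1:8080/predictions/bert"

# predict.txt assumed to hold one sentence per line
with open("predict.txt", encoding="utf-8") as f:
    df = pd.DataFrame({"sentence": [line.strip() for line in f if line.strip()]})

# Send each row to TorchServe and collect the raw response text
df["prediction"] = df["sentence"].apply(
    lambda s: requests.post(URL, data=s.encode("utf-8")).text
)
print(df)
```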
@HamidShojanazeri I tried running your code, but it isn't working. I'm attaching the terminal output here. Can you please help me out?
@hellodario
Hi there, I am trying to run the ESG-BERT model for sustainable reporting analysis created by mukut03:
https://github.com/mukut03/ESG-BERT
Unfortunately, I receive the following error, which I believe is related to the model having some issues.
I also attach the bigger log in case it is useful: ts_log.log
Can someone help, please? Thank you, Dario
```
2021-03-03 11:57:58,946 [DEBUG] W-9001-bert_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-bert_1.0 State change WORKER_STOPPED -> WORKER_STARTED
2021-03-03 11:57:58,946 [DEBUG] W-9000-bert_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-03-03 11:57:58,946 [INFO ] W-9000-bert_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-03-03 11:57:58,945 [INFO ] W-9003-bert_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9003
2021-03-03 11:57:58,949 [INFO ] W-9000-bert_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-03-03 11:57:58,954 [INFO ] W-9001-bert_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9001
2021-03-03 11:57:58,948 [DEBUG] W-9000-bert_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
	at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
	at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
	at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at java.base/java.lang.Thread.run(Thread.java:834)
```