SimKarras closed this issue 3 years ago
Hi @JiaweiShiCV, try setting max_response_size to a bigger value in config.properties.
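For example, a minimal config.properties could contain just that one line (the value is the one suggested later in this thread):

```properties
# raise the limit on the size of a single inference response (bytes)
max_response_size=65535000
```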
thx for your help! It works.
Try running your server without the --ncs flag; this will create a config folder under the logs folder, e.g.:
torchserve --start --model-store model_store --models densenet161.mar
You can copy the settings to a different file, add max_response_size=65535000, and then run with the --ts_config flag, e.g.:
torchserve --start --ncs --model-store model_store --models clean.mar --ts_config <your_config_file>
Please refer to the docs
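The steps above can be sketched as a shell session (the config file name here, my_config.properties, is a placeholder; the torchserve lines are shown as comments since they need a running install):

```shell
# Write the larger limit into a custom config file
# (or copy a snapshot from logs/config/ and edit it there).
printf 'max_response_size=65535000\n' >> my_config.properties

# Then restart TorchServe pointing at the custom config:
# torchserve --stop
# torchserve --start --ncs --model-store model_store \
#   --models clean.mar --ts_config my_config.properties
```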
Yes, I have overcome this problem through your method. Thank you very much!
📚 Documentation
I tried to serve a generative model, but failed. Can you help me? repo: https://github.com/TencentARC/GFPGAN
Files
- main network: examples/GAN/gfpgan/gfpganv1_clean_arch.py
- sub-network: examples/GAN/gfpgan/stylegan2_clean_arch.py
- simple handler: ts/torch_handler/generative.py
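For context, a structural sketch of what a custom generative handler looks like. The method names follow TorchServe's BaseHandler contract (initialize/preprocess/inference/postprocess), but the "model" here is a stand-in; a real GFPGAN handler would build GFPGANv1Clean and load its weights in initialize():

```python
import base64

class GenerativeHandler:
    """Structural sketch of a TorchServe-style custom handler."""

    def __init__(self):
        self.initialized = False
        self.model = None

    def initialize(self, context=None):
        # Real handler: read context.system_properties for the model dir,
        # construct GFPGANv1Clean, and load the serialized weights.
        self.model = lambda data: data[::-1]  # stand-in "model"
        self.initialized = True

    def preprocess(self, requests):
        # Each request carries the input under "data" or "body";
        # decode base64 strings into raw bytes.
        images = []
        for req in requests:
            payload = req.get("data") or req.get("body")
            if isinstance(payload, str):
                payload = base64.b64decode(payload)
            images.append(payload)
        return images

    def inference(self, images):
        return [self.model(img) for img in images]

    def postprocess(self, outputs):
        # Base64-encode so large binary outputs fit in a JSON response;
        # this is also where max_response_size limits get hit.
        return [base64.b64encode(out).decode("utf-8") for out in outputs]

    def handle(self, requests, context=None):
        if not self.initialized:
            self.initialize(context)
        return self.postprocess(self.inference(self.preprocess(requests)))
```

The round trip can be exercised locally, e.g. `GenerativeHandler().handle([{"data": b"abc"}])` returns a one-element list of base64 text.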
Steps taken:
- Store a model
- Start TorchServe (logs elided)
- Inference (output and logs elided)
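For reference, those steps roughly correspond to the commands below, using the file paths listed above; the weight-file name (gfpgan_weights.pth) and model name are placeholders, and the tool invocations are shown as comments since they require TorchServe to be installed:

```shell
# Directory where .mar archives are stored
mkdir -p model_store

# 1) Package network code, weights, and handler into a .mar archive:
# torch-model-archiver --model-name gfpgan --version 1.0 \
#   --model-file examples/GAN/gfpgan/gfpganv1_clean_arch.py \
#   --serialized-file gfpgan_weights.pth \
#   --handler ts/torch_handler/generative.py \
#   --extra-files examples/GAN/gfpgan/stylegan2_clean_arch.py \
#   --export-path model_store

# 2) Start TorchServe with the archive:
# torchserve --start --model-store model_store --models gfpgan.mar

# 3) Send an inference request:
# curl http://127.0.0.1:8080/predictions/gfpgan -T input.jpg -o output.png
```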