Caet-pip opened this issue 1 year ago
Are there any error logs?
Note that it should be 'ip:port', not 'ip/port'.
Apologies, I meant ip:port.
I am trying to run inference over a local connection on the same Wi-Fi network. I am running the server with a certificate enabled because I am exposing it beyond localhost on one computer and connecting to it from another computer. When I try the client websocket py file I get inference in the terminal, but if I go to the ip address:port it says no response.
I tried to ping the IP address and I get a response, so there is no problem with the connection, but I get no response when I try accessing ipaddress:port.
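Ping only shows that the host is reachable, not that the server's TCP port is open, so a quick check like the sketch below (the IP and port are placeholders, not the actual values from this setup) can tell the two apart from the client machine:

import socket

SERVER_IP = "192.168.1.10"  # placeholder for the server's LAN address
PORT = 6006                 # placeholder for the server's port

# Ping succeeding while this fails usually points at a firewall or at the
# server not listening on the LAN interface.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    try:
        s.connect((SERVER_IP, PORT))
        print("TCP port is reachable")
    except OSError as e:
        print("TCP port is not reachable:", e)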
Are there any error logs?
If yes, please either copy the error logs verbatim or post screenshots of them. And please tell us the complete command you are using to start the server.
Note that there are two kinds of error logs: the logs printed by the server, and the console logs in your browser.
The more error logs you give us, the quicker we can fix your issue.
There are no error logs.
When I connect to ipaddress:port using the client websocket with microphone.py, I can access the server, which is hosted on another computer connected to the same Wi-Fi network,
but when I try to access it through the website http://ipaddress:port/streaming_record.html, I cannot send audio and the streaming record button does not work.
I tried using generate.py to generate a certificate and ran the server again with the certificate argument, but this time the website http://ipaddress:port/streaming_record.html does not open at all.
I am trying this to connect two computers on the same Wi-Fi network.
There are no error logs.
No, there should be.
Please search with Google for how to view the console logs in your browser.
If you use a certificate, please don't use http any more; use https instead.
(Please read the console log messages after you start the server. It should have told you that you need to access the correct address; it is https, not http.)
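For reference, a minimal sketch of reaching the secured server from the other machine over a websocket; the address is a placeholder, and since the certificate from the generate script is self-signed, verification is disabled here, which is only acceptable for LAN testing:

import asyncio
import ssl
import websockets

SERVER = "wss://192.168.1.10:6006"  # placeholder; note wss:// (not ws://) once a certificate is used

async def main():
    ssl_ctx = ssl.create_default_context()
    ssl_ctx.check_hostname = False
    ssl_ctx.verify_mode = ssl.CERT_NONE  # accept the self-signed certificate (LAN testing only)

    async with websockets.connect(SERVER, ssl=ssl_ctx) as ws:
        print("Secure websocket connection established")
        # From here on, send audio exactly the way your client script does;
        # the message format depends on the server script and is not assumed here.

asyncio.run(main())

The same rule applies in the browser: once the server is started with a certificate, the page has to be opened as https://ipaddress:port/streaming_record.html.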
I took a few pics, sorry for the bad image quality.
These are what is being shown, and they are two separate computers. The image on the bottom shows that the connection works with ipaddress:port, but when using another computer on the same Wi-Fi it is not reachable.
Please show the complete logs of the server. The logs at the very beginning of the server are missing.
Also, please show the console logs from your browser. If you don't know how to get the console logs of your browser for your current page, please google it.
By the way, the logs from the server are pretty normal.
Hello, I was able to get it up and running, but I encountered another issue. I am not able to get the transcript logs from the Python server (with two connections to the server) when running it as a Docker image. When I run it locally I can see the transcript in the logs, but when I use Docker only the number of connections shows up in the logs.
Logs when running in Docker:
(base) C:\Users\path>docker run -p port:port transcribe1
2023-08-15 16:37:41,204 INFO [streaming_server1.py:719] {'encoder': './sherpa-onnx-streaming-zipformer-en-2023-06-21/encoder-epoch-99-avg-1.int8.onnx', 'decoder': './sherpa-onnx-streaming-zipformer-en-2023-06-21/decoder-epoch-99-avg-1.int8.onnx', 'joiner': './sherpa-onnx-streaming-zipformer-en-2023-06-21/joiner-epoch-99-avg-1.int8.onnx', 'paraformer_encoder': None, 'paraformer_decoder': None, 'tokens': './sherpa-onnx-streaming-zipformer-en-2023-06-21/tokens.txt', 'sample_rate': 16000, 'feat_dim': 80, 'provider': 'cpu', 'decoding_method': 'greedy_search', 'num_active_paths': 4, 'use_endpoint': 1, 'rule1_min_trailing_silence': 2.4, 'rule2_min_trailing_silence': 1.2, 'rule3_min_utterance_length': 20, 'port': port, 'nn_pool_size': 1, 'max_batch_size': 50, 'max_wait_ms': 10, 'max_message_size': 1048576, 'max_queue_size': 32, 'max_active_connections': 500, 'num_threads': 2, 'certificate': './web/cert.pem', 'doc_root': './web'}
2023-08-15 16:37:51,186 INFO [streaming_server1.py:542] Using certificate: ./web/cert.pem
2023-08-15 16:37:51,217 INFO [server.py:711] server listening on 0.0.0.0:port
2023-08-15 16:37:51,220 INFO [server.py:711] server listening on [::]:port
2023-08-15 16:37:51,244 INFO [streaming_server1.py:573] Please visit one of the following addresses:
https://localhost:port https://0.0.0.0:port https://ipaddress:port https://ipaddress:port
2023-08-15 16:37:56,399 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,417 INFO [server.py:268] connection closed
2023-08-15 16:37:56,512 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,552 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,554 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,558 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,566 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,568 INFO [server.py:268] connection closed
2023-08-15 16:37:56,570 INFO [server.py:268] connection closed
2023-08-15 16:37:56,588 INFO [server.py:268] connection closed
2023-08-15 16:37:56,588 INFO [server.py:268] connection closed
2023-08-15 16:37:56,588 INFO [server.py:268] connection closed
2023-08-15 16:37:56,648 INFO [server.py:233] connection failed (200 OK)
2023-08-15 16:37:56,657 INFO [server.py:268] connection closed
2023-08-15 16:37:58,106 INFO [server.py:646] connection open
2023-08-15 16:37:58,107 INFO [streaming_server1.py:614] Connected: ('ipaddress', number). Number of connections: 1/500
When running locally:
(base) C:\Users\path>python ./python-api-examples/Tests/streaming_server1.py --tokens=./sherpa-onnx-streaming-zipformer-en-2023-06-21/tokens.txt --encoder=./sherpa-onnx-streaming-zipformer-en-2023-06-21/encoder-epoch-99-avg-1.int8.onnx --decoder=./sherpa-onnx-streaming-zipformer-en-2023-06-21/decoder-epoch-99-avg-1.int8.onnx --joiner=./sherpa-onnx-streaming-zipformer-en-2023-06-21/joiner-epoch-99-avg-1.int8.onnx
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
2023-08-15 11:01:03,702 INFO [streaming_server1.py:754] {'encoder_model': './sherpa-onnx-streaming-zipformer-en-2023-06-21/encoder-epoch-99-avg-1.int8.onnx', 'decoder_model': './sherpa-onnx-streaming-zipformer-en-2023-06-21/decoder-epoch-99-avg-1.int8.onnx', 'joiner_model': './sherpa-onnx-streaming-zipformer-en-2023-06-21/joiner-epoch-99-avg-1.int8.onnx', 'tokens': './sherpa-onnx-streaming-zipformer-en-2023-06-21/tokens.txt', 'sample_rate': 16000, 'feat_dim': 80, 'decoding_method': 'greedy_search', 'num_active_paths': 4, 'use_endpoint': 1, 'rule1_min_trailing_silence': 2.4, 'rule2_min_trailing_silence': 1.2, 'rule3_min_utterance_length': 20, 'port': port, 'nn_pool_size': 1, 'max_batch_size': 50, 'max_wait_ms': 10, 'max_message_size': 1048576, 'max_queue_size': 32, 'max_active_connections': 500, 'num_threads': 1, 'certificate': None, 'doc_root': './python-api-examples/Tests/web'}
BEFORE SOCKET!!
2023-08-15 11:01:12,531 INFO [streaming_server1.py:558] No certificate provided
2023-08-15 11:01:12,538 INFO [server.py:711] server listening on 0.0.0.0:port
2023-08-15 11:01:12,541 INFO [server.py:711] server listening on [::]:port
2023-08-15 11:01:12,562 INFO [streaming_server1.py:575] Please visit one of the following addresses:
http://0.0.0.0:port http://localhost:port http://ip:port http://ip:port
2023-08-15 11:02:06,416 INFO [server.py:646] connection open
2023-08-15 11:02:06,417 INFO [streaming_server1.py:616] Connected: ('ipaddress', number). Number of connections: 1/500
{'text': '', 'sentiment': '', 'namedentity': '', 'segment': 0}
{'text': '', 'sentiment': '', 'namedentity': '', 'segment': 0}
{'text': 'yeah', 'sentiment': 'neutral', 'namedentity': '-', 'segment': 0}
{'text': 'yeah so', 'sentiment': 'neutral', 'namedentity': '-', 'segment': 0}
{'text': 'yeah so this work', 'sentiment': 'positive', 'namedentity': '-', 'segment': 0}
{'text': 'yeah so this works', 'sentiment': 'positive', 'namedentity': '-', 'segment': 0}
{'text': 'yeah so this works', 'sentiment': 'positive', 'namedentity': '-', 'segment': 0}
{'text': 'yeah so this works', 'sentiment': 'positive', 'namedentity': '-', 'segment': 0}
{'text': 'yeah so this works', 'sentiment': 'positive', 'namedentity': '-', 'segment': 0}
Please let me know why I cannot get streaming transcript logs when running in Docker.
Please post the changes you've made to the code, preferably the output of git diff.
By the way, could you tell us what you have done to make it work?
By the way, could you tell us what you have done to make it work?
The microphone signal was being blocked by the Chrome and Edge browsers, so I unblocked it in Chrome using the chrome://flags/#unsafely-treat-insecure-origin-as-secure flag and adding the URL as a trusted source.
I cannot get transcript results in the server console after updating to sherpa-onnx 1.7.7, and the previous versions which I was using earlier no longer work.
I noticed that the Python API example scripts have been changed; for example, when I use the old script I get this error:
(whisper) C:\Users\Fawaz Shaik\sherpao\sherpa-onnx>python ./python-api-examples/Tests/streaming_server1.py --tokens=./sherpa-onnx-streaming-zipformer-en-2023-06-21/tokens.txt --encoder=./sherpa-onnx-streaming-zipformer-en-2023-06-21/encoder-epoch-99-avg-1.int8.onnx --decoder=./sherpa-onnx-streaming-zipformer-en-2023-06-21/decoder-epoch-99-avg-1.int8.onnx --joiner=./sherpa-onnx-streaming-zipformer-en-2023-06-21/joiner-epoch-99-avg-1.int8.onnx
Xformers is not installed correctly. If you want to use memory_efficient_attention to accelerate training use the following command to install Xformers
pip install xformers.
2023-08-15 23:11:18,415 INFO [streaming_server1.py:754] {'encoder_model': './sherpa-onnx-streaming-zipformer-en-2023-06-21/encoder-epoch-99-avg-1.int8.onnx', 'decoder_model': './sherpa-onnx-streaming-zipformer-en-2023-06-21/decoder-epoch-99-avg-1.int8.onnx', 'joiner_model': './sherpa-onnx-streaming-zipformer-en-2023-06-21/joiner-epoch-99-avg-1.int8.onnx', 'tokens': './sherpa-onnx-streaming-zipformer-en-2023-06-21/tokens.txt', 'sample_rate': 16000, 'feat_dim': 80, 'decoding_method': 'greedy_search', 'num_active_paths': 4, 'use_endpoint': 1, 'rule1_min_trailing_silence': 2.4, 'rule2_min_trailing_silence': 1.2, 'rule3_min_utterance_length': 20, 'port': 6006, 'nn_pool_size': 1, 'max_batch_size': 50, 'max_wait_ms': 10, 'max_message_size': 1048576, 'max_queue_size': 32, 'max_active_connections': 500, 'num_threads': 1, 'certificate': None, 'doc_root': './python-api-examples/Tests/web'}
Traceback (most recent call last):
File "C:\Users\Fawaz Shaik\sherpao\sherpa-onnx\python-api-examples\Tests\streaming_server1.py", line 798, in
and I saw the change at line 346:
def create_recognizer(args) -> sherpa_onnx.OnlineRecognizer:
    recognizer = sherpa_onnx.OnlineRecognizer(
        tokens=args.tokens,
        encoder=args.encoder_model,
        decoder=args.decoder_model,
        joiner=args.joiner_model,
        num_threads=1,
        sample_rate=16000,
        feature_dim=80,
        decoding_method=args.decoding_method,
        max_active_paths=args.num_active_paths,
        enable_endpoint_detection=args.use_endpoint != 0,
        rule1_min_trailing_silence=args.rule1_min_trailing_silence,
        rule2_min_trailing_silence=args.rule2_min_trailing_silence,
        rule3_min_utterance_length=args.rule3_min_utterance_length,
    )
    return recognizer
def create_recognizer(args) -> sherpa_onnx.OnlineRecognizer:
    if args.encoder:
        recognizer = sherpa_onnx.OnlineRecognizer.from_transducer(
            tokens=args.tokens,
            encoder=args.encoder,
            decoder=args.decoder,
            joiner=args.joiner,
            num_threads=args.num_threads,
            sample_rate=args.sample_rate,
            feature_dim=args.feat_dim,
            decoding_method=args.decoding_method,
            max_active_paths=args.num_active_paths,
            enable_endpoint_detection=args.use_endpoint != 0,
            rule1_min_trailing_silence=args.rule1_min_trailing_silence,
            rule2_min_trailing_silence=args.rule2_min_trailing_silence,
            rule3_min_utterance_length=args.rule3_min_utterance_length,
            provider=args.provider,
        )
    elif args.paraformer_encoder:
        recognizer = sherpa_onnx.OnlineRecognizer.from_paraformer(
            tokens=args.tokens,
            encoder=args.paraformer_encoder,
            decoder=args.paraformer_decoder,
            num_threads=args.num_threads,
            sample_rate=args.sample_rate,
            feature_dim=args.feat_dim,
            decoding_method=args.decoding_method,
            enable_endpoint_detection=args.use_endpoint != 0,
            rule1_min_trailing_silence=args.rule1_min_trailing_silence,
            rule2_min_trailing_silence=args.rule2_min_trailing_silence,
            rule3_min_utterance_length=args.rule3_min_utterance_length,
            provider=args.provider,
        )
    else:
        raise ValueError("Please provide a model")

    return recognizer
I noticed that the Python API example scripts have been changed,
Yes, they have been changed. Please use the latest sherpa-onnx to work with the python-api-examples.
I cannot get transcript results in the server console after updating to sherpa-onnx 1.7.7, and the previous versions which I was using earlier no longer work.
Please post your error logs and the exact command you are using.
I found that print(message) was removed, which caused the text not to be displayed.
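For reference, a minimal sketch of what restoring that print might look like; the helper name and the result dict shape are assumptions based on the local-run logs above, since the surrounding handler is not shown in the thread:

import json

def send_and_log(result: dict) -> str:
    # result mirrors the dicts seen in the local run, e.g.
    # {'text': 'yeah so this works', 'sentiment': 'positive', 'namedentity': '-', 'segment': 0}
    message = json.dumps(result)
    print(message, flush=True)  # restore the console transcript output (flush so it also shows up promptly in docker logs)
    return message              # the caller can still send this string over the websocket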
When running the streaming server there are multiple options other than localhost to connect to, but after generating a certificate to connect to the device IP address I am not able to access the server from another laptop, which is accessing the server using 'ip address'/'port number'.
When not using the certificate, I can access the website and start the connection, but I can't send audio.