openwallet-foundation / acapy

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building decentralized identity applications and services running in non-mobile environments.
https://wiki.hyperledger.org/display/aries
Apache License 2.0

Unable to resolve: timed out waiting for agent process to start (status=None) on Ubuntu 20.04 and an Ubuntu 20.04 virtual machine. #1793

Closed · eshita53 closed this 1 month ago

eshita53 commented 2 years ago

Hi, I tried to run the Aries cloud agent demo on my laptop. First I cloned the repository with git clone https://github.com/hyperledger/aries-cloudagent-python, then installed the requirements with pip3 install -r demo/requirements.txt. I then changed into the demo folder and ran LEDGER_URL=http://dev.greenlight.bcovrin.vonx.io ./run_demo faber from the command prompt. Every time I get the error mentioned in the title. I have tried every possible solution I could find online, and I get the same error on both Ubuntu 20.04 and an Ubuntu 20.04 VM. Could you please help me resolve this issue?
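For reference, the full sequence of commands was roughly as follows (directory changes reconstructed from the description above):

```bash
# clone the repo and install the demo requirements
git clone https://github.com/hyperledger/aries-cloudagent-python
cd aries-cloudagent-python
pip3 install -r demo/requirements.txt

# run the Faber demo agent against the BCovrin dev ledger
cd demo
LEDGER_URL=http://dev.greenlight.bcovrin.vonx.io ./run_demo faber
```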

swcurran commented 2 years ago

Please post the logs of what you are doing. Are you behind a firewall?

One thing you could try, to rule that out, is to run a local instance of von-network.

That will eliminate firewall issues accessing external ledgers.
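For example, starting a local von-network usually looks something like this (a sketch based on the standard von-network repository at https://github.com/bcgov/von-network; check its README for the exact, current steps):

```bash
# clone and start a local von-network: four indy nodes plus a ledger browser/webserver on port 9000
git clone https://github.com/bcgov/von-network
cd von-network
./manage build
./manage start --logs
```

Once the local ledger is up, running ./run_demo faber without setting LEDGER_URL should make the demo fall back to the local ledger on port 9000.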

eshita53 commented 2 years ago

Hi, after running the above-mentioned method I am getting another error: Error retrieving ledger genesis transactions. Another thing I want to mention: I am also getting an error initializing the pool ledger on localhost:9000. I don't know what the problem is.
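(As a quick sanity check, assuming the local von-network webserver is supposed to be listening on port 9000, one can confirm its genesis endpoint responds from the host:)

```bash
# check that the local von-network webserver is serving the genesis transactions
curl http://localhost:9000/genesis
```

If this hangs or errors, the ledger webserver never came up, which would match the errors above.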

eshita53 commented 2 years ago

```
./run_demo faber
Preparing agent image...
sha256:fd61ce9c4175820e5bb321e19472cc77aaabd7206d364267233c3479087d2ad5
Trying to detect ngrok service endpoint
ngrok not detected for agent endpoint
DOCKERHOST=10.0.2.15
DOCKER_ENV=-e LOG_LEVEL= -e RUNMODE=docker -e DOCKERHOST=10.0.2.15 -e AGENT_PORT=8020 -e TRACE_TARGET=log -e TRACE_TAG=acapy.events -e TRACE_ENABLED=
Starting [faber] agent with args [--port 8020]
ERROR:runners.support.agent:Error loading genesis transactions:
Traceback (most recent call last):
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/connector.py", line 986, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/asyncio/base_events.py", line 798, in create_connection
    raise exceptions[0]
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/asyncio/base_events.py", line 785, in create_connection
    yield from self.sock_connect(sock, address)
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/asyncio/selector_events.py", line 439, in sock_connect
    return (yield from fut)
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/asyncio/selector_events.py", line 469, in _sock_connect_cb
    raise OSError(err, 'Connect call failed %s' % (address,))
TimeoutError: [Errno 110] Connect call failed ('10.0.2.15', 9000)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/indy/demo/runners/support/agent.py", line 97, in default_genesis_txns
    f"http://{DEFAULT_EXTERNAL_HOST}:9000/genesis"
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/client.py", line 1138, in __aenter__
    self._resp = await self._coro
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/client.py", line 536, in _request
    req, traces=traces, timeout=real_timeout
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/connector.py", line 542, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/connector.py", line 907, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
    raise last_exc
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/connector.py", line 1187, in _create_direct_connection
    client_error=client_error,
  File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/aiohttp/connector.py", line 992, in _wrap_create_connection
    raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host 10.0.2.15:9000 ssl:default [Connect call failed ('10.0.2.15', 9000)]
Error retrieving ledger genesis transactions
```

ianco commented 2 years ago

The agent is not able to connect with your ledger. Can you post the logs for your local von-network?
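If you used the standard von-network manage script, something like the following should capture them (a sketch; the logs subcommand and compose layout may vary by von-network version):

```bash
# from the von-network checkout: follow the logs of all ledger containers
./manage logs

# or use docker-compose directly, from the directory containing von-network's docker-compose.yml
docker-compose logs -f
```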

eshita53 commented 2 years ago

Here are the logs:

```
node4_1 | 2022-06-03 16:26:04,188|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node4
node4_1 | 2022-06-03 16:26:04,193|INFO|pool_manager.py|Node4 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node4_1 | 2022-06-03 16:26:04,193|INFO|pool_manager.py|Node4 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node4_1 | 2022-06-03 16:26:04,194|INFO|pool_manager.py|Node4 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node4_1 | 2022-06-03 16:26:04,195|INFO|pool_manager.py|Node4 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node4_1 | 2022-06-03 16:26:04,208|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-06-03 16:26:04,212|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2022-06-03 16:26:04,229|INFO|stacks.py|Node4C: clients connections tracking is enabled.
node4_1 | 2022-06-03 16:26:04,230|INFO|stacks.py|Node4C: client stack restart is enabled.
node4_1 | 2022-06-03 16:26:04,383|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node4_1 | 2022-06-03 16:26:04,383|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin.stats_consumer.stats_publisher.StatsPublisher'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-06-03 16:26:04,384|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <enum 'Topic'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-06-03 16:26:04,384|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.plugin_loader.HasDynamicallyImportedModules'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-06-03 16:26:04,385|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: <class 'plenum.server.stats_consumer.StatsConsumer'>] because it does not have a 'pluginType' attribute
node4_1 | 2022-06-03 16:26:04,385|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node4_1 | 2022-06-03 16:26:04,389|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node2', 'Node4', 'Node1', 'Node3'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | 2022-06-03 16:26:04,390|INFO|consensus_shared_data.py|Node4:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-06-03 16:26:04,396|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-06-03 16:26:04,398|NOTIFICATION|replicas.py|Node4 added replica Node4:0 to instance 0 (master)
node4_1 | 2022-06-03 16:26:04,398|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2022-06-03 16:26:04,401|INFO|consensus_shared_data.py|Node4:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2022-06-03 16:26:04,406|NOTIFICATION|replicas.py|Node4 added replica Node4:1 to instance 1 (backup)
node4_1 | 2022-06-03 16:26:04,406|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2022-06-03 16:26:04,407|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node2', 'Node4', 'Node1', 'Node3'}
node4_1 | 2022-06-03 16:26:04,407|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node2', 'Node4', 'Node1', 'Node3'}
node4_1 | 2022-06-03 16:26:04,407|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: Node1:0
node4_1 | 2022-06-03 16:26:04,407|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2022-06-03 16:26:04,408|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2022-06-03 16:26:04,408|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node4_1 | 2022-06-03 16:26:04,409|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: Node2:1
node4_1 | 2022-06-03 16:26:04,409|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node4_1 | 2022-06-03 16:26:04,409|INFO|node.py|total plugins loaded in node: 0
node4_1 | 2022-06-03 16:26:04,415|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f9655a406d8> initialized state for ledger 0: state root 9mYGVPSfvA24aJo5vnrC1JDBNf3dVSYEAJUVwaapkjeL
node4_1 | 2022-06-03 16:26:04,415|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f9655a406d8> found state to be empty, recreating from ledger 2
node4_1 | 2022-06-03 16:26:04,415|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f9655a406d8> initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node4_1 | 2022-06-03 16:26:04,424|INFO|ledgers_bootstrap.py|<indy_node.server.node_bootstrap.NodeBootstrap object at 0x7f9655a406d8> initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node4_1 | 2022-06-03 16:26:04,424|INFO|motor.py|Node4 changing status from stopped to starting
node4_1 | 2022-06-03 16:26:04,428|INFO|zstack.py|Node4 will bind its listener at 0.0.0.0:9707
node4_1 | 2022-06-03 16:26:04,429|INFO|stacks.py|CONNECTION: Node4 listening for other nodes at 0.0.0.0:9707
node4_1 | 2022-06-03 16:26:04,434|INFO|zstack.py|Node4C will bind its listener at 0.0.0.0:9708
node4_1 | 2022-06-03 16:26:04,480|INFO|node.py|Node4 first time running...
node4_1 | 2022-06-03 16:26:04,481|INFO|node.py|Node4 processed 0 Ordered batches for instance 0 before starting catch up
node4_1 | 2022-06-03 16:26:04,484|INFO|node.py|Node4 processed 0 Ordered batches for instance 1 before starting catch up
node4_1 | 2022-06-03 16:26:04,484|INFO|ordering_service.py|Node4:0 reverted 0 batches before starting catch up
node4_1 | 2022-06-03 16:26:04,484|INFO|node_leecher_service.py|Node4:NodeLeecherService starting catchup (is_initial=True)
node4_1 | 2022-06-03 16:26:04,485|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from Idle to PreSyncingPool
node4_1 | 2022-06-03 16:26:04,485|INFO|cons_proof_service.py|Node4:ConsProofService:0 starts
node4_1 | 2022-06-03 16:26:04,491|INFO|kit_zstack.py|CONNECTION: Node4 found the following missing connections: Node2, Node1, Node3
node4_1 | 2022-06-03 16:26:04,492|INFO|zstack.py|CONNECTION: Node4 looking for Node2 at 10.0.2.15:9703
node4_1 | 2022-06-03 16:26:04,493|INFO|zstack.py|CONNECTION: Node4 looking for Node1 at 10.0.2.15:9701
node4_1 | 2022-06-03 16:26:04,497|INFO|zstack.py|CONNECTION: Node4 looking for Node3 at 10.0.2.15:9705
node4_1 | 2022-06-03 16:26:19,488|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-06-03 16:26:34,489|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2022-06-03 16:26:49,502|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
webserver_1 | 2022-06-03 16:26:57,688|WARNING|libindy.py|_indy_loop_callback: Function returned error
webserver_1 | 2022-06-03 16:26:57,691|ERROR|anchor.py|Initialization error:
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/home/indy/server/anchor.py", line 221, in _open_pool
webserver_1 |     self._pool = await pool.open_pool_ledger(pool_name, json.dumps(pool_cfg))
webserver_1 |   File "/home/indy/.pyenv/versions/3.6.13/lib/python3.6/site-packages/indy/pool.py", line 88, in open_pool_ledger
webserver_1 |     open_pool_ledger.cb)
webserver_1 | indy.error.PoolLedgerTimeout
webserver_1 |
webserver_1 | The above exception was the direct cause of the following exception:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/home/indy/server/anchor.py", line 317, in open
webserver_1 |     await self._open_pool()
webserver_1 |   File "/home/indy/server/anchor.py", line 223, in _open_pool
webserver_1 |     raise AnchorException("Error opening pool ledger connection") from e
webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection
webserver_1 | 2022-06-03 16:26:57,693|INFO|server.py|--- Trust anchor initialized ---
node2_1 | 2022-06-03 16:27:03,271|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-06-03 16:27:03,271|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-06-03 16:27:03,272|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-06-03 16:27:03,298|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 primary has been disconnected for too long
node2_1 | 2022-06-03 16:27:03,298|INFO|primary_connection_monitor_service.py|Node2:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node2_1 | 2022-06-03 16:27:03,298|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2022-06-03 16:27:03,438|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
```


ianco commented 2 years ago

```
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/home/indy/server/anchor.py", line 317, in open
webserver_1 |     await self._open_pool()
webserver_1 |   File "/home/indy/server/anchor.py", line 223, in _open_pool
webserver_1 |     raise AnchorException("Error opening pool ledger connection") from e
webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection
```

It looks like your ledger isn't starting up properly.

If you're having issues with both a local and a remote ledger (you mentioned http://dev.greenlight.bcovrin.vonx.io/), then there is some issue with your local environment. There could be many causes: running out of Docker resources is a common one, firewall issues (as @swcurran mentioned), the Docker version, etc.
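A few generic checks that can help narrow this down (standard Docker and Ubuntu commands, nothing ACA-Py specific):

```bash
# how much disk space Docker is using, and which containers are (or were) running
docker system df
docker ps -a

# which Docker engine version is in use
docker version

# whether Ubuntu's local firewall (ufw) is active
sudo ufw status
```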

Note that when you run the demo it runs in a docker container, so you don't need to install the requirements with pip3 install -r demo/requirements.txt (you only need that if you are running aca-py locally, not in docker).

eshita53 commented 2 years ago

I have tried it on two laptops: one with Ubuntu 20.04 (Docker 20.10.14, docker-compose 1.27.4, Python 3.8.10), and another with Windows 10 running an Ubuntu 20.04 VM (Docker 20.10.16, Docker Compose v2.6.0, Python 3.8.10). Could you please suggest what I should do?

ianco commented 2 years ago

I suggest not running in a VM - everything already runs in docker containers, so there is no reason to run docker inside a VM as well.

Otherwise it's difficult to diagnose problems in your specific environment. For example, if you're running a local firewall (as @swcurran mentioned), it could be preventing the agent and the nodes from connecting to each other.

If you're connecting to http://dev.greenlight.bcovrin.vonx.io/, you should be able to open it in a browser (this will verify that your laptop can reach the service).
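For example, from a terminal (the /genesis path is the genesis endpoint the demo fetches, as shown in the traceback above):

```bash
# verify the laptop can reach the remote ledger and download its genesis file
curl http://dev.greenlight.bcovrin.vonx.io/genesis
```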

If you're not able to run von-network locally I suggest opening an issue in the von-network github repository.