hyperledger / aries-cloudagent-python

Hyperledger Aries Cloud Agent Python (ACA-Py) is a foundation for building decentralized identity applications and services running in non-mobile environments.
https://wiki.hyperledger.org/display/aries
Apache License 2.0

Document updates for the Controller Demo and getting it working under python #62

Closed SmithSamuelM closed 4 years ago

SmithSamuelM commented 5 years ago

The readme.md should be changed to include an explicit link to the von-network setup page where it says to start up von-network:

https://github.com/bcgov/von-network#running-the-network-locally

SmithSamuelM commented 5 years ago

This section is not sufficiently specific:

Follow The Script

With both the Alice and Faber agents started, go to the Faber terminal window. The Faber agent has created and displayed an invitation. Copy this invitation and paste it at the Alice prompt. The agents will connect and then show a menu of options:

It should say: copy only the value of the "invitation" key from the invitation response, including the bounding curly brackets, and paste it at the Alice prompt for Invite Details. ...

SmithSamuelM commented 5 years ago

The menu is wrong

The correct menu for Faber is

(1) Issue Credential, (2) Send Proof Request, (3) Send Message (X) Exit? [1/2/3/X]

The correct menu for Alice is

(3) Send Message (4) Input New Invitation (X) Exit? [3/4/X]:

SmithSamuelM commented 5 years ago

The Running Locally instructions are bad. There is no faber-pg.py file, and faber.py does not seem to be meant to be run from the command line. Likewise for alice-pg.py and alice.py.

Running Locally

To run locally, complete the same steps above for running in docker, except use the following in place of the run_demo commands for starting the two agents.

python faber-pg.py 8020
python alice-pg.py 8030

Note that Alice and Faber will each use 5 ports, e.g. running python faber-pg.py 8020 actually uses ports 8020 through 8024. Feel free to use different ports if you want.

To create the Alice/Faber wallets using postgres storage, just add the "--postgres" option when running the script.

Refer to the Follow the Script section below for further instructions.

SmithSamuelM commented 5 years ago

I used python -m from the parent directory to avoid the module-not-found errors (due to relative import references), but I still get this error.

Looks like there is still a dependency on indy that is not installed?

import indy.anoncreds
Faber | ModuleNotFoundError: No module named 'indy'
Faber |

$ python3 -m demo.faber 8020

#1 Provision an agent and wallet, get back configuration details

Faber | Registering Faber Agent with seed d_000000000000000000000000646939
Faber | Got DID: 7DFJs2c9LDBRiojWrS5BPX
Faber | 2019-07-11 14:45:58,660 asyncio ERROR Task exception was never retrieved
Faber | future: <Task finished coro=<start_app() done, defined at /Data/Code/public/aries/cloudagentpy/aries_cloudagent/__init__.py:27> exception=ModuleLoadError('Unable to import module: aries_cloudagent.wallet.indy')>
Faber | Traceback (most recent call last):
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/classloader.py", line 137, in load_class
Faber |     mod = import_module(mod_path)
Faber |   File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
Faber |     return _bootstrap._gcd_import(name[level:], package, level)
Faber |   File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
Faber |   File "<frozen importlib._bootstrap>", line 983, in _find_and_load
Faber |   File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
Faber |   File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
Faber |   File "<frozen importlib._bootstrap_external>", line 728, in exec_module
Faber |   File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/wallet/indy.py", line 7, in <module>
Faber |     import indy.anoncreds
Faber | ModuleNotFoundError: No module named 'indy'
Faber |
Faber | During handling of the above exception, another exception occurred:
Faber |
Faber | Traceback (most recent call last):
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/__init__.py", line 30, in start_app
Faber |     await conductor.start()
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/conductor.py", line 151, in start
Faber |     wallet: BaseWallet = await context.inject(BaseWallet)
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/config/injection_context.py", line 124, in inject
Faber |     return await self.injector.inject(base_cls, settings, required=required)
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/config/injector.py", line 75, in inject
Faber |     result = await provider.provide(ext_settings, self)
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/config/provider.py", line 82, in provide
Faber |     self._instance = await self._provider.provide(config, injector)
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/config/provider.py", line 105, in provide
Faber |     instance = await self._provider.provide(config, injector)
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/wallet/provider.py", line 39, in provide
Faber |     wallet = ClassLoader.load_class(wallet_class)(wallet_cfg)
Faber |   File "/Data/Code/public/aries/cloudagentpy/aries_cloudagent/classloader.py", line 139, in load_class
Faber |     raise ModuleLoadError(f"Unable to import module: {mod_path}")
Faber | aries_cloudagent.classloader.ModuleLoadError: Unable to import module: aries_cloudagent.wallet.indy
Faber |
Faber | Shutting down
Faber | Exited with return code 0
Traceback (most recent call last):
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Data/Code/public/aries/cloudagentpy/demo/faber.py", line 209, in <module>
    asyncio.get_event_loop().run_until_complete(main())
  File "/usr/local/Cellar/python/3.7.4/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 579, in run_until_complete
    return future.result()
  File "/Data/Code/public/aries/cloudagentpy/demo/faber.py", line 106, in main
    await agent.start_process()
  File "/Data/Code/public/aries/cloudagentpy/demo/agent.py", line 282, in start_process
    await self.detect_process()
  File "/Data/Code/public/aries/cloudagentpy/demo/agent.py", line 366, in detect_process
    raise Exception(f"Timed out waiting for agent process to start")
Exception: Timed out waiting for agent process to start

SmithSamuelM commented 5 years ago

The offending file is

wallet/indy.py

with

import indy.anoncreds
import indy.did
import indy.crypto
import indy.pairwise
from indy.error import IndyError, ErrorCode

Note that in requirements.txt

aiohttp~=3.5.4
aiohttp-apispec~=1.1.2
aiohttp-cors~=0.7.0
base58
marshmallow==3.0.0rc3
msgpack>=0.6.1<0.7
pysodium>=0.7.1<0.8

There is no dependency on indy
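For context, one conventional Python pattern for making such an import optional is to guard it and only fail when Indy functionality is actually requested. This is just a sketch of the idea (the HAS_INDY flag and require_indy helper are hypothetical names, not from the aca-py codebase):

```python
# Sketch: treat indy as an optional extra rather than a hard dependency.
# The python3-indy package provides the `indy` module when installed.
try:
    import indy.anoncreds  # noqa: F401
    HAS_INDY = True
except ImportError:
    HAS_INDY = False


def require_indy():
    """Raise a clear, actionable error if the optional indy stack is missing."""
    if not HAS_INDY:
        raise RuntimeError(
            "Indy wallet support requires the python3-indy package and the "
            "libindy shared library; install them, or run without Indy mode."
        )
```

With a guard like this, importing the package succeeds without indy installed, and the missing dependency only surfaces, with a clear message, when an Indy-backed wallet is actually requested.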

nrempel commented 5 years ago

Hi @SmithSamuelM, at the moment the recommended approach for running aca-py is to use the provided Dockerfiles.

Indy has a number of complicated dependencies so we wrap those up in a docker image.

For running the agent locally, please refer to instructions here: https://github.com/hyperledger/aries-cloudagent-python/blob/master/DevReadMe.md#running-locally

For running the demo, please refer to the docker instructions here: https://github.com/hyperledger/aries-cloudagent-python/blob/master/demo/README.md#running-in-docker

SmithSamuelM commented 5 years ago

@nrempel

Thanks for the pointer, but that doesn't really help me much. I am trying to dig into the code to understand how it works so I can write something real beyond the demos. I want to be able to run the agent demo code locally (not in docker) so I can single-step through the code and trace all the calls and dependencies, especially so I can write my own controller or change the functionality of the agent.

It seems the real issue is that the package has a dependency that is not installed (because it is not listed in requirements.txt). What would be most helpful is if you could confirm which version of the PyPI indy package is the right one so I can install it.

Secondly, one reason for my going through the demos is to help clean up the code. So it seems you would want to fix the dependency issue, not merely tell me to ignore it by using docker images, and also fix the docker images themselves so they are clean instead of carrying stale dependencies. Also, given that all Indy code is now being refactored into the Indy ledger, Aries, and Ursa, it would be nice to know that the dependent code is in the right place. Finally, the instructions for the demos conflict with your advice above, so it would be good to fix them.

To restate: could you confirm which version of the PyPI indy package provides the missing dependencies? Thanks

swcurran commented 5 years ago

Sam, the better answer that will help you even more is learning how to debug while running in docker. Best of both worlds. I think Nick has that nailed.

SmithSamuelM commented 5 years ago

@swcurran as far as I can tell the only support for debugging is for the Visual Studio debugger, and I am on macOS and don't use Visual Studio. It really shouldn't be hard to get it running locally. I could just trial-and-error the Indy dependencies, but I thought you guys would know which one to use and save me time. Moreover, debugging a demo is not the same as doing development.

As this is a Python repo, it should at least just work with pip install in a virtual environment and not require running docker. Docker is best for deployment, not development workflows. It's one thing to run a functional test with some parts in docker, but unit tests should run without docker. I am happy to submit pull requests to help it get there. I just don't yet have visibility into how the refactoring is being done, so I am pointing out a problem with a dependency to help clarify whether it's a real hard dependency or an error where the actual dependency needs to be refactored. And if it's in process, then telling me what to do in the meantime would be helpful. Thanks

nrempel commented 5 years ago

@SmithSamuelM, gotcha.

There are several reasons we lean on Docker heavily:

A) Indy is intended to be an optional dependency to aca-py since the project should be agnostic to the ledger that identities are anchored in. Indy is only referenced if the agent is run in an "Indy" mode - we will be working to split this out further in the future.

B) We use Docker internally for our deployments.

C) Installing Indy is not as simple as running pip install, unfortunately. There are many other operating-system-level dependencies used to reference the Indy SDK. If you want your system to be indy-ready, you can start by looking at the current Docker base image we use here: https://github.com/PSPC-SPAC-buyandsell/von-image/blob/master/1.9/Dockerfile.ubuntu. The Indy project also has some guidelines on how to make your system indy-ready.

But to answer your question: the version of indy currently used is 1.9.

The indy python package is added as part of the above-referenced Docker base image. Ideally in the future we will have something like a python package called aries-cloudagent-python-indy which would suck in all of the indy related stuff at once (or as much as it could).

As for the docs - we'll be cleaning those up soon. Thanks for letting us know.

SmithSamuelM commented 5 years ago

> @SmithSamuelM, gotcha.
>
> There are several reasons we lean on Docker heavily:
>
> A) Indy is intended to be an optional dependency to aca-py since the project should be agnostic to the ledger that identities are anchored in. Indy is only referenced if the agent is run in an "Indy" mode - we will be working to split this out further in the future.

As it should be. Happy to help. From the docker config it looks like the only issues that really require docker-like config are permissions issues for storing private keys, which are not so important in development. The rest are just library dependencies for the Indy crypto libraries, which should be installable as either wheel packages or explicit dependencies. Using docker just hides the work of making the install architecture clean. It's more work, but worth it in the long run, especially when the project is up for broader use.

> B) We use Docker internally for our deployments.

Nothing wrong with that.

> C) Installing Indy is not as simple as running pip install, unfortunately. There are many other operating-system-level dependencies used to reference the Indy SDK. If you want your system to be indy-ready, you can start by looking at the current Docker base image we use here: https://github.com/PSPC-SPAC-buyandsell/von-image/blob/master/1.9/Dockerfile.ubuntu. The Indy project also has some guidelines on how to make your system indy-ready.

Thanks, the docker config was informative.

> But to answer your question: the version of indy currently used is 1.9.

Thanks, I will start there.

> The indy python package is added as part of the above-referenced Docker base image. Ideally in the future we will have something like a python package called aries-cloudagent-python-indy which would suck in all of the indy related stuff at once (or as much as it could).

Yes!

> As for the docs - we'll be cleaning those up soon. Thanks for letting us know.

=)

nrempel commented 5 years ago

Thanks Sam!

swcurran commented 5 years ago

@ianco - can you please take a look at this one? I can't assign it to you. There are a couple of things referenced in this - the menu, some instructions text and then running locally. I could address the first two, but the last I can't do because, well - you know - I don't have a fan.

I'll assign it to me, but if you could look, that would be great...

ianco commented 5 years ago

I think you just need to:

pip install python3-indy

This is part of the base image when we build the docker image so it's not in requirements.txt

Note this will also need libindy.so (shared library binary of the core indy sdk) which you can build from source or download per instructions here:

https://github.com/hyperledger/indy-sdk#installing-the-sdk

swcurran commented 5 years ago

The problem of building indy on bare metal has been a constant pain in this project, which is why very early on we decided to go docker-only. We see countless developers spending days sorting these issues out just trying to get Indy running on bare metal.

We've got a pull request in to fix the folders problem, and we'll update the docs to give some guidance on how to run it, but the problem of getting Indy to work on your particular system is going to have to be left to the reader. We'll try to find the pointer to the relevant document in the indy-sdk repo.

@SmithSamuelM - the most recent error we have in this chain from you is the "Indy Module Not Found". If you are past that, could you send your latest error? If you think it is in the Aries code, perhaps we could get a pair programming session going on Zoom to try to move this forward?

Sorry this is so painful.

SmithSamuelM commented 5 years ago

Thanks for following up. I reported my problem in rocket chat; I will copy it here. I was able to get Indy working on bare metal. It just needs a little more depth in the explanation, and I will write that up as a pull request so the next person doesn't have to repeat my problem.

The biggest problem I had was not an Indy issue at all but the fact that rustc 1.36 and rusqlite 0.13.0 are incompatible. Once I downgraded to rustc 1.35, Indy built like a charm (given all the dependencies in the macOS build instructions). The next problem took some guesswork because the build instructions were wrong: I had to symlink the indy-sdk dylib to /usr/local/lib. So that fixed that problem.

The problem I am having now is that when it goes to build the schema on the ledger, it complains that the ledger state is incompatible. I suspect this is a configuration problem. My best guess is that the docker config assigns a bunch of default environment variables that are not assigned in the non-docker approach. In several places in the agent code a config variable is assigned as AGENT_CONFIG_VARIABLE = os.getenv(ENVIRONMENT_VARIABLE_NAME), which means that if the env variable is not assigned then an empty value (None) is used and some default logic someplace happens. This default logic is apparently not designed to just work. This is a fragile approach. I suggest that better would be

AGENT_CONFIG_VARIABLE = os.getenv(ENVIRONMENT_VARIABLE_NAME) or INTELLIGENT_DEFAULT

In this case a default configuration that always just works can be guaranteed if you design it that way. The success rate for bare-metal (non-docker) instantiations becomes much higher, and one can write unit tests against the set of INTELLIGENT_DEFAULTS.
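As a minimal sketch of the suggested pattern (the variable name and default value here are hypothetical, chosen for illustration, not taken from the aca-py code):

```python
import os

# Hypothetical config name and default. os.getenv returns None when the
# variable is unset, so `or` falls through to the default for both the
# unset case and an explicitly empty string.
DEFAULT_LEDGER_URL = "http://localhost:9000"  # assumed local von-network default

LEDGER_URL = os.getenv("LEDGER_URL") or DEFAULT_LEDGER_URL
```

The point is that the fallback is a documented value that works for the common local setup, rather than an empty value that trips hidden default logic elsewhere.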

That said you guys are doing an awesome job putting out an agent and helping the community.

I have spent a lot of time in configuration management of web cloud systems (I was director of R&D for SaltStack, for example). A lesson learned is to always have intelligent defaults for configurations and not assume that the environment will be correct. This makes it easier to track down and fix misconfigurations, partial or complete, without having failures that are not traceable.

FWIW: The good and bad of relying on docker in development.

The good: deployment dependencies are minimized. This means that some dependency not under your control will not be able to break the system since the docker image essentially freezes those dependencies.

The bad: all dependencies are frozen, even the ones you should have control over. When all dependencies are frozen, your development configuration becomes fragile. Development requires making changes and improvements to those dependencies you should have control over. When all dependencies are frozen, developers forget how or why their own dependencies exist, so making any change breaks things and it's hard to find out why.

So dependencies need to be managed in two classes: dependencies under your control and dependencies not under your control. Those not under your control should be frozen and isolated. How they are frozen and isolated is up to the developer. Docker is one way, if not the most convenient way, because debugging is much harder (it's remote debug, not local), but it's an acceptable way. Dependencies under your control need to be fully explicated, automatically tested, and documented, and should be run locally first. This forces developers to understand the dependencies they should understand.

What I see is that at some time in history a combination of insider developers got a docker image working, and now that docker image is used everywhere without understanding how it works or how to change or fix it. This is expedient in the short run but becomes a nightmare when trying to get outside developers to build on top of the system.

I suspect that most of these dependency issues are not that complex: it's likely a bunch of environment variables that the docker config sets and the code unfortunately assumes. So, at the least, explicating those environment configs will make it easy for someone to get a dev environment working.

I understand you have limited resources, and Indy is complex in its own right and a huge dependency you don't control. But the indy environment config you should control, since you are using a local Indy ledger, so that should be explicable.

Hope this helps. I want to help, and my next whack will be to find all the getenv() calls, match them up to the docker values for them, and see if that makes it work. Any help there would be appreciated.
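As a starting point for that audit, a throwaway scan along these lines can list every environment variable the code reads (the helper name and regex here are mine, not from the repo):

```python
import os
import re


def find_getenv_vars(root):
    """Walk a source tree and collect env var names passed to os.getenv()."""
    pattern = re.compile(r"os\.getenv\(\s*['\"]([A-Za-z0-9_]+)['\"]")
    found = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for var in pattern.findall(f.read()):
                    found.setdefault(var, []).append(path)
    return found


# Example: summarize which env vars the agent code consults.
if __name__ == "__main__":
    for var, paths in sorted(find_getenv_vars("aries_cloudagent").items()):
        print(f"{var}: {len(paths)} file(s)")
```

The resulting list can then be compared against the environment the docker config injects, to see which variables need explicit values (or intelligent defaults) in a bare-metal run.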

BTW, the problem getting indy running is that the build instructions do not detail the specific version set of all the dependent libraries. So a new dev will by default just install the latest version of all the libraries, which often means something does not build. And the indy devs are not aggressively testing against the latest versions, so these problems go undetected. A stable build spec with the versions would allow someone to reproduce a stable build. That is what happened to me (rustc 1.36 with rusqlite 0.13.0).

Docker avoids this problem but makes it much harder to develop against.

SmithSamuelM commented 5 years ago

A suggestion, as a long-time Python developer: the standard practice that Python devs are familiar with, and would find more welcoming, is to use Python virtual environments to freeze dependencies you don't have control over instead of docker. It makes sense to use docker to run the indy ledger, since it's an external dependency, but not the agent. Although your design approach is that the agent should be standalone and the only thing needed is a controller, it's still early for that, and you limit the pool of developers who can help improve the agent. Being Python-friendly first with a local agent will pay off in the long run. One of my objectives is to figure out how your agent is configured well enough to do that. Hope you don't mind me asking for help in that regard.

swcurran commented 5 years ago

Thanks for the thoughts, Sam.

We are very happy with our choice to be all in on container-based development and we won't be able to go too deep in supporting the range of bare-metal setups that might be used. We're glad to help, and we're very glad to take PRs on how to do that, but we will stick to a Docker/container focus for our dev work. I would add that our take is the use of Docker is VERY developer friendly. We do encounter devs that aren't yet familiar with containers, but we don't find they push back once they are in the flow. That said I'm still amazed that remote debugging isn't better than it is... :-)

Our experience in the last 18 months of doing Indy development has continually reinforced our chosen approach. We have seen many, many developers show up and die on the hill of trying to get things to run on bare metal. We did a couple of things that helped in avoiding that fate. We deliberately had developers focused on getting underlying components in place for others, so the others didn't have to worry about that level of detail to deliver. That got us delivering value vs. getting things working. Second, to your point on dependencies - we keep a keen focus on what is changing in the dependencies and continue to move them forward. Things are working before they are presented to other devs, but at the same time they track what's been released. However, if a developer is not developing something, they do not use the master branch - they use the last release. We are completely up to date on Indy within hours/days of a new release. We have pioneers that check out the changes and account for them in the artifacts we build. The rest of the developers only need to know what they need to know to keep delivering.

We're pretty confident that the result is that we deliver far faster and we spend way less time on local issues. Any productivity improvement is immediately available to the rest of the team.

Note that our container strategy goes right to production. We have an Openshift (aka Kubernetes) environment and all of our work is pretty rapidly pushed to production. We have (at last count) 7 different streams that go to production, all going through dev, test and into prod. The consistency from the developers desk to production enabled by containers is too useful to give up. Let alone that our build and test automation along the way is all container based.

Anyway, I'm sure you know about all that.

SmithSamuelM commented 5 years ago

@swcurran thanks for the detailed explanation. Docker is popular and capable enough that I can see it being chosen as a hard dependency, with good reason. =) Docker first, only, and always. Which means only remote debugging and only docker deployment. That would also naturally, but unnecessarily, tend toward a hard dependency on Docker injection of environment dependencies even in dev.

My confusion with that is how to reconcile the fact that the Alice/Faber demo explicitly includes instructions for running the Faber and Alice agents locally without docker as a hard dependency for them. As far as I can tell from the bugs I encountered, local running can't work as provided. I assumed that local running was a supported and tested option and that whoever documented it could help me get it running by being more explicit in describing the tested configuration and environment that allowed them to run it locally. What I am hearing suggests that that is not possible?

I was in good faith working through the problems I encountered, which are likely due to differences in setup and configuration not documented in the tutorial as requirements for running locally. So it seems that either the tutorial needs to be changed to state that running locally is not supported (as in, try at your own risk), or whoever on your end is able to run it locally could help by being more complete in describing their successful local non-docker configuration. =) Since I anticipate some future development and deployment applications that require running non-docker agents, I am hopeful for, and grateful for, your continued help in this regard.

swcurran commented 5 years ago

We have the demo running locally on a Mac with the most recent code update, using the most recent documentation updates. Have you given that a try? We're glad to help out with the issues you are having - please spin it up and send us the init setup and log (debug level). We can also get on a Zoom session with you to debug it. We got logs yesterday and think we have a fix for a "time out too short" issue discovered by a user having trouble on a Raspberry Pi.

There's no reason it shouldn't work running locally - just that our team will be using docker for development. Others are welcome to use other techniques and to point out any issues they encounter.

FYI - we are planning on splitting the demo into two separate processes - the controller and the agent. This will change from the agent currently being a sub-process of the controller. We'll use docker compose for the management of that.

Sorry for the delay on this - we definitely want this to be smooth and easy.

SmithSamuelM commented 5 years ago

Thanks @swcurran, I had some other stuff keeping me busy the last couple of days. I will update to the latest version and try again. The zoom call offer is much appreciated. =)

swcurran commented 4 years ago

I think we are pretty good with this issue and am closing.

dnaicker commented 2 years ago

Update: decided to edit response, apologies was just frustrated.