whittlem / pycryptobot

Python Crypto Bot (PyCryptoBot)
Apache License 2.0

Proposal for a 3.0 architecture #281

Closed yanone closed 3 years ago

yanone commented 3 years ago

Hello all,

I discovered pycryptobot only a few days ago; I literally searched for python crypto bot ;) I instantly fell in love with it, because for the first time I could understand the mechanics of algorithmic trading. What was unfathomable only a week ago became a reality for me. I tried some unsuccessful stock trading earlier, but I’m disgusted by the stock market and have long been looking for an entry point into crypto and algorithmic trading, particularly for PoS-based coins for energy efficiency. The chat group is nice and welcoming, and also growing very fast (55% in the one week I’ve been in). Maybe a thread-based chat could be useful? Reddit?

My background is in graphic design and type design, with some art, but I’ve always been more successful with my software development skills than with anything else; today I earn my living with Python development, all on open source projects.

I already contributed the Raspberry Pi Tutorial, and I’m planning to contribute more. I want to turn my Pi into somewhat of an art project: framing it in a thick golden frame, hanging it on my wall, and displaying some custom content, maybe a nicely designed and rendered pixel image about its status and past trades, maybe candlesticks. I might also add physical buy and sell buttons, just for fun.

Combining my ideas with things recently proposed or inquired about in the chat, I quickly realized that the current architecture is very limiting. Asking the developers to incorporate all of those feature requests is too much to ask, I feel. Yet some of the ideas are extremely awesome; receiving TradingView calls as trade signals, for example, is a killer feature. My physical-button idea alone requires an architecture change. I want to be able to do such things myself without bothering anyone. But I don't want to do that in a fork, as incorporating upstream updates becomes increasingly messy, nor do I want to start a completely separate project, because I lack the knowledge in trading. And ultimately, I do want to give my own ideas for trading algorithms a try, as do others. And honestly, the trading algorithm is where the tendies are, so I’m convinced it’s in everyone’s interest to allow streams of different ideas to pour into the project. The current iterative design effectively shuts out outside ideas because there is no efficient way to incorporate them, and I also believe that the current setup, with all the code in one file, is error-prone because it’s completely overloaded.

Therefore, I want to propose a radical rewrite of pycryptobot for a 3.0 version. I’m dreaming of picking the current bot apart cleanly into its components. I see four categories of modules at play here:

[Image: Architecture-3.0 diagram]

Traders, Exchanges, Renderers, and a central Dealer. Feel free to find better suited names for these.

The lighter colours in the image represent home-cooked modules; the others are currently part of the core package.

Each of these categories should be cleanly separated, possibly into their own modules, possibly even into separate Python packages altogether. They could also be grouped into one package, but they don’t need to be. So the original pycryptobot could contain all of its original components in one package, essentially not changing the way it’s used today, while allowing others to develop and optionally publish their own components in any of the categories (except the Dealer). Modules would simply have to be available on Python’s sys.path and referenced by name in the config. This opens up several channels for distribution: PyPI could be used for the main packages, but anyone can develop their own module at home and either place it in a directory already on sys.path, or have the config file include the module’s path, to be added to sys.path by the Dealer during module loading.
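
To make this a bit more concrete, here is a minimal sketch of what module loading by the Dealer could look like; the config entry format, key names, and the function name are only placeholders, not a final API:

import importlib
import sys

def load_component(entry):
    """Load one component described by a config entry such as
    {"module": "pycryptobot_exchange_coinbasepro", "class": "Exchange", "path": "/home/pi/my_modules"}."""
    extra_path = entry.get("path")
    if extra_path and extra_path not in sys.path:
        # home-cooked modules outside site-packages become importable
        sys.path.insert(0, extra_path)
    module = importlib.import_module(entry["module"])
    component_class = getattr(module, entry["class"])
    # hand the component its own slice of the configuration
    return component_class(config=entry.get("config", {}))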

Even the original pycryptobot could be shipped as separate packages altogether. I have a hunch that some developers already focus on certain parts more than on others, say the communication with an exchange, for example. In this case, the core pycryptobot package would simply reference the up-to-date version of the exchange package explicitly in setup.py as pycryptobot-exchange-coinbasepro>=1.2.3, and updating pycryptobot with pip would automatically update the dependent packages as well (actually I'm not sure how this works, but either way I'm sure it's possible to issue a pip update command that also updates dependencies to their latest version regardless of whether the version is explicitly mentioned in setup.py).

Utilizing pip, packages don’t need to be hosted on PyPI, either. They could stay on Github alone and be installed using pip’s git+ URL notation. This could also be used for the core package if you guys don’t feel like putting it on PyPI.

The Categories In Detail

Let’s look at the individual categories. The idea is that each of the boxes in the image exists as a separate class, regardless of which package it belongs to. They communicate with each other through a clearly defined interface of methods. Basically all we need to do is precisely define the interface, as agnostic to individual realities as possible (looking at the "1h" and 3600 granularity settings here), and the ecosystem will flourish on its own without intervention from the core developers. This will not only enable an ecosystem, but also relieve the developers of some of their duties, as anyone can develop their own specific solution by just taking each category’s class template and filling it in, or by cloning an existing module and tweaking it.

Dealer

This would exist only once and is the center of the hub-and-spoke design. It would only do a few things:

  • Load all the components from their modules and hand over their respective configuration
  • Issue a tick(i) every minute. The tick goes out mainly to the Trader module, which can choose to ask the Exchange module for fresh trading data (but doesn’t need to, see the section on Traders). If deemed necessary, the Trader module would then issue a trading signal back to the Dealer, who would forward it to the Exchange module. Finally, the Dealer notifies all Renderers of the transaction.

The Dealer exposes all modules for cross-reference:

  • exchanges = [Exchange()] is a list of Exchange modules. Normally only one makes sense, but it’s conceivable that one Exchange module handles the trade signals with the exchange while another provides the trading data from a different data source, so it’s better to keep this option open from the beginning.
  • traders = [Trader()] is a list of Trader modules. While normally only one makes sense, in the case of my idea of adding physical buttons to my Raspberry Pi next to the bot’s algo, two are needed, so that should be allowed.
  • renderers = [Renderer()] are any number of renderers.

The Dealer exposes the following callback methods:

  • sellCallback() Triggered by the Trader, fanned out first to the Exchange and then to the Renderer modules
  • buyCallback() Triggered by the Trader, fanned out first to the Exchange and then to the Renderer modules
  • candlestickDetectedCallback() Triggered by the Trader, fanned out to the Renderer modules

The Dealer exposes the following functional methods:

  • getTradingData(granularity, timeframe) Because it’s unclear which Exchange module provides the trading data, it’s best to control this functionality centrally. The Dealer’s getTradingData() method is the central interface, and it decides on its own where to get the data from depending on the module setup.

The Dealer issues the minutely tick to the Trader modules.

Traders

Currently the actual trading algorithm: Michael’s MACD algo. When it receives the tick, it asks the Exchange for updated trading data, processes the data, and issues trading signals and candlestick detections back to the Dealer, who processes these calls.

However, a Trader isn’t required to implement the tick, nor is it required to obtain trading data from the Exchange. Any other method of obtaining data and trading signals is allowed, such as listening to messaging queues or webhooks (like signals coming in from TradingView), physical buttons on a Raspberry Pi, etc. The Telegram trader in the image above implies a module that receives /buy and /sell signals that you type into a Telegram bot. Just an idea.

The Trader exposes the following callback methods:

  • tickCallback(i) Triggered by the Dealer. This is the main loop, with an incremental iteration index

Exchange

An Exchange module represents the connection to an actual coin exchange such as Binance or Coinbase Pro. It’s responsible for the actual trades, for obtaining live or historic trading data, as well as for recovering the state that the account, and therefore the bot, is currently in (last action = buy/sell).

The Exchange may expose the following callback methods:

  • sellCallback() Triggered by the Dealer as a forward from the Trader
  • buyCallback() Triggered by the Dealer as a forward from the Trader

The Exchange may expose the following functional methods:

  • getTradingData(granularity, timeframe)
  • getState()

The method descriptions above say may because it could be possible that one Exchange is responsible for the trading while another Exchange is responsible for providing the trading data. I don’t see a use case for this right away, but I would want to have this option from the beginning to retain full flexibility. The Dealer vets each Exchange for its capabilities on startup and raises an error if a mismatch or omission is detected. Where this makes a bit more sense is simulations: I could choose to obtain a very large dataframe covering many weeks of minute-by-minute data points and expose that database to the simulation through the standardized interface of my home-rolled Exchange module.

Renderers

Renderers are any form of output. This includes the console, image generation, log files, Telegram notification. Renderers are one of the most obvious fields where separate modules make sense. What if I don’t want to see candlestick announcements in my Telegram notifications? Do I want to bother the authors to make that feature, or will I simply clone the Telegram Renderer and tweak that little detail myself, while keeping the entire original codebase intact? Do I want to bother anyone to make me a hook for generating my own images?

The Renderer may expose the following callback methods:

  • sellCallback() Triggered by the Dealer as a forward from the Trader, after a successful transaction
  • buyCallback() Triggered by the Dealer as a forward from the Trader, after a successful transaction
  • candlestickDetectedCallback() Triggered by the Dealer as a forward from the Trader
  • updatedTradingDataCallback() Triggered by the Dealer after a successful getTradingData(), for printing minutely trading numbers
  • errorCallback() Triggered by the Dealer after the detection of an erroneous transaction

Other Notes

Invocation

Since pycryptobot would be installed as a package and sit in the site-packages folder, the main app would be exposed via the entry_points feature of setup.py. A command line call would then simply be:

pycryptobot -c config.json
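
For reference, the entry_points wiring could look roughly like this (the package layout and function name are only an assumption of how 3.0 might be organised):

from setuptools import setup, find_packages

setup(
    name="pycryptobot",
    version="3.0.0",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # exposes a `pycryptobot` command that calls main() in pycryptobot/cli.py
            "pycryptobot = pycryptobot.cli:main",
        ],
    },
)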

Language

All methods and variables need to be named in a clear and understandable way. For methods, I already introduced the *Callback() and get*() scheme above. Variable names need to follow the same principle: df is not understandable to a lay person who is just learning programming or trading and skimming through the code.

Error Handling

All functional methods (not callback methods) need to return a (success, message) tuple for proper error handling by the Dealer.

For example, I recently set up my Binance bot, but hadn’t traded yet and didn’t know about the 5,000€ trading limit that applies until my home address has been verified. My Telegram bot received the successful BUY signal message anyway, while only the console showed the error. Had I not been paying attention, the error would have been lost on me. I can see how something like this slips through in the current iterative approach, as it’s just overwhelming to maintain.

Atomizing the current architecture alone will greatly reduce the risk of programming errors by breaking a large codebase down into easily recognizable chunks.

Configuration

I’m also in favour of rewriting the configuration, beyond the restructuring into sections for the separate modules that will be necessary anyway. First, only positive option names should be used, avoiding double-negative situations like disablebuynearhigh=1, which is too difficult to understand.

Also, I propose moving to true/false for boolean options, leaving integer settings where they make sense. For example, the verbose=1 setting is confusing: many command line tools implement varying degrees of verbosity through integer levels, whereas in pycryptobot a boolean setting is intended.

As far as configuration on the command line goes, I propose handing over a separate JSON string as the configuration for each individual module, so something like this:

pycryptobot -e coinbasepro '{"option": true}' -t macd '{"option": true}' -r console -r telegram '{"key": "a0cd"}'

Final Notes

My proposal probably isn’t complete, as I haven’t actually read all the code and understood all the functionality. Most importantly, I haven’t looked at the trading data yet.

My idea is that, after we’ve finalized the API together, I would contribute:

  • A functional Dealer module, implementing the configuration and module loading, the tick, and the actual functional Dealer methods, forwarding calls between the modules including error handling.
  • Dummy classes for the other three categories, functional only as far as config loading goes.
  • A functional installable Python package, with a functional example of a home-cooked module not sitting in sys.path (functional here means on package-level).

But I would leave it to you to rewrite your code into the individual modules.

I have time to work on this starting July.

API

We would have to create a clear API definition, and I would like to be involved in that to ensure service-agnostic data definitions. I propose sending only JSON-style data between methods, or rather a dictionary, bundling all parameters into just one variable.

An API version could be shipped in the dictionary, so that in the rare case of an API change, all modules can react to it accordingly to implement backwards compatibility. All classes would expose their supported API versions, and the Dealer would check for module compatibility on startup and otherwise raise an error. Maintaining API versions adds a bit of overhead, but it’s a necessary price to pay for the much cleaner design.
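
A rough sketch of the startup check I have in mind (the names and the version scheme are placeholders):

DEALER_API_VERSION = 3

def check_module_compatibility(modules):
    """Raise at startup if a loaded module does not support the Dealer's API version."""
    for module in modules:
        supported = getattr(module, "supported_api_versions", [])
        if DEALER_API_VERSION not in supported:
            raise RuntimeError(
                f"{module.__class__.__name__} supports API versions {supported}, "
                f"but this Dealer speaks version {DEALER_API_VERSION}"
            )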

Also, using the dictionary means that one can simply hand over an additional variable for testing, or even permanently, without breaking the API, as long as the receiving module checks for the availability of that key in the dictionary rather than blindly relying on it.

Benchmarking

Also, I would love it if pycryptobot provided an extensive backtest dataset for benchmarking: something like a month-long database covering all available granularities, with the exact same minute-by-minute price updates that the app sees when running live. As far as I understand, the simulations currently use data points at the chosen granularity only, not minute by minute. If this database is from a bear market, even better. As far as I know, the exchanges only return 300 data points, but in my idea each granularity would come as 43,200 data points (60 min x 24 h x 30 days).

Having such a fixed database readily available as part of the core package (in my idea as an Exchange module) would enable us to truly compare a single algorithm between granularities, and benchmark algos against each other. Someone would just have to log the live data in all granularities for a whole month, and package that into an Exchange module.
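
To illustrate, such a benchmark Exchange module could be little more than a pre-recorded minute-by-minute CSV plus a resampling step; the file name, column names, and method signature below are only assumptions for the sketch:

import pandas as pd

class BenchmarkExchange:
    """Serves a fixed, pre-recorded month of minute candles instead of live exchange data."""

    def __init__(self, csv_path="benchmark_month.csv"):
        # CSV columns: time, open, high, low, close, volume - one row per minute
        self.data = pd.read_csv(csv_path, parse_dates=["time"], index_col="time")

    def getTradingData(self, granularity=3600, timeframe=None):
        # resample the minute data up to the requested granularity (in seconds)
        candles = self.data.resample(f"{granularity}s").agg(
            {"open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum"}
        )
        if timeframe is not None:
            candles = candles.loc[timeframe[0]:timeframe[1]]
        return candles.dropna()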

whittlem commented 3 years ago

This sounds great. You obviously have put a lot of thought and effort into this. Although I've been programming for over 20 years, I only learnt Python 7 months ago when I did my Applied Machine Learning course. I'm aware that some, maybe quite a lot, of my original code could be improved. Let's see what feedback the other contributors (@dthevenin @happz @iThom @letsautom8 @marioK93 @rcarmo @edrickrenan @emreeroglu) have, but it does sound like a positive change to me. How do you propose 3.0 is developed? Should we create a new 3.0 branch? I am sure other contributors as well as myself would be interested in helping with this once a plan is finalised.

happz commented 3 years ago

First of all: strict API, modularization? Hell yes. pycryptobot does its job, but, honestly, for an outsider, it's quite hard to modify and/or extend.

Hello all,

...

Each of these categories should be cleanly separated, possibly into their own modules, possibly even into separate Python packages altogether. They could also be grouped into one package, but they don’t need to be. So the original pycryptobot could contain all of its original components in one package, essentially not changing the way it’s used today, while allowing others to develop and optionally publish their own components in any of the categories (except the Dealer). Modules would simply have to be available on Python’s sys.path and referenced by name in the config. This opens up several channels for distribution: PyPI could be used for the main packages, but anyone can develop their own module at home and either place it in a directory already on sys.path, or have the config file include the module’s path, to be added to sys.path by the Dealer during module loading.

Even the original pycryptobot could be shipped as separate packages altogether. I have a hunch that some developers already focus on certain parts more than on others, say the communication with an exchange, for example. In this case, the core pycryptobot package would simply reference the up-to-date version of the exchange package explicitly in setup.py as pycryptobot-exchange-coinbasepro>=1.2.3, and updating pycryptobot with pip would automatically update the dependent packages as well (actually I'm not sure how this works, but either way I'm sure it's possible to issue a pip update command that also updates dependencies to their latest version regardless of whether the version is explicitly mentioned in setup.py).

Utilizing pip, packages don’t need to be hosted on PyPI, either. They could stay on Github alone and be installed using pip’s git+ URL notation. This could also be used for the core package if you guys don’t feel like putting it on PyPI.

A few points here:

The Categories In Detail

Let’s look at the individual categories. The idea is that each of the boxes in the image exists as a separate class, regardless of which package it belongs to. They communicate with each other through a clearly defined interface of methods. Basically all we need to do is precisely define the interface, as agnostic to individual realities as possible (looking at the "1h" and 3600 granularity settings here), and the ecosystem will flourish on its own without intervention from the core developers. This will not only enable an ecosystem, but also relieve the developers of some of their duties, as anyone can develop their own specific solution by just taking each category’s class template and filling it in, or by cloning an existing module and tweaking it.

Indeed, the pycryptobot package should provide base classes for the plugins of the different categories, which would define their API.
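
Something along these lines, for example - just a sketch of the shape, with method names taken from the proposal above, not a final API:

class Trader:
    """Base class for Trader plugins; defines the callbacks a Trader may implement."""

    def tickCallback(self, i):
        pass  # default: a Trader is free to ignore the tick

class Renderer:
    """Base class for Renderer plugins; all callbacks default to doing nothing."""

    def buyCallback(self, order):
        pass

    def sellCallback(self, order):
        pass

    def candlestickDetectedCallback(self, candlestick):
        pass

    def errorCallback(self, message):
        pass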

Dealer

This would exist only once and is the center of the hub-and-spoke design. It would only do a few things:

  • Load all the components from their modules and hand over their respective configuration
  • Issue a tick(i) every minute. The tick goes out mainly to the Trader module, which can choose to ask the Exchange module for fresh trading data (but doesn’t need to, see the section on Traders). If deemed necessary, the Trader module would then issue a trading signal back to the Dealer, who would forward it to the Exchange module. Finally, the Dealer notifies all Renderers of the transaction.

The Dealer exposes all modules for cross-reference:

  • exchanges = [Exchange()] is a list of Exchange modules. Normally only one makes sense, but it’s conceivable that one Exchange module handles the trade signals with the exchange while another provides the trading data from a different data source, so it’s better to keep this option open from the beginning.
  • traders = [Trader()] is a list of Trader modules. While normally only one makes sense, in the case of my idea of adding physical buttons to my Raspberry Pi next to the bot’s algo, two are needed, so that should be allowed.

IIUIC, the idea is to allow more than one trader, which is the piece examining the market and deciding whether to buy/sell/wait. What happens when these multiple traders don't agree on the signal, each of them yielding a different decision?

  • renderers = [Renderer()] are any number of renderers.

The Dealer exposes the following callback methods:

  • sellCallback() Triggered by the Trader, fanned out first to the Exchange and then to the Renderer modules
  • buyCallback() Triggered by the Trader, fanned out first to the Exchange and then to the Renderer modules
  • candlestickDetectedCallback() Triggered by the Trader, fanned out to the Renderer modules

Now here I see a problem: IIUIC, Dealer calls a Trader, and this Trader then calls Dealer, which then calls an Exchange and so on.

Instead, I'd rather keep the Dealer in control: the Dealer calls the Trader, receives a decision (or error), and then decides what to do with it. No calls from the Trader back to the Dealer; the decision should be reported back to the Dealer as a return value from the Trader. This gives the Dealer full power over the process.

So, instead of callbacks, which tend to form unexpected chains, I'd rather propose using return values, which can be more complex than just True or False. With Python's dataclasses and enums, it's possible to create a "container" bundling the decision and any other interesting info Trader might want the Dealer to be aware of.
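
For illustration, such a container could look like this (the type and field names are only a suggestion):

from dataclasses import dataclass, field
from enum import Enum, auto

class Action(Enum):
    BUY = auto()
    SELL = auto()
    WAIT = auto()

@dataclass
class TraderDecision:
    action: Action
    reason: str = ""
    # anything else the Trader wants the Dealer to know, without changing the API
    extra: dict = field(default_factory=dict)

# Dealer side, roughly:
# decision = trader.tickCallback(i)
# if decision.action is Action.BUY:
#     ...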

The Dealer exposes the following functional methods:

  • getTradingData(granularity, timeframe) Because it’s unclear which Exchange module provides the trading data, it’s best to control this functionality centrally. The Dealer’s getTradingData() method is the central interface, and it decides on its own where to get the data from depending on the module setup.

The Dealer issues the minutely tick to the Trader modules.

Traders

Currently the actual trading algorithm: Michael’s MACD algo. When it receives the tick, it asks the Exchange for updated trading data, processes the data, and issues trading signals and candlestick detections back to the Dealer, who processes these calls.

However, a Trader isn’t required to implement the tick, nor is it required to obtain trading data from the Exchange. Any other method of obtaining data and trading signals is allowed, such as listening to messaging queues or webhooks (like signals coming in from TradingView), physical buttons on a Raspberry Pi, etc. The Telegram trader in the image above implies a module that receives /buy and /sell signals that you type into a Telegram bot. Just an idea.

+1 here, sounds about right.

The Trader exposes the following callback methods:

  • tickCallback(i) Triggered by the Dealer. This is the main loop, with an incremental iteration index

Exchange

An Exchange module represents the connection to an actual coin exchange such as Binance or Coinbase Pro. It’s responsible for the actual trades, for obtaining live or historic trading data, as well as for recovering the state that the account, and therefore the bot, is currently in (last action = buy/sell).

The Exchange may expose the following callback methods:

  • sellCallback() Triggered by the Dealer as a forward from the Trader
  • buyCallback() Triggered by the Dealer as a forward from the Trader

The Exchange may expose the following functional methods:

  • getTradingData(granularity, timeframe)
  • getState()

The method descriptions above say may because it could be possible that one Exchange is responsible for the trading while another Exchange is responsible for providing the trading data. I don’t see a use case for this right away, but I would want to have this option from the beginning to retain full flexibility. The Dealer vets each Exchange for its capabilities on startup and raises an error if a mismatch or omission is detected.

There should be no may in the API - rather provide a default implementation, returning e.g. an empty dataframe, or None. If you wish the plugins to have some sort of inspectable capabilities, I'd rather use class/instance attributes to announce what plugins can or cannot do. Methods should always exist, with dummy implementations when nothing else is needed.
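
Something like this, for example (the attribute and class names are just an illustration):

class Exchange:
    """Base Exchange: every method exists, capabilities are announced via class attributes."""

    # what this Exchange can do - the Dealer inspects these at startup
    provides_trading = False
    provides_trading_data = False

    def buyCallback(self, order):
        pass  # dummy default: this Exchange does not trade

    def sellCallback(self, order):
        pass

    def getTradingData(self, granularity, timeframe):
        return None  # dummy default: this Exchange provides no trading data

class CoinbaseProExchange(Exchange):
    provides_trading = True
    provides_trading_data = True
    # ...real implementations override the defaults...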

Where this makes a bit more sense is simulations: I could choose to obtain a very large dataframe covering many weeks of minute-by-minute data points and expose that database to the simulation through the standardized interface of my home-rolled Exchange module.

Renderers

Renderers are any form of output. This includes the console, image generation, log files, Telegram notification. Renderers are one of the most obvious fields where separate modules make sense. What if I don’t want to see candlestick announcements in my Telegram notifications? Do I want to bother the authors to make that feature, or will I simply clone the Telegram Renderer and tweak that little detail myself, while keeping the entire original codebase intact? Do I want to bother anyone to make me a hook for generating my own images?

The Renderer may expose the following callback methods:

  • sellCallback() Triggered by the Dealer as a forward from the Trader, after a successful transaction
  • buyCallback() Triggered by the Dealer as a forward from the Trader, after a successful transaction
  • candlestickDetectedCallback() Triggered by the Dealer as a forward from the Trader
  • updatedTradingDataCallback() Triggered by the Dealer after a successful getTradingData(), for printing minutely trading numbers
  • errorCallback() Triggered by the Dealer after the detection of an erroneous transaction

Other Notes

Invocation

Since pycryptobot would be installed as a package and sit in the site-packages folder, the main app would be exposed via the entry_points feature of setup.py. A command line call would then simply be:

pycryptobot -c config.json

Language

All methods and variables need to be named in a clear and understandable way. For methods, I already introduced the *Callback() and get*() scheme above. Variable names need to follow the same principle: df is not understandable to a lay person who is just learning programming or trading and skimming through the code.

class Foo:
  bar = 79

  def setBar(self, new_bar):
    self.bar = new_bar  # nope... waste of time, space, and CPU. adds nothing extra.

class Foo:
  bar = 79
  another_variable_we_want_to_update_every_time_bar_changes = 0

  def setBar(self, new_bar):
    self.bar = new_bar
    self.another_variable_we_want_to_update_every_time_bar_changes += 1  # oh yes, *this* is why getters/setters exist, to do something besides just updating the variable

class Foo:
  bar = 79

Foo().bar = 80  # absolutely fine in Python lands, no need for setBar()

Unless something special is needed, get/set methods are just adding noise with no value.

Error Handling

All functional methods (not callback methods) need to return a (success, message) tuple for proper error handling by the Dealer.

For example, I recently set up my Binance bot, but hadn’t traded yet and didn’t know about the 5,000€ trading limit that applies until my home address has been verified. My Telegram bot received the successful BUY signal message anyway, while only the console showed the error. Had I not been paying attention, the error would have been lost on me. I can see how something like this slips through in the current iterative approach, as it’s just overwhelming to maintain.

We've been using something similar, yet a bit more strict: https://github.com/gluetool/gluetool/blob/master/gluetool/result.py

from gluetool.result import Result, Ok, Error  # the types from the linked result.py

def foo(something_wrong=False) -> Result[int, str]:
  if something_wrong:
    # oops, there's something wrong, so, instead of a value:
    return Error('the datacenter is on fire')
  return Ok(79)

r = foo()

if r.is_ok:
  ...

if r.is_error:
  ...

r.unwrap()  # gives 79
r.unwrap_error()  # gives 'the datacenter is on fire' (when r is an Error)

In general, I was never very fond of exceptions; having a more structured return value to express the error states would be nice.

Atomizing the current architecture alone will greatly reduce the risk of programming errors by breaking a large codebase down into easily recognizable chunks.

Configuration

I’m also in favour of rewriting the configuration, beyond the restructuring into sections for the separate modules that will be necessary anyway. First, only positive option names should be used, avoiding double-negative situations like disablebuynearhigh=1, which is too difficult to understand.

+1

Also, I propose moving to true/false for boolean options, leaving integer settings where they make sense. For example, the verbose=1 setting is confusing: many command line tools implement varying degrees of verbosity through integer levels, whereas in pycryptobot a boolean setting is intended.

+1

As far as configuration on the command line goes, I propose handing over a separate JSON string as the configuration for each individual module, so something like this:

pycryptobot -e coinbasepro '{"option": true}' -t macd '{"option": true}' -r console -r telegram '{"key": "a0cd"}'

A config file would be better suited for this. I understand the need to override the config file values from time to time, but I'm not sure JSON is a good choice for a command line. Maybe something like -e coinbasepro --coinbasepro-some-option=true --coinbasepro-another-option=79? Plugins can easily have a method, called by the Dealer at startup, to populate a given argparse.ArgumentParser instance with the plugin's own options.
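
Roughly like this, for instance - the hook name and the options themselves are placeholders:

import argparse

class CoinbaseProExchange:
    @staticmethod
    def add_options(parser):
        """Called by the Dealer at startup so the plugin can register its own options."""
        group = parser.add_argument_group("coinbasepro")
        group.add_argument("--coinbasepro-some-option", action="store_true")
        group.add_argument("--coinbasepro-another-option", type=int, default=79)

parser = argparse.ArgumentParser(prog="pycryptobot")
parser.add_argument("-e", "--exchange", action="append", default=[])
CoinbaseProExchange.add_options(parser)
args = parser.parse_args(["-e", "coinbasepro", "--coinbasepro-another-option", "80"])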

Final Notes

My proposal probably isn’t complete, as I haven’t actually read all the code and understood all the functionality. Most importantly, I haven’t looked at the trading data yet.

My idea is that, after we’ve finalized the API together, I would contribute:

  • A functional Dealer module, implementing the configuration and module loading, the tick, and the actual functional Dealer methods, forwarding calls between the modules including error handling.
  • Dummy classes for the other three categories, functional only as far as config loading goes.
  • A functional installable Python package, with a functional example of a home-cooked module not sitting in sys.path (functional here means on package-level).

But I would leave it to you to rewrite your code into the individual modules.

I have time to work on this starting July.

API

We would have to create a clear API definition, and I would like to be involved in that to ensure service-agnostic data definitions. I propose sending only JSON-style data between methods, or rather a dictionary, bundling all parameters into just one variable.

An API version could be shipped in the dictionary, so that in the rare case of an API change, all modules can react to it accordingly to implement backwards compatibility. All classes would expose their supported API versions, and the Dealer would check for module compatibility on startup and otherwise raise an error. Maintaining API versions adds a bit of overhead, but it’s a necessary price to pay for the much cleaner design.

Also, using the dictionary means that one can simply hand over an additional variable for testing, or even permanently, without breaking the API, as long as the receiving module checks for the availability of that key in the dictionary rather than blindly relying on it.

-1 here. Please, don't use dictionaries. It's hard to enforce correct key names, and it's hard to enforce correct types. Use built-in types when possible, data classes or tuples to build more complex types, keyword-only arguments to be specific about what the API accepts, and *args and **kwargs to ignore optional future arguments the plugin isn't yet willing to accept.
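
A small example of what that buys you (the signature is illustrative only):

class Exchange:
    def buyCallback(self, *, market, size, price=None, **future_kwargs):
        """Keyword-only arguments make each call explicit; **future_kwargs absorbs
        arguments added to the API later, so older plugins don't break."""
        ...

# Dealer side, hypothetically:
# exchange.buyCallback(market="BTC-GBP", size=0.01, reason="MACD crossover")
# "reason" simply lands in future_kwargs until this Exchange starts using it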

Final comments:

Disclaimer: I make my living partially as a QE for Linux toolchain, partially as a developer of CI systems. Both involve Python in a nontrivial manner, especially when it comes to implementing services that don't exist elsewhere (e.g. https://gitlab.com/testing-farm/artemis). So, I'm not trying to bikeshed or shoot down your ideas just for fun, merely sharing things I learned along the way. Sorry :/

How do you propose 3.0 is developed? Should we create a new 3.0 branch? I am sure other contributors as well as myself would be interested to help with this once a plan is finalised.

Yes, a dedicated branch would be the best way to proceed. I can help for sure, although it's just a hobby for me.

yanone commented 3 years ago

So we’re generally on the same page. That’s great.

@happz, I agree with most of your remarks.

IIUIC, the idea is to allow more than one trader, which is the piece examining the market and deciding whether to buy/sell/wait. What happens when these multiple traders don't agree on the signal, each of them yielding a different decision?

It makes no sense to couple several active traders together, as you correctly pointed out. The multiple traders were intended for a situation where an active trading algorithm is supplemented by a passive input method, such as the physical buy/sell buttons I intend to create for my Raspberry Pi art installation.

For the sake of a clean implementation, we could split trading classes into active and passive ones, allowing only one active trader per bot. However, this still won't prevent a user from hooking up several passive traders which are all listening to outside signals, such as those coming in from TradingView, and which are passive only as far as pycryptobot is concerned. Then you end up with the same dilemma. Moral of the story: the user still needs to control their setup correctly, that's just how it is. Good documentation will do the trick.

I also see how my Exchange design could be split into Exchange and Backtesting, as they actually have different interfaces, eliminating the need for partial implementations of the interface.

if you guys start something like this, then: freeze the dependencies the project depends on. You need an absolutely stable environment; every developer involved must use the very same versions of the libraries. For example, when there's a newer binance package, don't let a random user install it and report "bot's broken" - no, instead, install it in your local development environment, test the bot, then submit a pull request bumping the version. That way, everyone and their mother is using the very same tools, libraries, and environment. No surprises, no libraries just updating under your hands.

How does one enforce package versions? Would you check them in the main app on startup? And how, through pip?

So, I propose that we start to collaborate on a document in a stage even prior to an API definition, with a wishlist, basically: What are our objectives for the 3.0 bot?

I'm proposing this because I had even more ideas in the meantime: I may want to help some of my friends to a bit of extra cash, too. Rather than setting up several bots, I would only set up one instance and connect several exchange accounts, each with its own credentials and balance, each being entirely the legal responsibility of its respective owner (especially as far as taxes are concerned), but all triggered by the same signal.

This idea came after the discovery of people advertising their algos on TradingView. I actually may want to invest in renting an algo there, web-hooking my way into my bot instance, then trading several accounts at once. This kind of calls for a Google App Engine setup where the bot is loaded directly as part of the web server, which in turn requires the central bot app to be loadable as a class, with the command line interface being merely another interface to the same class.

So I propose you create the branch, add me to the contributors, and we’ll start with a simple wishlist document, slowly expanding it into an API definition, before anything gets implemented.

happz commented 3 years ago

if you guys start something like this, then: freeze the dependencies the project depends on. You need an absolutely stable environment; every developer involved must use the very same versions of the libraries. For example, when there's a newer binance package, don't let a random user install it and report "bot's broken" - no, instead, install it in your local development environment, test the bot, then submit a pull request bumping the version. That way, everyone and their mother is using the very same tools, libraries, and environment. No surprises, no libraries just updating under your hands.

How does one enforce package versions? Would you check them in the main app on startup? And how, through pip?

This is usually left for packaging tools, not performed by the application itself. Trust the packaging, keep the requirements clean and precise.

With setup.py, or pip and its requirements.txt, you can request binance==1.2.3; for example, pip install binance==1.2.3 will install this particular version. However, this does not cover the requirements of the binance package itself, which can still change from one user to another.

pip and setup.py can help, but as seen above, there will be gaps and surprises - nobody hates surprises more than developers trying to debug their own app... It took some time, but here come more advanced tools - sort of "next-generation" - like Poetry:

$ poetry lock

This will create a "lock" file - a file describing all packages needed and their versions. Poetry will inspect all available versions of each package I mentioned in pyproject.toml, and their dependencies, too, and the dependencies of the dependencies and so on, and will write them all down. After this point, there is nothing left for guessing - everyone can have the very same environment. Both pyproject.toml and poetry.lock are committed to the repository, BTW - this makes them available to every user of pycryptobot. There's nothing secret in them anyway.

$ poetry install

This will create the environment for your project - everyone will have the same packages, same versions, everything. You can run this command on your laptop for development, or inside a Docker context while building an image - everywhere, you get the same environment.

To run the bot:

$ poetry run pycryptobot ...

There's no messing with system packages, no random pip install commands, everything is under control and rock solid.

For upgrades - if you think it's time to use a newer version of one or more packages pycryptobot uses, e.g. a new binance becomes available with a couple of bugfixes, then:
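
For example, roughly (the version number here is made up):

$ poetry add binance@^1.2.4   # bump the constraint in pyproject.toml
$ poetry lock                 # re-resolve and rewrite poetry.lock
$ poetry install              # test the bot locally against the new version

Then commit the updated pyproject.toml and poetry.lock and open a pull request.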

From now on, new users and those who update their pycryptobot installation will use the new binance version - not a random one, but the one you picked and tested.

There's also poetry publish, which pushes the package to PyPI. And it's easy to set up tox and poetry to play nicely together, so the very same environment is used for tests as well, e.g. for pytest and mypy.

yanone commented 3 years ago

Hi all, I have to withdraw my offer to help on this due to personal circumstances. Moving forward, I will have less time and concentration available than expected. Please feel free to take inspiration from my proposal for a future rewrite.

One thing I wanted to add: while safety features such as stop loss could be implemented in the Trader modules, I would propose implementing them only once, in the central Dealer module, to avoid mistakes in poorly programmed Trader modules once the ecosystem allows combining different modules, and for consistency. This leaves the Trader modules with the single task of sending Buy and Sell signals, which adds to the clean separation of responsibilities. Maybe they should actually be called Signal modules instead.
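
As a rough sketch of what I mean - the threshold, config key, and method names are invented for illustration:

class Dealer:
    def __init__(self, config):
        # central safety settings, applied regardless of which Trader is plugged in
        self.stop_loss_pct = config.get("stop_loss_pct", 10.0)
        self.last_buy_price = None

    def checkStopLoss(self, current_price):
        """Force a sell if the price has fallen more than stop_loss_pct below the last buy."""
        if self.last_buy_price is None:
            return False
        drop = (self.last_buy_price - current_price) / self.last_buy_price * 100
        return drop >= self.stop_loss_pct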