jesse-ai / jesse

An advanced crypto trading bot written in Python
https://jesse.trade
MIT License

Reattach optimization session #352

Closed domett93 closed 1 year ago

domett93 commented 2 years ago

It would be great if I could reattach to the optimization process after closing the browser tab or losing the network connection. At the moment Jesse keeps working in the background, but it's not possible to get any results or to see the current progress again. I always have to kill the processes at the OS level when this happens, because they keep using a lot of CPU and the results are lost. This would also be very helpful for long-running optimization sessions.

stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

mschultheiss83 commented 2 years ago

bump or +1

justcruzinby commented 2 years ago

Joined GitHub just to bump this as a problem too, please. I spent 3 days straight crunching through a complex strategy and was really sad to lose the results near the end.

movy commented 2 years ago

I was about to open the same issue. Jesse, while great overall, really lacks in this crucial optimisation area. Most notable issues:

These are just off the top of my head, but as a result I had to implement my own optimisation routine; see below.

movy commented 2 years ago

As a temporary (I hope) solution for the drawbacks outlined, I switched to the amazing Ray Tune library, https://docs.ray.io/en/latest/tune, and it pretty much solved all of these problems. Basic code:

import jesse.helpers as jh
from jesse import research
from jesse.research import backtest

import time

import ray
from ray import air, tune
from ray.air import session
from ray.air.checkpoint import Checkpoint
from ray.tune.schedulers import PopulationBasedTraining

ray.init()

exchange = 'Binance Spot'
symbol = 'ADA-USDT'
timeframe = '2h'

config = {
    'starting_balance': 1000,
    'fee': 0.001,
    'type': 'spot',
    'exchange': exchange,
    'warm_up_candles': 0
}

# note: buy_low_sell_high is the strategy class and must be imported or defined in this script
routes = [
    {'exchange': exchange, 'strategy': buy_low_sell_high, 'symbol': symbol, 'timeframe': timeframe}
]

# put shared objects into Ray's object store once so every trial can fetch them by reference
config_ref = ray.put(config)
routes_ref = ray.put(routes)

test_dates = [('2021-06-01', '2021-10-01'),
              ('2021-09-01', '2022-01-01'),
              ('2021-11-01', '2022-05-01'),
              ('2022-04-01', '2022-08-01')]
test_candles_rungs = []
test_candles_refs = []

start = time.process_time()
print(f"Fetching data for {symbol} from {exchange}...")
# get candles for each test date range and store references to them to be used by ray later
for date_range in test_dates:
    candles_rung = {
        jh.key(exchange, symbol): {
            'exchange': exchange,
            'symbol': symbol,
            'candles': research.get_candles(exchange, symbol, '1m', date_range[0], date_range[1]),
        }
    }
    test_candles_rungs.append(candles_rung)
    test_candles_refs.append(ray.put(candles_rung))
print("Import done", time.process_time() - start, 'seconds')

# Ray Tune calls this once per trial with a sampled set of hyperparameters; each date
# range is backtested in turn and its metrics reported as one training iteration
def trainable(params, checkpoint_dir=None):
    for candles_ref in test_candles_refs:
        result = backtest(ray.get(config_ref),  # starting balance, fees etc.
                          ray.get(routes_ref),  # exchange, strategy, symbol, timeframe
                          [],  # alternative routes, empty for now
                          ray.get(candles_ref),  # candles within backtest date range
                          hyperparameters=params)
        checkpoint_data = result['metrics']
        checkpoint = Checkpoint.from_dict(checkpoint_data)
        session.report(result['metrics'], checkpoint=checkpoint)

# search space; these keys presumably need to match the hyperparameters declared by the strategy class
hyperparams = {
    'red_mult': tune.quniform(0.3, 4.4, 0.05),
    'tp_perc': tune.quniform(0.4, 2.5, 0.1),
    'bars': tune.randint(1, 200),
    'red_limit_on_val': tune.grid_search([0, 1]), 
}

# PBT periodically clones the checkpoint and config of well-performing trials into
# underperforming ones and perturbs the hyperparameters, maximising smart_sharpe here
pbt_scheduler = PopulationBasedTraining(
    time_attr='training_iteration',
    metric='smart_sharpe',
    mode='max',
    perturbation_interval=20,
    require_attrs=False,
    hyperparam_mutations=hyperparams)

tune_config = tune.TuneConfig(num_samples=1000, scheduler=pbt_scheduler)
run_config = air.RunConfig(
    name=f'{symbol}-{timeframe}-{exchange}',
    local_dir="/mnt/air/",
    verbose=2, 
    sync_config=tune.SyncConfig(
        syncer=None  
    ),
    checkpoint_config=air.CheckpointConfig(
        num_to_keep=40,
    )
)
tune_hp = tune.Tuner(trainable, param_space=hyperparams, tune_config=tune_config, run_config=run_config)

results = tune_hp.fit()
best_result = results.get_best_result(metric="smart_sharpe", mode="max")  # Get best result object
best_config = best_result.config  # Get best trial's hyperparameters
df = results.get_dataframe(filter_metric="smart_sharpe", filter_mode="max")  # Get a dataframe of results for a specific score or mode
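
Since the original request here is about reattaching a session: a Tune run that gets interrupted (closed terminal, crashed machine, killed process) can be reattached later from its experiment directory under local_dir. A rough sketch; the exact Tuner.restore() signature differs between Ray versions, and newer releases also expect the trainable to be passed back in:

# reattach to a previously started run instead of starting tune_hp.fit() from scratch
restored = tune.Tuner.restore(f'/mnt/air/{symbol}-{timeframe}-{exchange}', trainable=trainable)
results = restored.fit()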

saleh-mir commented 2 years ago

@movy wow, that's awesome. Could you please host this on a separate repository (or a GitHub Gist if that's easier) and submit it to https://github.com/ysdede/awesome-jesse?

btagliani commented 2 years ago

@saleh-mir I've been trying to use that code, and I'm getting this error:

(trainable pid=77273) 2022-11-07 17:47:39,706   ERROR function_trainable.py:298 -- Runner Thread raised error.
(trainable pid=77273) Traceback (most recent call last):
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 289, in run
(trainable pid=77273)     self._entrypoint()
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 362, in entrypoint
(trainable pid=77273)     return self._trainable_func(
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/ray/util/tracing/tracing_helper.py", line 466, in _resume_span
(trainable pid=77273)     return method(self, *_args, **_kwargs)
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/ray/tune/trainable/function_trainable.py", line 684, in _trainable_func
(trainable pid=77273)     output = fn()
(trainable pid=77273)   File "/Users/bruno/Documents/Code/conda/my-bot/optimize-ray.py", line 63, in trainable
(trainable pid=77273)     result = backtest(ray.get(config_ref),  # starting balance, fees etc.
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/jesse/research/backtest.py", line 50, in backtest
(trainable pid=77273)     return _isolated_backtest(
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/jesse/research/backtest.py", line 136, in _isolated_backtest
(trainable pid=77273)     backtest_result = simulator(
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/jesse/modes/backtest_mode.py", line 302, in simulator
(trainable pid=77273)     short_candle = _get_fixed_jumped_candle(previous_short_candle, short_candle)
(trainable pid=77273)   File "/Users/bruno/miniforge3/envs/jesse_env/lib/python3.9/site-packages/jesse/modes/backtest_mode.py", line 403, in _get_fixed_jumped_candle
(trainable pid=77273)     candle[1] = previous_candle[2]
(trainable pid=77273) ValueError: assignment destination is read-only

Not sure if that code (_get_fixed_jumped_candle) should still be run or not.
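
For what it's worth, the error looks like a side effect of Ray's zero-copy deserialization: numpy arrays fetched with ray.get() come back as read-only views into the object store, and jesse's simulator mutates candles in place inside _get_fixed_jumped_candle. A possible (untested) workaround is to copy the candle arrays into writable memory before handing them to backtest, along these lines:

import numpy as np

def writable_candles(candles: dict) -> dict:
    # ray.get() returns read-only numpy arrays; jesse writes to them, so copy first
    return {
        key: {**value, 'candles': np.copy(value['candles'])}
        for key, value in candles.items()
    }

# inside trainable(), pass writable_candles(ray.get(candles_ref)) to backtest()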

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.