Closed · KatunaNorbert closed this issue 9 months ago
There's a good chance this is very out of date, because of Berkay's recent changes for parallel agent management.
TBH I don't mind if all of the existing PM2 stuff is deprecated in favor of Berkay's work. (Which actually may have PM2? I forget)
module.exports = {
  apps: [{
    "name": "pdr-predictoor1-1-BTC-USDT-5m-binance",
    "script": "python ./pdr predictoor 1 ./ppss.yaml sapphire-testnet --predictoor_ss.predict_feed=\"binance BTC/USDT c 5m\" --predictoor_ss.aimodel_ss.input_feeds='[\"binance BTC/USDT c 5m\"]' --lake_ss.feeds='[\"binance BTC/USDT 5m\"]' --predictoor_ss.bot_only.stake_amount=15 --predictoor_ss.bot_only.s_until_epoch_end=20",
    "env": {
      "PRIVATE_KEY": "0x..."
    },
    "watch": false,
    "exec_mode": "fork",
    "restart_delay": 5000,
    "autorestart": true
  }]
}
This is what the new config looks like.
Where is that? It doesn't look like yaml.
This is probably the new content of the pm2 config files following possible fix 1, although I see @trizin went for fix 2 by removing the pm2 files.
Yep, sorry for not giving context, this is the new pm2 config file generated by the deployer module. It's a bit more complicated than the previous version with all the arguments etc. Just sharing this to support (2)
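For reference, a config like the one above is driven with a handful of standard pm2 commands. A minimal sketch, assuming the generated file is saved as ecosystem.config.js (the filename here is illustrative):
# start every app defined in the config (here: one predictoor bot)
pm2 start ecosystem.config.js
# list processes and tail the bot's logs
pm2 ls
pm2 logs pdr-predictoor1-1-BTC-USDT-5m-binance
# stop and remove the process when done
pm2 delete pdr-predictoor1-1-BTC-USDT-5m-binance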
Do we need pm2 at all?
Why not just have one flow that we support super-well. Eg k8s. Super-confusing to have >1. And extra work, like we're seeing right now.
As far as I can see some users prefer it, and it's simpler than k8s since you don't need to configure cloud credentials. It doesn't add a lot of extra work; I've already added support for pm2, just a few commands. The real pain was supporting GCP, AWS and Azure :D
Here's an example of deploying with pm2: https://github.com/oceanprotocol/pdr-backend/blob/70553d5ca08fb54866b9ecfd0dae441b32671bf1/READMEs/agent-deployer.md#push
simpler than k8s since you don't need to configure cloud credentials
Would people ever seriously run 20 agents on their local machine? That's too much compute, no? And what if you want to shut off your machine.
pain was supporting GCP, AWS and Azure
I wish we could find a way to chop this down. It feels like too much scope to maintain for this level of maturity of the SW. Who uses each of {GCP, AWS, Azure}?
I don't think there's a way to chop it down; all of them are widely used, and there isn't a single one that stands out as the most popular.
Here are the cloud provider interfaces/classes
Would people ever seriously run 20 agents on their local machine? That's too much compute, no? And what if you want to shut off your machine.
Definitely not the best way to run 20 agents; perhaps it's good for running a few agents on an instance, or testing locally? Idk. pm2 is not a new tool, and the template and commands aren't likely to change, so it won't require further maintenance, I think.
On whether we need pm2, let's recap...
Berkay:
module.exports = {
apps : [{
...
<15 more lines>
...
}
Berkay: this is the new pm2 config file generated by the deployer module. It's a bit more complicated than the previous version with all the arguments etc. Just sharing this to support (2)
Trent: Do we need pm2 at all? Why not just have one flow that we support super-well.
Berkay: As far as I can see some users prefer it, and it's simpler than k8s since you don't need to configure cloud credentials. It doesn't add a lot of extra work, I've already added support for pm2, just a few commands.
Trent: Would people ever seriously run 20 agents on their local machine? That's too much compute, no? And what if you want to shut off your machine.
Berkay: Definitely not the best way to run 20 agents, perhaps it's good for running a few agents on an instance, or testing locally? Idk. pm2 is not a new tool, the template and commands aren't likely to change, so it won't require further maintenance, I think.
OK, that's the recap. Here's my next comment.
Trent: it's clear to me now that the benefit of this is low, and it adds extra complexity.
1-3 remote bots --> direct on remote machine (cloud)
>>1 remote bots --> k8s on remote machine (cloud)
Make sense? If yes, let's deprecate all signs of pm2.
It'd be some hassle to deploy these without pm2 (or docker-compose). Remote bots require a process manager.
Trent: 1-3 remote bots --> direct on remote machine (cloud)
Berkay: It'd be some hassle to deploy these without pm2 (or docker-compose). Remote bots require a process manager.
Trent: no, all you need to do is ssh into the remote machine and run it. Like the vps.md README does. Key lines:
ssh -i ~/Desktop/myKey.pem azureuser@74.234.16.165
pdr predictoor 3 my_ppss.yaml development
We need to be super vigilant to keep complexity down.
Trent: no, all you need to do is ssh into the remote machine and run it. Like the vps.md README does. Key lines:
ssh -i ~/Desktop/myKey.pem azureuser@74.234.16.165
pdr predictoor 3 my_ppss.yaml development
We need to be super vigilant to keep complexity down.
This runs only one Predictoor; to run multiple, you either need to create multiple yaml files or pass down all these flags:
--predictoor_ss.predict_feed="binance BTC/USDT c 5m" --predictoor_ss.aimodel_ss.input_feeds='["binance BTC/USDT c 5m"]' --lake_ss.feeds='["binance BTC/USDT 5m"]' --predictoor_ss.bot_only.stake_amount=15 --predictoor_ss.bot_only.s_until_epoch_end=20
And the predictoor quits as soon as the ssh connection is lost.
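To make that concrete, here is a rough sketch of running two bots from a single ppss.yaml via CLI overrides; the first feed is taken from the generated config above, the second (ETH/USDT) is purely illustrative, and in practice the remaining overrides (input_feeds, lake feeds, stake amount) would be appended the same way:
# one terminal / ssh session per bot, each with its own feed override
pdr predictoor 1 ./ppss.yaml sapphire-testnet --predictoor_ss.predict_feed="binance BTC/USDT c 5m"
pdr predictoor 1 ./ppss.yaml sapphire-testnet --predictoor_ss.predict_feed="binance ETH/USDT c 5m"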
wrt complexity:
I agree it adds some complexity. It isn't that much at the interface level; the commands for all deployment methods are the same:
pdr deployer generate [pm2,docker-compose,k8s]
pdr deployer deploy
pdr deployer logs
pdr deployer destroy
From a code perspective, it adds a template file and a few pm2 commands like "pm2 start", "pm2 delete", etc.
users will have questions as they try to use it
I agree with this one. Perhaps we can keep the support while not providing any examples or incentivizing the usage? Or print a "this method is not recommended" warning, etc.?
I'm fine with removing it and not trying to insist. Just raising some concerns.
This runs only one Predictoor, to run multiple, you either need to create multiple yaml files or pass down all these flags ..
Or just open 3 terminals to ssh into, one for each predictoor.
the predictoor quits as soon as the ssh connection is lost.
No, you just run it in the background with "&" at the end. This is exactly how I've been running a remote barge for the last two months. Works like a charm.
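For completeness, a sketch of that pattern: whether a plain "&" job survives a dropped ssh connection depends on the shell's SIGHUP handling, so nohup (or a tmux/screen session) is the more defensive version; the log filename is illustrative:
# keep the bot running after the ssh session ends
nohup pdr predictoor 3 my_ppss.yaml development > predictoor.log 2>&1 &
# check on it later from a new ssh session
tail -f predictoor.log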
I'm fine with removing it and not trying to insist.
Let's just remove it.
Thank you for the iterations, @trizin.
@trentmc what about docker-compose?
I guess we discussed: let's turn it off too.
Deep not broad. Focus is a flow to "make $".
Background / Motivation
The predictoor README explains how to use pm2. But this crashes.
Why: the main.py file was removed (due to the recent CLI update), but the pm2 config was not updated, so the pm2 command crashes.
Possible fixes
1. Update the pm2 config files to work with the new CLI.
2. Remove the pm2 config files.
We will do (2).
TODOs