Open jbutkus opened 10 years ago
This is a very reasonable request. Communicating the same kind of status information that is available in the dashboard is a requirement to meet full "feature parity" for the command-line.
We are currently midway through refactoring how workflow operations are tracked. That will make them much easier to expose through the API, which in turn should make it easy to poll or wait for the completion of any operation in the same way that we currently do for new site spinups, etc.
I expect we should be able to make progress on this in November. Having a list of your highest-value long-running jobs is very helpful.
@jbutkus +1 to what @joshkoenig said, especially about the most common commands you use. And I can definitely confirm you'll see an initial release of Terminus 2.0 in November that will include the commands above. Cheers,
Would it be possible to get some status reporting from Terminus for long-running commands?
Right now our approach is as follows:
- pantheon-site-create: keep cURL'ing the page until we see the desired response in the HTML;
- pantheon-site-service-level: we assume the change is immediate and proceed after about 60 seconds; we suspect it may actually take longer, since switching to Business or above requires a second container to be started;
- pantheon-site-deploy: our approach is the same as with pantheon-site-create;
- terminus wp db query: if we replace a significant portion of the database we check the output; but again, on Business and above this may not hold, as the cache might be cleared on only one server by that time, and it might be wise to wait a bit longer.

We think our cURL approach is sound. But we would also like to get reports from Pantheon, such as a Yes/No response to the query "has pantheon-site-create completed?".
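For context, our cURL polling approach boils down to something like the sketch below. The URL, marker string, and timings are made-up placeholders, not real Pantheon values:

```shell
#!/bin/sh
# Poll a URL until the response body contains an expected marker string,
# or give up after a deadline. This is a rough sketch of our approach,
# not an official Pantheon mechanism.

poll_until_marker() {
  url="$1"; marker="$2"; deadline_secs="$3"; interval_secs="${4:-10}"
  elapsed=0
  while [ "$elapsed" -lt "$deadline_secs" ]; do
    if curl -fsS "$url" 2>/dev/null | grep -q "$marker"; then
      return 0   # marker found: we assume the operation completed
    fi
    sleep "$interval_secs"
    elapsed=$((elapsed + interval_secs))
  done
  return 1       # deadline passed without seeing the marker
}
```

A Yes/No status query from Terminus would let us replace the grep-on-HTML heuristic with a real completion check.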
This would save some trouble for both parties, I think, since right now we pre-heat the edge servers with these polling requests, and that cache has to be purged once the command truly succeeds.
Also, our timing assumptions may be wrong: we retry cURL'ing for no more than 4 minutes per command. That might be too short during peak times, in which case we abandon the process entirely instead of waiting a bit longer. Conversely, we cannot keep re-querying the page forever, because some commands may have failed, and it would then be better to restart the process from the beginning rather than keep waiting.
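One way we could soften the fixed 4-minute window is exponential backoff with a bounded number of attempts: slow operations get more headroom during peak times, but we still give up eventually and restart from scratch. A rough sketch, where the probe command, attempt count, and delays are all placeholders we chose for illustration:

```shell
#!/bin/sh
# Retry an arbitrary probe command with exponential backoff.
# $1 = max attempts, $2 = initial delay in seconds, rest = probe command.
# Returns 0 as soon as the probe succeeds, 1 if all attempts fail.

retry_with_backoff() {
  max_attempts="$1"; delay="$2"; shift 2
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    if "$@"; then
      return 0               # probe succeeded
    fi
    sleep "$delay"
    delay=$((delay * 2))     # double the wait each round: 5s, 10s, 20s, ...
    attempt=$((attempt + 1))
  done
  return 1                   # give up; caller restarts the process
}
```

The probe here would be whatever readiness check we already use (e.g. the cURL-and-grep step), so this only changes the pacing, not the check itself.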
Does this sound reasonable? I can provide some more details on how we use these scripts if it would help anyhow.