Azure / draft-classic

A tool for developers to create cloud-native applications on Kubernetes.
https://draft.sh
MIT License

Using multiple machines for the same application #618

Open radu-matei opened 6 years ago

radu-matei commented 6 years ago

Both draft connect and draft logs try to get the latest build ID locally, from the logs directory. This makes both commands much faster, but also makes the assumption that the same machine was used to draft up and to get the logs / connect.

If two machines were used, on the second machine the user has no way of running draft logs (since the logs are only stored locally), nor draft connect, since it requires the latest build ID.
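For reference, the local lookup both commands rely on amounts to something like the sketch below (illustrative Python, not draft's actual Go code; the assumption that builds appear as entries named after their build ID under the local logs directory is hypothetical):

```python
import os

def latest_build_id(logs_dir):
    """Return the newest build ID found under the local logs directory.

    Hypothetical layout: each build writes an entry named after its
    build ID. On a machine that never ran 'draft up' for this app,
    the directory is missing or empty and we return None -- which is
    exactly the 'second machine' problem described above.
    """
    try:
        entries = os.listdir(logs_dir)
    except FileNotFoundError:
        return None
    if not entries:
        return None
    # Pick the entry with the newest modification time.
    return max(entries, key=lambda e: os.path.getmtime(os.path.join(logs_dir, e)))
```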

For draft logs there is no clear way of solving this unless we store the logs in the cluster as well.

For draft connect:

The question: is the scenario above common enough to justify the increase in connection time?

radu-matei commented 6 years ago

For example I just switched between Windows, WSL and macOS with the same cluster, but I don't think that's a sane thing that people do very often.

bacongobbler commented 6 years ago

semi-related discussion around this concern: https://github.com/Azure/draft/issues/585

squillace commented 6 years ago

@gabrtv I know you were interested in this area.

radu-matei commented 6 years ago

Do we have any more thoughts here?

One scenario that happens with the current setup: since all logs are written to $DRAFT_HOME/logs and are not grouped by application name, if you're working on the same machine with app A and app B and you execute:

app B: $ draft up

app A: $ draft logs - here you see the build logs from app B

We can:

Thoughts?
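For illustration, grouping logs by application name would avoid the cross-app confusion above. A minimal sketch (the $DRAFT_HOME/logs/&lt;app&gt;/&lt;build-id&gt; layout is a hypothetical proposal, not current behavior):

```python
import os

def log_path(draft_home, app_name, build_id):
    """Compute a per-app log path, e.g. $DRAFT_HOME/logs/<app>/<build-id>.

    Hypothetical layout: namespacing by app name keeps 'draft logs' for
    app A from picking up the latest build of app B on the same machine.
    """
    return os.path.join(draft_home, "logs", app_name, build_id)
```

With this layout, "latest build" lookups would scan only the subdirectory for the app in the current working directory.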

bacongobbler commented 6 years ago

I think we've made some good moves with regard to draft logs via enhancements like #664, so that part has been accomplished if I'm reading this right. I think at this point the last thing we need to tackle is fetching the latest build ID from the storage backend, as mentioned in the original comment, right?
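Fetching the latest build ID from the storage backend could look roughly like this (an illustrative sketch; the record shape and field names are assumptions, not draft's actual types returned by the ConfigMap-backed store):

```python
def latest_build_from_storage(builds):
    """Pick the newest build from records fetched from the storage backend.

    'builds' is a hypothetical list of {"id": ..., "created_at": datetime}
    records standing in for whatever the cluster-side store returns.
    Returns None when the store has no builds yet.
    """
    if not builds:
        return None
    return max(builds, key=lambda b: b["created_at"])["id"]
```

This trades one extra round trip to the cluster for correctness across machines, which is the cost/benefit question raised in the original comment.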

radu-matei commented 6 years ago

The workflow could be:

Since our storage is based on ConfigMaps (whose size limit is, I think, 1 MB), how would we handle log files larger than 1 MB?
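One way to stay under the limit would be splitting the log across several ConfigMaps and reassembling the parts in order. A sketch (the part-key naming and the exact limit handling are assumptions; a real ConfigMap budget would also have to account for keys and metadata, so the practical chunk size should be smaller):

```python
ONE_MB = 1024 * 1024  # approximate ConfigMap size limit mentioned above

def chunk_log(data, limit=ONE_MB):
    """Split raw log bytes into chunks that each fit in one ConfigMap.

    Chunk i could be stored under a hypothetical key like
    '<build-id>-part-<i>' and the full log recovered by concatenating
    the parts in index order.
    """
    return [data[i:i + limit] for i in range(0, len(data), limit)]
```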