memgraph / helm-charts

Helm charts for deploying Memgraph, an open-source in-memory graph database.
https://memgraph.github.io/helm-charts/
Apache License 2.0

[Bug]: How to get the UI deployed #9

Closed · jseiser closed this issue 1 year ago

jseiser commented 1 year ago

Contact Details

justinseiser@gmail.com

What happened?

We are currently using Memgraph in a project. Locally, the developers use the memgraph-platform image, and we really want to be able to demo the UI in this project.

Is it as simple as overriding the default image to memgraph-platform, configuring the service to expose a UI port, and creating an ingress?

Is it possible to deploy the UI separately from the DB itself?

Chart type

Standalone

Chart version

0.1.0

Environment

Amazon Web Services

Relevant log output

No response

antejavor commented 1 year ago

Hi @jseiser, Memgraph Lab is currently closed-source.

Why do you need Memgraph Lab inside the Kubernetes cluster?

Memgraph Lab was designed to be used as a standalone application from which you can connect to different Memgraph instances from your local machine: Memgraph Lab (https://memgraph.com/lab). This means each dev can connect to the same Memgraph instance, but from a personal PC/machine.

But to answer your question:

Is it possible to deploy the UI separately from the DB itself?

No, since it is closed-source at the moment, but this will probably change in the future. There is a hacky way to achieve it by fiddling with Docker images, but it is not a straightforward process.

Is it as simple as overriding the default image to memgraph-platform, configuring the service to expose a UI port, and creating an ingress?

Almost. The issue is that the Memgraph console runs in the terminal by default (and fails because it cannot attach to a tty), which crashes the container in Kubernetes. We are in the process of changing this in the upcoming weeks. Once that is resolved, deploying MemgraphDB and Memgraph Platform should be almost identical.
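
For reference, a minimal sketch of what that override might look like once the tty issue is fixed. The chart name and values keys (`image.repository`, `image.tag`) are assumptions here, not the chart's confirmed API for 0.1.0, so check the chart's values.yaml for the actual names:

```shell
helm repo add memgraph https://memgraph.github.io/helm-charts
helm repo update

# Hypothetical override -- with chart 0.1.0 this will still crash until the
# console/tty issue described above is fixed.
helm install my-memgraph memgraph/memgraph \
  --set image.repository=memgraph/memgraph-platform \
  --set image.tag=latest

# memgraph-platform also serves the Lab UI on port 3000, so the Service (and
# any Ingress) would need that port exposed in addition to Bolt on 7687.
```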

Deploying Memgraph Platform and Memgraph MAGE in Kubernetes are the next two improvements we are planning for this repo.

jseiser commented 1 year ago

Ante,

The real desire is to be able to demonstrate all the "cool things" our developers are doing with Memgraph to the various stakeholders of the RFC we are working on.

Being able to port-forward and then connect the UI remotely will work as well.
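
For the demo, a rough sketch of that port-forward approach, assuming the chart created a Service named `memgraph` (check `kubectl get svc` for the actual name in your release):

```shell
# Forward Memgraph's Bolt port from the cluster to the local machine.
kubectl port-forward svc/memgraph 7687:7687

# A locally installed Memgraph Lab can then connect to localhost:7687.
```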

antejavor commented 1 year ago

Cool, then installing Memgraph Lab locally and connecting to a remote instance is the way to go.
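
For completeness: with a port-forward like the one sketched earlier active, a locally installed Memgraph Lab connects using host localhost and port 7687 (the default Bolt port).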