pier-oliviert opened this issue 7 years ago
As a follow-up, I rebuilt minikube and all my images using the xhyve driver, and it now takes 6s to get to DOMContentLoaded, which is much, much better.
From the look of it, it might be a VM driver issue (VirtualBox). The unfortunate thing about all this is that I have a permissions issue with xhyve that prevents me from using it for development (xhyve has permission problems writing files back to the host through volumes).
So I'm very much interested in finding out the issue with VirtualBox's driver.
Also, I started moving our assets to webpack, which concatenates all our files into one even in development, and the page load went down from 2 minutes to milliseconds. My assumption would be that there is a waterfall event that starts clogging the pipe when there are 20+ requests in a short amount of time.
My knowledge is very limited when it comes to docker machines and virtualization so my apologies for not being more helpful :(
I am currently seeing this, even with webpack. My speed is ~5 KB/s, which makes it basically unusable. If there is any info I could provide that might help, just let me know!
The problem seems to lie outside of minikube. Docker uses osxfs, a custom filesystem that tries to bring native container capabilities to OS X. It works fine when communication is between containers, but things fall apart when trying to communicate with the host.
From what I read, it's due to syncing the filesystem between the two. One way to fix it is to use an NFS server to serve files from host to guest. Or rsync.
We don't actually use osxfs in minikube. The host folder mount is done with a 9p filesystem; this is how both the xhyve driver and the minikube mount command work. VirtualBox uses vboxsf, its own proprietary way to share files between the guest and the host.
If you find performance issues with vboxsf, you could try the minikube mount command, rsync, or NFS.
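For reference, a sketch of the minikube mount route, raising the 9p transfer size, which can noticeably affect throughput. The flag values and paths here are illustrative, and the exact flags available depend on your minikube version; check minikube mount --help.

```shell
# Share a host folder into the minikube VM over 9p, with a larger
# 9p message size (bytes per packet); path is an example.
minikube mount --9p-version=9p2000.L --msize=262144 \
    "$HOME/projects/my-app:/mount-9p"
```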
Having the same issue, also using a 9p mount from minikube with VirtualBox. Ubuntu 16.04.
My guess is that this is caused by poor I/O performance on the guest side of the 9p mount. I tried measuring the time to extract an 80MB tarball containing lots of small files as a simple test. Here are my findings:
On the host machine, in the directory mounted via 9p: 0.09s
On the guest machine, outside the directory mounted via 9p: 0.140s
On the guest machine, inside the directory mounted via 9p: 22.094s (!)
Running on: Arch Linux, minikube 0.20, VirtualBox 5.1.22
It seems that the 9p mount is really unsuited for any kind of real-life, non-trivial workloads. Any kind of I/O-heavy build step run from the guest takes forever to finish (for example: webpack, npm install, composer install).
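A benchmark along those lines can be reproduced with something like the following. Paths are examples; point the final extraction at a directory inside the 9p mount on the guest (e.g. under /mount-9p) to see the slowdown.

```shell
# Build a tarball of many small files, then time its extraction.
mkdir -p /tmp/9p-bench/src
for i in $(seq 1 200); do echo "data $i" > "/tmp/9p-bench/src/file$i"; done
tar cf /tmp/9p-bench/files.tar -C /tmp/9p-bench/src .

# Baseline: extract outside any 9p mount.
mkdir -p /tmp/9p-bench/native
time tar xf /tmp/9p-bench/files.tar -C /tmp/9p-bench/native

# For comparison, repeat with -C pointing inside the 9p mount
# (e.g. /mount-9p/bench on the minikube guest).
```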
I'm running into this issue as well.
I had extremely poor performance for my PHP app too (HTTP requests taking anywhere between 1 and 2 minutes to get a response).
My code was hosted on my Mac, mounted in an xhyve virtual machine via minikube mount, and finally mounted inside my Kubernetes pod via a hostPath volume mount.
As a test, I copied my source code directly inside the pod's container instead of serving it through 9p.
I am now getting responses in 800ms.
Unfortunately, since this is for local development, I need to see my changes immediately and using a full copy of my source code instead of serving it through the network is not an option.
I'm going to set up an NFS mount and see how the performance compares to 9p. However my test turns out, in its current state 9p on minikube is definitely way too slow for any workload.
@huguesalary: I'm very curious to find out how the NFS mount performance compares to 9p.
Thanks! Walco
I just retried this week with NFS, and while things are better, it's still taking ~20 seconds to load a page due to I/O constraints.
How can I set up an NFS mount with minikube?
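One way to do it on a macOS host, sketched under the assumption that the VirtualBox host-only network uses the default 192.168.99.0/24 subnet; paths and addresses are examples to adjust for your setup.

```shell
# On the macOS host: export the project folder to the VM's subnet.
echo "/Users/me/projects -network 192.168.99.0 -mask 255.255.255.0" \
    | sudo tee -a /etc/exports
sudo nfsd restart

# Inside the minikube VM: mount the export (192.168.99.1 is the
# host's address on the host-only network).
minikube ssh -- "sudo mkdir -p /mnt/projects && \
  sudo mount -t nfs 192.168.99.1:/Users/me/projects /mnt/projects"
```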
@pothibo thanks for the write-up. so still unusable, unfortunately.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
Hi guys! You can use NFS as a PersistentVolume for a pod in k8s. It works. You need to set up NFS on your host machine and just add PV and PVC configurations to your project.
My configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-project-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.99.1
    path: "/var/projects/my-project"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-project-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 1Gi
In the deployment configuration, you just add a volume mount for the persistent volume claim.
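For illustration, the corresponding part of a Deployment might look like this; the container name, image, and mount path are hypothetical, only the claimName comes from the configuration above.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-project
  template:
    metadata:
      labels:
        app: my-project
    spec:
      containers:
        - name: app                      # hypothetical container
          image: my-project:dev          # hypothetical image
          volumeMounts:
            - name: source
              mountPath: /var/www/my-project   # hypothetical path
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: my-project-claim  # the PVC defined above
```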
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/reopen
@jpswade: you can't re-open an issue/PR unless you authored it or you are assigned to it.
It is more performant to synchronize the local and minikube filesystems using syncthing than to use shared folders. We built an open source tool that follows this approach: https://github.com/okteto/cnd
Can confirm this is still an issue and should be reopened.
Edit: This can be solved on OS X by using --vm-driver=vmwarefusion instead, but considering that is a paid product, it might not be the best way for most people. Still looking for a way to make this usable on Linux with the number of mounted directories I have.
/reopen
@pothibo: Reopening this issue.
@fejta-bot: Closing this issue.
/reopen
@du86796922: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/reopen
@pothibo: Reopened this issue.
Has anyone been able to duplicate this with local resources?
I'm concerned that this may be an issue with 9pfs mounts, and not networking.
Yes, 9pfs mounts are also slow. It's not a networking issue; it's an I/O issue, I believe.
/remove-lifecycle stale
The fix for this will involve adding support for alternatives to the 9p filesystem, such as #4324
This is still an issue and has not been forgotten. As mentioned previously, fixing this will require non-9p alternatives to be supported. One such example is CIFS: #5545
/remove-lifecycle stale
This is still an issue.
/remove-lifecycle rotten
This is still an issue.
/remove-lifecycle rotten
@fejta-bot: Closing this issue.
This is still an issue.
/remove-lifecycle rotten
/reopen
@du86796922: You can't reopen an issue/PR unless you authored it or you are a collaborator.
BUG REPORT
I have a website that I'm trying to run through minikube, and while everything works, loading a single page in my host browser takes upwards of 2 minutes. Connections between pods seem to be normal.
The problem might originate from VirtualBox, but I'm not sure. Here's the minikube config:
I did change the NIC to use the paravirtualized network but speed stayed the same.
I also tried #1353 but it didn't fix it for me. Here's a poorly representative screenshot of what's going on when I load the page and look at the network tab in Chrome:
minikube version: v0.18.0
Environment:
What you expected to happen: getting the page load under 600ms would be acceptable.
How to reproduce it (as minimally and precisely as possible):
Start minikube with VirtualBox, run a Rails server, and try to access it from the host. The page needs to have external assets to increase the number of connections going through minikube.
Anything else we need to know:
My setup might not be similar to what others do and while unlikely, it could be the cause of all my problems. Here's a gist of my Dockerfile and k8s config file
Notice how the image is "empty" and only loads the Gemfile; then, when the image gets loaded into the pod, a volume from the host is mounted on top of it. That allows me to develop on my host in the same folder as all my other projects while running everything through minikube.
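As a sketch of that setup (the names and paths here are hypothetical illustrations, not the contents of the actual gist):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rails-dev            # hypothetical pod name
spec:
  containers:
    - name: rails
      image: my-rails-image:dev   # "empty" image with just the Gemfile
      volumeMounts:
        - name: source
          mountPath: /app            # where the app expects its code
  volumes:
    - name: source
      hostPath:
        path: /hosthome/me/projects/my-app   # host folder shared into the VM
```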
Let me know if you need extra information, I'd be glad to help!