Closed: ktaggart closed this issue 5 years ago.
Hi @ktaggart!
Are you using Docker with Scanner? If so, have you run docker pull scannerresearch/scanner:cpu-latest
to make sure the Docker image is the most recent? I just pulled down the latest image and I see the following in the examples/apps
directory:
root@b3bb92f78ef0:/opt/scanner# ls -lah examples/apps/
total 40K
drwxrwxr-x 10 root root 4.0K Mar 29 18:17 .
drwxrwxr-x 5 root root 4.0K Mar 29 18:17 ..
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 aws_kubernetes
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 detectron
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 face_detection
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 gcp_kubernetes
drwxrwxr-x 3 root root 4.0K Mar 29 18:17 object_detection_tensorflow
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 open-reid-feature-extraction
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 pose_detection
drwxrwxr-x 2 root root 4.0K Mar 29 18:17 walkthroughs
If you're using the newer scannerresearch/scannertools:cpu-latest
image, it might be that it was not updated to the newest version. Let me look into that now.
It was an issue in the build infrastructure that's fixed now. Give it an hour for the changes to propagate, and then the guide + Docker images should be up to date.
Can confirm this has been updated. @ktaggart: let us know if it's fixed on your end.
I pulled the latest image and was able to run the walkthrough example, so that is fixed.
I will try the jupyter walkthrough tomorrow morning.
Q: will there be a porting guide from the old API to the new one, to bring any 3rd-party apps out there up to date?
Great. Thanks for filing the issue.
We won't be writing a porting guide due to time constraints, but I can summarize the main changes:

- Sources and sinks have been replaced by StoredStream objects, which use the old sources/sinks API behind the scenes.
- The Job construct has been removed. Instead, per-job (now called "stream config") arguments are passed directly to the kernel. See "Operations - Declaring Parameters" in the guide.

Thanks for the info on the main changes.
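To make the second change concrete, here is a minimal, hypothetical sketch of the pattern: per-stream ("stream config") arguments go straight to the kernel instead of being bundled into a Job object. None of these names (ResizeKernel, run_streams) are the real scannerpy API; they are invented purely to illustrate the shape of the change.

```python
# Hypothetical sketch: stream-config arguments are passed directly to the
# kernel, one value per stream, rather than through a Job object.
# All names here are invented for illustration; this is not the scannerpy API.

class ResizeKernel:
    def __init__(self, width, height):
        # width/height arrive as this stream's config arguments
        self.width = width
        self.height = height

    def execute(self, frame):
        # stand-in for real frame processing
        return (self.width, self.height, frame)

def run_streams(kernel_cls, streams, **stream_args):
    """Run one kernel instance per stream, passing that stream's config."""
    results = []
    for i, stream in enumerate(streams):
        # pick out the i-th value of each per-stream argument
        args = {name: values[i] for name, values in stream_args.items()}
        kernel = kernel_cls(**args)
        results.append([kernel.execute(frame) for frame in stream])
    return results

# Two streams, each with its own resize target -- no Job objects involved.
out = run_streams(ResizeKernel,
                  [["f0", "f1"], ["g0"]],
                  width=[640, 320], height=[480, 240])
```

The list-valued keyword arguments mirror how per-stream parameters are supplied once for the whole batch but consumed one element per stream.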
I tried the Jupyter notebook walkthrough again, but with the same result: the server comes up with its root set to the local directory I launched from, e.g. ~/workspace/scanner
Here is what I am attempting to do:
$ mkdir jupyter_test
$ cd jupyter_test
$ curl -O https://raw.githubusercontent.com/scanner-research/scanner/master/docker/docker-compose.yml
$ docker-compose up cpu
Creating network "jupyter_test_default" with the default driver
Creating jupyter_test_cpu_1 ... done
Attaching to jupyter_test_cpu_1
cpu_1 | [I 17:49:04.692 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
cpu_1 | [I 17:49:04.969 NotebookApp] Serving notebooks from local directory: /app
cpu_1 | [I 17:49:04.969 NotebookApp] The Jupyter Notebook is running at:
cpu_1 | [I 17:49:04.969 NotebookApp] http://(31803b558c2e or 127.0.0.1):8888/?token=0d49b69ec71c080b0d4ed4265b277d2c5ba32ee7b1147d28
cpu_1 | [I 17:49:04.970 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
cpu_1 | [W 17:49:04.973 NotebookApp] No web browser found: could not locate runnable browser.
cpu_1 | [C 17:49:04.973 NotebookApp]
cpu_1 |
cpu_1 |     To access the notebook, open this file in a browser:
cpu_1 |         file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
cpu_1 |     Or copy and paste one of these URLs:
cpu_1 |         http://(31803b558c2e or 127.0.0.1):8888/?token=0d49b69ec71c080b0d4ed4265b277d2c5ba32ee7b1147d28
I then paste the following into my browser address bar:
http://127.0.0.1:8888/?token=0d49b69ec71c080b0d4ed4265b277d2c5ba32ee7b1147d28
This gives me the directory contents of "jupyter_test" as the root of the server, with nothing but the docker-compose.yml file listed.
It should start the server root in /opt/scanner, from where I can then follow the instructions to click through to examples/Walkthrough.ipynb, right?
Try downloading the docker-compose.yml file again.
Did it change from when I downloaded it an hour ago?
I grabbed it again, it looks the same, with the same result:
version: "2.3"
services:
  gpu:
    image: scannerresearch/scannertools:gpu-9.1-cudnn7-latest
    command: jupyter notebook --allow-root --ip=0.0.0.0 --port=8888
    ports:
      - "8888:8888"
    volumes:
      - .:/app
    runtime: nvidia
  cpu:
    image: scannerresearch/scannertools:cpu-latest
    command: jupyter notebook --allow-root --ip=0.0.0.0 --port=8888
    ports:
      - "8888:8888"
    volumes:
      - .:/app
Is there a different one I should be using?
Eek, try again?
Bueno, that looks good. I'll try executing the notebook.
Thx!
Getting a message that the kernel died during frame analysis and was restarted, but nothing happens after the restart:
$ docker-compose up cpu
Recreating jupyter_test_cpu_1 ... done
Attaching to jupyter_test_cpu_1
cpu_1 | [I 19:15:53.267 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
cpu_1 | [I 19:15:53.519 NotebookApp] Serving notebooks from local directory: /opt/scanner
cpu_1 | [I 19:15:53.519 NotebookApp] The Jupyter Notebook is running at:
cpu_1 | [I 19:15:53.519 NotebookApp] http://(7438c828cf9c or 127.0.0.1):8888/?token=65129c0fd04bbbfbb70a8b4519a66e8afab11f22cc3841ad
cpu_1 | [I 19:15:53.520 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
cpu_1 | [W 19:15:53.524 NotebookApp] No web browser found: could not locate runnable browser.
cpu_1 | [C 19:15:53.524 NotebookApp]
cpu_1 |
cpu_1 |     To access the notebook, open this file in a browser:
cpu_1 |         file:///root/.local/share/jupyter/runtime/nbserver-7-open.html
cpu_1 |     Or copy and paste one of these URLs:
cpu_1 |         http://(7438c828cf9c or 127.0.0.1):8888/?token=65129c0fd04bbbfbb70a8b4519a66e8afab11f22cc3841ad
cpu_1 | [W 19:16:04.739 NotebookApp] Clearing invalid/expired login cookie username-127-0-0-1-8888
cpu_1 | [W 19:16:04.740 NotebookApp] Forbidden
cpu_1 | [W 19:16:04.742 NotebookApp] 403 GET /api/sessions?=1554232255319 (172.22.0.1) 3.59ms referer=http://127.0.0.1:8888/tree
cpu_1 | [W 19:16:04.746 NotebookApp] Clearing invalid/expired login cookie username-127-0-0-1-8888
cpu_1 | [W 19:16:04.747 NotebookApp] Forbidden
cpu_1 | [W 19:16:04.748 NotebookApp] 403 GET /api/terminals?=1554232255320 (172.22.0.1) 3.28ms referer=http://127.0.0.1:8888/tree
cpu_1 | [I 19:16:18.610 NotebookApp] 302 GET /?token=65129c0fd04bbbfbb70a8b4519a66e8afab11f22cc3841ad (172.22.0.1) 0.51ms
cpu_1 | [I 19:16:30.263 NotebookApp] Writing notebook-signing key to /root/.local/share/jupyter/notebook_secret
cpu_1 | [W 19:16:30.264 NotebookApp] Notebook examples/Walkthrough.ipynb is not trusted
cpu_1 | [W 19:16:30.546 NotebookApp] Notebook examples/Walkthrough.ipynb is not trusted
cpu_1 | [I 19:16:31.000 NotebookApp] Kernel started: 3c09ec1e-3049-4895-88c1-d730f0c879fc
cpu_1 | [I 19:16:31.643 NotebookApp] Adapting to protocol v5.1 for kernel 3c09ec1e-3049-4895-88c1-d730f0c879fc
cpu_1 | [I 19:19:36.667 NotebookApp] Saving file at /examples/Walkthrough.ipynb
cpu_1 | [I 19:19:38.115 NotebookApp] KernelRestarter: restarting kernel (1/5), keep random ports
cpu_1 | WARNING:root:kernel 3c09ec1e-3049-4895-88c1-d730f0c879fc restarted
In the browser, I see the following:
/tmp/example.mp4 Reading frames from video...
So it appears that the example enters the while loop:
while True:
    ret, frame = video.read()
    if not ret: break
    frames.append(frame)
But never exits (normally) so that the following print statement does not execute:
print(len(frames))
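For reference, the loop above accumulates every decoded frame in a Python list, so peak memory grows with clip length. A generator that yields frames one at a time avoids holding them all at once. This is a sketch only; FakeVideo below is an invented stand-in for cv2.VideoCapture (same (ret, frame) contract on read()) so the example runs without OpenCV.

```python
# Sketch: stream frames one at a time instead of collecting them in a list.
# FakeVideo is a made-up stand-in for cv2.VideoCapture; its read() returns
# the same (ret, frame) pair the walkthrough's loop expects.

class FakeVideo:
    def __init__(self, n_frames):
        self.remaining = n_frames

    def read(self):
        if self.remaining == 0:
            return False, None       # end of stream, like cv2
        self.remaining -= 1
        return True, b"frame-bytes"  # dummy frame payload

def iter_frames(video):
    """Yield frames lazily; only one frame is live at a time."""
    while True:
        ret, frame = video.read()
        if not ret:
            break
        yield frame

# Count ~1000 frames without ever holding more than one in memory.
count = sum(1 for _ in iter_frames(FakeVideo(1000)))
```

With a real capture object the same iter_frames works unchanged; only operations that genuinely need all frames at once (like the walkthrough's list) force the full clip into RAM.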
You're probably running out of memory. This example will load ~1000 video frames using the non-Scanner API. Is this running on OS X? How much memory have you allocated to Docker?
Yes, OSX with 16G RAM.
Docker has 2GB mem, 1GB swap.
What settings would you recommend?
Thx!
I would try 8 GB and see if that works.
Awesome, that worked. I'll play around with it to see what the minimum requirement is for this clip to better understand mem issues per frame length.
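A rough way to reason about the memory requirement per clip: assuming the frames decode to 8-bit, 3-channel (BGR) images, which is what OpenCV produces, the frame list needs about frames × width × height × 3 bytes, plus Python/numpy overhead. The 1080p resolution below is an assumption for illustration, not a known property of the example clip.

```python
# Back-of-the-envelope estimate of memory needed to hold decoded frames
# in a list. Assumes 8-bit, 3-channel frames; actual usage runs somewhat
# higher due to per-object overhead.

def frames_memory_gb(n_frames, width, height, channels=3):
    return n_frames * width * height * channels / 1e9

# ~1000 frames at an assumed 1920x1080 resolution:
est = frames_memory_gb(1000, 1920, 1080)
```

At roughly 6 GB for 1000 such frames, a 2 GB Docker allocation is exhausted partway through the loop, which matches the kernel dying mid-read, while 8 GB leaves headroom.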
I'll close this now.
Thx.
Please refer to the "walkthroughs" page:
http://scanner.run/guide/walkthrough.html
The steps include changing into the /opt/scanner/examples/apps/walkthroughs directory and executing the grayscale_conversion.py script.
The "walkthroughs" directory does not exist, nor do any of the scripts that would otherwise be contained within:
Likewise with the "advanced walkthrough", once the jupyter app starts, the only file existing in the jupyter directory is:
So, the advanced walkthrough is unusable.
When looking at the available tutorials and example code, it all appears to be based on the old API, not the new.
Is there another release coming to address these issues?