Open ediaz-biamp opened 3 weeks ago
Which docker image are you using? I am currently using this one https://hub.docker.com/r/savatar101/marker-api (0.3) and it works just fine! The last update there was 4 months ago, so there were no recent changes. In your logs the port and address seem pretty weird to me, which might be why you are getting those 404s.
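For reference, this is roughly how I run that image. The port mapping here is an assumption based on my own setup; check the image's Docker Hub page if it doesn't respond there.

```shell
# Pull the known-working image (tag 0.3) and run it.
# The internal port is an assumption -- adjust the mapping if needed.
docker pull savatar101/marker-api:0.3
docker run -p 8000:8000 savatar101/marker-api:0.3
```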
Can you please send me your obsidian configuration?
When you built your own image like described here https://github.com/adithya-s-k/marker-api, which setup did you use? The simple server setup?
So the docker image you sent has some notable differences; for instance, the default port in the GitHub build is now 8080. Also, I couldn't find a way to run the server by calling "marker-api" in the CLI in this build. I'll try your image to see if it works.
I indeed used the simple server setup. Can you give me some more specifics about the config info you need? Is it a particular file or do you want the entire folder?
I just tried your docker image, and I get the same issue...
Okay, this is weird, it should be working if you use the same image as I do. Please send me a screenshot of this plugin's settings in the Obsidian settings, so I can make sure you've configured everything correctly.
I also had a deeper look now, and there is indeed a new version and a whole new setup for the API part. However, the endpoint shouldn't have changed. I'll try the new setup tomorrow and see if it works for me.
Sure. I was messing around with different ports because I use another tool on 8080, but here's my current setup.
Let me know how the new setup works for you. I'm also having some permission issues running scripts with my new computer. I'll get these fixed just in case and try again.
I'm sorry, but I can't get the new version of the marker API to run on my device. There is also this issue: https://github.com/adithya-s-k/marker-api/issues/20, and I have exactly the same problem. I tried everything recommended to make this work, but I couldn't build the docker image or run the python server manually.
But the image mentioned earlier works fine for me, so it should also work for you. Maybe try another port if you have something else running on 8080, and you could also have a look in Obsidian's developer console for a hint about what is happening.
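You could also probe the endpoint directly, outside of Obsidian, to rule the plugin out. This is just a sketch; the host, port, and form field name are assumptions, so adjust them to your setup.

```shell
# Send a test PDF to the route the plugin calls. If this also
# returns 404, the server simply doesn't expose /convert, and the
# problem is the build rather than the plugin configuration.
curl -i -F "pdf_file=@test.pdf" http://localhost:8080/convert
```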
I'll try setting it up again once the error gets fixed.
I recently got a new computer and had to re-install marker API. I noticed some differences in the new build, but I was able to set up the local docker version with GPU support. However, when I use the plugin in Obsidian, I keep getting errors. Not sure if there were changes to the build that break the plugin, or if it's something else. Of note, I was also able to run the local python server, but the issue persists. Please refer to the logs below.
docker run --gpus all -p 8080:8080 marker-api-gpu
==========
== CUDA ==
==========
CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License. By pulling and using the container, you accept the terms and conditions of this license: https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
INFO:     Started server process [1]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
[ASCII-art "Marker-api" banner]
Easily deployable and highly Scalable 🚀 API to convert PDF to markdown quickly with high accuracy. Abstracted by Adithya S K : https://twitter.com/adithya_s_k
Loaded detection model vikp/surya_det3 on device cuda with dtype torch.float16
Loaded detection model vikp/surya_layout3 on device cuda with dtype torch.float16
Loaded reading order model vikp/surya_order on device cuda with dtype torch.float16
Loaded recognition model vikp/surya_rec2 on device cuda with dtype torch.float16
Loaded texify model to cuda with torch.float16 dtype
INFO:     172.17.0.1:51880 - "POST /convert HTTP/1.1" 404 Not Found
INFO:     172.17.0.1:56182 - "POST /convert HTTP/1.1" 404 Not Found
INFO:     172.17.0.1:41442 - "POST /convert HTTP/1.1" 404 Not Found
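Since the server is FastAPI/Uvicorn, the routes the new build actually registers can be listed from its auto-generated OpenAPI schema. A sketch, assuming the default schema path and my port mapping:

```shell
# Dump the paths the running server exposes; if /convert is absent
# from this list, the 404s come from the build, not the plugin.
curl -s http://localhost:8080/openapi.json | \
  python3 -c "import sys, json; print(list(json.load(sys.stdin)['paths']))"
```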