C9Glax / tranga

Docker container to monitor (manga) scanlation sites and download new chapters.
GNU General Public License v3.0
157 stars · 15 forks

Server v2 #167

Open C9Glax opened 7 months ago

C9Glax commented 7 months ago

@db-2001 While going over your PR (related issue), I thought about some of the issues we have with the API. https://github.com/C9Glax/tranga/issues/113#issue-2120522576

make the /Jobs/Waiting smaller? It seems like it's sending 3.8MB of text every second, which is hell on my data usage when accessing remotely. Is there a need to sync the full metadata of all manga jobs every second, instead of on demand?

Resolves #74, resolves #114. These could be their own requests.

By returning a list of JobIds and cross-checking locally which jobs exist, this should be a lot easier on data.

Generally the API transmits a lot more data than it needs to; by streamlining requests (stripping data, splitting single requests into multiple smaller ones) we can react a lot better to changes.

So extending your effort, I propose that we also get going on a Version 2 of the API. The old API would still be accessible (for now), but we could gradually shift requests to the new one.

I have started writing out the API-Documentation. If you have any more requests you would like to see, then write them in this PR.

db-2001 commented 7 months ago

Yeah, this makes sense. Let's make a branch of the current working code, like a V01.XX.XX, and generate a Docker image for that so people can force their server to run that code.

db-2001 commented 7 months ago

And you can just merge my current JSON API branch into this Server V2 branch if you want a head start. Once that's done we can close my branch and implement the JSON API as part of V2, without having any of it in V1.

C9Glax commented 7 months ago

You want a hard cut-over? The way I wrote the current requests, we could still have both parts running, and be future-proof (v3, 4, 5, ...) too. But yeah, I will merge your current branch into this one.

db-2001 commented 7 months ago

I think something as big as a change to the API structure would be enough to justify a hard cut-over. It's your call in the end, but V2 being the version with the big API change makes sense to me. Then we just leave the old version alone, patching it every now and then if we need to.

db-2001 commented 7 months ago

Also, do you want to take the lead on the server side of V2 then? That'll let me focus on the frontend V2, implementing all the manga pop-up and search/add pop-up changes that everybody's been wanting. I can also then implement any other changes that result from the server V2. What do you think?

C9Glax commented 7 months ago

Sounds like a plan! So it would essentially be a complete V2 front- and backend; then a hard cut-over would make a lot of sense.

db-2001 commented 7 months ago

You can use this thread to let me know how the frontend implementation should change architecturally. I agree that syncing all the metadata for all mangas every time is a bit much. If I understand correctly, the API will now return a JSON list of all the job ids, and the frontend can then check which job ids it has already rendered and which ones it hasn't, and call for the metadata of only the missing ones.

C9Glax commented 7 months ago

That is the idea. Basically any time you expect a list of items, you will probably get a list of ids that you can then retrieve. Which gives me the idea of bulk-requests...
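To illustrate the idea: the front-end only diffs id lists and fetches metadata for unknown ids. A minimal sketch of that diff with hypothetical names (this is not the actual Tranga front-end code):

```typescript
// Hypothetical sketch: given the id list returned by an id-only endpoint and
// the ids already cached on the front-end, work out which jobs need their
// metadata fetched and which cached jobs no longer exist on the server.
function diffJobIds(
  serverIds: string[],
  cachedIds: string[]
): { toFetch: string[]; toDrop: string[] } {
  const serverSet = new Set(serverIds);
  const cachedSet = new Set(cachedIds);
  return {
    // Unknown ids: fetch full metadata once, on demand.
    toFetch: serverIds.filter((id) => !cachedSet.has(id)),
    // Stale ids: drop them from the local cache.
    toDrop: cachedIds.filter((id) => !serverSet.has(id)),
  };
}
```

Polling then only moves a list of ids per refresh; full job metadata is fetched once per new job instead of every second.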

db-2001 commented 7 months ago

For the latest downloaded chapter number, do you think it would make sense to add one function/call that loops through all the zips as a "force-check", but otherwise just store the highest number in the series.json and read/update that?

C9Glax commented 7 months ago

I would loop through the filesystem real quick, since you could move/delete files etc. It should also probably return both a local value and a connector value.
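The force-check essentially reduces to scanning downloaded archive names for the highest chapter marker. A minimal sketch; the filename pattern here is an assumption for illustration, not Tranga's actual naming scheme:

```typescript
// Hypothetical sketch: derive the highest downloaded chapter number from a
// list of archive filenames. This is the "force-check" fallback, since a
// cached value in series.json can go stale when files are moved or deleted.
function highestChapter(filenames: string[]): number {
  let highest = 0;
  for (const name of filenames) {
    // Assumed pattern: chapter markers like "Ch.12" or "Ch.12.5".
    const match = name.match(/Ch\.?\s*(\d+(?:\.\d+)?)/i);
    if (match) highest = Math.max(highest, parseFloat(match[1]));
  }
  return highest;
}
```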

db-2001 commented 7 months ago

Makes sense, and then automatically check for new chapters if the local and connector differ?

C9Glax commented 7 months ago

The Connectors still do their own stuff, retrieving the Chapters from the website and automatically creating jobs if there are new ones, or missing ones.

C9Glax commented 7 months ago

Okay this commit changed a lot of the backend. If stuff breaks lemme know here.

C9Glax commented 7 months ago

Since there might be changes now that only apply to ServerV2 or cuttingedge (which becomes legacy when ServerV2 becomes cuttingedge), but also some that might be needed in both, and merging between ServerV2 and cuttingedge is no longer possible, I branched from the last point where both could still merge and called it cuttingedge-merge-serverv2. Anything on the API side that is needed in both branches should be changed there, for as long as that stays possible.

C9Glax commented 7 months ago

@db-2001 I think all the endpoints are implemented now. Did a few small tests, but nothing thorough. If you see anything that seems non-intuitive, broken, or redundant, let me know. I'm also very much open to feedback on status codes, if you feel they are not descriptive of what went wrong, or not uniform across requests.

db-2001 commented 7 months ago

Ok, sounds good. I still need to fix up and do some work on the front end with the current functionality before I move over to the V2 API. What are the big changes that need to happen to the overall architecture of the API? I.e., anything that isn't just a straight replacement for an original call.

C9Glax commented 7 months ago

More granularity; e.g. there might be a second call needed with the results of the first. For example, /v2/Jobs/<Type> will only return ids; for more information you would need to call /v2/Job/<id>. (I plan to add the functionality to pass a list of ids.)
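Once passing a list of ids is possible, the front-end could batch lookups instead of issuing one request per id. A sketch; the bulk endpoint path and its `ids` query parameter are assumptions about the planned API, not its confirmed shape:

```typescript
// Hypothetical sketch: split a long id list into batched request URLs, so a
// library of hundreds of jobs needs a handful of requests instead of one
// request per id. Endpoint path and parameter name are assumed.
function buildBulkJobUrls(apiBase: string, ids: string[], batchSize = 50): string[] {
  const urls: string[] = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize);
    urls.push(`${apiBase}/v2/Jobs?ids=${batch.map(encodeURIComponent).join(",")}`);
  }
  return urls;
}
```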

db-2001 commented 7 months ago

Is Global search implemented in this branch? How does the API call work if so?

Also, am I reading the new code correctly: does creating a job no longer support a custom folder name and downloading after a certain chapter?

db-2001 commented 7 months ago

Also, does the API have an internal version number that you've been incrementing? I'm thinking of hard-coding the frontend version in the HTML; it could also display the API version, so it's easier to track down issues when people can see the version of each that they're running. I don't know how feasible some sort of "Check for Update" function is.

C9Glax commented 7 months ago

Is Global search implemented in this branch?

Not yet, for now I got old functionality running.

Also am I reading the new code correctly, does creating a job no longer support a custom folder name and downloading after a certain chapter?

Totally forgot about that. The way it currently works is via a field in the Manga struct, so it isn't job-specific but Manga-specific, which I think makes more sense?

Also does the API have an internal version number that you've been incrementing?

Kind of, but also not. In the TrangaSettings file there is a version field that I intended to use, but I have not really found a way to use it efficiently.

C9Glax commented 7 months ago

Okay, GlobalSearch is implemented. However, MangaSee slows it down badly with the current search function, so we might have to do #132: /v2/Manga/Search?title=xxx. Also added the bulk requests for internalIds and jobIds.

db-2001 commented 7 months ago

If MangaSee is slowing it down right now, do we want to add a way to enable/disable connectors from the front-end? Like, I know one of the connectors is for French, and I personally would never use that connector since I don't speak French.

When returning the response for GET /v2/Connectors, the response could include something like:

{
  "name": "connector_name",
  "url": "generic_connector_url",
  "enabled": true,
  "icon": "url to icon for front-end (optional)"
}

C9Glax commented 7 months ago

If MangaSee is slowing it down right now, do we want to add a way to enable/disable connectors from the front-end? Like, I know one of the connectors is for French, and I personally would never use that connector since I don't speak French.

When returning the response for GET /v2/Connectors, the response could include something like:

{
  "name": "connector_name",
  "url": "generic_connector_url",
  "enabled": true,
  "icon": "url to icon for front-end (optional)"
}

Sounds like a good idea...
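With a response shape like the one proposed above, the front-end could hide disabled connectors with a simple filter. A sketch, assuming the response deserializes to objects with those fields (this is not a confirmed API contract):

```typescript
// Hypothetical sketch: filter the proposed GET /v2/Connectors response down
// to the connectors the user has enabled. Field names follow the shape
// proposed in this thread and are assumptions, not the actual API.
type Connector = {
  name: string;
  url: string;
  enabled: boolean;
  icon?: string; // optional URL to an icon for the front-end
};

function enabledConnectors(connectors: Connector[]): Connector[] {
  return connectors.filter((c) => c.enabled);
}
```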

C9Glax commented 7 months ago

Added a fuzzy-search though, so it returns very few entries now 👍🏼
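For context, a fuzzy title search usually means tolerating small spelling differences between query and title. A minimal illustration using Levenshtein distance; this is not Tranga's actual search implementation:

```typescript
// Hypothetical sketch of a simple fuzzy title filter: keep titles that
// contain the query as a substring, or that are within a small Levenshtein
// (edit) distance of it. Not Tranga's actual implementation.
function levenshtein(a: string, b: string): number {
  // Single-row dynamic programming over edit operations.
  const dp: number[] = Array.from({ length: b.length + 1 }, (_, i) => i);
  for (let i = 1; i <= a.length; i++) {
    let prev = dp[0];
    dp[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j];
      dp[j] = Math.min(
        dp[j] + 1, // deletion
        dp[j - 1] + 1, // insertion
        prev + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
      prev = tmp;
    }
  }
  return dp[b.length];
}

function fuzzyMatch(title: string, query: string, maxDistance = 2): boolean {
  const t = title.toLowerCase().trim();
  const q = query.toLowerCase().trim();
  return t.includes(q) || levenshtein(t, q) <= maxDistance;
}
```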

db-2001 commented 2 months ago

@C9Glax I noticed that Server V2 doesn't necessarily return the jobs in the same order every time (screenshots attached).

I'm using the monitoring Jobs API call to populate the library view. Clearly the job order needs to be sorted (probably by manga name); I'm wondering if you want to do this on the API side or if I should do it on the front-end. The only downside to doing it on the front-end is that it would be a tiny bit more resource-intensive every time we refresh the library, because it has to sort, and that may slow down as the library grows.

(Side note: latest chapter indicator still shows zero, not sure if it's because it's not working or you haven't worked on it yet.)

db-2001 commented 2 months ago

Thinking more about how I'm going to dynamically update this view without reloading the entire page: I'm going to have to sort alphabetically on the front end anyway, so I'll do it there unless you have a better idea.

Edit: the jobs actually seem to be returning in the same order every time, but the JavaScript seems to be having a fit about it.
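If the sort stays on the front-end as discussed, it can be a small pure function over the cached job list. A sketch with assumed field names:

```typescript
// Hypothetical sketch: sort jobs alphabetically by manga name on the
// front-end. localeCompare keeps accented titles ordered sensibly; the
// field names here are assumptions, not the actual API response shape.
type JobEntry = { id: string; mangaName: string };

function sortJobs(jobs: JobEntry[]): JobEntry[] {
  // Copy first so the cached fetch result stays untouched.
  return [...jobs].sort((a, b) =>
    a.mangaName.localeCompare(b.mangaName, undefined, { sensitivity: "base" })
  );
}
```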

db-2001 commented 2 months ago

Pulled the most recent version of V2 (commit from 3 days ago) and Tranga didn't start up; the error is shown in the attached screenshot.

Let me know if you need any more info

C9Glax commented 2 months ago

https://github.com/C9Glax/tranga/pull/167#issuecomment-2351265505 We moved to a local Chromium install in the image, instead of having to download it separately through Puppeteer. Is this on amd64?

db-2001 commented 2 months ago

Yeah, this is on amd64. What was the reason behind moving to the new one? And how are end users going to be informed if any action is needed from them?

Edit: I will say I'm not using a Docker image to test the V2 code (cuz I don't think there is one?), I just cloned the branch.

C9Glax commented 2 months ago

Ah, that explains it. Users should not notice any change, so there's no reason to inform anyone. The change is that instead of downloading Chromium on startup, it is already baked into the image (installed on the machine). So if you are testing locally, I recommend you either install Chromium on your test system, or use the *.local.yaml file to build the image with Docker Desktop: https://www.docker.com/products/docker-desktop/

C9Glax commented 2 months ago

As for the reason why: separating our code from Chromium, and simply having the latest version of Chromium in every build.