Difegue / TVC-16

Pelican sources for TVC-16. https://tvc-16.science/blogopolis-docker.html

https://tvc-16.science/lrr-survey.html #3


utterances-bot commented 5 years ago

Blogopolis - LANraragi User Survey Results


https://tvc-16.science/lrr-survey.html

CirnoT commented 5 years ago
> I got an issue about adding an API for making/restoring backups, so I'll probably do that instead so people can automate their own backup strategy.

A simple Perl script that generates JSON and outputs it to stdout would work too, and would most likely be easier to automate with a cronjob; you could even provide an example in the documentation.

> This feature is pretty generic and allows people to shape their collection however they want (chapters of a tank, favorites, etc.)

Keep in mind categories are not archive sets. What people most likely want is to be able to filter on specific categories (like Manga or Non-H) and then search within that filtered result set separately.

I assume archive sets would only be viewable one at a time: you could view 'Game CG' and search inside it, but you couldn't search for a tag across both the 'Game CG' and 'Artist CG' sets at once. Having categories rely on sets would also clutter them beyond recognition if plugins are allowed to create sets and add archives to them. What is called 'Doujinshi' on one site may be called 'H-Doujin' on another; if categories are not predefined, users will be forced to categorize everything themselves or fix whatever the plugin set. I really think having separate categories like on EH, with archive sets kept separate, would be best suited here.

CirnoT commented 5 years ago

Favorites can indeed be replaced by archive sets, though again this is an issue if you can't combine a search across more than one set at a time.

> Regex searching and Tag suggestions are both pretty high up here, and will probably be done when I revamp the current index to use server-side processing instead of loading a big clunky JSON cache

Both should be toggleable as 'advanced search' options, similarly to HPX, and the option should be saved locally in the browser rather than server-side, and not hidden behind Admin access. This could also add options like 'match on any' or 'match on all'. Searching by a specific namespace in the EH form is also important, as that is what people are accustomed to: `female:"schoolgirl uniform"$` (the `$` means match exactly; without it, 'female:schoolgirl uniform asdfg' would also match).
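The `$` convention above is easy to pin down with a toy sketch (plain Python; the prefix-match fallback is an assumption for illustration, not how LRR actually matches tags):

```python
def tag_matches(query: str, tag: str) -> bool:
    # EH-style convention: a trailing '$' means exact match;
    # otherwise (assumption here) the query matches any tag it prefixes.
    if query.endswith("$"):
        return tag == query[:-1]
    return tag.startswith(query)

# Without '$', the longer tag also matches:
assert tag_matches('female:schoolgirl uniform', 'female:schoolgirl uniform asdfg')
# With '$', only the exact tag matches:
assert not tag_matches('female:schoolgirl uniform$', 'female:schoolgirl uniform asdfg')
assert tag_matches('female:schoolgirl uniform$', 'female:schoolgirl uniform')
```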

Difegue commented 5 years ago

Categories as you view them (Non-H/Artist CG/etc.) are basically already there as the favorite tags feature.
I'm kinda stingy about adding extra fields to the database model and don't see the point of adding a field explicitly for categories.

Favorite tags in their current state are a bit stiff however (limited to 5), so it might be interesting to rework them a bit to make them more flexible.

I'm not quite sure how to display sets in the UI yet: I thought about just making them folders in the base list/thumbnail view.

Agreed on the search part.

CirnoT commented 5 years ago

Indeed, since they work as OR, but being limited to only 5 makes them unusable for this.

Speaking of favorites, creating the favorite 'female:lolicon male:shotacon' and selecting it shows an empty result set, but typing the same thing in the search box works properly. Is this intended behavior? The UI says "or combinations", so I would assume it should work.

Difegue commented 5 years ago

Favorite tags run in datatables' regex search mode to implement OR: `female:x.*male:y` should work (and if you want it to match the reverse order of tags, you'll have to write a bigger regex). It's in the documentation, but it's such a bad hack that it's understandable no one knows about it.

Another reason to work on making search server-side...
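The ordering caveat is easy to demonstrate with a quick sketch (plain Python `re` standing in for the datatables regex mode; the tag strings are made up):

```python
import re

tags = "female:x, male:y"
tags_rev = "male:y, female:x"

# The simple hack regex only matches one ordering of the two tags.
pattern = re.compile(r"female:x.*male:y")
assert pattern.search(tags)
assert not pattern.search(tags_rev)

# Covering both orderings already requires an alternation,
# which is why the button labels balloon into long regexes.
both = re.compile(r"female:x.*male:y|male:y.*female:x")
assert both.search(tags)
assert both.search(tags_rev)
```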

CirnoT commented 5 years ago

That's nice to know, though it makes them look very ugly. Maybe allow favorites to have a separate display name too? Having a 40-character regular expression on a button sure would suck.

Difegue commented 5 years ago

Oh yeah, they definitely look bad in that state.
Adding custom names would be good to have once I get around to redoing the database structure for favtags (currently they're stuck in the options hash), as it would also allow hiding namespaces.

Jisagi commented 5 years ago

> A simple Perl script that generates json and outputs it to stdout would work too and would most likely be easier to automate with cronjob - can even provide example in documentation.

Being the one who initially created the issue for the backup API route: nothing is easier than running curl on an API route, certainly not a Perl script.

On desktop/mobile clients as well, since people seem to like them: the latter already exists, and it works exactly as it should, through the existing API. More (comprehensive) API routes would let people build basically anything they need without too much work on your side. Everything could be automated (even uploads and tagging), and any kind of client would go through the API as well. Currently a potential client has to fetch the complete archive list and then filter it client-side, which is... not perfect.

@CirnoT Don't get me wrong, I'm honest about not really liking Perl, but looking at it objectively, an API is more versatile for all the other possible, not-too-far-fetched use cases.

CirnoT commented 5 years ago

In a cronjob, I see no difference between `curl http://localhost/api/backup -o backup.json` and `perl scripts/backup.pm > backup.json`. In fact the second one is more universal, as it does not add a dependency on curl or wget while LRR already depends on Perl. And both can exist at the same time without any issue.

Jisagi commented 5 years ago

Purely in a cronjob? Yes, no difference at all.

Not sure what you mean by "dependencies" when talking about curl (or wget): curl was merely an example; an API can be queried any way you want. The focus is on "any way you want". What LRR itself depends on doesn't really matter, since an API eliminates all that by providing a unified, standardized interface (be it JSON, XML, ...). The underlying platform/software can be written in brainfuck if it wants to be; as long as the interface output is in a known format, whoever queries it can read and interpret it.

I agree that both can exist at the same time, but the existence of an API route for this would eliminate the need for any script, be it written in Perl or any other language, which is imho the far more versatile solution. The survey has shown that many users run LRR on their local machine rather than "outsourced" to a NAS or an external server. I personally think it's far easier to have an API which can be queried both on the machine hosting it and from an external one, instead of only on the host itself; the Perl script would limit the backup possibilities quite a bit.

Difegue commented 5 years ago

I was going to only introduce the API endpoint but since I got the script courtesy of https://github.com/Difegue/LANraragi/pull/168/ I guess there can be both.

They'll both rely on the same internal method (`LANraragi::Model::Backup::build_backup_JSON()`), so it's no big deal.
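The shape of that design, sketched in Python rather than Perl (function names and the data model are made up; only the "one shared builder, two front-ends" structure is the point):

```python
import json

# Shared internal builder: both front-ends call this one function,
# mirroring the role build_backup_JSON plays in LRR (data model invented here).
def build_backup(archives):
    return json.dumps({"archives": archives}, sort_keys=True)

# CLI front-end: emit to stdout so a cronjob can redirect it to a file.
def cli_backup(archives):
    print(build_backup(archives))

# API front-end: return the same payload as an HTTP status/body pair.
def api_backup(archives):
    return 200, build_backup(archives)
```

Because both paths share the builder, the script and the API route can never drift apart in output format.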

Jisagi commented 5 years ago

Are there any plans yet for creating some kind of "community hub" on Discord, IRC, or something else? I'm advocating for Discord btw :smile:

Difegue commented 5 years ago

I'm not a big fan of Discord, but it's what's in at the moment, so I'll probably go for that. (And I just saw it has open-source CLI clients, which alleviates my data-collection fears by a sizeable amount.) Managing a chat is a pain though, so ideally I'd love to have it like the hydrus server, where it's mostly run by the users and I can just check in to reply to stuff.

I'll put a notice here/on the repo readme once I get it set up.

Difegue commented 5 years ago

Well I was waiting for my multiarch builds to complete so I went ahead and did it: https://discord.gg/aRQxtbg

Hopefully this won't peter out at like 3 members

Hakker commented 5 years ago

I would say proper database support. I wonder how LANraragi will deal with large setups: I currently have about a TB worth of stuff, and it makes me wonder how a flat file will cope with it, especially since deletion doesn't remove anything from the file.

Difegue commented 5 years ago

I would advise you try it and see how it scales. :^)
Currently the major bottleneck lies not in the database but rather in the JSON cache functionality.

It's funny you say "proper" database, since Redis is in that sense much closer to one than SQLite: the file is only used for serializing at regular intervals, and most of the DB lives in memory once the Redis server starts, much like how PostgreSQL/MariaDB/your poison of choice works.
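For context, that serialize-at-intervals behaviour is Redis's RDB snapshotting, driven by `save` rules; the stock redis.conf ships rules along these lines (shown for illustration, not LRR's actual config):

```
save 900 1      # snapshot if at least 1 key changed in 900 s
save 300 10     # ... at least 10 keys in 300 s
save 60 10000   # ... at least 10000 keys in 60 s
```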

I actually considered swapping Redis for SQLite at some point, as tables might be handy when I implement server-side search, and not having to spin up a second server gives me fewer failure states to deal with; but I guess that'd go against your suggestion. 😅