Open mduplouy opened 7 years ago
The Calibre schema is OK with me. It's a good one, and there's no real point in inventing something different. With future library updates (which will hopefully follow), I'll just have to make some minor modifications manually again (like I did here) and give the script another 3 days to do its job :) Not a big deal, since it isn't updated more than twice a year
actually the calibre database structure suffers from some design flaws, but nothing really important :p
I didn't say it was 'perfect', but it's much better than the myrulib db schema :p
MySQL support should be OK now; at least the internal database builds nicely with the cli command :+1:
If you need PostgreSQL support, please let me know
Everything is in the feature_mysql branch
That's great! Will test as soon as the conversion is complete. Also, please do add PostgreSQL support, since I don't know which one the admins at work will prefer (I can set up both at home, of course). Thanks 👍
done :)
wow, you're lightning fast :) 👍 too bad the conversion isn't that fast :( but I honestly just don't know how to make it work faster... (it's at ~74% now)
That's the strength of a not-too-poor design :dancing_women:
Maybe you should think of a new computer ? :stuck_out_tongue_winking_eye:
I'm not doing it on a PC, but on a Synology NAS :) I can't have my core i7 laptop run 24h uninterrupted, and actually the CPU isn't the bottleneck here (the NAS is doing just as fine): RAM access speed is. The script has to grep through 49,678 authors for each of the 62,800 books to construct the authors list. It's all in RAM all right (Synology's 2GB is more than enough for that), but it still takes a while. That's myrulib's poor db design for you :( I've optimized the script as much as I can, so it's only one grep using sed (which is fast) per book, but still, the volume of RAM access involved is... impressive
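Not the actual conversion script, but a minimal sketch of the one-sed-pass-per-book lookup described above (the file names and pipe-delimited format are made up purely for illustration; the point is that the whole author list gets scanned once per book, so the cost is roughly books × authors line reads):

```shell
#!/bin/sh
# Hypothetical flat-file layout, for illustration only:
#   authors.txt : one "author_id|author_name" line per author
#   books.txt   : one "book_id|author_id|title" line per book
printf '1|Alice\n2|Bob\n' > authors.txt
printf '10|2|Some Title\n11|1|Other Title\n' > books.txt

# One sed pass over the WHOLE author list for EVERY book:
# with ~50k authors and ~63k books that's on the order of
# 3 billion line scans, hence memory bandwidth, not CPU,
# dominates the run time.
while IFS='|' read -r book_id author_id title; do
    author_name=$(sed -n "s/^${author_id}|//p" authors.txt)
    printf '%s|%s|%s\n' "$book_id" "$title" "$author_name"
done < books.txt
# prints:
# 10|Some Title|Bob
# 11|Other Title|Alice
```

A hash-style lookup (e.g. loading authors into an associative array once) would drop this to a single pass over each file, but the sketch above mirrors the per-book grep/sed approach the thread describes.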
P.S. are those no newlines at the ends of files intended?
absolutely not :) guess my settings were not well imported on my new laptop
Thanks !
I wonder how long it would take on a high-end PC, but maybe the limitation is you can use just one cpu core for this :/
I've already processed db dumps of 60GB, that was some kind of pain :D
> maybe the limitation is you can use just one cpu core for this :/
I think not, since it doesn't even consume 1 core entirely on my laptop: ~20% CPU at max (I have 2 cores). So I think RAM access speed is the issue here: it has to grep through all authors for each individual book, and since there are this many authors, it takes time to access the data in RAM. Only a supercomputer frame would help here))) p.s. gotta go now, will post back when the conversion is done (prob tomorrow evening)
I have a huge amount of work next week, don't worry if you don't have any news from me :)
Actually, whatever database is supported by Doctrine DBAL, one should be able to use it :+1:
Please note the table structures won't be modified, as silex cops sticks to the calibre schema