tajmone / PBCodeArcProto

PB CodeArchiv Rebirth Indexer Prototype

Project Status #11

Open tajmone opened 6 years ago

tajmone commented 6 years ago

This issue is to monitor how close the project is to being usable. It considers the overall status of the project (not single aspects of it).

HTML Pages Creator

This is the single app that will handle checking the project's integrity and then building the HTML pages.

Checks Functionality

For finer details on how errors are handled, see Issue #8 — Errors & Warnings.

HTML Output

For finer details on the HTML contents TODOs and status, see Issue #5 — HTML Pages Creation Status.

Debug Cosmetics

Most of the entries in the current code's TODOs list have to do with the app's debug messages and its final report to the user. None of these affect the functionality of creating the web pages, so the app could become operative before these pending tasks are fulfilled.

tajmone commented 6 years ago

Today I took some time to go over the various TODOs lists and the old tools, in order to work out how far we are from the goal of the app being operative.

It seems that we're actually quite close. Most of the pending code tasks concern the details of how the app presents debug output to its user. Since these are just cosmetics, they don't prevent actually using the app for building the HTML contents.

It looks like the major tasks in front of us are:

What I really need now is a clear list of the mandatory tasks ahead of us for the app to be operative. Some design details haven't been discussed so far, and those need to be clearly defined as goals.

Some aspects, like modularization of the code, might actually not be a strict need right now. From what I understood of the previous checks tool, the current app can easily integrate its features with a few retouches and/or by copying over some code from the old tool.

I'd rather have the app up and running, and see the HTML website integrated into the project, and postpone cosmetic changes to a post-publishing time. Keep in mind that testing how the app works with the full original project is going to take some time and might surface issues we didn't think about.

Resource Links

Right now I'm also thinking of the difference between the HTML pages as locally browsed pages vs an online website. Their use differs quite a lot, especially when it comes to linking. We already touched on the issue, and the possibility of having foldered resources linked as a downloadable "on demand" zip archive (via the GitHub API), but you pointed out (quite correctly) that even for single-file resources the download might be deceiving, due to the file depending on the inclusion of other resources from the project.

This consideration led me to think that it would make sense to add an extra key-value pair in the comments header, to link to dependencies. This would allow users to see clearly from the resume card of a resource that it depends on other resources, and also provide a link to them (or to their cards), giving a better picture of the needed files; possibly these links would also allow downloading all required files from the website.

Example (ad absurdum):

;: dependency: ../filesystem/eol.pbi
;: dependency: ../console/text-utils.pbi

... where these will be shown as clickable links in the resume card, either opening the actual resource file or its resume card.

Another format could be:

;: dependencies:
;.   ../filesystem/eol.pbi
;.   ../console/text-utils.pbi

... using the carry-on syntax, to group all dependencies under a single key.

In any case, these special key-value pairs should be acknowledged by the comments parser's post-processor (just like URLs are already being converted to links).
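
As a rough illustration, the collecting step could look like the following sketch (the procedure name and the whole-file scan are simplifications of mine, not the actual parser; it assumes the ";:" key marker and the ";." carry-on marker from the examples above):

Procedure CollectDependencies(file.s, List deps.s())
  ; Scans a resource file for the (hypothetical) "dependency" /
  ; "dependencies" header keys and appends their values to deps().
  Protected fh = ReadFile(#PB_Any, file)
  If fh = 0 : ProcedureReturn #False : EndIf
  Protected line.s, body.s, inDeps = #False
  While Not Eof(fh)
    line = Trim(ReadString(fh))
    If Left(line, 2) = ";:"                       ; header key-value line
      body = Trim(Mid(line, 3))
      If LCase(Left(body, 11)) = "dependency:"    ; single-entry form
        AddElement(deps()) : deps() = Trim(Mid(body, 12))
        inDeps = #False
      ElseIf LCase(body) = "dependencies:"        ; grouped form
        inDeps = #True
      Else
        inDeps = #False
      EndIf
    ElseIf Left(line, 2) = ";." And inDeps        ; carry-on line
      AddElement(deps()) : deps() = Trim(Mid(line, 3))
    Else
      inDeps = #False
    EndIf
  Wend
  CloseFile(fh)
  ProcedureReturn #True
EndProcedure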

This needs to be thought out carefully though, as a dependency of one file might entail the dependencies of that file too. So, if the goal is to allow users to gather all the needed resources to start using a resource they've chosen via its resume card (whether locally or from the website), the system should guarantee that all required files will be downloaded.

Maybe the HTML interface in its local use is not so different from the website. If the aim is to choose a resource and all its dependencies, having a link that gathers all required files is still practical even when the repo has been cloned locally. But if the resources download goes through the GitHub API, then the actual files downloaded will always be those from the master branch of the project, which might not be the expected result when browsing locally, since the user might have checked out an older release tag or commit.

Anyhow, we need to work out some solution, otherwise the HTML pages and their resources' resume cards will be detached from the actual contents of the resources. This might even be fine, if we want the website to be more like a browsable catalogue.

But my guess is that we can come up with some solution to interface the cards to the actual resources.

SicroAtGit commented 6 years ago

We already touched on the issue, and the possibility of having foldered resources linked as a downloadable "on demand" zip archive (via the GitHub API), but you pointed out (quite correctly) that even for single-file resources the download might be deceiving, due to the file depending on the inclusion of other resources from the project.

It would be nice if Gitzip could accept a list of URLs (to directories and to files) and create a zip file from them. There are add-ons for Firefox and Chrome that can do this: https://addons.mozilla.org/en-US/firefox/addon/gitzip/ All we have to do is figure out how this works.

;: dependencies:
;.   ../filesystem/eol.pbi
;.   ../console/text-utils.pbi

This looks good.

This needs to be thought out carefully though, as a dependency of one file might entail the dependencies of that file too. So, if the goal is to allow users to gather all the needed resources to start using a resource they've chosen via its resume card (whether locally or from the website), the system should guarantee that all required files will be downloaded.

Yes, the HTMLPagesCreator must also determine all the files included by an include file, then the files included by those includes, and so on, recursively.
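
That walk could be sketched like this (building on the hypothetical CollectDependencies() procedure sketched above; paths are assumed to be already normalized against the archive root, and a map guards against circular includes):

Procedure ResolveAll(file.s, Map visited.i(), List all.s())
  ; Depth-first collection of a file plus all its transitive dependencies.
  If FindMapElement(visited(), file)
    ProcedureReturn                 ; already processed (also breaks cycles)
  EndIf
  visited(file) = #True
  AddElement(all()) : all() = file
  Protected NewList deps.s()
  If CollectDependencies(file, deps())
    ForEach deps()
      ResolveAll(deps(), visited(), all())
    Next
  EndIf
EndProcedure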

For example:

In the Resume Card the dependencies of MainIncludeFile.pbi should then perhaps be listed like this:
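
For instance, reusing the dependency paths from the earlier example (the listing itself is only illustrative):

MainIncludeFile.pbi
    ../filesystem/eol.pbi
    ../console/text-utils.pbi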

You can see a problem in the list above: if all required include files are downloaded separately, the directory structure must also be recreated on the local computer, otherwise the paths in the IncludeFile "FilePath" statements are no longer correct.
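
Concretely, if MainIncludeFile.pbi contained relative includes like these (again, the example paths from above), downloading the files into one flat folder would break both statements, because they expect sibling directories:

IncludeFile "../filesystem/eol.pbi"      ; expects a sibling "filesystem" directory
IncludeFile "../console/text-utils.pbi"  ; expects a sibling "console" directory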

Anyhow, we need to work out some solution, otherwise the HTML pages and their resources' resume cards will be detached from the actual contents of the resources. This might even be fine, if we want the website to be more like a browsable catalogue.

Perhaps it is better if the website only serves as a catalog, because future codes may have deeply nested dependencies of include files. It will therefore always be easier to download the complete archive, instead of downloading all the include files separately and creating the needed directory structure.

I hope we can find a solution. If we can figure out how the Firefox add-on does this with Gitzip (see above), downloading code with all dependencies would be very easy.

tajmone commented 6 years ago

Just an idea, maybe a long shot ... but worth considering.

Using SpiderBasic to Download/Extract Resources

I think that the ideal solution would be to use SpiderBasic for this task.

First of all, it would allow us to reuse part of the existing PB code with little adaptation; second, it would offer a browser-side solution to the problem at hand, which means it won't depend on external APIs nor on an Internet connection (when used locally). Obviously, a solution independent of external APIs is preferable in terms of maintenance, since it's not subject to API updates.

SpiderBasic could handle all dependencies, recursively, and prepare a zip archive with all the required files, and even correct the include string-paths, providing all the needed resources as files ready to use in the same folder (which solves the problem of having to replicate the folder structure in order to use the resources).
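
Since the path correction is plain string handling, the same code should run under both PB and SB. A rough sketch (the procedure name, the quoting assumptions and the keyword matching are simplifications of mine, and it assumes GetFilePart() is available on the SB side or replaced by a small helper):

Procedure.s FlattenIncludePaths(source.s)
  ; Rewrites IncludeFile / XIncludeFile statements so that a relative
  ; path like "../filesystem/eol.pbi" becomes just "eol.pbi", matching
  ; a flat extraction folder.
  Protected result.s, line.s, t.s, path.s
  Protected i, q1, q2, count = CountString(source, #LF$) + 1
  For i = 1 To count
    line = StringField(source, i, #LF$)
    t = LCase(Trim(line))
    If Left(t, 11) = "includefile" Or Left(t, 12) = "xincludefile"
      q1 = FindString(line, #DQUOTE$)           ; opening quote
      q2 = FindString(line, #DQUOTE$, q1 + 1)   ; closing quote
      If q1 And q2
        path = Mid(line, q1 + 1, q2 - q1 - 1)
        line = Left(line, q1) + GetFilePart(path) + Mid(line, q2)
      EndIf
    EndIf
    result + line + #LF$
  Next
  ProcedureReturn result
EndProcedure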

This would be useful also for local use of the project, because by navigating the catalog the end user could choose the needed resource and extract it with all its dependencies, without having to change the include paths afterwards. This would be quite cool when dealing with lots of files.

Potentially, the interaction between local PB code and SB code in the browser could allow all sorts of interesting interactions; e.g. updating included files in a local project when resources are updated, and so on.

I haven't really used SpiderBasic, but from what I've seen in its documentation it supports lots of PB commands, so it should be doable. Also, once the code to deal with this is ready, maintainers won't necessarily have to own a SpiderBasic license to maintain the project, since that code won't need to be updated unless new features are required, and it's independent of PB version updates.

What do you think of this?

SicroAtGit commented 6 years ago

Using SpiderBasic to Download/Extract Resources

I also think it's a very good idea.

I have bought SpiderBasic recently. There is certainly something similar to SpiderBasic available for free, but I like the PB language, and so I have supported the team.

When I have enough time, I will see if the implementation with SpiderBasic is possible.