ElektraInitiative / libelektra

Elektra serves as a universal and secure framework to access configuration settings in a global, hierarchical key database.
https://www.libelektra.org
BSD 3-Clause "New" or "Revised" License

Website structure #1015

Closed: Namoshek closed this issue 7 years ago

Namoshek commented 7 years ago

Suggestions on the structure and design of the new website are needed, @ElektraInitiative/elektradevelopers.

Currently, there is ...

In my opinion, we definitely need some more things like ...

It would be nice to hear some opinions about what to put where!

A live demo of the website can always be found at http://namoshek.at:9000.


TODO

The following menu entry types need to be implemented:

Menu entries (root level) must be able to have sub-entries.

markus2330 commented 7 years ago

Yes, in order of priority we need:

  1. guidance through the documentation (starting from main page)
  2. news page (= ChangeLog)
  3. link to github via icon
  4. some Download/Get it/Get Started page
  5. Overview of Plugins/Bindings/Tools with links to individual Plugins/Bindings/Tools

It would be nice to hear some opinions about what to put where!

Do you plan to add other menus, too?

If you are interested: for older versions (up to b8f050484f0ec49b51ee9750bca6180de4f5da1f) the homepage was within the main repo, so you can still find the old structure there:

Before that we had a mediawiki installation only available at archive.org:

Namoshek commented 7 years ago
  1. Overview of Plugins/Bindings/Tools with links to individual Plugins/Bindings/Tools

What I personally would like to see is some collection of all packages you need to get a full installation running. Currently, you need to look through all plugin/tool readmes and collect them yourself (e.g. libyajl-dev, libxml2-dev, libaugeas-dev, ...). That's also why several plugins were not enabled on my server until yesterday (I simply forgot about them).

Do you plan to add other menus, too?

If necessary, then of course. I've also no problem with adding sub-menus (e.g. tutorials below documentation).

Another thing I need to look into is how to handle links within the documentation.

markus2330 commented 7 years ago

What I personally would like to see is some collection of all packages you need to get a full installation running.

In which way? As a list in doc/COMPILE.md? Or should we add a way to collect all deps in the README.md of plugins (similar to #1012)?

Currently, CMake should output all deps that are missing. The problem is that package names are different across distros and versions, so the info you get might or might not work.

If necessary, then of course. I've also no problem with adding sub-menus (e.g. tutorials below documentation).

What is your opinion about hover menus?

Another thing I need to look into is how to handle links within the documentation.

Yes, that is definitely important.

Namoshek commented 7 years ago

In which way? As a list in doc/COMPILE.md? Or should we add a way to collect all deps in the README.md of plugins (similar to #1012)?

I've no real preference, but collecting data automatically reduces maintenance effort.

What is your opinion about hover menus?

I have no problem with them (and that is what I was talking about btw).

omnidan commented 7 years ago

some input:

Namoshek commented 7 years ago

What do you think of using a manually maintained json file that is used to specify the content being displayed on the website? I'm thinking of something like that:

{
   "sources": {
     "github": {
         "api": "https://api.github.com/repos/ElektraInitiative/libelektra/",
         "raw": "https://raw.githubusercontent.com/repos/ElektraInitiative/libelektra/"
      },
      "website": {
         "doc": "http://doc.libelektra.org/"
      }
   },
   "sections": [
      {
         "target": "documentation",
         "content": [
            {
               "type": "section",  // can be used to build pretty menus / sub-menus
               "name": "Getting started",
               "content": [
                  {
                      "type": "document",
                      "name": "Compiling",
                      "path": "contents/doc/COMPILE.md",
                      "source": "github.api"
                  },
                  {
                      "type": "document",
                      "name": "Installing",
                      "path": "contents/doc/INSTALL.md",
                      "source": "github.raw"
                  }
               ]
            },
            {
               "type": "link",
               "name": "API Docs",
               "path": "api/0.8.18/html/",
               "source": "website.doc"
            }
         ]
      }
   ]
}

This would give us a very powerful tool to modify structure, document sources, etc. On the one hand, there is the question whether this is maintainable at all; the path in the last example, for instance, would change frequently. On the other hand, I think we need a way to describe structure and if we do that, we can also pull in more than one source (GitHub). We could also use the GitHub API approach and use full path links everywhere instead of sources (we would still need a way to differentiate between API content and raw files though).

markus2330 commented 7 years ago

I also do not like the part that it needs to be manually maintained ;) And I am afraid that it will get really long (even now: an entry above has about 256 bytes, and the docs alone are about 190 entries, i.e. ~50kB), which is not easy to maintain.

A server-side solution would not have this limitation, then we could also easily iterate over all files and use README.md as index (and for filenames). Alternatively, we could also commit a generated file (not ideal, but would allow us to use the github API as-is - and would allow us to go back in history, at least until where the file was introduced).

Then we would only have to describe the few exceptions, and not dozens of files which are displayed in the same way as they are in the folder structure. What do you think about a REST call that basically assembles the JSON as proposed by you from directory listings and README.md (and an override json for exceptions)? Btw. how many github API calls do you use? What would be the effort to reimplement these API calls in cppcms? (These are more speculative questions; generating a file sounds like the simplest way with the best feedback for someone maintaining the webpage.)

This would give us a very powerful tool to modify structure, document sources, etc.

Yes, the question is if this power is needed and how we can keep this file in sync with the directory contents. The directory contents are already duplicated in README.md. There we had dozens of issues until @KurtMi wrote the link checker. (The generator needs to do these checks too!)

On the one hand, there is the question whether this is maintainable at all; the path in the last example, for instance, would change frequently.

This path in particular would be a smaller issue: we could define versions at one place and refer to it at others. I am more concerned about (re)moved files or newly created files.

On the other hand, I think we need a way to describe structure and if we do that, we can also pull in more than one source (GitHub).

Which other sources do you mean here?

We could also use the GitHub API approach and use full path links everywhere instead of sources

This will bloat up the file even more (make it even harder to maintain). I would even suggest making it shorter by e.g. assuming "type": "document" and "source": "github.api" as defaults.

Might be irrelevant if a generator generates such a file (then we can use full URLs if it is easier for you).
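
To illustrate with the proposal above (these defaults are only a suggestion, not an agreed format), the "Compiling" entry would then shrink to:

{
   "name": "Compiling",
   "path": "contents/doc/COMPILE.md"
}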

(we would still need a way to differentiate between API content and raw files though).

That is a tricky question, especially within markdown links. See #996 for a report about this problem. I hope you find a good solution there.

Namoshek commented 7 years ago

What do you think about a REST call that basically assembles the JSON as proposed by you from directory listings and README.md (and an override json for exceptions)?

The reason I proposed it is not that I need to get the directory structure, I can easily use the GitHub API for that, but because I would like us to have a way to define the structure on the website without regard to the structure we have on GitHub. The biggest problem with listings, for example, is that they are alphabetical (if we do not use any other sort criteria). In my opinion our documentation shouldn't be structured alphabetically, but rather logically by steps you need to do until the software runs. So some sort of manual control is required anyhow. But of course we can limit this control (and effort) quite a bit by only white- or blacklisting files for example (btw. I'm currently blacklisting cmake, doxygen and other irrelevant files, so this is not a bad approach).

We could also write a minimal version of the structure by hand and let a generator make something more useful out of it (i.e. incorporate all of your suggestions with default values, etc). Most parts shouldn't change frequently anyway, but some small things might do.

Btw. how many github API calls do you use?

One to get the source tree (currently only one directory per website section, but as far as I've seen this could be used with arbitrary depth) and another one to get an individual file. That shouldn't be more than GitHub is using itself.

What would be the effort to reimplement these API calls in cppcms?

The effort in C++ is a lot higher than with JavaScript (at least a factor of 10). But what use case do you see here anyway? If it should only act as proxy, I don't see a good reason to do it, as it is an error source and would probably slow down the website quite a bit.

Which other sources do you mean here?

Well, sources that might be a thing in future. Currently, the only non-GitHub stuff I can think of are API or code docs like used in the snippet above (which are actually more links than data sources).

That is a tricky question, especially within markdown links. See #996 for a report about this problem. I hope you find a good solution there.

In theory it should be easy to resolve relative paths. The hard part is to stay in sync with the menu on the website when loading another documentation page. Theoretically there could be links to files that are not even meant to be on the website - what to do in this case? Remove the link? Allow the page load?
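
The path resolution itself could look like this minimal Node sketch (the function name is invented; it assumes we know the repo path of the document currently being rendered):

function resolveRepoLink(docRepoPath, relativeLink) {
  // e.g. docRepoPath = "src/plugins/resolver/README.md"
  const parts = docRepoPath.split('/').slice(0, -1); // start from the document's directory
  for (const segment of relativeLink.split('/')) {
    if (segment === '..') parts.pop();               // one directory up
    else if (segment !== '.' && segment !== '') parts.push(segment);
  }
  return parts.join('/');
}

// resolveRepoLink('src/plugins/resolver/README.md', '../../../doc/COMPILE.md')
// => 'doc/COMPILE.md'

The hard part is, as said, deciding what to do once the resolved path turns out not to be part of the website.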

The fanciest solution would be to use the snippet sharing function to maintain (store) the website structure. Then we would have a self-sustaining system. :smile:

markus2330 commented 7 years ago

The reason I proposed it is not that I need to get the directory structure, [...] I would like us to have a way to define the structure on the website without regard to the structure we have on GitHub.

Yes, that is why I referred to README.md. For example, in src/plugins/README.md the line - [resolver](resolver/) uses POSIX already provides the information in the form - [Title of page](link to page/) hover text. Furthermore, the links are already in a non-alphabetic order (hopefully useful).

The idea is that we use this structure also for the web page.

rather logically by steps you need to do until the software runs.

I fully agree and hope you help in giving the content a better structure!

So some sort of manual control is required anyhow.

Yes, also the structure of README.md and of the webpage might sometimes conflict. I fully agree that we need a file for overrides but it should be as maintainable as possible.

But of course we can limit this control (and effort) quite a bit by only white- or blacklisting files for example (btw. I'm currently blacklisting cmake, doxygen and other irrelevant files, so this is not a bad approach).

Whitelisting is already done implicitly by README.md, but nevertheless it is no surprise if we still need some exceptions on top of that.

We could also write a minimal version of the structure by hand and let a generator make something more useful out of it (i.e. incorporate all of your suggestions with default values, etc). Most parts shouldn't change frequently anyway, but some small things might do.

Yes, this is a good compromise. Can you write it in shell? Ideally it should also be executed on the build server to check the consistency of the folder structure, README.md and the website overrides.

One to get the source tree [...] and another one to get an individual file

Ok, but getting the source tree will not be needed anymore once we have a js with full listings of the content?

But what use case do you see here anyway?

It's only about not being dependent on the github API.

Theoretically there could be links to files that are not even meant to be on the website - what to do in this case? Remove the link? Allow the page load?

Do you mean references to other pages in the js override files? I thought you access content of github files via https://developer.github.com/v3/repos/contents/ so that only files from the repo can be accessed?

Namoshek commented 7 years ago

Yes, that is why I referred to README.md.

Now I got what you meant. I wasn't really sure what README you were talking about, now it's clear, thanks.

I like the idea, but I'm not so sure if parsing this is easy (e.g. knowing at which line the list begins)?

Can you write it in shell?

My shell skills are somewhere near zero. I don't even have an idea where or how to start...

Ok, but getting the source tree will not be needed anymore once we have a js with full listings of the content?

Yes, this will be obsolete.

It's only about not being dependent on the github API.

I guess it is easier to just use direct links to the raw content of files. This way we could easily switch to another provider if necessary.

Do you mean references to other pages in the js override files? I thought you access content of github files via https://developer.github.com/v3/repos/contents/ so that only files from the repo can be accessed?

No, I mean something different. Absolute URLs are quite easy to filter I guess, so other websites and stuff are no issue (I'd probably just leave the links there and make them target="_blank" or something like that).

I'm talking about relative paths, e.g. if we use the README.md of a plugin in the documentation (all fine so far) and in the readme there is a link to, let's say, a code file or something else that isn't part of the documentation. How should we handle such cases? Make a link to GitHub out of it? Remove the link?

markus2330 commented 7 years ago

Now I got what you meant. I wasn't really sure what README you were talking about, now it's clear, thanks. I like the idea, but I'm not so sure if parsing this is easy (e.g. knowing at which line the list begins)?

We can add restrictions and helper info in the structure.js. E.g. the structure.js tells if headlines should be used as submenus, and in the file we only use lines in the form of ^- [.*](.*). So the approach is more to change the README.md to a form that is easily usable than the other way round.

My shell skills are somewhere near zero. I don't even have an idea where or how to start...

What are our options? If we exclude shell, we still have plenty:

  1. make everything in the client: that means at least 3 requests (structure.js, the README and a file for content) and that a regex or similar is applied to the content of the README.
  2. implement it in C/C++ (too cumbersome?)
  3. implement it in a different scripting language (python, lua, ruby,..). python might be best as it is already used for a script (find-tools).
  4. cmake: language is not well suited for this task
  5. implement it with (node)js, e.g. within grunt with an extra target (no extra dependency? heavyweight).
  6. implement it with php (new dependency, heavyweight)

I guess it is easier to just use direct links to the raw content of files. This way we could easily switch to another provider if necessary.

Good idea!

I'm talking about relative paths, e.g. if we use the README.md of a plugin in the documentation (all fine so far) and in the readme there is a link to, let's say, a code file or something else that isn't part of the documentation. How should we handle such cases? Make a link to GitHub out of it? Remove the link?

If we have a generator it should tell us such cases. Then we either can change it to an absolute URL or (in most cases) extend the documentation to contain the file, too.

Namoshek commented 7 years ago

We can add restrictions and helper info in the structure.js. E.g. the structure.js tells if headlines should be used as submenus, and in the file we only use lines in the form of ^- [.*](.*). So the approach is more to change the README.md to a form that is easily usable than the other way round.

Guess that could work, yes.

What are our options? If we exclude shell, we still have plenty:

I think doing it in nodeJS / as grunt task makes total sense, as it's part of the website anyway. json is also the preferred data format of JS, so this would fit quite well. Making a custom grunt task is no problem at all.

If we have a generator it should tell us such cases. Then we either can change it to an absolute URL or (in most cases) extend the documentation to contain the file, too.

Well, I didn't want to parse content with the generator...?! That would also mean either duplicating the documentation for serving it on the website or doing parsing + transforming twice (in generator + frontend). I'd like to keep this part to the frontend.

markus2330 commented 7 years ago

I think doing it in nodeJS / as grunt task makes total sense, as it's part of the website anyway. json is also the preferred data format of JS, so this would fit quite well. Making a custom grunt task is no problem at all.

Ok. Then we need to run it on the build server because it is unlikely that everybody has a working npm installation. For the build server we only have to check if the resulting structure json equals what is checked in. And please make it also work for Debian stable's npm.

Well, I didn't want to parse content with the generator...?! That would also mean either duplicating the documentation for serving it on the website or doing parsing + transforming twice (in generator + frontend). I'd like to keep this part to the frontend.

I think we talk past each other. I mean that the grunt job that builds the structure json file. Can you please describe the full toolchain (what the grunt job and what the frontend would do) in the README.md? Then we are on the same page.

Namoshek commented 7 years ago

I tried some things and I also came across quite some issues already.

What my script currently can do is the following:

The problem I currently face is the plugin documentation. Parsing the README.md is not going to work within a dynamic context. There are simply too many factors I'd have to consider. Making a static script for it might work, but I don't like this either.

So currently I'm not sure what to do. I can easily build a list of plugin readmes dynamically from the src/plugins sub-directories, but it would not be structured (storage, resolver, ...).

I also ran into the API rate limit while trying around. The limit is set to 60 requests per hour, which could be an issue if our website contains more than 60 documents originating from github. If this is the case, we should probably copy the files and serve them with the website directly (cumbersome...). For building it is no issue, because we could use authorization on the build server, which grants up to 5000 requests per hour.
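
For the build server part, authentication is only an extra request header. A minimal Node sketch (assuming a GITHUB_TOKEN environment variable is configured there):

const https = require('https');

https.get({
  host: 'api.github.com',
  path: '/repos/ElektraInitiative/libelektra/contents/doc/COMPILE.md',
  headers: {
    'User-Agent': 'elektra-website-build',               // required by the GitHub API
    'Authorization': 'token ' + process.env.GITHUB_TOKEN // lifts the limit to 5000 requests/hour
  }
}, (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(JSON.parse(body).name));
});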

Somehow I really feel as if a manually maintained structure file would still be our best option. On the one hand it means more work maintaining it, on the other hand it gives more control over what is part of the website and it prevents accidents. Having a small generator that adds some information automatically is not the issue, but I think that a quite solid structure file has to be made by hand...

markus2330 commented 7 years ago

First off: Yes, the basic structure ("the static parts") should be defined by hand.

Take a very minimalistic source file, parse it, make some HTTP calls to github and build a more comprehensive structure file. (Sounds a lot better than it actually is...)

Why make HTTP calls during this process? Just iterating over directories and files should be enough.

The problem I currently face is the plugin documentation. Parsing the README.md is not going to work within a dynamic context. There are simply too many factors I'd have to consider.

What do you mean by dynamic context? I thought that the script parsing the README.md does so in the repo statically. The dynamic context (the browser) should then only fetch a single prepared file (without much parsing).

Making a static script for it might work, but I don't like this either.

As said it is okay to change the README.md to make your life easier.

I also ran into the API rate limit while trying around.

The solution here would be to "reimplement" the parts of the github API we use and use our own API as a fallback whenever github has troubles (e.g. shuts down its API).

Somehow I really feel as if a manually maintained structure file would still be our best option. [...] So currently I'm not sure what to do.

I think a mixture is the best option if you do not want a webpage that is completely statically generated. Let me try to explain again (I will invent some files in doc/webpage, the file names are open for discussion). There are two phases:

Phase 1: Static context

The first phase is crawling the checked-out repo and bringing an (also checked-in) file up-to-date. Let us call the script that does that assemble-webpage-structure and the file containing all information needed for "Phase 2" doc/webpage/structure.

First there are some basic definitions in doc/webpage/main, which defines the main menu, carefully crafted by hand (and static). It contains references to folders and how their entries should be added, for example: "src/plugins contains a README.md in the format - [](), use the entries in this order and use subsections for subfolders". There are some features like this:

  1. use all sub-folders as menu entries (e.g. src/tools, src/bindings)
  2. use all files of a folder as menu entries (e.g. doc/help)
  3. use README.md in ^[]() format, e.g. src/libs/README.md
  4. use README.md in ^- []() format, where sections define subfolders, e.g. src/plugins/README.md

The result of phase 1 is a single file doc/webpage/structure describing the whole structure of the web page and any other metadata you might need in the dynamic context.

After the script assemble-webpage-structure has run, the updated doc/webpage/structure needs to be committed.

Phase 2: Webpage (Dynamic context)

The webpage fetches the doc/webpage/structure which contains everything precompiled in a way so that you can easily show the main page and know the structure of the whole webpage.

The goal of doc/webpage/main (and others) is to be short and maintainable; the goal of doc/webpage/structure is to be an ideal fit for your needs in the dynamic context. Does that make sense?
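
To make this concrete with a purely invented example, doc/webpage/main could contain entries like:

{
   "menu": [
      { "name": "Plugins", "dir": "src/plugins", "readme-format": "sections-with-entries" },
      { "name": "Man Pages", "dir": "doc/help", "use": "all-files" }
   ]
}

assemble-webpage-structure would then expand each entry into the full list of documents (names, paths, order) and write the result to doc/webpage/structure.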

Namoshek commented 7 years ago

We are talking about the same structure and goals. You are describing exactly what my plan is. The only mistake I made was to assume I need github during assembly of the structure file, which I obviously don't; thank you for the hint (it doesn't make a huge difference for the rest though).


However, I've some issues with your four points, because they are not as trivial as they sound at first (at least in my opinion):

1) use all sub-folders as menu entries (e.g. src/tools, src/bindings)

What are the target files being displayed for these menu points? The README.md? I think defining something like src/tools/**/README.md as path would be a very nice solution. An alternative would be to define a path + a set of files that are used as fallbacks, e.g.

{
   "name": "Tools",
   "path": "src/tools",
   "type": "listdirs",
   "target_file": ["README.md", "readme.md", "README", "readme", 404] // not sure if 404 would work
}

2) use all files of a folder as menu entries (e.g. doc/help)

This is the most straightforward use case and I've already implemented it in the assembler. Here the interesting part is how to transform the file name into a name that can be used for the menu, e.g. name.replace('-', ' ').firstCapitalize() (won't always make sense).

4) use README.md in ^- []() format, where sections define subfolders, e.g. src/plugins/README.md

This is the fun part, because if I just used what you described here, I could also just read out all directories of the src/plugins dir and would have the same result (except for some description for each plugin that I don't need for the menu...). What would be a lot more interesting is to use the structure defined by the README.md for the menu, e.g. to make some sort of categories for storage, resolver, etc. But here is the problem: parsing this file is not trivial. I think I would need at least three infos for that: Starting point for the parsing (e.g. line with # Plugins #), format of categories and format of plugin entries. But if I implement this, I doubt I can reuse the script for other directories (like src/lib), because it seems very special.

3) use README.md in ^[]() format, e.g. src/libs/README.md

Seems easier compared to the previous point, but why not just look for directories in the lib path?

After the script assemble-webpage-structure has run, the updated doc/webpage/structure needs to be committed.

If everyone can build the file for himself, why commit the result?


Another point I'm not entirely sure about is whether we should localize entries of this structure file (i.e. use i18n keys) or if we stay with one language (easier and doesn't require updates of the language files when changing the structure).

For the whole structure-file-approach to work out, I'd need to go the same way as with the configuration of the frontend. That means the structure file would have to be processed into a constant of the frontend, which is as easy as running something like grunt preprocess or simply grunt if I define a good default task. I don't think this should be a problem though, as it is completely independent from the structure.in and structure.out files.


Maybe we also need to discuss the following points:

markus2330 commented 7 years ago

However, I've some issues with your four points, because they are not as trivial as they sound at first (at least in my opinion):

I know that parsing can be a non-trivial problem ;) Thus my repeated suggestion that you rewrite the README.md in a way that the relevant information can be extracted with a trivial regular expression. We then enforce the README.md to conform to the regular expression (and contain all subdirs/files), as it is done now with the plugin's readme.

What are the target files being displayed for these menu points? The README.md? I think defining something like src/tools/**/README.md as path would be a very nice solution.

Yes, exactly: Only some path specifier and which format this README.md conforms to. (It is unlikely that we can rewrite every README to the same format).

An alternative would be to define a path + a set of files that are used as fallbacks, e.g.

Yes, if there is no README.md we either write one (e.g. necessary for doc/tutorials) or go for simple white+blacklisting (e.g. doc/help).

This is the most straightforward use case and I've already implemented it in the assembler. Here the interesting part is how to transform the file name into a name that can be used for the menu, e.g. name.replace('-', ' ').firstCapitalize() (won't always make sense).

Please make it declarative (without any string-formatting code embedded). We want the same style everywhere, so there is no use in repeating such code.
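
For example (rule names invented here), instead of embedded code the entry could carry declarative rules:

{
   "name": "Man Pages",
   "dir": "doc/help",
   "title-rules": ["dashes-to-spaces", "capitalize-first-word"]
}

so that the generator or frontend implements each rule exactly once.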

This is the fun part, because if I just used what you described here, I could also just read out all directories of the src/plugins dir and would have the same result (except for some description for each plugin that I don't need for the menu...).

And also except for a different order.

What would be a lot more interesting is to use the structure defined by the README.md for the menu, e.g. to make some sort of categories for storage, resolver, etc.

Now we are talking ;) Exactly this is what needs some clever generalizations.

But here is the problem: parsing this file is not trivial.

The parsing should be trivial: go over the file line by line and consider only lines which are headers (^###?) or entries (^- []()). If some parts of the file do not conform to the schema: change the file and do not complicate your parser!

I think I would need at least three infos for that: Starting point for the parsing (e.g. line with # Plugins #), format of categories and format of plugin entries. But if I implement this, I doubt I can reuse the script for other directories (like src/lib), because it seems very special.

Yes, that is a good start, but I think you can do even better. Instead of using start/end points you can define that the only relevant section is the level-1 header Plugins. (So the beginning of any other section after Plugins would terminate the parser.)
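
A minimal node sketch of that parser (all names invented; it assumes the README is already split into lines):

const entryRe = /^- \[([^\]]+)\]\(([^)]+)\)\s*(.*)$/;

function parsePluginsReadme(lines) {
  const sections = [];
  let inPlugins = false;
  let current = null;
  for (const line of lines) {
    if (/^# /.test(line)) {                 // level-1 header
      if (inPlugins) break;                 // any following top-level section terminates the parser
      inPlugins = /^# Plugins/.test(line);  // only the Plugins section is relevant
    } else if (inPlugins && /^## /.test(line)) {
      current = { category: line.replace(/^## /, '').replace(/ #*$/, ''), entries: [] };
      sections.push(current);
    } else if (inPlugins && current) {
      const match = entryRe.exec(line);     // only ^- []() lines are considered
      if (match) current.entries.push({ name: match[1], path: match[2], brief: match[3] });
    }
  }
  return sections;
}

Everything that does not match is skipped, so the README stays free-form outside the entry lines.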

Seems easier compared to the previous point, but why not just look for directories in the lib path?

To get the specified order and nicer section names.

If everyone can build the file for himself, why commit the result?

I would very much prefer if we do not need to commit it. I thought it is necessary so that you can fetch the file in different versions, but ..

That means the structure file would have to be processed into a constant of the frontend

.. might be the better option after all. Too much dynamic loading might cause errors nobody understands.

Another point I'm not entirely sure about is whether we should localize entries of this structure file (i.e. use i18n keys) or if we stay with one language (easier and doesn't require updates of the language files when changing the structure).

Let us discuss this in the next meeting. As said I would focus on improving the content; but for one language only.

Are we going to use the GitHub API / raw files (raw endpoint has no limit, I think) or do we copy served website content to an appropriate directory?

Raw files with backup sounds nice. Do they support versions?

Does our structure.in only allow us to define the globally displayed menu or also the side-menu of sections (the menu that shows the different doc files currently)? What I mean is whether the side-menu can contain only dynamically created points, e.g. listdirs, or if it can also contain defined points like static links?

(I am not sure if I understand you correctly.) A README.md with some external link should be shown in the menu if it conforms to the specified format. And in the long term I think we should have a README.md everywhere, it gives a nice overview. But we can discuss this, too.

Namoshek commented 7 years ago

Please make it declarative (without any string-formatting code embedded). We want the same style everywhere, so there is no use in repeating such code.

That was pseudocode to demonstrate what I'm talking about (seemed easier to understand to me than actual words). Don't worry, I wouldn't want to use code in the structure file.

The parsing should be trivial: go over the file line by line and consider only lines which are headers (^###?) or entries (^- []()). If some parts of the file do not conform to the schema: change the file and do not complicate your parser!

I'll give this a shot after dinner, sounds promising. One thing we definitely have to change is the line breaks within the plugin descriptions (see here).

I would very much prefer if we do not need to commit it. I thought it is necessary so that you can fetch the file in different versions, but ..

To be honest, I would like to drop the idea of doc versions... After I learned that the github API can't give enough info when fetching release details to know which commit the release depends on, I don't think there is a solid approach to implement this. The added value is also not incredibly high...

Raw files with backup sounds nice. Do they support versions?

I would focus on one thing, not doing the work twice. Accessing raw files on github is basically just as simple as accessing the same files in the local repo. The only difference would be that the latter would need us to copy the served docu into the public directory of the web server (which is trivial, because I know exactly which files we need, thanks to the structure.out). This would also ensure that our structure file is always perfectly up-to-date with the docu on the website.

Btw. if the assembling is done from the installed frontend path, how do I know where the repo with the docu is located? Should this be configurable?

(I am not sure if I understand you correctly.) A README.md with some external link should be shown in the menu if it conforms to the specified format. And in the long term I think we should have a README.md everywhere, it gives a nice overview. But we can discuss this, too.

I think you didn't get me right. We currently have two website menus in action:

The structure file should be used to define the content of the first (global) menu and sub-menus of it (mouseover-menus²). But think about the following: We have a section called "Documentation" and we want the first three menu points (2nd, vertical menu) to be a link to the github repo, a link to the doxygen API html docs and something more. Below these points, we want all documentation docs from doc. So we would basically mix static links with dynamic content: necessary or not? (A sketch of what this could look like follows the example menu below.)

To clarify it a bit more, take this example docu page. Let's imagine we can generate all menu points, except for the Prologue section, from a README.md (plugins, e.g.). But now we want to add the Prologue section, because we think it might be necessary/useful. Is this a use case and if so, how do we do it?

² Example menu structure (sub-points are mouseover):

- Home
- Documentation
   |- Getting started
   |- Tutorials
   |- Libraries
   |- Plugins
   |- Tools
   |- Decisions
- Snippet Converter
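
A hypothetical structure-file entry mixing both kinds of points (keys invented for illustration):

{
   "target": "documentation",
   "content": [
      { "type": "link", "name": "GitHub", "url": "https://github.com/ElektraInitiative/libelektra" },
      { "type": "link", "name": "API Docs", "url": "http://doc.libelektra.org/api/" },
      { "type": "listfiles", "name": "Manual", "dir": "doc/help" }
   ]
}
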
markus2330 commented 7 years ago

One thing we definitely have to change is the line breaks within the plugin descriptions (see here).

An easy fix here is to have a "brief description", which is by definition one line long, and the rest in the next lines (ignored by your parser).
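
For example (an invented entry), only the first line would be picked up:

- [resolver](resolver/) uses POSIX APIs to resolve filenames
  Further details in the following lines, which the parser simply skips.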

To be honest, I would like to drop the idea of doc versions... After I learned that the github API can't give enough info when fetching release details to know which commit the release depends on, I don't think there is a solid approach to implement this. The added value is also not incredibly high...

Ok, so only-master it is.

Btw. if the assembling is done from the installed frontend path, how do I know where the repo with the docu is located? Should this be configurable?

What about build directory==frontend path?

The structure file should be used to define the content of the first (global) menu and sub-menus of it (mouseover-menus²). But think about the following: We have a section called "Documentation" and we want the first three menu points (2nd, vertical menu) to be a link to the github repo, a link to the doxygen API html docs and something more. Below these points, we want all documentation docs from doc. So we would basically mix static links with dynamic content: necessary or not?

To clarify it a bit more, take this example docu page. Let's imagine we can generate all menu points, except for the Prologue section, from a README.md (plugins, e.g.). But now we want to add the Prologue section, because we think it might be necessary/useful. Is this a use case and if so, how do we do it?

On the left side menu, e.g. where it says "Application Integration", "Cascading", ... on http://namoshek.at:9000/#/tutorials, it would of course be nice to also add links to the API docu and the build server. If there are some restrictions, e.g. that links are not possible within every section, it should be okay. Do you have an alternative proposal where to add such links?

So an enhanced example menu structure:

- Home
- Documentation [end user/administrator related]
   |- Getting started (?)
   |- Tutorials
   |- Plugins
   |- Tools
   |- Man Pages
- Development [for people developing with Elektra]
   |- Getting started (CODING,...)
   |- Examples
   |- Tutorials
   |- Bindings
   |- Libraries
   |- Decisions
   |- Build server (external link)
   |- API Docu (external link)
   |- Github (external link)
- Snippet Converter

But obviously it is more important to have the structure easily changeable; it is impossible to get it right the first time.

Namoshek commented 7 years ago

What about build directory==frontend path?

Not sure if I can follow you. Currently during frontend installation, the whole src/tools/rest-frontend directory is copied to /usr/local/lib/elektra/tool_exec. But if I run the assembler script in this path, I'll have a hard time finding the documentation files. Where I build the structure file is irrelevant, in the end it will be part of the constants file in the frontend. But I need to know where the original repo is located (can I look that up through kdb somehow?).

Do you have an alternative proposal where to add such links?

Just put them into the main menu (maybe in the hovermenu somewhere). Probably the same way laravel is doing it, if we have more such links.

markus2330 commented 7 years ago

Not sure if I can follow you. Currently during frontend installation, the whole src/tools/rest-frontend directory is copied to /usr/local/lib/elektra/tool_exec. But if I run the assembler script in this path, I'll have a hard time finding the documentation files. Where I build the structure file is irrelevant, in the end it will be part of the constants file in the frontend.

In UNIX one would not expect that the "assembler" can directly work in /usr/local/lib/elektra/tool_exec. Instead you would again type "make install" in the build directory.

My idea to avoid the "make install" would be to directly use files that are already present in the build directory. If they are not present there, but only copied with "make install" then the idea obviously won't work.

But I need to know where the original repo is located (can I look that up through kdb somehow?).

We could define a path where the build directory can be found.

But the cleanest approach would be to use the build server to build and deploy the web page. And the build server will happily do it from scratch every time ;)

Just put them into the main menu (maybe in the hovermenu somewhere). Probably the same way laravel is doing it, if we have more such links.

Yes, sounds reasonable.

Namoshek commented 7 years ago

In UNIX one would not expect that the "assembler" can directly work in /usr/local/lib/elektra/tool_exec. Instead you would again type "make install" in the build directory.

It definitely makes sense what you say. In theory I think it might be possible to do so, but I'm not sure if this is good in practice. npm install will install loads of dependencies from the internet, same as bower install (implicitly called by npm install) will do. I guess copying the dependencies causes no trouble, but I don't know for sure. If we would do that during the cmake build, we could also build the website structure in the build dir and deploy it with the normal installation. But for the configuration of the frontend, which is a separate json file, we still need to run a small "build" task of grunt to update the application.js whenever the config changes (there isn't really a way around that; doing configuration as part of the cmake build doesn't seem good practice). So the build is running in the exec-dir anyway (at least part of it).

But the cleanest approach would be to use the build server to build and deploy the web page. And the build server will happily do it from scratch every time ;)

Even if we have the assembler run in the build directory, how can we make sure that it has access to the docs? I think to gather information, it is not only easier, but also necessary to operate on the source tree.

During development I'm way too lazy to always run make install when I make changes to the app, so I run the frontend directly in the source tree. So for now I'll look for a solution that works in this environment. Maybe it is possible to give an argument to the grunt task building the structure file, then I could place the target file in the build directory (to not pollute the source dirs). But yeah, JS is not as easy as C here. :smile:

Namoshek commented 7 years ago

I pushed a draft for the generator in d12dd3d9365de957f95346dfb209e3704290f12e. For the moment I also checked in the generated structure file structure.json, which is generated from the structure.json.in. It is only a small example with every menu point type at least one time.

Is this about what you expected?

Btw. I'll develop the Home page independent from the structure file, as displaying news and the changelog works somewhat different and we might also use a custom design for the landing page...

markus2330 commented 7 years ago

It definitely makes sense what you say. In theory I think it might be possible to do so, but I'm not sure if this is good in practice. npm install will install loads of dependencies from the internet, same as bower install (implicitly called by npm install) will do. I guess copying the dependencies causes no trouble, but I don't know for sure.

The advantage would be more consistency with the rest of the build system and maybe even some sharing with the npm downloads from @omnidan?

But I have to admit that I could not find a project that does an integration this way. What I would like to avoid are processes with too many steps (which would need a tutorial). But it seems to be hard to avoid in this case.

Even if we have the assembler run in the build directory, how can we make sure that it has access to the docs? I think to gather information, it is not only easier, but also necessary to operate on the source tree.

From the build directory you have access to the source. If everything else fails you can look into CMakeCache.txt (value Elektra_SOURCE_DIR). But there are better methods available, e.g. use configure_file and let CMake inject the variables you need.

During development I'm way too lazy to always run make install when I make changes to the app, so I run the frontend directly in the source tree. So for now I'll look for a solution that works in this environment.

The idea "build directory==frontend path" was all about getting it easier, but still not pollute the source. You can relatively easy copy everything from source to build directory during cmake stage, see file copy, but admittedly the CMake phase is already quite long, so you might not be happy with it.

Maybe it is possible to give an argument to the grunt task building the structure file, then I could place the target file in the build directory (to not pollute the source dirs). But yeah, JS is not as easy as C here. :smile:

Adding arguments to processes is easy on CMake's side, too.

Namoshek commented 7 years ago

The advantage would be more consistency with the rest of the build system and maybe even some sharing with the npm downloads from @omnidan?

That doesn't work, dependencies are project / package dependent. I don't know if NPM can cache dependencies globally; this would increase speed, but nothing more.

What I would like to avoid are processes with too many steps (which would need a tutorial).

That shouldn't be the problem. We can run a script during the installation phase of cmake that takes care of everything (already done). For individual configuration, some very small additional effort is necessary though.

markus2330 commented 7 years ago

That shouldn't be the problem. We can run a script during the installation phase of cmake that takes care of everything (already done).

Sounds good! Is this already part of a PR?

For individual configuration, some very small additional effort is necessary though.

Maybe you can even pick up some configuration from kdb here?

Namoshek commented 7 years ago

Sounds good! Is this already part of a PR?

Yes, see the package.json of the frontend and the CMakeLists.txt in #1014. CMake calls npm install, which in turn calls an additional script postinstall. This will download all required dependencies (first of npm, then of bower), as well as run grunt full, which builds everything we need (css files, website structure, config files, final app.js with minification, etc.). So the frontend is actually ready to go after make install, if no configuration parameters have to be changed (e.g. backend URL, which is localhost by default).
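
Simplified, the relevant part of the package.json looks roughly like this (a sketch, not the literal file):

{
   "scripts": {
      "postinstall": "bower install && grunt full"
   }
}

npm runs the postinstall script automatically after npm install, so CMake only has to trigger npm install.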

Maybe you can even pick up some configuration from kdb here?

In general, it is the backend URL that needs to be changed (at least for most installations). I doubt we can know this URL during installation already, as it also depends on where the frontend is consumed from. The frontend might work well if consumed from localhost, but from another computer it will not find the backend.

Namoshek commented 7 years ago

I pushed a new version that contains all basic functionality for the structure file in 40a437895a8057e2280a666d5cf3af48a4c4535a. I also updated the structure file to contain the structure you suggested above. You can get a first impression using my installation.

The only section that does not work is the libraries menu point, because I don't know yet what info to display there. Libraries currently have no README, so yeah.

The plugin section probably looks the best at the moment, out of the box, without any real design or format changes.

markus2330 commented 7 years ago

Thank you, well done!

I wrote down some small things I noticed: #1030

Namoshek commented 7 years ago

Aaaaaand I still don't really have an idea what to write on the main page. Or should I just provide some placeholders to be filled (within the translation file)?

omnidan commented 7 years ago

That doesn't work, dependencies are project / package dependent. I don't know if NPM can cache dependencies globally; this would increase speed, but nothing more.

I think yarn, which is a re-implementation of the npm client, does this to speed up the installation process.

markus2330 commented 7 years ago

Should news/changelog be part of the main page?

Yes, at least a headline for the latest releases.

Will we keep one file (doc/NEWS.md) for news or will we split it up?

If it is easier for you we can split up. But then we would need a script that does the splitting-up process.

sorting by file date

We could sort either by file name (prefix it with release date) or a date given within the file.

Should news also be read from the local repo or should it always be read from github (makes them available faster but we introduce a dependency (although only a very small one, could even be only part of the structure file))?

Ideally you should always fetch from github the same way. Then we only have to change the URL at one place and easily migrate from github to something else if needed.

I still don't really have an idea what to write on the main page.

Be creative and surprise us with an example ;)

From previous discussion it looks like it will be something like:

Namoshek commented 7 years ago

If it is easier for you we can split up. But then we would need a script that does the splitting-up process.

Why a script? I think splitting the current NEWS.md by hand shouldn't be an issue and for future news, we just create new files?

We could sort either by file name (prefix it with release date) or a date given within the file.

Filename in this case; it is easier than parsing file content.
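
With a date prefix, sorting is a plain string sort, e.g. (assuming a YYYY-MM-DD_release.md naming scheme):

const fs = require('fs');

// Lexicographic order equals chronological order with this naming scheme.
const newsFiles = fs.readdirSync('doc/news').sort().reverse(); // newest first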

Ideally you should always fetch from github the same way. Then we only have to change the URL at one place and easily migrate from github to something else if needed.

If we fetch the news from github, my current functions won't work, so a split of the files would be basically useless. But what is the issue with generating the news from the local repo? If we push the news to github, they should be on the build server and the build server could run the website-generator once every hour too (takes not even 5 seconds, so that should be no issue at all).

markus2330 commented 7 years ago

Why a script? I think splitting the current NEWS.md by hand shouldn't be an issue

Okay, if you can do it, that would be great. The older news have different formats, so some manual work is needed anyway.

for future news, we just create new files?

We can adapt scripts/generate-news-entry to create correctly named files from a template. I will do it during the first release with the new format.

Filename in this case; it is easier than parsing file content.

I am afraid you need to parse content anyway (for headline, guid,...)

If we push the news to github, they should be on the build server and the build server could run the website-generator once every hour too (takes not even 5 seconds, so that should be no issue at all).

Yes, we can run the website-generator on the build server. But it would be good if we can get rid of the current news generation so that everything homepage related is in one place.

Namoshek commented 7 years ago

The older news have different formats, so some manual work is needed anyway. I am afraid you need to parse content anyway (for headline, guid,...)

Ok, then it would be good to know how you imagine the news should look, because I would just use the h1 headline as news name and display everything else as news body (speaking of one news entry per file).

Yes, we can run the website-generator on the build server. But it would be good if we can get rid of the current news generation so that everything homepage related is in one place.

What is the current news generation? What does it do? I only know the NEWS.md, nothing else.

markus2330 commented 7 years ago

Ok, then it would be good to know how you imagine the news should look

I am okay with the current status, but also with a separate file per release.

What is the current news generation? What does it do?

http://community.markus-raab.org:8080/job/elektra-doc executes an awk script that parses NEWS.md and generates the rss/html pages in http://doc.libelektra.org/news/

Namoshek commented 7 years ago

I was more talking about this sentence of yours

I am afraid you need to parse content anyway (for headline, guid,...)

because I don't know why this is relevant. For me the guid is simply part of the news body, but nothing special. Using the first line as news name should not be a big problem.

Maybe I'll have time to work on the news this afternoon, then I'll figure out which way I should go.

Btw. is there really a need to move the rss-script to the website if it works well and we do not (really) change the structure of current news (only split into files)? I don't think it would be much work to rewrite the script and run the news posts through the markdown parser, I'm just curious.

markus2330 commented 7 years ago

[headline, guid] because I don't know why this is relevant

It is needed in the RSS feed.

Btw. is there really a need to move the rss-script to the website if it works well and we do not (really) change the structure of current news (only split into files)?

The awk script is ugly and it's not good to have out-of-tree dependencies. I would also like to avoid having multiple deployments for different parts of the webpage.

Namoshek commented 7 years ago

Is there something new I should add to the website structure today? And shall we keep this ticket open for future discussions and todos, like the build server ticket (#160)?

markus2330 commented 7 years ago

I think we can close the issue. #160 is only useful because people interested in the build server are already subscribed there.

The basic structure seems to work now, but we need a way to see if all pages are properly linked from and to other pages. Would it be easy for your script to output some statistics? (The count of links from and to every page.)

Namoshek commented 7 years ago

I think that is non-trivial. The problem is that the generating script which could be used for that only normalizes links, but does not actually link them. Linking happens in the frontend itself in the markdown parser.

markus2330 commented 7 years ago

Ok. Maybe some external link checker will do this job.