Raku / raku.org

Source code for https://raku.org/
Artistic License 2.0

Don't depend on abandoned Mowyw module #174

Open 2colours opened 1 year ago

2colours commented 1 year ago

The Mowyw module by Moritz Lenz is a niche, abandoned Perl 5 module, and depending on it is 1. bad PR and 2. no encouragement for a potential volunteer.

I think it can go without major changes being needed, as it is only a static site generator with minimal templating features.

It seems to me Template6 could serve as a near-immediate replacement, but Template::Mustache is also an option. I think we can only win; the exact choice isn't that important. We'll have one more reason to use Raku, and we can show this "to the world" as well.
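To give a rough idea, minimal usage of either module could look something like this (just a sketch, assuming the documented APIs; the template text and variable names are made up):

# Template6: Template Toolkit-style [% ... %] syntax
use Template6;
my $t6 = Template6.new;
$t6.add-template('greeting', 'Hello, [% name %]!');
say $t6.process('greeting', :name<Raku>);    # Hello, Raku!

# Template::Mustache: {{ ... }} syntax
use Template::Mustache;
say Template::Mustache.render('Hello, {{name}}!', { name => 'Raku' });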

tbrowder commented 1 year ago

Can you create a new one? Maybe using Cro?

2colours commented 1 year ago

@tbrowder CIAvash actually created a new site with Hugo, https://www.raku-lang.ir/en/

Since neither of us could get feedback on it, I decided to go with a much more minimalistic approach: do as little as possible, just replacing the Perl tooling (mostly Mowyw itself) with something written in Raku that I can comprehend and share with others.

I'm working with the Template6 module with this plan in mind. Once we have something that works and can be deployed, nuances like the folder structure will be easier to sort out; at least that's what I hope.

tbrowder commented 1 year ago

Sounds good. Thanks.

2colours commented 11 months ago

So, for what it's worth, I think the development phase is done. I did what I wanted, and it seems to work, more or less.

Deployment is the big question. It would be great to build a Docker image with everything required for running it locally; I'm just not experienced with Docker builds and haven't had the time to actually do it.

coke commented 11 months ago

tagging @dontlaugh

dontlaugh commented 11 months ago

If the outcome of the rewrite is a static site, it's actually quite similar to docs.raku.org

Something like

FROM docker.io/caddy
ADD compressed-html-and-css.tar /usr/share/caddy
ADD Caddyfile /etc/caddy/Caddyfile

That's the Dockerfile version of these buildah commands
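Roughly, such buildah commands would look something like this (just a sketch, not the actual ones referenced; the container variable and image name are made up):

# create a working container from the caddy image, add the files, commit it
ctr=$(buildah from docker.io/caddy)
buildah add $ctr compressed-html-and-css.tar /usr/share/caddy
buildah add $ctr Caddyfile /etc/caddy/Caddyfile
buildah commit $ctr raku-org-site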

The tar file will be automatically exploded into a tree of files. A new Caddyfile will need to be created and checked in.
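A minimal Caddyfile for serving that tree could look something like this (a sketch only; it roughly mirrors what the official caddy image serves by default):

:80 {
    root * /usr/share/caddy
    file_server
}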

Once the changes are merged in (or are they already?), tag me again and I'll get the build started.

2colours commented 11 months ago

@dontlaugh First I'm going to describe how it works now, and then how I think it could be migrated and what potential obstacles I see.

Currently, this site is running natively on some machine that has a cron job running the update.sh file you can see in this repo. That script basically polls this repo (and the features repo, which we will take out of this loop soon), and depending on the changes, it rebuilds the content in place. There is another thing it does: it fetches recent blog posts from the https://planet.raku.org/ Atom feed and puts them into a JSON file. This static JSON file isn't embedded into the generated site in any way, though (although I think it actually could be, currently); it is just served for the baked-in JS to query. And the site itself seems to be served as a Plack app, with all of its content living under online/.
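Just to illustrate what that fetching step boils down to, here is a rough Raku sketch (not the actual update.sh logic; the feed URL, the extraction and the file name are guesses):

# fetch the Atom feed and dump the most recent entry titles into a JSON file
use JSON::Fast;

my $feed = run(<curl -s https://planet.raku.org/atom.xml>, :out).out.slurp(:close);

# naive extraction of entry titles; a real XML parser would be more robust
my @titles = $feed.match(/'<entry>' .+? '<title' <-[>]>* '>' (<-[<]>+) '</title>'/, :g).map({ ~.[0] });

spurt 'recent-posts.json', to-json(@titles.head(5).Array);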

It's probably better to build and deploy based on some hook on the GitHub repo than on this ad-hoc cron polling; from what I understand, that's basically how the doc site is already managed. The fetch-recent-blog-posts script is also not something I'd greatly miss; hopefully I can migrate that to the client-side JS on my own (I'm not sure why it doesn't work like that anyway; why do we need to store this JSON?).

For me, the only mysterious thing is Plack itself. It's probably not too difficult, but I know nothing about it. Anyway, I currently see no reason why the site builder script couldn't generate a tar instead of the online/ directory, or, even simpler, just wrap things up the way it's done for the doc site (tbh, the whole incremental build thing was simple to do, but I think it has become largely irrelevant with so few pages remaining in the first place).
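For illustration, assuming the generated site ends up under online/, packing it for the Dockerfile's ADD step could be as simple as:

# pack the generated tree into the tar referenced by the Dockerfile above
tar -cf compressed-html-and-css.tar -C online .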

So, for now, I'm going to try to make the changes towards building such a container, and see how it works.

2colours commented 11 months ago

Meh. While struggling to reasonably parse a multi-megabyte, ever-growing XML feed in client-side JS, I came to understand that the reason for not doing something like that might have been that you wouldn't want visitors of the site to fetch such a large amount of data in the first place. I think it's actually planet.raku.org that should offer an API, not raku.org as it does currently - but anyway, maybe we can just ignore this part of the site. It could even just steal the data from the live site for the time being.