Open · hchandad opened this issue 7 months ago
In regard to:

> These exist originally in the old tree; they are mostly written with the old version of kraft in mind, so if they are to be imported they should be rewritten.

There is an existing issue tracking the support of different hypervisors in kraftkit.
Static media files mostly exist within the repo under the `static/assets/` folder, but the current static content is served from `public/` instead. We can either copy the files to the `public/` folder or configure the server to serve content from the old `static/` folder as well.
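A sketch of the copy option (paths taken from the repo layout described above; the demo file is made up):

```shell
# Recreate a stand-in for the legacy tree (demo.png is hypothetical):
mkdir -p static/assets/imgs
touch static/assets/imgs/demo.png

# Mirror the old static/ tree under public/, so existing
# /static/assets/... URLs keep resolving from the served directory:
mkdir -p public
cp -r static public/

ls public/static/assets/imgs/demo.png
# → public/static/assets/imgs/demo.png
```

The server-config alternative avoids duplicating files, but the copy is the smaller change.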
Example:
- loaded: `/static/assets/imgs/`
- but not loaded: `/assets/files/eurosys2021-slides.pdf`
## Overview

The old documentation was created using `hugo`. After the migration to `contentlayer`, multiple links no longer point to the right resource; I tried to collect as many of them as I could and list them here. Some have been previously mentioned in the issue tracker.

Related: #360 #372 #370 #384 #363 #341 #189 #188
The table below was generated by running the `linkchecker` tool; the CSV output was then converted to Markdown.
## URL List
## Getting the context of where a link is used

To figure out where a URL is referenced in the files, `git grep` is useful.
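For example, searching for one of the broken file names above (the docs directory `content/` and the demo page are assumptions):

```shell
# Demo setup (hypothetical page; the real docs tree is assumed to live
# under content/):
git init -q linkdemo && cd linkdemo
mkdir -p content
echo 'See /assets/files/eurosys2021-slides.pdf' > content/community.mdx
git add content

# Find where a broken URL is referenced:
git grep -n "eurosys2021-slides.pdf" -- content/
# → content/community.mdx:1:See /assets/files/eurosys2021-slides.pdf
```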
## Converting the CSV output to a Markdown table

The following Python script (`convert.py`) was used:
```python
import argparse
import csv
import sys

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-f", "--file", type=argparse.FileType())
    args = parser.parse_args()

    # Column names of linkchecker's CSV output
    Columns = (
        "urlname", "parentname", "base", "result", "warningstring",
        "infostring", "valid", "url", "line", "column", "name",
        "dltime", "size", "checktime", "cached", "level", "modified",
    )

    def skip(iterator, n):
        # linkchecker prefixes the CSV with a few comment lines
        for _ in range(n):
            next(iterator)

    if args.file:
        csvreader = csv.reader(args.file, delimiter=";")
        skip(csvreader, 4)
        for row in csvreader:
            line = {Columns[i]: value for i, value in enumerate(row)}
            try:
                print(f"|{line['urlname']}|[Parent URL]({line['parentname']})|[Real URL]({line['url']})| |")
            except KeyError:
                print(line, file=sys.stderr)
```
## Further notes

One improvement would be to get the list of changed pages from `git diff` and run the checker on those pages only; this would allow for faster run times.
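A minimal sketch of that idea (the branch name `main`, the `content/` path, and the demo commit are all assumptions):

```shell
# Demo repo with one committed docs page (names are hypothetical):
git init -q -b main diffdemo && cd diffdemo
mkdir -p content
echo 'old' > content/a.mdx
git add . && git -c user.name=demo -c user.email=demo@example.com commit -qm init

# After editing a page, list only the changed docs files --
# candidates for a targeted re-run of linkchecker:
echo 'new' > content/a.mdx
git diff --name-only main -- content/
# → content/a.mdx
```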