ioccc-src / temp-test-ioccc

Temporary test IOCCC web site that will go away
Creative Commons Attribution Share Alike 4.0 International

Enhancement: Perform the Great Fork Merge #2239

Open lcn2 opened 3 months ago

lcn2 commented 3 months ago

The Great Fork Merge

The Great Fork Merge will occur when the multi-thousand commits by which the temp-test-ioccc repo is ahead of the Official IOCCC winner repo are brought back to the main Official www.ioccc.org web site.

TODOs

In order to perform the Great Fork Merge, the following tasks need to be completed:

NOTE: This may require a rewrite of the bin/ioccc-status.sh tool as well as a change to the format of the status.json file.

Port the bin tools to run under the RHEL 9 version of Linux. With this port, the IOCCC judges should be able to use these tools on a wide enough variety of systems for their purposes.

See also comment 1993767060.

See comment 9192931.

The exception, of course, is the initial text down to the XXX that indicates this is a test repo.

Move most of the remarks for this TODO into the comment for that news issue.

Replace this todo item with "Complete issue XXX" once the new issue is created.

One possible idea is to adopt some sort of archival news page as suggested by comment-2158965619. If that is done, some Wayback Machine digging and/or repo history digging should be done to recover some historical news that has been "lost".

We suggest that a new directory archive/news/ be created so that old news from, say 2019 could go into archive/news/2019.news.md from which archive/news/2019.news.html would be built by bin tools. The file archive/news/README.md would hold an inventory/table of contents linking to the individual archived news year pages and the bin tools could use that README to form archive/news/index.html.

The bottom of the top level news.md would always hold some sort of "for older IOCCC news, see the IOCCC news archive" that would link to the archive/news/index.html archived news page.

Consider if we should or shouldn't thin out news.md and retain high level items. However, also consider comment-2158965619 as a reason not to thin out the news.

See also 2003 website archive as per comment-2180597422.

See comment-2189364148.

See comment 919283.

See comment-2198635072.

This TODO was changed from referring to a "spoiler" into a "de-obfuscated" educational emphasis. In that regard we should NOT use terms like "spoiler" in the FAQ as well as NOT using "spoiler" in the entry's README.md file.

At most, README.md should suggest that the reader might wish to study the prog.c code first before going on to review the de-obfuscated "alt" code.

See also comment-2198807715.

After completing the FAQ entry on how to handle de-obfuscated code, consider, if needed, revising entries with existing de-obfuscated code AND, if needed, fixing cases where the entry's README.md makes reference to something being a spoiler.

Update the FAQ as needed to reflect the current ideas of the registration process, indicating that the registration process is in beta and could change.

Update the FAQ as needed to reflect the current ideas of the submission process, indicating that the submission process is in beta and could change.

In addition to the useful check for typos, wording, and broken links: review for suitability for going live on the main web site after the Great Fork Merge. Update faq.md if and as needed.

Add, modify and/or remove things from the remaining TODOs if/as needed.

In a comment in this issue #2239, announce that general pull requests for the temp-test-ioccc repo will not be accepted and that we are beginning work on the Near Final TODOs.

See comment-2172105426 and see comment-2174364721.

See comment-2197371351.

Getting ready for the Near Final TODOs

NOTE: Once the above TODOs have been completed, the following last minute TODO actions must be completed in relatively short order (preferably in the same calendar day) before the Great Fork Merge takes place.

Use this to look for any trailing whitespace in markdown files:

find . -name '*.md' -print0 | xargs -0 grep -E -l '[[:space:]][[:space:]]*$'
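As a quick sanity check, the pipeline above can be exercised on a scratch directory first (a sketch; the file names below are made up for illustration):

```shell
# Demo of the trailing-whitespace check on a scratch directory.
# Only dirty.md (which ends its line in spaces) should be listed.
tmpdir=$(mktemp -d)
printf 'clean line\n' > "$tmpdir/clean.md"
printf 'trailing spaces   \n' > "$tmpdir/dirty.md"

# Same pipeline as above, limited to the scratch directory:
find "$tmpdir" -name '*.md' -print0 |
    xargs -0 grep -E -l '[[:space:]][[:space:]]*$'

rm -rf "$tmpdir"
```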

Use:

find . -name '*.md' ! -name markdown.md -print0 | xargs -0 pcregrep -M '^``.*\n[^|`\n \t]'

After making sure that the temp-test-ioccc repo is up to date and related GitHub pages have been rendered, use the ✓ on the navbar to check all generated HTML pages.

Fix any errors, warnings and info messages reported. Update the temp-test-ioccc repo and recheck those pages.

In FAQ 1.2, look for the <!-- XXX - Fill in the date when Great Fork Merge happens --> and update the date for that section as needed.

Change default URLs and REPOs to refer to the https://www.ioccc.org web site and winner repo.

Change references as needed given their context.

This includes markdown files, HTML files, var.mk, Makefiles, text files, etc.

Try:

make www
find . \( -name .git -o -name NOTES \) -prune -o -type f -print0 | xargs -0 grep -l -F temp-test-ioccc

The only exceptions should be historic references found in news.md, news.html, faq.md and faq.html.
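The find/grep part of the check above can also be tried on a scratch tree first (a sketch; the file names are made up, and note that the .git directory is pruned just as in the real command):

```shell
# Demo: find files mentioning the old repo name, skipping .git and NOTES.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/.git" "$tmpdir/docs"
echo 'see the temp-test-ioccc repo' > "$tmpdir/docs/a.md"
echo 'no old references here' > "$tmpdir/docs/b.md"
echo 'temp-test-ioccc' > "$tmpdir/.git/config"   # pruned, so never reported

cd "$tmpdir"
# Same pipeline as above; only ./docs/a.md should be listed.
find . \( -name .git -o -name NOTES \) -prune -o -type f -print0 |
    xargs -0 grep -l -F temp-test-ioccc
cd /
rm -rf "$tmpdir"
```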

Edit news.md as needed.

Perform make www and commit any changes.

Near Final TODOs

NOTE: Once the above TODOs have been completed, the following last minute TODO actions must be completed in relatively short order (preferably in the same calendar day) before the Great Fork Merge takes place.

If not, reset all TODOs under the Near Final TODOs section and fix the cause.


Remove the lines between:

<!-- XXX - This entire section goes away during the final stages of the Great Fork Merge -->

and:

<!-- XXX - remove down to here in the final stages of the Great Fork Merge -->

Verify that:

git status

reports:

On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean
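If this verification is ever scripted, the same check can be done without parsing the human-readable output. A sketch (the helper name clean_tree is made up) using git status --porcelain, which prints one line per change and nothing at all for a clean tree:

```shell
# Hypothetical helper: succeed only when the named working tree is clean.
clean_tree() {
    [ -z "$(git -C "${1:-.}" status --porcelain)" ]
}

# Demo on a scratch repo:
tmpdir=$(mktemp -d)
git init -q "$tmpdir"
git -C "$tmpdir" config user.email demo@example.com
git -C "$tmpdir" config user.name demo
echo hello > "$tmpdir/f.txt"
git -C "$tmpdir" add f.txt
git -C "$tmpdir" commit -qm 'initial commit'

clean_tree "$tmpdir" && echo "clean after commit"
echo extra > "$tmpdir/g.txt"
clean_tree "$tmpdir" || echo "dirty after adding an untracked file"

rm -rf "$tmpdir"
```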

Fetch any last minute changes:

cd docroot/temp-test-ioccc

git fetch && git rebase

git clean -f

git status

The last command should report:

On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean

Perform the 12 steps as noted in comment 2089758002.

Commit the change.

make update
git commit -m'final pre-Great Fork Merge'

git push

Fetch any last minute changes:

cd docroot/temp-test-ioccc

git fetch && git rebase

git clean -f

git status

The last command should report:

On branch master
Your branch is up to date with 'origin/master'.

nothing to commit, working tree clean

If not, reset all TODOs under the Near Final TODOs section and fix the cause.


IMPORTANT: Test these ideas first in a clone of this cloned repo!

Using the method outlined in GitHub notes on Removing sensitive data from a repository, remove all *.tar.bz2 files (from all levels including the former top level ioccc.tar.bz2 file) from the few previous commits, in order to reduce the size of this repo somewhat.
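Before rewriting history, it can help to confirm that the *.tar.bz2 blobs really are the bulk of the repo. A sketch using a common git recipe (this only inspects history, it removes nothing; the demo builds a scratch repo, but in the real clone you would run just the final pipeline):

```shell
# Demo on a scratch repo: list the largest blobs anywhere in history,
# largest first.  (The fake.tar.bz2 file here is made up for the demo.)
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q .
git config user.email demo@example.com
git config user.name demo
printf 'not really a tarball, just some bytes\n' > fake.tar.bz2
git add fake.tar.bz2
git commit -qm 'add a fake tarball'

# The actual recipe: every object in history, sizes attached, blobs only.
git rev-list --objects --all |
    git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
    awk '$1 == "blob" { print $3, $4 }' |
    sort -n -r |
    head -20

cd /
rm -rf "$tmpdir"
```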

See also comment-2159159871.

IMPORTANT: Test these ideas first in a clone of this cloned repo!

See stack overflow replies.

See also How to clean up the git repo and reduce its disk size.

Be sure the mkiocccentry repo is up to date and perform a make install in that repo.

This is so that later use of make www will not abort due to missing compressed tarball files.

Click Create Pull Request for the Official IOCCC winner repo.

Inspect the pull request for the Official IOCCC winner repo.

Accept and complete the pull request.

Post Great Fork Merge TODOs

Once the Great Fork Merge occurs and the official IOCCC winner repo and related Official www.ioccc.org web site have been updated, these TODOs need to be performed on the official IOCCC winner repo:

Fix any problems found on the Official www.ioccc.org web site by editing the official IOCCC winner repo as needed.

From temp-test-ioccc repo settings click on the ((Archive this repository)) button.

This will Freeze the temp-test-ioccc repo but leave it in place for historic purposes.

lcn2 commented 3 months ago

Also how did you find out that sed -i '' is not portable? If so what versions does it not work? I ask of course because if there can be a workaround in sgit(1) that would be good though I have tested it with macOS sed and also GNU sed, so...

Thanks.

Linux wants:

sed -i -e 'sed stuff' foo.txt

macOS wants:

sed -i '' -e 'sed stuff' foo.txt

xexyl commented 3 months ago

Also how did you find out that sed -i '' is not portable? If so what versions does it not work? I ask of course because if there can be a workaround in sgit(1) that would be good though I have tested it with macOS sed and also GNU sed, so... Thanks.

Linux wants:

sed -i -e 'sed stuff' foo.txt

macOS wants:

sed -i '' -e 'sed stuff' foo.txt

With Linux you can also specify an extension so that shouldn't matter.

$ echo foo > foo
$ sed -i'' -e 's/foo/&d/g' foo
$ cat foo 
food

(Actually I believe I did that explicitly in sgit(1) but I added an option to specify an extension.)
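For scripts that must run on both systems, one known workaround (a sketch; not necessarily what sgit(1) does) is to always pass a non-empty backup suffix attached directly to -i, which both GNU sed and BSD/macOS sed accept, and then delete the backup file:

```shell
# Portable in-place edit: "-i.bak" (suffix attached, no space) is accepted
# by both GNU sed and BSD/macOS sed; remove the backup file afterwards.
tmpdir=$(mktemp -d)
echo foo > "$tmpdir/demo.txt"

sed -i.bak -e 's/foo/food/' "$tmpdir/demo.txt" && rm -f "$tmpdir/demo.txt.bak"

cat "$tmpdir/demo.txt"    # prints: food
rm -rf "$tmpdir"
```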

lcn2 commented 3 months ago

Comments, Suggestions and Questions welcome

Any Comments, Suggestions and Questions you may have about this comment are welcome.

I have one that will be important before I start making changes. Important in case you have to make changes to the manifest as I don't want to have to figure out what I added and how to merge a Numbers spreadsheet.

Previously there was a three part commit for new files or file name changes. How do you wish commits to be processed (which commits, what stages etc.) for fixing the manifest in text and content of the spreadsheet? That I deem is important before I can really start working on it. That being said I can hopefully look at the index.html pages and determine if there are problems like you suggested (or others I notice) that can be fixed at the same time as the manifest.

We recommend that we hold off on editing the manifest.numbers spreadsheet to make it easier for you, @xexyl . So you have the "feel free to edit token" for the manifest.numbers spreadsheet and related friends until the TODOs in the "Initial TODO list" of issue #1933 are finished.

Once the TODOs in the "initial TODO list" of issue #1933 are finished, we will be happy to take back the "feel free to edit token" and complete the rest of issue #1933.

We recommend you use macOS to process stuff after spreadsheet updates, not Linux.

There are porting issues to Linux for tools under tmp that are NOT worth fixing because the tools under tmp are going away sometime after issue #1933 closes (there is a TODO in this issue to that effect).

Only process spreadsheet manifest stuff under macOS please.

Feel free to "batch" changes to the manifest.numbers spreadsheet if that will help you. There is NO NEED to update things for every single change.

When it does come time to push out a "batch" of changes to the manifest.numbers spreadsheet, we suggest you:

((sort the spreadsheet in numbers))
((export the CSV file in numbers))
((save the spreadsheet in numbers))

and:

cd tmp
./fix_csv.sh
./gen_path_list.found.sh

then check that all files are accounted for in the manifest and that there are no extra files not in the manifest:

./check_path_list.sh

If all is well, then update the .entry.json files:

./run_all.sh ./gen_entry_json.sh

Once that is done, update the web pages:

cd ..
make www

.. or make quick_www if you wish and trust your file modification dates.

At that point, you will be able to look at the impact on the generated HTML files on your local macOS machine.

Look on your local macOS machine, at the type of changes you have made to web pages by using the configured Apple pre-installed apache. We suggest you review the recently updated comment 1893176110 for how to configure the Apple pre-installed apache to run on your macOS AND for how to open HTML files to look at them locally.

If all seems well, then form a pull request.

In the pull request comments, you can write general and high level remarks about the kind of changes that were made. There is no need to go into explicit detail on the comments.

lcn2 commented 3 months ago

BTW: a tip for not adding spaces at the end of markdown files in vim. Add in your .vimrc the following:

autocmd! BufWritePre *.md %s/\s\+$//e

That will remove trailing spaces at the ends of lines in *.md files when you save the file. You can of course specify other globs as well.

Thanks!

lcn2 commented 3 months ago

QUESTION for make www, make quick_www and related rules

Since it takes quite some time to run each time even if one file only is modified, is it a problem that you can think of to do:

make YEARS="2020 2019" quick_www

for example ?

Of course it'll still run the other rules that don't use the YEARS variable but it would speed things up a fair bit. On that note is it a problem if the YYYY/Makefiles have those specific rules (the ones that act on the YYYY)? I can look at adding this if you think it's worth it. I mean it might not be useful later on but right now it might be.

No, this is not possible; the Makefile rules were not designed to pick and choose things like years. Moreover, due to the way the site is integrated, such a feature is not desired in the usual cases.

... Would that be possible? ...

Please do not add such complexity of specifying YEARS. It is an integrated site, and the tools are designed to support that. The use case for the bin tools is to manage the entire site.

The tools were written for other IOCCC judges to be able to see the impact of someone's pull request before pushing out changes. They are designed to see the potential impact of a pull request before some IOCCC judge commits it to the official site.

We do understand that for testing purposes, one might wish to regenerate a given entry's index.html file. We did not write the bin tools as a single massive script. You can pick and choose, for testing purposes, to process a given entry, and, say, up the verbosity level:

bin/readme2index.sh -v 3 2020/ferguson1

You can even see the "about to run" command at level 3 for things like the bin/md2html.sh command line if you want to go that deep and manually execute those low level commands.

Similar stuff can be done for the year level index.html files:

bin/gen-year-index.sh -v 3 2020

etc.

UPDATE 0

When we were doing early testing of building entry index.html files, for example, we would:

bin/readme2index.sh -v 3 2020/ferguson1
open 2020/ferguson1/index.html

over and over again until things "failed to suck" as they say. πŸ€“

lcn2 commented 3 months ago

With commit e4a7687 we have been successful in testing bin tools that form compressed tarballs for entries and IOCCC years and untar the same compressed tarballs. This worked well under both macOS (macOS 14.4.1) and Linux (RHEL9.3).

Does this address the problem you asked about yesterday? If not: I think it'd be good if we could have an ioccc.tar.bz2 file that is the entire contest history but if this is not possible with say, LFS, then it might be good to have both the YYYY.tar.bz2 and also maybe a decade. I think it'd be good to have a YYYY.tar.bz2 for each year but to have more than one year is ultimately a good thing if possible.

Please understand that we WILL continue to release compressed tarballs of individual entries AND we WILL continue to release IOCCC year level compressed tarballs.

What is in question is tarballs that cover multiple years.

OPTION 0 - use Git-LFS

  1. Use the Git Large File Storage (Git-LFS) to manage the top level ioccc.tar.bz2 compressed tarball

and unless the Git-LFS is free of undesirable side effects, we will not have a single large top level ioccc.tar.bz2 compressed tarball.

Such a test would need to be done using the large data file on some other repo to see what happens. It would require looking at how others use it, understanding the side effects they encounter, etc.

We did see enough FAQs about Git-LFS to suggest that this option is not a simple change to test.

Given time pressures, if this option were to be considered, it may be best to wait until after the Great Fork Merge AND to release the updated IOCCC without a single large compressed top level tarball. The testing of Git-LFS could happen in this repo AFTER the web site update, and changes ported over only after testing was complete and the result was satisfactory.

OPTION 1 - release sets of years

Alternatives to (0) above are:

  1. Release multi-year compressed tarball sets: group by decade or group by sets of every N contests into several top level compressed tarballs

We break up the years into several sets:

These sets of years are smaller and don't run into GitHub size limitations. And if a single entry needs modifying, we don't have to replace the single large top level ioccc.tar.bz2 compressed tarball with a new single large top level ioccc.tar.bz2 compressed tarball.

We can also break up years into sets of, say, 5 IOCCC contests each.

OPTION 2 - do not release multi-year sets

  1. Only release year level and entry level compressed tarballs (no single large top level ioccc.tar.bz2 compressed tarball).

lcn2 commented 3 months ago

We believe we have addressed all of the current questions that still need answering at this time. If we've missed something or something else needs to be clarified, please ask again.

xexyl commented 3 months ago

QUESTION for make www, make quick_www and related rules

Since it takes quite some time to run each time even if one file only is modified, is it a problem that you can think of to do:

make YEARS="2020 2019" quick_www

for example ? Of course it'll still run the other rules that don't use the YEARS variable but it would speed things up a fair bit. On that note is it a problem if the YYYY/Makefiles have those specific rules (the ones that act on the YYYY)? I can look at adding this if you think it's worth it. I mean it might not be useful later on but right now it might be.

No, this is not possible; the Makefile rules were not designed to pick and choose things like years. Moreover, due to the way the site is integrated, such a feature is not desired in the usual cases.

I noticed that too. It's not a big deal but thanks for confirming.

... Would that be possible? ...

Please do not add such complexity of specifying YEARS. It is an integrated site, and the tools are designed to support that. The use case for the bin tools is to manage the entire site.

Oh I am not going to. No worries about that.

The tools were written for other IOCCC judges to be able to see the impact of someone's pull request before pushing out changes. They are designed to see the potential impact of a pull request before some IOCCC judge commits it to the official site.

That makes sense.

We do understand that for testing purposes, one might wish to regenerate a given entry's index.html file. We did not write the bin tools as a single massive script. You can pick and choose, for testing purposes, to process a given entry, and, say, up the verbosity level:

bin/readme2index.sh -v 3 2020/ferguson1

Indeed. But I'm more likely to not even go that far. Probably just run the rule. It will take a bit longer but that's okay.

You can even see the "about to run" command at level 3 for things like the bin/md2html.sh command line if you want to go that deep and manually execute those low level commands.

Yes I've noticed that but thanks for the reminder.

Similar stuff can be done for the year level index.html files:

bin/gen-year-index.sh -v 3 2020

etc.

True.

UPDATE 0

When we were doing early testing of building entry index.html files, for example, we would:

bin/readme2index.sh -v 3 2020/ferguson1
open 2020/ferguson1/index.html

over and over again until things "failed to suck" as they say. πŸ€“

Oh I've done similar to that in the past too but before there was a tool to do it. I just used discount or a fork of discount and then indeed 'open'. I did that when working on the README.md files quite a few times!

xexyl commented 3 months ago

With commit e4a7687 we have been successful in testing bin tools that form compressed tarballs for entries and IOCCC years and untar the same compressed tarballs. This worked well under both macOS (macOS 14.4.1) and Linux (RHEL9.3).

Does this address the problem you asked about yesterday? If not: I think it'd be good if we could have an ioccc.tar.bz2 file that is the entire contest history but if this is not possible with say, LFS, then it might be good to have both the YYYY.tar.bz2 and also maybe a decade. I think it'd be good to have a YYYY.tar.bz2 for each year but to have more than one year is ultimately a good thing if possible.

Please understand that we WILL continue to release compressed tarballs of individual entries AND we WILL continue to release IOCCC year level compressed tarballs.

What is in question is tarballs that cover multiple years.

OPTION 0 - use Git-LFS

  1. Use the Git Large File Storage (LFS) to manage the top level ioccc.tar.bz2 compressed tarball

and unless the Git-LFS is free of undesirable side effects, we will not have a single large top level ioccc.tar.bz2 compressed tarball.

Such a test would need to be done using the large data file on some other repo to see what happens. It would require looking at how others use it, understanding the side effects they encounter, etc.

We did see enough FAQs about Git-LFS to suggest that this option is not a simple change to test.

Given time pressures, if this option were to be considered, it may be best to wait until after the Great Fork Merge AND to release the updated IOCCC without a single large compressed top level tarball. The testing of Git-LFS could happen in this repo AFTER the web site update, and changes ported over only after testing was complete and the result was satisfactory.

That sounds like a good compromise to me. It doesn't have to be done now. That seems even more important if it's not easy to test the LFS feature.

OPTION 1 - release sets of years

Alternatives to (0) above are:

  1. Release multi-year compressed tarball sets: group by decade or group by sets of every N contests into several top level compressed tarballs

I like the idea of decades but I guess if ever they become too big or too unwieldy five years could work too. I say 5 only because it can be evenly divided in half.

We break up the years into several sets:

  • ioccc.1984-1989.tar.bz2
  • ioccc.1990-1998.tar.bz2
  • ioccc.2000-2006.tar.bz2
  • ioccc.2011-2019.tar.bz2
  • ioccc.2020-2029.tar.bz2

These sets of years are smaller and don't run into GitHub size limitations. And if a single entry needs modifying, we don't have to replace the single large top level ioccc.tar.bz2 compressed tarball with a new single large top level ioccc.tar.bz2 compressed tarball.

True.

We can also break up years into sets of, say, 5 IOCCC contests each.

I really did not see this when I noted that above!

OPTION 2 - do not release multi-year sets

  1. Only release year level and entry level compressed tarballs (no single large top level ioccc.tar.bz2 compressed tarball).

I think it's good to have both, personally, but that's me. Maybe part of that is because in the past you had the full history as a single tarball. That's how I always downloaded the entries in fact.

xexyl commented 3 months ago

Comments, Suggestions and Questions welcome

Any Comments, Suggestions and Questions you may have about this comment are welcome.

I have one that will be important before I start making changes. Important in case you have to make changes to the manifest as I don't want to have to figure out what I added and how to merge a Numbers spreadsheet. Previously there was a three part commit for new files or file name changes. How do you wish commits to be processed (which commits, what stages etc.) for fixing the manifest in text and content of the spreadsheet? That I deem is important before I can really start working on it. That being said I can hopefully look at the index.html pages and determine if there are problems like you suggested (or others I notice) that can be fixed at the same time as the manifest.

We recommend that we hold off on editing the manifest.numbers spreadsheet to make it easier for you, @xexyl . So you have the "feel free to edit token" for the manifest.numbers spreadsheet and related friends until the TODOs in the "Initial TODO list" of issue #1933 are finished.

That sounds great! Thanks.

Once the TODOs in the "initial TODO list" of issue #1933 are finished, we will be happy to take back the "feel free to edit token" and complete the rest of issue #1933.

Sure.

We recommend you use macOS to process stuff after spreadsheet updates, not Linux.

I only work with GitHub with macOS but thanks for the note!

There are porting issues to Linux for tools under tmp that are NOT worth fixing because the tools under tmp are going away sometime after issue #1933 closes (there is a TODO in this issue to that effect).

That makes sense.

Only process spreadsheet manifest stuff under macOS please.

Of course. I'm not even sure if it's possible in linux. Certainly not the spreadsheet file itself.

Feel free to "batch" changes to the manifest.numbers spreadsheet if that will help you. There is NO NEED to update things for every single change.

Thanks! That's helpful indeed (it's what I had hoped to do).

When it does come time to push out a "batch" of changes to the manifest.numbers spreadsheet, we suggest you:

((sort the spreadsheet in numbers))
((export the CSV file in numbers))
((save the spreadsheet in numbers))

I do indeed sort and export and save.

and:

cd tmp
./fix_csv.sh
./gen_path_list.found.sh

Okay. I have to look at the script I wrote to see what it does too. Might not have to do the commands manually.

then check that all files are accounted for in the manifest and that there are no extra files not in the manifest:

./check_path_list.sh

If all is well, then update the .entry.json files:

./run_all.sh ./gen_entry_json.sh

Once that is done, update the web pages:

cd ..
make www

.. or make quick_www if you wish and trust your file modification dates.

I think I do that yes though with the script I wrote.

At that point, you will be able to look at the impact on the generated HTML files on your local macOS machine.

Look on your local macOS machine, at the type of changes you have made to web pages by using the configured Apple pre-installed apache. We suggest you review the recently updated comment 1893176110 for how to configure the Apple pre-installed apache to run on your macOS AND for how to open HTML files to look at them locally.

That might or might not be a problem but I will worry about that at the time. Thanks for the link again.

If all seems well, then form a pull request.

In the pull request comments, you can write general and high level remarks about the kind of changes that were made. There is no need to go into explicit detail on the comments.

Thanks.

Now as for the commits though. I'm not sure what commit messages to use for each phase. Before it was because of file name changes or new files / deleted files but now it's not that. Does it matter that much?

xexyl commented 3 months ago

We believe we have addressed all of the current questions that still need answering at this time. If we've missed something or something else needs to be clarified, please ask again.

I'll look more tomorrow but I think you answered everything yes. Off to do other things for the rest of the day.

lcn2 commented 3 months ago

Now as for the commits though. I'm not sure what commit messages to use for each phase. Before it was because of file name changes or new files / deleted files but now it's not that. Does it matter that much?

It might not matter that much.

lcn2 commented 3 months ago

At that point, you will be able to look at the impact on the generated HTML files on your local macOS machine.

Look on your local macOS machine, at the type of changes you have made to web pages by using the configured Apple pre-installed apache. We suggest you review the recently updated https://github.com/ioccc-src/temp-test-ioccc/issues/4#issuecomment-1893176110 for how to configure the Apple pre-installed apache to run on your macOS AND for how to open HTML files to look at them locally.

That might or might not be a problem but I will worry about that at the time. Thanks for the link again.

You REALLY do want to look at HTML files on your local machine, via a web browser as served from a local web server.

xexyl commented 3 months ago

At that point, you will be able to look at the impact on the generated HTML files on your local macOS machine. Look on your local macOS machine, at the type of changes you have made to web pages by using the configured Apple pre-installed apache. We suggest you review the recently updated #4 (comment) for how to configure the Apple pre-installed apache to run on your macOS AND for how to open HTML files to look at them locally.

That might or might not be a problem but I will worry about that at the time. Thanks for the link again.

You REALLY do want to look at HTML files on your local machine, via a web browser as served from a local web server.

I understand it. But I'll have to worry about that when I am looking at the html files. I have another way that might work too that would be more natural for me (less of a burden and also quicker to set up). Since I have bind configured with views and since I already have a subdomain I might do it on the server. Or I might set up a new vhost and do it that way. But I'll consider the options (testing of course) when the time comes.

lcn2 commented 3 months ago

So if we batch multiple IOCCC years together: do we span sets of

xexyl commented 3 months ago

So if we batch multiple IOCCC years together: do we span sets of

  • 5 years
  • 10 years
  • N years (for some other value of N)
  • 5 IOCCC contests
  • 10 IOCCC contests
  • N IOCCC contests (for some other value of N)

For me ten years seems most natural but if that's too many why not five? Ah I see .. so contests instead of years. Interesting thought. I still think ten years in the sense of decades might be better. So the first tarball would be not ten years but all the others would (though the last one of course wouldn't be ten at first). Thus 1984-1989 and the next ones 1990-1999, 2000-2009 and so on.

lcn2 commented 3 months ago

So if we batch multiple IOCCC years together: do we span sets of

  • 5 years
  • 10 years
  • N years (for some other value of N)
  • 5 IOCCC contests
  • 10 IOCCC contests
  • N IOCCC contests (for some other value of N)

For me ten years seems most natural but if that's too many why not five? Ah I see .. so contests instead of years. Interesting thought. I still think ten years in the sense of decades might be better. So the first tarball would be not ten years but all the others would (though the last one of course wouldn't be ten at first). Thus 1984-1989 and the next ones 1990-1999, 2000-2009 and so on.

We would have tarballs that look something like this:

They will be placed somewhere under the archive directory: location TBD.

We will now go about building the bin apps to tar and untar those 10-year tarballs, removing the top level ioccc.tar.bz2, and editing the web pages such as years.html accordingly.

These actions won't impact your work on the manifest / your work on issue #1933.

UPDATE 0

There is a problem with the decade approach. If we sum the sizes of the year level "YYYY/YYYY.tar.bz2" compressed tarballs as a reasonable approximation for what the "combined" total would look like, the "ioccc.2011-2019.tar.bz2" would be >52 Mbytes. GitHub begins to gripe / warn about files > 50 Mbytes. Worse still, as we are considering expanding entry sizes to a few megabytes, this "decade" problem is only going to get worse over time.

Even if things stay under the "50 Mbyte GitHub warning" limit, editing entries and reforming the compressed tarballs would run into "big blob file editing many times" pain.

A 5-year span would keep under the 50 Mbyte limit so long as the average contribution of a year is < 10 Mbytes. In the last IOCCC decade, we had 3 IOCCC years that were 17.9 Mbytes, 17.4 Mbytes, and 10.0 Mbytes. An "ioccc.2011-2015.tar.bz2" would be 38.0 Mbytes.

The trend of allowing for entries with larger data sizes continues. Recall that "MAX_SUM_FILELEN" as defined in soup/limit_ioccc.h is "27651*1024", or about 27 Mbytes. Not that we expect all winners will be pushing the maximum size. The compressed tarball limit, "MAX_TARBALL_LEN", is < 3.81 Mbytes.

Now 15 winners for a given year remains a common peak (there is no real limit on the number of winners for a given year; it just turns out that 6 years had 15 winners and 7 years had 14 winners). If the average size of a year is < 10 Mbytes and we go with limiting to 5 years or 5 contests, then to avoid crossing the 50 Mbyte limit, the average entry has to be < 2/3 Mbytes.

Adjusting the limits to force an entry to be that small might be too constraining on creativity.

So: The decade grouping approach will not work. The 5 year or 5 contest set starts to get close to the problem area and would likely exceed the "50 Mbyte GitHub warning" limit.

AGAIN: While the past will work for 5 years or 5 contests, future trends put such a grouping of 5 into problem territory.

What about groups of 4 years or 4 contests?

With 27 IOCCC years, a 4 year or 4 contest grouping would require 7 multi-year tarballs. The average for 4 years would need to be < 12.5 Mbytes. With 15 winners per year, that's an average of < 5/6 Mbytes per winner.

What about groups of 3 years or 3 contests? Well now with 9 multi-year tarballs to cover 27 contests, the point of having multi-year tarballs gets a bit silly.
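The grouping arithmetic above can be checked with a quick back-of-envelope script. This is just a sketch of the calculation, treating the 50 Mbyte GitHub warning threshold and the 15-winner peak as the inputs:

```shell
# Back-of-envelope check (sizes in KiB): with a ~50 Mbyte GitHub warning
# limit, how small must the average year (and the average entry, at a
# peak of 15 winners) be for each candidate grouping size?
LIMIT_KIB=$((50 * 1024))
WINNERS_PER_YEAR=15

for YEARS in 10 5 4 3; do
    PER_YEAR_KIB=$((LIMIT_KIB / YEARS))
    PER_ENTRY_KIB=$((PER_YEAR_KIB / WINNERS_PER_YEAR))
    echo "$YEARS-year group: <= $PER_YEAR_KIB KiB/year, <= $PER_ENTRY_KIB KiB/entry"
done
```

For a 5-year group this gives about 682 KiB per entry (roughly the 2/3 Mbyte figure above), and for a 4-year group 12800 KiB per year (the 12.5 Mbyte figure above).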

UPDATE 1

So it seems the only reasonable way to have "multiple IOCCC years in one compressed tarball" is to go for an option using Git Large File Storage (Git-LFS), OR to not have any multi-year compressed tarballs.

We will ponder this some more.

lcn2 commented 3 months ago

There might have been a problem with bin/gen-year-index.sh -v 5 2020, where the debug level was >= 5. However, that turned out to be a debugging "mis-feature" and not a bug. With commit 420c7ae7ace0ae8af4e6947ab2a50ac6fc46be64 those debugging "mis-features" have been resolved in favor of "less sucky" / "less misleading" debug messages. :-)

xexyl commented 3 months ago

So if we batch multiple IOCCC years together: do we span sets of

  • 5 years
  • 10 years
  • N years (for some other value of N)
  • 5 IOCCC contests
  • 10 IOCCC contests
  • N IOCCC contests (for some other value of N)

For me ten years seems most natural, but if that's too many, why not five? Ah, I see .. so contests instead of years. Interesting thought. I still think ten years, in the sense of decades, might be better. So the first tarball would not span ten years, but all the others would (though the last one of course wouldn't be ten at first). Thus 1984-1989, and the next ones 1990-1999, 2000-2009 and so on.

We would have tarballs that look something like this:

  • ioccc.1984-1989.tar.bz2
  • ioccc.1990-1998.tar.bz2
  • ioccc.2000-2006.tar.bz2
  • ioccc.2011-2019.tar.bz2
  • ioccc.2020-2029.tar.bz2

They will be placed somewhere under the archive directory: location TBD.

We will now go about building the bin apps to tar and untar those 10 year tarballs, removing the top level ioccc.tar.bz2, and editing the web pages such as years.html accordingly.

These actions won't impact your work on the manifest / your work on issue #1933.

UPDATE 0

There is a problem with the decade approach. If we sum the sizes of the year level "YYYY/YYYY.tar.bz2" compressed tarballs as a reasonable approximation for what the "combined" total would look like, the "ioccc.2011-2019.tar.bz2" would be >52 Mbytes. GitHub begins to gripe / warn about files > 50 Mbytes. Worse still, as we are considering expanding entry sizes to a few megabytes, this "decade" problem is only going to get worse over time.

Even if things stay under the "50 Mbyte GitHub warning" limit, editing entries and reforming the compressed tarballs would run into "big blob file editing many times" pain.

A 5-year span would keep under the 50 Mbyte limit so long as the average contribution of a year is < 10 Mbytes. In the last IOCCC decade, we had 3 IOCCC years that were 17.9 Mbytes, 17.4 Mbytes, and 10.0 Mbytes. An "ioccc.2011-2015.tar.bz2" would be 38.0 Mbytes.

The trend of allowing for entries with larger data sizes continues. Recall that "MAX_SUM_FILELEN" as defined in soup/limit_ioccc.h is "27651*1024", or about 27 Mbytes. Not that we expect all winners will be pushing the maximum size. The compressed tarball limit, "MAX_TARBALL_LEN", is < 3.81 Mbytes.

Now 15 winners for a given year remains a common peak (there is no real limit on the number of winners for a given year; it just turns out that 6 years had 15 winners and 7 years had 14 winners). If the average size of a year is < 10 Mbytes and we go with limiting to 5 years or 5 contests, then to avoid crossing the 50 Mbyte limit, the average entry has to be < 2/3 Mbytes.

Adjusting the limits to force an entry to be that small might be too constraining on creativity.

So: The decade grouping approach will not work. The 5 year or 5 contest set starts to get close to the problem area and would likely exceed the "50 Mbyte GitHub warning" limit.

AGAIN: While the past will work for 5 years or 5 contests, future trends put such a grouping of 5 into problem territory.

What about groups of 4 years or 4 contests?

With 27 IOCCC years, a 4 year or 4 contest grouping would require 7 multi-year tarballs. The average for 4 years would need to be < 12.5 Mbytes. With 15 winners per year, that's an average of < 5/6 Mbytes per winner.

What about groups of 3 years or 3 contests? Well now with 9 multi-year tarballs to cover 27 contests, the point of having multi-year tarballs gets a bit silly.

UPDATE 1

So it seems the only reasonable way to have "multiple IOCCC years in one compressed tarball" is to go for an option using Git Large File Storage (Git-LFS), OR to not have any multi-year compressed tarballs.

We will ponder this some more.

Well, I like the idea of allowing larger entry sizes in the future, though of course us contestants don't really know the size until the rules have been finalised. But if you are indeed going to increase the size some, it might be that 4 years is good. I agree that with 3 it would be kind of silly. But then I think we have to ask: if 5 is pushing the limits, is 4 not also silly? Maybe it would be better to have single year tarballs only, and then anyone who wants the full contest can clone the repo?

I admit I like the idea of tarballs with more than one year but that's a 'used to it' reason. I don't need that now as I have the repo cloned. So now I don't know if it's necessary or not. I wonder.

xexyl commented 3 months ago

There might have been a problem with bin/gen-year-index.sh -v 5 2020, where the debug level was >= 5. However, that turned out to be a debugging "mis-feature" and not a bug. With commit 420c7ae those debugging "mis-features" have been resolved in favor of "less sucky" / "less misleading" debug messages. :-)

Thanks for the note!

xexyl commented 3 months ago

At that point, you will be able to look at the impact on the generated HTML files on your local macOS machine. Look, on your local macOS machine, at the type of changes you have made to web pages by using the configured Apple pre-installed apache. We suggest you review the recently updated #4 (comment) for how to configure the Apple pre-installed apache to run on your macOS AND for how to open HTML files to look at them locally.

That might or might not be a problem but I will worry about that at the time. Thanks for the link again.

You REALLY do want to look at HTML files on your local machine, via a web browser as served from a local web server.

I understand it. But I'll have to worry about that when I am looking at the html files. I have another way that might work too that would be more natural for me (less of a burden and also quicker to set up). Since I have bind configured with views and since I already have a subdomain I might do it on the server. Or I might set up a new vhost and do it that way. But I'll consider the options (testing of course) when the time comes.

I set it up so that my web server will take care of the problem, so now I can look at the rendered html files before committing the changes. All it'll take is, when files change, to scp or rsync them to the server (which tool depending on how many files) and if all is okay I can then commit. If not all okay, I will make fixes and try again.

This was very simple to do and much easier and quicker than setting it up in macOS, as apache is already configured there with a subdomain/vhost anyway (just in a subdirectory of it).
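The scp/rsync preview workflow described above might look something like the sketch below. The remote host and docroot are made-up placeholders, not anything from this thread:

```shell
# Hypothetical preview sync: push only the rendered web assets to a
# server vhost, review them in a browser, and commit only if they look good.
rsync -av --delete \
    --include='*/' --include='*.html' --include='*.css' --include='*.png' \
    --exclude='*' \
    ./ user@www.example.com:/var/www/ioccc-preview/
# then browse the preview vhost and, if all is okay, git commit
```

The `--include`/`--exclude` ordering matters with rsync: the trailing `--exclude='*'` drops everything not matched by an earlier include.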

lcn2 commented 3 months ago

With commit 1979ea02809e232dc1f7781aa5dc88bfe5eb3cd1 we have selected option 2.

In particular:

     Removed top level `ioccc.tar.bz2` compressed tarball.  NOTE: Sorry
    (tm Canada πŸ‡¨πŸ‡¦) to remove the top level `ioccc.tar.bz2` file.

We will look into Git Large File Storage (Git-LFS). If Git-LFS works well and does not have undesirable side effects, we will consider restoring ioccc.tar.bz2.

A number of other things were fixed and improved as well. See the commit comment for other details.
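If Git-LFS were adopted, the setup is small. A minimal sketch, assuming git-lfs is installed and that tarballs are the paths to track (that choice is still open in this thread):

```shell
# Minimal Git-LFS setup sketch: store compressed tarballs as LFS objects
# so the main repo history only carries small pointer files.
git lfs install                  # enable the LFS hooks for this user
git lfs track '*.tar.bz2'        # track compressed tarballs via LFS
git add .gitattributes           # the tracking rule lives in .gitattributes
git commit -m 'track compressed tarballs with Git-LFS'
```

One side effect worth checking: clones then need git-lfs installed to fetch the real tarball contents rather than the pointer files.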

xexyl commented 3 months ago

With commit 1979ea0 we have selected option 2.

In particular:

     Removed top level `ioccc.tar.bz2` compressed tarball.  NOTE: Sorry
    (tm Canada πŸ‡¨πŸ‡¦) to remove the top level `ioccc.tar.bz2` file.

We will look into Git Large File Storage (Git-LFS). If Git-LFS works well and does not have undesirable side effects, we will consider restoring ioccc.tar.bz2.

A number of other things were fixed and improved as well. See the commit comment for other details.

Thanks for the update! I'll look in more detail tomorrow. Enjoy the rest of your day!

lcn2 commented 2 months ago

We believe we have addressed all of the current questions that still need answering at this time. If we've missed something or something else needs to be clarified, please ask again.

lcn2 commented 1 month ago

Current activity

We have been working "heads down" as they say, on the IOCCC submit server and the IOCCC registration process.

FYI: This is the reason why we have been away from this repo for a bit.

IOCCC submit server progress report

The IOCCC submit server is a critical chunk of the IOCCC infrastructure: one that will accept compressed tarballs produced by mkiocccentry(1), checked by txzchk(1), and verified by the chkentry(1) tool (see the mkiocccentry repo). The IOCCC submit server is now on the critical path of holding the next IOCCC.

Without an IOCCC submit server, there will be no IOCCC. So it is rather important. πŸ˜‰

The IOCCC submit server is being co-developed with an engineer who lives in Switzerland. We have a good concept design that we believe will work well. Nevertheless we have been "heads down" on this project.

The IOCCC submit server is currently in a private repo at the moment, in part because it is in a very pre-alpha testing phase. We do plan to make the repo public in perhaps a month or so.

The IOCCC submit server development environment

The IOCCC submit server has been written in Python and runs under a Docker container.

We are trying, once again, to be "at peace with the python programming environment", so pardon us if we do not express our opinion about Python as a programming language. <<<-- go ahead, click that link πŸ‡

Docker is another interesting provisioning system. Think of it like "chroot(2) done much better". We can recommend docker for a wide variety of solutions. That solution set seems to include what the IOCCC will need in a submit server. For example, the docker container is allowing us to test this Linux-based solution under macOS.
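As a rough illustration of the kind of setup described (a Python app run inside a Docker container), a skeleton might look like the following. Every name here, from the image tag to the file names and port, is an illustrative assumption and not from the private submit server repo:

```dockerfile
# Hypothetical minimal container for a Python web app (names are made up)
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /app
EXPOSE 8080
CMD ["python", "app.py"]
```

Building and running such a container (`docker build -t submit-test . && docker run -p 8080:8080 submit-test`) is also how a Linux-based service can be tested under macOS, as noted above.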

Once the IOCCC submit server is in beta test, we will make the repo public and invite comments.

IOCCC registration process progress report

We are also working on the IOCCC registration process, a smaller bit of IOCCC infrastructure that will tie into the IOCCC submit server. Folks will use the IOCCC registration process to be given access to the IOCCC submit server where, using the mkiocccentry tool, they will upload their submissions to the IOCCC for judging.

As the IOCCC registration process and the IOCCC submit server are strongly tied together, we are working on both at the same time. Nevertheless we need to get farther along on the IOCCC submit server before we can make reasonable progress on the IOCCC registration process.

IOCCC registration process development environment

We are considering using the TopicBox service as a key part of the IOCCC registration process infrastructure.

We believe the work needed to finish the IOCCC registration process is not nearly as large as the work needed to finish the IOCCC submit server .. we believe/hope. We need to push the IOCCC submit server farther along before we can say that for certain.

The tools needed for a complete IOCCC registration process implementation are likely some private shell scripts that will be used by the IOCCC judges in conjunction with the TopicBox service.

IOCCC MOCK replacement

We are considering NOT holding the IOCCC MOCK as we previously envisioned. Instead we plan to hold an IOCCC submit server public test and an IOCCC registration process public test. And as part of this test, folks will use the mkiocccentry tool to upload simple "Hello, world!"-like entries whose content will NOT be judged.

We are going to use this approach in order to speed up the date when the 28th IOCCC will eventually start.

Great Fork Merge date

Once the IOCCC submit server and the IOCCC registration process are design stable and in alpha testing phase, we plan to document their process on the temp-test-ioccc web site.

Once that documentation is ready, the ((top priority)) issues that are required to be closed before we can begin work on the Near Final TODOs (i.e., issue #858, issue #1933, and issue #2006) will be rapidly brought to a close (somewhat regardless of their state).

When this will happen will depend on how fast the IOCCC submit server and the IOCCC registration process can move into their beta testing phase. This will be done in order to NOT push the start date of the 28th IOCCC way far into the future.

Stay tuned for these exciting developments.

P.S.

We DO APPRECIATE the efforts being made on the ((top priority)) issue #858 and issue #1933 and issue #2006 and DO UNDERSTAND that those helping us with these issues have their own matters to attend to.

We plan to give a bit of warning (but not too much, maybe 2 weeks) of when we will force closed, issue #858 and issue #1933 and issue #2006. That won't happen until the IOCCC submit server and the IOCCC registration process are able to move into their testing phase such that we can document how to use them on the web site. So there is more time left to close those issues .. hopefully not a lot more time left as we want to get the 28th IOCCC started .. sometime sooner than later πŸ—“οΈβ€ΌοΈ

lcn2 commented 1 month ago

Current activity

As mentioned in comment-2095164921 we continue to work on the ioccc-reg tool and the ioccc-submit tool.

Tasks related to the ioccc-reg tool and the ioccc-submit tool TODO include (but may not be limited to):

Relevance to this issue

While the ioccc-reg tool and the ioccc-submit tool are beyond the scope of this repo, those projects ARE GATING FACTORS for the Great Fork Merge.

Both the ioccc-reg tool and the ioccc-submit tool need to be far enough along that useful screenshots can be added to the faq.md file, and in particular, to improve / add FAQs about how to enter an IOCCC.

Once we reach the stage where the faq.md file has been updated with useful FAQ information about how to register for the IOCCC and how to submit to the IOCCC, WE PLAN TO SET A DEADLINE for performing the Great Fork Merge. This will include setting a deadline to bring to a close the ((top priority)) issue #858 and issue #1933 and issue #2006. Those issues will need to be brought to a close (somewhat regardless of their state).

P.S.

We DO APPRECIATE the efforts being made on the ((top priority)) issue #858 and issue #1933 and issue #2006 and DO UNDERSTAND that those helping us with these issues have their own matters to attend to.

We plan to give a bit of warning (but not too much, maybe 2 weeks) of when we will force closed, issue #858 and issue #1933 and issue #2006. That won't happen until the IOCCC submit server and the IOCCC registration process are able to move into their testing phase such that we can document how to use them on the web site. So there is more time left to close those issues .. hopefully not a lot more time left as we want to get the 28th IOCCC started .. sometime sooner than later πŸ—“οΈβ€ΌοΈ

lcn2 commented 1 month ago

We believe we have addressed all of the current questions that still need answering at this time. If we've missed something or something else needs to be clarified, please ask again.

xexyl commented 1 month ago

Current activity

As mentioned in comment-2095164921 we continue to work on the ioccc-reg tool and the ioccc-submit tool.

Tasks related to the ioccc-reg tool and the ioccc-submit tool TODO include (but may not be limited to):

  • Revise the ioccc-submit tool tasks below and add them as a (private) repo issue

  • Revise the ioccc-reg tool tasks below and add them as a (private) repo issue

  • Make use of the GitHub mechanism for handling keys within a repo and modify source code accordingly

  • Approach the TopicBox service about the ioccc-reg tool mailing list design

  • Isolate submit server uploads into separate directories

  • Add a mechanism to allow IOCCC judges to collect submissions and change the submission status accordingly

  • Add a mechanism to allow IOCCC judges to further change the submission status to include an arbitrary string

  • Implement a lock / unlock mechanism to allow upload status updates by IOCCC judges while submissions are being uploaded

  • Add a mechanism to have new ioccc-submit tool users change their initial password

  • Build the submit.ioccc.org server that will only be up when needed by an open IOCCC

  • Use letsencrypt.org to generate a cert for HTTPS connections on submit.ioccc.org

  • Map the internal port for ioccc-submit tool into tcp/443 for HTTPS use

  • Test install the current ioccc-submit tool on the submit.ioccc.org server

  • Obtain screenshots from the ioccc-reg tool and add them to the FAQ

  • Obtain screenshots from the ioccc-submit tool and add them to the FAQ
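Two of the deployment bullets above (the letsencrypt.org cert and the tcp/443 mapping) could be sketched roughly as follows. This is only an illustrative sketch: the internal port, volume path, and image name are assumptions, not details from this thread:

```shell
# 1) Generate a cert via letsencrypt.org using certbot in standalone mode
#    (requires tcp/80 reachable on submit.ioccc.org during issuance):
certbot certonly --standalone -d submit.ioccc.org

# 2) Map the container's internal port to tcp/443 for HTTPS use, mounting
#    the issued cert read-only into the container (hypothetical names):
docker run -d -p 443:8443 \
    -v /etc/letsencrypt/live/submit.ioccc.org:/certs:ro \
    ioccc-submit
```

Since the server "will only be up when needed by an open IOCCC", cert renewal would need to be handled around those up periods.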

Relevance to this issue

While the ioccc-reg tool and the ioccc-submit tool are beyond the scope of this repo, those projects ARE GATING FACTORS for the Great Fork Merge.

Both the ioccc-reg tool and the ioccc-submit tool need to be far enough along that useful screenshots can be added to the faq.md file, and in particular, to improve / add FAQs about how to enter an IOCCC.

Once we reach the stage where the faq.md file has been updated with useful FAQ information about how to register for the IOCCC and how to submit to the IOCCC, WE PLAN TO SET A DEADLINE for performing the Great Fork Merge. This will include setting a deadline to bring to a close the ((top priority)) issue #858 and issue #1933 and issue #2006. Those issues will need to be brought to a close (somewhat regardless of their state).

P.S.

We DO APPRECIATE the efforts being made on the ((top priority)) issue #858 and issue #1933 and issue #2006 and DO UNDERSTAND that those helping us with these issues have their own matters to attend to.

We plan to give a bit of warning (but not too much, maybe 2 weeks) of when we will force closed, issue #858 and issue #1933 and issue #2006. That won't happen until the IOCCC submit server and the IOCCC registration process are able to move into their testing phase such that we can document how to use them on the web site. So there is more time left to close those issues .. hopefully not a lot more time left as we want to get the 28th IOCCC started .. sometime sooner than later πŸ—“οΈβ€ΌοΈ

Thank you for the warning. Bad timing for me but I will do my best. The html issue is probably mostly good and if necessary there can always be future fixes. But that would greatly speed things up.

The manifest issue I will work on next. It might take more than a few commits but I would think it shouldn't take that much effort.

The others I will have to look at to see what has to be done.

I will say that the html issue is probably the one that takes the most effort and time so if you think that could be closed soon or even now then it might be that the other issues could be finished sooner.

But I left some comments there.

I was able to look at these messages with the phone but I will be away for the rest of the day now most likely so I will reply tomorrow.

lcn2 commented 1 month ago

We believe we have addressed all of the current questions that still need answering at this time. If we've missed something or something else needs to be clarified, please ask again.

lcn2 commented 1 month ago

As a short break from our critical infrastructure work, we performed commit 650849b52335f67b9fbe3580dcce659ea2837e09 to update news since last month.

We also updated the todo list for this issue #2239, including a number of revisions to the final TODO steps.

xexyl commented 1 month ago

As a short break from our critical infrastructure work, we performed commit 650849b52335f67b9fbe3580dcce659ea2837e09 to update news since last month.

We also updated the todo list for this issue #2239, including a number of revisions to the final TODO steps.

Nice (to both parts). I will have to check it later which probably means tomorrow.

Good luck with the infrastructure problem!

xexyl commented 3 weeks ago

We will probably edit news.md to thin out some of the details before the Great Fork Merge.

BTW: We are still pondering how to manage old news: to delete it off the bottom or archive it elsewhere. That's TBD which is why we have not yet gone into a thinning operation on the news.md file.

UPDATE 0

Added news.md related TODO items to the Great Fork Merge issue.

It seems like some thinning might be useful at times, but I don't see why it's as necessary as it once was (if it ever was). I like seeing the long list of news when there is some. Of course another option is to thin it out for the main page but then have the archive available, unlike the past. This is especially useful and interesting when you have announcements of the winners and other things like that.

That's just my thoughts there.

xexyl commented 3 weeks ago

Using the method outlined in GitHub notes on Removing sensitive data from a repository, remove all *.tar.bz2 files (from all levels including the former top level ioccc.tar.bz2 file) from the few previous commits, in order to reduce the size of this repo somewhat.

Won't that defeat the purpose of the entry tarballs? That would also mean updating the manifest in a mass edit. Plus FAQ and removing a great convenience.

Also would you tell me more about the 'fix the judges' idea?

Have to go .. back tomorrow if not later today (which is unlikely)!

lcn2 commented 3 weeks ago

We will probably edit news.md to thin out some of the details before the Great Fork Merge.

BTW: We are still pondering how to manage old news: to delete it off the bottom or archive it elsewhere. That's TBD which is why we have not yet gone into a thinning operation on the news.md file.

UPDATE 0

Added news.md related TODO items to the Great Fork Merge issue.

It seems like some thinning might be useful at times, but I don't see why it's as necessary as it once was (if it ever was). I like seeing the long list of news when there is some. Of course another option is to thin it out for the main page but then have the archive available, unlike the past. This is especially useful and interesting when you have announcements of the winners and other things like that.

That's just my thoughts there.

Fair points. We added links to your comment in the two related TODOs at the top.

Perhaps once issue #2006 is done (please don't let this suggestion distract you from completing that top priority issue) you might help by doing some digging in both this repo's history and on the internet archive "way back machine" to recover old and lost IOCCC news items? You did a good job of recovering lost files and lost images for entries in the past.

See the TODO at the top.

If that would be interesting for you, we could create a new issue, and build the bin tools to support the creation of the archived news HTML pages if you were to generate the historical news content in markdown form.

xexyl commented 3 weeks ago

We will probably edit news.md to thin out some of the details before the Great Fork Merge.

BTW: We are still pondering how to manage old news: to delete it off the bottom or archive it elsewhere. That's TBD which is why we have not yet gone into a thinning operation on the news.md file.

UPDATE 0

Added news.md related TODO items to the Great Fork Merge issue.

It seems like some thinning might be useful at times, but I don't see why it's as necessary as it once was (if it ever was). I like seeing the long list of news when there is some. Of course another option is to thin it out for the main page but then have the archive available, unlike the past. This is especially useful and interesting when you have announcements of the winners and other things like that. That's just my thoughts there.

Fair points. We added links to your comment in the two related TODOs at the top.

Perhaps once issue #2006 is done (please don't let this suggestion distract you from completing that top priority issue) you might help by doing some digging in both this repo's history and on the internet archive "way back machine" to recover old and lost IOCCC news items? You did a good job of recovering lost files and lost images for entries in the past.

No worries. The only thing slowing me down on that (besides things coming up from time to time, like what happened today) is being so tired that I can't do as much as I might otherwise. I have not even done as many other things in my free time as I'd like, in order to finish that issue. No problem on my behalf. They're not as important, even if I'd like to do them. I still can do some of it, but lately, as I have slept so rubbish, I haven't felt up to doing those things anyway.

See the TODO at the top.

If that would be interesting for you, we could create a new issue, and build the bin tools to support the creation of the archived news HTML pages if you were to generate the historical news content in markdown form.

If by this you mean finding old news items I'd love to! I actually have used the Wayback Machine to recover a dear friend's lost stories that she wrote as a kid (that she published on her website years later but had lost the originals): this was many years ago. Some very brilliant, very thought provoking stories for the age at the time at that.

And again no worries: I won't let it distract me from the top priority issue. That's what I want to get done too!

xexyl commented 3 weeks ago

As for a new issue: perhaps wait until after the merge or at least until the top priority issue is done? Just a thought but that way it's not 'hanging over our head' and we can also ponder it more and what it might mean. I mean today it was only thought of so it might be that we come up with other things too?

Anyway I should be back tomorrow.

lcn2 commented 3 weeks ago

Won't that defeat the purpose of the entry tarballs? That would also mean updating the manifest in a mass edit. Plus FAQ and removing a great convenience.

The tarballs in this REPO's history are of nil value and serve just to bloat this REPO in both size and time to process.

In that TODO we first test this idea on a fork of this REPO (using the methods suggested in the "Using the method outlined" link under that TODO) to see how much repo disk space is saved and how much time is saved (in doing a git clone and in doing a git fsck). If the result of that size and speed test seems worth it, then the same process would be applied to this REPO just before the merge into the winner repo was done: making the merge of the fork lighter on the final Official IOCCC winner repo.

And of course, a make tar would be done again and the resulting final tarballs would be in the repo. The only thing removed would be the older revised compressed tarballs that were created by this REPO since it was forked.

Again, we are talking about a cleanup of old compressed tarballs in this REPO, keeping just the latest tarballs in this REPO.

The history of the Official IOCCC winner repo would not be touched. The fork merge into the Official IOCCC winner repo from this REPO would only carry the addition of the most recent tarballs that were generated in this REPO.

We will add a link πŸ”— to this comment in the above TODO.
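The size and speed test on a fork could be sketched like this, using git-filter-repo (one of the approaches GitHub's "removing sensitive data" notes point at). A hypothetical sketch to run only on a disposable clone, never on the real repo:

```shell
# Hypothetical history-scrub experiment on a throwaway clone:
# measure the repo size before and after removing *.tar.bz2 from history.
git clone --mirror https://github.com/ioccc-src/temp-test-ioccc.git scrub-test
cd scrub-test
du -sh .                                        # size before

# rewrite history, dropping every *.tar.bz2 blob at every path depth
git filter-repo --invert-paths --path-glob '*.tar.bz2'

git gc --prune=now --aggressive
du -sh .                                        # size after: compare savings
```

After such a rewrite, a `make tar` would regenerate the current tarballs, so only the stale historical copies are lost, as described above.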

lcn2 commented 3 weeks ago

Also would you tell me more about the 'fix the judges' idea?

It's an idea suggested by @SirWumpus in issue #2301.

We are not fully convinced of the concept, but we are not saying no either. But we are delaying work on that idea until we are close to the Great Fork Merge, but not so close that the content, if added, couldn't be edited and revised beforehand.

Issue #2301 is not a priority, and so should not take time away from work on things such as completing issue #2006. And assuming we do it (there is a real possibility we will), it would be a fun break celebrating the completion of all of the hard work of issue #2006.

lcn2 commented 3 weeks ago

See the TODO at the top.

If that would be interesting for you, we could create a new issue, and build the bin tools to support the creation of the archived news HTML pages if you were to generate the historical news content in markdown form.

If by this you mean finding old news items I'd love to! I actually have used the Wayback Machine to recover a dear friend's lost stories that she wrote as a kid (that she published on her website years later but had lost the originals): this was many years ago. Some very brilliant, very thought provoking stories for the age at the time at that.

We modified the TODO item and put it just below the "complete issue #2006" TODO and just ahead of the item about "fixing the judges".

We didn't form the new issue now, so as to not distract ourselves.

And again no worries: I won't let it distract me from the top priority issue. That's what I want to get done too!

We did it also for ourselves so we too would not be distracted by the fun.

We REALLY want to get our infrastructure work done so we can be back to the submit server work, so that it will be far enough along to allow us to put screenshots into the FAQ (on how to register and how to upload submissions). We, ourselves, need to keep grinding away at the chores (we are making good progress) and not get distracted ourselves.

xexyl commented 3 weeks ago

Also would you tell me more about the 'fix the judges' idea?

It's an idea suggested by @SirWumpus in issue #2301.

We are not fully convinced of the concept, but we are not saying no either. But we are delaying work on that idea until we are close to the Great Fork Merge, but not so close that the content, if added, couldn't be edited and revised beforehand.

Issue #2301 is not a priority, and so should not take time away from work on things such as completing issue #2006. And assuming we do it (there is a real possibility we will), it would be a fun break celebrating the completion of all of the hard work of issue #2006.

As you know I finally saw it .. thanks for saying though. It's a fun idea yes!

I'm off to get a bowl of strawberries and then some sleep. I have one of my cats under my blanket which might make it a bit harder to sleep but I'll make do if he doesn't leave.

Hopefully tomorrow will be more productive. Good night!

xexyl commented 3 weeks ago

See the TODO at the top. If that would be interesting for you, we could create a new issue, and build the bin in tools to support the creation of the archived news HTML pages if you were to generate the historical news content in markdown form.

If by this you mean finding old news items, I'd love to! I actually have used the Wayback Machine to recover a dear friend's lost stories that she wrote as a kid (she published them on her website years later but had lost the originals): this was many years ago. Some very brilliant, very thought-provoking stories for her age at the time, at that.

We modified the TODO item and put it just below the "complete issue #2006" TODO and just ahead of the item about "fixing the judges".

We didn't form the new issue now, so as to not distract ourselves.

And again no worries: I won't let it distract me from the top priority issue. That's what I want to get done too!

We did it also for ourselves so we too would not be distracted by the fun.

We REALLY want to get our infrastructure work done so we can be back to the submit server work, so that it will be far enough along to allow us to put screenshots into the FAQ (on how to register and how to upload submissions). We, ourselves, need to keep grinding away at the chores (we are making good progress) and not get distracted ourselves.

Best wishes with that! I totally understand how that goes. It can be frustrating and exhausting and other things.

Good idea to not make the new issue yet.

I'll be back tomorrow in some way or another. Good night!

xexyl commented 3 weeks ago

Won't that defeat the purpose of the entry tarballs? That would also mean updating the manifest in a mass edit, plus updating the FAQ, and removing a great convenience.

The tarballs in this REPO's history are of nil value and serve only to bloat this REPO in both size and time to process.

In that TODO, we will first test this idea on a fork of this REPO (using the methods suggested by the "Using the method outlined" link under that TODO) to see how much repo disk space is saved and how much time is saved (in doing a git clone and in doing a git fsck --full). If the results of that test seem worth it, then the same process would be applied to this REPO just before the merge into the winner repo is done: making the merge of the fork lighter on the final Official IOCCC winner repo.

And of course, a make tar would be done again and the resulting final tarballs would be in the repo. The only things removed would be the older, superseded compressed tarballs that were created by this REPO since it was forked.

Again, we are talking about a cleanup of old compressed tarballs in this REPO, keeping just the latest tarballs in this REPO.

The history of the Official IOCCC winner repo would not be touched. The fork merge into the Official IOCCC winner repo from this REPO would only carry the most recent tarballs that were generated in this REPO.
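To make the test described above concrete, here is a rough sketch (hypothetical, not the judges' actual tooling) of measuring the three costs in question on a fork: the time of a fresh clone, the size of the `.git` object store, and the time of a full `git fsck`:

```python
# Hedged sketch: measure what a history rewrite of a fork would save.
# The function name and return shape are invented for illustration.
import os
import subprocess
import tempfile
import time

def git_dir_size(repo):
    """Total bytes under the repo's .git directory (the object store)."""
    total = 0
    for root, _dirs, files in os.walk(os.path.join(repo, ".git")):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total

def measure(url):
    """Clone url into a temp dir; return (clone_secs, git_dir_bytes, fsck_secs)."""
    with tempfile.TemporaryDirectory() as tmp:
        clone = os.path.join(tmp, "clone")
        t0 = time.perf_counter()
        subprocess.run(["git", "clone", "--quiet", url, clone], check=True)
        clone_secs = time.perf_counter() - t0
        size = git_dir_size(clone)
        t0 = time.perf_counter()
        subprocess.run(["git", "-C", clone, "fsck", "--full"],
                       check=True, capture_output=True)
        fsck_secs = time.perf_counter() - t0
        return clone_secs, size, fsck_secs
```

Running `measure()` against the fork before and after the history rewrite would give the before/after numbers needed to decide whether the cleanup is worth applying to this REPO.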

We will add a link πŸ”— to this comment in the above TODO.

I see. That makes sense then and sounds like a good idea to me. Thanks for clarifying.

lcn2 commented 2 weeks ago

Updated the TODOs and changed the order of the Post Great Fork Merge TODOs.

Performed commit 5d4a624d37472be28a9aa65b0518dc54d29e3745 by editing the top of the top level README.md file.

xexyl commented 2 weeks ago

Updated the TODOs and changed the order of the Post Great Fork Merge TODOs.

Thanks. I will see if I can take a quick look at that as I guess it might affect some things in the other issue.

Performed commit 5d4a624 by editing the top of the top level README.md file.

Thanks. I just updated the top level index.md/index.html files. Made how to activate the menu a bit clearer for mobile devices.

lcn2 commented 2 weeks ago

We believe we have addressed all of the current questions that still need answering at this time. If we've missed something or something else needs to be clarified, please ask again.

lcn2 commented 1 week ago

Just an FYI for the curious

We are planning to modify a few of the later TODO items. Here is why:

We have been pondering the wisdom of maintaining the manifest for the long term / post Great Fork Merge via a numbers file.

We do not think that keeping the manifest.numbers file in the repo, post Great Fork Merge, is a good idea. Instead we will have a tool that will collect all of the .entry.json file data and build a CSV file, which can be imported into a spreadsheet tool such as macOS Numbers. The only purpose for doing this is to look over the global picture: so there would also be a tool that takes the CSV and rebuilds the .entry.json files. However, this "export to CSV / import into a spreadsheet / export back to CSV / rebuild the .entry.json files" cycle would ONLY be done on rare occasions. The CSV file would NOT be part of the repo: it would only be a temporary file.
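The "collect all of the .entry.json file data into one CSV" half of that cycle might be sketched as follows. This is only an illustration: the function name and the column list are invented here, and the real tool would use the full .entry.json schema rather than three sample fields.

```python
# Hedged sketch of the proposed ".entry.json -> CSV" collection tool.
import csv
import json
from pathlib import Path

# Illustrative columns only; the real .entry.json schema has many more fields.
FIELDS = ["entry_id", "year", "award"]

def entry_json_to_csv(tree: Path, out_csv: Path) -> int:
    """Walk a YYYY/dirname tree, collecting each .entry.json into one CSV.

    Returns the number of entries written.  The CSV is a throwaway
    working file for spreadsheet review, never committed to the repo.
    """
    rows = []
    for entry_file in sorted(tree.glob("*/*/.entry.json")):
        data = json.loads(entry_file.read_text(encoding="utf-8"))
        rows.append({field: data.get(field, "") for field in FIELDS})
    with out_csv.open("w", newline="", encoding="utf-8") as fp:
        writer = csv.DictWriter(fp, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```

The companion tool would do the inverse: read the (possibly edited) CSV back in and rewrite each .entry.json, keeping the .entry.json files as the authoritative source.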

This new model will declare that all of the .entry.json files are the authoritative source of data for IOCCC winning entries. This would happen at the "code freeze" in the final stages of the Great Fork Merge.

Until the "code freeze" happens, please continue to use the manifest.numbers file and the tmp tools. Please continue to make manifest revisions as needed.

We plan to write a tool that will be used to set up a new winning IOCCC entry. This tool will use default "_entrytext" values for common types of files. Looking at the manifest.numbers file, we can see that certain types of files do have common "_entrytext" values, so the tool will assume those defaults. Of course the tool will allow the IOCCC judges to override the defaults where needed. The tool will also make default assumptions about "inventory_order", "OK_to_edit", "display_as", and "display_via_github" as well, allowing the IOCCC judges to override as needed.

The tool will also make use of a submission's .info.json and .auth.json files when forming a new winning entry's .entry.json file, as well as creating and/or updating files under the author directory.

We have not finished the design of this tool. And while the tool is NOT strictly needed prior to Great Fork Merge, the implications of this tool need to be understood before the "code freeze", and so the design is proceeding.
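The per-file defaulting idea described above can be sketched in miniature. Every default value below is invented for illustration (the real defaults would come from surveying the manifest.numbers data), and the function name is hypothetical:

```python
# Hedged sketch of the defaulting idea for a new-entry setup tool.
# All default values here are invented for illustration only.

# Hypothetical per-filename defaults (file_path -> "_entrytext").
DEFAULT_ENTRYTEXT = {
    "Makefile": "entry Makefile",
    "prog.c": "entry source code",
    "prog.orig.c": "original source code",
}

def default_manifest_record(file_path, **overrides):
    """Build one manifest record for file_path, applying common defaults
    that the IOCCC judges may override via keyword arguments."""
    record = {
        "file_path": file_path,
        "_entrytext": DEFAULT_ENTRYTEXT.get(file_path, file_path),
        "inventory_order": 4000,        # invented default value
        "OK_to_edit": True,
        "display_as": "text/plain",
        "display_via_github": True,
    }
    record.update(overrides)            # judges override as needed
    return record
```

For example, a judge could call `default_manifest_record("prog.c", display_via_github=False)` to accept the common defaults while overriding a single field.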

And it "goes without saying" as that old saying goes (and so we will say it 😁): This tool will only be used by the IOCCC judges during that brief period where we know which submissions will win the IOCCC just prior to the git push where the new IOCCC winning entries are announced.

There is NO need to modify how things such as issue #2006 are worked on today. This is just an FYI for the curious.

UPDATE 0

The TODO for issue #2239 has been updated as per the above.

xexyl commented 6 days ago

Just an FYI for the curious

We are planning to modify a few of the later TODO items. Here is why:

We have been pondering the wisdom of maintaining the manifest for the long term / post Great Fork Merge via a numbers file.

We do not think that keeping the manifest.numbers file in the repo, post Great Fork Merge, is a good idea. Instead we will have a tool that will collect all of the .entry.json file data and build a CSV file, which can be imported into a spreadsheet tool such as macOS Numbers. The only purpose for doing this is to look over the global picture: so there would also be a tool that takes the CSV and rebuilds the .entry.json files. However, this "export to CSV / import into a spreadsheet / export back to CSV / rebuild the .entry.json files" cycle would ONLY be done on rare occasions. The CSV file would NOT be part of the repo: it would only be a temporary file.

Seems good. I have a question for you though - in case you did not think of it (which I am sure you did).

This new model will declare that all of the .entry.json files are the authoritative source of data for IOCCC winning entries. This would happen at the "code freeze" in the final stages of the Great Fork Merge.

So what happens when an author makes a change? It's only for the judges, but how will they go about finishing the task (as such)? Or is this one of those things where you will have to merge and then do the additional step?

Until the "code freeze" happens, please continue to use the manifest.numbers file and the tmp tools. Please continue to make manifest revisions as needed.

Of course.

We plan to write a tool that will be used to set up a new winning IOCCC entry. This tool will use default "_entrytext" values for common types of files. Looking at the manifest.numbers file, we can see that certain types of files do have common "_entrytext" values, so the tool will assume those defaults. Of course the tool will allow the IOCCC judges to override the defaults where needed. The tool will also make default assumptions about "inventory_order", "OK_to_edit", "display_as", and "display_via_github" as well, allowing the IOCCC judges to override as needed.

Indeed many are quite common! The other thoughts also seem sound too.

The tool will also make use of a submission's .info.json and .auth.json files when forming a new winning entry's .entry.json file, as well as creating and/or updating files under the author directory.

From the mkiocccentry tool, I gather?

We have not finished the design of this tool. And while the tool is NOT strictly needed prior to Great Fork Merge, the implications of this tool need to be understood before the "code freeze", and so the design is proceeding.

If you want to (and you can :-) ) fill me in on this I'd be happy to either help with it or write an FAQ entry or anything else that's necessary. Of course I'll still focus on #2006.

And it "goes without saying" as that old saying goes (and so we will say it 😁): This tool will only be used by the IOCCC judges during that brief period where we know which submissions will win the IOCCC just prior to the git push where the new IOCCC winning entries are announced.

:-) (to the humour there)

But see my question / thought above on the matter.

There is NO need to modify how things such as issue #2006 are worked on today. This is just an FYI for the curious.

Thanks - just in case. I'll keep doing it. I woke up at stupid o'clock today. I did get some things done in the other repo and I hope to get the YYYY/README.md files done today but I have to go afk a bit again .. not sure if I'll have time to do this task or not. But tomorrow should be fine.

Back later ... maybe .. otherwise back tomorrow. Well okay tomorrow is also later but you know what I mean :-)

lcn2 commented 6 days ago

This new model will declare that all of the .entry.json files are the authoritative source of data for IOCCC winning entries. This would happen at the "code freeze" in the final stages of the Great Fork Merge.

So what happens when an author makes a change? It's only for the judges, but how will they go about finishing the task (as such)? Or is this one of those things where you will have to merge and then do the additional step?

Of course it depends on what is being changed.

We do not expect people to run the bin tools and fix up everything related to their proposed change. That would impose too much of a burden of responsibility and site knowledge on anyone who submitted a pull request.

We will have to use the GitHub tool gh to apply the proposed change locally, evaluate the change, fix it if/as needed, perform the complete site update, evaluate the result, and push the update accordingly. We guess ...

Changes should not happen all that often. On the official website we will slow down, approving and applying changes only every few months or so.

We do not want the website to churn or change that often.

xexyl commented 4 days ago

This new model will declare that all of the .entry.json files are the authoritative source of data for IOCCC winning entries. This would happen at the "code freeze" in the final stages of the Great Fork Merge.

So what happens when an author makes a change? It's only for the judges, but how will they go about finishing the task (as such)? Or is this one of those things where you will have to merge and then do the additional step?

Of course it depends on what is being changed.

We do not expect people to run the bin tools and fix up everything related to their proposed change. That would impose too much of a burden of responsibility and site knowledge on anyone who submitted a pull request.

We will have to use the GitHub tool gh to apply the proposed change locally, evaluate the change, fix it if/as needed, perform the complete site update, evaluate the result, and push the update accordingly. We guess ...

Changes should not happen all that often. On the official website we will slow down, approving and applying changes only every few months or so.

We do not want the website to churn or change that often.

Fair enough. Just wanted to bring it up in case.

Phone call delayed. I was trying to go through photos and videos and I managed somewhat, but numerous ones have to go in several albums, and I am also having other problems right now that are making it hard to keep track of which albums they are in (doing it from my phone).

So I will have to do checks at the laptop after this. This delay means more time taken away from the repo today. Sorry about that.

I should be able to at least get the rest of the years up to 1995 done but it's starting to look like I might not be able to get to next/ today :(

Will have to wait and see.