peterjc / mediawiki_to_git_md

Convert a MediaWiki export XML file into MarkDown as a series of git commits
MIT License

Deal with MediaWiki attachments, e.g. images #1

Closed peterjc closed 9 years ago

peterjc commented 9 years ago

Thanks to @vincentdavis for his experiments with images overnight on his fork https://github.com/vincentdavis/peterjc.github.io

Looking at an example from http://biopython.org/wiki/Logo mapped to https://github.com/peterjc/peterjc.github.io/blob/master/wiki/Logo.mediawiki

[[Image:biopython.jpg]]

which became https://github.com/peterjc/peterjc.github.io/blob/master/wiki/Logo.md

![](biopython.jpg "biopython.jpg")

Over at the test Jekyll rendering http://peterjc.github.io/wiki/Logo.html this became:

<p><img src="biopython.jpg" alt="" title="biopython.jpg"></p>

i.e. a simple relative link. I have tested this by manually committing the logo image to the master branch.

This works on wiki/Logo.html (via Jekyll on github.io) and wiki/Logo.md (rendered on github.com), provided the image is uploaded as wiki/biopython.jpg (lower case). Note the image as hosted on the original wiki was at http://biopython.org/w/images/5/5c/Biopython.jpg (a somewhat cryptic URL, with a capital B in Biopython), with the associated wiki page http://biopython.org/wiki/File:Biopython.jpg (again with a capital B).

Strangely the wiki/Logo.mediawiki rendering on github.com does not show the image, but I don't mind. This may be a GitHub bug?

Note that I can edit the original MediaWiki site to use [[Image:Biopython.jpg]] instead of [[Image:biopython.jpg]], so the case is somewhat flexible within MediaWiki.

The converter will need to look at File:... page revisions, in this case http://biopython.org/wiki/File:Biopython.jpg aka File:Biopython.jpg and fetch the matching image (upper case B).

One question is, do we save it as wiki/biopython.jpg (lower case b) or wiki/Biopython.jpg (upper case B), given MediaWiki allows [[Image:Biopython.jpg]] and [[Image:biopython.jpg]]? My slight preference is to preserve the capitalisation in the filename, and pre-process lower-case image links to match.

peterjc commented 9 years ago

For a test case of a file with a revision, see http://biopython.org/wiki/File:TorusDBN.png

peterjc commented 9 years ago

On the subject of filenames, quoting http://www.mediawiki.org/wiki/Manual:ImportImages.php:

> Note: The "canonical database form" required by "--from" is obtained from the file name by capitalizing the first letter, replacing all spaces with underscores, and then replacing multiple consecutive underscores with one underscore. For example, to start with the file `someFile with __weird_ spaces.png`, the correct argument would be `--from=SomeFile_with_weird_spaces.png`

(update: See notes on issue #2, there are more rules for special character encoding)

So, I think we should transform all [[Image:XXX]] entries to use this "canonical database form" (prior to calling pandoc) and ensure we save the images using the "canonical database form".

e.g. Replace [[Image:biopython.jpg]] with [[Image:Biopython.jpg]] and save the associated file as wiki/Biopython.jpg.
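
Something like this minimal sketch (in Python; the function name is just illustrative) would do the transformation, ignoring for now the extra special-character encoding rules noted on issue #2:

```python
import re

def canonical_form(name):
    """Return the MediaWiki "canonical database form" of an image name:
    first letter capitalized, spaces replaced by underscores, and runs of
    underscores collapsed to a single underscore."""
    name = name.replace(" ", "_")
    name = re.sub(r"_+", "_", name)
    return name[:1].upper() + name[1:]

# e.g. canonical_form("someFile with __weird_ spaces.png")
# gives "SomeFile_with_weird_spaces.png", matching the manual's example.
```

The same function could be applied both when rewriting the [[Image:...]]/[[File:...]] links prior to calling pandoc, and when choosing the filename to save the image under in wiki/.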

vincentdavis commented 9 years ago

Some images are scaled, for example on http://biopython.org/wiki/Phylo (same page on GitHub: https://github.com/peterjc/peterjc.github.io/edit/master/wiki/Phylo.mediawiki)

[[File:phylo-draw-apaf1.png|256px|thumb|right|Rooted phylogram, via Phylo.draw]]

which becomes

![Rooted phylogram, via Phylo.draw](phylo-draw-apaf1.png "fig:Rooted phylogram, via Phylo.draw")

On the actual wiki the URL is: http://biopython.org/w/images/thumb/0/04/Phylo-draw-apaf1.png/256px-Phylo-draw-apaf1.png

If you want to see the full-size image: http://biopython.org/w/images/0/04/Phylo-draw-apaf1.png

I found a document that suggests markup like this can be used for scaling images:

[[ http://url.to/image.png | height = 100px ]]

Assuming you don't want to use FTP to download the images folder, I will work on a script for downloading the images.

vincentdavis commented 9 years ago

A possibly useful tool, which can "extract all image names from the XML dump which it may reference, then generate a series of BASH" scripts: https://meta.wikimedia.org/wiki/Wikix

and also https://github.com/benjaoming/python-mwdump-tools

peterjc commented 9 years ago

We can probably get the convert.py script to download the (latest) version of each image (e.g. Biopython.jpg) by scraping the URL from the associated wiki page when it sees a revision for File:Biopython.jpg, save this under wiki/, and then use git add wiki/Biopython.jpg and commit it. This would be functional, but would not fully capture image revisions.
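
Roughly what I have in mind, as a sketch: it assumes the File: page links directly to the stored image under /w/images/ (the helper name and the regex are only a guess at the markup, not tested against every page):

```python
import os
import re
import subprocess
import urllib.request

BASE = "http://biopython.org"

def fetch_and_commit(image_name, out_dir="wiki"):
    """Download the current version of File:<image_name>, save it under
    out_dir, and record it as a single git commit.

    Assumes a default MediaWiki install where the File: page contains a
    direct href to the stored image under /w/images/."""
    page = urllib.request.urlopen(BASE + "/wiki/File:" + image_name).read().decode()
    match = re.search(r'href="(/w/images/[^"]*/%s)"' % re.escape(image_name), page)
    if not match:
        raise ValueError("Could not find image link for %s" % image_name)
    local = os.path.join(out_dir, image_name)
    urllib.request.urlretrieve(BASE + match.group(1), local)
    subprocess.check_call(["git", "add", local])
    subprocess.check_call(["git", "commit", "-m", "Add image %s" % image_name])
```

e.g. fetch_and_commit("Biopython.jpg") would save wiki/Biopython.jpg and make one commit, with no attempt to preserve the image's own revision history.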

peterjc commented 9 years ago

The lack of image scaling may be worth reporting to pandoc as a feature enhancement request, but the good news is that for Biopython's wiki there are so few images we can check them all manually.

vincentdavis commented 9 years ago

OK, this is a hack. It scans the mediawiki files in /wiki/; if it finds an image (File:), it then scans the corresponding biopython.org page for all images, and then gets the full-size images.

import os
import re
import urllib.request

scan_path = 'peterjc.github.io/wiki/'  # look here for .mediawiki files to scan
save_path = ''  # save the downloaded images into the current directory

mfile = re.compile(r'File:.+')
mlink = re.compile(r'class="image"><img alt="" src="[^"]+')

for file in os.listdir(scan_path):
    namesplit = file.rsplit('.', maxsplit=1)
    if namesplit[-1] == 'mediawiki':
        text = open(scan_path + file, 'r').read()
        if mfile.findall(text):  # page references at least one File: to download
            print(namesplit[0])
            response = urllib.request.urlopen('http://biopython.org/wiki/' + namesplit[0])
            html = response.read().decode()
            for m in mlink.findall(html):
                # Drop the trailing "256px-..." component and the "/thumb/" part
                # to turn the thumbnail src into the full-size image URL
                # (assumes the page embeds thumbnails rather than the original file).
                pre_url = 'http://biopython.org'
                img_url = pre_url + m.split('src="')[1].rsplit('/', maxsplit=1)[0].replace('/thumb/', '/')
                print(img_url)
                img = urllib.request.urlopen(img_url)
                filename = img_url.rsplit('/', maxsplit=1)[-1]
                print(filename)
                with open(save_path + filename, 'wb') as local_file:
                    local_file.write(img.read())

peterjc commented 9 years ago

Even if the API links are not enabled, we can, if need be, crawl the Special:ListFiles page to find all the currently uploaded files, e.g. http://biopython.org/wiki/Special:ListFiles
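
Crawling that page could be as simple as the sketch below (the helper name and the regex are assumptions, and a larger wiki would need the pagination links followed too):

```python
import re
import urllib.request

def list_current_files(base="http://biopython.org"):
    """Scrape Special:ListFiles for the names of all currently uploaded
    files. Assumes file pages are linked as /wiki/File:... (the names may
    still be URL-encoded and would need unquoting before use)."""
    html = urllib.request.urlopen(base + "/wiki/Special:ListFiles").read().decode()
    return sorted(set(re.findall(r'href="/wiki/File:([^"]+)"', html)))
```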

However, simply processing the XML revisions tells us when a new image was uploaded, or an old image updated. We may be able to use this revision date stamp to cross-reference with the image page in order to get the appropriate version of the image, e.g. by parsing the file history table on http://biopython.org/wiki/File:TorusDBN.png - building on the snippet from @vincentdavis
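
The selection step might look like this, assuming the file history table has already been scraped into (upload timestamp, URL) pairs; the scraping itself is not shown and the function name is just illustrative:

```python
from datetime import datetime

def pick_version(history, revision_time):
    """Given [(upload_datetime, image_url), ...], return the URL of the
    newest upload that is not later than the wiki revision's timestamp,
    falling back to the oldest upload if the revision predates them all."""
    history = sorted(history, key=lambda entry: entry[0], reverse=True)
    for uploaded, url in history:
        if uploaded <= revision_time:
            return url
    return history[-1][1]  # revision predates all uploads; use the oldest

# e.g. pick_version(history, datetime(2010, 5, 1)) gives the image as it
# looked when that wiki revision was made.
```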

vincentdavis commented 9 years ago

Added a get_images function which uses the existing XML parsing in convert.py. Need to see if I can use the existing commit functions for the images. https://github.com/vincentdavis/mediawiki_to_git_md/commit/63bec72ebf78912d2e2ffe1223850e1cde366245#diff-fc95a3840033cc06854352f25fb6822f

peterjc commented 9 years ago

Excellent - I should be able to add the git part and test this over the weekend :)

peterjc commented 9 years ago

Marking as fixed; will open a new issue for image scaling...