overviewer / Minecraft-Overviewer

Render high-resolution maps of a Minecraft world with a Leaflet powered interface
https://overviewer.org/
GNU General Public License v3.0

support multi-node parallelisation #42

Open eminence opened 14 years ago

eminence commented 14 years ago

to facilitate the rendering of really large maps, it would be nice to support parallelisation on multiple nodes in a cluster (assuming some type of shared storage).

chunk rendering seems easy to do. not so sure about image tiling.

brownan commented 14 years ago

If I assume some kind of shared storage, I don't think this would be hard. Tiles are now computed independently as well, so both steps can be parallelized across nodes.

rbos commented 14 years ago

Would it simplify anything to put the data into an SQL database?

brownan commented 14 years ago

Correct me if I'm wrong, but I'm under the impression that relational databases are not quite as good at storing large amounts of binary data (image files). I imagine a database wouldn't be much of an advantage, except that most people know how to set up a database server.

Also, the images would still need to get written out into image files for the browser to view.

The other obvious option is to have each node calculate its tiles and store them locally, then merge them later.

Basically, the less inter-process communication, the better. My experience with this kind of thing is limited, though, so correct me if I'm wrong.

But regardless, there are a few options.

brownan commented 14 years ago

Also, see the Celery ticket, that's related and also an option.

eminence commented 14 years ago

re: Celery -- something like that seems like a great idea. i'll add some comments to that particular ticket.

re: SQL database for storage, probably not a good idea. as brownan said, there's a whole bunch of binary blobs to move around, and sticking them in a SQL database doesn't buy us much. also, any distributed solution needs to be careful about what types of data it moves around: the compressed minecraft worlds can be over 200MB, but the resulting tile data can easily be larger than 1GB.

i think requiring shared storage would be the easiest solution, unless a particularly elegant solution presents itself.

brownan commented 14 years ago

I'll look into what it would take for some easy multi-node processing sans Celery, but assuming shared storage. Seems like a common case that may not satisfy everyone, but it's still a step in the right direction.

eminence commented 14 years ago

awesome. if i can help in any way, let me know.

ukd1 commented 14 years ago

Have a look at gearman - it might do the job, and it's easy to install on most platforms too (though Celery may be as well; I've not looked).

With gearman, you can send the results (images) back to a central place easily.
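
Roughly, a render worker would look something like this with the python-gearman bindings (the job-server address, task name, and render_chunk stub here are made up, just to show the shape of it):

import gearman  # python-gearman bindings

# Connect to a (hypothetical) gearman job server.
worker = gearman.GearmanWorker(['jobserver.example.com:4730'])

def render_chunk(chunk_data):
    # Stand-in for the real renderer: would return the PNG bytes for the chunk.
    return b'...png bytes...'

def render_chunk_task(gearman_worker, gearman_job):
    # gearman_job.data carries whatever the submitter sent (the chunk data);
    # the bytes returned here travel back to the submitter.
    return render_chunk(gearman_job.data)

worker.register_task('render_chunk', render_chunk_task)
worker.work()  # block and process jobs until killed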

submitteddenied commented 14 years ago

OK, I have a pretty simple implementation of parallelisation (with a Client-Server architecture), but it will rely on the functions that are distributed returning their results, not writing them to disk and returning the path (unless something like NFS is set up).

This could lead to extra problems with memory and bandwidth usage (the client would need to hold images in memory while rendering them, transmit them back to the server, and then the server writes them to disk), but it would be much easier to set up.

The interface (for queuing jobs) matches that of multiprocessing.pool and your DummyPool - so it should be easy to integrate into the main body of code. I'm just wondering what the best direction would be from here.
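
For reference, this is the contract I'm mimicking; the RemotePool name and the transport are placeholders, only the method names follow multiprocessing.Pool / DummyPool:

class _Result(object):
    # Just enough of multiprocessing's AsyncResult for the render loop.
    def __init__(self, value):
        self._value = value
    def get(self, timeout=None):
        return self._value

class RemotePool(object):
    # Hypothetical drop-in for multiprocessing.Pool: same method names, but
    # apply_async would ship the job to a worker node over the network.
    def apply_async(self, func, args=(), kwds=None):
        # Placeholder transport: run the job locally and wrap the result.
        return _Result(func(*args, **(kwds or {})))
    def close(self):
        pass
    def join(self):
        pass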

I'll push my work to my multinode branch - it's pretty rough at the moment, but hopefully you can work it out ;)

(sorry about all the edits, apparently "GitHub Flavoured Markdown" hates me?)

brownan commented 14 years ago

Yeah, every website out there has to have their own markup language, it bugs me.

Anyways, I think you should do, at least for starters, whatever would be the easiest while still being useful to the people who want it. I think the people that can set up a multi-node render can also set up shared storage, so I don't think that's a bad way to go.

eminence commented 14 years ago

My feelings are with brownan -- whatever is easiest. That was my thinking behind my original assumption of shared storage (since I think that simplifies things a lot), but if you think you can easily remove that requirement (while maintaining good performance), go for it.

eminence commented 14 years ago

ok, i'm still a bit of a git noob, so i'm not sure about the proper work-flow here. i want to play around with some of emjay's changes. do i clone his repo to a new work area with git-clone? or should i fetch his branch and somehow use that to create my own 'multinode' branch, or...?

brownan commented 14 years ago

There's no need to clone anything new.

If you want to merge emjay's changes into your current branch (probably your local master branch), you can simply run a git pull and pass in the remote repository and remote branch to merge into your current one.

git pull git://github.com/emjay1988/Minecraft-Overviewer.git master

or for emjay's multinode branch

git pull git://github.com/emjay1988/Minecraft-Overviewer.git multinode

If you don't want to mess up your master branch and want to create a new branch first, create a new branch starting at your master and then check it out:

git branch my-new-branch master
git checkout my-new-branch

Then pull in the changes you want as before.

brownan commented 14 years ago

What I like to do when working with other people's repos tho is add a "remote". I would do something like this:

git remote add emjay git://github.com/emjay1988/Minecraft-Overviewer.git
git fetch emjay

That will name emjay's remote repository "emjay" and fetch in all commits in that repo that aren't in yours. Then use "gitk" to see visually the differences (very handy!)

Then when you want to merge in some changes to your current branch, just do a git merge:

git merge emjay/master

Git pull is just a shortcut for doing a fetch operation and a merge operation in one shot. I like to do them separately so I can use gitk to view the differences before I go and merge stuff.

Edit: This post accomplishes essentially the same thing as the last, but adding a remote makes it easier to refer to remote repositories without typing the whole thing out every pull, and you can do a "git fetch emjay" and easily see if there's anything new upstream.

alexjurkiewicz commented 14 years ago

The shared storage requirement is a concern for me. It's non-trivial to do this over the internet. NFS is obviously out of the question, and there are no non-sucky alternatives.

Here's a random idea. If you want to enable multi-system rendering, add --machines 3 --machineid 1 to the commandline. This splits world rendering up into three pieces (say quadrants, and one machine gets double the work) and renders only the first piece. If you run that on three machines you can then manually combine their map folders and get the full picture.
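
A sketch of what I mean (the helper function is hypothetical; only the flag names come from my suggestion above):

def chunks_for_machine(all_chunks, machines, machineid):
    # Deterministically pick the subset of chunks this machine renders.
    # Running the same command with --machineid 1..machines on each box
    # covers the whole world; the map folders get merged by hand afterwards.
    ordered = sorted(all_chunks)
    return [c for i, c in enumerate(ordered) if i % machines == machineid - 1]

# e.g. --machines 3 --machineid 2 would boil down to:
# my_chunks = chunks_for_machine(all_chunks, 3, 2)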

eminence commented 14 years ago

while shared storage is certainly not a requirement, i believe that we want to be careful what modes of operation we support, as some will obviously behave better than others.

the reason i support the shared storage requirement is twofold:

  1. i think it makes development easier
  2. the types of environments where a shared storage system performs well will probably be the same types of environments where a multi-node overviewer performs well.

the second reason isn't as compelling as the first, but exists nonetheless.

of course, another line of reasoning is that we shouldn't have the job of preventing users from doing stupid things (like setting up a cluster between london, new york, and hong kong) and that we should provide whatever features are most useful for them, but reasonable people will disagree on this point.

submitteddenied commented 14 years ago

Ok guys, I've been thinking a bit about it and I think I have a good way to go. Each node (clients and servers) should have an independent "base_path". All paths that we deal with should be relative to this. This path could be an NFS/CIFS/SSHFS/whatever mount or just plain local storage. Workers can then save their results to wherever they're instructed to (inside "base_path") which would automatically merge with a shared file system or be manually merged later if it was not shared.
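
In code, the idea is roughly this (the environment dict and resolve() helper are illustrative names, not what's in the branch yet):

import os

def resolve(environment, relative_path):
    # Every path handed to a worker is relative; each node joins it against
    # its own base_path, which may be an NFS/CIFS/SSHFS mount or plain local
    # storage that gets merged manually later.
    return os.path.join(environment['base_path'], relative_path)

# node A: resolve({'base_path': '/mnt/overviewer-out'}, 'tiles/3/2/1.png')
# node B: resolve({'base_path': r'M:\map'}, 'tiles/3/2/1.png')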

The major issue with non-shared files is that caching can't happen. There are also three methods (that I saw) that are distributable:

If the result of any of these is needed to do another, then the storage must be shared.

submitteddenied commented 14 years ago

Just expanding a little more on this, I think the best way to add worker-specific arguments to the remote jobs is to add an "environment" dictionary to the kwargs passed to the function that is being distributed. If the function doesn't have any need of environment information, it doesn't get used, but if it does, it would use the node-specific information, not the server's info.
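
Something along these lines (dispatch() and render_tile() are illustrative names, not actual code from the branch):

def dispatch(func, args, kwargs, node_environment):
    # Worker-side: inject this node's settings (e.g. its base_path) as the
    # 'environment' kwarg before calling the distributed function.
    kwargs = dict(kwargs)
    kwargs['environment'] = node_environment
    return func(*args, **kwargs)

def render_tile(tile_path, environment=None):
    # A distributed function only looks at environment if it actually needs it.
    base = environment['base_path'] if environment else '.'
    return base + '/' + tile_path  # stand-in for the real rendering work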

Is there an easier way of having a node-specific "base_path" that I'm overlooking?

eminence commented 14 years ago

regarding caching: if the same chunk always went to the same node, then each node could do local caching. problems would probably arise when the list of nodes changes (that is, chunks would probably be sent to different nodes that don't have the necessary cached version, reducing efficiency).
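
for example, even something as simple as hashing the chunk coordinates would keep the assignment stable (purely illustrative, not code from any branch):

import zlib

def node_for_chunk(chunk_x, chunk_z, num_nodes):
    # same chunk -> same node, so each node's local cache stays useful.
    # if num_nodes changes, assignments reshuffle and the caches go cold.
    key = '%d,%d' % (chunk_x, chunk_z)
    return (zlib.crc32(key.encode('ascii')) & 0xffffffff) % num_nodes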

regarding your statement "If the result of any of these is needed to do another, then the storage must be shared." i'm not sure I follow. If the result of one is needed as input for another, I'm not sure why shared storage is required. All that is required is that one method is executed before the others that depend on it. The result can be pushed around the network to whatever node needs it next.

submitteddenied commented 14 years ago

My interpretation of what alexjurkiewicz said was that he wanted a way to distribute the processing without actually having the nodes connected, to specify that this node does quadrant 1, that node does q2, etc etc. That would allow (local) caching, but wouldn't be as flexible.

I agree that you could set up the server to transmit the required information to each node, but that would require significant rework of the code base: we'd need the server to track which nodes have which chunks/cache files etc. and ship them around as required. This isn't necessarily a bad thing, it will just take a bit more work.

Otherwise, if you categorise the chunks and get a particular node to do all three stages on the same chunk, that would work too - it would also require the server to do some node management though...

alexjurkiewicz commented 14 years ago

To be frank, with the current parallelisation for a single machine, I don't see a great need for multi-node processing.

It's true that generating large (>100k chunk) maps is slow, but that's a one-off process. After the initial time investment, updates are extremely quick: our 80k chunk map updates in 30 minutes each day. Sure, people with larger maps will see this take longer -- but until there are more than three or four people in the world with 500k chunks, I'm not sure multi-node processing has much practical use.

eminence commented 14 years ago

emjay, i think you've just outlined some good reasons why shared storage makes this easier :)

anyway, i'm going to try to free up some time this weekend to produce a simple (but working) shared storage version (unless, of course, someone else gets to it first :)

even if this code is ultimately thrown away, it should at least be helpful for figuring out how useful this feature even is.

MostAwesomeDude commented 14 years ago

I work for OSUOSL on a project called Pydra, which was designed for this kind of "I want to be parallel, but my code was not written with networking or clustering in mind" problem. I might be able to take a closer look at this.

submitteddenied commented 14 years ago

Just did another commit and push (I know, I should do it more frequently) to my fork.

I also ran a test and it worked! Currently, paths are passed around as absolute. Do the following to make it work on Windows:

  1. Share a folder on the "server"
  2. Mount ("Map Network Drive") that folder on the "server" and all clients as the same drive letter (I used drive M:)
  3. Copy the world data into M:\ drive (I called the folder "map-data")
  4. On the server run: "gmap.py --server --address [server ip]:[port] M:\map-data M:\map"
  5. Server should now process up to "Scanning chunks" and then get nowhere (no workers connected yet!)
  6. On each worker (one of which may be the server) run: "gmap.py --client --address [server ip]:[port]"
  7. You should now see progress on the server
  8. ....
  9. Profit!

So this is a good proof of concept (if I say so myself), but it needs more work. Each client should be able to process <#processors> jobs at once and we need to fix the paths so that they:

  1. aren't absolute; and
  2. don't have to be the same on all nodes

I've partially resolved this by adding an "environment" map to all the distributed functions, which should contain node specific data (such as the base path).

Let me know how you go!

submitteddenied commented 14 years ago

I should also note that all worker nodes (and the server) need to have minecraft installed.

Jonbas commented 14 years ago

I'm getting client errors when I try to run this on the same computer as the server. It's giving me [Errno 10048] Only one usage of each socket address is normally permitted.

Once the error comes up, the only way to clear it is to delete the cache dir and start over. Seems to be coming up at different times each run.

I would definitely use this if it was possible for me to use multiple processors on each node.

submitteddenied commented 14 years ago

Looks like you're trying to run two servers on the one machine. You can do that, but you need to specify different ports after the address:

On the first: --server --address localhost:12345
On the second: --server --address localhost:54321

But you probably don't want to run two servers. Make sure the server is run with --server --address localhost:12345 and the client is run with --client --address localhost:12345.

When you start the client, you need to give it the same address as you gave the server (don't put both --server and --client, just one or the other).

I've made some more changes to my branch which should allow you to run multiple threads on each client, as well as merging up to brownan's master. Hopefully this didn't break everything (about to go test it now) :P

mrsheen commented 14 years ago

Aside: as one of the few running a 300k chunk map, I'd love to run multi-node for both the geek factor and as a learning experience (having never done an implementation myself before)....but I don't need to. Our (chunklist-driven) renders take 55 seconds, down from 55 minutes without the chunklist. Not much chance for optimisation in 55 seconds.

We are in a position where we can run the renders back-to-back, as we have a dedicated render box that is not running any other services. I would still suggest for everyone looking for optimisations to check out my fork, specifically the chunklist-improvements. I cannot guarantee it's bugfree, or will not corrupt tiles (which has happened before, during development), but I've got it running now in a relatively stable state.

eminence commented 14 years ago

just wanted to drop a quick note:

i've pushed eminence/Minecraft-overviewer@d110a1eb4820ce8824c3f818b270b003146713c0 to a new branch. It does multi-node rendering with gearman, no shared storage required (the chunk data is passed to the remote node, and the resulting image data is passed back to the Overviewer). Running on my local LAN, I saw some impressive-looking speedups. I'll be playing more this weekend.
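
the submitting side boils down to something like this (a sketch only -- the task name and job-server address here are illustrative, see the branch for the real code):

import gearman

client = gearman.GearmanClient(['jobserver.example.com:4730'])

def render_remotely(chunk_data, tile_path):
    # submit_job blocks until the worker finishes (the default), then
    # request.result holds the image bytes the worker returned.
    request = client.submit_job('render_chunk', chunk_data)
    with open(tile_path, 'wb') as f:
        f.write(request.result)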

If you have any comments on this, and they don't apply to brownan's version, please feel free to start a new discussion on my fork, to keep the noise here to a minimum.

Hinara commented 9 years ago

I think the best approach would use two parts.

Part 1, the proxy: this software manages all the Overviewer workers, giving them map parts (regionX_X.dat), splitting the work between them, and receiving the images back. Why use the proxy as the file manager? Because it's easier (no extra configuration).

Part 2, the workers: the computers that render images from the world data sent by the server and send the pictures back to it.

I think it's the best solution.