sbromberger / LightGraphs.jl

An optimized graphs package for the Julia programming language

Common API to create parallel traversal variants of existing algorithms #1349

Closed rapus95 closed 4 years ago

rapus95 commented 4 years ago

This is a first iteration of an API proposal, which means that each chosen name is subject to change. The design mainly outlines the call, dispatch, and type hierarchies of the proposal. Most of the structs are still singletons, but for actual usage each struct needs to contain the flags that the given resource, problem, and traversal style may depend upon (and none of the others).

abstract type Resource end
struct Serial <: Resource end
struct Thread <: Resource nthreads::Int end #threaded
struct Cluster <: Resource nnodes::Int end #distributed

Thread() = Thread(Threads.nthreads())

abstract type Behaviour end
abstract type Traversal <: Behaviour end
abstract type Implementation <: Behaviour end

struct BFS <: Traversal end
struct ThreadedBFS <: Implementation end

defaultparallelization(o::Traversal) = Threads.nthreads()>1 ? Thread() : Serial() #fragile default, just an example

"""
parallelize(::Traversal, ::Resource; args...)

specialize on the traversal and optionally on the resource
`args` may hold any implementation relevant arguments, passed through
call `implement` with the resulting traversal and resource
"""
function parallelize end

serial(o::Traversal) = parallelize(o, Serial()) #optional, convenient

parallelize(o::Traversal) = parallelize(o, defaultparallelization(o))
parallelize(o::Traversal, r::Resource; args...) = implement(o, r; args...)

"""
implement(::Traversal, ::Resource; args...)

specialize on the resource and optionally on the traversal
`args` may hold any implementation relevant arguments, passed through
create and return the implementation object for the given combination of arguments
"""
function implement end

implement(::Traversal, ::Resource; args...) = error("missing implementation")
implement(o::BFS, t::Thread; args...)::Implementation = ThreadedBFS(o, t; args...)

### problem interaction
shortest_path(t::Traversal, args...) = shortest_path(implement(t, Serial()), args...) #optional, enables *
shortest_path(::Implementation, ::Graph, args...) #implement that, similar for all other problem solvers

shortest_path(BFS(), g, args...) #enabled by *
shortest_path(serial(BFS()), g, args...) #better than the last option but less specializations than specializing each problem on traversals
shortest_path(parallelize(BFS()), g, args...) #does the default parallelization based on available capabilities
shortest_path(parallelize(BFS(), Thread()), g, args...) #solves in a threaded way
shortest_path(parallelize(BFS(), Serial()), g, args...) #solves in a serial way

### further ideas to enhance/replace the previous problem interaction
abstract type GraphProblem end
(p::GraphProblem)(t::Traversal, args...; kwargs...) = p(parallelize(t), args...; kwargs...) #cleaner than * but julia 1.3+

#to implement for each problem
struct ShortestPath <: GraphProblem end
shortest_path(args...; kwargs...) = ShortestPath()(args...; kwargs...) #optional, convenience
(::ShortestPath)(::Implementation, ::Graph, args...) #actual implementation

Depending on which of the latter two variants will be used (object call or function call), the interface of the default functions needs to vary a little.
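To make the proposed dispatch chain concrete, here is a self-contained, runnable sketch of it. The `ThreadedBFS`/`SerialBFS` fields and their non-singleton constructors are illustrative additions of mine, not part of the proposal as written:

```julia
# Sketch of the parallelize -> implement dispatch chain from the proposal.
abstract type Resource end
struct Serial <: Resource end
struct Thread <: Resource
    nthreads::Int
end
Thread() = Thread(Base.Threads.nthreads())

abstract type Behaviour end
abstract type Traversal <: Behaviour end
abstract type Implementation <: Behaviour end

struct BFS <: Traversal end
struct ThreadedBFS <: Implementation   # fields invented for illustration
    traversal::BFS
    resource::Thread
end
struct SerialBFS <: Implementation
    traversal::BFS
end

# Fragile default, as in the proposal: pick threads when available.
defaultparallelization(::Traversal) = Base.Threads.nthreads() > 1 ? Thread() : Serial()

# implement specializes on the resource (and optionally the traversal).
implement(::Traversal, ::Resource; kwargs...) = error("missing implementation")
implement(o::BFS, r::Thread; kwargs...) = ThreadedBFS(o, r)
implement(o::BFS, ::Serial; kwargs...) = SerialBFS(o)

# parallelize specializes on the traversal and forwards to implement.
parallelize(o::Traversal) = parallelize(o, defaultparallelization(o))
parallelize(o::Traversal, r::Resource; kwargs...) = implement(o, r; kwargs...)
serial(o::Traversal) = parallelize(o, Serial())
```

With these definitions, `parallelize(BFS(), Thread(4))` produces a `ThreadedBFS`, `serial(BFS())` a `SerialBFS`, and `parallelize(BFS())` picks one based on the running session's thread count.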

sbromberger commented 4 years ago

Some quick comments before I have had a chance to go deep into your proposal:

There's no reason to keep the number of threads and the number of nodes inside the Resource struct. These can change; the functions don't depend on them at all; and you discount other combinations (different cores-per-node settings on different nodes of a cluster) that might be relevant. But the bottom line is that nothing in our code will use that. We use the capabilities that exist and are defined by the setup of distributed processing in Julia (whatever cluster manager you used to set up multiprocessing will be used.)

Let's not confuse BFS the shortest-path algorithm with BreadthFirst the traversal algorithm. The former uses the latter to compute simple geodesic distances.

This makes user-facing functions really complex. Instead of

d = shortest_paths(g, 1, BFS(BreadthFirst(Sort.QuickSort)))

we now require the user to type

d = shortest_path(serial(BFS(BreadthFirst(Sort.QuickSort))), g, args...)

and instead of

d = shortest_paths(g, 1, ThreadedBFS())

we'd have them type

shortest_paths(parallelize(BFS(), Thread()), g, args...)

(Leaving aside for the moment that we're making a massive breaking change in user experience by putting the algorithm first; I'm sure that's an easy fix so don't want to dwell on it.)

...but they'd have to remember that they can't specify a sorting algorithm in the threaded implementation or they'd get an error. Or they'd have to look at what will now be a multi-page docstring to figure out which options go with which implementations, and behind the scenes we're using ThreadedBFS anyway.

I understand this provides consistency in implementation, but this consistency is coming at pretty significant complexity. I don't know that we're there yet. Perhaps when we add more parallel functionality it might make sense (that is, I'm assuming things like centrality measures will be able to take advantage of this new syntax), but right now the matrix of behavior x implementation is pretty sparse.

What makes sense to me for now is to put this in Experimental. That is, since this is all syntactic sugar for existing methods, we can drop it in Experimental and as long as it's non-breaking to anything in the base, we can play around with it. Since it's in Experimental, it's immune from SemVer so we can break this code as much as we want while we tweak it. Then (again, since it's syntactic sugar), we can move it into Base once we're satisfied and let users use this or the existing methods. Then (3.0), if this proves to be the right thing to do, we can drop support for the old methods.

Thoughts?

rapus95 commented 4 years ago

Or they'd have to look at what will now be a multi-page docstring to figure out which options go with which implementations

How can they explore the available implementations right now? There MUST be a place somewhere; if there isn't, THAT's hostile 😄 The only suggestion I made was to group the implementations per traversal or problem, since usually people don't try to find an implementation but to solve a problem, and then have to weigh the possibilities against each other and finally choose one.

shortest_paths(g, 1, BFS(BreadthFirst(Sort.QuickSort)))

As I understood it, BFS is the shortest_paths algorithm based on BreadthFirst traversal. So that line seems quite redundant IMO. If BFS is the implementation of a shortest_paths solver which utilizes a BreadthFirst traversal, you may simply hide that from the user entirely. I.e. why not just use shortest_paths(g, 1, BreadthFirst(Sort.QuickSort)) since it should be clear that this should result in a BFS state object anyway.
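The "hide the wrapper entirely" idea could be sketched as follows; `BreadthFirst`, `BFS`, and `shortest_paths` here are stand-in types of mine, not LightGraphs' actual implementations:

```julia
# Stand-in traversal type carrying a sort algorithm, mirroring
# BreadthFirst(Sort.QuickSort) from the discussion.
struct BreadthFirst
    sort_alg::Base.Sort.Algorithm
end
BreadthFirst() = BreadthFirst(Base.Sort.QuickSort)

# Stand-in shortest-paths state built on a BreadthFirst traversal.
struct BFS
    traversal::BreadthFirst
end

# Placeholder solver on the explicit state object.
shortest_paths(g, s, alg::BFS) = (source = s, alg = alg)

# The proposed convenience: accept the traversal directly and wrap it,
# so users never need to spell out the BFS state object themselves.
shortest_paths(g, s, t::BreadthFirst) = shortest_paths(g, s, BFS(t))
```

A call like `shortest_paths(g, 1, BreadthFirst(Base.Sort.QuickSort))` then dispatches through the wrapping method and ends up in the `BFS`-based solver automatically.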

but this consistency is coming at pretty significant complexity

I don't see the complexity in (problem::GraphProblem)(impl::Implementation, g::Graph). But I do see that it still holds some redundancy, since the implementation needs to be problem-aware in any case, so the GraphProblem instance is redundant here as well.

Thus I suggest moving that call on a problem into the construction phase as well. The final layout would be:

  1. Phase (constructing): construct the implementation (both variants coexisting)
     a. Incremental construction from orthogonal properties (Problem, Resources, Traversal), with defaults
     b. Manual: just instantiate the correct implementation type; no defaults provided
  2. Phase (solving): call a solve(impl::Implementation, g::Graph, optionalargs...) function which runs the implementation on the given graph. Maybe even consider using a similar API to MathOptInterface.jl etc. (This helps users feel at home in the Julia ecosystem, and in some sense the graph problems are natural optimization problems anyway.) I.e. the flow could be something like:
    solver = construct(ShortestPath(), Threaded(max=Threads.nthreads()), BreadthFirst(Sort.QuickSort))
    #or ThreadedBFS(Sort.QuickSort(), max=Threads.nthreads())
    solve!(solver, g, 1) #what's that 1 for?
    d = result(solver)

    for that we'd obviously add convenience functions:

    function (gp::GraphProblem)(g::Graph, args...)
        solver = construct(gp, args...)
        solve!(solver, g)
        return result(solver)
    end
    construct(prob::ShortestPath, g::Graph) = construct(prob, g, BreadthFirst(), Thread()) # arbitrary default
    construct(prob::ShortestPath, g::Graph, bf::BreadthFirst, r::Thread) = ThreadedBFS(prob, g, bf, r) #arbitrarily chosen

    then we can call it like: result = ShortestPath()(graph)
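The construct / solve! / result flow above can be exercised end-to-end on a toy problem. The adjacency-list BFS below is an illustrative stand-in of mine, not a proposed implementation:

```julia
# Toy demonstration of the construct / solve! / result pattern.
struct ShortestPath end

mutable struct ToyBFSSolver
    source::Int
    dists::Vector{Int}   # -1 marks "not yet reached"
end

# Phase 1: construct the solver state from the problem and its arguments.
construct(::ShortestPath, adj::Vector{Vector{Int}}, s::Int) =
    ToyBFSSolver(s, fill(-1, length(adj)))

# Phase 2: run a plain BFS over the adjacency list, filling in distances.
function solve!(solver::ToyBFSSolver, adj::Vector{Vector{Int}})
    solver.dists[solver.source] = 0
    queue = [solver.source]
    while !isempty(queue)
        v = popfirst!(queue)
        for w in adj[v]
            if solver.dists[w] == -1
                solver.dists[w] = solver.dists[v] + 1
                push!(queue, w)
            end
        end
    end
    return solver
end

# Phase 3: extract the result from the solver state.
result(solver::ToyBFSSolver) = solver.dists

# Convenience wrapper mirroring (gp::GraphProblem)(g, args...) above.
function (p::ShortestPath)(adj::Vector{Vector{Int}}, s::Int)
    solver = construct(p, adj, s)
    solve!(solver, adj)
    return result(solver)
end
```

On the path graph 1–2–3 given as `[[2], [1, 3], [2]]`, `ShortestPath()(adj, 1)` returns `[0, 1, 2]`.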

rapus95 commented 4 years ago

Maybe make the Graph part of the problem instance as well. Technically that would make sense.

Then the user may specify his problem and optionally any of the resources, traversal styles etc. From the given object instances we construct/select an implementation capable of solving the problem. It's very important IMO to be able to only specify the problem and omit anything about the implementation questions. If the user doesn't want to dig into implementation details, we can choose for them but they have to be fine with the performance they get. But the results we return are correct in either case.

Everything after that point is considered to be the actual work part and thus can be hidden from the user entirely.

At the current point of iteration of the proposal, it seems like the only thing missing is a generic frontend for constructing the instances. That would relieve the user of the decision making about the implementation, which can be considered a user-friendly addition.

As usual, have sensible defaults that just work out of the box, but allow optional arguments to help squeeze every last bit of performance out of the implementation.

Successfully following that proposal enables solve(ShortestPath(graph, all_paths=true)) to automagically go parallel and select the best-fitting traversal style for the given graph layout. That may lead to higher automagical performance, though the predictability of the performance may suffer a bit. But for interactive use that's not a problem; in fact, that's the idea behind the Julia mentality: go correct, then go fast. And you can still pass some of the optional arguments to make performance very predictable (but also maybe get errors if you provide incompatible options).
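A minimal sketch of such capability-based defaults with explicit overrides, under assumed (illustrative) names; the size threshold is arbitrary and only demonstrates the shape of the heuristic:

```julia
# Pick an execution mode from runtime capabilities and problem size,
# but let an explicit keyword argument win. Names are illustrative.
abstract type ExecMode end
struct SerialMode <: ExecMode end
struct ThreadedMode <: ExecMode end

# Heuristic default: only go parallel when threads are available and the
# graph is big enough to amortize the threading overhead.
choose_mode(nv::Int) =
    (Base.Threads.nthreads() > 1 && nv > 10_000) ? ThreadedMode() : SerialMode()

# The solver takes the mode as a keyword whose default is the heuristic,
# so omitting it "just works" and passing it makes behavior predictable.
solve(nv::Int; mode::ExecMode = choose_mode(nv)) =
    mode isa ThreadedMode ? :parallel : :serial
```

Calling `solve(100)` stays serial for a small graph regardless of thread count, while `solve(100; mode=ThreadedMode())` forces the parallel path explicitly.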

sbromberger commented 4 years ago

How can they explore the available implementations right now?

The current docstrings are short and provide pointers to the related functions.

If BFS is the implementation of a shortest_paths solver which utilizes a BreadthFirst traversal, you may simply hide that from the user entirely. I.e. why not just use shortest_paths(g, 1, BreadthFirst(Sort.QuickSort)) since it should be clear that this should result in a BFS state object anyway.

See Traversals.dists(g, s, BreadthFirst()) and Traversals.parents(g, s, BreadthFirst()). The reason we provide BFS as a shortest-paths algorithm is to allow dists and parents simultaneously (at the same cost as Traversals.parents()), and to allow maxdist.

Maybe make the Graph part of the problem instance aswell. Technically that would make sense.

And if I wanted to run Dijkstra on multiple graphs, I'd have to set up a problem for each? (Incidentally, this is what @jpfairbanks advocated in the beginning as well. I don't know whether he's come around.)

There's no reason not to try these approaches in Experimental. I'm skeptical, but I'm not against them :) If it turns out that this is a better way to do things, moving it from Experimental to base is not a difficult transition.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.