Kastakin opened 3 years ago
Interesting point. We have not investigated such a question so far. In a way, the location itself can be considered a resource. How many locations and workers do you have? It may be possible to represent your problem with the current features; let me think about it.
For the time being I'm building a scaled-down proof of concept for the business I'm working with, to schedule work on an assembly line. A team of 10 resources is assigned to about 40 tasks over the course of four 8-hour shifts. These tasks are to be conducted in 4 distinct locations, and at any time no more than 2 resources may be present in a single location.
Here is something you could try, as far as I understand your problem: each person is represented by a Worker instance, and each location is represented by a CumulativeWorker with size=2. Each task needs one worker chosen among the 10, and one location chosen among the 4.
import processscheduler as ps
n_workers = 10
n_locations = 4
n_tasks = 40
pb = ps.SchedulingProblem("LimitNumberWorkersLocations")
workers = [ps.Worker("Worker_%i" % i) for i in range(n_workers)]
locations = [ps.CumulativeWorker("Location_%i" % i, size=2) for i in range(n_locations)]
tasks = [ps.FixedDurationTask("Task_%i" % i, duration=8) for i in range(n_tasks)]

# assign resources
for t in tasks:
    # each task needs exactly one worker ...
    t.add_required_resource(ps.SelectWorkers(workers))
    # ... and one location
    t.add_required_resource(ps.SelectWorkers(locations))

pb.add_objective_makespan()

solver = ps.SchedulingSolver(pb, logics="QF_IDL", max_time=300)  # debug=True
solution = solver.solve()
print(solution)
solution.render_gantt_matplotlib(render_mode="Resource")
As a result, each location hosts at most 2 tasks at a time, that is to say 2 workers. Here is the Gantt chart generated from the code above.
Wow, thanks for the effort you put in providing a solution!
This would work if it weren't for the fact that some tasks may require two workers at the same time. This is the main reason why I switched from pyschedule: with that package it wasn't easy to model tasks that required any combination of workers.
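For reference, modeling a two-worker task by itself seems straightforward; something like the following is what I have in mind, assuming I read the SelectWorkers documentation correctly (the nb_workers_to_select and kind arguments are my understanding of that API, untested here):

import processscheduler as ps

pb = ps.SchedulingProblem("TwoWorkerTaskSketch")
workers = [ps.Worker("Worker_%i" % i) for i in range(10)]

# a task that needs any two of the ten workers at the same time
task = ps.FixedDurationTask("AssemblyStep", duration=8)
task.add_required_resource(
    ps.SelectWorkers(workers, nb_workers_to_select=2, kind="exactly")
)

The hard part is making such a task also consume two "places" of a location's capacity, which is where the approach above falls short.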
While trying to find a solution to the problem I thought about using Buffers as stations, each with quantity=2, effectively representing the available space, and then adding TaskUnloadBuffer and TaskLoadBuffer constraints at the beginning and end of each task, with the quantity equal to the number of resources required in that location. But to be honest I don't know if this is the right way to do it.
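Roughly what I have in mind for a single task, completely untested and using the buffer/constraint names and arguments as I found them in the current docs:

import processscheduler as ps

pb = ps.SchedulingProblem("BufferAsLocationIdea")

# one buffer per station, holding the number of free places (2 at most)
station = ps.NonConcurrentBuffer("Station_A", initial_state=2, lower_bound=0, upper_bound=2)

task = ps.FixedDurationTask("SomeTask", duration=8)
n_people_needed = 2  # hypothetical: this task needs two people in Station_A

# take the places when the task starts ...
pb.add_constraint(ps.TaskUnloadBuffer(task, station, quantity=n_people_needed))
# ... and give them back when it ends
pb.add_constraint(ps.TaskLoadBuffer(task, station, quantity=n_people_needed))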
P.S. Thanks for updating the issue's label, I wasn't sure how to mark it!
This would work if it weren't for the fact that some tasks may require two workers at the same time.
Hi! I have a few questions: would these 2 workers need to share a location if they worked together at the same time on the same task? And is it always the case that a task needs two workers, or just some tasks?
Thanks! :smile:
@Kastakin I also thought about using the Buffer class to represent a location. There would be as many Buffer instances as locations (4, actually). The point is that the Buffer class is quite young (one week, I would say) and you cannot let a task load/unload any Buffer: the buffer has to be explicitly passed as an argument. As a consequence, with the current implementation, you cannot tell a Task to load one item from either Buffer1/Buffer2/Buffer3 and unload from either Buffer1/Buffer2/Buffer3. But it might be possible by using optional TaskLoadBuffer and TaskUnloadBuffer constraints and telling the solver that only one of them has to be scheduled. I will soon come back with a code snippet.
would these 2 workers need to share a location if they worked together at the same time on the same task? And is it always the case that a task needs two workers?
If a task requires two workers, those two have to work in the same location. Not every task requires two workers.
To sum up, in a given timeframe: at most two workers can be present in a single location; a task needs either one or two workers; and when it needs two, they work in the same location.
I hope this answers the question.
As a consequence, with the current implementation, you cannot tell a Task to load one item from either Buffer1/Buffer2/Buffer3 and unload from either Buffer1/Buffer2/Buffer3.
I'll give the Buffers a try; I know where each specific task has to be conducted (i.e. Task 1 takes place in Location 3, Task 2 takes place in Location …, etc.)
UPDATE: @tpaviot Reporting on the results I got. After some tinkering with my problem and not getting any results, I decided to scale it down and use the snippet of code you used to generate the example above. Here's the code I used, with some slight modifications to use buffers as we discussed:
import processscheduler as ps
n_workers = 5
n_locations = 4
n_tasks = 40
pb = ps.SchedulingProblem("LimitNumberWorkersLocations")
workers = [ps.Worker("Worker_%i" % i) for i in range(n_workers)]
tasks = [ps.FixedDurationTask("Task_%i" % i, duration=8) for i in range(n_tasks)]

# each location is a buffer with a starting and maximum quantity of two
# and a minimum possible value of zero
locations = [ps.NonConcurrentBuffer("Location_%i" % i, initial_state=2, lower_bound=0, upper_bound=2) for i in range(n_locations)]

# required location for each task; these come from the known solution obtained in your example
loc_req = [1,2,3,2,0,1,1,2,0,1,2,2,2,3,2,1,0,3,3,1,0,3,3,2,2,2,0,1,2,0,1,0,3,0,1,0,2,0,3,1]

# assign resources to each task
for i, t in enumerate(tasks):
    # each task needs exactly one worker
    t.add_required_resource(ps.SelectWorkers(workers))
    # each task reduces the quantity of the corresponding location/buffer by one when it starts ...
    begin = ps.TaskUnloadBuffer(t, locations[loc_req[i]], quantity=1)
    pb.add_constraint(begin)
    # ... and adds one back when it ends
    end = ps.TaskLoadBuffer(t, locations[loc_req[i]], quantity=1)
    pb.add_constraint(end)
pb.add_objective_makespan()
solver = ps.SchedulingSolver(pb, max_time=3600, verbosity=1)
solution = solver.solve()
solution.render_gantt_matplotlib(render_mode="Resource")
Unfortunately, even with running times of up to an hour, the solver could not check satisfiability. Am I setting the constraints up in a wrong way, or is the problem simply too complex to tackle in a sane amount of time?
@Kastakin Using Buffer in your case is actually similar to a token system; it's something I did not expect when adding this class. This turns your problem into a mixed resource-constrained project scheduling + packing problem. I guess it's NP-hard, which is why the solver takes so long to complete the computation. Try reducing the number of tasks; you can leave the number of workers at 10. A ConcurrentBuffer should be better in your case, but it's not currently available.
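For example, something along these lines (just a sketch, not tested against your data; the horizon value is an arbitrary upper bound you would have to pick yourself):

import processscheduler as ps

n_workers = 10
n_tasks = 12  # scaled down from 40

# giving the problem an explicit horizon can also reduce the search space
pb = ps.SchedulingProblem("ScaledDownProblem", horizon=40)

workers = [ps.Worker("Worker_%i" % i) for i in range(n_workers)]
tasks = [ps.FixedDurationTask("Task_%i" % i, duration=8) for i in range(n_tasks)]

for t in tasks:
    t.add_required_resource(ps.SelectWorkers(workers))

# keep your buffer constraints here, then try solving without the makespan
# objective first, just to check that the problem is satisfiable at all
solver = ps.SchedulingSolver(pb, max_time=300)
solution = solver.solve()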
@tpaviot Sorry for the late reply! August got in the way and I had to work on my thesis in the meantime.
I've modified the previous example, reducing the number of tasks and using randomly generated values for locations and task durations. Below I report the code used and the Gantt chart obtained:
Code:
from random import randint
import processscheduler as ps
n_workers = 4
n_locations = 3
n_tasks = 10
# ----------->T: 0 1 2 3 4 5 6 7 8 9
task_duration = [2, 3, 2, 2, 2, 1, 2, 1, 1, 3]
task_location = [0, 1, 2, 1, 2, 0, 2, 1, 1, 0]
pb = ps.SchedulingProblem("LimitNumberWorkersLocations")
workers = [ps.Worker("Worker_%i" % i) for i in range(n_workers)]
tasks = [
    ps.FixedDurationTask("Task_%i" % i, duration=task_duration[i])
    for i in range(n_tasks)
]
locations = [
    ps.NonConcurrentBuffer(
        "Location_%i" % i, initial_state=2, lower_bound=0, upper_bound=2
    )
    for i in range(n_locations)
]

# assign resources
for i, t in enumerate(tasks):
    # each task needs exactly one worker
    t.add_required_resource(ps.SelectWorkers(workers))
    # each task reduces the occupation index of the corresponding location/buffer by one when it starts ...
    begin = ps.TaskUnloadBuffer(t, locations[task_location[i]], quantity=1)
    pb.add_constraint(begin)
    # ... and adds one back when it ends
    end = ps.TaskLoadBuffer(t, locations[task_location[i]], quantity=1)
    pb.add_constraint(end)

pb.add_objective_flowtime()

solver = ps.SchedulingSolver(
    pb, max_time=60, parallel=True
)  # debug=True, verbosity=1
solution = solver.solve()
solution.render_gantt_matplotlib(
    render_mode="Resource", fig_filename="out_with_buffer.png", show_indicators=False
)
Gantt:
The main problem with the solution found is the "non-concurrency" of the buffers, as you already pointed out.
I have very little knowledge of SAT solvers; this is all new to me, but since it could be useful for my work, I think I will dedicate some time to seeing if I can figure out how to implement ConcurrentBuffers.
I will report my results if I manage to get something done!
@Kastakin Indeed, the NonConcurrentBuffer class is not the best way to represent your problem. The ConcurrentBuffer is not implemented yet, mostly because it is a bit harder to implement. It's on my todo list.
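In the meantime, one workaround you could experiment with (just a sketch, not tested on your full problem, and relying only on Worker and SelectWorkers as I understand them): represent each location by two "slot" workers, and make every task require as many slots as people it needs. Since a Worker can only process one task at a time, at most two people end up in a location simultaneously, and a two-person task correctly consumes both places.

import processscheduler as ps

pb = ps.SchedulingProblem("LocationSlotsWorkaround")

n_workers = 10
n_locations = 4
workers = [ps.Worker("Worker_%i" % i) for i in range(n_workers)]

# two "slot" workers per location: one slot = one physical place for a person
location_slots = [
    [ps.Worker("Location_%i_Slot_%i" % (i, j)) for j in range(2)]
    for i in range(n_locations)
]

# hypothetical one-person task in location 1: one worker and one of the two slots
t1 = ps.FixedDurationTask("OnePersonTask", duration=8)
t1.add_required_resource(ps.SelectWorkers(workers, nb_workers_to_select=1))
t1.add_required_resource(ps.SelectWorkers(location_slots[1], nb_workers_to_select=1))

# hypothetical two-person task in the same location: two workers and both slots
t2 = ps.FixedDurationTask("TwoPersonTask", duration=8)
t2.add_required_resource(ps.SelectWorkers(workers, nb_workers_to_select=2))
for slot in location_slots[1]:
    t2.add_required_resource(slot)

solver = ps.SchedulingSolver(pb, max_time=60)
solution = solver.solve()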
@Kastakin I see in your Gantt chart that you have split a task over the horizon. I am also looking for something similar. Can you please help me with my issue? https://github.com/tpaviot/ProcessScheduler/issues/117
I'm coming from the pyschedule package, so I'm sorry if I'm still missing something, but I looked around in the docs and I couldn't find a solution to my problem:
I've got a certain number of tasks, all of which are to be executed in specific locations by one or more resources (workers). Due to the global COVID-19 situation I have to add the constraint that no more than 2 resources can work in the same location. Is there a way to do so?
In pyschedule there was the ability to add custom attributes to tasks, so that it was possible to add constraints like these, but I haven't found an equivalent option here.