Closed drpventura closed 4 years ago
Hi there, thanks for raising this issue. I believe you may be misinterpreting the objective of the hospital-resident algorithm. The aim is to find a stable, resident-optimal matching (the residents being the groups in this case), rather than some maximal matching based on the preferences.
So, in your example with the preferences and capacities below, Group 2's favourite topic is Privacy. So is Group 3's, and since each topic can accommodate two groups, they both get matched to Privacy.
```python
group_prefs = {
    "Group 1": ["Intellectual property", "Privacy"],
    "Group 2": ["Privacy", "Fairness in AI"],
    "Group 3": ["Privacy", "Social media"],
}

topic_hospital_prefs = {
    "Fairness in AI": ["Group 2"],
    "Intellectual property": ["Group 1"],
    "Privacy": ["Group 3", "Group 2", "Group 1"],
    "Social media": ["Group 3"],
}

capacities = {t: 2 for t in topic_hospital_prefs}
```
NB: Solving for a hospital-optimal matching instead (with `game.solve(optimal="hospital")`) returns the same result, as is often the case with games of this size.
If you reduce the capacity of each topic to one, then you get the matching you're after. That likely has no real value, though, since this is a real problem and I imagine you want to allow multiple groups to take the same topic.
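To see why the capacity matters, here is a toy resident-optimal deferred-acceptance loop, written from scratch to illustrate the behaviour; `resident_optimal` is a hypothetical helper for this sketch, not the library's implementation (it also assumes every proposing group appears on the topic's list, which holds for this example):

```python
group_prefs = {
    "Group 1": ["Intellectual property", "Privacy"],
    "Group 2": ["Privacy", "Fairness in AI"],
    "Group 3": ["Privacy", "Social media"],
}

topic_prefs = {
    "Fairness in AI": ["Group 2"],
    "Intellectual property": ["Group 1"],
    "Privacy": ["Group 3", "Group 2", "Group 1"],
    "Social media": ["Group 3"],
}

def resident_optimal(group_prefs, topic_prefs, capacities):
    """Resident-optimal deferred acceptance: groups propose, topics hold or reject."""
    matches = {topic: [] for topic in topic_prefs}
    unmatched = list(group_prefs)                 # groups still proposing
    next_choice = {group: 0 for group in group_prefs}
    while unmatched:
        group = unmatched.pop(0)
        if next_choice[group] >= len(group_prefs[group]):
            continue                              # preference list exhausted
        topic = group_prefs[group][next_choice[group]]
        next_choice[group] += 1
        matches[topic].append(group)
        if len(matches[topic]) > capacities[topic]:
            # Over capacity: the topic rejects its least-preferred holder.
            worst = max(matches[topic], key=topic_prefs[topic].index)
            matches[topic].remove(worst)
            unmatched.append(worst)
    return matches

# With capacity 2, Groups 2 and 3 both land on their favourite, Privacy,
# and Fairness in AI goes unassigned.
two_each = resident_optimal(group_prefs, topic_prefs, {t: 2 for t in topic_prefs})

# With capacity 1, Privacy keeps Group 3 (whom it prefers) and rejects
# Group 2, which then falls back to Fairness in AI.
one_each = resident_optimal(group_prefs, topic_prefs, {t: 1 for t in topic_prefs})
```

The rejection step is the whole story: with capacity two, Privacy never exceeds its quota, so no group is ever bumped towards Fairness in AI.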
Something that is not currently implemented, but that I would be keen to add, is a variant of the hospital-resident assignment problem where each hospital agent requires a minimum number of matches. That would encourage the kind of solution I believe you are looking for here, but comes at the cost that a stable solution is no longer guaranteed to exist.
I hope this explanation was sufficient. I'll close this issue now.
So I am trying to use this library to assign presentation topics to my students. Each group must submit a ranked list of 3 topics from a pre-generated list, and I then need to assign each group 2 different topics. I have code to test this out. Note that `group_prefs` contains the preferences for each group. Group 2 is the only group to have selected Fairness in AI, and yet when the game is solved it is not assigned Fairness in AI. What am I missing?