Duplicates that were not caught by the sympy simplification steps during function generation can still be caught by their identical likelihoods. In combine_DL each likelihood is compared only to the previous one in the sorted list, and the function is discarded if they match; however, duplicate functions are not guaranteed to be adjacent. It would be better to use a scheme like this:
negloglike_list = []  # Store all unique negloglikes
for i in range(Nfuncs):
    if negloglike_sort[i] in negloglike_list:
        continue
    negloglike_list.append(negloglike_sort[i])
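A self-contained sketch of this scheme, with `negloglike_sort` and `Nfuncs` as stand-ins for the real quantities produced by combine_DL. A set is used alongside the output list so the membership test is O(1) rather than a linear scan of `negloglike_list`:

```python
# Hypothetical stand-in data: duplicate likelihoods that are not adjacent,
# mimicking the situation described above.
negloglike_sort = [12.7, 3.4, 9.1, 3.4, 12.7, 5.0]
Nfuncs = len(negloglike_sort)

seen = set()          # O(1) membership tests for already-kept likelihoods
negloglike_list = []  # store all unique negloglikes, preserving order
for i in range(Nfuncs):
    if negloglike_sort[i] in seen:
        continue  # duplicate likelihood => duplicate function, skip it
    seen.add(negloglike_sort[i])
    negloglike_list.append(negloglike_sort[i])

print(negloglike_list)  # [12.7, 3.4, 9.1, 5.0]
```

Note that this relies on duplicate functions producing bit-identical likelihood values; if the likelihoods are recomputed floating-point numbers, comparing values rounded to a fixed number of decimal places may be safer.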