We now have a function `_sort_ollama_prompts` (maybe this should live in `ollama_utils`?) which looks through our list of `prompt_dict`s for the indices of those with `"api": "ollama"`. If there are any, we sort them by `model_name`; otherwise no sorting is necessary. This gives us a mapping from old indices to new sorted indices for our ollama prompts, which we then use to reorder the full list.
The indices of prompts for other models are kept the same. That is, only the positions of ollama prompts change.
Keeping the other indices fixed seems useful because if you're not processing in parallel (i.e. the `-p` flag is not used in the command), the calls to different APIs/models may have been interleaved deliberately, so we shouldn't change their order. We only care about fixing the ollama order. It should generally be on the user to optimise the order, but for ollama we can apply this automatically.
Fix #51.