Open — 2Barry2 opened this issue 2 months ago
Given how efficient Phi-3-mini is for its size, could Phi-3-medium's function-calling performance with your fine-tune land somewhere between llama-3-8B and llama-3-70B?
I'll wait to see whether Microsoft updates phi3-medium, since they just updated phi3-mini today.
@2Barry2 We just pushed our updated phi3-mini fine-tune, trained on Microsoft's June update. Still no plans for phi3-medium.