shure-dev / logllm

Automatically extract ML experimental conditions from your Python scripts with GPT-4, and save them via WandB.

Modify LLM Prompts #32

Open Archly2022 opened 2 weeks ago

Archly2022 commented 2 weeks ago

Large language models tend to rely on relational patterns (vector embeddings) when generating a response, especially when they are steered toward a predictable output. So I made some changes:

The expected output example JSON schema:

```json
{
  "method": "SVC",
  "dataset": "Iris",
  "task": "classification",
  "accuracy": 1.0,
  "C": 1.0,
  "degree": 3,
  "tol": 0.001,
  "cache_size": 200,
  "max_iter": -1,
  "test_size": 0.2,
  "random_state": 42,
  "kernel": "linear",
  "condition_as_natural_langauge": [
    "Using linear kernel on SVC model.",
    "Excluding class 2 from Iris dataset.",
    "Splitting data into 80% training and 20% testing."
  ],
  "advice_to_improve_acc": [
    "Consider using cross-validation for better performance evaluation.",
    "Experiment with different kernels to optimize results.",
    "Increase the dataset size to improve generalization."
  ]
}
```
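For context, here is a minimal sketch of how an expected-output example like this could be embedded in the prompt. The function and variable names (`build_prompt`, `example_schema`, `script_text`) are illustrative, not the repo's actual code:

```python
# Hypothetical sketch: embedding an expected-output example in the prompt.
# `example_schema` would hold the JSON example shown above; `script_text`
# is the Python script whose experimental conditions should be extracted.
def build_prompt(script_text: str, example_schema: str) -> str:
    return (
        "Extract the ML experimental conditions from the Python script below "
        "and answer with JSON only.\n\n"
        f"Example of the expected output format:\n{example_schema}\n\n"
        f"Script:\n{script_text}"
    )
```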

The output is almost identical to the example, since the generation was steered toward that predictable response.

[screenshot: vector]

The output when the example schema is removed:

response:

```json
{
  "method": "Support Vector Machine",
  "dataset": "Iris",
  "task": "Classification",
  "accuracy": 1.0,
  "other_param_here": {
    "C": 1.0,
    "kernel": "linear",
    "degree": 3,
    "gamma": "auto",
    "coef0": 0.0,
    "shrinking": true,
    "probability": false,
    "tol": 0.001,
    "cache_size": 200,
    "class_weight": null,
    "verbose": false,
    "max_iter": -1,
    "decision_function_shape": "ovr",
    "break_ties": false,
    "random_state": null,
    "test_size": 0.2,
    "random_state_tts": 42,
    "shuffle": true,
    "stratify": true
  },
  "condition_as_natural_language": [
    "Binarized class labels (excluding Class 2).",
    "Excluded Class 1 from evaluation."
  ],
  "advice_to_improve_acc": [
    "Use a more complex model.",
    "Use a larger dataset.",
    "Try different hyperparameters."
  ]
}
```

The response without the expected parameters and example output referenced in the notebook:

```json
{
  "condition_as_natural_language": ["Iris dataset with two classes (0 and 2)"],
  "param_name_1": "test_size", "param_value_1": 0.2,
  "param_name_2": "random_state", "param_value_2": 42,
  "param_name_3": "shuffle", "param_value_3": true,
  "param_name_4": "stratify", "param_value_4": "labels",
  "param_name_5": "C", "param_value_5": 1.0,
  "param_name_6": "kernel", "param_value_6": "linear",
  "param_name_7": "degree", "param_value_7": 3,
  "param_name_8": "gamma", "param_value_8": "auto",
  "param_name_9": "coef0", "param_value_9": 0.0,
  "param_name_10": "shrinking", "param_value_10": true,
  "param_name_11": "probability", "param_value_11": false,
  "param_name_12": "tol", "param_value_12": 0.001,
  "param_name_13": "cache_size", "param_value_13": 200,
  "param_name_14": "class_weight", "param_value_14": null,
  "param_name_15": "verbose", "param_value_15": false,
  "param_name_16": "max_iter", "param_value_16": -1,
  "param_name_17": "decision_function_shape", "param_value_17": "ovr",
  "param_name_18": "break_ties", "param_value_18": false,
  "param_name_19": "random_state", "param_value_19": null,
  "param_name_20": "excluded_class", "param_value_20": 1,
  "result_name_1": "accuracy", "result_value_1": 1.0,
  "result_name_2": "precision", "result_value_2": 1.0,
  "result_name_3": "recall", "result_value_3": 1.0,
  "result_name_4": "f1-score", "result_value_4": 1.0,
  "result_name_5": "support", "result_value_5": 10,
  "advice": "Try different kernels (e.g., 'rbf', 'poly') and tune hyperparameters like C and gamma to improve accuracy."
}
```

This seemed more accurate and informative when returned. Response configuration: temperature = 0.1.
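For reference, a minimal sketch of what the call with that configuration could look like, assuming the openai v1 Python client; the model name and helper function are illustrative, not the repo's actual implementation:

```python
# Hypothetical sketch of the extraction call at temperature 0.1,
# assuming the openai v1 client; not the repo's actual code.
import json
from openai import OpenAI

client = OpenAI()

def extract_conditions(prompt: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0.1,  # low temperature for near-deterministic extraction
        messages=[{"role": "user", "content": prompt}],
    )
    # The model is instructed to answer with JSON only, so parse it directly.
    return json.loads(response.choices[0].message.content)
```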

shure-dev commented 2 weeks ago

Thanks, could you let me know why you want to remove the JSON output example? Is it to save cost/tokens?

Also, the temperature should be zero.

Archly2022 commented 2 weeks ago

"advice": "Try different kernels (e.g., 'rbf', 'poly') and tune hyperparameters like C and gamma to improve accuracy." } This is the reason it got different recommendations

"advice_to_improve_acc": [ "Consider using cross-validation for better performance evaluation.", "Experiment with different kernels to optimize results.", "Increase the dataset size to improve generalization." ]