How the example above was produced:
import mysql.connector
from prompttools.experiment import MindsDBExperiment

# Connect to MindsDB Cloud over its MySQL-compatible API
db = mysql.connector.connect(
    host="cloud.mindsdb.com",
    user="my_username",
    password="my_password",
    port="3306",
)

# Each argument is a list of values to test; the experiment runs every combination
experiment = MindsDBExperiment(
    db_connector=db,
    **{
        "prompt": [
            """
            SELECT response
            FROM {table}
            WHERE author_username={author_username}
            AND text={text}
            """
        ],
        "table": ["mindsdb.gpt_model"],
        "author_username": ['"Agent"'],
        "text": ['"Who was the first President?"'],
    },
)

experiment.run()
from prompttools.utils import similarity

# Score each response against the expected answer, then visualize the results
experiment.evaluate("similar_to_expected", similarity.evaluate, expected="George Washington")
experiment.visualize()
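For reference, here is roughly the query that this single parameter combination resolves to once the values are substituted into the prompt template. This is just an illustrative sketch of the templating (plain str.format), not the library's internals:

# Illustrative only: substitute the one parameter combination into the prompt template
prompt_template = """
SELECT response
FROM {table}
WHERE author_username={author_username}
AND text={text}
"""

query = prompt_template.format(
    table="mindsdb.gpt_model",
    author_username='"Agent"',
    text='"Who was the first President?"',
)
print(query)
# SELECT response
# FROM mindsdb.gpt_model
# WHERE author_username="Agent"
# AND text="Who was the first President?"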
Thanks @HashemAlsaket! I think you are right that MindsDB has a lot to offer, but we need to narrow the scope. It looks like you made the right call here. Could you add the example above to examples/ with some comments on how to set it up, either as a Jupyter notebook or a standalone file?
@steventkrawczyk I added the full notebook example. For now, functionality is limited to querying models in the vein of LLMs [queries, tables, author_usernames, texts], even though MindsDB offers far more. I think we can slowly expand the scope as users request it?
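As I understand it, the experiment simply sweeps every combination of the argument lists it is given, so expanding scope later is mostly a matter of passing more values per argument. A minimal sketch of how those combinations multiply (the second table name and second question are made up for illustration; this shows the cartesian product, not prompttools internals):

from itertools import product

# Hypothetical parameter lists; mindsdb.claude_model and the second question
# are invented here purely to illustrate the sweep
tables = ["mindsdb.gpt_model", "mindsdb.claude_model"]
author_usernames = ['"Agent"']
texts = ['"Who was the first President?"', '"Who wrote the Constitution?"']

# Each tuple is one run: 2 tables x 1 author x 2 texts = 4 runs
for table, author_username, text in product(tables, author_usernames, texts):
    print(table, author_username, text)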
@steventkrawczyk @NivekT
Issue #30
It seems like there are endless ways to use MindsDB. While it's a super cool tool, I'm wondering if we need to tighten the scope of this addition for now. I started simple: the user supplies the models (table source), query, and author_username. Here's the output I get with this implementation, using the standard question we use for experiment testing. Let me know what you think: