Currently, Conjure.to_scenario_list() generates scenarios with auto-generated shortnames as keys; the actual question texts are not included. The question texts and responses are available via Conjure.to_agent_list() as traits and a codebook, but you still need to combine them manually.
It would be convenient to have a method that automatically generates scenarios containing the question names, question texts and responses, so they can be used for sense-check surveys -- new questions evaluating the original survey responses (from any source). A sketch of what such a helper could look like is included at the end below.
Eg, I want to avoid the additional steps here: https://www.expectedparrot.com/content/2606e4f3-5863-4c08-b500-30c9ef4e923b
Sample data with header row of survey questions:
"Respondent ID","What do you like most about using our online marketplace?","What is one feature you would like to see added to improve your shopping experience?","Can you describe a recent experience where you were dissatisfied with our service?","How do you feel about the current product search and filtering options?","Is there anything else you would like to share about your experience with us?"
"101","The wide variety of products and the ease of use.","It would be great to have a personalized recommendation system based on my browsing history.","I was disappointed when an item I ordered arrived damaged, but customer service quickly resolved it.","The search and filtering options are intuitive and work well for me.","No, keep up the great work!"
"102","I enjoy the simplicity of the interface.","A feature that helps compare similar products side by side would be useful.","No complaints here.","I find the product search to be pretty effective.","I think the sky is a beautiful shade of purple today."
"103","The platform is user-friendly and offers a vast selection of products.","Would love to see an option to save and compare different products.","My delivery was late by a few days, which was frustrating.","It’s okay.","No."
Current workaround, manually combining the codebook (question texts) and traits (responses) from the agents:

from edsl import Conjure, ScenarioList, Scenario, QuestionYesNo

c = Conjure("marketplace_survey_results.csv")

# The agents carry the question texts in their codebook and the responses in their traits
agents = c.to_agent_list()

# Build one scenario per (respondent, question) pair
scenarios = ScenarioList(
    [
        Scenario(
            {
                "respondent_id": agent["traits"]["respondent_id"],
                "question_id": question_id,
                "question": agent["codebook"][question_id],
                "response": agent["traits"][question_id],
            }
        )
        for agent in agents
        for question_id in agent["codebook"].keys()
    ]
)

# A sense-check question evaluating each original response
q_nonsensical = QuestionYesNo(
    question_name="nonsensical",
    question_text="""
    Is this response nonsensical?
    Question: {{ question }}
    Response: {{ response }}
    """,
)

results = q_nonsensical.by(scenarios).run()
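To spot-check the output, the sense-check answers can be viewed next to the original questions and responses; a minimal sketch, assuming the usual results.select(...).print() pattern (the scenario column names follow the keys defined above):

# Show each respondent's original question, response and the sense-check verdict
results.select(
    "scenario.respondent_id",
    "scenario.question",
    "scenario.response",
    "answer.nonsensical",
).print()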
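To make the request concrete, here is a minimal sketch of the kind of convenience method this issue asks for, written as a standalone helper; the name to_sense_check_scenarios is hypothetical, and it assumes the same dict-style traits/codebook access used in the workaround above:

from edsl import Conjure, Scenario, ScenarioList

def to_sense_check_scenarios(conjure: Conjure) -> ScenarioList:
    # Hypothetical helper: build scenarios pairing question names, question texts
    # and responses. Something like this could live on Conjure itself, e.g. as
    # Conjure.to_scenario_list(include_question_texts=True) (name is a placeholder).
    scenarios = []
    for agent in conjure.to_agent_list():
        traits = agent["traits"]        # responses keyed by question shortname
        codebook = agent["codebook"]    # question shortname -> original question text
        for question_id, question_text in codebook.items():
            scenarios.append(
                Scenario(
                    {
                        "respondent_id": traits["respondent_id"],
                        "question_id": question_id,
                        "question": question_text,
                        "response": traits[question_id],
                    }
                )
            )
    return ScenarioList(scenarios)

# Usage:
# scenarios = to_sense_check_scenarios(Conjure("marketplace_survey_results.csv"))

A method like this would remove the per-respondent bookkeeping shown above and make the question text, question name and response available directly in sense-check question templates.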