abecode / emotion-twenty-questions

Automatically exported from code.google.com/p/emotion-twenty-questions

What would be necessary for an emo20q journal paper(s) #10

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
If we only do a database paper...
1. (as Jimmy suggested) a good way to mix analysis with a plain database 
paper is to consider how the agent can be used as an extension of data 
collection.
2. we should have some inter-annotator agreement stats. The question 
annotations and answer annotations may warrant different (sub)sections
3. experimental design: objectivity, reliability, validity, sensitivity, 
comparability, and utility are different subtopics that can motivate our 
experimental design.
4. examples of user variation.
5. how emo20q fits into emotion theory and affective computing.
6. compare with the 20q patent: more natural language, and aims to guess in 
human-like ways rather than playing the game optimally.
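On point 2 above, a minimal sketch of how inter-annotator agreement could be computed (Cohen's kappa for two annotators; the label sets below are made-up placeholders, not real annotations):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement rate
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement, from each annotator's marginal label distribution
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# toy example: two annotators tagging question types
a = ["identity", "attribute", "identity", "attribute", "identity"]
b = ["identity", "attribute", "attribute", "attribute", "identity"]
print(round(cohens_kappa(a, b), 3))   # 0.615
```

Separate kappas could be reported for the question-annotation and answer-annotation subsections.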

For an analysis paper(s)...
1. spectral graph theory: connectivity, projections, inference, partitions?, 
visualization?, etc.  
2. dealing with the yes/no answers: mapping them to values between 0 and 1 
and building regression models from natural language.
3. Agent behavior, computational models of human-like behavior.
4. Static vs. dynamic (so far we've looked only at static; dynamic --the order 
in which users ask questions-- is important for modeling human-like inference).
5. Influence of one user on other users in terms of question re-use.
6. entrainment/rapport between two players and effect on game outcome.
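For point 1 above, a small sketch of what the spectral-graph-theory angle could look like: build an emotion adjacency matrix, take the graph Laplacian, and read connectivity off its eigenvalues. The adjacency matrix here is a hypothetical toy, not real EMO20Q data:

```python
import numpy as np

# Toy emotion graph: emotions linked when they share "yes" answers to
# the same questions (hypothetical adjacency; real edges would come
# from the EMO20Q logs).
emotions = ["happy", "joy", "sad", "angry"]
A = np.array([
    [0, 1, 0, 0],   # happy -- joy
    [1, 0, 0, 0],
    [0, 0, 0, 1],   # sad -- angry
    [0, 0, 1, 0],
], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L)

# number of (near-)zero eigenvalues = number of connected components
n_components = int(np.sum(eigvals < 1e-9))
print(n_components)   # 2 (two disconnected pairs)
```

The eigenvectors for the smallest nonzero eigenvalues also give the low-dimensional embedding one would use for visualization and partitioning.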

Original issue reported on code.google.com by abe.kaze...@gmail.com on 13 Sep 2011 at 12:36

GoogleCodeExporter commented 8 years ago
On this subject I've written up a very general outline of some ideas and goals 
that could contribute toward an emo20q journal paper.  Let me know what you 
think.

Emotion 20 Questions: Developing a Computational Understanding of Emotions 
using Natural Language

Data collection

-human-human collection
motivations:
1. used to generate questions and emotions to build theory/computational system
2. natural conversations

goals:
1. perhaps undergraduate students who want to take part in this project 
could take a role in organizing collection?
2. more annotators to get a measure of annotator agreement.

-human-computer collection (EMO20Q Questioner Agent)

motivations:
1. can be used as a tool to collect large amounts of data
2. can be used to assess the computational model of emotion (i.e., can the 
computer correctly guess the emotion a user is considering)
3. can be used as an application framework in which we can experiment with 
different clustering, graph theoretic, and information theoretic measures to 
assess our "computational theory of emotions"

goals:
1. use for data collection with an online application such as Amazon Mechanical 
Turk (AMT) to develop a "crowd-sourced theory of emotions"
2. incorporate "level of truth" ratings from AMT to weight question/answer 
edges to integrate information from non "yes" or "no" answers ("maybe", 
"possibly", etc.)
3. develop a scheme for automatically assigning (annotating) user answers to 
weighted categories/bins (something like "yes"~3, "usually"~2, "sometimes"~1, 
"maybe"~0, "not always"~-1, "not usually"~-2, "no"~-3)
4. consider a way to better choose orthogonal questions (so "is it positive?" 
and "is it negative?" are not both asked) 
5. record outcomes of human-computer games and measure errors made by the system
6. incorporate adaptation into the system
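A minimal sketch of the weighted-bin scheme in goal 3; the keyword matching is a deliberately naive stand-in for whatever classifier we'd actually train on the AMT ratings:

```python
# Signed weights following the bins sketched in goal 3 above.
ANSWER_WEIGHTS = {
    "yes": 3,
    "usually": 2,
    "sometimes": 1,
    "maybe": 0,
    "not always": -1,
    "not usually": -2,
    "no": -3,
}

def answer_weight(answer: str) -> int:
    """Map a free-text answer to a signed weight.

    Longest phrases are tried first so "not usually" wins over
    "usually"; substring matching is naive and only illustrative.
    """
    text = answer.lower().strip().rstrip(".!")
    for phrase, weight in sorted(ANSWER_WEIGHTS.items(),
                                 key=lambda kv: -len(kv[0])):
        if phrase in text:
            return weight
    return 0  # unrecognized answers treated as uninformative

print(answer_weight("Yes, definitely!"))   # 3
print(answer_weight("hmm, not usually"))   # -2
```

These weights could then serve directly as the question/answer edge weights mentioned in goal 2.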

Engineering methods

-question selection by automated agent
1. compare different methods for ranking question "strength": information 
gain, PageRank, etc.
2. consider how different techniques could lead to a more theoretically 
structured approach (with regard to information theory, graph theory, etc.)
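As one concrete instance of point 1, a sketch of scoring a question by its information gain over the remaining candidate emotions (the emotions and answers below are hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(labels).values())

def information_gain(emotions, answers):
    """Expected entropy reduction over candidate emotions after seeing
    the answer to one question; answers[i] is the (assumed known)
    answer for emotions[i]."""
    base = entropy(emotions)
    n = len(emotions)
    remainder = 0.0
    for ans in set(answers):
        subset = [e for e, a in zip(emotions, answers) if a == ans]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

# hypothetical: "is it positive?" splits four emotions evenly
emotions = ["happy", "joy", "sad", "angry"]
positive = ["yes", "yes", "no", "no"]
print(information_gain(emotions, positive))   # 1.0 bit
```

The agent would compute this for every unasked question and ask the one with the highest gain; a PageRank-style ranking over the question/emotion graph would be the comparison point.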

-emotion prediction based on user answers by automated agent
1. decide upon an error measure for the current system
2. consider how different approaches could ascertain the correct emotion 
(summing evidence for/against, ANNs, etc.)
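The evidence-summing option in point 2 could look something like this; the knowledge base below is hypothetical, standing in for answer tendencies learned from the collected games:

```python
# Hypothetical per-emotion answer tendencies (+1 ~ "yes", -1 ~ "no",
# 0 ~ unknown), learned from game logs in a real system.
KB = {
    "happy": {"is it positive?": 1, "is it intense?": 0},
    "anger": {"is it positive?": -1, "is it intense?": 1},
    "calm":  {"is it positive?": 1, "is it intense?": -1},
}

def predict(observed):
    """Score each emotion by summing agreement between the user's
    answers (+1 yes / -1 no) and the knowledge base; return the
    best-scoring emotion."""
    scores = {
        emotion: sum(profile.get(q, 0) * a for q, a in observed.items())
        for emotion, profile in KB.items()
    }
    return max(scores, key=scores.get)

print(predict({"is it positive?": -1, "is it intense?": 1}))   # anger
```

A neural network or a probabilistic model would replace the linear sum here; an agreed-upon error measure (point 1) lets us compare them on the same footing.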

Results and Evaluation

1. how well can an automated agent predict the emotion being considered by the 
user?
2. are certain emotions more difficult/easy to predict?
3. use graphing techniques to give a visualization of relations between emotions
4. look at how emotions are clustered (do clusters have some sensible 
structure?)
5. perhaps some experiments on how dimensionality reduction methods affect 
system performance (to get at the question of: how much info do you need to 
build an "accurate/reasonable" theory/model of emotions using natural language)

Original comment by JimmyGib...@gmail.com on 13 Sep 2011 at 1:41

GoogleCodeExporter commented 8 years ago

Original comment by abe.kaze...@gmail.com on 14 Sep 2011 at 2:14