The intent is not to generate questions for all of them but to assume they are all implications of the underlying cause.
- If this is the case, answering the underlying cause covers the examples
- If not the case (the answers to a sub-behavior differ from the parent's), people click on the example or generate their own examples, and the dimensions appear
We need to clarify our model (see issue #34). The way the above passage is phrased sounds like an essentialist model (one in which "a variety of concrete and observable defenses and symptoms are treated as indicators of the underlying causal entity—the core conflict. The concepts of the underlying causes are called latent constructs" (Hoffart & Johnson, 2017)), whereas I think we should be using a mechanistic property cluster model (one in which
The variables that are typically taken to be observable indicators of latent constructs are taken to be autonomous causal entities in a network. Thus, studying disorders means investigating their network structure—that is, the pattern of causal relationships between the observable symptoms. Disorders are supposed to exist as systems rather than as entities. Therefore, the aggregation of symptoms is of less interest from a network perspective. The focus of assessment is rather on the strength and number of connections between symptoms and the centrality of individual symptoms
— (Hoffart & Johnson, 2017)). A case can be made for either type of model, or for others, but I think the real strength of a linked database is in a network-of-signs-and-symptoms perspective rather than a perspective of underlying causes. Six years ago, Kendler, Zachar, & Craver asserted that "the MPC view is the best current answer to ‘What kinds of things are psychiatric disorders?’" (2011); I think they are still correct.
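To make the network perspective concrete, here is a rough sketch of what such a representation might look like in code (just an illustration using networkx; the symptom names and edge weights are placeholders, not anything from our data):

```python
# Rough sketch of a symptom network: nodes are observable symptoms,
# weighted edges are estimated associations between them.
# Symptom names and weights are placeholders, not real data.
import networkx as nx

G = nx.Graph()
G.add_edge("poor sleep", "irritability", weight=0.7)
G.add_edge("irritability", "social withdrawal", weight=0.4)
G.add_edge("poor sleep", "inattention", weight=0.5)
G.add_edge("inattention", "social withdrawal", weight=0.2)

# Under the network/MPC view, the quantities of interest are the strength of
# connections and the centrality of individual symptoms, not a summed score.
strength = dict(G.degree(weight="weight"))   # total edge weight per symptom
centrality = nx.degree_centrality(G)         # fraction of other symptoms each one touches

for symptom in G.nodes:
    print(f"{symptom}: strength={strength[symptom]:.2f}, "
          f"degree centrality={centrality[symptom]:.2f}")
```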
A. Questions must adapt to the user and be personalized rather than a static version for all. This means users must be allowed to add their own examples.
Are you suggesting a multiple-choice + free-text format where a user could choose a given option and/or add his/her own? I like this idea, although I'm not sure how the user-supplied examples would be moderated or who/what would do said moderating. I wonder if a choose-all-that-apply instruction would be appropriate for this kind of question.
Also, is this a now issue or a future issue?
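For what it's worth, here is a rough sketch of how a "choose all that apply, plus add your own" item could be modelled (all class and field names are hypothetical, not a proposed schema):

```python
# Rough sketch of a "choose all that apply, or add your own" question.
# Class and field names are hypothetical, not an agreed schema.
from dataclasses import dataclass, field

@dataclass
class AdaptiveQuestion:
    behavior: str                    # core behavior this question probes
    prompt: str
    curated_options: list[str]       # examples we supply and have vetted
    user_options: list[str] = field(default_factory=list)  # free-text additions, pending moderation

    def add_user_example(self, text: str) -> None:
        """Store a user-supplied example; moderation happens elsewhere."""
        cleaned = text.strip()
        if cleaned and cleaned not in self.user_options:
            self.user_options.append(cleaned)

    def all_options(self) -> list[str]:
        return self.curated_options + self.user_options

q = AdaptiveQuestion(
    behavior="getting angry",
    prompt="In which situations does this happen? Choose all that apply, or add your own.",
    curated_options=["at home", "outside the home", "only when tired"],
)
q.add_user_example("during video calls")
print(q.all_options())
```

The moderation question stays open; the sketch just keeps user-supplied examples separate from the curated ones so they can be vetted later.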
Since it is not a static questionnaire,
- Thought must be put into the collection of responses for normative samples (is it even needed?).
Are the "collection of responses for normative samples" referenced here the options that will be given initially from which norms will be drawn? I think for this questionnaire, the patterns will be data-driven and constantly updating.
B. Users get visuals like the atlas of emotions, or results incorporating Arno’s multi-dimensional health space/taxonomy, and/or results like a personality test showing where they stand in ‘behavior space’.
I absolutely agree, and I think visuals and visual metaphors are some things to think about as we go, but I don't think we're there yet (I know, "vision" is in the name of this issue).
C. And a personal signature/key to the database, where their responses are connected to other components and objective measures in the database (noting Jon’s concern that too many associations would appear if we connect a response to other components).
I just mean we have to operationalize "connected" and "other components" because everything is/will be connected.
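One possible way to operationalize "connected" (purely illustrative; the record and field names are invented): every response carries the user's pseudonymous key, and links to other components are explicit, typed records rather than everything joining to everything.

```python
# Illustrative only: a pseudonymous participant key plus explicit, typed links
# between a questionnaire response and other database components.
# Record and field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class ResponseRecord:
    participant_key: str   # pseudonymous signature/key, shared across components
    behavior: str
    dimension: str         # e.g. "frequency", "context"
    value: str
    timestamp: str         # ISO 8601

@dataclass
class Link:
    participant_key: str
    source: str            # e.g. "questionnaire:attention:duration"
    target: str            # e.g. "task:reaction_time"
    relation: str          # the *kind* of connection, so not everything links to everything

# Only links we explicitly create exist, which keeps the number of associations manageable.
links = [
    Link("p-0001", "questionnaire:attention:duration", "task:reaction_time", "complements"),
]
```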
D. Ultimate aim: users use/retake the questionnaire as a symptom diary, monitoring responses over time. Clinicians customize the questionnaire to monitor symptoms subjectively/qualitatively and quantitatively (tbe).
Yes! Time is a crucial part of a eunoia pattern, and changes over time are superinformative!
What's "tbe"?
E. Dream/Bonus: Since the questionnaire is personalized and we take only the relative value (tbe), we needn’t worry about one person’s idea of attention span versus another’s. However, it would be great to have a few questions coupled with simple tasks on the smartphone/web to quantitatively gauge, e.g., motor coordination. Anxiety ‘patients’ take the questionnaire and then use ‘taction’ as a complement to it.
Again referencing #34, we might or might not be interested in between-individual differences; different individuals may have the same absolute change but different relative expressions thereof. We just need to stay mindful of what interpretations we make and especially what we communicate to users.
Hoffart, A., & Johnson, S. U. (2017). Psychodynamic and Cognitive-Behavioral Therapies Are More Different Than You Think: Conceptualizations of Mental Problems and Consequences for Studying Mechanisms of Change. Clinical Psychological Science. doi:10.1177/2167702617727096
Kendler, K. S., Zachar, P., & Craver, C. (2011). What kinds of things are psychiatric disorders? Psychological Medicine, 41(6), 1143–1150. doi:10.1017/S0033291710001844
Thanks Jon.
Q) We need to clarify our model (see issue #34). The way the above passage is phrased sounds like an essentialist model (one in which "a variety of concrete and observable defenses and symptoms are treated as indicators of the underlying causal entity—the core conflict. The concepts of the underlying causes are called latent constructs" (Hoffart & Johnson, 2017)), whereas I think we should be using a mechanistic property cluster model (one in which ...)
A) Thanks Jon! In short, I do not particularly disagree with any of the points raised, and the way we are currently proceeding does not bog us down or restrict us to any particular model. Maybe the wording ‘underlying cause’ caused confusion.
Possible scenarios (in answer to issue #34):
I feel we are not grouping symptoms based on their underlying causes but rather grouping examples or sub-behaviors under a parent/core behavior. I agree that distinct symptoms should have their own set of representative questions.
In the DSM or the E-SWAN, what I consider examples are treated as symptoms. For example, ‘gets angry outside the home’ and ‘gets angry at home’ are counted as two symptoms, whereas in our questionnaire they are subsumed when the user answers the context dimension for the underlying behavior ‘getting angry’. To drive home this point: I see a symptom as a combination of a behavior and a dimension.
In some cases, the idea that the user must answer the underlying cause (a higher-level or parent behavior) holds good, but I agree that it must not be standardized (e.g., only if a person understands nonverbal gestures can he use them; or, as Arno suggested, he may not understand nonverbal gestures but may be taught to use them). Thus, answering ‘no’ to the underlying cause does not necessarily indicate that the user does not use nonverbal gestures.
To clarify: we have a core or parent behavior under which sub-behaviors are classified. I noticed that if the core behavior has a particular structure (e.g., questions generated on frequency, duration, and context) and the user answers in a particular way, an applicable sub-behavior is likely to be answered in the same way. So the user can select the examples that apply to him and/or add his own examples. For the odd ones out, there must be a provision to generate the dimension questions plus questions for the odd behavior (see the sketch after the quotation below).
“The resulting network demonstrated that symptoms within the same diagnosis showed differential associations and indicated that the strategy of summing symptoms, as in current classification systems, leads to loss of information.” http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0137621
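A minimal sketch of the parent/sub-behavior structure described above, assuming the simplest possible rule (sub-behaviors are covered by the parent's answers unless flagged as odd ones out, in which case their own dimension questions are generated); all names are placeholders:

```python
# Minimal sketch: a core behavior with sub-behaviors/examples underneath it.
# Sub-behaviors are assumed covered by the parent's dimension answers unless
# flagged as "odd ones out", in which case their own questions are generated.
# All names are placeholders.
from dataclasses import dataclass, field

DIMENSIONS = ["frequency", "duration", "context"]

@dataclass
class SubBehavior:
    name: str
    odd_one_out: bool = False   # answers differ from the parent's

@dataclass
class CoreBehavior:
    name: str
    answers: dict = field(default_factory=dict)        # dimension -> answer
    sub_behaviors: list = field(default_factory=list)

def questions_for(core: CoreBehavior) -> list[str]:
    questions = []
    for sub in core.sub_behaviors:
        if sub.odd_one_out:
            # generate the full set of dimension questions for this example
            questions += [f"{sub.name}: {dim}?" for dim in DIMENSIONS]
        # otherwise the parent's answers are assumed to cover it
    return questions

anger = CoreBehavior("getting angry", answers={"frequency": "daily"})
anger.sub_behaviors = [SubBehavior("gets angry at home"),
                       SubBehavior("gets angry outside the home", odd_one_out=True)]
print(questions_for(anger))
```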
Q) Are the "collection of responses for normative samples" referenced here the options that will be given initially from which norms will be drawn? I think for this questionnaire, the patterns will be data-driven and constantly updating. A) I meant unless we define attention span quantitatively (eg great attention span implies scores >10 on this app), even if two people answer very attentive; person A’s very attentive could be Person B’s moderately attentive and I suspect it would not be wise to cannot take data as two people answered very attentive like how they probably do in questionnaires to generate normative samples
Q) Are you suggesting a multiple-choice + free-text format where a user could choose a given option and/or add his/her own? I like this idea, although I'm not sure how the user-supplied examples would be moderated or who/what would do said moderating. I wonder if a choose-all-that-apply instruction would be appropriate for this kind of question. Also, is this a now issue or a future issue?
A) I disliked the fact that all users answer a fixed set of questions, which loses a lot of information and is never personalized. I guess this is a future issue, as we should first build a strong database of questions from which follow-up questions would arise; this database would be enriched by questions added through crowdsourcing (initially vetted by us, I guess).
Thank you for this very informative exchange. I have one request to make these exchanges easier to search and understand when we or others revisit them in the future. Let us keep issues (1) concise, (2) focused on a single issue, and (3) very clearly titled. There are too many issues in this issue, an attempt to respond to a different issue within this issue, and an unclear title.
People first rate the ‘enhanced intensity’ (intensity/ability) of the 25 core behaviors.
Follow-up questions are generated for ‘weaknesses’ and ‘strengths’ along the dimensions: frequency, duration, context, etc.
In most cases, core behaviors have several examples under them. The intent is not to generate questions for all of them but to assume they are all implications of the underlying cause.
‘+1 questions’ (implications/sub-behaviors of the underlying cause, if present; e.g., attention to detail for attention) are generated with fewer dimensions than the core behavior (see the sketch at the end of this post).
A. Questions must adapt to the user and be personalized rather than a static version for all. This means users must be allowed to add their own examples. Since it is not a static questionnaire, thought must be put into the collection of responses for normative samples (is it even needed?).
B. Users get visuals like the atlas of emotions, or results incorporating Arno’s multi-dimensional health space/taxonomy, and/or results like a personality test showing where they stand in ‘behavior space’.
C. And a personal signature/key to the database, where their responses are connected to other components and objective measures in the database (noting Jon’s concern that too many associations would appear if we connect a response to other components).
D. Ultimate aim: users use/retake the questionnaire as a symptom diary, monitoring responses over time. Clinicians customize the questionnaire to monitor symptoms subjectively/qualitatively and quantitatively (tbe).
E. Dream/Bonus: Since the questionnaire is personalized and we take only the relative value (tbe), we needn’t worry about one person’s idea of attention span versus another’s. However, it would be great to have a few questions coupled with simple tasks on the smartphone/web to quantitatively gauge, e.g., motor coordination. Anxiety ‘patients’ take the questionnaire and then use ‘taction’ as a complement to it.
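Finally, a compact sketch of the flow described above (intensity ratings for the core behaviors, follow-ups for weaknesses/strengths, ‘+1 questions’ with fewer dimensions); the threshold and dimension lists are invented for illustration only.

```python
# Compact sketch of the described flow. The threshold, dimension lists, and
# choice of "reduced dimensions" are invented for illustration only.
CORE_DIMENSIONS = ["frequency", "duration", "context"]
REDUCED_DIMENSIONS = ["frequency"]          # '+1 questions' get fewer dimensions

def follow_up_questions(intensity_ratings: dict, sub_behaviors: dict) -> list[str]:
    """intensity_ratings: core behavior -> rating on a 1-5 scale.
    sub_behaviors: core behavior -> list of '+1' sub-behaviors."""
    questions = []
    for behavior, rating in intensity_ratings.items():
        if rating <= 2 or rating >= 4:      # flagged as a weakness or a strength
            questions += [f"{behavior}: {dim}?" for dim in CORE_DIMENSIONS]
            for sub in sub_behaviors.get(behavior, []):
                questions += [f"{sub}: {dim}?" for dim in REDUCED_DIMENSIONS]
    return questions

print(follow_up_questions(
    {"attention": 2, "motor coordination": 3},
    {"attention": ["attention to detail"]},
))
```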