lzim / teampsd

Team PSD is using GitHub, R and RMarkdown as part of our free and open science workflow.
GNU General Public License v3.0

systems_thinking_coding #78

Closed lzim closed 4 years ago

lzim commented 6 years ago

Hello Qualitative Workgroup!

Starting an issue for us to track our work on the systems thinking coding of Team Meetings.

The Systems Thinking Codebook is here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/codebooks

The coded team meetings are here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/coded_meetings

Thanks!

Lindsey

lzim commented 5 years ago

Hi Everyone on the Qualitative Workgroup!

@swapmush @staceypark @dlounsbu :smile:

lzim commented 5 years ago

Thanks Kathryn,

We want to try to set this up for our two-hour systems thinking coding meeting next Tuesday from 3-5PM. So, if you have any updates at all (progress, questions, barriers), please discuss them with the group on GitHub issue #78: https://github.com/lzim/teampsd/issues/78

This is far better than our emails.

Thanks!

Lindsey

teampsdkathryn commented 5 years ago

Hello Team,

I am wondering if setting this up on my computer in Outpatient mental health could be an issue for sharing with others since I am in a different building. Any thoughts?

teampsdkathryn commented 5 years ago

Which working directory do you want this coding project to be in?

teampsdkathryn commented 5 years ago

Hello Team,

I have made a first pass at completing this week's assignment. Stacey and I met for about a half hour trying to figure out which directory to put this in and whether or not I should use my computer in OMH to do this. We determined that we did not have "write privileges" to most drives, so that really leaves us with the default option that appears after opening RQDA. We also determined that I will use my computer in OMH. At this time, I am not aware of a way to link our computers together so we can code off the same documents. Hence, we determined that I would need to articulate the procedure and everyone would need to duplicate it on their own computer.

Overall, we had only a few issues:

  1. First, each person will need to install RQDA and maybe GTK+
  2. I was not able to attach individual levels to memos, just one memo for each code. That is an issue to be resolved in the future.
  3. I was not able to directly download files from GitHub to RQDA, so I created a folder within MyDocuments from which I could then import the files.
  4. I viewed this assignment as a pilot, so I only imported the 12 files from team1 so we can troubleshoot as a team.

Here is the procedure I came up with for our next meeting:

  1. Open RStudio and leave it running.
  2. Install RQDA on your computer by checking its box in the Packages list in RStudio. You may find that you need to install GTK+; a prompt will pop up if it is not already on your computer. The VA will allow you to install it, so do so. It will take about 5 minutes.
  3. The RQDA window should open automatically; if not, type RQDA().
  4. Click "New Project" and name the project: systems_thinking_coding. Choose your file path; I used the default MyDocuments path because I was not authorized to write to any other option.
  5. Click "Open Project", then click "Import Files". I was not able to import directly from GitHub, so I went to the shared research drive and copied and pasted the 12 team1 files into a separate folder in MyDocuments.
  6. Go back to "Import Files", navigate to your folder in MyDocuments, and import the files one by one into RQDA: click the "Files" button, click "Import", and the file should appear. I did this one by one for each of the 12 files, but I am sure there is a trick for importing more than one file at a time; I just do not know it.
  7. Once your files are imported, add the codes one by one: the Complexity code, the Feedback code, the Behavior code, and the Time code.
  8. Now we need to add memo codes for the five levels (0, 1, 2, 3, 4) for each code. I could not figure this out, so in a memo for each code I put "Levels 0,1,2,3,4" as a placeholder. It appears you can only add one memo per code, but there may be a way around this.
  9. Go to the Cases tab, click Add, and type: team1.
  10. Click the Attributes tab, click Add, and do this for module type, session number, discipline, time spent, attendance, and team size.

Type RQDA() if you need to get back into RQDA.
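For anyone who prefers the console route, here is a minimal R sketch of the same install-and-launch steps (this assumes RQDA is available from your package repository; installing it also pulls in the RGtk2/GTK+ dependency, which may prompt a GTK+ install the first time it loads):

```r
# Install RQDA once (equivalent to checking the box in the Packages pane).
install.packages("RQDA")

library(RQDA)  # load the package from within RStudio
RQDA()         # opens the RQDA GUI window; type RQDA() again if you close it
```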

This is as far as I went. I welcome your feedback.

@swapmush @staceypark @dlounsbu

lzim commented 5 years ago

I began to set-up the project on Friday.

Go to the Team PSD folder, navigate to the “qual_workgroup” folder and then to “r_qual_scripts” folder.

First, you can see screenshots of the tasks I completed organized in order by RQDA tab. You will see that I worked on things for tabs 1-6.

  1. Tab 1 – project – I set up the systems thinking project (this is the .rqda file that is in the “systems_thinking” folder – see below)
  2. Tab 2 – files – I only uploaded one file so far. But, it read it fine and I was able to do a sample code with it. So, we just need to upload the rest of the .txt team meeting files before tomorrow's 3PM meeting if we can. There is help online about how to batch import and work with these files.
  3. Tab 3 – codes – I set up the four codes for systems thinking: complex, feedback, behavior, time.
  4. Tab 4 – code categories – I set up each of the four codes to have four levels.
  5. Tab 5 - Attributes – the sessions of MTL (1 through 12 – see the fidelity checklist on GitHub to review the 12-session plan).
  6. Tab 5 – Cases – these are our teams, deidentified using the numbering system
  7. Tab 6 – Notes – I added the Team PSD members’ initials for whose notes they are.

Note: I also began a .Rmd file that will be our instruction file for using RQDA; it's called "rqda_script.Rmd". I haven't had time to fully edit and clean it up, but we can do this as we go tomorrow.

There were two additional dimensions that I wasn't able to research yet. Perhaps you can help us prepare for tomorrow by learning more about those?

Second, you can find the .rqda project file that I set up following those tab tasks according to our prior work in the systems thinking folder. It is named "systems_thinking.rqda".

Third, I found that I was only able to set up and run this .rqda project file from the two library locations I have under write control: "MyDocuments" and my "U-Drive." I expect that this will be true for you, too. For tomorrow, we should each try to set it up on our U-Drives so that we can get going. Stacey, Swap and Kathryn, please take my work and add the .txt files before our 3PM meeting.

Please copy the systems_thinking.rqda project file I created to your U-Drive, then put the de-identified team .txt files in the same working directory, and import them to your local copy of the systems_thinking.rqda project before the meeting tomorrow.
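A minimal base-R sketch of that copy-and-check step (the paths below are placeholders; substitute your own U-Drive mapping and the shared location of the project file):

```r
# Placeholder paths -- adjust to your own drive mapping and shared folder.
project_dir <- "U:/systems_thinking"
dir.create(project_dir, showWarnings = FALSE)

# Copy the shared project file into your own working directory.
file.copy(from = "path/to/shared/systems_thinking.rqda",
          to   = file.path(project_dir, "systems_thinking.rqda"))

# Confirm the de-identified team .txt files sit in the same directory
# before importing them into your local copy of the project.
list.files(project_dir, pattern = "\\.txt$")
```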

We will start our meeting in person, tomorrow, and then divvy up coding tasks after working through some examples.

@teampsdkathryn @swapmush @staceypark @dlounsbu

teampsdkathryn commented 5 years ago

Dear Team,

Time permitting, those of us who could use more background on R basics can take a self-paced course online for free or minimal cost. Please see the information below.

Site: www.edx.org

Course title: Data Science: R Basics

@lzim @dlounsbu @staceypark @swapmush @teampsdkathryn

lzim commented 5 years ago

Hi Kathryn and Qual Workgroup!

Thanks for sharing this resource. I recommend DataCamp, which was created by the same folks who brought us RStudio. I would start with the resources available in their first 9 free courses. It is the best match for our team!

We have worked through a planned curriculum in the past with Team PSD mentees from our NCPTSD training programs: http://lindseyzimmerman.com/r-datacamp/ DataCamp worked with us to make our free "class." However, the timing structure of a DataCamp course is a bit of a misfit for our research and quality improvement externship, internship, fellowship and residency programs, which all have different timing/learner cycles.

Even if we started a new course, however, everyone should start with those first 9 free courses, so please check them out; I highly recommend them 👍

Lindsey

@lzim @dlounsbu @staceypark @swapmush @teampsdkathryn

teampsdkathryn commented 5 years ago

Team PSD Coding Philosophy Draft: Kathryn Azevedo, Ph.D., October 11, 2018

Protection of the identity of the participants is the foundation of our PSD coding philosophy.

Narrative Data: Traditional Ethnographic Research

Coding allows us to analyze the data to draw out major themes in our qualitative textual narrative data. This is often referred to as ethnographic analysis.

Narrative analysis often involves "telling," "transcribing," and "analyzing" (Riessman, 1993). In practice, the first step of "telling" involves interviewing the participants. The survey instrument, an organized series of questions, usually asks for specific, discrete pieces of information, while the questions left toward the end are usually more open-ended, giving the participants a chance to open up and elaborate on their experiences. To ensure a more systematic examination of the qualitative data generated from interviews and field notes, we could choose to analyze our data by blending Riessman's narrative analysis techniques with the Spradley ethnographic method (Azevedo et al., 2005).

James Spradley, a prominent ethnographer, encouraged researchers to perform “domain analysis” as a way to guide the emergence of cultural themes. Cultural themes are defined as “any principle recurrent in a number of domains, tacit or explicit, and serving as a relationship among the subsystems of cultural meaning” (Spradley, 1980).

Domains consist of “cover terms” and a semantic relationship. The domains and their respective cover terms or related themes can be described as “folk terms” or “emic terms” or “native viewpoints” generated from the patients interviewed (Spradley, 1980).

Spradley's methodology is useful because it helps conceptualize the multiple meanings and experiences presented by our participant population into unifying cultural themes. Narrative data are analyzed to uncover cultural themes that give meaning to participants' lived experiences.

In clinical research, the uncovering of these cultural themes yields an organized volume of knowledge, feelings, and interactions with the health care system (Azevedo, 2005). Ultimately, the information produced from ethnographic research can be used to better serve Veterans.

This approach, however, is very anthropological. The fields of psychology, nursing, economics, and political science have also developed narrative/textual analysis techniques that add to the anthropological literature. We could further explore these avenues as well.

Health services research uses ethnography, but this style often distills the participants' voices to a few sentences where only a few quotes are used. Traditional ethnography lets the quotes tell the story.

Interrater Reliability

When coding, our team should strive for strong interrater reliability. Coders should meet regularly to ensure a unified interpretation of the codes. Agreement between coders can be monitored through percent agreement and Kappa testing. Kappa testing is the gold standard; it is well known to be time consuming (Cohen, 1960; Landis & Koch, 1977; Hruschka et al., 2004), but top health services research journals expect it. Hruschka et al. state the following:

” Achievement of perfect agreement is difficult and often impractical given finite resource and time constraints. Several different taxonomies have been offered for interpreting kappa values that offer different criteria, although the criteria for identifying “excellent” or “almost perfect” agreement tend to be similar. Landis and Koch (1977) proposed the following convention: 0.81– 1.00 = almost perfect; 0.61–0.80 = substantial; 0.41–0.60 = moderate; 0.21– 0.40 = fair; 0.00–0.20 = slight; and < 0.00 = poor. Adapting Landis and Koch’s work, Cicchetti (1994) proposed the following: 0.75–1.00 = excellent; 0.60–0.74 = good; 0.40–0.59 = fair; and < 0.40 = poor. Fleiss (1981) proposed similar criteria. Cicchetti’s criteria consider reliability in terms of clinical applications rather than research; hence, the upper levels are somewhat more stringent. Miles and Huberman (1994) do not specify a particular intercoder measure, but they do suggest that intercoder reliability should approach 0.90, although the size and range of the coding scheme may not permit this. In the studies presented below, we used stringent cutoffs at kappa 0.80 or 0.90, roughly between Cicchetti’s and Miles and Huberman’s criteria (Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W., 2004).”

If we decide to go this route, a kappa cutoff at .80 is reasonable; .90 is ideal, but this means a lot of coding consensus meetings.

Kappa testing involves downloading a few specific programs from the internet, learning them, and then developing an agreement exercise procedure. Usually one person is designated to perform this task, and ideally this person is not one of the coders. In practice, however, it usually is one of the coders, because few studies have a dedicated, independent statistician on the team.

Percent agreement is similar but does not require sophisticated statistical analysis. Anyone with a basic background in descriptive statistics can learn this task. Still, it takes preparation. One needs to develop an agreement exercise and test the team. Ideally, team members should strive to achieve 85 percent agreement, but this varies depending on the field and where one is aiming to publish.

We could also try a blended approach where after a training period, we perform Kappa testing until we achieve the designated IRR level. Then moving forward, we could do percent agreement.
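If we go the kappa route, the estimate itself does not require standalone programs; the irr package in R, for example, can compute both percent agreement and Cohen's kappa from a matrix of coder decisions. A minimal sketch with made-up illustration data (not our corpus):

```r
# install.packages("irr")  # one-time install
library(irr)

# Made-up illustration data: two coders' dichotomous decisions
# (0 = absent, 1 = present) for the same 10 text segments.
ratings <- cbind(coder1 = c(1, 0, 1, 1, 0, 1, 0, 0, 1, 1),
                 coder2 = c(1, 0, 1, 0, 0, 1, 0, 1, 1, 1))

agree(ratings)   # simple percent agreement
kappa2(ratings)  # Cohen's kappa, which corrects for chance agreement
```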

References

Azevedo, K., Nguyen, A., Rowhani-Rahbar, A., Rose, A., Sirinian, E., Thotakura, A., & Payne, C. (2005). Pain impacts sexual functioning among interstitial cystitis patients. Sexuality and Disability, 23(4), 189-208.

Hruschka, D. J., Schwartz, D., St. John, D. C., Picone-Decaro, E., Jenkins, R. A., & Carey, J. W. (2004). Reliability in coding open-ended data: Lessons learned from HIV behavioral research. Field Methods, 16(3), 307-331.

Spradley, J. P. (1980). Participant Observation. Holt, Rinehart and Winston.

Garro, L. Y. (1982). The ethnography of health care decisions. Social Science & Medicine, 16, 1451-1452.

Riessman, C. K. (1993). Narrative Analysis. Qualitative Research Methods Series, Vol. 30. Sage Publications.

@lzim @dlounsbu @staceypark @swapmush @teampsdkathryn

teampsdkathryn commented 5 years ago

Good Morning, Resending to David and Kathryn!

Team PSD Coding Philosophy Draft: Kathryn Azevedo, Ph.D., October 11, 2018

@dlounsbu @teampsdkathryn

lzim commented 5 years ago

@teampsdkathryn Thanks for your hard work on this on behalf of the team! Note that we are not 1) interviewing, 2) doing narrative coding, 3) ethnography, or 4) thematic analysis.

Team - @staceypark @dlounsbu @swapmush and @ericasimon

We have already clarified our qualitative philosophy to the extent that we will code using theory-based constructs that have been defined in prior research. See our systems thinking codebook.

Systems Thinking Codebook references are available in the TeamPSD Zotero Library in qualitative_workgroup -> systems_thinking

Maani & Maharaj (2004) - Complexity
Sweeney & Sterman (2007), Appendix B - System Behavior
Sweeney & Sterman (2007), Table 4 - Feedback
Sweeney & Sterman (2007), Table 6 - Change over Time

Now to fully operationalize our qualitative methods, we must establish procedures for determining the:

A) validity of our systems_thinking codebook_definitions

B) reliability of our coding_methodology

for the following:

1) constructs

2) and their degree

METHODOLOGY

We have done a lot of work on our codebook_definitions (A above) to date, but much more work and refinement of the definitions is required to establish their validity.

Only once we establish the validity of our codebook do we move on to issues of reliability, such as kappa coefficients.

Therefore, as part of our phased coding_methodology (B above) we outline our coding procedures.

CODEBOOK VALIDITY - Introduction Section Drafted in this Phase

  1. Reviewing the codebook together to finalize decision rules for systems_thinking codes (part 1 of training).
    • First, we will make decision rules for a) finalizing the code_definitions and/or b) revising/refining the codes later; these rules need to be documented well and described in manuscripts.
  2. Reviewing the codebook together to finalize decision rules for systems_thinking codes (part 2 of training).
    • Second, we will make decisions for applying the code_definitions.
    • Applying the code_definitions will require clarifying rule-in/rule-out criteria, if/then criteria, mutual exclusion definitions, etc., that provide clarity and can be consistently applied by all coders.

CODING RELIABILITY - Introduction and Methods Section Drafted in this Phase

Interrater reliability

  1. First, we will decide how to estimate interrater reliability for our purposes. There are many a) definitional, b) procedural, and c) analytic decisions that are incorporated into determining which reliability measure is appropriate.
    • a) definitional - these decisions include the rules for agreement and disagreement (e.g., coded by one coder but not the other; coded by both but differing at the word level, sentence level, etc.)
    • b) procedural - these decisions include the rules for who will code which components of the corpus (e.g., how many coders will code, how much coding overlap of the text corpus there will be (0-100%), whether and how coding will be blinded, etc.)
    • c) analytic - these decisions include whether we will use simple interrater reliability or an estimate that accounts for agreement due simply to chance (e.g., Cohen's kappa), and which R packages can be used with our RQDA coding package to calculate this estimate

  2. Second, we will justify our decisions regarding the level of reliability needed to a) establish an individual coder as a reliable coder, and b) end coder training.

    • This decision will follow from decisions 3a, 3b and 3c.

Separating our Training data and Coding [Analysis] data

  1. First, we will set aside a coder training dataset and we will set aside a dataset that we will code to reliability.
    • We will determine what sub-sample comprises our training dataset specifying our procedures.
    • Given the size of our corpus, I anticipate that it is reasonable/defensible to randomly select 20% of the total corpus for training (see the sketch below).
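A minimal sketch of one reproducible way to draw that 20% training subset (the folder and file names are placeholders, not our actual corpus):

```r
set.seed(2018)  # fixed seed so the draw can be documented and reproduced

# Hypothetical folder of de-identified meeting-note .txt files.
corpus_files <- list.files("meeting_notes", pattern = "\\.txt$")

n_train      <- ceiling(0.20 * length(corpus_files))
training_set <- sample(corpus_files, size = n_train)
analysis_set <- setdiff(corpus_files, training_set)
```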

Begin Coder Training

  1. Training will require coding our text corpus together with our finalized codebook_definitions and discussing and resolving any discrepancies in our coding.
    • We will end coding training when we have achieved #2 above - an adequate number of reliable coders.

Coding Analyses - Introduction, Methods and Findings Drafted in this Phase

  1. Coding - we will follow all the coding procedures decided/justified a priori (meaning before we begin any actual coding analyses) in steps 1-6 until the text corpus has been coded.
  2. Analyses - we will review and describe our coding_findings regarding the extent to which teams enlisted systems_thinking during their participation.
    • a) First, we will estimate the reliability of our coding during the coding analyses. Second, we will answer our key analytic questions:
    • b) describe observations of each construct (C.F.B.T.) [RQDA code] - do they differ, and if so, how? do teams converge, with one code highly prominent or absent?
    • c) describe levels of each construct observed across teams [RQDA code level 0-4]
    • d) describe high/low variation between teams [RQDA cases] in constructs [RQDA codes]
    • e) describe the codes in relation to MTL 12-session plan content [RQDA attributes]
    • f) estimate within-team improvement in systems thinking [RQDA cases by RQDA C.F.B.T. code levels 0-4]
  3. Third, we will identify the optimal ways to display and describe our findings in our manuscript.
    • This will include a) selecting example codes, b) producing tables, c) visualizations, d) preparing our open-science supplementary materials (.Rmd code files for full transparent replication and reproducibility, etc.).

Discussion and Dissemination of Findings - Discussion Section Drafted and Manuscript Submitted in this Phase

  1. We will document our scientific background and rationale for the procedures for each of steps 1-9 as we complete each step. Therefore, we will draft our discussion at this phase.
    • Finally, submit the manuscript!

teampsdkathryn commented 5 years ago

Hello Lindsey and team,

Nice overview of the planned process. I have obtained and printed out Maani & Maharaj 2004 and Sweeney and Sterman 2007. I am not seeing Appendix B. Perhaps it is embedded in another reference? Best Regards, Kathryn

lzim commented 5 years ago

@teampsdkathryn perhaps you're right; there are two Sweeney and Sterman 2007 articles that we used, I believe.

I can check ASAP

lzim commented 5 years ago

Hi Qual Workgroup! @dlounsbu @swapmush @staceypark and @teampsdkathryn

teampsdkathryn commented 5 years ago

Hello Team,

Since I opened and printed 2 of them, Maani and Maharaj 2004 and Sweeney and Sterman 2007, I will go back into Zotero and see what happened. I did not look at the rqda files. Best Regards, Kathryn

teampsdkathryn commented 5 years ago

Hello Team,

Maani 2004 and Sweeney 2007 re-uploaded into Zotero.

I ran into a loop where the wrong file would upload, but after several attempts and logging out and logging back in, the files correctly uploaded.

Best Regards, Kathryn

teampsdkathryn commented 5 years ago

Have we solidified our definitions of our constructs?

Complexity, Feedback, Behavior, Time

@teampsdkathryn @staceypark @dlounsbu @swapmush @ericasimon @lzim

dlounsbu commented 5 years ago

Yes, I believe we have. They are C, F, B and T, PLUS Level of systems thinking (1-4). Correct? @teampsdkathryn @staceypark @swapmush @ericasimon @lzim

teampsdkathryn commented 5 years ago

Have we operationalized the 4 concepts in one- to three-sentence definitions that we can refer to as we code? K

lzim commented 5 years ago

Hi @teampsdkathryn and Qual Workgroup @swapmush @staceypark @ericasimon

Yes @dlounsbu They are C, F, B and T, PLUS Level of systems thinking (1-4). In addition, there is an example sentence for each MTL module that shows the level.

Please refer to the updated systems_thinking_codebook_2018-10_16 here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/qual_codebooks/systems_thinking

Today we made 5 coding decisions:

5 Coding Decision Rules

  1. Coding the four Systems Thinking Codes (Complex, Feedback, Behavior, Time) WILL be dichotomous (0 = absent; 1 = present).

  2. When one of the four Systems Thinking Codes (C.F.B.T.) is present, assign it a level 1, 2, 3 or 4.

(screenshot: systems_thinking_codes)

  3. Coding the four Systems Thinking Codes (Complex, Feedback, Behavior, Time) will NOT be mutually exclusive (i.e., the four codes can overlap and be present in the same text - see the Examples tab of the codebook).

(screenshot: examples_headers)

  4. We need to code the facilitators' and team members' text in the meeting notes. Determine whether this will be 2 sets of codes or something we set up in the Settings tab.

  5. We will use 20% of the meeting notes as our training sample and 80% as our analysis sample. The training dataset will be balanced for a) note taker, b) team, c) time (early months vs. later months); see the sketch below.
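One way to approximate the balance in decision rule 5 is a stratified draw that samples roughly 20% within each note-taker/time stratum. A sketch assuming a hypothetical metadata table (notes_meta) with one row per de-identified meeting note:

```r
library(dplyr)

# Hypothetical metadata table: one row per de-identified meeting note.
notes_meta <- tibble::tibble(
  file       = sprintf("note_%02d.txt", 1:60),
  note_taker = rep(c("A", "B", "C"), times = 20),
  team       = rep(1:6, each = 10),
  period     = rep(c("early", "late"), times = 30)
)

set.seed(2018)
training_set <- notes_meta %>%
  group_by(note_taker, period) %>%  # add team as well if the strata stay large enough
  slice_sample(prop = 0.20) %>%     # roughly 20% from each stratum
  ungroup()

analysis_set <- anti_join(notes_meta, training_set, by = "file")
```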

Thanks!

lzim commented 5 years ago

Hi Everyone :smile_cat:

Following up on the Action Items from Lucid. The most up-to-date files are available in Lucid > Records > Documents, or in the Team PSD Qualitative folder.

Consistent with the action item assigned via Lucid, everyone should make sure they are working with these two updated files:
    • "rqda_script.Rmd", which provides the instructions and code for launching your work session.
    • "systems_thinking.rqda", which has the 24 team meeting notes for our training dataset already uploaded.

Please discuss your coding questions on our Systems Thinking Issue GitHub here: https://github.com/lzim/teampsd/issues/78

Everyone should try to spend one to two hours coding and go as far down the list of 24 team meeting notes as you can.

Going in order starting with team 1 will be best because it will give diversity of exposure to:

Thanks :+1: @swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn

teampsdkathryn commented 5 years ago

  1. Once we upload all the appropriate documents to MyDocuments, I assume we then open up RStudio. Is this correct? From there, do we open up RQDA with the library function, or can we simply type RQDA()?

  2. Once in RStudio, how do we correctly open up the .Rmd document to view the instructions?

  3. How do we correctly open up the systems_thinking.rqda document to correctly import the 24 team meeting notes into RQDA?

  4. Once we have the files in RQDA, do we have a particular color scheme we want to follow or perhaps that is in the instructions?

Thank you for considering these basic questions.

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim From the vid

dlounsbu commented 5 years ago

Hi @teampsdkathryn ,

In general, we each need to be able to successfully load the RQDA package on our own laptop. RQDA is a package accessed via RStudio, so RStudio needs to be successfully installed, too.

Once in RQDA, we should be able to launch our Systems Thinking coding project. Within the project, we should be able to associate all the source files (i.e., the team notes to be coded). And then we need to start coding the text, which would require that we have our codebook set up in RQDA, too.

Based on our last meeting, I think we have defined our codes. If I recall our decisions from the last meeting, we are coding for references in notes from team members that illustrate 'thinking' that is COMPLEX, FEEDBACK, BEHAVIORal, and/or TIMEbound. We are also coding for LEVEL OF SYSTEMS THINKING (1 to 4).

We need to make sure we have agreed upon definitions for all of these codes, and I am not sure where they are in GitHub. Maybe @staceypark does??

@dlounsbu

teampsdkathryn commented 5 years ago

Thank you David!

After opening RQDA, I opened the "systems thinking rqda" file and I see files that could be imported into the project. I opened the script for instructions and I see the instructions as well. So far so good.

It appears as if the batch importing instruction would be done in RStudio. Is that correct? However, I do see the files when I open RQDA and the systems thinking rqda, so maybe importing is not necessary, as LZ already did this when she set up the rqda project. LZ showed the importing step quickly at the last meeting, but I did not catch all the steps needed to import the files correctly from the rqda project she created into my project in RQDA.

Any input would be most appreciated when your schedule permits. Thank you.

I could not find the definitions either.

@dlounsbu @teampsdkathryn

lzim commented 5 years ago

@teampsdkathryn @dlounsbu @swapmush @staceypark @ericasimon

  1. You do not need the files, actually, because they were already imported by me to the .rqda project.
  2. You can simply open the .Rmd file using the File menu at the top of your RStudio window.
  3. You don't need to import the team files, but the instructions are in the .Rmd file if you need them.

The qualitative workgroup needs to watch the video that Swap posted and think through the settings tab and the colors for coding, etc.

Thanks All!

dlounsbu commented 5 years ago

Thank you, dear @lzim. Dear @swapmush, can you help me (us) locate your instructional video link? Please repost it or point us to the original posting.

teampsdkathryn commented 5 years ago

Hello David and team,

I have been reflecting on David's question: We need to make sure we have agreed upon definitions for all of these codes, and I am not sure where they are in GitHub. Maybe @staceypark does??

@dlounsbu

I, Kathryn, have re-typed our current definitions from what I was able to pull from the tabs in the systems thinking codebook. Here is what I see we have so far. My comments and questions are embedded:

COMPLEXITY: (Maani & Maharaj, 2004)
Definition: Relationships among elements/variables
Level 1: Basic one-to-one relationships, largely intuitive
Level 2: Complex one-to-one relationships
Level 3: Three-way relationships
Level 4: Big picture
Comment: Are these relationships between people? Providers and provision of services?

BEHAVIOR: (Sweeney and Sterman, 2007, Appendix B; can't find the appendix in this reference)
Definition: System behavior
Level 1: Demonstrates simple interconnections.
Level 2: Demonstrates awareness of system behavior and characteristics?
Level 3: Demonstrates the understanding of the behavior of systems?
Level 4: Demonstration of reflective, integrative thinking
Comment: Current definitions of complexity and behavior seem very similar. For example, what is the difference between relationship (complexity) and interconnections (behavior)? What is system behavior? How do we know we are not coding for other types of behavior, such as "clinic-specific" thinking, for example? What are systems thinking characteristics? What are key behaviors of systems we should be looking for?

FEEDBACK: (Sweeney and Sterman, 2007, Table 4)
Definition: Thinking in loops
Level 1: Open loop
Level 2: Closed loop
Level 3: Behavior of closed loop over time.
Level 4: Multiple closed loops
Comment: What is the definition of an open versus closed loop?

TIME: (Sweeney and Sterman, 2007, Table 6)
Definition: Reference to change over time
Level 1: No reference to time
Level 2: Non-specific references
Level 3: Specific time references
Level 4: Demonstration of fuller time dimension awareness
Comment: Are we coding/rating non-health-care-related references to time?

Coding Decision Rules:

  1. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) WILL be dichotomous (0=absent, 1= present)
  2. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) will NOT be mutually exclusive (four codes can overlap and be present in the same text)
  3. When the Systems Thinking Codes (Complexity, Feedback, Behavior, Time) are present, assign it/them a level using the examples tab. Comment: Isn’t this the “code categories tab”?
  4. We need to code the facilitators and team members text in the meeting notes. Determine whether this will be 2 sets of codes or something we set up in the settings tab. Comment: It may be helpful to code for both facilitators and barriers as a code.
  5. We will use 20% of the meeting notes as our training sample and 80% as our analysis sample. The training dataset will be balanced for a) note taker, b) team, c) time (early months versus later months). Comment: Once coder/rater inter-rater reliability is strong, I suggest we re-code the training corpus so we can use 100% of our sample for analysis.

Thank you for considering these comments and questions.

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

teampsdkathryn commented 5 years ago

It is helpful to review the systems thinking definitions that have emerged in our qualitative coding meetings over the past year and a half. This morning I reviewed my notes, and here is how LZ defined the 4 concepts in these meetings:

Notes from Qualitative Coding Meetings (5/10/17-present) point to more granular definitions:

COMPLEXITY:
11/28/17: LZ - Forest thinking: forces people to think about relationships between parts.
8/28/18: LZ - Increasingly multi-variable, interdependent complexity.
10/16/18: LZ - In the past, stakeholders (KJA added) thought in 1:1 bivariate relationships. This is too simplistic.

BEHAVIOR: Systems Thinking Behavior
10/17/17: LZ - Systems are endogenous; systems cause their own behavior.
11/28/17: LZ - System dynamics makes systems endogenous. Systems thinking makes the dynamics of behavior more transparent. We are testing causal relationships and looking at operational thinking by looking at the physics of relationships.
3/6/18: LZ - We think there is evidence of systems thinking in their decisions.
8/28/18: LZ - Behavior: increasingly link the observed system behavior to the structure.
10/16/18: LZ - What is behavior? Describing a path.

FEEDBACK:
8/28/18: LZ - Feedback is increasingly more complete: close their feedback loop, intermediate variables.
10/16/18: LZ - Causes are in the feedback loop. Most feedback loops are balancing loops because of limited resources and units of time.
11/28/17: LZ - Most causation errors are those where we only pay attention to the in-flows and not to the out-flows.

TIME:
8/28/18: LZ - Time: increasingly sophisticated understanding of change over time, i.e., worse before better.

Thank you for reviewing these notes.

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

dlounsbu commented 5 years ago

This is very good. Thanks for pulling these notes together.

Talk with you soon! Per @staceypark, I think we are starting at 3pm PST (6pm EST), instead of 3:30pm PST (6:30pm EST).

@teampsdkathryn

teampsdkathryn commented 5 years ago

After reviewing Team 1 notes, I have some questions we may choose to reflect on for today's coding meeting:

  1. Do we need to assign colors to different coders for future comparisons?
  2. How much text should we code per code domain? In other words, if we have a 3-paragraph passage that describes behavior, do we code it once or perhaps per idea?
  3. Sometimes it is not clear from the passages what the cases and attributes should be.
  4. Should we consider a level 0 for when the response is no?
  5. We are doing coding and rating at the same time. Perhaps we should code first and, once we have that down, rate the passages by level.
  6. Should we be coding for LZ's and the facilitators' work and suggestions?
  7. R questions: assigning the codes was easy, but navigating and figuring out how to assign the cases and attributes was not as easy. We need to make sure we are all doing it the same way, as there appear to be a few options.
  8. I see a facilitators and barriers paper emerging. Perhaps we can consider coding facilitators and barriers as a code.

Thank you for reviewing these comments and questions. @swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

dlounsbu commented 5 years ago

I think that the notes that @staceypark provided above define our codebook. Let's go over questions and comments about the codebook today, and try to get through @teampsdkathryn 's questions, too.

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

teampsdkathryn commented 5 years ago

Hmm, I'm not seeing Stacey's notes.

@dlounsbu

teampsdkathryn commented 5 years ago

Hello Team,

After reviewing David's talking points from our last meeting and LZ's prior talks during qualitative meetings, I have revised the coding definitions to reflect this rich discussion:

Systems Thinking Codebook, November 1, 2018

Coding Decision Rules:

  1. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) WILL be dichotomous (0=absent, 1= present)
  2. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) will NOT be mutually exclusive (four codes can overlap and be present in the same text)
  3. When the Systems Thinking Codes (Complexity, Feedback, Behavior, Time) are present, assign it/them a level using the examples tab.
  4. We need to code the facilitators and team members text in the meeting notes. Determine whether this will be 2 sets of codes or something we set up in the settings tab.
  5. We will use 20% of the meeting notes as our training sample and 80% as our analysis sample. The training dataset will be balanced for a) note taker, b) team, c) time (early months versus later months).

Proposed Definitions:

COMPLEXITY:
Definition: Relationships among elements/variables. Forest thinking: forces people to think about relationships between parts. Increasingly multi-variable, interdependent complexity. In the past, stakeholders thought in simplistic 1:1 bivariate relationships.
Level 1: Basic one-to-one relationships, largely intuitive
Level 2: Complex one-to-one relationships
Level 3: Three-way relationships
Level 4: Big picture

BEHAVIOR:
Definition: Systems Thinking Behavior. System dynamics makes systems endogenous; systems cause their own behavior. Systems thinking makes the dynamics of behavior more transparent. We are testing causal relationships and looking at operational thinking (mental map) by looking at the physics of relationships. We think there is evidence of systems thinking in stakeholder decisions. Behavior: increasingly link the observed system behavior to the structure. What is behavior? Describing a path. Looking for patterns of change over time. 2 types of behavior: 1) reinforcing or 2) balancing.
Level 1: Demonstrates simple interconnections.
Level 2: Demonstrates awareness of system behavior and characteristics?
Level 3: Demonstrates the understanding of the behavior of systems?
Level 4: Demonstration of reflective, integrative thinking

FEEDBACK: (Sweeney and Sterman, 2007, Table 4)
Definition: Thinking in loops. Stakeholders have made some sort of a circle in their thinking. Feedback is increasingly more complete: close their feedback loop, intermediate variables. Causes are in the feedback loop. Most feedback loops are balancing loops because of limited resources and units of time. Most causation errors are those where we only pay attention to the in-flows and not to the out-flows.
Level 1: Open loop: non-closed loop
Level 2: Closed loop
Level 3: Behavior of closed loop over time.
Level 4: Multiple closed loops

TIME: (Sweeney and Sterman, 2007, Table 6)
Definition: Reference to change over time. Time: increasingly sophisticated understanding of change over time, i.e., worse before better.
Level 1: No reference to time
Level 2: Non-specific references
Level 3: Specific time references
Level 4: Demonstration of fuller time dimension awareness

Thank you for reviewing these revised definitions. @swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

teampsdkathryn commented 5 years ago

Hello Stacey, I tried uploading the materials but the VA blocked the drag/select/open options. How do you get around it?

Thank you, Kathryn

@staceypark @teampsdkathryn

Hi Kathryn,

Thanks for sending these helpful references 😊

Again, as a reminder we are trying to avoid email as much as possible. We don’t want helpful documents like this to be lost in our inboxes!

Please upload these documents to GitHub under qual_workgroup > r > help_code with "human readable and machine readable" (all lowercase, no spaces, using underscores) titles, and continue using the Issue thread #78 for discussion.

staceypark commented 5 years ago

@teampsdkathryn Not sure why you're running into issues, but I was able to upload to GitHub without any blocks. Could you attach screenshots here so I can better help troubleshoot?

@dlounsbu @lzim @swapmush @teampsdkathryn @ericasimon I uploaded a fresh copy of the .rqda file with the new changes discussed last meeting:

  1. Assign cases to each file
  2. Assign file categories to each file
  3. Swap code categories and code domains

https://github.com/lzim/teampsd/tree/master/qual_workgroup/projects

Here's also a helpful video about the Attributes section. Perhaps we may want to rethink how we're currently using it. https://www.youtube.com/watch?v=epM0BNwE1RI

teampsdkathryn commented 5 years ago

Thank you Stacey! I tried coding in the new project. Question: So we are rating the passage before assigning the code domain? My interpretation of behavior-1, for example, was that the 1 was the level (or rating).

Thank you. @swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

lzim commented 5 years ago

Hello Qual Workgroup :smiley_cat:

I cannot find details in the Lucid meeting record or here about what additional work was completed, decisions made, or action items assigned for completion by our meeting on Tuesday.

I downloaded the .rqda file that Stacey posted here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/projects But, what's next?

Can the workgroup please document our procedures and next steps here on issue #78?

Thanks!

Lindsey @ericasimon @staceypark @teampsdkathryn @dlounsbu @swapmush

teampsdkathryn commented 5 years ago

Hello Team,

Logging into Lucid notes, I see the following meeting notes which are pasted below. We could supplement them with the following:

  1. We had a productive meeting from 3:00-5:30 where David discussed each concept in depth.
  2. Kathryn sent David the latest rqda project files.
  3. We started to code transcript 3 from Team 1 realizing we all are not yet on the same page with regards to the definitions.
  4. Erica pointed out that the ordering in the project could be changed. Erica, perhaps you could elaborate here?
  5. Action item 1: Stacey agreed to update the rqda project per Erica's observations.
  6. Action item 2: We agreed to have completed Team 1 coding for next week's meeting.
  7. Due date of November 6 could be added to the meeting notes.
  8. Although not assigned in the meeting, Kathryn further clarified the definitions with a GitHub posting based on the meeting's discussion.

Meeting Record

Tuesday, October 30, 2018

Purpose: Complete R21 systems thinking coding

Desired Outcomes: Review coded files and coding choices

Meeting Attendance

In Attendance

Lindsey Zimmerman (National Center for PTSD, Dissemination and Training Division), Erica Simon (National Center for PTSD), Kathryn Azevedo-Mendoza, Stacey Park, Swap Mushiana, David Lounsbury (Albert Einstein College of Medicine)

Decisions

Code Categories - Should be overarching 4 domains: complex, feedback, behavior, time

Codes - individual 4 coding levels of each domain

Action Item: Update rqda file - 1. Assign cases; 2. Assign file categories; 3. Swap code categories and code domains
Assigned: Stacey P.

Full Meeting Record

View this record online at https://meet.lucidmeetings.com/meeting/199717

1.0 Issues for Immediate Resolution

Key Workgroup Dependencies: • Discuss coded RQDA files from the 20 percent corpus selected to include files from across teams, time, and coders

Notes and Action Items

David's Notes

Complexity: Explaining how variables are related between appointments and patients

Feedback: thinking in loops, walking through a systems story

Behavior: pattern of change, a way that something is changing (i.e. increase/decrease in patient engagement), how the system is changing

Time:

Code Categories - Should be overarching 4 domains: complex, feedback, behavior, time

Codes - individual 4 coding levels of each domain

Action Item: Update rqda file - 1. Assign cases; 2. Assign file categories; 3. Swap code categories and code domains
Assigned: Stacey P.

Thanks!

Best Regards, Kathryn @ericasimon @staceypark @teampsdkathryn @dlounsbu @swapmush

lzim commented 5 years ago

My apologies, it looks like my "What's next?" message above may not have been clear enough ❓

  1. I definitely checked the Lucid record, so there is no need to add that record here as well. 😅
  2. I also saw that the action item from the meeting was completed by @staceypark per the thread above
  3. As I mentioned in my post, I downloaded the updated .rqda file from here https://github.com/lzim/teampsd/tree/master/qual_workgroup/projects and opened it:

However, between this issue and the Lucid meeting... I didn't see detailed records for our procedures:

  1. I don't see anything completed?
  2. I don't see anything decided in terms of how I should use the Settings Tab to code?
  3. I also didn't see anything assigned to the workgroup? But, according to @teampsdkathryn's last post, everyone agreed to complete systems thinking coding for Team 1, is that right?

Thanks! 🤔

@teampsdkathryn @dlounsbu @staceypark @ericasimon @swapmush

dlounsbu commented 5 years ago

@staceypark or @teampsdkathryn : Can either of you send me a zip file of the current deidentified team meeting notes we want to code in RQDA? At our last meeting there was a new 'set' of files that was discussed. Kathryn, you sent two files, and I thought one of them was the zip file, but it wasn't!

In addition, per our last call, we realized we needed to do some additional set up of the RQDA project environment/template. This would include reorganizing/swapping our project 'Categories' and 'Codes,' and also setting up coder 'Settings' (e.g., coder colors). I think Stacey was best equipped to do this for the coding team. @staceypark , please let us know if you can do this or if you need to consult with us further.

Thanks for your assistance! @dlounsbu

staceypark commented 5 years ago

@dlounsbu Recapping decisions made last time:

  1. The cases and file categories have already been assigned to all of the .txt files, and the .rqda file has been updated
    • Cases which are teams are assigned by right clicking on each .txt file
    • File categories which are the coders are assigned by right clicking on each .txt file
  2. The code categories and code domains have been reorganized, and the .rqda file has been updated accordingly
    • Code categories refer to the four higher order categories of systems thinking: CFBT.
    • Codes can be assigned to code categories and refer to degree of system thinking, e.g., behavior_1, behavior_2, etc.
  3. Settings tab: Pick a color for coding. The case color is gold and will not be changed. Please update the tab with the color we decided on for you last time.
    • Erica - blue violet
    • Stacey - lavender
    • Swap - indian red
    • David -
    • Kathryn -
    • Lindsey - aquamarine

You do not need the zip file of the deidentified notes. You only need to download the updated .rqda file from the link here: https://github.com/lzim/teampsd/tree/master/qual_workgroup/projects

This already has all of the deidentified notes uploaded and the changes discussed above have been made.
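Since an .rqda project file is just a SQLite database, one quick way to sanity-check the downloaded copy without opening the GUI is to list its tables from R. A sketch using DBI/RSQLite (the path is a placeholder, and table names vary by RQDA version, so they are listed rather than assumed):

```r
library(DBI)
library(RSQLite)

# Placeholder path -- point this at your downloaded copy of the project file.
con <- dbConnect(SQLite(), "systems_thinking.rqda")

dbListTables(con)  # see which tables this version of RQDA created
# dbReadTable(con, "a_table_name_from_the_list_above")  # preview one table

dbDisconnect(con)
```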

dlounsbu commented 5 years ago

Great and thanks!!

teampsdkathryn commented 5 years ago

Hello Everyone,

Just to clarify, Stacey, have you made any changes to the rqda file this week? In other words, is the one I downloaded and coded last week still the one we are using?

In addition please note the following: My color is blue.

Erica - blue violet

Thank you and have a great week coding! Best Regards, Kathryn

@swapmush @ericasimon @staceypark @dlounsbu @teampsdkathryn @lzim

staceypark commented 5 years ago

@teampsdkathryn I haven't made any new file changes since the update I posted 6 days ago with the changes we outlined last time we met (assigning cases, file categories, code categories, and codes)

dlounsbu commented 5 years ago

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon : Huge apologies, but I am not going to be able to attend the Qual Workgroup Meeting this afternoon. This week is problematic due to multiple, competing deadlines, and such. Will need to make it up to the group later.

lzim commented 5 years ago

Hopefully you can keep up with the Lucid meeting @dlounsbu!

We tried to keep good documentation of all the decisions made

teampsdkathryn commented 5 years ago

I am at the American Anthropology Association Conference this week.

Below is a revised definition of behavior based on our qualitative meeting. With regards to differentiating levels of behavior, here is my draft suggestion:

BEHAVIOR: Definition: Systems Thinking Behavior. Systems thinking behavior describes a trend over time. How health system variables change can be described as flows, not just a simple snapshot in time. System dynamics makes systems endogenous; systems cause their own behavior. Systems thinking makes the dynamics of behavior more transparent. We are testing causal relationships and looking at operational thinking (mental map) by looking at the physics of relationships. We think there is evidence of systems thinking in stakeholder decisions. Behavior: increasingly link the observed system behavior to the structure. What is behavior? A path.

Level 1: Demonstrates simple interconnections of the relationship between appointments and patients.
Level 2: Demonstrates simple numerical awareness of system behavior and characteristics between appointment availability and number of patients.
Level 3: Demonstrates the understanding of the behavior of systems by articulating how the proportion of available appointments is impacted by patient demand.
Level 4: Demonstration of sophisticated, reflective, integrative thinking where the stakeholder can describe the relationships between appointments, patients served, and provider availability, and can offer novel suggestions on how to improve health service delivery.

Please review and revise for further clarity and specificity.

Happy Thanksgiving:)

Best Regards, Kathryn

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon :

lzim commented 5 years ago

Thanks @teampsdkathryn! Have a great meeting :smile:

teampsdkathryn commented 5 years ago

Welcome back from break! I thought it would be useful to re-post our notes from the last meeting so we have all the coding instructions in one place that can be accessed within VA.

Meeting Record - Tuesday, November 13, 2018

We will not meet Tues 11/20 or Tues 11/27 for qualitative. Our next meetings will be Tues Dec 4th (Lindsey out), 11th, and 18th.

Change time codes to reflect:
0 = no reference to time
1 = non-specific time
2 = specific time (behavior expected; specific value; increase/decrease)
3 = fuller awareness of time (short/long term expected; better before worse/worse before better)
4 = accurate time (system behavior as a function of the feedback; contingent on time)

Coding at the sentence/word/phrase level.

Coding both team and facilitator. (Note: for the testing data set, we will focus on code category and level and will wait on separating out team from facilitator.)
Coder agreement includes agreement that a code is not present as well as that it is present.
Walking through the diagram should pull for complexity and feedback.
Walking through the results dashboard should pull for behavior and time.
In the QHFD process, you should see feedback and behavior in the hypotheses and findings.
Complexity should describe either a) the relationship between two or more variables (e.g., patient start rate, patient ending rate: same unit, different variables) or b) two or more units (e.g., appointments and patients).

Action Item: We need to differentiate levels of behavior in the code book.
Assigned: David L., Erica S., Kathryn A., Lindsey Z., Stacey P., Swap M.
Due Date: Nov 16
1.0 Issues for Immediate Resolution

Key Workgroup Dependencies: qual_workgroup, quant_workgroup, hq_workgroup

  1. Discuss qualitative coding of team 1 in the training corpus:
    • complexity
    • behavior
    • feedback
    • time
  2. Relevant GitHub Issue:
    • https://github.com/lzim/teampsd/issues/78

Best Regards, Kathryn

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon

teampsdkathryn commented 5 years ago

Hello Team,

I have revised the codebook today to reflect the decisions reached at our last qualitative meeting. It is available on the server and below. Please let me know if there are questions or concerns and/or the need to make further revisions. Happy coding!

Systems Thinking Codebook, November 26, 2018

Coding Decision Rules:

  1. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) WILL be dichotomous (0=absent, 1= present)
  2. Coding for Systems Thinking Codes (Complexity, Feedback, Behavior, Time) will NOT be mutually exclusive (four codes can overlap and be present in the same text)
  3. When the Systems Thinking Codes (Complexity, Feedback, Behavior, Time) are present, assign it/them a level using the examples tab.
  4. We need to code the facilitators and team members text in the meeting notes. Determine whether this will be 2 sets of codes or something we set up in the settings tab.
  5. We will use 20% of the meeting notes as our training sample and 80% as our analysis sample. The training dataset will be balanced for a) note taker, b) team, c) time (early months versus later months).

  Interrater Reliability:
  6. Coder agreement includes agreement that a code is not present as well as that it is present.

  Coding Guidelines:
  7. Coding at the sentence/word/phrase level.
  8. Coding both team and facilitator. (Note: for testing data set, we will focus on code category and level and will wait on separating out team from facilitator).
  9. Walking through the diagram should pull for complexity and feedback.
  10. Walking through the results dashboard should pull for behavior and time.
  11. In the QHFD (Questions, Hypothesis, Findings, Decisions) process, you should see feedback and behavior in the hypotheses and findings.

Proposed Definitions:

COMPLEXITY:
Definition: Complexity should describe either a) the relationship between two or more variables (e.g., patient start rate, patient ending rate: same unit, different variables) or b) two or more units (e.g., appointments and patients). Forest thinking.
Level 1: Basic one-to-one relationships, largely intuitive
Level 2: Complex one-to-one relationships
Level 3: Three-way relationships
Level 4: Big picture

BEHAVIOR:
Definition: Systems thinking behavior describes a trend over time. How health system variables change can be described as flows. System dynamics makes systems endogenous; systems cause their own behavior. Systems thinking makes the dynamics of behavior more transparent. We are testing causal relationships and looking at operational thinking (mental map) by looking at the physics of relationships. We think there is evidence of systems thinking in stakeholder decisions. Behavior: increasingly link the observed system behavior to the structure. We are looking for a movie, not a snapshot in time.
Level 1: Demonstrates simple interconnections of the relationship between appointments and patients.

Level 2: Demonstrates simple numerical awareness of system behavior and characteristics between appointment availability and number of patients.

Level 3: Demonstrates the understanding of the behavior of systems by articulating how the proportion of available appointments is impacted by patient demand.

Level 4: Demonstration of sophisticated, reflective, integrative thinking where the stakeholder can describe the relationships between appointments, patients served, and provider availability, and can offer novel suggestions on how to improve health service delivery.

FEEDBACK: (Sweeney and Sterman, 2007, Table 4)
Definition: Thinking in loops. Stakeholders have made some sort of a circle in their thinking. Feedback is increasingly more complete: close their feedback loop, intermediate variables. Causes are in the feedback loop. Most feedback loops are balancing loops because of limited resources and units of time. Most causation errors are those where we only pay attention to the in-flows and not to the out-flows. 2 types of feedback: 1. Reinforcing 2. Balancing.
Level 1: Open loop: non-closed loop
Level 2: Closed loop: return to the variable you started with.
Level 3: Behavior of closed loop over time.
Level 4: Multiple closed loops

TIME: (Sweeney and Sterman, 2007, Table 6)
Definition: Reference to change over time. Time: increasingly sophisticated understanding of change over time, i.e., worse before better.
Level 0: No reference to time
Level 1: Non-specific time
Level 2: Specific time (behavior expected; specific value; increase/decrease)
Level 3: Fuller awareness of time (short/long term expected; better before worse/worse before better)
Level 4: Accurate time (system behavior as a function of the feedback; contingent on time)

Best Regards, Kathryn

@lzim @staceypark @swapmush @teampsdkathryn @ericasimon