jamesallenevans opened 7 months ago
Good afternoon. My first question relates to methodological challenges: what are some of the primary methodological challenges faced when integrating data across different sources and time scales, such as surveys, social media, and laboratory observations?
Second, how can the findings from your Screenomics research be applied in practical settings? Are there particular domains or industries where this research could have a significant impact? Furthermore, could you discuss some examples of how your research has been, or could be, used to foster positive outcomes in communities or populations?
Thank you.
Hi Prof. Ram, Thank you for presenting your research. The Screenome Project seems like an ambitious but incredibly valuable project. I'm specifically interested in the rationale behind collecting screenshots rather than implementing some other way to collect phone usage data, since screenshots seem computationally expensive to collect and analyze.
A large concern with collecting such granular data from participants' devices is privacy. Though the paper mentions mitigating privacy and security concerns through encryption, secure storage, and de-identification, I was wondering if you could elaborate on these measures, with particular emphasis on de-identification. How can we be sure that the data are truly de-identified? We have seen in the past that supposedly anonymized survey data can be re-identified, so what makes this data different?
Hi Professor Ram, thanks for sharing your research. How do you think the changes to Molenaar's manifesto will affect how we study and apply findings to help people with their digital lives and well-being? Are these changes at risk of being co-opted by ill-intentioned agents who might benefit from information at this level of granularity?
I'm eager to understand how findings and methodologies from screenomics can be applied in real-world scenarios. Could you highlight any specific domains or industries where this research might have a significant impact? If there are no clear applications yet, is there a particular issue we should deal with?
Collecting fine-grained digital data for research is becoming a widely accepted practice, and that acceptance itself builds an epistemological framework around the practice. Considering the granularity of your approach, how would you respond to the adverse effects of the project, not in terms of privacy concerns but from a surveillance perspective (e.g., its normalization) and the role of power within scientific research? This seems especially important considering an unavoidable(?) feedback loop between science and industry.
Thank you for sharing! My question is: given the increasing integration of digital technologies in everyday life, how can the Screenomics paradigm be used to inform public policy or interventions aimed at addressing digital well-being or reducing screen-time-related issues in various populations?
Thank you for sharing your research with us. I am really interested in your idea of the "screenome" - speaking for myself, I would love to see an analysis of what I see and do on screen, just to learn how digital content is shaping my mind and thoughts in ways I do not realize. I wonder how you think about the benefits of such research for participants? How can we make the project attractive enough for people to share their data?
Thank you for sharing this exciting new dataset for studying human behaviors in naturalistic settings! If we represent each screenshot as an embedding vector in the representation space, we will obtain an average (daily) temporal trajectory for each participant. -- Will we observe shared trajectories across individuals? On the other hand, if we cluster these trajectories, will we obtain clusters that are interpretable in terms of, for example, personalities, attention patterns, decision patterns, etc.?
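For concreteness, here is a minimal sketch of the kind of analysis this question describes, under purely illustrative assumptions: screenshot embeddings are faked as random vectors, each participant's days are averaged into one daily trajectory, and the trajectories are clustered. None of the names or array shapes here come from the Screenome pipeline itself.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative assumption: embeddings[p] is a (days, timesteps, dim) array of
# screenshot embeddings for participant p, produced by some image encoder.
# Random data stands in for real embeddings.
rng = np.random.default_rng(0)
embeddings = {p: rng.normal(size=(30, 96, 64)) for p in range(20)}

# Average over days to get each participant's mean daily trajectory,
# then flatten that trajectory into a single feature vector.
trajectories = np.stack([emb.mean(axis=0).ravel() for emb in embeddings.values()])

# Cluster participants by their average daily trajectory; whether the resulting
# clusters line up with personality, attention, or decision patterns is exactly
# the empirical question being asked.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(trajectories)
print(kmeans.labels_)
```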
Thank you for sharing your interesting research! My question is: as innovative methods for fine-grained digital data collection and processing continue to develop, from what perspective, and how, will the techniques and conclusions of Screenomics influence the relationship between people and their electronic devices?
Interesting research! The paper presents a comprehensive analysis of human screen time through innovative methodologies like the Human Screenome Project. How might we refine these research methodologies to capture nuanced aspects of digital interactions, such as their context, duration, and emotional triggers, for a more holistic understanding of their impact on individuals' well-being?
Wow! This is very interesting, thank you! I have a question regarding the data collection part... From your experience, which kinds of people are willing to share such private data? Are there any patterns in demographics, etc.?
Very interesting research! You mentioned applying foundational AI models to individual-specific behavior datasets to enhance model transferability and predictive capabilities. Could you elaborate on how these AI models' transferability is assessed and optimized in practical applications? Specifically, how do you address potential 'out-of-distribution' issues when transferring from one task or dataset to another that is very different?
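One simple way transferability gets probed in practice, sketched below with synthetic per-person datasets (the feature/label setup is an assumption for illustration, not the models from the talk), is to compare within-person accuracy with cross-person accuracy; a large gap is a basic symptom of the out-of-distribution problem the question raises.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def person_dataset(w, n=600):
    """Hypothetical per-person data: shared features, person-specific label rule."""
    X = rng.normal(size=(n, 10))
    y = (X @ w > 0).astype(int)
    return X, y

X_a, y_a = person_dataset(rng.normal(size=10))  # person A
X_b, y_b = person_dataset(rng.normal(size=10))  # person B, different behavior rule

# Within-person: train on person A, test on held-out data from person A.
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(X_a, y_a, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(Xa_tr, ya_tr)
within = accuracy_score(ya_te, model.predict(Xa_te))

# Cross-person: apply the same model, unchanged, to person B.
cross = accuracy_score(y_b, model.predict(X_b))
print(f"within-person: {within:.2f}, cross-person: {cross:.2f}")
```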
Hi Professor Ram,
Thank you for sharing this novel research! Is it possible that individuals who are willing to share their screenomics data with academics introduce a bias similar to that of self-report data, given that they need to be willing to share in the first place? Also, how do you ensure that the screenshot data collected, which may contain private and sensitive information, are adequately protected and anonymized to maintain participants' privacy? In my personal experience, whenever I take a screenshot it is to document important information, often private information or information that needs context to be understood.
Thanks for sharing! The shift from self-reported surveys to data collected on real behaviors is critical for the accuracy of such studies. For the HSP, as the data scale up dramatically over time, what do researchers need to pay attention to? How do you think individuals can benefit from person-specific modeling?
Thanks for sharing your research! You propose a diverse array of methodologies for studying human dynamics via Screenomics. Could you elaborate on the specific types of methods or algorithms that are best suited for handling the high-dimensional, time-series data generated in these studies? How do these methods help in capturing the "zooms, tensions, and switches" in human behavior patterns?
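As one concrete (and deliberately simplistic) illustration of how a "switch" could be operationalized, the sketch below counts transitions between app categories in a synthetic categorical time series and then re-aggregates to a coarser time scale; the category labels and sampling interval are assumptions for illustration only.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
# Hypothetical app category observed every 5 seconds for one hour (720 samples).
categories = rng.choice(["social", "email", "news", "video"], size=720)

# A "switch" is any transition between consecutive categories.
fine_switches = int(np.sum(categories[1:] != categories[:-1]))
print(f"switches per hour at 5-second resolution: {fine_switches}")

# Re-aggregating to one-minute blocks (majority category per minute) shows how
# the apparent switching rate depends on the zoom level chosen.
minutes = categories.reshape(60, 12)
coarse = np.array([Counter(block).most_common(1)[0][0] for block in minutes])
coarse_switches = int(np.sum(coarse[1:] != coarse[:-1]))
print(f"switches per hour at 1-minute resolution: {coarse_switches}")
```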
Thank you very much for sharing. I am curious: given that the study uses custom-built software and the content of the study may involve private information, how did the research team design the experimental process to minimize the observer effect?
Professor Ram, your approach in the Screenomics project utilizes screenshots to capture the digital behavior of participants, providing a detailed temporal and content-focused dataset. Considering the potential biases introduced by the voluntary nature of participant data sharing, how do you address these biases in your analysis? Additionally, could you elaborate on the specific AI and machine learning models you employ to manage the complexity and high dimensionality of the data? How do these models handle inherent challenges such as data sparsity and the potential for overfitting?
Hi Prof. Ram,
Thank you for coming down to share your research! I am curious whether the Hawthorne effect applies to your studies: do individuals alter their behavior because they know they are under surveillance?
Hi Dr. Ram,
Thanks for sharing this incredible work! It's exciting to learn that you combine person-specific data and models with transfer learning in foundation models. I have some questions about the data -- it seems that you only collect screen data on smartphones, while it is becoming more common nowadays for people to have more than one electronic device, such as a laptop or a tablet, which could make the data incomplete. Another issue is that screen activity does not necessarily correspond to actual activity. For example, one might have their phone screen on (in a chat app or playing a video) during a meeting without actually paying attention to the screen, which would affect their state differently than when they are fully concentrated on their phone. How much do you think your current data are influenced by these situations, and what are some ways to mitigate that influence?
Thank you for sharing this interesting research. The integration of AI foundation models with the Human Screenome Project data is a novel approach. However, how do you address potential biases inherent in these AI models, especially given the highly personal nature of the data involved? What methodologies are being considered or developed to ensure that these biases do not undermine the validity of personalized predictions and interventions derived from the models?
Can a similar N=1 individual-modeling approach be applied to data collected from a small collective of human beings, say a family, without moving towards an N > 1 model? I am curious whether the principles of model transferability are limited to individuals (N=1), or whether we can gain insights about the dynamics of collective organizations (e.g., N=4 treated as N=1) using a similar approach and transfer them to other similarly sized collectives. Or is this a regression into already established statistical ways of thinking and generalizable modeling techniques?
Thanks for sharing your inspiring study. What do you think of the difference between human-screenome interaction and interaction in Virtual/Augmented Reality? Does hardware development change people's understanding and use of smart devices? Would that change the concept of the Human Screenome Project?
Thank you for sharing! This might be a slight digression, but I was wondering how super-intensive longitudinal data paradigms could enhance our understanding of microeconomic fluctuations within populations, particularly in response to external economic shocks. Furthermore, how might these insights inform the design of more resilient and adaptive economic policies that can dynamically adjust to real-time changes in consumer behavior and market conditions? Do you see any scope for applications of such research in other adjacent social sciences?
Hi Professor Ram,
I guess here we have longitudinal data, and I wonder whether users' behavior data are stationary as a time series. If so, can we extract certain patterns/habits or find some "unobserved" heterogeneity, just as Bonhomme, Lamadon and Manresa detect unobserved heterogeneity with long-panel data?
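A minimal sketch of the stationarity check being asked about, using a synthetic daily screen-time series rather than actual Screenome data, would be an augmented Dickey-Fuller test from statsmodels:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(3)
# Hypothetical daily minutes of screen time for one user over a year,
# with a slow upward drift that makes the series non-stationary.
daily_minutes = 180 + 0.2 * np.arange(365) + rng.normal(scale=20, size=365)

stat, pvalue, *_ = adfuller(daily_minutes)
print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}")
# A large p-value fails to reject the unit-root null, suggesting the series
# should be detrended or differenced before extracting patterns or
# heterogeneity in the spirit of Bonhomme, Lamadon and Manresa.
```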
The article highlights the potential of zero-shot learning with LLMs for person-specific modeling. However, how can researchers ensure that the embeddings these models learn from massive, general datasets are not biased and do not lead to inaccurate representations of individual behavior, especially for underrepresented groups? LLMs' training data may contain biases that could skew the interpretation of individual behavior.
Hi Professor Ram, this is very interesting research! I can't wait to see more empirical papers coming out of your idea. I'm curious about the sociological questions people could ask with the idea of screenomics, and the ethical concerns of such a research method. How could this idea be applied in research on digital labor?
Hi Prof. Ram, thank you for sharing the interesting research. I have two questions.
Firstly, in your recent work, you highlight the importance of using multiple time-scales to understand intraindividual variability. Could you discuss how these different time-scales can specifically enhance our understanding of human behavioral dynamics and potentially lead to better intervention strategies?
Secondly, your talk describes the Screenomics paradigm and its use in capturing every imaginable aspect of human behavior. What are some of the most surprising or counterintuitive findings you have encountered using this paradigm?
In the context of the Human Screenome Project, how do the practices of transfer learning and the use of AI foundation models raise ethical considerations regarding individual privacy and data security? Considering the detailed nature of the data captured (every screen interaction), what safeguards are necessary to ensure that the transfer of personalized models does not lead to misuse or unintended consequences?
Thanks for sharing. I have one question that is more closely related to the industry. How can the methods and findings from person-specific AI models be operationalized within digital product design to enhance user experience, engagement, and well-being while ensuring user privacy and data security?
Thanks for sharing your inspiring work. In your research, you discuss how technology dependence can also bring about potentially positive changes, such as cases where screen time may be associated with some negative mental health outcomes but can also enhance knowledge acquisition and social connection. In this case, how do you think research should balance the task of revealing potential negative impacts of technology use with emphasizing its positive effects?
Thanks so much for sharing! Given the extensive scope of your research outlined in "Modeling at Multiple Time-Scales: Screenomics and Other Super-Intensive Longitudinal Paradigms," I would be interested in learning how you navigate the ethical considerations regarding privacy and consent in the collection and analysis of intensive longitudinal data, particularly from personal devices and social media. Could you also share the precautions taken to maintain data security and protect participant anonymity in your studies?
Thank you for sharing your work! How will the project implement data compression techniques to handle the extensive volume of screen captures efficiently without losing significant data fidelity?
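One family of techniques the question points at (an assumption for illustration, not the project's documented pipeline) is perceptual deduplication: store a frame only when it differs enough from the last stored frame, so storage grows with on-screen change rather than with the raw capture rate. A minimal sketch using a hand-rolled difference hash over Pillow images:

```python
from PIL import Image

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: compare adjacent pixels of a small grayscale thumbnail."""
    img = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def keep_screenshot(frame: Image.Image, previous_hash, threshold: int = 4):
    """Return (keep, reference_hash): keep the frame only if it differs
    enough from the last stored frame; otherwise skip the near-duplicate."""
    h = dhash(frame)
    if previous_hash is None or hamming(h, previous_hash) > threshold:
        return True, h
    return False, previous_hash
```

Near-duplicate screening like this trades a small risk of missing subtle changes for large storage savings; where fidelity matters, the threshold can be lowered or dropped entirely.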
Thanks for sharing. I have a question regarding the first paper. What are the potential ethical implications of using such detailed monitoring methods, and how could these be mitigated?
Thanks for sharing! Given the complexities inherent in multi-time scale modeling, particularly with the Screenomics paradigm, what are the most significant risks or potential biases that could arise from overly relying on digital traces to understand human behavior? How should researchers mitigate these risks?
Thanks for sharing! In your paper, you mention the methodological invocation of 'zooms, tensions, and switches' (ZOOTS). Could you provide more specific examples of how ZOOTS are applied in practical research settings, particularly in terms of understanding human behavior changes over short and long time scales?
Thank you for the intriguing paper! The data visualizations from the presentation are so creative and interesting. My question might not relate directly to the content of the paper, but I am wondering what your opinion is on the trade-off between the complexity of a visualization and its ability to convey meaning. How did you come up with these kinds of visualizations?
This paper examines the use of AI foundation models in analyzing individual behavior, focusing on the Human Screenome Project. It emphasizes the importance of model transferability over generalizability for a deeper understanding of, and intervention in, digital lives. The discussion also covers AI's potential to enhance personal experiences, ethical data use, and the move towards self-supervised, person-specific learning.
Hence, I am wondering how AI foundation models can maintain the balance between personalization and privacy, especially when dealing with sensitive individual behavior data.
Professor Ram, given the unique nature of the Screenomics dataset, have you encountered any ethical concerns regarding participant privacy or the potential misuse of the collected data? If so, how have you addressed these concerns, and what measures have you put in place to ensure the responsible use of the dataset in your research?
Thanks for presenting your interesting research! I would like to know how we can address the concern that deliberate behavior change, when people are aware they are being recorded, leads to biased records of their digital lives.
Fascinating presentation! I'm intrigued by your thoughts on motivating increased participation in this project to ensure a thoroughly comprehensive dataset capturing screen usage among individuals across various age brackets and geographical locations. How do you foresee encouraging broader engagement to achieve such a diverse and extensive data collection?
Thank you for sharing! Considering the diversity and fluidity in methodological approaches you advocate for, what are some of the challenges you face in integrating data across different time-scales and sources, and how do you address data comparability and reliability issues?
Thank you very much for sharing the interesting research. I am curious about the timeliness of Screenome data. As internet interaction patterns, recommendation algorithms, and trending topics evolve rapidly, does the accuracy of artificial intelligence, trained using Screenome data to predict people's behavior, decrease over time? Or is there a more stable mechanism underlying the rotation of topics that governs people's digital life behaviors, thus allowing the model to consistently predict people's actions effectively?
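To make the drift concern concrete, here is a minimal sketch with synthetic data and a generic classifier (purely illustrative, not the models trained on Screenome data): a model fit on an early slice of one person's behavior is scored on later months whose underlying rule keeps shifting.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
w = rng.normal(size=8)

def month_data(m, n=300):
    """Synthetic monthly data whose true decision rule drifts over time,
    standing in for evolving apps, trends, and recommendation algorithms."""
    X = rng.normal(size=(n, 8))
    y = (X @ w + 0.25 * m > 0).astype(int)
    return X, y

months = [month_data(m) for m in range(12)]

# Train once on the first three months, as if the model were then frozen.
X_train = np.vstack([X for X, _ in months[:3]])
y_train = np.concatenate([y for _, y in months[:3]])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on each later month shows whether predictions decay as behavior drifts.
for m, (X, y) in enumerate(months[3:], start=3):
    print(f"month {m:2d}: accuracy = {accuracy_score(y, model.predict(X)):.2f}")
```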
Thank you for sharing this research. One question I have is whether people will behave the same way when they know their data will be shared with researchers. They may engage less in highly private or sensitive activities, such as accessing bank accounts. I believe the research design could potentially introduce artifacts into the observed behavior.
Thanks for the talk. How does the ZOOTS methodological framework—focusing on zooms, tensions, and switches—affect the interpretation of longitudinal data, and what specific challenges does it address in traditional data analysis techniques?
Really cool project! I'm just interested in the prospect of it. What do you think could be the next step (say, in 5 years maybe)?
It is a very interesting project, and I would like to hear more about the application of foundation models. I am curious, however, how to address the privacy issues of large datasets involving human subjects. Is there any potential risk that models with advanced performance could expose the private information of those subjects?
That's super cool! My questions are as follows. Conceptual clarity: could you clarify the concept of "zooms, tensions, and switches" (ZOOTS) and how these elements are crucial to understanding human dynamics through your methodologies? Ethical considerations: with the intensive collection and analysis of personal data across various platforms, what ethical considerations and protections are integral to your research methodology?
Thanks for your presentation! I find the concept of ZOOTS – zooms, tensions, and switches – particularly intriguing. Could you explain how this methodology enhances our understanding of human dynamics and development? Additionally, could you clarify the impact of intensive longitudinal data from sources like survey panels, experience sampling, and social media on identifying intraindividual variability constructs and their temporal evolution?
Thanks for sharing the interesting topic! In your experience, what are some of the most promising applications of the insights gained from studying human behavior at multiple time scales? How can this knowledge be used to promote well-being and positive development?
Pose your questions for Nilam Ram for his talk Modeling at Multiple Time-Scales: Screenomics and Other Super-Intensive Longitudinal Paradigms.
Abstract: A decade ago, we used newly emerging smartphone technologies to obtain multiple time-scale data that facilitated study of new intraindividual variability constructs and how they changed over time. The recent merging of daily and digital life further opens opportunity to observe, probe, and modify every imaginable aspect of human behavior – at a scale we never imagined. Using collections of intensive longitudinal data from survey panels, experience sampling studies, social media, laboratory observations, and our new Screenomics paradigm, I illustrate how methodological invocation of zooms, tensions, and switches (ZOOTS) is transforming our understanding of human dynamics and development. Along the way, I develop calls for more flexible definitions of time, fluidity and diversity of methodological approach, and engagement with science that adds good into the world.
Two short, related papers available here and here