monazhu opened this issue 7 months ago
Please provide more detailed feedback here on what was done particularly well, and what could be improved. It is especially important to elaborate on items that you were not able to check off in the list above.
License - You missed a change when copying and pasting an example license. Under attribution it says 'Copyright © Tiffany A. Timbers, Trevor Campbell, Melissa Lee' when it should list your team members.
The instructions under 'Usage' in your README are exhaustive and a little overwhelming to read. If the container method is the preferred one, it should come first in that section. Consider moving the troubleshooting notes into a separate .md file and keeping the instructions in the README simple.
I think you could justify your methodology more, i.e., explain why you chose a t-test and why you chose a 95% confidence level. You have declared your assumptions well, especially noting that the observations are correlated (not i.i.d.).
I was unable to run your analysis, as I have a Mac with an M2 chip: composing the Docker image failed. The exact error message is below.
analysis-env The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Overall, great job guys! This was a super interesting project, really fun to learn about!
This was derived from the JOSE review checklist and the ROpenSci review checklist.
[x] Repository: Is the source code for this data analysis available? Is the repository well organized and easy to navigate?
[x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
[x] Installation instructions: Is there a clearly stated list of dependencies?
[x] Example usage: Do the authors include examples of how to use the software to reproduce the data analysis?
[x] Functionality documentation: Is the core functionality of the data analysis software documented to a satisfactory level?
[x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
Overall, the report is very well-written with a lot of literature research to make your point sound and easy to follow. I was able to reproduce your analysis locally with very few commands! Here are some minor changes I suggest:
- **README References**: The About section currently contains only two in-text citations. It might be helpful to update the reference list to fully reflect the resources used in the README specifically.
- **"Running Analyses on Your Local Environment" (under Usage)**: I think the information in the Usage section could be clearer. Perhaps separating the steps for setting up the environment from the notes (similar to what you have for the Docker setup), or using different text styles for each, could improve readability.
Your report has excellent flow! However, I would recommend adding some Exploratory Data Analysis (EDA) before the analysis section (e.g. any outliers and data preprocessing) and explaining how you decide on the methodology. Also, stating the hypothesis at the beginning of the analysis would be helpful.
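As one concrete way to act on the EDA suggestion above, here is a minimal sketch of Tukey's IQR rule for flagging outliers before analysis. The function name and data are made up for illustration and are not from the project:

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag points outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles, default "exclusive" method
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical attractiveness ratings on a 1-10 scale, plus one bad entry:
print(iqr_outliers([4, 5, 6, 7, 5, 6, 100]))  # → [100]
```

Reporting a table or plot of such flagged points in an EDA section would make the preprocessing decisions transparent to readers.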
Overall a very clear and concise analysis. The R code is well-documented and well-tested. I could reproduce the pipeline and the final HTML report without any problem.
However, there are some minor issues that I have spotted so far:
In section 4.1, I think the use of the term "one-sample t-test" is a bit confusing, especially when paired with the violin plot showing distributions from two Rating Types.
Also, from the plots one can observe that the distributions of those two rating types seem to differ in variance. In this case, does the equal-variance assumption still hold? I think it would be better to address this before performing the t-test.
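To illustrate the point about unequal variances, here is a minimal sketch of Welch's t-test, which drops the equal-variance assumption. The function name and data are hypothetical, not from the project:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite); does not assume equal variances."""
    m1, m2 = statistics.mean(a), statistics.mean(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)  # sample variances
    n1, n2 = len(a), len(b)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2**2 / ((v1 / n1)**2 / (n1 - 1) + (v2 / n2)**2 / (n2 - 1))
    return t, df

# Hypothetical ratings for the two rating types:
t, df = welch_t([4, 5, 6], [1, 2, 3, 4, 5])
```

Note that in R, `t.test()` performs Welch's test by default (`var.equal = FALSE`), so addressing this concern may be a one-argument change.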
When running the `docker-compose` command on Apple Silicon machines, a warning is shown saying that the architecture/platform used by the Docker image hosted on DockerHub does not match that of the local machine. This is because only a `linux/amd64` image is available, and the `docker-compose.yml` file does not specify the platform to be used. Please consider adding a `platform: linux/amd64` attribute to the docker service inside `docker-compose.yml` to avoid this ambiguity.
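For reference, a minimal sketch of the suggested fix is below. The service and image names are placeholders (the service name is guessed from the error message earlier in this thread); adjust them to match the project's actual `docker-compose.yml`:

```yaml
# docker-compose.yml (sketch) — pinning the platform lets Apple Silicon
# hosts pull and emulate the amd64 image instead of erroring out
services:
  analysis-env:
    image: <dockerhub-user>/analysis-env:latest  # placeholder image name
    platform: linux/amd64
```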
Some typos/missing info in the markdown/plain-text files:

- `README.md`, Line 24: Missing command to launch Docker.
- `README.md`, Line 87: The emulation target arch should be `x86_64`/`amd64`.
- `CODE_OF_CONDUCT.md`, Line 20: "Derogatory" is misspelt.
- `CONTRIBUTING`, Line 3: The link for opening an issue does not work (it is a placeholder URL).
This report is well-organized with clear conclusions, and the organization of project files and the reproducibility of results are also clear and easy to understand. The conclusion is very interesting: "the findings reveal a systematic tendency for individuals to overestimate their attractiveness", which immediately caught my eye!
Here are my minor suggestions:
Overall I really like this project, well done guys!
Submitting authors: @mishelly-h, @rorywhite200, @wenyunie, @monazhu
Repository: https://github.com/UBC-MDS/speed_dating_analysis
Report link: https://ubc-mds.github.io/speed_dating_analysis/output/analysis_report.html
Abstract/executive summary: This research delves into the dynamics of self-perceived attractiveness in the context of dating. We explore whether individuals accurately gauge their own appeal compared to external judgments. Analyzing data from speed dating studies, the findings reveal a systematic tendency for individuals to overestimate their attractiveness. While a significant correlation exists between self-ratings and others' ratings, this research underscores the interplay between self-perception and external judgments in the realm of dating. The implications range from improved self-esteem for those perceiving themselves as more attractive to potential challenges in social interactions. Future research could investigate the influence of contemporary factors like social media on self-perception and explore the multidimensional aspects of attractiveness.
Editor: @ttimbers
Reviewers: Celeste Zhao, Jing Wen, Merete Lutz, Orix Au Yeung