In general, I like the idea of using the content-based or hybrid approach, because, as you stated, the difficulties of building a collaborative recommender do not seem to be easily resolvable with our resources. However, while reading your documentation I feel like I am standing in a misty forest and need some more light in order to get a clearer picture of your project. I will point out some things that could potentially be improved:
1. Inaccurate statement
> Furthermore, it is hard to make predictions for new users, because finding article-user pairs for each user is computationally complex.
As far as I know, "it is hard to make predictions for new users" describes the cold start problem, which reflects the inability of the recommender system to make predictions for a new user due to the lack of enough data about that user (Gopidi, 2015, page 12). "Because finding article-user pairs for each user is computationally complex" does not seem to be the cause of this problem.
2. Feasibility of building a user profile with questionnaires
First of all, I doubt it is feasible to build a user profile with a questionnaire that is eloquent enough for the kind of content filtering you are trying to achieve, and I also doubt it yields enough data for many interesting kinds of ML algorithms (read: ANNs); generating data will take forever with your approach. Secondly, this is probably a problem of the task per se, but just because a user likes some articles of a category doesn't mean he/she likes all articles of that category.
3. A blurry mist in the forest
There are some important points that are not well clarified in your documentation. 1) How is similarity calculated? Which features will be extracted to calculate it? Similarity measurement is the main part of the project where ML could be integrated; this is the crux of the news recommendation task and needs to be investigated more deeply (a minimal sketch of one option follows below). 2) How will ML be applied in your project? Which algorithms will be used in which steps? So far, all steps of the pipeline stated in the documentation are classical implementation tasks; I don't see where you want to use ML.
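To make point 1) concrete, here is one possible similarity measure, sketched with TF-IDF features and cosine similarity. This is just one option among many; scikit-learn and the example texts are my own assumptions, not something from your documentation:

```python
# Minimal sketch: pairwise article similarity via TF-IDF + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "The stock market fell sharply on Monday.",
    "Shares dropped as markets reacted to the news.",
    "A new species of frog was discovered in the rainforest.",
]
tfidf = TfidfVectorizer().fit_transform(articles)
print(cosine_similarity(tfidf))  # 3x3 matrix of article similarities
```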
Hi Group Beta,
this is our feedback on your documentation for week 3. First of all, there is not much to criticize and it seems like your documentation is already at a good level.
Regarding the content: Your third part is technically accurate. All your statements (including the ones about the pandas library) are correct from our point of view. You used your terms consistently, and the data generation is described completely; no important key points are missing. You described your design decisions very well. Sometimes the reader would wish for a little more detail, e.g. why did you split the training and test data exactly in half and not, for example, 70% to 30%? But in most cases your design decisions are comprehensible.
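As a side note, comparing split ratios is cheap; a minimal sketch, assuming scikit-learn and a toy stand-in for your generated data (all names below are our assumptions):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Toy stand-in for the generated user-article data (illustrative only).
df = pd.DataFrame({"user": range(100), "liked": [i % 2 for i in range(100)]})

train_50, test_50 = train_test_split(df, test_size=0.5, random_state=42)  # 50/50
train_70, test_30 = train_test_split(df, test_size=0.3, random_state=42)  # 70/30
```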
Regarding the code: In your documentation this week you described briefly how and with which functions you implement your ideas. It would be great if you already provided your code on GitHub, but we think this will automatically be the case in the future.
Regarding the style: Your text is easily accessible, grammatically accurate and readable. There is not much room for improvement. Sometimes you missed a punctuation mark (e.g. in the last sentence of the second-to-last paragraph), but this is not too critical. Your text is well structured, but we think you forgot a blank line before your heading this time; at first glance it is not so easy to see where the third part of your documentation begins. Your table provides great examples of your user profiles and helps a lot in understanding your machine learning approach.
Overall you did a great job this week and there are only a few little things that you could improve in the future. Well done!
First of all, your documentation is comprehensible and well structured, and you give reasons for your decisions, for example why you split the categories into a hierarchical order. It is also good that you explain in detail which categories and subcategories you are using and the relationship between them. Moreover, it's great that the last part of your documentation is on a concrete level. That, in addition to the good examples you give, makes it really readable and understandable.
To give some suggestions for improvement: you could explain why you chose to take only the big categories with more than 500 articles into account. Did you make this choice because you think it is more realistic, or because you want a small number of categories that still covers most of the articles?
The way in which the user profiles are created would be even more comprehensible if you added an illustration showing how the categories, and subsequently the subcategories and the username, are chosen.
Apart from that, we have two small points concerning the language/grammar:
- There is a typo in the username: Crime_and_law_Economy_and_business_Environmen_Health_Internet_Religion -> Environment
- We think it's "on wikinews.org" or "at wikinews.org" instead of "in wikinews.org"
Hi Group Beta,
Feedback Week 6: Overall your documentation is detailed, consistent and complete. Following are the comments for the individual areas:
Regarding actual approach/design decisions: The approach has been described in detail. One question that could be asked under design decisions: is there a reason the category information is not provided as a feature to the classifier?
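To make the question concrete, here is a hedged sketch of how the category information could be handed to a classifier as a one-hot feature; the column names and values are purely our assumptions:

```python
import pandas as pd

# Toy articles with a numeric feature and a categorical one.
articles = pd.DataFrame({
    "length": [450, 1200, 800],
    "category": ["Health", "Internet", "Health"],
})
# One-hot encode the category so a classifier can consume it.
features = pd.get_dummies(articles, columns=["category"])
print(features)
```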
Regarding completeness of the actual documentation: The documentation seems complete, with the relevant details covered. The scripts user_generation.py and article_collection.py seem well commented and easy to understand.
Regarding Style & Readability: Overall, the organization of topics and the readability are good. One way to improve it even more would be to separate the last paragraph into what you did and the challenges/solutions, under suitable sub-headings.
Good luck with the dataset!
Concerning your argumentation for the selected design decisions: In the section about evaluation metrics, you talk about using the F1-score as well as Cohen's Kappa, the latter because "it is a more complex and meaningful metric than the mere accuracy of our classifier". I don't understand why you still use the F1-score as your primary metric: I agree with you that Cohen's Kappa is the most informative metric, but if that is the case, it should be your primary metric in my view.
Concerning the completeness of your documentation: I like that you show why your positive/negative sample split is a bit off; it's understandable and concise. As Cohen's Kappa corrects for class imbalance, I think the fact that the split is off is not bad for you. However, as you're talking about using the F1 metric as well as Cohen's Kappa, I think you should report the performance according to this metric, too.
Concerning the correctness: I think your baseline performances are partially wrong. If I'm not mistaken, Cohen's Kappa gives a value of zero to both the 50-50 and the label-frequency baseline, as it gives this value no matter what the split is. My script to check this is:
```python
def kappa(TP, FP, FN, TN):
    a, b, c, d = TP, FP, FN, TN
    total = a + b + c + d
    epsilon = 1e-12  # small constant to avoid division by zero
    p_null = (a + d) / total                    # observed agreement
    p_yes = (a + b) / total * (a + c) / total   # chance agreement on "yes"
    p_no = (c + d) / total * (b + d) / total    # chance agreement on "no"
    p_e = p_yes + p_no
    return (p_null - p_e) / ((1 - p_e) + epsilon)

TP = 85500
FP = 175500
FN = 85500
TN = 175500
print(kappa(TP, FP, FN, TN))
```
-- this returns zero as well. Maybe I'm wrong though.
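For completeness, the short algebraic argument (our own derivation, so take it with a grain of salt): writing p for the probability of a positive true label and q for the probability of a positive prediction, a baseline whose prediction is independent of the true label has observed agreement equal to chance agreement, so Kappa is zero regardless of the split:

```latex
p_o = qp + (1-q)(1-p) = p_e
\qquad\Longrightarrow\qquad
\kappa = \frac{p_o - p_e}{1 - p_e} = 0
```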
Concerning everything else, I like your documentation. It doesn't contain any stylistic or grammatical errors, and it is readable and well structured.
Hello Group Beta,
this is the feedback for your documentation part 7 and part 8.
First, the documentation of part 7 (we had already written this before Christmas):
You were a little late in uploading your documentation (the deadline is Sundays at 23:59), but we think this is not a problem, because you then uploaded it Monday morning. Overall you provided a good documentation of your thoughts and ideas, and we are excited to see how they will work out during the next weeks.
Regarding the content: Your text is technically accurate and consistent. We can easily understand how you want to use the TF-IDF score to extract the most relevant keywords for each article, or how you want to calculate a vector (consisting of the means of single word vectors) for each article. Your documentation is mostly complete, but some terms could be explained in more detail. For instance, we were not sure about the term "dbpnet"; do you mean the KnowledgeStore? Moreover, some design decisions could be explained better. How did you come up with these three ideas? What was your reasoning? Why did you choose "maybe 5" words as your most relevant keywords?
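To check whether we understood your two ideas correctly, here is a hedged sketch of both: top-k TF-IDF keywords per article, and an article vector as the mean of its word vectors. All libraries, names and toy texts below are our assumptions, and the random vectors merely stand in for real word embeddings:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

articles = ["the economy grew faster than expected this quarter",
            "scientists discovered water ice on the moon"]

# Idea 1: the k most relevant keywords per article by TF-IDF score.
vectorizer = TfidfVectorizer()
scores = vectorizer.fit_transform(articles).toarray()
terms = vectorizer.get_feature_names_out()
k = 5  # "maybe 5" keywords, as in your documentation
for row in scores:
    print(terms[np.argsort(row)[::-1][:k]])

# Idea 2: an article vector as the mean of its word vectors
# (random 50-dimensional vectors stand in for real embeddings here).
words = articles[0].split()
word_vectors = {w: np.random.rand(50) for w in words}
article_vector = np.mean([word_vectors[w] for w in words], axis=0)
print(article_vector.shape)  # (50,)
```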
Regarding the code: This week you only documented your thoughts and ideas, so we are not able to say much about your code; it seems like you did not implement anything this week. But in the future we recommend that you link your source code to your three ideas for the Feature Representation, so that the reader can easily connect these ideas to your source code.
Regarding the style: As we said earlier, we understand most of your text very easily, which indicates a readable and easily accessible writing style. Well done! Moreover, we did not find any major grammatical or spelling mistakes, so your documentation is grammatically accurate. One thing worth mentioning is the heavy repetition of the words "liked", "disliked" and "articles" in the first paragraph: every sentence of this paragraph contains at least one of these three words, which makes it not so nice to read. Maybe in the future you could pay a bit more attention to frequent repetitions of the same word. Otherwise, your short part of the documentation seems well structured, because you used one paragraph for each of your ideas, and the example in paragraph two helped a lot in understanding your second idea. Without this example, we might not have been able to understand it.
In the future, you could improve your style of writing a little bit, as well as describe some design decisions better and more completely. Despite these minor weaknesses, everything else was described really well and (most importantly) we understood everything in your documentation without having to think too long about it. You should make sure to link your source code to your three ideas for the Feature Representation as soon as you implement them, because this is really important for the reader; otherwise, he or she might not be able to understand your further decisions and implementation steps.
Keeping this in mind, you are well on your way to successfully implementing a news recommendation system!
Now the feedback for your documentation part 8:
You described your concrete ideas and implementations of the feature extraction very well. In the following, we give you specific feedback on the different criteria and our suggestions for improvement.
Regarding the content: Everything you have explained seems logically sound and technically accurate. Moreover, you described your implementation consistently (especially the numbering of the features), so that the reader can follow your ideas. The documentation is mostly complete as well, but you mentioned that for feature 1 you still have a question for Lucas. It would be more satisfying for us if you also described your current problems in the documentation (instead of just mentioning that you want to ask Lucas about them). Moreover, your design decisions could be described a bit more clearly. It is not so clear to us why exactly you decided to implement these four (or maybe five) features and why they might be good features for your approach. For example, why might the length of an article be a good feature?
Regarding the code: First of all, it is very good that you explained in the beginning where we can find the specific parts of your implementation. These links to the source code help the reader a lot in understanding your implementation! The code quality seems fine as well: almost every function has a documentation comment (except "get_article_length"), and the variables have plausible names. The file user_generation.py still has some TODOs and some unnecessary comments in it, but we are sure you are still working on this.
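Just to illustrate the missing documentation comment, here is a hypothetical version of "get_article_length"; only the name is taken from your code, the body and docstring are our guesses:

```python
def get_article_length(article_text):
    """Return the length of the given article in characters.

    Hypothetical documentation comment; the actual description should
    match your implementation (e.g. characters vs. words).
    """
    return len(article_text)
```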
Regarding the style: Your style of writing is easily understandable, and most of your text is grammatically accurate. The word "either" in the sentence "If the category-name contains more than one word (besides “and”), it is checked whether either word is present in the article" in the paragraph "Feature 2" seems a little bit odd. Everything else is readable, and thanks to your bold headlines your text is well structured. In general, your visualization is great and very helpful for understanding your first feature, but the text in it is a little bit small and hard to read. Maybe you could pay more attention to this in the future.
All in all, we enjoyed reading this week's part of your documentation very much. In the future, pay attention to explaining your current problems and design decisions in a bit more detail and to writing the text in your visualizations in a readable size. Everything else, e.g. the code quality, your style of writing and the overall content, was very good this week, and it seems like you will finish your project this semester very successfully!
Keep up the good work!
Dear Group Beta,
this is group Zeta's feedback for the 8th part of your documentation, with the topic of training the classifier. First off, the overall impression of your documentation is really good. You managed to convey the large amount of data that is relevant for the last step of your work in a way that is not too convoluted. While the hyperparameter section was a bit difficult to read in the sense that it was extremely dense with information, that should not be counted as explicitly positive or negative, because that is just how documentation usually is. It actually points to the overall quality in readability and accessibility of your documentation, since the rest of it was very easy to follow. Moreover, we liked your reference back to an earlier entry of your documentation, which helps keep your explanations concise while still helping readers understand everything you talk about. However, you omitted where to look in the documentation; since your overall docs are structured in chapters, it would have been nice if you had at least pointed the reader to the chapter where he or she could find more information.
A few more things regarding the content of your documentation: While we liked your explanation of the design choices regarding dimensionality reduction and how time played a serious role in them (partly because it is just so very relatable), we stumbled over your choice of the term "Dimension Reduction" instead of "Dimensionality Reduction", which was used throughout the seminar. After checking this on the internet, though, it became clear that this is also a valid term, so this is again not a negative point. The moment of confusion was simply caused by deviating from the terminology established in the course; if you want to avoid this in the future, try to be conscious of which terms were established among the developers. Finally, we really liked how well you explained why you chose certain performance measures and disregarded others. It really left no questions open for us.
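As a side note for readers who have not met the term: a minimal sketch of what dimensionality reduction buys you, using scikit-learn's PCA. The library, the sizes and the random feature matrix are our assumptions, not what you actually used:

```python
# Illustrative only: shrinking a feature matrix before training.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 300)                      # 1000 samples, 300 features
X_reduced = PCA(n_components=50).fit_transform(X)  # keep 50 components
print(X_reduced.shape)                             # (1000, 50): far fewer columns to train on
```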
Again, we think that your documentation turned out really well and does what it is supposed to do: it led us through this particular part of your development process and did not leave any questions open for us. Good job!
This is the thread where all the other groups leave their feedback for the documentation of group Beta.