Closed 814734522 closed 4 years ago
Are you using any stop criterion? If not, it would be worth revising your code: in my approach, you should generate predictions for all unlabeled samples with both recommenders. Even if one recommender finishes before the other, the algorithm should only stop iterating once all examples are labeled. (E.g., if Recommender 1 labels all of your samples, you should still run one last iteration so that Recommender 2 labels the remaining unlabeled samples and puts them in the T_L1 set.)
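A minimal sketch of the stopping criterion described above (the `label_most_confident` helper and the recommender callables are hypothetical stand-ins, not the actual CaseRecommender API):

```python
def label_most_confident(rec, unlabeled, k):
    # Hypothetical helper: in the real algorithm this would rank the
    # unlabeled samples by the recommender's prediction confidence.
    # Here we simply take up to k samples as a stand-in.
    return [(s, rec(s)) for s in list(unlabeled)[:k]]

def co_train(rec1, rec2, labeled1, labeled2, unlabeled, per_iter=10):
    """Co-training loop: stop only once the unlabeled pool is empty,
    even if one recommender runs out of samples before the other."""
    while unlabeled:
        # Recommender 1 labels samples for Recommender 2's set (T_L2).
        picked = label_most_confident(rec1, unlabeled, per_iter)
        labeled2.extend(picked)
        unlabeled -= {s for s, _ in picked}

        if unlabeled:
            # Recommender 2 labels samples for Recommender 1's set (T_L1);
            # this also covers the "one last iteration" case above.
            picked = label_most_confident(rec2, unlabeled, per_iter)
            labeled1.extend(picked)
            unlabeled -= {s for s, _ in picked}
    return labeled1, labeled2
```

The point is that the loop condition is the unlabeled pool itself, not either recommender's progress.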
I have tried running the two recommendation algorithms; they both run synchronously (there seems to be no such situation as you describe), but the recommendation results are not good.
As I reported earlier, I am still porting the algorithm to Python and into the tool; this is not the algorithm described in the articles, just a skeleton framework to help the developers get started. As I am working on another project, I have not had time to finish porting the final algorithm. Two other researchers implemented the same algorithm in Java and C# and had similar results. Sorry for the inconvenience with the code; I will alert the other developers that the code is not complete and finish porting the algorithm soon.
Can you share the Java code of this algorithm? Please. Thanks :)
Arthur, I desperately need the Java code of the EcoRec algorithm, since I need it for comparative experiments. Please.
I sent an e-mail to the authors requesting the code.
Thanks. :)
Arthur, the code in ecorec.py does not ensemble the results by a weighted average of the predictions generated by the individual recommenders at each iteration of the co-training process (it looks like it only takes the prediction with the higher confidence value). And when a result comes from only one recommender (the other recommender does not have it), how should we deal with that result?
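For reference, a confidence-weighted average with a single-recommender fallback could look roughly like this sketch (the dict-based interface and the fallback rule are my assumptions, not the library's actual code):

```python
def ensemble(pred1, pred2, conf1, conf2):
    """Combine two prediction dicts {(user, item): rating} by a
    confidence-weighted average. When only one recommender produced
    a prediction for a pair, fall back to that prediction as-is."""
    combined = {}
    for key in pred1.keys() | pred2.keys():
        if key in pred1 and key in pred2:
            w1, w2 = conf1[key], conf2[key]
            combined[key] = (w1 * pred1[key] + w2 * pred2[key]) / (w1 + w2)
        elif key in pred1:
            combined[key] = pred1[key]  # only recommender 1 predicted it
        else:
            combined[key] = pred2[key]  # only recommender 2 predicted it
    return combined
```

An alternative for the one-recommender case would be to discard the pair or down-weight it by its confidence; the papers would have to confirm which rule EcoRec actually uses.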