singnet / reputation

MIT License

Finding reputation network attractors in heterogeneous social networks (using Hopfield/RNN network) #268

Open deborahduong opened 5 years ago

deborahduong commented 5 years ago

Copied from Slack Reputation channel May 26:

Anton Kolonin [10:14 PM] JIC, if you could take all your thought stream on "finding reputation network attractors in heterogeneous social networks" (that is how I call this direction of study :wink: ) in separate google document from main reputation channel, we could get back to this later when the simulations are working. Otherwise I am afraid it may get lost in the chat log.

Copied from Slack Reputation channel May 10:

Deborah Duong [11:45 PM] https://arxiv.org/pdf/1204.3806.pdf Ising spin model for symmetrically connected networks. I'm trying to find a way to make the opinions from sponsored ratings show up as one color and the unsponsored ones show up as another.

Anton Kolonin [5:22 AM] Interesting. Still, I would get the simulation working first, with the regular RS and without an RS - to get baseline results first of all.

Anton Kolonin [5:33 AM] I looked through the paper. I used our RS to do this kind of analysis with Steemit social network data: https://aigents.com/papers/2019/ReputationSystemsForHumanComputerEnvironmentsIMCIC2019.pdf The big difference is that all social networks in fact promote "liquid rank" explicitly, which is not the case in our current Amazon study, so I would focus on getting realistic simulations of our case first of all, get the baseline results, and then look at what we can do about that. (edited) I have got your idea on inferring implicit liquid rank relationships in our RS, and part of that is already in our spec - we can discuss what is missing there once simulations are available. (edited) I will also look into your idea more carefully next week.

Deborah Duong [6:28 AM] Sure, I just wanted to comment on your social network method because you asked me to for Monday. The method wouldn't be exactly as in the above paper, but I thought some similar Ising spin model would work: for example, one in which votes for the goodness of a consumer, product, or supplier would be a plus, votes against the goodness of a consumer, product, or supplier would be a minus, two-way symmetry was enforced, and then the whole thing was annealed, as in a Boltzmann machine neural network. Each consumer, product, and supplier would be a node, and their opinions/use of each other would be the valence of the links. The energy in each node, coming from the links, would represent the goodness of that node, and importantly, that node's opinions would count to the extent that the network decided it was a good node. At the end of annealing, all the goods would be lit more than the bads. It seems like this is more direct than the proposed algorithm. (edited)
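A minimal sketch of that annealing picture in NumPy, with a made-up six-node consumer/product/supplier graph (the edge list, node count, and cooling schedule are illustrative assumptions, not part of the RS spec):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy graph: nodes 0-2 are consumers, 3-5 are products/suppliers;
# signed symmetric weights encode positive (+) or negative (-) ratings.
n = 6
W = np.zeros((n, n))
edges = [(0, 3, +1.0), (1, 3, +1.0), (2, 4, -1.0),
         (0, 5, -1.0), (1, 4, -1.0), (2, 5, +1.0)]
for i, j, v in edges:
    W[i, j] = W[j, i] = v  # enforce two-way symmetry, as in a Boltzmann machine

def energy(s, W):
    """Hopfield energy; annealing drives node states toward low-energy configurations."""
    return -0.5 * s @ W @ s

def anneal(W, steps=200, t0=2.0, rng=rng):
    """Simulated annealing over +/-1 node states with a geometric cooling schedule."""
    n = W.shape[0]
    s = rng.choice([-1.0, 1.0], size=n)
    t = t0
    for _ in range(steps):
        i = rng.integers(n)
        flipped = s.copy()
        flipped[i] = -flipped[i]          # propose flipping one node
        dE = energy(flipped, W) - energy(s, W)
        if dE < 0 or rng.random() < np.exp(-dE / t):
            s = flipped                    # accept downhill moves, uphill with prob e^(-dE/t)
        t *= 0.98                          # cool
    return s

s = anneal(W)
# After annealing, the "good" side of the network ends up lit (+1), the other side -1.
```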

Deborah Duong [6:36 AM] (to some extent, Ising spin and PageRank are doing the same thing)

Deborah Duong [7:55 AM] I only found that article in a search for Ising spin and PageRank - knowing that they do similar things, I wanted to get a WLR version of Ising spin, so that you would still be using your WLR, but with some of the directness of Ising spin. But I'm not spending time on that; I'm doing the simulation, don't worry. I think one of the important things that Ising spin adds is the ability to "turn off" a node with negative ratings, through inhibition. PageRank doesn't have inhibition, but since we have negative ratings, I think it is a good thing to have in our case. PageRank and PGMs are positive because they are probabilistic, but we actually have valence in our data that should be taken into account: the negative ratings. Ising spin is a good model of valence. (edited) And Ising spin is better at discrete assignments like black market vs. not.

Deborah Duong [8:50 AM] How can more attention to valence help WLR? For example, you multiply a rating value by a rater value and a rating weight as if there were no valence, according to the WLR algorithm in the arXiv paper:

```
1: foreach of transactions do
2:   let rater_value be the rank of the rater at the end of the previous period, or a default value
3:   let rating_value be the rating supplied by the transaction rater (consumer) to the ratee (supplier)
4:   let rating_weight be the financial value of the transaction, or its logarithm if the logarithmic ratings parameter is set to true
5:   sum rater_value * rating_value * rating_weight for every ratee
```

In this algorithm, wouldn't an opinion that a product is really great from a low-ranking rater be equivalent to an opinion that the product was pretty average from a high-ranking rater? Keeping track of valence, or the strength of negativity or positivity in a form in which one excludes the other, as in the Ising model, would differentiate these. (edited)
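To make that equivalence concrete, here is the rater_value * rating_value * rating_weight product for the two hypothetical transactions (the rank and rating numbers are illustrative, not from the WLR spec):

```python
# Hypothetical numbers: ratings on a 1-5 scale, rater ranks in [0, 1].
def wlr_contribution(rater_value, rating_value, rating_weight):
    """One transaction's contribution to a ratee's sum in the WLR update."""
    return rater_value * rating_value * rating_weight

weight = 1.0  # same financial weight for both transactions
low_rank_enthusiast = wlr_contribution(0.2, 5.0, weight)  # weak rater, "really great"
high_rank_moderate = wlr_contribution(0.5, 2.0, weight)   # strong rater, "pretty average"

# Both transactions contribute exactly 1.0 to the ratee's sum:
# the product collapses the valence distinction between the two opinions.
print(low_rank_enthusiast, high_rank_moderate)  # 1.0 1.0
```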

Deborah Duong [9:24 AM] In Ising spin models, one of the factions "wins" instead of the votes averaging out, so you don't take the -1 and the 1 and average them out to zero; you have attempts to cancel out the other side until one succeeds. So we would have evidence in some of the ratings that an agent (whether supplier or product) is in the black market, and evidence in others that it is not, and just one side would win, without any averaging. (If we set the links symmetrically, as in a Boltzmann machine, that is.) (edited)

Deborah Duong [9:33 AM] (We actually use such an algorithm whenever we use neural networks - not averaging is great. My proposal is like a neural net in its nonlinearities that emphasize valence, but it is also unsupervised, in the style of the eighties Hopfield network - a kind of social Boltzmann machine.) In these models, only one side wins all the way in a single run, but running the model multiple times will have the other side win in proportion to its uncertainty. So in the model of the perception of a Necker cube, it goes one way half the time and the other way the other half. In our case, if there was a 10% chance the group was in the black market, it would show them as in the black market in 10% of the runs. So it would be more categorizing than ranking. (edited)
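The Necker-cube behavior can be sketched with a minimal two-node symmetric network: from an unbiased random start, repeated runs settle into each attractor in roughly equal shares (the helper names and run counts below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two mutually reinforcing nodes: like the Necker cube, each run settles into
# one of two attractors, all +1 or all -1.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def settle(W, rng, steps=50):
    """Asynchronous Hopfield updates from a random start; returns the fixed point."""
    s = rng.choice([-1.0, 1.0], size=W.shape[0])
    for _ in range(steps):
        i = rng.integers(W.shape[0])
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0  # each node aligns with its neighbors
    return s

runs = [settle(W, rng) for _ in range(1000)]
frac_up = np.mean([s[0] == 1.0 for s in runs])
# With an unbiased start, each attractor wins in roughly half the runs; a biased
# start (more evidence on one side) would shift that fraction accordingly.
```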

Deborah Duong [12:51 PM] In any case, it would be cool if we could resurrect the unsupervised RNN in our reputation system.

Deborah Duong [1:14 PM] (and we don't have to resurrect anything, because Hopfield did it himself) https://www.pnas.org/content/116/16/7723 PNAS, "Unsupervised learning by competing hidden units": Despite the great success of deep learning, a question remains as to what extent the computational properties of deep neural networks are similar to those of the human brain. The particularly nonbiological aspect of deep learning is the supervised training process with the backpropagation algorithm, which requires massive amounts of labeled data and a nonlocal learning rule for changing the synapse strengths. This paper describes a learning algorithm that does not suffer from these two problems. It learns the weights of the lower layer of neural networks in a completely unsupervised fashion. The entire algorithm utilizes local learning rules which have conceptual biological plausibility.
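The paper's actual update rule is more elaborate; the following is only a rough winner-take-all Hebbian sketch of the "competing hidden units" idea - each update uses purely local information, and no labels or backpropagation are involved (all names, sizes, and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Rough sketch of competitive, local, unsupervised learning in the spirit of
# "competing hidden units" (not the paper's exact update rule).
n_inputs, n_hidden, lr = 10, 4, 0.1
Wh = rng.normal(size=(n_hidden, n_inputs))

def local_update(Wh, x, lr):
    """Hidden units compete on activation; only the winner's weights change,
    moving toward the current input (a local, Hebbian-style rule)."""
    acts = Wh @ x
    winner = int(np.argmax(acts))
    Wh[winner] += lr * (x - Wh[winner])
    return Wh, winner

# Unsupervised training loop: present unlabeled inputs, let units compete.
for _ in range(100):
    x = rng.normal(size=n_inputs)
    Wh, _ = local_update(Wh, x, lr)
```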

deborahduong commented 5 years ago

The technical report describes the trouble we have had with the PsyNeuLink implementation of the Hopfield network: it dies unexpectedly and is slow. However, I still believe this is the most promising method for weighting the reputation system's raters, and it is an interesting neural network besides. On my own time I will first see if I can debug it in PsyNeuLink, and if not, look into a Restricted Boltzmann Machine or Boltzmann machine replacement in Keras, or old-school it in NumPy, in time for the AAMAS article due Nov 12. https://docs.google.com/document/d/1FEr4ir0jBZ5PZtOBas7naeHef8AnPo_E5IArSoxN0kE/edit