stella-project / stella-app

Multi-container application of the STELLA infrastructure
GNU General Public License v3.0

unique JSON Return format for getting dataset recommendation #11

Closed narges1212 closed 3 years ago

narges1212 commented 4 years ago
{
   "response_header":{
      "status":0,
      "q_time":0,
      "container":"recomm-sample",
      "ts":"1580380932371",
      "params":{
         "q":"",
         "page":0,
         "results_per_page":0,
         "sid":"0",
         "rid":"0"
      }
   },
   "items":[
      {
         "id":"ZA8805",
         "detail":{
            "score":0.4609375,
            "reason":""
         }
      },
      {
         "id":"ZA4608",
         "detail":{
            "score":0.05224609375,
            "reason":""
         }
      },
      {
         "id":"ZA2205",
         "detail":{
            "score":0.0024609375,
            "reason":""
         }
      },
      {
         "id":"ZA1111",
         "detail":{
            "score":0.00009375,
            "reason":""
         }
      }
   ],
   "target_item":"cews-2-407266"
}
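To make the proposed format concrete, here is a minimal sketch (a hypothetical helper, not part of stella-app) that parses a response in this verbose shape and pulls out the ranked item IDs with their scores. The field names and example values are taken from the JSON above.

```python
import json

# Parse a response in the proposed verbose format (fields as in the example above).
response = json.loads("""
{
   "response_header": {"status": 0, "q_time": 0, "container": "recomm-sample",
                       "ts": "1580380932371",
                       "params": {"q": "", "page": 0, "results_per_page": 0,
                                  "sid": "0", "rid": "0"}},
   "items": [
      {"id": "ZA8805", "detail": {"score": 0.4609375, "reason": ""}},
      {"id": "ZA4608", "detail": {"score": 0.05224609375, "reason": ""}}
   ],
   "target_item": "cews-2-407266"
}
""")

# Items arrive sorted by relevance; keep (id, score) pairs for downstream use.
ranked = [(item["id"], item["detail"]["score"]) for item in response["items"]]
print(ranked)
```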
narges1212 commented 4 years ago

Hi @ziyad121 ,

please consider this JSON format as the output for recommendation containers. thx

@breuert @benjwolff agree?

benjwolff commented 4 years ago

Hi,

in our concept repo we agreed on the following output format for participant containers (for both rankings and recommendations):

{ "page": 2, "rpp": 3, "query": "my_test_query", "num_found": 7929, "items": [ "M24835794", "M25111946", "M24836379" ] }

So for dataset recommendations we just have a list of item identifiers, sorted by relevance. Right now, no score is passed to the STELLA-app. The idea behind this was to keep a uniform format for both rankings and recommendations.
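A sketch of a container replying in the agreed uniform format (the values are the example values quoted above; the helper name is hypothetical):

```python
import json

def build_response(page, rpp, query, num_found, item_ids):
    """Build the uniform reply used for both rankings and recommendations."""
    return {"page": page, "rpp": rpp, "query": query,
            "num_found": num_found, "items": list(item_ids)}

# Same payload as the example from the concept repo.
reply = build_response(2, 3, "my_test_query", 7929,
                       ["M24835794", "M25111946", "M24836379"])
print(json.dumps(reply))
```

Because "items" is a plain list of identifiers, the same structure serves rankings and recommendations without any per-item score field.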

narges1212 commented 4 years ago

We can add the parameters ("page": 2, "rpp": 3, "query": "my_test_query", "num_found": 7929) as well, but we may also need some other parameters, such as "reason" in the GWS case. Didn't we need the container name? ... yes, it was in the header. In any case, for the items we may also need "score" and "reason", which are both in "detail" and could remain optional.

benjwolff commented 4 years ago

Since the STELLA-app decides which participant container to pick, it already knows which container is replying, so we don't need the container name. Moreover, including it seems error-prone: it is not guaranteed that the participant replies with the same container name that we use in our STELLA-app.

For the optional values (like "score"): initially, the idea was to keep it simple at this stage. Secondly, we wanted to reply with the same structure for rankings and recommendations. Maybe we should discuss that point. The first idea that came to my mind is to add an optional parameters dictionary to every item in the item list. These parameters can be forwarded to the "site", which can decide if it makes use of them.
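The suggestion above could look like this (a hypothetical shape, not an agreed spec): each entry in "items" becomes an id plus an optional parameters dictionary that the STELLA-app forwards untouched to the site.

```python
# Hypothetical extension of the uniform format: per-item optional "params"
# (field name assumed here) carrying e.g. score and reason when available.
response = {
    "page": 0,
    "rpp": 2,
    "query": "",
    "num_found": 2,
    "items": [
        {"id": "ZA8805", "params": {"score": 0.4609375, "reason": ""}},
        {"id": "ZA4608", "params": {}},  # params stays optional / may be empty
    ],
}

# A site that ignores "params" still recovers the plain ranked id list.
ids = [entry["id"] for entry in response["items"]]
print(ids)
```

This keeps one structure for rankings and recommendations while letting individual containers pass extra detail through without the STELLA-app having to interpret it.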