Learnosity / learnosity-sdk-python

Learnosity SDK for Python
Apache License 2.0

Incorrect response data handling #34

Closed ansfan closed 5 years ago

ansfan commented 5 years ago

Using the Data API itembank/questions endpoint and requesting data with request.item_references does not return any data through DataAPI.results_iter, on any API version (it works if you pass request.references instead).

I believe results_iter expects a list in response['data'], but the itembank/questions endpoint returns a dict, so iterating over it yields only the item keys we requested. I'm not sure whether this is intended, but I couldn't find anything documenting it.

A simple solution may be for results_iter to check whether the returned response['data'] is a dict and, if so, yield each entry as a single-item dict. Not the most efficient approach, but it works with the existing API:

if isinstance(response['data'], dict):
    for key, value in response['data'].items():
        yield {key: value}
else:
    for result in response['data']:
        yield result
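For clarity, here's a minimal standalone sketch of that check. The function name iter_results and the sample responses are illustrative only, not the SDK's actual code or real API output:

```python
# Sketch of the proposed fix: normalise response['data'] whether the
# Data API returns a list or a dict keyed by item reference.
# (Illustrative shapes only; not real SDK code or API output.)

def iter_results(response):
    data = response['data']
    if isinstance(data, dict):
        # Iterating a dict directly would yield only its keys, so
        # yield each key/value pair as a single-entry dict instead.
        for key, value in data.items():
            yield {key: value}
    else:
        for result in data:
            yield result

list_response = {'data': [{'reference': 'a'}, {'reference': 'b'}]}
dict_response = {'data': {'XY012345': [{'type': 'mcq'}]}}

print(list(iter_results(list_response)))  # entries pass through unchanged
print(list(iter_results(dict_response)))  # one dict per item reference
```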
shtrom commented 5 years ago

Hey Anson,

Thanks for the report.

You might be onto something there... Could you give us an example of your request and the returned data, either through result_iter or from a plain request?

I suspect you may have happened upon an edge case, as we designed result_iter to iterate over arrays, but in your case, the returned data would be a dict, as you point out.

ansfan commented 5 years ago

Hey Olivier,

Sure can. Here's a small test snippet for the request I'm making; I can provide actual item_references and domain information if needed.

import requests
from learnosity_sdk.request import Init

consumer_secret = 'IAMASECRETKEY'
action = 'get'
learnosity_endpoint = 'https://data.learnosity.com/v2019.2.LTS/{}'
security = {
    'consumer_key': 'ABCDEFGHIJKLMNOP',
    'domain': 'localhost.whatever'
}
data_request = {
    'item_references' : ['XY012345', 'XY112345', 'XY212345', 'XY312345', 'XY412345', 'XY512345']
}

resource = 'itembank/questions'
r = requests.post(learnosity_endpoint.format(resource),
                  data=Init('data', security, consumer_secret, data_request, action).generate())

# The response's 'data' key holds a dict (keyed by item reference) instead of an array
response = r.json()

The response I get is in the following JSON format (some parts redacted to avoid crowding).

{
    "meta": {
        "status": true,
        "timestamp": 1563735324,
        "versions": {
            "requested": "v2019.2.LTS",
            "mapped": "v1.32",
            "concrete": "v1.32.1"
        },
        "records": 3
    },
    "data": {
        "XY012345": [
            {
                "type": "mcq",
                "widget_type": "response",
                "reference": "XY012345_RESPONSE",
                "data": {
                    "metadata": {
                        "distractor_rationale_response_level": [
                            "blah",
                            "blah",
                            "blah",
                            "blah"
                        ]
                    },
                    "stimulus": "blah",
                    "type": "mcq",
                    "validation": {
                        "scoring_type": "exactMatch",
                        "valid_response": {
                            "score": 1,
                            "value": [
                                "i1"
                            ]
                        }
                    },
                    "options": [
                        {
                            "label": "hello",
                            "value": "i1"
                        },
                        {
                            "label": "hello",
                            "value": "i2"
                        },
                        {
                            "label": "hello",
                            "value": "i3"
                        },
                        {
                            "label": "hello",
                            "value": "i4"
                        }
                    ]
                },
                "metadata": null
            }
        ],
        "XY112345": [
            {
                "type": "mcq",
                "widget_type": "response",
                "reference": "XY112345_RESPONSE",
                "data": {
                    "metadata": {
                        "distractor_rationale_response_level": [
                            "blah",
                            "blah",
                            "blah",
                            "blah"
                        ]
                    },
                    "stimulus": "blah",
                    "type": "mcq",
                    "validation": {
                        "scoring_type": "exactMatch",
                        "valid_response": {
                            "score": 1,
                            "value": [
                                "i4"
                            ]
                        }
                    },
                    "options": [
                        {
                            "label": "hello",
                            "value": "i1"
                        },
                        {
                            "label": "hello",
                            "value": "i2"
                        },
                        {
                            "label": "hello",
                            "value": "i3"
                        },
                        {
                            "label": "hello",
                            "value": "i4"
                        }
                    ]
                },
                "metadata": null
            }
        ],
        "XY212345": [
            {
                "type": "mcq",
                "widget_type": "response",
                "reference": "XY212345_RESPONSE",
                "data": {
                    "metadata": {
                        "distractor_rationale_response_level": [
                            "blah",
                            "blah",
                            "blah",
                            "blah"
                        ]
                    },
                    "stimulus": "blah",
                    "type": "mcq",
                    "validation": {
                        "scoring_type": "exactMatch",
                        "valid_response": {
                            "score": 1,
                            "value": [
                                "i1"
                            ]
                        }
                    },
                    "options": [
                        {
                            "label": "hello",
                            "value": "i1"
                        },
                        {
                            "label": "hello",
                            "value": "i2"
                        },
                        {
                            "label": "hello",
                            "value": "i3"
                        },
                        {
                            "label": "hello",
                            "value": "i4"
                        }
                    ]
                },
                "metadata": null
            }
        ]
    }
}
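Until results_iter handles this shape, a consumer-side workaround is to flatten the dict-shaped data yourself. This is just a sketch based on the sample response above; the item_reference field it attaches is an invented name, not part of the API payload:

```python
# Workaround sketch: flatten the dict-shaped 'data' from itembank/questions
# into a plain list of question records, tagging each record with the item
# reference it came from. Shapes mirror the (redacted) sample response above.

sample = {
    'data': {
        'XY012345': [{'type': 'mcq', 'reference': 'XY012345_RESPONSE'}],
        'XY112345': [{'type': 'mcq', 'reference': 'XY112345_RESPONSE'}],
    }
}

questions = [
    dict(record, item_reference=item_ref)  # attach the parent item key
    for item_ref, records in sample['data'].items()
    for record in records
]

print(len(questions))  # one flat record per question
```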
shtrom commented 5 years ago

Cool. I think your proposed solution makes sense. Do you think you would be able to prepare a PR?

flakeparadigm commented 5 years ago

@shtrom Thanks for jumping on this so quickly. I've created CAT-213 for internal tracking.

@ansfan We've added this issue into our backlog to be worked on by our development team. If this is time sensitive for your project, you're welcome to file a Pull Request yourself to help speed up the process.

flakeparadigm commented 5 years ago

Hi @ansfan, We've released the fix for this and it is now available on PyPI.