python-restx / flask-restx

Fork of Flask-RESTPlus: Fully featured framework for fast, easy and documented API development with Flask
https://flask-restx.readthedocs.io/en/latest/

Object of type String is not JSON serializable #473

Closed: luisgg98 closed this issue 4 months ago

luisgg98 commented 2 years ago

swagger.json not found

Good afternoon, I'm sorry, but this is the first time I have used flask-restx. The API itself works perfectly: I can call its endpoints and the web services exposed through them run smoothly. However, I am not able to get the Swagger endpoint working. When I access the Swagger documentation I get the error "Object of type String is not JSON serializable", so I must be passing a string instead of JSON somewhere, but I cannot find where. I have read the documentation (https://flask-restx.readthedocs.io/en/latest/api.html), but I am still not sure where the failure is. I am sharing the code and the error in case someone wants to take a look and spot what I'm missing. Thank you very much for reading this issue.

import logging
import traceback
import os
from flask_restx import Api
from end_point import settings

log = logging.getLogger(__name__)

api = Api(version='1.0', title=os.environ['FLASK_APP_NAME'] + ' API',
          description='')

@api.errorhandler
def default_error_handler(e):
    message = 'An unhandled exception occurred.'
    log.exception(message)

    # In debug mode, return nothing so the exception propagates and the traceback is shown.
    if not settings.FLASK_DEBUG:
        return {'message': message}, 500

----------------------------------------------------------------------------------------------------------------------------------------

import logging
from flask_restx import Resource

from end_point.api.restplus import api
from end_point.api.crawler_rss_service.serializers import rss_crawler_job
from end_point.api.crawler_rss_service.parsers import crawler_rss_arg_json_model
from end_point.api.crawler_rss_service.endpoints.jobs import RSSCrawlerJobResult
from end_point.business.crawler_rss.tasks import get_crawler_rss_task
from end_point.business.crawler_rss import config

log = logging.getLogger(__name__)

ns = api.namespace('crawler_rss/get_crawler_rss', description='Operations related to hello')

# NOTE
# Headers documentation will be available only in the next version of Flask-RESTPlus
# api.response(...., headers={'Location','desc'})

# NOTE
# Flask.url_for('api.math/jobs_math_job')
# is equivalent to
# FlaskRestPlus.Api.url_for(MathJob)
# Both return the full URI http://host:port/api/math/jobs
# Can't find a way to get only /api/math/jobs
# like https://github.com/noirbizarre/flask-restplus/blob/master/tests/test_api.py

@ns.route('')
class GetRSSCrawler(Resource):

    @ns.marshal_with(rss_crawler_job)
    @ns.expect(crawler_rss_arg_json_model)
    @ns.response(code=202, description='Job submitted.')
    def post(self):
        """
        Get a RSS comments lists from RSS crawler.
        """
        # There are no arguments other than the JSON payload
        request_args = api.payload
        #, request_args['lists_screen_name'],
        result_task = get_crawler_rss_task.apply_async(args=(request_args[config.RSS_URL], "Just work"))

        return {'id': result_task.task_id, 'status': result_task.state}, 202, {'Location': api.url_for(RSSCrawlerJobResult, id=result_task.task_id)}

------------------------------------------------------------------------------------------------------------------------------------------------------

import logging
from flask import request
from flask_restx import Resource
from celery import states
from celery.result import AsyncResult

from end_point.api.crawler_rss_service.serializers import rss_crawler_job
from end_point.api.restplus import api
from end_point.back import backapp

log = logging.getLogger(__name__)
ns = api.namespace('crawler_rss_job/jobs', description='Operations related to rss crawler jobs')

def error(result_task):
    """
    Generic error return.
    """
    try:
        # result_task.info holds the raised exception for failed tasks, so .get() may not exist
        cause = 'task state : {} - '.format(result_task.state) + result_task.info.get('error')
    except Exception:
        cause = 'task state : {} - '.format(result_task.state) + 'Unknown error occurred'
    return {'id': result_task.task_id, 'status': 'ERROR', 'desc': cause}, 500

@ns.route('/status/<string:id>')
@ns.param('id','A Job ID')
@ns.response(code=500, description='Job error.')
class RSSCrawlerJobStatus(Resource):

    @ns.marshal_with(rss_crawler_job)
    @ns.response(code=303, description='Job successfully finished.')
    @ns.response(code=200, description='Job unknown or not yet started.')
    def get(self, id):
        """
        Return status of a queued job.
        """
        result_task = AsyncResult(id = id, app = backapp)
        state = result_task.state

        if state == states.STARTED:
            return { 'id':result_task.task_id, 'status': state }, 200
        # task still pending or unknown
        elif state == states.PENDING:
            return { 'id':result_task.task_id, 'status': state }, 200
        elif state == states.SUCCESS:
            return { 'id':result_task.task_id, 'status': state }, 303, {'Location': api.url_for(RSSCrawlerJobResult,id=result_task.task_id)}
        else:
            return error(result_task)

@ns.route('/result/<string:id>')
@ns.param('id','A Job ID')
@ns.response(code=500, description='Job error.')
class RSSCrawlerJobResult(Resource):

    @ns.marshal_with(rss_crawler_job)
    @ns.response(code=404, description='Result does not exist.')
    @ns.response(code=200, description='Return result.')
    def get(self, id):
        """
        Return result of a job.
        """
        result_task = AsyncResult(id = id, app = backapp)
        state = result_task.state

        # task finished, so the result exists
        if state == states.SUCCESS:
            return { 'id': result_task.task_id, 'status': state, 'result': result_task.get(timeout=1.0)}, 200
        # task still pending or unknown, so the result does not exist
        elif state == states.PENDING:
            return { 'id': result_task.task_id, 'status': state }, 404
        # task started but the result does not exist yet
        elif state == states.STARTED:
            return { 'id': result_task.task_id, 'status': state }, 404
        else:
            return error(result_task)

    @ns.marshal_with(rss_crawler_job)
    @ns.response(code=404, description='Result does not exist.')
    @ns.response(code=200, description='Result deleted.')
    def delete(self, id):
        """
        Delete a result of a job.
        """
        result_task = AsyncResult(id = id, app = backapp)
        state = result_task.state

        # task finished, so the result exists
        if state == states.SUCCESS:
            try:
                result_task.forget()
            except Exception as e:
                return error(result_task)
            return { 'id': result_task.task_id, 'desc': 'result for job {} deleted'.format(result_task.task_id) }, 200
        # task still pending or unknown, so the result does not exist
        elif state == states.PENDING:
            return { 'id': result_task.task_id, 'status': state }, 404
        # task started but the result does not exist yet
        elif state == states.STARTED:
            return { 'id': result_task.task_id, 'status': state }, 404
        else:
            return error(result_task)

--------------------------------------------------------------------------------------------------------------------------

from flask_restx import fields, reqparse
from end_point.api.restplus import api
from end_point.business.crawler_rss import config

# NOTE : in Flask-RESTPlus things are mixed up between api.model and api.parser
#        we have to define the same things twice depending on whether we want JSON for POST or the querystring for GET
#        validation for input arguments
#        Should wait for marshmallow integration in Flask-RESTPlus

# Get crawler_rss

crawler_rss_arg_json_model = api.schema_model('RSSCrawlerRequest', {
    "properties":{
        config.RSS_URL: {
            "type":"string"
        },
    },
    "type": "object"
})
print(type(crawler_rss_arg_json_model))

crawler_rss_parser = reqparse.RequestParser()
crawler_rss_parser.add_argument(config.RSS_URL, required=True, help='The url', type=str)
print(type(crawler_rss_parser))

----------------------------------------------------------------------------------------------------------------------------

from flask_restx import fields
from end_point.api.restplus import api
from end_point.business.crawler_rss import config

rss_fields = api.model(
    "RSS",
    {
        config.RSS_DOCUMENTID_FIELD: fields.String(required=True),
        config.RSS_COLLECTOR_ID_FIELD: fields.String(required=False),
        config.RSS_CREATEDAT_FIELD: fields.DateTime(
            dt_format="iso8601",
            required=False,
            description="created date in UTC",
            example="2016-12-23T00:00:00Z",
        ),
        config.RSS_TEXT_FIELD: fields.String(required=False),
        config.RSS_SOURCE_FIELD: fields.String(required=False),
        config.RSS_URL_FIELD: fields.String(required=False),
        config.RSS_LANGUAGE_FIELD: fields.String(required=False),
        config.RSS_TAGS_FIELD: fields.List(fields.String, required=False),
        config.RSS_TERM_FIELD_TAG: fields.String(required=False),
        config.RSS_TEXT_LOCATION_FIELD: fields.List(fields.Float, required=False),
        config.RSS_LOCATION_FIELD: fields.List(fields.Float, required=False),
    },
)

rss_crawler_fields = api.model(
    "CrawlerRSS",
    {
        "message": fields.String(required=True),
        config.RSS_LIST: fields.List(fields.Nested(rss_fields, skip_none=True)),
    },
)

rss_crawler_job = api.model(
    "crawler_rss",
    {
        "id": fields.String(required=True, description="job id"),
        "status": fields.String(
            required=False,
            description="job status",
            default=None,
            example="STARTED",
            enum=(
                "SUCCESS",
                "PENDING",
                "ERROR",
                "STARTED",
                "FAILURE",
                "RETRY",
                "REVOKED",
            ),
        ),
        "result": fields.Nested(
            rss_crawler_fields, required=False, description="job result", default=None
        ),
        "desc": fields.String(
            required=False, description="descriptive information", default=""
        ),
    },
)

-----------------------------------------------------------------------------------------------------------------------------------------------------------

import logging.config
import os
from flask import Flask, Blueprint
from werkzeug.middleware.proxy_fix import ProxyFix
from end_point import settings
from end_point.api.restplus import api
from end_point.api.hello_service.endpoints.hello_service import ns as wo_hello_namespace

from end_point.api.crawler_rss_service.endpoints.crawler_rss_service import ns as wo_crawler_rss_namespace

project_root = os.path.abspath(os.path.dirname(__file__))
frontapp = Flask(os.environ['FLASK_APP_NAME'])
frontapp.wsgi_app = ProxyFix(frontapp.wsgi_app)
logging.config.fileConfig(project_root + os.sep + 'logging.conf')
log = logging.getLogger(__name__)

def configure_app(flask_app):
    #flask_app.config['SERVER_NAME'] = settings.FLASK_SERVER_NAME
    flask_app.config['SWAGGER_UI_DOC_EXPANSION'] = settings.RESTPLUS_SWAGGER_UI_DOC_EXPANSION
    flask_app.config['RESTPLUS_VALIDATE'] = settings.RESTPLUS_VALIDATE
    flask_app.config['RESTPLUS_MASK_SWAGGER'] = settings.RESTPLUS_MASK_SWAGGER
    flask_app.config['ERROR_404_HELP'] = settings.RESTPLUS_ERROR_404_HELP

def initialize_app(flask_app):
    configure_app(flask_app)

    blueprint_api = Blueprint('api', __name__, url_prefix='/api')
    api.init_app(blueprint_api)

    api.add_namespace(wo_hello_namespace)
    api.add_namespace(wo_crawler_rss_namespace)
    flask_app.register_blueprint(blueprint_api)

if __name__ == "__main__":
    initialize_app(frontapp)
    frontapp.debug = settings.FLASK_DEBUG
    frontapp.run(host = '0.0.0.0', port = int(os.environ["FRONTEND_PORT"]))

Error in the web browser: [screenshot of the error shown when accessing the Swagger endpoint]


peter-doggart commented 2 years ago

I haven't had a chance to try and run your code, but I notice you have a schema_model defined:

crawler_rss_arg_json_model = api.schema_model('RSSCrawlerRequest', {
    "properties":{
        config.RSS_URL: {
            "type":"string"
        },
    },
    "type": "object"
})

I have had nothing but issues trying to use schema_model myself. Can you redefine this using the api.model syntax and see if that solves it?
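
Something along these lines, as a quick sketch (assuming config.RSS_URL is just the string key of the JSON body, as in your parsers.py):

from flask_restx import fields
from end_point.api.restplus import api
from end_point.business.crawler_rss import config

# Same request body declared with api.model instead of api.schema_model
crawler_rss_arg_json_model = api.model('RSSCrawlerRequest', {
    config.RSS_URL: fields.String(required=True, description='URL of the RSS feed'),
})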

luisgg98 commented 2 years ago

Hi @peter-doggart! I've managed to find the reason for the error. Checking one of the serializers.py files, I realized there was an error in an object property definition: RSS_DOCUMENTID_FIELD: fields.String(fields.String). That was why "Object of type String is not JSON serializable" was raised. I have also followed your advice and changed the way I define the schema:

crawler_rss_arg_json_model = api.model(
    "RSSCrawlerRequest", {config.RSS_URL: fields.List(fields.String)}
)
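
For completeness, the serializer mistake boils down to something like this (a sketch using a hypothetical "documentId" key in place of config.RSS_DOCUMENTID_FIELD; as far as I can tell the first positional argument of a field is its default value):

from flask_restx import fields

# Broken: the fields.String class itself is stored as the field's default and
# later ends up in the generated swagger.json, which json.dumps cannot serialize.
broken = {"documentId": fields.String(fields.String)}

# Fixed: keyword arguments only.
fixed = {"documentId": fields.String(required=True)}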

Thank you very much!

AFlowOfCode commented 2 years ago

I'd like to comment to help someone avoid (or fix) a simple user error that can lead to a similar problem, with the Swagger endpoint failing and this message displayed:

Failed to load API definition - Fetch error - Internal server error /swagger.json

In the logs, at the end of the stack trace, it will say TypeError: Object of type String is not JSON serializable (or some other object type, such as DateTime; it will be whichever field type is defined first in the schema).

The tricky part is that all tests pass and the API works; it's only the spec that can't be generated. For me this happened when I accidentally omitted an argument from my response declaration:

@api.response(200, my_openapi_schema)  # incorrect
@api.response(200, 'Record retrieved', my_openapi_schema)  # correct

The problem arises from attempting to serialize the entire schema into the response text string.
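
Concretely, the description has to be the second positional argument and the model the third (or passed as model=). A minimal sketch, with a hypothetical Record model standing in for my_openapi_schema:

from flask_restx import Namespace, Resource, fields

ns = Namespace('records')
my_openapi_schema = ns.model('Record', {'id': fields.String(required=True)})

@ns.route('/<string:id>')
class Record(Resource):
    # Description second, model third (or model=my_openapi_schema); passing the
    # model where the description string belongs is what breaks swagger.json.
    @ns.response(200, 'Record retrieved', my_openapi_schema)
    def get(self, id):
        return {'id': id}, 200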