Miserlou / Zappa

Serverless Python
https://blog.zappa.io/
MIT License

Random 'NoneType' object is not callable #795

Open bxm156 opened 7 years ago

bxm156 commented 7 years ago

Randomly my site will go down (perhaps once or twice a month) and display the following: { "message": "An uncaught exception happened while servicing this request. You can investigate this with the zappa tail command." }

The logs say the following:

[1492620032065] 'NoneType' object is not callable
[1492620147882] 'NoneType' object is not callable
[1492620330874] 'NoneType' object is not callable

etc. I don't see any further information in the logs that would be helpful. Is there a way I can get more detailed information about the error? It only happens about once a month or less. When it gets into this state, the only thing I have found that fixes the problem is redeploying the code.

Other side notes: my Django project uses Amazon RDS. I can see in the graph that during this time, the DB connections drop to 0, and then come back up once I have redeployed the code.

## Possible Fix

Re-deploying my lambda code fixes the issue.

## Steps to Reproduce

I haven't been able to reproduce it consistently; it just happens randomly and won't be fixed until I redeploy my code.

## Your Environment

Miserlou commented 7 years ago

Did you have to re-deploy for it to work again?

Unfortunately, this is a really hard one to debug because there's no repro. My only thoughts are that it could be AWS's fault or something to do with your IAM/account policies.

bxm156 commented 7 years ago

Yes, re-deploying the code fixed it. I have encountered this before, and I think another way I fixed it was by re-saving the function in the Lambda console: I could get it working again by adding a new env var and clicking Save.

So it makes me think the Lambda handler gets into some invalid state, and is then kept in that invalid state because the function is kept warm by the keep-warm callback.

By deploying or re-saving the function, AWS re-creates it and it's OK again.

It's hard to debug because I don't have a traceback, so I don't know where this is occurring in the code. I never saw this error when I used to run the webserver on EC2 instances. My guess is it's happening in the zappa handler code?

Do you have any recommendations on ways I could get some more logging in the lambda handler itself, to improve my chances of figuring out what is happening?

bxm156 commented 7 years ago

@Miserlou Can we add a line after https://github.com/Miserlou/Zappa/blob/master/zappa/handler.py#L506,

so that it looks like:

print(e)
print(traceback.format_exc())

I see that the exc info is in the HTTP response only when settings.DEBUG is True, which makes sense.

But can we add the traceback info to the logs as well? Since those are internal, it should be OK to add, right? My site is in production, so settings.DEBUG is False, but I would still want access to the actual exception via the logs.
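Something like this sketch is what I mean (the names here are illustrative, not Zappa's actual handler code): always write the traceback to the logs, independent of settings.DEBUG, while the HTTP response stays generic.

```python
import logging
import traceback

logger = logging.getLogger(__name__)

def report_exception(e):
    # Log the message and full traceback (this ends up in CloudWatch),
    # regardless of settings.DEBUG; the HTTP body stays generic.
    logger.error("Uncaught exception: %s", e)
    logger.error(traceback.format_exc())
    return {
        "message": "An uncaught exception happened while servicing this "
                   "request. You can investigate this with the `zappa tail` "
                   "command."
    }
```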

bxm156 commented 7 years ago

It happened again today. Doing the above, I got the following traceback:

[1492967445091] 'NoneType' object is not callable
[1492967445091] Traceback (most recent call last):
[1492967445091] File "/var/task/handler.py", line 450, in handler
[1492967445091] response = Response.from_app(self.wsgi_app, environ)
[1492967445091] File "/tmp/pip-build-c5E7tR/Werkzeug/werkzeug/wrappers.py", line 903, in from_app

I have a theory. The django CMS app I use has a feature called apphook reloads: when certain changes to the site alter the Django URL layout, django CMS forces a "server restart".

I'm still investigating.

darrell-rg commented 7 years ago

Theory: "Packaging project as zip.." step is occasionally truncating files.

I encountered this problem today. In my case the cause was the zipped-up app.py being truncated to 222 lines when it was supposed to be 270 lines. This caused "invalid syntax" once, and then "'NoneType' object is not callable" for subsequent calls. I discovered this by downloading the zipped-up Lambda code from the AWS console, unzipping it, and examining app.py. Sure enough, it was truncated.

A zappa update re-zipped and re-deployed the Lambda function and all was fixed. I have encountered this error at least twice. I suspect some sort of race condition or a missing file flush during the zip-up step.

Log sample:

invalid syntax (app.py, line 222): SyntaxError
Traceback (most recent call last):
  File "/var/task/handler.py", line 507, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 238, in lambda_handler
    handler = cls()
  File "/var/task/handler.py", line 127, in __init__
    self.app_module = importlib.import_module(self.settings.APP_MODULE)
  File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/var/task/app.py", line 222
    ^
SyntaxError: invalid syntax

And then later:

'NoneType' object is not callable

bxm156 commented 7 years ago

@darrell-rg However, in my case it happens without zappa update. The code will be running fine on production, as packaged, then a few days later I will encounter this error.

darrell-rg commented 7 years ago

I added this code after zipf.close() in https://github.com/Miserlou/Zappa/blob/master/zappa/core.py#L565. It attempts to check the validity of the zip file. So far I have not been able to reproduce the suspected file truncation. @bxm156 it is possible that AWS might be mangling the zip file on their end, but that seems unlikely.

print("Verifying {} ..".format(zip_fname))
import zlib  # zipfile and os are already imported in zappa/core.py
with zipfile.ZipFile(zip_path, 'r') as zipf:
    bad_member = zipf.testzip()  # first member with a bad CRC, or None
    if bad_member is not None:
        print("Zip file fails on {}".format(bad_member))
    for i in zipf.infolist():
        tpp = os.path.join(temp_project_path, i.filename)
        if os.path.exists(tpp):
            if os.path.getsize(tpp) != i.file_size:
                print("ERROR: DISK AND ZIP SIZES DO NOT MATCH: fn={} disksize={} zipsize={}".format(
                    i.filename, os.path.getsize(tpp), i.file_size))
            with open(tpp, 'rb') as f:
                crc = zlib.crc32(f.read()) & 0xffffffff
                if crc != i.CRC:
                    print("ERROR: DISK AND ZIP CRC DO NOT MATCH: fn={} diskCRC={} zipCRC={}".format(
                        i.filename, crc, i.CRC))
bxm156 commented 7 years ago

@darrell-rg If AWS is, they are doing it randomly, because it worked in the beginning. I would say that must be a pretty remote chance, though.

One suspicion is some type of naming conflict. My stuff uses Django REST framework, which defines its own Response object, different from the werkzeug Response object.

I know that when the django CMS app forces a reload, it does some threading stuff and reloads everything in sys.modules. I'm wondering if that's causing the werkzeug Response class to get replaced with the Django REST framework version.

I don't have a lot of understanding on how Python manages its imports, but I'm gonna try a few things and see what happens. The hard part is I haven't been able to reproduce this consistently.
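To illustrate the suspicion in isolation (using the stdlib json module as a stand-in for werkzeug; this is a demonstration, not Zappa code): once a module is purged from sys.modules and re-imported, code holding pre-reload references and freshly-importing code see two different module objects.

```python
import sys

import json
old_module = sys.modules["json"]
old_dumps = json.dumps           # reference captured before the "reload"

del sys.modules["json"]          # what an aggressive reloader might do
import json                      # re-executes the module: new module object

print(json is old_module)        # False: two versions of "the same" module
print(old_dumps is json.dumps)   # False: stale references keep the old one
```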

raduccio commented 7 years ago

I stopped having this issue when I stopped using the slim_handler: true option. Not sure if this is it, but I never experienced this issue for about a month until I started using the slim_handler; after that it happened about every day.

revanthgopi commented 7 years ago

@Miserlou Any update on this? I started facing this issue after using the slim_handler: true option.

manan commented 7 years ago

I am facing similar issues as well! It works fine on localhost but gives me the same error at a particular route. All the tables show up on the admin console every time ... I don't know what's wrong!

GeorgianaPetria commented 6 years ago

Same problem. Did you find any solution?

bxm156 commented 6 years ago

I too turned off slim_handler, and the issue hasn't happened again.

GeorgianaPetria commented 6 years ago

I tried doing this with a new virtualenv and it works indeed. Thanks for the suggestion! However, the size of my environment seems to be around ~300 MB unzipped and AWS is complaining. Is there any solution for large packages if I'm deactivating slim_handler?

bxm156 commented 6 years ago

@GeorgianaPetria Make sure you're excluding any directories or files you don't need, like build artifacts, node_modules, static/, vendor/, images, etc. Also remove any unneeded packages from your virtual environment.
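For reference, Zappa's `exclude` setting takes a list of patterns to leave out of the package, so this trimming can live in zappa_settings.json. A sketch (the stage name and patterns here are illustrative):

```json
{
    "production": {
        "exclude": ["node_modules/*", "static/*", "vendor/*", "build/*", "*.png", "*.jpg"]
    }
}
```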

GeorgianaPetria commented 6 years ago

I only have the zappa examples in my virtual env: https://github.com/Miserlou/Zappa/tree/master/example

The biggest part of the env is scipy and sklearn; together these two account for about 60 MB. I need them in order to deploy my machine learning model.

Why is slim_handler: true not working? Unless I'm missing something, I don't think there's any chance I can get my zip under 50 MB. Any other ideas for using Zappa in this scenario?

robwatkiss commented 6 years ago

I'm also facing this issue when using the slim_handler option; reducing the package size is not looking easy. Like @bxm156, I am also running Django REST framework.

The app will run with no issues for a while and then, seemingly randomly, give up and throw the following error:

{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.'}

Traceback (most recent call last):
  File "/var/task/handler.py", line 434, in handler
    response = Response.from_app(self.wsgi_app, environ)
  File "/var/task/werkzeug/wrappers.py", line 903, in from_app
    return cls(*_run_wsgi_app(app, environ, buffered))
  File "/var/task/werkzeug/test.py", line 884, in run_wsgi_app
    app_rv = app(environ, start_response)
TypeError: 'NoneType' object is not callable

After a while (potentially when that lambda node is replaced?) it will come back to life again.

ghost commented 6 years ago

How do you turn off slim_handler?

jasonrhaas commented 6 years ago

@yamat124 in the zappa_settings.json file:

For example:

{
    "dev": {
        "app_function": "application.application",
        "aws_region": "us-west-2",
        "s3_bucket": "zappa-y5ideuneb",
        "slim_handler": false
    }
}
ghost commented 6 years ago

@jasonrhaas Thanks for the response. I added that and redeployed, but I'm still getting the same error. I'm also using Python 3.6.1.

rafapetter commented 6 years ago

I'm facing the same problem as @GeorgianaPetria, any updates?

darrell-rg commented 6 years ago

Finally found the cause of my issue. A build script that ran before zappa included a sed command that did not flush the output buffer, which sometimes truncated my app.py file. If you are experiencing the variant of this issue that goes away when you re-deploy, check your build scripts for this sort of race condition.

bxm156 commented 6 years ago

In my case, I don't have any build scripts.


tista3 commented 6 years ago

I found the cause of my problem with 'NoneType' is not callable. I had .pyc files among some of my source files that became part of the zip. Deleting them and running zappa update solved my problem. I found an error like this https://teamtreehouse.com/community/importerror-bad-magic-number-in-time-bx03xf3rn and it led me to the .pyc files.

tista3 commented 6 years ago

Would it be possible and safe to ignore .pyc files from the source project directory? Deleting .pyc files from my source project and running zappa update solves 'NoneType' object is not callable every time for me. I am using the Python 3.6 runtime for Lambdas, but every time I accidentally run the Flask app locally with Python 2, the .pyc files are created and the problem emerges in AWS after zappa update.

Shy commented 6 years ago

Not sure if this is related, but I ran into this issue and resolved it by removing the cffi library from my Python dependencies. I switched from flask-mikasa to flask-markdown and removed each extra dependency manually, redeploying each time. It looks like in my specific instance cffi was the culprit.

scorring commented 6 years ago

Thank you @tista3, your solution was the one for me

jorotenev commented 6 years ago

(This issue is the first Google result for the NoneType exception.) In my case the issue was caused by using a remote config (on S3) with one of the values being a number. This resulted in an "Environment variable keys must be non-unicode" error, which I saw in zappa tail.

peterburlakov commented 6 years ago

Just got the same error... I'm launching Django projects in Lambdas, and this is not my first time with this great project Zappa :) But it is the first time I've found that zappa package demo -o app.zip packaged my src and libs but left out one of my folders 0_0. This time I just renamed my src folder and that fixed the error for me... It looks like zappa package can have side effects or naming conflicts...

dimitry12 commented 5 years ago

I had the same problem. Just 'NoneType' object is not callable and nothing else in the logs.

"Refreshing" my virtualenv solved the problem:

pipenv --rm
rm Pipfile.lock
pipenv install
pipenv install --dev

I decided to do that because I noticed that Zappa was packaging some modules that I had clearly uninstalled before. So maybe it's pipenv that caused the problem.

eexwhyzee commented 5 years ago

If anyone is still having issues with this: increasing the memory size in the zappa config resolved it for me.

dtnewman commented 5 years ago

I'm still getting this issue. It seems to be related somehow to requests to S3 when either slim_handler=true is set or if I set remote_env.

I notice that every time I have this issue, if I tail the logs, I see that just prior to it happening, there was a failed request to my related S3 bucket.

[1547066147526] Could not load remote settings file. Connect timeout on endpoint URL: "https://<MY-BUCKET>.s3.amazonaws.com/config_secrets.json"
[1547066148246] [INFO] 2019-01-09T20:35:48.246Z 6b27ac21-144d-11e9-9d70-b7d27f0de1f9 Detected environment to be AWS Lambda. Using synchronous HTTP transport.
[1547066149758] 'NoneType' object has no attribute '_instantiate_plugins': AttributeError
Traceback (most recent call last):
  File "/var/task/handler.py", line 583, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 247, in lambda_handler
    handler = cls()
  File "/var/task/handler.py", line 139, in __init__
    self.app_module = importlib.import_module(self.settings.APP_MODULE)
  File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "/var/task/manage.py", line 17, in <module>
    app = create_app(config_object=config_object)
  File "/var/task/forecasting/app.py", line 28, in create_app
    init_engine(app.config['SQLALCHEMY_DATABASE_URI'])
  File "/var/task/forecasting/database.py", line 31, in init_engine
    engine = sqlalchemy.create_engine(uri, **kwargs)
  File "/var/task/sqlalchemy/engine/__init__.py", line 425, in create_engine
    return strategy.create(*args, **kwargs)
  File "/var/task/sqlalchemy/engine/strategies.py", line 52, in create
    plugins = u._instantiate_plugins(kwargs)
AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'
[1547066154343] [DEBUG] 2019-01-09T20:35:54.342Z 6fc96e99-144d-11e9-b9a7-899756dfe16b Exception received when sending HTTP request.
Traceback (most recent call last):
  File "/var/task/urllib3/connection.py", line 159, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw)
  File "/var/task/urllib3/util/connection.py", line 80, in create_connection
    raise err
  File "/var/task/urllib3/util/connection.py", line 70, in create_connection
    sock.connect(sa)
socket.timeout: timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/task/botocore/httpsession.py", line 258, in send
    decode_content=False,
  File "/var/task/urllib3/connectionpool.py", line 638, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "/var/task/urllib3/util/retry.py", line 343, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/var/task/urllib3/packages/six.py", line 686, in reraise
    raise value
  File "/var/task/urllib3/connectionpool.py", line 600, in urlopen
    chunked=chunked)
  File "/var/task/urllib3/connectionpool.py", line 343, in _make_request
    self._validate_conn(conn)
  File "/var/task/urllib3/connectionpool.py", line 839, in _validate_conn
    conn.connect()
  File "/var/task/urllib3/connection.py", line 301, in connect
    conn = self._new_conn()
  File "/var/task/urllib3/connection.py", line 164, in _new_conn
    (self.host, self.timeout))
urllib3.exceptions.ConnectTimeoutError: (<botocore.awsrequest.AWSHTTPSConnection object at 0x7f60c62776a0>, 'Connection to <MY-BUCKET>.s3.amazonaws.com timed out. (connect timeout=60)')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/task/botocore/endpoint.py", line 200, in _do_get_response
    http_response = self._send(request)
  File "/var/task/botocore/endpoint.py", line 244, in _send
    return self.http_session.send(request)
  File "/var/task/botocore/httpsession.py", line 282, in send
    raise ConnectTimeoutError(endpoint_url=request.url, error=e)
botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://<MY-BUCKET>.s3.amazonaws.com/config_secrets.json"
[1547066154343] [DEBUG] 2019-01-09T20:35:54.343Z 6fc96e99-144d-11e9-b9a7-899756dfe16b Event needs-retry.s3.GetObject: calling handler <botocore.retryhandler.RetryHandler object at 0x7f60c6629ba8>
[1547066154344] Could not load remote settings file. Connect timeout on endpoint URL: "https://<MY-BUCKET>.s3.amazonaws.com/config_secrets.json"
[1547066155229] [INFO] 2019-01-09T20:35:55.229Z 6fc96e99-144d-11e9-b9a7-899756dfe16b Detected environment to be AWS Lambda. Using synchronous HTTP transport.
[1547066157034] 'NoneType' object has no attribute '_instantiate_plugins': AttributeError
Traceback (most recent call last):
  File "/var/task/handler.py", line 583, in lambda_handler
    return LambdaHandler.lambda_handler(event, context)
  File "/var/task/handler.py", line 247, in lambda_handler
    handler = cls()
  File "/var/task/handler.py", line 139, in __init__
    self.app_module = importlib.import_module(self.settings.APP_MODULE)
  File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "/var/task/manage.py", line 17, in <module>
    app = create_app(config_object=config_object)
  File "/var/task/forecasting/app.py", line 28, in create_app
    init_engine(app.config['SQLALCHEMY_DATABASE_URI'])
  File "/var/task/forecasting/database.py", line 31, in init_engine
    engine = sqlalchemy.create_engine(uri, **kwargs)
  File "/var/task/sqlalchemy/engine/__init__.py", line 425, in create_engine
    return strategy.create(*args, **kwargs)
  File "/var/task/sqlalchemy/engine/strategies.py", line 52, in create
    plugins = u._instantiate_plugins(kwargs)
AttributeError: 'NoneType' object has no attribute '_instantiate_plugins'
peterburlakov commented 5 years ago

> I'm still getting this issue. It seems to be related somehow to requests to S3 when either slim_handler=true is set or if I set remote_env.
>
> I notice that every time I have this issue, if I tail the logs, I see that just prior to it happening, there was a failed request to my related S3 bucket.
>
> [1547066147526] Could not load remote settings file. Connect timeout on endpoint URL: "https://<MY-BUCKET>.s3.amazonaws.com/config_secrets.json"
> [1547066148246] [INFO] 2019-01-09T20:35:48.246Z 6b27ac21-144d-11e9-9d70-b7d27f0de1f9 Detected environment to be AWS Lambda. Using synchronous HTTP transport.

@dtnewman Looks like your Lambda doesn't have permission to access that S3 file. So, check that the file exists, then try to update the AWS role policy for your Lambda... Another variant: you're running the Lambda in VPC private subnets without a gateway; in that case check https://aws.amazon.com/premiumsupport/knowledge-center/internet-access-lambda-function/

dtnewman commented 5 years ago

@peterburlakov

I am running inside a VPC, but I have a gateway set up and internet access seems to work fine. The curious thing is that this happens sporadically. I am certain that it is accessing the S3 files when I first deploy, because they are needed for setup, and it seems to work just fine at first. But anywhere from a few minutes to a few days later, an S3 access issue arises somewhat randomly and the system goes down (similar to the original poster above) until I redeploy.

EDIT: It seems like others are reporting that the problem happens when the slim_handler or remote_env settings are turned on. I'm wondering if this has to do with the code in handler.py making calls to S3 that fail and aren't retried, which then leads to the app not loading properly. It's weird that so many requests to S3 would fail in the first place, though, since that service is typically pretty stable.
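If that's right, retrying the remote-settings fetch with backoff instead of failing the cold start outright would mask the blip. This is a hypothetical sketch, not Zappa's actual handler code; fetch stands in for whatever boto3 call downloads the settings file:

```python
import time

def fetch_with_retry(fetch, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fetch(); on failure, back off exponentially and try again."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as exc:  # e.g. botocore's ConnectTimeoutError
            last_exc = exc
            sleep(base_delay * (2 ** attempt))
    raise last_exc
```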

peterburlakov commented 5 years ago

@dtnewman In my experience, S3 is more stable than my solutions, and probably than yours :) So you can try to debug when it fails, without redeploying. Maybe add a separate route and call it to check what's wrong; if part of your code can start and respond without reading any config, that can be used for debugging.

coco-egaliciar commented 5 years ago

> I too turned off slim_handler, and the issue hasn't happened again.

But what happens if my package is >50 MB? :(

bxm156 commented 5 years ago

FWIW: the 50 MB limit is somewhat soft. I have uploaded packages as large as 60 MB. I'm not sure what the real, enforced cutoff is.

I adopted a microservice methodology, so I have multiple, small lambda functions.


NikolaJankovic commented 5 years ago

You're also going to want to delete __pycache__ directories

isurupalliyaguru commented 5 years ago

I'm hitting the same issue. Tried "slim_handler": false but it did not help.

isurupalliyaguru commented 5 years ago

For anyone looking for a solution, this might help:

1) If you are deploying from a Mac, consider doing it from a Docker container (so it can get the Linux dependencies).
2) You can also be missing some dependencies which were not required on your Mac but are required on Linux. In my case I had to add pycrypto to my requirements file.

Hope this helps.

leonardsaers commented 5 years ago

I'm having the same problem. I first got it to work; then I added the dependencies sklearn, numpy, and scipy and set "slim_handler": true. After that I get: 'NoneType' object is not callable

evonier commented 5 years ago

Same here; slim_handler is set to true (and no AWS environment variables are specified). This happens frequently, yet is not reproducible: sometimes directly after zappa deploy, and sometimes after x minutes/hours. Are there any updates on this topic? This is stopping us from using Zappa in production, unfortunately.

AlmogCohen commented 5 years ago

To everyone else here that might have some issues: by commenting out one line at a time and uploading again and again, I realized that an import statement at the top of app.py was failing to execute, failing silently in zappa, and then raising the odd 'NoneType' object is not callable exception. Surrounding the failing import statement with:

try:
    from failing_import import something
except Exception as e:
    print("Failure with {}".format(e))

I was able to find the actual issue in the dependent package, solve it, and upload my zappa function with no issues this time.

I'm not sure why the import exception was not visible to zappa in the first place. BTW, the actual issue was another package installing the typing module while running on Python 3.7... For some reason it got broken only on AWS.

chasek23 commented 5 years ago

For anyone else still having this issue: I was deploying just fine before setting "slim_handler": "true", which is when I started to get the error. It turns out it was because I was deploying locally from Windows.

I ultimately resolved the issue by running zappa update dev from an amazon-linux EC2 instance.

cmabastar commented 5 years ago

> I'm not sure why the import exception was not visible by zappa in the first place. BTW, the actual issue was in another package installing the typing module while running in python 3.7... For some reason it got broken only on AWS.

It seems I'm encountering the same problem as you. I am using google-measurement-protocol, which depends on the typing module. Also on py37.

AlmogCohen commented 5 years ago

@cmabastar I had control over the source code of the package that had the typing dependency, so I made it a conditional dependency based on the Python version in use. What you can do is open a PR for google-measurement-protocol so they make this change, or make your own fork.

To make the typing module a conditional dependency, you can see an example in this PR (I have not tested the method proposed there).

Timbabs commented 4 years ago

> It happened again today. Doing the above, I got the following traceback:
>
> [1492967445091] 'NoneType' object is not callable
> [1492967445091] Traceback (most recent call last):
> [1492967445091] File "/var/task/handler.py", line 450, in handler
> [1492967445091] response = Response.from_app(self.wsgi_app, environ)
> [1492967445091] File "/tmp/pip-build-c5E7tR/Werkzeug/werkzeug/wrappers.py", line 903, in from_app

Well, I came across this issue lately and took time to investigate the problem. The issue is that Lambda is timing out, at least in my case, and I believe in many of the cases here too. Go over the logs in CloudWatch: unlike zappa tail, CloudWatch separates the logs from each container created by Lambda into streams. I observed where a faulty container (one returning NoneType) diverged from a normal one. This is what happens.

Once a request comes in and all the fired-up containers are in use, a new container is created. During initialization, because we're downloading files from S3 to /tmp, and also due to occasionally lagged connectivity plus VPC/DB initializations, Lambda doesn't always finish setting up within the default 30s timeout set by Zappa. So the container times out and starts receiving requests prematurely, when in reality the modules have not all been imported. Hence the NoneType error. For every NoneType error that I saw in CloudWatch, there was an import issue beforehand, which confirms this. When new requests hit that container, we see the NoneType error; when they hit other containers without this issue, it works. That explains the periodic NoneType errors we get.

Hence, this is what you can do:

1) If you can reduce the size of your code/dependencies, please do so. As others have mentioned above, if you can get your code plus dependencies under the 250 MB unzipped / 50 MB zipped limit, then you can set "slim_handler": false or remove it entirely, as the default is false.

2) If you cannot get under the AWS limit, the code you do remove still helps, because you have reduced the files (and time) the handler has to download from S3, and maybe you never hit the timeout again.

If you can't get your code base under the 250 MB unzipped / 50 MB zipped limit, or can't reduce anything at all, increase timeout_seconds from 30 to something like 60. The worst that could happen is that you hit the API Gateway timeout once, because of its 30s limit; by the next request your Lambda will already be set up and ready to serve requests, and you'll no longer see the 'NoneType' object is not callable error.
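A sketch of that last suggestion in zappa_settings.json (the stage name here is illustrative):

```json
{
    "production": {
        "timeout_seconds": 60
    }
}
```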

xoelop commented 4 years ago

Hi, I've been struggling with this for the past 2 days; my 2 cents if it helps someone. My app was working locally but not on AWS.

In my case I had already tried setting slim_handler to false, deleted all the .pyc files, ran zappa update several times (yep, it had worked in the past ¯\_(ツ)_/¯ ), undeployed and redeployed, and nothing had worked so far.

I started commenting out imports in my __init__.py file, where the app factory is defined, running zappa update and narrowing down until I found the conflicting modules.

In my case I'm reading env variables from a .env file. I found this and saw that the files where I was calling os.getenv without having called load_dotenv were failing, so I added

from dotenv import load_dotenv

load_dotenv()

to each one of those files, and that fixed it.

GregHilston commented 4 years ago

I tried all of these solutions, and the last one I tried was blowing away my venv and updating via $ zappa update, and that worked. I'm on a Mac and even had "slim_handler": true.

Trust me, try it; it takes a few minutes. This is with big packages like scikit-learn, numpy, etc.

nabazm commented 4 years ago

Same happening here. It seems to only happen when I attach a VPC to the Lambda: it gets into a bad state and cannot read the remote_env S3 files. I gave the Lambda full S3 permissions, still no luck.