jina-ai / jina

☁️ Build multimodal AI applications with cloud-native stack
https://docs.jina.ai
Apache License 2.0

fix: skip doc attributes in __annotations__ but not in __fields__ #6035

Closed. NarekA closed this pull request 10 months ago.

NarekA commented 10 months ago

Currently, having a ClassVar attribute in an input document class breaks the application:

from typing import ClassVar

from docarray import BaseDoc


class MyDoc(BaseDoc):
    endpoint: ClassVar[str] = "my_endpoint"
    input_test: str = ""

Error:

    field_info = model.__fields__[field_name].field_info
KeyError: 'endpoint'

Goal: skip attributes that appear in __annotations__ but not in __fields__ (such as ClassVars) so that request-model construction no longer raises.
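
For context, a minimal sketch of the kind of guard this implies (assuming pydantic v1, as the .field_info access in the traceback suggests; the helper name below is hypothetical, while the actual change lives in jina/serve/runtimes/helper.py):

from typing import Type

from pydantic import BaseModel


def iter_request_fields(model: Type[BaseModel]):
    """Yield (name, field_info) pairs for real pydantic fields only."""
    for field_name in model.__annotations__:
        # ClassVar annotations appear in __annotations__, but pydantic does
        # not register them in __fields__, so indexing __fields__ directly
        # would raise KeyError: 'endpoint'.
        if field_name not in model.__fields__:
            continue
        yield field_name, model.__fields__[field_name].field_info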

codecov[bot] commented 10 months ago

Codecov Report

Patch coverage: 33.33% and project coverage change: -0.01% :warning:

Comparison: base (0569354) 77.60% vs. head (48b0e72) 77.59%. The report is 1 commit behind head on master.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #6035      +/-   ##
==========================================
- Coverage   77.60%   77.59%   -0.01%
==========================================
  Files         144      144
  Lines       13788    13790       +2
==========================================
+ Hits        10700    10701       +1
- Misses       3088     3089       +1
```

| Flag | Coverage Δ | |
|---|---|---|
| jina | `77.59% <33.33%> (-0.01%)` | :arrow_down: |

Flags with carried forward coverage won't be shown.

| Files Changed | Coverage Δ | |
|---|---|---|
| jina/serve/runtimes/helper.py | `23.01% <0.00%> (-0.38%)` | :arrow_down: |
| jina/__init__.py | `56.00% <100.00%> (ø)` | |

... and 1 file with indirect coverage changes


NarekA commented 10 months ago

@JoanFM Done. Sorry, I haven't had time to set up my dev environment.

JoanFM commented 10 months ago

@NarekA ,

Tests are failing for your PR: https://github.com/jina-ai/jina/actions/runs/6092600012/job/16533518592?pr=6035

These are the tests that use docarray>0.30.

NarekA commented 10 months ago

> @NarekA ,
>
> Tests are failing for your PR: https://github.com/jina-ai/jina/actions/runs/6092600012/job/16533518592?pr=6035
>
> These are the tests that use docarray>0.30.

I believe my latest change fixed that module. I am seeing the tests pass when I run them locally:

❯ pytest tests/unit/serve/runtimes/test_helper.py -vv
========================== test session starts ==========================
platform darwin -- Python 3.8.17, pytest-7.4.1, pluggy-1.3.0 -- /Users/narek/anaconda3/envs/jina/bin/python
cachedir: .pytest_cache
rootdir: /Users/narek/projects/jina
configfile: pytest.ini
plugins: anyio-3.7.1
collected 19 items                                                      

tests/unit/serve/runtimes/test_helper.py::test_is_specific_executor[key-False] PASSED [  5%]
tests/unit/serve/runtimes/test_helper.py::test_is_specific_executor[key_1-False] PASSED [ 10%]
tests/unit/serve/runtimes/test_helper.py::test_is_specific_executor[executor__key-True] PASSED [ 15%]
tests/unit/serve/runtimes/test_helper.py::test_is_specific_executor[exec2__key_2-True] PASSED [ 21%]
tests/unit/serve/runtimes/test_helper.py::test_is_specific_executor[__results__-False] PASSED [ 26%]
tests/unit/serve/runtimes/test_helper.py::test_is_specific_executor[__banana__-False] PASSED [ 31%]
tests/unit/serve/runtimes/test_helper.py::test_split_key_executor_name[executor__key-key-executor] PASSED [ 36%]
tests/unit/serve/runtimes/test_helper.py::test_split_key_executor_name[executor__key_1-key_1-executor] PASSED [ 42%]
tests/unit/serve/runtimes/test_helper.py::test_split_key_executor_name[executor_1__key-key-executor_1] PASSED [ 47%]
tests/unit/serve/runtimes/test_helper.py::test_parse_specific_param[param0-parsed_param0-executor] PASSED [ 52%]
tests/unit/serve/runtimes/test_helper.py::test_parse_specific_param[param1-parsed_param1-executor] PASSED [ 57%]
tests/unit/serve/runtimes/test_helper.py::test_parse_specific_param[param2-parsed_param2-executor] PASSED [ 63%]
tests/unit/serve/runtimes/test_helper.py::test_parse_specific_param[param3-parsed_param3-executor] PASSED [ 68%]
tests/unit/serve/runtimes/test_helper.py::test_get_name_from_replicas[exec1/rep-0-exec1] PASSED [ 73%]
tests/unit/serve/runtimes/test_helper.py::test_get_name_from_replicas[exec1-exec1] PASSED [ 78%]
tests/unit/serve/runtimes/test_helper.py::test_create_pydantic_model_from_schema[proto] PASSED [ 84%]
tests/unit/serve/runtimes/test_helper.py::test_create_pydantic_model_from_schema[json] PASSED [ 89%]
tests/unit/serve/runtimes/test_helper.py::test_create_empty_doc_list_from_schema[proto] PASSED [ 94%]
tests/unit/serve/runtimes/test_helper.py::test_create_empty_doc_list_from_schema[json] PASSED [100%]
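
As an aside, the invariant the fix relies on is easy to check in isolation. A minimal sketch of such a check (hypothetical test, assuming docarray>=0.30, where BaseDoc is a pydantic model):

from typing import ClassVar

from docarray import BaseDoc


class MyDoc(BaseDoc):
    endpoint: ClassVar[str] = 'my_endpoint'
    input_test: str = ''


def test_classvar_is_annotation_but_not_field():
    # ClassVar entries show up in the class annotations ...
    assert 'endpoint' in MyDoc.__annotations__
    # ... but pydantic does not register them as model fields.
    assert 'endpoint' not in MyDoc.__fields__
    assert 'input_test' in MyDoc.__fields__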

NarekA commented 10 months ago

@JoanFM ⬆️

NarekA commented 10 months ago

@JoanFM I can't tell whether the segfaults in the last test run are real errors or whether the jobs just need to be re-run. Can you take a look?

NarekA commented 10 months ago

Last execution: https://github.com/jina-ai/jina/actions/runs/6097551109/job/16545774855

Thread 0x00007f5c540d8b80 (most recent call first):
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/selectors.py", line 468 in select
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/asyncio/base_events.py", line 1823 in _run_once
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/asyncio/base_events.py", line 570 in run_forever
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/asyncio/base_events.py", line 603 in run_until_complete
  File "/home/runner/work/jina/jina/jina/orchestrate/flow/base.py", line 1989 in _wait_until_all_ready
  File "/home/runner/work/jina/jina/jina/orchestrate/flow/base.py", line 1843 in start
  File "/home/runner/work/jina/jina/jina/orchestrate/flow/builder.py", line 33 in arg_wrapper
  File "/home/runner/work/jina/jina/jina/orchestrate/orchestrator.py", line 14 in __enter__
  File "/home/runner/work/jina/jina/tests/integration/sandbox/test_sandbox.py", line 11 in test_sandbox
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/python.py", line 194 in pytest_pyfunc_call
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_callers.py", line 77 in _multicall
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_manager.py", line 115 in _hookexec
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_hooks.py", line 493 in __call__
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/python.py", line 1792 in runtest
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/runner.py", line 169 in pytest_runtest_call
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_callers.py", line 77 in _multicall
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_manager.py", line 115 in _hookexec
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_hooks.py", line 493 in __call__
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/runner.py", line 262 in <lambda>
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/runner.py", line 341 in from_call
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/runner.py", line 261 in call_runtest_hook
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/flaky/flaky_pytest_plugin.py", line 138 in call_and_report
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/runner.py", line 133 in runtestprotocol
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/runner.py", line 114 in pytest_runtest_protocol
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/flaky/flaky_pytest_plugin.py", line 94 in pytest_runtest_protocol
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_callers.py", line 77 in _multicall
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_manager.py", line 115 in _hookexec
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_hooks.py", line 493 in __call__
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/main.py", line 349 in pytest_runtestloop
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_callers.py", line 77 in _multicall
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_manager.py", line 115 in _hookexec
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_hooks.py", line 493 in __call__
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/main.py", line 324 in _main
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/main.py", line 270 in wrap_session
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/main.py", line [317](https://github.com/jina-ai/jina/actions/runs/6097551109/job/16545774855#step:8:318) in pytest_cmdline_main
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_callers.py", line 77 in _multicall
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_manager.py", line 115 in _hookexec
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/pluggy/_hooks.py", line 493 in __call__
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/config/__init__.py", line 168 in main
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/config/__init__.py", line 191 in console_main
  File "/opt/hostedtoolcache/Python/3.8.17/x64/bin/pytest", line 8 in <module>
/home/runner/work/_temp/d40278d6-4c92-44f4-8c21-3357dbff6133.sh: line 1:  7638 Segmentation fault      (core dumped) pytest --suppress-no-test-exit-code --force-flaky --min-passes 1 --max-runs 5 --cov=jina --cov-report=xml --timeout=600 -v -s --ignore-glob='tests/integration/docarray_v2/*' --ignore-glob='tests/integration/stateful/*' --ignore-glob='tests/integration/hub_usage/dummyhub*' tests/integration/sandbox/
Error: Process completed with exit code 139.

JoanFM commented 10 months ago

Thanks for the contribution @NarekA !!!