This package provides the core functionality for pydantic validation and serialization.
Pydantic-core is currently around 17x faster than pydantic V1. See `tests/benchmarks/` for details.
## Example of direct usage

NOTE: You should not need to use pydantic-core directly; instead, use pydantic, which in turn uses pydantic-core.
```py
from pydantic_core import SchemaValidator, ValidationError

v = SchemaValidator(
    {
        'type': 'typed-dict',
        'fields': {
            'name': {
                'type': 'typed-dict-field',
                'schema': {
                    'type': 'str',
                },
            },
            'age': {
                'type': 'typed-dict-field',
                'schema': {
                    'type': 'int',
                    'ge': 18,
                },
            },
            'is_developer': {
                'type': 'typed-dict-field',
                'schema': {
                    'type': 'default',
                    'schema': {'type': 'bool'},
                    'default': True,
                },
            },
        },
    }
)

r1 = v.validate_python({'name': 'Samuel', 'age': 35})
assert r1 == {'name': 'Samuel', 'age': 35, 'is_developer': True}

# pydantic-core can also validate JSON directly
r2 = v.validate_json('{"name": "Samuel", "age": 35}')
assert r1 == r2

try:
    v.validate_python({'name': 'Samuel', 'age': 11})
except ValidationError as e:
    print(e)
    """
    1 validation error for model
    age
      Input should be greater than or equal to 18
      [type=greater_than_equal, context={ge: 18}, input_value=11, input_type=int]
    """
```
## Getting Started

You'll need rust stable installed, or rust nightly if you want to generate accurate coverage.

With rust and python 3.8+ installed, compiling pydantic-core should be possible with roughly the following:
```bash
# clone this repo or your fork
git clone git@github.com:pydantic/pydantic-core.git
cd pydantic-core
# create a new virtual env
python3 -m venv env
source env/bin/activate
# install dependencies and install pydantic-core
make install
```
That should be it; the example shown above should now run.
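As a quick sanity check that the compiled extension imports (a minimal sketch, assuming only that `pydantic_core` exposes a `__version__` string):

```py
# confirm the freshly built extension imports, and report its version
import pydantic_core

print(pydantic_core.__version__)
```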
You might find it useful to look at `python/pydantic_core/_pydantic_core.pyi` and `python/pydantic_core/core_schema.py` for more information on the python API; beyond that, the `tests/` directory provides a large number of examples of usage.
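For example, `core_schema.py` provides typed helper functions as an alternative to writing raw schema dicts. Here is a rough sketch of the earlier example rebuilt with those helpers (helper names assumed from that module):

```py
from pydantic_core import SchemaValidator, core_schema

# the same typed-dict schema as the example above, built with helpers
schema = core_schema.typed_dict_schema(
    {
        'name': core_schema.typed_dict_field(core_schema.str_schema()),
        'age': core_schema.typed_dict_field(core_schema.int_schema(ge=18)),
    }
)
v = SchemaValidator(schema)
assert v.validate_python({'name': 'Samuel', 'age': 35}) == {'name': 'Samuel', 'age': 35}
```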
If you want to contribute to pydantic-core, you'll want to use some other make commands:

* `make build-dev` to build the package during development
* `make build-prod` to perform an optimised build for benchmarking
* `make test` to run the tests
* `make testcov` to run the tests and generate a coverage report
* `make lint` to run the linter
* `make format` to format python and rust code
* `make` to run `format build-dev lint test`
## Profiling

It's possible to profile the code using the `flamegraph` utility from `flamegraph-rs`. (Tested on Linux.) You can install this with `cargo install flamegraph`.
Run `make build-profiling` to install a release build with debugging symbols included (needed for profiling).

Once that is built, you can profile pytest benchmarks with (e.g.):
```bash
flamegraph -- pytest tests/benchmarks/test_micro_benchmarks.py -k test_list_of_ints_core_py --benchmark-enable
```
The `flamegraph` command will produce an interactive SVG at `flamegraph.svg`.
## Releasing

When bumping the package version, don't just edit `Cargo.toml` on Github; you need both `Cargo.toml` and `Cargo.lock` to be updated. When creating the release tag, enter the new tag as `v<the.new.version>` and select "Create new tag on publish" when the option appears.