Bryant-Yang opened 4 years ago
we usually define all our tables in a file called db_tables.py, and then import that into alembic.

here's an example db_tables.py from a real project:
```python
from sqlalchemy import (
    Table, MetaData, Column, Integer, String, ForeignKey, UniqueConstraint,
)
from sqlalchemy.dialects.postgresql import JSONB

metadata = MetaData()

orders = Table(
    'order_placed', metadata,
    Column('surrogate_id', Integer, primary_key=True, autoincrement=True, nullable=False),
    Column('order_id', String(255), unique=True, index=True, nullable=False),
    Column('order_placement_time', String(255), nullable=False),
    Column('promised_before_date', String(255), nullable=True),
    Column('currency_code', String(255), nullable=False),
    ...
```
and here's the alembic config, which i think alembic by default expects you to put into a file called env.py
```python
from alembic import context
from ourproject import db_tables

# this is the Alembic Config object, which provides
# access to the values within the .ini file in use.
config = context.config

# add your model's MetaData object here
# for 'autogenerate' support
# from myapp import mymodel
# target_metadata = mymodel.Base.metadata
target_metadata = db_tables.metadata

...
```
and that's about all you need. `alembic revision --autogenerate` will now pick up on changes you make in db_tables and create migrations for you... they usually need a bit of sanity-checking / tidying, but that's pretty standard i think...
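for reference, the full autogenerate loop looks something like this (a sketch; the migration message is made up, and exact behaviour depends on your alembic.ini and env.py setup):

```shell
# generate a migration by diffing db_tables.metadata against the live database
alembic revision --autogenerate -m "add currency_code to order_placed"

# review / tidy the generated file under versions/, then apply it
alembic upgrade head

# roll back one revision if something looks wrong
alembic downgrade -1
```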
let me know if there's anything else I can do to help!
(note to self: put this in a blog post. or appendix)
got it, the point is 'target_metadata'. Thanks for the quick reply.
I read more discussions before doing more testing; it seems a little risky for wide use in production: https://github.com/cosmicpython/book/issues/154 https://github.com/sqlalchemy/sqlalchemy/issues/5027 https://github.com/sqlalchemy/sqlalchemy/issues/4666
i should probably post a reply on that sqla thread. but we've not found any problems operationally with classical mappers, and it's been about 5 years now. admittedly we don't use dataclasses much in our domain model classes.
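for anyone landing here, a minimal classical-mapping setup looks something like this (a sketch using SQLAlchemy 1.4+'s imperative style via `registry.map_imperatively`; the `Order` class and column names are made up for illustration):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine
from sqlalchemy.orm import Session, registry

metadata = MetaData()

orders = Table(
    "order_placed", metadata,
    Column("surrogate_id", Integer, primary_key=True, autoincrement=True),
    Column("order_id", String(255), unique=True, nullable=False),
)

class Order:
    """Plain domain class: no ORM base class, no metaclass magic."""
    def __init__(self, order_id):
        self.order_id = order_id

# imperative ("classical") mapping: wires the plain class to the table
mapper_registry = registry()
mapper_registry.map_imperatively(Order, orders)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with Session(engine) as session:
    session.add(Order("o-123"))
    session.commit()
    fetched = session.query(Order).one()
    print(fetched.order_id)  # -> o-123
    order_id, surrogate_id = fetched.order_id, fetched.surrogate_id
```

the domain class stays completely ORM-free; only the mapping call knows about the table.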
incidentally, you can use sqlalchemy + alembic purely for schema declarations and migrations, and then not use the ORM at all. I've built one project like that, and we're quite happy with it. The repository just uses raw SQL. but we still have a db_tables.py.
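in case it helps, a repository over raw SQL can be as small as this (a sketch using stdlib sqlite3 so it runs anywhere; `OrderRepository` and the column set are made up for illustration, and in a real project the connection would come from SQLAlchemy's engine):

```python
import sqlite3

class OrderRepository:
    """Repository that speaks raw SQL -- no ORM, just rows in and out."""

    def __init__(self, conn):
        self.conn = conn

    def add(self, order_id, currency_code):
        self.conn.execute(
            "INSERT INTO order_placed (order_id, currency_code) VALUES (?, ?)",
            (order_id, currency_code),
        )

    def get(self, order_id):
        row = self.conn.execute(
            "SELECT order_id, currency_code FROM order_placed WHERE order_id = ?",
            (order_id,),
        ).fetchone()
        return row  # (order_id, currency_code) tuple, or None if not found

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE order_placed ("
    " surrogate_id INTEGER PRIMARY KEY AUTOINCREMENT,"
    " order_id TEXT UNIQUE NOT NULL,"
    " currency_code TEXT NOT NULL)"
)
repo = OrderRepository(conn)
repo.add("o-42", "EUR")
print(repo.get("o-42"))  # -> ('o-42', 'EUR')
```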
> admittedly we don't use dataclasses much in our domain model classes
yes, me too. I haven't used dataclasses or attrs much, and I'm trying to explore some way to define domain model classes with fewer syntax limitations. (I saw you've written some classes inheriting from dict in some branch, another experiment?)
For example:

```python
import random
from dataclasses import dataclass, field
from datetime import datetime
from uuid import uuid1

@dataclass
class BaseEntityModel:
    id: int = field(default_factory=lambda: random.randrange(1, 10000))
    uuid: str = field(default_factory=lambda: uuid1().hex)
    created_at: datetime = field(default_factory=lambda: datetime.now())
    updated_at: datetime = field(default_factory=lambda: datetime.now())

    def __eq__(self, other):
        if not isinstance(other, BaseEntityModel):
            return False
        return other.id == self.id

    def __hash__(self):
        return hash(self.id)

    def __gt__(self, other):
        if self.id is None:
            return False
        if other.id is None:
            return True
        return self.id > other.id
```
if I create a business model such as User, I need to take care of the fields' default values:
```python
from typing import Optional

@dataclass
class UserModel(BaseEntityModel):
    full_name: str  # raises: non-default argument 'full_name' follows default argument
    mobile: Optional[str] = None
```
For me, it seems a little confusing to choose a good way to handle domain models in Python, compared to things like JPA in the Java world.
hmmm. well one way to keep life simple would be to not use dataclasses for Entities, only for Value Objects? just use plain python objects for your entities, with inheritance, and the interactions with sqlalchemy will be more predictable...
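a plain-object version of the same entity base, to show there's not much boilerplate lost (a sketch, stdlib only; `Entity`/`User` are illustrative names):

```python
from datetime import datetime
from uuid import uuid1

class Entity:
    """Plain-python entity base: identity-based equality, no dataclass machinery."""

    def __init__(self, id=None):
        self.id = id
        self.uuid = uuid1().hex
        self.created_at = datetime.now()

    def __eq__(self, other):
        if not isinstance(other, type(self)):
            return NotImplemented
        return self.id is not None and self.id == other.id

    def __hash__(self):
        return hash(self.id)

class User(Entity):
    def __init__(self, user_name, full_name=None, id=None):
        super().__init__(id=id)
        self.user_name = user_name
        self.full_name = full_name

a = User("ada", id=1)
b = User("ada2", id=1)
print(a == b)  # -> True: same identity, differing attributes don't matter
```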
I only found time to write this code a few months ago. It uses auto_attribs + kw_only from attrs to make the model definition more flexible.
```python
from __future__ import annotations  # lets 'Event' and self-references stay unevaluated

from datetime import datetime
from typing import List, Optional
from uuid import uuid1

from attr import Factory, attrs

# eq=False keeps the hand-written __eq__/__hash__ below
# (by default attrs generates and overwrites them)
@attrs(auto_attribs=True, kw_only=True, eq=False)
class BaseEntityModel:
    id: Optional[int] = None
    uuid: str = Factory(lambda: uuid1().hex)
    created_at: datetime = Factory(datetime.now)
    updated_at: datetime = Factory(datetime.now)
    deleted_at: Optional[datetime] = None
    deleted: bool = False
    # 'Event' is defined elsewhere in the domain model;
    # use Factory(list) rather than a bare [] mutable default
    events: List[Event] = Factory(list)

    def __eq__(self, other: BaseEntityModel):
        if not isinstance(other, BaseEntityModel):
            return False
        return other.id == self.id

    def __hash__(self):
        return hash(self.uuid)

    def __gt__(self, other: BaseEntityModel):
        if self.id is None:
            return False
        if other.id is None:
            return True
        return self.id > other.id

    def soft_remove(self):
        self.deleted = True
        self.deleted_at = datetime.now()

@attrs(auto_attribs=True, kw_only=True, eq=False)
class UserModel(BaseEntityModel):
    user_name: str
    password_hash: Optional[str] = None
    activated: bool = True
    full_name: Optional[str] = None
    mobile: Optional[str] = None

    def __eq__(self, other: UserModel):
        return super().__eq__(other) and self.user_name == other.user_name

    # defining __eq__ resets __hash__ to None, so restore it explicitly
    __hash__ = BaseEntityModel.__hash__
```
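one nit on the snippet above: `__eq__` compares by id while `__hash__` uses uuid, which breaks the rule that equal objects must have equal hashes, so two "equal" entities can coexist in a set or dict. A stdlib demonstration of the failure mode (plain class, same eq/hash split, illustrative names):

```python
from uuid import uuid1

class Entity:
    """Reproduces the bug: equality by id, hash by uuid."""

    def __init__(self, id):
        self.id = id
        self.uuid = uuid1().hex  # differs per instance

    def __eq__(self, other):
        return isinstance(other, Entity) and other.id == self.id

    def __hash__(self):
        return hash(self.uuid)  # BUG: inconsistent with __eq__

a, b = Entity(1), Entity(1)
print(a == b)       # -> True
print(len({a, b}))  # -> 2: "equal" entities land in different hash buckets
```

hashing by id (or leaving entities unhashable) keeps the contract; hashing by uuid only makes sense if `__eq__` also compares uuid.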
Thanks for the very first nice book for python developers practicing DDD. I have some detailed questions from real project practice; one of them is:

We use sqlalchemy models and flask-migrate (based on alembic) in most of our projects. The db workflow looks like: 'flask db init' --> 'flask db migrate' --> 'flask db upgrade' --> ... All the 'models' (sa models, not domain models) can be detected by alembic, as can changes to those models.

So if we switch to classical mappers, I'm not sure whether there's an easy way to keep that workflow, or maybe you already have some working code or an example build command you can show? Thanks.