Open staticdev opened 3 years ago
Thanks a lot for finding it. I've been pretty busy lately, but in August I will have time to catch up with all the bugs/issues/typos. Thanks again!
I also spotted that the REST endpoint instantiates PostgresRepo, which in turn runs this in __init__():

Base.metadata.create_all(self.engine)
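For reference, the current __init__() does roughly the following (paraphrased from the book's code, so details may differ slightly):

class PostgresRepo:
    def __init__(self, configuration):
        connection_string = (
            f"postgresql+psycopg2://{configuration['POSTGRES_USER']}:"
            f"{configuration['POSTGRES_PASSWORD']}"
            f"@{configuration['POSTGRES_HOSTNAME']}:"
            f"{configuration['POSTGRES_PORT']}"
            f"/{configuration['APPLICATION_DB']}"
        )
        self.engine = create_engine(connection_string)
        Base.metadata.bind = self.engine
        # Runs DDL with the admin credentials on every instantiation
        Base.metadata.create_all(self.engine)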
I have a suggested implementation that takes this into account: applying the principle of least privilege and need-to-know, the application user should only be able to SELECT on the table room, nothing else.
1) In the same fashion as your APPLICATION_DB, I would create an application user with two new configuration values.
config/production.json:

{
    "name": "APPLICATION_USER",
    "value": "rentomatic"
},
{
    "name": "APPLICATION_PASSWORD",
    "value": "hard2guess"
}
config/testing.json:

{
    "name": "APPLICATION_PASSWORD",
    "value": "postgres"
},
{
    "name": "APPLICATION_USER",
    "value": "postgres"
}
For the sake of simplicity, in config/testing.json the application user and password can share the same values as the postgres user.
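Just to spell out why two JSON entries are enough: manage.py exports every name/value pair from the active config file as an environment variable, so the new keys are picked up automatically. A rough sketch of that mechanism (paraphrased, the book's configure_app() may differ in details):

import json
import os

def configure_app(config):
    # Read config/<config>.json and export each name/value pair,
    # so APPLICATION_USER and APPLICATION_PASSWORD need no extra wiring
    with open(os.path.join("config", f"{config}.json")) as f:
        for variable in json.load(f):
            os.environ[variable["name"]] = variable["value"]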
2) I would remove the schema creation from PostgresRepo.__init__() and set it to use the new user:
rentomatic/repository/postgresrepo.py:

class PostgresRepo:
    def __init__(self, configuration):
        user = configuration["APPLICATION_USER"]
        password = configuration["APPLICATION_PASSWORD"]
        hostname = configuration["POSTGRES_HOSTNAME"]
        port = configuration["POSTGRES_PORT"]
        db = configuration["APPLICATION_DB"]
        connection_string = (
            f"postgresql+psycopg2://{user}:{password}@{hostname}:{port}/{db}"
        )
        self.engine = create_engine(connection_string)
        Base.metadata.bind = self.engine
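With the grant from step 4 in place, any write through this engine is rejected by Postgres itself. A quick way to verify (a sketch; the connection values are placeholders for whatever your environment uses):

import sqlalchemy
from sqlalchemy import text

# Placeholder credentials: substitute your APPLICATION_* values
engine = sqlalchemy.create_engine(
    "postgresql+psycopg2://rentomatic:hard2guess@localhost:5432/application"
)

with engine.connect() as conn:
    # SELECT is allowed by the GRANT
    print(conn.execute(text("SELECT count(*) FROM room")).scalar())
    # Any write raises sqlalchemy.exc.ProgrammingError, wrapping
    # psycopg2.errors.InsufficientPrivilege
    conn.execute(text("DELETE FROM room"))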
Same goes for the endpoint:
application/rest/room.py:

postgres_configuration = {
    "APPLICATION_USER": os.environ["APPLICATION_USER"],
    "APPLICATION_PASSWORD": os.environ["APPLICATION_PASSWORD"],
    "POSTGRES_HOSTNAME": os.environ["POSTGRES_HOSTNAME"],
    "POSTGRES_PORT": os.environ["POSTGRES_PORT"],
    "APPLICATION_DB": os.environ["APPLICATION_DB"],
}
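The rest of the view would stay as in the book, with the dictionary handed to the repository; roughly (names paraphrased from the book, they may differ in your copy):

from rentomatic.repository import postgresrepo
from rentomatic.use_cases.room_list import room_list_use_case

# Inside the /rooms view: only the configuration dictionary changes,
# the repository and use case calls are unchanged
repo = postgresrepo.PostgresRepo(postgres_configuration)
result = room_list_use_case(repo)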
3) docker/production.yml would also need to switch to the new user:
...
web:
  build:
    context: ${PWD}
    dockerfile: docker/web/Dockerfile.production
  environment:
    FLASK_ENV: ${FLASK_ENV}
    FLASK_CONFIG: ${FLASK_CONFIG}
    APPLICATION_DB: ${APPLICATION_DB}
    # The web container now receives the application credentials,
    # under the names read by application/rest/room.py
    APPLICATION_USER: ${APPLICATION_USER}
    APPLICATION_PASSWORD: ${APPLICATION_PASSWORD}
    POSTGRES_HOSTNAME: "db"
    POSTGRES_PORT: ${POSTGRES_PORT}
  command: gunicorn -w 4 -b 0.0.0.0 wsgi:app
  volumes:
    - ${PWD}:/opt/code
4) I would split this into two separate commands in manage.py:

- create_db() just creates the db (in case you are restoring a backup, for example).
- init_postgres() creates the db and the schema (in case you are deploying the application for the first time).

Notes:

- A dbname parameter was added to run_sql(): to create the application db you connect to the postgres db, while the following SQL statements run on the application db.
- _create_db() needed to be created, since a @cli.command() calling another one directly causes problems. It is also reused in test().
- _create_app_user() creates the user with read-only permission on the table room.
- postgres_objects is imported so that init_postgres() can create the schema from its Base.metadata.

manage.py:

import sqlalchemy
import rentomatic.repository.postgres_objects as postgres_objects
...

def run_sql(statements, dbname):
    conn = psycopg2.connect(
        dbname=dbname,
        ...

def _create_db():
    app_db = os.getenv("APPLICATION_DB")
    postgres_db = os.getenv("POSTGRES_DB")
    try:
        run_sql([f"CREATE DATABASE {app_db}"], postgres_db)
    except psycopg2.errors.DuplicateDatabase:
        print(f"The database {app_db} already exists and will not be recreated")

def _create_app_user():
    app_db = os.getenv("APPLICATION_DB")
    app_user = os.getenv("APPLICATION_USER")
    app_password = os.getenv("APPLICATION_PASSWORD")
    try:
        run_sql(
            [
                f"CREATE USER {app_user} WITH PASSWORD '{app_password}'",
                f"GRANT SELECT ON TABLE room TO {app_user}",
            ],
            app_db,
        )
    except psycopg2.errors.DuplicateObject:
        print(f"The user {app_user} already exists and will not be recreated.")

@cli.command()
def create_db():
    configure_app(os.getenv("APPLICATION_CONFIG"))
    _create_db()

@cli.command()
def init_postgres():
    configure_app(os.getenv("APPLICATION_CONFIG"))
    _create_db()

    app_db = os.getenv("APPLICATION_DB")
    admin_user = os.getenv("POSTGRES_USER")
    admin_password = os.getenv("POSTGRES_PASSWORD")
    host = os.getenv("POSTGRES_HOSTNAME")
    port = os.getenv("POSTGRES_PORT")

    engine = sqlalchemy.create_engine(
        f"postgresql://{admin_user}:{admin_password}@{host}:{port}/{app_db}",
        future=True,
    )
    with engine.begin() as connection:
        postgres_objects.Base.metadata.create_all(connection)

    _create_app_user()

...

def test(args):
    os.environ["APPLICATION_CONFIG"] = "testing"
    configure_app(os.getenv("APPLICATION_CONFIG"))

    cmdline = docker_compose_cmdline("up -d")
    subprocess.call(cmdline)

    cmdline = docker_compose_cmdline("logs postgres")
    wait_for_logs(cmdline, "ready to accept connections")

    _create_db()
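As a quick usage check after running ./manage.py init-postgres, the schema can be inspected through SQLAlchemy (a sketch; the connection values are placeholders for a local compose setup):

import sqlalchemy

# Placeholder admin credentials and db name
engine = sqlalchemy.create_engine(
    "postgresql+psycopg2://postgres:postgres@localhost:5432/application"
)

# After init_postgres() this should list the "room" table
print(sqlalchemy.inspect(engine).get_table_names())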
What do you think?
UPDATE: @lgiordani I added a proposed implementation of how I would add this to the book. Feel free to comment.
@staticdev I will have a look. ~Generally speaking I advise against adding new unrelated issues to an already existing one, as it makes it more complicated to manage them and to remember what to fix. Would you please move the above to a new issue?~ Forget about the previous message, I misunderstood the scope of the second part of the issue :facepalm: Thanks!
@staticdev I realised that when I simplified the approach in the second edition I removed a bit too much. The part that managed migrations is gone, and I didn't realise it was needed because I was reusing an old postgres volume with an already existing database :shrug: Thanks for pointing this out, I will edit the chapter and fix the missing parts.
Description
If you follow the book up to this part:

Now we can list the available tables with \dt and describe a table with \d

you will get no tables. This is because ./manage.py init-postgres only creates the db, and after that you are asked to add a line of data with INSERT before the tables have been created.
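Until the chapter is fixed, a minimal workaround is to create the tables manually from the metadata in postgres_objects, e.g. (a sketch, run with the admin environment variables exported):

import os
import sqlalchemy
from rentomatic.repository.postgres_objects import Base

engine = sqlalchemy.create_engine(
    f"postgresql+psycopg2://{os.environ['POSTGRES_USER']}:"
    f"{os.environ['POSTGRES_PASSWORD']}"
    f"@{os.environ['POSTGRES_HOSTNAME']}:{os.environ['POSTGRES_PORT']}"
    f"/{os.environ['APPLICATION_DB']}"
)
# Create the tables declared in postgres_objects (e.g. room) if missing
Base.metadata.create_all(engine)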
What I Did
Followed the book.