gratipay / gratipay.com

Here lieth a pioneer in open source sustainability. RIP
https://gratipay.news/the-end-cbfba8f50981
MIT License
1.12k stars 310 forks

use an ORM #129

Closed chadwhitacre closed 11 years ago

chadwhitacre commented 11 years ago

From @mjallday via twitter.

lyndsysimon commented 11 years ago

Do you have a specific ORM in mind?

mjallday commented 11 years ago

SQLAlchemy seems like it would be a good fit. It's the best ORM I've used in the Python world (and many other worlds too)

chadwhitacre commented 11 years ago

+1 from @mtamizi via Twitter

sigmavirus24 commented 11 years ago

+1 for SQLAlchemy

artagnon commented 11 years ago

+1 for SQLAlchemy

zzzeek commented 11 years ago

let me just give you a heads up, after reading the "SQL as a first class citizen" comment in the readme, that if you want to work with SQL and tuples as results, you can skip the ORM and just use the Core: http://docs.sqlalchemy.org/en/latest/core/tutorial.html . The Querying system within the ORM can do all of the same things and using the ORM does not preclude using the Core in any case, but if you still wince at building class hierarchies (I haven't looked at gittip's source to see how much it is OO versus procedural) you can stick with all SQL/tuples by just using Core.

chadwhitacre commented 11 years ago

I like the idea of solidifying Gittip's love triangle with @zzzeek and @balanced, and of making it easier to contribute to Gittip. I'm really leery of ORMs, though, because I want to have a high degree of understanding of and control over how I'm using my database--the README actually says "database as a first-class citizen." ORMs by definition are designed to hide the database from developers. That loss of knowledge and control frightens me. And as far as using a Python object model to build SQL, I don't see value there over strings. :-(

@mjallday @sigmavirus24 @artagnon Could you help me understand better why Gittip is hard to contribute to without an ORM? Is it because Postgres is scary? Or too low-level? Or unfamiliar? Or ... ? Are there particular parts of the Gittip codebase that you've looked at and said, "Ack!" about due to the lack of an ORM?

I'm keen for others to contribute to Gittip. I need help seeing that porting to the SQLAlchemy ORM is the right way to encourage that, though.

steveklabnik commented 11 years ago

I am not really contributing to Gittip, but I will say that I use ORMs because I'm lazy, just like any other abstraction. Yes, I could write some SQL by hand, but that's tedious and error-prone.

Client.where(:created_at => (Time.now.midnight - 1.day)..Time.now.midnight)

over

Client.find_by_sql("SELECT * FROM clients WHERE clients.created_at BETWEEN '2008-12-21 00:00:00' AND '2008-12-22 00:00:00'")

Or parameterizing queries:

Client.where(:username => some_username).first

over

Client.find_by_sql(["SELECT * FROM clients WHERE username = ? LIMIT 1", some_username])

Or whatever it is. I don't have to think about it, so I don't.

(and I've heard SQLAlchemy is super baller.)
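For comparison in Gittip's own language, here is roughly what those two queries look like with SQLAlchemy Core. The `clients` table, its columns, and the data are all invented for illustration, and this uses SQLAlchemy 1.4+ style; it's a sketch, not the thread's actual code.

```python
import datetime
from sqlalchemy import (Column, DateTime, Integer, MetaData, String, Table,
                        create_engine, select)

# Hypothetical "clients" table, standing in for the Ruby Client model above.
metadata = MetaData()
clients = Table(
    "clients", metadata,
    Column("id", Integer, primary_key=True),
    Column("username", String),
    Column("created_at", DateTime),
)

engine = create_engine("sqlite://")  # in-memory DB, just to make this runnable
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(clients.insert(), [
        {"username": "alice", "created_at": datetime.datetime(2008, 12, 21, 12, 0)},
        {"username": "bob", "created_at": datetime.datetime(2008, 12, 25, 12, 0)},
    ])

    # "created_at between yesterday midnight and midnight", as a range:
    start = datetime.datetime(2008, 12, 21)
    end = datetime.datetime(2008, 12, 22)
    yesterdays = conn.execute(
        select(clients).where(clients.c.created_at.between(start, end))
    ).fetchall()

    # Parameterized single-row lookup by username:
    bob = conn.execute(
        select(clients).where(clients.c.username == "bob").limit(1)
    ).first()
```

As with the Ruby version, the parameters are bound by the library rather than interpolated into the SQL string.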

zzzeek commented 11 years ago

ORMs are not by definition designed to hide the database from developers. ORMs in practice are often like this, but an ORM like SQLAlchemy makes a really big point of maintaining relational algebra as a first class citizen. SQL maps to SQLAlchemy's Core and ORM querying systems in a 1->1 fashion.

the value over strings is one of expressing intent succinctly. using Core only:

 conn.execute(table.insert(), {"x": 1, "y": 3, "some_json": {'foo': 'bar', 'bat': 'hoho'}})

using DBAPI:

import psycopg2
import json

connection = psycopg2.connect(dbname=dbname, user=username,
                              password=password, host=hostname)

cursor = connection.cursor()

marshalled_json = json.dumps({'foo': 'bar', 'bat': 'hoho'})
cursor.execute("insert into table (x, y, some_json) values (%(x)s, %(y)s, %(some_json)s)",
               {"x": 1, "y": 3, "some_json": marshalled_json})
cursor.close()
connection.commit()
connection.close()

now you might say, "I wouldn't write all that boilerplate for every query, I'd have helper functions/libraries that reduce all of that to common function calls". Well yes. Hence SQLAlchemy Core, which is a really mature and sophisticated form of that.

The above example in my experience isn't even realistic. Usually, you'd have queries that have different numbers of parameters for inserts/updates etc., which then affect the string form of the query. Suppose you have a really basic function like update_table() - it takes the primary key of the row you want to update, and then a list of values to update. Here's how that looks without any help:

def update_table(tablename, pk, values):
    stmt = "UPDATE %s SET %s WHERE id=%%(id)s"

    set_clause = ", ".join("%s = %%(%s)s" % (k, k) for k in values)
    stmt = stmt % (tablename, set_clause)

    cursor = connection.cursor()

    # need coercion of values to SQL-appropriate types here too
    values['id'] = pk
    cursor.execute(stmt, values)

    cursor.close()

the above function is really low-powered and only handles a specific kind of WHERE clause, needs to concatenate a string based on what values are being updated, doesn't do any marshalling of types, etc. With Core:

conn.execute(table.update().values(values).where(table.c.id==pk))
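As a hypothetical, self-contained version of that one-liner (the `widgets` table and its columns are made up, and SQLite is used only to keep the sketch runnable):

```python
from sqlalchemy import Column, Integer, MetaData, String, Table, create_engine

# Toy table standing in for whatever "table" is above.
metadata = MetaData()
table = Table(
    "widgets", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String),
    Column("color", String),
)

engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(table.insert(), {"id": 1, "name": "sprocket", "color": "red"})

    # The Core equivalent of update_table("widgets", 1, values):
    pk, values = 1, {"name": "gear", "color": "blue"}
    conn.execute(table.update().values(values).where(table.c.id == pk))

    row = conn.execute(table.select().where(table.c.id == pk)).first()
```

The SET clause, parameter binding, and type coercion all fall out of the table definition instead of string concatenation.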

we haven't used an ORM as of yet here. I wrote SQLAlchemy from the perspective of someone who had despised and avoided all ORMs for years, so I made sure the ORM was entirely a "for when you're ready" kind of thing, and it doesn't seem like you're ready :). But to assuage your fears about "hiding" SQL, my talk "Hand Coded Applications with SQLAlchemy" discusses this, check it out http://www.youtube.com/watch?v=E09qigk_hnY

gvenkataraman commented 11 years ago

+1 for sqlalchemy. @whit537 When I forked the repo, I was afraid to touch portions involving raw SQL. SQLAlchemy is just more readable and more Pythonic, and as a developer working on an open source project, I would feel much more comfortable using it.

mjallday commented 11 years ago

@whit537 let's pick an isolated part of the code base to migrate to SA on a separate branch. Something we can reasonably finish in an hour or two. I'll write the code if you help me figure out which part to migrate, and then once that's done you'll be in a better place to see if it's what you're looking for.

sigmavirus24 commented 11 years ago

I like @mjallday 's idea of taking some section and converting it to SQLAlchemy. As for why I gave it a +1, if you're going to use one, SQLAlchemy is incredibly well documented.

mjallday commented 11 years ago

authentication.py looks like a nice easy target, especially since you're constructing SQL there, and stopping that is a worthy goal in and of itself.

mahmoudimus commented 11 years ago

+1 I'll help convert it over to SQLAlchemy

chadwhitacre commented 11 years ago

You found my Achilles' heel, @mjallday. :-) There are one or two other places where I construct SQL, and I don't feel good about it.

Let's move forward with SQLAlchemy. Re: authenticate.py ... we've started moving the cookie auth stuff upstream into aspen, but let's not worry about that for now. Once that settles in Aspen we can upgrade our Aspen dependency in Gittip and rework as necessary.

Love seeing the energy in this thread. Far be it from me to get in the way. :-)

chadwhitacre commented 11 years ago

Hey @mjallday, be sure to track the changes on #406. I factored out a Participant object and made User subclass from that. Hopefully doesn't throw you too badly?

chadwhitacre commented 11 years ago

Sorry, gang. I wish I didn't feel so strongly about this, but I do. Can we dig into this more to make sure we make the best decision? :-)

@steveklabnik In the examples you gave, I like the SQL versions better. When I read the Ruby versions I am mentally translating to SQL. I think SQL is a fantastic language, not tedious and error-prone, and I want to write in it. I love this recent @tenderlove quote:

Working on a new DSL for accessing structured data. A sort of "structured query language".

@gvenkataraman Why are you afraid of SQL? It's a fantastic language. :-) Seriously, SQL is extremely robust and mature and time-tested and battle-worn and elegant and suited to its problem domain.

@mjallday I agree 100% that constructing SQL is bad and we shouldn't do it. I started a ticket (#405) and fixed the instance you pointed out. Over on that ticket, it occurs to me that using an abstraction at the app layer (an ORM) is one way to avoid this, and that possibly another way is to make better use of the datastore itself (stored procedures? views?)

@zzzeek ...

Re: succinctness. As indicated above, I find SQL to be more succinct than any Python representation of SQL. I experience it as mental overhead, to translate Python syntax into SQL syntax.

Re: boilerplate.

Now you might say, "I wouldn't write all that boilerplate for every query, I'd have helper functions/libraries that reduce all of that to common function calls". Well yes. Hence SQLAlchemy Core, which is a really mature and sophisticated form of that.

I was actually really self-conscious about this while working on #413 last night. You're right. Gittip has a boilerplate abstraction, and it's not as mature as SQLAlchemy core. However, it's much, much simpler.

Gittip's current solution is a postgres.py module that basically wraps psycopg2 in context managers. It's 242 lines, including whitespace and documentation. There's also a db-related context manager in testing.py; that's 213 lines. I also have a query caching module waiting in the wings for if/when we need it; that's another ~250 lines. So call it ~700 lines of db-related boilerplate in Gittip. A SQLAlchemy dev snapshot appears to clock in at 82,378 lines overall for lib/sqlalchemy, including 10,972 for the 'sql' subpackage, and 26,173 for the 'orm' subpackage (I don't find a 'core' subpackage; counts are per wc -l $(find . -name \*.py)).

The #413 bug did turn out to be in the testing context manager (though not in the core postgres.py module). That said, I personally feel much more comfortable debugging 800 lines of shim code than 80,000 lines of general-purpose abstractions.

There are of course use cases where general-purpose database abstractions are called for at the application layer. One would be for a product intended for independent deployments, where different customers probably want different databases behind it (Oracle, SQL Server, etc.). I suppose another would be if you were working with developers who are afraid of SQL. ;-)

That's actually the strongest case in my mind for using SQLAlchemy in Gittip: I absolutely need more people contributing to Gittip. I can understand and respect that different people have different comfort levels with different technologies. I want @gvenkataraman to be comfortable contributing to Gittip, though I do think he shouldn't be so afraid of SQL. :-) At this point I would still prefer that if he's not comfortable writing SQL himself, that he partner with someone else who is, rather than implementing a generalized abstraction. @mjallday has contributed using straight SQL, for example.

During Delpan (#329), @gvenkataraman and I talked about adding additional fraud signaling in payday.py. We decided that I would make the necessary SQL changes, and he would handle the changes to the Balanced API calls. That ball is still in my court, and I accept that if I want to encourage contributions from @gvenkataraman then the burden is on me to make abstractions for him and/or help him overcome his fear of SQL.

I guess I just don't feel like I know yet whether there are enough people around who love raw SQL or are at least willing to give it a try, or whether SQLAlchemy is going to be the only way to get more contributors.

Re: realistic examples. We've talked about starting with the User class in authentication.py. Maybe that's the way to go, but there's not a lot of complicated SQL going on in there. For a more involved case, maybe we could look at the Participant.absorb method that I wrote last night. How would you approach that using SQLAlchemy?

(I'll try to get to the video later this weekend.)

mw44118 commented 11 years ago

I'm firmly in the SQL camp, but I've learned a few tricks to reduce the boilerplate. First, I build a crap ton of views, so that my app code never does a join. Instead, it reads from the view that does that join, and that view returns exactly the columns that are required and none extra. The app code passes in parameters that go into the where clause on that view.

This has a side benefit -- since the app code selects from a view, I get the benefit of being able to improve the internals without having to tweak app code. Just like OO encapsulation. Just recently, I replaced a bunch of subselects with some CTEs in order to improve performance, and I didn't have to do any app code changes.

More generally, everything that we do to keep regular app code simple and easy to work with also is possible in the database. Just like we all avoid writing gigantic 1000-line app code functions, we can use stuff like views and stored functions and triggers to reduce that page-long SQL query into something not quite so scary.
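A toy sketch of the view-per-query idea, using Python's sqlite3 just to keep it self-contained (the thread is about Postgres, and the table and view names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE participants (id INTEGER PRIMARY KEY, username TEXT);
    CREATE TABLE tips (tipper INTEGER, tippee INTEGER, amount REAL);

    -- The app never writes this join itself; it reads from the view,
    -- which returns exactly the columns required and none extra.
    CREATE VIEW receiver_totals AS
        SELECT p.username AS username, SUM(t.amount) AS total
        FROM tips t JOIN participants p ON p.id = t.tippee
        GROUP BY p.username;
""")
conn.executemany("INSERT INTO participants VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])
conn.executemany("INSERT INTO tips VALUES (?, ?, ?)",
                 [(1, 2, 3.0), (1, 2, 1.5)])

# App code stays a trivial, parameterized SELECT against the view:
row = conn.execute(
    "SELECT total FROM receiver_totals WHERE username = ?", ("bob",)
).fetchone()
```

The view's internals (subselects, CTEs, whatever) can be reworked for performance without touching this app code, which is the encapsulation benefit described above.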

Two really good things to read:

this blog is fat-packed with helpful tips and better ways of thinking about serious data modelling: http://ledgersmbdev.blogspot.com/2012/08/intro-to-postgresql-as-object.html

And this book humbled me over and over again with simple and intuitive solutions for puzzles I didn't think were solvable with SQL: http://www.amazon.com/Puzzles-Answers-Kaufmann-Management-Systems/dp/0123735963/ref=sr_1_1?s=books&ie=UTF8&qid=1354980212&sr=1-1&keywords=sql+puzzles

The database is not a trash can for strings!

ncoghlan commented 11 years ago

@whit537 I suggest reading this before deciding to build your own SQL helper functions: http://www.aosabook.org/en/sqlalchemy.html

No matter how good your back-end data modelling is, you still have issues of state consistency, concurrency, connection management, maybe database portability (e.g. to run a fast local test suite against SQLite), etc. You could rediscover and solve all those problems yourself, or just drop in the state-of-the-art solution for Python, and solve them in advance of encountering them.

The whole essence of SQLAlchemy is to not fear the database, but just to make it stupidly easy to work with from Python.

d0ugal commented 11 years ago

Personally I'm a fan of SQL, of heavily abstracted ORMs, and of probably everything in between. There isn't a Python ORM I've used and taken particular objection to (unless used in the wrong situation).

However, I think that SQLAlchemy is the right choice because it's flexible. You can easily use whatever features you want or need, and then use SQL when it's more appropriate, because it integrates well. Even if you only use a few of its features, that would be better than recreating what you need from scratch, and much easier; SQLAlchemy is battle-tested, too.

Or, in short, I agree with @ncoghlan.

Oh, and, there are just too many useful things you can do easily with SQLAlchemy too :)

coolbreezechloe commented 11 years ago

So I've been reading this thread and looked briefly at the code. It's an interesting debate: raw SQL vs. SQLAlchemy. One thing I didn't see mentioned as a reason for using SA is abstracting away the actual database system, so that someone who doesn't want to use Postgres (which I gather is what we use now) can in theory switch to another database system. That to me is a worthy goal / feature of a project, because we all know that people tend to favor MySQL or Postgres. (Funny side note: at my current job we use both, and I've been forced to get used to them both. I've also been forced to use a Mac, which I never did before. As a result I now look down upon all you silly people who fight and bicker over one thing or the other. The truth is it doesn't matter; you can make anything work, and the more you use it the better you get at it. Except that Python is always better than Ruby... haha)

However, I also agree that SQL itself is very mature and strong. Sometimes I think the purpose of an ORM is to serve developers who are not comfortable with raw SQL (assuming you're not trying to be compatible with multiple database systems). If you can keep the SQL to, say, just the "model" classes of the code base, and ensure that you don't have code duplication (not that an ORM would fix that if you did), then I see no reason not to use raw SQL: it does what you want, and in general the SQL I have seen here is not that complicated.

Plus if the SQL is limited to the core model classes then people wishing to contribute who do not want to get messy with SQL can just make their contributions in other areas of the code.

alexcouper commented 11 years ago

2¢: I don't mind so much the use of SQL in direct calls to the DB - but I do feel that it should be abstracted to some degree.

eg. https://github.com/whit537/www.gittip.com/blob/master/www/index.html#L13

In the case above I think that some kind of participant model object/module should exist that has a def receivers() method/function. Whether that function then uses SQLAlchemy to retrieve the rows or uses a direct SQL statement I don't really mind.

(But potentially everyone agrees with this and this move away from SQL in the simplates is happening already?)

matin commented 11 years ago

Posted to HN to open up the conversation: http://news.ycombinator.com/item?id=4899571

zzzeek commented 11 years ago

ugh....sure let's release the dogs. countdown to "you should use all stored procedures" in 3...2...

matin commented 11 years ago

@zzzeek: maybe. I was hoping for the HN community to push the conversation in a different direction, but perhaps, I'm being naive. I'll make my case directly.

I don't think the main challenge for Gittip and @whit537 is whether to use an ORM. The main challenge is to get people to use Gittip at all. Growth is the primary challenge, and everything else should be viewed in that light. Build something that people want, and do whatever is necessary to make customers happy and keep them happy. Stagnation and unhappy customers are death.

@mjallday and I brought up the use of SQLAlchemy vs. raw SQL for one reason: we believe it makes it easier for others to contribute—including us. I don't buy Chad's argument that his postgres.py is the right solution. It's only 241 lines, but that's 241 lines that need to be tested and maintained, and 241 lines that someone needs to read.

I personally feel much more comfortable debugging 800 lines of shim code than 80,000 lines of general-purpose abstractions

@whit537: The point is that you don't need to debug the SQLAlchemy code. SA is incredibly well documented. Plus, you can always just shame @zzzeek into debugging any SA issues. You'll be able to push the responsibility of maintaining components of your stack onto others, which means you can focus on the primary goal—growth.

I also have a personal bias for ORMs. I spent most of my time coding in C/x86 assembly, some Verilog, and more MATLAB than I care to admit. I learned Python because @charlessimpson and I didn't want to use MATLAB for our crypto class, and Python was a godsend. I came to web programming through Rails in 2008. I never wrote a single line of SQL. I learned SQL at Milo.com purely because of scaling. Otherwise, I would stick to SA when possible. Our stored procedures and triggers were completely unmaintainable.

@gvenkataraman (in his defense) came to Python in a similar way. He has a PhD in Electrical Engineering. He wrote his first line of Python while @mahmoudimus interviewed him for Balanced. Ganesh spends most of his time doing research and writing algorithms. He's learned HTML/CSS/JS and SQL while working at Balanced.

Let me come back to the main topic. Do whatever is best for growth. We had a public conversation about using Markdown vs. rST for our public API spec. I favored Markdown, but rST helped us move faster, and it meant there would be less work, which means we can focus on growth—the main goal.

+1 for using external libraries instead of writing more code.

neilk commented 11 years ago

Just $0.02 from someone who saw the topic come up at Hacker News. I don't think anyone else has yet pointed this out, but raw SQL simply will not be as safely reusable or parameterizable compared to a query abstraction. This is implicit in @zzzeek's point above, but let's make it explicit.

Even ordinary prudence requires using some framework to escape strings concatenated into SQL. And security should be a paramount concern for gittip. I assume I don't have to explain SQL injection. So the answer, for some simple problems, is the SQL placeholder e.g. "SELECT * FROM donations WHERE donation.userid = ?".

But placeholders can't do everything. Let's say in the interface we start allowing the user to filter and sort their donations by something else. "... AND ? ..." is not a possible placeholder. So you need to wrap that string in a library and use string functions to safely concatenate additional query constraints. Fine, not too hard. But then you have variants of your query for logged-in users and administrators. Or with different date offsets. Or paginated.

Pretty soon the SQL-construction routines are becoming quite large. And then you find that you need to separate control of different aspects of SQL construction somehow. So you need some other way to pass around a query between methods, which isn't fully formed into SQL yet. In the worst case scenario you might find that your assumption about when to render a SQL query into a string was wrong, and then you waste your life trying to "edit" already-formed SQL statements.

And you have to be sure that all the above hasn't somehow made your application vulnerable to SQL injection. To say nothing of the possibility that you one day might want to change to a different database, with a slightly different SQL syntax!

You can avoid much of these hassles by using something which abstracts the query into some sort of object, and only rendering it into a string at the very last instant before you query the database.
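A sketch of what "query as an object, rendered only at the last instant" can look like in SQLAlchemy Core. The `donations` table and the particular filters are invented for illustration:

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

metadata = MetaData()
donations = Table(
    "donations", metadata,
    Column("id", Integer, primary_key=True),
    Column("userid", String),
    Column("amount", Integer),
)

def donations_query(userid, min_amount=None, newest_first=False,
                    page=None, per_page=20):
    # Start from a base query and layer constraints on. Nothing is a
    # string yet, so there is never any already-formed SQL to "edit",
    # and every value is bound as a parameter (no injection risk).
    q = select(donations).where(donations.c.userid == userid)
    if min_amount is not None:
        q = q.where(donations.c.amount >= min_amount)
    if newest_first:
        q = q.order_by(donations.c.id.desc())
    if page is not None:
        q = q.limit(per_page).offset(page * per_page)
    return q

engine = create_engine("sqlite://")
metadata.create_all(engine)
with engine.begin() as conn:
    conn.execute(donations.insert(), [
        {"userid": "alice", "amount": 5},
        {"userid": "alice", "amount": 50},
        {"userid": "bob", "amount": 10},
    ])
    # The query is only rendered to SQL here, at execute() time:
    big = conn.execute(donations_query("alice", min_amount=10)).fetchall()
```

Each variant (filtered, sorted, paginated, admin vs. user) is one more method call on the same object rather than another hand-maintained SQL string.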

ORM can help, but in my experience it is a little too magical and sometimes results in non-performant queries. That said, you don't really have a performance issue yet, so that's premature optimization.

SQLAlchemy, to my eyes, looks like a good balance.

TL;DR: string operations do not map well, or safely, onto the kinds of transformations you are likely to need in order to reuse database-querying code.

mjallday commented 11 years ago

@whit537 https://github.com/whit537/www.gittip.com/pull/418

chadwhitacre commented 11 years ago

@matin Thanks for keeping us focused on growth. I've merged @mjallday's pull request. Let the flood of contributions begin! :-)

matin commented 11 years ago

@zzzeek you up for making changes to other parts of gittip to incorporate SA core?

You can school us on the best practices ;-)

chadwhitacre commented 11 years ago

I'm actually more interested in ORM than Core; might as well go all the way. I'd love to see @zzzeek turn Gittip into a showpiece for SA best practices. :-)

zzzeek commented 11 years ago

yeah best practice is to use the ORM ;).

I'd certainly want to help, if someone wants to do the grunt work, there's a pretty standard form to use.

chadwhitacre commented 11 years ago

@mjallday did an initial implementation using core in #418, and has an orm implementation as well. Want to go ahead with the ormalicious pull request, @mjallday?

zzzeek commented 11 years ago

it looks OK. I'm not sure of the mix of declarative + core there, but the all-ORM version will have just the declarative base and the one metadata.

Axe this thing, I've been trying for years to get people to stop doing this reflexively:

+Base.metadata.bind = db_engine

bound metadata isn't needed. (read the big green note at the end of http://docs.sqlalchemy.org/en/rel_0_8/core/connections.html#connectionless-execution-implicit-execution)
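For illustration, the alternative zzzeek is pointing at keeps the engine explicit and hands connections to the code that needs them, instead of binding the engine to the metadata (the table here is invented; SQLAlchemy 1.4+ style, where bound metadata is gone entirely):

```python
from sqlalchemy import (Column, Integer, MetaData, String, Table,
                        create_engine, select)

metadata = MetaData()  # no .bind; metadata only describes the schema
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("username", String),
)

# The engine is an ordinary object you pass around explicitly.
engine = create_engine("sqlite://")
metadata.create_all(engine)

with engine.begin() as conn:  # explicit connection + transaction scope
    conn.execute(users.insert(), {"id": 1, "username": "joonas"})
    name = conn.execute(select(users.c.username)).scalar()
```

With "connectionless" bound-metadata execution, it's unclear which connection or transaction a statement runs on; passing `conn` explicitly makes that visible at every call site.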

chadwhitacre commented 11 years ago

First reticket from this: User object, #446, @joonas is going to work on it.

chadwhitacre commented 11 years ago

I'm realizing that the big fear I have with going to an ORM is that we might lose control over our schema and become constrained by an application layer designed to be widely accessible to the majority of programmers, and that this will cause headaches when we try to scale and start hitting the hard parts of Gittip. Really there are two big pieces to Gittip:

  1. Gittip.com
  2. Payday

With the ORM port that @mjallday started, and which @joonas is admirably carrying forward, it seems that we're focusing on the first of those, the web app side. As a web app Gittip is pretty unremarkable. The writes are small and infrequent, and I expect we'll be able to tune the reads without too much trouble when things start to warm up.

The payday side is going to be much more challenging as we scale. Every Thursday we:

  1. pull money into Gittip from outside,
  2. shuffle it around among participants inside Gittip, and
  3. push money back out.

The pulling and pushing are recorded in an exchanges table, and the shuffling inside Gittip is in a transfers table. The amounts to pull, shuffle, and push are computed from a tips table.

Conceptually this all happens in a single instant of time; practically we want it to happen in such a way that users can keep using the website—changing their tips to each other, as well as their username and any other info—without adversely influencing the currently running payday. We don't want to have to pause the web app while payday is running.

The simple solution is to wrap all of payday in a database transaction (cf. #160), but that's inflexible and fragile. Due to the network access involved in pulling and pushing money, the payday script already takes 30 minutes to run (we have ~7500 participants, ~550 active). Furthermore, we've already had four crashes out of 32 paydays (12.5%), due both to bugs (#169, #308) and network problems (#441, #458). Yeah, running on EC2 instead of my laptop would be an easy win, but the fact is that payday needs to be fault-tolerant and parallelizable for Gittip to scale.

So instead of a monolithic database transaction, we use timestamps in a paydays table to keep track of when we're inside a payday event, and timestamps in transfers to keep track of which money we've shuffled so far during that event. Money accrues in a pending column, and then at the end clears in a single database transaction to the balance column.

In the long run, payday is probably going to become Gittip's most significant computing challenge. We're going to want to make gift amounts harder to compute (e.g., #449), while processing as many gifts as possible, while keeping payday as close to instant as possible, while maintaining payday as a single transaction.
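A minimal, hypothetical sketch of that pending-then-clear bookkeeping, using sqlite3 only to keep it runnable (the real schema and payday logic are of course richer than this):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE participants (
    username TEXT PRIMARY KEY,
    balance  REAL NOT NULL,
    pending  REAL NOT NULL)""")
db.executemany("INSERT INTO participants VALUES (?, ?, ?)",
               [("alice", 10.0, 0.0), ("bob", 0.0, 0.0)])

# During payday, transfers accrue in `pending` one at a time,
# without touching the user-visible `balance`...
db.execute("UPDATE participants SET pending = pending - 1.0 WHERE username = 'alice'")
db.execute("UPDATE participants SET pending = pending + 1.0 WHERE username = 'bob'")

# ...and at the end, everything clears to `balance` in one transaction.
with db:
    db.execute("UPDATE participants SET balance = balance + pending, pending = 0")

balances = dict(db.execute("SELECT username, balance FROM participants"))
```

The point of the split is that a crash mid-payday leaves `balance` untouched and `pending` resumable, rather than requiring one giant transaction around the whole run.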

The point here is that the database schema is deeply entwined with the payday algorithm, and the payday algorithm is hard. It's not run-of-the-mill web development.

We've said on this thread that raw SQL is a barrier to entry for people to work on Gittip. With the web side of Gittip, I agree: let's lower the barriers to entry—and let me be clear that front-end development and especially UX design present their own challenges. For the payday side, though, I think we want a hedge, at least for now.

Therefore, I've asked @joonas to hold off on replacing schema.sql with Alembic and on modifying payday.py for now, so that we can utilize the apparent enthusiasm-dampening effects of raw SQL to our advantage to protect our core schema and algorithm from harm. :-)

Having SQLAlchemy on the web side should make Gittip more accessible to mid-stack and front-end developers, and we can evaluate whether to go further with it after living with it for a while and watching how Gittip scales.

lyndsysimon commented 11 years ago

Perhaps I'm way off base here, but conceptually, I don't see why paydays have to be blocking. It seems like we're trying to optimize a race condition here where one need not exist.

If the last payday happened at timestamp x, and this payday happens at timestamp y, go ahead and place the timestamp for y in the table, then take your time computing it. Nothing that can happen from the webapp should impact the past anyhow.

If it takes 8 hours to run payday.py, then funds aren't sent until y + 8 hours.

chadwhitacre commented 11 years ago

@lyndsysimon The time constraint is more that we want to charge and credit people predictably. We have most of Thursday available to us, but we don't want to initiate deposits on Friday or they won't clear until Monday.

justinabrahms commented 11 years ago

I think I'm on board with @lyndsysimon. There's no reason that it necessarily needs to take 8 hours, but not pausing the database, and instead just operating on a time-bounded slice of the database for transaction purposes, seems to fit the bill quite well. So I'm clear: if the disbursement starts at 5pm on Thursday, there's no reason why you can't just bound your SQL like SELECT * FROM tips WHERE timestamp < 5pm;, then party on that to your heart's content.
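A toy sketch of that time-bounded slice, with invented data (sqlite3 just for self-containment):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tips (tipper TEXT, tippee TEXT, amount REAL, timestamp TEXT)")
db.executemany("INSERT INTO tips VALUES (?, ?, ?, ?)", [
    ("alice", "bob", 1.0, "2012-12-06 16:00:00"),  # before the cutoff
    ("alice", "bob", 5.0, "2012-12-06 17:30:00"),  # after: next payday's problem
])

# Payday only ever reads the frozen slice, so edits made from the
# webapp after the cutoff can't race the currently running payday.
cutoff = "2012-12-06 17:00:00"
slice_rows = db.execute(
    "SELECT tipper, tippee, amount FROM tips WHERE timestamp < ?", (cutoff,)
).fetchall()
```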

I also don't agree with "this code is important, so it should be hard to understand to scare away dummies" and prefer instead "this code is important, so it should be as obviously correct as possible. So obvious, anyone can understand it".

For anecdotal evidence of how this can bite you in the ass: at work a few months ago, someone was rewriting our tests and got to our billing code. He said, "This is too complex. I don't want to mess anything up," so the tests went unchanged. Turns out we had errors in our billing code that tests could have easily caught. If the barrier to entry had been lower, this fellow's tests would likely have saved us money. By making contribution easy (and code review even easier), you're doing yourself a service. As another example of this, think about security via obfuscation versus security via open-sourcing everything. :smile:

chadwhitacre commented 11 years ago

operating on a time-bounded slice of the database for transaction purposes

Right, I think we're all on the same page here?

I also don't agree with "this code is important, so it should be hard to understand to scare away dummies" and prefer instead "this code is important, so it should be as obviously correct as possible. So obvious, anyone can understand it".

You may be right. Let's let the ORM shake out on the web app side and see where we stand.