Looks good after an initial pass. The migration will need to clear out the cache before adding the column.
Thanks for switching Travis over to their container interface. I'm happy for you to drop pypy for now so that the tests pass.
should be fine now. I avoided trashing the pypy testing completely because forgetting about it would be a shame :P Should I also include the migration-related adaptations in this PR, or make a new one?
Thanks, that's fine for pypy. I'll have to think of a better fix later or possibly even rip it out.
The migration should be part of this PR.
Aside from the tests failing for MySQL and PostgreSQL due to a version mismatch, could you please take a look at the current status? Work done:
This is looking good. I think a better approach would be to take all of the logic in your `driver.migrate(...)` function and move it to a helper script. This has two benefits: a) you're not putting `print` statements in library code and can instead use the Python logging interface with appropriate verbosity settings, and b) you're reusing as much code as possible across drivers.
```
$ gauged_migrate <config>
```
You'd define the migrations as you've done in `prepare_migrations` and then expose a way for the driver to run one or more migrations (all wrapped in a transaction).
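A minimal sketch of what such a helper script could look like, using logging instead of `print`. This is an assumption-laden illustration, not gauged's actual API: the table/column names, the sqlite3 backend, and the migration list are all invented for the example.

```python
# Hypothetical sketch of a gauged_migrate helper script. The schema names
# and the sqlite3 backend are assumptions for illustration only.
import argparse
import logging
import sqlite3

logger = logging.getLogger('gauged.migrate')

# Migrations defined up front, in the spirit of prepare_migrations.
# The cache is cleared before the new column is added, per the review note.
MIGRATIONS = [
    ('clear cache', 'DELETE FROM gauged_cache'),
    ('add column', 'ALTER TABLE gauged_data ADD COLUMN flags INTEGER'),
]

def main():
    parser = argparse.ArgumentParser(prog='gauged_migrate')
    parser.add_argument('database', help='path to the database file')
    parser.add_argument('-v', '--verbose', action='store_true')
    args = parser.parse_args()
    # Logging (not print) so the caller controls verbosity.
    logging.basicConfig(level=logging.DEBUG if args.verbose else logging.INFO)
    connection = sqlite3.connect(args.database)
    for name, statement in MIGRATIONS:
        logger.info('running migration step: %s', name)
        connection.execute(statement)
        connection.commit()

if __name__ == '__main__':
    main()
```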
Got pretty much all the points, though the last one, wrapping multiple migrations in a single transaction, wouldn't be possible: DDL statements can't be rolled back safely, so I'd still go for executing each step in its own transaction.
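A rough sketch of that per-step approach against any DB-API 2.0 connection (the function name is mine, not gauged's):

```python
# Sketch only: run each DDL step in its own transaction, since most
# databases cannot roll DDL back once it has executed.
def run_each_in_own_transaction(connection, statements):
    cursor = connection.cursor()
    for statement in statements:
        try:
            cursor.execute(statement)
            connection.commit()  # commit per statement, not per batch
        except Exception:
            connection.rollback()  # abandon only the failing step
            raise
```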
@chriso: please see the latest version and let me know if you're happy with it.
Looking good :+1:
ok, this should be the latest.
This is looking good. After the timestamp vs. offset issue is fixed, can you squash the commits? I can then merge and publish the new version to PyPI. Thanks!
ok, ready for squash time?
okay, done.
ok, got the idea. Will do tomorrow. Thanks for the invaluable input.
done.
Sorry for the back and forth :smile: This is looking great.
This should include all of your revisions, but not this one, which I didn't understand...
At the moment if you call `writer.clear_key_before(key, timestamp=0)` it will delete the data between 0 and the first block offset (e.g. `[0, 86399]` if you're using a daily block size). This is probably not what the user expects.
I was referring to the previous discussion where I explained how you can only delete whole blocks at a time.
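To illustrate the whole-block constraint, here is a small sketch; the daily block size matches the example above, but the helper name is just for illustration:

```python
# Illustration only: with a daily block size, timestamps map to block
# offsets, and deletion can only target whole blocks.
BLOCK_SIZE = 86400  # one day, in seconds

def block_offset(timestamp):
    return timestamp // BLOCK_SIZE

assert block_offset(0) == 0       # start of the first block
assert block_offset(86399) == 0   # still the first block
assert block_offset(86400) == 1   # second block

# So deleting "before timestamp 0" inclusively would remove the whole
# first block [0, 86399], which is more than the caller asked for.
```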
After changing `clear_key_before` so that it was not inclusive of the specified timestamp, the method was doing the right thing for all timestamps except for 0. This was because of the (now fixed) block of code: `if offset > 0: offset -= 1`.
For example, with a block size of 10000, calling `clear_key_before(timestamp=20000)` would correctly delete `block_offset <= 1` (timestamps `[0, 9999]` and `[10000, 19999]`). If you passed in `clear_key_before(timestamp=0)` however, the `block_offset` would never be decremented and so the driver would delete `block_offset <= 0` (timestamps `[0, 9999]`), which is not what the user would expect.
You fixed it by saying:

```python
if offset == 0:
    raise ValueError('cannot delete before offset zero')
offset -= 1
```
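With the same block size of 10000 as above, a stand-alone sketch of the fixed logic (the helper name is illustrative, not the driver's actual code) behaves like this:

```python
# Illustration of the fixed behavior; not the driver's actual code.
BLOCK_SIZE = 10000

def last_block_to_delete(timestamp):
    offset = timestamp // BLOCK_SIZE
    if offset == 0:
        raise ValueError('cannot delete before offset zero')
    offset -= 1
    return offset  # the driver deletes every block_offset <= this value

assert last_block_to_delete(20000) == 1  # deletes [0, 9999] and [10000, 19999]
# last_block_to_delete(0) now raises instead of deleting [0, 9999]
```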
Thanks again for working on this. gauged==1.0.0 is now available on PyPI.
Fixes issue #4
Those are mainly meant to make maintenance easier.
Notes:
Unrelated: