taoensso / faraday

Amazon DynamoDB client for Clojure
https://www.taoensso.com/faraday
Eclipse Public License 1.0

Implementation discussion: update-table changes #78

Closed by ricardojmendez 8 years ago

ricardojmendez commented 8 years ago

I'm looking into expanding update-table to add other functionality, like GlobalSecondaryIndexUpdates, since, as discussed in #77, it currently only supports updating the table's provisioned throughput.

I see the following issues:

I'm inclined to change update-table's third parameter to simply be a map of the settings for the operations to apply. This would be consistent with how the table description is passed to create-table, and it would solve both problems above, but since that would be a breaking change I'd rather bring it up first.
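
For illustration, the proposed change might look roughly like this (a sketch only; the :throughput key name and call shapes are illustrative, not a committed API):

(require '[taoensso.faraday :as far])

(def client-opts {:access-key "<AWS_KEY>" :secret-key "<AWS_SECRET>"})

; Current shape: the third argument is the new provisioned throughput only.
(far/update-table client-opts :my-table {:read 8 :write 4})

; Proposed shape: the third argument becomes a general options map,
; mirroring what create-table accepts, so other operations (e.g. global
; secondary index updates) can be added as extra keys without new arities.
(far/update-table client-opts :my-table
  {:throughput {:read 8 :write 4}})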

Thoughts?

ptaoussanis commented 8 years ago

Hi Ricardo,

The breaking change you have in mind sounds good to me, thanks for checking.

ricardojmendez commented 8 years ago

It looks like update-table's implementation right now is very throughput-specific: it maps the update call over a series of throughput-increase steps and returns the result of the last one executed.

Is there a reason for this? I'm going through the referenced page but don't see why the update request would need to be executed in steps; I only see the explanation of the limit on decreases within a 24-hour period.
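
For context, the stepping behavior in question is roughly of the following shape (a simplified reconstruction for illustration, not the actual faraday source):

; The old limit discussed here reportedly capped each increase at ~2x the
; current value, so a large jump had to be applied as doubling steps.
(defn throughput-steps
  "Seq of intermediate [read write] targets, at most doubling per step,
  ending at the requested values. Illustrative sketch only."
  [[r w] [r* w*]]
  (let [step (fn [[cr cw]] [(min r* (* 2 cr)) (min w* (* 2 cw))])]
    (concat (->> (iterate step [r w])
                 (drop 1)                        ; skip the starting point
                 (take-while #(not= % [r* w*]))) ; steps strictly below target
            [[r* w*]])))                         ; final step reaches the target

; (throughput-steps [1 1] [16 16]) => ([2 2] [4 4] [8 8] [16 16])
; update-table then maps the update call over these steps and returns
; the result of the last one.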

ptaoussanis commented 8 years ago

It was previously the case that large throughput increases needed manual steps. Not sure if that's still the case or not; if not, that code's vestigial and can be removed.

ricardojmendez commented 8 years ago

Perfect. I'll see about removing it and running some tests against an actual live instance, since I've read remarks that the local instance disregards throughput settings (the remarks referred to global secondary indexes, but I expect it's a general characteristic).

ptaoussanis commented 8 years ago

Ahh yes, limits like that would only apply to a live cloud instance. Not sure if this particular limit still exists though.

ricardojmendez commented 8 years ago

No problem, I'll test. If the limits no longer exist, things would be much simpler, since I can just unify all the table updates; otherwise it'd be best to keep separate update-table and update-table-throughput functions.
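
For illustration, the two shapes under consideration (names are a sketch, not a committed API):

; (a) No step limit: a single unified entry point for all table updates.
(far/update-table client-opts :my-table
  {:throughput {:read 8 :write 4}})

; (b) Step limit still in force: keep throughput updates in their own
; function so the stepping logic stays isolated from other updates.
(far/update-table-throughput client-opts :my-table {:read 8 :write 4})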

ricardojmendez commented 8 years ago

Testing against a live cloud instance, I can get the throughput update to apply in a single step. Will prune that old code.
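
The test presumably amounted to something like the following (my guess at the exact calls; the resulting table description appears in the next comment):

; Jump straight to a much larger throughput in a single update call,
; then inspect the table to confirm the new values took effect.
(far/update-table client-opts :temp-table {:read 16 :write 16})
(far/describe-table client-opts :temp-table)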

ptaoussanis commented 8 years ago

Just confirming: that's for a large increase, yes? I.e. more than 2x or 4x the previous value?

ricardojmendez commented 8 years ago

Correct, tried it going from {:read 1 :write 1} to {:read 16 :write 16} without stepping. No error, and the update returned the expected values:

; First increase
{:lsindexes nil,
 :gsindexes nil,
 :name :temp_table_1450109809826,
 :throughput
 {:read 16,
  :write 16,
  :last-decrease nil,
  :last-increase #inst "2015-12-14T16:16:58.340-00:00",
  :num-decreases-today 0},
 :prim-keys {:artist {:key-type :hash, :data-type :s}},
 :size 0,
 :status :active,
 :item-count 0,
 :creation-date #inst "2015-12-14T16:16:49.943-00:00",
 :indexes nil}
; Second increase
{:lsindexes nil,
 :gsindexes nil,
 :name :temp_table_1450109809826,
 :throughput
 {:read 256,
  :write 256,
  :last-decrease nil,
  :last-increase #inst "2015-12-14T16:17:03.542-00:00",
  :num-decreases-today 0},
 :prim-keys {:artist {:key-type :hash, :data-type :s}},
 :size 0,
 :status :active,
 :item-count 0,
 :creation-date #inst "2015-12-14T16:16:49.943-00:00",
 :indexes nil}

ptaoussanis commented 8 years ago

Great, happy to see that they've removed that limit, thanks for picking up on this.

ricardojmendez commented 8 years ago

Closing along with #80.