@rubytester - thanks for your report. It does appear that this is an issue with vRA and not this gem. That's unfortunate, since I can't fix vRA :smile:
That said, I'll look into making the limit value configurable, and also see if we can't programmatically de-dup these arrays. I'll have more soon, and possibly a branch to try out if you'd be willing to test it for me; I don't have an environment with enough items to completely reproduce this case.
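For the de-dup, a minimal sketch of what I have in mind, assuming the paginated results are an array of hashes that each carry a unique `id` key (`fetch_all_pages` here is a hypothetical stand-in for the existing pagination loop):

```ruby
# Minimal de-dup sketch; assumes each item is a Hash with a unique "id" key.
# fetch_all_pages is a hypothetical stand-in for the existing pagination loop.
items = fetch_all_pages(path)

# Keep the first occurrence of each id and drop later duplicates.
unique_items = items.uniq { |item| item["id"] }
```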
Also, @rubytester, I'd encourage you to log a support issue with VMware with the details of this bug if you haven't already, since you can reproduce it. Thanks!
My immediate fix is this:

```ruby
def http_get_paginated_array!(path, limit=100)
```

https://github.com/chef-partners/vmware-vra-gem/blob/master/lib/vra/client.rb#L141

Can you make 100 the default value? I hope it's not too large.
To make it configurable, add a small setting on the client; the limit is only used in one call anyway:

```ruby
module Vra
  class Client
    class << self
      attr_writer :default_page_size

      # Default page size for paginated requests; 20 matches vRA's own default.
      def default_page_size
        @default_page_size ||= 20
      end
    end

    def http_get_paginated_array!(path, limit = self.class.default_page_size)
      # ... existing pagination logic ...
    end
  end
end

# Users can then increase it with:
Vra::Client.default_page_size = 100
```
I see 3 tests would need to be modified for that config.
For now I am going with the immediate fix.
And I don't know where to submit the bug to vRA (busy busy busy). How about a tweet: https://twitter.com/rubytester/status/644959051072180224
I think it's a bad idea to make the default so large, since this gem could be used in environments where fetching such a large list would be resource-intensive. The right way is to do as you suggested: make it an option on the client object, default it to 20, and allow users such as yourself to override it. And dedup until VMware implements a fix.
I'll have a fix out for this next week.
@rubytester would you be willing to try out branch adamleff/pagination-dedup?
This has two commits on it: one that makes the page size configurable on the client instance, and one that de-dups the results of paginated API calls.
If this looks good, I'll get a review of it done and merged/released. Thanks!
Hi, my current solution to this bug is this:

```ruby
module Vra
  class Client
    # Hack around the page-limit bug:
    # https://github.com/chef-partners/vmware-vra-gem/issues/10
    alias foobar http_get_paginated_array!

    # Override: ignore the passed-in limit and always fetch 100 per page.
    def http_get_paginated_array!(path, limit=20)
      foobar(path, 100)
    end
  end
end
```
And I don't really need to go any further than that. This fixes my problem.
Without this bug, I think a page_size setting may be useful if you have more than 20 items and want to fetch them in fewer HTTP calls (with my 108 machines I get 7 REST calls, which is not too bad). If there were no bug, the sensible default of limit=20 would be just fine; that is also what vRealize provides as its default. (https://vdc-download.vmware.com/vmwb-repository/dcr-public/b996ad6c-d44c-45af-8a6f-5945296e4848/8d8e47c6-4bbc-4938-80d3-c758c4ac63b3/docs/catalog-service/api/docs/resource_Resource.html#path__api_consumer_resources.html)
The dedup, for me, is a waste of time: I should be guaranteed no duplicates in a paged set of items, and that's that. The fact that it returns dups is a bug in the vRealize API.
I have no clue where I could actually file a bug with them for the endpoint /catalog-service/api/consumer/resources?limit=20. I did manage to post at the VMware forum (https://developercenter.vmware.com/forums?id=5098#520515|3053606) and link back to here.
Thanks.
@rubytester I don't see the harm in the dedup if it helps other users.
Regardless of the dedup, the first commit in this branch avoids the need for you to monkey-patch, as you can set the page limit to 100 on the client instance when you create it. I'm going to get this reviewed and merged as-is unless you uncover any issues using it. Thanks for the feedback!
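Usage would look something like this (the `page_size` option name is my assumption from this discussion, so check the branch for the exact name; the other options follow the gem's README):

```ruby
require 'vra'

# Assumes the first commit on the branch adds a :page_size option to the
# client constructor; the option name and default of 20 come from this thread.
client = Vra::Client.new(
  username:   'user@corp.local',
  password:   'password',
  tenant:     'tenant',
  base_url:   'https://vra.corp.local',
  verify_ssl: true,
  page_size:  100  # fetch 100 items per page instead of the default 20
)

# all_resources will now page through the catalog in chunks of 100.
resources = client.resources.all_resources
```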
Hi @adamleff, my post on the VMware developer center was deleted (by whom, I don't know), so I have no clue how to interface with those guys.
If you go ahead with the dedup in #11, I would add a warning to the user that a duplicate was found in the result set, so people are not surprised when items they expected are missing.
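A minimal sketch of that warning (the exact message wording is my own):

```ruby
# Sketch of the suggested warning; the message wording is mine.
duplicate_count = items.length - items.uniq.length
if duplicate_count > 0
  warn "WARNING: #{duplicate_count} duplicate item(s) removed from the " \
       "result set; you may be missing items you expected (vRA pagination bug)."
end
```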
Thanks for your enthusiasm in getting this gem going. Good times ahead!
If anybody has similar duplicate issues, please write a bug report to VMware in whatever format and place they prefer. I don't have an active relationship with our VMware setups; I just consume the API.
I had a change of heart on this today. In my mind, incomplete data is worse than no data. So I'm going to change this to raise an exception if duplicates are detected (since that's an indication that actual data was omitted, based on your report) and point users to the workaround. I'm traveling this week, but I will make that change real soon and get this released.
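To sketch the shape of that change (the exception class name here is illustrative, not necessarily what will ship):

```ruby
# Illustrative sketch of raise-on-duplicates; the exception class name is
# an assumption, not necessarily what the released gem will use.
module Vra
  module Exception
    class DuplicateItemsDetected < StandardError; end
  end
end

def check_for_duplicates!(items)
  # Duplicates in the paged set mean vRA has likely also omitted real items,
  # so incomplete data is treated as an error rather than silently de-duped.
  return if items.uniq.length == items.length

  raise Vra::Exception::DuplicateItemsDetected,
        "Duplicate items detected in paginated response; increase the " \
        "page size (e.g. 100) to work around the vRA pagination bug."
end
```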
@rubytester I've passed this along to a contact at VMware asking for the best way to report this issue. In the meantime, I'm closing this issue in favor of PR #11 which I'm about to merge and release.
Thanks again for your report!
vmware-vra v1.2.0 has been released to Rubygems!
This is tripping me up all the time. It looks like a bug in the vRealize API, but I want to report it here for those using the `all_resources` method. (I am using VMware vRealize Automation Appliance 6.2.1.0-2553372.)

My bug report

I have 108 machines to administer. When I make a call to `client.resources.all_resources`, which uses the `http_get_paginated_array!(path, limit=20)` method, I get 108 results; however, 22 of those machines are duplicates (so I am missing 22 machines from my expected set). When I debugged the issue by stepping through each page in `http_get_paginated_array!`, I saw 6 pages of results. Starting with page 3, I started seeing duplicates of machines I had already gotten on page 1. The initial 108 machines become 86 unique machines plus 22 duplicates.
Here are my results for the machines in the 6 pages returned. Notice machine 190 had a dup in page 3.
Some conclusions

Out of 108 unique machines (which is the correct count) I get 22 duplicates. That means I have 22 machines unaccounted for in the result. This looks like a bug in the vRealize API pagination.
When I use limit=200 I get all 108 of my unique machines, as expected.
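For anyone who wants to check whether they are affected, here is a quick count; this assumes each resource exposes an `id`, as `Vra::Resource` does:

```ruby
# Quick duplicate check, given a configured Vra::Client instance `client`.
# Assumes all_resources returns the full paged set of Vra::Resource objects.
resources = client.resources.all_resources
ids = resources.map(&:id)

puts "total:      #{ids.length}"                    # 108 in my case
puts "unique:     #{ids.uniq.length}"               # 86 in my case
puts "duplicates: #{ids.length - ids.uniq.length}"  # 22 in my case
```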
My recommendation is to set the limit to 200 as the default, and if you have more machines to fetch, perhaps make the limit configurable for the `all_resources` method. I feel like a bean counter, but for my set of 108 machines, if I set limit=46 per page I get my entire set without duplicates; with limit=45 and below, it all breaks down. YMMV with a different set to fetch and a different pagination limit. I would like to find out if anybody else experiences this bug.
Extra

I even hacked this method to see if using the rel link would be different. Foolish me.