Open rodneyrehm opened 10 years ago
The problem is that there are legitimate reasons for sending (moderately) sparse arrays. I really don't want to remove those.
I would think it reasonable for UAs to impose a limit on the number of null fields in an array. WDYT?
@darobin I'm going to agree with your latest comment. It's certainly worth some discussion, but the fact that (moderately) sparse arrays could be misused is not, by itself, a be-all, end-all reason to deny support for them.
This is not so much a UA issue as it is an issue with the server-side conversion algorithm for supporting older clients. Someone can send in a relatively small payload that blows up in server-side memory.
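To make the blow-up concrete, here is a minimal sketch (in Python, with hypothetical names; no real library's API is implied) of a naive server-side conversion that expands bracketed index keys into a dense list, padding gaps with nulls:

```python
def naive_to_list(pairs):
    """Naively convert {index: value} pairs (e.g. parsed from "a[9]=x")
    into a dense list, filling every gap with None."""
    out = []
    for index, value in sorted(pairs.items()):
        while len(out) < index:
            out.append(None)  # each implied gap costs a real allocation
        out.append(value)
    return out

# A tiny payload ("a[9]=x" is a handful of bytes on the wire)
# forces 9 filler slots before the value:
print(naive_to_list({9: "x"}))
# With "a[999999999]=x", the same loop would try to allocate ~10^9 slots.
```

The asymmetry is the point: payload size is proportional to the number of keys sent, but the allocated memory is proportional to the largest index.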
Enforcing a limit on null values might be surprising to a lot of developers.
We should specify this limit, either as a specific upper bound or at least by acknowledging that there might be limits and what should happen if so. Otherwise we will quickly get a lot of different behaviours, which kind of defeats the point of a standard.
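One way such a limit could look in a conforming implementation (a sketch only; the cap of 100 and the names are placeholders, not values proposed by the spec discussion) is to compute the number of implied nulls up front and reject the payload before allocating anything:

```python
def to_list_with_limit(pairs, max_null_fill=100):
    """Convert {index: value} pairs into a dense list, but reject any
    payload whose implied null padding exceeds max_null_fill
    (hypothetical cap; the spec would need to pick or acknowledge one)."""
    indices = sorted(pairs)
    if not indices:
        return []
    implied_nulls = indices[-1] + 1 - len(indices)
    if implied_nulls > max_null_fill:
        raise ValueError("sparse array exceeds null-fill limit")
    out = [None] * (indices[-1] + 1)
    for i in indices:
        out[i] = pairs[i]
    return out
```

Because the check runs on the indices alone, a hostile `a[999999999]=x` is refused in constant space, while moderately sparse arrays still round-trip.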
I propose that we address this issue by:
Apparently, browsers do not yet (as of 2016-03) feature a security guard against massive sparse arrays: http://codingsight.com/how-to-hang-chrome-firefox-and-node-js-inside-native-func/
This has been pointed out before: This is not only "ugly", it's a trivial DoS waiting to happen.
While the automatic conversion was meant to be developer-friendly, a simple "fix" can be found in PHP's json_encode (its sequential vs. non-sequential array handling): if the keys of a map are not consecutive integers starting at 0, the map is not converted to an array but is serialized as an object.
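That sequential-keys condition can be sketched in a few lines (Python here, mirroring PHP's json_encode rule rather than reproducing its implementation):

```python
def is_sequential(mapping):
    """True when the keys are exactly 0..n-1, i.e. the map can be
    serialized as a dense array; otherwise it should stay an object.
    Mirrors the rule PHP's json_encode uses for arrays vs. objects."""
    return sorted(mapping) == list(range(len(mapping)))

print(is_sequential({0: "a", 1: "b"}))  # True  -> serialize as array
print(is_sequential({0: "a", 5: "b"}))  # False -> serialize as object
```

Under this rule a sparse payload never forces null padding at all: the gaps simply mean the value is treated as an object with integer-like keys.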