Closed: markvincze closed this issue 1 year ago.
I think it's just O(kn) - you move through once in each direction. I don't know if it can be improved but it would be interesting to find out, and yes interviewers will ask you if you understand time/space and if improvements are possible. At least in my experience :).
But in each of the k*n iterations, we're building a new helper array and we sort it:

```js
let workingList = [];
for (let r of rangeLists) {
  workingList.push({
    val: r.value(),
    list: r
  });
}
workingList.sort((a, b) => a.val - b.val);
```

Doesn't this take O(k * log k) steps every time?
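To make the fragment concrete, here is a self-contained sketch of the same sort-per-iteration approach (my own naming; the course's `rangeLists`/`value()` API is replaced with a hypothetical plain cursor object, so treat this as an illustration rather than the actual implementation):

```javascript
// Sketch of the sort-per-iteration approach for "Smallest Range of K Lists".
// Assumes every input list is non-empty and sorted ascending.
function smallestRangeBySort(lists) {
  // One cursor per list, each starting at position 0.
  const cursors = lists.map(arr => ({ arr, pos: 0 }));
  let best = [-Infinity, Infinity];
  for (;;) {
    // Rebuild and sort the k-element working list: O(k log k) per iteration.
    const working = cursors
      .map(c => ({ val: c.arr[c.pos], cursor: c }))
      .sort((a, b) => a.val - b.val);
    const min = working[0].val;
    const max = working[working.length - 1].val;
    if (max - min < best[1] - best[0]) best = [min, max];
    // Advance the cursor holding the minimum; stop once any list is exhausted.
    const c = working[0].cursor;
    c.pos += 1;
    if (c.pos === c.arr.length) break;
  }
  return best;
}
```

The sort inside the loop is exactly the step in question: it runs once per iteration, and there are up to k*n iterations.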
Ah, yeah, sorry, forgot about the sort :). Yep, that's correct… but it's n log n I believe, as log n is only for the search part. So, O(n * k log k).
@robconery,
Are you sure? Maybe I'm wrong, but I wanna make sure I understand correctly. The way I see it:

- We have k arrays, of which the longest has n elements (or to simplify, let's say that all of them have n elements).
- That means we do k*n iterations.
- In every iteration we build a helper list of k elements, and we sort it. AFAIK this takes O(k * log k).

So in total this comes down to O((k * n) * (k * log k)). (Basically [number of iterations] * [cost of building the sorted list in every iteration].)

Or am I missing something?
By the way, the way I was thinking about optimizing it: instead of building a new sorted list in every iteration, maintain a min-heap and a max-heap of the k numbers we're currently looking at, and in every iteration, when we do the step, remove the old item from the heaps and insert the new one.
Looking up the min and the max is O(1), so that's fine, and removing and inserting an item is only O(log k) (instead of O(k * log k) for building a new sorted list), so the total runtime would be O((k * n) * log k).
You have to traverse every list too. Let's say you have 3 lists, 10 elements apiece (n = 10). That's O(3n), which is just O(n). You have a second list as well, the new one of 3 items. You have to traverse that too, so that's O(k). So far we have O(k * n).
The trick is with sorting. In the example, I'm using the built-in JS sort, which is either Quicksort or Merge sort. Both of those are typically n log n (Quicksort can degrade to n² in the worst case, but Merge sort stays n log n).
That leaves you with O(n * (k log k)), since we're sorting the k-element list. I think I missed a paren in my reply above.
RE your idea, I believe we're already using a heap. You still have to insert the item when it comes in, which (I think) costs a sort step of its own.
Hey @robconery,
I was wondering what's the runtime of the solution presented for the "Smallest Range of K Lists".
I think we're iterating, worst case, k*n times (where n is the number of items in the longest array), and in every iteration we're building a helper list and sorting it, which takes O(k * log k), so the total runtime is O(k*n * (k * log k)). Do you think this is correct?
Do you think it can be improved? Maybe by, instead of building the helper sorted list in every iteration, maintaining a min and a max heap of the current elements?
In general, in your experience does it often come up on interviews to try to optimize the solution to get a better runtime?