gabrielvincent opened this issue 1 year ago
It seems to me that it does. Your outer loop will run `sqrt(n)` times, because `n / sqrt(n) == sqrt(n)`. Your inner loop will also run `sqrt(n)` times, because the difference between `low` and `high` at any iteration is `sqrt(n)`. Adding the two gives `2 * sqrt(n)`, which simplifies to `O(sqrt(n))`.

Cheers!
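(For orientation, here is a rough reconstruction of the kind of code being discussed, annotated with the iteration counts from the comment above. It is only a sketch based on this thread, not the exact code from the issue, and it assumes the usual monotone `breaks` array from the exercise.)

```ts
// Rough reconstruction (not the exact code from this issue).
function two_crystal_balls(breaks: boolean[]): number {
    const jump = Math.floor(breaks.length / Math.sqrt(breaks.length));

    // Outer loop: advances by `jump`, so it runs at most n / jump ~ sqrt(n) times.
    for (let high = jump; high < breaks.length; high += jump) {
        if (!breaks[high]) {
            continue; // ball #1 survived this height, try the next block
        }
        // Ball #1 broke somewhere in (high - jump, high]. The difference between
        // `low` and `high` is `jump` ~ sqrt(n), so this scan is at most ~sqrt(n) steps.
        for (let low = high - jump; low <= high; ++low) {
            if (breaks[low]) {
                return low;
            }
        }
    }
    // As discussed further down the thread, a break point past the last full
    // jump is never examined, so this shape can incorrectly return -1.
    return -1;
}
```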
Regarding time complexity, @sathoune's comment is correct. But I would like to mention some things:

```ts
expect(two_crystal_balls([false, false, false, true])).toEqual(3);
```

Is there a good reason to write `Math.floor(breaks.length / Math.sqrt(breaks.length))` instead of `Math.floor(Math.sqrt(breaks.length))`?
My reasoning was that we want to jump indexes by `sqrt(n)`. In an array of length `n`, there are `n / sqrt(n)` jumps that could be performed. Now that you pointed it out, I see it doesn't make any sense.
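(Side note: since `n / sqrt(n)` equals `sqrt(n)` mathematically, both expressions should compute the same jump size anyway, up to floating-point rounding. A quick throwaway check of my own, not from the course:)

```ts
// Compare the two jump-size expressions for a few array lengths.
for (const n of [4, 10, 13, 100, 1000]) {
    const a = Math.floor(n / Math.sqrt(n));
    const b = Math.floor(Math.sqrt(n));
    console.log(n, a, b, a === b);
}
```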
Forgive me, I'm a bit new to big O. Just from the "count the loops" technique though, you have a for loop inside another for loop. In my head that means you have O(sqrt(n) * sqrt(n)), which is actually equal to O(n), isn't it? That's why in the walkthrough he declares i outside of the first for loop, so the second one can use it without being nested within the first for loop.
Please correct me if I'm wrong: I didn't really understand why he bothered with a `j` variable in the second loop, which he never uses. I took it out and just used `i`, and the code still passes:
```ts
const jump = Math.floor(Math.sqrt(breaks.length));
let i = jump;
for (; i < breaks.length; i += jump) {
    if (breaks[i]) {
        break;
    }
}
i -= jump;

for (; i < breaks.length; ++i) {
    if (breaks[i]) {
        return i;
    }
}
return -1;
```
It feels a bit 'hacky' to me because it is kind of doing a linear search at the end, but the number of steps will always be <= sqrt(n), so it's not?
EDIT again lol:
The `j` I'm referring to:

```ts
for (let j = 0; j < jump && i < breaks.length; ++j, ++i) {
    if (breaks[i]) {
        return i;
    }
}
```
I guess this means the for loop can only run until j === sqrt(n), thereby ensuring that you never add more than sqrt(n) to i??? This somehow prevents it from violating O(sqrt(n))??? To me it still looks like O(sqrt(n) + x) where x must always be < sqrt(n), but it kind of behaves in an O(n) way. Also, what is the point of this? Surely by doing this you're incrementing i just as much as j and stopping when you reach the first index with a true value - why put j in at all?
This is what I mean by being new to big O haha
> Is there a good reason to write `Math.floor(breaks.length / Math.sqrt(breaks.length))` instead of `Math.floor(Math.sqrt(breaks.length))`?
>
> My reasoning was that we want to jump indexes by `sqrt(n)`. In an array of length `n`, there are `n / sqrt(n)` jumps that could be performed. Now that you pointed it out, I see it doesn't make any sense.
That is indeed an interesting point. It is related to the question of why we choose the square root and not some other root (the cube root, for example).

As you said, if we want to jump indexes by √n, the maximum number of jumps (throws of the first ball) is also √n. So, first, we try at most √n throws to see where the first ball breaks; then we return to the latest safe height, increase the height by one, throw the other ball, and repeat until it breaks (at most another √n attempts). The total number of throws is 2√n; in big O notation we drop the constant and get O(√n).

Had we chosen to increase the height (index) by a different root, say ∛n, we would increase the height in smaller steps and therefore need more throws of the first ball, namely n / ∛n = (∛n)², so the total number of throws would be (∛n)² + ∛n. In big O notation we keep only the largest term: O((∛n)²). Similarly, for the fourth, fifth, ... root, we reduce the size of the jump and increase the number of throws of the first ball, so the time complexity only gets worse than O(√n). (Going the other way and making the jump larger than √n doesn't help either: then the second ball's linear scan inside a block dominates.) The square root is the best choice.
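(To make that concrete, here is a small throwaway snippet of mine, not from the course, that counts worst-case throws for different jump sizes: one throw of the first ball per block, plus a linear scan inside one block with the second ball.)

```ts
// Worst-case number of throws for a given block (jump) size.
function worstCaseThrows(n: number, jump: number): number {
    return Math.ceil(n / jump) + (jump - 1);
}

const n = 10_000;
console.log(worstCaseThrows(n, Math.floor(Math.sqrt(n)))); // jump = 100  -> 199  (~2 * sqrt(n))
console.log(worstCaseThrows(n, Math.floor(Math.cbrt(n)))); // jump = 21   -> 497  (~(cbrt(n))^2)
console.log(worstCaseThrows(n, Math.floor(n ** 0.75)));    // jump = 1000 -> 1009 (scan dominates)
```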
> Forgive me, I'm a bit new to big O. Just from the "count the loops" technique though, you have a for loop inside another for loop. In my head that means you have O(sqrt(n) * sqrt(n)), which is actually equal to O(n), isn't it? That's why in the walkthrough he declares i outside of the first for loop, so the second one can use it without being nested within the first for loop.

You're right that, using the counting-the-loops technique, the algorithm seems to be O(n). However, in this case counting the loops is not enough because of the `continue` statement. As long as the first ball doesn't break, the `continue` statement terminates the current iteration of the first loop, and the second loop does not run. Only when the first ball breaks does the second loop run, and since it returns once it finds a `true` in the `breaks` array, the function terminates. To be honest, the readability of the code could be improved, and it doesn't work properly; for example, the input `[false, false, false, true]` will return `-1`.
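(Using the rough reconstruction from earlier in the thread as a stand-in for the code under discussion: for this input the jump size is 2, the outer loop only ever tests index 2 and then exits, the inner scan never runs, and the function falls through to `-1` even though 3 is expected.)

```ts
// Demonstrated with the reconstructed sketch above (a stand-in, not the exact
// code from this issue): the break point at index 3 sits past the last full jump.
console.log(two_crystal_balls([false, false, false, true])); // -1, expected 3
```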
> Please correct me if I'm wrong: I didn't really understand why he bothered with a `j` variable in the second loop, which he never uses. I took it out and just used `i`, and the code still passes.

You're right; the (Primeagen's) solution can work without the `j` index, but I guess those are the perks of coding live.

> it is kind of doing a linear search at the end, but the number of steps will always be <= sqrt(n), so it's not?

That's correct. It's a linear search on √n elements.

> EDIT again lol: The `j` I'm referring to:
>
> ```ts
> for (let j = 0; j < jump && i < breaks.length; ++j, ++i) {
>     if (breaks[i]) {
>         return i;
>     }
> }
> ```
>
> I guess this means the for loop can only run until j === sqrt(n), thereby ensuring that you never add more than sqrt(n) to i??? [...] To me it still looks like O(sqrt(n) + x) where x must always be < sqrt(n), but it kind of behaves in an O(n) way. [...] why put j in at all?

Again, your observation is correct. The code will work without the `j` index. Loosely speaking, your statement O(√n + x) with x <= √n makes sense and can be simplified (using the worst case) to O(√n + √n), or, dropping the constant, to O(√n). I just don't get your statement about "behaving in an O(n) way"; is it because the last comparison is against `breaks.length`? Either way, remember that you will do a linear search over at most √n elements (which may indeed be a good reason to keep the `j` index and its comparison: it makes explicit that you're only looping over √n values).
I had issues with some arrays of different sizes, so I debugged the code and changed only one thing in my algorithm. After the first loop encounters the first TRUE value (where the first ball breaks), the GOAT Prime deducts the jump amount from the index and starts again from the beginning of that portion.

What I changed was to increment the index variable by one, because we don't need to re-check the last element we visited before finding the TRUE value: we already know that element is FALSE, otherwise the previous portion would have been the one checked.

So I did the following:

```ts
i = i + 1 - jumpAmount;
```
Another solution would be to change the second for loop so that it runs while the j variable is less than OR EQUAL to jumpAmount (the only change from the version in the course is `<=` instead of `<`):

```ts
for (let j = 0; j <= jumpAmount && i < array.length; ++j, ++i) {
    if (array[i]) {
        return i;
    }
}
```

That is needed because, depending on where the TRUE value is, we end up one element short: the portion covered by the first loop spans one element more than the amount we want to jump (both endpoints included), while the second loop only checks jumpAmount elements. For example:
```ts
const array = [false, false, false, false, false, false, false, false, false, true, true, true, true];
```

The first run will jump to the 4th element, because the square root of 13, rounded down, is 3:

```ts
const jumpAmount = 3;
let index = jumpAmount; // 3

/* The first run covers indexes 0 to 3. */
/* If no TRUE is found, increment index by 3. Now index = 6. */
/* The second run covers indexes 3 to 6, which is 4 elements. */
```

But if the TRUE value we find is located at the last index of the portion we are checking, the second loop will only check 3 elements instead of the whole 4, because it only executes while the j variable is less than the jump amount. To prevent this we can either increment the start index of the portion we want to check linearly by 1 (because we already know that index is FALSE), or still check this last index by making sure the loop runs while the j variable is less than or EQUAL to the jump amount variable.
This passes the tests, but as it's different from the solution presented in the course, I'm not sure whether my solution preserves the O(√n) time complexity.
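(For completeness, a minimal sketch of the second variant described above, the course-style loops with `<=` instead of `<`; it is a reconstruction based on this thread, not the course's exact code. The inner loop runs at most `jumpAmount + 1` times, and the same argument applies to the `i = i + 1 - jumpAmount` variant, whose inner loop runs at most `jumpAmount` times, so both stay within O(√n).)

```ts
// Sketch of the adjusted solution discussed above (a reconstruction, not the
// course's exact code). Assumes the usual monotone `breaks` array.
function two_crystal_balls(breaks: boolean[]): number {
    const jumpAmount = Math.floor(Math.sqrt(breaks.length));

    // Ball #1: jump by jumpAmount until it breaks -- at most ~sqrt(n) throws.
    let i = jumpAmount;
    for (; i < breaks.length; i += jumpAmount) {
        if (breaks[i]) {
            break;
        }
    }

    // Ball #2: go back one full jump and scan that portion linearly. The <=
    // includes the portion's last index -- at most jumpAmount + 1 ~ sqrt(n) throws.
    i -= jumpAmount;
    for (let j = 0; j <= jumpAmount && i < breaks.length; ++j, ++i) {
        if (breaks[i]) {
            return i;
        }
    }
    return -1;
}
```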