lipinggm / tlb

Automatically exported from code.google.com/p/tlb

use weighted mean to default test times when time data not available #26

Open GoogleCodeExporter opened 9 years ago

GoogleCodeExporter commented 9 years ago
verify before closing

Original issue reported on code.google.com by singh.janmejay on 17 Dec 2010 at 8:09

GoogleCodeExporter commented 9 years ago
Haven't worked on stories in a long time. Will pick this one up and add a comment on what the decided approach is.

Original comment by itspa...@gmail.com on 20 Nov 2011 at 11:25

GoogleCodeExporter commented 9 years ago
Group the historic test times so that values within a group are close to each other. Then, using the groups, compute the default time for a new test as each group's mean weighted by the probability of a test falling in that group.

For example, the test times 1, 2, 3, 4, 23, 26, 45, 66, 90, 100, 220 would be grouped into (1, 2, 3, 4), (23, 26), (45), (66), (90, 100), (220). A new test would get a time of (4/11 * 2.5) + (2/11 * 24.5) + (1/11 * 45) + (1/11 * 66) + (2/11 * 95) + (1/11 * 220). This is more realistic than just using a simple mean.
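
A rough, self-contained sketch of the idea (the gap threshold used to split the buckets and the class name are invented for illustration, not anything tlb has today):

```java
import java.util.ArrayList;
import java.util.List;

public class BucketedDefaultTime {

    // Split sorted historic times into buckets wherever the gap to the
    // previous value exceeds the threshold.
    static List<List<Double>> bucket(double[] sortedTimes, double gapThreshold) {
        List<List<Double>> buckets = new ArrayList<List<Double>>();
        List<Double> current = new ArrayList<Double>();
        for (int i = 0; i < sortedTimes.length; i++) {
            if (!current.isEmpty() && sortedTimes[i] - sortedTimes[i - 1] > gapThreshold) {
                buckets.add(current);
                current = new ArrayList<Double>();
            }
            current.add(sortedTimes[i]);
        }
        if (!current.isEmpty()) {
            buckets.add(current);
        }
        return buckets;
    }

    // Default time for a new test = sum over buckets of
    // (bucket size / total count) * bucket mean.
    static double defaultTime(double[] sortedTimes, double gapThreshold) {
        List<List<Double>> buckets = bucket(sortedTimes, gapThreshold);
        double total = sortedTimes.length;
        double estimate = 0;
        for (List<Double> b : buckets) {
            double sum = 0;
            for (double t : b) {
                sum += t;
            }
            estimate += (b.size() / total) * (sum / b.size());
        }
        return estimate;
    }

    public static void main(String[] args) {
        double[] times = {1, 2, 3, 4, 23, 26, 45, 66, 90, 100, 220};
        // a threshold of 15 reproduces the grouping described above;
        // prints 52.7272... for this example
        System.out.println(defaultTime(times, 15));
    }
}
```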

Original comment by itspa...@gmail.com on 22 Jan 2012 at 11:11

GoogleCodeExporter commented 9 years ago
Isn't this just a complicated way of computing the average? I mean:
(4/11 * 2.5) + (2/11 * 24.5) + (1/11 * 45) + (1/11 * 66) + (2/11 * 95) + (1/11 * 220) = (1/11 * 1) + (1/11 * 2) + ... + (1/11 * 220) = (1 + 2 + ... + 220)/11
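
Spelling out why the weights cancel (writing $n_g$ for the size of bucket $g$, $\bar{t}_g$ for its mean, $t_i$ for the individual times and $N = 11$):

$$\sum_g \frac{n_g}{N}\,\bar{t}_g = \sum_g \frac{n_g}{N} \cdot \frac{1}{n_g}\sum_{i \in g} t_i = \frac{1}{N}\sum_i t_i$$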

We need to explore something smarter.

Original comment by singh.janmejay on 22 Jan 2012 at 4:40

GoogleCodeExporter commented 9 years ago
Of course! Lame. OK, so, I was trying to go for the probability of each bucket 
and how much it contributes to the new time. We clearly need something 
different.

We have two options: use the probability to distribute new tests into the buckets (a rough sketch of this follows below), or chuck this line of reasoning and go with a different one based on test names and test properties.
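
A rough sketch of the first option, reusing the buckets from the earlier comment (class and method names are invented, nothing here exists in tlb):

```java
import java.util.List;
import java.util.Random;

// Instead of averaging the bucket means, pick a bucket at random with
// probability proportional to its size and use that bucket's mean as the
// new test's default time.
public class BucketSampler {
    private final Random random = new Random();

    double defaultTimeFor(List<List<Double>> buckets, int totalCount) {
        int pick = random.nextInt(totalCount); // uniform over all historic observations
        for (List<Double> bucket : buckets) {
            if (pick < bucket.size()) {
                double sum = 0;
                for (double t : bucket) {
                    sum += t;
                }
                return sum / bucket.size();
            }
            pick -= bucket.size();
        }
        throw new IllegalStateException("totalCount did not match the bucket sizes");
    }
}
```

Different new tests would then get different defaults instead of all collapsing to the overall mean, but whether that is actually better is part of what needs exploring.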

For now, I am reopening this and parking it.

Original comment by itspa...@gmail.com on 23 Jan 2012 at 5:21

GoogleCodeExporter commented 9 years ago
After talking with JJ, we have decided to park this for now. This may not be the right approach. We should see if we can leverage some test-name correlation to find the test times. For example, com.foo.integration will likely take more time than com.foo.unit. But that might work only for Java tests and not others (say, Twist tests). So we need to do more analysis on this one. We should look at this again for 0.5.
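
A very rough sketch of what the name-correlation idea might look like for Java-style test names (the class and the longest-shared-package-prefix heuristic are invented for illustration; nothing is decided):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Default a new test's time from the average historic time of tests whose
// names share the longest package prefix with it; fall back to a configured
// value when nothing matches.
public class NameCorrelatedDefault {

    // historic test name -> observed time (seconds)
    private final Map<String, Double> historicTimes = new LinkedHashMap<String, Double>();

    void record(String testName, double seconds) {
        historicTimes.put(testName, seconds);
    }

    double defaultTimeFor(String newTestName, double fallback) {
        int bestPrefixLen = 0;
        double sum = 0;
        int count = 0;
        for (Map.Entry<String, Double> entry : historicTimes.entrySet()) {
            int prefixLen = commonPackagePrefixLength(newTestName, entry.getKey());
            if (prefixLen > bestPrefixLen) {
                bestPrefixLen = prefixLen;
                sum = entry.getValue();
                count = 1;
            } else if (prefixLen == bestPrefixLen && prefixLen > 0) {
                sum += entry.getValue();
                count++;
            }
        }
        return count == 0 ? fallback : sum / count;
    }

    // number of leading dot-separated segments two names share
    private static int commonPackagePrefixLength(String a, String b) {
        String[] as = a.split("\\.");
        String[] bs = b.split("\\.");
        int i = 0;
        while (i < as.length && i < bs.length && as[i].equals(bs[i])) {
            i++;
        }
        return i;
    }
}
```

With that, a new test under com.foo.integration would default to the average of the recorded com.foo.integration times rather than the overall mean.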

Original comment by itspa...@gmail.com on 21 Feb 2012 at 9:15