Closed disaster123 closed 8 years ago
hi
Doesn't `writeback_threshold` work? I think that's what you want.
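For reference, dm-writeboost runtime tunables are set through `dmsetup message`; a sketch, assuming a writeboost mapping named `wbdev` (the device name and threshold value here are illustrative, not from this thread):

```shell
# Tell background writeback to back off once the backing device
# is more than ~70% busy (value is illustrative).
dmsetup message wbdev 0 writeback_threshold 70
```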
On 2015/07/23, at 19:44, disaster123 notifications@github.com wrote:
bcache has a pretty cool feature: if the drive can't keep up with the writes, or is too slow for them, it adds a dynamic sleep between each backend write.
Currently I have the problem that read latency is very high on dm-writeboost devices when there are a lot of writes.
Stefan
Ah OK, I'm sorry, I think I misread the documentation. Reading the code makes it clearer:
```c
int writeback_modulator_proc(void *data)
{
	struct wb_device *wb = data;
	struct hd_struct *hd = wb->backing_dev->bdev->bd_part;

	unsigned long old = 0, new, util;
	unsigned long intvl = 1000;

	while (!kthread_should_stop()) {
		new = jiffies_to_msecs(part_stat_read(hd, io_ticks));
		util = div_u64(100 * (new - old), 1000);

		if (util < ACCESS_ONCE(wb->writeback_threshold))
			wb->allow_writeback = true;
		else
			wb->allow_writeback = false;

		old = new;
		schedule_timeout_interruptible(msecs_to_jiffies(intvl));
	}
	return 0;
}
```
Will redo some tests with lowering the value to 50.
mhm but isn't that too easy? It is OK to have 100% writes if there are no reads.
> Will redo some tests with lowering the value to 50.
The calculation of load is an approximation, or there could be a mistake. But it actually turns off writeback when the load is too high. That's enough for me.
> It is OK to have 100% writes if there are no reads.
I don't think so. Keeping the load around 70 eventually achieves the highest throughput.
Is it the same algorithm iostat uses?
I believe yes, but I am not sure about iostat beyond the fact that it looks at diskstats.
I learned the correct usage of `part_stat_read` from the diskstats code:
http://lxr.free-electrons.com/source/block/genhd.c#L1147
```c
disk_part_iter_init(&piter, gp, DISK_PITER_INCL_EMPTY_PART0);
while ((hd = disk_part_iter_next(&piter))) {
	cpu = part_stat_lock();
	part_round_stats(cpu, hd);
	part_stat_unlock();
	seq_printf(seqf, "%4d %7d %s %lu %lu %lu "
		   "%u %lu %lu %lu %u %u %u %u\n",
		   MAJOR(part_devt(hd)), MINOR(part_devt(hd)),
		   disk_name(gp, hd->partno, buf),
		   part_stat_read(hd, ios[READ]),
		   part_stat_read(hd, merges[READ]),
		   part_stat_read(hd, sectors[READ]),
		   jiffies_to_msecs(part_stat_read(hd, ticks[READ])),
		   part_stat_read(hd, ios[WRITE]),
		   part_stat_read(hd, merges[WRITE]),
		   part_stat_read(hd, sectors[WRITE]),
		   jiffies_to_msecs(part_stat_read(hd, ticks[WRITE])),
		   part_in_flight(hd),
		   jiffies_to_msecs(part_stat_read(hd, io_ticks)),
		   jiffies_to_msecs(part_stat_read(hd, time_in_queue))
		   );
}
```