author	Shaohua Li <shli@fb.com>	2016-11-03 17:03:53 -0700
committer	Huang, Tao <huangtao@rock-chips.com>	2017-09-15 09:39:36 +0800
commit	63e2f4ea2deeef82179d5a3bff76347b2ed738fb (patch)
tree	7dfee99856a7ef08d6c2d5b7b38831bfc04557ee /block
parent	809e38acf1e5bbd2149f5e443d8e08c4b24dd731 (diff)
UPSTREAM: block: immediately dispatch big size request
Currently the block plug holds up to 16 non-mergeable requests. This makes sense if the request size is small, e.g., to reduce lock contention. But if the request size is big enough, we don't need to worry about lock contention. Holding such a request makes no sense, and it lowers disk utilization. In practice, this improves throughput by 10% for my raid5 sequential write workload.

The size (128k) is arbitrary right now, but it makes sure lock contention stays small. This probably could be more intelligent, e.g., checking the average size of the held requests. Since this is mainly for sequential IO, that is probably not worth it.

V2: check the last request instead of the first request, so as long as there is one big size request we flush the plug.

Change-Id: I034ee890eb799ea2c2ee2d38f80f880398f39f91
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Shawn Lin <shawn.lin@rock-chips.com>
(cherry picked from commit 50d24c34403c62ad29e8b6db559d491bae20b4b7)
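For reference, the 16-request limit and the 128k threshold mentioned above correspond to the BLK_MAX_REQUEST_COUNT and BLK_PLUG_FLUSH_SIZE constants tested in the diff below. The diffstat here is limited to block/, so the header change is not shown; a minimal sketch of the definitions involved, as presumably present in or added to include/linux/blkdev.h (exact placement is an assumption):

    /* A plug holds at most this many non-mergeable requests before flushing. */
    #define BLK_MAX_REQUEST_COUNT 16

    /* New with this commit: flush the plug early once the last queued request
     * reaches this size (128k), since big requests gain little from plugging. */
    #define BLK_PLUG_FLUSH_SIZE (128 * 1024)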
Diffstat (limited to 'block')
-rw-r--r--block/blk-core.c4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/block/blk-core.c b/block/blk-core.c
index be43481bcb12..b3c48aaa6dc0 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1800,7 +1800,9 @@ get_rq:
if (!request_count)
trace_block_plug(q);
else {
- if (request_count >= BLK_MAX_REQUEST_COUNT) {
+ struct request *last = list_entry_rq(plug->list.prev);
+ if (request_count >= BLK_MAX_REQUEST_COUNT ||
+ blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE) {
blk_flush_plug_list(plug, false);
trace_block_plug(q);
}