author    Igor Mammedov <imammedo@redhat.com>    2014-12-01 17:29:27 +0000
committer Paolo Bonzini <pbonzini@redhat.com>    2014-12-04 15:29:11 +0100
commit    9c1a5d38780e652275aa55362dbee0d7e827e069 (patch)
tree      0fc252e3948a30b78974e3ba8979d5aa231de22f /virt
parent    0e60b0799fedc495a5c57dbd669de3c10d72edd2 (diff)
kvm: optimize GFN to memslot lookup with large slots amount
The current linear search doesn't scale well when a large number of memslots is in use and the looked-up slot is not at the beginning of the memslots array. Taking into account that memslots don't overlap, it's possible to switch the sorting order of the memslots array from 'npages' to 'base_gfn' and use binary search for memslot lookup by GFN.

As a result of switching to binary search, lookup times are reduced when a large number of memslots is in use.

Following is a table of search_memslots() cycles during WS2008R2 guest boot.

                             boot,          boot + ~10 min
                             mostly same    of using it,
                             slot lookup    randomized lookup
                  max        average        average
                  cycles     cycles         cycles

13 slots        : 1450       28             30
13 slots        : 1400       30             40     (binary search)
117 slots       : 13000      30             460
117 slots       : 2000       35             180    (binary search)

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
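For context, the binary-search lookup itself does not appear in the hunk below, which only covers the used_slots bookkeeping in virt/kvm/kvm_main.c (this view is limited to 'virt'). A minimal standalone sketch of a GFN lookup over slots sorted by descending base_gfn, using simplified stand-in types rather than the kernel's actual structures:

	#include <stddef.h>

	typedef unsigned long long gfn_t;

	/* Simplified stand-in for struct kvm_memory_slot. */
	struct slot {
		gfn_t base_gfn;
		unsigned long npages;
	};

	/*
	 * Binary search over an array sorted by descending base_gfn:
	 * find the first index whose base_gfn is <= gfn, then check
	 * that gfn actually falls inside that slot.  O(log n) versus
	 * the old O(n) linear scan.
	 */
	static struct slot *gfn_to_slot(struct slot *slots, int used_slots,
					gfn_t gfn)
	{
		int start = 0, end = used_slots;

		while (start < end) {
			int mid = start + (end - start) / 2;

			if (gfn >= slots[mid].base_gfn)
				end = mid;	/* answer is at mid or earlier */
			else
				start = mid + 1;
		}

		if (start < used_slots &&
		    gfn >= slots[start].base_gfn &&
		    gfn < slots[start].base_gfn + slots[start].npages)
			return &slots[start];

		return NULL;	/* gfn is not covered by any slot */
	}

The used_slots bound here is exactly what the hunk below maintains: empty slots never enter the searched range.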
Diffstat (limited to 'virt')
-rw-r--r--  virt/kvm/kvm_main.c  |  8
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 162817f853ec..759af6596a07 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -679,8 +679,14 @@ static void update_memslots(struct kvm_memslots *slots,
 	struct kvm_memory_slot *mslots = slots->memslots;
 
 	WARN_ON(mslots[i].id != id);
-	if (!new->npages)
+	if (!new->npages) {
 		new->base_gfn = 0;
+		if (mslots[i].npages)
+			slots->used_slots--;
+	} else {
+		if (!mslots[i].npages)
+			slots->used_slots++;
+	}
 
 	while (i < KVM_MEM_SLOTS_NUM - 1 &&
 	       new->base_gfn <= mslots[i + 1].base_gfn) {
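The two counter adjustments in this hunk fire only on 0 <-> non-0 transitions of npages, so used_slots always equals the number of non-empty slots and can safely bound the binary search. A minimal standalone model of that invariant, with simplified types and a hypothetical set_slot_size() helper rather than the kernel's code:

	#include <assert.h>

	struct model_slot {
		unsigned long npages;	/* 0 means the slot is unused */
	};

	struct model_slots {
		struct model_slot slot[16];
		int used_slots;		/* slots with npages != 0 */
	};

	/*
	 * Mirrors the hunk above: adjust used_slots only when npages
	 * transitions between zero and non-zero, so resizing an
	 * existing slot leaves the counter untouched.
	 */
	static void set_slot_size(struct model_slots *s, int i,
				  unsigned long npages)
	{
		if (!npages) {
			if (s->slot[i].npages)
				s->used_slots--;
		} else {
			if (!s->slot[i].npages)
				s->used_slots++;
		}
		s->slot[i].npages = npages;
	}

	int main(void)
	{
		static struct model_slots s;	/* zero-initialized */

		set_slot_size(&s, 0, 256);	/* create -> used_slots == 1 */
		set_slot_size(&s, 0, 512);	/* resize -> still 1 */
		set_slot_size(&s, 0, 0);	/* delete -> back to 0 */
		assert(s.used_slots == 0);
		return 0;
	}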