author     Andy Wingo <wingo@pobox.com>    2017-02-17 11:01:19 +0100
committer  Andy Wingo <wingo@pobox.com>    2017-02-17 11:04:16 +0100
commit     2864f11d3415c650d9e80f8e7787e4df81dcc7e9
tree       a1ee7cd94da9acb53fb0b72ceb1d85edc5e5f7a9 /libguile
parent     60035b66c795ffe82800b6400e5aba5b3d6fd5ca
Bump fluid cache size to 16 entries
* libguile/cache-internal.h (SCM_CACHE_SIZE): Bump to 16.  It seems that
  a thread accesses more than 8 fluids by default (%stacks, the
  exception handler, current ports, current-fiber, port read/write
  waiters), which causes every fiber to evict cache entries and copy the
  value table, and that copying is a bottleneck.  Instead, just bump the
  cache size.
  (scm_cache_lookup): Update the unrolled search for the larger cache.
Diffstat (limited to 'libguile')
-rw-r--r--  libguile/cache-internal.h  3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/libguile/cache-internal.h b/libguile/cache-internal.h
index fc1e3c139..4c1732f81 100644
--- a/libguile/cache-internal.h
+++ b/libguile/cache-internal.h
@@ -39,7 +39,7 @@ struct scm_cache_entry
   scm_t_bits value;
 };
 
-#define SCM_CACHE_SIZE 8
+#define SCM_CACHE_SIZE 16
 
 struct scm_cache
 {
@@ -81,6 +81,7 @@ scm_cache_lookup (struct scm_cache *cache, SCM k)
   scm_t_bits k_bits = SCM_UNPACK (k);
   struct scm_cache_entry *entry = cache->entries;
   /* Unrolled binary search, compiled to branchless cmp + cmov chain.  */
+  if (entry[8].key <= k_bits) entry += 8;
   if (entry[4].key <= k_bits) entry += 4;
   if (entry[2].key <= k_bits) entry += 2;
   if (entry[1].key <= k_bits) entry += 1;
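
For illustration only, here is a minimal standalone sketch of the unrolled
binary search the patch extends: doubling the cache to 16 entries adds one
more comparison level (the new entry[8] step), keeping the lookup at
log2(16) = 4 branchless comparisons.  The names below (toy_cache,
toy_lookup, TOY_CACHE_SIZE) are hypothetical and simplified; the real
definitions live in libguile/cache-internal.h.

/* Sketch (not the Guile source): unrolled branchless binary search
   over a 16-entry cache kept sorted by key.  */
#include <stdint.h>
#include <stdio.h>

#define TOY_CACHE_SIZE 16

struct toy_cache_entry
{
  uintptr_t key;     /* 0 marks an empty slot in this sketch */
  uintptr_t value;
};

struct toy_cache
{
  struct toy_cache_entry entries[TOY_CACHE_SIZE];
};

/* Return the entry with the largest key <= k, or the first entry if
   there is none.  Callers must still check entry->key == k.  The
   entry[8] comparison is the step added when the size went from 8
   to 16.  */
static struct toy_cache_entry *
toy_lookup (struct toy_cache *cache, uintptr_t k)
{
  struct toy_cache_entry *entry = cache->entries;
  if (entry[8].key <= k) entry += 8;
  if (entry[4].key <= k) entry += 4;
  if (entry[2].key <= k) entry += 2;
  if (entry[1].key <= k) entry += 1;
  return entry;
}

int
main (void)
{
  struct toy_cache cache = { 0 };
  /* The entries must stay sorted by key for the search to work.  */
  for (int i = 0; i < TOY_CACHE_SIZE; i++)
    {
      cache.entries[i].key = (uintptr_t) (i + 1) * 10;
      cache.entries[i].value = (uintptr_t) i;
    }
  struct toy_cache_entry *e = toy_lookup (&cache, 70);
  printf ("key %lu -> value %lu\n",
          (unsigned long) e->key, (unsigned long) e->value);  /* 70 -> 6 */
  return 0;
}

Because the size is a compile-time constant and the comparisons have no
data-dependent branches, compilers typically lower each `if` to a cmp +
cmov pair, as the comment in the real code notes.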