vespa-engine/vekterli/btree-bucket-db-support-on-content-node
Create generic B-tree bucket DB and content node DB implementation
Also rewrite some GMock macros that triggered Valgrind warnings
due to default test object printers accessing uninitialized memory.
This is the first stage of removing the legacy DB implementation.
Support for B-tree specific functionality such as lock-free snapshot
reads will be added soon. This commit is just for feature parity.
Abstract away the actual database implementation so that it can be
chosen dynamically at startup. This abstraction does incur some
overhead via call indirection and type erasure of callbacks, so it
will likely be removed once the transition to the new B-tree DB has
been completed.
Since the algorithms used for bucket key operations are so similar
between the content node and the distributor, a generic B-tree backed
bucket database has been created. The distributor DB will be rewritten
around this code very soon.
Due to the strong coupling between bucket locking and actual DB
implementation details, the new bucket DB has fairly significant
code overlap with the legacy implementation. This avoids spending
time abstracting away and factoring out code for a legacy
implementation that is to be removed entirely anyway.
Also remove existing LockableMap functionality that is unused or
only used by tests.
The existing naive prime-based solution was susceptible to scheduling
all operations for the subtree of a superbucket in a single strand,
despite previous attempts to disperse these using prime number
multiplication. This would severely limit parallelism for superbucket
locality-sensitive reads such as streaming search visitors.
If the newest document version is a tombstone, behave
as if the document was not found at all. Since we still
track replica consistency, this should work as expected
for multi-phase update operations as well.
The tracker was passed by move down to the handler function,
but the surrounding code would try to auto-synthesize a reply
from the message (now owned by the tracker) if an exception was
thrown from the handler. Fun ensued.
mark_controller_as_having_observed_explicit_node_state.
- Let default bucket iteration work in smaller chunks with shorter waits.
Implement async remove.
vekterli/remove-deprecated-bucket-disk-move-functionality
Conflicts:
storage/src/tests/persistence/diskmoveoperationhandlertest.cpp
The notion of multiple disks hasn't been supported since we
removed VDS, and likely won't be in the future either.
- Implement async put
taking a reference when it is safe.
- Move result processing to MessageTracker
- Wire putAsync through provider error wrapper too.
- Handle both sync and async replies in tests.
Rename namespace search::datastore to vespalib::datastore.
Not in use after VDS was removed.
| |
- Use MessageTracker for keeping context.
- Implement putAsync, but still use it synchronously.
|
vespa-engine/vekterli/optimize-btree-find-parents-with-fix
Optimize B-tree bucket DB lookup with used-bits aggregation
By tracking the minimum used bits count across all buckets in
the database we can immediately start seeking at that implicit
level in the tree, as we know no parent buckets can exist above
that level.
Local synthetic benchmarking shows the following results with a
DB size of 917504 buckets and performing getParents for all
buckets in sequence:
Before optimization:
- B-tree DB: 0.593321 seconds
- Legacy DB: 0.227947 seconds
After optimization:
- B-tree DB: 0.191971 seconds
- Legacy DB: (unchanged)
operations easier to implement.
Avoid state in the thread.
| |
By tracking the minimum used bits count across all buckets in
the database we can immediately start seeking at that implicit
level in the tree, as we know no parent buckets can exist above
that level.
Local synthetic benchmarking shows the following results with a
DB size of 917504 buckets and performing getParents for all
buckets in sequence:
Before optimization:
- B-tree DB: 0.593321 seconds
- Legacy DB: 0.227947 seconds
After optimization:
- B-tree DB: 0.213738 seconds
- Legacy DB: (unchanged)