The existing naive prime-based scheme could end up scheduling all
operations for the subtree of a superbucket onto a single strand,
despite previous attempts to disperse them using prime number
multiplication. This severely limited parallelism for superbucket
locality-sensitive reads such as streaming search visitors.
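The pitfall can be sketched roughly as follows (hypothetical names, not the actual scheduler code; it assumes, as in a bucket-tree key scheme, that all buckets in a superbucket's subtree share their low key bits). Multiplying by a prime and reducing modulo a power-of-two strand count only ever looks at the low bits of the key, so an entire subtree collapses onto one strand; running the key through a full-avalanche mixer first spreads it:

```cpp
#include <cstddef>
#include <cstdint>

// Example prime (the FNV-1a prime); any odd prime shows the same effect.
constexpr uint64_t PRIME = 1099511628211ULL;

// Naive scheme: (key * PRIME) mod 2^k depends only on the low k bits of
// key, so keys that share their low bits all map to the same strand.
inline size_t naive_strand(uint64_t key, size_t n_strands_pow2) {
    return (key * PRIME) & (n_strands_pow2 - 1);
}

// Full-avalanche mixer (the SplitMix64 finalizer): every input bit
// influences every output bit before the modulo reduction.
inline uint64_t mix64(uint64_t x) {
    x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
    x ^= x >> 27; x *= 0x94d049bb133111ebULL;
    x ^= x >> 31;
    return x;
}

inline size_t mixed_strand(uint64_t key, size_t n_strands_pow2) {
    return mix64(key) & (n_strands_pow2 - 1);
}
```

With 16 strands, two keys that differ only above bit 31 are guaranteed to collide under `naive_strand`, while `mixed_strand` decorrelates them.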
If the newest document version is a tombstone, behave
as if the document was not found at all. Since we still
track replica consistency, this should work as expected
for multi-phase update operations as well.
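A minimal sketch of that behavior, with hypothetical types rather than the actual storage API: a read that finds a tombstone as the newest version reports "not found", but still surfaces the tombstone's timestamp so replica-consistency tracking (and thus multi-phase updates) keeps working:

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

// One stored version of a document (hypothetical type).
struct Version {
    uint64_t timestamp = 0;
    bool tombstone = false;
    std::string payload;
};

struct GetResult {
    bool found = false;
    uint64_t timestamp = 0;   // newest version's timestamp, tombstone or not
    std::string payload;
};

// Versions keyed by timestamp; the newest version decides visibility.
inline GetResult get_newest(const std::map<uint64_t, Version>& versions) {
    GetResult r;
    if (versions.empty()) {
        return r; // no versions at all: not found, timestamp 0
    }
    const Version& newest = versions.rbegin()->second;
    r.timestamp = newest.timestamp; // consistency tracking still sees this
    if (!newest.tombstone) {
        r.found = true;
        r.payload = newest.payload;
    }
    return r;
}
```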
The tracker was passed by move down to the handler function,
but the surrounding code would try to auto-synthesize a reply
from the message (now owned by the tracker) if an exception was
thrown from the handler. Fun ensued.
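The bug pattern distills to a use-after-move (names here are hypothetical, not the real handler code): the message is moved into the tracker, but the catch block still dereferences the moved-from pointer to synthesize a reply. The fix is to capture whatever the error path needs before ownership is transferred:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

struct Message { std::string body; };

// The tracker takes ownership of the message.
struct Tracker {
    std::unique_ptr<Message> msg;
    explicit Tracker(std::unique_ptr<Message> m) : msg(std::move(m)) {}
};

// Stand-in for a handler that throws mid-processing.
void handler(Tracker) {
    throw std::runtime_error("handler failed");
}

// Buggy shape: after std::move(msg) the local pointer is null, so the
// catch block's msg->body is a null dereference:
//
//   try {
//       handler(Tracker(std::move(msg)));
//   } catch (...) {
//       reply = synthesizeReply(msg->body);   // msg was moved away!
//   }

// Safe shape: copy what the error path needs before moving.
std::string process(std::unique_ptr<Message> msg) {
    std::string bodyForReply = msg->body;
    try {
        handler(Tracker(std::move(msg)));
        return "ok";
    } catch (const std::exception&) {
        return "error-reply-for:" + bodyForReply; // no moved-from access
    }
}
```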
mark_controller_as_having_observed_explicit_node_state.
- Let default bucket iteration work in smaller chunks with shorter waits.
Implement async remove.
vekterli/remove-deprecated-bucket-disk-move-functionality
Conflicts:
storage/src/tests/persistence/diskmoveoperationhandlertest.cpp
The notion of multiple disks hasn't been supported since we
removed VDS, and likely won't be in the future either.
- Implement async put
taking a reference when it is safe.
- Move result processing to MessageTracker
- Wire putAsync through provider error wrapper too.
- Handle both sync and async replies in tests.
Rename namespace search::datastore to vespalib::datastore.
Not in use after VDS was removed.
- Use MessageTracker for keeping context.
- Implement putAsync, but still use it synchronously.
vespa-engine/vekterli/optimize-btree-find-parents-with-fix
Optimize B-tree bucket DB lookup with used-bits aggregation
By tracking the minimum used bits count across all buckets in
the database we can immediately start seeking at that implicit
level in the tree, as we know no parent buckets can exist above
that level.
Local synthetic benchmarking shows the following results with a
DB size of 917504 buckets and performing getParents for all
buckets in sequence:
Before optimization:
- B-tree DB: 0.593321 seconds
- Legacy DB: 0.227947 seconds
After optimization:
- B-tree DB: 0.191971 seconds
- Legacy DB: (unchanged)
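The idea can be illustrated with a simplified key scheme (this is a sketch, not the real bucket DB): a bucket key is its used-bits count plus that many low bits of the raw id, and getParents probes each ancestor level of a bucket. Tracking the minimum used-bits count across the DB lets the probe start at that level instead of level 1, since no bucket can exist above it:

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Simplified bucket key: (usedBits, low `usedBits` bits of the raw id).
struct BucketKey {
    uint32_t usedBits;
    uint64_t bits;
    bool operator<(const BucketKey& o) const {
        return (usedBits != o.usedBits) ? usedBits < o.usedBits
                                        : bits < o.bits;
    }
};

inline uint64_t low_bits(uint64_t raw, uint32_t n) {
    return (n >= 64) ? raw : (raw & ((uint64_t(1) << n) - 1));
}

struct BucketDb {
    std::set<BucketKey> keys;
    uint32_t minUsedBits = 64; // aggregated minimum across all buckets

    void insert(uint32_t usedBits, uint64_t raw) {
        keys.insert({usedBits, low_bits(raw, usedBits)});
        if (usedBits < minUsedBits) minUsedBits = usedBits;
    }

    // Probe each level that could contain the bucket or an ancestor of it,
    // starting at minUsedBits: levels above it are known to be empty.
    std::vector<BucketKey> getParents(uint64_t raw, uint32_t usedBits) const {
        std::vector<BucketKey> out;
        for (uint32_t n = minUsedBits; n <= usedBits; ++n) {
            BucketKey cand{n, low_bits(raw, n)};
            if (keys.count(cand)) out.push_back(cand);
        }
        return out;
    }
};
```

In a tree-backed DB the per-level probe becomes a seek, which is where the measured speedup above comes from.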
operations easier to implement.
Avoid state in the thread.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
By tracking the minimum used bits count across all buckets in
the database we can immediately start seeking at that implicit
level in the tree, as we know no parent buckets can exist above
that level.
Local synthetic benchmarking shows the following results with a
DB size of 917504 buckets and performing getParents for all
buckets in sequence:
Before optimization:
- B-tree DB: 0.593321 seconds
- Legacy DB: 0.227947 seconds
After optimization:
- B-tree DB: 0.213738 seconds
- Legacy DB: (unchanged)
vespa-engine/toregge/relax-judy-array-test-null-pointer-output-check
Some libraries print "0x0" for a null void ptr
If requests or responses from external sources are being constantly
processed as part of the distributor tick, allow for up to N ticks
to skip maintenance scanning, where N is a configurable number.
This reduces the amount of CPU time spent on maintenance operations
when the node has a lot of incoming data to deal with.
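The tick policy reduces to a small state machine, sketched here with hypothetical names: a tick that processed external work may skip the maintenance scan, but never more than N ticks in a row, so maintenance still makes progress under sustained load:

```cpp
#include <cstdint>

class MaintenanceTickPolicy {
public:
    // maxSkips is the configurable N from the commit message.
    explicit MaintenanceTickPolicy(uint32_t maxSkips) : _maxSkips(maxSkips) {}

    // Returns true if this tick should run the maintenance scan.
    bool shouldScan(bool processedExternalWork) {
        if (processedExternalWork && _consecutiveSkips < _maxSkips) {
            ++_consecutiveSkips; // busy with client traffic: defer the scan
            return false;
        }
        _consecutiveSkips = 0;   // idle tick, or skip budget exhausted
        return true;
    }

private:
    uint32_t _maxSkips;
    uint32_t _consecutiveSkips = 0;
};
```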
Bucket DB updating happened unconditionally anyway; this was
only used for failing operations in an overly pessimistic way.
Removing this lookup has two benefits:
- Less CPU spent in DB
- Less expected impact on feeding during node state transitions,
  since fewer operations will have to be needlessly retried by
  the client.
Rationale: an operation towards a given bucket completes (i.e.
is ACKed by all its replica nodes) at time t and the bucket is
removed from the DB at time T. There is no fundamental change
in correctness or behavior from the client's perspective if
the order of events is tT or Tt. Both are equally valid, as
the state transition edge happens independently of any reply
processing.
B-tree/datastore stats can be expensive to sample, so don't do
this after every full DB iteration. For now, wait at least 30s.
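One way to express that throttle (a sketch with made-up names, not the actual code): resample the expensive stats only if at least a minimum interval has elapsed since the previous sample, no matter how often a full DB iteration completes:

```cpp
#include <chrono>

class StatsSampleThrottle {
public:
    using Clock = std::chrono::steady_clock;

    // minInterval would be the 30s from the commit message.
    explicit StatsSampleThrottle(Clock::duration minInterval)
        : _minInterval(minInterval) {}

    // Call at the end of each full DB iteration; true means "sample now".
    bool shouldSample(Clock::time_point now) {
        if (_hasSampled && (now - _lastSample) < _minInterval) {
            return false; // too soon: skip the expensive sampling
        }
        _hasSampled = true;
        _lastSample = now;
        return true;
    }

private:
    Clock::duration _minInterval;
    Clock::time_point _lastSample{};
    bool _hasSampled = false;
};
```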