| Commit message | Author | Age | Files | Lines |
vespa-engine/geirst/dispatch-get-and-visitor-messages-to-stripe
Dispatch get and visitor messages to correct distributor stripe.
hash lookup.
If it is a wildcard lookup, iterate as before.
Also use vespalib::stringref in the interface to avoid conversion.
Use vespalib::string in the hash map to locate the string in the object, as we are still on the old ABI.
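The lookup pattern described above can be sketched with std:: types standing in for the vespalib ones (std::string_view for vespalib::stringref, std::string as the map key); the FieldSet/Field names below are purely illustrative, not the actual classes touched by this commit:

```cpp
#include <map>
#include <string>
#include <string_view>

struct Field { int id; };

class FieldSet {
    // std::less<> makes the map transparent, so a string_view argument can be
    // compared against std::string keys without constructing a temporary string.
    std::map<std::string, Field, std::less<>> _fields;
public:
    void add(std::string name, Field f) { _fields.emplace(std::move(name), f); }
    // Takes a view (cf. vespalib::stringref) to avoid a conversion at the call site.
    const Field* lookup(std::string_view name) const {
        auto it = _fields.find(name);  // direct lookup, no temporary key object
        return (it == _fields.end()) ? nullptr : &it->second;
    }
};
```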
The most basic functionality is now supported using multiple distributor stripes (and threads).
Note that the following is (at least) still missing:
* Stripe-separate metrics with top-level aggregation.
* Aggregation over all stripes in misc functions in Distributor that currently use the first stripe.
* Handling of messages without a bucket id in the top-level Distributor instead of using the first stripe.
vespa-engine/geirst/validate-distributor-stripes-config
Add validation of the number of distributor stripes from config and a…
asserts.
This ensures the number of stripes is a power of 2 and within the MaxStripes boundary.
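A minimal sketch of such a check; the MaxStripes value and function name here are assumptions, not the actual Vespa identifiers:

```cpp
#include <cstdint>

constexpr uint32_t MaxStripes = 256;  // assumed bound, not Vespa's actual value

// True iff n is a power of 2 and within the MaxStripes boundary.
bool valid_stripe_count(uint32_t n) {
    // A power of 2 has exactly one bit set, so n & (n - 1) clears it to zero.
    return n > 0 && (n & (n - 1)) == 0 && n <= MaxStripes;
}
```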
To avoid starvation of high priority global bucket merges, we do not consider
these for blocking due to a node being "busy" (usually caused by a full merge
throttler queue).
This is for two reasons:
1. When an ideal state op is blocked, it is still removed from the internal
maintenance priority queue. This means a blocked high pri operation will
not be retried until the next DB pass (at which point the node is likely
to still be marked as busy when there's heavy merge traffic).
2. Global bucket merges have high priority and will most likely be allowed
to enter the merge throttler queues, displacing lower priority merges.
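The blocking rule can be sketched roughly as follows. The priority threshold and function name are illustrative assumptions (in storage message priorities, a lower numeric value typically means higher priority):

```cpp
#include <cstdint>

constexpr uint8_t kHighPriorityThreshold = 50;  // assumed value, not Vespa's

// Sketch: high priority global bucket merges skip the "node busy" block, so
// they are not parked until the next DB pass and instead proceed towards the
// merge throttler queue, displacing lower priority merges there.
bool blocked_by_busy_node(bool node_busy, uint8_t priority, bool is_global_merge) {
    if (is_global_merge && priority <= kHighPriorityThreshold) {
        return false;  // never consider busy-ness for high pri global merges
    }
    return node_busy;
}
```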
stripe in sequence.
stripe bits.
- Ideal state ops cannot look at null-bucket messages for determining
if full bucket checks are pending when running in striped mode, as
these are not handled by stripes when not in legacy mode.
- State checker context should use ideal state cache instead of recomputing
for every checked bucket (observed via `perf` in production).
Minor cleanups in distributor maintenance handling code
No functional changes
New behavior:
- Only allow time to travel forwards within a given distributor process'
lifetime. This is a change from the old behavior, which would emit a
warning to the logs and happily continue from a previously used second,
possibly causing the distributor to reuse timestamps.
- Try to detect cases where the wall clock has been transiently set far
into the future--only to bounce back--by aborting the process if the
current observed time is more than 120 seconds older than the highest
observed wall clock time. This is an attempt to avoid generating _too_
many bogus future timestamps, as the distributor would otherwise continue
generating timestamps within the highest observed second.
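The new behavior can be sketched as follows, assuming a seconds-resolution clock; the class and member names are illustrative stand-ins, not the actual distributor code:

```cpp
#include <cstdint>
#include <cstdlib>

class ForwardOnlyClock {
    uint64_t _highest_observed_secs = 0;
public:
    static constexpr uint64_t max_clock_regression_secs = 120;

    // Returns the second to generate timestamps in; never travels backwards
    // within the lifetime of this process.
    uint64_t observe(uint64_t wall_clock_secs) {
        if (_highest_observed_secs > wall_clock_secs + max_clock_regression_secs) {
            // Wall clock likely jumped far into the future and bounced back;
            // abort rather than keep generating bogus future timestamps.
            std::abort();
        }
        if (wall_clock_secs > _highest_observed_secs) {
            _highest_observed_secs = wall_clock_secs;
        }
        return _highest_observed_secs;
    }
};
```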
Implement class that provides mapping from bucket space to state for …
vespa-engine/vekterli/add-initial-multi-stripe-support-to-access-guard
Add initial multi stripe support to access guard [run-systemtest]
Still missing functionality for:
- Merging bucket entries across stripes
- Aggregating pending operation stats across stripes
storage link chain.
This is required to prevent stripe threads from sending up messages
while the communication manager is being closed.
Such messages would fail at the RPC layer (already closed),
and an error reply would be sent down from the communication manager.
This triggers an assert in StorageLink::sendDown(), since the link is already in the CLOSED state.
internals
Also lets us test guard functionality much more easily since its target is
now fully mockable.
Since distributor stripes may independently reach a conclusion that
a `GetNodeState` reply containing new host info should be sent back to
the cluster controller, implement basic rate limiting/batching of
concurrent sends.
Batching has two separate modes of operation:
- If the node is initializing, host info will be sent immediately after
_all_ stripes have reported in (they will always do this post-init).
This is not timed, in order to minimize latency of bucket info being
visible to the cluster controller.
- If the node has already initialized, have a grace period of up to 1
second from the time the first stripe signals its intent to send
host info until it's actually sent. This allows several stripes
to complete their recovery mode and signal host info intents during
this second.
Batch time period is currently not configurable, may be done later if
deemed useful or necessary.
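A rough sketch of the two batching modes, using hypothetical names and the fixed 1-second grace period described above:

```cpp
#include <cstdint>

// Hypothetical helper tracking when a batched host info send should happen.
class HostInfoSendBatcher {
    uint32_t _n_stripes;
    bool     _initializing;
    uint32_t _stripes_reported = 0;
    bool     _send_pending = false;
    uint64_t _first_signal_ms = 0;
public:
    static constexpr uint64_t grace_period_ms = 1000;

    HostInfoSendBatcher(uint32_t n_stripes, bool initializing)
        : _n_stripes(n_stripes), _initializing(initializing) {}

    // A stripe signals its intent to send host info at time now_ms.
    void stripe_signals_intent(uint64_t now_ms) {
        if (!_send_pending) {
            _send_pending = true;
            _first_signal_ms = now_ms;
        }
        ++_stripes_reported;
    }

    bool should_send_now(uint64_t now_ms) const {
        if (!_send_pending) return false;
        if (_initializing) {
            // Not timed: send as soon as all stripes have reported post-init.
            return _stripes_reported >= _n_stripes;
        }
        // Already initialized: wait out the grace period so several stripes
        // can finish recovery mode and batch into one send.
        return (now_ms - _first_signal_ms) >= grace_period_ms;
    }
};
```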
This fixes the following system tests when running with new distributor stripe mode:
Capacity::test_capacity
FlatToHierarchicTransitionTest::test_transition_implicitly_indexes_and_activates_docs_per_group
HierarchDistr::test_app_change__PROTON
This was implemented incorrectly, as it did not delegate to the correct thread.
The system test framework does not use the "idealstateman" page.
The same information is present on the status page "distributor?page=buckets".
A stripe thread is parked as part of another thread calling DistributorStripePool::park_all_threads().
The stripe thread is then inside DistributorStripePool::park_thread_until_released(),
just about to call DistributorStripeThread::wait_until_unparked().
Before this call happens, the other thread can call DistributorStripePool::unpark_all_threads(),
resetting the _should_park variable in DistributorStripeThread to false.
When the stripe thread then calls DistributorStripeThread::wait_until_unparked(), it is already unparked.
This scenario can occur when the parking/unparking loop is short.
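A minimal sketch of a park/unpark handshake that tolerates this race: waiting on a predicate rather than doing a bare condition variable wait means a thread that arrives after the unpark simply falls through. The class below mirrors the names in the message but is illustrative, not the actual Vespa implementation:

```cpp
#include <condition_variable>
#include <mutex>

class StripeParkState {
    std::mutex _mutex;
    std::condition_variable _cond;
    bool _should_park = false;
public:
    void request_park() {
        std::lock_guard<std::mutex> guard(_mutex);
        _should_park = true;
    }
    void unpark() {
        {
            std::lock_guard<std::mutex> guard(_mutex);
            _should_park = false;
        }
        _cond.notify_all();
    }
    bool park_requested() {
        std::lock_guard<std::mutex> guard(_mutex);
        return _should_park;
    }
    // Safe even if unpark() already happened: the predicate is re-checked
    // under the lock before blocking, so the thread just returns.
    void wait_until_unparked() {
        std::unique_lock<std::mutex> lock(_mutex);
        _cond.wait(lock, [this] { return !_should_park; });
    }
};
```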
running in new stripe mode.
Required to ensure no race conditions can happen from processing such
messages from arbitrary RPC/CommunicationManager threads.
The (currently single) stripe is now run as part of the distributor
stripe pool instead of being transitively invoked by the main thread.
Introduce an explicit message mutex per stripe that is used for
external messages and status requests when not using legacy mode.
Use per-stripe wakeup mechanisms instead of the framework-global
mutex used in the legacy code path.
Additional work remains to bring back a dedicated message run-queue
for the top-level distributor, so this is not yet thread safe for
operations to the main `BucketDBUpdater`.
vespa-engine/vekterli/make-more-distributor-internals-private-2
Make more distributor internals private 2; the much anticipated sequel [run-systemtest]
Also unconditionally update top-level Distributor's own config snapshot
so that it can be used for legacy code paths as well. Would ideally
remove all usages of legacy `getConfig()`, but we need to refactor
how unit tests sneakily inject config changes first.
Also add more assertions that such functions are only called in legacy mode.
Also add more assertions that such functions are only called in legacy mode.
Don't use bucket databases in the top-level distributor component and bucket db updater.
It is only distributor stripes that should have a bucket database.
vespa-engine/vekterli/distributor-stripe-pool-and-thread-coordination
Add DistributorStripe thread pool with thread park/unpark support
To enable safe and well-defined access to underlying stripe data
structures from the main distributor thread, the pool has functionality
for "parking" and "unparking" all stripe threads:
* Parking makes all threads go into a blocked holding pattern where
it is guaranteed that they may not race with any other threads.
* Unparking releases all threads from their holding pattern, allowing
them to continue their event processing loop.
Also adds a custom run loop for distributor threads that largely
emulates the waiting semantics found in the current framework ticking
thread pool run loop. But unlike the framework pool, there is no global
mutex that must be acquired by all threads in the pool. All stripe
event handling uses per-thread mutexes and condition variables. Global
state is only accessed when thread parking is requested, which happens
very rarely.
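The run-loop design can be sketched like this, with per-stripe (not global) synchronization; the class and names are illustrative stand-ins, not the actual DistributorStripeThread code:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

class StripeRunLoop {
    std::mutex _mutex;                           // per-stripe, never shared
    std::condition_variable _cond;
    std::deque<std::function<void()>> _events;
    bool _stopping = false;
public:
    void post(std::function<void()> event) {
        {
            std::lock_guard<std::mutex> guard(_mutex);
            _events.push_back(std::move(event));
        }
        _cond.notify_one();  // wakes only this stripe's thread
    }
    void stop() {
        {
            std::lock_guard<std::mutex> guard(_mutex);
            _stopping = true;
        }
        _cond.notify_all();
    }
    // Drains any remaining events after stop() before returning.
    void run() {
        for (;;) {
            std::function<void()> event;
            {
                std::unique_lock<std::mutex> lock(_mutex);
                _cond.wait(lock, [this] { return _stopping || !_events.empty(); });
                if (_events.empty()) break;  // stopping and fully drained
                event = std::move(_events.front());
                _events.pop_front();
            }
            event();  // handle the event outside the per-stripe lock
        }
    }
};
```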
DistributorStripeMessageSender is used for all stripe-related operations.
distributor stripe.