| Commit message | Author | Age | Files | Lines |
This is handled similarly to per stripe distributor metrics.
vespa-engine/toregge/aggregate-metrics-on-the-fly-when-adding-to-snapshot
Aggregate distributor metrics when adding to snapshot.
waiting for full Q
This function is only used by idealstatemanagertest in the context of testing a single stripe.
vespa-engine/toregge/aggregate-metrics-from-distributor-stripes-pass-2
Aggregate metrics from distributor stripes.
Reorder member variables.
vespa-engine/geirst/getnodestate-command-in-distributor-main-thread
Handle GetNodeStateCommand in distributor main thread when running in stripe mode.
instead of sum.
spaces, and pending maintenance stats.
Since distributor stripes no longer have access to the top-level
pending message tracking info, it is no longer possible to infer
whether a cluster state change is pending by looking at the sent messages.
Instead, do this more generally (and efficiently) by looking at the
potential pending cluster state directly.
Rewire the `isBlocked` logic to take an operation context instead
of just a `PendingMessageTracker`, giving it access to much more
relevant information.
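A minimal sketch of the idea, with illustrative names that are not Vespa's actual API: the operation asks its context for the pending cluster state directly instead of inferring it from tracked messages.

```cpp
#include <cassert>

// Hypothetical types for illustration only. A pending cluster state is
// exposed directly on the operation context; `isBlocked`-style checks
// consult it instead of scanning sent messages.
struct PendingClusterState {
    // Toy ownership rule for the sketch: even buckets stay owned.
    bool bucket_owned_in_pending_state(int bucket) const {
        return bucket % 2 == 0;
    }
};

struct OperationContext {
    const PendingClusterState* pending; // nullptr when no state change is pending
    const PendingClusterState* pending_cluster_state() const { return pending; }
};

// Blocked while a pending cluster state exists that would move the
// operation's bucket away from this distributor.
bool is_blocked(const OperationContext& ctx, int bucket) {
    const auto* pending = ctx.pending_cluster_state();
    return pending != nullptr && !pending->bucket_owned_in_pending_state(bucket);
}
```

Passing the whole context (rather than just a message tracker) means later checks can use any of its information without another signature change.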
a random distributor stripe.
Such commands will eventually be bounced with WRONG_DISTRIBUTION when handled by the stripe.
vespa-engine/geirst/dispatch-get-and-visitor-messages-to-stripe
Dispatch get and visitor messages to correct distributor stripe.
hash lookup.
If it is a wildcard lookup, iterate as before.
Also use vespalib::stringref in the interface to avoid conversion.
Use vespalib::string in the hash map to locate the string in the object, as we are still on the old ABI.
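The lookup strategy can be sketched as follows (illustrative code, not Vespa's actual implementation; `std::string_view` stands in for `vespalib::stringref`): exact names resolve with a single hash lookup, while a wildcard pattern falls back to iterating over all entries.

```cpp
#include <cassert>
#include <string>
#include <string_view>
#include <unordered_map>
#include <vector>

// Sketch: a name-to-metric map where an exact name is an O(1) hash
// lookup and a trailing-'*' wildcard scans all entries as before.
struct MetricSet {
    std::unordered_map<std::string, int> metrics;

    std::vector<int> lookup(std::string_view name) const {
        std::vector<int> out;
        if (name.find('*') == std::string_view::npos) {
            // Exact name: build an owning string key, since the map's
            // hash is keyed on the owning string type (old ABI caveat).
            auto it = metrics.find(std::string(name));
            if (it != metrics.end()) out.push_back(it->second);
        } else {
            // Wildcard: linear scan over all entries, prefix match.
            std::string_view prefix = name.substr(0, name.size() - 1);
            for (const auto& [key, value] : metrics) {
                if (std::string_view(key).substr(0, prefix.size()) == prefix) {
                    out.push_back(value);
                }
            }
        }
        return out;
    }
};
```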
The most basic functionality is now supported using multiple distributor stripes (and threads).
Note that the following is (at least) still missing:
* Stripe-separate metrics with top-level aggregation.
* Aggregation over all stripes in misc functions in Distributor that currently use the first stripe.
* Handling of messages without a bucket id in the top-level Distributor instead of in the first stripe.
vespa-engine/geirst/validate-distributor-stripes-config
Add validation of the number of distributor stripes from config and a…
asserts.
This ensures the number of stripes is a power of 2 and within the MaxStripes boundary.
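The validation itself is small; a sketch (the constant name and bound are assumptions, not Vespa's actual values) uses the standard bit trick for the power-of-2 check:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative upper bound; Vespa's actual MaxStripes value may differ.
constexpr uint32_t MaxStripes = 256;

// n is a power of two iff it is nonzero and clearing its lowest set bit
// yields zero.
constexpr bool is_power_of_two(uint32_t n) {
    return n > 0 && (n & (n - 1)) == 0;
}

constexpr bool valid_stripe_count(uint32_t n) {
    return is_power_of_two(n) && n <= MaxStripes;
}
```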
To avoid starvation of high priority global bucket merges, we do not consider
these for blocking due to a node being "busy" (usually caused by a full merge
throttler queue).
This is for two reasons:
1. When an ideal state op is blocked, it is still removed from the internal
maintenance priority queue. This means a blocked high pri operation will
not be retried until the next DB pass (at which point the node is likely
to still be marked as busy when there's heavy merge traffic).
2. Global bucket merges have high priority and will most likely be allowed
to enter the merge throttler queues, displacing lower priority merges.
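The resulting policy can be sketched as a single exemption in the blocking check (types and fields here are illustrative, not Vespa's actual API):

```cpp
#include <cassert>

// Sketch: high priority global bucket merges skip the node-busy check,
// so a full merge throttler queue on a peer node cannot starve them.
struct MergeOperation {
    bool is_global_bucket_merge;
};

struct NodeInfo {
    bool merge_busy; // e.g. the node recently answered BUSY from its throttler
};

bool is_blocked_by_busy_node(const MergeOperation& op, const NodeInfo& node) {
    if (op.is_global_bucket_merge) {
        return false; // never block global merges on busy nodes
    }
    return node.merge_busy;
}
```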
stripe in sequence.
stripe bits.
- Ideal state ops cannot look at null-bucket messages for determining
if full bucket checks are pending when running in striped mode, as
these are not handled by stripes when not in legacy mode.
- State checker context should use ideal state cache instead of recomputing
for every checked bucket (observed via `perf` in production).
Minor cleanups in distributor maintenance handling code
No functional changes
New behavior:
- Only allow time to travel forwards within a given distributor process'
lifetime. This is a change from the old behavior, which would emit a
warning to the logs and happily continue from a previously used second,
possibly causing the distributor to reuse timestamps.
- Try to detect cases where the wall clock has been transiently set far
into the future--only to bounce back--by aborting the process if the
current observed time is more than 120 seconds older than the highest
observed wall clock time. This is an attempt to avoid generating _too_
many bogus future timestamps, as the distributor would otherwise continue
generating timestamps within the highest observed second.
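The two rules above can be sketched as a small timestamp source (names, structure, and the abort mechanism are illustrative, not Vespa's actual code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstdlib>

// Sketch: time only travels forward within the process lifetime, and a
// wall clock observed more than 120 seconds behind the highest value
// ever seen aborts the process, on the assumption that the clock
// transiently jumped far into the future and then bounced back.
class MonotonicTimestampSource {
    uint64_t _highest_observed = 0; // seconds
    static constexpr uint64_t MaxBackwardsJumpSec = 120;
public:
    // Returns the second to generate timestamps in for the sampled
    // wall clock value; never returns a lower value than before.
    uint64_t current_second(uint64_t wall_clock_sec) {
        if (_highest_observed > wall_clock_sec &&
            _highest_observed - wall_clock_sec > MaxBackwardsJumpSec) {
            abort(); // large backwards jump: refuse to keep generating
        }
        if (wall_clock_sec > _highest_observed) {
            _highest_observed = wall_clock_sec;
        }
        return _highest_observed;
    }
};
```

Small backwards jumps are absorbed by clamping to the highest observed second, which is exactly what can cause timestamp reuse across process restarts and why the old warn-and-continue behavior was dropped.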
Implement class that provides mapping from bucket space to state for …
vespa-engine/vekterli/add-initial-multi-stripe-support-to-access-guard
Add initial multi stripe support to access guard [run-systemtest]
Still missing functionality for:
- Merging bucket entries across stripes
- Aggregating pending operation stats across stripes
storage link chain.
This is required to avoid stripe threads being able to send messages up
while the communication manager is being closed.
Such messages would fail at the RPC layer (already closed),
and an error reply would be sent down from the communication manager.
This triggers an assert in StorageLink::sendDown(), as the link is already CLOSED.
internals
Also lets us test guard functionality much more easily since its target is
now fully mockable.
Since distributor stripes may independently reach a conclusion that
a `GetNodeState` reply containing new host info should be sent back to
the cluster controller, implement basic rate limiting/batching of
concurrent sends.
Batching has two separate modes of operation:
- If the node is initializing, host info will be sent immediately after
_all_ stripes have reported in (they will always do this post-init).
This is not timed, in order to minimize latency of bucket info being
visible to the cluster controller.
- If the node has already initialized, have a grace period of up to 1
second from the time the first stripe signals its intent to send
host info until it's actually sent. This allows several stripes
to complete their recovery mode and signal host info intents during
this second.
The batch time period is currently not configurable; this may be added
later if deemed useful or necessary.
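The two batching modes can be sketched roughly like this (class and method names are illustrative, not Vespa's actual implementation):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: while initializing, the host info reply is sent only once
// every stripe has reported in (untimed). After initialization, the
// first stripe's signal starts a 1 second grace period so that other
// stripes finishing recovery can piggyback on the same send.
class HostInfoSendBatcher {
    uint32_t _num_stripes;
    uint32_t _stripes_reported = 0;
    bool _initializing;
    int64_t _first_signal_ms = -1; // -1: no pending send intent
    static constexpr int64_t GracePeriodMs = 1000;
public:
    HostInfoSendBatcher(uint32_t num_stripes, bool initializing)
        : _num_stripes(num_stripes), _initializing(initializing) {}

    // A stripe signals its intent to send host info. Returns true when
    // the batched reply should be sent now.
    bool stripe_signals_intent(int64_t now_ms) {
        if (_initializing) {
            // Untimed: send immediately once all stripes have reported.
            return ++_stripes_reported == _num_stripes;
        }
        if (_first_signal_ms < 0) {
            _first_signal_ms = now_ms; // start the grace period
        }
        return should_send(now_ms);
    }

    // Polled periodically, e.g. from the main distributor tick.
    bool should_send(int64_t now_ms) const {
        return _first_signal_ms >= 0 &&
               (now_ms - _first_signal_ms) >= GracePeriodMs;
    }
};
```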
This fixes the following system tests when running with new distributor stripe mode:
Capacity::test_capacity
FlatToHierarchicTransitionTest::test_transition_implicitly_indexes_and_activates_docs_per_group
HierarchDistr::test_app_change__PROTON
This is wrongly implemented by not using a delegator to the right thread.
The system test framework does not use the "idealstateman" page.
The same information is present in the status page "distributor?page=buckets".
A stripe thread is parked as part of another thread calling DistributorStripePool::park_all_threads().
The stripe thread will then be inside DistributorStripePool::park_thread_until_released(),
just waiting to call DistributorStripeThread::wait_until_unparked().
Before this is called, the other thread can call DistributorStripePool::unpark_all_threads(),
and the _should_park variable in DistributorStripeThread is set to false again.
When the stripe thread calls DistributorStripeThread::wait_until_unparked(), it is already unparked.
This is a scenario that might occur when the parking / unparking loop is short.
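The safe handshake for this race can be sketched with a condition variable and a predicate (simplified, not Vespa's actual code): the waiting side re-checks `_should_park` under the lock, so if the unpark happened before the stripe thread reached its wait, it returns immediately instead of blocking forever.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Sketch of a park/unpark handshake that tolerates a short park/unpark
// cycle completing before the parked thread reaches its wait call.
class ParkableThread {
    std::mutex _mutex;
    std::condition_variable _cond;
    bool _should_park = false;
public:
    void park() {
        std::lock_guard<std::mutex> guard(_mutex);
        _should_park = true;
        _cond.notify_all();
    }
    void unpark() {
        std::lock_guard<std::mutex> guard(_mutex);
        _should_park = false;
        _cond.notify_all();
    }
    // Called by the stripe thread. The predicate form of wait() handles
    // both spurious wakeups and the already-unparked case: if
    // _should_park is already false, it returns without blocking.
    void wait_until_unparked() {
        std::unique_lock<std::mutex> lock(_mutex);
        _cond.wait(lock, [this] { return !_should_park; });
    }
};
```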
running in new stripe mode.