| Commit message | Author | Age | Files | Lines |
|
bundles
If an application uses deferred cluster state activations, do not report
back a given cluster state version as being in use by the node until that
state version has been explicitly activated by the cluster controller.
This change is needed because the replication stats invalidation happens
upon recovery mode entry, and for deferred state bundles this takes place
when a cluster state is _activated_, not when the distributor is otherwise
done gathering bucket info (for a non-deferred bundle the activation happens
implicitly at this point). If the state manager reports that the new cluster
state is in effect even though it has not been activated, the cluster
controller could still end up using stale replication stats, as the
invalidation logic has not yet run at that point in time.
The cluster controller ignores host info responses for older versions, so
with this change stale replication statistics should no longer be taken
into account.
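As a rough illustration of the gating described above, here is a minimal C++
sketch; `StateReporter` and its method names are invented for this example and
are not the actual Vespa state manager API:

```cpp
#include <cstdint>
#include <optional>

// Illustrative sketch only; not the actual Vespa state manager code.
class StateReporter {
public:
    void on_new_state_bundle(uint32_t version, bool deferred_activation) {
        _pending_version = version;
        _deferred = deferred_activation;
        if (!_deferred) {
            _active_version = version; // non-deferred bundles activate implicitly
        }
    }
    void on_explicit_activation(uint32_t version) {
        if (_deferred && _pending_version == version) {
            _active_version = version; // only now reported as in use via host info
        }
    }
    // Version reported back to the cluster controller; empty until activation.
    std::optional<uint32_t> version_in_use() const { return _active_version; }
private:
    std::optional<uint32_t> _active_version;
    uint32_t _pending_version = 0;
    bool _deferred = false;
};
```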
|
The replica stats track the minimum replication factor across all buckets
that the distributor maintains for a given content node. These statistics
may be asynchronously queried by the cluster controller through the host
info reporting API.
If we do not invalidate the statistics upon a cluster state change, there
is a very small window of time where the distributor may report back
_stale_ statistics that were valid for the _prior_ cluster state version
but not for the new one. This can happen if the cluster controller fetches
host info from the node between the start of the recovery period and the
completion of the recovery mode DB scan. Receiving stale replication
statistics may cause the cluster controller to erroneously believe that
replication due to node retirements etc. has completed earlier than it
actually has, possibly impacting orchestration decisions in a suboptimal
manner.
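A simplified sketch of the invalidation idea, with invented type and field
names (the real tracking lives in the distributor's bucket DB scanning logic):

```cpp
#include <cstdint>
#include <unordered_map>

// Illustrative sketch; names are assumptions, not Vespa's actual code.
struct NodeReplicaStats {
    uint32_t min_replication_factor = 0;
    bool     valid = false; // invalidated between state change and DB rescan
};

class ReplicaStatsTracker {
public:
    void on_cluster_state_change() {
        for (auto& [node, stats] : _per_node) {
            stats.valid = false; // do not expose stats computed for the old state
        }
    }
    void on_recovery_scan_complete(uint32_t node, uint32_t min_factor) {
        _per_node[node] = {min_factor, true};
    }
    // Host info reporting only includes stats that are valid for the current state.
    bool report(uint32_t node, uint32_t& out) const {
        auto it = _per_node.find(node);
        if (it == _per_node.end() || !it->second.valid) return false;
        out = it->second.min_replication_factor;
        return true;
    }
private:
    std::unordered_map<uint32_t, NodeReplicaStats> _per_node;
};
```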
|
It's possible for internal commands to have replies auto-generated
if they're being evicted or aborted from persistence queue structures.
Most of these replies are intercepted by higher-level links in the
storage chain, but commands such as `RunTaskCommand` are initiated by the
persistence _provider_ rather than a higher-level component, and are
therefore not caught explicitly anywhere.
Let CommunicationManager signal that internal replies are handled, even
if handling them just means ignoring them entirely. This prevents the
replies from being spuriously warning-logged as "unhandled message on
top of call chain".
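The behaviour boils down to something like the following sketch, using a
hypothetical `Reply` type rather than the real storage API message classes:

```cpp
#include <iostream>

// Hypothetical message type for illustration only.
struct Reply { bool internal = false; };

// Mark internal replies as handled even when handling them means dropping them;
// this is what prevents the "unhandled message" warnings.
bool on_reply(const Reply& reply) {
    if (reply.internal) {
        return true;  // handled (ignored)
    }
    return false;     // let higher-level links deal with it
}

int main() {
    std::cout << on_reply(Reply{true}) << '\n'; // prints 1
}
```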
|
This also changes the aggregation semantics of "not found" results.
At the protocol level, an update reply indicating "not found" is reported
back to the client as successful, but with a not-found field set that has
to be explicitly checked. On the distributor, however, the `notfound`
metric lives under the `updates.failures` metric set, i.e. it is not
counted as a success.
To avoid double bookkeeping caused by treating it as both OK and not OK,
we choose not to count "not found" updates under the `ok` metric. But to
avoid artificially inflating the `failures.sum` metric, we remove the
`notfound` metric from the implicit sum aggregation.
This means that visibility into "not found" updates requires explicitly
tracking the `notfound` metric rather than just the aggregate, but this
should be an acceptable tradeoff to avoid false positives for failures.
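A toy model of the resulting bookkeeping, with invented names standing in for
the real metric framework:

```cpp
#include <cstdint>
#include <iostream>

// Simplified stand-in for the real metric sets; names are illustrative.
struct UpdateMetrics {
    uint64_t ok = 0;
    uint64_t notfound = 0;           // tracked, but excluded from both ok and failures_sum
    uint64_t test_and_set_failed = 0;
    uint64_t other_failures = 0;

    // "notfound" is intentionally left out of the implicit failure aggregate.
    uint64_t failures_sum() const { return test_and_set_failed + other_failures; }
};

void on_update_reply(UpdateMetrics& m, bool found, bool tas_failed) {
    if (tas_failed)   ++m.test_and_set_failed;
    else if (!found)  ++m.notfound;  // neither ok nor failure
    else              ++m.ok;
}

int main() {
    UpdateMetrics m;
    on_update_reply(m, /*found=*/false, /*tas_failed=*/false);
    std::cout << m.ok << ' ' << m.notfound << ' ' << m.failures_sum() << '\n'; // 0 1 0
}
```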
|
This change adds previously missing metric increments for test-and-set
condition failures when these happen in the context of a write-repair,
as well as missing wiring for the `notfound` failure metric.
The latter is incremented when an update returns from the content nodes
without having found an existing document, or when the read phase of a
write-repair finds no document to update (and neither `create: true`
nor a test-and-set condition is set on the update).
Also remove some internal reply type erasure to make it more obvious
what the reply type must be.
|
This avoids a potential starvation issue in the existing implementation,
which orders entries by bucket ID within a given priority class. That has
the unfortunate effect that frequently reinserting buckets which sort
before buckets already in the queue may starve the latter from ever being
popped.
Move to a composite key that first sorts on priority, then on a strictly
increasing sequence number. Add a secondary index into this structure
that allows lookups on bucket IDs as before.
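A minimal sketch of the composite-key structure with a secondary bucket ID
index; names are illustrative and reinsert/update handling is omitted:

```cpp
#include <cstdint>
#include <map>
#include <unordered_map>

// Sketch only; the real queue lives in the distributor and has more fields.
struct QueueKey {
    uint8_t  priority;   // lower value = more urgent
    uint64_t sequence;   // strictly increasing insertion order within a priority
    bool operator<(const QueueKey& rhs) const {
        if (priority != rhs.priority) return priority < rhs.priority;
        return sequence < rhs.sequence;
    }
};

class BucketQueue {
public:
    void push(uint64_t bucket_id, uint8_t priority) {
        QueueKey key{priority, _next_sequence++};
        _by_key.emplace(key, bucket_id);
        _by_bucket[bucket_id] = key;     // secondary index for bucket ID lookups
    }
    bool contains(uint64_t bucket_id) const {
        return _by_bucket.count(bucket_id) != 0;
    }
    bool pop(uint64_t& bucket_id) {
        if (_by_key.empty()) return false;
        auto it = _by_key.begin();       // highest priority, oldest first => no starvation
        bucket_id = it->second;
        _by_bucket.erase(it->second);
        _by_key.erase(it);
        return true;
    }
private:
    std::map<QueueKey, uint64_t> _by_key;
    std::unordered_map<uint64_t, QueueKey> _by_bucket;
    uint64_t _next_sequence = 0;
};
```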
|
[run-systemtest]"
|
vespa-engine/vekterli/treat-empty-replica-subset-as-inconsistent-for-get-operations
Treat empty replica subset as inconsistent for GetOperation [run-systemtest]
|
`GetOperation` document-level consistency checks are used by the multi-phase
update logic to see if we can fall back to a fast path even though not all
replicas are in sync. Empty replicas are not considered part of the send-set,
so only looking at replies from replicas we actually _sent_ to will not
detect this case. If we haphazardly treat empty replicas as implicitly being
in sync, we risk triggering undetectable inconsistencies at the document
level. This can happen if we send create-if-missing updates to an empty
replica as well as a non-empty replica, and the document exists in the
latter. The document would then be implicitly created on the empty replica
with the same timestamp as on the non-empty one, even though their contents
would almost certainly differ.
With this change we initially tag all `GetOperation`s with at least one empty
replica as having inconsistent replicas. This triggers the full write-repair
code path for document updates.
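Conceptually the consistency check becomes something like the sketch below,
using an invented `Replica` type rather than the actual distributor bucket DB
entries:

```cpp
#include <cstdint>
#include <vector>

// Illustrative replica representation; not the actual distributor types.
struct Replica {
    uint32_t doc_count = 0;
    uint32_t checksum  = 0;
    bool empty() const { return doc_count == 0; }
};

// Any empty replica means we cannot trust the fast path, since it was never
// part of the Get send-set and its contents are unknown at the document level.
bool replicas_consistent(const std::vector<Replica>& replicas) {
    uint32_t first_checksum = 0;
    bool seen_non_empty = false;
    for (const auto& r : replicas) {
        if (r.empty()) return false;          // tag as inconsistent up front
        if (!seen_non_empty) {
            first_checksum = r.checksum;
            seen_non_empty = true;
        } else if (r.checksum != first_checksum) {
            return false;
        }
    }
    return true;
}
```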
|
Previously we implicitly assumed that a failed CreateBucket reply meant the
bucket replica was not created, but this does not hold in the general case.
A failure may just as well be due to connection failures etc. between the
distributor and the content node. To tell for sure, we now send an explicit
RequestBucketInfo to the node when a CreateBucket fails. If the bucket _was_
created, the replica will be reintroduced into the bucket DB.
We still implicitly delete the bucket replica from the DB to avoid
transiently routing client write load to a bucket that likely does not
exist.
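A simplified sketch of the new reply handling; the helper names are
hypothetical and only illustrate the control flow:

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical types and helpers for illustration; the real logic lives in the
// distributor's reply handling and bucket DB.
struct CreateBucketReply { bool success; uint16_t node; uint64_t bucket; };

void remove_replica(uint64_t bucket, uint16_t node) {
    std::cout << "drop replica of bucket " << bucket << " on node " << node << '\n';
}
void send_request_bucket_info(uint16_t node, uint64_t bucket) {
    std::cout << "RequestBucketInfo for bucket " << bucket << " -> node " << node << '\n';
}

void on_create_bucket_reply(const CreateBucketReply& reply) {
    if (reply.success) return;
    // The failure may be a connection error, so we cannot assume the bucket was
    // not created. Drop the replica to stop routing writes to it, then ask the
    // node for authoritative bucket info; if the bucket exists it is re-added.
    remove_replica(reply.bucket, reply.node);
    send_request_bucket_info(reply.node, reply.bucket);
}

int main() { on_create_bucket_reply({false, 3, 0x42}); }
```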
|
vespa-engine/vekterli/decrement-merge-counter-when-sync-merge-handling-complete
Decrement persistence thread merge counter when synchronous processing is complete [run-systemtest]
|
complete
Add a generic interface for letting an operation know that the synchronous
parts of its processing in the persistence thread are complete. This allows
a potentially longer-running async operation to free up any limits that
were put in place while it was taking up synchronous thread resources.
This is currently only used by merge-related operations (which may dispatch
many async ops). Since we have an upper bound on how many threads in a
stripe may be processing merge ops at the same time (to avoid blocking
client ops), hitting this concurrency limit could previously stall the
pipelining of merges even when all persistence threads were otherwise idle
(waiting for prior async merge ops to complete).
We now explicitly decrease the merge concurrency counter once the
synchronous processing is done, allowing further merges to be taken on
immediately.
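A rough sketch of the interface and counter handling, with invented names (the
real wiring sits in the persistence stripe and merge handler):

```cpp
#include <atomic>
#include <cstdint>

// Illustrative interface; the real one is tied into the persistence thread pool.
struct SyncPhaseListener {
    virtual ~SyncPhaseListener() = default;
    virtual void sync_phase_done() = 0;   // called when synchronous processing completes
};

class MergeConcurrencyLimiter : public SyncPhaseListener {
public:
    explicit MergeConcurrencyLimiter(uint32_t max_active) : _max(max_active) {}
    bool try_start_merge() {
        uint32_t cur = _active.load();
        while (cur < _max) {
            if (_active.compare_exchange_weak(cur, cur + 1)) return true;
        }
        return false; // at the limit; merge has to wait
    }
    // Previously only decremented when the whole (async) merge finished; now it
    // is decremented as soon as the synchronous part is done.
    void sync_phase_done() override { _active.fetch_sub(1); }
private:
    const uint32_t _max;
    std::atomic<uint32_t> _active{0};
};
```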
|
vespa-engine/vekterli/cap-merge-delete-bucket-pri-to-default-feed-pri
Cap merge-induced DeleteBucket priority to that of default feed priority
|
This lets DeleteBucket operations FIFO with the client operations using
the default feed priority (120).
Not doing this risks preempting feed ops with deletes, elevating latencies.
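In effect the clamping is a one-liner, assuming the storage convention that a
lower numeric priority value means "more urgent" and that the default feed
priority is 120:

```cpp
#include <algorithm>
#include <cstdint>

// Sketch only; the constant name is an assumption for this example.
constexpr uint8_t DEFAULT_FEED_PRIORITY = 120;

uint8_t capped_delete_bucket_priority(uint8_t inherited_merge_priority) {
    // Never let a merge-induced DeleteBucket be more urgent than client feed;
    // it then FIFOs with feed ops instead of preempting them.
    return std::max(inherited_merge_priority, DEFAULT_FEED_PRIORITY);
}
```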
|
vespa-engine/arnej/config-class-should-not-be-public
Arnej/config class should not be public
|
* For C++ code this introduces a "document::config" namespace, which will
sometimes conflict with the global "config" namespace.
* Move all forward-declarations of the types DocumenttypesConfig and
DocumenttypesConfigBuilder to a common header file.
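A possible shape of such a shared forward-declaration header is sketched below;
the namespace layout and whether the types are classes or aliases are
assumptions, not necessarily what the repository ends up with:

```cpp
// Sketch of a common forward-declaration header (names and layout are assumptions).
#pragma once

namespace document::config {
    class DocumenttypesConfig;        // forward declaration only
    class DocumenttypesConfigBuilder; // forward declaration only
}

// Note: code that previously used an unqualified "config::" may now need to
// write "::config::" explicitly to avoid picking up document::config by accident.
```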
|
If a reply arrives for a preempted cluster state it will be ignored.
To avoid it being automatically sent further down the storage chain
we still have to treat it as handled; otherwise a scary-looking but
benign "unhandled message" warning will be emitted in the Vespa log.
Also move an existing test to the correct test fixture.
|
Since deletes are now async in the backend, make them FIFO-order with
client feed by default to avoid swamping the queues with deletes.
Also, explicitly inherit priority for bucket deletion triggered by
bucket merging. This was actually missing previously and meant such
deletes got the default, very low priority.
|
Only skip deactivating buckets if the entire _node_ is marked as being in
maintenance state, i.e. the node has maintenance state across all bucket
spaces provided in the bundle. Otherwise treat the state transition as if
the node goes down, deactivating all buckets.
Also ensure that the bucket deactivation logic above the SPI is identical
to that within Proton. This avoids the bucket DBs getting out of sync
between the two.
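The per-node check is conceptually as follows; the types are invented for
illustration and do not mirror the actual cluster state bundle classes:

```cpp
#include <vector>

// Illustrative sketch; not the actual state bundle types.
enum class NodeState { Up, Down, Maintenance };

// One entry per bucket space in the cluster state bundle, for a single node.
bool node_in_maintenance_across_all_spaces(const std::vector<NodeState>& per_space) {
    for (NodeState s : per_space) {
        if (s != NodeState::Maintenance) return false;
    }
    return !per_space.empty();
}

// Deactivate buckets unless the *entire node* is in maintenance.
bool should_deactivate_buckets(const std::vector<NodeState>& per_space) {
    return !node_in_maintenance_across_all_spaces(per_space);
}
```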
|
[run-systemtest]"
|
Only skip deactivating buckets if the entire _node_ is marked as being in
maintenance state, i.e. the node has maintenance state across all bucket
spaces provided in the bundle. Otherwise treat the state transition as if
the node goes down, deactivating all buckets.
Also ensure that the bucket deactivation logic above the SPI is identical
to that within Proton. This avoids the bucket DBs getting out of sync
between the two.
|
bucket before waiting for the replies.
Prepare RemoveResult to contain more replies.
|
Historically the MergeThrottler component has required deterministic
forwarding of merges between nodes in strictly increasing distribution
key order. This is to avoid distributed deadlocks caused by ending up
with two or more nodes waiting for each other to release merge resources,
where releasing one depends on releasing the other. This works well,
but has the downside that there is an inherent pressure of merges towards
nodes with lower distribution keys, which often become a bottleneck.
This commit lifts the ordering restriction by allowing forwarded,
unordered merges to immediately enter the active merge window. By doing
this we remove the deadlock potential, since nodes will no longer be
waiting on resources freed by other nodes.
Since the legacy MergeThrottler has a lot of invariant checking around
strictly increasing merge chains, we only allow unordered merges to be
scheduled towards node sets where _all_ nodes are on a Vespa version that
explicitly understands unordered merges (and thus does not self-obliterate
upon seeing one). To communicate this, full bucket fetches now piggy-back
version-specific feature sets as part of the response protocol.
Distributors then aggregate this information internally.
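A sketch of the distributor-side feature aggregation described above, using
invented names for the feature flags and tracker:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sketch only; names are illustrative, not the actual protocol fields.
struct NodeFeatures {
    bool unordered_merge_chaining = false;  // piggy-backed on full bucket info responses
};

class NodeFeatureTracker {
public:
    void on_bucket_info_response(uint16_t node, const NodeFeatures& features) {
        _features[node] = features;
    }
    // Unordered merges may only be scheduled when every node in the set understands them.
    bool all_support_unordered_merges(const std::vector<uint16_t>& nodes) const {
        for (uint16_t n : nodes) {
            auto it = _features.find(n);
            if (it == _features.end() || !it->second.unordered_merge_chaining) {
                return false;   // fall back to strictly ordered forwarding
            }
        }
        return true;
    }
private:
    std::unordered_map<uint16_t, NodeFeatures> _features;
};
```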
|
vespa-engine/toregge/handover-tracker-to-apply-bucket-diff-state-on-exceptions
Hand over tracker to ApplyBucketDiffState on exceptions.
|
vespa-engine/vekterli/prioritize-forwarded-merges-in-throttler-queue
Prioritize forwarded merges in MergeThrottler queue
|
Rationale: merges that are already part of an active merge window are
taking up logical resources on one or more nodes in the cluster, and we
should prefer completing these before starting new merges queued from
the distributors.
|
bucket doesn't exist:
* getBucketInfo() returns success with empty bucket info
* createIterator() returns success
* iterate() returns empty complete result.
|
vespa-engine/toregge/delay-apply-bucket-diff-state-deletion-try-2
Delay deletion of ApplyBucketDiffState.
|
vespa-engine/toregge/update-merge-latency-metrics-from-operation-complete-callbacks
Update merge handler put/remove latency metrics from operation complete callback.
|