path: root/storage/src/tests
Commit message | Author | Age | Files | Lines
* Make host info cluster state version reporting correct for deferred state bundles | Tor Brede Vekterli | 2022-01-03 | 1 | -26/+73
  If an application uses deferred cluster state activations, do not report back a given cluster state version as being in use by the node until the state version has been explicitly activated by the cluster controller.
  This change is due to the fact that the replication invalidation happens upon recovery mode entry, and for deferred state bundles this takes place when a cluster state is _activated_, not when the distributor is otherwise done gathering bucket info (for a non-deferred bundle the activation happens implicitly at this point). If the state manager reports that the new cluster state is in effect even though it has not been activated, the cluster controller could still end up using stale replication stats, as the invalidation logic has not yet run at this point in time.
  The cluster controller will ignore any host info responses for older versions, so any stale replication statistics should not be taken into account with this change.
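  A minimal sketch of the version-reporting rule described in this commit, using hypothetical names (not the actual StateManager API):

    #include <cstdint>

    struct ClusterStateBundle {
        uint32_t version;
        bool     deferred_activation;   // bundle requires explicit activation by the cluster controller
    };

    class HostInfoReporter {
        ClusterStateBundle _current{0, false};
        uint32_t           _activated_version = 0;
    public:
        void set_bundle(const ClusterStateBundle& bundle) { _current = bundle; }
        void mark_activated(uint32_t version) { _activated_version = version; }

        // Version reported back to the cluster controller as part of host info.
        uint32_t reported_version() const {
            if (_current.deferred_activation && _activated_version != _current.version) {
                return _activated_version;   // new version not yet activated; keep reporting the old one
            }
            return _current.version;
        }
    };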
* Invalidate bucket DB replica statistics upon recovery mode entry | Tor Brede Vekterli | 2022-01-03 | 1 | -6/+24
  The replica stats track the minimum replication factor for any bucket for a given content node the distributor maintains buckets for. These statistics may be asynchronously queried by the cluster controller through the host info reporting API.
  If we do not invalidate the statistics upon a cluster state change, there is a very small window of time where the distributor may potentially report back _stale_ statistics that were valid for the _prior_ cluster state version but not for the new one. This can happen if the cluster controller fetches host info from the node in between the start of the recovery period and the completion of the recovery mode DB scan.
  Receiving stale replication statistics may cause the cluster controller to erroneously believe that replication due to node retirements etc. has completed earlier than it really has, possibly impacting orchestration decisions in a sub-optimal manner.
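  A minimal sketch of the invalidation flow described above, with hypothetical names (the real statistics live in the distributor's internals):

    #include <cstdint>
    #include <unordered_map>
    #include <utility>

    struct ReplicationStats { uint16_t min_replication_factor = 0; };

    class ReplicaStatsTracker {
        std::unordered_map<uint16_t, ReplicationStats> _stats_per_node;
        bool _valid = false;
    public:
        // Called when the distributor enters recovery mode for a new cluster state.
        // Until the recovery DB scan has recomputed the stats, nothing is reported.
        void on_recovery_mode_entry() { _valid = false; _stats_per_node.clear(); }

        void on_recovery_scan_done(std::unordered_map<uint16_t, ReplicationStats> fresh) {
            _stats_per_node = std::move(fresh);
            _valid = true;
        }

        // Host info reporting only includes stats that are valid for the current state.
        const std::unordered_map<uint16_t, ReplicationStats>* stats_if_valid() const {
            return _valid ? &_stats_per_node : nullptr;
        }
    };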
* Let CommunicationManager swallow any errant internal reply messages | Tor Brede Vekterli | 2021-12-22 | 1 | -0/+8
  It's possible for internal commands to have replies auto-generated if they're being evicted/aborted from persistence queue structures. Most of these replies are intercepted by higher-level links in the storage chain, but commands such as `RunTaskCommand` are actually initiated by the persistence _provider_ and not a higher level component, and are therefore not caught explicitly anywhere.
  Let CommunicationManager signal that internal replies are handled, even if the handling of these is just to ignore them entirely. This prevents the replies from being spuriously warning-logged as "unhandled message on top of call chain".
* Complete wiring of OK/failure metric reporting during update write-repair | Tor Brede Vekterli | 2021-12-17 | 1 | -2/+26
  This also changes the aggregation semantics of "not found" results. An update reply indicating "not found" is protocol-wise reported back to the client as successful, but with a not-found field set that has to be explicitly checked. But on the distributor the `notfound` metric is under the `updates.failures` metric set, i.e. not counted as a success.
  To avoid double bookkeeping caused by treating it as both OK and not, we choose to not count "not found" updates under the `ok` metric. But to avoid artificially inflating the `failures.sum` metric, we remove the `notfound` metric from the implicit sum aggregation. This means that visibility into "not found" updates requires explicitly tracking the `notfound` metric as opposed to just the aggregate, but this should be an acceptable tradeoff to avoid false failure positives.
* Cover additional update failure edge cases with metrics | Tor Brede Vekterli | 2021-12-16 | 1 | -0/+17
  This change adds previously missing metric increments for test-and-set condition failures when this happens in the context of a write-repair, as well as adding missing wiring for the "notfound" failure metric. The latter is incremented when an update is returned from the content nodes having found no existing document, or when the read-phase of a write-repair finds no document to update (and neither `create: true` nor a TaS condition is set on the update). Also remove some internal reply type erasure to make it more obvious what the reply type must be.
* Let bucket maintenance priority queue be FIFO ordered within priority class | Tor Brede Vekterli | 2021-12-15 | 1 | -42/+50
  This avoids a potential starvation issue caused by the existing implementation, which is bucket ID ordered within a given priority class. The latter has the unfortunate effect that frequently reinserting buckets that sort before buckets that are already in the queue may starve these from being popped from the queue.
  Move to a composite key that first sorts on priority, then on a strictly increasing sequence number. Add a secondary index into this structure that allows for lookups on bucket IDs as before.
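  A minimal sketch of the composite ordering described above (hypothetical types, not the actual maintenance queue implementation; the secondary bucket ID index is omitted):

    #include <cstdint>
    #include <set>

    struct QueueEntry {
        uint8_t  priority;   // priority class (assumed: lower value is more urgent)
        uint64_t sequence;   // strictly increasing insertion counter -> FIFO within a class
        uint64_t bucket_id;

        bool operator<(const QueueEntry& rhs) const {
            if (priority != rhs.priority) return priority < rhs.priority;
            return sequence < rhs.sequence;   // never order on bucket_id; avoids starvation
        }
    };

    class MaintenanceQueue {
        std::set<QueueEntry> _queue;
        uint64_t             _next_seq = 0;
    public:
        void push(uint8_t pri, uint64_t bucket) {
            _queue.insert(QueueEntry{pri, _next_seq++, bucket});
        }
        QueueEntry pop() {            // precondition: !empty()
            QueueEntry top = *_queue.begin();
            _queue.erase(_queue.begin());
            return top;
        }
        bool empty() const { return _queue.empty(); }
    };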
* Revert "Treat empty replica subset as inconsistent for GetOperation ↵Tor Brede Vekterli2021-12-131-20/+0
| | | | [run-systemtest]"
* _executor -> _thread | Henning Baldersheim | 2021-12-09 | 1 | -2/+2
* Add init_fun to vespalib::Thread too to figure out what the thread is used for. | Henning Baldersheim | 2021-12-09 | 1 | -1/+3
* Merge pull request #20382 from vespa-engine/vekterli/treat-empty-replica-subset-as-inconsistent-for-get-operations | Henning Baldersheim | 2021-12-08 | 1 | -0/+20
  Treat empty replica subset as inconsistent for GetOperation [run-systemtest]
* Treat empty replica subset as inconsistent for GetOperation | Tor Brede Vekterli | 2021-12-06 | 1 | -0/+20
  `GetOperation` document-level consistency checks are used by the multi-phase update logic to see if we can fall back to a fast path even though not all replicas are in sync. Empty replicas are not considered part of the send-set, so only looking at replies from replicas _sent_ to will not detect this case.
  If we haphazardly treat empty replicas as implicitly being in sync we risk triggering undetectable inconsistencies at the document level. This can happen if we send create-if-missing updates to an empty replica as well as a non-empty replica, and the document exists in the latter replica. The document would then be implicitly created on the empty replica with the same timestamp as that of the non-empty one, even though their contents would almost certainly differ.
  With this change we initially tag all `GetOperation`s with at least one empty replica as having inconsistent replicas. This will trigger the full write-repair code path for document updates.
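  A minimal sketch of the replica-consistency decision described above (hypothetical names, not the actual GetOperation code):

    #include <cstdint>
    #include <vector>

    struct Replica {
        uint32_t doc_count;
        uint64_t checksum;
        bool empty() const { return doc_count == 0; }
    };

    // Replicas allow the fast update path only if none are empty and all agree on
    // the bucket checksum; any empty replica is conservatively treated as inconsistent.
    bool replicas_consistent_for_fast_path(const std::vector<Replica>& replicas) {
        if (replicas.empty()) return false;
        for (const auto& r : replicas) {
            if (r.empty()) return false;
        }
        for (const auto& r : replicas) {
            if (r.checksum != replicas[0].checksum) return false;
        }
        return true;
    }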
* Prevent orphaned bucket replicas caused by indeterminate CreateBucket replies | Tor Brede Vekterli | 2021-12-07 | 1 | -3/+29
  Previously we'd implicitly assume a failed CreateBucket reply meant the bucket replica was not created, but this does not hold in the general case. A failure may just as well be due to connection failures etc. between the distributor and content node.
  To tell for sure, we now send an explicit RequestBucketInfo to the node in the case of CreateBucket failures. If it _was_ created, the replica will be reintroduced into the bucket DB. We still implicitly delete the bucket replica from the DB to avoid transiently routing client write load to a bucket that may likely not exist.
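  A rough sketch of the reply handling described above (hypothetical types; the real operation works on the distributor's bucket DB and message bus):

    #include <cstdint>
    #include <functional>
    #include <utility>

    struct CreateBucketReply { uint64_t bucket; bool success; };

    class CreateBucketReplyHandler {
        std::function<void(uint64_t)> _remove_from_db;             // drop replica from bucket DB
        std::function<void(uint64_t)> _send_request_bucket_info;   // re-check actual state on the node
    public:
        CreateBucketReplyHandler(std::function<void(uint64_t)> remove,
                                 std::function<void(uint64_t)> recheck)
            : _remove_from_db(std::move(remove)),
              _send_request_bucket_info(std::move(recheck)) {}

        void on_reply(const CreateBucketReply& reply) {
            if (!reply.success) {
                // The failure is indeterminate: the bucket may or may not exist on the node.
                // Remove it from the DB so we stop routing writes to it, then ask the node
                // what it actually has; a present replica will be reinserted into the DB.
                _remove_from_db(reply.bucket);
                _send_request_bucket_info(reply.bucket);
            }
        }
    };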
* Merge pull request #20359 from vespa-engine/vekterli/decrement-merge-counter-when-sync-merge-handling-complete | Henning Baldersheim | 2021-12-03 | 1 | -0/+4
  Decrement persistence thread merge counter when synchronous processing is complete [run-systemtest]
* Decrement persistence thread merge counter when synchronous processing is complete | Tor Brede Vekterli | 2021-12-03 | 1 | -0/+4
  Add a generic interface for letting an operation know that the synchronous parts of its processing in the persistence thread are complete. This allows a potentially longer-running async operation to free up any limits that were put in place when it was taking up synchronous thread resources. Currently only used by merge-related operations (that may dispatch many async ops).
  Since we have a max upper bound for how many threads in a stripe may be processing merge ops at the same time (to avoid blocking client ops), we previously could effectively stall the pipelining of merges by hitting the concurrency limit even if all persistence threads were otherwise idle (waiting for prior async merge ops to complete). We now explicitly decrease the merge concurrency counter once the synchronous processing is done, allowing us to take on further merges immediately.
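  A minimal sketch of the counter handling described above, assuming an atomic counter guards per-stripe merge concurrency (hypothetical names):

    #include <atomic>

    class MergeConcurrencyLimiter {
        std::atomic<int> _active{0};
        const int        _max;
    public:
        explicit MergeConcurrencyLimiter(int max) : _max(max) {}

        bool try_start_merge() {
            int cur = _active.load();
            while (cur < _max) {
                if (_active.compare_exchange_weak(cur, cur + 1)) return true;
            }
            return false;   // at the limit; the merge must wait
        }

        // Called as soon as the synchronous part of the merge op is done, instead of
        // waiting for the async backend writes to complete. Frees the slot immediately.
        void on_synchronous_processing_done() { _active.fetch_sub(1); }
    };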
* Merge pull request #20340 from vespa-engine/vekterli/cap-merge-delete-bucket-pri-to-default-feed-pri | Geir Storli | 2021-12-02 | 1 | -7/+21
  Cap merge-induced DeleteBucket priority to that of default feed priority
* Cap merge-induced DeleteBucket priority to that of default feed priority | Tor Brede Vekterli | 2021-12-02 | 1 | -7/+21
  This lets DeleteBucket operations FIFO with the client operations using the default feed priority (120). Not doing this risks preempting feed ops with deletes, elevating latencies.
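  A one-line sketch of the cap, assuming the storage convention that a numerically lower priority value is more urgent (so "capping" means a merge-induced delete can never be more urgent than the default feed priority of 120):

    #include <algorithm>
    #include <cstdint>

    constexpr uint8_t kDefaultFeedPriority = 120;   // assumed default feed priority value

    // Merge-induced deletes keep their inherited priority unless it is more urgent
    // (numerically lower) than normal feed, in which case it is clamped to 120.
    uint8_t capped_delete_priority(uint8_t inherited_priority) {
        return std::max(inherited_priority, kDefaultFeedPriority);
    }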
* Merge pull request #20329 from vespa-engine/arnej/config-class-should-not-be-public | Arne H Juul | 2021-12-02 | 7 | -9/+10
  Arnej/config class should not be public
* more descriptive name for header file | Arne H Juul | 2021-12-02 | 5 | -5/+5
* track namespace move in documenttypes.def | Arne H Juul | 2021-12-02 | 7 | -9/+10
  * For C++ code this introduces a "document::config" namespace, which will sometimes conflict with the global "config" namespace.
  * Move all forward-declarations of the types DocumenttypesConfig and DocumenttypesConfigBuilder to a common header file.
* Add metrics for active operations on service layer. | Tor Egge | 2021-12-01 | 2 | -0/+151
* Don't let ignored bucket info reply be propagated out of distributor | Tor Brede Vekterli | 2021-11-30 | 1 | -33/+49
  If a reply arrives for a preempted cluster state it will be ignored. To avoid it being automatically sent further down the storage chain we still have to treat it as handled. Otherwise a scary looking but otherwise benign "unhandled message" warning will be emitted in the Vespa log. Also move an existing test to the correct test fixture.
* Reduce default bucket delete pri and fix priority inheritance for merge deletion | Tor Brede Vekterli | 2021-11-30 | 2 | -1/+12
  Since deletes are now async in the backend, make them FIFO-order with client feed by default to avoid swamping the queues with deletes. Also, explicitly inherit priority for bucket deletion triggered by bucket merging. This was actually missing previously and meant such deletes got the default, very low priority.
* Update operation metrics for delayed or chained merge handler replies. | Tor Egge | 2021-11-24 | 1 | -2/+76
* Actually test maintenance -> down node state transition | Tor Brede Vekterli | 2021-11-24 | 1 | -1/+1
* Handle case where bucket spaces have differing maintenance state for a node | Tor Brede Vekterli | 2021-11-24 | 1 | -26/+83
  Only skip deactivating buckets if the entire _node_ is marked as maintenance state, i.e. the node has maintenance state across all bucket spaces provided in the bundle. Otherwise treat the state transition as if the node goes down, deactivating all buckets.
  Also ensure that the bucket deactivation logic above the SPI is identical to that within Proton. This avoids bucket DBs getting out of sync between the two.
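  A minimal sketch of the decision described above (hypothetical types; the real check operates on the cluster state bundle):

    #include <string>
    #include <unordered_map>

    enum class NodeState { Up, Down, Maintenance };

    // One reported node state per bucket space in the bundle (e.g. "default" and "global").
    using StatesPerBucketSpace = std::unordered_map<std::string, NodeState>;

    // Buckets are kept active only when the node is in maintenance in *every* bucket
    // space in the bundle; otherwise the transition is treated as the node going down
    // and all buckets are deactivated.
    bool should_skip_bucket_deactivation(const StatesPerBucketSpace& states) {
        if (states.empty()) return false;
        for (const auto& entry : states) {
            if (entry.second != NodeState::Maintenance) return false;
        }
        return true;
    }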
* Update merge latency metrics after async writes have completed. | Tor Egge | 2021-11-23 | 1 | -2/+37
* Revert "Continue serving search queries when in Maintenance node state ↵Henning Baldersheim2021-11-231-83/+26
| | | | [run-systemtest]"
* Handle case where bucket spaces have differing maintenance state for a node | Tor Brede Vekterli | 2021-11-23 | 1 | -26/+83
  Only skip deactivating buckets if the entire _node_ is marked as maintenance state, i.e. the node has maintenance state across all bucket spaces provided in the bundle. Otherwise treat the state transition as if the node goes down, deactivating all buckets.
  Also ensure that the bucket deactivation logic above the SPI is identical to that within Proton. This avoids bucket DBs getting out of sync between the two.
* GC unused code | Henning Baldersheim | 2021-11-20 | 2 | -16/+0
* Remove redundant sync_all | Henning Baldersheim | 2021-11-20 | 1 | -1/+0
* Fix forward declaration of NodeSupportedFeatures. | Tor Egge | 2021-11-19 | 1 | -1/+1
* Let removeAsync handle list of documents. | Henning Baldersheim | 2021-11-18 | 3 | -6/+14
* Move BucketIdFactory to test fixture. | Henning Baldersheim | 2021-11-18 | 6 | -21/+13
* Move removeLocation over to Asynchandler and issue all removes for one bucket before waiting for the replies. | Henning Baldersheim | 2021-11-17 | 1 | -4/+9
  Prepare RemoveResult to contain more replies.
* Add configurable support for unordered merge forwarding | Tor Brede Vekterli | 2021-11-12 | 11 | -25/+242
  Historically the MergeThrottler component has required a deterministic forwarding of merges between nodes in strictly increasing distribution key order. This is to avoid distributed deadlocks caused by ending up with two or more nodes waiting for each other to release merge resources, where releasing one depends on releasing the other. This works well, but has the downside that there's an inherent pressure of merges towards nodes with lower distribution keys. These often become a bottleneck.
  This commit lifts this ordering restriction by allowing forwarded, unordered merges to immediately enter the active merge window. By doing this we remove the deadlock potential, since nodes will no longer be waiting on resources freed by other nodes.
  Since the legacy MergeThrottler has a lot of invariant checking around strictly increasing merge chains, we only allow unordered merges to be scheduled towards node sets where _all_ nodes are on a Vespa version that explicitly understands unordered merges (and thus do not self-obliterate upon seeing one). To communicate this, full bucket fetches will now piggy-back version-specific feature sets as part of the response protocol. Distributors then aggregate this information internally.
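  A minimal sketch of the capability check described above (hypothetical names; per the commit, the feature sets are aggregated from full bucket info fetches):

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct NodeFeatures {
        bool unordered_merge_chaining = false;   // reported by the node in its protocol response
    };

    // Unordered forwarding may only be used when every node in the merge node set is
    // known to understand unordered merges; otherwise fall back to the legacy
    // strictly-increasing distribution key ordering.
    bool can_send_unordered_merge(const std::unordered_map<uint16_t, NodeFeatures>& features_by_node,
                                  const std::vector<uint16_t>& merge_nodes) {
        for (uint16_t node : merge_nodes) {
            auto it = features_by_node.find(node);
            if (it == features_by_node.end() || !it->second.unordered_merge_chaining) {
                return false;
            }
        }
        return true;
    }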
* Rename ISequencedTaskExecutor::sync() to sync_all(). | Tor Egge | 2021-10-28 | 1 | -1/+1
* Merge pull request #19735 from vespa-engine/toregge/handover-tracker-to-apply-bucket-diff-state-on-exceptions | Geir Storli | 2021-10-28 | 3 | -0/+80
  Handover tracker to ApplyBucketDiffState on exceptions.
* Handover tracker to ApplyBucketDiffState on exceptions. | Tor Egge | 2021-10-26 | 3 | -0/+80
* Update 2019 Oath copyrights. | gjoranv | 2021-10-27 | 15 | -15/+15
* Merge pull request #19725 from vespa-engine/vekterli/prioritize-forwarded-merges-in-throttler-queue (tag v7.489.25) | Henning Baldersheim | 2021-10-25 | 1 | -9/+23
  Prioritize forwarded merges in MergeThrottler queue
* Prioritize forwarded merges in MergeThrottler queue | Tor Brede Vekterli | 2021-10-25 | 1 | -9/+23
  Rationale: merges that already are part of an active merge window are taking up logical resources on one or more nodes in the cluster, and we should prefer completing these before starting new merges queued from distributors.
* Adjust dummy persistence spi semantics towards proton spi semantics when bucket doesn't exist | Tor Egge | 2021-10-25 | 1 | -4/+4
  * getBucketInfo() returns success with empty bucket info
  * createIterator() returns success
  * iterate() returns empty complete result.
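  A simplified sketch of the adjusted dummy-SPI behaviour for getBucketInfo() (hypothetical, stripped-down result types; the real code uses the persistence SPI's Result classes):

    #include <cstdint>
    #include <map>

    struct BucketInfo { uint32_t doc_count = 0; uint32_t used_bytes = 0; };
    struct BucketInfoResult { bool success; BucketInfo info; };

    class DummyPersistence {
        std::map<uint64_t, BucketInfo> _buckets;
    public:
        // A missing bucket is no longer an error: report success with empty info,
        // matching what the Proton SPI does for unknown buckets.
        BucketInfoResult getBucketInfo(uint64_t bucket_id) const {
            auto it = _buckets.find(bucket_id);
            if (it == _buckets.end()) {
                return {true, BucketInfo{}};   // success, empty info
            }
            return {true, it->second};
        }
    };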
* Merge pull request #19717 from vespa-engine/toregge/delay-apply-bucket-diff-state-deletion-try-2 | Geir Storli | 2021-10-25 | 2 | -4/+8
  Delay deletion of ApplyBucketState.
* Delay deletion of ApplyBucketState. | Tor Egge | 2021-10-25 | 2 | -4/+8
* create/delete bucket will never throw. | Henning Baldersheim | 2021-10-25 | 4 | -9/+7
* Async createBucket | Henning Baldersheim | 2021-10-25 | 3 | -7/+7
* Add class comment for MergeHandlerTest. | Tor Egge | 2021-10-21 | 1 | -0/+4
* Delay replies for async apply bucket diff. | Tor Egge | 2021-10-21 | 2 | -28/+92
* Eliminate ApplyBucketDiffEntryResult. | Tor Egge | 2021-10-20 | 1 | -21/+39
* Merge pull request #19643 from vespa-engine/toregge/update-merge-latency-metrics-from-operation-complete-callbacks | Geir Storli | 2021-10-20 | 1 | -4/+3
  Update merge handler put/remove latency metrics from operation complete callback.