path: root/storage
Each entry below shows: commit message (author, date, files changed, lines -removed/+added)
* Use int64_t constants. (Tor Egge, 2023-11-07, 1 file, -2/+2)
* Add removeByGidAsync() to spi. (Tor Egge, 2023-11-06, 4 files, -0/+23)
* Specify metric unit in description string (Tor Brede Vekterli, 2023-11-02, 1 file, -2/+2)
* Less confusing naming (Tor Brede Vekterli, 2023-11-02, 2 files, -3/+3)
* Wire HwInfo into MergeThrottler and use for auto-deduction of memory limits (Tor Brede Vekterli, 2023-11-02, 7 files, -58/+196)
  Add config for min/max capping of deduced limit, as well as a scaling factor based on the memory available to the process. Defaults have been chosen based on empirical observations over many years, but having these as config means we can tune things live if it should ever be required.
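  A minimal sketch of the kind of deduction and capping this describes, with hypothetical function and parameter names and made-up example values (not the actual Vespa config keys or defaults):

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

// Hypothetical sketch: scale the memory available to the process by a configured
// factor, then clamp the result between configured min/max caps. The names and
// the example numbers are illustrative only.
uint64_t deduce_merge_memory_limit(uint64_t available_memory_bytes,
                                   double scale_factor,
                                   uint64_t min_limit_bytes,
                                   uint64_t max_limit_bytes) {
    const auto scaled = static_cast<uint64_t>(available_memory_bytes * scale_factor);
    return std::clamp(scaled, min_limit_bytes, max_limit_bytes);
}

int main() {
    // Example: 16 GiB available to the process, 3% scaling, capped to [128 MiB, 2 GiB].
    const uint64_t limit = deduce_merge_memory_limit(16ULL << 30, 0.03,
                                                     128ULL << 20, 2ULL << 30);
    std::printf("deduced merge memory limit: %llu MiB\n",
                static_cast<unsigned long long>(limit >> 20));
    return 0;
}
```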
* Heuristically compute expected merge memory usage upper bound (Tor Brede Vekterli, 2023-11-02, 3 files, -10/+106)
  The distributor only knows a limited amount of metadata per bucket replica (roughly: checksum, doc count, doc size). It therefore has no way to know if two replicas with different checksums, both with 1000 documents, have 999 or 0 documents in common. We therefore have to assume the worst and estimate the worst case memory usage as being the _sum_ of mutually divergent replica sizes. Estimates are bounded by the expected bucket merge chunk size, as we make the simplifying assumption that memory usage for a particular node is (roughly) limited to this value for any given bucket. One special-cased exception to this is single-document replicas, as one document can not be split across multiple chunks by definition. Here we track the largest single document replica.
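  A rough sketch of the described worst-case estimate; the types and names are illustrative, not the actual implementation:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Metadata the distributor has per replica: roughly checksum, doc count, doc size.
struct ReplicaMeta {
    uint32_t doc_count;
    uint64_t total_doc_size_bytes;
};

// Worst-case estimate: sum the sizes of the mutually divergent replicas, capping
// each contribution at the merge chunk size. A single-document replica cannot be
// split across chunks, so its full document size counts even if it exceeds the
// chunk size.
uint64_t estimate_merge_memory_upper_bound(const std::vector<ReplicaMeta>& divergent_replicas,
                                           uint64_t merge_chunk_size_bytes) {
    uint64_t estimate = 0;
    for (const auto& replica : divergent_replicas) {
        if (replica.doc_count == 1) {
            estimate += replica.total_doc_size_bytes;
        } else {
            estimate += std::min(replica.total_doc_size_bytes, merge_chunk_size_bytes);
        }
    }
    return estimate;
}
```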
* Add content node soft limit on max memory used by merges (Tor Brede Vekterli, 2023-11-01, 9 files, -66/+264)
  If configured, the active merge window is limited so that the sum of estimated memory usage for its merges does not go beyond the configured soft memory limit. The window can always fit a minimum of 1 merge regardless of its size to ensure progress in the cluster (thus this is a soft limit, not a hard limit).
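  A minimal sketch of the admission rule this implies; the function and parameter names are made up:

```cpp
#include <cstdint>

// A merge is admitted into the active window only if its estimated memory usage
// keeps the window under the soft limit; an empty window always admits one merge
// so that cluster-wide progress is guaranteed (hence "soft" limit).
bool can_admit_merge_into_window(uint64_t estimated_memory_bytes,
                                 uint64_t memory_used_by_window_bytes,
                                 uint32_t merges_in_window,
                                 uint64_t soft_limit_bytes) {
    if (soft_limit_bytes == 0) {
        return true; // limit not configured
    }
    if (merges_in_window == 0) {
        return true; // always make room for at least one merge
    }
    return memory_used_by_window_bytes + estimated_memory_bytes <= soft_limit_bytes;
}
```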
* Remove unneeded and code-bloating test macro (Tor Brede Vekterli, 2023-10-26, 1 file, -40/+46)
* Use same concurrency inhibition for DeleteBucket as for merge ops (Tor Brede Vekterli, 2023-10-26, 1 file, -1/+6)
  This provides a strict upper bound for the number of concurrently executing DeleteBucket operations, and ensures that no persistence thread stripe can have all its threads allocated to processing bucket deletions.
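  A minimal sketch of such a concurrency cap, assuming a hypothetical helper class rather than the actual component used in this commit:

```cpp
#include <atomic>
#include <cstdint>

// Caps how many operations of a given kind (e.g. DeleteBucket) may execute at
// once; callers that fail to acquire a slot defer the operation instead of
// tying up yet another persistence thread.
class OperationConcurrencyCap {
public:
    explicit OperationConcurrencyCap(uint32_t max_concurrent) noexcept
        : _max_concurrent(max_concurrent) {}

    bool try_acquire() noexcept {
        uint32_t current = _active.load(std::memory_order_relaxed);
        while (current < _max_concurrent) {
            if (_active.compare_exchange_weak(current, current + 1,
                                              std::memory_order_acquire,
                                              std::memory_order_relaxed)) {
                return true; // slot acquired; operation may start
            }
        }
        return false; // at the limit; caller should defer/queue the operation
    }

    void release() noexcept {
        _active.fetch_sub(1, std::memory_order_release);
    }

private:
    std::atomic<uint32_t> _active{0};
    const uint32_t        _max_concurrent;
};
```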
* Merge pull request #29098 from vespa-engine/vekterli/print-state-inside-lock (Henning Baldersheim, 2023-10-25, 2 files, -10/+11)
  Print Bouncer state within lock to ensure visibility
* Print Bouncer state within lock to ensure visibility (Tor Brede Vekterli, 2023-10-25, 2 files, -10/+11)
  This code path is only encountered when debug logging is explicitly enabled for the parent `StorageLink` component. Turns out an old system test did just that.
* Avoid using a reserved identifier naming format (Tor Brede Vekterli, 2023-10-25, 5 files, -80/+80)
  Identifiers of the form `_Uppercased` are considered reserved by the standard. Not likely to cause ambiguity in practice, but it's preferable to stay on the good side of the standard-gods.
* Purge additional config instances not needed after bootstrap (Tor Brede Vekterli, 2023-10-24, 1 file, -0/+2)
* Simplify and reuse utility config function (Tor Brede Vekterli, 2023-10-24, 2 files, -13/+8)
* Rewire `FileStorManager` config (Tor Brede Vekterli, 2023-10-24, 8 files, -54/+68)
* Rewire `ModifiedBucketChecker` config (Tor Brede Vekterli, 2023-10-24, 6 files, -27/+33)
* Propagate `VisitorManager` config from outside (Tor Brede Vekterli, 2023-10-24, 7 files, -37/+59)
* Provide explicit bootstrap config to `BucketManager` (Tor Brede Vekterli, 2023-10-24, 4 files, -22/+20)
* Pull up and out config of `ChangedBucketOwnershipHandler` component (Tor Brede Vekterli, 2023-10-24, 9 files, -55/+86)
* Wire config to MergeThrottler in from the outside (Tor Brede Vekterli, 2023-10-24, 6 files, -52/+66)
* Explicitly de-inline `BootstrapConfigs` ctor/dtor (Tor Brede Vekterli, 2023-10-23, 2 files, -0/+10)
* Propagate existing StorageNode config from main Process reconfig loop (Tor Brede Vekterli, 2023-10-23, 6 files, -106/+61)
* Rewire Bouncer configuration flow (Tor Brede Vekterli, 2023-10-19, 9 files, -55/+82)
  Removes the Bouncer's own `ConfigFetcher` in favor of pushing reconfiguration responsibilities onto the components owning the Bouncer instance. The current "superclass calls into subclass" approach isn't ideal, but the longer-term plan is to hoist all config subscriptions out of `StorageNode` and into the higher-level `Process` structure.
* De-dupe `StorageNode` config propagation (Tor Brede Vekterli, 2023-10-18, 4 files, -125/+109)
  Removes the need to duplicate locking and explicit config propagation handling per config type. Also removes unused upgrade-config wiring.
* Merge pull request #29003 from vespa-engine/vekterli/remove-unused-document-config-handler (Henning Baldersheim, 2023-10-18, 2 files, -23/+0)
  Remove unused document config update logic
* Remove unused document config update logic (Tor Brede Vekterli, 2023-10-18, 2 files, -23/+0)
  Actual document config changes are propagated in from the top-level `Process` via an entirely different call chain. Having the unused one around is just confusing, so remove it.
* Move xxh3_64 methods to vespalib. That also removes the need for workarounds for GCC false positives. (Henning Baldersheim, 2023-10-17, 1 file, -9/+1)
* Merge pull request #28964 from vespa-engine/vekterli/make-operation-priority-mapping-static (Tor Brede Vekterli, 2023-10-17, 10 files, -111/+99)
  Remove unused configurability of operation priorities
* Remove unused configurability of operation priorities (Tor Brede Vekterli, 2023-10-17, 10 files, -111/+99)
  As far as I know, this config has not been used by anyone for at least a decade (if it ever was used for anything truly useful). Additionally, operation priorities are a foot-gun at the best of times. The ability to dynamically change the meaning of priority enums even more so. This commit entirely removes configuration of Document API priority mappings in favor of a fixed mapping that is equal to the default config, i.e. what everyone's been using anyway. This removes a thread per distributor/storage node process as well as 1 mutex and 1 (presumably entirely unneeded `seq_cst`) atomic load in the message hot path. Also precomputes a LUT for the priority reverse mapping to avoid needing to lower-bound seek an explicit map.
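  A small sketch of the LUT approach; the priority classes and threshold values below are made up purely to illustrate the technique, not the real Document API mapping:

```cpp
#include <array>
#include <cstdint>

// Made-up priority classes and thresholds. The 8-bit wire priority is mapped
// back to a class via a single table lookup instead of a lower_bound seek in a
// std::map on every message.
enum class PriorityClass : uint8_t { High, Normal, Low, Lowest };

constexpr std::array<PriorityClass, 256> make_reverse_priority_lut() {
    std::array<PriorityClass, 256> lut{};
    for (int pri = 0; pri < 256; ++pri) {
        if      (pri <=  80) lut[pri] = PriorityClass::High;
        else if (pri <= 160) lut[pri] = PriorityClass::Normal;
        else if (pri <= 230) lut[pri] = PriorityClass::Low;
        else                 lut[pri] = PriorityClass::Lowest;
    }
    return lut;
}

constexpr auto kReverseLut = make_reverse_priority_lut();

// Hot path: no mutex, no atomic load, no map seek.
inline PriorityClass to_priority_class(uint8_t wire_priority) {
    return kReverseLut[wire_priority];
}
```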
* Avoid gcc 12 bug when compiled for x86-64 and haswell or newer cpu. (Henning Baldersheim, 2023-10-16, 1 file, -1/+7)
* Wire `CommunicationManager` config from its owner rather than self-subscribing (Tor Brede Vekterli, 2023-10-16, 7 files, -103/+132)
  This moves the responsibility for bootstrapping and updating config for the `CommunicationManager` component to its owner. By doing this, a dedicated `ConfigFetcher` can be removed. Since this is a component used by both the distributor and storage nodes, this reduces total thread count by 2 on a host.
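  A bare-bones sketch of the "config pushed from the owner" pattern, with hypothetical class and method names (the real classes and config types differ):

```cpp
#include <mutex>

// Stand-in for the generated config class.
struct CommunicationManagerConfig {
    int max_queue_size = 1024;
};

// The component no longer owns a ConfigFetcher (and thus no subscription thread);
// it is constructed with bootstrap config and later handed updates by its owner.
class CommunicationManagerLike {
public:
    explicit CommunicationManagerLike(const CommunicationManagerConfig& bootstrap)
        : _config(bootstrap) {}

    void on_configure(const CommunicationManagerConfig& updated) {
        std::lock_guard<std::mutex> guard(_lock);
        _config = updated;
    }

private:
    std::mutex                 _lock;
    CommunicationManagerConfig _config;
};

// The owner (e.g. the process-level reconfig loop) is the only place that
// subscribes to config; it simply pushes new snapshots down:
//   manager.on_configure(new_config_snapshot);
```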
* Improve enum naming by reducing redundant information (Tor Brede Vekterli, 2023-10-12, 4 files, -15/+15)
* Replace bools with type safe enums (Tor Brede Vekterli, 2023-10-12, 4 files, -11/+14)
* Allow Bouncer to send messages up when in state `CLOSED` (Tor Brede Vekterli, 2023-10-12, 4 files, -6/+17)
  This is required to allow messages to be bounced during the final chain flushing step where the `CommunicationManager` is shutting down the RPC subsystem and waiting for all RPC threads to complete. At this point the Bouncer component below it has completed transition into its final `CLOSED` state. This is symmetrical to allowing the `CommunicationManager` to send messages down while in a `FLUSHINGUP` state.
* Allow CommunicationManager to send down messages during flushing (Tor Brede Vekterli, 2023-10-11, 3 files, -15/+42)
  Since we now shut down the RPC server as the last step during flushing, it's possible for incoming RPCs to arrive before we get to this point. These will be immediately bounced (or swallowed) by the Bouncer component that lies directly below the CommunicationManager, but to actually get there we need to allow messages down in the StorageLink `FLUSHINGUP` state. This commit allows this explicitly for the CommunicationManager and disallows it for everyone else. Also added stack trace dumping to the log in the case that a violation is detected.
* Move async message queue signal notification inside lock (Tor Brede Vekterli, 2023-10-11, 1 file, -4/+3)
* Ensure internal messages are flushed before shutting down RPC subsystem (Tor Brede Vekterli, 2023-10-11, 8 files, -83/+161)
  This moves RPC shutdown from being the _first_ thing that happens to being the _last_ thing that happens during storage chain shutdown. To avoid concurrent client requests from the outside reaching internal components during the flushing phases, the Bouncer component will now explicitly and immediately reject incoming RPCs after closing and all replies will be silently swallowed (no one is listening for them at that point anyway).
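  Reduced to a sketch with stand-in function names (not the actual call chain), the ordering change looks roughly like this:

```cpp
#include <cstdio>

// Stand-ins for the real shutdown steps; only the ordering matters here.
static void close_bouncer_and_reject_rpcs() { std::puts("Bouncer closed: new RPCs rejected, replies swallowed"); }
static void flush_internal_message_chain()  { std::puts("internal messages flushed through the storage chain"); }
static void shut_down_rpc_subsystem()       { std::puts("RPC subsystem shut down"); }

int main() {
    // Previously the RPC subsystem was torn down first; now it goes down last,
    // after the chain has been closed and flushed, so nothing from the outside
    // can reach half-shut-down internal components.
    close_bouncer_and_reject_rpcs();
    flush_internal_message_chain();
    shut_down_rpc_subsystem();
    return 0;
}
```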
* Revert "Ensure internal messages are flushed before shutting down RPC ↵Harald Musum2023-10-1110-206/+102
| | | | subsystem, take 2"
* Allow CommunicationManager to send down messages during flushingTor Brede Vekterli2023-10-113-15/+42
| | | | | | | | | | | | | Since we now shut down the RPC server as the last step during flushing, it's possible for incoming RPCs to arrive before we get to this point. These will be immediately bounced (or swallowed) by the Bouncer component that lies directly below the CommunicationManager, but to actually get there we need to allow messages down in the StorageLink `FLUSHINGUP` state. This commit allows this explicitly for the CommunicationManager and disallows it for everyone else. Also added stack trace dumping to the log in the case that a violation is detected.
* Move async message queue signal notification inside lockTor Brede Vekterli2023-10-111-4/+3
|
* Ensure internal messages are flushed before shutting down RPC subsystemTor Brede Vekterli2023-10-118-83/+161
| | | | | | | | | | This moves RPC shutdown from being the _first_ thing that happens to being the _last_ thing that happens during storage chain shutdown. To avoid concurrent client requests from the outside reaching internal components during the flushing phases, the Bouncer component will now explicitly and immediately reject incoming RPCs after closing and all replies will be silently swallowed (no one is listening for them at that point anyway).
* Revert "Ensure internal messages are flushed before shutting down RPC subsystem"Tor Brede Vekterli2023-10-109-164/+87
|
* Merge pull request #28825 from ↵Henning Baldersheim2023-10-109-87/+164
|\ | | | | | | | | vespa-engine/vekterli/ensure-internal-messages-flushed-prior-to-rpc-shutdown Ensure internal messages are flushed before shutting down RPC subsystem
| * Move async message queue signal notification inside lockTor Brede Vekterli2023-10-061-4/+3
| |
| * Ensure internal messages are flushed before shutting down RPC subsystemTor Brede Vekterli2023-10-068-83/+161
| | | | | | | | | | | | | | | | | | | | This moves RPC shutdown from being the _first_ thing that happens to being the _last_ thing that happens during storage chain shutdown. To avoid concurrent client requests from the outside reaching internal components during the flushing phases, the Bouncer component will now explicitly and immediately reject incoming RPCs after closing and all replies will be silently swallowed (no one is listening for them at that point anyway).
* | Correct copyright headersJon Bratseth2023-10-096-6/+6
| |
* | Update copyrightJon Bratseth2023-10-09775-777/+777
|/
* Remove unused message dispatcher functionality (Tor Brede Vekterli, 2023-10-04, 3 files, -77/+24)
  Only the reply dispatcher functionality is ever used. Also renames the shutdown function so it raises fewer eyebrows due to mistaken identity with `std::terminate`...
* Remove unused code branch in Bouncer component (Tor Brede Vekterli, 2023-10-02, 1 file, -8/+0)
  For a long time now, content nodes have transitioned directly from Down to Up on startup, and they will never pass through an Initializing state (remnant from spinning rust days).
* No need to have this memory trap enabled anymore. (Henning Baldersheim, 2023-10-02, 3 files, -13/+2)