Commit message | Author | Age | Files | Lines
This provides a strict upper bound for the number of concurrently
executing DeleteBucket operations, and ensures that no persistence
thread stripe can have all its threads allocated to processing
bucket deletions.
Print Bouncer state within lock to ensure visibility
This code path is only encountered when debug logging is explicitly
enabled for the parent `StorageLink` component. Turns out an old
system test did just that.
Identifiers of the form `_Uppercased` are considered reserved by
the standard. Not likely to cause ambiguity in practice, but it's
preferable to stay on the good side of the standard-gods.
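A minimal illustration of the rule (the names below are made up, not from the actual diff): identifiers beginning with an underscore followed by an uppercase letter are reserved for the implementation in every scope, per the C and C++ standards.

```cpp
#include <cassert>

// Bad: reserved identifier, formally the implementation's territory.
// constexpr int _MaxRetries = 3;

// Good: a prefix without the uppercase letter (or none at all) is safe.
constexpr int kMaxRetries = 3;

int clamp_retries(int requested) {
    return requested > kMaxRetries ? kMaxRetries : requested;
}
```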
Removes own `ConfigFetcher` in favor of pushing reconfiguration
responsibilities onto the components owning the Bouncer instance.
The current "superclass calls into subclass" approach isn't
ideal, but the longer term plan is to hoist all config subscriptions
out of `StorageNode` and into the higher-level `Process` structure.
Removes the need to duplicate locking and explicit config
propagation handling per config type.
Also removes unused upgrade-config wiring.
vespa-engine/vekterli/remove-unused-document-config-handler
Remove unused document config update logic
Actual document config changes are propagated in from the
top-level `Process` via an entirely different call chain.
Having the unused one around is just confusing, so remove it.
for GCC false positives.
vespa-engine/vekterli/make-operation-priority-mapping-static
Remove unused configurability of operation priorities
|
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| |
| | |
As far as I know, this config has not been used by anyone for
at least a decade (if it ever was used for anything truly useful).
Additionally, operation priorities are a foot-gun at the best of
times. The ability to dynamically change the meaning of priority
enums even more so.

This commit entirely removes configuration of Document API
priority mappings in favor of a fixed mapping that is equal to
the default config, i.e. what everyone's been using anyway.

This removes a thread per distributor/storage node process as
well as 1 mutex and 1 (presumably entirely unneeded `seq_cst`)
atomic load in the message hot path. Also precomputes a LUT for
the priority reverse mapping to avoid needing to lower-bound seek
an explicit map.
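The LUT idea can be sketched like this (the priority values here are made up, not Vespa's actual fixed mapping): precompute the reverse lookup for all 256 wire byte values at compile time, so the hot path is a single O(1) array index instead of a lower-bound seek in a map.

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Illustrative forward mapping: priority "class" index -> wire byte value.
constexpr std::array<uint8_t, 4> kClassToWire = {50, 100, 150, 200};

constexpr std::array<uint8_t, 256> make_reverse_lut() {
    std::array<uint8_t, 256> lut{};
    for (int v = 0; v < 256; ++v) {
        // Highest priority class whose wire value does not exceed v.
        uint8_t cls = 0;
        for (uint8_t c = 0; c < kClassToWire.size(); ++c) {
            if (kClassToWire[c] <= v) {
                cls = c;
            }
        }
        lut[v] = cls;
    }
    return lut;
}

// Computed once at compile time; no mutex, no atomic, no map seek.
constexpr auto kWireToClass = make_reverse_lut();

uint8_t wire_to_class(uint8_t wire) {
    return kWireToClass[wire];
}
```

Since the mapping is now fixed, the table can live in read-only static storage and needs no synchronization at all on the message hot path.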
This moves the responsibility for bootstrapping and updating config
for the `CommunicationManager` component to its owner. By doing this,
a dedicated `ConfigFetcher` can be removed. Since this is a
component used by both the distributor and storage nodes, this
reduces total thread count by 2 on a host.
This is required to allow messages to be bounced during the
final chain flushing step, where the `CommunicationManager` is
shutting down the RPC subsystem and waiting for all RPC threads
to complete. At this point the Bouncer component below it has
already completed its transition into its final `CLOSED` state.
This is symmetrical to allowing the `CommunicationManager` to
send messages down while in a `FLUSHINGUP` state.
Since we now shut down the RPC server as the last step during flushing,
it's possible for incoming RPCs to arrive before we get to this point.
These will be immediately bounced (or swallowed) by the Bouncer
component that lies directly below the CommunicationManager, but to
actually get there we need to allow messages down in the StorageLink
`FLUSHINGUP` state.
This commit allows this explicitly for the CommunicationManager and
disallows it for everyone else. Also adds stack trace dumping to the
log when a violation is detected.
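The rule can be sketched as a simple predicate (state names are from the commit message; the function itself is hypothetical, not the real StorageLink code): during `FLUSHINGUP`, only the CommunicationManager may keep sending messages down, so late RPCs can reach the Bouncer and be rejected there; any other component doing so is a violation, which the real code now logs together with a stack trace.

```cpp
#include <cassert>

enum class State { OPENED, FLUSHINGDOWN, FLUSHINGUP, CLOSED };

bool may_send_down(State state, bool is_communication_manager) {
    switch (state) {
        case State::OPENED:
        case State::FLUSHINGDOWN:
            return true;  // normal operation / flushing own queues downwards
        case State::FLUSHINGUP:
            // Explicit carve-out so incoming RPCs can still reach the
            // Bouncer and be bounced while the rest of the chain flushes.
            return is_communication_manager;
        case State::CLOSED:
        default:
            return false;  // a violation: would dump a stack trace to the log
    }
}
```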
This moves RPC shutdown from being the _first_ thing that happens
to being the _last_ thing that happens during storage chain shutdown.
To avoid concurrent client requests from the outside reaching internal
components during the flushing phases, the Bouncer component will now
explicitly and immediately reject incoming RPCs after closing and all
replies will be silently swallowed (no one is listening for them at that
point anyway).
subsystem, take 2"
Since we now shut down the RPC server as the last step during flushing,
it's possible for incoming RPCs to arrive before we get to this point.
These will be immediately bounced (or swallowed) by the Bouncer
component that lies directly below the CommunicationManager, but to
actually get there we need to allow messages down in the StorageLink
`FLUSHINGUP` state.
This commit allows this explicitly for the CommunicationManager and
disallows it for everyone else. Also adds stack trace dumping to the
log when a violation is detected.
This moves RPC shutdown from being the _first_ thing that happens
to being the _last_ thing that happens during storage chain shutdown.
To avoid concurrent client requests from the outside reaching internal
components during the flushing phases, the Bouncer component will now
explicitly and immediately reject incoming RPCs after closing and all
replies will be silently swallowed (no one is listening for them at that
point anyway).
vespa-engine/vekterli/ensure-internal-messages-flushed-prior-to-rpc-shutdown
Ensure internal messages are flushed before shutting down RPC subsystem
This moves RPC shutdown from being the _first_ thing that happens
to being the _last_ thing that happens during storage chain shutdown.
To avoid concurrent client requests from the outside reaching internal
components during the flushing phases, the Bouncer component will now
explicitly and immediately reject incoming RPCs after closing and all
replies will be silently swallowed (no one is listening for them at that
point anyway).
Only the reply dispatcher functionality is ever used.
Also rename the shutdown function to raise fewer eyebrows from a case
of mistaken identity with `std::terminate`...
For a long time now, content nodes have transitioned directly
from Down to Up on startup, and they will never pass through an
Initializing state (remnant from spinning rust days).
Although unused, its presence in the chain causes an indirect
call for each message passing by in either direction, which is
entirely pointless.
Serialization code can safely be removed, as no revert-related
messages have ever flown across the wire in the new serialization
format.
This is a remnant from the VDS days and can only work when the
backend is a multi-version store. The code has been explicitly
disabled in the config model for Proton since day one.
This introduces cancellation of pending operations/messages to
content nodes in the following scenarios:
* One or more content nodes become unavailable in a newly
received cluster state version (triggered when first received,
i.e. at the pending state start edge).
* One or more nodes are removed from the distribution config.
* The set of available distributors changes, which in turn
changes the ownership of a fraction of the set of super buckets.
Pending operations to buckets that were owned by the current
distributor in the previous state, but not in the new state,
are all cancelled.

Introduces cancellation support for internal maintenance operations.
As part of this, `CancelScope` tracking is moved out into the parent
`Operation` class to unify cancellation tracking across both
client and maintenance operations.

Removes the interface vs. impl indirection for `PersistenceMessageTracker`,
since it's only ever had a single implementation and likely never
will have another.
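A hypothetical sketch of hoisting cancellation tracking into the shared base class, as the commit describes (types and member names are illustrative, not the actual distributor interfaces): both client and maintenance operations then share one cancellation bookkeeping mechanism instead of duplicating it.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct CancelScope {
    std::vector<uint16_t> cancelled_nodes;  // nodes that became unavailable
    bool fully_cancelled = false;           // e.g. bucket ownership was lost
};

class Operation {
public:
    virtual ~Operation() = default;

    // Merge a new cancellation into the operation's accumulated scope.
    void cancel(const CancelScope& scope) {
        _scope.fully_cancelled = _scope.fully_cancelled || scope.fully_cancelled;
        _scope.cancelled_nodes.insert(_scope.cancelled_nodes.end(),
                                      scope.cancelled_nodes.begin(),
                                      scope.cancelled_nodes.end());
    }

    bool is_cancelled() const {
        return _scope.fully_cancelled || !_scope.cancelled_nodes.empty();
    }

private:
    CancelScope _scope;
};

// Client and maintenance operations alike inherit the same tracking.
class PutOperation final : public Operation {};
class MergeOperation final : public Operation {};
```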
vespa-engine/vekterli/add-predicated-bucket-msg-fn-to-message-tracker
Enumerate pending message IDs on a bucket predicate basis
Lets a caller selectively enumerate all IDs of messages pending
towards buckets that match a caller-provided predicate function.
A separate message ID callback is invoked per distinct message.
Also remove hard-coded multi-index numeric indices in favor of
named constants.
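The API shape described above can be sketched as follows (types and names are illustrative, not the real pending-message tracker): walk all pending messages and invoke a callback with the ID of each one whose target bucket matches a caller-provided predicate.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <vector>

using BucketId = uint64_t;
using MessageId = uint64_t;

struct PendingMessages {
    // bucket -> IDs of messages pending towards it
    std::multimap<BucketId, MessageId> by_bucket;

    template <typename BucketPredicate, typename IdCallback>
    void for_each_matching(BucketPredicate pred, IdCallback on_id) const {
        for (const auto& [bucket, msg_id] : by_bucket) {
            if (pred(bucket)) {
                on_id(msg_id);  // invoked once per distinct pending message
            }
        }
    }
};
```

A caller can then, for instance, collect the IDs of everything pending towards a set of buckets that just changed ownership, without the tracker needing to know why the caller cares.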