Commit message log

Identifiers of the form `_Uppercased` are considered reserved by
the standard. Not likely to cause ambiguity in practice, but it's
preferable to stay on the good side of the standard-gods.
Remnants of the "file per bucket on spinning disks" days and no
longer used for anything.
Lets the "test" part of a test-and-set condition be evaluated
locally on individual content nodes. Piggybacks on top of metadata-only
Get operations, adding a new condition field to the request and a
boolean match result to the response.
Decouples the existing TaS utility code from being command-oriented,
allowing it to be used in other contexts as well.
Not yet wired through any protocols.
use std::thread directly instead

Also add a very simple ThreadPool class to run multiple threads at once,
and make an effort to only join once.
- also stop using std::jthread
- remove Active and Joinable interfaces
- remove stop, stopped and slumber
- remove currentThread
- make start function static
- override start for Runnable w/init or custom function
- explicit stop/slumber where needed
We don't support using imported fields in conditional mutations, so
catch attempts at doing this during the field enumeration that is done
as part of the condition evaluation. Would previously get an internal
error response with an ugly stack trace since the exception would
propagate up to a generic exception-to-response handler.
Will now generate an `ILLEGAL_PARAMETERS` error response with a hopefully
helpful error message.
After initialization, the node will immediately start communicating with the cluster
controller, exchanging host info. This host info contains a subset snapshot of the active
metrics, which includes the total bucket count, doc count etc. It is critical that
we never report back host info _prior_ to having run at least one full sweep of
the bucket database, lest we risk transiently reporting zero buckets held by the
content node. Doing so could cause orchestration logic to perform operations based
on erroneous assumptions.
To avoid this, we explicitly force a full DB sweep and metric update prior to reporting
the node as up. Since this function is called prior to the CommunicationManager thread
being started, any CC health pings should also always happen after this init step.
If enabled, garbage collection is performed in two phases (metadata
gathering and deletion) instead of just a single phase. Two-phase GC
allows for ensuring the same set of documents is deleted across all
nodes and explicitly takes write locks on the distributor to prevent
concurrent feed ops to GC'd documents from potentially creating
inconsistencies.
Two-phase GC is used _iff_ all replica content nodes support
the feature _and_ it's enabled in config. An additional field has
been added to the feature negotiation functionality to communicate
support from content nodes to distributors.
Feels more intuitive to have a tuple that implies "document foo at timestamp bar"
rather than the current inverse of "timestamp bar with document foo".
Remove '.sum' from metric names for storage node and also remove the average metrics for the same.
Remove '.sum' from distributor metrics set and remove distributor average metrics.
GC '.sum' from distributor metric names.
Remove '.alldisks' from metric names and update tests.
GC '.alldisks' from filestor metrics.
(gcc 12 on aarch64 platform).
proton.
Also move the remaining throttler unit tests to vespalib.
Adds an operation throttler that is intended to provide global throttling
of async operations across all persistence stripe threads. A throttler
maintains a logical window that caps the number of in-flight operations. Depending
on the throttler implementation, the window size may expand and shrink
dynamically. Exactly how and when this happens is unspecified.
Commit adds two throttler implementations:
* An unlimited throttler that is no-op and never blocks.
* A throttler built around the mbus `DynamicThrottlePolicy` that defers
all window decisions to it.
Current config default is to use the unlimited throttler. Config changes
require a process restart.
Offers both polling and (timed, non-timed) blocking calls for acquiring
a throttle token. If the returned token is valid, the caller may proceed
to invoke the asynchronous operation.
The window slot taken up by a valid throttle token is implicitly freed up
when the token is destroyed.
- Consistently use DocEntryList as type for std::vector<spi::DocEntry::UP>
instead of a mutant.
Also add tests for the different variations a DocEntry can have.
complete
Add a generic interface for letting an operation know that the synchronous
parts of its processing in the persistence thread are complete. This allows
a potentially longer-running async operation to free up any limits that
were put in place when it was taking up synchronous thread resources.
Currently only used by merge-related operations (that may dispatch many
async ops). Since we have a max upper bound for how many threads in a stripe
may be processing merge ops at the same time (to avoid blocking client ops),
we could previously stall the pipelining of merges by hitting the concurrency
limit even if all persistence threads were otherwise idle (waiting for prior
async merge ops to complete).
We now explicitly decrease the merge concurrency counter once the synchronous
processing is done, allowing us to take on further merges immediately.
Only skip deactivating buckets if the entire _node_ is marked as
maintenance state, i.e. the node has maintenance state across all
bucket spaces provided in the bundle. Otherwise treat the state
transition as if the node goes down, deactivating all buckets.
Also ensure that the bucket deactivation logic above the SPI is
identical to that within Proton. This avoids bucket DBs getting
out of sync between the two.