| Commit message | Author | Age | Files | Lines |
Can't initialize members in the constructor when they depend on objects
that are subsequently reset by the superclass' `SetUp()` method.
Implicitly unlocking messes up higher-level assumptions about
when locks are held and thus cannot be safely done. The lock will
be unlocked immediately afterwards anyway, so this does not seem
like a useful optimization.
The downcast-safe type invariant shall be maintained by the message's
own type ID tracking. If it's not, we have bigger problems.
Avoids the need for barriers to keep from stepping on the thread's toes.
Bucket operations require either exclusive (single writer) or
shared (multiple readers) access. Prior to this commit, this
meant that many enqueued feed operations to the same bucket
introduced pipeline stalls, since each operation had to wait
for all prior operations to the bucket to complete entirely
(including the fsync of the WAL append). This is a likely scenario when
feeding a document set that was previously acquired through
visiting, as such documents will inherently be output in
bucket order.
With this commit, a configurable number of feed operations
(put, remove and update) bound for the exact same bucket may
be sent asynchronously to the persistence provider in the
context of the _same_ write lock. This mirrors how merge
operations work for puts and removes.
Batching is fairly conservative, and will _not_ batch across
further messages when any of the following holds:
* A non-feed operation is encountered
* More than one mutating operation is encountered for the
  same document ID
* No more persistence throttler tokens can be acquired
* The max batch size has been reached
Updating the bucket DB, assigning bucket info and sending
replies is deferred until _all_ batched operations complete.
The max batch size is (re-)configurable live and defaults to a
batch size of 1, which has the exact same semantics as
the legacy behavior.
Additionally, clock sampling for persistence threads has been
abstracted away to allow for mocking in tests (no need for sleep!).
Extends metric producer classes with the requested exposition format.
As a consequence, the State API server has been changed to allow
emitting content types other than just `application/json`.
Adds custom Prometheus rendering for Slobrok, as it does its own
domain-specific metric tracking. However, since it has non-destructive
sampling properties, we can actually use proper `counter` types.
Maps all internal metrics to one or more labelled time series.
Due to poor compatibility between the data model (and sampling
strategy) of the legacy metrics framework and that of Prometheus,
all time series are emitted as `untyped` metrics.
This is a stop-gap solution on the way to "properly" supporting
Prometheus exposition, and the output of this renderer should
therefore only be used for internal purposes.
The document API has long had a special field for update operations
where an optional expected _existing_ backend timestamp can be specified,
and where the update should only go through if there is a timestamp
match.
This has been supported on the distributor all along, but only when
write-repair is taking place (i.e. rarely); the actual backend
support has been lacking. No one has complained yet, since this is
very much not an advertised feature, but if we want to e.g. use this
feature for improvements to batch updates we should ensure that it
works as expected.
With this commit, a non-zero "old timestamp" field is cross-checked
against the existing document, and the update is only applied if the
actual and expected timestamps match.
tests.
- Reduce penetration of generated StorFilestorConfig.
vespa-engine/balder/hardcode-enable_metadata_only_fetch_phase_for_inconsistent_updates
- Hardcode enable_metadata_only_fetch_phase_for_inconsistent_updates …
restart_with_fast_update_path_if_all_get_timestamps_are_consistent to true.
- The tests depending on these flags specify these values explicitly.
Balder/gc unused distribution config
vespa-engine/balder/disable_queue_limits_for_chained_merges-always-true
disable_queue_limits_for_chained_merges has long been true, GC
vespa-engine/balder/throttle_individual_merge_feed_ops_and_common_merge_chain_optimalization
Balder/throttle individual merge feed ops and common merge chain optimalization
Balder/always unordered merging
vespa-engine/balder/gc-maxpendingidealstateoperations
GC maxpendingidealstateoperations which has not been wired in for a l…
vespa-engine/balder/always-inhibit_default_merges_when_global_merges_pending
- Always inhibit_default_merges_when_global_merges_pending
- Only show config to the code that needs it.
- Avoid passing autogenerated config internals around in the code.
Always report hostinfo
GC priority control in config. Correct priority is essential to conte…
layer, and should not be reconfigured.
Never block statecheckers