| Commit message | Author | Age | Files | Lines |
With a sufficient, even thread count this ensures that no stripes
end up completely blocked on processing merges, which can starve
client operations. With a global limit, it was possible for
individual stripes to fill up completely with merges.
As an added bonus, moving the limit tracking to individual stripes
means that we no longer have to track this as an atomic, since all
access already happens under the Stripe lock.
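The per-stripe limit described above could be sketched roughly as follows. This is an illustrative model only; the names (`Stripe`, `try_start_merge`, the limit of 4) are assumptions, not Vespa's actual API:

```cpp
#include <cassert>
#include <mutex>

// Sketch: each stripe tracks its own pending merge count under its own
// lock, so a plain int suffices (no std::atomic), and one stripe
// saturating with merges cannot block the others.
struct Stripe {
    std::mutex lock;
    int pending_merges = 0; // plain int: only accessed under `lock`
    int merge_limit    = 4; // per-stripe limit (assumed value)

    // Returns true if the merge was admitted, false if this stripe is full.
    bool try_start_merge() {
        std::lock_guard<std::mutex> guard(lock);
        if (pending_merges >= merge_limit) {
            return false; // only this stripe rejects; others still accept
        }
        ++pending_merges;
        return true;
    }

    void complete_merge() {
        std::lock_guard<std::mutex> guard(lock);
        --pending_merges;
    }
};
```

With a global atomic limit, one stripe could consume the entire budget; here each stripe's admission decision is purely local.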
New distributor metric available as:
```
vds.idealstate.garbage_collection.documents_removed
```
Add documents-removed statistics to `RemoveLocation` responses,
which is what GC is currently built around. This could technically
have been implemented as a diff of before/after BucketInfo, but
since GC runs at very low priority, many other mutating ops may have
changed the bucket's document set in the time span between sending
the GC ops and receiving the replies.
This relates to issue #12139
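The reply-based counting could look roughly like this. The type and field names are hypothetical stand-ins for whatever the actual `RemoveLocation` reply carries:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: GC sums a documents-removed count carried on each
// RemoveLocation reply, rather than diffing before/after BucketInfo,
// which concurrent mutations would skew for low-priority GC.
struct RemoveLocationReply {
    uint32_t documents_removed = 0; // assumed field name
};

uint64_t total_documents_removed(const std::vector<RemoveLocationReply>& replies) {
    uint64_t total = 0;
    for (const auto& r : replies) {
        // Feeds vds.idealstate.garbage_collection.documents_removed
        total += r.documents_removed;
    }
    return total;
}
```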
Send spec for client connections to enable SNI as well as server name
verification.
Balder/reduce bytebuffer exposure
vespa-engine/vekterli/support-weak-internal-read-consistency-for-client-gets
Vekterli/support weak internal read consistency for client gets
If configured, Get operations initiated by the client are flagged
with weak internal consistency. This allows the backend to bypass
certain internal synchronization mechanisms, which minimizes latency
at the cost of possibly not observing a consistent view of the
document. This config should only be used in a very restricted set
of cases where the document set is effectively read-only, or
cross-field consistency or freshness does not matter.
To enable the weak consistency, use an explicit config override:
```
<config name="vespa.config.content.core.stor-distributormanager">
<use_weak_internal_read_consistency_for_client_gets>
true
</use_weak_internal_read_consistency_for_client_gets>
</config>
```
This closes #11811
better generated code.
vespa-engine/balder/bring-you-backing-buffer-along
Balder/bring your backing buffer along
Create-if-missing updates have rather finicky behavior in the backend:
if the update ended up creating the document from scratch, they set the
reported timestamp of the previous document to that of the _new_
document. This behavior confuses the after-the-fact timestamp
consistency checks, since it makes the document that was created from
scratch look like a better candidate to force convergence towards than
the replicas that actually updated an existing document.
With this change we therefore detect this case explicitly and treat the
received timestamps as if the updated document had a timestamp of zero.
This matches the behavior of regular (non-auto-create) updates.
Note that another avenue for solving this would be to alter the returned
timestamp in the backend to be zero instead, but this would cause issues
during rolling upgrades, since some content nodes would return zero
timestamps while others would return non-zero ones. This would in turn
trigger false positives in the inconsistency sanity checks.
Also note that this is a fallback path that should not be hit unless
the a-priori inconsistency checks in the two-phase update operation
somehow fail to recognize that the document versions may be out of
sync.
This relates to issue #11686
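The normalization described above amounts to something like the following. The function and parameter names are illustrative assumptions, not the actual distributor code:

```cpp
#include <cassert>
#include <cstdint>

// Sketch: when the backend signals that a create-if-missing update
// created the document from scratch, it reports the _new_ document's
// timestamp as the "previous" timestamp. Normalizing that case to zero
// on the distributor side makes it look like a regular update of a
// missing document, so the consistency check no longer prefers the
// auto-created copy as a convergence target.
uint64_t normalized_previous_timestamp(uint64_t reported_previous_ts,
                                       bool document_was_created) {
    return document_was_created ? 0 : reported_previous_ts;
}
```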
vespa-engine/vekterli/support-config-disabling-of-merges
Support config disabling of merges
If config is set, all merges will be completely inhibited. This is
useful for letting system tests operate against a bucket replica state
that is deterministically out of sync.
This relates to issue #11686
vespa-engine/toregge/system-time-and-steady-time-might-have-different-duration-types
std::chrono::system_clock and std::chrono::steady_clock might have different duration types.
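The standard indeed only guarantees that each clock has *some* duration type; `system_clock::duration` and `steady_clock::duration` may use different tick periods, so mixing them requires an explicit conversion. A minimal illustration (the helper name is ours, not Vespa's):

```cpp
#include <cassert>
#include <chrono>

// Sketch: converting any clock's native duration to a single known
// duration type avoids assuming system_clock and steady_clock share a
// tick period (the standard does not require that they do).
template <typename Clock>
std::chrono::nanoseconds elapsed_since(typename Clock::time_point start) {
    // duration_cast makes the tick-period conversion explicit.
    return std::chrono::duration_cast<std::chrono::nanoseconds>(Clock::now() - start);
}
```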
Introducing a new member in the category "stupid bugs that I should have
added explicit tests for when adding the feature initially".
There was ambiguity in the GetOperation code where a timestamp sentinel
value of zero was used to denote not having received any replies yet,
but where a timestamp of zero also means "document not found on replica".
This means that if the first reply was from a replica _without_ a document
and the second reply was from a replica _with_ a document, the code
would act as if the first reply effectively did not exist. Consequently
the Get operation would be tagged as consistent. This had very bad
consequences for the two-phase update operation logic that relied on
this information to be correct.
This change ensures there is no ambiguity between not having received
a reply and having received a reply with a missing document.
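One way to remove a zero-sentinel ambiguity like this is to track "have we seen any reply yet" separately, e.g. with `std::optional`. A sketch under assumed names (this is not the actual `GetOperation` code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <optional>

// Sketch: model "newest timestamp seen so far" as an std::optional
// instead of using 0 as a no-replies-yet sentinel, since 0 is also a
// valid "document not found on replica" timestamp.
struct GetReplyTracker {
    std::optional<uint64_t> newest_ts; // nullopt == no replies yet
    bool consistent = true;

    void on_reply(uint64_t replica_ts) {
        if (newest_ts.has_value() && *newest_ts != replica_ts) {
            consistent = false; // replicas disagree, even if one reported 0
        }
        newest_ts = std::max(newest_ts.value_or(replica_ts), replica_ts);
    }
};
```

With the old sentinel scheme, a first reply of 0 ("not found") looked identical to "no reply yet", so a later non-zero reply was judged consistent; the optional makes the two states distinct.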
Even with the fix in #11561 we are still observing replica
divergence warnings in the logs. Disabling this feature entirely
until the issue has been fully investigated and a complete fix
has been implemented.
Also emit a log message when the distributor has forced convergence
of a detected inconsistent update.
extracting debuginfo.
After the recent change to allow safe path updates to be restarted
as fast path updates iff all observed document timestamps are equal,
a race condition regression was introduced. If the bucket that the
update operation was scheduled towards got a new replica concurrently
created _between_ the time that safe path Gets were sent and received,
it was possible for updates to be sent to inconsistent replicas. This
is because the Get and Update operations use the current database
state at _their_ start time, not a stable snapshot state from the start
time of the two-phase update operation itself.
Add an explicit check that the replica state between sending Gets and
Updates is unchanged. If it has changed, a fast path restart is _not_
permitted.
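The guard described above reduces to comparing the replica set observed when the Gets were sent with the one observed when deciding on a restart. A sketch with assumed names (node indices standing in for replica descriptors):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch: remember which replica nodes the safe-path Gets were sent
// to, and only allow the fast-path restart if the bucket's replica set
// is unchanged once the Gets complete. A concurrently created replica
// changes the set and forces the full safe path.
bool may_restart_fast_path(const std::vector<uint16_t>& nodes_at_get_send,
                           const std::vector<uint16_t>& nodes_now,
                           bool all_timestamps_equal) {
    return all_timestamps_equal && (nodes_at_get_send == nodes_now);
}
```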
vespa-engine/balder/use-duration-in-messagebus-and-storageapi-rebased-1
timeout as duration
balder/use-duration-in-messagebus-and-storageapi-rebased-1
Conflicts:
messagebus/src/vespa/messagebus/testlib/testserver.cpp
Balder/use system time in trace
Renamed Timer -> ScheduledExecutor.
Do not include thread.h in header files when it is not needed.
When Get requests initiated outside the main distributor thread are sent
via the MessageSender that is implemented by the main Distributor instance,
they would be implicitly registered with the pending message tracker.
Not only was this not thread-safe; the registrations would also never be
cleared away, since the reply pipeline bypassed the tracker entirely. This
caused a silent memory leak that accumulated many small allocations over time.
We now dispatch Get requests directly through the storage link chain, bypassing
the message tracking component. This both fixes the leak and avoids extra
overhead for the Get requests.
Note: the concurrent Get feature is _not_ enabled by default.
Also fixes an issue where concurrent Get operations weren't gracefully
aborted when the node shuts down.
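The leak pattern can be illustrated with a toy model. These types are stand-ins invented for the illustration, not Vespa's actual classes:

```cpp
#include <cassert>
#include <cstddef>

// Sketch: a request sent via the tracked path is registered but, because
// the reply pipeline bypassed the tracker, never deregistered; each such
// request leaks one tracker entry.
struct PendingMessageTracker {
    std::size_t pending = 0;
    void register_request() { ++pending; } // no matching deregistration ran
};

struct StorageLink {
    std::size_t handled = 0;
    void dispatch() { ++handled; } // next link in the storage chain
};

// New path: dispatch the Get straight down the storage link chain,
// skipping the tracker entirely, so there is nothing left to leak.
void send_get_direct(StorageLink& chain) { chain.dispatch(); }
```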