Avoids potentially having to deserialize the entire update just
to get at a single bit of information that is technically
metadata, existing orthogonally to the document update itself.
To ensure backwards/forwards compatibility, the flag is
propagated as a Protobuf `enum` whose default value is a
special "unspecified" sentinel, implying an old sender.
Since the Java protocol implementation always eagerly
deserializes messages, it unconditionally assigns the
`create_if_missing` field when sending and completely ignores
it when receiving.
The C++ protocol implementation observes and propagates the
field iff it is set; otherwise the flag is read from the
deserialized update object, as before. This applies to both
the DocumentAPI and StorageAPI protocols.
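A minimal C++ sketch of the sentinel-enum fallback described
above; the type and function names are illustrative assumptions,
not Vespa's actual protocol code:

    // Hypothetical sketch of the sentinel-enum fallback; names are
    // illustrative, not the actual protocol code.
    #include <cstdint>

    struct DocumentUpdate {
        // Stand-in for the flag embedded in the serialized update.
        bool create_if_non_existent = false;
        bool getCreateIfNonExistent() const { return create_if_non_existent; }
    };

    // Wire-level field. Value 0 is the "unspecified" sentinel, so a
    // message from an old sender (which never sets the field) is
    // distinguishable from an explicit false.
    enum class CreateIfMissing : uint8_t { UNSPECIFIED = 0, NO = 1, YES = 2 };

    bool resolveCreateIfMissing(CreateIfMissing wireFlag, const DocumentUpdate& update) {
        if (wireFlag != CreateIfMissing::UNSPECIFIED) {
            return wireFlag == CreateIfMissing::YES; // new sender: trust the wire
        }
        // Old sender: fall back to deserializing the update, as before.
        return update.getCreateIfNonExistent();
    }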
If configured, the active merge window is limited so that the
sum of the estimated memory usage of its merges does not exceed
the configured soft memory limit. The window always fits at
least one merge regardless of that merge's estimated size, to
ensure progress in the cluster (hence this is a soft limit,
not a hard limit).
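A rough sketch of such an admission policy, assuming hypothetical
names (the actual merge throttling code differs):

    // Rough sketch of the soft-limit admission policy; names and
    // structure are assumptions, not the actual throttler code.
    #include <cstdint>
    #include <vector>

    struct Merge { uint64_t estimatedMemoryBytes; };

    class MergeWindow {
    public:
        explicit MergeWindow(uint64_t softLimitBytes) : _softLimit(softLimitBytes) {}

        // Admit the merge unless doing so would exceed the soft limit.
        // An empty window always admits, so a single huge merge can
        // never wedge the cluster (this is what makes the limit soft).
        bool tryAdmit(const Merge& m) {
            if (!_active.empty() && _used + m.estimatedMemoryBytes > _softLimit) {
                return false;
            }
            _active.push_back(m);
            _used += m.estimatedMemoryBytes;
            return true;
        }

    private:
        uint64_t _softLimit;
        uint64_t _used = 0;
        std::vector<Merge> _active;
    };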
Serialization code can safely be removed, as no revert-related
messages have ever flowed across the wire in the new
serialization format.
When a storage API command is created internally on a node it is
assigned a strictly increasing message ID that is guaranteed to
be unique within the process. Some parts of the code use this to
distinguish messages from one another. However, prior to this
commit, uniqueness did not necessarily hold: the underlying wire
protocol would inherit message IDs _from other nodes_ and
override the locally generated ID with them.
This had exciting consequences when the stars aligned and a
remote node sent the same ID as one generated concurrently on
the receiving node. Luckily, in practice the ID was only used in
a potentially ambiguous context when sanity-checking shared read
lock sets for the _same bucket_ in the persistence threads.
Invariant checks would detect this as an attempted duplicate
lock acquisition and abort the process. The bug has been latent
for many, many years, but we've seen it trigger exactly once.
This commit introduces an explicit domain separation between the
node-internal (locally unique) IDs and the ID used by the
originator. The originator ID is preserved and returned over the
wire to the caller when responding to the incoming request.
Curiously, we don't actually need the originator ID at all,
since the caller maintains explicit state containing the sent
command. Unfortunately we can't simply remove it, as versions
prior to this commit still use whatever's on the wire.
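A simplified C++ sketch of the domain separation; the field and
function names are assumptions, not the actual StorageAPI code:

    // Simplified sketch of the ID domain separation; names are
    // assumptions, not the actual StorageAPI code.
    #include <atomic>
    #include <cstdint>

    struct StorageCommand {
        uint64_t internalId;   // locally unique, never taken from the wire
        uint64_t originatorId; // echoed back verbatim in the response
    };

    // Process-wide monotonic counter keeps internal IDs unique.
    std::atomic<uint64_t> nextInternalId{1};

    StorageCommand decodeIncoming(uint64_t wireMsgId) {
        // Never adopt the remote ID as our own; the two domains stay
        // separate, so duplicate-lock sanity checks see unique IDs.
        return StorageCommand{nextInternalId.fetch_add(1), wireMsgId};
    }

    uint64_t responseWireId(const StorageCommand& cmd) {
        // Old peers still key responses on the wire ID, so it must
        // be preserved and sent back unchanged.
        return cmd.originatorId;
    }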
Lets the "test" part of a test-and-set condition be evaluated
locally on individual content nodes. Piggybacks on metadata-only
Get operations, adding a new condition field to the request and
a boolean match result to the response.
Decouples the existing TaS utility code from being
command-oriented, allowing it to be used in other contexts as
well.
Not yet wired through any protocols.
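The shape of the extended request/response might look roughly
like this (a sketch; the field names are guesses, not the actual
API):

    // Sketch of the extended metadata-only Get; field names are
    // guesses, not the actual API.
    #include <cstdint>
    #include <optional>
    #include <string>

    struct GetRequest {
        std::string documentId;
        std::string fieldSet = "[none]";      // metadata-only Get
        std::optional<std::string> condition; // new: the TaS "test" expression
    };

    struct GetResponse {
        uint64_t timestamp = 0;
        // New: set iff the request carried a condition; true when the
        // stored document matched it on this content node.
        std::optional<bool> conditionMatched;
    };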
If enabled, garbage collection is performed in two phases
(metadata gathering and deletion) instead of just a single
phase. Two-phase GC ensures that the same set of documents is
deleted across all nodes, and explicitly takes write locks on
the distributor to prevent concurrent feed ops to GC'd documents
from creating inconsistencies.
Two-phase GC is used _iff_ all replica content nodes support
the feature _and_ it's enabled in config. An additional field
has been added to the feature negotiation functionality to
communicate support from content nodes to distributors.
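A minimal sketch of such a negotiation gate (hypothetical names;
the real negotiation code differs):

    // Minimal sketch of the negotiation gate; hypothetical names.
    #include <algorithm>
    #include <vector>

    struct NodeFeatures {
        bool twoPhaseGcSupported = false; // reported per content node
    };

    bool useTwoPhaseGc(const std::vector<NodeFeatures>& replicaNodes, bool enabledInConfig) {
        // Fall back to single-phase GC unless the feature is enabled
        // and every replica content node has negotiated support.
        return enabledInConfig &&
               std::all_of(replicaNodes.begin(), replicaNodes.end(),
                           [](const NodeFeatures& n) { return n.twoPhaseGcSupported; });
    }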
Legacy version negotiation only happened over the MessageBus
transport, which has now been removed. The current StorageAPI
RPC transport always uses the newest protocol version directly,
since it's built around Protobuf.