state broadcast deadline [run-systemtest]"
vespa-engine/jonmv/create-only-one-cluster-controller
Avoid recreation of ClusterController when config changes
setup
Adds an absolute number delta that is subtracted from the feed block limit
when a node has a resource already in feed blocked state. This means that
there is a lower watermark threshold that must be crossed before feeding
can be unblocked, which avoids flip-flopping between block states.
The default is currently 0.0, i.e. effectively disabled; to be modified
later for system tests and trial roll-outs.
A couple of caveats with the current implementation:
* The cluster state is not recomputed automatically when just the hysteresis
  threshold is crossed, so the description will be out of date on the
  content nodes. However, if any other feed block event happens (or the
  regular feed block limit is crossed), the state will be recomputed as
  expected. This does not affect correctness, since feed is still to be
  blocked.
* A node event remove/add pair is emitted for feed block status when the
  hysteresis threshold is crossed and there is a cluster state recomputation.
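The lower-watermark rule described above can be sketched roughly as follows. This is a minimal illustration, not the actual ClusterController code; the class and method names (`FeedBlockHysteresis`, `isExhausted`) are invented for the example:

```java
// Minimal sketch of the feed block hysteresis rule, assuming an absolute
// delta ("noise level") subtracted from the configured limit while a
// resource is already in feed blocked state. Names are hypothetical.
public final class FeedBlockHysteresis {

    private final double noiseLevel; // absolute delta; 0.0 effectively disables hysteresis

    public FeedBlockHysteresis(double noiseLevel) {
        this.noiseLevel = noiseLevel;
    }

    /** The effective limit is lowered while the resource is currently blocked. */
    public double effectiveLimit(double configuredLimit, boolean currentlyBlocked) {
        return currentlyBlocked ? configuredLimit - noiseLevel : configuredLimit;
    }

    /**
     * Blocked resources must fall below the lower watermark
     * (configuredLimit - noiseLevel) before feed can be unblocked.
     */
    public boolean isExhausted(double usage, double configuredLimit, boolean currentlyBlocked) {
        return usage > effectiveLimit(configuredLimit, currentlyBlocked);
    }
}
```

With a limit of 0.80 and a delta of 0.05, a blocked resource at usage 0.78 stays blocked (above the 0.75 watermark), which is exactly the flip-flop the delta is meant to prevent.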
Will push out a new cluster state bundle indicating cluster feed blocked
if one or more nodes in the cluster have one or more resources exhausted.
Similarly, a new state will be pushed out once no nodes have exhausted
resources any more.
The feed block description currently contains up to 3 separate exhausted
resources, possibly across multiple nodes.
A cluster-level event is emitted for both the block and unblock edges.
No hysteresis is present yet, so if a node is oscillating around a block
limit, the cluster state will oscillate with it.
VespaZooKeeperServer is enough; ZooKeeperProvider is just an
unnecessary extra layer. In addition, neither provides any guarantee
that the server has started and is working. ClusterController has
code that verifies that connecting to ZooKeeper works, which should
be sufficient.
hmusum/disallow-clustercontroller-with-no-zookeeper-cluster
Cannot find any usage of the nodes created. This might be some initialization
code to check that operations work, but I cannot see the need for it.
If this breaks something, we can at least document why it is needed.
Code in clustercontroller-apputils is now only used from clustercontroller-apps,
so those two modules can be merged
The flag controlled a config value read by the Cluster Controller. Therefore,
I have left the ModelContextImpl.Properties method and its implementation
(now always returning true), but the model has stopped using that method
internally, and the config is no longer used in the CC.
The field in fleetcontroller.def is left unchanged and documented as
deprecated.
This makes the Cluster Controller use the
vds.datastored.bucket_space.buckets_total metric (dimension
bucketSpace=default) to determine whether a content node manages zero
buckets, and if so, allows the node to go permanently down. This is used
when a node is retiring and is to be removed from the application.
The change is guarded by the use-bucket-space-metric flag, default true. If
the new metric doesn't work as expected, we can revert to the current/old
metric by flipping the flag. The flag can be controlled per application.
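The zero-bucket check can be sketched as follows. The metric name comes from the commit message above; everything else (`RetirementCheckSketch`, the metric map) is invented for illustration and is not the actual Cluster Controller code:

```java
import java.util.Map;

// Hypothetical sketch: a retiring content node may be allowed to go
// permanently down once it manages zero buckets in the default bucket space,
// as reported by the buckets_total metric.
public final class RetirementCheckSketch {

    static final String BUCKETS_TOTAL = "vds.datastored.bucket_space.buckets_total";

    /**
     * Returns true when the default-bucket-space metrics report zero buckets.
     * A missing metric is treated conservatively as "not safe to take down".
     */
    public static boolean mayGoPermanentlyDown(Map<String, Long> defaultBucketSpaceMetrics) {
        Long buckets = defaultBucketSpaceMetrics.get(BUCKETS_TOTAL);
        return buckets != null && buckets == 0;
    }
}
```

Treating an absent metric as "not safe" matches the guarded rollout described above: if the new metric misbehaves, the node simply is not taken permanently down.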
The feature has been default on since late May 2018.
Lets the cluster controller use new protocols for sending compressed cluster
state bundles, but without triggering implicit Maintenance edges for nodes in
the default bucket space. Also allows for easy live reconfiguration when
global document types are added or removed.
buckets in a bucket space before it is considered complete.
* this implicitly wires in a Metric, allowing handler invocations
to be measured in the ThreadedRequestHandler superclass.
vespa-engine/vekterli/re-enable-synchronous-set-node-state
Re-enable synchronous set node state with additional safeguards
Prevents an unstable cluster from potentially holding up all
container request processing threads indefinitely.
Deadline errors are translated into HTTP 504 errors to REST API clients.
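The bounded wait described above can be sketched as follows. The names and the exact status mapping are illustrative, not the actual REST API code; only the 504-on-deadline translation comes from the commit message:

```java
import java.util.function.BooleanSupplier;

// Minimal sketch: a synchronous set-node-state request waits at most until
// its deadline, so an unstable cluster cannot hold a container request
// thread indefinitely. An expired deadline becomes HTTP 504 for the client.
public final class SetNodeStateDeadlineSketch {

    static final int OK = 200;
    static final int GATEWAY_TIMEOUT = 504; // returned when the deadline expires

    /** Polls until the state change completes or the deadline passes. */
    public static int waitForStateChange(BooleanSupplier done, long deadlineMillis) {
        while (System.currentTimeMillis() < deadlineMillis) {
            if (done.getAsBoolean()) return OK;
            try {
                Thread.sleep(10); // bounded wait instead of blocking forever
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        return done.getAsBoolean() ? OK : GATEWAY_TIMEOUT;
    }
}
```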