| Commit message | Author | Age | Files | Lines |

There is no cluster controller redundancy with 2 nodes,
and this leads to operational problems.

* remove tcp_abort_on_overflow setting
* remove tenant node flag / cluster size quota
* remove quota tests

- Closes a loophole where the suggestion made will be lower than the current
  allocation if the current allocation is the max need observed over the last week.
- Since we now store the suggestion even if it is current, we check at read time
  whether to suggest, and also refrain from making suggestions inside
  the autoscaling interval.
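The read-time check described above might look roughly like this (an illustrative Python sketch; the function name, parameters, and interval logic are assumptions, not Vespa's actual implementation):

```python
from datetime import datetime, timedelta

def should_display_suggestion(suggested, current, last_scaling_event,
                              now, autoscaling_interval):
    """Decide at read time whether a stored suggestion is worth showing.

    Hide suggestions that do not exceed the current allocation (the stored
    suggestion may equal or fall below it), and stay quiet while still
    inside the autoscaling interval after the last scaling event.
    """
    if suggested <= current:
        return False
    return now - last_scaling_event >= autoscaling_interval

# Example: a suggestion above the current allocation, made well after the
# last scaling event, is shown.
now = datetime(2021, 1, 10)
shown = should_display_suggestion(
    suggested=12, current=8,
    last_scaling_event=datetime(2021, 1, 1),
    now=now, autoscaling_interval=timedelta(hours=12))
```

Storing the suggestion unconditionally and filtering at read time keeps the stored state simple while still closing the loophole.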

Bratseth/suggest on human scale

Since suggestions are consumed by humans, they should change on
the time scale of human decision making.

mem.total.util included disk cache, which makes it unsuitable as a
regulation target.

Reject endpoints with clashing non-compactable IDs

Remove redundant compatibility with old format

Record scaling event completion

vespa-engine/revert-15614-bratseth/record-scaling-event-completion
This reverts commit 49ecd29903215b133505f316773631ec9161ff44, reversing
changes made to ed58cd5826de9da0ed6a963a35c1246abebac1e4.

vespa-engine/bratseth/record-scaling-event-completion
Store the 15 last autoscaling events

Allow a grace period after node re-activation

Set stateful property for relevant clusters

Adds a 'minCount' field to the shared host Jackson flag, denoting the minimum
number of "shared hosts" that must exist; otherwise the deficit will be
provisioned by DynamicProvisioningMaintainer.
A "shared host" is one that would be considered for allocation if current
tenant node allocations were removed: it must be a tenant host, cannot be an
exclusiveTo host, etc.
minCount requires setting at least one shared host.
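The deficit the maintainer would provision can be sketched as follows. Only the 'minCount' field is taken from the commit message; the overall flag shape and the helper name are hypothetical illustrations:

```python
# Hypothetical shape of the shared-host flag value: "minCount" is the new
# field described above; the other field names are purely illustrative.
shared_host_flag = {
    "resources": [{"vcpu": 48, "memoryGb": 384}],
    "minCount": 5,
}

def shared_host_deficit(current_shared_hosts, flag):
    """Number of hosts DynamicProvisioningMaintainer would have to
    provision to reach the configured minimum of shared hosts."""
    return max(0, flag.get("minCount", 0) - len(current_shared_hosts))

deficit = shared_host_deficit(["host1", "host2"], shared_host_flag)  # 3
```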

vespa-engine/hakonhall/allow-preprovision-capacity-on-partially-filled-hosts
Allow preprovision capacity on partially filled hosts

Adds new functionality that can be disabled by setting the
compact-preprovision-capacity flag to false.
preprovision-capacity can now be satisfied by hosts with spare resources. The
DynamicProvisioningMaintainer does this as follows:
1. For each cluster in preprovision-capacity, try to
   a. allocate the cluster using NodePrioritizer.
   b. If there is a deficit, provision the deficit with HostProvisioner, which
      may provision larger shared hosts depending on shared-hosts, and retry
      (1) from the first cluster again.
   c. Otherwise, pretend the nodes are allocated and go to the next cluster.
2. Once all of preprovision-capacity has been successfully allocated, any empty
   hosts are excess and can be deprovisioned.
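The retry-from-the-top structure of step 1 can be illustrated with a heavily simplified Python sketch (capacity reduced to a single number; every name here is hypothetical, not Vespa's actual Java code):

```python
def satisfy_preprovision_capacity(clusters, hosts):
    """Greedy allocate-or-provision loop, restarting after each new host.

    `clusters` is a list of resource demands; `hosts` maps host name to
    free capacity. Returns the hosts that received an allocation; real
    allocations are only pretended, as in the maintainer.
    """
    while True:
        used = set()
        trial = dict(hosts)  # tentative free capacity for this pass
        for demand in clusters:
            host = next((h for h, free in trial.items() if free >= demand),
                        None)
            if host is None:
                # Deficit: provision a host large enough, then retry the
                # whole loop from the first cluster with the new host set.
                hosts[f"host{len(hosts)}"] = max(demand, 10)
                break
            trial[host] -= demand
            used.add(host)
        else:
            # Every cluster fit; hosts not in `used` are excess.
            return used

# Example: two demands of 4, one existing host with free capacity 5,
# so one extra host gets provisioned on the retry.
hosts = {"host0": 5}
used = satisfy_preprovision_capacity([4, 4], hosts)
```

Restarting from the first cluster after each provisioning keeps the packing deterministic at the cost of repeating earlier placements.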

vespa-engine/hakonhall/allow-allocating-to-a-provisioned-tenant-host
Allow allocating to a provisioned tenant host

This PR changes (A) and (B) described below, which effectively allows
the allocation of nodes to a dynamically provisioned tenant host.
A. In NodePrioritizer, when converting a VirtualNodeCandidate to a
ConcreteNodeCandidate with 'withNode()', primary IP addresses are currently
picked from the IP pool of the parent host and assigned to the node, along with
the corresponding hostname found using a DNS resolver.
This PR allows a ConcreteNodeCandidate to be created WITH a hostname, but
WITHOUT primary IP addresses, when 1. the parent host has no IP addresses in
the pool, AND 2. there are hostnames in the address pool not yet assigned to
any child node.
This may happen for a brief period just after provisioning in a dynamically
provisioned zone: an asynchronous process is supposed to add IP addresses to
the pool (based on the pool hostnames) before the host can become active, and
also to update the child nodes with their primary IP addresses accordingly.
Both hold the unallocated lock to guarantee atomicity.
B. In NodePrioritizer.addCandidatesOnExistingHosts(),
HostCapacity.hasCapacity() will now return true when (a) the parent host is a
tenant host, (b) there are NO IP addresses in the pool, and (c) there is a
hostname in the pool not assigned to any child node.
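The new special case (B) can be sketched in Python. This models only the added rule for freshly provisioned tenant hosts, not the rest of hasCapacity(); the types and field names are illustrative stand-ins, not Vespa's Java code:

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    """Minimal stand-in for a parent host; field names are illustrative."""
    is_tenant_host: bool
    ip_pool: list = field(default_factory=list)           # pooled IP addresses
    hostname_pool: list = field(default_factory=list)     # pooled hostnames
    assigned_hostnames: set = field(default_factory=set)  # taken by children

def has_capacity_for_new_child(parent: Host) -> bool:
    """A freshly provisioned tenant host whose IP pool is still empty can
    accept a child as long as a pooled hostname remains unassigned."""
    return (parent.is_tenant_host                           # (a)
            and not parent.ip_pool                          # (b) no IPs yet
            and any(h not in parent.assigned_hostnames      # (c) free hostname
                    for h in parent.hostname_pool))

# Example: host just provisioned, IP pool not yet populated, one free
# hostname in the pool.
fresh = Host(is_tenant_host=True, hostname_pool=["node1.example"])
```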