path: root/storage/src/tests
Each entry below: commit subject (author, date, files changed, lines removed/added), followed by the commit message body.
* Merge pull request #13706 from vespa-engine/vekterli/btree-bucket-db-support-on-content-node (Tor Brede Vekterli, 2020-06-30, 3 files, -164/+261)
  Create generic B-tree bucket DB and content node DB implementation
* Address review comments (Tor Brede Vekterli, 2020-06-29, 1 file, -7/+6)
  Also rewrite some GMock macros that triggered Valgrind warnings due to default test object printers accessing uninitialized memory.
* Wire config for enabling content node B-tree bucket DB (Tor Brede Vekterli, 2020-06-25, 1 file, -8/+4)
* Create generic B-tree bucket DB and content node DB implementation (Tor Brede Vekterli, 2020-06-25, 2 files, -156/+258)
  This is the first stage of removing the legacy DB implementation. Support for B-tree-specific functionality such as lock-free snapshot reads will be added soon. This commit is just for feature parity.

  Abstract away the actual database implementation so it can be chosen dynamically at startup. This abstraction does incur some overhead via call indirections and type erasure of callbacks, so it will likely be removed once the transition to the new B-tree DB has been completed.

  Since the algorithms used for bucket key operations are so similar between the content node and the distributor, a generic B-tree-backed bucket database has been created. The distributor DB will be rewritten around this code very soon.

  Due to the strong coupling between bucket locking and the actual DB implementation details, the new bucket DB has a fairly significant code overlap with the legacy implementation. This is to avoid spending time abstracting away and factoring out code for a legacy implementation that is to be removed entirely anyway.

  Remove existing LockableMap functionality that is unused or only used by tests.
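  As a rough illustration of the abstraction this commit message describes (an implementation-agnostic bucket DB interface chosen at startup, with type-erased callbacks for iteration), a minimal sketch follows. All names here (BucketDatabase, TreeBucketDatabase, make_bucket_db) are invented for the example and do not reflect the actual Vespa classes.

  ```cpp
  #include <cstdint>
  #include <functional>
  #include <iostream>
  #include <map>
  #include <memory>

  // Hypothetical entry type; real entries carry bucket info (doc count, checksum, ...).
  struct BucketEntry {
      uint64_t bucket_key;
      uint32_t doc_count;
  };

  // Implementation-agnostic interface. Callbacks are type-erased via std::function,
  // which is the call-indirection overhead the commit message refers to.
  class BucketDatabase {
  public:
      virtual ~BucketDatabase() = default;
      virtual void update(const BucketEntry& entry) = 0;
      virtual void for_each(const std::function<void(const BucketEntry&)>& visitor) const = 0;
  };

  // Stand-in tree-backed implementation (std::map used here instead of a real B-tree).
  class TreeBucketDatabase : public BucketDatabase {
      std::map<uint64_t, BucketEntry> entries_;
  public:
      void update(const BucketEntry& entry) override { entries_[entry.bucket_key] = entry; }
      void for_each(const std::function<void(const BucketEntry&)>& visitor) const override {
          for (const auto& [key, entry] : entries_) visitor(entry);
      }
  };

  // The concrete implementation is chosen once, at startup, from config.
  std::unique_ptr<BucketDatabase> make_bucket_db(bool use_btree) {
      (void)use_btree;  // a legacy implementation would be returned in the other branch
      return std::make_unique<TreeBucketDatabase>();
  }

  int main() {
      auto db = make_bucket_db(true);
      db->update({0x8000000000000001ULL, 42});
      db->for_each([](const BucketEntry& e) {
          std::cout << e.bucket_key << " docs=" << e.doc_count << '\n';
      });
  }
  ```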
* Use find_package to find gtest library. (Tor Egge, 2020-06-29, 10 files, -10/+20)
* Remove unused legacy bucket DB functionality (Tor Brede Vekterli, 2020-06-03, 1 file, -149/+0)
* Test that single Get sent by update op works with tombstones (Tor Brede Vekterli, 2020-05-26, 1 file, -0/+57)
* Handle tombstones in GetOperation (Tor Brede Vekterli, 2020-05-26, 1 file, -16/+84)
  If the newest document version is a tombstone, behave as if the document was not found at all. Since we still track replica consistency, this should work as expected for multi-phase update operations as well.
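  As an illustration of the tombstone rule described above (newest version wins; if that version is a tombstone, report "not found"), here is a minimal sketch. The types and field names are invented for the example and do not mirror the actual GetOperation code.

  ```cpp
  #include <cstdint>
  #include <optional>
  #include <string>
  #include <vector>

  // Hypothetical reply from one replica: the document timestamp and whether
  // that version is a tombstone (a remove entry).
  struct ReplicaGetResult {
      uint64_t timestamp;
      bool is_tombstone;
      std::optional<std::string> document;  // empty for tombstones and misses
  };

  // Pick the newest version across replicas. If the newest version is a
  // tombstone, behave exactly as if the document was never found.
  std::optional<std::string> resolve_newest(const std::vector<ReplicaGetResult>& replies) {
      const ReplicaGetResult* newest = nullptr;
      for (const auto& r : replies) {
          if (newest == nullptr || r.timestamp > newest->timestamp) {
              newest = &r;
          }
      }
      if (newest == nullptr || newest->is_tombstone) {
          return std::nullopt;  // "not found at all"
      }
      return newest->document;
  }
  ```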
* - Update metrics less often by removing the forceEventLogging altogether. (Henning Baldersheim, 2020-05-13, 1 file, -3/+3)
  - Let default bucket iteration work in smaller chunks with shorter waits.
* Remove unused clearResult method, and use std::lock_guard (Henning Baldersheim, 2020-05-08, 1 file, -4/+1)
* Use a lock to ensure it is thread safe. (Henning Baldersheim, 2020-05-08, 2 files, -6/+18)
* Add async update and follow up on PR comments. (Henning Baldersheim, 2020-05-05, 3 files, -9/+8)
* Implement async put (Henning Baldersheim, 2020-05-04, 7 files, -41/+78)
  Implement async remove.
* Merge branch 'master' into vekterli/remove-deprecated-bucket-disk-move-functionality (Henning Baldersheim, 2020-05-04, 7 files, -71/+36)
  Conflicts: storage/src/tests/persistence/diskmoveoperationhandlertest.cpp
| * Revert "- Implement async put"Harald Musum2020-05-048-72/+37
| |
* Remove deprecated bucket cross-disk move functionality (Tor Brede Vekterli, 2020-05-04, 9 files, -323/+12)
  The notion of multiple disks hasn't been supported since we removed VDS, and likely won't be in the future either.
* - Implement async put (Henning Baldersheim, 2020-05-04, 8 files, -37/+72)
  - Move result processing to MessageTracker
  - Wire putAsync through provider error wrapper too.
  - Handle both sync and async replies in tests.
* Remove deprecated BucketIntegrityChecker (Tor Brede Vekterli, 2020-04-30, 3 files, -310/+0)
  Not in use after VDS was removed.
* - Add async interface to put (Henning Baldersheim, 2020-04-29, 10 files, -136/+147)
  - Use MessageTracker for keeping context.
  - Implement putAsync, but still use it synchronously.
* Use rvalue qualifier (Henning Baldersheim, 2020-04-28, 2 files, -6/+6)
* getReplySP => stealReplySP (Henning Baldersheim, 2020-04-28, 2 files, -6/+6)
* Implement hasReply to avoid copying the shared_ptr just to peek at the result. (Henning Baldersheim, 2020-04-28, 4 files, -23/+19)
* Merge pull request #13084 from vespa-engine/vekterli/optimize-btree-find-parents-with-fix (Tor Brede Vekterli, 2020-04-28, 1 file, -36/+94)
  Optimize B-tree bucket DB lookup with used-bits aggregation
* Optimize B-tree bucket DB lookup with used-bits aggregation (Tor Brede Vekterli, 2020-04-27, 1 file, -36/+94)
  By tracking the minimum used bits count across all buckets in the database we can immediately start seeking at that implicit level in the tree, as we know no parent buckets can exist above that level.

  Local synthetic benchmarking shows the following results with a DB size of 917504 buckets and performing getParents for all buckets in sequence:

  Before optimization:
  - B-tree DB: 0.593321 seconds
  - Legacy DB: 0.227947 seconds

  After optimization:
  - B-tree DB: 0.191971 seconds
  - Legacy DB: (unchanged)
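  The used-bits optimization above can be illustrated with a small sketch: by remembering the minimum "used bits" value over all buckets in the DB, a getParents lookup can skip probing for ancestors at levels shallower than that minimum, since none can exist there. The key encoding and names below are simplified for illustration and are not the actual Vespa implementation.

  ```cpp
  #include <algorithm>
  #include <cstdint>
  #include <map>
  #include <tuple>
  #include <vector>

  // Simplified bucket key: (used_bits, the lowest used_bits bits of the bucket id).
  // Real Vespa bucket keys pack this into a single 64-bit integer.
  struct BucketKey {
      uint8_t  used_bits;   // at most 58 in this model
      uint64_t bits;        // only the lowest used_bits bits are significant
      bool operator<(const BucketKey& o) const {
          return std::tie(used_bits, bits) < std::tie(o.used_bits, o.bits);
      }
  };

  class BucketDb {
      std::map<BucketKey, int> entries_;  // value type irrelevant for the sketch
      uint8_t min_used_bits_ = 58;        // aggregated lower bound (58 = max level here)
  public:
      void insert(BucketKey key, int value) {
          min_used_bits_ = std::min(min_used_bits_, key.used_bits);
          entries_[key] = value;
      }
      // Find all buckets that contain (are parents of, or equal to) `bucket`.
      // Instead of probing from used_bits = 1, start at the aggregated minimum:
      // no bucket with fewer used bits exists anywhere in the DB.
      std::vector<BucketKey> get_parents(BucketKey bucket) const {
          std::vector<BucketKey> result;
          for (uint8_t ub = min_used_bits_; ub <= bucket.used_bits; ++ub) {
              BucketKey candidate{ub, bucket.bits & ((uint64_t{1} << ub) - 1)};
              if (entries_.count(candidate)) {
                  result.push_back(candidate);
              }
          }
          return result;
      }
  };
  ```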
* Remove flush from provider interface. (Henning Baldersheim, 2020-04-27, 4 files, -25/+4)
* Remove batching of messages that has no effect in favor of making async operations easier to implement. (Henning Baldersheim, 2020-04-27, 2 files, -85/+0)
* Prepare for making persistence layer async. (Henning Baldersheim, 2020-04-26, 2 files, -19/+19)
  Avoid state in the thread.
* Revert "Optimize B-tree bucket DB lookup with used-bits aggregation"Tor Brede Vekterli2020-04-251-33/+0
|
* Optimize B-tree bucket DB lookup with used-bits aggregation (Tor Brede Vekterli, 2020-04-24, 1 file, -0/+33)
  By tracking the minimum used bits count across all buckets in the database we can immediately start seeking at that implicit level in the tree, as we know no parent buckets can exist above that level.

  Local synthetic benchmarking shows the following results with a DB size of 917504 buckets and performing getParents for all buckets in sequence:

  Before optimization:
  - B-tree DB: 0.593321 seconds
  - Legacy DB: 0.227947 seconds

  After optimization:
  - B-tree DB: 0.213738 seconds
  - Legacy DB: (unchanged)
* Some libraries print "0x0" for a null void ptr, (Tor Egge, 2020-04-22, 1 file, -1/+1)
* Allow temporarily inhibiting maintenance ops when under load (Tor Brede Vekterli, 2020-04-17, 1 file, -0/+8)
  If requests or responses from external sources are being constantly processed as part of the distributor tick, allow for up to N ticks to skip maintenance scanning, where N is a configurable number. This reduces the amount of CPU time spent on maintenance operations when the node has a lot of incoming data to deal with.
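  A rough sketch of the inhibition logic described above, assuming a simple per-tick check: skip the maintenance scan while external messages are being processed, but never for more than a configured number of consecutive ticks. The class and member names are invented for illustration; the real distributor tick loop is considerably more involved.

  ```cpp
  #include <cstdint>

  class TickLoop {
      uint32_t max_inhibited_ticks_;   // the configurable N from the commit message
      uint32_t inhibited_ticks_ = 0;
      uint64_t maintenance_scans_ = 0;
  public:
      explicit TickLoop(uint32_t max_inhibited_ticks)
          : max_inhibited_ticks_(max_inhibited_ticks) {}

      // One distributor tick: external work may postpone maintenance scanning,
      // but only for up to N consecutive ticks.
      void tick(bool processed_external_messages) {
          if (processed_external_messages && inhibited_ticks_ < max_inhibited_ticks_) {
              ++inhibited_ticks_;
              return;                  // busy with client traffic; skip the scan
          }
          inhibited_ticks_ = 0;
          ++maintenance_scans_;        // stand-in for scanning the next bucket
      }
      uint64_t maintenance_scans() const { return maintenance_scans_; }
  };
  ```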
* Remove redundant bucket DB lookup in persistence reply handling (Tor Brede Vekterli, 2020-04-16, 1 file, -5/+10)
  Bucket DB updating happened unconditionally anyway; this was only used for failing operations in an overly pessimistic way. Removing this lookup has two benefits:
  - Less CPU spent in the DB
  - Less expected impact on feeding during node state transitions, since fewer operations will have to be needlessly retried by the client.

  Rationale: an operation towards a given bucket completes (i.e. is ACKed by all its replica nodes) at time t and the bucket is removed from the DB at time T. There is no fundamental change in correctness or behavior from the client's perspective whether the order of events is tT or Tt. Both are equally valid, as the state transition edge happens independently of any reply processing.
* Only update bucket DB memory statistics at certain intervals (Tor Brede Vekterli, 2020-04-07, 1 file, -0/+46)
  B-tree/datastore stats can be expensive to sample, so don't do this after every full DB iteration. For now, wait at least 30s.
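  The interval gating described above amounts to a simple time check after each full DB iteration. A minimal sketch follows; the class name and structure are invented and only the 30-second interval is taken from the commit message.

  ```cpp
  #include <chrono>
  #include <optional>

  class MemoryStatsSampler {
      using Clock = std::chrono::steady_clock;
      std::chrono::seconds min_interval_{30};
      std::optional<Clock::time_point> last_sample_;
  public:
      // Called after every full bucket DB iteration; only runs the (expensive)
      // B-tree/datastore statistics sampling if enough time has passed.
      template <typename SampleFn>
      void maybe_sample(SampleFn&& sample_fn) {
          const auto now = Clock::now();
          if (!last_sample_ || now - *last_sample_ >= min_interval_) {
              sample_fn();
              last_sample_ = now;
          }
      }
  };
  ```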
* Revert "Bypass communicationmanager Q"Henning Baldersheim2020-04-051-2/+2
|
* Merge pull request #12810 from vespa-engine/vekterli/add-distributor-bucket-db-memory-usage-metrics (Tor Brede Vekterli, 2020-04-03, 1 file, -0/+35)
  Add distributor bucket db memory usage metrics
* Add memory usage metrics for distributor bucket databases (Tor Brede Vekterli, 2020-04-02, 1 file, -0/+35)
* Bypass communicationmanager Q (Henning Baldersheim, 2020-04-02, 1 file, -2/+2)
* Reduce code duplication in test code. (Tor Egge, 2020-03-30, 2 files, -14/+4)
* Handle newer gtest versions where the legacy API is deprecated. (Tor Egge, 2020-03-29, 2 files, -0/+10)
* Track metrics for new inconsistent update phases (Tor Brede Vekterli, 2020-03-24, 1 file, -0/+23)
  Reuses the old update-get metric for the single full Get sent after the initial metadata-only phase. Adds new metric set for the initial metadata Gets.
* Add comments and some extra safety handling of single Get command (Tor Brede Vekterli, 2020-03-17, 1 file, -1/+16)
* Add initial metadata-only phase to inconsistent update handling (Tor Brede Vekterli, 2020-03-16, 3 files, -351/+604)
  If bucket replicas are inconsistent, the common case is that only a small subset of documents contained in the buckets are actually mutually out of sync. The added metadata phase optimizes for such a case by initially sending Get requests to all divergent replicas that only ask for the timestamps (and no fields). This is a very cheap and fast operation. If all returned timestamps are in sync, the update can be restarted in the fast path. Otherwise, a full Get will _only_ be sent to the newest replica, and its result will be used for performing the update on the distributor itself, before pushing out the result as Puts. This is in contrast to today's behavior where full Gets are sent to all replicas. For users with large documents this can be very expensive.

  In addition, the metadata Get operations are sent with weak internal read consistency (as they do not need to read any previously written, possibly in-flight fields). This lets them bypass the main commit queue entirely.
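  A condensed sketch of the decision made after the metadata phase described above: if all replica timestamps agree, restart the update in the fast path; otherwise send a single full Get to the replica holding the newest version. All types and names here are invented for illustration; the real operation is asynchronous and spread across several classes.

  ```cpp
  #include <cstdint>
  #include <vector>

  // Invented, simplified types for illustration.
  struct ReplicaTimestamp { int node; uint64_t timestamp; };

  enum class NextPhase { FastPathRestart, FullGetFromNewest };

  // Phase 1 fetched only timestamps from the divergent replicas (cheap).
  // Decide what phase 2 should be, and report which node holds the newest version.
  NextPhase decide_after_metadata_phase(const std::vector<ReplicaTimestamp>& metadata,
                                        int& newest_node_out) {
      newest_node_out = -1;
      if (metadata.empty()) {
          return NextPhase::FastPathRestart;  // nothing divergent to reconcile
      }
      uint64_t newest = 0;
      bool all_equal = true;
      for (const auto& m : metadata) {
          if (m.timestamp != metadata.front().timestamp) {
              all_equal = false;
          }
          if (m.timestamp > newest) {
              newest = m.timestamp;
              newest_node_out = m.node;
          }
      }
      return all_equal ? NextPhase::FastPathRestart : NextPhase::FullGetFromNewest;
  }
  ```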
* Add count metric for number of documents garbage collected (Tor Brede Vekterli, 2020-02-24, 2 files, -8/+31)
  New distributor metric available as:
  ```
  vds.idealstate.garbage_collection.documents_removed
  ```
  Add documents-removed statistics to `RemoveLocation` responses, which is what GC is currently built around. Could technically have been implemented as a diff of before/after BucketInfo, but GC is very low priority, so many other mutating ops may have changed the bucket document set in the time span between sending the GC ops and receiving the replies.

  This relates to issue #12139
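  To illustrate the counting approach described above (summing the removed-document counts reported in the GC replies rather than diffing bucket info before and after), here is a small sketch. The reply type and function name are invented; only the metric name is taken from the commit message.

  ```cpp
  #include <cstdint>
  #include <vector>

  // Invented reply type: each RemoveLocation-style GC reply reports how many
  // documents it actually removed from its bucket.
  struct GcReply {
      uint64_t documents_removed;
  };

  // Accumulate the value backing
  // vds.idealstate.garbage_collection.documents_removed.
  uint64_t count_documents_removed(const std::vector<GcReply>& replies) {
      uint64_t total = 0;
      for (const auto& reply : replies) {
          total += reply.documents_removed;
      }
      return total;
  }
  ```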
* extend crypto engine api (Håvard Pettersen, 2020-02-13, 1 file, -1/+1)
  send spec for client connections to enable SNI as well as server name verification
* Add include statements needed by newer build environments. (Tor Egge, 2020-01-26, 1 file, -0/+1)
* Followup on code comments. (Henning Baldersheim, 2020-01-23, 1 file, -1/+1)
* Use a single chunk (Henning Baldersheim, 2020-01-23, 1 file, -27/+26)
* Merge pull request #11822 from vespa-engine/balder/reduce-bytebuffer-exposure (Henning Baldersheim, 2020-01-21, 3 files, -8/+5)
  Balder/reduce bytebuffer exposure
* Add stream method and use memcpy over casting. (Henning Baldersheim, 2020-01-21, 3 files, -3/+3)
* Make it known that getting serialized size will always be expensive. (Henning Baldersheim, 2020-01-20, 1 file, -1/+2)