path: root/staging_vespalib
Commit message · Author · Date · Files · Lines
* Merge pull request #20800 from vespa-engine/balder/add-an-interface-that-can-accept-a-tasklist (Henning Baldersheim, 2022-01-15; 5 files, -15/+62)
  Add an interface that can post a list of tasks instead of only one at …
* Add an interface that can post a list of tasks instead of only one at a time. (Henning Baldersheim, 2022-01-13; 5 files, -15/+62)
  The intention is to make it cheaper to post many small tasks. It requires that the implementation adds support if it finds it worthwhile.
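The entry above describes an API addition; as a rough illustration (not the actual vespalib interface, all names here are assumptions), such a batch-posting method can be a virtual function with a trivial default, so implementations only override it when batching pays off:

    // Illustrative sketch only; not the actual vespalib API.
    #include <memory>
    #include <vector>

    struct Task {
        virtual ~Task() = default;
        virtual void run() = 0;
    };

    struct Executor {
        using TaskUP = std::unique_ptr<Task>;
        virtual ~Executor() = default;

        // Existing single-task entry point.
        virtual void execute(TaskUP task) = 0;

        // Hypothetical batch entry point. The default just forwards the tasks
        // one by one, so an implementation only overrides it if it finds
        // batching worthwhile.
        virtual void executeTasks(std::vector<TaskUP> tasks) {
            for (TaskUP &task : tasks) {
                execute(std::move(task));
            }
        }
    };

The saving for many small tasks would come from an implementation overriding executeTasks to take its queue lock and signal its workers once for the whole list instead of once per task.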
* Use else instead of initializing to 0. (Henning Baldersheim, 2022-01-13; 1 file, -1/+3)
* Differentiate between numTasks called when holding the lock and not. (Henning Baldersheim, 2022-01-13; 2 files, -11/+22)
* Add support for using an unbounded Q -> nonblocking. (Henning Baldersheim, 2022-01-13; 4 files, -24/+101)
  - It uses a synchronized overflow Q if the main Q is full.
  - Long term, the intention is that the blocking option will be removed.
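A minimal sketch of the overflow idea, under the assumption of a single mutex guarding both queues (the real change may synchronize the main queue differently; names are illustrative): pushes never block, because anything the bounded main queue cannot take is parked in an overflow queue that the consumer drains once the main queue is empty.

    // Illustrative sketch only; not the actual implementation.
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <optional>

    template <typename T>
    class OverflowingQueue {
    public:
        explicit OverflowingQueue(std::size_t mainCapacity) : _capacity(mainCapacity) {}

        void push(T value) {                      // never blocks the producer
            std::lock_guard guard(_lock);
            if (_main.size() < _capacity) {
                _main.push_back(std::move(value));
            } else {
                _overflow.push_back(std::move(value));
            }
        }

        std::optional<T> pop() {
            std::lock_guard guard(_lock);
            if (_main.empty()) {
                _main.swap(_overflow);            // drain the overflow afterwards
            }
            if (_main.empty()) {
                return std::nullopt;
            }
            T value = std::move(_main.front());
            _main.pop_front();
            return value;
        }

    private:
        std::mutex    _lock;
        std::size_t   _capacity;
        std::deque<T> _main;
        std::deque<T> _overflow;
    };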
* Add noexcept specifiers. (Tor Egge, 2021-12-11; 1 file, -1/+1)
* Merge pull request #20438 from vespa-engine/balder/add-init_fun-to-vespalib_Thread-too (Henning Baldersheim, 2021-12-09; 1 file, -2/+1)
  Add init_fun to vespalib::Thread too to figure out what the thread is…
* Add init_fun to vespalib::Thread too to figure out what the thread is used for. (Henning Baldersheim, 2021-12-09; 1 file, -2/+1)
* Reduce watermark from 50% to 10% to get faster reaction. (Henning Baldersheim, 2021-12-09; 1 file, -1/+1)
* Compute watermarkRatio once. (Henning Baldersheim, 2021-12-06; 2 files, -4/+5)
* Add testing of watermark and change it to keep the ratio to the taskLimit that it had at initial construction time. (Henning Baldersheim, 2021-12-06; 3 files, -7/+15)
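A small sketch of how keeping the watermark as a ratio can look (all names are hypothetical): the ratio is computed once at construction, and a later change of the task limit rescales the watermark instead of leaving it at a stale absolute value.

    // Illustrative sketch only; not the actual implementation.
    #include <cstddef>

    class TaskLimiter {
    public:
        TaskLimiter(std::size_t taskLimit, std::size_t watermark)
            : _watermarkRatio(double(watermark) / double(taskLimit)),  // computed once
              _taskLimit(taskLimit),
              _watermark(watermark)
        {}
        void setTaskLimit(std::size_t taskLimit) {
            _taskLimit = taskLimit;
            // Rescale so the watermark keeps the ratio it had at construction time.
            _watermark = std::size_t(_taskLimit * _watermarkRatio);
        }
    private:
        double      _watermarkRatio;
        std::size_t _taskLimit;
        std::size_t _watermark;
    };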
* Round up to a power of 2 AFTER you have capped the taskLimit. (Henning Baldersheim, 2021-12-06; 1 file, -1/+1)
* Modify test to trigger the case where the watermark would prevent a correct power of 2 task limit when reducing below the watermark. (Henning Baldersheim, 2021-12-06; 1 file, -13/+15)
* Only issue a wakeup if there is a good reason to. (Henning Baldersheim, 2021-12-03; 1 file, -1/+3)
* GC unused code. (Henning Baldersheim, 2021-12-02; 1 file, -8/+0)
* Use the wakeup service as the main source for frequent regular wakeups. (Henning Baldersheim, 2021-12-02; 3 files, -10/+35)
  - Keep a self-wakeup of 100 ms.
  - Avoid using default arguments, to be able to find the call site.
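As a sketch of what a shared wakeup service can look like (interface and names are assumptions, not the actual implementation): one background thread periodically pokes every registered target, so each executor does not need its own tight self-wakeup timer.

    // Illustrative sketch only; not the actual implementation.
    #include <atomic>
    #include <chrono>
    #include <functional>
    #include <mutex>
    #include <thread>
    #include <vector>

    class WakeupService {
    public:
        explicit WakeupService(std::chrono::milliseconds interval)
            : _interval(interval), _running(true), _thread([this] { run(); }) {}
        ~WakeupService() {
            _running = false;
            _thread.join();
        }
        void registerWakeup(std::function<void()> wakeup) {
            std::lock_guard guard(_lock);
            _targets.push_back(std::move(wakeup));
        }
    private:
        void run() {
            while (_running) {
                std::this_thread::sleep_for(_interval);
                std::lock_guard guard(_lock);
                for (const auto &target : _targets) {
                    target();                    // poke every registered executor
                }
            }
        }
        std::chrono::milliseconds _interval;
        std::atomic<bool>         _running;
        std::mutex                _lock;
        std::vector<std::function<void()>> _targets;
        std::thread               _thread;       // last member: run() uses the ones above
    };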
* Add a wakeup service. (Henning Baldersheim, 2021-11-29; 1 file, -2/+2)
* Remove the need for Syncable. (Henning Baldersheim, 2021-11-26; 2 files, -2/+3)
* vespalib::SequencedTaskExecutor uses std::optional. Add needed include. (Tor Egge, 2021-11-17; 1 file, -0/+1)
* Rename test to reflect current behaviour. (Henning Baldersheim, 2021-11-16; 1 file, -1/+1)
* Use std::optional instead of a separate class. (Henning Baldersheim, 2021-11-16; 2 files, -16/+6)
* Address both thread safety with regard to visibility of updates and the race for the last spots. (Henning Baldersheim, 2021-11-15; 2 files, -31/+51)
* If we lost the race for the last spots we need to use the second option. (Henning Baldersheim, 2021-11-14; 1 file, -3/+9)
* Add a fixed size table of 8 * num_executors with 16 bit entries. Use this for mapping the first components exactly. (Henning Baldersheim, 2021-11-13; 3 files, -32/+68)
  With more components than 8x we fall back to using a shrunk id of 8 bits as before. This enables perfect distribution for the first 8x, and then 'good enough' for the rest. The more there are, the less the impact of imperfect distribution will be.
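A sketch of the described mapping scheme (simplified: a hash map stands in for the fixed table of 16-bit entries, and all names are guesses): the first 8 * num_executors distinct component ids are assigned round-robin and therefore perfectly balanced, while later ids fall back to an 8-bit shrunk id.

    // Illustrative sketch only; not the actual implementation.
    #include <cstdint>
    #include <mutex>
    #include <unordered_map>

    class ExecutorIdMapper {
    public:
        explicit ExecutorIdMapper(uint16_t numExecutors)
            : _numExecutors(numExecutors),
              _maxExact(8u * numExecutors),
              _nextExecutor(0)
        {}
        uint16_t getExecutorId(uint64_t componentId) {
            std::lock_guard guard(_lock);
            auto found = _exact.find(componentId);
            if (found != _exact.end()) {
                return found->second;              // already mapped exactly
            }
            if (_exact.size() < _maxExact) {
                // The first 8 * numExecutors distinct components are assigned
                // round-robin, which keeps the distribution perfect.
                uint16_t id = _nextExecutor++ % _numExecutors;
                _exact.emplace(componentId, id);
                return id;
            }
            // Beyond that, fall back to an 8-bit shrunk id, which is only
            // 'good enough' but matters less the more components there are.
            uint8_t shrunk = uint8_t((componentId * 0x9e3779b97f4a7c15ULL) >> 56);
            return shrunk % _numExecutors;
        }
    private:
        std::mutex  _lock;
        uint16_t    _numExecutors;
        uint32_t    _maxExact;
        uint16_t    _nextExecutor;
        std::unordered_map<uint64_t, uint16_t> _exact;
    };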
* Test the distribution we get with 8 attributes and 4/8 threads. (Henning Baldersheim, 2021-11-10; 1 file, -0/+24)
* Let default watermark be at 50% instead of 10%. (Henning Baldersheim, 2021-11-09; 1 file, -1/+1)
  That will favour more frequent wakeups, and should give a more stable flow.
* Use alternate executor id for push stage when sharing sequenced task executor with invert stage. (Tor Egge, 2021-11-08; 2 files, -0/+18)
* Bundle fields using the same executor for the memory index. (Tor Egge, 2021-11-05; 1 file, -6/+6)
* Rename ISequencedTaskExecutor::sync() to sync_all(). (Tor Egge, 2021-10-28; 12 files, -34/+34)
* Update 2020 Oath copyrights. (gjoranv, 2021-10-27; 4 files, -4/+4)
* Update 2019 Oath copyrights. (gjoranv, 2021-10-27; 2 files, -2/+2)
* Foreground executors are never woken up. (Henning Baldersheim, 2021-10-22; 2 files, -2/+2)
* Properly set utilization. (Henning Baldersheim, 2021-10-22; 5 files, -10/+10)
* Track time outside of the idle loop. (Henning Baldersheim, 2021-10-22; 1 file, -2/+2)
* Rename executorCount -> threadCount. (Henning Baldersheim, 2021-10-22; 1 file, -1/+1)
* Add a metric for how many times a worker in a thread pool has woken up. (Henning Baldersheim, 2021-10-22; 7 files, -11/+38)
  Also track the idle time a worker has and add a metric for the utilization.
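A small sketch of the bookkeeping this implies (names are hypothetical): each worker counts its wakeups and accumulates idle time, and utilization is the share of elapsed wall time not spent idle.

    // Illustrative sketch only; not the actual metric code.
    #include <chrono>
    #include <cstdint>

    struct WorkerStats {
        uint64_t wakeups = 0;                             // times the worker was woken up
        std::chrono::steady_clock::duration idleTime{};   // time spent waiting for work
    };

    // Utilization over a period: the fraction of wall time not spent idle (0.0 - 1.0).
    double utilization(const WorkerStats &stats, std::chrono::steady_clock::duration elapsed) {
        double total = std::chrono::duration<double>(elapsed).count();
        double idle  = std::chrono::duration<double>(stats.idleTime).count();
        return (total > 0.0) ? (total - idle) / total : 0.0;
    }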
* Reduce to 3 tries as this is a rather expensive operation with many smaps. (Henning Baldersheim, 2021-10-20; 1 file, -1/+1)
* Silence info message by reducing it to debug. (Henning Baldersheim, 2021-10-20; 1 file, -1/+1)
* Use the ExecutorStats type directly. (Henning Baldersheim, 2021-10-19; 7 files, -17/+15)
* Update Verizon Media copyright notices. (gjoranv, 2021-10-07; 5 files, -5/+5)
* Update 2018 copyright notices. (gjoranv, 2021-10-07; 2 files, -2/+2)
* Update 2017 copyright notices. (gjoranv, 2021-10-07; 244 files, -244/+244)
* Reduce exposure of internal details to reduce number of includes. (Henning Baldersheim, 2021-06-30; 1 file, -1/+0)
* Include cassert when needed. (Tor Egge, 2021-06-04; 1 file, -0/+1)
* Add DistributorStripe thread pool with thread park/unpark support. (Tor Brede Vekterli, 2021-04-29; 1 file, -1/+1)
  To enable safe and well-defined access to underlying stripe data structures from the main distributor thread, the pool has functionality for "parking" and "unparking" all stripe threads:
  - Parking makes all threads go into a blocked holding pattern where it is guaranteed that they may not race with any other threads.
  - Unparking releases all threads from their holding pattern, allowing them to continue their event processing loop.
  Also adds a custom run loop for distributor threads that largely emulates the waiting semantics found in the current framework ticking thread pool run loop. But unlike the framework pool, there is no global mutex that must be acquired by all threads in the pool. All stripe event handling uses per-thread mutexes and condition variables. Global state is only accessed when thread parking is requested, which happens very rarely.
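A condensed sketch of the park/unpark handshake described above, reduced to one stripe thread and standard library primitives (none of these names come from the actual code): the stripe thread checks for a park request at a safe point in its event loop, reports that it is parked, and blocks on its own condition variable until unparked, so the controlling thread can access shared structures without racing.

    // Illustrative sketch only; not the actual storage/distributor code.
    #include <condition_variable>
    #include <mutex>

    class StripeThreadControl {
    public:
        // Called by the stripe thread at a safe point in its event loop.
        void check_park() {
            std::unique_lock lock(_mutex);
            while (_park_requested) {
                _parked = true;
                _state_changed.notify_all();   // tell the controller we are parked
                _state_changed.wait(lock);     // stay blocked until unparked
            }
            _parked = false;
        }
        // Called by the controlling thread around access to stripe data.
        void park() {
            std::unique_lock lock(_mutex);
            _park_requested = true;
            _state_changed.wait(lock, [this] { return _parked; });
        }
        void unpark() {
            std::lock_guard lock(_mutex);
            _park_requested = false;
            _state_changed.notify_all();
        }
    private:
        std::mutex              _mutex;        // per-stripe, not a global pool mutex
        std::condition_variable _state_changed;
        bool                    _park_requested = false;
        bool                    _parked = false;
    };

A pool along the lines of the commit message would hold one such control object per stripe thread and park or unpark them all together.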
* Use signed char when needed for base64 encoding. (Tor Egge, 2021-04-28; 1 file, -2/+2)
* Avoid using slow std::string and std::ifstream, just use asciistream. (Henning Baldersheim, 2021-03-19; 1 file, -7/+4)
* Remove duplicate headers. (Jon Bratseth, 2021-03-18; 1 file, -1/+0)
* Add copyright headers. (Jon Bratseth, 2021-03-18; 1 file, -0/+1)
* Ensure NameCollection can not be copied. (Henning Baldersheim, 2021-03-18; 4 files, -17/+12)