path: root/eval/src/tests/tensor/instruction_benchmark/instruction_benchmark.cpp
Commit message (Author, Date, Files changed, Lines removed/added)
* Update copyright (Jon Bratseth, 2023-10-09, 1 file, -1/+1)
* Remove inlining warnings (eval). (Tor Egge, 2022-02-26, 1 file, -0/+11)
* Update Verizon Media copyright notices. (gjoranv, 2021-10-07, 1 file, -1/+1)
* run benchmark with --smoke-test as unit test (Håvard Pettersen, 2021-03-05, 1 file, -14/+33)
* Merge pull request #16811 from vespa-engine/arnej/use-more-small-vectors (Håvard Pettersen, 2021-03-05, 1 file, -1/+2)
|\
| * loop_cnt[1] is invalid in ghost mode (Arne Juul, 2021-03-05, 1 file, -1/+2)
* | avoid bad scalar asserts (Håvard Pettersen, 2021-03-05, 1 file, -12/+13)
|/
* use size literals in eval (Arne Juul, 2021-02-15, 1 file, -17/+18)
* adjust param repo add options and remove seq_bias (Håvard Pettersen, 2021-02-05, 1 file, -95/+95)
* use GenSpec to generate test values (Håvard Pettersen, 2021-02-01, 1 file, -136/+93)
* forward or ignore index in relevant mixed tensor reduce cases (Håvard Pettersen, 2021-01-19, 1 file, -0/+8)
* forward index for concat of mixed tensor with dense tensor (Arne Juul, 2021-01-15, 1 file, -0/+6)
* Merge pull request #15764 from vespa-engine/arnej/move-dense-optimizers (Arne H Juul, 2020-12-09, 1 file, -1/+0)
|\
| * move to vespalib::eval namespace (Arne Juul, 2020-12-09, 1 file, -1/+0)
* | ghost support in instruction benchmark (Håvard Pettersen, 2020-12-09, 1 file, -8/+55)
|/
* only factory in interpreted function (Håvard Pettersen, 2020-12-03, 1 file, -37/+36)
* stop benchmarking old engine (Arne Juul, 2020-12-03, 1 file, -10/+6)
* remove simple tensor (Håvard Pettersen, 2020-12-02, 1 file, -1/+0)
* GC unused code (Arne Juul, 2020-11-25, 1 file, -1/+0)
* Merge pull request #15412 from vespa-engine/havardpe/improved-benchmarking-fa... (Arne H Juul, 2020-11-21, 1 file, -29/+80)
|\
| * use same loop_cnt when benchmarking if possible (Håvard Pettersen, 2020-11-20, 1 file, -6/+47)
| * each EvalOp gets its own stash, for more fairness (Håvard Pettersen, 2020-11-20, 1 file, -23/+33)
* | track CellType move (Arne Juul, 2020-11-20, 1 file, -1/+1)
|/
* combine dimensions and split reduce operations (Håvard Pettersen, 2020-11-19, 1 file, -1/+30)
* move "keep as-is" optimizers (Arne Juul, 2020-11-12, 1 file, -0/+6)
* benchmark some forms of join with number (Arne Juul, 2020-11-10, 1 file, -0/+18)
* untangle factory-based optimization pipeline from DefaultTensorEngine (Håvard Pettersen, 2020-11-03, 1 file, -22/+23)
* drop BM of PackedMixedTensorBuilderFactory (Arne Juul, 2020-10-26, 1 file, -3/+0)
* use a run-time flag instead of conditional compilation (Arne Juul, 2020-10-26, 1 file, -11/+13)
* partial duplicate of micro-benchmark (Arne Juul, 2020-10-25, 1 file, -5/+11)
* improve generic dense reduce with more robust cell ordering (Håvard Pettersen, 2020-10-22, 1 file, -20/+36)
* added mixed -> partial mixed peek cases (Håvard Pettersen, 2020-10-16, 1 file, -0/+2)
* added tensor peek benchmark (Håvard Pettersen, 2020-10-16, 1 file, -1/+94)
* added tensor lambda benchmark (Håvard Pettersen, 2020-10-16, 1 file, -1/+58)
* added encode/decode benchmark (Håvard Pettersen, 2020-10-16, 1 file, -0/+71)
* added tensor create benchmark (Håvard Pettersen, 2020-10-16, 1 file, -41/+86)
* extend map benchmark with number case (Håvard Pettersen, 2020-10-16, 1 file, -1/+6)
* benchmark GenericMap also (Arne Juul, 2020-10-13, 1 file, -0/+45)
* allow interpreted function to use new generic operations (Håvard Pettersen, 2020-10-12, 1 file, -61/+23)
* Merge pull request #14769 from vespa-engine/arnej/fix-concat-collapsing (Arne H Juul, 2020-10-08, 1 file, -0/+71)
|\
| * benchmark concat (Arne Juul, 2020-10-08, 1 file, -0/+71)
* | fast value to enable inlined sparse operations (Håvard Pettersen, 2020-10-07, 1 file, -4/+7)
|/
* benchmark merge (Håvard Pettersen, 2020-10-02, 1 file, -4/+68)
* generic reduce (Håvard Pettersen, 2020-10-02, 1 file, -19/+168)
* Implement new Value API in SparseTensor (Arne Juul, 2020-10-01, 1 file, -0/+6)
* improve benchmark report (Håvard Pettersen, 2020-09-29, 1 file, -14/+93)
* Merge pull request #14592 from vespa-engine/arnej/new-sparse-tensor-value-2 (Arne H Juul, 2020-09-28, 1 file, -3/+3)
|\
| * less asserts and parameters (Arne Juul, 2020-09-28, 1 file, -4/+0)
| * just hold std::vector<T> inside SparseTensorValue (Arne Juul, 2020-09-28, 1 file, -0/+4)
| * benchmark with new "adaptive" factory (Arne Juul, 2020-09-28, 1 file, -3/+3)