path: root/eval
Commit message | Author | Date | Files | Lines
* added some missing thats | Håvard Pettersen | 2020-05-29 | 1 | -2/+2
* revert unintended change | Håvard Pettersen | 2020-05-29 | 1 | -1/+1
* dense number join | Håvard Pettersen | 2020-05-29 | 8 | -4/+347
* dense tensor lambda | Håvard Pettersen | 2020-05-28 | 7 | -56/+305
* dense simple map | Håvard Pettersen | 2020-05-28 | 11 | -180/+231
* use index lookup table with shared cache | Håvard Pettersen | 2020-05-27 | 7 | -22/+307
* Move streaming operators to namespace searched by ADL. | Tor Egge | 2020-05-25 | 1 | -0/+4
* simple dense join | Håvard Pettersen | 2020-05-22 | 12 | -195/+626
* let compile cache use shared proton executor | Håvard Pettersen | 2020-05-19 | 3 | -1/+24
* dense single reduce | Håvard Pettersen | 2020-05-07 | 12 | -92/+450
* Avoid making copies of container elements. | Tor Egge | 2020-05-04 | 1 | -2/+2
* include local file first | Håvard Pettersen | 2020-05-04 | 1 | -1/+1
* multi-matmul | Håvard Pettersen | 2020-05-04 | 11 | -24/+522
* fix PR comments | Håvard Pettersen | 2020-04-30 | 7 | -13/+8
* added float cell range tests | Håvard Pettersen | 2020-04-30 | 1 | -0/+2
* lambda peek optimizer | Håvard Pettersen | 2020-04-30 | 17 | -30/+881
* Unwrap reference wrappers to avoid extra indirections via invalid memory. | Tor Egge | 2020-04-23 | 1 | -12/+12
* truncate doubles when converting to labels/indexes | Håvard Pettersen | 2020-04-15 | 5 | -9/+9
* added skeleton for lambda peek optimizer | Håvard Pettersen | 2020-04-03 | 4 | -0/+34
* delay preparing tensor lambda function for execution | Håvard Pettersen | 2020-04-03 | 3 | -13/+25
      This will allow implementation-specific tensor lambda optimizations to look at the lambda function and perform appropriate optimizations before it is converted to an interpreted function.
* make tensor engine available when compiling tensor functions | Håvard Pettersen | 2020-04-03 | 24 | -55/+67
* remove parameter count from interpreted functions | Håvard Pettersen | 2020-04-03 | 5 | -12/+7
* added support for exporting a subset of node types | Håvard Pettersen | 2020-04-03 | 3 | -5/+78
      This is needed to store type information about tensor lambda inner functions until it is needed; we want to delay making it into an interpreted function until after the actual tensor engine implementation gets a chance to come up with a better optimization.
* Eliminate redundant move in return statement. | Tor Egge | 2020-03-21 | 1 | -1/+1
* Merge pull request #12651 from vespa-engine/havardpe/improve-rank-feature-errors | Henning Baldersheim | 2020-03-20 | 2 | -6/+16
      Havardpe/improve rank feature errors
| * better tensor lambda type errors | Håvard Pettersen | 2020-03-20 | 2 | -6/+16
      - report actual return type when not double
      - import type errors from lambda function type resolving
* | Reinline | Henning Baldersheim | 2020-03-20 | 2 | -16/+9
* | Stick with one way of getting an accelrator. | Henning Baldersheim | 2020-03-20 | 1 | -6/+6
* | Use a common accelrator instance. | Henning Baldersheim | 2020-03-19 | 1 | -1/+1
* Merge pull request #12619 from vespa-engine/balder/optimize-value-excutors | Henning Baldersheim | 2020-03-19 | 5 | -48/+74
      Balder/optimize value excutors.
| * Use vespalib::hash_set instead of std::set to reduce number of allocation and epeed it up. Also use faster 2^N AND based hash tables. | Henning Baldersheim | 2020-03-18 | 5 | -48/+74
      (the 2^N AND-based indexing trick is sketched after this log)
* | fix dimension list printing | Håvard Pettersen | 2020-03-19 | 2 | -1/+5
* | print more details about type errors | Håvard Pettersen | 2020-03-19 | 3 | -21/+89
* handle tensor lambda as nested function with bindings | Håvard Pettersen | 2020-03-11 | 20 | -96/+361
* - Remove unused includes. | Henning Baldersheim | 2020-03-05 | 15 | -50/+18
      - = default
      - push_back -> emplace_back
      - std::move on vector.
      No semantic changes.
* using NUM_DOCS is wrong for remove benchmark, use EFFECTIVE_DOCS | Arne Juul | 2020-02-27 | 3 | -2/+4
* Merge pull request #12321 from vespa-engine/arnej/rework-ann-filter-bm | Arne H Juul | 2020-02-26 | 19 | -885/+1850
      Arnej/rework ann filter bm
| * add common header file | Arne Juul | 2020-02-26 | 1 | -0/+203
| * keep more code common | Arne Juul | 2020-02-25 | 3 | -788/+427
| * split out common subroutines | Arne Juul | 2020-02-25 | 7 | -491/+190
| * add and verify filter option | Arne Juul | 2020-02-24 | 13 | -259/+853
      split out common subroutines
| * experimental HNSW with various extensions | Arne Juul | 2020-02-24 | 1 | -0/+830
* | - Add debug logging. | Henning Baldersheim | 2020-02-23 | 5 | -42/+24
      - std::make_unique
      - Reduce code visibility.
* Fix issues detected by clang 10. | Tor Egge | 2020-02-14 | 1 | -2/+2
* Use llvm 10 on Fedora rawhide. | Tor Egge | 2020-02-14 | 2 | -0/+4
* cannot use std::aligned_alloc | Arne Juul | 2020-02-11 | 2 | -8/+10
* * add "remove" benchmark | Arne Juul | 2020-02-11 | 6 | -72/+652
      * redo ops tracking
      * use std::aligned_alloc
      * more stats - measure reach
      This reverts commit 37fd87978ab1c3abfa840403e4e8f289d5ea4a20.
* Revert "* remove benchmark" | Henning Baldersheim | 2020-02-11 | 6 | -652/+72
* avoid actual HNSW library here | Arne Juul | 2020-02-07 | 1 | -2/+0
* update copyright | Arne Juul | 2020-02-07 | 1 | -1/+1
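
Note on the #12619 entry above ("faster 2^N AND based hash tables"): as a hedged illustration of that general technique, and not of vespalib's actual hash_set implementation, the sketch below shows why keeping the table capacity at a power of two lets the bucket index be computed with a single bitwise AND instead of the integer division behind the modulo operator. The helper name next_pow2 and the example key are hypothetical.

    // Minimal sketch of 2^N AND-based bucket indexing (not the vespalib implementation).
    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>

    // hypothetical helper: round a requested capacity up to the next power of two
    uint64_t next_pow2(uint64_t n) {
        uint64_t p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    int main() {
        uint64_t capacity = next_pow2(100);  // 128, so capacity - 1 is a contiguous bit mask
        uint64_t mask = capacity - 1;
        uint64_t h = std::hash<std::string>{}("tensor(x[3])");
        // with a power-of-two capacity these two indexes are identical,
        // but the AND avoids the division that '%' performs
        std::cout << "bucket via AND: " << (h & mask) << "\n";
        std::cout << "bucket via mod: " << (h % capacity) << "\n";
        return 0;
    }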