| author | Dan Engelbrecht <[email protected]> | 2024-11-25 14:49:04 +0100 |
|---|---|---|
| committer | GitHub Enterprise <[email protected]> | 2024-11-25 14:49:04 +0100 |
| commit | bcb81b326a373aa86d7e6a046febc8ba74f21c04 (patch) | |
| tree | b20c6d59cefd299b4daac0754c8fab7ec7019b9c /src/zenstore/compactcas.cpp | |
| parent | stronger validation of payload existance (#229) (diff) | |
| download | zen-bcb81b326a373aa86d7e6a046febc8ba74f21c04.tar.xz zen-bcb81b326a373aa86d7e6a046febc8ba74f21c04.zip | |
caller controls threshold for bulk-loading chunks in IterateChunks (#222)
* Allow caller to control threshold for bulk-loading chunks in IterateChunks
* use smaller batch chunk reading for /fileinfos and /chunkinfos as we do not intend to read the payload
* use smaller batch read buffer when just querying for size of attachments
Diffstat (limited to 'src/zenstore/compactcas.cpp')
| -rw-r--r-- | src/zenstore/compactcas.cpp | 8 |
1 file changed, 5 insertions, 3 deletions
```diff
diff --git a/src/zenstore/compactcas.cpp b/src/zenstore/compactcas.cpp
index bc30301d1..792854af6 100644
--- a/src/zenstore/compactcas.cpp
+++ b/src/zenstore/compactcas.cpp
@@ -305,7 +305,8 @@ CasContainerStrategy::FilterChunks(HashKeySet& InOutChunks)
 bool
 CasContainerStrategy::IterateChunks(std::span<IoHash> ChunkHashes,
                                     const std::function<bool(size_t Index, const IoBuffer& Payload)>& AsyncCallback,
-                                    WorkerThreadPool* OptionalWorkerPool)
+                                    WorkerThreadPool* OptionalWorkerPool,
+                                    uint64_t LargeSizeLimit)
 {
     if (ChunkHashes.size() < 3)
     {
@@ -344,7 +345,8 @@ CasContainerStrategy::IterateChunks(std::span<IoHash> ChunkHashes,
             },
             [&](size_t ChunkIndex, BlockStoreFile& File, uint64_t Offset, uint64_t Size) {
                 return AsyncCallback(FoundChunkIndexes[ChunkIndex], File.GetChunk(Offset, Size));
-            });
+            },
+            LargeSizeLimit);
     };

     Latch WorkLatch(1);
@@ -498,7 +500,7 @@ CasContainerStrategy::ScrubStorage(ScrubContext& Ctx)
     };

     m_BlockStore.IterateChunks(ChunkLocations, [&](uint32_t, std::span<const size_t> ChunkIndexes) {
-        return m_BlockStore.IterateBlock(ChunkLocations, ChunkIndexes, ValidateSmallChunk, ValidateLargeChunk);
+        return m_BlockStore.IterateBlock(ChunkLocations, ChunkIndexes, ValidateSmallChunk, ValidateLargeChunk, 0);
     });
 }
 catch (const ScrubDeadlineExpiredException&)
```