- Improvement: Add file and line to ASSERT exceptions
- Improvement: Capture the call stack when throwing assert exceptions, and log/output it at important places to provide more context to the caller
* refactor so we don't have to re-read data from source to get block sizes
source buffer (#28)
this change adds serialization of payloads as YAML, but not parsing. The implementation is loosely based on the JSON path and may eventually be collapsed with it, since the same code can also serialize the JSON format
it also separates the JSON serialization out into its own file for ease of maintenance
the response to any HTTP request may be formatted as YAML by using a `.yaml` suffix or an `Accept: text/yaml` header
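The format selection described above can be sketched as follows. The enum and function names here are hypothetical, not the actual zenserver types, and the sketch assumes the `.yaml` suffix takes precedence over the `Accept` header, with JSON as the default:

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch, not the actual codebase: pick a payload format
// from the request path suffix or the Accept header. The suffix wins,
// and JSON is the fallback.
enum class PayloadFormat { Json, Yaml };

inline bool EndsWith(const std::string& s, const std::string& suffix)
{
    return s.size() >= suffix.size() &&
           s.compare(s.size() - suffix.size(), suffix.size(), suffix) == 0;
}

PayloadFormat SelectFormat(const std::string& path, const std::string& acceptHeader)
{
    if (EndsWith(path, ".yaml"))
        return PayloadFormat::Yaml;
    if (acceptHeader.find("text/yaml") != std::string::npos)
        return PayloadFormat::Yaml;
    return PayloadFormat::Json;
}
```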
- Improvement: Add a limit to the number of times we attempt to finalize an exported oplog
- Improvement: Switch to large thread pool when executing oplog export/import
- Improvement: Clean up reporting of missing attachments in oplog export/import
- Improvement: Remove double-reporting of abort reason for oplog export/import
- Improvement: Delay exiting due to no sponsor processes by one second to handle race conditions
- Improvement: Safer IsProcessRunning check
- Improvement: make sure we can call RequestApplicationExit safely from any thread
* Add CompressedBuffer::GetRange, which references the source data rather than making a memory copy
* Use CompressedBuffer::CopyRange in project store GetChunkRange
* docs for CompressedBuffer::CopyRange and CompressedBuffer::GetRange
- Bugfix: Install Ctrl+C handler earlier when doing `zen oplog-export` and `zen oplog-import` to properly cancel jobs
- Improvement: Add ability to block a set of CAS entries from GC in project store
- Improvement: Large attachments and loose files are now split into smaller chunks and stored in blocks during oplog export
* Make sure WriteFile() does not leave incomplete files
* use TemporaryFile and MoveTemporaryIntoPlace to avoid leaving partial files on error
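The temporary-file pattern mentioned above can be sketched like this, using plain `std::ofstream` and `std::rename` in place of the project's TemporaryFile/MoveTemporaryIntoPlace helpers (the function names here are illustrative):

```cpp
#include <cassert>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>

// Sketch only: write to a side file first, then atomically move it into
// place, so readers never observe a partially written file and errors
// don't leave a truncated file at the final path.
bool WriteFileAtomically(const std::string& path, const std::string& contents)
{
    const std::string tmpPath = path + ".tmp";
    {
        std::ofstream out(tmpPath, std::ios::binary | std::ios::trunc);
        if (!out.write(contents.data(), contents.size()))
        {
            std::remove(tmpPath.c_str());  // don't leave a partial temp file
            return false;
        }
    }
    // On POSIX, rename() within one file system replaces the target atomically.
    if (std::rename(tmpPath.c_str(), path.c_str()) != 0)
    {
        std::remove(tmpPath.c_str());
        return false;
    }
    return true;
}

std::string ReadWholeFile(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    return {std::istreambuf_iterator<char>(in), std::istreambuf_iterator<char>()};
}
```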
Avoid double resize of buffer in CbWriter::SetName and CbWriter::AddBinary
Add WriteMeasuredVarUInt to avoid measuring ints twice before writing
* improved gc/blockstore logging
* more gcv2 tests
* move structuredcachestore tests to zenstore-test
* Don't materialize entire files when hashing large files
* rewrite CompositeBuffer::Mid to never materialize buffers
* Save large compressed attachments to temporary files on disk
* bump oplog block max size up to 64 MB again
* Make sure CompositeBuffer::AppendBuffers actually moves inputs when it should
* removed parallel execution of payload fetching for block assembly; it was not actually helping and added complexity
* make sure we move/release payload buffers as soon as possible
* make sure we don't read full large attachments into memory when computing hashes
- Improvement: Speed up oplog export by fetching/compressing big attachments on demand
- Improvement: Speed up oplog export by batch-fetching small attachments
- Improvement: Speed up oplog import by batching writes of oplog ops
- Improvement: Tweak oplog export default block size and embed size limit
- Improvement: Add more messaging and progress during oplog import/export
fixes rare race condition when using RPC recording for long periods of time
* Change BasicFile::Read to throw exceptions like IoBuffer does
- Don't ASSERT on dwNumberOfBytesRead == NumberOfBytesToRead; throw an exception with details instead
- Use the proper return type for pread()
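A POSIX sketch of that change, with an illustrative function name rather than the actual BasicFile API: `pread()` returns `ssize_t`, may read fewer bytes than requested, and short reads become a descriptive exception instead of an assert:

```cpp
#include <cassert>
#include <cerrno>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <stdexcept>
#include <string>
#include <fcntl.h>
#include <unistd.h>

// Sketch, not the real BasicFile API. pread() can legally return less
// than the requested count, 0 at EOF, or -1 with errno set; a single
// ASSERT on "bytes read == bytes requested" handles none of these.
void ReadExactly(int fd, void* buffer, size_t count, uint64_t offset)
{
    uint8_t* dst = static_cast<uint8_t*>(buffer);
    while (count > 0)
    {
        const ssize_t bytesRead = pread(fd, dst, count, static_cast<off_t>(offset));
        if (bytesRead < 0)
        {
            if (errno == EINTR)
                continue;  // interrupted by a signal, just retry
            throw std::runtime_error(std::string("pread failed: ") + strerror(errno));
        }
        if (bytesRead == 0)
            throw std::runtime_error("pread hit end-of-file before reading requested range");
        dst += bytesRead;
        count -= static_cast<size_t>(bytesRead);
        offset += static_cast<uint64_t>(bytesRead);
    }
}
```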
* improve feedback from oplog import/export
* improve oplog save performance
jobqueue - add AbortReason and properly propagate error when running async command
the previous implementation was quite slow due to its use of `mt` and `uniform_distribution`.
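One common shape for such a replacement (an assumption here, not necessarily what this change uses) is a cheap generator like splitmix64 plus Lemire's multiply-shift reduction for bounded values, which avoids the per-call cost of `std::mt19937` and `std::uniform_int_distribution`; note the bounded step relies on the GCC/Clang `__int128` extension:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative fast PRNG, not necessarily what the commit uses.
struct SplitMix64
{
    uint64_t State;

    uint64_t Next()
    {
        uint64_t z = (State += 0x9E3779B97F4A7C15ull);
        z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ull;
        z = (z ^ (z >> 27)) * 0x94D049BB133111EBull;
        return z ^ (z >> 31);
    }

    // Nearly uniform value in [0, bound) via Lemire's 128-bit
    // multiply-shift; needs the GCC/Clang __int128 extension.
    uint64_t NextBounded(uint64_t bound)
    {
        return static_cast<uint64_t>(
            (static_cast<unsigned __int128>(Next()) * bound) >> 64);
    }
};
```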
`xmake config -zentrace=n` would previously not build cleanly
* fix JobQueue test threading issue: the inner job queued with `QueueJob` referenced `I` from inside the captured closure, and `I` would subsequently disappear
* made sure application exit is thread safe
* don't try to access string data out of bounds
* keep-alive flag is accessed from multiple threads
* fix memory leaks in Zen upstream client code
* TSAN fixes for Event
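The first fix is the classic lambda-capture pitfall; a minimal stand-alone illustration (with a plain vector of `std::function` standing in for `QueueJob`):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Minimal illustration of the capture bug: a job capturing the loop
// variable by reference would read a dead (or mutated) variable by the
// time it runs. Capturing by value gives each job its own copy.
std::vector<int> RunQueuedJobs()
{
    std::vector<std::function<int()>> jobs;
    for (int i = 0; i < 3; ++i)
    {
        // Correct: capture `i` by value. `[&]` would dangle once the
        // loop iteration ends and the jobs run later.
        jobs.push_back([i] { return i * 10; });
    }

    std::vector<int> results;
    for (auto& job : jobs)
        results.push_back(job());
    return results;
}
```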
* fix leak in IoBuffer for a manifested small chunk: previously it would null out the `m_DataPtr` member on every path through `IoBufferExtendedCore::~IoBufferExtendedCore()`, but it only really makes sense to null it out when the buffer has been memory mapped
- also fixes weird DateTime/TimeSpan comparison operator
enabling mimalloc path for `Memory::Alloc` and `Memory::Free`
- Bugfix: Allow attachments that contain a raw size of zero
(#600)
* Make sure that PathFromHandle doesn't hide the true error when throwing exceptions
* changelog
* return error info from PathFromHandle if the path fails to resolve
this adds information on program name and command line to trace initialization
- Improvement: Scrub command now validates compressed buffer hashes in filecas storage (used for large chunks)
- Improvement: Added --dry, --no-gc and --no-cas options to zen scrub command
- Improvement: Implemented oplog scrubbing (previously was a no-op)
- Improvement: Implemented support for running scrubbing at startup with --scrub=<options>
* added ZEN_SCOPED_WARN and implemented multi-line logging
* changed so file log also uses `fullformatter` for consistency and to get the multi-line support across the board
SuppressConsoleLog now removes any existing console logger to avoid exceptions in spdlog
with these changes it is possible to configure loggers on the command line. For instance:
`xmake run zenserver --log-trace=http_requests,http`
will configure the system so that the `http_requests` and `http` loggers are set to TRACE level
* added log level control/query to LoggerRef
* added debug logging to http plugin implementation
* added GetDebugName() to transport plugin interfaces
* added debug name to log output
- Improvement: Use two global worker thread pools instead of ad-hoc creation of worker pools
* close thread pool at destruction
* parallel casimpl::initialize
controlled manner (#573)
the previous implementation of in-memory index snapshots serialised data to memory before writing to disk, and vice versa when reading. This led to memory spikes which pushed useful data out of the system cache, and also caused stalls on I/O operations.
this change moves more code to a streaming serialisation approach, which scales better from a memory usage perspective and also performs much better
- Refactor GCV2 so GcReferencer::RemoveExpiredData returns a store compactor, moving the actual disk work out of index item deletion
- Refactor GCV2 GcResult to reuse GcCompactStoreStats and GcStats
- Make compacting of stores non-parallel so GC doesn't eat all the disk I/O
initial version -- this is primarily intended to be used for running stress tests and/or benchmarks
example usage:
`zen run -n 10 -- zenserver-test`
`zen run -n 10 -- zenserver-test --ts=core.assert` runs zenserver-test 10 times (testing only the `core.assert` test suite)
`zen run --time 600 --basepath=d:\test_dir\test1 -- zenserver-test` keeps spawning new instances for 10 minutes (600 seconds)
includes porting some compact binary builder code to use it since it had vestiges of the UE-side asserts
* fix named event timeout and test, fix blocking queue
* make BlockingQueue::m_CompleteAdding non-atomic
* ZenCacheDiskLayer::Flush logging
* name worker threads in ZenCacheDiskLayer::DiscoverBuckets
* name worker threads in gcv2
* improved logging in ZenServerInstance
* scrub threadpool naming
* remove waitpid handling; we should just call wait to reap zombie processes
* changed posix event implementation to use std::atomic instead of volatile
* ensure Event::Close() can take lock before deleting the inner object
* don't try to take the Event lock if the event is already signaled
* changed logic around Event::Wait without a time-out; this works around some apparent issues on macOS/Linux
* fix logic for posix process exit wait
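A rough sketch of the signaled-flag pattern described above, using standard C++ primitives rather than the actual Event implementation:

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

// Sketch, not the real Event class: because the signaled flag is a
// std::atomic, Wait() can check it first without taking the lock, and
// only falls back to the mutex/condition variable when unsignaled.
class ManualResetEvent
{
public:
    void Set()
    {
        {
            std::lock_guard<std::mutex> lock(m_Mutex);
            m_Signaled.store(true, std::memory_order_release);
        }
        m_Cond.notify_all();
    }

    void Wait()
    {
        // Fast path: skip the lock entirely if already signaled.
        if (m_Signaled.load(std::memory_order_acquire))
            return;
        std::unique_lock<std::mutex> lock(m_Mutex);
        m_Cond.wait(lock, [this] { return m_Signaled.load(std::memory_order_acquire); });
    }

private:
    std::atomic<bool>       m_Signaled{false};
    std::mutex              m_Mutex;
    std::condition_variable m_Cond;
};
```

Storing the flag under the mutex in `Set()` avoids the lost-wakeup race where a waiter checks the predicate, then the notify fires before the waiter blocks.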
this introduces a --snapshot-dir command line option to zenserver, specifying a directory whose contents are propagated to the persistence root directory on start-up.
This is most powerful on file systems which support block cloning, such as ReFS on Windows. It allows even very large state snapshots to be used repeatedly without worrying about mutating the original dataset on disk. With ReFS, the state copy can be very fast even for large state directories, since the duration is primarily proportional to the number of files in the tree rather than the size of the files being cloned. The storage requirements are also minimal, as all data is handled in a copy-on-write manner.
fix process wait timeout
always use kill(pid, 0) to determine if process is running
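The kill(pid, 0) idiom works because signal number 0 performs only the existence and permission checks without delivering anything; a minimal sketch (treating EPERM as "process exists"):

```cpp
#include <cassert>
#include <cerrno>
#include <csignal>
#include <sys/wait.h>
#include <unistd.h>

// kill(pid, 0) delivers no signal; it only checks whether the process
// exists and whether we may signal it. EPERM means it exists but is
// owned by someone else, so it still counts as running.
bool IsProcessRunning(pid_t pid)
{
    if (kill(pid, 0) == 0)
        return true;
    return errno == EPERM;
}
```

Note that on POSIX a zombie (exited but not yet reaped) still "exists" to kill(), which is why the wait/waitpid handling in the surrounding commits matters.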