##

- Feature: `zen print` now has a `--show-type-info` option to add type information to output of compact binary content
- Improvement: `zen print` now allows output of compact binary content even if it is in a non-optimal format (Uniform vs. Non-Uniform arrays and objects)
- Bugfix: Stats information for Build Store (Zen Store Cache) no longer throws an exception or outputs invalid state information
- Bugfix: Make sure we properly shut down any pending httpsys requests when shutting down

## 5.6.17

- Bugfix: Don't skip oplogs without package data in zen dashboard
- Bugfix: Finalize build (in addition to build part) when exporting oplogs

## 5.6.16

- Feature: Added `zen build ls` option to list the content of build part(s)
  - Build source is specified using one of the following options
    - `--cloud-url` cloud artifact URL to build
    - `--host` or `--override-host`, `--namespace`, `--bucket` and `--buildid`
    - `--filestorage`, `--namespace`, `--bucket` and `--buildid`
  - `--build-part-name` to specify particular build part(s) in the build
  - `--wildcard` Windows-style wildcard (using `*` and `?`) to match file paths to include
  - `--exclude-wildcard` Windows-style wildcard (using `*` and `?`) to match file paths to exclude. Applied after the `--wildcard` include filter
- Feature: Added wildcard options for `zen build download`
  - `--wildcard` Windows-style wildcard (using `*` and `?`) to match file paths to include
  - `--exclude-wildcard` Windows-style wildcard (using `*` and `?`) to match file paths to exclude. Applied after the `--wildcard` include filter
- Feature: Added global zenserver option `--cache-bucket-limit-overwrites` controlling whether a policy flag is required before allowing overwrites. Default `false` = overwrites always allowed
- Feature: Add per bucket cache configuration option `limitoverwrites` (Lua options file only)

  ```lua
  cache = {
      bucket = {
          -- This is the default for all namespaces
          limitoverwrites = true
      },
      buckets = {
          -- Here you can add matching per bucket name (matches across namespaces)
          iostorecompression = {
              limitoverwrites = false
          },
      },
  }
  ```

- Improvement: Added `--quiet` option to `zen builds` commands to suppress non-essential output
- Improvement: Remove early wipe of target folder for `zen download` to allow for scavenging useful data
- Improvement: Refactored jupiter oplog export code to reuse builds jupiter wrapper classes
- Improvement: If a `zen builds`, `zen oplog-export` or `zen oplog-import` command fails due to an http error, the return code for the program will be set to the error/status code

## 5.6.15

- Improvement: Add validation of cache bucket metadata manifest when reading legacy format
- Improvement: Don't set m_DispatchComplete in ParallelWork until after pending work countdown succeeds
- Improvement: Safeguard FormatCallstack to not throw exceptions when building the callstack string
- Improvement: Limit thread name length when setting it for debugger use
- Improvement: Don't allow assert callbacks to throw exceptions
- Improvement: When formatting log output for malformed attachments in a package message, allow the string buffer to grow instead of throwing an exception
- Improvement: Refactored build store cache to use the existing CidStore implementation instead of implementation-specific blob storage
  - **CAUTION** This will clear any existing cache when updating, as the manifest version and storage strategy have changed
- Improvement: If cloud-ddc requests upload of a blob it earlier reported as good to reuse, we treat it as a transient error and attempt to retry
- Bugfix: Parents were not notified when successfully attaching to an existing server instance
- Bugfix: BuildStorage cache returned "true" for metadata existence for all blobs that had payloads, regardless of actual existence of metadata
- Bugfix: Builds download - don't query for block metadata if no blocks are required
- Bugfix: Add the referenced attachments correctly when storing inline cache bucket records using batch mode

## 5.6.14

- Improvement: If `zen builds upload` fails to upload metadata for a block with a 404 response (due to a race condition from hitting a different server), we save and retry metadata upload at end of upload
- Improvement: Updated Oodle libs to 2.9.14

## 5.6.13

- Feature: Added `--sentry-environment` to `zen` and `zenserver`
- Feature: Added `--sentry-debug` to `zen` and `zenserver`
- Feature: Added environment variable parsing for the following options:
  - `UE_ZEN_SENTRY_ENABLED`: `--no-sentry` (inverted)
  - `UE_ZEN_SENTRY_DEBUG`: `--sentry-debug`
  - `UE_ZEN_SENTRY_ALLOWPERSONALINFO`: `--sentry-allow-personal-info`
  - `UE_ZEN_SENTRY_DSN`: `--sentry-dsn`
  - `UE_ZEN_SENTRY_ENVIRONMENT`: `--sentry-environment`
- Feature: Added `--output-path` option to `zen version` command to save version information to a file
- Improvement: `zen build --cloud-url` option now accepts URLs without the `api/v2/builds/` part
- Bugfix: Range requests for build blobs that reached the end of the blob now work correctly
- Bugfix: Gracefully handle a malformed response when querying list of blocks
- Bugfix: Make sure we unregister cache namespaces and buckets when dropping them

## 5.6.12

- Bugfix: Don't require `--namespace` option when using `zen list-namespaces` command
- Bugfix: Crash in upload of blobs to Cloud DDC due to buffer range error
- Improvement: Zcache namespace and bucket information now shown on self-hosted dashboard.
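The `--wildcard`/`--exclude-wildcard` filters added in 5.6.16 apply the include pattern first and the exclude pattern afterwards. A minimal sketch of that include-then-exclude order, using Python's `fnmatch` as a stand-in for zen's Windows-style matcher (an approximation; exact case and path-separator semantics may differ):

```python
from fnmatch import fnmatch

def filter_paths(paths, include=None, exclude=None):
    """Apply an include wildcard first, then an exclude wildcard,
    mirroring the --wildcard / --exclude-wildcard order described above."""
    kept = [p for p in paths if include is None or fnmatch(p, include)]
    return [p for p in kept if exclude is None or not fnmatch(p, exclude)]

paths = ["Game/Maps/City.umap", "Game/Maps/City.uexp", "Engine/Config/Base.ini"]
print(filter_paths(paths, include="Game/*", exclude="*.uexp"))
# → ['Game/Maps/City.umap']
```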
## 5.6.11

- Bugfix: Reinstate `zen builds --url` option to its old behaviour, use `zen builds --cloud-url` for "Cloud Artifact URL"

## 5.6.10

- Feature: `zen builds list` now allows `--bucket` to be skipped to allow for multi-bucket search; the `bucketRegex` field must be included in the search query
- Feature: `--url` option for `zen builds` command has been reworked to accept a "Cloud Artifact URL", removing the need to specify "host", "namespace" and "bucket" separately
- Feature: `zen builds pause`, `zen builds resume` and `zen builds abort` commands to control a running `zen builds` command
  - `--process-id` the process id to control; if omitted, it tries to find a running process using the same executable as itself
- Feature: New `--sentry-dsn` option for zen command line and zenserver to control the Sentry reporting endpoint
- Improvement: Use fixed size block chunking for known encrypted/compressed file types
- Improvement: Skip trying to compress chunks that are sourced from files that are known to be encrypted/compressed
- Improvement: Add global open file cache for written files, increasing throughput during download by reducing the overhead of open/close of files by 80%
- Improvement: Multithreaded scavenge pass for zen builds download
- Improvement: Optimized check for modified files when verifying state of scavenged paths
- Improvement: Process report now indicates if it is pausing or aborting
- Improvement: Don't report OOD or OOM errors to Sentry when running `zen builds` commands
- Improvement: Use a unique temporary file name when calling the OidcToken executable to generate an auth token
- Improvement: Moved Sentry database path to temporary directory for zen commandline
- Bugfix: Zen CLI now initializes Sentry after command line options parsing, so that the options can be properly taken into account during init
- Bugfix: Revert: "`zen builds upload` now uses the system temp directory for temporary files, leaving the source folder untouched"
- Bugfix: Use selected subcommand when displaying help for failed command line options in zen builds
- Bugfix: Make sure we recreate the log file in the CAS/cache bucket when creating a snapshot at startup, which previously caused lost changes. UE-291196

## 5.6.9

- Bugfix: Remove long running exclusive namespace wide locks when dropping buckets or namespaces
- Bugfix: Flush the last block before closing the last new block written to during blockstore compact. UE-291196
- Bugfix: Fix stats for memcached entries in disk cache buckets
- Feature: Drop unreachable CAS data during GC pass. UE-291196
- Improvement: Check available disk space more frequently and trigger a GC operation if the low water mark is reached
- Improvement: Faster oplog validate to reduce GC wall time and disk I/O pressure
- Improvement: `zen builds upload` now uses the system temp directory for temporary files, leaving the source folder untouched
- Improvement: NoneDecoder::DecompressToStream and NoneDecoder::CompressToStream now use direct disk I/O
- Improvement: Add streaming upload from HttpClient to reduce I/O caused by excessive MMap usage

## 5.6.8

- Feature: Add per bucket cache configuration (Lua options file only)

  ```lua
  cache = {
      bucket = {
          -- This is the default for all namespaces
          maxblocksize = 1073741824,
          payloadalignment = 4,
          memlayer = {
              sizethreshold = 1024,
          },
          largeobjectthreshold = 2097152,
      },
      buckets = {
          -- Here you can add matching per bucket name (matches across namespaces)
          iostorecompression = {
              maxblocksize = 1073741824,
              payloadalignment = 4,
              memlayer = {
                  sizethreshold = 1024,
              },
              largeobjectthreshold = 2097152,
          },
      },
  }
  ```

- Feature: `zen oplog-import` and `zen oplog-export` now support the `--oidctoken-exe-path` option
- Feature: Added `--use-sparse-files` option to `zen builds` command, improving write performance of large files. Enabled by default.
- Bugfix: Handle invalid plugin config file without terminating
- Bugfix: Gracefully handle errors while running `oplog-mirror`
- Bugfix: Wait for async threads if dispatching of work using ParallelWork throws an exception
- Bugfix: Fix crash when batch-fetching chunks and an exception was thrown in non-async code
- Bugfix: Implemented compact binary validation of custom fields
- Bugfix: Fix oplog creation during `zen oplog-import`
- Improvement: `--cache-memlayer-sizethreshold` is now deprecated and has a new name: `--cache-bucket-memlayer-sizethreshold` to line up with per cache bucket configuration
- Improvement: Remove CAS log files at startup when we write full state index
- Improvement: Optimize memory-buffered chunk iteration sizes
- Improvement: Skip command line arguments that are empty or just a single space for `zen` command line tool on Mac/Linux to handle "triple space" problem. UE-273411
- Improvement: Extend log information when httpsys response sending fails
- Improvement: Log warning when port remapping occurs because the desired port is already in use
- Improvement: Custom CopyFile in `zen builds` command, increasing throughput by 50% on Windows and giving better progress updates

## 5.6.7

- Feature: Support for `--log-progress` to output UE style `@progress` log messages has been added to
  - `zen builds upload`
  - `zen builds download`
  - `zen builds diff`
  - `zen builds validate-part`
  - `zen wipe`
- Feature: `zen builds upload` and `zen builds download` now accept `--allow-redirect` for Cloud hosts, allowing direct upload to/download from backend store (such as S3)
- Bugfix: Add explicit lambda capture in CasContainer::IterateChunks to avoid accessing state data references
- Bugfix: Validate compact binary objects stored in cache before attempting to parse them to avoid crash when data is corrupt
- Bugfix: Don't report redundant error if disk is full when writing GC state or GC log
- Bugfix: Properly handle empty directories when using `zen wipe` command
- Improvement: If garbage collection fails due to out of disk or out of memory, we issue a warning instead of reporting an error to Sentry
- Improvement: Improved cache bucket state index write time by ~15% and cache bucket side-car write time by ~20%
- Improvement: Optimize block compact, reducing memcpy operations
- Improvement: Handle padding of block store blocks when compacting to avoid excessive flushing of write buffer
- Improvement: Handle padding when writing oplog index snapshot to avoid unnecessary flushing of write buffer
- Improvement: Reduce CPU and lock contention during non-critical phases of GC

## 5.6.6

- Improvement: Made metadata presentation for cooked output entries more generic
- Improvement: `builds diff` command uses state file if available, reducing the need to hash content of files
- Improvement: Retry http client requests if we get an unspecified internal error or a bad gateway response
- Improvement: Added Sentry integration to zen command line
  - `--no-sentry` disables Sentry integration, defaults to `false` (Sentry is enabled)
  - `--sentry-allow-personal-info` to allow personally identifiable information in Sentry reports, disabled by default
- Bugfix: Use proper FindClose call when using fallback when getting file attributes on Windows
- Bugfix: Fixed race condition at final chunks when downloading multipart blobs, which could lead to corruption and/or crash
- Bugfix: Fixed BigInt conversion error affecting the tree view in the web UI
- Bugfix: Limit retry count when finalizing build part to avoid getting stuck in an infinite loop
- Bugfix: Fixed lua config naming for zenserver `--buildstore-disksizelimit` option
- Feature: `zen builds list-namespaces` subcommand allows listing namespace and bucket information from the Jupiter builds API.
- Feature: New `zen wipe` command for fast cleaning of directories; it will not remove the directory itself, only the content
  - `--directory` - path to directory to wipe; if the directory does not exist or is empty, no action will be taken
  - `--keep-readonly` - skip removal of read-only files found in directory, defaults to `true`, set to `false` to remove read-only files
  - `--quiet` - reduce output to console, defaults to `false`
  - `--dryrun` - simulate the wipe without removing anything, defaults to `false`
  - `--yes` - skips prompt to confirm wipe of directory
  - `--plain-progress` - show progress using plain output
  - `--verbose` - enable verbose console output
  - `--boost-workers` - increase the number of worker threads, may cause the computer to be less responsive, defaults to `false`
- Feature: **EXPERIMENTAL** New `--plugins-config` option to load plugins based on transport-sdk.
  - It accepts JSON of the format `[{"name": "%path_to_dll%", "%opt1%": "%val1%", ...}, ]`.

## 5.6.5

- Bugfix: `zen builds` multipart download of large chunks could result in crash
- Bugfix: Fixed invalid hash check on messages from Jupiter when doing range http requests
- Bugfix: Handle temporarily locked files more gracefully (Crowdstrike workaround)
- Bugfix: Avoid integer overflow in web UI causing numbers to show up as negative

## 5.6.4

- Improvement: `zen builds` now scavenges previous download locations for data to reduce download size, enabled by default, disable with `--enable-scavenge=false`
- Improvement: `zen builds upload` Fixed and improved layout of end-of-run stats output
- Improvement: `zen builds` Reduced clutter and added timing to some logging entries when `--verbose` is enabled
- Bugfix: If both `--host` and `--override-host` were given to a `zen builds` command, no host would be resolved
- Bugfix: Failing to rename a file during download sometimes reported an error even when the retry succeeded

## 5.6.3

- Feature: `zen oplog-export`, `zen oplog-import` for `--url` (cloud) and `--builds` (builds) option now has `--oidctoken-exe-path` to let zen run the OidcToken executable to get and refresh the authentication token
- Feature: zenserver option `--buildstore-disksizelimit` to set a soft upper limit for build storage data. Defaults to 1TB.
- Bugfix: Restore Mac minver back to 12.5
- Bugfix: Fixed a bug where some imports were shown as a blank string in the web UI
- Improvement: Hide progress ETA number until at least 5% of work is done to reduce misleading processing times. UE-256121
- Improvement: Dynamically adjust console progress output based on console width. UE-256126
- Improvement: Usability improvements to oplog search in web UI. Case insensitive, short string search, reset search.

## 5.6.2

- Bugfix: Changed Mac minver from 12.5 to 14.0, and removed `_LIBCPP_DISABLE_AVAILABILITY` as a define on Mac due to executable incompatibility issues
- Bugfix: Fixed missing trailing quote when converting binary data from compact binary to json
- **EXPERIMENTAL** `zen builds`
  - Feature: `zen builds` auth option `--oidctoken-exe-path` to let zen run the OidcToken executable to get and refresh the authentication token
  - Bugfix: Fixed upload failure on Mac/Linux if `--zen-folder-path` was not given. UE-265170
  - Feature: `zen builds upload` command has new option `--find-max-block-count` to control how many blocks we search for reuse.
  - Improvement: Bumped the default number of blocks to search during upload to 10000 (from 5000).

## 5.6.1

- Bugfix: GetModificationTickFromPath and CopyFile now work correctly on Windows/Mac
- Bugfix: Handling of quotes and quotes with leading backslash for command line parsing - UE-231677
- Improvement: When logging with an epoch time prefix, the milliseconds/fraction is now correct. We now also set the epoch to the process spawn time rather than the time when the logger is created
- Improvement: Reduce concurrent disk I/O during GC for cache buckets
- **EXPERIMENTAL** `zen builds`
  - Feature: `zen builds list` command has new options
    - `--query-path` - path to a .json (json format) or .cbo (compact binary object format) with the search query to use
    - `--result-path` - path to a .json (json format) or .cbo (compact binary object format) to write output result to; if omitted, json format will be output to console
  - Feature: New `/builds` endpoint for caching build blobs and blob metadata
    - `/builds/{namespace}/{bucket}/{buildid}/blobs/{hash}` `GET` and `PUT` method for storing and fetching blobs
    - `/builds/{namespace}/{bucket}/{buildid}/blobs/putBlobMetadata` `POST` method for storing metadata about blobs
    - `/builds/{namespace}/{bucket}/{buildid}/blobs/getBlobMetadata` `POST` method for fetching metadata about blobs
    - `/builds/{namespace}/{bucket}/{buildid}/blobs/exists` `POST` method for checking existence of blobs
  - Feature: zen: `--zen-cache-host` option for `upload` and `download` operations to use a zenserver host `/builds` endpoint for storing build blobs and blob metadata
  - Feature: zenserver: Add command line option `--gc-buildstore-duration-seconds` to control GC lifetime of build store data
  - Feature: zen `--boost-workers` option to builds `upload`, `download` and `validate-part` that will increase the number of worker threads, may cause the computer to be less responsive
  - Feature: zen `--cache-prime-only` that uploads referenced data from a part to `--zen-cache-host` if it is not already present. Target folder will be untouched.
  - Feature: zen: `--zen-folder-path` added to `builds` commands `list`, `upload`, `download`, `fetch-blob` and `validate-part` to control where the `.zen` folder is placed and what it is named
  - Feature: Added `--host` option to use Jupiter's list of cloud hosts and zen servers to resolve best hosts
  - Feature: Use local zenserver as builds cache if it has the `builds` service enabled, `--cloud-discovery-host` is provided and no remote zenserver cache hosts can be found
  - Improvement: Do partial requests of blocks if not all of the block is needed
  - Improvement: Better progress/statistics on upload and download
  - Improvement: Scavenge .zen temp folders for existing data (downloaded, decompressed or written) from a previous failed run
  - Improvement: Faster abort during stream compression
  - Improvement: Try to move downloaded blobs with rename if possible, avoiding an extra disk write
  - Improvement: Only clean temp folders on successful or cancelled build - keep them if download fails
  - Improvement: Do put/get build and find blocks while scanning local folder when uploading
  - Improvement: Don't chunk up .mp4 files as they generally won't benefit from deduplication or partial in-place updates
  - Improvement: Emit build name to console output when downloading a build
  - Improvement: Reduced memory usage during upload and part upload validation
  - Improvement: Reduced I/O usage during upload and download
  - Improvement: Faster block regeneration when uploading in response to PutBuild/FinalizeBuild
  - Improvement: More trace scopes for build upload operations
  - Improvement: Progress bar automatically switches to plain mode when stdout is not a console
  - Improvement: Progress bar is much more efficient on Windows (switched away from printf)
  - Improvement: Improved stats output at end of upload and download operations
  - Improvement: Reduced disk I/O when writing out chunk blocks during download
  - Improvement: Collapse consecutive ranges when reading/writing data from local cache state
  - Improvement: If a chunk or block write operation results in more than one completed chunk sequence, do the additional verifications as async work
  - Improvement: Improved error reporting when async tasks fail
  - Improvement: At end of build upload we post statistics to the Jupiter build stats endpoint:
    - `totalSize`
    - `reusedRatio`
    - `reusedBlockCount`
    - `reusedBlockByteCount`
    - `newBlockCount`
    - `newBlockByteCount`
    - `uploadedCount`
    - `uploadedByteCount`
    - `elapsedTimeSec`
    - `uploadedBytesPerSec`
  - Improvement: Allow cook metadata to be browsed in the web UI
  - Improvement: ELF and MachO executable files are no longer chunked
  - Improvement: Compress chunks in blocks that enclose a full file (such as small executables)
  - Improvement: Only check known files from remote state when downloading to a target folder with no local state file
  - Improvement: Don't move existing local files to cache and back if they are untouched
  - Improvement: Faster cleaning of directories
  - Improvement: Faster initial scanning of local state
  - Improvement: Added `--override-host` option as a replacement for `--url` (`--url` still works, but `--override-host` is preferred)
  - Improvement: Output Build and Build Part information to console during `builds download`
  - Improvement: Output zen executable path and version at start of `builds` commands
  - Bugfix: Strip path delimiter at end of string in StringToPath
  - Bugfix: Ensure that temporary folder for Jupiter downloads exists during verify phase
  - Bugfix: Fixed crash during download when trying to write outside a file range
  - Bugfix: MacOS / Linux zen build download now works correctly
  - Bugfix: Env auth parsing blocked parsing of OAuth and OpenId options
  - Bugfix: Long file paths now work correctly on Windows
  - Bugfix: Validate that we can read input files correctly for files that are not chunked

## 5.6.0

- Feature: Added support for `--trace`, `--tracehost` and `--tracefile` options to zen CLI command
- Improvement: When logging HTTP responses, the body is now sanity checked to ensure it is human readable, and the length of the output is capped to prevent inadvertent log bloat
- Improvement: Instrumented `zen builds download` command code so we get more useful Insights output
- **EXPERIMENTAL** `zen builds`
  - Improvement: Better error reporting and faster exit on error
  - Improvement: Downloads without `--clean` now preserve untracked files in target folder
  - Improvement: Reduce write operations for duplicate files in downloads
  - Improvement: Added on-the-fly validation in `zen builds download` of files built from smaller chunks, as each file is completed
  - Improvement: Do streaming decompression of loose blobs to improve memory and I/O performance
  - Improvement: Validate hash of decompressed data inline with streaming decompression
  - Improvement: Do streaming compression of large blobs to improve memory and I/O performance
  - Feature: Added `--verify` option to `zen builds upload` to verify all uploaded data once the entire upload is complete
  - Feature: Added `--verify` option to `zen builds download` to verify all files in target folder once the entire download is complete

## 5.5.20

- Bugfix: Fix bug causing a crash if a large file was an exact multiple of 256KB when using the zen builds command
- Bugfix: Fix authentication error affecting Jupiter client that manifests as 'bad function call' error UE-252123

## 5.5.19

- Feature: **EXPERIMENTAL** New `zen builds` command to list, upload and download folders to Cloud Build API
  - `builds list` list available builds (**INCOMPLETE - FILTERING MISSING**)
  - `builds upload` upload a folder to Cloud Build API
    - `--local-path` source folder to upload
    - `--create-build` creates a new parent build object (using the object id); if omitted, a parent build must exist and `--build-id` must be given
    - `--build-id` an Oid in hex form for the Build identifier to use - omit to have the id auto generated
    - `--build-part-id` an Oid in hex form for the Build Part identifier for the folder - omit to have the id auto generated
    - `--build-part-name` name of the build part - if omitted, the leaf folder name given in `--local-path`
    - `--metadata-path` path to a json formatted file with metadata information about the build. Metadata must be provided if `--create-build` is set
    - `--metadata` key-value pairs separated by ';' with build metadata for the build (key1=value1;key2=value2). Metadata must be provided if `--create-build` is set
    - `--clean` ignore any existing blocks of chunk data and upload a fresh set of blocks
    - `--allow-multipart` enable usage of multi-part http upload requests
    - `--manifest-path` path to text file listing files to include in upload. Omit to upload everything in `--local-path`
  - `builds download` download a folder from Cloud Build API (**INCOMPLETE - WILL WIPE UNTRACKED DATA FROM TARGET FOLDER**)
    - `--local-path` target folder to download to
    - `--build-id` an Oid in hex form for the Build identifier to use
    - `--build-part-id` a comma separated list of Oids in hex for the build part identifier(s) to download - mutually exclusive with `--build-part-name`
    - `--build-part-name` a comma separated list of names for the build part(s) to download - if omitted, the leaf folder name given in `--local-path`
    - `--clean` deletes all data in target folder before downloading (NON-CLEAN IS NOT IMPLEMENTED YET)
    - `--allow-multipart` enable usage of multi-part http download requests
  - `builds diff` download a folder from Cloud Build API
    - `--local-path` target folder to download to
    - `--compare-path` folder to compare target with
    - `--only-chunked` compare only files that would be chunked
  - `builds fetch-blob` fetch and validate a blob from remote store
    - `--build-id` an Oid in hex form for the Build identifier to use
    - `--blob-hash` an IoHash in hex form identifying the blob to download
  - `builds validate-part` fetch a build part and validate all referenced attachments
    - `--build-id` an Oid in hex form for the Build identifier to use
    - `--build-part-id` an Oid in hex for the build part identifier to validate - mutually exclusive with `--build-part-name`
    - `--build-part-name` a name for the build part to validate - mutually exclusive with `--build-part-id`
  - `builds test` a series of operations that uploads, downloads and tests various aspects of incremental operations
    - `--local-path` source folder to upload
  - Options for Cloud Build API remote store (`list`, `upload`, `download`, `fetch-blob`, `validate-part`)
    - `--url` Cloud Builds URL
    - `--assume-http2` assume that the builds endpoint is an HTTP/2 endpoint, skipping the HTTP/1.1 upgrade handshake
    - `--namespace` Builds Storage namespace
    - `--bucket` Builds Storage bucket
  - Authentication options for Cloud Build API
    - Auth token
      - `--access-token` http auth Cloud Storage access token
      - `--access-token-env` name of environment variable that holds the http auth Cloud Storage access token
      - `--access-token-path` path to json file that holds the http auth Cloud Storage access token
    - OpenId authentication
      - `--openid-provider-name` Open ID provider name
      - `--openid-provider-url` Open ID provider url
      - `--openid-client-id` Open ID client id
      - `--openid-refresh-token` Open ID refresh token
      - `--encryption-aes-key` 256 bit AES encryption key for storing OpenID credentials
      - `--encryption-aes-iv` 128 bit AES encryption initialization vector for storing OpenID credentials
    - OAuth authentication
      - `--oauth-url` OAuth provider url
      - `--oauth-clientid` OAuth client id
      - `--oauth-clientsecret` OAuth client secret
  - Options for file based remote store used for testing purposes (`list`, `upload`, `download`, `fetch-blob`, `validate-part`, `test`)
    - `--storage-path` path to folder to store builds data
    - `--json-metadata` enable json output in store for all compact binary objects (off by default)
  - Output options for all builds commands
    - `--plain-progress` use plain line-by-line progress output
    - `--verbose`

## 5.5.18

- Bugfix: Fix parsing of workspace options in Lua config
- Bugfix: Add missing Lua option for option `--gc-projectstore-duration-seconds`
- Bugfix: Add missing Lua mapping option to `--statsd` command line option
- Bugfix: Verify that chunking is allowed before chunking loose files during oplog export
- Bugfix: Fix oplog target url for oplog export to remote zenserver
- Bugfix: Handle workspace share paths enclosed in quotes and ending with backslash UE-231677
- Bugfix: Strip leading path separator when creating workspace shares

## 5.5.17

- Improvement: Batch fetch record attachments when appropriate
- Improvement: Reduce memory buffer allocation in BlockStore::IterateBlock
- Improvement: Tweaked BlockStore::IterateBlock logic for when to use threaded work (at least 4 chunks requested)
- Improvement: Remove overhead of verifying oplog presence on disk for "getchunks" rpc call in project store
- Improvement: Increase limit where we switch from simple read of IoBuffer file to mmap
- Bugfix: CasContainerStrategy::IterateChunks could give wrong payload/index when requesting 1 or 2 chunks
- Bugfix: Suppress progress report callback if oplog import detects oplog with zero ops

## 5.5.16

- Feature: Project store "getchunks" rpc call `/prj/{project}/oplog/{log}/rpc` extended to accept both CAS (RawHash) and Id (Oid) identifiers as well as partial ranges
  - The legacy call still has a `chunks` array in the request body with IoHash entries providing CAS data for whole chunks only
  - The new call has a top level `Request` object in the request body with the following elements:
    - `SkipData`: bool, optional, default `false`
      - If `SkipData` is set to true we will not include the payload of the chunk, just the information requested for the chunk
    - `Chunks`: array of objects for the requests
      - `RawHash`: IoHash - Indicates we want a Cid chunk which is stored in CAS and identified by the RawHash of the chunk.
        - Mutually exclusive with `Oid`.
        - The value of the field will be used as the `Id` field in the chunk response.
      - `Oid`: IoChunkId - Indicates we want a data chunk via its Id. May be stored in CAS or be a reference to a loose file on disk.
        - Mutually exclusive with `RawHash`.
        - The value of the field will be used as the `Id` field in the chunk response.
      - `Offset`: uint64, optional - Partial request offset into the requested chunk. Default is from start of chunk (0).
      - `Size`: uint64, optional - Partial request size of the requested chunk. Default is to end of chunk (-1).
      - `ModTag`: uint64, optional - A tag indicating the modification version of the chunk.
        - If `SkipData` is set to false and the `ModTag` matches the current modification version in zenserver, the chunk response will only contain an `Id` field.
        - If `SkipData` is set to false and the `ModTag` does not match the current modification version in zenserver, the response will contain an `Id` field, the current `ModTag` and the chunk payload.
        - If `SkipData` is set to true and the `ModTag` matches the current modification version in zenserver, the chunk response will only contain an `Id` field.
        - If `SkipData` is set to true and the `ModTag` does not match the current modification version in zenserver, the response will contain an `Id` field and the current `ModTag`.
  - The new call responds with a zen Http CbPackage (with the header magic `0xaa77aacc`) with the following elements in the included package object:
    - `Chunks`: array of objects for the response
      - `Id`: IoHash or IoChunkId - This is the identifier used for the chunk - if the requested chunk can not be found by zenserver, the chunk will be omitted from the response `Chunks` array.
      - `Error`: string - If the chunk was found but there was an error processing the chunk to fulfill the request, the reason will be added as a descriptive string.
        - Mutually exclusive with `ModTag`, `Hash`, `RawHash` and `FragmentHash`.
      - `ModTag`: uint64 - This field is set if the `ModTag` for the chunk was not present in the request or the current `ModTag` for the chunk does not match the `ModTag` in the request.
        - Mutually exclusive with `Error`.
      - `Hash`: IoHash - If the response payload was found and `SkipData` is false and `ModTag` in the request does not match the current `ModTag`, this is the identifier for the attachment data.
        - A `Hash` field indicates that the response for the chunk is attached as uncompressed data.
        - If a range is given for the chunk, the attached data is limited to the requested range.
        - Mutually exclusive with `RawHash`, `FragmentHash` and `Error`.
      - `RawHash`: IoHash - If the response payload was found and `SkipData` is false and `ModTag` in the request does not match the current `ModTag`, this is the identifier for the attachment data.
        - A `RawHash` field indicates that the response for the chunk is attached as compressed data.
        - If a range is given for the chunk, this field will be replaced with a `FragmentHash` field.
        - Mutually exclusive with `Hash`, `FragmentHash` and `Error`.
      - `FragmentHash`: IoHash - If the response payload was found and `SkipData` is false and `ModTag` in the request does not match the current `ModTag`, this is the identifier for the attachment data.
        - A `FragmentHash` field indicates that the response for the chunk is a partial chunk as compressed data. Compressed data ranges will always be sent as full compressed blocks covering the requested range.
        - If `FragmentHash` is present, a `FragmentOffset` field will also be present.
        - Mutually exclusive with `Hash`, `RawHash` and `Error`.
      - `FragmentOffset`: uint64 - Indicates at which offset the partial chunk of compressed data starts. Compressed data is compressed into blocks, and ranges for compressed data will always send full compressed blocks.
        - `FragmentOffset` will be less than or equal to the requested `Offset`.
        - `FragmentOffset` plus the decompressed attachment size will always be greater than or equal to the requested `Offset` + `Size`.
        - Only present if `FragmentHash` is present.
      - `Size`: uint64 - If the uncompressed chunk response is not the full chunk, the `Size` field will contain the chunk's full size
        - Mutually exclusive with `RawHash` and `Error`.
      - `RawSize`: uint64 - If the compressed chunk response is not the full chunk, the `RawSize` field will contain the chunk's full decompressed size
        - Mutually exclusive with `RawHash`, `Hash` and `Error`.
- Feature: zen commands `oplog-export` and `oplog-import` now support a `--builds` remote target using the Jupiter builds API
  - `oplog-export`:
    - `builds` Jupiter builds api service endpoint to export to
    - `namespace` Name of namespace to store data to
    - `bucket` Name of bucket to store data to
    - `builds-id` Key to the stored oplog container as an Object-Id in hex string format, optional - if omitted a new Object Id will be generated
    - `builds-metadata-path` Path to json file that holds the metadata for the build
    - `builds-metadata` Key-value pairs separated by ';' with build metadata.
(key1=value1;key2=value2) - `openid-provider` Optional name of openid provider used to authenticate with, requires that the zen server instance has been provided with a oids refresh token for the provider name - `access-token` Optional JWT access token to authenticate with - `access-token-env` Optional name of environment variable that holds an JWT access token to authenticate with - `access-token-path` Read cloud access token from json file produced by OidcToken.exe - `disableblocks` Disable block creation and save all attachments individually - `disabletempblocks` Disable temp block creation and upload blocks without waiting for oplog container to be uploaded - `oplog-import`: - `builds` Jupiter builds api service endpoint to export import from - `namespace` Name of namespace to read data from - `bucket` Name of bucket to read data from - `builds-id` Key to the stored oplog container as an Object-Id in hex string format - `openid-provider` Optional name of openid provider used to authenticate with, requires that the zen server instance has been provided with a oids refresh token for the provider name - `access-token` Optional JWT access token to authenticate with - `access-token-env` - Optional name of environment variable that holds an JWT access token to authenticate with - `access-token-path` Read cloud access token from json file produced by OidcToken.exe - Improvement: Build release binaries with LTO on Windows/Mac - Improvement: Release binaries now build with "faster" instead of "smaller" optimization flags - Improvement: Improved payload validation in HttpClient and oplog import UE-230990 - Improvement: Self-hosted frontend; - Renamed "view" action to "list" in project's oplog list - Downloaded items are suffixed with the oplog entry's file name - Better styling of tables with a single column - Improvement: Fixed threading and lifetime issues in AuthMgr - Improvement: Adding an existing OpenId provider now preplaces any existing one if the parameters 
differ - Bugfix: Reduce background job description name for oplog import/export FORT-829408 - Bugfix: Properly handle trailing path separators in `zen workspace` and `zen workspace-share` commands UE-231677 - Bugfix: Long oplog entry keys could cause infinite loops in xxhash due to API mis-use. A workaround is in place which retains the old key mapping to maintain backwards compatibility with on-disk state ## 5.5.15 - Bugfix: Fix returned content type when requesting a project store chunk with non-compressed accept type - Bugfix: Fix path parsing using stale memory pointers when reading oplog index file UE-231759 - Bugfix: Ensure we don't throw exceptions from worker threads when doing oplog upload/download operations UE-225400 - Bugfix: Don't add workspace to config.json if the provided file system path does not exist UE-231801 - Improvement: Increase the smallest chunk size in block store to do batch reading for - Improvement: Block compact of oplogs from performance sensitive calls ## 5.5.14 - Feature: Added `--malloc` option for selecting which memory allocator should be used. Currently the available options are `--malloc=mimalloc`, `--malloc=rpmalloc`, `--malloc=stomp` or `--malloc=ansi`. Some of these options are currently not available on all targets, but all are available on Windows. `rpmalloc` is currently not supported on Linux or Mac due to toolchain limitations. - Feature: Added support for generating Unreal Insights-compatible traces for memory usage analysis. Currently only supported for Windows. Activate memory tracing by passing `--trace=memory` on the command line, alongside one of `--tracehost=` or `--tracefile=` to enable tracing over network or to a file. 
- Feature: Self-hosted dashboard displays oplog entry package data sizes in the tree and entry view pages
- Bugfix: Don't add RawSize and Size in the ProjectStore::GetProjectFiles response if we can't get the payload
- Bugfix: Use validation of payload size/existence in all chunk fetch operations in file cas
- Bugfix: Fixed deadlock in oplog snapshot
- Bugfix: Validate that we can actually fetch the payload when checking attachments in response to a `/prep` call
- Bugfix: Add missing exclusive ShardLock during compact in filecas
- Improvement: In project store oplog validate, make sure we can reach all the payloads
- Improvement: Add threading to the oplog validate request
- Improvement: For `/chunkinfos` and `/files` requests we will not fill in the `size` or `rawsize` in the response for chunks that are not found
- Improvement: For `/chunkinfos` and `/files` requests we will not fill in the `size` or `rawsize` in the response for chunks that are invalid compressed buffers when `rawsize` is requested
- Improvement: Tweaked how small chunks are batch-read into memory for different use cases. Project store `/chunkinfos` call execution time is 2x to 3x faster
- Improvement: Add a warning log when filecas fails to open a file it expects to be present
- Improvement: Httpclient: Add validation of response payloads that are compressed binary - attempt retry if applicable
- Improvement: Httpclient: Add validation of response payloads where the response contains a `X-Jupiter-IoHash` header - attempt retry if applicable

## 5.5.13

- Bugfix: Fix inconsistencies in filecas due to failing to remove a payload file during GC, causing "Missing Chunk" errors
- Bugfix: Fixed crash on corrupt attachment block when doing oplog import
- Bugfix: Fixed off-by-one in GetPidStatus (Linux) which could cause spurious errors

## 5.5.12

- Feature: Added option `gc-validation` to `zenserver` that does a check for missing references in all oplogs after a full GC. Enabled by default.
- Feature: Added option `gc-validation` to the `zen gc` command to control reference validation. Enabled by default.
- Feature: Self-hosted dashboard:
  - Searchable oplog
  - Oplog tree view
  - Added links to an oplog entry's dependencies
- Improvement: Added more details in the post-GC log
- Bugfix: Fixed race condition in oplog writes which could cause used attachments to be incorrectly removed by GC
- Bugfix: Fixed issue with ZenServerInstance::SpawnServer. Previously it would assign a child identifier twice, which could lead to confusing log output when running tests
- Bugfix: Fixed batch request not handling missing chunks correctly
- Bugfix: Fixed CorrelationId in oplog batch chunk fetch
- Bugfix: Fixed issue which could lead to inconsistent timestamp output within the first second of execution

## 5.5.11

- Bugfix: Fixed memory leak in cache (would leak GetBatchHandle instances)

## 5.5.10

- Improvement: Provide a shorter project store name in lists when not conflicting with other project names
- Improvement: `zen project-details` is now `zen project-op-details` and uses the standard resolving logic for project id and oplog id
- Improvement: If there is only one project in the project store, auto-resolve to that if no project id is given
- Improvement: Nicer progress bar during oplog import/export
- Improvement: Added progress bar for the `zen oplog-mirror` command
- Improvement: Verify that the oplog has not been deleted from disk behind our back
- Bugfix: If zenserver fails to pick up a request for a sponsor process, make sure to clear the slot
- Bugfix: Fix potential hole where a chunk could be lost during GC if the package referenced a chunk that already existed in the store
- Bugfix: Make op key and file path matching in `zen oplog-mirror` case insensitive

## 5.5.9

- Feature: Added command `zen cache-get` to fetch a cache value/record or an attachment from a cache record
- Feature: Added interpretation of the ReferencedSet op written by the cooker
  - The ReferencedSet specifies which ops in the oplog were reachable by the most recent incremental cook and should be staged
- Feature: Added options `--bucketsize` and `--bucketsizes` to `zen cache-info` to get data sizes in cache buckets and attachments
- Improvement: Self-hosted dashboard
  - Oplog entry view is more complete
  - Separate page for inspecting server stats
- Improvement: When batch-fetching cache values/attachments, only use block memory-buffering for items that may be memcached to reduce disk I/O
- Improvement: `zen project-drop` command defaults to dry-run to prevent accidental drops and provides information on how to actually perform the drop
- Improvement: Use a smaller thread pool during the pre-cache phase of GC to reduce memory pressure
- Improvement: Reworked workspace shares to be more secure. Workspaces and workspace shares can only be created using the `zen workspace` command; the http endpoint is disabled unless zenserver is started with the `--workspaces-allow-changes` option enabled
  - Each workspace is now configured via a `zenworkspaceconfig.json` file in the root of the workspace
  - A workspace can allow shares to be created via the http interface if the workspace is created with the `--allow-share-create-from-http` option enabled
  - A new http endpoint at `/ws` - issuing a `Get` operation will get you a list of workspaces
  - A new http endpoint at `/ws/refresh` - issuing a `Get` will make zenserver scan for edits in workspaces and workspace shares
- Bugfix: Parse filenames as UTF-8 strings when mirroring files inside an oplog using `zen oplog-mirror` and `zen vfs`
- Bugfix: Properly initialize stats for file/jupiter/zen remote project store - fixes stats log output when exporting and importing oplogs
- Bugfix: Properly calculate month for last gc times
- Cleanup: Removed the GCv1 implementation

## 5.5.8

- Feature: Added option `gc-attachment-passes` to zenserver
  - Limit the range of unreferenced attachments included in the GC check by breaking it into passes. Default is one pass which includes all the attachments.
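The validation pass above amounts to a reachability check: every attachment referenced by an oplog must still exist in the store. As an illustration only (the function name and data shapes below are hypothetical, not zenserver internals), the core of such a check is a set difference:

```python
# Illustration of a post-GC missing-reference check: every attachment hash
# referenced by an oplog entry must still exist in the store.
# Hypothetical data shapes, not zenserver internals.
def find_missing_references(oplog_refs: dict[str, set[str]],
                            stored: set[str]) -> dict[str, set[str]]:
    """Map oplog name -> set of referenced hashes missing from the store."""
    missing = {name: refs - stored for name, refs in oplog_refs.items()}
    return {name: refs for name, refs in missing.items() if refs}

stored = {"aa11", "bb22", "cc33"}
refs = {"cooked": {"aa11", "bb22"}, "staged": {"cc33", "dd44"}}
print(find_missing_references(refs, stored))  # {'staged': {'dd44'}}
```

An empty result means the GC pass left no dangling references.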
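The pass scheme above can be illustrated with a small sketch. Assuming, purely for illustration (this is not zenserver's actual implementation), that each pass covers a disjoint slice of the attachment hash space, splitting on the leading hash byte might look like:

```python
# Illustration only: one way to split attachment GC work into passes by
# partitioning the leading byte of the attachment hash. The function name
# and partitioning strategy are hypothetical, not zenserver internals.
def gc_pass_ranges(passes: int) -> list[tuple[int, int]]:
    """Return half-open [low, high) ranges over the leading hash byte."""
    step = 256 // passes
    ranges = [(i * step, (i + 1) * step) for i in range(passes)]
    ranges[-1] = (ranges[-1][0], 256)  # last pass absorbs any remainder
    return ranges

# With the default of one pass, every attachment falls in a single range.
print(gc_pass_ranges(1))  # [(0, 256)]
print(gc_pass_ranges(4))  # [(0, 64), (64, 128), (128, 192), (192, 256)]
```

Each pass then only has to hold bookkeeping for its slice of the attachments, which bounds peak memory during the GC check.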
- Feature: Added options to `zen gc` to filter which attachments get checked for GC - `--reference-low` attachments with a hash higher than this will not be considered for GCv2 - `--reference-high` attachments with a hash lower than this will not be considered for GCv2 - Feature: Added `zen cache-gen` command to generate large amount of cache data for testing - Feature: Initial release of a self-hosted HTML dashboard - Feature: Added `--cache-attachment-store` and `--projectstore-attachment-store` options to `zen gc` command to override settings for attachment meta store during GCv2 - Improvement: Refactored GCv2 to reduce time we block write requests, trading for longer overall GCv2 time - Reduces time writes are blocked by ~8 times, going from 6s to 0.7s on a large data set - Increases time for GCv2 to execute (without blocking reads/writes) by ~1.7 times, going from 10.4s to 17.6s on a large data set - Improvement: Cleaned up GCv2 start and stop logs and added identifier to easily find matching start and end of a GCv2 pass in log file - Improvement: Optimizations to GCv2 leading to 50-100% performance increase when handling large data sets - Improvement: Added a namespace-qualified RPC endpoint for z$ at `/z$//$rpc` which may be used to validate RPC requests by URL inspection - Improvement: Faster memcache trimming - Reduce calculations while holding bucket lock for memcache trim analysis to reduce contention - When trimming memcache, evict 25% more than required to reduce frequency of trimming - When trimming memcache, don't repack memcache data vector, defer that to regular garbage collection - When trimming memcache, deallocate memcache buffers when not holding exclusive lock in bucket - Improvement: Optimized startup performance - reduced startup time by ~30% for large data sets - Improvement: During block store compact, merge small blocks even if no new unused data is found - Improvement: Self-hosted dashboard - Bugfix: Symbol resolution could fail on 
Windows depending on the process' current working directory - Bugfix: Don't block disk cache buckets from writing when scanning records for attachment references - Bugfix: Make sure we clear active write block of block store if we get exception when writing to disk ## 5.5.7 - Bugfix: Fix race condition in zenserver during batched fetch of small cache chunks (<=1024b) resulting in missing rawhash/rawsize in the response. UE-223703 - Improvement: Don't add batch overhead if we are only going to put once cache value in a request - Improvement: Removed redundant commands `project-delete` and `oplog-delete`. Use already existing `project-drop` instead. - Improvement: zen oplog commands `project-drop`, `project-info`, `oplog-create`, `oplog-import`, `oplog-mirror` can now help resolve partial project and oplog identifiers - Improvement: zen `oplog-mirror` command now has new filter options to control which files are realized to disk: `--key` for op key, `--file` for file path matching and `--chunk` for chunk id matching - Improvement: Validate oplog log file and wipe state if it is corrupt before attempting to open - Improvement: `project-drop` command defaults to `--dry-run=true` and will only delete the target if `--dry-run=false` is added to the command line to avoid accidental delete - Improvement: Gracefully handle oplog being deleted during early phase of GC - Improvement: Optional paging when iterating oplog entries ## 5.5.6 - Bugfix: Make sure `noexcept` functions does not leak exceptions via ASSERT statements which causes crash via abort - Bugfix: If we fail to move a temporary file into place, try to re-open the file so we clean it up - Improvement: Added CPU trace scope "Z$::Bucket::PutBatched" - Improvement: Cleaned up catch of exception in noexcept functions - Improvement: Removed `--owner-pid` command line option for `zen up` - send is as a pass through option after `--` instead if needed - Improvement: Added `--show-console` option to `zen up` to control 
if zenserver should open a visible console window - default is `false` - Improvement: Added `--show-log` option to `zen up` to force output of zenserver log on launch success - default is `false` - Improvmenet: Made `zen down` more robust - Improvement: Added detection of zombie processes on Mac/Linux when waiting for process to exit - Improvement: Don't keep all oplogs open after GC, close them when references are fetched unless they are open by client - Improvement: Move GC logging in callback functions into "gc" context - Improvement: Clean up cache bucket log files at startup as we store the matching information in the index snapshot for the bucket - Feature: Added option `--gc-cache-attachment-store` which caches referenced attachments in cache records on disk for faster GC - default is `false` - Feature: Added option `--gc-projectstore-attachment-store` which caches referenced attachments in project store oplogs on disk for faster GC - default is `false` - Feature: Added project store oplog index snapshots for faster opening of oplog - opening oplogs are roughly 10x faster ## 5.5.5 - Improvement: Log reasons for failing to read cache bucket sidecar file - Improvement: If we fail to read access times for a project it now issues a warning instead of error - Improvement: If a zenserver instance is already using our named mutex - exit with error code instead of reporting error to Sentry - Improvement: Gracefully handle scenarios where sponsor processes running status can not be checked - Improvement: Split worker pools into "burst" and "background" to prevent background job from starving client requests that uses thread pools - Improvement: Add zenserver session id to Sentry context - Bugfix: If we fail to get information on a cache chunk for a partial chunk request - treat it as a cache miss - Bugfix: Set last GC time when we skip GC due to low disk space to avoid spam-running GC - Bugfix: Make sure we lock project and verify directory exists before trying to 
iterate to find oplogs - Bugfix: Failure to get size or if size mismatch of a file in filecas now results in overwrite without error report - Bugfix: Handle "path not found" error when trying to traverse a directory - Bugfix: Remove bad ASSERT in cache batch get ## 5.5.4 - Feature: Added new option to zenserver for GC V2 - `--gc-single-threaded` GCV2 - force GC operation to run in single-threaded mode to help with debugging, default is off - Feature: Added new options to `zen gc` command for GC V2 - `--single-threaded` GCV2 - force GC operation to run in single-threaded mode to help with debugging, default is off - Feature: Project store now compacts oplog container at first open when 50% of storage is unused and during GCv2 when more than 25% is unused - Feature: Added `--detach` option to zenserver which controls the call to `setsid()` on Mac/Linux for better ctrl-c behaviour when running it using `xmake run` - Bugfix: Absolute paths and paths going outside the root path for workspace shares are now blocked - Bugfix: Fix ASSERT that would trigger in GC under certain conditions if source block was empty - Bugfix: Skip and report invalid configurations for workspaces instead of crashing - Bugfix: Report back error to http caller if removal of oplog fails - Bugfix: If a cache bucket value fails validation - don't try to memcache the empty buffer - Bugfix: Skip chunk in block stores when iterating a block if the location is out of range - Bugfix: Make sure we use the full max block size when compacting block store and not create a new block for each input block - Improvement: `xmake run zenserver ...` now passes in `--detach=false` to allow subprocesses to be terminated using Ctrl-C (previously, zenserver instances would linger) - Improvement: `zen workspace-share create` now resolves relative root paths to absolute paths - Improvement: Add better output/logging when failing to initialize shared mutex - Improvement: Validate data when gathering attachments in GCv2 - 
Improvement: Add file and size of file when reading a iobuffer into memory via ReadFromFileMaybe - Improvement: Add hardening to gracefully handle malformed oplogs in project store - Improvement: Catch exceptions in threaded work to avoid uncaught exception errors - Improvement: Make oplog/project removal more robust - Improvement: Made LSN number for project store oplog an unsigned 32 bit value at the top layer to increase range - Improvement: A request for an out of range project store chunk will now result in a "not found" status result - Improvement: Callstack logs on assert/error is now indented for better readability - Improvement: Full command line is now logged on startup and added as context information in Sentry reports - Improvement: Retry writing GC state if it fails to handle transient problems - Improvement: If GC operation fails we report an error, after that errors are demoted to warnings until we manage to do a successful GC run. This is to reduce spam on Sentry. - Improvement: Removed stats for CId entries when doing GCv2 without cleaning up CId references - Improvement: Enabled mimalloc on arm64 - Improvement: Enabled Sentry support on arm64 ## 5.5.3 - Feature: New 'workspaces' service which allows a user to share a local folder via zenserver. A workspace can have mulitple workspace shares and they provie an HTTP API that is compatible with the project oplog HTTP API. Workspaces and shares are preserved between runs. Workspaces feature is disabled by default - enable with `--workspaces-enabled` option when launching zenserver. - New http service endpoint `/ws` - `/ws/{workspace_id}` - manage workspaces. {workspace_id} is a Object Id - PUT - add a workspace. Set `{workspace_id}` to `000000000000000000000000` to automatically generate id from `root_path` - Parameter: `root_path` the root local file path of the workspace. - GET - get information about a workspace - DELETE - removes a workspace (does not affect the files on disk). 
Remove all shares inside the workspace - `/ws/{workspace_id}/{share_id}` - manage folder shares within a workspaces. {share_id} is a Object Id - PUT - add a workspace. Set `{share_id}` to `000000000000000000000000` to automatically generate id from `share_path` - Parameter: `share_path` the relative local file path to the workspace root path. - Parameter: `alias` a names alias for the share to be used with the `/ws/share/{alias}` url - GET - get information about a workspace share - DELETE -removes a workspace share (does not affect the files on disk) - `/ws/{workspace_id}/{share_id}/files` - GET - get the files in the workspace share - Parameter: `fieldnames` is a parameter to filter which fields to include in the response. Default fields are: `id`, `clientpath` and `serverpath` (translates to `?fieldnames=id,clientpath,serverpath`). Use `?fieldnames=*` to get all the fields. - Parameter: `filter` only applies if `fieldnames` is not given as a parameter. Use `?filter=client` to exclude the `serverpath` - `/ws/{workspace_id}/{share_id}/entries` - GET - get share as oplog entries - Parameter: `fieldfilter` is a parameter to filter which fields to include in the response. Default fields are: `id`, `clientpath`, `serverpath` (translates to `?fieldnames=id,clientpath,serverpath`). Use `?fieldfilter=*` to get all the fields - Parameter: `filter` only applies if `fieldfilter` is not given as a parameter. Use `?filter=client` to exclude the `serverpath` - Parameter: `opkey` limits which ops to include, for a workspace share the only valid value for this parameter is `file_manifest` - `/ws/{workspace_id}/{share_id}/{chunk}` - GET - get the content of a file in a file share - Parameter: `offset` the start of the range of the chunk - Defaults to zero - Parameter: `size` the size of the range of the chunk - Defaults to entire size - `/ws/{workspace_id}/{share_id}/{chunk}/info` - GET - get properties of the chunk. Currently only includes the size of the chunk. 
- `/ws/{workspace_id}/{share_id}/batch` - POST - do a batch request for multiple chunks in a workspace share - `/ws/share/{alias}/` - a shortcut to `/ws/{workspace_id}/{share_id}/` based endpoints using the alias for a workspace share. - New `zen workspace {subcommand}` command to manage workspaces - `create` - creates a new workspace - `--root-path` - the root local file path of the workspace - `--workspace` - the object id identity of the workspace, omit to automatically generate an id from the `--root-path` parameter - `info` - gets information about a workspace - `--workspace` - the object id identity of the workspace - `remove` - removes an existing workspace - `--workspace` - the object id identity of the workspace - New `zen workspace-share {subcommand}` command to manage workspace shares - `create` - creates a new workspace - `--workspace` - the object id identity of the workspace to add the share to - `--root-path` - the root local file path of the workspace - if given it will automatically create the workspace before creating the share. 
If `--workspace` is omitted, an id will be generated from the `--root-path` parameter - `--share-path` - the relative local file path to the workspace root path - `--share` - the object id identity of the workspace share, omit to automatically generate an id from the `--share-path` parameter - `--alias` - a name for the workspace share that can be accessed via the `/ws/share/` endpoint - `info` - gets information about a workspace share - `--workspace` - the object id identity of the workspace to add the share to - `--share` - the object id identity of the workspace share - `--alias` - alias name of the share - replaces `--workspace` and `--share` options - `remove` - removes an existing workspace share - `--workspace` - the object id identity of the workspace to add the share to - `--share` - the object id identity of the workspace share - `--alias` - alias name of the share - replaces `--workspace` and `--share` options - `files` - get a list of files in a workspace share - `--workspace` - the object id identity of the workspace to add the share to - `--share` - the object id identity of the workspace share - `--alias` - alias name of the share - replaces `--workspace` and `--share` options - `--filter` - a list of comma separated fields to include in the response - `--refresh` - for a refresh by re-reading the content of the local folder for the share - `entries` - get ops for a workspace share - `--workspace` - the object id identity of the workspace to add the share to - `--share` - the object id identity of the workspace share - `--alias` - alias name of the share - replaces `--workspace` and `--share` options - `--opkey` - filter the entries using an opkey - `--filter` - a list of comma separated fields to include in the response - `--refresh` - for a refresh by re-reading the content of the local folder for the share - `get` - get a chunk from a workspace share - `--workspace` - the object id identity of the workspace to add the share to - `--share` - the 
object id identity of the workspace share - `--alias` - alias name of the share - replaces `--workspace` and `--share` options - `--chunk` - the chunk id for the chunk or the share local path for the chunk - `--offset` the start of the range of the chunk - Defaults to zero - `--size` the size of the range of the chunk - Defaults to entire size - `batch` - get multiple chunks from a workspace share - `--workspace` - the object id identity of the workspace to add the share to - `--share` - the object id identity of the workspace share - `--alias` - alias name of the share - replaces `--workspace` and `--share` options - `--chunks` - the chunk ids for the chunk or the share local paths for the chunk - Bugfix: Removed test data added at current folder when running test - Bugfix: Make sure we monitor and include new project/oplogs created during GCv2 - Bugfix: Make sure we monitor and include new namespaces/cache buckets created during GCv2 - Improvement: Various minor optimizations in cache package formatting - Improvement: Add batch fetch of cache values in the GetCacheValues request - Improvement: Use a smaller thread pool for network operations when doing oplog import to reduce risk of NIC/router failure - Improvement: Medium worker pool now uses a minimum of 2 threads (up from 1) - Improvement: Don't try to cache process handles on Windows if we don't have a session id ## 5.5.2 - Bugfix: Don't try to read bytes to validate a compact binary object that is empty - Bugfix: Don't free fake memory buffer pointer when materializing a zero size file - Bugfix: Fix crash when iterating chunks in project store causing crash due threading issues - Bugfix: If we get a request for a partial chunk that can not be fulfilled we warn and treat it as a miss - Bugfix: Correctly calculate memory view size from Mid/MidInline function if size is not given - Improvement: Asserts gives an immediate ERROR log entry with callstack and reason - Improvement: Asserts flushes the log before 
sending error report to Sentry
- Improvement: Refactored IterateChunks to allow reuse in diskcachelayer and hide public GetBlockFile() function in BlockStore
- Improvement: Don't use "error:" in log messages unless there is an actual error, as Horde CI will pick up that log line and interpret it as an error

## 5.4.5

- Bugfix: If we get a request for a partial chunk that cannot be fulfilled, we warn and treat it as a miss
- Bugfix: Correctly calculate memory view size from the Mid/MidInline functions if size is not given

## 5.5.1

- Bugfix: Remove extra loop causing GetProjectFiles for the project store to find all chunks once for each chunk found
- Bugfix: Don't capture the ChunkIndex variable in CasImpl::IterateChunks by reference, as it causes a crash
- Bugfix: Don't try to respond with a zero-size partial cache value when the partial size is zero
- Improvement: Make FileCasStrategy::IterateChunks (optionally) multithreaded (improves GetProjectFiles performance)
- Improvement: Add batch scope for adding multiple cache values from a single request efficiently
- Improvement: Use temp file write and move into place for manifest/state files to avoid partial incomplete file writes
- Improvement: Added more validation of data read from cache / cas
- Improvement: We now detect if we are running on Wine and change the default http path to asio, since Wine does not implement http.sys properly

## 5.5.0

- Change: GCv2 is now the default option, use `--gc-v2=false` to fall back to GCv1
- Bugfix: Correctly calculate size freed/data moved from blocks in GCv2
- Bugfix: Only disable oplog update capture if we have started it
- Bugfix: Harden GCv2 when errors occur and gracefully abort the GC operation on error
- Bugfix: Always pre-cache the oplog when creating the project store GCv2 referencer
- Bugfix: Fix potential deadlock in the project store GCv2 referencer
- Bugfix: Added dedicated timer for EnqueueStateExitFlagTimer
- Bugfix: Made log formatter `fullformatter` output consistent time stamps across threads
- Bugfix: Made Linux/Mac event implementation TSAN clean
- Bugfix: Properly set content type of chunks fetched from CidStore
- Feature: `zen up` command improvements
  - `--port` allows you to specify a base port when starting an instance
  - `--base-dir` allows you to specify a base directory for the zenserver executable if it is not located next to the zen.exe executable
- Feature: `zen down`
  - `--port` allows you to specify a base port when shutting down an instance
  - `--base-dir` allows you to specify a base directory for the zenserver executable if it is not located next to the zen.exe executable
  - `--force` if regular shutdown fails, it tries to find a running zenserver.exe process and terminate it
  - `--data-dir` to specify a data directory to deduce which zen instance to bring down
  - If it fails to attach to the running server, it now waits for it to exit when setting the RequestExit shared memory flag
- Feature: `zen status`
  - `--port` filter running zen instances based on port
  - `--data-dir` filter running zen instances based on information in the data directory
- Feature: `zen attach`
  - `--data-dir` to specify a data directory to deduce which zen instance to attach to
- Improvement: zenserver now checks the RequestExit flag in the shared memory and exits gracefully if it is set
- Improvement: When adding a sponsor process to a running zenserver instance, we wait for it to be picked up from the shared memory section to determine success/failure
- Improvement: Reduced details in remote store stats for oplog export/import shown to the user
- Improvement: Transfer speed for oplog export/import is now an overall number rather than an average of speed per single request
- Improvement: Add validation of input buffer size when trying to parse a package message
- Improvement: Avoid doing a memcopy when parsing a package message
- Improvement: Detect zombie processes on Mac/Linux when checking for running processes
- Improvement: Make sure zenserver detaches itself as a child process at startup to avoid a zombie process if the parent process does not wait for the zenserver child process
- Improvement: Trying to load a compact binary object from an empty file no longer causes an access violation
- Improvement: When importing oplogs we now import all attachments first and (optionally clean) write the oplog on success to avoid invalid import results
- Improvement: Capture launched zenserver output to display to the user if launch fails
- Improvement: Add disk buffering in the http client (improves download speed for oplog import)
- Improvement: Add block hash verification for blocks received at oplog import
- Improvement: Offload block decoding and chunk writing from network worker pool threads (improves download speed for oplog import)
- Improvement: Add batching when writing multiple small chunks to the block store; decreases I/O load significantly on oplog import
- Improvement: Reworked GetChunkInfos in the oplog store to reduce disk thrashing and improve performance
- Improvement: Bumped xmake to 2.9.1 and vcpkg version to 2024.03.25
- Improvement: Refactor `IoHash::HashBuffer` and `BLAKE3::HashBuffer` to not use memory mapped files. Performs better and saves ~10% of oplog export time on CI
- Improvement: Add IterateChunks(std::span) for better performance in get oplog

## 5.4.4

- Bugfix: Get raw size for compressed chunks correctly for `/prj/{project}/oplog/{log}/chunkinfos`
- Bugfix: Fix log of Success/Failure for oplog import
- Bugfix: Use proper API when checking oplog export blob existence in Jupiter
- Improvement: It is now possible to control which fields to include in a `/prj/{project}/oplog/{log}/chunkinfos` request by adding a comma-delimited list of field names for the `fieldnames` parameter
  - Default fields are: `id`, `rawhash` and `rawsize` (translates to `?fieldnames=id,rawhash,rawsize`)
  - Use `?fieldnames=*` to get all the fields
- Improvement: It is now possible to control which fields to include in a `/prj/{project}/oplog/{log}/files` request by adding a comma-delimited list of field names for the `fieldnames` parameter
  - Default fields are: `id`, `clientpath` and `serverpath` (translates to `?fieldnames=id,clientpath,serverpath`), `filter=client` only applies if `fieldnames` is not given as a parameter
  - Use `?fieldnames=*` to get all the fields
- Improvement: Use multithreading to fetch size/rawsize of entries in `/prj/{project}/oplog/{log}/chunkinfos` and `/prj/{project}/oplog/{log}/files`
- Improvement: Use HttpClient when doing oplog export/import with a zenserver as a remote target. Includes retry logic
- Improvement: Increase the retry count to 4 (5 attempts in total) when talking to Jupiter for oplog export/import
- Improvement: Optimize `CompressedBuffer::GetRange()` with a new `CompressedBuffer::ReadHeader()` that does one less read from source data, resulting in a 30% perf increase.
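The `fieldnames` parameter described above can be illustrated with a small sketch. This is not zenserver code; the `chunkinfos_url` helper, host, project, and oplog names are hypothetical, and only the default field list is taken from the notes above:

```python
# Illustrative sketch (not zenserver source): building a chunkinfos request
# URL with the `fieldnames` query parameter. Host/project/oplog names are
# placeholders; 8558 is the default zenserver port per this changelog.

DEFAULT_FIELDS = ("id", "rawhash", "rawsize")  # server defaults per the notes above

def chunkinfos_url(host, project, log, fields=None):
    """Return the chunkinfos endpoint URL, appending ?fieldnames= when a
    non-default field selection is requested ('*' selects all fields)."""
    url = f"{host}/prj/{project}/oplog/{log}/chunkinfos"
    if fields is not None:
        url += "?fieldnames=" + ",".join(fields)
    return url

print(chunkinfos_url("http://localhost:8558", "demo", "cook"))
print(chunkinfos_url("http://localhost:8558", "demo", "cook", ["id", "rawsize"]))
print(chunkinfos_url("http://localhost:8558", "demo", "cook", ["*"]))
```

Omitting the parameter keeps the server defaults, which is equivalent to `?fieldnames=id,rawhash,rawsize`.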
- Improvement: Validate lengths of chunks fetched from CAS/Cache store; if the full chunk cannot be retrieved, treat it as missing
- Improvement: Add file and line to ASSERT exceptions
- Improvement: Capture the call stack when throwing assert exceptions and log/output the call stack at important places to provide more context to the caller
- Improvement: Hardening of ParsePackageMessage and added extended details of all malformed attachments detected
- Improvement: Allow the import-oplog command to include a GC marker path as an argument for when it creates the destination oplog
- Improvement: Oplog export and import now write out statistics about requests and transfer speeds at the end of a successful execution

## 5.4.3

- Bugfix: Fix sentry using the wrong folder path when the data path contains non-ascii characters UE-210530
- Improvement: Faster reading of compressed buffer headers by not materializing the entire source buffer
- Bugfix: Get raw size for compressed chunks correctly for `/prj/{project}/oplog/{log}/chunkinfos`
- Improvement: It is now possible to control which fields to include in a `/prj/{project}/oplog/{log}/chunkinfos` request by adding a comma-delimited list of field names for the `fieldnames` parameter
  - Default fields are: `id`, `rawhash` and `rawsize` (translates to `?fieldnames=id,rawhash,rawsize`)
  - Use `?fieldnames=*` to get all the fields
- Improvement: It is now possible to control which fields to include in a `/prj/{project}/oplog/{log}/files` request by adding a comma-delimited list of field names for the `fieldnames` parameter
  - Default fields are: `id`, `clientpath` and `serverpath` (translates to `?fieldnames=id,clientpath,serverpath`), `filter=client` only applies if `fieldnames` is not given as a parameter
  - Use `?fieldnames=*` to get all the fields
- Improvement: Use multithreading to fetch size/rawsize of entries in `/prj/{project}/oplog/{log}/chunkinfos` and `/prj/{project}/oplog/{log}/files`
- Improvement: Add `GetMediumWorkerPool()` in addition to `LargeWorkerPool()` and `SmallWorkerPool()`

## 5.4.2

- Bugfix: Shared memory for zenserver state may hang around after all zenserver processes exit; make sure we find a valid entry in `zen up` before bailing
- Bugfix: Httpasio only calls listen() once
- Bugfix: Make sure exceptions do not leak out of async (worker thread pool) work, and make sure we always wait for completion of all work
- Bugfix: Limit the number of headers parsed to 127 as that is the maximum supported by Zen
- Bugfix: Don't capture for-loop variables by reference when executing async code
- Bugfix: Make sure WriteFile() does not leave incomplete files
- Bugfix: Use TemporaryFile and MoveTemporaryIntoPlace to avoid leaving partial files on error
- Bugfix: Install the Ctrl+C handler earlier when doing `zen oplog-export` and `zen oplog-import` to properly cancel jobs
- Bugfix: Fix startup issues where the data path contains non-ascii characters UE-210530
- Feature: Added option `--access-token-path` to `zen oplog-export` and `zen oplog-import`, enabling them to read a cloud access token from a json file
- Feature: Added support for generating yaml format responses via Accept: yaml or a .yaml suffix
- Improvement: Add ability to block a set of CAS entries from GC in the project store
- Improvement: Httpasio explicitly closes acceptor sockets
- Improvement: Httpasio adds retry for the desired port
- Improvement: Move structuredcachestore tests to zenstore-test
- Improvement: CompositeBuffer::Mid no longer materializes segment buffers
- Improvement: Don't materialize entire files when hashing large files
- Improvement: Added context to some http.sys warnings caused by HTTP API error returns
- Improvement: Improved logging for block store GCV2 operations
- Improvement: Added more tests for GCV2 (added GCV2 versions of existing tests)
- Improvement: Add disk cache to reading and writing blocks when moving data in GCV2
- Improvement: Cleaned up some asio server state machine details (minor)
- Improvement: Added support for request tracing when using the asio path (use `--log-trace=http_requests` to enable)
- Improvement: Large attachments and loose files are now split into smaller chunks and stored in blocks during oplog export
- Improvement: Make sure zenserver reacts and exits on the SIGTERM signal
- Improvement: Retry creating the .lock file at startup to avoid failing launch due to a race condition with UE
- Improvement: Add CompressedBuffer::GetRange that references source data rather than making a memory copy
- Improvement: Delay exiting due to no sponsor processes by one second to handle race conditions
- Improvement: Safer IsProcessRunning check
- Improvement: Make sure we can RequestApplicationExit safely from any thread
- Improvement: Check if a block exists in the remote store before considering it for reuse in oplog export
- Improvement: Add a limit to the number of times we attempt to finalize an exported oplog
- Improvement: Switch to the large thread pool when executing oplog export/import
- Improvement: Clean up reporting of missing attachments in oplog export/import
- Improvement: Remove double-reporting of abort reason for oplog export/import
- Improvement: Add support for filtering projectstore `entries` requests using `fieldfilter` where the wanted fields are comma (,) delimited
- Improvement: Add support for responding with compressed payloads for projectstore `entries` requests by adding AcceptType `compressed-binary` to the request header
- Improvement: Add support for responding with compressed payloads for projectstore `files` requests by adding AcceptType `compressed-binary` to the request header
- Improvement: Add support for responding with compressed payloads for projectstore `chunkinfo` requests by adding AcceptType `compressed-binary` to the request header
- Removed: The `--cache-reference-cache-enabled` option has been removed along with the implementation for reference caching in the disk cache

## 5.4.1

- Feature: Added `--copy-log`, `--copy-cache-log` and `--copy-http-log` options to the zen `logs` command to copy logs from a local running zenserver instance
- Improvement: More details in oplog import/export logs
- Improvement: Switch from Download to Get when fetching Refs from Jupiter, as they can't be resumed anyway and streaming to disk is redundant
- Improvement: Return a system error code on exception throw in the zen command
- Improvement: Clean up HttpClient::Response::ErrorMessage to remove redundant ": "
- Improvement: Respond with a BadRequest result instead of throwing an exception and causing a Sentry report on bad request input
- Improvement: Speed up oplog export by fetching/compressing big attachments on demand
- Improvement: Speed up oplog export by batch-fetching small attachments
- Improvement: Speed up oplog import by batching writes of oplog ops
- Improvement: Tweak oplog export default block size and embed size limit
- Improvement: Add more messaging and progress during oplog import/export
- Improvement: Large loose file attachments are now saved to temp files after compressing during oplog export to reduce memory pressure
- Improvement: Keep track of added ops during GCV2 instead of rescanning the full oplog when added ops are detected
- Bugfix: Make sure we clear the read callback when doing Put in HttpClient to avoid a timeout due to not sending data when reusing sessions
- Bugfix: Respect `--ignore-missing-attachments` in the `oplog-export` command when a loose file is missing on disk
- Bugfix: Only try to traverse an objectstore bucket if it really exists
- Bugfix: Actually throw an exception if we can't parse the JobId when starting an async job
- Bugfix: Implement two listening sockets in ASIO (ipv4+ipv6) when either we start with `--http-forceloopback` or we resort to that mode because of a failure to bind to the "any" address

## 5.4.0

- Improvement: Add details when reading from BasicFile fails (number of bytes read is not the expected size)
- Bugfix: No longer outputs illegal characters when Jupiter responds with an error payload in compact binary format

## 0.2.39

- Feature: Add `--ignore-missing-attachments` to the `oplog-import` command
- Feature: Add `--ignore-missing-attachments` to the `oplog-export` command
- Improvement: Removed use of in stats, for better performance (runtime as well as build)
- Improvement: Separated cache RPC handling code from general structured cache HTTP code
- Improvement: Get more detailed information on Jupiter upstream errors
- Improvement: Improved performance when saving an oplog via the oplog import command
- Improvement: Add more feedback and progress information when executing oplog import/export
- Improvement: Refactored Jupiter upstream to use HttpClient
- Improvement: Added retry and resume logic to HttpClient
- Improvement: Added authentication support to HttpClient
- Improvement: Clearer logging in GCV2 compact of FileCas/BlockStore
- Improvement: Size details in oplog import logging
- Improvement: Reduce oplog block size to 64MB to reduce the amount of redundant chunks to download
- Bugfix: RPC recording would not release memory as early as intended, which resulted in memory buildup during long recording sessions. Previously certain memory was only released when recording stopped; now it gets released immediately when a segment is complete and written to disk.
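Several entries above mention retry logic in HttpClient (for example, a retry count of 4 for 5 attempts in total against Jupiter). A minimal, hypothetical sketch of such a retry-with-backoff loop, not the actual HttpClient implementation:

```python
import time

def with_retries(request_fn, attempts=5, base_delay=0.5):
    """Run request_fn, retrying on transient errors with exponential backoff.
    attempts=5 mirrors the '4 retries, 5 attempts in total' policy mentioned
    above; the helper itself is illustrative only, and ConnectionError stands
    in for whatever transient HTTP error the real client distinguishes."""
    last_error = None
    for attempt in range(attempts):
        try:
            return request_fn()
        except ConnectionError as error:
            last_error = error
            time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    raise last_error
```

A real client would also cap the backoff and treat only specific status codes (e.g. 5xx, timeouts) as retryable, while failing fast on 4xx responses.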
- Bugfix: File log format now contains dates again (PR #631)
- Bugfix: Jobqueue: Allow multiple threads to report progress/messages (oplog import/export)
- Bugfix: Jobqueue: Add AbortReason and properly propagate errors when running an async command (oplog import/export)
- Bugfix: Make sure to write the correct data in BasicFileWriter when writing items that are not a multiple of the buffer size
- Bugfix: Restructured the RPC recorder to use an MPSC queue instead of shared data structures to fix a data race condition

## 0.2.38

- Bugfix: Cache RPC recording would drop data when it reached 4GB of inline chunk data in a segment
- Bugfix: Fixed thread safety issues in RPC recorder v2
- Bugfix: `IoBuffer::Materialize` would leak memory for small buffers
- Bugfix: Fix crash bug when trying to inspect a non-open block file in GC
- Bugfix: Fixed up code so we can build everything even when trace support is disabled
- Bugfix: Make sure we initialize the pattern of FileSink before it is added as a usable logger
- Bugfix: Fixed capture of loop-local variables in lambdas for GCv2
- Bugfix: Various minor TSAN/ASAN fixes (see PR #622)
- Improvement: Cache RPC replay can now process partial recordings by recovering metadata from available files
- Improvement: Cache RPC recording now limits the duration of individual segments to 1h
- Improvement: Made RPC replay command line parsing more robust by ensuring at least one processing thread is in use
- Improvement: Windows executables are now signed with the official cert when creating a release
- Improvement: Each block in the block store that is rewritten will now be logged for better feedback

## 0.2.37

- Bugfix: ShutdownLogging code would throw an exception if it was called before everything had been initialised properly
- Bugfix: Reorder shutdown to avoid a crash due to late async log messages (spdlog workaround)
- Bugfix: Correctly calculate peak disk write size in the GC status message
- Bugfix: Skip invalid chunks in block store GC when moving existing chunks
- Bugfix: Don't use a copy of the Payloads array when fetching a memcached payload in GC
- Bugfix: Make sure IoBuffer is a valid null-buffer after a move operation
- Improvement: Adjusted and added some trace scopes

## 0.2.36

- Feature: Added xmake task `updatefrontend` which updates the zip file containing the frontend html (`/src/zenserver/frontend/html.zip`)
- Feature: Added `--powercycle` option to zenserver which causes it to shut down immediately after initialization is completed. This is useful primarily for profiling startup/shutdown but could also be useful for some kinds of validation/state upgrade scenarios
- Feature: New endpoint `/admin/gc-stop` to cancel a running garbage collect operation
- Feature: Added `zen gc-stop` command to cancel a running garbage collect operation
- Feature: Added the ability to configure logger verbosity on the command line. You can now use `--log-debug=http_requests` to configure the `http_requests` logger to DEBUG level. The provided options are `--log-trace`, `--log-debug`, `--log-info`, `--log-warn`, `--log-error`, `--log-critical`, `--log-off`, and each accepts a comma-separated list of logger names to apply the threshold to.
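The per-logger verbosity options above follow a simple `--log-<level>=name1,name2` shape. As a rough sketch of that shape (a hypothetical helper, not the zenserver option parser), parsing them into a logger-to-level map could look like:

```python
# Illustrative sketch (not zenserver source): mapping the per-logger
# verbosity options described above onto a {logger_name: level} dict.
# The level names are taken from the option list in the notes above.
LEVEL_OPTIONS = ("trace", "debug", "info", "warn", "error", "critical", "off")

def parse_log_options(argv):
    levels = {}
    for arg in argv:
        for level in LEVEL_OPTIONS:
            prefix = f"--log-{level}="
            if arg.startswith(prefix):
                # Each option accepts a comma-separated list of logger names
                for logger_name in arg[len(prefix):].split(","):
                    levels[logger_name] = level
    return levels

print(parse_log_options(["--log-debug=http_requests", "--log-off=gc,cache"]))
# -> {'http_requests': 'debug', 'gc': 'off', 'cache': 'off'}
```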
- Bugfix: Fix sentry host name where the last character of the name was being truncated
- Bugfix: GCv2: Make sure to discover all projects and oplogs before checking for expired data
- Bugfix: Fix sync of log position and state log when writing the cas index snapshot
- Bugfix: Make sure we can override flags to "false" when running the `zen gc` command
  - `smallobjects`, `skipcid`, `skipdelete`, `verbose`
- Bugfix: Fixed file log timestamp format so the milliseconds are appended after the time, not the date
- Bugfix: Shut down thread pools earlier so worker threads have a chance to terminate before the main thread calls `atexit()`
- Bugfix: Use the correct lookup index when checking for a memcached buffer when finding references in diskcache GC
- Bugfix: CasContainerStrategy::ReadIndexFile issue could cause CAS items to not be found after a shutdown/restart cycle
- Bugfix: Make sure we don't hold the namespace bucket lock when we create buckets, to avoid deadlock
- Bugfix: Make sure that PathFromHandle doesn't hide the true error when throwing exceptions
- Bugfix: Allow attachments that contain a raw size of zero
- Improvement: The frontend html content is no longer appended at the end of the executable, which prevented signing; instead it is compiled in from the `/src/zenserver/frontend/html.zip` archive
- Improvement: MacOS now does ad-hoc code signing by default when issuing `xmake bundle`; signing with a proper cert is done on CI builds
- Improvement: Updated branding to be consistent with the current working name ("Unreal Zen Storage Server" etc)
- Improvement: GcScheduler will now cancel any running GC when it shuts down.
  - The current GC is rather limited in *when* it reacts to GC cancellation. GCv2 is more responsive.
- Improvement: Cache metadata snapshot (`*.uidx`/`.zen_manifest` and now `*.meta`) files are read and written in a streaming fashion instead of all-at-once to/from memory like before. This eliminates some spiky memory usage patterns during garbage collection and also improves overall performance considerably.
- Improvement: The `zen copy-state` command now utilizes block cloning where possible (i.e. on ReFS volumes) for near-instant snapshots
- Improvement: Added more trace scopes for GCv2
- Improvement: Use two global worker thread pools instead of ad-hoc creation of worker pools
- Improvement: GCv2: Use a separate PreCache step to improve concurrency when checking references
- Improvement: GCv2: Improved verbose logging
- Improvement: GCv2: Sort chunks to read by block/offset when finding references
- Improvement: GCv2: Exit as soon as no more unreferenced items are left
- Improvement: Reduce memory usage in GC and diskbucket flush
- Improvement: Added command line to trace initialization (Windows only for now)
- Improvement: Added a `{project}/oplog/{log}/chunkinfos` endpoint that can be used for getting all chunk info within an oplog in batch
- Improvement: Reserve vector sizes in GCv2 to reduce reallocations
- Improvement: Set min/max load factor for cachedisk/compactcas/filecas indexes to reduce memory footprint
- Improvement: Added context (upstream host name) to the Zen upstream resolve error message
- Improvement: Make a more accurate estimation of memory usage for in-memory cache values
- Improvement: Added detailed debug logging for pluggable transports
- Improvement: Improved formatting of multi-line logging. Each line is now indented to line up with the initial line to make reading the output easier
- Improvement: Refactor memory cache for faster trimming and correct trim reporting
- Improvement: Added trace scopes for memory cache trimming
- Improvement: Pass lock scope to helper functions to clarify locking rules
- Improvement: Block flush and gc operations for a bucket that is not yet initialized
- Improvement: Add ZenCacheDiskLayer::GetOrCreateBucket to avoid code duplication
- Improvement: The scrub operation now validates compressed buffer hashes in filecas storage (used for large chunks)
- Improvement: Added `--dry`, `--no-gc` and `--no-cas` options to the `zen scrub` command
- Improvement: Implemented oplog scrubbing (previously a no-op)
- Improvement: Implemented support for running scrubbing at startup with --scrub=

## 0.2.35

- Bugfix: Fix timeout calculation for the semtimedop call
- Bugfix: Fix NameEvent test to avoid a race condition
- Bugfix: Fix BlockingQueue asserts
- Bugfix: Catch exceptions in WorkerThreadPool when running single-threaded
- Bugfix: Improved block cloning copy argument validation, to properly catch the case where source or target trees overlap
- Feature: Adding a file named `root_manifest.ignore_schema_mismatch` in the root of the zenserver data dir prevents the wipe of data when the schema mismatches
- Feature: Added `zen run` command which can be used to run a stress test or benchmark repeatedly while redirecting output and other state to separate subdirectories
  - Example usage: `zen run -n 10 -- zenserver-test` will run the `zenserver-test` command 10 times
  - Example usage: `zen run -n 10 -- zenserver-test --ts=core.assert` will run zenserver-test 10 times (testing only the core.assert test suite)
  - Example usage: `zen run --time 600 --basepath=d:\test_dir\test1 -- zenserver-test` keeps spawning new instances for 10 minutes (600 seconds). Each run will execute in a separate subdirectory in `d:\test_dir\test1\` where stdout will be captured alongside any data generated by the executed command
- Feature: Added new options to zenserver for GC V2
  - `--gc-compactblock-threshold` GCV2: how much of a compact block should be used to skip compacting the block, default is 90%
  - `--gc-verbose` GCV2: enable more verbose output when running a GC pass
- Feature: Added new options to the `zen gc` command for GC V2
  - `--compactblockthreshold` GCV2: how much of a compact block should be used to skip compacting the block, default is 90%
  - `--verbose` GCV2: enable more verbose output when running a GC pass
- Feature: Added new parameters for endpoint `admin/gc` (PUT)
  - `compactblockthreshold` GCV2: how much of a compact block should be used to skip compacting the block, default is 90%
  - `verbose` GCV2: enable more verbose output when running a GC pass
- Improvement: Removed the `zen runtests` command since it's no longer useful
- Improvement: Simplified zenserver-test code by implementing dynamic port assignment, and also implemented transparent handling of port relocation for increased test robustness against environmental differences and socket lifetime 'noise'
- Improvement: Refactor GCV2 so GcReferencer::RemoveExpiredData returns a store compactor, moving the actual disk work out of deleting items in the index
- Improvement: Refactor GCV2 GcResult to reuse GcCompactStoreStats and GcStats
- Improvement: Make GCV2 compacting of stores non-parallel so it doesn't eat all the disk I/O when running GC
- Improvement: Added `ZEN_ASSERT_FORMAT` implementation in `zencore/assertfmt.h` for better logging of errors. Introduced it into compact binary building code which had some existing use cases.
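The compact-block threshold above (default 90%) decides whether a block is worth compacting during GC V2: a block whose live data still fills at least the threshold fraction is skipped, since rewriting it would reclaim little space. A hypothetical sketch of that decision, not the GCV2 code:

```python
# Illustrative sketch (not the GCV2 implementation): the compact-block
# threshold decision described above. A block that is still at least
# `threshold` used is left alone; emptier blocks get compacted.
def should_compact_block(used_bytes, block_size, threshold=0.90):
    """Return True if the block's live fraction is below the threshold,
    i.e. compacting it would reclaim a meaningful amount of space."""
    return (used_bytes / block_size) < threshold

print(should_compact_block(95, 100))  # 95% live -> False (skip, mostly live data)
print(should_compact_block(40, 100))  # 40% live -> True (worth compacting)
```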
## 0.2.34 - Bumped zenserver data schema (to '5') to wipe corrupted state caused by version 0.2.31/0.2.32 - Bugfix: Fix hang on ZenServerInstance shutdown on Mac/Linux - Improvement: Event implementation now uses `std::atomic_bool` instead of `volatile bool` for correctness - Improvement: Removed dependency on cxxopts exception types, to enable use of library versions >3.0.0 - Improvement: Posix event implementation is now more robust and works around apparent condition variable issue - Improvement: `ProcessHandle::Wait` no longer returns without calling `kill` - Improvement: We now set Sentry to `@` if personal information is allowed to be sent - Improvement: Make object store endpoint S3 compatible. ## 0.2.33 - Bugfix: Fix index out of bounds in CacheBucket::CompactState - Bugfix: Implement != operator for DiskLocation to avoid comparing uninitialized data - Improvement: Shrink data structures to fit after CacheBucket::CompactReferences ## 0.2.32 - Feature: Writes a `gc.log` with settings and detailed result after each GC execution (version 2 only) - Bugfix: Fix memory stop in disk cache bucket ReadManifest - Improvement: Package dependency clean-ups ## 0.2.31 - Feature: New parameter for endpoint `admin/gc` (GET) `details=true` which gives details stats on GC operation when using GC V2 - Feature: New options for zen command `gc-status` - `--details` that enables the detailed output from the last GC operation when using GC V2 - Feature: Add new `copy-state` zen command which copies a zenserver data directory state excluding bulk data for analysis purposes - Feature: New garbage collection implementation, still in evaluation mode. 
Enabled by `--gc-v2` command line option - Feature: zen `print` command can now print the `gc.log` file - Feature: New option for zenserver - `--http-forceloopback` which forces opening of the server http server using loopback (local) connection only (UE-199776) - Bugfix: Build script now sets up arch properly when running tests on MacOS - Bugfix: Corrected initialization of block store MaxBlockCount - Bugfix: Corrected total disk size usage in block store - Bugfix: Server log files were using the wrong log line prefix due to a mistake when consolidating logging setup code - Bugfix: Sponsor processes are now registered synchronously at startup, to close potential race condition in very short-lived subprocesses such as the automated tests - Bugfix: Fix error in GC when reclaiming disk reserve is not enough to accommodate the new block - Bugfix: If a directory is deleted while we try to traverse it, skip it and continue - Improvement: Multithread init and flush of cache bucket for faster startup and exit - Improvement: Renamed BlockStoreCompactState::AddBlock to BlockStoreCompactState::IncludeBlock for clarity - Improvement: Added tests for BlockStore::CompactBlocks - Improvement: Reduce memory consumption in cache disk layer - Improvement: Refactored logging so that spdlog details are hidden from the majority of client code - Improvement: Use GC reserve when writing index/manifest for a disk cache bucket when disk is low when available - Improvement: Demote errors to warning for issues that are not critical and we handle gracefully - Improvement: Treat more out of memory errors from windows as Out Of Memory errors - Improvement: Factored out some compiler / platform definitions into standalone `zenbase` header-only library, along with header-only helpers which can be used standalone. 
This is intended to support out-of-tree code like pluggable transports etc but also provides more fine grained dependencies - Improvement: Replaced use of openssl on Windows with bcrypt, which reduces executable size by some 40% - Improvement: We no longer put cache entries into the memory cache on Put, only on Get - Improvement: Dedicated servers now have a different heuristic for deciding when to use standalone files to store cache records/values in the structured cache disk layer. This helps performance on heavy-traffic servers - Improvement: Reduced memory footprint of cache index by 10% or so by limiting the maximum number of entries in a bucket to 2^32 (was 2^64) ## 0.2.30 - Bugfix: Block sending error reports from sentry_sink to Sentry unless the log is actually an error log - Bugfix: Make sure we have an exclusive lock in CacheBucket::CollectGarbage when removing standalone entries from the index - Bugfix: Fixed problem with missing session/request context in cache record PUT operations ## 0.2.29 - Feature: Add `skipdelete` parameter to `admin/gc` endpoint to do a dry run of GC - Feature: Add `--skipdelete` option to `zen gc` command to do a dry run of GC - Feature: New endpoint `/admin/flush ` to flush all storage - CAS, Cache and ProjectStore - Feature: New command `zen flush` to flush all storage - CAS, Cache and ProjectStore - Feature: Added `--cache-memlayer-sizethreshold` option to zenserver to control at which size cache entries get cached in memory - Bugfix: Filter cache keys against set of expired cache keys, not against set of CAS keys to keep - Bugfix: Fix implementation when claiming GC reserve during GC - Bugfix: Catch exceptions when processing requests with asio http and log errors - Improvement: Command `zen gc-status` now gives details about storage, when last GC occured, how long until next GC etc - Improvement: New rotating file logger that keeps on running regardless of errors to avoid spamming error reports on OOM or OOD - Improvement: 
Removed HttpCidStore (was already deprecated) - Improvement: Optimized cases in CompactBinary reader where we would measure a variable integer twice - Changed: Cache access and write log are disabled by default - Changed: We no longer purge out block location for missing blocks to allow testing/analisys of snapshots of server states without copying full set of data - Changed: Merged cache memory layer with cache disk layer to reduce memory and cpu overhead ## 0.2.28 - Feature: Implemented initial and experimental support for pluggable transports via (`--http=plugin`) - Feature: Add caching of referenced CId content for structured cache records, this avoid disk thrashing when gathering references for GC - disabled by default, enable with `--cache-reference-cache-enabled` - Feature: Added `--clean` command line option which can be used to wipe all server state at startup (useful for testing/benchmarking) - Feature: Added `--dry` command line option to `zen rpc-record-replay` to allow analysis of recordings without making changes to any server - Changed: The default port for zenserver has been changed from 1337 to 8558 - Bugfix: GC logging now correctly reports used/free disk space in log message - Bugfix: Fixed calculation of cache memory layer total size - Improvement: Implemented a new RPC recording strategy (aka v2) to make long-running recordings use less resources and perform better over time - Recordings are split into segments to limit the amount of data and number of files in each segment directory. 
Each segment is independent to allow future disk space management improvements and partial analysis/replay of recordings - The new recording strategy also records the client session id for each entry to enable better traffic analysis - Improvement: Rewrite the state_marker file at startup to make sure we have write access to the data directory - Improvement: Faster reading of project store oplogs - Improvement: Faster collection of referenced CId content in project store - Improvement: Also reject bad bucket GET operations to prevent the buckets from being created on disk - Improvement: GC will now skip a lightweight GC if a full GC is due to run within the next lightweight GC interval - Improvement: when dedicated mode is enabled via --dedicated or server.dedicated then we tune http.sys server settings to be more suitable for a shared server. Initially we tune two things: - the thread pool used to service I/O requests allows a larger number of threads to be created when needed. The minimum thread count is unchanged but in dedicated server mode we double the maximum number of threads allowed - the http.sys request queue length (HttpServerQueueLengthProperty) is increased to 50,000 in dedicated mode. The regular default is 1,000. 
A larger queue means the server will deal with small intermittent stalls (for example due to GC locking) even at higher loads like 100k req/s without rejecting requests via HTTP 503 results - Removed: Removed legacy compute interface (will be replaced with new implementation in the future) ## 0.2.27 - Bugfix: Remove double counting of memory usage in memcachelayer - Bugfix: Make sure we don't busy loop if Garbage Collection fails - Bugfix: If we can't check if a project/oplog is expired via the marker file, assume it is not expired - Improvement: added rejection of known bad buckets in structured cache (caused by inappropriate client use of buckets) - buckets consisting of 32 characters in hexadecimal are rejected in put operations - buckets are also rejected at bucket discovery time, and any bad directories are deleted - Improvement: restructured zenserver project to improve maintainability (PR#442) ## 0.2.26 - Feature: Limit the size ZenCacheMemoryLayer may use - `--cache-memlayer-targetfootprint` option to set which size (in bytes) it should be limited to, zero to have it unbounded - `--cache-memlayer-maxage` option to set how long (in seconds) cache items should be kept in the memory cache - Feature: Add lightweight GC that only removes items from cache/project store without cleaning up data referenced in Cid store - Add `skipcid` parameter to http endpoint `admin/gc`, defaults to "false" - Add `--skipcid` option to `zen gc` command, defaults to false - Add `--gc-lightweight-interval-seconds` option to zenserver - Bugfix: Correctly calculate the total number of RPC ops in the stats page for structured cache - Bugfix: Change "chunks" title to "count" for RPC chunk requests in stats page for structured cache - Bugfix: Sentry username string no longer includes the trailing NUL - Bugfix: Fix scrub messing up payload and access time in disk cache bucket when compacting index - Bugfix: Cache bucket index of chunks in block store could get corrupted after a GC - 
Bugfix: Probe disk for existing block file before writing a new block - Bugfix: IoBufferBuilder::ReadFromFileMaybe did not propagate content type to a new IoBuffer - Bugfix: IoBufferBuilder::ReadFromFileMaybe Linux/MacOS pread success/error condition check was incorrect - Bugfix: Memory cache layer could end up holding references to file handles between GC runs, delaying file deletion - Improvement: Split up disk cache bucket index into hash lookup and payload array to improve performance - Improvement: Reserve space up front for compact binary output when saving cache bucket manifest to improve performance - Improvement: Reduce time a cache bucket is locked for write when flushing/garbage collecting - Change format for faster read/write and reduced size on disk - Don't lock index while writing manifest to disk - Skip garbage collect if we are currently in a Flush operation - BlockStore::Flush no longer terminates currently writing block - Garbage collect references to currently writing block but keep the block as new data may be added - Fix BlockStore::Prune used disk space calculation - Don't materialize data in filecas when we just need the size - Don't lock entire disk cache layer when doing GatherReferences/CollectGarbage - Improvement: Catch Out Of Memory and Out Of Disk exceptions and report back to requester without reporting an error to Sentry - Improvement: If creating bucket fails when storing an item in the structured cache, log a warning and propagate error to requester without reporting an error to Sentry - Improvement: Make an explicit flush of the active block written to in blockstore flush - Improvement: Make sure cache and cas MakeIndexSnapshot does not throw exception on failure which would cause an abnormal termination at exit - Improvement: http.sys I/O completion handler no longer holds transaction lock while enqueueing new requests.
This eliminates some lock contention and improves latency/throughput for certain types of requests - Improvement: date logging in GC no longer emits extraneous newlines - Improvement: Named the main thread and the thread handle cache thread - Improvement: removed websockets support as it is not used and likely won't be used in the future - Improvement: added `--quiet` command line option which can be used to suppress all stdout logging ## 0.2.25 - Feature: Add detailed stats on requests and data sizes on a per-bucket level, use parameter `cachestorestats=true` on the `/stats/z$` endpoint to enable - Feature: Add detailed stats on requests and data sizes on cidstore, use parameter `cidstorestats=true` on the `/stats/z$` endpoint to enable - Feature: Dashboard now accepts parameters in the URL which are passed on to the `/stats/z$` endpoint - Improvement: GarbageCollect for ZenCacheMemoryLayer now respects `--gc-cache-duration-seconds` - Improvement: HttpSys: When a response fails, we now include more information including metadata about the contents of the response - Improvement: Flush current data block to disk when switching to a new block - Improvement: Handle cache RPCs synchronously instead of dispatching to async worker threads when there is no upstream server - Improvement: Endpoint for cache upstream stats improved - added `active`, and `worker_threads` now reflects the actual number of threads - Improvement: Cache upstream only starts worker threads once at least one Endpoint is registered - Improvement: http.sys only starts async work threads when there is async work to do (which is often never if there is no upstream) ## 0.2.24 - Feature: New endpoint `/admin/logs` to query status of logging and log file locations and cache logging - `cacheenablewritelog`=`true`/`false` parameter to control cache write logging - `cacheenableaccesslog`=`true`/`false` parameter to control cache access logging - `loglevel` = `trace`/`debug`/`info`/`warning`/`error` - Feature: New zen command
`logs` to query/control zen logging - No arguments gives status of logging and paths to log files - `--cache-write-log` `enable`/`disable` to control cache write logging - `--cache-access-log` `enable`/`disable` to control cache access logging - `--loglevel` `trace`/`debug`/`info`/`warning`/`error` to set debug level - Feature: Add endpoint for controlling Insights tracing - GET `/admin/trace` to query if tracing is currently running or not - POST `/admin/trace/start` to start tracing - `host=` start tracing to a trace server at ip `` - `file=` start tracing to file at path `` - POST `/admin/trace/stop` stop the currently running trace - Feature: Add `zen trace` command to control Insights tracing - `zen trace` to show the status of tracing ("enabled" or "disabled") - `zen trace --host=` start tracing to a trace server at ip `` - `zen trace --file=` start tracing to file at path `` - `zen trace --stop` stop the currently running trace - Feature: Implemented virtual file system (VFS) support for debugging and introspection purposes - `zen vfs mount ` will initialize a virtual file system at the specified mount point. The mount point should ideally not exist already as the server can delete the entirety of it at exit or in other situations.
Within the mounted tree you will find directories which allow you to enumerate contents of DDC and the project store - `zen vfs unmount` will stop the VFS - `zen vfs info` can be used to check the status of the VFS - Bugfix: Use a controlled shutdown order for zenserver and catch exceptions thrown during shutdown - Bugfix: Make sure we don't throw exceptions when reporting errors to Sentry - Improvement: Add names to background jobs for easier debugging - Improvement: Background jobs now temporarily set the thread name to the background job name while executing - Improvement: Background jobs track the worker thread id used while executing - Improvement: `xmake sln` can now be used on Mac as well to generate project files - Improvement: http.sys request queues are named to make it easier to find performance counters in Performance Monitor and such - Improvement: http.sys - if request rate is too high then rejected requests will get a 503 response instead of a dropped connection ## 0.2.23 - Bugfix: Respect result from FinalizeRef in Jupiter oplog upload where it requests missing attachments - Improvement: Increase timeout when doing import/export of oplogs to jupiter to 30 min per request - Improvement: Better logging/progress report on oplog export - Improvement: Ignore OOM errors in spdlog, just drop the error since we can't do anything useful if we run out of memory here - Improvement: Try to catch any exceptions in spdlog error handling to avoid abort termination of process - Improvement: Block cache access/write log from writing to log if disk is low on free space ## 0.2.22 - Bugfix: Under heavy load, an http.sys async response handler could end up deleting the HTTP transaction object before the issuing call had completed.
This is now fixed - Improvement: More tracing scopes in zenserver ## 0.2.21 - Feature: New http endpoint for background jobs `/admin/jobs` which will return a response listing the currently active background jobs and their status - Feature: New http endpoint for background jobs information `/admin/jobs/{jobid}` which will return a response detailing status, pending messages and progress status - GET will return a response detailing status, pending messages and progress status - DELETE will mark the job for cancelling and return without waiting for completion - If status returned is "Complete" or "Aborted" the jobid will be removed from the server and can not be queried again - Feature: New zen command `jobs` to list, get info about and cancel background jobs - If no options are given it will display a list of active background jobs - `--jobid` accepts an id (returned from for example `oplog-export` with `--async`) and will return a response detailing status, pending messages and progress status for that job - `--cancel` can be added when `--jobid` is given which will request zenserver to cancel the background job - Feature: oplog import and export http rpc requests are now async operations that will run in the background - Feature: `oplog-export` and `oplog-import` now report progress to the console as work progresses, by default - Feature: `oplog-export` and `oplog-import` can now be cancelled using Ctrl+C - Feature: `oplog-export` and `oplog-import` have a new option `--async` which will only trigger the work and report a background job id back - Feature: Incremental oplog export for block-based targets (Cloud/File). If a base is given it will download an existing oplog (excluding attachments) and try to reuse existing block references in that oplog.
- `--basename` option for file based `oplog-export` - `--basekey` option for cloud based (Jupiter) `oplog-export` - Feature: Added `--cache-write-log` and `--cache-access-log` command line options to enable/disable cache write/access logs - Feature: Added `--http-threads`, `--httpsys-async-work-threads`, `--httpsys-enable-request-logging` and `--httpsys-enable-async-response` command line options to zenserver - Feature: More statistics for Cache, Project Store and Cid Store - Cache: `requestcount`, `badrequestcount`, `writes` - Project Store: `requestcount` - Cid Store: `cidhits`, `cidmisses`, `cidwrites` - Bugfix: Make sure cache logging thread does not crash on errors - Bugfix: Make sure error logging or destructors don't throw exception when trying to get file name from handle - Bugfix: Issue warning instead of assert on bad data in cid store - Bugfix: Don't index out of string_view range when parsing URI in httpsys - Improvement: Sorting attachments in oplog blocks based on Op key to group op attachments together - Improvement: Don't split attachments associated with the same op across oplog blocks - Improvement: 25% faster oplog op reading, only read and parse the op data of the latest op for a particular key, speeding up reading of oplogs with old oplog data ## 0.2.20 - Feature: `zen up` command has two new command line options - `--config ` tells zenserver to start with a specific config file - `--owner-pid ` tells zenserver to start with an owning process id - Feature: `zen attach` command to add additional owning processes to a running zenserver instance - `--owner-pid ` adds the pid to the running zenserver instance's list of owning processes - Feature: `--write-config` command line option for zenserver - `--write-config ` path to a file which will contain a lua config file for zenserver combining all command line options and optional lua config files - Bugfix: Only write disk usage log if disk writes are allowed (disk space is not critically low) - Improvement: `zen up` command
will check if zenserver is currently running before starting up a new instance - Improvement: Add retry logic when creating oplog temp files and cas block files - Improvement: Large attachments fetched from Jupiter while doing oplog-import now stream to disk and are moved into place ## 0.2.19 - Bugfix: Fix deadlock in project store garbage collection - Bugfix: Fix zen command executable not being able to access shared status memory (`zen status`, `zen down` etc. fail) on macOS - Bugfix: All options given on the command line now override lua config file settings - Improvement: All options available from the command line can now be configured in the lua config file (with a few exceptions such as `owner-pid`, `install` and `uninstall`) ## 0.2.18 - Feature: Add `--embedloosefiles` option to `oplog-export` which adds loose files to the export, removing the need to call `oplog-snapshot` - Bugfix: Fix construction order in OpenProcessCache to avoid crash in OpenProcessCache::GcWorker - Bugfix: Retain `ServerPath` in oplog when performing `oplog-snapshot`. This is a short-term fix for current incompatibility with the UE cooker. - Bugfix: Fix OpenProcessCache state error causing assert/error - Bugfix: Make sure to reset cache logging worker thread event to avoid busy-looping looking for more work - Improvement: Make sure we have disk space available to do GC and use reserve up front if need be - Improvement: We now build the Linux target using the UE toolchain to be compliant with the VFX platform that UE uses for Linux. ## 0.2.17 - Feature: Add `oplog-mirror` command to Zen command line tool. It can be used to export the contents of an oplog as files. Currently it will export all files, filtering options will be added at a later time - Feature: Add `--force-update` option to Zen command line tool `project-create` to update or create a project store project. It will update meta information about the project without affecting existing oplogs if the project exists.
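The 0.2.19 precedence rule (options given on the command line beat the lua config file) amounts to a one-way merge. A minimal sketch, with hypothetical option names and dicts standing in for the real config objects:

```python
def effective_options(lua_config: dict, cli_options: dict) -> dict:
    """Command line options win over lua config file settings."""
    merged = dict(lua_config)   # start from the config file values
    merged.update(cli_options)  # anything given on the command line overrides
    return merged

# e.g. a lua file sets dedicated = false, but --dedicated was passed:
effective_options({"port": 8558, "dedicated": False}, {"dedicated": True})
```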
- Feature: Add `--force-update` option to Zen command line tool `oplog-update` to update or create a project store oplog. It will update meta information about the oplog without affecting existing oplog data if the oplog exists. - Feature: Zen command line tool `project-delete` to delete a project store project and all its oplogs. - `--project` Project name (id) - Feature: Zen command line tool `oplog-delete` to delete a project store oplog and the oplog data. - `--project` Project name (id) - `--oplog` Oplog name (id) - Bugfix: Make sure to check oplog op attachments when gathering references for GC - Bugfix: Reduce log level of RLIMIT message on Mac/Linux to avoid it interfering with parsing of stdout from `zen version` command - Bugfix: Make sure we close our trace session properly at exit when trace is enabled - Improvement: Use buffered file reading when replaying oplog - Improvement: Add endpoint in project store to update the information in a project without deleting the stored data/oplog - Improvement: Add oplog op content to error result if attachment is missing when doing `oplog-export` - Improvement: Windows: Cache process handles for FormatPackageMessage reducing function execution time from 100+us to ~1 us - Improvement: Skip upstream logic early if we have no upstream endpoints - Improvement: Cachestore logging of CbObjects is now async - Improvement: Use better hashing algorithm for instance pointers when using shared lock in IoBufferExtendedCore::Materialize - Improvement: Use tsl/robin-map/robin-set in compactcas and projectstore for 30% faster GC ## 0.2.16 - Feature: Add more stats for `stats/prj` - Project: read/write/delete count - Oplog: read/write/delete count - Chunk: hit/miss/write count - Op: hit/miss/write count - BadRequest count - Bugfix: Allow oplog file mapping where ServerPath is missing if an attachment hash is
specified - Bugfix: Make sure we always write "data" attachment hash for snapshotted oplog entries - Bugfix: Fixed expiry limit for GC of project/oplogs - Improvement: Add `response.text` to output in log when jupiter request fails - Improvement: Only hash jupiter oplog ref once when uploading - Improvement: Increase request timeout when uploading to Jupiter to 3 min (to handle very large attachments) - Improvement: Fix issues with latest fmt vcpkg dependency (10.0.0) and sentry-native for linux ## 0.2.15 - Feature: Add `--assume-http2` option to cloud style import/export command to use an HTTP/2 endpoint without HTTP/1.1 upgrade - Bugfix: Make sure out of memory condition does not terminate http-asio service thread. UE-191531 - Bugfix: `oplog-import` with `--file` source now sends the oplog folder correctly to zenserver - Bugfix: If `oplog-export` fails while creating blocks, wait for background jobs to finish before aborting to avoid crash - Bugfix: If `GetChunkInfo` in project store finds a chunk in the wrong format, return a readable error instead of ASSERT - Bugfix: If checking for state_marker throws exception, exit gracefully rather than throw exception - Improvement: More details in zenserver logfile if jupiter operation fails ## 0.2.14 - Feature: Added `zen serve` command for establishing a link to a directory tree for use with staged UE builds and the `-Mount=` option - Bugfix: Make sure to validate return pointer when calling Memory::Alloc in all locations - Bugfix: Log error instead of hard crash if GC scheduler thread throws exception - Improvement: In oplog import/export, try to resolve access token via env variable on the zen command side as the first option, with resolution on the zenserver side as the second option.
Resolves [UE-189978](https://jira.it.epicgames.com/browse/UE-189978) - Improvement: Keep reason and status code when parsing responses from jupiter remote requests - Improvement: Add additional context for errors when importing/exporting oplogs - Improvement: Added `ZenServerInstance::SpawnServerAndWait` and improved logic around process termination when using `ZenServerInstance::AttachToRunningServer` - Improvement: When uploading compressed blob to jupiter, use streaming reading of source file if it is a "whole file" - a large attachment. ## 0.2.13 - Feature: Project store now has a `snapshot` RPC on oplogs which may be used to inline any files referenced by name into Zen store. This makes the oplog transportable - Feature: Zen command line tool `oplog-snapshot` which may be used to inline any files referenced by name into Zen store. This makes the oplog transportable - `--project` Project name (id) - `--oplog` Oplog name (id) - Feature: Session Id and Request Id are now logged in the log for cache put/get operations - Bugfix: Prevent destructors in ProjectStore::Project, ScopedActivityBase and FileMapping from throwing exceptions to avoid abort termination - Bugfix: Zen CLI command help now includes descriptions for positional arguments - Bugfix: Correctly prefix auth token when using a bare token in project oplog import/export - Bugfix: Make sure GetEnvVariable can handle values that are longer than 1023 characters - Improvement: Throw exception with information on failed memory allocation instead of calling ZEN_ASSERT - Improvement: Added support for streaming decompression - Improvement: Added zenserver.exe and zen.exe/zen.pdb to Sentry debug information upload to populate unwind information - Improvement: Front-end can now be served from a development directory in release mode as well as debug if there's no zipfs attached - Improvement: Increased retry logic in diskcachelayer when we are denied moving a temporary file into place - Improvement: Named some
additional background threads for better debug / sentry reporting - Update: Bump CI VCPKG version to 2023.04.15 and xmake to 2.7.9 (was 2022.08.15 and 2.6.5) ## 0.2.12 - Feature: zenserver/zen: Added zen command line command `scrub` which can be used to trigger a data scrubbing pass which traverses all stored data and verifies its integrity. If any checksum mismatches or structural errors are found the content is dropped. For now this does not provide much feedback in the console, but the zenserver logs will contain information about the operation - Feature: zen: added zen `bench` command which has an option to empty Windows standby lists. This effectively empties the system (disk) cache, which can be useful when performing benchmarks since this puts the system in a more consistent state - Feature: zen: added zen `copy` command which can be used to perform copy-on-write copies of files and directories on supported file systems (e.g. ReFS on Windows). This is useful when working with test datasets where you want to avoid tests modifying the original test data - Feature: zenserver: Add command line option `--gc-projectstore-duration-seconds` to control GC lifetime of project store data - Bugfix: Improve error handling when processing requests in http asio - Bugfix: Error out if `test` is passed to zenserver in release builds (tests are only compiled in for debug) - Bugfix: Gracefully exit with error code if problems arise during startup (used to cause abort termination) - Bugfix: Project oplog delete fixed so it works even right after server startup, before the oplog has been instantiated in memory - Bugfix: Corrected argument name in oplog-export file target RPC message - Improvement: Change state_marker detection (deletion of DDC folder) log to WARN, it is not an error but useful information in the log output - Improvement: Added logging when bad chunks are detected in `BlockStore` - Improvement: `zen::SetCurrentThreadName` now also sets trace (Insights) thread
name - Improvement: All thread pool threads now have names - Improvement: zenserver now emits session information to trace (Insights) for a better session browser experience - Improvement: Add more trace instrumentation - Improvement: Eliminated ATL header dependency - Improvement: If no `-hosturl=...` parameter is passed to zen CLI commands we use the current session state from shared memory to pick an instance to communicate with - Improvement: Better option validation in zen command line parsing ## 0.2.11 - Feature: Gracefully exit if Ctrl-C is pressed - Feature: Structured cache now writes an activity log to `logs/z$` which may be used to understand client interactions better. Enabled by default for the time being - Bugfix: Return error code on exit as set by application - Bugfix: Fix crash at startup if dead process handles are detected in ZenServerState - Bugfix: Fixed assert/error when running block store GC and a block to GC does not exist - Bugfix: GC could mix up locations of cache bucket items causing it to return the wrong item for a specific key. All cache buckets from previous versions will be wiped to remove inconsistent state - Improvement: Log details about file and read operation when it fails inside IoBuffer::Materialize() ## 0.2.10 - Feature: zenserver now writes a state_marker file in the root of the data directory. Deleting this file will cause zenserver to exit. 
This is used to detect if the user deletes the data folder while zenserver is running - Feature: Disk writes are now blocked early and return an insufficient storage error if free disk space falls below the `--low-diskspace-threshold` value - Feature: zenserver: Add command line option `--sentry-allow-personal-info` to allow personally identifiable information in sentry reports, disabled by default - Feature: Age-based GC of oplogs in project store - Improvement: Failing to write index snapshots or access times is now considered a warning rather than an error - Bugfix: Validate that block store entries point inside valid blocks when initializing - Bugfix: Close down http server gracefully when exiting even while requests are still being processed - Bugfix: Flush snapshot for filecas on flush/exit - Bugfix: Fix log of size found when scanning for files in filecas ## 0.2.9 - Bugfix: Treat reading outside of block store file as a not found error. We may encounter truncated blocks due to earlier abnormal termination of zenserver or disk failures ## 0.2.8 - Feature: ASSERTs triggered at runtime are sent directly to Sentry with callstack if sentry is enabled - Bugfix: Verify that there are blocks to GC for block store garbage collect (avoid division by zero) - Bugfix: Write log error and flush log before reporting error to Sentry/error logger - Bugfix: Log ERROR in scope guard if function throws exception, throwing exception causes application abort ## 0.2.7 - Bugfix: Safely handle missing blocks when doing garbage collection in block store data - Bugfix: Only strip uri accept type suffix if it can be parsed to a known type - Bugfix: Keep system error code on Windows when file mapping fails and propagate to log/exception - Bugfix: Catch any errors thrown in HttpAsioServer() destructor and log error.
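The early write blocking above compares free disk space against the `--low-diskspace-threshold` value before accepting a write. A minimal sketch of that gate (function name is illustrative, not the actual zenserver code):

```python
import shutil

def writes_allowed(data_dir: str, low_diskspace_threshold: int) -> bool:
    """Return False when free bytes fall below the threshold.

    Mirrors the behaviour described above: when this returns False the
    server should answer with an insufficient-storage style error
    instead of filling the disk.
    """
    free_bytes = shutil.disk_usage(data_dir).free
    return free_bytes >= low_diskspace_threshold
```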
## 0.2.6 - Strip __FILE__ macro names in logging to only include the file name so as not to expose file paths of the machine building the executable - Bugfix: Reporting the correct callstack to sentry on ERROR/CRITICAL failure. - Extend sentry message with triggering file/line/function ## 0.2.5 - Feature: Zen command line tool `rpc-record-start` to record all RPC requests to the structured cache - `--path` Recording file path where the rpc requests will be stored - Feature: Zen command line tool `rpc-record-stop` to stop the currently active RPC request recording started with `rpc-record-start` - Feature: Zen command line tool `rpc-record-replay` to replay an RPC request recording created with `rpc-record-start` - `--path` Recording file path where the rpc requests are stored - `--numthreads` Number of worker threads to use while replaying the RPC requests - `--numproc` Number of worker processes to run, if more than one, new processes will be spawned with `` workers each - `--offset` Offset into request playback to start at - `--stride` The stride to use when selecting requests to playback - `--onhost` Replay the recording inside the zenserver bypassing http overhead - `--showmethodstats` Show statistics of which RPC methods are used - `--forceallowlocalrefs` Force the requests to allow local references (file path/file handle) - `--disablelocalrefs` Force disable local references in request (file path/file handle) - `--forceallowlocalhandlerefs` Force the requests to allow local references via duplicated file handles for requests that allow local refs - `--disablelocalhandlerefs` Force disable local references via duplicated file handles in requests - `--forceallowpartiallocalref` Force the requests to allow local references for files that are not saved as whole files for requests that allow local refs - `--disablepartiallocalrefs` Force disable local references for files that are not saved as whole files for requests that allow local refs - Feature: Zen command line tool
`cache-stats` to give stats results about the zen cache - Feature: Zen command line tool `project-stats` to give stats results about the zen project store - Feature: Zen command line tool `cache-details` to give detailed results about the zen cache, defaults to overview information about the cache - `--namespace` Get information about cache values in a namespace - `--bucket` Get information about cache values limited to a specific bucket in a namespace - `--valuekey` Get information about a cache value in a specific bucket in a namespace, valuekey is specified as IoHash hex string - `--details` Get detailed information about each cache record - `--attachmentdetails` Get detailed information about each attachment for each cache record - `--csv` Format the output as a comma delimited CSV file. If not specified it defaults to JSON style response. - Feature: Zen command line tool `project-details` to give detailed results about the zen project store, defaults to overview information about the project store - `--project` The project id to get information about - `--oplog` The oplog id to get information about - `--opid` The op Oid to get information about - `--details` Get detailed information about the op - `--opdetails` Extract the entire op information (not available in CSV output) - `--attachmentdetails` Get detailed information about each attachment for each op - `--csv` Format the output as a comma delimited CSV file. If not specified it defaults to JSON style response.
- Feature: New project store stats endpoint `/stats/prj` to get stats info for zen project store - Feature: New project store details endpoints `/prj/details$`, `/prj/details$/{project}`, `/prj/details$/{project}/{oplog}`, `/prj/details$/{project}/{oplog}/{op}` to give detailed results about the zen project store, defaults to overview information about the project store items - `details=true` Get detailed information about the op - `opdetails=true` Extract the entire op information - `attachmentdetails=true` Get detailed information about each attachment for each op - `csv=true` Format the output as a comma delimited CSV file. If not specified it defaults to JSON style response. - Feature: New cache detail endpoints `/z$/details$`, `/z$/details$/{namespace}`, `/z$/details$/{namespace}/{bucket}`, `/z$/details$/{namespace}/{bucket}/{key}` have been added - `details=true` Get detailed information about each cache record - `attachmentdetails=true` Get detailed information about each attachment for each cache record - `csv=true` Format the response as a comma delimited CSV file. If not specified it defaults to CbObject but can auto-format to JSON - Feature: `--junit` switch to `xmake test` to generate junit style reports of tests. - Feature: CI build on GitHub now uploads junit test reports as an artifact to the check for PR validation and mainline validation - Feature: Payloads from zenserver can now be sent using duplicated file handles if the calling client provides its ProcessId (Windows only).
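The detail endpoints above are plain HTTP GETs whose behaviour is driven by query parameters. A small helper that builds such a URL (the base URL/port and helper name are assumptions for illustration):

```python
from urllib.parse import urlencode

def details_url(base: str, namespace: str, bucket: str, key: str = None, **params) -> str:
    """Build a /z$/details$ query URL against a zenserver base URL."""
    path = f"{base}/z$/details$/{namespace}/{bucket}"
    if key is not None:
        path += f"/{key}"  # optional record key appended as a path segment
    query = urlencode(params)  # e.g. details=true, csv=true
    return f"{path}?{query}" if query else path

# e.g. detailed per-record info for a bucket, formatted as CSV:
details_url("http://localhost:8558", "ddc", "texture", details="true", csv="true")
```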
- Feature: Add `--port` option to zen down command to shut down servers on different base ports - Bugfix: Make sure async responses are sent async correctly in httpsys - Bugfix: Don't delete manifest file in cas root when initializing a new filecas folder - Bugfix: Sentry does not like UNC paths, so strip the prefix before passing them to sentry - Bugfix: Make sure zen down command uses the correct port for shutdown event - Improvement: FileCas now keeps an up to date index of all the entries improving performance when getting cache misses on large payloads - Improvement: Structured cache now keeps RawHash and RawSize in memory avoiding materialization of cache values before sending response - Changed: Exit with failure code on port conflict rather than reporting crash to Sentry - Changed: removed catch2 support for now since it does not handle multithreaded tests - Bugfix: fixed bug in dashboard content serving (see PR #255) ## 0.2.4 - Bugfix: Don't send empty http responses with content type set to Text. Fixes UE-177895 ## 0.2.3 - Feature: Add support for "packagedata" mapping in oplog entries - Feature: Zen command line tool `project-create` to create a project store project - `--project` Project name (id) - `--rootdir` Absolute path to root directory (optional) - `--enginedir` Absolute path to engine root directory (optional) - `--projectdir` Absolute path to project directory (optional) - `--projectfile` Absolute path to .uproject file (optional) - Feature: Zen command line tool `oplog-create` to create a project store oplog - `--project` Project name (id) - `--oplog` Oplog name (id) - `--gcpath` Absolute path to oplog lifetime marker file (optional) - Feature: Build scripts and tooling to build zen compliant with VFX reference platform CY2022/2021 matching UE linux builds - Feature: added `xmake sln` task which replaces `generate_projects.bat` - Feature: Zen server endpoint `prj/{project}/oplog/{log}/chunks` to post multiple attachments in one request. 
- Feature: Zen server endpoint `prj/{project}/oplog/{log}/save` to save an oplog container. Accepts `CbObject` containing a compressed oplog and attachment references organized in blocks. - Feature: Zen server endpoint `prj/{project}/oplog/{log}/load` to request an oplog container. Responds with a `CbObject` containing a compressed oplog and attachment references organized in blocks. - Feature: Zen server endpoint `{project}/oplog/{log}/rpc` to initiate an import to or export from an external location and other operations. Use either JSON or CbPackage as payload. - CbObject/JSON RPC format for `import` and `export` methods: ```json { "method" : "", "params" : { "maxblocksize": "", "maxchunkembedsize": "", "file" : { "path" : "", "name" : "" }, "cloud" : { "url" : "", "namespace" : "", "bucket" : "", "key" : "", "openid-provider" : "", "access-token" : "", "access-token-env": "", "disableblocks" : "", "disabletempblocks" : "" }, "zen" : { "url" : "", "project" : "", "oplog" : "" } } } ``` - `"method"` supported methods are `"export"` and `"import"` to import/export an oplog - `"params"` container for parameters - `"maxblocksize"` - Optional. The maximum size of a block of attachments, default 134217728 (128 Mb) (export only) - `"maxchunkembedsize"` - Optional. The maximum size of an attachment to be put in a block, larger attachments will be stored as usual attachments, default 1048576 (1Mb) (export only) - `"force"` - Optional. Boolean flag to indicate whether attachments should be uploaded/downloaded disregarding prior existence - External location types are "file" (File system), "cloud" (UE Cloud Storage service) or "zen" (Zen server instance), provide one of those as remote location. - `"file"` - Optional. Indicates remote location is the local file system - `"path"` - File system path folder to export to / import from - `"name"` - File name of oplog output, written into - `"cloud"` - Optional.
Indicates remote location is UE Cloud Storage service - `"url"` - Jupiter service endpoint url - `"namespace"` - Name of namespace to store data to - `"bucket"` - Name of bucket to store data to - `"key"` - IoHash key to the stored oplog container - `"openid-provider"` - Optional. Name of openid provider used to authenticate with, requires that the zen server instance has been provided with a oids refresh token for - `"access-token"` - Optional. JWT access token to authenticate with - `"access-token-env"` - Optional. Name of environment variable that holds an JWT access token to authenticate with - `"disableblocks"` - Optional. Disable creation of attachments blocks - "true"/"false" (export only) - `"disabletempblocks"` - Optional. Disable creation of attachments temp blocks forcing upload before oplog container - "true"/"false" (export only) - `"zen"` - Optional. Indicates remote location is a Zen server instance - `"url"` - Zen server instance url - `"project"` - The remote project name (id) - `"oplog" - The remote oplog name (id) - CbObject RPC format for `getchunks` method, returns CbPackage with the found chunks, if all chunks are found the number of attachments matches number of chunks requested. ```json { "method" : "getchunks", "chunks" : [ "", ] } ``` - CbPackage RPC format for `putchunks` method, attachments are stored in CidStore ```json { "method" : "putchunks", } ``` - Feature: Zen server `{project}/oplog/{log}/{hash}` now accepts `HttpVerb::kPost` as well as `HttpVerb::kGet`. - Feature: Zen command line tool `oplog-export` to export an oplog to an external target using the zenserver oplog export endpoint. 
  - `--project` Project name (id)
  - `--oplog` Oplog name (id)
  - `--maxblocksize` The maximum size of a block of attachments (optional)
  - `--maxchunkembedsize` The maximum size of an attachment to be put in a block; larger attachments will be stored as usual attachments (optional)
  - `--force` Force upload/download of attachments even if they already exist
  - `--file` File system path folder to export to / import from
  - `--name` File name of oplog output, written into `--file` path
  - `--disableblocks` Disable block creation and save all attachments individually
  - `--forcetempblocks` Force creation of temp attachment blocks
  - `--cloud` Jupiter service endpoint to export to / import from
    - `namespace` Name of namespace to store data to
    - `bucket` Name of bucket to store data to
    - `key` Key to the stored oplog container (if omitted, a default key will be generated based on project/oplog/namespace/bucket)
    - `openid-provider` Optional name of openid provider used to authenticate with; requires that the zen server instance has been provided with an OIDC refresh token for the provider name
    - `access-token` Optional JWT access token to authenticate with
    - `access-token-env` Optional name of environment variable that holds a JWT access token to authenticate with
    - `disableblocks` Disable block creation and save all attachments individually
    - `disabletempblocks` Disable temp block creation and upload blocks without waiting for oplog container to be uploaded
  - `--zen` Zen server instance url to export to / import from
    - `--target-project` The remote project name (id) (optional, defaults to same as `--project`)
    - `--target-oplog` The remote oplog name (id) (optional, defaults to same as `--oplog`)
  - `--clean` Delete and create a new oplog before starting export
- Feature: Zen command line tool `oplog-import` to import an oplog from an external source using the zenserver oplog import endpoint.
  - `--project` Project name (id)
  - `--oplog` Oplog name (id)
  - `--force` Force upload/download of attachments even if they already exist
  - `--file` File system path folder to export to / import from
  - `--name` File name of oplog output, written into `--file` path
  - `--cloud` Jupiter service endpoint to export to / import from
    - `namespace` Name of namespace to store data to
    - `bucket` Name of bucket to store data to
    - `key` Key to the stored oplog container (if omitted, a default key will be generated based on project/oplog/namespace/bucket)
    - `openid-provider` Optional name of openid provider used to authenticate with; requires that the zen server instance has been provided with an OIDC refresh token for the provider name
    - `access-token` Optional JWT access token to authenticate with
    - `access-token-env` Optional name of environment variable that holds a JWT access token to authenticate with
  - `--zen` Zen server instance url to export to / import from
    - `--source-project` The remote project name (id) (optional, defaults to same as `--project`)
    - `--source-oplog` The remote oplog name (id) (optional, defaults to same as `--oplog`)
  - `--clean` Delete and create a new oplog before starting import
- Improvement: Faster oplog replay - reduces time to open an existing oplog
- Improvement: Clearer error messages and logging when requests to project store fail
- Changed: Removed remnants of old mesh experiment
- Changed: Removed obsolete export-project command
- Changed: Removed remnants of import-project command
- Changed: Removed unused remote build scripts
- Changed: Removed very old and invalid TODO.md
- Changed: Removed some deprecated scripts

## 0.2.2

- Feature: Added info (GET) endpoints for structured cache
  - `/z$` - get a list of namespaces and global info
  - `/z$/{namespace}` - get list of buckets in a namespace and namespace related info
  - `/z$/{namespace}/{bucket}` - get bucket info
- Feature: Added project store oplog info: `markerpath`, `totalsize`, `opcount`, `expired` on GET requests for oplog
- Feature: Added project store project info: `expired` on GET requests for project
- Feature: Added project store root route `/prj` which is identical to `/prj/list`
- Feature: Zen command line tool `cache-info` to show cache, namespace or bucket info
- Feature: Zen command line tool `project-info` to show store, project or oplog info
- Feature: Zen command line tool `project-drop` to drop project or oplog
- Feature: Zen command line tool `gc` to trigger a GC run
- Feature: Zen command line tool `gc-info` to check status of GC
- Feature: Added version information to zenserver and zen command line tool executables
- Bugfix: Don't log "time to next GC" if time to next GC is not set
- Improvement: Don't wait for GC monitor interval before doing first GC check
- Improvement: Zen command line tool now fails on any unrecognized arguments
- Improvement: Zen command line tool now displays extra help for all sub-commands
- Improvement: Host address can now be configured for zen command line tool `drop` command
- Improvement: Added precommit xmake task `xmake precommit` to run precommit checks
- Changed: Default GC interval set to 1 hour
- Changed: Default GC cache duration set to 2 weeks
- Changed: Removed HttpLaunchService and related code
- Changed: Removed dead/experimental file system related code
- Changed: Removed faux vfs config option

## 0.2.1

- Feature: Oplog level GC in project store. If a GC marker file path is given by UE, oplogs will be GC'd when the marker file is deleted (and GC is triggered)
- Bugfix: Index handling for cache large object store was broken, resulting in the log always being played back
- Bugfix: Make sure to flush cache store on call to flush on service and exit
- Improvement: Don't write index snapshots if no new entries have been added to the log

## 0.2.0

- Feature: Recording and playback of cache requests with full data - both get and put operations can be replayed.
  Invoke via web request:
  - `/z$/exec$/start-recording?`
  - `/z$/exec$/stop-recording`
  - `/z$/exec$/replay-recording?&`
- Feature: Disk size triggered GC, a soft disk usage limit for cache data
- Feature: New option `--gc-disk-size-soft-limit` (command line), `gc.cache.disksizesoftlimit` (lua config) controlling the soft disk usage limit. Defaults to zero, which disables the soft disk usage limit
- Improvement: Added disk write pressure to the GC log and cleaned up clutter in GC logging
- Improvement: Much improved performance, between 2x and 9x improvement under heavy load (excluding http service overhead). See https://github.com/EpicGames/zen/pull/200 for details
- Bugfix: Always store records or oplog entries before storing attachments to avoid GC finding unreferenced chunks in CidStore
- Bugfix: Updated Zen `drop` command to support namespaces
- Bugfix: Use ZEN_CONSOLE for output to console in Zen commands
- Bugfix: Zen `status` command now shows info about found Zen instances
- Bugfix: Zen `top` command now shows session id string
- Bugfix: On Windows platforms explicitly set the special `SO_EXCLUSIVEADDRUSE` flag, as `SO_REUSEADDR` still allows shared use of sockets
- Bugfix: Fix logging of number of entries read from caslog at startup
- Bugfix: Fix asio http handling of very large/malformed headers and handle mismatching content size
- Changed: Reduced GC `INFO` spam by converting to `DEBUG` log messages
- Changed: Use ISO 8601 format for logging start and end message

## 0.1.9

- Feature: Adds two commands to Zen command tool to export/import project store oplogs with attachments
  - `export-project [oplogs...]`
  - `import-project [oplogs...]`
- Feature: Adds command to query Zen version; specify `host-name` url to query the running service version, otherwise you get the zen command version. The `detailed` option gives you the long form version.
  - `version [host-name] [detailed]`
- Feature: New service endpoint to query Zen server version, add `?detailed=true` to get long form version
  - `/health/version`
- Feature: Configure OpenID providers from command line and Lua config
- Feature: Added zen command line executable to release distribution
- Bugfix: Fix double reporting of disk usage for namespaces
- Bugfix: Fix double garbage collection analysis and garbage collection execution of namespaces
- Improvement: Improve tracking of used disk space for filecas and compactcas
- Improvement: Add tracking of used disk space for project store
- Improvement: Bumped limit for storing cache values as separate files to reduce number of loose files
- Improvement: Optimizations when handling compressed buffers (less materialization and reading of headers)
- Improvement: Send attachments as file references if the IoBuffer we find represents a complete file and `AcceptFlags` in the RPC request allows it
- Improvement: Don't reserve full block size for block store files at creation

## v0.1.8

- Change: Responding with the new wire format for RPC requests requires the requestor to add an `Accept` field in the request. This is to allow compatibility with older clients for shared instances.
- Improvement: Fixed concurrency issues in project store - project and oplog lifetime issues
- Improvement: Don't open oplogs until we require use of them
- Cleanup: Removed rocksdb experimental code
- Feature: Add GC to project store. Checks path to project file on the UE side to determine when a project may be GC'd on the Zen side.
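As a sketch of how a client might call the `/health/version` endpoint above, with the documented `?detailed=true` query flag (the host, port, and `Accept` value are assumptions for illustration, not taken from the zen docs):

```python
# Sketch: building a request for the documented /health/version endpoint.
# Host/port and the Accept header value are illustrative assumptions.
from urllib.request import Request

req = Request(
    "http://localhost:8558/health/version?detailed=true",
    headers={"Accept": "application/json"},
)
print(req.get_full_url())      # full URL including the detailed flag
print(req.get_header("Accept"))
```

Note this only constructs the request; actually issuing it requires a running zenserver instance.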
## v0.1.7

- Change: All RPC responses are now formatted using the dedicated wire format; Zen server has a fallback to enable compatibility with legacy upstreams
- Feature: Adding a `.json` extension to the `--abslog` option will make zenserver log in json format to file
- Feature: Create release in Sentry and use `sentry_options_set_release` to associate the executable
- Bugfix: CompactBinary: Fixed LoadCompactBinary to gracefully handle read failures and sizes larger than the archive. From https://p4-swarm.epicgames.net/changes/21983905
- Bugfix: Use bucket/key to get inline value in upstream for chunks without a chunkid
- Bugfix: Handle edge case when trying to materialize an IoBuffer of zero size via memory mapping
- Improvement: Logging: don't do formatting of messages that will not be logged
- Improvement: Logging: Timing and upstream source information in upstream logging when debug level logging is enabled
- Improvement: Reduce buffer creation and copying in ParsePackageMessage
- Improvement: Don't read attachments for oplogs we already have when parsing oplog message

## v0.1.6

- Bugfix: Use bucket/key to get inline value in upstream for chunks without a chunkid (UE-164966)

## v0.1.5

- Bugfix: Don't fail entire request if GetCacheValue from Horde fails for a single value

## v0.1.4

- Change: Bumped ZEN_SCHEMA_VERSION - this will invalidate the entire local cache when deployed
- Change: Made CAS storage a hidden implementation detail of CidStore; we no longer hash and map to a compressed hash when storing cache values
- Feature: Extended zen print command to also handle CbPackage and CompressedBuffer format payloads
- Feature: Added /prj/{project}/oplog/{log}/{op} endpoint to allow retrieval of an op entry by LSN.
  Supports returning CbObject or CbPackage format payloads
- Improvement: asio: added some context to error reporting
- Improvement: namespace/bucket validation now uses AsciiSet for more efficient validation
- Improvement: Frontend: simplified content-type logic
- Improvement: Improved message indicating no GC is scheduled
- Improvement: Implement proper GetCacheValues upstream path
- Improvement: Demote a number of ZEN_ERROR log calls for problems that are recoverable and handled
- Bugfix: Use bucket/key to get inline value in upstream for chunks without a chunkid
- Bugfix: Fixed issue in CbPackage marshaling of local reference
- Bugfix: Fix crash when switching Zen upstream configured via DNS when one endpoint becomes unresponsive
- Bugfix: Fixed issue where projects would not be discovered via DiscoverProjects due to use of stem() vs filename()
- Bugfix: Use "\\\\?\\" prefixed paths on Windows and fix hardcoded path delimiters (UE-141222)
- Bugfix: Safer detection of html folder when running non-bundled executable
- Bugfix: Use "application/x-jupiter-inline" to fetch GetCacheValues from Horde (UE-162151)
- Sentry: Added logging of sentry_init error code
- Sentry: Attach log file to Sentry error reports
- Sentry: Capture error/critical log statements as errors in Sentry
- Update: Bump VCPKG version to 2022.08.15
- CI: MacOS build enabled again in GitHub CI
- CI: Upload debug info and source files to Sentry when release is created

## v0.1.3

- Enable adding namespace to URI based upstream requests
- Add logging of namespace name and bucket name if we get invalid names in requests
- Updated README.md with Linux dev prerequisites
- asio: added some logging to indicate concurrency
- Fixed expired cache keys overwriting between namespaces when bucket names were the same in multiple namespaces

## v0.1.2

- Tweak bundle compression settings to streamline build
- ZenCacheDiskLayer::CacheBucket::GatherReferences: Don't hold index lock while reading standalone values
- Hardening of ZenCacheDiskLayer::CacheBucket::PutStandaloneCacheValue
- GitHub Actions: Move release job to in-house linux agent

## v0.1.1

- BlockStore (small object store): Always block GC of current write block
- Make it possible to configure GC monitoring interval using `--gc-monitor-interval-seconds`
- Keep "reason" from upstream response so we can present it even if the request fails without outright error
- New GitHub Actions release flow
- Add release flow in GitHub Actions on pushed tag: `v0.1.2` gives full release, `v0.1.2-pre0` gives pre-release

## 0d08450

- Fixes issue with broken Zen instances for legacy requests

## 63f50b5

- Enable FILE_SHARE_DELETE on standalone files in disk buckets - fixes Jira UE-154234
- Make sure we can properly create the block file before assigning it for use - fixes Jira UE-154438
- Horde execute compressed input blobs
- Drop namespace support
- Safer delete of cache buckets

## dba8b36

- Namespaces: This introduces namespaces to the zenserver, but only the default ue4.ddc is supported. Clients that don't send a namespace in the request will keep the old behaviour; new clients that send a namespace are required to use ue4.ddc (which they currently do)
- Aligned bucket naming rules with UE code base
- Fix retry counter and add an extra iteration to give more time for success during contention for standalone files in cache
- Make sure CacheBucket::PutStandaloneCacheValue cleans up the temp file
- Restore logic where we accept failed overwrite if resulting size is the same for standalone files in cache
- Correctly calculate the m_TotalSize difference when overwriting file for standalone files in cache
- Fix namespace folder scanning
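The namespace behaviour introduced in dba8b36 can be sketched as follows (the function and names are illustrative, not actual zen code): clients that send no namespace keep the legacy behaviour, while clients that do send one must use `ue4.ddc`.

```python
# Sketch of the dba8b36 namespace rule; names are illustrative only.
from typing import Optional

SUPPORTED_NAMESPACE = "ue4.ddc"  # the only namespace accepted at this point

def resolve_namespace(requested: Optional[str]) -> str:
    if requested is None:
        # Legacy client: no namespace sent, fall back to the default.
        return SUPPORTED_NAMESPACE
    if requested != SUPPORTED_NAMESPACE:
        raise ValueError(f"unsupported namespace: {requested}")
    return requested

print(resolve_namespace(None))       # ue4.ddc
print(resolve_namespace("ue4.ddc"))  # ue4.ddc
```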