Step through the GEAR rolling hash byte by byte. Each step updates hash = (hash << 1) + GEAR[byte].
Each colored block is one of 256 pre-computed random 32-bit values, keyed by byte. Hover a cell to see its mapping.
The hash rolls forward one byte at a time. When the masked hash bits are all zero (hash & mask == 0), a chunk boundary is placed. Target chunk sizes: min 8, avg 16, max 32 bytes.
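The rolling loop above can be sketched in a few lines of Python. The GEAR table, the 3-bit mask, and the cut-point logic here are illustrative assumptions chosen to match the demo's min 8 / avg 16 / max 32 byte targets, not a specific library's implementation.

```python
import random

# Assumption: one pre-computed random 32-bit value per possible byte.
random.seed(1)
GEAR = [random.getrandbits(32) for _ in range(256)]

MIN_SIZE, AVG_SIZE, MAX_SIZE = 8, 16, 32
# 3 mask bits -> a boundary fires on ~1 in 8 bytes past MIN_SIZE,
# so expected chunk size is roughly MIN_SIZE + 8 = AVG_SIZE.
MASK = 0b111 << 13

def gear_chunks(data: bytes) -> list[bytes]:
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF  # roll forward one byte
        size = i - start + 1
        if size < MIN_SIZE:
            continue  # never cut before the minimum size
        if (h & MASK) == 0 or size >= MAX_SIZE:
            chunks.append(data[start:i + 1])   # boundary placed here
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])            # trailing partial chunk
    return chunks
```

Joining the chunks always reproduces the input; only the last chunk may fall below the minimum size.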
Drag the slider to adjust the target average chunk size and see how FastCDC re-chunks the same text.
See how target average size affects chunk boundaries and size distribution.
See how fixed-size chunking vs content-defined chunking handle file modifications.
Edit text and save versions to see which chunks are new and which are shared.
Click "Save Version" after editing to see which chunks are new and which are shared. Hover over chunks to highlight them across views.
Compare how single-mask and dual-mask strategies distribute chunk sizes across the same data.
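The two strategies differ only in which mask is checked at each byte. This sketch uses assumed mask values: in the dual-mask (FastCDC-style normalized) case, a stricter mask with more 1-bits applies before the average size, making early cuts rarer, and a looser mask with fewer 1-bits applies after, making late cuts likelier, which pulls the size distribution toward the average.

```python
import random

# Assumption: same illustrative GEAR table and size targets as the demo.
random.seed(1)
GEAR = [random.getrandbits(32) for _ in range(256)]

MIN_SIZE, AVG_SIZE, MAX_SIZE = 8, 16, 32
MASK_SINGLE = 0b111 << 13     # 3 bits: ~1-in-8 cut chance everywhere
MASK_STRICT = 0b11111 << 13   # 5 bits: ~1-in-32 cut chance before avg
MASK_LOOSE  = 0b11 << 13      # 2 bits: ~1-in-4 cut chance after avg

def chunk(data: bytes, dual: bool) -> list[bytes]:
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + GEAR[b]) & 0xFFFFFFFF
        size = i - start + 1
        if size < MIN_SIZE:
            continue
        if dual:
            mask = MASK_STRICT if size < AVG_SIZE else MASK_LOOSE
        else:
            mask = MASK_SINGLE
        if (h & mask) == 0 or size >= MAX_SIZE:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Both variants produce valid chunkings of the same data; only the size distribution differs.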
See how average chunk size affects each cost dimension: CPU, memory, network, and storage.
See how per-operation pricing on established object storage providers affects costs when every chunk is a separate object.
See how container packing reduces API operation costs by bundling chunks into larger objects.
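A minimal sketch of the packing idea, under assumed names and sizes: a greedy packer fills each container up to a byte budget, so one PUT or GET touches a whole container instead of a single chunk.

```python
# Hypothetical greedy container packer; `container_size` is an assumption.
def pack_chunks(chunks: list[bytes], container_size: int) -> list[list[bytes]]:
    containers: list[list[bytes]] = []
    current: list[bytes] = []
    used = 0
    for c in chunks:
        if current and used + len(c) > container_size:
            containers.append(current)   # container full: start a new one
            current, used = [], 0
        current.append(c)
        used += len(c)
    if current:
        containers.append(current)
    return containers

# 100 chunks of 16 bytes packed into 256-byte containers:
chunks = [bytes(16)] * 100
containers = pack_chunks(chunks, 256)
# API writes drop from len(chunks) = 100 to len(containers) = 7
```

With per-operation pricing, the operation bill scales with the container count rather than the chunk count.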
Explore costs on challenger object storage providers with radically different pricing models.
Compare costs across all seven storage providers side by side.
Visualize how skewness affects the popularity distribution of items under a Zipf model.
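The Zipf model behind the visualization can be stated in a few lines: item k gets weight 1 / k**s, normalized over all n items, so a larger skewness s concentrates probability on the most popular items. Function and parameter names here are illustrative.

```python
# Sketch of a Zipf popularity distribution over n items with skewness s.
def zipf_popularity(n: int, s: float) -> list[float]:
    weights = [1.0 / (k ** s) for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # probabilities, most popular first
```

For example, raising s increases the request share of the single most popular item.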
Given a skewness level and a target hit rate, how much unique data do you need to cache?
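The question above has a simple idealized answer: if a cache holds exactly the top-k most popular items, the hit rate is their cumulative probability under the Zipf model, so you can scan for the smallest k that reaches the target. This sketch assumes an ideal cache and illustrative names; it is not a model of any specific cache's eviction behavior.

```python
# Zipf popularity over n items with skewness s (most popular first).
def zipf_popularity(n: int, s: float) -> list[float]:
    weights = [1.0 / (k ** s) for k in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

# Smallest number of unique items an ideal top-k cache must hold
# to reach the target hit rate.
def items_needed(n: int, s: float, target_hit_rate: float) -> int:
    cumulative = 0.0
    for k, p in enumerate(zipf_popularity(n, s), start=1):
        cumulative += p
        if cumulative >= target_hit_rate:
            return k
    return n
```

The steeper the skew, the smaller the cache needed for the same hit rate.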
See how established cache providers (ElastiCache, CloudFront) affect origin costs.
Compare challenger cache providers that scale linearly with per-request pricing.
Combine storage provider, cache layer, chunk size, and container packing into a single cost view.