Part of bzrlib.chunk_writer
ChunkWriter writes compressed data into chunks of a fixed size.
If less data is supplied than fills a chunk, the chunk is padded with NULL bytes. If more data is supplied, then the writer packs as much in as it can, but never splits any item it was given.
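A minimal usage sketch, assuming only what is described on this page (the item strings are made up for illustration; the full return contract of finish() is shown further below):

    from bzrlib.chunk_writer import ChunkWriter

    # Pack two small items into a single 4096-byte chunk.
    writer = ChunkWriter(4096)
    writer.write('key1\x00value one\n')   # an item is never split across chunks
    writer.write('key2\x00value two\n')
    compressed_bytes, unused_bytes, num_nulls_needed = writer.finish()
    chunk = ''.join(compressed_bytes)     # the pieces of the finished chunk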
Class Variables
    _max_repack    To fit the maximum number of entries into a node, we will
                   sometimes start over and compress the whole list to get
                   tighter packing. We get diminishing returns after a while,
                   so this limits the number of times we will try. The default
                   is to try to avoid recompressing entirely, but setting this
                   to something like 20 will give maximum compression.
    _max_zsync     Another tunable knob. If _max_repack is set to 0, this
                   limits the number of times we will try to pack more data
                   into a node. It allows a single compression pass, rather
                   than trying until we overflow and then recompressing again.
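A hedged sketch of what these knobs imply for a caller. The two writer instances and the value 8 for _max_zsync are illustrative assumptions, not documented defaults; the supported way to switch modes is set_optimize(), described below.

    from bzrlib.chunk_writer import ChunkWriter

    # Maximum compression: allow many recompression passes
    # (the default avoids recompressing entirely).
    dense_writer = ChunkWriter(4096)
    dense_writer._max_repack = 20

    # Single-pass mode: never recompress, only cap the number of extra
    # Z_SYNC_FLUSH packing attempts (8 is an arbitrary illustrative value).
    fast_writer = ChunkWriter(4096)
    fast_writer._max_repack = 0
    fast_writer._max_zsync = 8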
Methods
    __init__                   Create a ChunkWriter to write chunk_size chunks.
    finish                     Finish the chunk.
    set_optimize               Change how we optimize our writes.
    write                      Write some bytes to the chunk.
    _recompress_all_bytes_in   Recompress the current bytes_in, and optionally more.
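A sketch of a typical chunk-filling loop built from the methods above. It assumes write() signals overflow through its return value (True when the item did not fit); that detail is not stated in this summary, so treat it as an assumption of the sketch.

    from bzrlib.chunk_writer import ChunkWriter

    def pack_items(items, chunk_size=4096):
        """Yield fixed-size compressed chunks, never splitting an item."""
        writer = ChunkWriter(chunk_size)
        for item in items:
            if writer.write(item):          # assumed: True means the item did not fit
                byte_parts, _, _ = writer.finish()
                yield ''.join(byte_parts)
                writer = ChunkWriter(chunk_size)
                writer.write(item)          # the rejected item opens the next chunk
        byte_parts, _, _ = writer.finish()
        yield ''.join(byte_parts)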
__init__
    Parameters
        chunk_size    The total byte count to emit at the end of the chunk.
        reserved      How many bytes to allow for reserved data. Reserved data
                      space can only be written to via write(..., reserved=True).
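An illustrative use of the reserved parameter: hold back space that only a reserved write may consume. The sizes and strings are made up.

    from bzrlib.chunk_writer import ChunkWriter

    writer = ChunkWriter(4096, reserved=50)
    writer.write('ordinary row data\n')             # may not use the reserved space
    writer.write('row count: 1\n', reserved=True)   # only reserved writes may use it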
finish
    This returns the final compressed chunk, and either None, or the bytes
    that did not fit in the chunk.
    Returns
        (compressed_bytes, unused_bytes, num_nulls_needed)
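A sketch of consuming the 3-tuple. Whether the NULL padding is already present in compressed_bytes or must be appended by the caller is not spelled out here, so the sketch pads defensively only if the joined data falls short of chunk_size.

    from bzrlib.chunk_writer import ChunkWriter

    chunk_size = 4096
    writer = ChunkWriter(chunk_size)
    writer.write('some item\n')
    compressed_bytes, unused_bytes, num_nulls_needed = writer.finish()

    chunk = ''.join(compressed_bytes)
    if len(chunk) < chunk_size:              # pad only if finish() has not already
        chunk += '\x00' * num_nulls_needed
    leftover = unused_bytes                  # None, or bytes to start the next chunk with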
set_optimize
    Parameters
        for_size    If True, optimize for minimum space usage; otherwise
                    optimize for fastest writing speed.
    Returns
        None
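A short sketch of switching modes with set_optimize(). Calling it before any data is written is an assumption of this sketch, not a documented requirement.

    from bzrlib.chunk_writer import ChunkWriter

    writer = ChunkWriter(4096)
    writer.set_optimize(for_size=True)        # tighter packing, slower writes
    writer.write('densely packed item\n')

    fast_writer = ChunkWriter(4096)
    fast_writer.set_optimize(for_size=False)  # favour write speed over density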
_recompress_all_bytes_in
    Parameters
        extra_bytes    Optional; if supplied, it is added with Z_SYNC_FLUSH.
    Returns
        (bytes_out, bytes_out_len, alt_compressed)
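This is an internal helper, so the sketch below only illustrates the underlying zlib technique with standard-library calls: recompress the already-accepted input in one pass, then add the optional extra bytes and use Z_SYNC_FLUSH so the compressed length can be measured without closing the stream. The function name and the returned compressor object are stand-ins, not bzrlib's actual implementation; in particular, the real method's third return value is named alt_compressed.

    import zlib

    def recompress_all_bytes_in(bytes_in, extra_bytes=None):
        """Illustrative stand-in: one compression pass over bytes_in, plus an
        optional Z_SYNC_FLUSH'd tail so the output length can be measured."""
        compressor = zlib.compressobj()
        bytes_out = []
        for accepted in bytes_in:
            out = compressor.compress(accepted)
            if out:
                bytes_out.append(out)
        if extra_bytes:
            out = compressor.compress(extra_bytes)
            if out:
                bytes_out.append(out)
            bytes_out.append(compressor.flush(zlib.Z_SYNC_FLUSH))
        bytes_out_len = sum(len(b) for b in bytes_out)
        return bytes_out, bytes_out_len, compressor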