b.c.ChunkWriter(object) : class documentation

Part of bzrlib.chunk_writer

ChunkWriter allows writing of compressed data with a fixed size.

If less data is supplied than fills a chunk, the chunk is padded with NULL bytes. If more data is supplied, then the writer packs as much in as it can, but never splits any item it was given.

The algorithm for packing is open to improvement! Currently it is:
  • write the bytes given
  • if the total seen bytes so far exceeds the chunk size, flush.
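The two steps above can be sketched with plain zlib. This is a simplified illustration, not bzrlib's implementation: it naively recompresses everything on each write (the real class limits this via _max_repack), and `pack_items` is an invented helper name.

```python
import zlib

def pack_items(items, chunk_size):
    # Illustrative packing loop: accept items one at a time, recompressing
    # the whole accepted list each time, and stop before the compressed
    # output would exceed chunk_size. An item is never split: if it does
    # not fit, it and everything after it are returned unused.
    accepted = []
    compressed = b""
    for i, item in enumerate(items):
        c = zlib.compressobj()
        trial = c.compress(b"".join(accepted + [item])) + c.flush()
        if len(trial) > chunk_size:
            return compressed, items[i:]
        accepted.append(item)
        compressed = trial
    return compressed, []
```

Recompressing from scratch on every write is exactly the expensive behaviour that the _max_repack and _max_zsync knobs below exist to limit.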
Class Variables

_max_repack
    To fit the maximum number of entries into a node, we will sometimes start over and compress the whole list to get tighter packing. We get diminishing returns after a while, so this limits the number of times we will try. The default is to try to avoid recompressing entirely, but setting this to something like 20 will give maximum compression.

_max_zsync
    Another tunable knob. If _max_repack is set to 0, this limits the number of times we will try to pack more data into a node. This allows us to do a single compression pass, rather than trying until we overflow and then recompressing.
Method __init__ Create a ChunkWriter to write chunk_size chunks.
Method finish Finish the chunk.
Method set_optimize Change how we optimize our writes.
Method write Write some bytes to the chunk.
Method _recompress_all_bytes_in Recompress the current bytes_in, and optionally more.
def __init__(self, chunk_size, reserved=0, optimize_for_size=False):
Create a ChunkWriter to write chunk_size chunks.
Parameters

chunk_size
    The total byte count to emit at the end of the chunk.

reserved
    How many bytes to allow for reserved data. Reserved data space can only be written to via write(..., reserved=True).
def finish(self):
Finish the chunk.

This returns the final compressed chunk, and either None, or the bytes that did not fit in the chunk.

Returns

(compressed_bytes, unused_bytes, num_nulls_needed)

  • compressed_bytes: a list of byte strings that were output from the compressor. If the compressed length was not exactly chunk_size, the final string is a run of NULL bytes that pads the chunk out to chunk_size.
  • unused_bytes: None, or the last bytes that were added, which we could not fit.
  • num_nulls_needed: how many NULL bytes are padded at the end.
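The padding behaviour can be illustrated with plain zlib. This is a simplified sketch under assumed behaviour (`finish_chunk` is an invented helper, and the real finish() also accounts for the reserved space and returns the pieces as a list rather than one string):

```python
import zlib

def finish_chunk(bytes_in, chunk_size):
    # Compress everything written so far, then pad the result with NULL
    # bytes so the emitted chunk is exactly chunk_size bytes long.
    compressed = zlib.compress(b"".join(bytes_in))
    num_nulls = chunk_size - len(compressed)
    assert num_nulls >= 0, "caller must ensure the data fits the chunk"
    return compressed + b"\x00" * num_nulls, num_nulls
```

A reader of such a chunk decompresses the leading zlib stream and ignores the trailing NULL padding.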
def set_optimize(self, for_size=True):
Change how we optimize our writes.
Parameters

for_size
    If True, optimize for minimum space usage; otherwise optimize for fastest writing speed.
Returns
    None
def _recompress_all_bytes_in(self, extra_bytes=None):
Recompress the current bytes_in, and optionally more.
Parameters

extra_bytes
    Optional; if supplied, we will add it with Z_SYNC_FLUSH.
Returns

(bytes_out, bytes_out_len, alt_compressed)

  • bytes_out: the compressed byte strings returned from the compressor.
  • bytes_out_len: the length of the compressed output.
  • alt_compressed: the compressor object, with everything packed in so far and Z_SYNC_FLUSH called.
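A minimal sketch of this recompression step using plain zlib, under the assumption that the sync flush is always performed at the end (the helper name `recompress_all` and that detail are not taken from bzrlib's code):

```python
import zlib

def recompress_all(bytes_in, extra_bytes=None):
    # Feed every already-accepted byte string into a fresh compressor,
    # optionally add extra_bytes, then Z_SYNC_FLUSH so the compressed
    # length so far can be measured exactly without ending the stream.
    compressor = zlib.compressobj()
    bytes_out = []
    for b in bytes_in:
        out = compressor.compress(b)
        if out:
            bytes_out.append(out)
    if extra_bytes:
        out = compressor.compress(extra_bytes)
        if out:
            bytes_out.append(out)
    out = compressor.flush(zlib.Z_SYNC_FLUSH)
    if out:
        bytes_out.append(out)
    bytes_out_len = sum(len(b) for b in bytes_out)
    return bytes_out, bytes_out_len, compressor
```

Z_SYNC_FLUSH matters here: unlike a full flush, it forces all pending input out to the byte stream while leaving the compressor usable for further writes.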
def write(self, bytes, reserved=False):
Write some bytes to the chunk.

If the bytes fit, False is returned. Otherwise True is returned and the bytes have not been added to the chunk.

Parameters

bytes
    The bytes to include.

reserved
    If True, we can use the space reserved in the constructor.
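The fit/overflow protocol of write() is the part callers interact with most: a True return means "start a new chunk and resend". A toy stand-in class makes the calling pattern concrete (TinyChunkWriter and write_all are invented for illustration and are far simpler than bzrlib's ChunkWriter):

```python
import zlib

class TinyChunkWriter:
    # Minimal stand-in mimicking the write()/finish() protocol described
    # above: write() returns False if the bytes fit, True (without adding
    # them) if they would overflow; finish() pads the chunk with NULLs.
    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.bytes_in = []

    def write(self, data):
        trial = zlib.compress(b"".join(self.bytes_in + [data]))
        if len(trial) > self.chunk_size:
            return True
        self.bytes_in.append(data)
        return False

    def finish(self):
        compressed = zlib.compress(b"".join(self.bytes_in))
        return compressed + b"\x00" * (self.chunk_size - len(compressed))

def write_all(items, chunk_size):
    # Caller loop: whenever write() reports an overflow, close the current
    # chunk and resend the rejected item to a fresh writer.
    chunks, writer = [], TinyChunkWriter(chunk_size)
    for item in items:
        if writer.write(item):
            chunks.append(writer.finish())
            writer = TinyChunkWriter(chunk_size)
            writer.write(item)  # assumes a single item always fits
    chunks.append(writer.finish())
    return chunks
```

Because write() never splits an item, the rejected bytes arrive intact at the start of the next chunk.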
API Documentation for Bazaar, generated by pydoctor at 2014-04-23 00:01:24.