b.g.GroupCompressVersionedFiles(VersionedFilesWithFallbacks) : class documentation

Part of bzrlib.groupcompress

A group-compress based VersionedFiles implementation.
Method __init__: Create a GroupCompressVersionedFiles object.
Method without_fallbacks: Return a clone of this object without any fallbacks configured.
Method add_lines: Add a text to the store.
Method add_fallback_versioned_files: Add a source of texts for texts not present in this knit.
Method annotate: See VersionedFiles.annotate.
Method get_annotator: Undocumented
Method check: See VersionedFiles.check().
Method clear_cache: See VersionedFiles.clear_cache().
Method get_parent_map: Get a map of the graph parents of keys.
Method get_missing_compression_parent_keys: Return the keys of missing compression parents.
Method get_record_stream: Get a stream of records for keys.
Method get_sha1s: See VersionedFiles.get_sha1s().
Method insert_record_stream: Insert a record stream into this container.
Method iter_lines_added_or_present_in_keys: Iterate over the lines in the versioned files from keys.
Method keys: See VersionedFiles.keys.
Method _add_text: See VersionedFiles._add_text().
Method _check_add: Check that version_id and lines are safe to add.
Method _get_parent_map_with_sources: Get a map of the parents of keys.
Method _get_blocks: Get GroupCompressBlocks for the given read_memos.
Method _find_from_fallback: Find whatever keys you can from the fallbacks.
Method _get_ordered_source_keys: Get the (source, [keys]) list.
Method _get_as_requested_source_keys: Undocumented
Method _get_io_ordered_source_keys: Undocumented
Method _get_remaining_record_stream: Get a stream of records for keys.
Method _get_compressor_settings: Undocumented
Method _make_group_compressor: Undocumented
Method _insert_record_stream: Internal core to insert a record stream into this container.

Inherited from VersionedFilesWithFallbacks:

Method get_known_graph_ancestry: Get a KnownGraph instance with the ancestry of keys.

Inherited from VersionedFiles (via VersionedFilesWithFallbacks):

Method add_mpdiffs: Add mpdiffs to this VersionedFile.
Static Method check_not_reserved_id: Undocumented
Method make_mpdiffs: Create multiparent diffs for specified keys.
Method _check_lines_not_unicode: Check that lines being added to a versioned file are not unicode.
Method _check_lines_are_lines: Check that the lines really are full lines without inline EOL.
Method _extract_blocks: Undocumented
Method _transitive_fallbacks: Return the whole stack of fallback versionedfiles.
def __init__(self, index, access, delta=True, _unadded_refs=None, _group_cache=None):
Create a GroupCompressVersionedFiles object.
Parameters:
    index: The index object storing access and graph data.
    access: The access object storing raw data.
    delta: Whether to delta compress or just entropy compress.
    _unadded_refs: Private parameter; don't use.
    _group_cache: Private parameter; don't use.
def without_fallbacks(self):
Return a clone of this object without any fallbacks configured.
def add_lines(self, key, parents, lines, parent_texts=None, left_matching_blocks=None, nostore_sha=None, random_id=False, check_content=True):
Add a text to the store.
Parameters:
    key: The key tuple of the text to add.
    parents: The parent key tuples of the text to add.
    lines: A list of lines. Each line must be a bytestring. All lines except the last must be terminated with \n and contain no other \n's; the last line may either contain no \n's or a single terminating \n. If the lines list does not meet this constraint, the add routine may error or may succeed, but you will be unable to read the data back accurately. (Checking that the lines have been split correctly is expensive and extremely unlikely to catch bugs, so it is not done at runtime unless check_content is True.)
    parent_texts: An optional dictionary containing the opaque representations of some or all of the parents of version_id, to allow delta optimisations. VERY IMPORTANT: the texts must be those returned by add_lines, or data corruption can be caused.
    left_matching_blocks: A hint about which areas are common between the text and its left-hand parent. The format is that of SequenceMatcher.get_matching_blocks.
    nostore_sha: Raise ExistingContent and do not add the lines to the versioned file if the digest of the lines matches this.
    random_id: If True, a random id has been selected rather than an id determined by some deterministic process such as a converter from a foreign VCS. When True, the backend may choose not to check for uniqueness of the resulting key within the versioned file, so this should only be done when the result is expected to be unique anyway.
    check_content: If True, the lines supplied are verified to be bytestrings that are correctly formed lines.
Returns:
    The text sha1, the number of bytes in the text, and an opaque representation of the inserted version, which can be provided back to future add_lines calls in the parent_texts dictionary.
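The constraint on the lines argument can be made concrete with a small validator. This is a hypothetical helper written for illustration, not bzrlib's own internal check (bzrlib performs a similar verification when check_content is True):

```python
def check_lines(lines):
    """Validate the add_lines constraint on a list of lines.

    Every line must be a bytestring; every line except the last must end
    with exactly one b'\\n' and contain no embedded b'\\n'; the last line
    may omit the terminator.  (Hypothetical sketch, not bzrlib code.)
    """
    for line in lines:
        if not isinstance(line, bytes):
            raise TypeError("lines must be bytestrings: %r" % (line,))
    for line in lines[:-1]:
        # Interior lines: exactly one newline, at the very end.
        if not line.endswith(b'\n') or b'\n' in line[:-1]:
            raise ValueError("badly terminated interior line: %r" % (line,))
    if lines and b'\n' in lines[-1][:-1]:
        # Last line: at most a single terminating newline.
        raise ValueError("embedded newline in last line: %r" % (lines[-1],))


check_lines([b'first line\n', b'second line\n', b'last line'])  # valid
```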
def _add_text(self, key, parents, text, nostore_sha=None, random_id=False):
See VersionedFiles._add_text().
def add_fallback_versioned_files(self, a_versioned_files):
Add a source of texts for texts not present in this knit.
Parameters:
    a_versioned_files: A VersionedFiles object.
def annotate(self, key):
See VersionedFiles.annotate.
def get_annotator(self):
Undocumented
def check(self, progress_bar=None, keys=None):
See VersionedFiles.check().
def clear_cache(self):
See VersionedFiles.clear_cache()
def _check_add(self, key, lines, random_id, check_content):
Check that version_id and lines are safe to add.
def get_parent_map(self, keys):
Get a map of the graph parents of keys.
Parameters:
    keys: The keys to look up parents for.
Returns:
    A mapping from keys to parents. Absent keys are absent from the mapping.
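The shape of that mapping is easy to model: present keys map to a tuple of parent keys, and absent ("ghost") keys are simply left out of the result rather than mapped to None. A toy illustration of the contract (not bzrlib code; the graph dict here is a stand-in for the real index):

```python
def toy_get_parent_map(graph, keys):
    """Mimic the get_parent_map contract over a plain dict graph."""
    return {k: graph[k] for k in keys if k in graph}


# Keys in VersionedFiles are tuples; parents are tuples of keys.
graph = {
    (b'rev-1',): (),
    (b'rev-2',): ((b'rev-1',),),
}
result = toy_get_parent_map(graph, [(b'rev-2',), (b'ghost',)])
# (b'ghost',) is absent from the result, not mapped to None.
```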
def _get_parent_map_with_sources(self, keys):
Get a map of the parents of keys.
Parameters:
    keys: The keys to look up parents for.
Returns:
    A tuple. The first element is a mapping from keys to parents; absent keys are absent from the mapping. The second element is a list with the locations each key was found in: the first element is the in-this-knit parents, the second the first fallback source, and so on.
def _get_blocks(self, read_memos):
Get GroupCompressBlocks for the given read_memos.
Returns:
    A series of (read_memo, block) pairs, in the order they were originally passed.
def get_missing_compression_parent_keys(self):
Return the keys of missing compression parents.

Missing compression parents occur when a record stream was missing basis texts, or an index was scanned that had missing basis texts.

def get_record_stream(self, keys, ordering, include_delta_closure):
Get a stream of records for keys.
Parameters:
    keys: The keys to include.
    ordering: Either 'unordered' or 'topological'. A topologically sorted stream has compression parents strictly before their children.
    include_delta_closure: If True, the closure across any compression parents will be included (in the opaque data).
Returns:
    An iterator of ContentFactory objects, each of which is only valid until the iterator is advanced.
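The 'topological' guarantee (compression parents strictly before their children) is an ordinary topological sort over the parent map. The sketch below uses the modern stdlib graphlib purely for illustration; bzrlib itself (being Python 2 era) uses its own tsort machinery:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# A compression-parent graph: key -> tuple of parent keys.
parent_map = {
    (b'a',): (),
    (b'b',): ((b'a',),),
    (b'c',): ((b'a',), (b'b',)),
}

# static_order() emits dependencies (parents) before dependents
# (children), which is exactly the 'topological' stream guarantee.
order = list(TopologicalSorter(
    {k: set(parents) for k, parents in parent_map.items()}).static_order())
```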
def _find_from_fallback(self, missing):
Find whatever keys you can from the fallbacks.
Parameters:
    missing: A set of missing keys. This set will be mutated as keys are found from a fallback_vfs.
Returns:
    A tuple (parent_map, key_to_source_map, source_results):
        parent_map: the overall key => parent_keys mapping
        key_to_source_map: a dict of {key: source}
        source_results: a list of (source, keys)
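A toy model of the fallback search, with each fallback represented as a plain dict of key -> parent_keys (hypothetical, stdlib-only; the real fallbacks are VersionedFiles objects). Note how keys found in a fallback are removed from missing, mirroring the mutation the docstring describes:

```python
def toy_find_from_fallback(fallbacks, missing):
    """Search fallbacks in order; return the three documented results."""
    parent_map = {}
    key_to_source_map = {}
    source_results = []
    for source in fallbacks:
        found = {k: source[k] for k in missing if k in source}
        parent_map.update(found)
        for k in found:
            key_to_source_map[k] = source
        source_results.append((source, sorted(found)))
        missing.difference_update(found)  # mutate the caller's set
    return parent_map, key_to_source_map, source_results
```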
def _get_ordered_source_keys(self, ordering, parent_map, key_to_source_map):
Get the (source, [keys]) list.

The returned objects should be in the order defined by 'ordering', which can weave between different sources.

Parameters:
    ordering: Must be one of 'topological' or 'groupcompress'.
Returns:
    List of [(source, [keys])] tuples, such that all keys are in the defined order, regardless of source.
def _get_as_requested_source_keys(self, orig_keys, locations, unadded_keys, key_to_source_map):
Undocumented
def _get_io_ordered_source_keys(self, locations, unadded_keys, source_result):
Undocumented
def _get_remaining_record_stream(self, keys, orig_keys, ordering, include_delta_closure):
Get a stream of records for keys.
Parameters:
    keys: The keys to include.
    ordering: One of 'unordered', 'topological', 'groupcompress' or 'as-requested'.
    include_delta_closure: If True, the closure across any compression parents will be included (in the opaque data).
Returns:
    An iterator of ContentFactory objects, each of which is only valid until the iterator is advanced.
def get_sha1s(self, keys):
See VersionedFiles.get_sha1s().
def insert_record_stream(self, stream):
Insert a record stream into this container.
Parameters:
    stream: A stream of records to insert.
Returns:
    None
def _get_compressor_settings(self):
Undocumented
def _make_group_compressor(self):
Undocumented
def _insert_record_stream(self, stream, random_id=False, nostore_sha=None, reuse_blocks=True):
Internal core to insert a record stream into this container.

This helper function has a different interface than insert_record_stream to allow add_lines to be minimal, but still return the needed data.

Parameters:
    stream: A stream of records to insert.
    nostore_sha: If the sha1 of a given text matches nostore_sha, raise ExistingContent rather than committing the new text.
    reuse_blocks: If the source is streaming from groupcompress blocks, just insert the blocks as-is, rather than expanding the texts and inserting again.
Returns:
    An iterator over the sha1 of the inserted records.
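The return contract, an iterator yielding the sha1 of each inserted record, can be sketched with a toy store. This is a hypothetical illustration of the interface only (the stream here is simplified to (key, bytes) pairs; the real stream carries ContentFactory records):

```python
import hashlib

def toy_insert_record_stream(store, stream, nostore_sha=None):
    """Insert (key, text) pairs, yielding each text's sha1 lazily."""
    for key, text in stream:
        sha1 = hashlib.sha1(text).hexdigest()
        if sha1 == nostore_sha:
            # The real code raises ExistingContent here.
            raise ValueError("text already present: %s" % sha1)
        store[key] = text
        yield sha1
```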
def iter_lines_added_or_present_in_keys(self, keys, pb=None):

Iterate over the lines in the versioned files from keys.

This may return lines from other keys. Each item the returned iterator yields is a tuple of a line and a text version in which that line is present (not necessarily the version that introduced it).

Ordering of results is in whatever order is most suitable for the underlying storage format.

If a progress bar is supplied, it may be used to indicate progress. The caller is responsible for cleaning up progress bars (because this is an iterator).

NOTES:
  • Lines are normalised by the underlying store: they will all have \n terminators.
  • Lines are returned in arbitrary order.

Returns:
    An iterator over (line, key).
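A toy sketch of the (line, key) contract: each line is reported once, tagged with a key it is present in (not necessarily the key that introduced it), in whatever order suits the store. Hypothetical code, not the bzrlib implementation:

```python
def toy_iter_lines(texts, keys):
    """Yield (line, key) pairs; each distinct line is yielded once."""
    seen = set()
    for key in keys:
        for line in texts[key]:
            if line not in seen:
                seen.add(line)
                yield line, key
```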
def keys(self):
See VersionedFiles.keys.
API Documentation for Bazaar, generated by pydoctor at 2022-06-16 00:25:16.