Part of bzrlib.groupcompress
Method | __init__ | Create a GroupCompressVersionedFiles object. |
Method | without_fallbacks | Return a clone of this object without any fallbacks configured. |
Method | add_lines | Add a text to the store. |
Method | add_fallback_versioned_files | Add a source of texts for texts not present in this knit. |
Method | annotate | See VersionedFiles.annotate. |
Method | get_annotator | Undocumented |
Method | check | See VersionedFiles.check(). |
Method | clear_cache | See VersionedFiles.clear_cache() |
Method | get_parent_map | Get a map of the graph parents of keys. |
Method | get_missing_compression_parent_keys | Return the keys of missing compression parents. |
Method | get_record_stream | Get a stream of records for keys. |
Method | get_sha1s | See VersionedFiles.get_sha1s(). |
Method | insert_record_stream | Insert a record stream into this container. |
Method | iter_lines_added_or_present_in_keys | Iterate over the lines in the versioned files from keys. |
Method | keys | See VersionedFiles.keys. |
Method | _add_text | See VersionedFiles._add_text(). |
Method | _check_add | Check that version_id and lines are safe to add. |
Method | _get_parent_map_with_sources | Get a map of the parents of keys. |
Method | _get_blocks | Get GroupCompressBlocks for the given read_memos. |
Method | _find_from_fallback | Find whatever keys you can from the fallbacks. |
Method | _get_ordered_source_keys | Get the (source, [keys]) list. |
Method | _get_as_requested_source_keys | Undocumented |
Method | _get_io_ordered_source_keys | Undocumented |
Method | _get_remaining_record_stream | Get a stream of records for keys. |
Method | _get_compressor_settings | Undocumented |
Method | _make_group_compressor | Undocumented |
Method | _insert_record_stream | Internal core to insert a record stream into this container. |
Inherited from VersionedFilesWithFallbacks:
Method | get_known_graph_ancestry | Get a KnownGraph instance with the ancestry of keys. |
Inherited from VersionedFiles (via VersionedFilesWithFallbacks):
Method | add_mpdiffs | Add mpdiffs to this VersionedFile. |
Static Method | check_not_reserved_id | Undocumented |
Method | make_mpdiffs | Create multiparent diffs for specified keys. |
Method | _check_lines_not_unicode | Check that lines being added to a versioned file are not unicode. |
Method | _check_lines_are_lines | Check that the lines really are full lines without inline EOL. |
Method | _extract_blocks | Undocumented |
Method | _transitive_fallbacks | Return the whole stack of fallback versionedfiles. |
Parameters | index | The index object storing access and graph data. |
access | The access object storing raw data. | |
delta | Whether to delta compress or just entropy compress. | |
_unadded_refs | private parameter, don't use. | |
_group_cache | private parameter, don't use. |
Parameters | key | The key tuple of the text to add. |
parents | The parents key tuples of the text to add. | |
lines | A list of lines. Each line must be a bytestring, and all of them except the last must be terminated with \n and contain no other \n's. The last line may either contain no \n's or a single terminating \n. If the lines list does not meet this constraint the add routine may error or may succeed, but you will be unable to read the data back accurately. (Checking that the lines have been split correctly is expensive and extremely unlikely to catch bugs, so it is not done at runtime unless check_content is True.) | |
parent_texts | An optional dictionary containing the opaque representations of some or all of the parents of version_id to allow delta optimisations. VERY IMPORTANT: the texts must be those returned by add_lines or data corruption can be caused. | |
left_matching_blocks | a hint about which areas are common between the text and its left-hand-parent. The format is the SequenceMatcher.get_matching_blocks format. | |
nostore_sha | Raise ExistingContent and do not add the lines to the versioned file if the digest of the lines matches this. | |
random_id | If True a random id has been selected rather than an id determined by some deterministic process such as a converter from a foreign VCS. When True the backend may choose not to check for uniqueness of the resulting key within the versioned file, so this should only be done when the result is expected to be unique anyway. | |
check_content | If True, the lines supplied are verified to be bytestrings that are correctly formed lines. | |
Returns | The text sha1, the number of bytes in the text, and an opaque representation of the inserted version which can be provided back to future add_lines calls in the parent_texts dictionary. |
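The `lines` contract above is easy to get wrong, so a sketch may help. This is a hedged illustration in plain Python, not bzrlib code: `check_lines` is a hypothetical helper mirroring what `_check_lines_are_lines` enforces when `check_content` is True, and the `difflib` call shows the `SequenceMatcher.get_matching_blocks` format that `left_matching_blocks` expects.

```python
import difflib

def check_lines(lines):
    """Hypothetical helper (not part of bzrlib) mirroring the contract
    that _check_lines_are_lines enforces when check_content is True."""
    for i, line in enumerate(lines):
        if not isinstance(line, bytes):
            raise ValueError("line %d is not a bytestring" % i)
        body, sep, rest = line.partition(b'\n')
        if sep and rest:
            raise ValueError("line %d contains an embedded newline" % i)
        if not sep and i != len(lines) - 1:
            raise ValueError("line %d lacks a terminating newline" % i)
    return True

check_lines([b'first\n', b'second\n', b'last may lack a newline'])

# left_matching_blocks uses the SequenceMatcher.get_matching_blocks format:
# a list of (a_index, b_index, size) triples ending with a zero-size sentinel.
blocks = difflib.SequenceMatcher(
    None, [b'a\n', b'b\n'], [b'a\n', b'c\n']).get_matching_blocks()
```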
Parameters | a_versioned_files | A VersionedFiles object. |
Parameters | keys | The keys to look up parents for. |
Returns | A mapping from keys to parents. Absent keys are absent from the mapping. |
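The shape of the returned mapping can be sketched with plain dicts. This is a toy stand-in, not bzrlib's implementation; the names are invented, but the contract shown (tuple keys, absent keys omitted rather than mapped to None) is the one documented above.

```python
# Toy stand-in for the graph data behind get_parent_map(); keys are
# tuples, as everywhere in the VersionedFiles API.
stored_parents = {
    (b'rev-1',): (),
    (b'rev-2',): ((b'rev-1',),),
    (b'rev-3',): ((b'rev-2',),),
}

def get_parent_map(keys):
    # Absent keys are simply left out of the result, never mapped to None.
    return {k: stored_parents[k] for k in keys if k in stored_parents}

result = get_parent_map([(b'rev-2',), (b'ghost',)])
# → {(b'rev-2',): ((b'rev-1',),)}
```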
Parameters | keys | The keys to look up parents for. |
Returns | A tuple. The first element is a mapping from keys to parents; absent keys are absent from the mapping. The second element is a list of the locations each key was found in: the first entry is the in-this-knit parents, the second the first fallback source, and so on. |
Returns | a series of (read_memo, block) pairs, in the order they were originally passed. |
Missing compression parents occur when a record stream was missing basis texts, or an index was scanned that had missing basis texts.
Parameters | keys | The keys to include. |
ordering | Either 'unordered' or 'topological'. A topologically sorted stream has compression parents strictly before their children. | |
include_delta_closure | If True then the closure across any compression parents will be included (in the opaque data). | |
Returns | An iterator of ContentFactory objects, each of which is only valid until the iterator is advanced. |
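The 'topological' ordering guarantee can be illustrated with a small sort over a parent map. This is a hypothetical sketch, not bzrlib's implementation (which lives in its tsort module); it only shows the invariant that compression parents come strictly before their children.

```python
def topo_order(parent_map):
    """Hypothetical sketch of 'topological' ordering: every parent is
    emitted strictly before its children; ghost parents are skipped."""
    order, visited = [], set()
    def visit(key):
        if key in visited:
            return
        visited.add(key)
        for parent in parent_map.get(key, ()):
            if parent in parent_map:   # skip ghosts
                visit(parent)
        order.append(key)
    for key in sorted(parent_map):
        visit(key)
    return order

parents = {(b'c',): ((b'b',),), (b'b',): ((b'a',),), (b'a',): ()}
topo_order(parents)
# → [(b'a',), (b'b',), (b'c',)]
```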
Parameters | missing | A set of missing keys. This set will be mutated as keys are found from a fallback_vfs |
Returns | A tuple (parent_map, key_to_source_map, source_results): parent_map is the overall mapping of key => parent_keys; key_to_source_map is a dict of {key: source}; source_results is a list of (source, keys) pairs. |
The returned objects should be in the order defined by 'ordering', which can weave between different sources.
Parameters | ordering | Must be one of 'topological' or 'groupcompress' |
Returns | List of [(source, [keys])] tuples, such that all keys are in the defined order, regardless of source. |
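The grouping step can be sketched as follows. This is a hedged illustration with invented names, not bzrlib code: it collapses an already ordered key list into `(source, [keys])` runs, opening a new group whenever the source changes, so the overall key order is preserved even as the list weaves between sources.

```python
def group_by_source(ordered_keys, key_to_source):
    """Hypothetical sketch: collapse an ordered key list into
    (source, [keys]) runs without disturbing the key order."""
    result = []
    for key in ordered_keys:
        source = key_to_source[key]
        if result and result[-1][0] == source:
            result[-1][1].append(key)
        else:
            result.append((source, [key]))
    return result

keys = [(b'k1',), (b'k2',), (b'k3',), (b'k4',)]
sources = {(b'k1',): 'self', (b'k2',): 'self',
           (b'k3',): 'fallback', (b'k4',): 'self'}
group_by_source(keys, sources)
# → [('self', [(b'k1',), (b'k2',)]), ('fallback', [(b'k3',)]), ('self', [(b'k4',)])]
```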
Parameters | keys | The keys to include. |
ordering | one of 'unordered', 'topological', 'groupcompress' or 'as-requested' | |
include_delta_closure | If True then the closure across any compression parents will be included (in the opaque data). | |
Returns | An iterator of ContentFactory objects, each of which is only valid until the iterator is advanced. |
Parameters | stream | A stream of records to insert. |
Returns | None | |
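The typical use of insert_record_stream is to copy texts between stores by feeding one store's get_record_stream output into another's insert_record_stream. The sketch below uses a toy stand-in class, not bzrlib's GroupCompressVersionedFiles, to show only the shape of that contract.

```python
class TinyStore:
    """Toy stand-in (not bzrlib) showing the stream contract: records
    from one store's get_record_stream() are fed straight into another
    store's insert_record_stream(), which returns None."""
    def __init__(self):
        self._texts = {}
    def get_record_stream(self, keys):
        for key in keys:
            yield (key, self._texts[key])
    def insert_record_stream(self, stream):
        for key, text in stream:
            self._texts[key] = text

source, target = TinyStore(), TinyStore()
source._texts = {(b'k1',): b'one\n', (b'k2',): b'two\n'}
target.insert_record_stream(source.get_record_stream([(b'k1',), (b'k2',)]))
```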
This helper function has a different interface than insert_record_stream to allow add_lines to be minimal, but still return the needed data.
Parameters | stream | A stream of records to insert. |
nostore_sha | If the sha1 of a given text matches nostore_sha, raise ExistingContent, rather than committing the new text. | |
reuse_blocks | If the source is streaming from groupcompress-blocks, just insert the blocks as-is, rather than expanding the texts and inserting again. | |
Returns | An iterator over the sha1 of the inserted records. | |
Iterate over the lines in the versioned files from keys.
This may return lines from other keys. Each item the returned iterator yields is a tuple of a line and a text version that that line is present in (not introduced in).
Ordering of results is in whatever order is most suitable for the underlying storage format.
If a progress bar is supplied, it may be used to indicate progress. The caller is responsible for cleaning up progress bars (because this is an iterator).
Lines are normalised by the underlying store: they will all have \n terminators.
Lines are returned in arbitrary order.
Returns | An iterator over (line, key). |
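The (line, key) iteration pattern above can be sketched with a toy stand-in. The helper and data below are invented for illustration; the point is that a consumer should treat the yielded key as "a key this line is present in", not the key that introduced it, and should not rely on any particular order.

```python
def iter_lines_added_or_present(texts, keys):
    """Toy stand-in (not bzrlib): yield (line, key) pairs. A real store
    may attribute a line to any key it is present in, and the order is
    whatever suits the storage format."""
    for key in keys:
        for line in texts[key]:
            yield (line, key)

texts = {(b'k1',): [b'a\n', b'b\n'], (b'k2',): [b'b\n', b'c\n']}
seen = set()
for line, key in iter_lines_added_or_present(texts, [(b'k1',), (b'k2',)]):
    seen.add(line)
# seen now holds every distinct line present across both keys
```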