Package CedarBackup3 :: Module filesystem :: Class BackupFileList

Class BackupFileList


object --+        
         |        
      list --+    
             |    
FilesystemList --+
                 |
                BackupFileList

List of files to be backed up.

A BackupFileList is a FilesystemList containing a list of files to be backed up. It only contains files, not directories (soft links are treated like files). On top of the generic functionality provided by FilesystemList, this class adds functionality to keep a hash (checksum) for each file in the list, and it also provides a method to calculate the total size of the files in the list and a way to export the list into tar form.
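
For illustration, a minimal usage sketch follows (it assumes the CedarBackup3 package is installed; the paths are placeholders):

  from CedarBackup3.filesystem import BackupFileList

  backupList = BackupFileList()
  backupList.addDirContents("/etc")  # recursive add, inherited from FilesystemList
  print("Backing up %d files, %d bytes total." % (len(backupList), backupList.totalSize()))
  backupList.generateTarfile("/tmp/etc-backup.tar.gz", mode="targz")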

Instance Methods

__init__(self)
Initializes a list with no configured exclusions. Returns a new empty list.

addDir(self, path)
Adds a directory to the list.

totalSize(self)
Returns the total size among all files in the list.

generateSizeMap(self)
Generates a mapping from file to file size in bytes.

generateDigestMap(self, stripPrefix=None)
Generates a mapping from file to file digest.

generateFitted(self, capacity, algorithm='worst_fit')
Generates a list of items that fit in the indicated capacity.

generateTarfile(self, path, mode='tar', ignore=False, flat=False)
Creates a tar file containing the files in the list.

removeUnchanged(self, digestMap, captureDigest=False)
Removes unchanged entries from the list.

generateSpan(self, capacity, algorithm='worst_fit')
Splits the list of items into sub-lists that fit in a given capacity.

_getKnapsackTable(self, capacity=None)
Converts the list into the form needed by the knapsack algorithms.

Inherited from FilesystemList: addDirContents, addFile, normalize, removeDirs, removeFiles, removeInvalid, removeLinks, removeMatch, verify

Inherited from list: __add__, __contains__, __delitem__, __delslice__, __eq__, __ge__, __getattribute__, __getitem__, __getslice__, __gt__, __iadd__, __imul__, __iter__, __le__, __len__, __lt__, __mul__, __ne__, __new__, __repr__, __reversed__, __rmul__, __setitem__, __setslice__, __sizeof__, append, count, extend, index, insert, pop, remove, reverse, sort

Inherited from object: __delattr__, __format__, __reduce__, __reduce_ex__, __setattr__, __str__, __subclasshook__

Static Methods

_generateDigest(path)
Generates an SHA digest for a given file on disk.

_getKnapsackFunction(algorithm)
Returns a reference to the function associated with an algorithm name.

Class Variables

Inherited from list: __hash__

Properties

Inherited from FilesystemList: excludeBasenamePatterns, excludeDirs, excludeFiles, excludeLinks, excludePaths, excludePatterns, ignoreFile

Inherited from object: __class__

Method Details

__init__(self) (Constructor)

Initializes a list with no configured exclusions.

Returns: new empty list
Overrides: object.__init__

addDir(self, path)


Adds a directory to the list.

Note that this class does not allow directories to be added by themselves (a backup list contains only files). However, since links to directories are technically files, we allow them to be added.

This method is implemented in terms of the superclass method, with one additional validation: the superclass method is only called if the passed-in path is both a directory and a link. All of the superclass's existing validations and restrictions apply.

Parameters:
  • path (String representing a path on disk) - Directory path to be added to the list
Returns:
Number of items added to the list.
Raises:
  • ValueError - If path is not a directory or does not exist.
  • ValueError - If the path could not be encoded properly.
Overrides: FilesystemList.addDir
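
To make the link-only rule concrete, here is a standalone sketch of the validation described above (addBackupDir is a hypothetical helper, not the actual implementation, and it skips the superclass's exclusion logic):

  import os

  def addBackupDir(backupList, path):
      """Approximate BackupFileList.addDir: only links to directories are added."""
      if not os.path.isdir(path):
          raise ValueError("Path is not a directory or does not exist on disk.")
      if os.path.islink(path):  # a link to a directory is technically a file
          backupList.append(path)
          return 1  # one item added
      return 0      # plain directories are never added to a backup list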

totalSize(self)


Returns the total size among all files in the list. Only files are counted. Soft links that point at files are ignored. Entries which do not exist on disk are ignored.

Returns:
Total size, in bytes

generateSizeMap(self)


Generates a mapping from file to file size in bytes. The mapping does include soft links, which are listed with size zero. Entries which do not exist on disk are ignored.

Returns:
Dictionary mapping file to file size
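
The size rules for totalSize and generateSizeMap can be approximated as follows (a sketch of the documented behavior, not the library's code):

  import os

  def total_size(entries):
      """Count only real files; ignore soft links and missing entries."""
      return sum(os.stat(entry).st_size for entry in entries
                 if os.path.isfile(entry) and not os.path.islink(entry))

  def generate_size_map(entries):
      """Soft links are listed with size zero; missing entries are skipped."""
      size_map = {}
      for entry in entries:
          if os.path.islink(entry):
              size_map[entry] = 0
          elif os.path.exists(entry):
              size_map[entry] = os.stat(entry).st_size
      return size_map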

generateDigestMap(self, stripPrefix=None)


Generates a mapping from file to file digest.

Currently, the digest is an SHA hash, which should be pretty secure. In the future, this might be a different kind of hash, but we guarantee that the type of the hash will not change unless the library major version number is bumped.

Entries which do not exist on disk are ignored.

Soft links are ignored. We would end up generating a digest for the file that the soft link points at, which doesn't make any sense.

If stripPrefix is passed in, then that prefix will be stripped from each key when the map is generated. This can be useful in generating two "relative" digest maps to be compared to one another.

Parameters:
  • stripPrefix (String with any contents) - Common prefix to be stripped from paths
Returns:
Dictionary mapping file to digest value

See Also: removeUnchanged
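
For example, stripPrefix makes it possible to compare two snapshots stored under different roots (a usage sketch; the paths are placeholders):

  from CedarBackup3.filesystem import BackupFileList

  oldList = BackupFileList()
  oldList.addDirContents("/snapshots/week1/etc")
  oldMap = oldList.generateDigestMap(stripPrefix="/snapshots/week1")

  newList = BackupFileList()
  newList.addDirContents("/snapshots/week2/etc")
  newMap = newList.generateDigestMap(stripPrefix="/snapshots/week2")

  # The relative keys now line up, so changed files are easy to identify.
  changed = [path for (path, digest) in newMap.items() if oldMap.get(path) != digest]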

generateFitted(self, capacity, algorithm='worst_fit')


Generates a list of items that fit in the indicated capacity.

Sometimes, callers would like to include every item in a list, but are unable to because not all of the items fit in the space available. This method returns a copy of the list, containing only the items that fit in a given capacity. A copy is returned so that we don't lose any information if for some reason the fitted list is unsatisfactory.

The fitting is done using the functions in the knapsack module. By default, the worst-fit algorithm is used, but you can also choose first fit, best fit, or alternate fit.

Parameters:
  • capacity (Integer, in bytes) - Maximum capacity among the files in the new list
  • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
Returns:
Copy of list with total size no larger than indicated capacity
Raises:
  • ValueError - If the algorithm is invalid.
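
A usage sketch, fitting the list to a single 650 MB disc (the capacity value and path are illustrative):

  from CedarBackup3.filesystem import BackupFileList

  CD_CAPACITY = 650 * 1024 * 1024  # capacity in bytes

  backupList = BackupFileList()
  backupList.addDirContents("/home/user/data")
  fitted = backupList.generateFitted(CD_CAPACITY, algorithm="best_fit")
  print("%d of %d files fit on the disc." % (len(fitted), len(backupList)))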

generateTarfile(self, path, mode='tar', ignore=False, flat=False)


Creates a tar file containing the files in the list.

By default, this method will create uncompressed tar files. If you pass in mode 'targz', then it will create gzipped tar files, and if you pass in mode 'tarbz2', then it will create bzipped tar files.

The tar file will be created as a GNU tar archive, which enables extended file name lengths, etc. Since GNU tar is so prevalent, I've decided that the extra functionality outweighs the disadvantage of not being "standard".

If you pass in flat=True, then a "flat" archive will be created, and all of the files will be added to the root of the archive. So, the file /tmp/something/whatever.txt would be added as just whatever.txt.

By default, the whole method call fails if there are problems adding any of the files to the archive, resulting in an exception. Under these circumstances, callers are advised that they might want to call removeInvalid() and then attempt to create the tar file a second time, since the most common cause of failures is a missing file (a file that existed when the list was built, but is gone again by the time the tar file is built).

If you want to, you can pass in ignore=True, and the method will ignore errors encountered when adding individual files to the archive (but not errors opening and closing the archive itself).

We'll always attempt to remove the tarfile from disk in the event that an exception is thrown.

Parameters:
  • path (String representing a path on disk) - Path of tar file to create on disk
  • mode (One of 'tar', 'targz', or 'tarbz2') - Tar creation mode
  • ignore (Boolean) - Indicates whether to ignore certain errors.
  • flat (Boolean) - Creates "flat" archive by putting all items in root
Raises:
  • ValueError - If mode is not valid
  • ValueError - If list is empty
  • ValueError - If the path could not be encoded properly.
  • TarError - If there is a problem creating the tar file
Notes:
  • No validation is done as to whether the entries in the list are files, since only files or soft links should be in an object like this. However, to be safe, everything is explicitly added to the tar archive non-recursively so it's safe to include soft links to directories.
  • The Python tarfile module, which is used internally here, is supposed to deal properly with long filenames and links. In my testing, I have found that it appears to be able to add really long filenames to archives, but doesn't do a good job reading them back out, even out of an archive it created. Fortunately, all Cedar Backup does is add files to archives.
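
The retry pattern suggested above might look like this (a sketch; the paths are placeholders):

  import tarfile

  from CedarBackup3.filesystem import BackupFileList

  backupList = BackupFileList()
  backupList.addDirContents("/home/user/data")
  try:
      backupList.generateTarfile("/tmp/backup.tar.gz", mode="targz")
  except tarfile.TarError:
      backupList.removeInvalid()  # drop entries that vanished from disk
      backupList.generateTarfile("/tmp/backup.tar.gz", mode="targz")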

removeUnchanged(self, digestMap, captureDigest=False)


Removes unchanged entries from the list.

This method relies on a digest map as returned from generateDigestMap. For each entry in digestMap, if the entry also exists in the current list and the entry in the current list has the same digest value as in the map, the entry in the current list will be removed.

This method offers a convenient way for callers to filter unneeded entries from a list. The idea is that a caller will capture a digest map from generateDigestMap at some point in time (perhaps the beginning of the week), and will save off that map using pickle or some other method. Then, the caller could use this method sometime in the future to filter out any unchanged files based on the saved-off map.

If captureDigest is passed in as True, then digest information will be captured for the entire list before the removal step occurs, using the same rules as in generateDigestMap. The check will involve a lookup into the complete digest map.

If captureDigest is passed in as False, we will only generate a digest value for files we actually need to check, and we'll ignore any entry in the list which isn't a file that currently exists on disk.

The return value varies depending on captureDigest, as well. To preserve backwards compatibility, if captureDigest is False, then we'll just return a single value representing the number of entries removed. Otherwise, we'll return a tuple of (entries removed, digest map). The returned digest map will be in exactly the form returned by generateDigestMap.

Parameters:
  • digestMap (Map as returned from generateDigestMap.) - Dictionary mapping file name to digest value.
  • captureDigest (Boolean) - Indicates that digest information should be captured.
Returns:
Results as discussed above (format varies based on arguments)

Note: For performance reasons, this method actually ends up rebuilding the list from scratch. First, we build a temporary dictionary containing all of the items from the original list. Then, we remove items as needed from the dictionary (which is faster than the equivalent operation on a list). Finally, we replace the contents of the current list based on the keys left in the dictionary. This should be transparent to the caller.
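
The save-and-filter workflow described above might be implemented like this (a sketch; the pickle path is a placeholder):

  import pickle

  from CedarBackup3.filesystem import BackupFileList

  # Beginning of the week: capture a digest map and save it off.
  weeklyList = BackupFileList()
  weeklyList.addDirContents("/etc")
  with open("/var/tmp/digests.pickle", "wb") as f:
      pickle.dump(weeklyList.generateDigestMap(), f)

  # Later in the week: drop entries whose digests are unchanged.
  dailyList = BackupFileList()
  dailyList.addDirContents("/etc")
  with open("/var/tmp/digests.pickle", "rb") as f:
      digestMap = pickle.load(f)
  removed = dailyList.removeUnchanged(digestMap)  # a count, since captureDigest is False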

_generateDigest(path)
Static Method


Generates an SHA digest for a given file on disk.

The original code for this function used this simplistic implementation, which requires reading the entire file into memory at once in order to generate a digest value:

  sha.new(open(path).read()).hexdigest()

Not surprisingly, this isn't an optimal solution. The "Simple file hashing" Python Cookbook recipe describes how to incrementally generate a hash value by reading in chunks of data rather than reading the file all at once. The recipe relies on the update() method of the various Python hashing algorithms.

In my tests using a 110 MB file on CD, the original implementation requires 111 seconds. This implementation requires only 40-45 seconds, which is a pretty substantial speed-up.

Experience shows that reading in around 4kB (4096 bytes) at a time yields the best performance. Smaller reads are quite a bit slower, and larger reads don't make much of a difference. The 4kB number makes me a little suspicious, and I think it might be related to the size of a filesystem read at the hardware level. However, I've decided to just hardcode 4096 until I have evidence that shows it's worthwhile making the read size configurable.
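
A modern equivalent of the incremental approach might look like this (a sketch using hashlib, which replaced the old sha module; not the library's exact code):

  import hashlib

  def generate_digest(path, chunk_size=4096):
      """Incrementally generate an SHA-1 digest, reading 4 kB chunks."""
      digest = hashlib.sha1()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(chunk_size), b""):
              digest.update(chunk)
      return digest.hexdigest()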

Parameters:
  • path - Path to generate digest for.
Returns:
ASCII-safe SHA digest for the file.
Raises:
  • OSError - If the file cannot be opened.

generateSpan(self, capacity, algorithm='worst_fit')


Splits the list of items into sub-lists that fit in a given capacity.

Sometimes, callers need to split a backup file list into a set of smaller lists. For instance, you could use this to "span" the files across a set of discs.

The fitting is done using the functions in the knapsack module. By default, the worst-fit algorithm is used, but you can also choose first fit, best fit, or alternate fit.

Parameters:
  • capacity (Integer, in bytes) - Maximum capacity among the files in the new list
  • algorithm (One of "first_fit", "best_fit", "worst_fit", "alternate_fit") - Knapsack (fit) algorithm to use
Returns:
List of SpanItem objects.
Raises:
  • ValueError - If the algorithm is invalid.
  • ValueError - If it's not possible to fit some items

Note: If any of your items are larger than the capacity, then it won't be possible to find a solution. In this case, a ValueError will be raised.
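
A usage sketch, spanning the list across 650 MB discs (this assumes each returned SpanItem exposes a fileList attribute, which is not documented on this page):

  from CedarBackup3.filesystem import BackupFileList

  DISC_CAPACITY = 650 * 1024 * 1024  # capacity in bytes per disc

  backupList = BackupFileList()
  backupList.addDirContents("/home/user/photos")
  spans = backupList.generateSpan(DISC_CAPACITY, algorithm="worst_fit")
  for i, span in enumerate(spans, 1):
      print("Disc %d: %d files" % (i, len(span.fileList)))  # fileList is an assumption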

_getKnapsackTable(self, capacity=None)


Converts the list into the form needed by the knapsack algorithms.

Returns:
Dictionary mapping file name to tuple of (file path, file size).

_getKnapsackFunction(algorithm)
Static Method


Returns a reference to the function associated with an algorithm name. The algorithm name must be one of "first_fit", "best_fit", "worst_fit", or "alternate_fit".

Parameters:
  • algorithm - Name of the algorithm
Returns:
Reference to knapsack function
Raises:
  • ValueError - If the algorithm name is unknown.
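
Name-to-function dispatch like this is typically a table lookup; a sketch follows (it assumes the knapsack module exports firstFit, bestFit, worstFit, and alternateFit, names inferred from the algorithm list above):

  # The imported names are assumptions inferred from the algorithm names.
  from CedarBackup3.knapsack import alternateFit, bestFit, firstFit, worstFit

  _ALGORITHM_TABLE = {
      "first_fit": firstFit,
      "best_fit": bestFit,
      "worst_fit": worstFit,
      "alternate_fit": alternateFit,
  }

  def get_knapsack_function(algorithm):
      """Return the knapsack function associated with an algorithm name."""
      try:
          return _ALGORITHM_TABLE[algorithm]
      except KeyError:
          raise ValueError("Algorithm name '%s' is unknown." % algorithm)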