satpy.writers package¶
Submodules¶
satpy.writers.cf_writer module¶
Writer for netCDF4/CF.
Example usage¶
The CF writer saves datasets in a Scene as a CF-compliant netCDF file. Here is an example with MSG SEVIRI data in HRIT format:
>>> from satpy import Scene
>>> import glob
>>> filenames = glob.glob('data/H*201903011200*')
>>> scn = Scene(filenames=filenames, reader='seviri_l1b_hrit')
>>> scn.load(['VIS006', 'IR_108'])
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc',
exclude_attrs=['raw_metadata'])
- You can select the netCDF backend using the engine keyword argument. Default is h5netcdf.
- For datasets with an area definition you can exclude lat/lon coordinates by setting include_lonlats=False.
- By default, the dataset name is prepended to non-dimensional coordinates such as scanline timestamps. This ensures maximum consistency, i.e. the netCDF variable names are independent of the number/set of datasets to be written. If a non-dimensional coordinate is identical for all datasets, its name will not be prefixed when pretty=True is set.
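The coordinate renaming described in the last point can be sketched in plain Python. This is a simplified stand-in for the writer's internal logic (see make_alt_coords_unique() below), with coordinate values as plain lists rather than DataArrays:

```python
def make_alt_coords_unique(datasets, pretty=False):
    """Prefix non-dimensional coordinate names with the dataset name.

    `datasets` maps dataset names to {coord_name: values} dicts.
    If a coordinate is identical across all datasets and pretty=True,
    its name is left unchanged.
    """
    # Collect all values seen for each coordinate name.
    seen = {}
    for coords in datasets.values():
        for name, values in coords.items():
            seen.setdefault(name, []).append(values)

    renamed = {}
    for ds_name, coords in datasets.items():
        new_coords = {}
        for name, values in coords.items():
            identical = all(v == values for v in seen[name])
            if pretty and identical:
                new_coords[name] = values  # keep the short name
            else:
                new_coords[f"{ds_name}_{name}"] = values
        renamed[ds_name] = new_coords
    return renamed


datasets = {
    "VIS006": {"acq_time": [1, 2, 3]},
    "IR_108": {"acq_time": [1, 2, 3]},
}
print(make_alt_coords_unique(datasets))
print(make_alt_coords_unique(datasets, pretty=True))
```

With the default pretty=False every coordinate is prefixed (VIS006_acq_time, IR_108_acq_time); with pretty=True the shared acq_time keeps its name.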
Grouping¶
All datasets to be saved must have the same projection coordinates x and y. If a scene holds datasets with different grids, the CF-compliant workaround is to save the datasets to separate files. Alternatively, you can save datasets with common grids in separate netCDF groups as follows:
>>> scn.load(['VIS006', 'IR_108', 'HRV'])
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108', 'HRV'],
filename='seviri_test.nc', exclude_attrs=['raw_metadata'],
groups={'visir': ['VIS006', 'IR_108'], 'hrv': ['HRV']})
Note that the resulting file will not be fully CF compliant.
Attribute Encoding¶
In the above examples, raw metadata from the HRIT files has been excluded. If you want all attributes to be included, just remove the exclude_attrs keyword argument. By default, dict-type dataset attributes, such as the raw metadata, are encoded as a string using json. Thus, you can use json to decode them afterwards:
>>> import xarray as xr
>>> import json
>>> # Save scene to nc-file
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc')
>>> # Now read data from the nc-file
>>> ds = xr.open_dataset('seviri_test.nc')
>>> raw_mda = json.loads(ds['IR_108'].attrs['raw_metadata'])
>>> print(raw_mda['RadiometricProcessing']['Level15ImageCalibration']['CalSlope'])
[0.020865 0.0278287 0.0232411 0.00365867 0.00831811 0.03862197
0.12674432 0.10396091 0.20503568 0.22231115 0.1576069 0.0352385]
Alternatively, it is possible to flatten dict-type attributes by setting flatten_attrs=True. This is more human-readable, as it will create a separate nc-attribute for each item in every dictionary. Keys are concatenated with underscore separators. The CalSlope attribute can then be accessed as follows:
>>> scn.save_datasets(writer='cf', datasets=['VIS006', 'IR_108'], filename='seviri_test.nc',
flatten_attrs=True)
>>> ds = xr.open_dataset('seviri_test.nc')
>>> print(ds['IR_108'].attrs['raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope'])
[0.020865 0.0278287 0.0232411 0.00365867 0.00831811 0.03862197
0.12674432 0.10396091 0.20503568 0.22231115 0.1576069 0.0352385]
This is what the corresponding ncdump output would look like in this case:
$ ncdump -h seviri_test.nc
...
IR_108:raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalOffset = -1.064, ...;
IR_108:raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope = 0.021, ...;
IR_108:raw_metadata_RadiometricProcessing_MPEFCalFeedback_AbsCalCoeff = 0.021, ...;
...
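The flattening itself is a simple recursion over nested dicts; a minimal stand-alone sketch (not the writer's actual implementation):

```python
def flatten_attrs(attrs, prefix=""):
    """Recursively flatten nested dict attributes.

    Keys along the path are concatenated with underscores, e.g.
    {'a': {'b': 1}} becomes {'a_b': 1}.
    """
    flat = {}
    for key, value in attrs.items():
        name = f"{prefix}_{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_attrs(value, prefix=name))
        else:
            flat[name] = value
    return flat


raw = {"RadiometricProcessing": {"Level15ImageCalibration": {"CalSlope": [0.02, 0.03]}}}
print(flatten_attrs({"raw_metadata": raw}))
# {'raw_metadata_RadiometricProcessing_Level15ImageCalibration_CalSlope': [0.02, 0.03]}
```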
-
class
satpy.writers.cf_writer.
AttributeEncoder
(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]¶ Bases:
json.encoder.JSONEncoder
JSON encoder for dataset attributes.
Constructor for JSONEncoder, with sensible defaults.
If skipkeys is false, then it is a TypeError to attempt encoding of keys that are not str, int, float or None. If skipkeys is True, such items are simply skipped.
If ensure_ascii is true, the output is guaranteed to be str objects with all incoming non-ASCII characters escaped. If ensure_ascii is false, the output can contain non-ASCII characters.
If check_circular is true, then lists, dicts, and custom encoded objects will be checked for circular references during encoding to prevent an infinite recursion (which would cause an OverflowError). Otherwise, no such check takes place.
If allow_nan is true, then NaN, Infinity, and -Infinity will be encoded as such. This behavior is not JSON specification compliant, but is consistent with most JavaScript based encoders and decoders. Otherwise, it will be a ValueError to encode such floats.
If sort_keys is true, then the output of dictionaries will be sorted by key; this is useful for regression tests to ensure that JSON serializations can be compared on a day-to-day basis.
If indent is a non-negative integer, then JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0 will only insert newlines. None is the most compact representation.
If specified, separators should be an (item_separator, key_separator) tuple. The default is (', ', ': ') if indent is None and (',', ': ') otherwise. To get the most compact JSON representation, you should specify (',', ':') to eliminate whitespace.
If specified, default is a function that gets called for objects that can't otherwise be serialized. It should return a JSON encodable version of the object or raise a TypeError.
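The essential mechanism can be sketched with a plain JSONEncoder subclass: default() is only called for objects the base encoder cannot handle, and falling back to str() keeps the encoding from failing (the real AttributeEncoder also handles numpy types and more):

```python
import json
import datetime


class FallbackEncoder(json.JSONEncoder):
    """Sketch of an encoder that stringifies unserializable objects."""

    def default(self, obj):
        # Only invoked for objects the base JSONEncoder cannot encode.
        return str(obj)


attrs = {"start_time": datetime.datetime(2019, 3, 1, 12, 0), "calib": [1, 2]}
encoded = json.dumps(attrs, cls=FallbackEncoder)
print(encoded)
```

Here the datetime attribute is encoded via its string representation, while plain lists and numbers pass through unchanged.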
-
class
satpy.writers.cf_writer.
CFWriter
(name=None, filename=None, base_dir=None, **kwargs)[source]¶ Bases:
satpy.writers.Writer
Writer producing NetCDF/CF compatible datasets.
Initialize the writer object.
Parameters: - name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
- filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by thetrollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example:{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S.tif
- base_dir (str) – Base destination directories for all created files.
- kwargs (dict) – Additional keyword arguments to pass to the
Plugin
class.
-
static
da2cf
(dataarray, epoch='seconds since 1970-01-01 00:00:00', flatten_attrs=False, exclude_attrs=None, compression=None)[source]¶ Convert the dataarray to something CF-compatible.
Parameters: - dataarray (xr.DataArray) – The data array to be converted
- epoch (str) – Reference time for encoding of time coordinates
- flatten_attrs (bool) – If True, flatten dict-type attributes
- exclude_attrs (list) – List of dataset attributes to be excluded
-
save_dataset
(dataset, filename=None, fill_value=None, **kwargs)[source]¶ Save the dataset to a given filename.
-
save_datasets
(datasets, filename=None, groups=None, header_attrs=None, engine=None, epoch='seconds since 1970-01-01 00:00:00', flatten_attrs=False, exclude_attrs=None, include_lonlats=True, pretty=False, compression=None, **to_netcdf_kwargs)[source]¶ Save the given datasets in one netCDF file.
Note that all datasets (or, when grouping is used, all datasets within one group) must have the same projection coordinates.
Parameters: - datasets (list) – Datasets to be saved
- filename (str) – Output file
- groups (dict) – Group datasets according to the given assignment: {‘group_name’: [‘dataset1’, ‘dataset2’, …]}. Group name None corresponds to the root of the file, i.e. no group will be created. Warning: The results will not be fully CF compliant!
- header_attrs – Global attributes to be included
- engine (str) – Module to be used for writing netCDF files. Follows xarray's to_netcdf() engine choices with a preference for 'netcdf4'.
- epoch (str) – Reference time for encoding of time coordinates
- flatten_attrs (bool) – If True, flatten dict-type attributes
- exclude_attrs (list) – List of dataset attributes to be excluded
- include_lonlats (bool) – Always include latitude and longitude coordinates, even for datasets with area definition
- pretty (bool) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
- compression (dict) – Compression to use on the datasets before saving, for example {'zlib': True, 'complevel': 9}. This is in turn passed to xarray's to_netcdf method; see http://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_netcdf.html for more possibilities.
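The compression settings are applied per variable via an encoding mapping passed to to_netcdf; a minimal sketch of building such a mapping (the variable names are illustrative):

```python
def build_encoding(variable_names, compression):
    """Build a per-variable encoding dict for Dataset.to_netcdf().

    Every listed variable gets a copy of the same compression
    settings, e.g. {'zlib': True, 'complevel': 9}.
    """
    return {name: dict(compression) for name in variable_names}


encoding = build_encoding(["VIS006", "IR_108"], {"zlib": True, "complevel": 9})
print(encoding["IR_108"])
# A Dataset could then be written with:
# ds.to_netcdf("seviri_test.nc", encoding=encoding)
```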
-
update_encoding
(dataset, to_netcdf_kwargs)[source]¶ Update encoding.
Avoid _FillValue attribute being added to coordinate variables (https://github.com/pydata/xarray/issues/1865).
-
satpy.writers.cf_writer.
area2cf
(dataarray, strict=False)[source]¶ Convert an area to a CF grid mapping or lons and lats.
-
satpy.writers.cf_writer.
area2lonlat
(dataarray)[source]¶ Convert an area to longitudes and latitudes.
-
satpy.writers.cf_writer.
assert_xy_unique
(datas)[source]¶ Check that all datasets share the same projection coordinates x/y.
-
satpy.writers.cf_writer.
create_grid_mapping
(area)[source]¶ Create the grid mapping instance for area.
-
satpy.writers.cf_writer.
encode_attrs_nc
(attrs)[source]¶ Encode dataset attributes in a netcdf compatible datatype.
Parameters: attrs (dict) – Attributes to be encoded Returns: Encoded (and sorted) attributes Return type: dict
-
satpy.writers.cf_writer.
encode_nc
(obj)[source]¶ Encode the given object as a netcdf compatible datatype.
Try to find the datatype which most closely resembles the object’s nature. If that fails, encode as a string. Plain lists are encoded recursively.
-
satpy.writers.cf_writer.
get_extra_ds
(dataset)[source]¶ Get the extra datasets associated to dataset.
-
satpy.writers.cf_writer.
link_coords
(datas)[source]¶ Link datasets and coordinates.
If the coordinates attribute of a data array links to other datasets in the scene, for example coordinates='lon lat', add them as coordinates to the data array and drop that attribute. In the final call to xr.Dataset.to_netcdf() all coordinate relations will be resolved and the coordinates attributes will be set automatically.
-
satpy.writers.cf_writer.
make_alt_coords_unique
(datas, pretty=False)[source]¶ Make non-dimensional coordinates unique among all datasets.
Non-dimensional (or alternative) coordinates, such as scanline timestamps, may occur in multiple datasets with the same name and dimension but different values. In order to avoid conflicts, prepend the dataset name to the coordinate name. If a non-dimensional coordinate is identical for all datasets and pretty=True, its name will not be modified.
Since all datasets must have the same projection coordinates, this is not applied to latitude and longitude.
Parameters: - datas (dict) – Dictionary of (dataset name, dataset)
- pretty (bool) – Don’t modify coordinate names, if possible. Makes the file prettier, but possibly less consistent.
Returns: Dictionary holding the updated datasets
-
satpy.writers.cf_writer.
make_time_bounds
(start_times, end_times)[source]¶ Create time bounds for the current dataarray.
satpy.writers.geotiff module¶
GeoTIFF writer objects for creating GeoTIFF files from DataArray objects.
-
class
satpy.writers.geotiff.
GeoTIFFWriter
(dtype=None, tags=None, **kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
Writer to save GeoTIFF images.
Basic example from Scene:
>>> scn.save_datasets(writer='geotiff')
Un-enhanced float geotiff with NaN for fill values:
>>> scn.save_datasets(writer='geotiff', dtype=np.float32, enhance=False)
To add custom metadata use tags:
>>> scn.save_dataset(dataset_name, writer='geotiff',
...                  tags={'offset': 291.8, 'scale': -0.35})
For performance tips on creating geotiffs quickly and making them smaller see the faq.
Init the writer.
-
GDAL_OPTIONS
= ('tfw', 'rpb', 'rpctxt', 'interleave', 'tiled', 'blockxsize', 'blockysize', 'nbits', 'compress', 'num_threads', 'predictor', 'discard_lsb', 'sparse_ok', 'jpeg_quality', 'jpegtablesmode', 'zlevel', 'photometric', 'alpha', 'profile', 'bigtiff', 'pixeltype', 'copy_src_overviews')¶
-
save_image
(img, filename=None, dtype=None, fill_value=None, compute=True, keep_palette=False, cmap=None, tags=None, include_scale_offset=False, **kwargs)[source]¶ Save the image to the given filename in geotiff format.
Note: for faster output and reduced memory usage the rasterio library must be installed. This writer currently falls back to using gdal directly, but that will be deprecated in the future.
Parameters: - img (xarray.DataArray) – Data to save to geotiff.
- filename (str) – Filename to save the image to. Defaults to filename passed during writer creation. Unlike the creation filename keyword argument, this filename does not get formatted with data attributes.
- dtype (numpy.dtype) – Numpy data type to save the image as. Defaults to 8-bit unsigned integer (np.uint8). If the dtype argument is provided during writer creation then that will be used as the default.
- fill_value (int or float) – Value to use where data values are NaN/null. If this is specified in the writer configuration file that value will be used as the default.
- compute (bool) – Compute dask arrays and save the image immediately. If False then the return value can be passed to compute_writer_results() to do the computation. This is useful when multiple images may share input calculations where dask can benefit from not repeating them multiple times. Defaults to True in the writer by itself, but is typically passed as False by callers where calculations can be combined.
- keep_palette (bool) – Save palette/color table to geotiff. To be used with images that were palettized with the "palettize" enhancement. Setting this to True will cause the colormap of the image to be written as a "color table" in the output geotiff and the image data values will represent the index values into that color table. By default, this will use the colormap used in the "palettize" operation. See the cmap option for other options. This option defaults to False and palettized images will be converted to RGB/A.
- cmap (trollimage.colormap.Colormap or None) – Colormap to save as a color table in the output geotiff. See keep_palette for more information. Defaults to the palette of the provided img object. The colormap's range should be set to match the index range of the palette (ex. cmap.set_range(0, len(colors))).
- tags (dict) – Extra metadata to store in geotiff.
- include_scale_offset (bool) – Activate inclusion of scale and offset factors in the geotiff to allow retrieving original values from the pixel values. False by default.
-
satpy.writers.mitiff module¶
MITIFF writer objects for creating MITIFF files from Dataset objects.
-
class
satpy.writers.mitiff.
MITIFFWriter
(name=None, tags=None, **kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
-
save_dataset
(dataset, filename=None, fill_value=None, compute=True, **kwargs)[source]¶ Save the dataset to a given filename.
This method creates an enhanced image using get_enhanced_image(). The image is then passed to save_image(). See both of these functions for more details on the arguments passed to this method.
-
save_datasets
(datasets, filename=None, fill_value=None, compute=True, **kwargs)[source]¶ Save all datasets to one or more files.
-
save_image
()[source]¶ Save Image object to a given filename.
Parameters: - img (trollimage.xrimage.XRImage) – Image object to save to disk.
- filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
- compute (bool) – If True (default), compute and save the dataset. If False return either a dask:delayed object or tuple of (source, target). See the return values below for more information.
- **kwargs – Other keyword arguments to pass to this writer.
Returns: Value returned depends on compute. If compute is True then the return value is the result of computing a dask:delayed object or running dask.array.store(). If compute is False then the returned value is either a dask:delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store(). If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
satpy.writers.ninjotiff module¶
Writer for TIFF images compatible with the NinJo visualization tool (NinjoTIFFs).
NinjoTIFFs can be color images or monochromatic. For monochromatic images, the physical units and scale and offsets to retrieve the physical values are provided. Metadata is also recorded in the file.
In order to write ninjotiff files, some metadata needs to be provided to the writer. Here is an example on how to write a color image:
chn = "airmass"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="airmass.tif", writer='ninjotiff',
sat_id=6300014,
chan_id=6500015,
data_cat='GPRN',
data_source='EUMCAST',
nbits=8)
Here is an example on how to write a monochromatic image:
chn = "IR_108"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
sat_id=6300014,
chan_id=900015,
data_cat='GORN',
data_source='EUMCAST',
physic_unit='K',
nbits=8)
The metadata to provide to the writer can also be stored in a configuration file (see pyninjotiff), so that the previous example can be rewritten as:
chn = "IR_108"
ninjoRegion = load_area("areas.def", "nrEURO3km")
filenames = glob("data/*__")
global_scene = Scene(reader="hrit_msg", filenames=filenames)
global_scene.load([chn])
local_scene = global_scene.resample(ninjoRegion)
local_scene.save_dataset(chn, filename="msg.tif", writer='ninjotiff',
# ninjo product name to look for in .cfg file
ninjo_product_name="IR_108",
# custom configuration file for ninjo tiff products
# if not specified PPP_CONFIG_DIR is used as config file directory
ninjo_product_file="/config_dir/ninjotiff_products.cfg")
-
class
satpy.writers.ninjotiff.
NinjoTIFFWriter
(tags=None, **kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
Writer for NinjoTiff files.
Initialize the writer.
satpy.writers.scmi module¶
The SCMI AWIPS writer is used to create AWIPS compatible tiled NetCDF4 files. The Advanced Weather Interactive Processing System (AWIPS) is a program used by the United States National Weather Service (NWS) and others to view different forms of weather imagery. Sectorized Cloud and Moisture Imagery (SCMI) is a netCDF format accepted by AWIPS to store one image broken up into one or more "tiles". Once AWIPS is configured for specific products the SCMI NetCDF backend can be used to provide compatible products to the system. The files created by this backend are compatible with AWIPS II (AWIPS I is no longer supported).
The SCMI writer takes remapped binary image data and creates an AWIPS-compatible NetCDF4 file. The SCMI writer and the AWIPS client may need to be configured to make things appear the way the user wants in the AWIPS client. The SCMI writer can only produce files for datasets mapped to areas with specific projections:
- lcc
- geos
- merc
- stere
This is a limitation of the AWIPS client and not of the SCMI writer.
Numbered versus Lettered Grids¶
By default the SCMI writer will save tiles by number starting with ‘1’ representing the upper-left image tile. Tile numbers then increase along the column and then on to the next row.
By specifying lettered_grid as True tiles can be designated with a letter. Lettered grids or sectors are preconfigured in the scmi.yaml configuration file. The lettered tile locations are static and will not change with the data being written to them. Each lettered tile is split into a certain number of subtiles (num_subtiles), by default 2 rows by 2 columns. Lettered tiles are meant to make it easier for receiving AWIPS clients/stations to filter which tiles they receive, saving time, bandwidth, and space.
Any tiles (numbered or lettered) not containing any valid data are not created.
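The numbered-tile layout described above can be sketched as a row-major numbering (a simplified illustration, not the writer's actual code):

```python
def tile_number(row, col, tile_columns):
    """Row-major tile number: '1' is the upper-left tile.

    Numbers increase column by column within a row, then continue
    on the next row.
    """
    return row * tile_columns + col + 1


# A grid of 2 tile rows by 3 tile columns:
grid = [[tile_number(r, c, 3) for c in range(3)] for r in range(2)]
print(grid)  # [[1, 2, 3], [4, 5, 6]]
```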
-
class
satpy.writers.scmi.
AttributeHelper
(ds_info)[source]¶ Bases:
object
Helper object which wraps around a HimawariScene to provide SCMI attributes.
-
class
satpy.writers.scmi.
LetteredTileGenerator
(area_definition, extents, cell_size=(2000000, 2000000), num_subtiles=None)[source]¶
-
class
satpy.writers.scmi.
NetCDFWrapper
(filename, sector_id, ds_info, awips_info, xy_factors, tile_info, compress=False, fix_awips=False)[source]¶ Bases:
object
Object to wrap all NetCDF data-based operations into a single call.
This makes it possible to do SCMI writing with dask’s delayed da.store function.
-
class
satpy.writers.scmi.
NetCDFWriter
(filename, include_fgf=True, ds_info=None, compress=False)[source]¶ Bases:
object
Write a basic NetCDF4 file with header data mapped to global attributes, and BT/ALB/RAD variables.
FUTURE: optionally add time dimension (CF)
FUTURE: optionally add zenith and azimuth angles
-
col_dim_name
= 'x'¶
-
create_variables
(bitdepth, fill_value, scale_factor=None, add_offset=None, valid_min=None, valid_max=None)[source]¶
-
fgf_x
= None¶
-
fgf_y
= None¶
-
image_var_name
= 'data'¶
-
nc
¶
-
projection
= None¶
-
row_dim_name
= 'y'¶
-
set_global_attrs
(physical_element, awips_id, sector_id, creating_entity, total_tiles, total_pixels, tile_row, tile_column, tile_height, tile_width, creator=None)[source]¶
-
x_var_name
= 'x'¶
-
y_var_name
= 'y'¶
-
-
class
satpy.writers.scmi.
NumberedTileGenerator
(area_definition, tile_shape=None, tile_count=None)[source]¶ Bases:
object
-
class
satpy.writers.scmi.
SCMIDatasetDecisionTree
(decision_dicts, **kwargs)[source]¶ Bases:
satpy.writers.DecisionTree
-
class
satpy.writers.scmi.
SCMIWriter
(compress=False, fix_awips=False, **kwargs)[source]¶ Bases:
satpy.writers.Writer
-
enhancer
¶ Lazy loading of enhancements only if needed.
-
get_filename
(area_def, tile_info, sector_id, **kwargs)[source]¶ Create a filename where output data will be saved.
Parameters: kwargs (dict) – Attributes and other metadata to use for formatting the previously provided filename.
-
save_dataset
(dataset, **kwargs)[source]¶ Save the dataset to a given filename.
This method must be overloaded by the subclass.
Parameters: - dataset (xarray.DataArray) – Dataset to save using this writer.
- filename (str) – Optionally specify the filename to save this dataset to. If not provided then filename which can be provided to the init method will be used and formatted by dataset attributes.
- fill_value (int or float) – Replace invalid values in the dataset with this fill value if applicable to this writer.
- compute (bool) – If True (default), compute and save the dataset. If False return either a dask:delayed object or tuple of (source, target). See the return values below for more information.
- **kwargs – Other keyword arguments for this particular writer.
Returns: Value returned depends on compute. If compute is True then the return value is the result of computing a dask:delayed object or running dask.array.store(). If compute is False then the returned value is either a dask:delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store(). If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
save_datasets
(datasets, sector_id=None, source_name=None, filename=None, tile_count=(1, 1), tile_size=None, lettered_grid=False, num_subtiles=None, compute=True, **kwargs)[source]¶ Save all datasets to one or more files.
Subclasses can use this method to save all datasets to one single file or optimize the writing of individual datasets. By default this simply calls save_dataset for each dataset provided.
Parameters: - datasets (iterable) – Iterable of xarray.DataArray objects to save using this writer.
- compute (bool) – If True (default), compute all of the saves to disk. If False then the return value is either a dask:delayed object or two lists to be passed to a dask.array.store() call. See return values below for more details.
- **kwargs – Keyword arguments to pass to save_dataset. See that documentation for more details.
Returns: Value returned depends on the compute keyword argument. If compute is True the value is the result of either a dask.array.store() operation or a dask:delayed compute, typically this is None. If compute is False then the result is either a dask:delayed object that can be computed with delayed.compute() or a two element tuple of sources and targets to be passed to dask.array.store(). If targets is provided then it is the caller's responsibility to close any objects that have a "close" method.
-
classmethod
separate_init_kwargs
(kwargs)[source]¶ Help separating arguments between init and save methods.
Currently the Scene is passed one set of arguments to represent the Writer creation and saving steps. This is not preferred for Writer structure, but provides a simpler interface to users. This method splits the provided keyword arguments between those needed for initialization and those needed for the save_dataset and save_datasets method calls.
Writer subclasses should try to prefer keyword arguments for the save methods only and leave the init keyword arguments to the base classes when possible.
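The splitting pattern can be sketched as follows; the key sets here are illustrative, not the writer's actual argument lists:

```python
def separate_init_kwargs(kwargs, init_keys):
    """Split kwargs into (init_kwargs, save_kwargs) by key membership."""
    init_kwargs = {k: v for k, v in kwargs.items() if k in init_keys}
    save_kwargs = {k: v for k, v in kwargs.items() if k not in init_keys}
    return init_kwargs, save_kwargs


init, save = separate_init_kwargs(
    {"compress": True, "sector_id": "LCC", "tile_count": (4, 4)},
    init_keys={"compress", "fix_awips"},
)
print(init)  # {'compress': True}
print(save)  # {'sector_id': 'LCC', 'tile_count': (4, 4)}
```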
-
-
class
satpy.writers.scmi.
TileInfo
(tile_count, image_shape, tile_shape, tile_row_offset, tile_column_offset, tile_id, x, y, tile_slices, data_slices)¶ Bases:
tuple
Create new instance of TileInfo(tile_count, image_shape, tile_shape, tile_row_offset, tile_column_offset, tile_id, x, y, tile_slices, data_slices)
-
data_slices
¶ Alias for field number 9
-
image_shape
¶ Alias for field number 1
-
tile_column_offset
¶ Alias for field number 4
-
tile_count
¶ Alias for field number 0
-
tile_id
¶ Alias for field number 5
-
tile_row_offset
¶ Alias for field number 3
-
tile_shape
¶ Alias for field number 2
-
tile_slices
¶ Alias for field number 8
-
x
¶ Alias for field number 6
-
y
¶ Alias for field number 7
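TileInfo is a plain namedtuple, so the field "aliases" above are positional accessors. An equivalent stand-alone construction (the values are illustrative):

```python
from collections import namedtuple

TileInfo = namedtuple(
    "TileInfo",
    ("tile_count", "image_shape", "tile_shape", "tile_row_offset",
     "tile_column_offset", "tile_id", "x", "y", "tile_slices", "data_slices"),
)

info = TileInfo((4, 4), (800, 800), (200, 200), 0, 0, "T01",
                x=None, y=None, tile_slices=None, data_slices=None)
print(info.tile_id, info.tile_shape)  # T01 (200, 200)
```

Fields can be read by name or by the field numbers listed above (tile_id is field number 5, for example).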
-
satpy.writers.simple_image module¶
-
class
satpy.writers.simple_image.
PillowWriter
(**kwargs)[source]¶ Bases:
satpy.writers.ImageWriter
-
save_image
(img, filename=None, compute=True, **kwargs)[source]¶ Save Image object to a given filename.
Parameters: - img (trollimage.xrimage.XRImage) – Image object to save to disk.
- filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
- compute (bool) – If True (default), compute and save the dataset. If False return either a dask.delayed.Delayed object or tuple of (source, target). See the return values below for more information.
- **kwargs – Keyword arguments to pass to the images save method.
Returns: Value returned depends on compute. If compute is True then the return value is the result of computing a dask.delayed.Delayed object or running dask.array.store. If compute is False then the returned value is either a dask.delayed.Delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store. If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
satpy.writers.utils module¶
Writer utilities.
Module contents¶
Shared objects of the various writer classes.
For now, this includes enhancement configuration utilities.
-
class
satpy.writers.
DecisionTree
(decision_dicts, attrs, **kwargs)[source]¶ Bases:
object
The decision tree.
Init the decision tree.
-
any_key
= None¶
-
-
class
satpy.writers.
EnhancementDecisionTree
(*decision_dicts, **kwargs)[source]¶ Bases:
satpy.writers.DecisionTree
The enhancement decision tree.
Init the decision tree.
-
class
satpy.writers.
Enhancer
(ppp_config_dir=None, enhancement_config_file=None)[source]¶ Bases:
object
Helper class to get enhancement information for images.
Initialize an Enhancer instance.
Parameters: - ppp_config_dir – Points to the base configuration directory
- enhancement_config_file – The enhancement configuration to apply, False to leave as is.
-
class
satpy.writers.
ImageWriter
(name=None, filename=None, base_dir=None, enhance=None, enhancement_config=None, **kwargs)[source]¶ Bases:
satpy.writers.Writer
Base writer for image file formats.
Initialize image writer object.
Parameters: - name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
- filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the .attrs of a DataArray object may be included. Format and conversion specifiers provided by the trollsift package may also be used. Any directories in the provided pattern will be created if they do not exist. Example: {platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
- base_dir (str) – Base destination directories for all created files.
- enhance (bool or Enhancer) – Whether to automatically enhance data to be more visually useful and to fit inside the file format being saved to. By default this will use the enhancement configuration files found using the default Enhancer class. This can be set to False so that no enhancements are performed. This can also be an instance of the Enhancer class if further custom enhancement is needed.
- enhancement_config (str) – Deprecated.
- kwargs (dict) – Additional keyword arguments to pass to the
Writer
base class.
Changed in version 0.10: Deprecated enhancement_config_file and ‘enhancer’ in favor of enhance. Pass an instance of the Enhancer class to enhance instead.
-
save_dataset
(dataset, filename=None, fill_value=None, overlay=None, decorate=None, compute=True, **kwargs)[source]¶ Save the dataset to a given filename.
This method creates an enhanced image using get_enhanced_image(). The image is then passed to save_image(). See both of these functions for more details on the arguments passed to this method.
-
save_image
(img, filename=None, compute=True, **kwargs)[source]¶ Save Image object to a given filename.
Parameters: - img (trollimage.xrimage.XRImage) – Image object to save to disk.
- filename (str) – Optionally specify the filename to save this dataset to. It may include string formatting patterns that will be filled in by dataset attributes.
- compute (bool) – If True (default), compute and save the dataset. If False return either a dask:delayed object or tuple of (source, target). See the return values below for more information.
- **kwargs – Other keyword arguments to pass to this writer.
Returns: Value returned depends on compute. If compute is True then the return value is the result of computing a dask:delayed object or running dask.array.store(). If compute is False then the returned value is either a dask:delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store(). If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
class
satpy.writers.
Writer
(name=None, filename=None, base_dir=None, **kwargs)[source]¶ Bases:
satpy.plugin_base.Plugin
Base Writer class for all other writers.
A minimal writer subclass should implement the save_dataset method.
Initialize the writer object.
Parameters: - name (str) – A name for this writer for log and error messages. If this writer is configured in a YAML file its name should match the name of the YAML file. Writer names may also appear in output file attributes.
- filename (str) –
Filename to save data to. This filename can and should specify certain python string formatting fields to differentiate between data written to the files. Any attributes provided by the
.attrs
of a DataArray object may be included. Format and conversion specifiers provided by the trollsift
package may also be used. Any directories in the provided pattern will be created if they do not exist. Example: {platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif
- base_dir (str) – Base destination directory for all created files.
- kwargs (dict) – Additional keyword arguments to pass to the
Plugin
class.
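Since trollsift patterns build on Python's str.format syntax, simple fields can be previewed with the standard library alone. A minimal sketch, with hypothetical attribute values standing in for a DataArray's .attrs:

```python
from datetime import datetime

# Hypothetical attributes, similar to what a DataArray's .attrs might carry
attrs = {"platform_name": "MSG4", "sensor": "seviri",
         "name": "VIS006", "start_time": datetime(2019, 3, 1, 12, 0)}

pattern = "{platform_name}_{sensor}_{name}_{start_time:%Y%m%d_%H%M%S}.tif"
# For plain fields like these, trollsift formats exactly like str.format
print(pattern.format(**attrs))  # MSG4_seviri_VIS006_20190301_120000.tif
```

The same pattern string would be passed as the filename argument when creating the writer or calling the save methods.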
-
get_filename
(**kwargs)[source]¶ Create a filename where output data will be saved.
Parameters: kwargs (dict) – Attributes and other metadata to use for formatting the previously provided filename.
-
save_dataset
(dataset, filename=None, fill_value=None, compute=True, **kwargs)[source]¶ Save the
dataset
to a given filename.
This method must be overloaded by the subclass.
Parameters: - dataset (xarray.DataArray) – Dataset to save using this writer.
- filename (str) – Optionally specify the filename to save this dataset to. If not provided, the filename passed to the init method will be used, formatted with dataset attributes.
- fill_value (int or float) – Replace invalid values in the dataset with this fill value if applicable to this writer.
- compute (bool) – If True (default), compute and save the dataset. If False return either a dask.delayed object or tuple of (source, target). See the return values below for more information.
- **kwargs – Other keyword arguments for this particular writer.
Returns: Value returned depends on compute. If compute is True then the return value is the result of computing a dask.delayed object or running
dask.array.store()
. If compute is False then the returned value is either a dask.delayed object that can be computed using delayed.compute() or a tuple of (source, target) that should be passed to dask.array.store()
. If target is provided then the caller is responsible for calling target.close() if the target has this method.
-
save_datasets
(datasets, compute=True, **kwargs)[source]¶ Save all datasets to one or more files.
Subclasses can use this method to save all datasets to one single file or optimize the writing of individual datasets. By default this simply calls save_dataset for each dataset provided.
Parameters: - datasets (iterable) – Iterable of xarray.DataArray objects to save using this writer.
- compute (bool) – If True (default), compute all of the saves to
disk. If False then the return value is either
a dask.delayed object or two lists to
be passed to a
dask.array.store()
call. See return values below for more details. - **kwargs – Keyword arguments to pass to save_dataset. See that documentation for more details.
Returns: Value returned depends on compute keyword argument. If compute is True the value is the result of either a
dask.array.store()
operation or a dask.delayed compute, typically this is None. If compute is False then the result is either a dask.delayed object that can be computed with delayed.compute() or a two-element tuple of sources and targets to be passed to dask.array.store()
. If targets is provided then it is the caller’s responsibility to close any objects that have a “close” method.
-
classmethod
separate_init_kwargs
(kwargs)[source]¶ Help separate arguments between init and save methods.
Currently the
Scene
is passed one set of arguments to represent the Writer creation and saving steps. This is not the preferred Writer structure, but provides a simpler interface to users. This method splits the provided keyword arguments between those needed for initialization and those needed for the save_dataset
and save_datasets
method calls. Writer subclasses should prefer keyword arguments for the save methods only and leave the init keyword arguments to the base classes when possible.
-
satpy.writers.
add_decorate
(orig, fill_value=None, **decorate)[source]¶ Decorate an image with text and/or logos/images.
This call adds text/logos in the order given in the input, preserving the alignment features available in pydecorate.
An example of the decorate config:
decorate = {
    'decorate': [
        {'logo': {'logo_path': <path to a logo>, 'height': 143,
                  'bg': 'white', 'bg_opacity': 255}},
        {'text': {'txt': start_time_txt,
                  'align': {'top_bottom': 'bottom', 'left_right': 'right'},
                  'font': <path to ttf font>, 'font_size': 22,
                  'height': 30, 'bg': 'black', 'bg_opacity': 255,
                  'line': 'white'}}
    ]
}
Any number of text/logo entries can be added to the decorate list in any order, and the order of the list is kept as described above.
Note that a feature given in one element, e.g. bg (which is the background color), will also apply to the following elements unless a new value is given.
align is a special keyword telling where in the image to start adding features, top_bottom is either top or bottom and left_right is either left or right.
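As a minimal sketch, such a config is a plain dictionary handed to an image writer's save call; the text, paths, dataset and file names below are placeholders:

```python
# Placeholder values; substitute real logo/font paths and text
decorate = {
    "decorate": [
        {"logo": {"logo_path": "/path/to/logo.png", "height": 143,
                  "bg": "white", "bg_opacity": 255}},
        {"text": {"txt": "2019-03-01 12:00",
                  "align": {"top_bottom": "bottom", "left_right": "right"},
                  "height": 30, "bg": "black", "bg_opacity": 255,
                  "line": "white"}},
    ]
}
# Passed through an image writer, e.g.:
# scn.save_dataset('VIS006', filename='vis006.png', decorate=decorate)
```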
-
satpy.writers.
add_logo
(orig, dc, img, logo=None)[source]¶ Add logos or other images to an image using the pydecorate package.
All the features of pydecorate’s
add_logo
are available. See the pydecorate documentation for more info.
-
satpy.writers.
add_overlay
(orig_img, area, coast_dir, color=None, width=None, resolution=None, level_coast=None, level_borders=None, fill_value=None, grid=None, overlays=None)[source]¶ Add coastline, political borders and grid (graticules) to image.
Uses
color
for feature colors where color
is a 3-element tuple of integers between 0 and 255 representing (R, G, B).
Warning
This function currently loses the data mask (alpha band).
resolution
is chosen automatically if None (default), otherwise it should be one of:
'f' Full resolution: 0.04 km
'h' High resolution: 0.2 km
'i' Intermediate resolution: 1.0 km
'l' Low resolution: 5.0 km
'c' Crude resolution: 25 km
grid
is a dictionary with key values as documented in detail in pycoast, e.g.:
overlay={'grid': {'major_lonlat': (10, 10), 'write_text': False,
                  'outline': (224, 224, 224), 'width': 0.5}}
Here major_lonlat is plotted every 10 deg for both longitude and latitude, no labels are plotted for the grid lines, the color used for the grid lines is light gray, and the width of the graticules is 0.5 pixels.
When the grid is drawn with aggdraw, the font option is mandatory unless
write_text
is set to False:
font = aggdraw.Font('black', '/usr/share/fonts/truetype/msttcorefonts/Arial.ttf', opacity=127, size=16)
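Assuming the overlay dictionary accepted by the image writers is forwarded to this function, a hedged usage sketch (the coast_dir path is a placeholder for locally installed GSHHG data, and the dataset and file names are illustrative):

```
>>> scn.save_dataset('VIS006', filename='vis006_overlay.png',
...                  overlay={'coast_dir': '/path/to/gshhg_data/',
...                           'color': (255, 255, 0), 'resolution': 'i'})
```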
-
satpy.writers.
add_text
(orig, dc, img, text=None)[source]¶ Add text to an image using the pydecorate package.
All the features of pydecorate’s
add_text
are available. See the pydecorate documentation for more info.
-
satpy.writers.
available_writers
(as_dict=False)[source]¶ Available writers based on current configuration.
Parameters: as_dict (bool) – Optionally return writer information as a dictionary. Default: False
Returns: List of available writer names. If as_dict is True then a list of dictionaries with additional writer information is returned.
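For example, to list the writer names found in the current configuration:

```
>>> from satpy.writers import available_writers
>>> available_writers()  # e.g. ['cf', 'geotiff', ...] depending on installation
```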
-
satpy.writers.
compute_writer_results
(results)[source]¶ Compute all the given dask graphs results so that the files are saved.
Parameters: results (iterable) – Iterable of dask graphs resulting from calls to scn.save_datasets(…, compute=False)
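For example, several delayed save operations can be collected and computed together; scn is assumed to be a Scene with loaded datasets, and the writer names and filename here are illustrative:

```
>>> res1 = scn.save_datasets(writer='geotiff', compute=False)
>>> res2 = scn.save_datasets(writer='cf', filename='scene.nc', compute=False)
>>> from satpy.writers import compute_writer_results
>>> compute_writer_results([res1, res2])
```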
-
satpy.writers.
configs_for_writer
(writer=None, ppp_config_dir=None)[source]¶ Generate writer configuration files for one or more writers.
Parameters: - writer (Optional[str]) – Yield configs only for this writer
- ppp_config_dir (Optional[str]) – Additional configuration directory to search for writer configuration files.
Returns: Generator of lists of configuration files
-
satpy.writers.
get_enhanced_image
(dataset, ppp_config_dir=None, enhance=None, enhancement_config_file=None, overlay=None, decorate=None, fill_value=None)[source]¶ Get an enhanced version of dataset as an
XRImage
instance.Parameters: - dataset (xarray.DataArray) – Data to be enhanced and converted to an image.
- ppp_config_dir (str) – Root configuration directory.
- enhance (bool or Enhancer) – Whether to automatically enhance
data to be more visually useful and to fit inside the file
format being saved to. By default this will default to using
the enhancement configuration files found using the default
Enhancer
class. This can be set to False so that no enhancements are performed. This can also be an instance of the Enhancer
class if further custom enhancement is needed. - enhancement_config_file (str) – Deprecated.
- overlay (dict) – Options for image overlays. See
add_overlay()
for available options. - decorate (dict) – Options for decorating the image. See
add_decorate()
for available options. - fill_value (int or float) – Value to use when pixels are masked or
invalid. Default of None means to create an alpha channel.
See
finalize()
for more details. Only used when adding overlays or decorations. Otherwise it is up to the caller to “finalize” the image before using it, except when calling img.show()
or providing the image to a writer as these will finalize the image.
Changed in version 0.10: Deprecated enhancement_config_file and ‘enhancer’ in favor of enhance. Pass an instance of the Enhancer class to enhance instead.
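For example, assuming scn is a Scene with 'VIS006' loaded, the enhanced image can be inspected without writing it to disk:

```
>>> from satpy.writers import get_enhanced_image
>>> img = get_enhanced_image(scn['VIS006'])
>>> img.show()
```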
-
satpy.writers.
load_writer
(writer, ppp_config_dir=None, **writer_kwargs)[source]¶ Find and load the writer named writer from the available configuration files.
-
satpy.writers.
load_writer_configs
(writer_configs, ppp_config_dir, **writer_kwargs)[source]¶ Load the writer from the provided writer_configs.
-
satpy.writers.
read_writer_config
(config_files, loader=<class 'yaml.loader.UnsafeLoader'>)[source]¶ Read the writer config_files and return the info extracted.
-
satpy.writers.
split_results
(results)[source]¶ Split results.
Get sources, targets and delayed objects to separate lists from a list of results collected from (multiple) writer(s).
-
satpy.writers.
to_image
(dataset)[source]¶ Convert
dataset
into an XRImage
instance.
Convert the
dataset
into an instance of the XRImage
class. This function makes no other changes. To get an enhanced image, possibly with overlays and decoration, see get_enhanced_image()
.Parameters: dataset (xarray.DataArray) – Data to be converted to an image. Returns: Instance of XRImage
.