fits2hdf.io package

Submodules

fits2hdf.io.fitsio module

fitsio.py
FITS I/O for reading and writing to FITS files.
exception fits2hdf.io.fitsio.DeprecatedGroupsHDUWarning
    Bases: astropy.io.fits.verify.VerifyWarning

    Warning raised when a deprecated 'group HDU' is found.
fits2hdf.io.fitsio.create_column(col)
    Create an astropy.io.fits column object from an IdiColumn.

    This is a helper function that automatically computes values that are
    implied by the numpy data type and shape, but that the fits.Column
    object needs to have set explicitly. It fills in the format, dim, and
    array keywords. unit and null are left as keyword arguments; bscale,
    bzero, disp, start, and ascii are NOT supported.

    Parameters:
    - col (IdiColumn): IdiColumn object that contains the data array

    Returns:
    - fits_col (pf.Column): astropy.io.fits column
fits2hdf.io.fitsio.create_fits(hdul, verbosity=0)
    Export an HDU list to a FITS file in memory.

    Returns an in-memory HDUList; does not write to file.

    Parameters:
    - hdul (IdiHduList): IDI HDU list object to convert into a pyfits / astropy HDUList() object in memory
    - verbosity (int): verbosity level, 0 (none) to 5 (all)
fits2hdf.io.fitsio.export_fits(hdul, outfile, verbosity=0)
    Export an HDU list to a FITS file on disk.

    Parameters:
    - hdul (IdiHduList): HDU list to write to file
    - outfile (str): filename of output file
    - verbosity (int): verbosity of output, 0 (none) to 5 (all)
fits2hdf.io.fitsio.fits_format_code_lookup(numpy_dtype, numpy_shape)
    Return a FITS format code for a given numpy dtype.

    Parameters:
    - numpy_dtype (numpy.dtype): numpy dtype to look up
    - numpy_shape (tuple): shape of the data array to be converted into a FITS format

    Returns:
    - fmt_code (str): FITS format code, e.g. 8A for a character string of length 8
    - fits_dim (str or None): FITS dimension for the TDIM keyword

    FITS format codes and their sizes in bytes:

    Code  Meaning                          Bytes
    L     logical (Boolean)                1
    X     bit                              *
    B     unsigned byte                    1
    I     16-bit integer                   2
    J     32-bit integer                   4
    K     64-bit integer                   8
    A     character                        1
    E     single-precision floating point  4
    D     double-precision floating point  8
    C     single-precision complex         8
    M     double-precision complex         16
    P     array descriptor                 8
    Q     array descriptor                 16
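As an illustration of the mapping in the table above, a simplified, hypothetical version of the dtype-to-format-code lookup might look like the sketch below. This is not fits2hdf's actual implementation; it covers only the common dtypes and fixed-width strings, and the `simple_format_code` name and its TDIM handling are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch of the dtype -> FITS format-code mapping shown in
# the table above. The real fits_format_code_lookup also handles cases
# (bit fields, array descriptors) not covered here.
FITS_CODES = {
    np.dtype('bool'):       'L',
    np.dtype('uint8'):      'B',
    np.dtype('int16'):      'I',
    np.dtype('int32'):      'J',
    np.dtype('int64'):      'K',
    np.dtype('float32'):    'E',
    np.dtype('float64'):    'D',
    np.dtype('complex64'):  'C',
    np.dtype('complex128'): 'M',
}

def simple_format_code(numpy_dtype, numpy_shape):
    """Return (fmt_code, fits_dim) for a column of the given dtype/shape."""
    if numpy_dtype.kind == 'S':
        # Fixed-width byte strings map to nA, where n is the string length
        return '%iA' % numpy_dtype.itemsize, None
    code = FITS_CODES[numpy_dtype]
    # Multi-dimensional columns repeat the base code and set TDIM
    n_elem = 1
    for dim in numpy_shape[1:]:
        n_elem *= dim
    if n_elem > 1:
        fits_dim = '(' + ','.join(str(d) for d in numpy_shape[:0:-1]) + ')'
        return '%i%s' % (n_elem, code), fits_dim
    return code, None
```

For example, a column of 100 rows of `int16` maps to `'I'`, while 100 rows of 8-byte strings map to `'8A'`.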
fits2hdf.io.fitsio.numpy_dtype_lookup(numpy_dtype)
    Return the local OS datatype for a given dtype.

    Added to work around a bug in binary table writing, in which an
    unnecessary extra byteswap is applied.

    Parameters:
    - numpy_dtype (numpy.dtype): numpy datatype

    Returns:
    - numpy_local_dtype (numpy.dtype): local numpy datatype
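Converting to the "local" datatype amounts to normalising byte order. A minimal sketch of the idea, using numpy's `newbyteorder` (the `to_native_dtype` name is an assumption; this is not fits2hdf's actual code):

```python
import numpy as np

# Hypothetical sketch: map any dtype to its native-byte-order
# equivalent, which is what a "local OS datatype" lookup amounts to.
def to_native_dtype(numpy_dtype):
    """Return the native-byte-order equivalent of numpy_dtype."""
    return numpy_dtype.newbyteorder('=')
```

A big-endian `>i4` read from a FITS binary table would thus become the machine's native 32-bit integer before writing, avoiding the double byteswap.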
fits2hdf.io.fitsio.parse_fits_header(hdul)
    Parse a FITS header into something less stupid.

    Parameters:
    - hdul (HDUList): FITS HDUList from which to parse the header

    This function takes a FITS HDU object and returns:
    - header (dict): dictionary of header values; header comments are stored under [CARDNAME]_COMMENT
    - comment (list): COMMENT cards, parsed into a list (order is important)
    - history (list): HISTORY cards, also parsed into a list
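The behaviour described above can be sketched in plain Python. Here header cards are modelled as (keyword, value, comment) tuples and the `parse_cards` name is hypothetical; the real function operates on an astropy HDU:

```python
def parse_cards(cards):
    """Split (keyword, value, comment) cards into a header dict plus
    COMMENT and HISTORY lists, mirroring the behaviour described above."""
    header, comments, history = {}, [], []
    for key, value, comment in cards:
        if key == 'COMMENT':
            comments.append(value)
        elif key == 'HISTORY':
            history.append(value)
        else:
            header[key] = value
            if comment:
                # Card comments are stored under [CARDNAME]_COMMENT
                header[key + '_COMMENT'] = comment
    return header, comments, history
```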
fits2hdf.io.fitsio.read_fits(infile, verbosity=0)
    Read and load the contents of a FITS file.

    Parameters:
    - infile (str): file path of input file
    - verbosity (int): verbosity level of output, 0 (none) to 5 (all)
fits2hdf.io.fitsio.write_headers(hduobj, idiobj)
    Copy headers over from idiobj to hduobj.

    Values that refer to columns (TUNIT, TDISP) must be skipped, as these
    are necessarily generated by the table creation.

    Parameters:
    - hduobj (astropy FITS HDU, e.g. ImageHDU or BinTableHDU): FITS HDU to which to write header values
    - idiobj (IdiImageHdu, IdiTableHdu, or IdiPrimaryHdu): HDU object from which to copy headers
    - verbosity (int): level of verbosity, 0 (none) to 5 (all)
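The skipping logic can be sketched as follows. The dict interface and the `copy_headers` name are simplifications for illustration, and only the keywords named above (TUNIT, TDISP) are skipped; the real function works on HDU header objects:

```python
# Simplified sketch of the header-copying logic described above:
# copy every keyword except those that refer to table columns.
SKIP_PREFIXES = ('TUNIT', 'TDISP')

def copy_headers(idi_header, hdu_header):
    """Copy keywords from idi_header into hdu_header, skipping
    column-specific keywords that table creation regenerates."""
    for key, value in idi_header.items():
        if not key.startswith(SKIP_PREFIXES):
            hdu_header[key] = value
    return hdu_header
```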
fits2hdf.io.hdfcompress module

hdfcompress.py
Helper functions for writing bitshuffle-compressed datasets.
fits2hdf.io.hdfcompress.create_compressed(hgroup, name, data, **kwargs)
    Add a compressed dataset to a given group.

    Uses bitshuffle with LZ4 to compress the dataset.

    Parameters:
    - hgroup: h5py group in which to add the dataset
    - name: name of the dataset
    - data: data to write
    - chunks: chunk size
fits2hdf.io.hdfcompress.create_dataset(hgroup, name, data, **kwargs)
    Create a dataset from data; will attempt to compress it.

    Parameters:
    - hgroup: h5py group in which to add the dataset
    - name: name of the dataset
    - data: data to write
fits2hdf.io.hdfcompress.guess_chunk(shape)
    Guess the optimal chunk size for a given shape.

    Parameters:
    - shape: shape of the dataset

    Returns:
    - chunk guess (tuple)

    TODO: Make this better.
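One plausible heuristic for such a guess is to halve the slowest-varying axes until the chunk falls below a target element count. The sketch below illustrates that idea only; it is not fits2hdf's actual algorithm, and the `target` parameter is an assumption:

```python
# Hypothetical chunk-guessing heuristic: halve leading (slowest-varying)
# axes until the chunk holds at most `target` elements. A sketch of the
# idea, not fits2hdf's actual algorithm.
def guess_chunk(shape, target=2**16):
    chunk = list(shape)
    axis = 0
    while axis < len(chunk):
        n_elem = 1
        for dim in chunk:
            n_elem *= dim
        if n_elem <= target:
            break
        if chunk[axis] > 1:
            chunk[axis] = max(1, chunk[axis] // 2)
        else:
            axis += 1
    return tuple(chunk)
```

Keeping the fastest-varying axis whole preserves contiguous reads along that axis, which is usually the access pattern for image and table data.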
fits2hdf.io.hdfio module

hdfio.py
HDF I/O for reading and writing to HDF5 files.
fits2hdf.io.hdfio.export_hdf(idi_hdu, outfile, table_type='DATA_GROUP', **kwargs)
    Export an HDU list to an HDF5 file.

    Parameters:
    - idi_hdu (IdiHduList): HDU list to write to file
    - outfile (str): name of output file
    - **kwargs: passed to h5py; these include:
      - compression=None: apply compression (lzf, bitshuffle, gzip)
      - shuffle=False: apply the shuffle precompression filter
      - chunks=None: set chunk size
fits2hdf.io.hdfio.read_hdf(infile, mode='r+', verbosity=0)
    Read and load the contents of an HDF file.

    Parameters:
    - infile (str): file name of input file to read
    - mode (str): file read mode; defaults to 'r+'
    - verbosity (int): level of verbosity, 0 (none) to 5 (all)
fits2hdf.io.hdfio.write_headers(hduobj, idiobj, verbosity=0)
    Copy headers over from idiobj to hduobj.

    Values that refer to columns (TUNIT, TDISP) must be skipped, as these
    are necessarily generated by the table creation.

    Parameters:
    - hduobj (h5py group): HDF5 group representing the HDU
    - idiobj (IdiImageHdu, IdiTableHdu, or IdiPrimaryHdu): HDU object from which to copy headers
    - verbosity (int): level of verbosity, 0 (none) to 5 (all)