Export methods

RSK2CSV

RSK.RSK2CSV(channels: Union[str, Collection[str]] = [], profiles: Optional[Union[int, Collection[int]]] = None, direction: str = 'both', outputDir: str = '.', comment: Optional[str] = None) → None

Write one or more CSV files of logger data and metadata.

Parameters
  • channels (Union[str, Collection[str]], optional) – longName of channel(s) for the output files; if no value is given, all channels are output. Defaults to [] (all available channels).

  • profiles (Union[int, Collection[int]], optional) – profile number(s) for output files. If not specified, data will not be exported in profiles. Specify [] for all profiles. Defaults to None.

  • direction (str, optional) – cast direction of either “up”, “down”, or “both” for output files. Defaults to “both”.

  • outputDir (str, optional) – directory for output files. Defaults to “.” (current working directory).

  • comment (str, optional) – extra comments to attach to the end of the header. Defaults to None.

Outputs channel data and metadata from the RSK structure into one or more CSV files. The CSV file header contains logger metadata and station metadata (see RSK.addmetadata()). The data table starts with a row of variable names and units above each column of channel data. If the data has been parsed into profiles (and profiles is specified as an argument), one file is written for each profile and an extra column called “cast_direction” is included, containing ‘d’ or ‘u’ to indicate whether each sample belongs to the downcast or upcast, respectively.

Users can select which channels and profiles to output, the output directory, and the comment appended to the end of the header.

Example:

>>> with RSK("example.rsk") as rsk:
...     rsk.readdata()
...     rsk.computeprofiles()
...     rsk.RSK2CSV(channels=["conductivity","pressure","dissolved_o2_concentration"], outputDir="/users/decide/where", comment="My data")

Example of a CSV file created by this method:

// Creator: RBR Ltd.
// Create time: 2022-05-20T19:54:26
// Instrument model firmware and serialID: RBRmaestro 12.03 80217
// Sample period: 0.167 second
// Processing history:
//     /rsk_files/080217_20220919_1417.rsk opened using RSKtools v1.0.0.
//     Sea pressure calculated using an atmospheric pressure of 10.1325 dbar.
// Comment: My data

// timestamp(yyyy-mm-ddTHH:MM:ss.FFF),  conductivity(mS/cm),   pressure(dbar),   dissolved_o2_concentration(%)
2015-09-19T08:32:16.000,                 34.6058,           12.6400,          694.7396
2015-09-19T08:32:16.167,                 34.6085,           12.4154,          682.4502
2015-09-19T08:32:16.333,                 34.6130,           12.4157,          666.1949
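
If the dataset has been parsed into profiles, the profiles and direction arguments can be combined to write one file per cast. The following is a minimal sketch using only the parameters documented above; the input file name and output directory are placeholders.

>>> with RSK("example.rsk") as rsk:
...     rsk.readdata()
...     rsk.computeprofiles()
...     # Export every detected profile (profiles=[]) as a separate CSV containing
...     # only downcast samples; each file includes the “cast_direction” column.
...     rsk.RSK2CSV(profiles=[], direction="down", outputDir="/users/decide/where")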

RSK2RSK

RSK.RSK2RSK(outputDir: str = '.', suffix: Optional[str] = None) → str

Write the current RSK instance into a new RSK file.

Parameters
  • outputDir (str, optional) – directory for output RSK file. Defaults to “.” (current working directory).

  • suffix (str, optional) – string to append to the output RSK file name. Defaults to None, in which case the current time in the format YYYYMMDDTHHMM is used.

Returns

str – file name of output RSK file.

Writes a new RSK file containing the data and various metadata from the current RSK instance. It is designed to store post-processed data in a SQLite file that is readable by Ruskin. The new RSK file uses the “EPdesktop” format, which is the simplest Ruskin table schema. This provides a convenient way for Python users to share post-processed RBR logger data with others without resorting to CSV, MAT, or ODV files.

The tables created by this method include:

  • channels

  • data

  • dbinfo

  • deployments

  • downloads

  • epochs

  • errors

  • events

  • instruments

  • region

  • regionCast

  • regionComment

  • regionGeoData

  • regionProfile

  • schedules

Example:

>>> with RSK("example.rsk") as rsk:
...     rsk.readdata()
...     rsk.computeprofiles()
...     outputfilename = rsk.RSK2RSK(outputDir="/users/decide/where", suffix="processed")
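
As a quick sanity check (not part of the pyRSKtools API), the returned file can be opened with Python’s built-in sqlite3 module to confirm that the tables listed above were created; the sketch below assumes the returned file name resolves relative to the chosen output directory.

>>> import os, sqlite3
>>> con = sqlite3.connect(os.path.join("/users/decide/where", outputfilename))
>>> # List the tables written by RSK2RSK (channels, data, dbinfo, ...).
>>> print(sorted(row[0] for row in con.execute("SELECT name FROM sqlite_master WHERE type='table'")))
>>> con.close()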