nrgpy.read package#

Submodules#

nrgpy.read.channel_info_arrays module#

nrgpy.read.channel_info_arrays.return_array(data_file_type)[source]#

return data file header parameter array based on data_file_type

nrgpy.read.channel_info_arrays.return_sp3_ch_info()[source]#

returns array of sensor info parameters for Symphonie, PLUS, and PLUS3 txt export files

nrgpy.read.channel_info_arrays.return_spro_ch_info()[source]#

returns array of possible channel parameters for SymphoniePRO txt export files
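Taken together, the three helpers above imply a simple lookup: return_array() dispatches on data_file_type to one of the channel-info builders. A minimal self-contained sketch of that pattern (the keys "sp3"/"spro" and the array contents here are illustrative assumptions, not the library's actual values):

```python
# Illustrative sketch of a data_file_type -> header-array dispatch.
# The dispatch keys and the parameter lists are assumptions for
# demonstration only.
def return_sp3_ch_info():
    # sensor info parameters for Symphonie / PLUS / PLUS3 txt exports
    return [["Channel #", "Type", "Description", "Serial Number"]]

def return_spro_ch_info():
    # possible channel parameters for SymphoniePRO txt exports
    return [["Channel:", "Type:", "Description:", "Serial Number:"]]

def return_array(data_file_type):
    """return data file header parameter array based on data_file_type"""
    dispatch = {
        "sp3": return_sp3_ch_info,
        "spro": return_spro_ch_info,
    }
    return dispatch[data_file_type]()
```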

nrgpy.read.logr module#

class nrgpy.read.logr.LogrRead(filename: str = '', out_file: str = '', text_timestamps: bool = False, logger_local_time: bool = False, **kwargs)[source]#

Bases: object

arrange_ch_info()[source]#

creates ch_info dataframe and ch_list array

concat_txt(dat_dir: str = '', file_type: str = 'statistical', file_filter: str = '', filter2: str = '', start_date: str = '1970-01-01', end_date: str = '2150-12-31', ch_details: bool = False, output_txt: bool = False, out_file: str = '', progress_bar: bool = True, **kwargs)[source]#

Concatenates all text files in the dat_dir.

Files must match the file_filter (and filter2) arguments; note these are blank by default.

Parameters:
dat_dir : str (path-like)

directory holding txt files

file_type : str

type of export (meas, event, comm, sample, etc.)

file_filter : str

text filter for txt files, like site number, etc.

filter2 : str

secondary text filter

start_date : str

for filtering files to concat based on date "YYYY-mm-dd"

end_date : str

for filtering files to concat based on date "YYYY-mm-dd"

ch_details : bool

show additional info in ch_info dataframe

output_txt : bool

create a txt output of data df

out_file : str

filename to write the data dataframe to if output_txt = True

progress_bar : bool

show bar on concat [True] or list of files [False]

Returns:
ch_info : obj

pandas dataframe of ch_list (below) pulled out of file with logr_read.arrange_ch_info()

ch_list : list

list of channel info; can be converted to JSON with import json … json.dumps(fut.ch_info)

data : obj

pandas dataframe of all data

head : obj

lines at the top of the txt file, used when rebuilding timeshifted files

site_info : obj

pandas dataframe of site information

logger_sn : str
ipack_sn : str
logger_type : str
ipack_type : str
latitude : float
longitude : float
elevation : int
site_number : str
site_description : str
start_date : str
dat_file_names : list

list of files included in concatenation

Examples

Read files into nrgpy reader object

>>> import nrgpy
>>> reader = nrgpy.logr_read()
>>> reader.concat_txt(
        dat_dir='/path/to/dat/files/',
        file_filter='123456', # site 123456
        start_date='2020-01-01',
        end_date='2020-01-31',
    )
Time elapsed: 2 s | 33 / 33 [=============================================] 100%
Queue processed
>>> reader.logger_sn
'511'
>>> reader.ch_info
        Channel:        Description:    Offset: Scale Factor:   Serial Number:  Type:   Units:
0       1               NRG S1          0.13900         0.09350         94120000059     Anemometer      m/s
1       2               NRG S1          0.13900         0.09350         94120000058     Anemometer      m/s
2       3               NRG S1          0.13900         0.09350         94120000057     Anemometer      m/s
3       4               NRG 40C Anem    0.35000         0.76500         179500324860    Anemometer      m/s
4       5               NRG 40C Anem    0.35000         0.76500         179500324859    Anemometer      m/s
5       6               NRG S1          0.13900         0.09350         94120000056     Anemometer      m/s
6       13              NRG 200M Vane   -1.46020        147.91100       10700000125     Vane        Deg
7       14              NRG 200M Vane   -1.46020        147.91100       10700000124     Vane        Deg
8       5               NRG T60 Temp    -40.85550       44.74360        9400000705      Analog      C
9      6               NRG T60 Temp    -40.85550       44.74360        9400000xxx      Analog      C
10      7               NRG RH5X Humi   0.00000         20.00000        NaN             Analog      %RH
11      0               NRG BP60 Baro   95.27700        243.91400       NaN             Analog      hPa
12      1               NRG BP60 Baro   95.04400        244.23900       9396FT1937      Analog          hPa
format_site_data()[source]#

takes the dat header to create object data

insert_blank_header_rows(filename: str)[source]#

insert blank rows when using shift_timestamps()

ensures the resulting text file looks and feels like an original LOGR export

output_txt_file(standard: bool = True, shift_timestamps: bool = False, out_file: str = '', **kwargs)[source]#
nrgpy.read.logr.logr_read#

alias of LogrRead

nrgpy.read.logr.shift_timestamps(txt_folder: str = '', out_folder: str = '', file_filter: str = '', start_date: str = '1970-01-01', end_date: str = '2150-12-31', seconds: int = 3600)[source]#

Takes as input a folder of exported standard text files and a time to shift in seconds.

Parameters:
txt_folder : str

path to folder with txt files to shift

out_folder : str

where to put the shifted files (in subfolder by default)

file_filter : str

filter for restricting file set

start_date : str

date filter "YYYY-mm-dd"

end_date : str

date filter "YYYY-mm-dd"

seconds : int

time in seconds to shift timestamps (default 3600)

Returns:
obj

text files with shifted timestamps; new file names include the shifted timestamp.
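The shift itself amounts to re-parsing each timestamp and adding a timedelta; a minimal sketch of that core step, independent of nrgpy and assuming a "YYYY-mm-dd HH:MM:SS" timestamp format:

```python
from datetime import datetime, timedelta

def shift_timestamp(ts: str, seconds: int = 3600,
                    fmt: str = "%Y-%m-%d %H:%M:%S") -> str:
    """Shift a single timestamp string by `seconds` (assumed format)."""
    return (datetime.strptime(ts, fmt) + timedelta(seconds=seconds)).strftime(fmt)

# shift one hour forward, e.g. to move logger-local data toward UTC
shift_timestamp("2020-01-01 00:00:00", seconds=3600)  # '2020-01-01 01:00:00'
```

A negative `seconds` value shifts timestamps backward in the same way.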

nrgpy.read.spidar_txt module#

class nrgpy.read.spidar_txt.SpidarRead(filename='')[source]#

Bases: object

Reads in CSV file(s) using pandas and creates data and heights attributes.

Parameters:
data_file : str

path to single CSV or ZIP to be read

Returns:
data : obj

pandas dataframe of all available data

heights : list

list of measurement heights

Examples

Read a spidar data file into an object:

>>> import nrgpy
>>> reader = nrgpy.spidar_data_read(filename="1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-04_1.zip")
>>> reader.heights
['40', '60', '80', '90', '100', '120', '130', '160', '180', '200']
>>> reader.data
        Timestamp  pressure[mmHg]  temperature[C]  ...  dir_200_mean[Deg]  dir_200_std[Deg]  wind_measure_200_quality[%]
0   2019-07-03 23:40:00          753.55           23.68  ...             342.36             63.63                           48
1   2019-07-03 23:50:00          753.47           23.76  ...             345.70             57.59                           38
2   2019-07-04 00:00:00          753.46           23.96  ...             314.16             82.73                           20
...

Example: read a directory of spidar data files into an object:

>>> reader = nrgpy.spidar_data_read()
>>> reader.concat_txt(
        txt_dir="/path/to/spidardata/",
        file_filter="2020-01",
        progress_bar=False
    )
Adding 1/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-01_1.zip [OK]
Adding 2/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-01_2.csv [OK]
Adding 3/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-02_1.zip [OK]
Adding 4/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-03_1.zip [OK]
Adding 5/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-04_1.zip [OK]
Adding 6/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-05_1.zip [OK]
Adding 7/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-06_1.zip [OK]
Adding 8/8  ...  /home/user/spidardata/1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-07_1.zip [OK]
>>> reader.serial_number
'1922AG0070'
concat_txt(txt_dir='', output_txt=False, out_file='', file_filter='', file_filter2='', start_date='1970-01-01', end_date='2150-12-31', progress_bar=True)[source]#

concatenate files in a folder

Parameters:
txt_dir : str (path-like)

path to csv or csv.zip files

output_txt : bool

export concatenated data

out_file : str

optional, filename of text export

start_date : str

yyyy-mm-dd formatted string

end_date : str

yyyy-mm-dd formatted string

progress_bar : bool

show progress bar instead of each file being concatenated

Returns:
None

adds data dataframe to reader object
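The start_date/end_date filters select files by the date embedded in each export's filename; a rough self-contained sketch of that selection logic (filename pattern taken from the example above, matching logic assumed):

```python
import re

def in_date_range(filename: str, start_date: str = "1970-01-01",
                  end_date: str = "2150-12-31") -> bool:
    """True if the YYYY-mm-dd date embedded in `filename` is in range.

    ISO dates compare correctly as strings, so no datetime parsing
    is needed.
    """
    match = re.search(r"\d{4}-\d{2}-\d{2}", filename)
    if match is None:
        return False  # no date in the name: exclude
    return start_date <= match.group() <= end_date

files = [
    "1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-01_1.zip",
    "1922AG0070_CAG70-SPPP-LPPP_PENT_AVGWND_2019-07-04_1.zip",
]
selected = [f for f in files if in_date_range(f, "2019-07-03", "2019-07-31")]
```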

get_heights()[source]#
read_file(f)[source]#
nrgpy.read.spidar_txt.spidar_data_read#

alias of SpidarRead

nrgpy.read.sympro_txt module#

class nrgpy.read.sympro_txt.SymProTextRead(filename: str = '', out_file: str = '', text_timestamps: bool = False, **kwargs)[source]#

Bases: object

arrange_ch_info()[source]#

creates ch_info dataframe and ch_list array

calculate_soiling_ratio(method='IEC', T0=25, G0=1000, alpha=0.0004, I_clean_SC_0=0.9, I_soiled_SC_0=0.9)[source]#
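calculate_soiling_ratio's defaults (T0=25 °C, G0=1000 W/m², alpha=0.0004) point at an IEC-style temperature- and irradiance-corrected short-circuit-current comparison. A sketch of one such computation, using an assumed formula for illustration, not necessarily the method nrgpy implements:

```python
def soiling_ratio(i_soiled, i_clean, temp_c, irradiance,
                  T0=25.0, G0=1000.0, alpha=0.0004,
                  I_clean_SC_0=0.9, I_soiled_SC_0=0.9):
    """IEC-style soiling ratio sketch (assumed formula).

    Each measured short-circuit current is normalized to STC (T0, G0)
    and to its panel's reference current; the soiling ratio is the
    soiled panel's normalized current over the clean panel's.
    """
    correction = (irradiance / G0) * (1 + alpha * (temp_c - T0))
    clean_norm = i_clean / (I_clean_SC_0 * correction)
    soiled_norm = i_soiled / (I_soiled_SC_0 * correction)
    return soiled_norm / clean_norm

# identical reference panels at STC: ratio of the raw currents (~0.9)
soiling_ratio(0.81, 0.90, temp_c=25.0, irradiance=1000.0)
```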
concat_txt(txt_dir='', file_type='meas', file_filter='', filter2='', start_date='1970-01-01', end_date='2150-12-31', ch_details=False, output_txt=False, out_file='', progress_bar=True, **kwargs)[source]#

Concatenates all text files in the txt_dir.

Files must match the file_filter (and filter2) arguments; note these are blank by default.

Parameters:
txt_dir : str (path-like)

directory holding txt files

file_type : str

type of export (meas, event, comm, sample, etc.)

file_filter : str

text filter for txt files, like site number, etc.

filter2 : str

secondary text filter

start_date : str

for filtering files to concat based on date "YYYY-mm-dd"

end_date : str

for filtering files to concat based on date "YYYY-mm-dd"

ch_details : bool

show additional info in ch_info dataframe

output_txt : bool

create a txt output of data df

out_file : str

filename to write the data dataframe to if output_txt = True

progress_bar : bool

show bar on concat [True] or list of files [False]

Returns:
ch_info : obj

pandas dataframe of ch_list (below) pulled out of file with sympro_txt_read.arrange_ch_info()

ch_list : list

list of channel info; can be converted to JSON with import json … json.dumps(fut.ch_info)

data : obj

pandas dataframe of all data

head : obj

lines at the top of the txt file, used when rebuilding timeshifted files

site_info : obj

pandas dataframe of site information

logger_sn : str
ipack_sn : str
logger_type : str
ipack_type : str
latitude : float
longitude : float
elevation : int
site_number : str
site_description : str
start_date : str
txt_file_names : list

list of files included in concatenation

Examples

Read files into nrgpy reader object

>>> import nrgpy
>>> reader = nrgpy.SymProTextRead()
>>> reader.concat_txt(
        txt_dir='/path/to/txt/files/',
        file_filter='123456', # site 123456
        start_date='2020-01-01',
        end_date='2020-01-31',
    )
Time elapsed: 2 s | 33 / 33 [=============================================] 100%
Queue processed
>>> reader.logger_sn
'820600019'
>>> reader.ch_info
        Bearing:  Channel:  Description:   Effective Date:      Height:  Offset:    Scale Factor:  Serial Number:  Type:       Units:
0       50.00     1         NRG S1         2020-01-31 00:00:00  33.00    0.13900    0.09350        94120000059     Anemometer  m/s
1       230.00    2         NRG S1         2020-01-31 00:00:00  0.00     0.13900    0.09350        94120000058     Anemometer  m/s
2       50.00     3         NRG S1         2020-01-31 00:00:00  22.00    0.13900    0.09350        94120000057     Anemometer  m/s
3       230.00    4         NRG 40C Anem   2020-01-31 00:00:00  22.00    0.35000    0.76500        179500324860    Anemometer  m/s
4       50.00     5         NRG 40C Anem   2020-01-31 00:00:00  12.00    0.35000    0.76500        179500324859    Anemometer  m/s
5       230.00    6         NRG S1         2020-01-31 00:00:00  12.00    0.13900    0.09350        94120000056     Anemometer  m/s
6       320.00    13        NRG 200M Vane  2020-01-31 00:00:00  32.00    -1.46020   147.91100      10700000125     Vane        Deg
7       320.00    14        NRG 200M Vane  2020-01-31 00:00:00  21.00    -1.46020   147.91100      10700000124     Vane        Deg
8       0.00      15        NRG T60 Temp   2020-01-31 00:00:00  34.00    -40.85550  44.74360       9400000705      Analog      C
9       0.00      16        NRG T60 Temp   2020-01-31 00:00:00  2.00     -40.85550  44.74360       9400000xxx      Analog      C
10      0.00      17        NRG RH5X Humi  2020-01-31 00:00:00  0.00     0.00000    20.00000       NaN             Analog      %RH
11      0.00      20        NRG BP60 Baro  2020-01-31 00:00:00  0.00     495.27700  243.91400      NaN             Analog      hPa
12      0.00      21        NRG BP60 Baro  2020-01-31 00:00:00  2.00     495.04400  244.23900      9396FT1937      Analog      hPa

format_data_for_epe()[source]#
format_site_data()[source]#

takes the txt header to create object data

insert_blank_header_rows(filename)[source]#

insert blank rows when using shift_timestamps()

ensures the resulting text file looks and feels like an original SymPRO Desktop export

make_header_for_epe()[source]#
output_txt_file(epe=False, soiling=False, standard=True, shift_timestamps=False, out_file='', **kwargs)[source]#
select_channels_for_reformat(epe=False, soiling=False)[source]#

determines which of the channel headers match those required for post-processing, for either:

  1. EPE formatting

  2. soiling ratio calculation

Note that this formatting requires the channel headers to be full (requires a Local export of text files, as of 0.1.8).

nrgpy.read.sympro_txt.shift_timestamps(txt_folder: str = '', out_folder: str = '', file_filter: str = '', start_date: str = '1970-01-01', end_date: str = '2150-12-31', seconds: int = 3600)[source]#

Takes as input a folder of exported standard text files and a time to shift in seconds.

Parameters:
txt_folder : str

path to folder with txt files to shift

out_folder : str

where to put the shifted files (in subfolder by default)

file_filter : str

filter for restricting file set

start_date : str

date filter "YYYY-mm-dd"

end_date : str

date filter "YYYY-mm-dd"

seconds : int

time in seconds to shift timestamps (default 3600)

Returns:
obj

text files with shifted timestamps; new file names include the shifted timestamp.

nrgpy.read.sympro_txt.sympro_txt_read#

alias of SymProTextRead

nrgpy.read.txt_utils module#

nrgpy.read.txt_utils.format_sympro_site_data(reader)[source]#

adds formatted site dataframe to reader object

class nrgpy.read.txt_utils.read_text_data(filename='', data_type='sp3', txt_dir='', file_filter='', filter2='', file_ext='', sep='\t')[source]#

Bases: object

class for handling known csv-style text data files with header information

Parameters:
filename : str, optional

perform a single file read (takes precedence over txt_dir)

data_type : str

specify the instrument that the data file came from

sep : str

csv separator (default: '\t')

txt_dir : str (path-like)

folder path of text files to read and concatenate

file_filter : str, optional

use with txt_dir to filter on a subset of files

file_ext : str, optional

secondary file filter

arrange_ch_info()[source]#

generates list and dataframe of channel information

concat(output_txt=False, out_file='', file_filter='', filter2='', progress_bar=True)[source]#

combine exported rwd files (in txt format)

Parameters:
output_txt : bool

set to True to save a concatenated text file

out_file : str

filepath, absolute or relative

file_filter : str
filter2 : str
progress_bar : bool
format_rwd_site_data()[source]#

adds formatted site dataframe to reader object

get_data(_file)[source]#

create dataframe of tabulated data

get_head(_file)[source]#

get the first lines of the file

excluding those without tabs up to the self.skip_rows line

get_site_info(_file)[source]#

create dataframe of site info

Module contents#