wind101net / atmo

Wind Data Logging software for Anemometers, designed to provide an accurate and affordable plug-&-play solution for wind-site evaluation and wind-generator performance monitoring. Compatible with Atmo Anemometer Wind Sensors and the Atmo Wind Data Logger. Features live anemometer wind data plotting, a wind speed database, automatic Weibull analysis, and live web streaming of anemometer data. Atmo v2.0 will be available for download at http://wind101.net
http://www.baranidesign.com/atmo.html

For Testing Plotting Accuracy: Create an applet/program to convert Excel exported .CSV files into DAQ SD format .DAT files #12

Closed baranidesign closed 12 years ago

baranidesign commented 13 years ago

Convert a .CSV file into the SD card .DAT hexadecimal raw data file format.

SD Card Data File Format: Each file ATTEMPTS to end at time 23:59:59, at which point a new file is created for the next day (each day has its own data file). Files will usually contain chunks of data which start just before the end of the day but may continue into the next day. The next file created will begin at a time shortly after the beginning of the new day, as can be seen in the first valid time stamp within the file.

Each file is broken up into fixed-size chunks of data. Each chunk within the file is broken into a header and a fixed number of fixed-size rows. Each row contains the sensor readings and no time stamp information. Time stamp information is only recorded for the first row in each chunk and is stored in the chunk header. To determine the time stamp of a row, add the time elapsed for that row to the chunk's time stamp, which should be one second per row. For example, to find the time stamp of the 3rd row (index 2), add 2 seconds to the time stamp found in the chunk header. Two consecutive chunks may overlap or have a gap between them, but the rows within a chunk cannot have a gap. Currently, if the time is altered on the device, the currently active fixed-size chunk is discarded and not written to the DAQ storage.
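The timestamp rule above can be sketched in Python (the `row_timestamp` helper is hypothetical, assuming exactly one second per row as described):

```python
from datetime import datetime, timedelta

def row_timestamp(chunk_start: datetime, row_index: int) -> datetime:
    """Rows carry no timestamp of their own; row N of a chunk is
    chunk_start + N seconds (zero-based index, one second per row)."""
    return chunk_start + timedelta(seconds=row_index)

# Example from the text: the 3rd row (index 2) is 2 seconds after the header.
# A chunk starting just before midnight naturally spills into the next day.
start = datetime(2012, 5, 1, 23, 59, 58)
print(row_timestamp(start, 2))  # 2012-05-02 00:00:00
```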

The file format described:

Each file contains some number of 'chunks'.

Each 'chunk' contains an 8 byte HEADER followed by up to 256 records or 256x7 bytes of SENSOR DATA:

8 byte HEADER:

byte1 - year (most significant byte)
byte2 - year (least significant byte) (0x00 - 0x99 to denote 2000 to 2099)
byte3 - month (0x01 - 0x12 to denote 1 to 12)
byte4 - day (0x01 - 0x31 to denote 1 to 31)
byte5 - hour (0x00 - 0x23 to denote hours in 24 hour mode)
byte6 - minute (0x00 - 0x59 to denote 0 to 59)
byte7 - second (0x00 - 0x59 to denote 0 to 59)
byte8 - 0xa5 (spare byte, 0xa5 for lack of anything else, and it is a good pattern)
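For a converter, the header could be built as below. The ranges quoted (e.g. 0x12 for month 12) imply BCD encoding of each field; the exact split of the year across two bytes is an assumption here (0x20, 0xYY for 20YY), and `bcd`/`pack_header` are hypothetical helper names:

```python
def bcd(n: int) -> int:
    """Encode a 0-99 value as binary-coded decimal (e.g. 59 -> 0x59)."""
    return ((n // 10) << 4) | (n % 10)

def pack_header(year: int, month: int, day: int,
                hour: int, minute: int, second: int) -> bytes:
    """Build the 8-byte chunk header described above. BCD per field is
    inferred from the quoted ranges; the year split is an assumption."""
    return bytes([
        bcd(year // 100),  # byte1: year, most significant (assumed 0x20)
        bcd(year % 100),   # byte2: year, least significant (0x00-0x99)
        bcd(month),        # byte3: month (0x01-0x12)
        bcd(day),          # byte4: day (0x01-0x31)
        bcd(hour),         # byte5: hour, 24-hour mode (0x00-0x23)
        bcd(minute),       # byte6: minute (0x00-0x59)
        bcd(second),       # byte7: second (0x00-0x59)
        0xA5,              # byte8: spare/marker byte
    ])

print(pack_header(2012, 5, 1, 23, 59, 58).hex())  # 20120501235958a5
```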

7 byte SENSOR DATA FORMAT: each 7-byte record represents 1 sec of data from 1 Anemometer.

| Byte | Encoding | Memory usage | Resolution & range |
|------|----------|--------------|--------------------|
| byte1 | (windspeed x 100) >> 5 | High 8 bits of wind speed | |
| byte2 | (windspeed x 100) << 3 \| direction >> 6 | Low 5 bits of wind speed, high 3 bits of wind direction | Speed: 0.00 to 81.91 with 4 significant digits |
| byte3 | direction << 6 \| (((temp+40)*10) >> 8) | Low 6 bits of wind direction, high 2 bits of temperature | Dir: integral values between 0 and 359 (511) inclusive |
| byte4 | ((temp+40)*10) >> 1 | Mid 8 bits of temperature | Temp: values between -40 and 62.3 with 3 significant digits |
| byte5 | (((temp+40)*10) & 0x01) << 7 \| humidity | Low bit of temperature, all 7 bits of humidity | Hum: integral values between 0 and 100 (127) inclusive |
| byte6 | (pressure / 2) high byte | High 8 bits of pressure | Press: integral values between 0 and 131070 inclusive, even values only |
| byte7 | (pressure / 2) low byte | Low 8 bits of pressure | |
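A converter could pack and unpack records along these lines. Note this is a sketch under assumptions, not the firmware's exact code: `pack_record`/`unpack_record` are hypothetical names, and a couple of shift amounts are adjusted so the fields round-trip (e.g. "direction << 6" for byte3 is treated as the low 6 direction bits shifted into the top of the byte, and the temperature split is taken as high 2 / mid 8 / low 1 bits):

```python
def pack_record(speed, direction, temp, humidity, pressure) -> bytes:
    """Pack one 1-second sample into the 7-byte record layout above."""
    s = round(speed * 100)       # 13 bits: 0.00-81.91 m/s scaled by 100
    d = direction                # 9 bits: 0-511 (valid 0-359 degrees)
    t = round((temp + 40) * 10)  # temperature offset by 40, scaled by 10
    h = humidity                 # 7 bits: 0-127 (valid 0-100 %)
    p = pressure // 2            # 16 bits: even pressures 0-131070
    return bytes([
        (s >> 5) & 0xFF,                        # byte1: high 8 bits of speed
        ((s << 3) & 0xF8) | ((d >> 6) & 0x07),  # byte2: low 5 speed | high 3 dir
        ((d & 0x3F) << 2) | ((t >> 9) & 0x03),  # byte3: low 6 dir | high 2 temp
        (t >> 1) & 0xFF,                        # byte4: mid 8 bits of temp
        ((t & 0x01) << 7) | (h & 0x7F),         # byte5: low temp bit | humidity
        (p >> 8) & 0xFF,                        # byte6: pressure high byte
        p & 0xFF,                               # byte7: pressure low byte
    ])

def unpack_record(b: bytes):
    """Inverse of pack_record: recover the physical values from 7 bytes."""
    s = (b[0] << 5) | (b[1] >> 3)
    d = ((b[1] & 0x07) << 6) | (b[2] >> 2)
    t = ((b[2] & 0x03) << 9) | (b[3] << 1) | (b[4] >> 7)
    h = b[4] & 0x7F
    p = ((b[5] << 8) | b[6]) * 2
    return s / 100, d, t / 10 - 40, h, p

# Round-trip check, the core of the plotting-accuracy test this issue asks for:
rec = pack_record(12.34, 270, 21.5, 55, 101300)
print(unpack_record(rec))  # (12.34, 270, 21.5, 55, 101300)
```

A round-trip like this (CSV values in, packed bytes, values back out) is exactly what lets the proposed applet verify that Atmo exports identical data.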

The data will be broken up into multiple files; the file names will be in the format <sensorId><index>.DAT where sensorId = [A,B,C,D] and index = [YY][MM][DD]
Each file will be composed of chunks of data; the exact maximum number of chunks per file cannot be determined until an implementation can be tested. Each "chunk" in a file will be made up of n+1 7-byte records, the first of which will contain a time stamp; each of the remaining n records will be sensor data with a time stamp of the first record's time stamp plus its zero-based position in the chunk. The last chunk in a file may have fewer than n+1 records but must contain at least two records (one header and one value record). All other chunks must be composed of n+1 records. The value of n will be determined through testing but will probably be around 255 or 240.
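Under the naming scheme above, a file name could be generated as follows (the `dat_filename` helper and the plain concatenation of sensorId and index are assumptions; the spec only gives the two components):

```python
from datetime import date

def dat_filename(sensor_id: str, day: date) -> str:
    """Name one day's data file: sensorId in [A,B,C,D] followed by
    the YYMMDD index, with a .DAT extension (concatenation assumed)."""
    assert sensor_id in ("A", "B", "C", "D")
    return f"{sensor_id}{day:%y%m%d}.DAT"

print(dat_filename("A", date(2012, 5, 1)))  # A120501.DAT
```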

Each FILE WILL BE:

timestamp + checksum (4 bytes + ? bytes)
data (7 bytes) = (3-bit data/timestamp identifier) + 53 bits of data, 7 bytes total per record
data (7 bytes)
data (7 bytes)
...
timestamp + checksum
data
data

baranidesign commented 13 years ago

Aaron, now that we have had this slew of bugs in the Beta release, I think it would be a great idea to create this script/program to verify that no data-handling issues exist: we should be able to input predefined data and have Atmo export identical data. Actually, I believe this is a critical step to finishing Atmo 2.0.