Change: Data Acquisition

From HiveTool
Revision as of 17:13, 19 September 2015 by Paul

Currently, cron runs a shell script, hive.sh, every 5 minutes that reads the sensors. Some sensors are read once. The program that reads the HX711, however, reads it 64 times, averages the readings, discards the outliers more than 5% from that average, and then averages the remaining readings.
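The HX711 filtering scheme above can be sketched as follows. This is an illustrative implementation, not the actual program's code: the function name and the fallback to the raw mean when every reading is rejected are assumptions.

```c
/* Sketch of the HX711 filtered-average scheme: average all samples,
 * discard readings more than 5% from that average, then average
 * what remains.  Illustrative only; not the actual HX711 program. */
#include <math.h>

#define NSAMPLES  64
#define TOLERANCE 0.05   /* discard readings >5% from the mean */

double filtered_average(const double *samples, int n, double tol)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    double mean = sum / n;

    double keep_sum = 0.0;
    int kept = 0;
    for (int i = 0; i < n; i++) {
        /* keep only readings within tol (5%) of the first-pass mean */
        if (fabs(samples[i] - mean) <= tol * fabs(mean)) {
            keep_sum += samples[i];
            kept++;
        }
    }
    /* Assumed fallback: if every reading was an outlier, return the
     * raw mean rather than dividing by zero. */
    return kept ? keep_sum / kept : mean;
}
```

Note that a single large glitch shifts the first-pass mean, so the 5% band works best when the raw readings are tightly clustered, as HX711 counts from a stable load normally are.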

Problems with the current approach include difficulty filtering noisy sensors and up to 5 minutes of latency in detecting anomalies such as swarms.


It is proposed that a daemon read all the sensors every second: 60 samples a minute, or 300 in a five minute interval. These 300 readings (for each sensor) would be stored in a circular FIFO buffer in shared memory. (Reading every second is probably not realistic with the sensors we are using. Reading every 5 seconds, for 12 samples a minute or 60 every 5 minutes, would be acceptable, and a 2 second read interval may be possible.)
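A minimal sketch of the proposed per-sensor buffer, assuming the 300-sample capacity (5 minutes at 1 sample per second). The struct and function names are hypothetical; in the daemon the struct would live in shared memory (e.g. via shm_open/mmap) so other processes could read it, but the ring logic itself is the same:

```c
/* Hypothetical per-sensor circular FIFO buffer.  In the daemon this
 * struct would be placed in a shared memory segment; here it is a
 * plain struct so the ring logic can be shown on its own. */
#include <string.h>

#define RING_CAPACITY 300   /* 5 minutes at 1 sample per second */

typedef struct {
    double samples[RING_CAPACITY];
    int head;    /* index of the next slot to write */
    int count;   /* number of valid samples, up to RING_CAPACITY */
} ring_t;

void ring_init(ring_t *r)
{
    memset(r, 0, sizeof *r);
}

/* Append a sample, overwriting the oldest one once the buffer is full. */
void ring_push(ring_t *r, double value)
{
    r->samples[r->head] = value;
    r->head = (r->head + 1) % RING_CAPACITY;
    if (r->count < RING_CAPACITY)
        r->count++;
}

/* Most recent sample; returns 0.0 if the buffer is empty. */
double ring_last(const ring_t *r)
{
    if (r->count == 0)
        return 0.0;
    return r->samples[(r->head + RING_CAPACITY - 1) % RING_CAPACITY];
}
```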

Methods would be provided to get the average, the last reading, the direction and rate of change, the variance, and noise figures from the data in the buffer. A filtered average would be calculated using the HX711 program's method of throwing away the outliers. Other noise filters could be implemented.
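The accessor methods could look something like the sketch below, computed over a window of buffered samples (shown on a plain array here; the daemon would read from the shared-memory buffer). The function names are illustrative, and using a least-squares slope for the rate of change is an assumption, not a decided design:

```c
/* Illustrative accessor methods over a window of buffered samples. */
#include <math.h>

double window_average(const double *s, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += s[i];
    return n ? sum / n : 0.0;
}

/* Sample variance (n - 1 denominator), usable as a noise figure. */
double window_variance(const double *s, int n)
{
    if (n < 2)
        return 0.0;
    double mean = window_average(s, n);
    double ss = 0.0;
    for (int i = 0; i < n; i++)
        ss += (s[i] - mean) * (s[i] - mean);
    return ss / (n - 1);
}

/* Rate of change in units per sample: least-squares slope over the
 * window.  Its sign gives the direction of change. */
double window_slope(const double *s, int n)
{
    if (n < 2)
        return 0.0;
    double tbar = (n - 1) / 2.0;
    double ybar = window_average(s, n);
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += (i - tbar) * (s[i] - ybar);
        den += (i - tbar) * (i - tbar);
    }
    return num / den;
}
```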

The buffer would be monitored for anomalies (e.g. a sudden drop in weight indicating a swarm). When an anomaly is detected, the contents of the buffer would be dumped and a "hyper logging" mode started, in which every sample is logged until the event is over.
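One possible swarm check, sketched under stated assumptions: compare the average of the most recent few samples against the average of the older part of the window, and flag an anomaly when the weight has fallen by more than a threshold. The function name, window split, and threshold are all hypothetical choices for illustration:

```c
/* Hypothetical anomaly check: returns 1 if the mean of the last
 * `recent` samples is more than `threshold` (in weight units) below
 * the mean of the samples before them, else 0.  The daemon would
 * call this each time a new sample lands in the buffer. */
int weight_drop_detected(const double *s, int n, int recent, double threshold)
{
    if (n <= recent || recent <= 0)
        return 0;
    double old_sum = 0.0, new_sum = 0.0;
    for (int i = 0; i < n - recent; i++)
        old_sum += s[i];
    for (int i = n - recent; i < n; i++)
        new_sum += s[i];
    double old_mean = old_sum / (n - recent);
    double new_mean = new_sum / recent;
    return (old_mean - new_mean) > threshold;
}
```

On detection, the daemon would dump the buffer and switch to logging every sample until the readings stabilize again.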

This would preserve a detailed record of sensor changes, and of other data such as bee counts, from 5 minutes before the event, during it, and for 5 minutes after. Audio and video streams would be similarly buffered and dumped.