Change: Data Acquisition

From HiveTool
Revision as of 17:28, 22 September 2015 by Paul (talk | contribs)

Background: Every 5 minutes cron runs a shell script, hive.sh, that calls other shell scripts to read the sensors. The sensors are usually read by a short C program. Some sensors are read once. The program that reads the HX711, on the other hand, reads it 64 times, averages the readings, throws away the outliers more than 5% from the average, and then averages the remaining readings again.
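The HX711 filtering described above can be sketched in C as follows. The function name and the way the 5% tolerance is applied (relative to the first-pass average) are illustrative assumptions, not the actual HX711 reader source:

```c
#include <stdio.h>

/* Two-pass filtered average, as described for the HX711 reader:
 * average all samples, discard those more than 5% from that
 * average, then average the survivors again.
 * filtered_average() is a hypothetical name; the real reader
 * takes 64 raw ADC samples. */
double filtered_average(const double *samples, int n)
{
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    double mean = sum / n;          /* first-pass average */

    double kept_sum = 0.0;
    int kept = 0;
    for (int i = 0; i < n; i++) {
        double dev = samples[i] - mean;
        if (dev < 0)
            dev = -dev;
        if (dev <= 0.05 * mean) {   /* keep samples within 5% of the average */
            kept_sum += samples[i];
            kept++;
        }
    }
    /* If everything was rejected, fall back to the first-pass average. */
    return kept ? kept_sum / kept : mean;
}
```

With 64 load-cell samples, a single spike (a bee landing on the scale, electrical noise) is rejected by the second pass instead of skewing the logged weight.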

Problems with the current approach: It is difficult to filter noisy sensors, and there is up to 5 minutes of latency in detecting anomalies such as swarms.

Proposed change: It is proposed that a daemon read all the sensors and store the readings in a circular FIFO buffer in shared memory. Slow-changing signals could be read once every 10 seconds (30 per 5-minute logging interval). Fast-changing signals could be read once a second (300 per 5-minute logging interval). Methods would be provided to get the average, last value, direction and rate of change, variance, and noise figures from the data in the buffer. A filtered average would be calculated using the HX711 program's method of throwing away the outliers. Other noise filters could be implemented.
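A minimal sketch of such a buffer and its access methods is below. The struct and function names are assumptions for illustration; a real implementation would place the struct in shared memory (e.g. via POSIX shm_open()/mmap()) and add locking so the daemon and readers don't race:

```c
#include <stddef.h>

/* 300 slots covers a 5-minute window at 1 Hz (fast signals);
 * slow signals sampled every 10 s would fill only 30 of them. */
#define RING_CAP 300

typedef struct {
    double data[RING_CAP];
    size_t head;   /* index of next write */
    size_t count;  /* valid samples, up to RING_CAP */
} ring_t;

void ring_push(ring_t *r, double v)
{
    r->data[r->head] = v;
    r->head = (r->head + 1) % RING_CAP;
    if (r->count < RING_CAP)
        r->count++;
}

/* Most recent sample. */
double ring_last(const ring_t *r)
{
    return r->data[(r->head + RING_CAP - 1) % RING_CAP];
}

double ring_average(const ring_t *r)
{
    double sum = 0.0;
    for (size_t i = 0; i < r->count; i++)
        sum += r->data[i];
    return r->count ? sum / r->count : 0.0;
}

/* Sample variance over the buffered window, as a noise figure. */
double ring_variance(const ring_t *r)
{
    if (r->count < 2)
        return 0.0;
    double mean = ring_average(r), ss = 0.0;
    for (size_t i = 0; i < r->count; i++) {
        double d = r->data[i] - mean;
        ss += d * d;
    }
    return ss / (r->count - 1);
}

/* Rate of change per sample interval: newest minus oldest,
 * divided by the number of intervals spanned.  Its sign gives
 * the direction of change. */
double ring_rate(const ring_t *r)
{
    if (r->count < 2)
        return 0.0;
    size_t oldest = (r->head + RING_CAP - r->count) % RING_CAP;
    return (ring_last(r) - r->data[oldest]) / (double)(r->count - 1);
}
```

One buffer instance per sensor channel keeps the 5-minute logger a simple consumer: at each interval it calls the accessor it wants (filtered average for weight, last value for rain, variance as a noise check) without re-reading hardware.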

The buffer would be monitored for anomalies (e.g. a sudden drop in weight indicating a swarm). When an anomaly is detected, the contents of the buffer would be dumped and a "hyper logging" mode started, in which every sample is logged until the event is over. This would preserve a detailed record of sensor changes, and of other data such as bee counts, for the 5 minutes before, during, and for 5 minutes after the event. Audio and video streams would be similarly buffered and dumped.
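The trigger logic might look like the sketch below. The threshold value, state struct, and function names are made up for illustration; in the full design, dump_buffer() would write out the buffered 5 minutes and hyper logging would continue for 5 minutes after the event ends before reverting to normal mode:

```c
#include <stdbool.h>

/* Hypothetical swarm threshold: a drop of more than 1 kg between the
 * windowed average and the latest sample starts hyper logging. */
#define SWARM_DROP_KG 1.0

typedef struct {
    bool hyper_logging;
} logger_state_t;

/* Called once per fast sample with the buffer's current windowed
 * average and the newest reading. */
void check_for_anomaly(logger_state_t *s, double window_avg, double latest)
{
    double drop = window_avg - latest;

    if (!s->hyper_logging && drop > SWARM_DROP_KG) {
        s->hyper_logging = true;
        /* dump_buffer(): write the buffered pre-event samples, then
         * log every sample while the event is in progress */
    } else if (s->hyper_logging && drop <= SWARM_DROP_KG) {
        /* event over; the full design would keep logging for another
         * 5 minutes before clearing this flag */
        s->hyper_logging = false;
    }
}
```

Because the circular buffer already holds the previous 5 minutes, the dump captures the lead-up to the event even though detection happens only when the threshold is crossed.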