Change: Data Acquisition

From HiveTool

Revision as of 18:25, 21 September 2015

Background: Every 5 minutes cron runs a shell script, hive.sh, that calls other shell scripts to read the sensors. The sensors are usually read by a short C program. Some sensors are read once. The program that reads the HX711, on the other hand, reads it 64 times, averages the readings, throws away the outliers more than 5% from the average, and then averages the remaining readings again.
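The HX711 averaging scheme described above can be sketched as follows. This is a minimal illustration of the two-pass average, not the actual hivetool source; the function names are made up here.

```c
#include <stddef.h>
#include <math.h>

/* First-pass arithmetic mean of n readings. */
static double mean(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return n ? s / (double)n : 0.0;
}

/* Average all readings, discard outliers more than 5% from that
 * average, then average the remaining readings again. */
double filtered_average(const double *readings, size_t n)
{
    double avg = mean(readings, n);
    double sum = 0.0;
    size_t kept = 0;

    for (size_t i = 0; i < n; i++) {
        /* keep only readings within 5% of the first-pass average */
        if (fabs(readings[i] - avg) <= 0.05 * fabs(avg)) {
            sum += readings[i];
            kept++;
        }
    }
    return kept ? sum / (double)kept : avg;
}
```

With 64 raw HX711 readings, a single stuck or glitched conversion is far enough from the first-pass mean to be dropped before the second pass.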

Problems with the current approach: Difficulty filtering noisy sensors, and up to 5 minutes of latency in detecting anomalies such as swarms.

Proposed change: A daemon would read all the sensors every second: 60 samples a minute, or 300 in a five-minute interval. These 300 readings (for each sensor) would be stored in a circular FIFO buffer in shared memory. (Reading every second is probably not realistic with the sensors we are using. I would settle for reading every 5 seconds, i.e. 12 samples a minute or 60 every 5 minutes, but a 2-second read interval may be possible.)
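A per-sensor ring buffer along the lines proposed could look like the sketch below. The struct is plain data so that it could live in a POSIX shared-memory segment (shm_open plus mmap) where other processes can read it; the names and the 300-slot size (5 minutes at 1 Hz) are illustrative assumptions, not an existing hivetool interface.

```c
#include <stddef.h>

#define RING_SLOTS 300  /* 5 minutes of 1 Hz samples */

/* One ring per sensor. When a new sample arrives and the ring is
 * full, the oldest sample is overwritten (circular FIFO). */
struct ring {
    double sample[RING_SLOTS];
    size_t head;   /* next slot to write */
    size_t count;  /* valid samples, saturates at RING_SLOTS */
};

void ring_push(struct ring *r, double v)
{
    r->sample[r->head] = v;
    r->head = (r->head + 1) % RING_SLOTS;
    if (r->count < RING_SLOTS)
        r->count++;
}

/* i = 0 returns the most recent sample, i = count - 1 the oldest. */
double ring_get(const struct ring *r, size_t i)
{
    return r->sample[(r->head + RING_SLOTS - 1 - i) % RING_SLOTS];
}
```

Because writes only advance `head` after storing the sample, a reader in another process sees either the old or the new value for a slot, never a torn index.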

Methods would be provided to get the average, last value, direction and rate of change, variance, and noise figures from the data in the buffer. A filtered average would be calculated using the HX711 program's method of throwing away the outliers. Other noise filters could be implemented.
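The statistics mentioned above could be computed over a window of buffered samples roughly as follows. These are standard formulas (mean, population variance, least-squares slope); the function names are hypothetical, and the window is passed oldest-first as a plain array.

```c
#include <stddef.h>

double window_mean(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s / (double)n;
}

/* Population variance: a simple noise figure for the window. */
double window_variance(const double *v, size_t n)
{
    double m = window_mean(v, n), s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += (v[i] - m) * (v[i] - m);
    return s / (double)n;
}

/* Least-squares slope in sensor units per sample interval; the sign
 * gives the direction of change, the magnitude the rate. */
double window_slope(const double *v, size_t n)
{
    double tm = (double)(n - 1) / 2.0, vm = window_mean(v, n);
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < n; i++) {
        num += ((double)i - tm) * (v[i] - vm);
        den += ((double)i - tm) * ((double)i - tm);
    }
    return num / den;
}
```

The slope is more robust than differencing the last two samples, since it uses the whole window to estimate the trend.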

The buffer would be monitored for anomalies (e.g. a sudden drop in weight indicating a swarm). When an anomaly is detected, the contents of the buffer would be dumped and a "hyper logging" mode started, where every sample is logged until the event is over.
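One way to sketch the swarm check is to compare the average of the most recent samples against the average of the earlier baseline in the buffer. The threshold and window sizes below are illustrative assumptions, not hivetool values; real tuning would depend on scale noise and hive weight.

```c
#include <stddef.h>

#define SWARM_DROP_KG 1.5   /* assumed weight drop that triggers hyper logging */

static double span_mean(const double *v, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += v[i];
    return s / (double)n;
}

/* samples are ordered oldest-first; returns 1 when the last `recent`
 * samples average SWARM_DROP_KG or more below the preceding baseline,
 * i.e. the condition that would dump the buffer and start hyper logging. */
int swarm_detected(const double *samples, size_t n, size_t recent)
{
    double baseline = span_mean(samples, n - recent);
    double latest = span_mean(samples + (n - recent), recent);
    return baseline - latest >= SWARM_DROP_KG;
}
```

Averaging a few recent samples, rather than testing a single reading, keeps one noisy scale sample from triggering a false dump.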

This would preserve a detailed record of sensor changes, and of other data such as bee counts, for the 5 minutes before, during, and the 5 minutes after the event. Audio and video streams would be similarly buffered and dumped.