Change: Data Acquisition

Background: Every 5 minutes, cron runs a shell script, hive.sh, that calls other shell scripts to read the sensors. The sensors are usually read by a short C program. Some sensors are read once. The program that reads the HX711, on the other hand, reads it 64 times, averages the readings, discards the outliers more than 5% from the average, and then averages the remaining readings again.
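
For illustration, the averaging and outlier rejection described above can be sketched in C roughly as follows. The read_raw() function and the numbers it returns are placeholders, not the actual HX711 driver; only the averaging logic is the point.

  /* Sketch of the outlier-rejection averaging described above.
     read_raw() is a placeholder for the real HX711 read. */
  #include <stdio.h>

  #define SAMPLES 64

  static long read_raw(int i)
  {
      return 842000 + (i % 7) * 150;   /* fake raw counts for the demo */
  }

  static double filtered_average(const long *buf, int n)
  {
      double sum = 0.0;
      for (int i = 0; i < n; i++)
          sum += buf[i];
      double avg = sum / n;            /* first-pass average */

      /* Second pass: keep only readings within 5% of the first average. */
      double kept_sum = 0.0;
      int kept = 0;
      for (int i = 0; i < n; i++) {
          double dev = (buf[i] - avg) / avg;
          if (dev < 0)
              dev = -dev;
          if (dev <= 0.05) {
              kept_sum += buf[i];
              kept++;
          }
      }
      return kept ? kept_sum / kept : avg;
  }

  int main(void)
  {
      long buf[SAMPLES];
      for (int i = 0; i < SAMPLES; i++)
          buf[i] = read_raw(i);
      printf("filtered average: %.1f\n", filtered_average(buf, SAMPLES));
      return 0;
  }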

Problems with the current approach:

  1. Difficulty filtering noisy sensors or bad reads.
  2. Up to 5 minutes latency in detecting anomalies such as swarms, hive tampering, and sensor problems.
  3. If running more than one GUI (e.g. HiveTool and HiveControl concurrently), they both may try to read the same sensor at the same time.

Proposed change: It is proposed that a daemon read all the sensors and store the readings in a circular FIFO buffer in shared memory. Slow-changing signals could be read once every 10 seconds (30 readings per 5-minute logging interval); fast-changing signals could be read once a second (300 per 5-minute logging interval). Methods will be provided to get the average, last value, direction and rate of change, variance, and noise figures from the data in the buffer. A filtered average will be calculated based on the technique used by the HX711 program of throwing away the outliers. Other noise filters can be implemented.
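
As a rough sketch of what one sensor's shared-memory buffer and the daemon's write path might look like (the segment name /hivetool_weight, the slot count, and the struct layout are assumptions for illustration, not the actual dad code):

  /* Writer-side sketch: one ring buffer per sensor in POSIX shared memory.
     Build with: cc -o ring_write ring_write.c (add -lrt on older glibc). */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <time.h>
  #include <unistd.h>

  #define RING_SLOTS 300                    /* one reading per second for 5 minutes */
  #define SHM_NAME   "/hivetool_weight"     /* illustrative segment name */

  struct ring {
      unsigned head;                        /* total readings written so far */
      double   value[RING_SLOTS];
      time_t   stamp[RING_SLOTS];
  };

  int main(void)
  {
      int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0644);
      if (fd < 0) { perror("shm_open"); return 1; }
      if (ftruncate(fd, sizeof(struct ring)) < 0) { perror("ftruncate"); return 1; }

      struct ring *r = mmap(NULL, sizeof(*r), PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, 0);
      if (r == MAP_FAILED) { perror("mmap"); return 1; }

      /* The daemon would do this once per sample, for each sensor. */
      double reading = 84.2;                /* placeholder sensor value */
      r->value[r->head % RING_SLOTS] = reading;
      r->stamp[r->head % RING_SLOTS] = time(NULL);
      r->head++;

      munmap(r, sizeof(*r));
      close(fd);
      return 0;
  }

A real implementation would also need a lock or sequence counter so readers never see a half-written slot, and one segment (or one region of a segment) per sensor.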

Every 5 minutes, hive.sh would call some of the methods provided to access data in the buffer and log the filtered average and other metrics.
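
For example, hive.sh could invoke a small helper along these lines every 5 minutes and capture its output for the log; the segment name and layout follow the illustrative sketch above.

  /* Reader-side sketch: print the last reading and a plain average
     so hive.sh can capture the output and append it to the log. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <time.h>
  #include <unistd.h>

  #define RING_SLOTS 300
  #define SHM_NAME   "/hivetool_weight"     /* illustrative segment name */

  struct ring {
      unsigned head;
      double   value[RING_SLOTS];
      time_t   stamp[RING_SLOTS];
  };

  int main(void)
  {
      int fd = shm_open(SHM_NAME, O_RDONLY, 0);
      if (fd < 0) { perror("shm_open"); return 1; }

      struct ring *r = mmap(NULL, sizeof(*r), PROT_READ, MAP_SHARED, fd, 0);
      if (r == MAP_FAILED) { perror("mmap"); return 1; }

      unsigned n = r->head < RING_SLOTS ? r->head : RING_SLOTS;
      if (n == 0) { fprintf(stderr, "no samples yet\n"); return 1; }

      double sum = 0.0;
      for (unsigned i = 0; i < n; i++)
          sum += r->value[i];

      printf("last=%.2f average=%.2f samples=%u\n",
             r->value[(r->head - 1) % RING_SLOTS], sum / n, n);

      munmap(r, sizeof(*r));
      close(fd);
      return 0;
  }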

The buffer will also be monitored for anomalies (e.g. a sudden drop in weight indicating a swarm). When an anomaly is detected, the contents of the buffer will be saved to a file and a "hyper logging" mode started, where every sample is logged to that file until the event is over. This will preserve a detailed record of sensor changes, and other data such as bee counts, for the 5 minutes before, during, and the 5 minutes after the event. Audio and video streams will be similarly buffered and dumped.
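
A minimal sketch of how the swarm check and buffer dump might work, again over the illustrative layout above; the 1 kg threshold, 60-sample window, and dump file path are arbitrary examples, not HiveTool settings.

  /* Anomaly sketch: if the newest reading is more than `threshold` kg
     below the reading `window` samples earlier, dump the buffer to a
     file; the caller would then switch to hyper logging. */
  #include <stdio.h>
  #include <time.h>

  #define RING_SLOTS 300

  struct ring {
      unsigned head;
      double   value[RING_SLOTS];
      time_t   stamp[RING_SLOTS];
  };

  static int check_swarm(const struct ring *r, unsigned window, double threshold)
  {
      if (r->head < window + 1)
          return 0;                                 /* not enough history yet */

      double newest = r->value[(r->head - 1) % RING_SLOTS];
      double older  = r->value[(r->head - 1 - window) % RING_SLOTS];
      if (older - newest < threshold)
          return 0;

      FILE *f = fopen("/tmp/hivetool_event.csv", "w");  /* illustrative path */
      if (!f)
          return 1;
      unsigned n = r->head < RING_SLOTS ? r->head : RING_SLOTS;
      for (unsigned i = 0; i < n; i++) {
          unsigned idx = (r->head - n + i) % RING_SLOTS;
          fprintf(f, "%ld,%.2f\n", (long)r->stamp[idx], r->value[idx]);
      }
      fclose(f);
      return 1;
  }

  int main(void)
  {
      struct ring demo = { 0 };
      /* Simulate a steady hive that suddenly loses 1.5 kg. */
      for (unsigned i = 0; i < 120; i++) {
          demo.value[i] = (i < 100) ? 40.0 : 38.5;
          demo.stamp[i] = 1463868000 + (time_t)i;
          demo.head++;
      }
      if (check_swarm(&demo, 60, 1.0))
          puts("anomaly detected: buffer dumped, hyper logging would start");
      return 0;
  }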

The daemon can provide a layer between the GUI and the hardware, allowing developers to concentrate on presenting the data and the user interface without having to deal with the sensor code. It will also allow multiple GUIs to run at the same time without interfering with each other (e.g. trying to read the same sensor at the same time).

The start of a Data Acquisition Daemon (dad) is included in version 0.7.3 in the /home/download/dad directory.