The software is under continuous development, so this page may become out of date. For detailed installation instructions, see Install Hivetool Pi image.
This project uses Free and Open Source Software (FOSS). The operating system is Linux, although everything should also run under Microsoft Windows. The code is available on GitHub.
Hivetool can be used as a:
- Data logger that provides data acquisition and storage.
- Bioserver that displays, streams, analyzes and visualizes the data in addition to data acquisition and storage.
Both of these options can run with or without Internet access. The bioserver requires installing and configuring additional software: a web server (usually Apache), the Perl module GD::Graph, and perhaps a media server such as Icecast (http://www.icecast.org/) or FFserver (http://www.ffmpeg.org/ffserver.html) to record and/or stream audio and video. The Pi uses VLC to stream video.
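On Debian-based systems (including Raspbian on the Pi), the bioserver extras described above can be installed from the standard repositories. The package selection below is a sketch; which optional packages you need depends on the features you enable:

```sh
# Debian/Raspbian package names -- a sketch, not an exhaustive list.
sudo apt-get update
sudo apt-get install -y apache2 libgd-graph-perl   # web server + Perl GD::Graph
sudo apt-get install -y icecast2                   # optional: audio streaming
sudo apt-get install -y vlc                        # optional: video streaming on the Pi
```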
Linux distros that have been tested are:
- Debian Wheezy (Pi)
- Lubuntu (lightweight Ubuntu)
- Slackware 13.0
Starting with HiveTool version 0.5, the text file hive.conf is read first to determine which sensors are attached and to retrieve their calibration parameters.
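As a rough illustration, hive.conf can be thought of as a file of shell variable assignments that hive.sh sources before polling anything. The variable names and values below are illustrative assumptions, not HiveTool's actual keys:

```sh
# Hypothetical hive.conf sketch -- every key name here is an assumption.
SCALE_TYPE="hx711"          # hx711 | ad7193 | serial
SCALE_SLOPE="0.00221"       # calibration: raw counts -> pounds
SCALE_OFFSET="-142.7"       # calibration: zero offset
TEMP_SENSOR="temperhum"     # temperhum | dht22
LUX_SENSOR="tsl2591"
```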
Reading the Sensors
A detailed list of supported sensors is on the Sensors page.
Several different scales are supported. For tips on scales that use serial communication, see Scale Communication. Scales based on the HX711 (see Frameless Scale) or the AD7193 (Phidget Bridge) analog-to-digital converters are supported on the Pi.
Temperature and Humidity Sensors
tempered reads the RDing TEMPerHUM USB thermometer/hygrometer. Source code is at github.com/edorfaus/TEMPered; detailed instructions for installing TEMPered on the Pi are available. dht22 reads the DHT22 temperature/humidity sensor.
2591 reads the TSL2591 light sensor.
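A reading script might dispatch to whichever of these programs is installed. This wrapper is a sketch only; the command names follow the text, but hive.sh's real logic, and each tool's arguments and output format, will differ:

```sh
#!/bin/sh
# Hypothetical dispatch wrapper -- not HiveTool's actual code.
if command -v tempered >/dev/null 2>&1; then
    reading=$(tempered)              # TEMPerHUM USB thermometer/hygrometer
elif command -v dht22 >/dev/null 2>&1; then
    reading=$(dht22)                 # DHT22 temperature/humidity sensor
else
    reading="no temperature/humidity reader installed"
fi
echo "$reading"
```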
Logging the Data
After hive.sh reads the sensors and fetches the weather, the data is logged both locally and remotely on a central web server. On the hive computer it is appended to the flat text log file hive.log and inserted into a SQL database; xml.sh also writes it in XML format to the temporary file /tmp/hive.xml. cURL then sends the XML file to a hosted web server, where a Perl script extracts the XML-encoded data and inserts a row into the database.
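The local half of that pipeline can be sketched in a few lines of shell. The file paths follow the text, but the field layout, the sample values, and the server URL are illustrative assumptions:

```sh
#!/bin/sh
# Sketch of the logging step -- field layout, values, and URL are assumptions.
TEMP="24.4"; HUMIDITY="41.2"; WEIGHT="87.3"
NOW=$(date '+%Y/%m/%d %H:%M:%S')

# 1. Append the reading to the flat text log.
echo "$NOW $WEIGHT $TEMP $HUMIDITY" >> /tmp/hive.log

# 2. Write the same reading as XML, as xml.sh does.
cat > /tmp/hive.xml <<EOF
<hive>
  <time>$NOW</time>
  <weight>$WEIGHT</weight>
  <temp>$TEMP</temp>
  <humidity>$HUMIDITY</humidity>
</hive>
EOF

# 3. Send the XML to the central web server (URL is hypothetical).
# curl --data-binary @/tmp/hive.xml http://example.org/cgi-bin/hive_post.pl
```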
Starting with version 0.5, the data is also stored in a local SQL database. Unless a full SQL server is needed, SQLite is recommended as it uses fewer resources.
- sql.sh inserts a row into a MySQL database.
- sqlite.sh inserts a row into a SQLite database.
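The SQLite path needs nothing beyond the sqlite3 command-line tool. The snippet below shows the general shape of such an insert; the table and column names are assumptions, not sqlite.sh's actual schema:

```sh
#!/bin/sh
# Sketch of an insert like sqlite.sh's -- schema is an assumption.
command -v sqlite3 >/dev/null 2>&1 || { echo "sqlite3 not installed"; exit 0; }
DB=/tmp/hivetool-demo.db
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS readings
               (ts TEXT, weight REAL, temp REAL, humidity REAL);'
sqlite3 "$DB" "INSERT INTO readings VALUES
               (datetime('now'), 87.3, 24.4, 41.2);"
sqlite3 "$DB" 'SELECT count(*) FROM readings;'   # confirm the row landed
```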
Just as a mail server serves up email and a web server dishes out web pages, a biological data server, or bioserver, serves biological data that it has monitored, analyzed and visualized.
Visualizing the Data
Filters are used to remove "noise" and distortion from the data.
- NASA weight filter eliminates weight changes caused by the beekeeper, so the data reflects only weight changes made by the bees.
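The idea behind such a step filter can be illustrated with a small awk script: any jump between consecutive readings larger than a threshold is assumed to be a beekeeper manipulation (adding or removing equipment) and is subtracted out as a running offset. This is a sketch of the concept only, not the actual NASA filter:

```sh
#!/bin/sh
# Hypothetical step filter -- illustrates the idea, not the NASA algorithm.
# Jumps larger than `threshold` pounds are treated as beekeeper events and
# accumulated into an offset that is subtracted from later readings.
filtered=$(printf '100.0\n100.2\n95.1\n95.3\n' | awk -v threshold=2.0 '
NR > 1 {
    delta = $1 - prev
    if (delta > threshold || delta < -threshold)
        offset += delta          # large step: attribute it to the beekeeper
}
{ prev = $1; print $1 - offset } # emit bee-only weight
')
echo "$filtered"
```

In this toy series the 5-pound drop (a super being removed, say) is cancelled out, leaving the slow bee-driven trend.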
Displaying the Data
When a request comes in from a web browser, the web server kicks off hive_stats.pl, which queries the database for current, minimum, maximum, and average data values and generates the HTML page. Embedded in the HTML page is an image link to hive_graph.pl, which queries the database for the detailed data and either returns it in tabular form for download or generates and returns a graph as a GIF. hive_graph.pl can also be called as a stand-alone program to embed a graph in a web page on another site.
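Because hive_graph.pl returns an image, embedding a graph elsewhere takes nothing more than an img tag pointing at it. The host name and query parameters below are illustrative assumptions, not hive_graph.pl's documented interface:

```html
<!-- host and query parameters are hypothetical -->
<img src="http://example.org/cgi-bin/hive_graph.pl?hive=1&amp;days=7"
     alt="Hive weight, last 7 days">
```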
Directions for streaming video from the Pi.
Variable Naming Convention
If you wish to dig into the code, you might want to start with the Variable Naming Convention guide.