Tag Archives: Python

Running Python CGI Scripts on the Raspberry Pi

Python is the language of choice for controlling the Raspberry Pi’s GPIO pins, so it seems only natural that, to interact with your Pi over the web, it should run a web server able to handle Python CGI scripts. Following the steps below will get the lightweight nginx web server running on your Pi, handing Python requests off to a uwsgi helper process.

  1. Install nginx
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get install nginx
  2. Add a location block to /etc/nginx/sites-available/default that passes Python requests on to uwsgi. This needs to be placed inside the “server” block of the configuration, for example right after one of the existing “location” sections.
    location ~ \.py$ {
        uwsgi_pass 127.0.0.1:9000;
        include uwsgi_params;
        uwsgi_modifier1 9;
    }
  3. Create a Python CGI script at /usr/share/nginx/www/hello.py
    #!/usr/bin/env python
    print "Content-type: text/html\n\n"
    print "<h1>Hello World</h1>"
  4. Start nginx
    sudo /etc/init.d/nginx start
  5. Build and install uwsgi with the CGI plugin
    curl http://uwsgi.it/install | bash -s cgi /home/pi/uwsgi
    sudo mv /home/pi/uwsgi /usr/local/bin
  6. Create the file /etc/uwsgi.ini
    [uwsgi]
    plugins = cgi
    socket = 127.0.0.1:9000
    module = pyindex
    cgi = /usr/share/nginx/www
    cgi-allowed-ext = .py
    cgi-helper = .py=python
  7. Create an init script for uwsgi at /etc/init.d/uwsgi
    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides: uwsgi
    # Required-Start: $local_fs $remote_fs $network $syslog
    # Required-Stop: $local_fs $remote_fs $network $syslog
    # Default-Start: 2 3 4 5
    # Default-Stop: 0 1 6
    # Short-Description: starts the uwsgi cgi socket
    # Description: starts uwsgi using start-stop-daemon
    ### END INIT INFO
    
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    DAEMON=/usr/local/bin/uwsgi
    NAME=uwsgi
    DESC=uwsgi
    DAEMON_OPTS=/etc/uwsgi.ini
    
    test -x $DAEMON || exit 0
    
    set -e
    
    . /lib/lsb/init-functions
    
    case "$1" in
    start)
    echo -n "Starting $DESC: "
    start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
    --make-pidfile --chuid www-data --background \
    --exec $DAEMON -- $DAEMON_OPTS || true
    echo "$NAME."
    ;;
    
    stop)
    echo -n "Stopping $DESC: "
    start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
    --exec $DAEMON || true
    echo "$NAME."
    ;;
    
    restart|force-reload)
    echo -n "Restarting $DESC: "
    start-stop-daemon --stop --quiet --pidfile \
    /var/run/$NAME.pid --exec $DAEMON || true
    sleep 1
    start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
    --chuid www-data --background \
    --exec $DAEMON -- $DAEMON_OPTS || true
    echo "$NAME."
    ;;
    
    status)
    status_of_proc -p /var/run/$NAME.pid "$DAEMON" uwsgi && exit 0 || exit $?
    ;;
    *)
    echo "Usage: $NAME {start|stop|restart|status}" >&2
    exit 1
    ;;
    esac
    
    exit 0
  8. Start uwsgi and configure it to start on boot
    sudo chmod +x /etc/init.d/uwsgi
    sudo /etc/init.d/uwsgi start
    sudo update-rc.d uwsgi defaults
  9. Open up your web browser and go to http://{your pi’s ip address}/hello.py
    If you’re using the browser on your Pi then you could instead go to http://localhost/hello.py
    If you see the message “Hello World” then everything is working.
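
Once everything is working, your CGI scripts can do more than print static HTML. As a rough illustration (the script name and query parameter are made up for this example), a script saved as /usr/share/nginx/www/greet.py could read a value from the query string:

#!/usr/bin/env python
# Hypothetical example: read a "name" parameter from the query string
# (e.g. http://localhost/greet.py?name=Pi) and echo it back
import cgi

form = cgi.FieldStorage()
name = form.getfirst("name", "World")

print("Content-type: text/html\n")
print("<h1>Hello {0}</h1>".format(cgi.escape(name)))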

Controlling the MCP4151 Digital Potentiometer with the Raspberry Pi

We’re going to use the Raspberry Pi’s SPI bus to control Microchip’s MCP4151 8-bit digital potentiometer. The MCP4151 is an 8-pin SPI device that can be used to programmatically control an output voltage. The GPIO pins on the Pi run at 3.3 volts, meaning that we can command the pot to output between 0 and 3.3 volts. However, if we instead power the pot with 5 volts then we can control a voltage between 0 and 5 volts. Note that PWM is a possible alternative to a digital pot that doesn’t require an extra chip; however, PWM can add noise to the signal, which wasn’t acceptable for my project.

Microchip’s datasheet is recommended reading before starting this project.

Parts List

Step 1: Configure SPI on the Raspberry Pi

Follow the first two steps in Controlling an SPI device with the Raspberry Pi.

Step 2: Wire up the components

Because the MCP4151 is a pin-limited device, its SPI input and output share one pin. That’s not a problem in our case since we’re only interested in writing to the device.

You’ll notice that we’re powering the pot with 5 volts despite the Raspberry Pi being a 3.3 volt device. This is fine because (1) the Pi sends commands to the pot but receives nothing back, so there is no risk of overvoltage on the Pi’s pins, and (2) the pot recognizes any SPI input over 0.7 volts as a high signal, so the 3.3 volts from the Pi is plenty to communicate with it.

[Figure: digital pot pinout]

As you can see, the power, ground and the SPI signals take up most of the pins. The actual potentiometer terminals are pins 5, 6 and 7.

Pin Description
1 SPI chip select input
2 SPI clock input
3 SPI serial data input/output
4 Ground
5 Potentiometer terminal A
6 Potentiometer wiper terminal
7 Potentiometer terminal B
8 Positive power supply input


Step 3: Create the Python script

To test out the device, we’re going to continuously loop through all the possible values for the potentiometer. This will cause the voltage at the wiper terminal to rise and fall between 0 and 5 volts. Create the following script, pot-control.py, on your Pi.

#!/usr/bin/python

import spidev
import time

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 976000

# Split an integer input into a two byte array to send via SPI
def write_pot(input):
    msb = input >> 8
    lsb = input & 0xFF
    spi.xfer([msb, lsb])

# Sweep the pot through its full range, up then down, forever
while True:
    for i in range(0x00, 0x1FF, 1):
        write_pot(i)
        time.sleep(.005)
    for i in range(0x1FF, 0x00, -1):
        write_pot(i)
        time.sleep(.005)

Step 4: Run the script

Make the script executable and run it with the following commands.

chmod +x pot-control.py
sudo ./pot-control.py

If all goes well, you should see the LED connected to the pot’s wiper terminal continuously fade up to full brightness and then back off again.
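
For a rough sense of what the script is doing, the wiper voltage is proportional to the value written. A minimal sketch of the relationship, assuming an ideal pot with 256 steps at full scale and a 5 volt supply:

# Approximate wiper voltage for a given step value (assumes an ideal pot,
# 256 steps at full scale and a 5 V supply; values above full scale are clamped)
def wiper_voltage(value, vdd=5.0, full_scale=256):
    return vdd * min(value, full_scale) / float(full_scale)

print(wiper_voltage(0x80))  # roughly 2.5 V at mid scale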

Controlling an SPI device with the Raspberry Pi

The Raspberry Pi has a Broadcom BCM2835 chip allowing it to interface with SPI devices on its GPIO pins. There are two chip select pins, meaning that the Pi can drive two SPI devices from the same bus.

P1 Header Pin Function
19 MOSI – master output slave input
21 MISO – master input slave output
23 SCLK – clock
24 CE0 – chip enable 0
26 CE1 – chip enable 1

Step 1: Enable SPI on the Raspberry Pi

  1. In your Pi’s terminal, run
    sudo raspi-config
  2. Go to Advanced Options > SPI
  3. Choose “Yes” for both questions then select Finish to exit raspi-config
  4. Either reboot your Pi or run this command to load the kernel module (a quick way to verify it worked is shown after this list)
    sudo modprobe spi-bcm2708
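
To confirm SPI is enabled, check that the kernel has exposed both chip select device nodes. A quick way to do that from Python:

import glob

# With SPI enabled both chip selects should be listed:
# ['/dev/spidev0.0', '/dev/spidev0.1']
print(glob.glob('/dev/spidev*'))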

Step 2: Install spidev

Spidev is a Python module that allows us to interface with the Pi’s SPI bus.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install python-dev python3-dev
cd ~
git clone https://github.com/doceme/py-spidev.git
cd py-spidev
make
sudo make install
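
If the install succeeded, the module should now import without error. A quick sanity check from a Python prompt:

# Should print the module object rather than raising ImportError
import spidev
print(spidev)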

Step 3: Python script

Finally, we can write and run a Python script to control the SPI device.

  1. Create a file called spi-test.py in your favorite editor
    #!/usr/bin/python
    
    import spidev
    import time
    
    spi = spidev.SpiDev()
    spi.open(0, 0)
    spi.max_speed_hz = 7629
    
    # Split an integer input into a two byte array to send via SPI
    def write_pot(input):
        msb = input >> 8
        lsb = input & 0xFF
        spi.xfer([msb, lsb])
    
    # Repeatedly switch a MCP4151 digital pot off then on
    while True:
        write_pot(0x1FF)
        time.sleep(0.5)
        write_pot(0x00)
        time.sleep(0.5)
  2. Make the file executable and run it
    chmod +x spi-test.py
    sudo ./spi-test.py

Notes on spidev

Unless the spi.max_speed_hz field is set to a value accepted by the driver, the script will fail when you run it. The field can be set to these values on the Raspberry Pi:

Speed spi.max_speed_hz value
125.0 MHz 125000000
62.5 MHz 62500000
31.2 MHz 31200000
15.6 MHz 15600000
7.8 MHz 7800000
3.9 MHz 3900000
1953 kHz 1953000
976 kHz 976000
488 kHz 488000
244 kHz 244000
122 kHz 122000
61 kHz 61000
30.5 kHz 30500
15.2 kHz 15200
7629 Hz 7629

Two SPI devices can be controlled in Python by creating two SpiDev objects, one for each device.

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 976000

spi2 = spidev.SpiDev()
spi2.open(0, 1)
spi2.max_speed_hz = 976000
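
When a device is no longer needed, release its file handle by calling close() on the corresponding SpiDev object:

# Release the SPI file handles once you're done with the devices
spi.close()
spi2.close()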

Automating Rsync Backups to Amazon EC2

A while back I wrote about how to perform incremental backups via rsync to an Amazon EC2 instance. The script worked great when run manually from a Python interpreter but I always ran into issues when trying to automate the script via cron. I finally took some time to hammer out all of the automation issues and fixed some other bugs along the way. With the new fixes in place, I now have my VPS automatically backed up nightly via cron, all for about $1/month!

For the backup script including the latest updates, take a look at my Backup to AWS EBS via Rsync and Boto how-to. I’ve documented the problems encountered and how I fixed them below.

Preserve File Ownership in the Backup

I learned that the remote rsync process must run as root in order to preserve file ownership. This is accomplished by adding --rsync-path="sudo rsync" to the rsync command. However, EC2’s Amazon Linux AMI does not allow this by default because it requires a tty (terminal) to run a sudo command. The solution was to add a line to the script that SSHes into the EC2 instance, forces allocation of a pseudo-tty, and appends “Defaults !requiretty” to the sudoers configuration.
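
The line in question looks roughly like the sketch below; the key path and address are placeholders standing in for the values the real script builds from its configuration.

import os

# Placeholder connection details for illustration only
ssh_opts = '-o StrictHostKeyChecking=no -i /home/takaitra/.ec2/takaitra-aws-key.pem'
host = 'ec2-user@203.0.113.10'

# -t forces pseudo-tty allocation so the remote sudo command can run while requiretty is still in effect
os.system("ssh -t {0} {1} \"echo 'Defaults !requiretty' | sudo tee /etc/sudoers.d/rsync > /dev/null\"".format(ssh_opts, host))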

Maintain Proper Directory Structure in the Backup

Another issue I discovered was that my backup was getting created with directories like /home/home/username/ instead of /home/username/. The solution to this was to simply ensure that all of my rsync destinations contained a final trailing slash.
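
Continuing the sketch from the previous section (same placeholder ssh_opts and host), a destination written this way keeps /home/username/ at the expected depth in the backup:

# Note the trailing slashes: rsync copies the contents of /home/ into /mnt/data-store/home/,
# so the backup contains /home/username/ rather than /home/home/username/
os.system("sudo rsync -e \"ssh {0}\" -avz --delete --rsync-path=\"sudo rsync\" /home/ {1}:/mnt/data-store/home/".format(ssh_opts, host))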

Connection Denied for SSH

This was the error that was preventing me from running the script via cron: the freshly launched instance needs a little time to finish booting before it will accept SSH connections. Putting the script to sleep for 60 seconds after attaching the volume fixed this.

Terminate the EC2 Instance when Finished

The original script stopped the EC2 instance rather than terminating it. The EC2 instance is only needed for the rsync, after which it should be terminated in order to avoid extra fees from AWS! The backed up data will remain on the detached EBS volume even after the instance is terminated.

Backup to AWS EBS via Rsync and Boto 3

Update 11/2015:

  • Updated the script to use boto3 and waiters
  • Switched to use the t2.micro instance type with VPC
  • More detailed setup instructions

Overview

Amazon Web Services Elastic Block Storage provides cheap, reliable storage, perfect for backups. The idea is to temporarily spin up an EC2 instance, attach your EBS volume to it and upload your files. Transferring the data via rsync allows for incremental backups, which are fast and keep costs down. Once the backup is complete, the EC2 instance is terminated. The whole process can be repeated as often as needed by attaching a new EC2 instance to the same EBS volume. I back up 12 GB from my own server weekly using this method. The backup takes about 5 minutes and my monthly bill from Amazon is around $1.

Setup Your VPC

You’ll need an AWS VPC with access to the internet. Exactly how to do this is beyond the scope of this article but you should basically follow AWS’ instructions for creating a “VPC with a Single Public Subnet”. Also make sure that
  1. The default security group for your subnet allows inbound port 22 (SSH) and inbound port 873 (rsync); a scripted way to add these rules is sketched after this list
  2. Your subnet has “Auto-assign Public IP” enabled
  3. You create an EBS volume in your preferred zone (location). Make sure it is large enough to store your backups.
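
Since the rest of the process is scripted with Boto 3, the security group rules from item 1 can be added the same way if you prefer. A rough sketch, where the group ID is a placeholder and you may want a tighter CIDR than 0.0.0.0/0:

import boto3

ec2 = boto3.client('ec2')

# 'sg-########' is a placeholder for your subnet's default security group ID
for port in (22, 873):
    ec2.authorize_security_group_ingress(
        GroupId='sg-########',
        IpProtocol='tcp',
        FromPort=port,
        ToPort=port,
        CidrIp='0.0.0.0/0')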

Create Your Access Key and Key Pair

Create an Amazon EC2 key pair. You need this to connect to your EC2 instance after launching it. Download the private key and store it on your system. In my example, I have the private key stored at /home/takaitra/.ec2/takaitra-aws-key.pem. Also create an access key (access key ID and secret access key) for either your root AWS account or an IAM user that has access to create instances. Make sure to save your secret key somewhere safe as you’ll only be able to download it once after creating it. I had problems using a key that had special characters (=, +, -, /, etc.) so you may want to regenerate your key if it has these in it.

Install and Configure Boto 3

Assuming you have Python’s pip, installing Boto 3 is easy.
$ pip install boto3
The easiest way to set up your access credentials is via awscli.
$ pip install awscli
$ aws configure
AWS Access Key ID: [enter your access key id]
AWS Secret Access Key: [enter your secret access key]
Default region name: [enter the region name]
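
To confirm the credentials and default region are picked up, a quick check is to list your EBS volumes; the volume you created for the backups should appear.

import boto3

# Raises an error if the credentials or region are misconfigured
for volume in boto3.resource('ec2').volumes.all():
    print('{0} {1} GiB {2}'.format(volume.id, volume.size, volume.availability_zone))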

The Script

The script below automates the entire backup process via Boto 3 (a Python interface to AWS). Make sure to configure the VOLUME_ID, SUBNET and BACKUP_DIRS variables with your own values. Also update SSH_OPTS to point to the private key of your EC2 key pair.
#!/usr/bin/env python

import os
import boto3
import time

IMAGE           = 'ami-60b6c60a' # Amazon Linux AMI 2015.09.1
KEY_NAME        = 'takaitra-key'
INSTANCE_TYPE   = 't2.micro'
VOLUME_ID       = 'vol-########'
PLACEMENT       = {'AvailabilityZone': 'us-east-1a'}
SUBNET          = 'subnet-########'
SSH_OPTS        = '-o StrictHostKeyChecking=no -i /home/takaitra/.ec2/takaitra-aws-key.pem'
BACKUP_DIRS     = ['/etc/', '/opt/', '/root/', '/home/', '/usr/local/', '/var/www/']
DEVICE          = '/dev/sdh'

print 'Starting an EC2 instance of type {0} with image {1}'.format(INSTANCE_TYPE, IMAGE)
ec2 = boto3.resource('ec2')
ec2Client = boto3.client('ec2')
ec2.create_instances(ImageId=IMAGE,InstanceType=INSTANCE_TYPE,Placement=PLACEMENT,SubnetId=SUBNET,MinCount=1,MaxCount=1,KeyName=KEY_NAME)
instances = ec2.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['pending']}])
instanceList = list(instances)
instance = instanceList[0]

print 'Waiting for instance {0} to switch to running state'.format(instance.id)
waiter = ec2Client.get_waiter('instance_running')
waiter.wait(InstanceIds=[instance.id])
instance.reload()
print 'Instance is running, public IP: {0}'.format(instance.public_ip_address)

try:
    print 'Attaching volume {0} to device {1}'.format(VOLUME_ID, DEVICE)
    volume = ec2.Volume(VOLUME_ID)
    volume.attach_to_instance(InstanceId=instance.id,Device=DEVICE)
    print 'Waiting for volume to switch to In Use state'
    waiter = ec2Client.get_waiter('volume_in_use')
    waiter.wait(VolumeIds=[VOLUME_ID])
    print 'Volume is attached'

    print 'Waiting for the instance to finish booting'
    time.sleep(60)
    print 'Mounting the volume'
    os.system("ssh -t {0} ec2-user@{1} \"sudo mkdir /mnt/data-store && sudo mount {2} /mnt/data-store && echo 'Defaults !requiretty' | sudo tee /etc/sudoers.d/rsync > /dev/null\"".format(SSH_OPTS, instance.public_ip_address, DEVICE))

    print 'Beginning rsync'
    for backup_dir in BACKUP_DIRS:
            os.system("sudo rsync -e \"ssh {0}\" -avz --delete --rsync-path=\"sudo rsync\" {2} ec2-user@{1}:/mnt/data-store{2}".format(SSH_OPTS, instance.public_ip_address, backup_dir))
    print 'Rsync complete'

    print 'Unmounting and detaching volume'
    os.system("ssh -t {0} ec2-user@{1} \"sudo umount /mnt/data-store\"".format(SSH_OPTS, instance.public_ip_address))
    volume.detach_from_instance(InstanceId=instance.id)
    print 'Waiting for volume to switch to Available state'
    waiter = ec2Client.get_waiter('volume_available')
    waiter.wait(VolumeIds=[VOLUME_ID])
    print 'Volume is detached'
finally:
    print 'Terminating instance'
    instance.terminate()

Automation

Follow these steps in order to automate backups to Amazon EC2. The steps may vary slightly depending on which distro you are running.
  1. Save the script to a file without a file extension such as “ec2_rsync”. Cron (at least in Debian) ignores scripts with extensions.
  2. Configure the script as explained above.
  3. Make the script executable (chmod +x ec2_rsync)
  4. Check that the script is working by running it manually (./ec2_rsync). This may take a long time if this is your initial backup.
  5. Copy the script to /etc/cron.daily/ or /etc/cron.weekly/ depending on how often you want the backup to run.
  6. Profit!