Category Archives: Linux Administration

Running Python CGI Scripts on the Raspberry Pi

Python is the language of choice for controlling the Raspberry Pi’s GPIO pins. It seems only natural, then, that to interact with your Pi over the web it should run a web server capable of handling Python CGI scripts. Following the steps below will get the lightweight nginx web server running on your Pi, handing Python requests off to a uwsgi helper process.

  1. Install nginx
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get install nginx
  2. Add a location block to /etc/nginx/sites-available/default to pass Python requests on to uwsgi. This needs to be placed inside the “server” block of the configuration, for example right after one of the existing “location” sections.
    location ~ \.py$ {
        include uwsgi_params;
        uwsgi_modifier1 9;
    }
  3. Create a Python CGI script at /usr/share/nginx/www/
    #!/usr/bin/env python
    print "Content-type: text/html\n\n"
    print "<h1>Hello World</h1>"
  4. Start nginx
    sudo /etc/init.d/nginx start
  5. Build and install uwsgi with the cgi plugin
    curl | bash -s cgi /home/pi/uwsgi
    sudo mv /home/pi/uwsgi /usr/local/bin
  6. Create the file /etc/uwsgi.ini
    plugins = cgi
    socket =
    module = pyindex
    cgi = /usr/share/nginx/www
    cgi-allowed-ext = .py
    cgi-helper = .py=python
  7. Create the file /usr/share/nginx/www/
    #!/usr/bin/env python
    print "Content-type: text/html\n\n"
    print "<h1>Hello World</h1>"
  8. Create an init script for uwsgi at /etc/init.d/uwsgi
    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          uwsgi
    # Required-Start:    $local_fs $remote_fs $network $syslog
    # Required-Stop:     $local_fs $remote_fs $network $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: starts the uwsgi cgi socket
    # Description:       starts uwsgi using start-stop-daemon
    ### END INIT INFO

    # Paths follow from steps 5 and 6 above
    NAME=uwsgi
    DESC="uwsgi cgi daemon"
    DAEMON=/usr/local/bin/uwsgi
    DAEMON_OPTS=/etc/uwsgi.ini

    test -x $DAEMON || exit 0
    set -e
    . /lib/lsb/init-functions

    case "$1" in
      start)
        echo -n "Starting $DESC: "
        start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
          --make-pidfile --chuid www-data --background \
          --exec $DAEMON -- $DAEMON_OPTS || true
        echo "$NAME."
        ;;
      stop)
        echo -n "Stopping $DESC: "
        start-stop-daemon --stop --quiet --pidfile /var/run/$NAME.pid \
          --exec $DAEMON || true
        echo "$NAME."
        ;;
      restart|force-reload)
        echo -n "Restarting $DESC: "
        start-stop-daemon --stop --quiet --pidfile \
          /var/run/$NAME.pid --exec $DAEMON || true
        sleep 1
        start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid \
          --make-pidfile --chuid www-data --background \
          --exec $DAEMON -- $DAEMON_OPTS || true
        echo "$NAME."
        ;;
      status)
        status_of_proc -p /var/run/$NAME.pid "$DAEMON" uwsgi && exit 0 || exit $?
        ;;
      *)
        echo "Usage: $NAME {start|stop|restart|status}" >&2
        exit 1
        ;;
    esac

    exit 0
  9. Start uwsgi and configure it to start on boot
    sudo chmod +x /etc/init.d/uwsgi
    sudo /etc/init.d/uwsgi start
    sudo update-rc.d uwsgi defaults
  10. Open up your web browser and go to http://{your pi’s ip address}/
    If you’re using the browser on your Pi then you could instead go to http://localhost/
    If you see the message “Hello World” then everything is working.
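With the Hello World script working, the same CGI mechanism can drive the GPIO pins. The sketch below is not from the original article: it reads a pin number from the query string, defaults to a hypothetical pin 17, and leaves the RPi.GPIO calls commented out so that it also runs on a machine that isn't a Pi.

```python
#!/usr/bin/env python
# Hypothetical CGI sketch: parse ?pin=N from the query string and report it.
# On a real Pi you would uncomment the RPi.GPIO lines to actually drive the pin.
import os

try:
    from urllib.parse import parse_qs  # Python 3
except ImportError:
    from urlparse import parse_qs      # Python 2

# nginx/uwsgi pass the query string to the CGI script via the environment
params = parse_qs(os.environ.get('QUERY_STRING', ''))
pin = int(params.get('pin', ['17'])[0])  # default pin 17 is an assumption

# import RPi.GPIO as GPIO
# GPIO.setmode(GPIO.BCM)
# GPIO.setup(pin, GPIO.OUT)
# GPIO.output(pin, GPIO.HIGH)

print("Content-type: text/html\n")
print("<h1>Pin {0} set high</h1>".format(pin))
```

Saved alongside the Hello World script, a request to /gpio.py?pin=4 would select pin 4 instead of the default.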

Manual WordPress Upgrade – The Easy Way

I choose not to run FTP on my server, so when it comes time to upgrade WordPress, I do it manually. If, like me, you think that even the short instructions are too long, here are the even shorter instructions. These commands are run within an SSH shell on the server.

  1. Backup your existing WordPress install and database
  2. Download WordPress and upgrade using the below commands. Replace the directories with those specific to your system.
    cd ~/downloads
    wget https://wordpress.org/latest.tar.gz
    tar xvzf latest.tar.gz
    cd /var/www/wordpress
    rm -r wp-includes wp-admin
    cp -r ~/downloads/wordpress/wp-includes .
    cp -r ~/downloads/wordpress/wp-admin .
    rsync -rAv ~/downloads/wordpress/wp-content/ wp-content/
    rsync -Av ~/downloads/wordpress/*.{php,html,txt} .
  3. Visit your main WordPress admin page at /wp-admin and upgrade the database if necessary.

Automating Rsync Backups to Amazon EC2

A while back I wrote about how to perform incremental backups via rsync to an Amazon EC2 instance. The script worked great when run manually from a Python interpreter but I always ran into issues when trying to automate the script via cron. I finally took some time to hammer out all of the automation issues and fixed some other bugs along the way. With the new fixes in place, I now have my VPS automatically backed up nightly via cron, all for about $1/month!

For the backup script including the latest updates, take a look at my Backup to AWS EBS via Rsync and Boto how-to. I’ve documented the problems encountered and how I fixed them below.

Preserve File Ownership in the Backup

I learned that the remote rsync process must run as root in order to preserve file ownership. This is accomplished by adding --rsync-path="sudo rsync" to the rsync command. However, EC2’s Amazon Linux AMI does not allow this by default because it requires a tty (terminal) to run a sudo command. The solution was to add a line to the script that ssh’s into the EC2 instance, forces allocation of a pseudo-tty, then appends “Defaults !requiretty” to /etc/sudoers.

Maintain Proper Directory Structure in the Backup

Another issue I discovered was that my backup was getting created with directories like /home/home/username/ instead of /home/username/. The solution to this was to simply ensure that all of my rsync destinations contained a final trailing slash.

Connection Denied for SSH

This was the error I saw that was preventing me from running the script via cron. Putting the script to sleep for 60 seconds after attaching the volume fixed this.

Terminate the EC2 Instance when Finished

The original script stopped the EC2 instance rather than terminating it. The EC2 instance is only needed for the rsync after which it should be terminated in order to avoid extra fees from AWS! The backed up data will remain on the detached EBS volume even after the instance is terminated.

Backup to AWS EBS via Rsync and Boto 3

Update 11/2015:

  • Updated the script to use boto3 and waiters
  • Switched to use the t2.micro instance type with VPC
  • More detailed setup instructions


Amazon Web Services Elastic Block Storage provides cheap, reliable storage—perfect for backups. The idea is to temporarily spin up an EC2 instance, attach your EBS volume to it and upload your files. Transferring the data via rsync allows for incremental backups, which are fast and reduce costs. Once the backup is complete, the EC2 instance is terminated. The whole process can be repeated as often as needed by attaching a new EC2 instance to the same EBS volume. I back up 12 GB from my own server weekly using this method. The backup takes about 5 minutes and my monthly bill from Amazon is around $1.

Setup Your VPC

You’ll need an AWS VPC with access to the internet. Exactly how to do this is beyond the scope of this article but you should basically follow AWS’ instructions for creating a “VPC with a Single Public Subnet”. Also make sure that
  1. The default security group for your subnet allows inbound port 22 (SSH) and inbound port 873 (rsync)
  2. Your subnet has “Auto-assign Public IP” enabled
  3. You create an EBS volume in your preferred zone (location). Make sure it is large enough to store your backups.

Create Your Access Key and Key Pair

Create an Amazon EC2 key pair. You need this to connect to your EC2 instance after launching it. Download the private key and store it on your system. In my example, I have the private key stored at /home/takaitra/.ec2/takaitra-aws-key.pem. Also create an access key (access key ID and secret access key) for either your root AWS account or an IAM user that has access to create instances. Make sure to save your secret key somewhere safe as you’ll only be able to download it once after creating it. I had problems using a key that had special characters (=, +, -, /, etc.) so you may want to regenerate your key if it has these in it.

Install and Configure Boto 3

Assuming you have Python’s pip, installing Boto 3 is easy.
$ pip install boto3
The easiest way to set up your access credentials is via awscli.
$ pip install awscli
$ aws configure
AWS Access Key ID: [enter your access key id]
AWS Secret Access Key: [enter your secret access key]
Default region name: [enter the region name]
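Under the hood, aws configure writes two INI files that Boto 3 reads automatically. A sketch of the result (the key and region values are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-east-1
```

You can also create these files by hand if you'd rather not install awscli.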

The Script

The script below automates the entire backup process via Boto 3 (a Python interface to AWS). Make sure to configure the VOLUME_ID, SUBNET and BACKUP_DIRS variables with your own values. Also update SSH_OPTS to point to the private key of your EC2 key pair.
#!/usr/bin/env python

import os
import boto3
import time

IMAGE           = 'ami-60b6c60a' # Amazon Linux AMI 2015.09.1
KEY_NAME        = 'takaitra-key'
INSTANCE_TYPE   = 't2.micro'
VOLUME_ID       = 'vol-########'
PLACEMENT       = {'AvailabilityZone': 'us-east-1a'}
SUBNET          = 'subnet-########'
SSH_OPTS        = '-o StrictHostKeyChecking=no -i /home/takaitra/.ec2/takaitra-aws-key.pem'
BACKUP_DIRS     = ['/etc/', '/opt/', '/root/', '/home/', '/usr/local/', '/var/www/']
DEVICE          = '/dev/sdh'

print 'Starting an EC2 instance of type {0} with image {1}'.format(INSTANCE_TYPE, IMAGE)
ec2 = boto3.resource('ec2')
ec2Client = boto3.client('ec2')
instances = ec2.create_instances(
    ImageId=IMAGE, MinCount=1, MaxCount=1, KeyName=KEY_NAME,
    InstanceType=INSTANCE_TYPE, Placement=PLACEMENT, SubnetId=SUBNET)
instance = instances[0]

print 'Waiting for instance {0} to switch to running state'.format(instance.id)
waiter = ec2Client.get_waiter('instance_running')
waiter.wait(InstanceIds=[instance.id])
instance.reload()
print 'Instance is running, public IP: {0}'.format(instance.public_ip_address)

try:
    print 'Attaching volume {0} to device {1}'.format(VOLUME_ID, DEVICE)
    volume = ec2.Volume(VOLUME_ID)
    volume.attach_to_instance(InstanceId=instance.id, Device=DEVICE)
    print 'Waiting for volume to switch to In Use state'
    waiter = ec2Client.get_waiter('volume_in_use')
    waiter.wait(VolumeIds=[VOLUME_ID])
    print 'Volume is attached'

    # Sleep to avoid the SSH "connection denied" errors described above
    print 'Waiting for the instance to finish booting'
    time.sleep(60)
    print 'Mounting the volume'
    os.system("ssh -t {0} ec2-user@{1} \"sudo mkdir /mnt/data-store && sudo mount {2} /mnt/data-store && echo 'Defaults !requiretty' | sudo tee /etc/sudoers.d/rsync > /dev/null\"".format(SSH_OPTS, instance.public_ip_address, DEVICE))

    print 'Beginning rsync'
    for backup_dir in BACKUP_DIRS:
        os.system("sudo rsync -e \"ssh {0}\" -avz --delete --rsync-path=\"sudo rsync\" {2} ec2-user@{1}:/mnt/data-store{2}".format(SSH_OPTS, instance.public_ip_address, backup_dir))
    print 'Rsync complete'

    print 'Unmounting and detaching volume'
    os.system("ssh -t {0} ec2-user@{1} \"sudo umount /mnt/data-store\"".format(SSH_OPTS, instance.public_ip_address))
    volume.detach_from_instance(InstanceId=instance.id, Device=DEVICE)
    print 'Waiting for volume to switch to Available state'
    waiter = ec2Client.get_waiter('volume_available')
    waiter.wait(VolumeIds=[VOLUME_ID])
    print 'Volume is detached'
finally:
    # Terminate (not just stop) the instance so no further charges accrue
    print 'Terminating instance'
    instance.terminate()


Follow these steps in order to automate backups to Amazon EC2. The steps may vary slightly depending on which distro you are running.
  1. Save the script to a file without a file extension such as “ec2_rsync”. Cron (at least in Debian) ignores scripts with extensions.
  2. Configure the script as explained above.
  3. Make the script executable (chmod +x ec2_rsync)
  4. Check that the script is working by running it manually (./ec2_rsync). This may take a long time if this is your initial backup.
  5. Copy the script to /etc/cron.daily/ or /etc/cron.weekly depending on how often you want the backup to run.
  6. Profit!
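If you want more control over the schedule than the cron.daily/cron.weekly directories give you, an explicit /etc/crontab entry works just as well. The path and time below are hypothetical:

```shell
# /etc/crontab entry: run the backup script at 3:30 AM every Sunday as root
30 3 * * 0 root /usr/local/bin/ec2_rsync
```

Remember that /etc/crontab entries include the user field, unlike a per-user crontab.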


Mailman and Exim4 on Debian

Update 10/21/2008: By the way, this article now appears on the Debian Administration web site!

I recently installed Mailman on my server to provide a mailing list for my extended family. While I was eventually able to scrounge up the articles I needed by searching the web, many of them were woefully outdated. Here is a short article that pulls together my research and describes in one place what is needed to get Mailman running happily under Debian etch with Exim4.


This guide assumes that you are running a recent release of Debian and have Exim4 installed and working.

Installing and Configuring Mailman

To install mailman, simply run the following command:

apt-get install mailman

During the install, you will be prompted to choose which languages you want mailman to support.

After the install is complete, follow the instructions given during the install and setup the Mailman-specific mailing list.

newlist mailman

There are just a few changes that must be made to the basic configuration. Open /etc/mailman/ and edit the following items (substitute your own domains for the placeholder values):

# Default domain for email addresses of newly created mailing lists
DEFAULT_EMAIL_HOST = 'example.com'

# Default host for the web interface of newly created mailing lists
DEFAULT_URL_HOST = 'lists.example.com'

# Uncomment this. In this setup, the alias file won't need to be changed.
MTA=None   # Misnomer, suppresses alias output on newlist

The last line makes no functional changes to Mailman but will stop commands like “newlist” from outputting messages we won’t need. Restart mailman so that the configuration changes take effect:

/etc/init.d/mailman restart

Now would be a good time to set up any other mailing lists you will need using the same “newlist” command. If your list will be using anything other than the DEFAULT_URL_HOST we set up earlier as its web interface hostname, make sure to pass that to newlist with the -u flag.

Exim Configuration

The classic way of integrating Mailman with your MTA is to add each mailing list address to /etc/aliases as a pipe to the mailman process. This is no longer the recommended way to configure Mailman with Exim. In fact, when I did try to add a piped alias, Exim choked on it because its default configuration no longer allows these for security reasons. So instead of adding dozens of lines to our alias file, we will follow the how-to and let Exim handle all Mailman addresses automatically.
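For reference, the old piped-alias approach added a pair of /etc/aliases entries per list, along the lines of the sketch below. The list name is hypothetical; the wrapper path is Debian's default Mailman location.

```text
familylist:       "|/var/lib/mailman/mail/mailman post familylist"
familylist-owner: "|/var/lib/mailman/mail/mailman owner familylist"
```

With the router/transport configuration below, none of these entries are needed.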

Assuming you are using the split config, you will need to create the files listed below. If you are using a single file for configuration, you will need to find the appropriate places to insert the items.


# Mailman macro definitions
# (the values below follow the standard Debian Mailman/Exim4 how-to;
# substitute your own list domain for the placeholder)

# Home dir for the Mailman installation
MM_HOME=/var/lib/mailman

# User and group for Mailman
MM_UID=list
MM_GID=list

# Domains that your lists are in - colon separated list
# you may wish to add these into local_domains as well
domainlist mm_domains=lists.example.com

# The path of the Mailman mail wrapper script
MM_WRAP=MM_HOME/mail/mailman

# The path of the list config file (used as a required file when
# verifying list addresses)
MM_LISTCHK=MM_HOME/lists/${lc::$local_part}/config.pck


mailman_router:
  driver = accept
  domains = +mm_domains
  require_files = MM_LISTCHK
  local_part_suffix_optional
  local_part_suffix = -admin : \
                      -bounces   : -bounces+*  : \
                      -confirm   : -confirm+*  : \
                      -join      : -leave      : \
                      -owner     : -request    : \
                      -subscribe : -unsubscribe
  transport = mailman_transport


mailman_transport:
  driver = pipe
  command = MM_WRAP \
            '${if def:local_part_suffix \
                  {${sg{$local_part_suffix}{-(\\w+)(\\+.*)?}{\$1}}} \
                  {post}}' \
            $local_part
  current_directory = MM_HOME
  home_directory = MM_HOME
  user = MM_UID
  group = MM_GID

After you finish creating the various configuration files, run the following commands to build the updated configuration file and restart exim:

update-exim4.conf
/etc/init.d/exim4 restart


Apache Configuration

Mailman uses CGI to create a web interface for its mailing lists. We need to configure Apache in order to get this piece working. First create a file to store some new aliases for the web server.


Alias /pipermail /var/lib/mailman/archives/public
Alias /images/mailman /usr/share/images/mailman
<Directory /var/lib/mailman/archives/public>
    DirectoryIndex index.html
</Directory>

Then create (or edit) a VirtualHost entry to allow the scripts to run.


<VirtualHost *:80>
    DocumentRoot /var/www/
    <Directory /var/www/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
        # This directive allows us to have apache2's default start page
        # in /apache2-default/, but still have / go to the right place
        RedirectMatch ^/$ /cgi-bin/mailman/listinfo
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

If this is a new file, remember to symlink it to the sites-enabled directory.

Finally, restart Apache so that the changes take effect.

/etc/init.d/apache2 restart


Administer your List

That completes the setup! You can begin administering your new list at