Qiang Blog

Just another blog by zhangjingqiang.

How to Set Up Local Infrastructure Development with Ansible and Vagrant Using Ad-Hoc Commands

Make a work directory

$ mkdir work && cd work

Make Vagrantfile

# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.ssh.insert_key = false
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "256"]
  end

  # Application server 1.
  config.vm.define "app1" do |app|
    app.vm.hostname = "orc-app1.dev"
    app.vm.box = "geerlingguy/centos7"
    app.vm.network :private_network, ip: ""
  end

  # Application server 2.
  config.vm.define "app2" do |app|
    app.vm.hostname = "orc-app2.dev"
    app.vm.box = "geerlingguy/centos7"
    app.vm.network :private_network, ip: ""
  end

  # Database server.
  config.vm.define "db" do |db|
    db.vm.hostname = "orc-db.dev"
    db.vm.box = "geerlingguy/centos7"
    db.vm.network :private_network, ip: ""
  end
end
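The three define blocks differ only in machine name and hostname, so they can be driven from a data structure. Below is a plain-Ruby sketch of that idea (the names and hostnames come from the Vagrantfile above; in a real Vagrantfile the loop body would call `config.vm.define` instead of building strings):

```ruby
# Machine list mirroring the Vagrantfile above; the per-machine IPs on
# your private network still need to be filled in.
machines = [
  { name: "app1", hostname: "orc-app1.dev" },
  { name: "app2", hostname: "orc-app2.dev" },
  { name: "db",   hostname: "orc-db.dev" },
]

# Each entry would become one `config.vm.define` block.
definitions = machines.map { |m| "#{m[:name]} => #{m[:hostname]}" }
puts definitions
```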

Run vagrant

$ vagrant up

Edit /etc/ansible/hosts

# Lines beginning with a # are comments, and are only included for
# illustration. These comments are overkill for most inventory files.

# Application servers
[app]
# (add the two app servers' private-network IP addresses here)

# Database server
[db]
# (add the db server's private-network IP address here)

# Group 'multi' with all servers
[multi:children]
app
db

# Variables that will be applied to all servers
# (Vagrant's insecure key works because insert_key is false in the Vagrantfile)
[multi:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key

Run ad-hoc commands

# Check basic facts on all servers in the 'multi' group
$ ansible multi -a "hostname"
$ ansible multi -a "hostname" -f 1
$ ansible multi -a "df -h"
$ ansible multi -a "free -m"
$ ansible multi -a "date"

# Install and manage NTP everywhere (-s runs the command with sudo)
$ ansible multi -s -m yum -a "name=ntp state=present"
$ ansible multi -s -m service -a "name=ntpd state=started enabled=yes"
$ ansible multi -s -a "service ntpd stop"
$ ansible multi -s -a "ntpdate -q 0.rhel.pool.ntp.org"
$ ansible multi -s -a "service ntpd start"

# Set up the app servers for Django
$ ansible app -s -m yum -a "name=MySQL-python state=present"
$ ansible app -s -m yum -a "name=python-setuptools state=present"
$ ansible app -s -m easy_install -a "name=django"
$ ansible app -a "python -c 'import django; print django.get_version()'"

# Set up the database server (MariaDB, firewall rule, app user)
$ ansible db -s -m yum -a "name=mariadb-server state=present"
$ ansible db -s -m service -a "name=mariadb state=started enabled=yes"
$ ansible db -s -a "iptables -F"
$ ansible db -s -a "iptables -A INPUT -s -p tcp -m tcp --dport 3306 -j ACCEPT"
$ ansible db -s -m yum -a "name=MySQL-python state=present"
$ ansible db -s -m mysql_user -a "name=django host=% password=12345 priv=*.*:ALL state=present"

# Limit a command to a subset of hosts (exact match, glob, or ~regex)
$ ansible app -s -a "service ntpd status"
$ ansible app -s -a "service ntpd restart" --limit ""
$ ansible app -s -a "service ntpd restart" --limit "*.4"
$ ansible app -s -a "service ntpd restart" --limit ~".*\.4"

# Manage groups and users
$ ansible app -s -m group -a "name=admin state=present"
$ ansible app -s -m user -a "name=johndoe group=admin createhome=yes"
$ ansible app -s -m user -a "name=johndoe state=absent remove=yes"

# Manage files: stat, copy to, fetch from, create, link, and delete
$ ansible multi -m stat -a "path=/etc/environment"
$ ansible multi -m copy -a "src=/etc/hosts dest=/tmp/hosts"
$ ansible multi -s -m fetch -a "src=/etc/hosts dest=/tmp"
$ ansible multi -m file -a "dest=/tmp/test mode=644 state=directory"
$ ansible multi -m file -a "src=/src/symlink dest=/dest/symlink owner=root group=root state=link"
$ ansible multi -m file -a "dest=/tmp/test state=absent"

# Run long operations in the background (-B timeout in seconds, -P poll interval)
$ ansible multi -s -B 3600 -a "yum -y update"
$ ansible multi -s -m async_status -a "jid=763350539037"
$ ansible multi -B 3600 -P 0 -a "/path/to/fire-and-forget-script.sh"

# Check log files (pipes require the shell module, not the default command module)
$ ansible multi -s -a "tail /var/log/messages"
$ ansible multi -s -m shell -a "tail /var/log/messages | grep ansible-command | wc -l"

# Manage cron jobs
$ ansible multi -s -m cron -a "name='daily-cron-all-servers' hour=4 job='/path/to/daily-script.sh'"
$ ansible multi -s -m cron -a "name='daily-cron-all-servers' state=absent"

# Deploy a tagged version of the app from Git and run its update script
$ ansible app -s -m git -a "repo=git://example.com/path/to/repo.git dest=/opt/myapp update=yes version=1.2.4"
$ ansible app -s -a "/opt/myapp/update.sh"


Ansible for DevOps (Chapter 3)

ansible vagrant

Setting Up Ruby on Rails Production Server With Nginx and Unicorn

  • OS: Ubuntu 12.04
  • Nginx: 1.1.19
  • Unicorn: 4.8.3
  • Rails: 4.2

Step 1: Set up unicorn

# Gemfile
gem 'unicorn'

# From the command line, in your application's root directory:
bundle install

# Add to ~/.bashrc or ~/.bash_profile

# Run under the Rails application folder to generate binstubs (bin/unicorn)
bundle --binstubs

Configuring config/unicorn.rb

vim config/unicorn.rb
# Set the current app's path for later reference. Rails.root isn't available at
# this point, so we have to point up a directory.
app_path = File.expand_path(File.dirname(__FILE__) + '/..')

# The number of worker processes you have here should equal the number of CPU
# cores your server has.
worker_processes (ENV['RAILS_ENV'] == 'production' ? 4 : 1)

# You can listen on a port or a socket. Listening on a socket is good in a
# production environment, but listening on a port can be useful for local
# debugging purposes.
listen app_path + '/tmp/unicorn.sock', backlog: 64

# For development, you may want to listen on port 3000 so that you can make sure
# your unicorn.rb file is soundly configured.
listen(3000, backlog: 64) if ENV['RAILS_ENV'] == 'development'

# After the timeout is exhausted, the unicorn worker will be killed and a new
# one brought up in its place. Adjust this to your application's needs. The
# default timeout is 60. Anything under 3 seconds won't work properly.
timeout 300

# Set the working directory of this unicorn instance.
working_directory app_path

# Set the location of the unicorn pid file. This should match what we put in the
# unicorn init script later.
pid app_path + '/tmp/unicorn.pid'

# You should define your stderr and stdout here. If you don't, stderr defaults
# to /dev/null and you'll lose any error logging when in daemon mode.
stderr_path app_path + '/log/unicorn.log'
stdout_path app_path + '/log/unicorn.log'

# Load the app up before forking.
preload_app true

# Garbage collection settings.
GC.respond_to?(:copy_on_write_friendly=) &&
  GC.copy_on_write_friendly = true

# If using ActiveRecord, disconnect (from the database) before forking.
before_fork do |server, worker|
  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.connection.disconnect!
end

# After forking, restore your ActiveRecord connection.
after_fork do |server, worker|
  defined?(ActiveRecord::Base) &&
    ActiveRecord::Base.establish_connection
end
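The worker_processes comment above says to match the worker count to the number of CPU cores. As a sketch (assuming Ruby 2.2+ for `Etc.nprocessors`, and a hypothetical helper name), the count could be derived at boot instead of hard-coded:

```ruby
require 'etc'

# One worker per CPU core in production, a single worker everywhere else.
def worker_count(rails_env)
  rails_env == 'production' ? Etc.nprocessors : 1
end

# In unicorn.rb this would become: worker_processes worker_count(ENV['RAILS_ENV'])
puts worker_count('development')  # => 1
```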

Test config

bin/unicorn -c config/unicorn.rb

Step 2: Set up MySQL Database

sudo apt-get install mysql-server mysql-client libmysqlclient-dev

Step 3: Set up Unicorn Init Script

sudo vim /etc/init.d/unicorn

#!/bin/sh
# File: /etc/init.d/unicorn

### BEGIN INIT INFO
# Provides:          unicorn
# Required-Start:    $local_fs $remote_fs $network $syslog
# Required-Stop:     $local_fs $remote_fs $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the unicorn web server
# Description:       starts unicorn
### END INIT INFO

# Feel free to change any of the following variables for your app:

# ubuntu is the default user on Amazon's EC2 Ubuntu instances.
USER=ubuntu
# Replace [PATH_TO_RAILS_ROOT_FOLDER] with your application's path. I prefer
# /srv/app-name to /var/www. The /srv folder is specified as the server's
# "service data" folder, where services are located. The /var directory,
# however, is dedicated to variable data that changes rapidly, such as logs.
# Reference https://help.ubuntu.com/community/LinuxFilesystemTreeOverview for
# more information.
APP_ROOT=[PATH_TO_RAILS_ROOT_FOLDER]
# Set the environment. This can be changed to staging or development for staging
# servers.
RAILS_ENV=production
# This should match the pid setting in $APP_ROOT/config/unicorn.rb.
PID=$APP_ROOT/tmp/unicorn.pid
# A simple description for service output.
DESC="Unicorn app - $RAILS_ENV"
# Run unicorn command
CMD="unicorn -D -c $APP_ROOT/config/unicorn.rb -E $RAILS_ENV"
# Give your upgrade action a timeout of 60 seconds.
TIMEOUT=60

# Store the action that we should take from the service command's first
# argument (e.g. start, stop, upgrade).
action="$1"

# Make sure the script exits if any variables are unset. This is short for
# set -o nounset.
set -u

# Set the location of the old pid. The old pid is the process that is getting
# replaced on upgrade; unicorn renames its pid file to *.oldbin.
old_pid="$PID.oldbin"

# Make sure the APP_ROOT is actually a folder that exists. An error message from
# the cd command will be displayed if it fails.
cd $APP_ROOT || exit 1

# A function to send a signal to the current unicorn master process.
sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

# Send a signal to the old process.
oldsig () {
  test -s $old_pid && kill -$1 `cat $old_pid`
}

# A switch for handling the possible actions to take on the unicorn process.
case $action in
  # Start the process by testing if it's there (sig 0), failing if it is,
  # otherwise running the command as specified above.
  start)
    sig 0 && echo >&2 "$DESC is already running" && exit 0
    su - $USER -c "$CMD"
    ;;

  # Graceful shutdown. Send QUIT signal to the process. Requests will be
  # completed before the processes are terminated.
  stop)
    sig QUIT && echo "Stopping $DESC" && exit 0
    echo >&2 "Not running"
    ;;

  # Quick shutdown - kills all workers immediately.
  force-stop)
    sig TERM && echo "Force-stopping $DESC" && exit 0
    echo >&2 "Not running"
    ;;

  # Graceful shutdown and then start.
  restart)
    sig QUIT && echo "Restarting $DESC" && sleep 2 \
      && su - $USER -c "$CMD" && exit 0
    echo >&2 "Couldn't restart."
    ;;

  # Reloads config file (unicorn.rb) and gracefully restarts all workers. This
  # command won't pick up application code changes if you have `preload_app
  # true` in your unicorn.rb config file.
  reload)
    sig HUP && echo "Reloading configuration for $DESC" && exit 0
    echo >&2 "Couldn't reload configuration."
    ;;

  # Re-execute the running binary, then gracefully shutdown old process. This
  # command allows you to have zero-downtime deployments. The application may
  # spin for a minute, but at least the user doesn't get a 500 error page or
  # the like. Unicorn interprets the USR2 signal as a request to start a new
  # master process and phase out the old worker processes. If the upgrade fails
  # for some reason, a new process is started.
  upgrade)
    if sig USR2 && echo "Upgrading $DESC" && sleep 10 \
      && sig 0 && oldsig QUIT
    then
      n=$TIMEOUT
      while test -s $old_pid && test $n -ge 0
      do
        printf '.' && sleep 1 && n=$(( $n - 1 ))
      done

      if test $n -lt 0 && test -s $old_pid
      then
        echo >&2 "$old_pid still exists after $TIMEOUT seconds"
        exit 1
      fi
      exit 0
    fi
    echo >&2 "Couldn't upgrade, starting 'su - $USER -c \"$CMD\"' instead"
    su - $USER -c "$CMD"
    ;;

  # A basic status checker. Just checks if the master process is responding to
  # the `kill` command.
  status)
    sig 0 && echo >&2 "$DESC is running." && exit 0
    echo >&2 "$DESC is not running."
    ;;

  # Reopen all logs owned by the master and all workers.
  reopen-logs)
    sig USR1
    ;;

  # Any other action gets the usage message.
  *)
    # Usage
    echo >&2 "Usage: $0 <start|stop|restart|reload|upgrade|force-stop|reopen-logs>"
    exit 1
    ;;
esac
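The script's sig helper boils down to "read the pidfile, signal the process", and signal 0 only tests that the process exists. A Ruby sketch of the same check (in Ruby, `Process.kill(0, pid)` raises `Errno::ESRCH` when the process is gone; the helper name is ours):

```ruby
require 'tempfile'

# True if the process recorded in pidfile is alive, mirroring the
# init script's `test -s "$PID" && kill -0 ...` idiom.
def process_alive?(pidfile)
  return false unless File.size?(pidfile)   # missing or empty pidfile
  Process.kill(0, File.read(pidfile).to_i)  # signal 0: existence check only
  true
rescue Errno::ESRCH
  false
end

pidfile = Tempfile.new('unicorn.pid')
pidfile.write(Process.pid.to_s)
pidfile.flush
puts process_alive?(pidfile.path)  # => true (this process is running)
```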

Change permission:

sudo chmod +x /etc/init.d/unicorn

Make unicorn start on reboot:

sudo update-rc.d unicorn defaults

Step 4: Nginx configuration

Installing nginx

sudo apt-get install nginx

It's unnecessary to change /etc/nginx/nginx.conf

Configuring /etc/nginx/sites-available/sitename

sudo vim /etc/nginx/sites-available/testapp
upstream unicorn {
  server unix:/home/vagrant/testapp/tmp/unicorn.sock;
}

server {
  listen 80;
  server_name localhost; # Replace this with your site's domain.

  keepalive_timeout 300;

  client_max_body_size 4G;

  root /home/vagrant/testapp/public; # Set this to the public folder location of your Rails application.

  try_files $uri/index.html $uri.html $uri @unicorn;

  location @unicorn {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_redirect off;
    # This passes requests to unicorn, as defined in the upstream block above.
    proxy_pass http://unicorn;
    proxy_read_timeout 300s;
    proxy_send_timeout 300s;
  }

  # You can override error pages by redirecting the requests to a file in your
  # application's public folder, if you so desire:
  error_page 500 502 503 504 /500.html;
  location = /500.html {
    root /home/vagrant/testapp/public;
  }
}
Add a symlink to enable site:

# Enter the sites-enabled folder
cd /etc/nginx/sites-enabled

# Add a symlink to your configuration file
sudo ln -s ../sites-available/testapp

# Reload nginx. Make sure to use the `reload` action so that nginx can check
# your configuration before reloading, thereby saving you from causing downtime.
sudo service nginx reload

Check the processes of unicorn and nginx:

sudo service unicorn status
sudo service nginx status

Step 5: Set the Rails production secret key

Generate a key with rake:

rake secret
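`rake secret` amounts to generating 64 random bytes rendered as hex; a sketch of the equivalent with the standard library (SecureRandom is what the Rails task uses under the hood):

```ruby
require 'securerandom'

# 64 random bytes rendered as hex => a 128-character key.
key = SecureRandom.hex(64)
puts key.length  # => 128
```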

Reference the key in config/secrets.yml; here it is read from the SECRET_KEY_BASE environment variable rather than hard-coded:

production:
  secret_key_base: <%= ENV['SECRET_KEY_BASE'] %>
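Rails runs secrets.yml through ERB before parsing it as YAML, which is why the `<%= ENV[...] %>` form works. A minimal sketch of that two-step evaluation (the key value here is a stand-in, not a real secret):

```ruby
require 'erb'
require 'yaml'

# Stand-in value; in production this comes from the real environment.
ENV['SECRET_KEY_BASE'] = 'abc123'

raw = <<~YAML
  production:
    secret_key_base: <%= ENV['SECRET_KEY_BASE'] %>
YAML

# ERB is expanded first, then the result is parsed as YAML.
secrets = YAML.safe_load(ERB.new(raw).result)
puts secrets['production']['secret_key_base']  # => abc123
```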

Step 6: Open your site

In the Vagrantfile:

vim Vagrantfile
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
  # config.vm.network :forwarded_port, guest: 80, host: 8080
  config.vm.network :private_network, ip: ""
  # config.vm.network :public_network
end

Reload Vagrant and log in again:

vagrant reload
vagrant ssh

Access your site from browser:

If something goes wrong, delete /etc/nginx/sites-available/default and /etc/nginx/sites-enabled/default, then try again.



nginx ruby-on-rails ubuntu unicorn vagrant