Qiang Blog

Just another blog by zhangjingqiang.

Amazon Web Services LiveLessons

The content of these lessons is very useful.

The Ideal

  • Highly Available
  • Fault Tolerant
  • Secure
  • Durable
  • Self Healing
  • Automated
  • Cost Effective

Best Practices

  • Design for Failure
  • Scale Horizontally
  • Disposable Resources over Fixed Servers
  • Automate, Automate, Automate!
  • Security in Layers
  • Loose Coupling
  • Optimize for Cost

amazon-web-services

How to use serverspec to test servers?

Start a test project

$ serverspec-init

Directory

.
├── README.md
├── Rakefile
├── properties.yml
└── spec
    ├── app
    │   └── ruby_spec.rb
    ├── base
    │   ├── host_spec.rb
    │   └── users_and_groups_spec.rb
    ├── db
    │   └── mysql_spec.rb
    ├── proxy
    │   └── nginx_spec.rb
    ├── spec_helper.rb
    └── worker
        └── redis_spec.rb

Source

Rakefile

require 'rake'
require 'rspec/core/rake_task'
require 'yaml'

properties = YAML.load_file('properties.yml')

task :spec    => 'serverspec:all'
task :default => :spec

namespace :serverspec do
  task :all => properties.keys.map {|key| 'serverspec:' + key.split('.')[0] }
  properties.keys.each do |key|
    desc "Run serverspec to #{key}"
    RSpec::Core::RakeTask.new(key.split('.')[0].to_sym) do |t|
      ENV['TARGET_HOST'] = key
      t.pattern = 'spec/{' + properties[key][:roles].join(',') + '}/*_spec.rb'
    end
  end
end

properties.yml

# server1

app.server1:
  :roles:
    - base
    - proxy
db.server1:
  :roles:
    - db
worker.server1:
  :roles:
    - app
    - worker

# server2

app.server2:
  :roles:
    - base
    - proxy
db.server2:
  :roles:
    - db
worker.server2:
  :roles:
    - app
    - worker

spec/spec_helper.rb

require 'serverspec'
require 'net/ssh'
require 'yaml'

properties = YAML.load_file('properties.yml')

set :backend, :ssh
set :request_pty, true

if ENV['ASK_SUDO_PASSWORD']
  begin
    require 'highline/import'
  rescue LoadError
    fail "highline is not available. Try installing it."
  end
  set :sudo_password, ask("Enter sudo password: ") { |q| q.echo = false }
else
  set :sudo_password, ENV['SUDO_PASSWORD']
end

host = ENV['TARGET_HOST']
set_property properties[host]

options = Net::SSH::Config.for(host)

options[:user] ||= Etc.getlogin

set :host,        options[:host_name] || host
set :ssh_options, options

# Disable sudo
# set :disable_sudo, true


# Set environment variables
# set :env, :LANG => 'C', :LC_MESSAGES => 'C'

# Set PATH
# set :path, '/sbin:/usr/local/sbin:$PATH'

spec/app/ruby_spec.rb

require_relative '../spec_helper'

describe process("ruby") do
  it { should be_running }
end

spec/base/host_spec.rb

require_relative '../spec_helper'

shared_examples "cpu and memory should be ok" do
  it { should be_resolvable }
  it { should be_resolvable.by('dns') }

  it 'CPU should eq 2' do
    expect(host_inventory['cpu']['total']).to eq('2')
  end

  it 'Memory total should be greater than 7000000 kB' do
    expect(host_inventory['memory']['total'].to_i).to be > 7000000
  end
end

# server1

describe host('www.server1.com') do
  include_examples "cpu and memory should be ok"
end

# server2

describe host('www.server2.com') do
  include_examples "cpu and memory should be ok"
end

spec/db/mysql_spec.rb

require_relative '../spec_helper'

describe 'MySQL config parameters' do
  context mysql_config('innodb-buffer-pool-size') do
    its(:value) { should > 100000000 }
  end

  context mysql_config('socket') do
    its(:value) { should eq '/var/lib/mysql/mysql.sock' }
  end
end

spec/proxy/nginx_spec.rb

require_relative '../spec_helper'

describe port(80) do
  it { should be_listening }
end

describe port(80) do
  it { should be_listening.with('tcp') }
end

describe process("nginx") do
  it { should be_running }
end

spec/worker/redis_spec.rb

require_relative '../spec_helper'

describe process("redis") do
  it { should be_running }
end

Run test cases

$ rake spec

More resource types

http://serverspec.org/resource_types.html

serverspec

How to bulk set different colors for multiple servers with Ansible?

The accidental deletion of GitLab's database in January 2017 made the whole world pay more attention to server safety. Giving different servers different background colors is a fairly effective safeguard. Below, Ansible is used to set the tmux powerline theme so that servers in different environments are easy to tell apart.

Directory

.
├── README.md
└── provisioning
    ├── files
    │   └── .zshrc
    ├── inventory
    ├── playbook.yml
    ├── tasks
    │   ├── tmux.yml
    │   └── zsh.yml
    └── templates
        └── .tmux.conf.j2

How to use?

$ cd provisioning
$ ansible-playbook playbook.yml -i inventory

Source

provisioning/files/.zshrc

if [ "$TMUX" = ""   ]; then tmux; fi

provisioning/templates/.tmux.conf.j2

source-file "/home/{{username}}/.tmux-themepack/powerline/block/{{color}}.tmuxtheme"

provisioning/tasks/tmux.yml

---
- name: Install the latest version of Tmux
  yum: name=tmux state=latest

- name: Install tmux-themepack
  git: repo=https://github.com/jimeh/tmux-themepack
       dest=/home/{{username}}/.tmux-themepack

- name: Copy .tmux.conf file to servers
  template:
    src: templates/.tmux.conf.j2
    dest: /home/{{username}}/.tmux.conf

provisioning/tasks/zsh.yml

---
- name: Install the latest version of Zsh
  yum: name=zsh state=latest

- name: Copy .zshrc file to servers
  copy:
    src: files/.zshrc
    dest: /home/{{username}}/.zshrc

- name: Start zsh shell
  user: name={{username}} shell=/bin/zsh

provisioning/playbook.yml

---
- hosts: all
  become: yes
  vars_prompt:
    - name: "username"
      prompt: "Enter username"
      private: no

  tasks:
    - include: tasks/tmux.yml
    - include: tasks/zsh.yml

provisioning/inventory

[server1]
app1.server1
app2.server1

[server1:vars]
color=blue

[server2]
app1.server2
app2.server2

[server2:vars]
color=orange

[server3]
app1.server3
app2.server3

[server3:vars]
color=red

[servers:children]
server1
server2
server3

ansible tmux zsh

Write local file contents into a Google Spreadsheet automatically with the Google APIs

Google APIs required

  • Google Drive API
  • Google Spreadsheet API

Features implemented

  • Obtain Google OAuth credentials
  • Create a Google Spreadsheet in Google Drive
  • Share it with users and with whole domains
  • Delete the default sheet1 and create a new fixed sheet
  • Loop over all files under a given local path, creating a sheet for each and writing its contents into the Google Spreadsheet
  • Set the sheet header style (title, color, frozen row, bold, etc.)

File type in this example

  • tsv

Usage

Step 1

Download client_secret.json from the Google developers page.
Reference:

https://developers.google.com/sheets/api/quickstart/python

Step 2

Run in the terminal:

$ python write_to_google_sheets.py <file_name> <folder_path>

Source

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import print_function
import httplib2
import os
import sys
import json
import re
from termcolor import colored

from apiclient import discovery
from oauth2client import client
from oauth2client import tools
from oauth2client.file import Storage
import requests

reload(sys)
sys.setdefaultencoding('utf8')

# If modifying these scopes, delete your previously saved credentials
# at ~/.credentials/sheets.googleapis.com-python-quickstart.json
SCOPES = "https://www.googleapis.com/auth/drive https://www.googleapis.com/auth/spreadsheets"
CLIENT_SECRET_FILE = 'client_secret.json'
APPLICATION_NAME = 'Google API Drive + Spreadsheet'

def get_credentials():
    """Gets valid user credentials from storage.

    If nothing has been stored, or if the stored credentials are invalid,
    the OAuth2 flow is completed to obtain the new credentials.

    Returns:
        Credentials, the obtained credential.
    """
    home_dir = os.path.expanduser('~')
    credential_dir = os.path.join(home_dir, '.credentials')
    if not os.path.exists(credential_dir):
        os.makedirs(credential_dir)
    credential_path = os.path.join(credential_dir,
                                   'sheets.googleapis.com-python-quickstart.json')

    store = Storage(credential_path)
    credentials = store.get()
    if not credentials or credentials.invalid:
        flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
        flow.user_agent = APPLICATION_NAME
        # No command-line flags are parsed in this script, so call run_flow directly
        credentials = tools.run_flow(flow, store)
        print('Storing credentials to ' + credential_path)
    return credentials

def main():
    """Create spreadsheet and write data into it
    Use:
    - Google Drive API
    - Google Spreadsheet API
    """
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())

    # Create new spreadsheet
    spreadsheet_id = create_spreadsheet(http)

    # Create new sheet
    write_data_to_sheets(http, spreadsheet_id)

    print(colored('Finish!', 'green'))

def create_spreadsheet(http):
    drive_service = discovery.build('drive', 'v3', http=http)
    file_metadata = {
      'name' : sys.argv[1],
      'mimeType' : 'application/vnd.google-apps.spreadsheet'
    }
    file = drive_service.files().create(body=file_metadata, fields='id').execute()
    # share spreadsheet
    share(drive_service, file.get('id'))
    return file.get('id')

def write_data_to_sheets(http, spreadsheet_id):
    discoveryUrl = ('https://sheets.googleapis.com/$discovery/rest?'
                    'version=v4')
    spreadsheet_service = discovery.build('sheets', 'v4', http=http,
                              discoveryServiceUrl=discoveryUrl)

    files = os.listdir(sys.argv[2])
    print(colored(files, 'red'))

    # create new sheets and delete default sheet1
    spreadsheet_service.spreadsheets().batchUpdate(spreadsheetId=spreadsheet_id, body=base_sheet()).execute()
    # write log files to sheets
    for file in files:
        write_log_to_sheet(spreadsheet_service, spreadsheet_id, file)

def base_sheet():
    data = {
      'requests': [
        {
          'addSheet':{
            'properties': {'title': u'New Sheet Name'}
          }
        },
        {
          'deleteSheet':{
            'sheetId': 0
          }
        }
      ]
    }
    return data

def write_log_to_sheet(spreadsheet_service, spreadsheet_id, file):
    # make sheet
    body = {
      'requests': [
        {
          'addSheet':{
            'properties': {'title': u'{0}'.format(file)}
          }
        }
      ]
    }
    result = spreadsheet_service.spreadsheets().batchUpdate(spreadsheetId=spreadsheet_id, body=body).execute()
    sheetId = result['replies'][0]['addSheet']['properties']['sheetId']

    # write to sheet
    values = []
    values.append(titles(file))
    lines = [line.rstrip() for line in open(os.path.join(sys.argv[2], file))]
    for i in range(len(lines)):
        values.append(re.split(r'\t+', lines[i]))
    data = [
            {
                'range': '{0}!A1'.format(file),
                'values': values
            }
    ]
    body = {
            'valueInputOption': 'USER_ENTERED',
            'data': data
    }
    spreadsheet_service.spreadsheets().values().batchUpdate(spreadsheetId=spreadsheet_id, body=body).execute()

    # format header row
    body = {
    "requests": [
    {
      "repeatCell": {
        "range": {
          "sheetId": sheetId,
          "startRowIndex": 0,
          "endRowIndex": 1
        },
        "cell": {
          "userEnteredFormat": {
            "backgroundColor": {
              "red": 0.0,
              "green": 0.0,
              "blue": 1.0
            },
            "horizontalAlignment" : "LEFT",
            "textFormat": {
              "foregroundColor": {
                "red": 1.0,
                "green": 1.0,
                "blue": 1.0
              },
              "fontSize": 12,
              "bold": 'true'
            }
          }
        },
        "fields": "userEnteredFormat(backgroundColor,textFormat,horizontalAlignment)"
      }
    },
    {
      "updateSheetProperties": {
        "properties": {
          "sheetId": sheetId,
          "gridProperties": {
            "frozenRowCount": 1
          }
        },
        "fields": "gridProperties.frozenRowCount"
      }
    }
    ]
    }
    spreadsheet_service.spreadsheets().batchUpdate(spreadsheetId=spreadsheet_id, body=body).execute()

def titles(file):
    values_title = [
        'ID', 'Name', 'Description'
    ]
    return values_title

def share(drive_service, spreadsheet_id):
    batch = drive_service.new_batch_http_request(callback=callback)
    # share to users
    user_permission = [
        'user1@example.com',
        'user2@example.com'
    ]
    for user in user_permission:
        share_user(drive_service, spreadsheet_id, batch, user)
    # share to domains
    domain_permission = [
        'google.com',
        'facebook.com'
    ]
    for domain in domain_permission:
        share_domain(drive_service, spreadsheet_id, batch, domain)
    # batch execute
    batch.execute()

def share_user(drive_service, spreadsheet_id, batch, user):
    user_permission = {
            'type': 'user',
            'role': 'writer',
            'emailAddress': user
            }
    batch.add(drive_service.permissions().create(
        fileId=spreadsheet_id,
        body=user_permission,
        fields='id',
        ))
    return batch

def share_domain(drive_service, spreadsheet_id, batch, domain):
    domain_permission = {
            'type': 'domain',
            'role': 'reader',
            'domain': domain
            }
    batch.add(drive_service.permissions().create(
        fileId=spreadsheet_id,
        body=domain_permission,
        fields='id',
        ))
    return batch

def callback(request_id, response, exception):
    if exception:
        print(exception)
    else:
        print("Permission Id: {0}".format(response.get('id')))

if __name__ == '__main__':
    if len(sys.argv) == 3 and os.path.isdir(sys.argv[2]):
        main()
    else:
        print('Please input a file name and log path.')

google-drive-api google-spreadsheet-api python

How to bulk make a JSON list with Ruby?

File example:

# text_list
text1
text2
text3

Ruby batch:

#!/usr/bin/ruby

require 'json'

class Maker
  def initialize(counter=0)
    @counter = counter
    case counter
    when 0
      @position = 'bottom-center'
      @type = 'horizontal'
    when 1
      @position = 'vertical-right'
      @type = 'vertical'
    end
  end

  def read_write
    File.open('text_list', 'r') do |fr|
      while(line = fr.gets)
        json = json_format(line.strip, @counter)
        @counter = @counter + 1
        p JSON.generate(json)
        File.open('new_json', 'a') { |fw| fw.write(JSON.generate(json).to_s + ',') }
      end
    end
  end

  def json_format(line, counter)
    {
      "id":"#{counter}",
      "position": "#{@position}",
      "type": "#{@type}",
      "text":"#{line}"
    }
  end
end

if ARGV.empty?
  p 'Input a value, please.'
  p 'For example:'
  p 'Horizontal -- ruby mjl.rb 0'
  p 'Vertical -- ruby mjl.rb 1'
  exit
end

if ARGV.length > 1
  p 'Please input one value only.'
  exit
end

if not [0, 1].include?(ARGV[0].to_i)
  p 'Please input 0 or 1'
  exit
end

m = Maker.new ARGV[0].to_i
m.read_write

Then use https://jsonformatter.curiousconcept.com to format the JSON list.

json ruby

How to bulk compare images with ImageMagick using Python and Ruby?

If we want to compare all the images in two different paths, we can save the image lists into two files and bulk compare them with a batch script.

For example:

# old
1.png
2.png
# new
1.png
2.png (a different image)

Then run the python or ruby script:

#!/usr/bin/python
# coding: UTF-8

import os
import sys
import subprocess
from termcolor import colored

reload(sys)
sys.setdefaultencoding('utf8')

def main():
    if len(sys.argv) == 3 and os.path.isfile(sys.argv[1]) and os.path.isfile(sys.argv[2]):
        compare()
    else:
        print 'Please input regular files'

def compare():
    list1 = [line.rstrip() for line in open(sys.argv[1])]
    list2 = [line.rstrip() for line in open(sys.argv[2])]
    for i in range(len(list1)):
        os.system("composite -compose difference {0} {1} {2}".format(list1[i], list2[i], '/tmp/diff.png'))
        pipe = subprocess.Popen("identify -format %[mean] {0}".format('/tmp/diff.png'), shell=True, stdout=subprocess.PIPE).stdout
        value = pipe.read()
        if float(value) > 0:
            os.system("cp /tmp/diff.png {0}".format(str(i) + '.png'))
            print colored("{0}{1} Diff - {2}".format('[' + str(i) + ']', float(value), list2[i]), 'red')
        else:
            print colored("{0}{1} Same - {2}".format('[' + str(i) + ']', value, list2[i]), 'green')

if __name__ == "__main__":
    print 'Compare by python:'
    main()
The Ruby version:

#!/usr/bin/ruby

require 'colorize'

class CompareImages
  def initialize()
    puts "Compare by ruby:"
  end

  def read_put
    list1 = File.readlines(ARGV[0]).map{|x| x.strip}
    list2 = File.readlines(ARGV[1]).map{|x| x.strip}
    (0..list1.length - 1).each do |i|
      `composite -compose difference #{list1[i]} #{list2[i]} /tmp/diff.png`
      value = `identify -format %[mean] /tmp/diff.png`
      if value.to_i > 0
        `cp /tmp/diff.png #{i}.png`
        puts "[#{i}]#{value} Diff - #{list2[i]}".red
      else
        puts "[#{i}]#{value} Same - #{list2[i]}".green
      end
    end
  end
end

ci = CompareImages.new
ci.read_put

Run the scripts:

$ python ci.py old new
$ ruby ci.rb old new

It can also output the messages with color in the terminal!

imagemagick python ruby

Import data from files into a MySQL database with a Python script

#!/usr/bin/python
# coding: UTF-8

import sys
import os

reload(sys)
sys.setdefaultencoding('utf8')

def main():
    if len(sys.argv) == 2 and os.path.isdir(sys.argv[1]):
        make_file()
    else:
        print 'Please input a regular path'

def make_file():
    files = os.listdir(sys.argv[1])
    print files
    for file in files:
        if '.tsv' in file and '.swp' not in file:
            os.system("iconv -f SHIFT-JIS -t UTF-8 {0} > {1}".format(sys.argv[1] + "/" + file, 'tmp'))
            file_name = ''
            # Error Report
            if 'errors_report' in file:
                if 'English Error' in open('tmp').read():
                    file_name = 'errors_report_en'
                    os.system("mv tmp {0}".format(file_name))
                elif 'Japanese Error' in open('tmp').read():
                    file_name = 'errors_report_ja'
                    os.system("mv tmp {0}".format(file_name))
            # Warning Report
            if 'warnings_report' in file:
                file_name = 'warnings_report'
                os.system("mv tmp {0}".format(file_name))

            _import(file_name)
            _remove_tmp(file_name)

def _import(file_name):
    """
    Import to MySQL
    """
    os.system("mysqlimport -u root --local caption {0}".format(file_name))

def _remove_tmp(file_name):
    """
    Remove renamed file
    """
    os.system("rm {0}".format(file_name))

if __name__ == "__main__":
    main()

This script reads all the files in a directory into the database and, depending on the data in each file, loads them into different tables. For example, the database has tables like these:

  • errors_report_en
  • errors_report_ja
  • warnings_report

The tables have no ID column and match the data format of the files exactly. A unique index over all columns ensures that the imported data is never inserted twice.
The target files in this example are .tsv files whose encoding is not UTF-8, so they are converted first.

Usage:

python import_to_db.py <path>

mysql python

MongoDB Deployment Checklist

Hardware

  • RAM
  • Disk space
  • Disk speed
  • CPU
  • Network

Security

  • Protection of network traffic
  • Access control

Monitoring

  • Hardware usage
  • Health checks (see the sketch after this list)
  • MMS Monitoring
  • Client performance monitoring
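
For the health-check item, here is a minimal sketch with pymongo; the host, port, and timeout are placeholders and the code is not from the book:

#!/usr/bin/python
# Minimal MongoDB health-check sketch; host/port are placeholders.
from pymongo import MongoClient
from pymongo.errors import PyMongoError

def check_mongodb(host='localhost', port=27017):
    try:
        client = MongoClient(host, port, serverSelectionTimeoutMS=2000)
        client.admin.command('ping')                   # basic liveness check
        status = client.admin.command('serverStatus')  # hardware usage numbers
        print('ok - current connections: %d' % status['connections']['current'])
        return True
    except PyMongoError as e:
        print('mongodb health check failed: %s' % e)
        return False

if __name__ == '__main__':
    check_mongodb()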

Disaster recovery

  • Evaluate risk
  • Have a plan
  • Test your plan
  • Have a backup plan

Performance

  • Load testing

MongoDB in Action, 2nd Edition

deployment mongodb

DevOps Deployment Pipeline Steps

  1. Checkout the code (a sketch tying these steps together follows this list)
  2. Run pre-deployment tests
  3. Compile and/or package the code
  4. Build the container
  5. Push the container to the registry
  6. Deploy the container to the production server
  7. Integrate the container
  8. Run post-deployment tests
  9. Push the tests container to the registry
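
A minimal sketch of steps 1-8 as a Python driver around the git and docker CLIs; the repository URL, image name, registry, SSH host, and test commands are placeholders, not part of the book's toolkit:

#!/usr/bin/python
# Hypothetical pipeline driver: every step shells out to an existing CLI.
import subprocess

REPO = 'https://github.com/example/app.git'   # placeholder repository
IMAGE = 'registry.example.com/app:latest'     # placeholder image/registry

def run(cmd):
    print('+ ' + ' '.join(cmd))
    subprocess.check_call(cmd)

def pipeline():
    run(['git', 'clone', REPO, 'app'])                  # 1. checkout the code
    run(['python', '-m', 'pytest', 'app/tests/unit'])   # 2. pre-deployment tests
    run(['docker', 'build', '-t', IMAGE, 'app'])        # 3-4. package and build
    run(['docker', 'push', IMAGE])                      # 5. push to the registry
    run(['ssh', 'prod', 'docker pull ' + IMAGE +        # 'prod' is a placeholder host
         ' && docker run -d ' + IMAGE])                 # 6-7. deploy and integrate
    run(['python', '-m', 'pytest', 'app/tests/post'])   # 8. post-deployment tests

if __name__ == '__main__':
    pipeline()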

The DevOps 2.0 Toolkit

deployment devops

Django Workflow

Setup

  • Within a new directory, create and activate a new virtualenv.
  • Install Django.
  • Create your project: django-admin.py startproject
  • Create a new app: python manage.py startapp
  • Add your app to the INSTALLED_APPS tuple (see the sketch below).
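
For example, after startapp the new app has to be registered in settings.py; the app name blog below is just a placeholder:

# settings.py (snippet) - register the new app in INSTALLED_APPS.
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'blog',  # the new app created with "python manage.py startapp blog"
)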

Add Basic URLs and Views

  • Map your Project’s urls.py file to the new app.
  • In your App directory, create a urls.py file to define your App’s URLs.
  • Add views, associated with the URLs, in your App’s views.py; make sure they return an HttpResponse object. Depending on the situation, you may also need to query the model (database) to get the data requested by the end user (see the sketch below).
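
A minimal sketch of the URL-to-view wiring, assuming a placeholder app called blog and the pre-2.0 Django URL style that matches the tuple-based settings above:

# blog/views.py - a view must return an HttpResponse.
from django.http import HttpResponse

def index(request):
    return HttpResponse('Hello from the blog app')

# blog/urls.py - the app's own URL configuration.
from django.conf.urls import url
from . import views

urlpatterns = [
    url(r'^$', views.index, name='index'),
]

# project urls.py - map the project's URLs to the new app.
from django.conf.urls import include, url

urlpatterns = [
    url(r'^blog/', include('blog.urls')),
]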

Templates and Static Files

  • Create a templates and static directory within your project root.
  • Update settings.py to include the paths to your templates.
  • Add a template (HTML file) to the templates directory. Within that file, you can include the static file with - {% load static %} and {% static "filename" %}. Also, you may need to pass in data requested by the user.
  • Update the views.py file as necessary.

Models and Databases

  • Update the database engine in settings.py (if necessary; it defaults to SQLite).
  • Create and apply a new migration.
  • Create a superuser.
  • Add an admin.py file in each App that you want access to in the Admin.
  • Create your models for each App (see the sketch below).
  • Create and apply a new migration. (Do this whenever you make any change to a model).
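
A sketch of a model and its admin registration for the placeholder blog app; remember to run makemigrations and migrate afterwards:

# blog/models.py - a simple model.
from django.db import models

class Post(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.title

# blog/admin.py - expose the model in the Django admin.
from django.contrib import admin
from .models import Post

admin.site.register(Post)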

Forms

  • Create a forms.py file in the App to define form-related classes; define your ModelForm classes here (see the sketch below).
  • Add or update a view for handling the form logic - e.g., displaying the form, saving the form data, alerting the user about validation errors, etc.
  • Add or update a template to display the form.
  • Add a urlpattern in the App’s urls.py file for the new view.
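
A sketch of a ModelForm and the view that handles it, assuming the placeholder Post model from the sketch above:

# blog/forms.py - a ModelForm for the placeholder Post model.
from django import forms
from .models import Post

class PostForm(forms.ModelForm):
    class Meta:
        model = Post
        fields = ['title', 'body']

# blog/views.py (snippet) - display the form, validate, save, redirect.
from django.shortcuts import render, redirect
from .forms import PostForm

def create_post(request):
    form = PostForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        form.save()
        return redirect('index')  # assumes a URL named 'index' exists
    return render(request, 'blog/post_form.html', {'form': form})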

User Registration

  • Create a UserForm
  • Add a view for creating a new user (see the sketch below).
  • Add a template to display the form.
  • Add a urlpattern for the new view.
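
A sketch of the registration view; it uses Django's built-in UserCreationForm instead of a hand-written UserForm, and the template path is a placeholder:

# blog/views.py (snippet) - create a new user from the submitted form.
from django.contrib.auth.forms import UserCreationForm
from django.shortcuts import render, redirect

def register(request):
    form = UserCreationForm(request.POST or None)
    if request.method == 'POST' and form.is_valid():
        form.save()               # creates the user
        return redirect('login')  # assumes a URL named 'login' exists
    return render(request, 'registration/register.html', {'form': form})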

User Login

  • Add a view for handling user credentials (see the sketch below).
  • Create a template to display a login form.
  • Add a urlpattern for the new view.
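
A sketch of a login view built on django.contrib.auth; the template path and the redirect target are placeholders:

# blog/views.py (snippet) - check the submitted credentials and log the user in.
from django.contrib.auth import authenticate, login
from django.shortcuts import render, redirect

def user_login(request):
    if request.method == 'POST':
        user = authenticate(username=request.POST.get('username'),
                            password=request.POST.get('password'))
        if user is not None and user.is_active:
            login(request, user)
            return redirect('index')  # placeholder redirect target
    return render(request, 'registration/login.html')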

Setup the template structure

  • Find the common parts of each page (i.e., header, sidebar, footer).
  • Add these parts to a base template.
  • Create specific templates that inherit from the base template.

Reference

Real Python Part 2

django workflow