Though Paperclip offers storage path and url customization, sometimes it just cannot cover every situation. For example, you may have complex path/url generation logic that must be handled outside the model, and you don't know the path/url until the last moment before creating the model object. In that case, maybe the only way is to "pass" them into the model object, just like passing arguments in C/C++ or Java.

I tried a few ways to solve this problem. Here I'm going to show you what's NOT going to work and why, and finally what does work.

Say there are two dependent models:

A -> B

which means A owns B. Assume A and B are both singular. B has an attachment called file. The attachment's path and url are calculated in the controller, in this case, not in the model.

# In some_controller.rb

class SomeController < ApplicationController
  def some_method
    @a = A.create!
    b = @a.build_b              # assumption: however you build B
    b.file = params[:file]      # assumption: the uploaded file
    b.store_path = path # string, computed earlier in this controller
    b.store_url  = url  # string, computed earlier in this controller
    b.save!

    puts b.file.path  # equals path, no problem
    puts b.file.url   # equals url, no problem
  end
end


# In B.rb

class B < ActiveRecord::Base
  # attr_accessor automatically generates setters and getters for
  # @store_path and @store_url.
  # (By the way, @store_path accesses the instance variable directly,
  # while self.store_path calls the getter method.)
  attr_accessor :store_path, :store_url

  has_attached_file :file,
                    :path => :get_path,
                    :url  => :get_url

  def get_path
    @store_path
  end

  def get_url
    @store_url
  end
end


It works: the file is stored at path. Perfect. But when we retrieve it from the database, e.g.

# for example, in another_controller.rb

class AnotherController < ApplicationController
  def some_method
    b = B.find_by_id(params[:id])
    b.file.path # wrong, it falls back to paperclip's default path
    b.file.url  # wrong, it falls back to paperclip's default url
  end
end


What the hell is happening here? Go check your disk: the file is stored in exactly the right location. But you can't reach it through the database.

The reason: @store_path and @store_url are instance variables, and they simply do not exist in the new object instantiated by B.find_by_id(params[:id]). Unless you assign values to them after fetching the object, they will be nil, so when you access file.path or file.url, Paperclip of course sends back its default values. You got the file stored properly, but it can never be reached through the database. The key to making Paperclip use dynamically generated arguments is:
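The failure can be reproduced without Rails at all. A minimal pure-Ruby sketch (class and method names are illustrative, not from the real app) of why instance variables don't survive a reload:

```ruby
# Only real database columns survive a reload; plain instance
# variables set through attr_accessor do not.
class Record
  attr_accessor :store_path      # instance variable, never persisted

  def save
    { "id" => 1 }                # pretend only column data hits the DB
  end

  def self.find(_row)
    new                          # a fresh object: @store_path is nil again
  end
end

a = Record.new
a.store_path = "/custom/location/file.jpg"
row = a.save
b = Record.find(row)
b.store_path   # => nil, so Paperclip falls back to its defaults
```

The "reloaded" object `b` is a brand-new instance; nothing ever assigned its @store_path, which is exactly what happens to the real B after find_by_id.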

Just save the storage information in the database.

It's pretty straightforward. It costs two extra columns, which is a price I really didn't want to pay at the beginning. But well... it's alright.
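In sketch form (migration, column, and class names here are assumptions, not copied from the original app): add two string columns, drop the attr_accessor, and let the interpolation methods read the persisted columns.

```
class AddStorageColumnsToBs < ActiveRecord::Migration
  def change
    add_column :bs, :store_path, :string
    add_column :bs, :store_url,  :string
  end
end

class B < ActiveRecord::Base
  has_attached_file :file,
                    :path => :get_path,
                    :url  => :get_url

  # store_path / store_url are now real columns (attr_accessor removed),
  # so they come back from the database on every find.
  def get_path
    store_path
  end

  def get_url
    store_url
  end
end
```

The controller code stays the same; it just assigns to column-backed attributes now, and b.file.path works after any reload.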

This phenomenon is like a dangling object in C/C++: a pointer points at some object, and once the pointer's value changes, you no longer have any way to reach the object it pointed to.



  1. Start GoAgent (it supports Windows/OSX/Linux; only the startup method differs)
  2. Set the proxy server in your browser


Download GoAgent. Don't worry, there's no virus in it, and no installation is needed. Windows users: open goagent.exe under the local folder (remember to run it as administrator); a small black window will appear, minimize it to the taskbar. OSX/Linux users: run it with python, and remember to add sudo.





Go to the Chrome Web Store and install the SwitchySharp extension. Once installed, a globe button appears next to the address bar. Click the globe -> Options -> go to the Import/Export page -> Restore from File -> load SwitchyOptions.bak from the local folder inside the goagent folder you just downloaded.



ps: Japan VPN


Few of the webcams you can buy at an electronics market support Linux, which is pretty sad. However, there is a way that leads you to heaven:


Here it is: fswebcam

It's a lightweight open-source program for grabbing images from webcams that support the V4L or V4L2 protocol. Just install it and type

$>fswebcam --help
$>fswebcam -r 1024x768 hello_webcam.jpg

Try it!


This tutorial is about how to use a PointGrey camera (PointGrey is a CCD vendor from Canada) through their FlyCapture SDK (C/C++) as an interface connecting with Java on Ubuntu.

Prepare a PointGrey (PG) camera, the FlyCapture SDK, and probably the Netbeans IDE. Now here we go.

ps. This tutorial uses the C libraries of the SDK, not the C++ ones yet, but I believe the setup procedure is similar. If you want more powerful features, try C++.

My environment:

  • CCD: Chameleon CMLN-13S2C (connect to PC via USB 2.0)
  • Fly Capture SDK version: (amd64)
  • OS: Ubuntu 12.04 LTS (with the default desktop manager)
  • Compiler: gcc 4.6.3
  • IDE: Netbeans 7.3
  • Java version: 1.6.0_27 OpenJDK Runtime Environment 64-Bit Server VM

Setting everything up

  1. Download the SDK and install it. It has dependencies; please check them in the README before running the install script.
  2. Run the install script shipped with the SDK.

After doing this, you will have the C/C++ header files in /usr/include/flycapture/ and the dynamic libraries in /usr/lib/. Besides, PointGrey also offers sample code for using the CCD: open /usr/src/flycapture and you will see the bin, lib, and src directories.

Now you have some executable programs on your computer; play with them to see whether the USB connection to the CCD works.


OK, if you can run them without problems, you're halfway there. Personally, I suggest copying the /usr/src/flycapture folder somewhere else and playing with the copy. For example:

cp -r /usr/src/flycapture ~/

And try to build the sample code:

cd ~/flycapture/src/FlyCapture2Test_C
make          # builds the sample; the binary lands in ../../bin
cd ../../bin

The same goes for the other C++ sample codes. Just remember to build FlyCapture2GUI first, since all the other C++ samples here require the GUI interface.

Now let's connect it to Java

The way for Java to talk to C/C++ libraries is called the Java Native Interface (JNI). You write Java code and have the JDK tools auto-generate a C/C++ header interface (built on the preinstalled jni.h). Then you include the auto-generated header in your C/C++ code and implement it!

First of all, Fork this project:

It already generates the interface header file for you. Now create a Netbeans C/C++ dynamic library project, add FlyCapture2JNI_Interface.c, and include PointGreyCameraInterface.h.

You also need to tell Netbeans the include path of FlyCapture2_C.h; on Ubuntu it's /usr/include/flycapture. And you need to add the path of jni.h as well. If you don't know where it is:

locate jni.h

Project Property -> C Compiler -> Include Directories -> add the two paths above

Now the compiler knows everything it needs: the .c file and the included .h files. You can compile, but it will fail at runtime because you haven't told it where the needed libraries are.

Project Property -> Linker -> Additional Library Directories -> add: /usr/lib
Project Property -> Linker -> Libraries -> select the FlyCapture libraries under /usr/lib/
Project Property -> Linker -> Output -> specify where you want to place the dynamic link library file (.so). This dynamic link library will need to be loaded into your Java project later. Remember to name the .so file with the prefix "lib"; Linux systems recognize libraries this way. The library will then be loaded in Java by, for example, System.loadLibrary("ABC") for a file named libABC.so.

Now clean and build it; the .so file should be generated. Place it somewhere convenient.

OK, now you have all you need to let Java interact with C/C++ libraries through JNI. Coding in Java is comfortable, but sometimes you need more efficiency, to run faster. This way, you can implement some critical functions in C/C++ and use them from Java. Perfect!

Open a Java application project with Netbeans and add what you just forked, including the files in a proper way. One thing to take care of: make sure the library you load is exactly the library you generated.

    System.loadLibrary("Interface"); /*CAREFUL: just "Interface", no "lib" prefix and no extension*/

For this to work, you need to tell Java where this dynamic library is, as usual:
Project Property -> Run -> VM Options -> add: -Djava.library.path=/path/to/the/dir/containing/the/.so


Done, enjoy =)


Online example

rails generate migration AddLatitudeAndLongitudeToModel latitude:float longitude:float
rake db:migrate

In case you do math on them in the future, the datatype is float.

Other notes to self:

  • model storage
  • the GPS format google maps accepts
  • exiftool api
  • json to hash
  • string to datetime
  • [issue] iOS uploads strip EXIF from files
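Two of the notes above can be sketched in plain Ruby (the helper names are mine, not from any library): converting the degrees/minutes/seconds GPS triples that exiftool reports into the signed decimal floats Google Maps (and the float columns above) expect, and parsing EXIF's colon-separated datetime strings.

```ruby
require 'date'

# Hypothetical helper: EXIF stores GPS as degrees/minutes/seconds plus a
# hemisphere letter; Google Maps wants signed decimal degrees (floats).
def dms_to_decimal(deg, min, sec, ref)
  sign = %w[S W].include?(ref) ? -1.0 : 1.0
  sign * (deg + min / 60.0 + sec / 3600.0)
end

# EXIF datetimes look like "2013:05:08 01:49:04"; DateTime.strptime
# handles the unusual colon-separated date part.
def parse_exif_time(str)
  DateTime.strptime(str, "%Y:%m:%d %H:%M:%S")
end

dms_to_decimal(25, 2, 0.0, "N")             # => 25.0333...
parse_exif_time("2013:05:08 01:49:04").year # => 2013
```

Storing the result of dms_to_decimal in the latitude/longitude float columns keeps the values directly usable in distance math later.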

Nginx + Thin are great, but auto deployment makes them even more awesome! Here are the HOW-TOs using capistrano:

First, it's assumed that you've set up an Nginx-Thin server; if you haven't, please read this post.

Typically, there are a few things you will always do after cap deploy:

  1. Bundle install: This is the most important; otherwise your app might not work at all. This part is covered in this post.

  2. Copy config files: database.yml, thin_config.yml, some scripts... and so on.

  3. Precompile assets: This you must of course do for performance. I think it could perhaps be done before deploy, but anyway I do it after cap deploy.

  4. Migration.

  5. Restart the thin servers and the nginx server.

And we're going to make all of this happen automatically. Here's a sample configuration for deploy.rb:

require "bundler/capistrano"

set :application, "iSee"

set :user, 'dayen'                           # ssh login user
set :domain, ''                              # domain or ip
set :deploy_to_dir, "/home/dayen/work/rails/iSee"
set :local_repository,  "/Users/tpy/panorama/iSee"
set :repository,  "/home/dayen/work/rails/iSee" # should be /home/bitnami/apps/myapp in this tutorial
set :use_sudo, false                           # if turned on, every dir capistrano makes would be owned by root

set :branch, 'master'
set :scm, "git"

role :web, domain                          # Your HTTP server, Apache/etc
role :app, domain                          # This may be the same as your `Web` server
role :db,  domain, :primary => true        # This is where Rails migrations will run

set :deploy_to, deploy_to_dir
set :deploy_via, :export

default_run_options[:pty] = true

set :git_shallow_clone, 1
set :scm_verbose, true

after "deploy:restart", "deploy:cleanup"

set :bundle_cmd, 'source $HOME/.bash_profile && bundle'
default_run_options[:shell] = false
# default_run_options[:shell] = '~/bash'

#================== tasks ====================

namespace :deploy do

  task :copy_config_files, :roles => [:app] do
    db_config = "#{path}/database.yml"
    thin_config = "#{path}/thin_config.yml"
    run "cp #{db_config} #{release_path}/config/database.yml"
    run "cp #{thin_config} #{release_path}/"
  end

  task :precompile do
    run "cd #{release_path}; source $HOME/.bash_profile && bundle exec rake assets:precompile RAILS_ENV=production"
  end

  task :migration do
    run "cd #{release_path}; source $HOME/.bash_profile && bundle exec rake db:migrate RAILS_ENV=production"
  end

  task :config_nginx do
    pre = File.basename(previous_release)
    cur = File.basename(release_path)
    run "#{sudo} sed -i 's/#{pre}/#{cur}/g' /etc/nginx/sites-available/default"
  end

  task :restart_thin_server do
    run "cd #{previous_release}; source $HOME/.bash_profile && thin stop -C thin_config.yml"
    run "cd #{release_path}; source $HOME/.bash_profile && thin start -C thin_config.yml"
  end

  task :restart_nginx do
    run "#{sudo} service nginx restart"
  end
end

# after this step, execute the custom tasks defined above (the ordering is a guess)
after "deploy:finalize_update", "deploy:copy_config_files", "deploy:precompile", "deploy:migration", "deploy:config_nginx", "deploy:restart_thin_server", "deploy:restart_nginx"


First, of course, install them:

sudo apt-get update
sudo apt-get install nginx

gem install thin

Configure Thin server

In your rails app, create a thin.yml config for the thin server:


chdir: /home/user/work/rails/application/releases/20130508014904
environment: production
port: 5000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 100
require: []
wait: 30
servers: 3
daemonize: true

thin start -C thin.yml
Or, if you don't want the config bound to a specific dir, just don't specify "chdir"; then the usage is:
cd {some_path}; thin start -C thin.yml

It will create 3 thin servers on ports 5000, 5001, and 5002.

Connect Nginx with Thin

Now let's set up nginx. Insert this code into the http{...} section:

upstream thin_cluster {
  server unix:/tmp/thin.0.sock;
  server unix:/tmp/thin.1.sock;
  server unix:/tmp/thin.2.sock;
}

client_max_body_size 20m;

server {
  listen       80;
  server_name  application;

  root /home/user/work/rails/application/releases/20130508014904/public;

  location / {
    proxy_set_header  X-Real-IP  $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    if (-f $request_filename/index.html) {
      rewrite (.*) $1/index.html break;
    }
    if (-f $request_filename.html) {
      rewrite (.*) $1.html break;
    }
    if (!-f $request_filename) {
      proxy_pass http://thin_cluster;
    }
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   html;
  }
}
In /etc/nginx/sites-available, create a config file, or modify default. If you create your own config file, remember to enable it as well by making a symbolic link in /etc/nginx/sites-enabled pointing to the file in sites-available.

The config file:

upstream thinservers {
  # assuming the thin.yml above: 3 servers starting at port 5000
  server;
  server;
  server;

server {
  listen   80;
  server_name put_your_domain_or_ip_here;

  access_log /home/user/work/rails/application/shared/log/access.log;
  error_log /home/user/work/rails/application/shared/log/error.log;

  root   /path/to/rails/application/public;
  index  index.html;

  location / {
    proxy_set_header  X-Real-IP  $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_redirect off;

    if (-f $request_filename/index.html) {
      rewrite (.*) $1/index.html break;
    }
    if (-f $request_filename.html) {
      rewrite (.*) $1.html break;
    }
    if (!-f $request_filename) {
      proxy_pass http://thinservers;
    }
  }
}


Worth noting

  1. Remember to run the assets pipeline first in your rails app, so that nginx can serve the static files (thin does not serve static files by default): rake assets:precompile RAILS_ENV=production

  2. Create the dirs "log" and "system" in rails_deployment_dir_path/shared/

Problems you might face

If you get "413 Request Entity Too Large", that's an nginx setting: in nginx.conf, insert this line into the http{...} section.

client_max_body_size 20m;


Ubuntu Hardy - Nginx, rails and thin

Useful commands:
sudo service nginx start/stop/restart
sudo service apache2 start/stop/restart
rake db:migrate RAILS_ENV=production
thin start -s 3 -p 5000 -e production
tail -f log/production.log






Let's Start!

gem install geocoder

or, in your Gemfile:

gem 'geocoder'

then bundle install


if request.location.country_code == 'CN' # China
  @video_link = ... # to Youku
else
  @video_link = ... # to YouTube
end
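The branch above can be exercised without Rails by stubbing the request object (the OpenStruct stub and the placeholder link values are illustrative assumptions; in a real controller, request.location comes from the geocoder gem):

```ruby
require 'ostruct'

# Pick a video host based on the visitor's country code, as in the
# controller snippet above. The returned symbols stand in for real URLs.
def video_link_for(request)
  if request.location.country_code == 'CN' # China
    :youku_link    # stand-in for the real Youku URL
  else
    :youtube_link  # stand-in for the real YouTube URL
  end
end

cn_request = OpenStruct.new(location: OpenStruct.new(country_code: 'CN'))
us_request = OpenStruct.new(location: OpenStruct.new(country_code: 'US'))

video_link_for(cn_request) # => :youku_link
video_link_for(us_request) # => :youtube_link
```

Keeping the branching in a small method like this also makes it trivial to unit-test without hitting the geolocation service.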


If you want to understand Geocoder better, the official API docs are very useful. And if you want to get familiar with it, open the rails console:

Geocoder # Module
Geocoder.search("China") # Array
Geocoder.search("China")[0].data # Hash
Geocoder.search("China")[0].data["address_components"][0]['short_name'] # "CN", equal to request.location.country_code

Assume the bitnami Rails stack is installed on ec2.

On the remote ec2 server:

mkdir ~/apps/myapp
cd ~/apps/myapp
git init --bare

On the local machine, in ~/.ssh/config:

Host host_name
HostName ip_address_or_domain
User bitnami    # bitnami by default
IdentityFile /path/to/key.pem

Make sure you can connect to your remote server with ssh host_name. Run ssh-add key.pem if necessary.

rails new myapp
cd myapp
in Gemfile add gem 'capistrano', then bundle install

git init .
git add .
git commit -m 'initial commit'

Install capistrano, then in your local application dir
capify .
# in your config/deploy.rb
require 'bundler/capistrano'

set :application, "app_name"

set :user, 'bitnami'                           # ssh login user
set :domain, 'ip'                              # domain or ip 
set :deploy_to_dir, "/home/bitnami/apps/myapp"
set :local_repository,  "/path/of/your/local/repository"
set :repository,  "/path/to/remote/repository" # should be /home/bitnami/apps/myapp in this tutorial
ssh_options[:keys] = ["/path/to/key.pem"]      # used to connect to remote server   
set :use_sudo, false                           # if turned on, every dir capistrano makes would be owned by root

set :branch, 'master'
set :scm, "git"

role :web, domain                          # Your HTTP server, Apache/etc
role :app, domain                          # This may be the same as your `Web` server
role :db,  domain, :primary => true        # This is where Rails migrations will run

set :deploy_to, deploy_to_dir
set :deploy_via, :export

default_run_options[:pty] = true

set :git_shallow_clone, 1
set :scm_verbose, true

after "deploy:restart", "deploy:cleanup"

one more thing:
git remote add ec2 host_name:/home/bitnami/apps/myapp

ok, let's deploy:
git push ec2 master # don't forget to push first
cap deploy

Having come this far, theoretically the rest should be easy. However, in my experience, it's still highly possible to get stuck at any step with any kind of error.

A common problem might be the shell environment. On one linux machine there may be several shell environments (sh, bash, csh, etc.), and different shell environments usually correspond to different rvm/ruby/rails paths. To solve this, you may want to pin down the shell environment used for deployment.

By default, capistrano uses sh. If you don't want that, set false in your deploy.rb:
default_run_options[:shell] = false
or you can specify your favorite:
default_run_options[:shell] = '/bin/bash' # I haven't tested this yet

Once in your favorite shell, there might still be a different rvm loading path. Try this; it works for me:
set :bundle_cmd, 'source $HOME/.bash_profile && bundle'


rails new myapp
cd myapp
spree install
bundle update
bundle install
spree bootstrap
...Would you like to install the default gateways? (Recommended) (yes/no) [yes] 
...Would you like to install the default authentication system? (yes/no) [yes] 
...Would you like to run the migrations? (yes/no) [yes] 
...Would you like to load the seed data? (yes/no) [yes] 
...Would you like to load the sample data? (yes/no) [yes]