Monthly Archives: October 2014

Experimenting with Celery and RabbitMQ

Celery is an open-source asynchronous distributed task queue that allows processing of vast amounts of messages. That’s a mouthful, of course, so let me explain it with a concrete example: the idea is that, for instance, activation emails for new sign-ups on your website are handled via tasks that are distributed to and executed concurrently on one or more servers. The way it works is that a task is sent over a message queue like RabbitMQ, which is also often referred to as a “Message Broker”. The servers that will execute the tasks, often referred to as “Workers”, listen for incoming tasks from the broker and execute them. Obviously, the benefit is that your main web application is offloaded and can continue normal operation, trusting that the tasks will be processed at a later time. I found the following a nice tutorial:

Below is a picture to make clear how this works in principle:

Celery Architecture 2

On the client side, a function is called to put your task onto the message queue. The worker machines are listening to the queue, and when an incoming task is available, the Celery daemon will execute that particular function.
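
To make the principle concrete without any infrastructure, here is a small self-contained Python sketch. It uses a plain in-process queue and a thread instead of RabbitMQ and a Celery worker, so it only illustrates the flow, not the real distributed setup:

```python
import threading
import queue

def multiply(x, y):
    return "The product is %d" % (x * y)

task_queue = queue.Queue()  # stands in for the RabbitMQ broker
results = []

def worker():
    # Stands in for the Celery worker daemon: pull tasks, run them.
    while True:
        func, args = task_queue.get()
        results.append(func(*args))
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

# "Client side": enqueue tasks and carry on; the worker executes them.
for _ in range(5):
    task_queue.put((multiply, (200, 200)))

task_queue.join()  # wait until the worker has processed everything
print(results[0])  # The product is 40000
```

The client never waits on an individual task; it only hands work to the queue, which is exactly the decoupling Celery gives you across machines.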

Here is an overview of my setup: Celery is installed on the client side, and on the server side we install both RabbitMQ and Celery.

Celery Architecture 1

Installing Celery is really simple. I followed these steps:

ubuntu@ubuntu-celery-client:~/Celery$ sudo apt-get update
ubuntu@ubuntu-celery-client:~/Celery$ sudo apt-get install python-pip
ubuntu@ubuntu-celery-client:~/Celery$ sudo pip install Celery

The above commands are run on the client side. Obviously, you also have to repeat them on all the distributed servers that are going to execute the tasks.

Installing RabbitMQ is also not very difficult. Do the following:

ubuntu@ubuntu-celery-server:~/Celery$ sudo apt-get update 
ubuntu@ubuntu-celery-server:~/Celery$ sudo apt-get install rabbitmq-server

Now, we need to configure RabbitMQ. For simplicity, I will use a user “ubuntu” with password “ubuntu”:

ubuntu@ubuntu-celery-server:~/Celery$ sudo rabbitmqctl add_user ubuntu ubuntu
ubuntu@ubuntu-celery-server:~/Celery$ sudo rabbitmqctl add_vhost vhost_ubuntu
ubuntu@ubuntu-celery-server:~/Celery$ sudo rabbitmqctl set_permissions -p vhost_ubuntu ubuntu ".*" ".*" ".*"

Now that both Celery and RabbitMQ are installed and configured properly, let’s create an easy example of how this all works.

On the client side, we write the following script:

from celery.execute import send_task

results = []

for x in range(1, 100):
    # The loop body was missing from the original post; the call below is
    # inferred from the worker log, which shows tasks.multiply producing
    # "The product is 40000" (i.e. 200 * 200).
    results.append(send_task("tasks.multiply", [200, 200]))

Note that the following snippet would also work, and I even consider it a bit cleaner:

from celery import Celery

results = []
celery = Celery()

for x in range(1, 100):
    # As above, the loop body was missing; we send the task by name, with
    # the values inferred from the worker log.
    results.append(celery.send_task("tasks.multiply", [200, 200]))

In the above snippets, we queue up a batch of multiplications. Instead of doing them in the same process, we send these tasks to a different server that takes care of the execution. To execute a task remotely, we use Celery’s send_task command.

On the server side, we write the task script:

from celery.task import task

@task()
def multiply(x, y):
    multiplication = x * y
    return "The product is " + str(multiplication)

Before we can execute the scripts, we need to tell Celery where the broker can be found. We do this by creating a configuration file (Celery’s default loader looks for a module named celeryconfig.py) that contains the following content:

BROKER_HOST = ""  # IP address of the server B, which is running RabbitMQ and Celery
BROKER_USER = "ubuntu"  # username for RabbitMQ
BROKER_PASSWORD = "ubuntu"  # password for RabbitMQ
BROKER_VHOST = "vhost_ubuntu"  # vhost as configured on the RabbitMQ server

This file is stored on both servers in the same directory as your other scripts.
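
As an aside, Celery 3.x also accepts these four settings collapsed into a single broker URL. A sketch of the equivalent configuration, with the elided host written as a placeholder:

```python
# Equivalent single-setting form for the same broker configuration.
# <server-ip> is a placeholder for the RabbitMQ server's address.
BROKER_URL = "amqp://ubuntu:ubuntu@<server-ip>/vhost_ubuntu"
```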

On the server side, we ensure that Celery is running:

ubuntu@ubuntu-celery-server:~/Celery$ celery worker -l info

 -------------- celery@ubuntu-celery-7696e291-37d9-4d0a-802e-fcc046d9e72d v3.1.16 (Cipater)
---- **** ----- 
--- * ***  * -- Linux-3.13.0-36-generic-x86_64-with-Ubuntu-14.04-trusty
-- * - **** --- 
- ** ---------- [config]
- ** ---------- .> app:         default:0x7f6764948750 (.default.Loader)
- ** ---------- .> transport:   amqp://ubuntu:**@
- ** ---------- .> results:     amqp
- *** --- * --- .> concurrency: 1 (prefork)
-- ******* ---- 
--- ***** ----- [queues]
 -------------- .> celery           exchange=celery(direct) key=celery
  . tasks.multiply

[2014-10-29 13:22:08,380: INFO/MainProcess] Connected to amqp://ubuntu:**@
[2014-10-29 13:22:08,389: INFO/MainProcess] mingle: searching for neighbors
[2014-10-29 13:22:09,399: INFO/MainProcess] mingle: all alone
[2014-10-29 13:22:09,410: WARNING/MainProcess] celery@ubuntu-celery-7696e291-37d9-4d0a-802e-fcc046d9e72d ready.

The server is now configured and waiting for tasks to execute. On the client, we execute the file:

ubuntu@ubuntu-celery-client:~/Celery$ python 

If all is well, you should see output that is similar to the one below:

[2014-10-29 13:23:53,169: INFO/MainProcess] Received task: tasks.multiply[ec6273e2-2adf-4a98-b3ab-7d2b95bb72df]
[2014-10-29 13:23:53,176: INFO/MainProcess] Received task: tasks.multiply[c94d8e5a-4afc-4920-916f-b33fca0dc94c]
[2014-10-29 13:23:53,186: INFO/MainProcess] Received task: tasks.multiply[8cdcb1de-31f5-455c-b785-19d8eb9281f2]
[2014-10-29 13:23:53,187: INFO/MainProcess] Received task: tasks.multiply[5ecb8a03-2af4-4d6f-ab2f-e8b0f4398f54]
[2014-10-29 13:23:53,188: INFO/MainProcess] Received task: tasks.multiply[1d8c3efb-ad20-42e9-976b-34b8be0a5e39]
[2014-10-29 13:23:53,205: INFO/MainProcess] Task tasks.multiply[ec6273e2-2adf-4a98-b3ab-7d2b95bb72df] succeeded in 0.0337770140031s: 'The product is 40000'
[2014-10-29 13:23:53,208: INFO/MainProcess] Received task: tasks.multiply[5c42dac8-2f4f-4639-9089-d91b2873dff1]
[2014-10-29 13:23:53,219: INFO/MainProcess] Task tasks.multiply[c94d8e5a-4afc-4920-916f-b33fca0dc94c] succeeded in 0.0136614609946s: 'The product is 40000'
[2014-10-29 13:23:53,221: INFO/MainProcess] Received task: tasks.multiply[a72e756b-1e99-4455-ad18-4110cbfd3e1e]
[2014-10-29 13:23:53,224: INFO/MainProcess] Task tasks.multiply[8cdcb1de-31f5-455c-b785-19d8eb9281f2] succeeded in 0.00538706198859s: 'The product is 40000'
[2014-10-29 13:23:53,226: INFO/MainProcess] Received task: tasks.multiply[004296e3-f931-4075-ba83-09a7804b5e49]
[2014-10-29 13:23:53,229: INFO/MainProcess] Task tasks.multiply[5ecb8a03-2af4-4d6f-ab2f-e8b0f4398f54] succeeded in 0.00483364899992s: 'The product is 40000'
[2014-10-29 13:23:53,231: INFO/MainProcess] Received task: tasks.multiply[18172600-70f8-4402-a923-db02d71718a5]
[2014-10-29 13:23:53,235: INFO/MainProcess] Task tasks.multiply[1d8c3efb-ad20-42e9-976b-34b8be0a5e39] succeeded in 0.00486789099523s: 'The product is 40000'

You can see that the first five tasks were retrieved from the queue, after which the worker started to execute them successfully, each time displaying the multiplication result.

Deploying Sinatra app from Github to Heroku

I recently received a question from one of my readers about how to deploy something existing (on Github) to Heroku. He had noticed that I was posting most of my code on Github, and he wanted to know how to get it off Github and deployed to Heroku.

In this post, we created a REST API for our todo app. Let’s take that as an example. Here we go…

The code from that post can be found on Github. We use git clone to grab a copy for making some local changes.

wim@wim-mint ~/ $ git clone
Cloning into 'Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON'...
remote: Counting objects: 43, done.
remote: Compressing objects: 100% (39/39), done.
remote: Total 43 (delta 0), reused 43 (delta 0)
Unpacking objects: 100% (43/43), done.
Checking connectivity... done.

You will see that a new directory has been created called ‘Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON’. Go into that directory and do the following:

wim@wim-mint ~/Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON $ heroku create todo-restserver
Creating todo-restserver... done, stack is cedar |
Git remote heroku added

To add the Postgres database, do the following:

wim@wim-mint ~/Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON $ heroku addons:add heroku-postgresql
Adding heroku-postgresql on todo-restserver... done, v4 (free)
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pgbackups:restore.
Use `heroku addons:docs heroku-postgresql` to view documentation.

You can also see in the previous CLI output that Heroku makes the database location available in the environment variable “HEROKU_POSTGRESQL_ONYX_URL”. We can now change the database location in our application’s main.rb file:

configure :production do
  DataMapper.setup(:default, ENV['HEROKU_POSTGRESQL_ONYX_URL'])
end

This tells our application to store all todo items in the Postgres database located at the URL specified in the environment variable ENV[‘HEROKU_POSTGRESQL_ONYX_URL’].
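
The underlying pattern is simply “read the production database URL from the environment”. A small illustrative Python sketch of the same idea (the development fallback URL is made up for the example):

```python
import os

def database_url(environment):
    # In production, Heroku injects the URL via an environment variable
    # when the postgres addon is attached; locally we fall back to a default.
    if environment == "production":
        return os.environ.get("HEROKU_POSTGRESQL_ONYX_URL")
    return "postgres://localhost/todos_dev"  # assumed local default

os.environ["HEROKU_POSTGRESQL_ONYX_URL"] = "postgres://example"  # simulate Heroku
print(database_url("production"))  # postgres://example
```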

wim@wim-mint ~/Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON $ git push heroku master
Initializing repository, done.
Counting objects: 47, done.
Compressing objects: 100% (43/43), done.
Writing objects: 100% (47/47), 88.46 KiB | 0 bytes/s, done.
Total 47 (delta 3), reused 0 (delta 0)

-----> Ruby app detected
-----> Compiling Ruby/Rack
-----> Using Ruby version: ruby-2.0.0
-----> Installing dependencies using 1.6.3
       Running: bundle install --without development:test --path vendor/bundle --binstubs vendor/bundle/bin -j4 --deployment
       The source :rubygems is deprecated because HTTP requests are insecure.
       Please change your source to '' if possible, or '' if not.
       Fetching gem metadata from
       Installing addressable 2.3.6
       Installing fastercsv 1.5.5
       Installing json_pure 1.8.1
       Installing multi_json 1.10.0
       Installing stringex 1.5.1
       Installing uuidtools 2.1.4
       Installing rack 1.5.2
       Installing json 1.8.1
       Using bundler 1.6.3
       Installing tilt 1.4.1
       Installing bcrypt-ruby 3.1.1
       Installing data_objects 0.10.14
       Installing rack-protection 1.5.3
       Installing dm-core 1.2.1
       Installing dm-aggregates 1.2.0
       Installing sinatra 1.4.5
       Installing dm-constraints 1.2.0
       Installing dm-migrations 1.2.0
       Installing dm-serializer 1.2.2
       Installing dm-timestamps 1.2.0
       Installing dm-transactions 1.2.0
       Installing dm-types 1.2.2
       Installing dm-validations 1.2.0
       Installing dm-do-adapter 1.2.0
       Installing sinatra-flash 0.3.0
       Installing datamapper 1.2.0
       Installing do_postgres 0.10.14
       Installing dm-postgres-adapter 1.2.0
       Your bundle is complete!
       Gems in the groups development and test were not installed.
       It was installed into ./vendor/bundle
       Bundle completed (8.50s)
       Cleaning up the bundler cache.

###### WARNING:
       You have not declared a Ruby version in your Gemfile.
       To set your Ruby version add this line to your Gemfile:
       ruby '2.0.0'
       # See for more information.

###### WARNING:
       No Procfile detected, using the default web server (webrick)

-----> Discovering process types
       Procfile declares types -> (none)
       Default types for Ruby  -> console, rake, web

-----> Compressing... done, 13.7MB
-----> Launching... done, v6 deployed to Heroku

 * [new branch]      master -> master

Browsing to the app now results in a “500 Internal Server Error”. This was somewhat expected, as we still need to create the database on Heroku:

wim@wim-mint ~/Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON $ heroku run rake migrate
Running `rake migrate` attached to terminal... up, run.5342
The source :rubygems is deprecated because HTTP requests are insecure.
Please change your source to '' if possible, or '' if not.

If you now request /todos, you will get the empty array []. Again, this was expected, as the database was just created and there are no records inserted yet.

In some of the next posts, we will create a REST client that consumes this REST web service.

Keep reading!

B4B instead of B2B

Why is this interesting

In the past decades, and even now, most companies were (and are) built around the traditional model of “make, sell and ship” (B2B). These companies sell a product. This product needs to be installed, so they sell some services along with it. They also often sell expensive support contracts that allow them to generate recurring revenue streams.

The talk discusses the need for a shift from B2B to B4B. The idea behind it is that businesses don’t want to be sold a product (or “solution”), they want to achieve a specific outcome. B4B encourages suppliers to stop thinking about how they can sell more product (as per the existing B2B sales model) and start thinking about understanding and actively working with customers to arrive at a specific end result.

The video below explains how to go from the traditional B2B model to the B4B model. Learn how Kodak was synonymous with photography in the past and how it missed the move to the digital world: Kodak had all of the original innovation, yet turned its back on digital photography. Learn how Blockbuster was overtaken by Netflix in the video world and how Netflix continuously re-invents itself.

Watch the video

AWS account hacked…the verdict

In this post I described how somebody got unauthorised access to my AWS account and started 160 EC2 instances. I was called by Amazon and started a support case, to which I received an initial reply as described in this post.

A week has passed, and I have been sending some mails back and forth with the support team about the fact that I was receiving late payment notifications. They assured me this was absolutely normal while the case went through the approval chain.

Today, 9th of October, I finally received some fantastic news. Read below part of the mail I received:

Hi Wim,

I have some fantastic news today! Our request to waive the unauthorized charges from your account has gone through all my upper levels of management, and they’ve approved a $32,830.80 (before tax) waiver as a one-time courtesy to your account! I’ve already applied the waiver, which brings your balance due for September back down to $0.47, the legitimate charges for that month. I’ve set that charge to run against your card within the next hour, and I’ll know soon whether it was successful or not, after which the remaining unauthorized charge balance will automatically be waived. Thank you so much for your patience while we worked through this issue for you. I’m just glad it came to a happy resolution!

I don’t want to resolve your case prematurely in case you have any more questions, but I think you should be good to go! You can now feel free to close out your case, or let me know if there’s anything else I can help with. Thanks!

Man, I can’t tell you how relieved I am. In the past days, I had been talking to friends and colleagues about it, and people told me chances were high that Amazon would waive this huge amount. I myself was not entirely convinced, as somebody needs to end up paying for the resources used.

Now that I have the final approval that the huge amount is going to be waived soon, I can only say that I’m a happy person. I must admit that my respect for Amazon has grown tremendously. Last summer, I read Brad Stone’s book about Amazon, “The Everything Store” (can be found on Amazon here). That book describes Amazon’s history, and I indeed read that Jeff Bezos put “Customer Satisfaction” as the highest priority within Amazon. But those are of course just words in a book. Today, I can only conclude that this is effectively true.

I’m really amazed by the way the AWS security and support teams have guided me through this entire process. They kept me informed about how I could secure my account again, they took time (a lot of time) to walk me through some crucial steps to secure my account, they re-assured me many times that my account was safe again, and they updated me frequently about the progress. But above all, they treated this case very professionally, showing a lot of respect for somebody who was concerned about the huge billable amount.

Send SMS via Twilio

While I was looking for a tool to send SMS text messages to my customers, I came across Twilio. It seemed pretty straightforward to integrate into my applications, so I decided to write a small script to test this service. This post describes some easy steps to get started with Twilio; in particular, we write a small piece of Ruby code (and a Python variant) that sends an SMS to an array of cell phone numbers.

Let’s get started

1) Go to and sign up for a free account
2) Get yourself a free trial phone number
3) Note down your AccountSID and AuthToken as well as your phone number

Here is the little script. Call it sms.rb or similar and execute it using “ruby sms.rb”. If all goes well, you will receive a text message from the Twilio phone number.

require 'rubygems'
require 'twilio-ruby'

account_sid = ""
auth_token = ""
client = Twilio::REST::Client.new account_sid, auth_token
from = "+14846624263" # Your Twilio number

friends = {
  "+32473xxxxxx" => "Wim",
  "+32485xxxxxx" => "Iris"
}

friends.each do |key, value|
  client.account.sms.messages.create(
    :from => from,
    :to => key,
    :body => "Hey #{value}, how are you?"
  )
  puts "Sent message to #{value}"
end

This is the Python variant:

from twilio.rest import TwilioRestClient
account_sid = "AC2b99feef2c2fcaad4bea74b969cfb35c"
auth_token = "dab2b9ce1d0b0f61e48982afce151552"

client = TwilioRestClient(account_sid, auth_token)

sms = client.sms.messages.create(body="All in the game",
    to="+32473xxxxxx5",   #Your phone number
    from_="+14846624263") #Your Twilio number

print sms.body

Sinatra Todo app with Datamapper using Postgres and JSON

This blog post really continues on the previous one. In that post, we created a Todo app using Sinatra and Postgres, but we returned ERB files. Essentially, our app returned HTML files that immediately contained all the data in a proper Bootstrap format.

In that case, the server part and the client part are really tied to each other, which might not be what you always want. More and more, applications are divided to clearly split the server from the client. In such scenarios, the server typically exposes a REST API, while the client parses the JSON or XML that is sent back by the REST server. In this post, we will create such a REST server that returns JSON-style responses to a client. In later posts, we will then create clients using some Javascript frameworks or mobile applications.

Compared to the previous post, the only real change is that the routes.rb file in the ‘routes’ folder now responds with JSON rather than referring to the ERB files. The routes.rb file now looks like this:

  get "/" do
    format_response(Todo.all, request.accept)
  end

  get "/todos" do
    format_response(Todo.all, request.accept)
  end

  get "/todos/:id" do
    todo ||= Todo.get(params[:id]) || halt(404)
    format_response(todo, request.accept)
  end

  post "/todos" do
    body = JSON.parse(request.body.read)
    todo = Todo.create(
      content: body['content']
    )
    status 201
    format_response(todo, request.accept)
  end

  put '/todos/:id' do
    body = JSON.parse(request.body.read)
    todo ||= Todo.get(params[:id]) || halt(404)
    halt 500 unless todo.update(
      content:      body['content'],
      completed_at: body['done'] ? Time.now : nil,
      done:         body['done'] ? true : false
    )
    format_response(todo, request.accept)
  end

  delete '/todos/:id' do  # original snippet read '/api/movies/:id'; assumed a copy-paste slip
    todo ||= Todo.get(params[:id]) || halt(404)
    halt 500 unless todo.destroy
  end

The full application can be found here in case you want to see a completed example. Let’s go ahead and run our app.

wim@wim-mint ~/Sinatra_Todo_Postgres_Datamapper_structure_json $ rake migrate
 ~ (0.000721) PRAGMA table_info("todos")
 ~ (0.000015) SELECT sqlite_version(*)
 ~ (0.013478) DROP TABLE IF EXISTS "todos"
 ~ (0.000045) PRAGMA table_info("todos")
 ~ (0.004265) CREATE TABLE "todos" ("id" INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, "content" VARCHAR(255) NOT NULL, "done" BOOLEAN DEFAULT 'f' NOT NULL, "completed_at" TIMESTAMP, "created_at" TIMESTAMP, "updated_at" TIMESTAMP)
wim@wim-mint ~/Sinatra_Todo_Postgres_Datamapper_structure_json $ shotgun
== Shotgun/WEBrick on
[2014-09-12 11:44:17] INFO  WEBrick 1.3.1
[2014-09-12 11:44:17] INFO  ruby 2.1.2 (2014-05-08) [x86_64-linux]
[2014-09-12 11:44:17] INFO  WEBrick::HTTPServer#start: pid=5161 port=9393

Going to http://localhost:9393/todos will show just an empty pair of brackets. This means our app is working: since there is no data in the database yet, it returns an empty result set. You could also verify this with a REST client. To do so, we’ll use the POSTMAN extension for the Chrome browser, which is an excellent graphical tool in case you don’t want to use cURL (which is perfectly well suited to test our REST API if you’re more into the CLI mindset).
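
If you prefer scripting the check over Postman or cURL, a minimal Python client could look like this. The URL matches the local shotgun server above; fetch_todos needs a live server, so the example at the bottom only exercises the parsing:

```python
import json
try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen        # Python 2

BASE_URL = "http://localhost:9393"  # the local shotgun server from above

def parse_todos(raw_json):
    """Turn the server's JSON reply into a list of content strings."""
    return [item["content"] for item in json.loads(raw_json)]

def fetch_todos():
    # Requires the Sinatra app to be running locally.
    return parse_todos(urlopen(BASE_URL + "/todos").read().decode("utf-8"))

# Empty database, as in the post: the server returns "[]".
print(parse_todos("[]"))  # []
print(parse_todos('[{"content": "buy milk", "done": false}]'))  # ['buy milk']
```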

In the below screenshot, you can see that we did a GET on http://localhost:9393/todos and again got back the empty array []. The fact that you can do this with any REST client shows that we really made our server app independent from the client app.


Let’s add some data to the database using the Postman REST client. The model we are using is in the below snippet. The ‘id’ will be created automatically, the content field is mandatory, and the done field will be set to false by default, indicating that a todo item is never already completed when it is first inserted into the database. The date fields are not required, but will be set automatically in our code later on.

class Todo
  include DataMapper::Resource

  property :id,           Serial,   key: true, unique_index: true
  property :content,      String,   required: true, length: 1..255
  property :done,         Boolean,  default: false, required: true
  property :completed_at, DateTime
  property :created_at,   DateTime
  property :updated_at,   DateTime
end
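
The defaulting behaviour this model gives us can be mimicked in a few lines of Python, purely as an illustration (new_todo is a made-up helper, not part of the app):

```python
from datetime import datetime

def new_todo(payload, next_id):
    """Mimic the DataMapper defaults: content is required, done starts False."""
    if not payload.get("content"):
        raise ValueError("content is required")  # String, required: true
    now = datetime.utcnow()
    return {
        "id": next_id,            # Serial: assigned automatically
        "content": payload["content"],
        "done": payload.get("done", False),  # Boolean, default false
        "completed_at": None,     # never completed on first insert
        "created_at": now,
        "updated_at": now,
    }

todo = new_todo({"content": "buy milk"}, 1)
print(todo["done"])  # False
```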

However, in the below snippet from our routes.rb file, you see that the app expects some data, like the content. Note that you don’t have to supply the dates (created_at and updated_at), since they will be automatically set to the current time in our code.

  post "/todos" do
    body = JSON.parse(request.body.read)
    todo = Todo.create(
      content: body['content']
    )
    status 201
    format_response(todo, request.accept)
  end

So, we wanted to insert some todo items into our database. See the below screenshot on how to achieve this:

Note that the server replies with a success status (“201 Created”, as set via status 201 in the route), which means the item was inserted successfully. If you’re not convinced by the success message in Postman, you can also verify that this worked by doing a GET request to /todos, as shown in the below example.

Noteworthy is that in the ‘routes.rb’ snippet above, we refer to the format_response helper method. It can be found in the ‘helpers/response_format.rb’ file, which looks like this:

require 'sinatra/base'

module Sinatra
  module ResponseFormat
    def format_response(data, accept)
      accept.each do |type|
        return data.to_xml  if type.downcase.eql? 'text/xml'
        return data.to_json if type.downcase.eql? 'application/json'
      end
      return data.to_json
    end
  end

  helpers ResponseFormat
end

In the above snippet, ‘format_response(Todo.all, request.accept)’ passes all the todo items to the helper, and the output format depends on the Accept header of the request. This is pretty cool: if your client application (for some reason) prefers to receive XML instead of JSON, that can be achieved easily.

This means that when we set the Accept header to ‘text/xml’ in the REST client (be it Postman or your own client application), it will be caught by our format_response function, where the accept parameter will contain ‘text/xml’; hence our app will return XML data via the data.to_xml return statement. Isn’t this pretty neat?
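
The negotiation logic is small enough to sketch in Python as well (to_xml here is a deliberately naive stand-in for the Ruby data.to_xml):

```python
import json

def to_xml(items):
    # Naive XML rendering, for illustration only.
    body = "".join("<todo>%s</todo>" % item["content"] for item in items)
    return "<todos>%s</todos>" % body

def format_response(data, accept_types):
    # First recognised Accept type wins; JSON is the fallback.
    for t in accept_types:
        if t.lower() == "text/xml":
            return to_xml(data)
        if t.lower() == "application/json":
            return json.dumps(data)
    return json.dumps(data)

todos = [{"content": "buy milk"}]
print(format_response(todos, ["text/xml"]))          # <todos><todo>buy milk</todo></todos>
print(format_response(todos, ["application/json"]))  # [{"content": "buy milk"}]
```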

I have made the code available on Github (Sinatra-Todo-app-with-Datamapper-using-Postgres-and-JSON) so you can have a look how it all ties together.