Monthly Archives: September 2014

AWS account: first feedback received

In this post, I described how I found out that somebody had been using my AWS account for their own benefit. I have now received a first reply to my case:

This is **** in the Escalations team at AWS Customer Service. I’m terribly sorry for any concern the unauthorized usage of your account may have caused, but it looks like you were able to terminate those compromised resources and re-secure your account again with help from my associate Nathan. That’s great! I will be handling your case from this point forward, and I’ll be sure to keep you updated along the way as our request to waive the unauthorized charges on your account progresses through the various levels of approval.

It doesn’t say a lot, but at least someone is taking this seriously and will guide me through the entire process. Let’s hope all of this gets a happy ending. To be honest, I’m still not sure it will at this point.

AWS account hacked


I’ve been a loyal user of AWS for years; I simply like the concept of doing everything in the cloud and I’ve always believed they have an amazing set of services to offer. Although I’m not a heavy user at all (I typically spend less than 10 USD per month), I use it quite frequently. I launch an EC2 instance every now and then to run an experiment, I put my photos on S3, I’m developing against Elastic Transcoder…you know, the usual stuff geeks do.

On Wednesday, 24th September, I received an email from Amazon saying the following:

Greetings from Amazon Web Services.

Your security is important to us and we have detected suspicious activity on your Amazon Web Services account ending in 9123. We currently see charges of 1.14 due to increased EC2 usage.

Please log into your AWS Management Console at, check if all usage is authorized, and delete all unauthorized resources. Please pay special attention to the running EC2 instances and IAM users, roles, and groups (please check all regions – to switch between regions use the drop-down in the top-right corner of the management console screen).

You must also change your AWS account password and rotate and delete your old AWS access credentials.

Also, please make sure that you never share your AWS Access Key ID as well as your AWS Secret Access Key with anyone and never publish them in an environment where other people have access to them. In addition, industry best practice recommends frequent access key rotation. Exposing your credentials would allow other people to access your account and you will be responsible for the billing charges for their usage.

If you are unable to delete your AWS access key and stop any unauthorized usage within a reasonable time, we may need to suspend your account to protect you from unauthorized charges. If you have verified that all usage is authorized and you accept the billing for this usage, please respond to this email and confirm.

…..and so on

No big deal, I thought; in the end the charges were still very low. So I logged into my AWS account and saw that indeed 20 EC2 instances had been launched. What the heck…who did this? I immediately terminated these instances, changed my password and deleted all of my AWS keys. I felt pretty safe again and could sleep soundly knowing a hacker wouldn’t get access to my account again. Case closed…or so I thought!

Another surprise

Then last Saturday evening, the 27th of September, I received a call from the AWS security department. To be honest, I wasn’t really in the mood to pick up the phone, so I decided not to take the call immediately. Who does, on a Saturday evening, while having dinner with the family? After all, I had changed everything I could change and everything was secure again, nothing to worry about. I decided I would take care of it after the weekend.

That same number called me a couple of times in less than 10 minutes, so I figured it was probably not that innocent after all and eventually decided to pick up. I’m glad I did…or actually, thinking about it now, I wasn’t. I would soon find out I had no reason to be happy at all. A lady from AWS security informed me that they’d detected some suspicious activity on my AWS account. Obviously, I knew this already because I had received the email a couple of days before, and that’s what I told her on the phone. Then she told me the account was displaying a billable amount of close to 40,000 USD!!! Needless to say, my heart rate went up immediately. See the screenshot below in case you don’t believe it.


I simply couldn’t believe it, as I had killed those 20 EC2 instances on Wednesday already, so what else could it be? Then the lady on the phone told me to check each AWS region, and effectively…I saw 20 EC2 instances in each region. There are 8 regions in total, so the hacker had launched 160 EC2 instances!! After 4 days, these servers had racked up a billable amount of close to 40,000 USD. To be honest, I was shocked; as far as I know I had not exposed any of my AWS credentials anywhere. She advised me to open a support case immediately, which I did. I asked the case manager to call me back, and they did in less than 5 minutes.
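As a back-of-the-envelope sanity check (my own arithmetic, not figures from AWS), the numbers are consistent with the hacker running large instance types around the clock:

```python
# Rough estimate of the hijacked capacity; the hourly rate is derived,
# everything else comes from the story above.
instances_per_region = 20
regions = 8
total_instances = instances_per_region * regions   # 160 instances

hours_running = 4 * 24        # roughly 4 days, Wednesday to Saturday
total_bill = 40000.0          # approximate billable amount in USD

# Implied price per instance-hour
hourly_rate = total_bill / (total_instances * hours_running)
print(round(hourly_rate, 2))  # about 2.6 USD/hour
```

An implied rate of roughly 2.6 USD per instance-hour is in the range of the largest compute-heavy instance types of the time, which is exactly what coin miners go for.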

The support engineer calmed me down a bit by mentioning that this has happened to other people as well. I spent about an hour on the call with him and we went through my AWS account. In fact, it was a walkthrough of how to make the account secure again, and he basically told me all the things I had already done in the past couple of days. He also told me he would start a procedure to waive the costs on my account. This would take 7 to 14 days, and he could not guarantee the costs would be waived; that depends on upper management, he told me. He also told me not to be too concerned in the coming days, although he could not make any promises that the outstanding sum would be waived.

Now, a couple of days later, I have to admit that it keeps me busy despite the gentle words of the AWS engineer. I’m struggling with a lot of questions: what if I need to pay that amount, how did this happen, why didn’t I look into all the different regions the first time… The rational ‘me’ thinks they will not charge me, as it’s clear this is the work of a hacker. On the other hand, it’s just not so obvious that a large company like Amazon will easily waive such a huge amount. In the end, the resources (in this case EC2 instances) have been used, and somebody will need to pay for them.

Stay tuned….

Update 01 October: I googled a bit and it seems other people have had the same issue. Apparently these hackers are doing this for Litecoin mining.

Sinatra Todo app with Datamapper using Postgres and ERB

In my previous post, I built a Todo app based on Sinatra with DataMapper and SQLite. For this post, I will reuse the Todo app, but with a Postgres database instead. It’s amazingly simple to achieve this using DataMapper. The reason this is important is that platforms like Heroku do not support SQLite, but they do support Postgres.

As a nice add-on, I will also create a better structure for my Sinatra apps in general. I don’t like the fact that everything is packed into a single app.rb file. Let’s get started!

Changing to Postgres

configure :development do
  # local development database; adjust the connection string to your setup
  DataMapper.setup(:default, "postgres://localhost/todo")
end

configure :production do
  # on Heroku, DATABASE_URL points to the attached Postgres database
  DataMapper.setup(:default, ENV['DATABASE_URL'])
end

As a comparison, in my previous post using SQLite it used to be:

require "sinatra"
require "data_mapper"

DataMapper.setup(:default, ENV['DATABASE_URL'] || "sqlite3://#{Dir.pwd}/todo.db")

And that’s really all there is to changing from SQLite to Postgres.

Applying the new structure

So far, all our changes were done in a single app.rb file. While this is a perfectly good solution for Sinatra, it can become messy and it’s easy to lose the overview. So let’s re-structure things a bit:


You can see we created a ‘routes’ folder and a ‘models’ folder. The ‘routes’ folder will contain all the code related to the routing of our app, while the ‘models’ folder will contain all our models in separate files.

Each folder also contains an init.rb file, which requires the other files in the directory it resides in. In other words, the init.rb file in the ‘models’ folder contains:

require_relative './todo'

Using the require_relative method, we are importing the todo.rb file from the ‘models’ folder.

The main.rb file is the file that keeps everything glued together.

# encoding: UTF-8
require 'json'
require 'sinatra'
require 'data_mapper'
require 'dm-migrations'
require 'sinatra/flash'

enable :sessions

configure :development do
  # local development database; adjust the connection string to your setup
  DataMapper.setup(:default, "postgres://localhost/todo")
end

configure :production do
  # on Heroku, DATABASE_URL points to the attached Postgres database
  DataMapper.setup(:default, ENV['DATABASE_URL'])
end

require './models/init'
require './helpers/init'
require './routes/init'


This main file contains the DataMapper setup and also includes the model files, the helper files and the route files. For each, it refers to the appropriate folder.

While this is merely some reshuffling of our code base, it really provides a better structure to keep an overview of your app. At the very least, all route files are grouped, all models are grouped, and so on.

In order to run this application, go to your console and run “rake migrate” (after you have configured Postgres on your local machine, of course), followed by “shotgun”.

The full source code can be found on GitHub.

Sinatra Todo app with Datamapper using Sqlite

In this blog post, I’m continuing with the Todo application that was built in this post and this post, but instead of using the ActiveRecord ORM, I will be using the DataMapper ORM. An example of the final result is shown in the picture above.

The first thing to do is to configure DataMapper. To achieve this, you need to include the datamapper gem as well as the sqlite3 and dm-sqlite-adapter gems in your Gemfile. The Gemfile I’m using looks as follows:

source :rubygems
gem "sinatra"
gem "sqlite3"
gem "datamapper"
gem "dm-sqlite-adapter"
gem 'rack-flash', '0.1.2'

group :development do
  gem "shotgun"
  gem "tux"
end

To actually set up DataMapper, you will need to do the following in the app.rb file:

require "sinatra"
require "data_mapper"

DataMapper.setup(:default, ENV['DATABASE_URL'] || "sqlite3://#{Dir.pwd}/todo.db")

The above tells DataMapper to set up an SQLite database called todo.db in the current directory.

The next thing to do is to define the model for the database. In the snippet below, you can see that we create a class Todo (really a database table) that has some fields like id, content and done, and then two DateTime fields to keep track of when a todo item was created and when it was finished. Also notice the id is of type Serial, which is DataMapper’s way of saying this is an auto-incrementing primary key.

class Todo
  include DataMapper::Resource
  property :id,           Serial
  property :content,      String
  property :done,         Boolean,  :default => false
  property :completed_at, DateTime
  property :created_at,   DateTime
end

DataMapper.finalize

The DataMapper.finalize method is used to check the integrity of all the models you defined. It should be called after all your models have been created and before your app starts interacting with them.

Next, we will need to define the routes for the CRUD operations.

  get "/todos/?" do
    @todos = Todo.all(:order => :created_at.desc)
    erb :"todo/index"
  end

For your information, the corresponding method to retrieve all todo items using ActiveRecord can be seen in the snippet below. Not a lot of difference, all in all.

  get "/todos" do
    @todos = Todo.order("created_at DESC")
    erb :"todo/index"
  end

In the above code snippet, we render an ERB file called index in the todo folder under the views directory. Go ahead and create this todo folder. The index file looks as follows:

<div class="control-group">
  <a href="/todos/new" class="btn btn-primary">Add todo item</a>
</div>

<table class="table table-bordered">
  <thead>
    <tr>
      <th>Id</th>
      <th>Todo item</th>
      <th>Created</th>
      <th>Completed</th>
      <th>Status</th>
      <th>Actions</th>
    </tr>
  </thead>
  <tbody>
    <% @todos.each do |todo| %>
      <tr>
        <td><%= todo[:id] %></td>
        <td><%= todo[:done] ? "<del>#{todo[:content]}</del>" : todo[:content] %></td>
        <td><%= pretty_date(todo[:created_at]) %></td>
        <td><%= pretty_date(todo[:completed_at]) %></td>
        <td>
          <% if todo[:done] %>
            <span class="label label-success">Completed</span>
          <% else %>
            <span class="label label-warning">Pending</span>
          <% end %>
        </td>
        <td>
          <a href="/todos/edit/<%= todo[:id] %>" class="btn btn-primary">Edit</a>
          <a href="/todos/delete/<%= todo[:id] %>" class="btn btn-danger">Delete</a>
        </td>
      </tr>
    <% end %>
  </tbody>
</table>

You’ll see that I define a table using some Bootstrap styling. This table has a heading (thead) defining all the columns and a body (tbody) containing all the rows (tr) with the data retrieved from the Todo model.

Note: before you can run this app locally, you have to create the database. To do so, run “rake migrate” in your console before you start the app using “shotgun”.

For the complete example, including all routes and views, I refer to my GitHub repository. You can find it here.

Webservice for system info: client part


In a previous post, we created a webservice that allows us to store system information such as the public and private IP address or the system name of your computer. In this post, we will develop a small client to use the server. As a reminder, these are the endpoints the service exposes:

GET /ip               Retrieves the list of all systeminfo from the database
GET /ip/:id           Retrieves a single systeminfo entry from the database
POST /ip              Inserts a systeminfo item into the database
DELETE /ip/:id        Deletes a systeminfo item from the database
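To make the mapping concrete, here is a small sketch (the base URL is a placeholder of my own, not the real deployment) translating each action from the table into the method, URL and body a client would send:

```python
import json

BASE_URL = "http://localhost:4567"  # placeholder; substitute your own deployment

def request_for(action, item_id=None, payload=None):
    # Map each CRUD action from the table above to (method, url, body)
    if action == "list":
        return ("GET", BASE_URL + "/ip", None)
    if action == "show":
        return ("GET", "%s/ip/%s" % (BASE_URL, item_id), None)
    if action == "create":
        return ("POST", BASE_URL + "/ip", json.dumps(payload))
    if action == "delete":
        return ("DELETE", "%s/ip/%s" % (BASE_URL, item_id), None)
    raise ValueError("unknown action: %s" % action)
```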

The client part

We would like to store our private and public IP address as well as our system name in the database. As this data is quite easily retrieved on the client side, we will use Python to gather all the info and have the client program call our REST API as described in this post.

The below snippet contains the core part of our client. It gathers the private and public IP address of our computer as well as the system name.

import socket
import json
import requests
from requests import get, post

def getPrivateIP():
    # Opening a UDP socket towards an external host reveals which local
    # interface (and thus which private IP) the OS would use
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(('', 80))
    return s.getsockname()[0]

def getPublicIP():
    # Ask an external "what is my IP" service for our public address
    ip = get('').text
    return ip

def getSystemName():
    return socket.gethostname()

After we have retrieved the information on client side, we need to call the REST API from our python client using the REQUESTS library from Python:

url = ''

public 	= getPublicIP()
private = getPrivateIP()
systemname = getSystemName()

data = {"public": public, "private": private, "systemname": systemname}
headers = {'Content-type': 'application/json', 'Accept': 'text/plain'}

r =, data=json.dumps(data), headers=headers)
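For reference, this is what the serialized body sent to the /ip endpoint looks like; the values below are made-up samples, not real addresses:

```python
import json

# Sample record (made-up values) in the same shape the client POSTs
data = {"public": "", "private": "", "systemname": "test"}
payload = json.dumps(data, sort_keys=True)
print(payload)
# {"private": "", "public": "", "systemname": "test"}

# The Sinatra side rebuilds the same structure with JSON.parse
assert json.loads(payload) == data
```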

Executing the “python” command will then store the information via REST in the database running on Heroku. Visit the Heroku app, in my case it’s on, and you will see the entry added, with your real private and public IP address as well as the name of your system.

Blinking LEDs with Raspberry Pi

I recently received my very first Raspberry Pi. I’ve ordered a model B+ which has an ethernet port, 4 USB ports, a microUSB slot and above all 40 GPIO pins instead of the 26 pins in the model B variant.
I know I’m pretty late to the whole Raspberry Pi hype, but somehow I never had enough time to start exploring this wonderful device. As always, most of my posts are written for myself, to remember how I got things up and running, since a year from now chances are big I’ll have forgotten most of it. So this blog serves as my digital memory, in a sense. Anyway, that’s why I wanted to describe a very easy project. It’s easy, but also somewhat confusing…read on to find out why.

Obviously, I want to start with something terribly easy, kind of like the Hello World equivalent in the software world. For the Raspberry Pi, this means “Blinking LEDs”. So in this post, we are going to write a small Python script that makes an LED blink via the GPIO pins of the Raspberry Pi. We will use the following schema. Note that if you don’t use a resistor, you will blow up the LED, so a resistor anywhere from 100 Ω to about 1 kΩ should be added.
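To see why the resistor matters, here is a quick Ohm’s-law estimate. The 3.3 V GPIO level and the ~2 V forward drop are assumptions for a typical red LED; check your LED’s datasheet:

```python
# Current through the LED for a given series resistor, assuming a
# 3.3 V GPIO high level and a ~2.0 V LED forward drop (both assumed
# values for a typical red LED)
SUPPLY_V = 3.3
LED_DROP_V = 2.0

def led_current_ma(resistor_ohm):
    # Ohm's law applied to the voltage left over for the resistor
    return (SUPPLY_V - LED_DROP_V) / resistor_ohm * 1000.0

print(round(led_current_ma(330), 1))   # a 330 ohm resistor -> about 3.9 mA
print(round(led_current_ma(1000), 1))  # a 1 kOhm resistor -> about 1.3 mA
```

Without a resistor, the current is limited only by the pin and the LED themselves, which is how both get destroyed.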
Here’s the schema:


So what we want to do is control the LED via one of the GPIO pins. Here is an example Python program that makes the LED blink a number of times:

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BOARD)   # address pins by their physical board numbering
GPIO.setup(26, GPIO.OUT)

def BlinkLED():
	for i in range(0, 10):
		print "Iteration " + str(i+1)
		GPIO.output(26, True)    # LED on
		time.sleep(0.5)
		GPIO.output(26, False)   # LED off
		time.sleep(0.5)
	print "Done"

BlinkLED()
GPIO.cleanup()

You have two modes for addressing the GPIO pins. These modes are:

  • BOARD mode: use the pin numbering of the RPi board
  • BCM mode: use the pin numbering of the BCM chipset

You have to specify the mode you are using in your Python program. In the above example, I set it to use board mode via “GPIO.setmode(GPIO.BOARD)”. So why did I use pin 26 in board mode and not pin 7, which is where the LED is connected on my GPIO breakout board (if you zoom in on my setup, you’ll see that my breakout board says P7)?

The answer is in the drawing below, which shows the pin layout of the RPi model B+ board and the mapping between the two numbering schemes:

In the photo, you can see that I have plugged the LED into P7 on our breakout board, so I’m using this pin to control the blinking. If you have a careful look at the above drawing and compare it with the pin layout of my breakout board, you’ll see that P7 on my breakout board is in fact GPIO7. Looking at the drawing, GPIO7 translates to pin 26 (P26) on the RPi board. This can be confusing, since you might have expected to use P7 as indicated on the breakout board. So as a conclusion:

  • BOARD mode: use pin 26, “26” should be in our little Python program
  • BCM mode: use GPIO7, so “7” should be in our little Python program
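The correspondence can be captured in a small lookup table. I’ve only listed a handful of pins here, taken from the B+ pinout drawing above; double-check against the drawing before wiring anything:

```python
# Partial mapping from physical (BOARD) pin numbers to BCM GPIO numbers
# on the model B+ 40-pin header; verify against the pinout drawing
BOARD_TO_BCM = {
    7:  4,   # physical pin 7  -> GPIO4
    11: 17,  # physical pin 11 -> GPIO17
    12: 18,  # physical pin 12 -> GPIO18
    26: 7,   # physical pin 26 -> GPIO7 (the pin used in this post)
}

# In BOARD mode you pass the physical number (26); in BCM mode you pass
# the GPIO number it maps to (7).
print(BOARD_TO_BCM[26])  # 7
```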

The following code is identical to the code above, except that instead of using board mode (and hence pin 26), I’ve set it to BCM mode (and hence use GPIO 7):

import RPi.GPIO as GPIO
import time

GPIO.setmode(GPIO.BCM)   # address pins by their BCM GPIO numbering
GPIO.setup(7, GPIO.OUT)

def BlinkLED():
	for i in range(0, 10):
		print "Iteration " + str(i+1)
		GPIO.output(7, True)    # LED on
		time.sleep(0.5)
		GPIO.output(7, False)   # LED off
		time.sleep(0.5)
	print "Done"

BlinkLED()
GPIO.cleanup()

Webservice for system info: server part


The purpose is to build a webservice to store system information such as the private and public IP address, the computer name, etc. We will build a Sinatra app with an SQLite database for development and a Postgres database for production, as we will deploy it to Heroku when finished. If it’s the first time you are deploying a Sinatra application to Heroku, you can first follow this tutorial to get up and running.

The model and application

We will create a model that stores the relevant data, such as the IP addresses and the computer name. For lack of a better name, I will call it “IP”:

class IP
  include DataMapper::Resource
  property :id,           Serial,   key: true, unique_index: true
  property :public_ip,    String,   length: 1..16
  property :private_ip,   String,   length: 1..16
  property :systemname,   String,   length: 1..100
  property :created_at,   DateTime
  property :updated_at,   DateTime
end

In the Sinatra routes file, we will expose the basic CRUD operations to create, view and delete entries:

  get "/ip" do
    format_response(IP.all, request.accept)

  get "/ip/:id" do
    ip = IP.get(params[:id]) || halt(404)
    format_response(ip, request.accept)

  post "/ip" do
    body = JSON.parse(
    ip = IP.create(
      private_ip: body['private'],
      public_ip:  body['public'],
      systemname: body['systemname']
    status 201
    format_response(ip, request.accept)

  delete '/ip/:id' do
    ip = IP.get(params[:id]) || halt(404)
    halt 500 unless ip.destroy

The full source can be found on GitHub here.

Deploying to Heroku

We will create a Heroku app called publicprivateip:

$ git init
$ git add .
$ git commit -m "First commit"
[master (root-commit) 29708e9] First commit
 28 files changed, 9972 insertions(+)
 create mode 100644 Gemfile
 create mode 100644 Gemfile.lock
 create mode 100644 Rakefile
$ heroku create publicprivateip 
Creating publicprivateip... done, stack is cedar

We also attach a PostgreSQL database to the application:

$ heroku addons:add heroku-postgresql
Adding heroku-postgresql on publicprivateip... done, v4 (free)
Database has been created and is available
 ! This database is empty. If upgrading, you can transfer
 ! data from another database with pgbackups:restore.
Use `heroku addons:docs heroku-postgresql` to view documentation.

And then continue to deploy the application:

$ git push heroku master
Fetching repository, done.
Counting objects: 7, done.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 471 bytes, done.
Total 4 (delta 3), reused 0 (delta 0)

-----> Ruby app detected

-----> Compressing... done, 13.7MB
-----> Launching... done, v7 deployed to Heroku

   d128817..c713aad  master -> master

Then finally we need to create the database on Heroku:

$ heroku run rake migrate
Running `rake migrate` attached to terminal... up, run.4672
The source :rubygems is deprecated because HTTP requests are insecure.
Please change your source to '' if possible, or '' if not.

Visiting the app will of course give you an empty result set [] as there is no data in our database yet.

Testing the webservice

We will use cURL to test our webservice.

GET system info (will give no entries as database is still empty):

$ curl -H "Content-Type: application/json"

POST system info (will insert a record in the database):

$ curl -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"public":"","private":"", "systemname":"test"}'

GET system info (will return the records from the database):

$ curl -H "Content-Type: application/json"

Big note: we have not added any kind of validation or security to our application, as this is merely meant to illustrate how to create a REST API and deploy it to Heroku.