09 Aug 2016

Switched Static Site Generator From Pelican To Hugo

Out with the old

I’ve been using the static site generator Pelican for a very long time to generate my blog. The problem I’ve had for a while with Pelican is that I also had to use virtualenv to isolate its dependencies, and manually run make html to generate the HTML output. I was never able to get the development server set up with Pelican.

The deployment process was also a little painful, though not really Pelican’s fault. I would first push to GitHub, then pull down the changes on my server and again run make html.

I’m sure this process could have been automated.

In with the new

Hugo is written in Go, which means all I have to install is a single binary. On OS X with homebrew it was as simple as:

brew install hugo

After installing, I just followed the quickstart guide to get started, then picked a theme - purehugo - that resembled my old blog. I tinkered with the theme until I got it exactly the way I wanted. This required me to edit and create some template files based on Go’s built-in templating.

The hardest part of switching to Hugo was just updating the Front Matter that was already in place.
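For reference, Hugo reads front matter in TOML (delimited by `+++`), YAML, or JSON. A minimal TOML example for a post like this one (the fields shown are illustrative) might be:

```toml
+++
title = "Switched Static Site Generator From Pelican To Hugo"
date = "2016-08-09"
+++
```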

Now, to deploy, all I have to do is rsync the output directory from Hugo to my server.

16 Feb 2016

Broccoli and Angular.js


This post was initially written on 2014-05-28 and not published. Things might have changed.

Broccoli is a relatively new asset builder. It is based on performing operations on trees of files.

Here is how I used it to concatenate front end dependencies installed via bower and an angular.js app.


First, let me show you the two files you need for bower.


Here is an example bower.json that lists the frontend dependencies. Note the resolutions property.

{
  "dependencies": {
    "angular-ui-router": "0.2.11"
  },
  "resolutions": {
    "angular": "~1.3.0"
  }
}


Next is the .bowerrc. The directory is relative to your project and tells bower where to put the dependencies.

{
  "directory": "public/vendor"
}


Now we need to install broccoli. I’ve installed broccoli-cli globally, as per the installation guide.

npm install --save-dev broccoli
npm install --global broccoli-cli

We also need to install plugins for broccoli;

npm install --save-dev broccoli-concat


Like all task runners, broccoli has its own file format to define its operations, though it’s not really a task runner but rather a build tool.

Here is the Brocfile.js to concatenate all of the above bower dependencies:

var concat = require('broccoli-concat');

var concatenated = concat('public/', {
  inputFiles: [
    // paths are illustrative; adjust to where bower put each package
    'vendor/jquery/dist/jquery.js',
    'vendor/angular/angular.js',
    'vendor/angular-ui-router/release/angular-ui-router.js',
    'js/**/*.js'
  ],
  outputFile: '/assets/app.js',
  separator: '\n', // (optional, defaults to \n)
  wrapInEval: false // (optional, defaults to false)
});

module.exports = concatenated;

We explicitly define the order of concatenation in inputFiles. This way jQuery loads before angular, and angular loads before ui-router and our app code (which is assumed to exist in public/js).

Now running broccoli serve will start an HTTP server on port 4200 and the concatenated JavaScript will be available at http://localhost:4200/assets/app.js.

Hope that helps.

11 Nov 2014

Using Bluebird With Angular Protractor

Async control flow

There are a few places where you would want to use a promise. Protractor supports Promises in the onPrepare function, but the example uses Q.

That example onPrepare written using Bluebird looks like this;

var Promise = require('bluebird');

onPrepare: function(){
  return Promise.delay(2000).then(function(){
    browser.params.password = '12345';
  });
}

A better example: the onPrepare function can be used to perform some async setup task, like creating a fake User in your database to be able to log in.

var User = require('./models/User');

onPrepare: function() {
  // returns a Promise
  return User.create({
    username: 'bulkan',
    password: 'igotdis'
  });
}

Test structure

Protractor uses Jasmine 1.3, updated so that it automatically resolves Promises.

describe('Home page', function(){
  it('should have username input', function(){
    var username = element(by.css('#username'));
    expect(username.getAttribute('value')).toEqual('');
  });
});

expect automatically resolves the Promise, so there is no need to do the following:

username.getAttribute('value').then(function(value){
  expect(value).toEqual('');
});
Here is another example test that verifies the home page is rendering Post titles. This time we have to chain onto the .then of the Promises.

var Promise = require('bluebird'),
    Posts = require('./models/Posts');

describe('Home Page', function(){
  it('should have a list of posts', function(done){
    var posts = element.all(by.repeater('post in posts').column('post.title'));

    Promise.cast(posts.map(function(elm){
      return elm.getInnerHtml();
    }))
    .then(function(titles){
      return titles.sort();
    })
    .then(function(sortedTitles){
      return Posts.findAll({attributes: ['title'], order: 'title'})
        .then(function(dbPosts){
          var dbTitles = dbPosts.map(function(p){ return p.title; });
          expect(sortedTitles).toEqual(dbTitles);
        });
    })
    .nodeify(done);
  });
});
We need to Promise.cast the result of posts.map because we call .nodeify, which is a Bluebird function. nodeify helps simplify tests by removing the need to explicitly call done in the last then and in a catch.

Jasmine supports asynchronous tests by passing a callback function to an it, just like in Mocha. In the test above we find elements by the repeater. The template used might look like:

<div ng-repeat="post in posts">
    <h1> {{::post.title}} </h1>
</div>

There might be an easier/simpler way to do this so please do let me know by commenting below.

09 Jun 2014

Using Express Router instead of express-namespace

express 4.0 has been out for a while, and it seems people are still using express-namespace. According to npm it had 183 downloads on the 8th of June.

express-namespace hasn’t been updated in nearly two years, and it can now be replaced with the Router that comes with express 4.

Also, I’ve found that middleware mounted on namespace roots would be mounted at the application level. This is something else the Router solves, as it allows you to separate routes into different modules, each with its own middleware.

Here is the example from express-namespace written using the Router in express 4.0.

var express = require('express'),
    forumRouter = express.Router(),
    threadRouter = express.Router(),
    app = express();

forumRouter.get('/:id/((view)?)', function(req, res){
  res.send('GET forum ' + req.params.id);
});

forumRouter.get('/:id/edit', function(req, res){
  res.send('GET forum ' + req.params.id + ' edit page');
});

forumRouter.delete('/:id', function(req, res){
  res.send('DELETE forum ' + req.params.id);
});

app.use('/forum', forumRouter);

threadRouter.get('/:id/thread/:tid', function(req, res){
  res.send('GET forum ' + req.params.id + ' thread ' + req.params.tid);
});

forumRouter.use('/', threadRouter);


A little bit more typing, but easier to explain to others, and no monkey-patching weirdness from express-namespace.

The routes are a little more explicitly defined.

Hope this helps.

28 Apr 2014

Mocking a function that returns a (bluebird) Promise

With Sinon.JS, mocking functions is quite easy. Here is how to stub a function that returns a Promise.

Demonstrated with a potato-quality example. Imagine the following code is in a file named db.js:

var Promise = require('bluebird');

module.exports.query = function query(q) {
  return Promise.resolve([
    {
      username: 'bulkan',
      verified: true
    }
  ]);
};
Using bluebird we simulate a database query, which returns a Promise that resolves with an Array of Objects.

Imagine the following code located in users.js;

var db = require('./db');

module.exports.getVerified = function getVerified(){
  return db.query('select * from users where verified=true');
};

The mocha unit test for the above which stubs out db.query that is called in users.js;

var db = require('./db')
  , Promise = require('bluebird')
  , should  = require('chai').should()
  , sinon = require('sinon')
  , users;

describe('Users', function(){
  var sandbox, queryStub;

  beforeEach(function(){
    sandbox = sinon.sandbox.create();
    queryStub = sandbox.stub(db, 'query');
    users = require('./users');
  });

  afterEach(function(){
    sandbox.restore();
  });

  it('getVerified should return a resolved Promise', function(){
    queryStub.returns(Promise.resolve([{username: 'bulkan', verified: true}]));
    var p = users.getVerified();
    return p;
  });
});
In the beforeEach and afterEach functions of the test we create a sinon sandbox. This is slight overkill for this example, but it allows you to stub out a few methods without worrying about manually restoring each stub later on; you can just restore the whole sandbox, as demonstrated in afterEach.

There is one test case that tells queryStub to return a resolved Promise, then returns the promise from users.getVerified. Mocha will wait for any Promise returned from a test to resolve, and fail the test if it rejects.

Sorry about the potato-quality example; I’ve been trying to think of a better one. Any suggestions?

Hope this helps.

24 Apr 2014

Using mockery to mock modules for Node.js testing

In a previous article I wrote about mocking methods on the request module.

request also supports another workflow in which you directly call the imported module;

var request = require('request');

request({
  method: 'GET',
  url: 'https://api.github.com/users/bulkan'
}, function(err, response, body){
  if (err) {
    return console.error(err);
  }
  console.log(body);
});

You pass in an options object specifying properties like the HTTP method to use and others such as url, body & json.

Here is the example from the previous article updated to use request(options);

var request = require('request');

function getProfile(username, cb){
  request({
    method: 'GET',
    url: 'https://api.github.com/users/' + username
  }, function(err, response, body){
    if (err) {
      return cb(err);
    }
    cb(null, body);
  });
}

module.exports = getProfile;

It’s not that big a change. To unit test the getProfile function we will need to mock out the request module that is imported by the module getProfile is defined in. This is where mockery comes in. It allows us to change what gets returned when a module is imported.

Here is a mocha test case using mockery. This assumes that the above code is in a file named gh.js.

var sinon = require('sinon')
  , mockery = require('mockery')
  , should = require('chai').should();

describe('User Profile', function(){
  var requestStub, getProfile;

  before(function(){
    mockery.enable({
      warnOnReplace: false,
      warnOnUnregistered: false,
      useCleanCache: true
    });

    requestStub = sinon.stub();

    // replace the module `request` with a stub function
    mockery.registerMock('request', requestStub);

    getProfile = require('./gh');
  });

  after(function(){
    mockery.deregisterAll();
    mockery.disable();
  });

  it('can get user profile', function(done){
    requestStub.yields(null, {statusCode: 200}, {login: "bulkan"});

    getProfile('bulkan', function(err, result){
      if(err) {
        return done(err);
      }
      result.login.should.equal('bulkan');
      done();
    });
  });
});
mockery hijacks the require function and replaces modules with our mocks. In the above code we register a sinon stub to be returned when require('request') is called. Then we configure the stub in the test using .yields, which makes it call the callback function passed to request with null for the error, an object for the response and another object for the body.

From here you can write more tests.

Hope this helps.

14 Apr 2014

AngularJS & Popup Windows

Popup windows are extremely annoying, hence most modern browsers block them, agreeably so. That being said, one use of popup windows is when doing OAuth: showing the OAuth authorization dialog in a popup window so as not to confuse the user.

If there is a better or different way please comment below.

All the code can be found at angular-popup.

Here is how I solved it using a simple express 4 application and the accompanying AngularJS.

The express code is very simple; it just creates two routes. The root/index route renders the view to bootstrap the angular application.

The angular app has one default route / with its controller set to PopupCtrl. In the template popup.html using ng-click we call the function bound on the $scope called showPopup. This is the code for PopupCtrl;

Read the inline comments;

popupApp.controller('PopupCtrl', ['$scope', '$window', '$interval', function PopupCtrl($scope, $window, $interval) {
  'use strict';

  // assign the current $scope to $window so that the popup window can access it
  $window.$scope = $scope;

  $scope.showPopup = function showPopup(){
    // center the popup window
    var left = screen.width/2 - 200
        , top = screen.height/2 - 250
        , popup = $window.open('/popup', '', "top=" + top + ",left=" + left + ",width=400,height=500")
        , interval = 1000;

    // create an ever increasing interval to check a certain global value getting assigned in the popup
    var i = $interval(function(){
      interval += 500;
      try {
        // value is the user_id returned from paypal
        if (popup.value){
          $interval.cancel(i);
          popup.close();
        }
      } catch(e){
        // ignore errors while the popup is on another domain
      }
    }, interval);
  };
}]);

We tell the popup to load the /popup URL, for which our express app renders the server-side jade template.

extends layout

block content
    <h1>I'm a popup</h1>

    script.
        setTimeout(function(){
            window.opener.$scope.says = 'teapot';
            window.value = true;
        }, 2000);

The template above is simple enough. All it does is, after two seconds, assign to window.value to indicate to the $interval that the popup has done something important. The popup also assigns a value on window.opener.$scope, which is the $scope that was assigned in PopupCtrl.

As we have used ng-model in the default route’s template, we will see the text teapot appear in the text input.

Hope this makes sense.

20 Jan 2014

Using Sequelize Migrations With An Existing Database


I’m sure you know how to install packages, but here is the command for the sake of completeness:

npm install sequelize async

The first migration

First initialize the migrations structure

sequelize --init

Then create the initial migration, but don’t edit this file, as we will use it to create the SequelizeMeta table.

sequelize -c initial

Create another migration

sequelize -c create-tables

Dump the database

Now dump your database schema, without the data. With mysqldump:

mysqldump -d --compact --compatible=mysql323 ${dbname} | egrep -v "(^SET|^/\*\!)"

We need to remove the lines beginning with SET or containing version comments; the egrep above does this.

Save this dump to the migrations folder and name it initial.sql

Edit the last migration that was created to look like;

var async = require('async')
  , fs = require('fs');

module.exports = {
  up: function(migration, DataTypes, done) {
    var db = migration.migrator.sequelize;

    async.waterfall([
      function(cb){
        fs.readFile(__dirname + '/initial.sql', function(err, data){
          if (err) throw err;
          cb(null, data.toString());
        });
      },

      function(initialSchema, cb){
        // need to split on ';' to get the individual CREATE TABLE sql
        // as db.query can execute one query at a time
        var tables = initialSchema.split(';');

        function createTable(tableSql, doneCreate){
          db.query(tableSql).done(doneCreate);
        }

        async.each(tables, createTable, cb);
      }
    ], done);
  },

  down: function(migration, DataTypes, done) {
    migration.showAllTables().success(function(tableNames){
      // don't drop the SequelizeMeta table
      var tables = tableNames.filter(function(name){
        return name.toLowerCase() !== 'sequelizemeta';
      });

      function dropTable(tableName, cb){
        migration.dropTable(tableName).done(cb);
      }

      async.each(tables, dropTable, done);
    });
  }
};
Please explain

In the migration’s up function we use async.waterfall to orchestrate the async calls:

  • read in the initial.sql file
  • split initial.sql to retrieve the individual CREATE TABLE queries, as db.query can execute one query at a time
  • run each of these queries using async.each
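The splitting step is worth seeing in isolation; note that splitting a dump on ';' leaves an empty trailing chunk that needs filtering out (the schema below is made up):

```javascript
// A miniature version of the split performed in the up migration.
var initialSchema =
  'CREATE TABLE users (id int);\n' +
  'CREATE TABLE posts (id int);\n';

var tables = initialSchema
  .split(';')
  .map(function (sql) { return sql.trim(); })
  .filter(function (sql) { return sql.length > 0; }); // drop the empty tail

console.log(tables.length); // prints 2
console.log(tables[0]);     // prints "CREATE TABLE users (id int)"
```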

In the migration’s down function we just remove all tables that are not the SequelizeMeta table. For some reason migration.dropAllTables() removes this table too and messes up the migrations. Not sure if this is the correct behaviour.

Hope this helps

30 Dec 2013

Faber-Castell Ambition Pearwood Review

Thanks for reading. This is my first ever review of a fountain pen. I hope it is useful in helping you decide whether or not to buy this version of the Ambition.

The Faber-Castell Ambition Pearwood is not my first fountain pen, but it is one of the more expensive ones I've bought recently. In Australia this pen costs around $150-$200, but online on ebay or cultpens you can get it at a cheaper price. It also comes with a Faber-Castell branded converter, so you don't need to buy one.

It's been in daily use, and at the moment it's inked with Iroshizuku Yama-guri. When I first bought it I had it inked with Diamine Oxblood, as I like to colour match the body of the fountain pen with the ink.

The Nib

The Ambition Pearwood has a stainless steel medium nib and writes smoothly. Nibs are available in extra-fine, fine, medium or broad. The medium nib is a bit more on the thicker side compared to other medium nibs, but this also provides really nice shading and line variation.

Here is another album containing a few images of the Ambition. The notebook I used for the writing sample is the Monsieur fountain-pen-friendly notebook. I find it hard to believe this notebook is marketed as such, as with certain nib & ink combinations the feathering and ghosting is really noticeable.

07 Oct 2013

Testing With Mocha, Sinon.js & Mocking Request


Writing tests with Mocha is really fun and satisfying. Combined with should.js for expectations/assertions and mocking/stubbing with sinon.js, I think it becomes a very powerful test environment.

In this article I will show you how to get started with mocking/stubbing in tests, using an example that stubs out the get method on mikeal/request. I also assume you are somewhat familiar with Node.js and have it installed.

Before we start create a directory and install the dependencies needed.

mkdir test_article; cd test_article
npm install mocha should sinon request async

GET some data

Here is a function that will get any user's public GitHub profile from the GitHub API. We will use the async module to help with the asynchrony.

var request = require('request')
  , async   = require('async');

function getProfile(username, cb){
  async.waterfall([
    function(callback){
      request.get('https://api.github.com/users/' + username, function(err, response, body){
        if (err) return callback(err);
        callback(null, body);
      });
    }
  ], cb);
}

module.exports = getProfile;

getProfile('bulkan', function(err, result){
  console.log(result);
});
We require the two packages we need, then define a function which accepts the GitHub username & a callback function.

async.waterfall takes an array of functions as its argument, then calls them one by one, passing the values given to each callback on to the next function. For more details read the official description here.

The first function in the async.waterfall list does a request to the GitHub API, passing the body to the next function. We don't have another function, so async.waterfall will call the callback we passed into getProfile with err and result as its arguments. It's a good idea to read the official description of the waterfall function.

Last, we export our function as the module so we can use it later.

Tests taste better with Mocha

To test the above code we can write a Mocha test assuming the above code is in a file named gh.js.

var getProfile = require('./gh');

describe('User Profile', function(){
  it('can get user profile', function(done){
    getProfile('bulkan', function(err, result){
      if(err) return done(err);

      // simple should.js assertion
      result.should.match(/bulkan/);
      done();
    });
  });
});

We can run the above test via the following line assuming the Mocha test is in a file named gh_test.js

./node_modules/.bin/mocha --require should --ui bdd gh_test.js

The problem with this test is that each time we run it, it will do an HTTP GET to the GitHub API, which will be slow. The more tests we add that do actual HTTP requests to third parties, the slower the tests get. What we can do is mock out the request.

We can change the test code.

var request    = require('request')
  , sinon      = require('sinon')
  , getProfile = require('./gh');

describe('User Profile', function(){
  before(function(){
    sinon
      .stub(request, 'get')
      .yields(null, null, JSON.stringify({login: "bulkan"}));
  });

  after(function(){
    request.get.restore();
  });

  it('can get user profile', function(done){
    getProfile('bulkan', function(err, result){
      if(err) return done(err);

      sinon.assert.calledOnce(request.get);
      done();
    });
  });
});
We add a before call that stubs out request.get.

The yields allows us to simulate the call to the callback that is passed to request.get. In this case we return null for err, null for the response and JSON string of a simple object.

The after call restores the default request.get method.

Our test case tests that request.get was called.

In Node.js, require(..) loads modules once into a cache. As our test file runs first, it loads the request module first, and we use that reference to stub methods on it. Later, when we load the getProfile module, its require('request') call retrieves the module from the cache, with the get method stubbed out.

I hope this example helps you in your testing.

18 Aug 2013

Lazy Load Twitter Bootstrap Carousel Images

Twitter Bootstrap comes with a nice carousel for cycling through images. If you look at the HTML for the carousel you will notice that the images are loaded on page load. This is fine as long as it contains only a few images, but what happens if we have 11 JPGs at 500 KB each? One solution is to put the carousel in a modal and use jQuery to lazy load the carousel images.

In the following HTML we have a modal containing a carousel that loads three images when the page loads.

Full screen demo

We can add a bit of JavaScript and change the HTML markup to lazy load the carousel images when the modal is launched.

Full screen demo

In the HTML, all that has changed is that a div containing an image element that loads an animated ajax-loader GIF was added just above the carousel markup, and the src attributes on the img elements were changed to data-src. This way the browser won't load the images on page load.

The JavaScript does a few things. First, it binds/listens to the show event on all divs with the modal class, finds the carousel within it and hides it. Then, for each image element within the carousel div, we look to see if it has a data-src attribute; if it does, we create a Deferred instance.

Deferreds are a somewhat advanced topic, but here they are used to make sure that the carousel is shown only after all images are loaded, hence why each deferred instance is added to an array.

After this, the JavaScript binds/listens to the load event on the img element, using the resolve function of the deferred instance p as the callback. This means that once the image is loaded by the browser, the deferred is marked as resolved/done.

To load the image, the src attribute is set to the value of data-src, and data-src is then emptied so that this process isn't repeated if the modal is closed and re-opened later.

The last bit of code waits until all of our deferred instances are done. This is done by the $.when.apply call; apply is used here because an array needs to be passed as the argument to $.when. In the callback function, we hide the ajax-loader and fadeIn the carousel.
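If jQuery Deferreds feel foreign, the same wait-for-all idea maps onto native Promises; $.when.apply over an array behaves much like Promise.all (the timeouts below stand in for image load events):

```javascript
// Three "image loads" that finish at different times.
var imageLoads = [100, 50, 150].map(function (ms) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(ms); }, ms);
  });
});

// Reveal the carousel only after every load has resolved.
Promise.all(imageLoads).then(function (loaded) {
  console.log('all ' + loaded.length + ' images loaded');
});
```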

That's it. Hope this helps. Read the deferred.promise docs for more information on the API.

03 Aug 2013

Using Twython To Connect To The Twitter Streaming API via OAuth

Before you can connect to the Streaming API you need to create a Twitter application; this will give you the necessary OAuth credentials. To do this, go to dev.twitter.com/apps, log in to your Twitter account, then click the Create a new application button and follow the instructions.

To connect to the Streaming API using Twython, you need to create a subclass of TwythonStreamer:

from twython import TwythonStreamer

class TweetStreamer(TwythonStreamer):
    def on_success(self, data):
        if 'text' in data:
            print data['text'].encode('utf-8')

    def on_error(self, status_code, data):
        print status_code

Now we will instantiate the TweetStreamer class and pass in the OAuth details:

# replace these with the details from your Twitter Application
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''

streamer = TweetStreamer(consumer_key, consumer_secret,
                         access_token, access_token_secret)

streamer.statuses.filter(track = 'python')

The on_success method on the TweetStreamer class will get called for each tweet we receive from the streaming API. The statuses.filter call will find tweets that contain the word python. Running this script will start printing tweets to the console.

30 May 2013

Writing a Linked List using Go - part one

In this article series I will briefly talk about linked lists, then go about implementing one using Go. This is both a learning exercise for me to get comfortable using Go, and hopefully a help to other developers transitioning into Go. That being said, please bear in mind I am still learning Go, so excuse the code.

Also, Go already has a linked list implementation, so you're better off using that in production code.

Let's begin with a quick refresher on linked lists; for a more detailed analysis read this PDF by Stanford University.

That being said, have a look at the following diagram.

A linked list is a simple data structure that is used as the basis for other, more complex data structures. It is comprised of nodes, each containing some data field and a reference field to the next node in the list. In the diagram above our nodes contain an integer field as the data. With this information, let us write out the code for a node.

If we were using an object-oriented language like Python, we could just create a class to represent the node. But Go is a procedural language and does not have classes. That being said, it does have something similar, called a struct type, which we can use to encapsulate the fields of a node.

type node struct {
    data int
    next *node
}
Here we define a new struct with a data field of type int. The next field has the type of a pointer to a node. Remember, pointers store the memory address of the type they point to.

Next up, we need to write the functions to insert and remove nodes in the list. Like I wrote previously, Go does not have classes, so you would assume there won't be methods; but you would be surprisingly wrong. Go can associate functions with structs, which gives you methods.

Here is what I mean.

type LinkedList struct {
    head *node
}

func (ll *LinkedList) AddToHead(data int) {
    tmp := &node{data: data}

    if ll.head == nil {
        ll.head = tmp
        return
    }

    tmp.next = ll.head
    ll.head = tmp
}

As per the feedback on reddit I've simplified the AddToHead method.

We first create a new struct with a field that is a pointer to the beginning of the linked list. All linked lists need this first reference to be able to do any operations on the list.

Then we create a method called AddToHead and associate it with the LinkedList struct. This method creates a node with the data that is passed in, then adds this node to the head of the list. What happens:

  • when the list is empty?
  • when there is at least one node in the list (it's not an empty list)?

We handle the first case by checking if the head of the list, ll.head, is nil, which is the zero value for a pointer in Go. If ll.head is nil, we can just assign our tmp node to ll.head.

If ll.head is not nil, we have at least one node in the linked list, so we can't just assign tmp to ll.head, as we would lose the reference to the rest of the linked list and all of its data. First we set the tmp node's next reference to ll.head so as not to lose this reference, then reassign ll.head to tmp.

I hope this makes sense. I've pushed this code to a repository on GitHub called goll. Once you clone the repository, run the following command to check out the branch containing the code in this article.

git checkout -b part_one

Once this branch is checked out, look at the tests in ll_test.go.

In the next post we will look at removing nodes from our linked list and possibly adding another method to insert in sort order.

09 May 2013

HTML5 Manifest File & Nginx

I've been developing an HTML5 game using the LimeJS framework. As I am targeting iPhones and iPod Touches, I asked my lovely fiancée to design me an app icon, and hopefully in the near future she will give me startup images too.

As I don't want to load the icon and startup images over the network every time the app loads, I thought I would cache them "offline". To do this you need to create a manifest file, which your web server needs to serve with a specific content-type header: text/cache.manifest.

To load a manifest file you need to add an attribute called manifest to the html element, pointing to your manifest file. It can be a relative path.

<html manifest="mysite.manifest">

Here is how the rest of my HTML file looks:

    <meta name="apple-mobile-web-app-capable" content="yes"/>
    <link rel="apple-touch-icon" href="/assets/icon.png"/>
    <script type="text/javascript" src="site.js"></script>

The manifest file tells the browser which files to store locally; the syntax kind of reminds me of INI files, but not quite. My manifest file looks like the following;
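A minimal sketch of the format (the exact file listing is illustrative):

```
CACHE MANIFEST
/assets/icon.png

NETWORK:
*
```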



This tells the browser to cache/store the png image for the app icon. The file listing following the NETWORK: section tells the browser to always load those files from the network. More info is available in this link.

If you are using Nginx like I am, then you need to change the file mime.types and add the following

text/cache.manifest       manifest;

This just tells Nginx to serve file resources ending with manifest with the content-type header of text/cache.manifest.

Hope this helps.

29 Apr 2013

Using Custom Events With LimeJS

LimeJS is an open source JavaScript HTML5 game creation framework built using Google Closure. In this article I will show you how to create a new event type and dispatch it, which is more of a Closure feature than a LimeJS one. I am going to assume you have installed LimeJS; if not, read the instructions.

We will create a simple game that displays a Sprite with the same width and height as the viewport. We will listen to touch & click events on this Sprite, generate a random number between 0-256 when these events fire, and dispatch a custom event once this number is greater than 128.

This number will then be used to change the color of our Sprite.

The game we will create is kind of a contrived example with zero playability, but I hope it will serve the purpose of introducing custom events to you.

Let there be events

Create a new LimeJS project by typing the following, which will create a directory called events_tutorial containing two files, events_tutorial.html and events_tutorial.js:

bin/lime.py create events_tutorial

I like to create a separate file to store all my event types and the dispatcher, so let's start with that file.

Create a new file in the events_tutorial directory and call it events.js and copy/paste in the following.

See the code at https://gist.github.com/bulkan/5500582

Closure provides goog.events.EventTarget for dispatching events and listening to them. The documentation blurb writes;

Inherit from this class to give your object the ability to dispatch events. Note that this class provides event sending behaviour, not event receiving behaviour: your object will be able to broadcast events, and other objects will be able to listen for those events using goog.events.listen().

As goog.events.EventTarget provides the ability to dispatch events we just create a new instance instead of inheriting from it which is done on line 6.

To distinguish between events we need to create a subclass of goog.events.Event, which is done on lines 8-10. The important part of that code block is the call to the base class on line 9. Make sure you pass a unique string, as this is the string that will be used to identify the event.

Time to use this event in a new Sprite.

Create a new file called coloredsprite.js in the events_tutorial directory and paste in the following.

See the code at https://gist.github.com/bulkan/5500571

Here we create a subclass of lime.Sprite in which the constructor requires width and height parameters that define its size. The changeColor method is the callback that will be registered as the event listener for when the user touches or clicks the Sprite. This method is straightforward: generate a random number, and if it is greater than 128, dispatch a new instance of the event class we defined in events.js.

Before we move on, run the following to update our dependencies.

bin/lime.py update

Let us now connect all of this together in events_tutorial.js which will look like the following.

<script src="https://gist.github.com/bulkan/5500572.js"></script>

Most of the code above is boilerplate. We create an instance of Director, Scene and Layer. The getting started guide for LimeJS describes what each of these objects does.

What is important is that we also create an instance of our ColoredSprite class on line 19 and add it to the Layer called target. We then listen for the custom event being dispatched, on line 24, using the unique string we passed to the base class on line 9 of events.js.

When the event fires we create a Label, add it to target and animate it.

Hope this blog post helped. If you have questions, comment on the individual Gists or send me a tweet @bulkanevcimen

10 May 2011

Export Test Cases From Quality Center Using Python

Here is a Python script that will export test cases in a folder from Quality Center into a CSV file.

The following script will not handle Attachments. Will work on that later when I have time.

18 Mar 2010

Building a Twitter Filter With CherryPy, Redis, and tweetstream


all the code is available at https://github.com/bulkan/queshuns

Since reading this post by Simon Willison I've been interested in Redis and have been following its development. After having a quick play around with Redis I was looking for a project to work on that uses Redis as a data store. I then came across this blog post by Mirko Froehlich, in which he shows the steps and code to create a Twitter filter using Redis as the datastore and Sinatra as the web app. This blog post will explain how I created queshuns.com in Python with the tools listed below.


  • tweetstream - provides the interface to the Twitter Streaming API
  • CherryPy - used for handling the web app side, no need for an ORM
  • Jinja2 - HTML templating
  • jQuery - for doing the AJAXy stuff and visual effects
  • redis-py - Python client for Redis
  • Redis - the “database”, look here for the documentation on how to install it

Retrieving tweets

The first thing we need to do is retrieve tweets from the Twitter Streaming API. Thankfully there is already a Python module that provides a nice interface, called tweetstream. For more information about tweetstream, look at its Cheeseshop page for the usage guide.

Here is the code for the filter_daemon.py, which when executed as a script from the command-line will start streaming tweets from Twitter that contain the words “why”, “how”, “when”, “lol”, “feeling” and the tweet must end in a question mark.

In this script I define a class, FilterRedis which I use to abstract some methods that will be used by both filter_daemon.py and later by the web app itself.

The important part of this class is the push method, which pushes data onto the tail of a Redis list. It also keeps a count of items, and when the count goes over the threshold of 100 items it trims the first 20 elements from the head (the oldest tweets).
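That push-and-trim logic boils down to something like the following sketch. This is not the original code: the class and key names are illustrative, and `client` stands in for a redis-py connection (e.g. `redis.Redis()`), which exposes the list commands rpush, llen and ltrim.

```python
import json

class FilterRedis:
    """Sketch of the push/trim logic described above (names illustrative).

    `client` is any object with the redis-py list commands
    rpush/llen/ltrim, e.g. redis.Redis()."""

    THRESHOLD = 100   # max items to keep before trimming
    TRIM = 20         # how many of the oldest items to drop

    def __init__(self, client, key):
        self.client = client
        self.key = key

    def push(self, tweet):
        # Append the jsonified tweet to the tail of the list.
        self.client.rpush(self.key, json.dumps(tweet))
        # Past the threshold, trim the oldest items from the head.
        if self.client.llen(self.key) > self.THRESHOLD:
            self.client.ltrim(self.key, self.TRIM, -1)
```

Keeping the list bounded this way means the web app only ever has a screenful of recent tweets to render.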

The schema for the tweet data that gets pushed onto the Redis list is a dictionary of values that gets jsonified (we could probably use the new Redis hash type);

{'id': 'the tweet id', 'text': 'text of the tweet', 'username': 'username', 'userid': 'userid', 'name': 'name of the twitter user', 'profile_image_url': 'url to profile image', 'received_at': time.time()}

‘received_at’ is important because we will be using that to find new tweets to display in the web app.

Web App

I picked CherryPy for the web application because I wanted to learn it for the future, when I need to write small web frontends that don't need an ORM. Also, CherryPy has a built-in HTTP server that is sufficient for websites with small loads, which I initially used to run queshuns.com; it is now being run with mod_python. For templating I used Jinja2, because it's similar in syntax to the Django templating language that I am familiar with.

The following is the code for questions_app.py which is the CherryPy application.

The index method of the web app gets all the tweets from Redis. The other exposed method is latest, which accepts an argument since that is used to get tweets that are newer (since is the received_at value of the latest tweet the client has seen). A parameter nt is used to create a different URL each time so that IE doesn't cache the response. This method returns JSON.
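The since-filtering that latest performs amounts to something like this illustrative helper (the real handler also pulls the list out of Redis and sets the response headers, which are omitted here):

```python
import json

def latest_since(tweets, since):
    """Return tweets received after `since` as a JSON string, newest first.

    Illustrative only -- `tweets` is the list of tweet dicts described
    above, each carrying a `received_at` timestamp."""
    fresh = [t for t in tweets if t["received_at"] > since]
    fresh.sort(key=lambda t: t["received_at"], reverse=True)
    return json.dumps(fresh)
```

The client keeps passing back the newest received_at it has seen, so each poll only returns tweets it hasn't rendered yet.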

The templates are located in a directory called templates :)

Here is the template for the root/index of the site; index.jinja

This template renders a list of tweets and also assigns the first tweet's received_at value to a variable on the window object. This is used by the refreshTweets function, which passes it on to /latest in a GET parameter. refreshTweets tries to get new tweets, prepends them to the content div and then slides in the latest tweets. This is the template used to render the HTML for the latest tweets;

I explicitly set the latest div to “display: none” so that I can animate it.

Now we should be able to run filter_daemon.py to start retrieving tweets, then start questions_app.py to look at the web app. In your browser go to http://localhost:8080/ and if everything went correctly you should see a list of tweets that updates every 10 seconds.

That's it. Hope this was helpful.

18 Dec 2009

Baş Taksım - Bulkan Evcimen - (736 Şeb-i Arus - Avusturalya)

15 Dec 2009

jQuery.get and IE7

I’ve recently been playing around with jQuery and some AJAXy stuff, using jquery.get to request a piece of HTML. Like any sane web developer I use Firefox and Firebug, and everything worked as expected. But then I decided to try Internet Explorer 7 (yeah, I’m crazy like that). Well, the AJAX call didn’t work. Actually, jquery.get was executed but the callback function didn’t get, ehh, called. I spent quite a few hours googling but didn’t find anything that directly solved my problem. This Google Group post kind of helped.

I read in the jQuery docs that the callback to get will only execute if data is loaded. I don’t know why data wasn’t being loaded when IE7 issued the get (maybe because of caching). So I decided to change the backend code to return JSON instead and use jquery.getJSON. With this change, IE7 successfully got data back from the server.

30 Nov 2009

Install Shield Silent Installs

Install Shield has this nifty feature of being able to install packages in silent mode. This means that you can run setup.exe from the command prompt and it will install in the background with no user interaction. This is very useful if you want to test your installation. If you use some sort of continuous integration system (and you should if you don’t) then you could download the latest installer, do a silent install, run some tests against the program that is installed, then silently uninstall it, all automagically.

To be able to do silent installs/uninstalls you first need to record a response file that contains all the choices for the Install Shield dialogs.

To record the response file;

setup.exe -r

This will be like a normal install done manually. Follow it through like you would in any normal installation. After the installer exits, the response file should be at C:\Windows\setup.iss

Next time around you can do a silent install by running

setup.exe -s -f1"C:\Windows\setup.iss"

I’m paranoid so I use the absolute path to the response file. There is no space between “-f1” and the path to setup.iss. Note that, when you run the above command to silent install, the command will seem to exit immediately but if you check Task Manager you should see setup.exe (possibly 2 of them) running.

Silent un-installation is pretty much the same. You need to create a response file first. To do this run the following;

setup.exe -r -uninst -removeonly

This will again create a setup.iss file in C:\Windows. I usually rename the uninstall response file to uninst.iss. Now you can do a silent uninstallation by running;

setup.exe -s -uninst -removeonly -f1"C:\Windows\uninst.iss"

Some installers might install the program under a different GUID each time you install it. If this is the case, I have found that the above command for uninstallation doesn’t work, as Install Shield doesn’t know what to uninstall. The solution is to work out the UninstallString from the Registry (which is what Windows uses to uninstall the program via Add/Remove Software).

Here is a Python script that uses the registry module (http://pypi.python.org/pypi/registry/) to find the full UninstallString. You first need to manually find this string in your registry by looking under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall, so that you can pass this function a unique string that is present in the UninstallString of your program.
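The embedded script is missing from this copy of the post, but the idea can be sketched with the stdlib winreg module instead of the third-party registry package. The function name and the injectable `reg` parameter are my own additions for illustration and testability; on Windows, `reg` defaults to the real winreg module.

```python
def find_uninstall_string(fragment, reg=None):
    """Return the first UninstallString under the Uninstall key that
    contains `fragment`, or None.

    Illustrative sketch only; `reg` defaults to the stdlib winreg
    module (Windows-only) and can be swapped out for testing."""
    if reg is None:
        import winreg as reg  # stdlib on Windows
    root = reg.OpenKey(reg.HKEY_LOCAL_MACHINE,
                       r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall")
    subkey_count = reg.QueryInfoKey(root)[0]
    for i in range(subkey_count):
        subkey = reg.OpenKey(root, reg.EnumKey(root, i))
        try:
            value, _type = reg.QueryValueEx(subkey, "UninstallString")
        except OSError:
            continue  # this entry has no UninstallString
        if fragment in value:
            return value
    return None
```

Once you have the full UninstallString you can hand it straight to the shell to run the uninstaller.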

EDIT: that script is actually quite ugly. I have a newer version in which I use regobj, which makes things easier.

09 Nov 2009

Hicaz Peşrev (Salim Bey), Son Yürük Semai ve Taksim

18 Sep 2009

Running QTP tests using Python

QTP provides an interface called the automation object model. This model is essentially a COM interface providing a bunch of objects that can be used to automate QTP. The full object list is available in the QuickTest Professional Automation documentation.

Running QTP tests from the command line is useful for doing scheduled automatic testing. If you use a continuous integration system to do automatic builds of your software, you can run your QTP tests on the latest build.

The following is a Python script that is able to run a test and print out Passed or Failed. It is a direct port of example code in the documentation, written in VBScript.
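The script itself isn't embedded in this copy of the post, but the documented flow (launch QTP, open the test, run it, read the result status) looks roughly like this. This is a hedged sketch rather than the original script; the function name is mine, and `dispatch` defaults to win32com's Dispatch (pywin32, Windows-only) while staying injectable so the logic can be exercised anywhere.

```python
def run_qtp_test(test_path, dispatch=None):
    """Run a QTP test via the automation object model and return its status.

    Sketch of the documented flow, not the original script; `dispatch`
    is injectable so the logic can be tested without QTP installed."""
    if dispatch is None:
        from win32com.client import Dispatch as dispatch  # requires pywin32
    qt_app = dispatch("QuickTest.Application")
    qt_app.Launch()                          # start QuickTest
    qt_app.Open(test_path)                   # open the test to run
    qt_test = qt_app.Test
    qt_test.Run()                            # blocks until the run finishes
    status = qt_test.LastRunResults.Status   # "Passed" or "Failed"
    qt_app.Quit()
    return status
```

A CI job can then call this with each test's path and fail the build on anything other than "Passed".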

13 Sep 2009

Hicaz Taksim

06 Sep 2009

Evcara Saz Semai - Dilhayat Kalfa

12 Jul 2009

Aşk-Efza Saz Eseri - Sadettin Arel

28 Dec 2008

Setting up a git repository on Slicehost

On your slice;

  1. Install git.

    sudo apt-get install git-core

  2. Create an empty directory for your repository

    mkdir myrepo.git && cd myrepo.git

  3. Initialize git

    git init

On your local machine

  1. Create an empty directory for your repository

    mkdir myrepo.git && cd myrepo.git

  2. Initialize git

    git init

  3. Add the remote repository as the origin

    git remote add origin ssh://server-domain/repo

    for my server the above command is

    git remote add origin ssh://bulkan-evcimen.com/home/bulkan/src/repo.git

  4. Create a ignore file for the first push

    touch .gitignore

  5. Add, commit

    git add .gitignore

    git commit -m "initial git commit"

  6. Push your repo to the origin on slicehost

    git push origin master

That’s it. Happy gitting.

15 Jan 2008

(Ugly) Python type checking

I like Python because of the explicitness of the syntax

def add(a, b):
    return a + b

Explicitness is good as it really leads to code that is understandable at a glance. But quick, tell me two types that the above function will work on?

ints and strings

>>> add(1, 2)
3
>>> add('hello ', 'world')
'hello world'

The above function works on both integers and strings only because both provide the special method __add__, which gets called for the + operator.

So does this lead to implicitness? Not really, because you should know (from programming) that you can add integers together and concatenate strings together; Python just makes this general across types.

If you wanted to, say, restrict the types of the arguments to our add function above, you could do something like the following

def add(a, b):
    if type(a) == str and type(b) == str:
        return a + b

Type checking is kind of ambiguous to me in a dynamic language. If I want to restrict the ability of a function to only work with certain types, or I don't know the types of the objects I'm passing to a function, then I have design issues (or no design at all).

You could rewrite the above function to do the type checking using a decorator.

EDIT: I didn't know whether the following decorator was written by the original creators or not, but as was pointed out, it wasn't; here is the original link

Python Cookbook Recipe

def require(arg_name, allowed_types):
    def make_wrapper(f):
        if hasattr(f, "wrapped_args"):
            wrapped_args = getattr(f, "wrapped_args")
        else:
            code = f.func_code
            wrapped_args = list(code.co_varnames[:code.co_argcount])

        try:
            arg_index = wrapped_args.index(arg_name)
        except ValueError:
            raise NameError, arg_name

        def wrapper(*args, **kwargs):
            if len(args) > arg_index:
                arg = args[arg_index]
            else:
                arg = kwargs[arg_name]

            if not isinstance(arg, allowed_types):
                type_list = " or ".join("'" + allowed_type.__name__ + "'"
                                        for allowed_type in allowed_types)
                raise TypeError, ("Expected argument '%s' to be of type %s "
                                  "but it was of type '%s'."
                                  % (arg_name, type_list, arg.__class__.__name__))
            return f(*args, **kwargs)

        wrapper.wrapped_args = wrapped_args
        return wrapper
    return make_wrapper

@require('a', (str,))
@require('b', (str,))
def add(a, b):
    return a + b

>>> add('hello ', 'world')
hello world
Traceback (most recent call last):
  File "snippet3.py", line 38, in <module>
    print add('hello', 2)
  File "snippet3.py", line 24, in wrapper
    return f(*args, **kwargs)
  File "snippet3.py", line 22, in wrapper
    raise TypeError, "Expected argument '%s' to be of type %s but it was of type '%s'." % (arg_name, type_list, arg.__class__.__name__)
TypeError: Expected argument 'b' to be of type 'str' but it was of type 'int'.

Above is code that does type checking on input arguments. I may be wrong, and there may be use cases where you need to check the type of an object, but the point is that you should design your program so that you know all the involved types, or use a language that has compile-time type checking.

Note: the above type checking is in a Django application that is live and has users, and I didn't write that decorator.

13 Dec 2007

Never store passwords as clear text

Never store passwords as clear text, that is the general rule with any application that has a database backend that is used for authentication into the system. Why?

Basic authentication with a database usually works by comparing the username and password combination that the user entered to values in the database table containing user details such as login name, password, etc. It might be possible for a user to inject SQL queries into the application, something like;

SELECT * FROM users WHERE login = 'x' OR '1'='1'

assuming the user can guess or knows the name of the table containing user data.

If passwords are in clear text then, lo and behold, the user now has access to all other users' login names and passwords. Anyway, this post is not about the security of database-backed applications, but about how I worked around the differing Python versions that py-bcrypt supports.

As I've posted before, I've been working at a web development company as a Python programmer. It's a Zope 'shop' in the sense that their main application, an in-house shopping cart system, is developed using Zope. I mostly developed 'External Scripts' to do specialized stuff for customers. But recently I've been working on a time/job-tracking web application using web.py; I've mentioned this in a previous post. So here are the versions of Python, web.py and py-bcrypt that I used to develop the tracker;

  • Python 2.5.1

  • py-bcrypt-0.1 (Python 2.4 or higher)

  • MySQL-5

Anyway, this past week the application was put on a live production server, which has;

  • Python2.3

  • Postgres

I’ve also used decorators to ‘decorate’ functions to restrict access to certain URL paths. Take a guess at which version decorators were introduced into Python? You guessed right, Python 2.4 got blessed with decorators. Guess what the lead/senior developer did? He re-wrote most of the code to just use plain function calls instead of decorators…he re-wrote…instead of the simpler solution of installing Python >= 2.4. (He converted the decorator into a plain function which is called in all the other functions that were decorated with it.)

I mentioned above that py-bcrypt requires Python >= 2.4 because it needs the function os.urandom, which was introduced in Python 2.4. Oh slap! I’ve got Python 2.3, so we can’t generate hashes…the solution that was suggested to me was…“write/copy urandom”. I think that was one of the moments in my not-so-long professional career when I thought that someone more senior and with more experience was wrong. I would have instead installed Python 2.5 on the server, which seemed to be the ‘path of least resistance’.

It seemed a daunting task. The first step I took was looking at the py-bcrypt module. It contains two files;

  • __init__.py

  • _bcrypt.so

You can’t edit the _bcrypt.so file as it is a library file. So I looked at __init__.py, which imports the os module and defines the gensalt function. From the Python 2.5 installation on my Mac I copied the urandom implementation into __init__.py, just above the import os line. urandom is not that complex; it just tries to open /dev/urandom, reads in n bytes and returns them. So here is what __init__.py looks like after the changes; it’s a hack.

def urandom(n):
    """urandom(n) -> str

    Return a string of n random bytes suitable for cryptographic use.
    """
    try:
        _urandomfd = open("/dev/urandom", "rb")
    except (OSError, IOError):
        raise Exception("/dev/urandom (or equivalent) not found")
    bytes = ""
    while len(bytes) < n:
        bytes += _urandomfd.read(n - len(bytes))
    _urandomfd.close()
    return bytes

import os
os.urandom = urandom

from _bcrypt import *

Now py-bcrypt works and passwords are hashable.

The thing that troubles me, and a question on my mind: is it worth the risk to install Python >= 2.4 on a server that contains ‘live shops’? The risk being totally blowing up the default Python installation (2.3) and bringing down the shops? I would probably have installed a new version of Python and sandboxed it. The irony is that the senior developer was the one who chose py-bcrypt and told me to come up with a decorator for methods which need to be password protected. I would have thought that with his experience he would have guessed that the request for the app to go online would come. Also, if you are scared to blow away the default installation of Python on the production server, WHY PUT AN IN HOUSE APPLICATION THERE?

25 Nov 2007

Exporting a csv file with web.py

This is how you export a CSV file with web.py and get the browser to recognize that it's a CSV file and pop up the download window. Let's say we have a database with a table called users, and you want to create a CSV file that contains all the users with their names and ids. Here is how you do it.

class export:
    def GET(self):
        i = web.input()
        users = web.select('users', vars=locals())
        csv = []
        csv.append("id,name\n")
        for user in users:
            row = [str(user.id), user.name]
            csv.append(",".join(row) + "\n")
        web.header('Content-Type', 'text/csv')
        web.header('Content-disposition', 'attachment; filename=export.csv')
        print "".join(csv)
        return

I export the CSV file in the GET method of a class called export, which I map in the urls list to '/export', 'export'.

A quick breakdown: do a database query, iterate over the IterBetter object, create a row and append it as a comma-separated string to the csv list. At the end, send the appropriate HTTP headers; the first tells the type of the file and the second sets the filename and extension.

Anyway you can download this code from http://bulkanix.pastebin.com/f1f567ea0

05 Nov 2007

embryo.py and py2app awesomeness

At work I created a script that changes permissions on our application BizarShop so that it works with the new Dashboard widget controlling the starting and stopping of the Zope server. The permissions need to change because the controller that comes with BizarShop starts the Zope server as root, which creates the lock file (and the Data.fs file, if it doesn't exist) with root ownership. The widget, on the other hand, tries to control the server with normal user permissions, but the server won't start because all the files belong to root and cannot be overwritten. For example, the file Z2.pid needs to be writable, so you need to change its ownership to that of the user.
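Conceptually, the fix is a recursive chown. A minimal sketch with the stdlib (not the original script; the function name and arguments are illustrative) looks like this:

```python
import os
import pwd

def chown_tree(root, username):
    """Recursively hand ownership of everything under `root` to `username`.

    Illustrative sketch of the permissions fix described above; the
    path and user are whatever applies on the target machine."""
    uid = pwd.getpwnam(username).pw_uid
    gid = pwd.getpwnam(username).pw_gid
    os.chown(root, uid, gid)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            os.chown(os.path.join(dirpath, name), uid, gid)
```

Something like `chown_tree("/Applications/Bizar Shop", "someuser")` (run with sufficient privileges) would then make the whole tree writable by that user.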

So I created a Python script that recursively goes through all directories under /Applications/Bizar Shop and changes all of the file/folder ownership to that of the current user. As you probably know, to run a Python script you either need to run it explicitly via;

python scriptname.py

or by including a hash-bang at the start of the file to tell where python is located and then making the script executable. I thought that the user could just double click on an executable Python script to run it, but I was wrong. I didn't want the user to open Terminal.app and execute it manually; this is just not user friendly. I then remembered py2app. From the README file of py2app

py2app is a Python setuptools command which will allow
you to make standalone Mac OS X application bundles
and plugins from Python scripts.

py2app is similar in purpose and design to py2exe for Windows.

So using py2app I created this installer, which also includes the widget and embryo.py. Oh, and embryo.py is a nice little module; from its Google Code description;

embryo is a tiny Mac/Windows/Linux GUI toolkit for Python. It can be used to “boot-strap” the user into downloading a larger GUI toolkit such as PyGTK, PyGame, pyglet, PyOpenGL, etc.

What I used it for: my script checks if the folder /Applications/Bizar Shop exists, and if it doesn't, it assumes that BizarShop is not installed and shows a message box saying BizarShop is not installed, do you want to download it. But if it does find it, it displays a message box letting the user know that the widget is about to be installed, and Dashboard opens up the install-widget dialog box.

What is so cool is that py2app is very easy to use and it works! Combining this with embryo you can easily create a quick installation program for literally anything.

Oh, did I mention that embryo is created by Alex Holkner, the same guy who is working on pyglet? Well, now I did. Here are the links to these modules.


04 Nov 2007

Redirecting stdout to StringIO object

So how do you redirect everything written to stdout and store it somewhere, but also print it out to stdout? This was asked on #python and I answered it.

To access the stdout file object you need to import the sys module. Redirecting stdout to a StringIO object works because all functions that write to stdout expect the write() method of a file-like object, which StringIO has (along with read, seek, etc.). So here is the code;
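The code itself isn't embedded in this copy of the post, but from the line-by-line breakdown it was along these lines. This is a reconstruction, not the original: the class name is my own, and it uses io.StringIO so it runs on current Python (the original, being Python 2 era, used the StringIO module).

```python
import sys
from io import StringIO

class TeeStringIO(StringIO):
    """Stores everything written to it while echoing it to the real stdout.
    Reconstruction of the described class; the name is illustrative."""

    def __init__(self):
        StringIO.__init__(self)
        self.stdout = sys.stdout  # keep a reference to the original stdout

    def write(self, s):
        # Echo to the original stdout, then store via the base class.
        self.stdout.write(s)
        return StringIO.write(self, s)

    def read(self):
        # Rewind, then replay everything captured back out to stdout.
        self.seek(0)
        data = StringIO.read(self)
        self.stdout.write(data)
        return data
```

Assigning an instance to sys.stdout (e.g. `sys.stdout = TeeStringIO()`) then captures all subsequent prints while still showing them on screen.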

So here is a quick breakdown line by line:

  • Lines 1 and 2 import the required modules.

  • Then we subclass StringIO and create an attribute to hold the reference to stdout.

  • In line 9 we override the write method of the StringIO base class, which does one additional thing: it writes back out to the original stdout, then calls the base class's write method to store the string.

  • We also override the read method, which does one additional thing: it seeks to the start of the StringIO object and then writes everything back out to stdout.

14 Sep 2007

Experimenting with Python frameworks and modules

I've been very busy since last semester; I got a new job as a Python (Zope) developer after quitting my previous job, where I was still considered an intern. Now I'm in my second and last semester of university, and I graduate at the end of this year!

Even though I haven't had the free time to actually write a post, you know those times where you need to do work but cbf, so you end up doing totally random stuff? Well, I've had those times aplenty, in which I experimented with a couple of Python frameworks and modules.

The first one I tried was TurboGears. Remember that major project I have where we are using Rails? Well, that's what spurred me on to actually try a Python web application development framework. Oh, and also the Python411 podcast. Installing TurboGears on a Mac is very easy by following this guide

  • I had Python 2.5 already installed and set as the default version to run when python is run, so I skipped that step.
  • I had to install easy_install using ez_setup.py
  • Then I downloaded tgsetup.py and ran it, which downloads and installs everything for you, except for the database wrapper.
  • The database I have installed on my Mac is MySQL 5.0, and the driver for that is MySQLdb. This was the tricky part of the whole TurboGears installation process. I downloaded MySQLdb and tried running sudo python setup.py install, but it failed. The solution was tricky, but after a bit of googling it turned out that setup_posix.py had the wrong path to the mysql_config file, which is located at /usr/local/mysql/bin/mysql_config on my Mac. Changing the path and running setup.py again worked. Then I tested it by trying to import MySQLdb from Python again, and that seemed to work as well (no exceptions).
  • Then, to actually test whether TurboGears was correctly installed, I ran the command tg-admin quickstart and entered a name for the project and package name (in this case it was wiki; creating wikis with web application development frameworks seems to be the Hello World of frameworks). Then I started the built-in webserver by running the script start-wiki.py, accessed it via http://localhost:8080/ and got the “Welcome to TurboGears” page.
  • I haven't done anything else with TurboGears, as I was mostly interested in the installation process.

01 May 2007

bulkanix? what the...

This should have been my first post…meh. bulkanix is a name coined by my friend, dbp. Bulkan + Linux fascination = bulkanix. He keeps on asking me to create my own Linux distro called bulkanix, very original idea…

29 Apr 2007

Ruby (and Rails)

I have this major project at uni: develop a web app following the whole SDLC, in a group of six. We (well, the team leader) decided that we should use Ruby on Rails. Me being the “Python junkie”, I wasn't interested in Ruby. As I have had no experience using Django or TurboGears I couldn't suggest them, whereas the team leader has had some experience with Rails, though to what extent I do not know.

Anyway, I've been looking into Ruby and it just seems like Python. Hashes and dictionaries. But Rails as a framework is great, to the extent that I understand it so far, which is not much. I just watched the first screencast, and each time something worked the screencaster said “oops” as if something didn't work. Ruby…well, reading Why's (Poignant) Guide to Ruby is, I admit, fun/different, and who can resist cartoon foxes who ramble on about Chunky Bacon? What else can I say?

More to come.

28 Apr 2007

AUC Python and Objective C Workshop

Thursday - Day 1;

It was 29°C today, very nice weather. I arrived in Sydney yesterday. The plane trip was good, considering I kept thinking it would be a 45-minute ride on a Mad Mouse, but it wasn't. Other than two or three “major” air pockets of turbulence, it was a good flight. Maybe watching Heroes also helped. (I did it!)

Yesterday I just came down to my hotel, checked in, left my bag and then went scouting for Cliftons Training Centre. It wasn't that hard to find: a train ride to Circular Quay Station, then just a couple of minutes of walking down the left side of George St. I did this because if I was going to get lost, I would rather have done it yesterday instead of today. I didn't get lost. The cool thing is that the centre is close to Sydney Harbour Bridge and the Opera House, so I walked a bit and took some photos with my N73. Then went back to my hotel.

The next day (today, May 3rd) I woke up at 7:30am. After I got ready I went out to Central Station. I left early as I didn't have any wireless connection in my hotel, so I went looking for a cafe with wireless next to the training centre. Chance had it I found a Starbucks, only to realise that to access the internet you need a prepaid card or a credit card, and I had neither. By the time I finished my Iced Cafe Latte it was time to head to the centre.

I walked into the centre, went up the lift (I hate the things, by the way), asked for directions to the room and stepped inside. Our tutor is James Bekkama, a PhD student at CSU. After all six students arrived, it started.

The usual introductions started, with everyone saying where they are from and their experience with Python and/or Objective-C. Me and another student were the only two with experience, but I was the only one new to the Mac scene. After the introductions we were asked to install Xcode if we hadn't already. I had already installed Xcode and even PyObjC. Eventually all of us were up and running (except for one person who did some weird thing with his Mac, moving /etc somewhere and aliasing it; Xcode did not like that, and he had to re-install Mac OS X!).

The first day was all Python, starting off with the basics of the language, such as it being dependent on indentation for code blocks, which is always a letdown for people coming from C/C++, Java, etc. Using the interactive shell, variable assignment, retrieving input from the user. Then we moved on to using Xcode for Python development. This was needed as PyObjC includes templates for Xcode which ease the development process. Setting up Xcode for Python had some steps to follow, not hard to do but easy to forget. After setting up Xcode we moved on to general language syntax. After this we started writing an Address Book script, starting off with a basic function-based script and then converting it into a cleaner object-oriented script.

Before I forget to mention, we had morning tea during lunchtime, and lunchtime during afternoon tea. It was good fun. After lunch we started on a CGI script to create a web interface for adding entries to the Address Book. This was interesting because I couldn't get Python CGI scripts to run under the Apache that is pre-installed on my MacBook; I was trying to get mod_python installed instead of just using the magic hash-bang thing!

If I'm honest, the first day was a bore, except for the CGI part. James knows quite a bit about CGI scripting (and Python and Objective-C in general), but I was more interested in the Objective-C bridge: learning to develop applications that look like “real” Mac applications but are programmed in Python, with no need to learn Objective-C.

Friday - Day 2;

On the second day I didn't wake up as early, as there are trains more or less every 2-8 minutes going around the City Loop in Sydney. I arrived at the training centre ten minutes early for a quick email and Digg.com check (10 minutes isn't enough for a Digg ;) ).

We started by configuring a new PyObjC project on Xcode. Again there were some steps we needed to follow. Then the fun began, with a simple example.

>>> import AppKit
>>> AppKit.NSBeep()

You can guess that all this does is make a “beep” sound; very useful, just like the classic “Hello World” example. No, seriously, it is an easy way of checking if PyObjC is correctly installed.

Keep in mind that I've had no previous experience with Objective-C. When I got my MacBook I said to myself, “might as well learn the native development framework and language”. So I looked at some tutorials on Objective-C and some screencasts on YouTube about Objective-C development. When I saw NSObject or NS* I was like “what is NS?”, then NIB files. It was just too many things to learn, and again university was like “here, more work for you, we don't really want you to have a social life”. So I abandoned my sojourn into Cocoa.

Now, thanks to this workshop, I know what NS and NIB are: NeXTSTEP and NeXT Interface Builder, respectively. Once everyone confirmed that PyObjC was installed and working, we started looking at the syntax mapping between Objective-C and Python. What I mean by syntax mapping is this. The way to call a method of an object in Objective-C is the following;

[aObject doThis:arg1 withThis:arg2];

and in PyObjC this would be translated to

aObject.doThis_withThis_(arg1, arg2)

I like this better. No square brackets or anything, except the underscores, which are needed to represent the colons in Objective-C. (Colons in Python are used to denote the beginning of a block.)

So converting any Objective-C API call to PyObjC is easy: whenever you see a colon, you replace it with an underscore in Python. Here is another example of creating NSString objects in PyObjC

myString = NSString.stringWithString_(u"Hello World")

instead of a very ugly looking Objective-C version

myString = NSString.alloc().initWithString_(u"Hello World")
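The colon-to-underscore rule is mechanical enough to capture in a tiny helper (purely illustrative; PyObjC does this mapping for you):

```python
def objc_selector_to_pyobjc(selector):
    """Map an Objective-C selector to its PyObjC method name,
    e.g. "doThis:withThis:" becomes "doThis_withThis_"."""
    return selector.replace(":", "_")
```

So any selector you see in the Cocoa docs can be translated to its PyObjC spelling at a glance.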

After this we had morning tea. Once back, we started working on an Aqua interface for the Address Book we created yesterday. I must admit Interface Builder is a very handy tool: you add your buttons, then create outlets and connect them VISUALLY (holding down ctrl?). We worked on this until lunch, then came back to more fun stuff: Bluetooth.

PyObjC can directly control Bluetooth via the NSBluetoothIO framework (if I remember correctly; Xcode is crashing for some reason), but we just used a framework called lightblue. To do a scan of Bluetooth devices, all you need is this,

>>> import lightblue
>>> lightblue.finddevices()

making sure Bluetooth is switched on. This call returns a list of all devices with Bluetooth switched on and in discoverable mode, with their MAC addresses. Using this information it is very easy to create some sort of Bluetooth proximity detector, which James did: continuously search for devices and see if the device you're looking for is in range, meaning it was detected.

Then it was all over and time to head back. Everyone said their goodbyes and nice-to-meet-yous. I made some good connections into the Mac development world.