Monday, December 26, 2011

Upgrading to Grails 2.0

With the recent release of Grails 2.0, I upgraded OpenArc's ILocker and E-Arc software tonight. I ran into a few hurdles along the way and thought sharing them here might help someone else in the future.

First, I ran into some dependency conflicts and had to add some new lines to grails-app/conf/BuildConfig.groovy:
runtime('edu.ucar:netcdf:4.2-min') {
    excludes 'slf4j-api', 'slf4j-simple'
}
runtime('org.apache.tika:tika-parsers:0.10') {
    excludes "commons-logging", "commons-codec"
}
runtime('org.xhtmlrenderer:core-renderer:R8') {
    excludes "itext", "commons-logging", "commons-codec"
}
In particular, I had lots of trouble with "commons-logging" and "slf4j".

Secondly, I'm using Java 7 (1.7.0_b147) and was getting the error "javac: target release 1.6 conflicts with default source release 1.7", so I had to add:

grails.project.source.level = 1.6
into grails-app/conf/BuildConfig.groovy as well.

Finally, and most perplexingly, I got everything running, but when I went to browse the application I just got a blank page: no error messages, nothing, just a blank page. Ugh. It turns out you must re-run:
grails install-templates
if you had installed the templates previously. It's documented in the upgrade notes, but it would have been nice to throw up a warning about this too.

JQuery Mobile for web-based forms applications

As a consulting company, OpenArc builds a fair number of web-based forms applications for customers across a wide range of industries. In the last several months, we've really taken to using jQuery Mobile (JQM) for many of these applications.

As is typical for applications of this type, a clean and usable interface is far more important to our customers than a sexy, flashy look and feel. JQM gives us an easy framework for producing a modern-looking UI, themed with the jQuery Mobile ThemeRoller and customized to match the client's brand requirements. Our clients are also very happy to know their applications can be accessed from a wide array of mobile devices.

Here are a few screenshots from just one of these applications:

First, a dashboard of sorts:

A listing page, with the ever-so-useful data-filter="true" attribute:

Finally, a meeting edit page showing a time picker control, still in progress:

We've had a few glitches along the way, but in general our clients are very pleased with a JQM-based UI. That makes us very happy too!

One issue we saw early on was the default ajax-based navigation not playing well on IE*, so for now we've disabled it via:
$.mobile.ajaxEnabled = false;
$.mobile.pushStateEnabled = false;
Normally, dialog boxes in JQM do not require full HTML pages, just HTML snippets (e.g. :layout => nil) as they are loaded via AJAX and work like jQuery UI dialogs.

However, when you set "$.mobile.ajaxEnabled = false", JQM will no longer load dialogs via ajax, EVEN if you set "data-ajax=true" on the dialog links. Ignoring "data-ajax=true" seems like a bug to me. Here's a patch to fix the problem:
diff --git js/ js/
index f85a491..181b9c9 100755
--- js/
+++ js/
@@ -1322,10 +1322,11 @@
var baseUrl = getClosestBaseUrl( $link ),

//get href, if defined, otherwise default to empty hash
- href = path.makeUrlAbsolute( $link.attr( "href" ) || "#", baseUrl );
+ href = path.makeUrlAbsolute( $link.attr( "href" ) || "#", baseUrl ),
+ isTargetDialog = $ "rel" ) === "dialog";

//if ajax is disabled, exit early
- if( !$.mobile.ajaxEnabled && !path.isEmbeddedPage( href ) ){
+ if( !$.mobile.ajaxEnabled && !isTargetDialog && !path.isEmbeddedPage( href ) ){
//use default click handling

My only hope at this point is to see an expanded set of controls/plugins, ideally such that we'd no longer need jQuery UI at all.

Friday, October 08, 2010

Grails and JCifs

JCIFS is an open source client library that implements the CIFS/SMB networking protocol in 100% Java. CIFS is the standard file sharing protocol on the Microsoft Windows platform.
As part of a project to provide schools and businesses with an open source solution to access their "My Documents" folder anytime/anywhere over the web, I recently had the pleasure of integrating JCIFS into my Grails application.

The obligatory screenshot:

I dropped the latest JCIFS jar file into my $GRAILS-APP/lib folder and began implementing the "My Documents" feature against a Samba server for starters. When I moved to a Windows 2008 server, everything fell apart: all operations started timing out. After some digging around in the rather extensive set of config options, I realized I needed the following in my Grails config file:
System.setProperty("jcifs.smb.client.dfs.disabled", "true");
Your environment may differ, but at the very least take a good look at the JCIFS configuration options.

Ok, so here's a simple example of removing a file:
  void removeFile(WorkspacePath p) {
      def ntlm = new NtlmPasswordAuthentication("", p.username, p.password);
      SmbFile file = new SmbFile(absoluteFilePath(p.url, p.path), ntlm);
      file.delete();
  }
Note: I pass "" as the first argument to NtlmPasswordAuthentication because the domain is part of p.username (e.g.
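If your environment instead wants the domain passed explicitly as the first argument, splitting the UPN-style name is trivial. A sketch in plain Java (class and method names are mine, not part of jCIFS):

```java
// Hypothetical helper: split a UPN-style name ("") into the
// (domain, user) pieces that NtlmPasswordAuthentication(domain, user, password)
// expects when the domain is not embedded in the username.
public class UpnSplitter {
    // returns { domain, user }; domain is "" when the name has no '@'
    static String[] split(String upn) {
        int at = upn.indexOf('@');
        if (at < 0) return new String[] { "", upn };
        return new String[] { upn.substring(at + 1), upn.substring(0, at) };
    }

    public static void main(String[] args) {
        String[] parts = split("");
        System.out.println(parts[0] + "|" + parts[1]); // prints "|jsmith"
    }
}
```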

One thing you need to make sure of is to always end directory paths with a "/"; otherwise you will get errors. Here's a more complicated example, an "eachFile" method that takes a closure as its final argument:
  public void eachFile(WorkspacePath p, Closure c) {
      println "eachFile ${p.url} - ${p.path}";
      def path = absoluteDirPath(p.url, p.path);
      def ntlm = new NtlmPasswordAuthentication("", p.username, p.password);
      SmbFile file = new SmbFile(path, ntlm);

      // are we dealing with a directory path or just a single file?
      if (!file.isDirectory()) {
[name:, file: file, path: file.canonicalPath,
                  inputStream: { return new SmbFileInputStream(file); },
                  outputStream: { return new SmbFileOutputStream(file); }]);
          return;
      }

      file.listFiles().each { f ->
          if (f.isDirectory()) return;
          if (f.isHidden()) return;
[name:, file: f, path: f.canonicalPath,
                  inputStream: { return new SmbFileInputStream(f); },
                  outputStream: { return new SmbFileOutputStream(f); }]);
      }
  }
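The trailing-slash rule above is easy to forget, so it's worth funneling every directory path through one helper. A minimal plain-Java sketch (names are mine):

```java
// jCIFS treats an smb:// URL without a trailing "/" as a file, so directory
// paths must always end with one. A tiny normalizer avoids the errors.
public class SmbPaths {
    static String dirPath(String path) {
        return path.endsWith("/") ? path : path + "/";
    }

    public static void main(String[] args) {
        System.out.println(dirPath("smb://server/share/docs"));  // slash added
        System.out.println(dirPath("smb://server/share/docs/")); // left alone
    }
}
```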
We've been quite pleased with JCIFS and how well it's been working in our Grails application. We're currently using 1.3.14 with the patches noted here. I just noticed that 1.3.15 is out, so I'm interested in trying it as soon as possible!

Friday, October 01, 2010

Grails and JackRabbit

Here's a brief overview of how I plugged JackRabbit, a fully conforming implementation of the Java Content Repository specification, into several of the Grails-based projects I've been working on recently.

Currently, I'm using JackRabbit for user editable page content. Perhaps overkill, but I have plans to leverage additional JackRabbit features down the road.

First off, there is a Grails JackRabbit plugin, but it looked rather old and unmaintained and had no real documentation, so I just rolled my own solution.

Ok, so first, drop the jackrabbit jars into your $PROJ/lib/ folder.
(~/src/ilocker) ls -1 lib/
An improved approach would be to add the appropriate directives to grails-app/conf/BuildConfig.groovy. But for now, this will work.

Next you'll need an appropriately configured JackRabbit repository.xml file. I configured JackRabbit with a PostgreSQL DbDataStore. A sample of my configuration can be found here.

So how do you get started? I created grails-app/services/ContentService.groovy, which starts out like this:
import org.springframework.beans.factory.InitializingBean;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import javax.jcr.Node;
import org.apache.jackrabbit.core.TransientRepository;

class ContentService implements InitializingBean {
    static scope = "singleton";
    def grailsApplication;
    Repository _repository;

    public void afterPropertiesSet() {
        def jcr = grailsApplication.config.jcr;
        _repository = new TransientRepository(jcr.repo.config, jcr.repo.home); "Configuring Content Service ... config=${jcr.repo.config}, home=${jcr.repo.home}";
    }
}
My grails-app/conf/Config.groovy file has the following entries:
jcr.repo.home = "/var/lib/ilocker"
jcr.repo.config = "/etc/ilocker/repository.xml"
So the line
_repository = new TransientRepository(jcr.repo.config, jcr.repo.home);
above wires everything up to use /etc/ilocker/repository.xml and to set ${rep.home} = /var/lib/ilocker. Make sure the tomcat user has appropriate access to /var/lib/ilocker when you put the site into production!

Getting JackRabbit to work the first time around can be a little dicey, because JackRabbit copies repository.xml to ${rep.home}/workspaces. If anything is misconfigured, it's easiest to just change repository.xml, delete ${rep.home}/workspaces, and try again. If you don't delete ${rep.home}/workspaces, your changes to repository.xml will have no effect (unless you create a new workspace). Take note!
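Since a stale ${rep.home}/workspaces silently shadows any repository.xml edits, it can be handy to script the reset. A cautious sketch in plain Java (the utility is mine, not JackRabbit API; it deletes data, so point it carefully):

```java
import;

// Recursively delete ${rep.home}/workspaces so JackRabbit re-copies the
// (possibly edited) repository.xml on the next startup. Destructive: only
// run this against a repository home you are willing to re-initialize.
public class ResetWorkspaces {
    static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null) {
            for (File c : children) deleteRecursively(c);
        }
        f.delete();
    }

    public static void main(String[] args) {
        // rep.home defaults to the value used in this post
        String home = args.length > 0 ? args[0] : "/var/lib/ilocker";
        File workspaces = new File(home, "workspaces");
        if (workspaces.exists()) {
            deleteRecursively(workspaces);
            System.out.println("deleted " + workspaces);
        }
    }
}
```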

Now to write content to our ContentService, I'm using:
  public void put(String controller, String action, String data) {
      Session session = _repository.login(new SimpleCredentials("username", "password".toCharArray())); "ContentService.put ${controller} ${action}";

      try {
          Node controllerNode = getControllerNode(session, controller);
          Node node = getActionNode(controllerNode, action);
          Calendar lastModified = Calendar.getInstance();

          node.setProperty("jcr:lastModified", lastModified);
          node.setProperty("jcr:mimeType", "text/html");
          node.setProperty("jcr:encoding", "utf-8");
          node.setProperty("jcr:data", data);
      } finally {
          session.logout();
      }
  }
Obviously, this completely ignores JackRabbit-level security. To read content in my controllers, I write code like this, for example:
class AdminController {

    def contentService;

    def index = {
        String content = contentService.get(controllerName, actionName);
        [ chtml: content ]
    }
}
And then in ContentService.groovy I have:
  public String get(String controller, String action) {
      Session session = _repository.login(new SimpleCredentials("username", "password".toCharArray()));
      String value; "ContentService.get ${controller} ${action}";

      try {
          Node controllerNode = getControllerNode(session, controller);
          Node actionNode = getActionNode(controllerNode, action);
          if (actionNode.hasProperty("jcr:data")) {
              value = actionNode.getProperty("jcr:data").getString();
          }
      } finally {
          session.logout();
      }

      return value;
  }

  private Node getControllerNode(Session session, String controller) {
      Node root = session.getRootNode();
      if (root.hasNode(controller))
          return root.getNode(controller);

      Node node = root.addNode(controller, "nt:folder");
      return node;
  }

  private Node getActionNode(Node parent, String action) {
      if (parent.hasNode(action)) {
          Node actionNode = parent.getNode(action);
          return actionNode.getNode("jcr:content");
      }

      Node actionNode = parent.addNode(action, "nt:file");
      Node content = actionNode.addNode("jcr:content", "nt:resource");
      return content;
  }
Again, punting on JackRabbit-level security. To preload my sites with default content, I wrote a simple Groovy program to load the repository. I put jackrabbit-standalone-2.1.1.jar into $HOME/.groovy/lib/ and then wrote a simple script, the heart of which is:
    _repository = new TransientRepository("/etc/ilocker/repository.xml", "/var/lib/ilocker/");

    Session session = _repository.login(
        new SimpleCredentials("username", "password".toCharArray()));

    try {
        File input = new File(args[0]);
        input.eachLine { line ->
            List words = line.tokenize('\t');
            println "Processing " + words[0] + "." + words[1];

            Node home = getHomeNode(session, words[0]);
            Node content = getContentNode(home, words[1]);

            // store std. attributes
            Calendar lastModified = Calendar.getInstance();
            content.setProperty("jcr:lastModified", lastModified);
            content.setProperty("jcr:mimeType", "text/html");
            content.setProperty("jcr:encoding", "utf-8");

            // store extended attributes
            content.setProperty("jcr:title", words[3]);

            // store content
            File data;
            if (words[2].startsWith("/")) data = new File(words[2]);
            else data = new File(scriptPath, words[2]);

            String jcrData = data.getText();
            content.setProperty("jcr:data", jcrData);
        }
    } finally {
        session.logout();
    }
The full script can be found here.

Well I hope you found this a useful overview of integrating JackRabbit into a Grails application. The only trouble I've had in production with the above setup is when I had:
    <SearchIndex class="org.apache.jackrabbit.core.query.lucene.SearchIndex">
        <param name="path" value="${rep.home}/repository/index"/>
        <param name="supportHighlighting" value="true"/>
    </SearchIndex>
in my repository.xml. With it, I would get periodic repository locking errors when Lucene indexing kicked in. Since I'm not doing any JCR searching, I just deleted all the Lucene search index nodes from my repository.xml.

This work was done for OpenArc, a Pittsburgh-based open source consulting firm with clients in Pittsburgh, Chicago, and D.C.

Wednesday, August 12, 2009

Updated NetCenter Screenshots

Ok, no reams of code in this post, just some recent screenshots of NetCenter, an Ajax-rich jQuery/Grails-based CRM I've been working on. Most of the icons below come from the Crystal Clear icon set on Wikimedia.

This first shot shows our TODO manager rollup/down side bar:

And the asset management module:

Who are those cute kids ;-) ?

And finally, the document management accordion panel for an account:

Seeing the product live is far more impressive: new-tab load speed, Yahoo map popups, click to call. But hopefully these screenshots give you a sense of the general UI layout of NetCenter. This is really the first time I've done a tab-oriented layout, but I thought it would be the best design for a web-based CRM, where you're jumping around a lot, with multiple ways to get to the same information, and don't want to lose your place.

Thursday, July 16, 2009

Document Management in NetCenter

Although our mid- to long-term plans for NetCenter365 include SharePoint and Alfresco integration, we currently provide a more streamlined, account-oriented document management capability within NetCenter that we think may better serve some organizations.

Documents in NetCenter are attached to customer records or accounts. Here's a screenshot:

On the backend, I created a C++/FUSE based filesystem. When you mount it you see a list of customer names as directories, under which documents attached to the accounts are found. This metadata is stored in the NetCenter database while the actual file contents are simply stored in a backing ext3 filesystem. This way it's easy to backup and restore, replicate, etc. Here's a snippet from account_node::readdir()
 int account_node::readdir(void *buf, fuse_fill_dir_t filler, off_t offset, struct fuse_file_info *fi)
 {
     filler(buf, ".", NULL, 0);
     filler(buf, "..", NULL, 0);

     pqxx::connection db(connect_string());
     pqxx::nontransaction work(db);
     pqxx::result result = work.exec("SELECT name,id,trunc(date_part('epoch',last_updated)),path FROM document where account_id=" + id());

     std::string did; long lctm; std::string rpath;
     for (pqxx::result::const_iterator r = result.begin(); r != result.end(); ++r)
     {
         filler(buf, r[0].c_str(), NULL, 0);
         did = r[1].c_str();
         r[2].to(lctm);
         rpath = r[3].c_str();

         std::string path = _path + "/" + r[0].c_str();
         _filesystem->set_attributes(path, attributes(did, lctm, rpath));
     }

     return 0;
 }
Whereas the code to read the actual file contents looks something like this:
 int poi_node::open(struct fuse_file_info *fi)
 {
     std::string fpath = full_path();

     int res = ::open(fpath.c_str(), fi->flags);
     if (res == -1)
         return -errno;

     return 0;
 }
With the virtual filesystem mounted, we simply serve it up via Apache WebDAV, and since the document metadata lives in the NetCenter database, it's very easy to provide the frontend UI via Grails.

As far as the frontend goes, one big complaint we've heard about other document management solutions is how confusing it is for some users to download a file, find it on their hard drive, edit it, go back to their browser, and upload a new version. That's a very frustrating set of steps for many users.

We built a very simple Jetpack-based extension for Firefox that registers a "webdav://" protocol handler and passes such links off to OpenOffice, which already knows how to handle them properly, so there is no downloading, finding, editing, and re-uploading. OpenOffice saves the document directly back to our Apache WebDAV server that sits on top of the NetCenter virtual filesystem discussed above.

For Internet Explorer, we wrote a small C# based protocol handler that does almost the same thing but handles Microsoft Word or OpenOffice. Not quite as nice as the Firefox solution, but we can push out the MSI via AD group policy.
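The core of either handler is just rewriting the URL scheme before handing the link to the office suite. A sketch of that translation (the scheme mapping and example host are my assumptions, not the shipped handler):

```java
public class WebdavUrl {
    // Map a custom "webdav://" link back to the plain HTTP URL that the
    // office suite should open directly against the WebDAV server.
    static String toHttp(String link) {
        return link.startsWith("webdav://")
                ? "http://" + link.substring("webdav://".length())
                : link;
    }

    public static void main(String[] args) {
        System.out.println(toHttp("webdav://crm.example.com/docs/plan.odt"));
    }
}
```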

Tuesday, July 14, 2009

Grails, jQuery, and Yahoo Maps

I recently completed a new NetCenter365 feature that uses Yahoo Maps to show the location of all current customers. Here's a screenshot:

I really appreciate Yahoo's "Maps Web Services" which include a helpful geolocation service.

First, we map our HQ with:
 var map = new YMap(document.getElementById('map'));
map.addTypeControl();
map.addZoomLong();
map.addPanControl();

var hq = new YGeoPoint(HQ.latitude, HQ.longitude);
map.drawZoomAndCenter(hq, 11);
Then we use grails and jquery to loop through every customer and fire off the following ajax requests:
 var url = '${createLink(controller: "location", action: "latlong")}' + "/";

<g:each var="account" in="${accounts}">
    $.getJSON(url + ${}, function(x) {
        var pt = new YGeoPoint(x.latitude, x.longitude);
        var m = new YMarker(pt);
        map.addOverlay(m);
    });
</g:each>
The heart of the location/latlong method uses Yahoo's geolocation services. Here's a snippet of the groovy code:
 def geocoder = "${APPID}"
if (account.line1) geocoder += "&street=" + URLEncoder.encode(account.line1);
if ( geocoder += "&city=" + URLEncoder.encode(;
if (account.state) geocoder += "&state=" + account.state;
if ( geocoder += "&zip=" +;

def xml = geocoder.toURL().text
def records = new XmlParser().parseText(xml);
location.latitude = records.Result[0].Latitude.text()
location.longitude = records.Result[0].Longitude.text()
Performance-wise, the map pops up quite quickly and the markers appear in rapid succession. This is aided by caching lat/long info to minimize geolocation requests.
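That caching can be as simple as memoizing results per address string. A sketch (the class is mine, with the geocoder stubbed as a pluggable function so the real Yahoo request can be swapped in):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Memoize geocoding results by address so each unique address costs at most
// one web-service round trip.
public class GeoCache {
    private final Map<String, double[]> cache = new ConcurrentHashMap<>();
    private final Function<String, double[]> geocoder;

    GeoCache(Function<String, double[]> geocoder) { this.geocoder = geocoder; }

    double[] latLong(String address) {
        return cache.computeIfAbsent(address, geocoder);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        GeoCache cache = new GeoCache(addr -> {
            calls[0]++;                             // stand-in for the real geocoder call
            return new double[] { 40.44, -79.99 };  // hypothetical lat/long
        });
        cache.latLong("100 Main St, Pittsburgh PA");
        cache.latLong("100 Main St, Pittsburgh PA"); // served from cache
        System.out.println("geocoder calls: " + calls[0]); // prints 1
    }
}
```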

Monday, June 01, 2009

CRM Integration via LDAP

Our vision for NetCenter is to facilitate and drive a customer centric view of day to day activities within an organization. Whether you're in sales, engineering, administration, or elsewhere, we want to help organize your documents, emails, phone calls, projects, and other day to day activities in a customer centric way.

We also want a platform that's easy to use and integrates well into existing business systems.

As part of this effort, I recently completed exposing NetCenter contacts to mail clients like Zimbra, Outlook, and Thunderbird via a custom OpenLDAP backend.

All of these mail clients can leverage LDAP based address books, so we expose NetCenter contacts via LDAP so that you can quickly and easily send emails to prospective and current customers. Here's a screenshot from Outlook:

And Zimbra:

There's no real documentation on how to create a custom backend, but the back-null and back-shell backends are pretty good places to start.

Friday, April 24, 2009

Incoming call screen pops with sipX, rabbitMQ, and Adobe Air

I just finished the first beta of NetCenterPlus, an Adobe Air HTML-based tray application that presents screen pops for incoming calls on sipX systems. NetCenterPlus is part of NetCenter, a CRM/business productivity solution from NetServe365.

Here's a screenshot of the notification window on an incoming call.

On the backend, I implemented a solution very similar to the one I did for Integrating sipX with ejabberd. There are two database triggers installed in the SIPXCDR database; the second is a PostgreSQL plperlu trigger that uses Net::Stomp to send a message to our RabbitMQ server, telling the user registered for the destination extension the caller id of the incoming call. Not many lines of code:
CREATE FUNCTION cse_ncplus_change() RETURNS trigger AS $end$
use Net::Stomp;

my ($domain, $uid, $pwd) = @{$_TD->{args}};
my $msg = $_TD->{"new"}{"from_id"};
my $user = $_TD->{"new"}{"username"};

my $stomp = Net::Stomp->new({hostname=>'', port=>'61613'});
$stomp->connect({login=>$uid, passcode=>$pwd});

# deliver to the per-user queue
$stomp->send({destination=>"/$domain/$user", body=>$msg});
$stomp->disconnect;

return undef;
$end$ LANGUAGE plperlu;
The other plpgsql trigger looks up the destination extension and munges up a nice looking incoming call number. That exercise is left to the reader.
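Since that exercise is left to the reader anyway, here is one plausible munging, purely as a sketch (the format choice is mine, not what the production trigger does): strip non-digits, drop a leading country code, and format ten-digit US numbers.

```java
public class CallerIdFormat {
    // One plausible "nice looking" form for a raw caller id: keep digits only,
    // drop a leading "1", and render ten digits as (xxx) xxx-xxxx.
    static String pretty(String raw) {
        String digits = raw.replaceAll("\\D", "");
        if (digits.length() == 11 && digits.startsWith("1")) digits = digits.substring(1);
        if (digits.length() != 10) return raw; // leave anything unusual alone
        return String.format("(%s) %s-%s",
                digits.substring(0, 3), digits.substring(3, 6), digits.substring(6));
    }

    public static void main(String[] args) {
        System.out.println(pretty("+1-412-555-0123")); // prints (412) 555-0123
    }
}
```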

Now we've got a message on a per-user queue for every incoming call on our sipX system. So what next?

I wanted an easy to deploy, cross-platform tray application that would listen for incoming messages and present the screen pop. I looked at Mozilla Prism, Silverlight, and Adobe Air. Air was not my first choice to be honest, but the Prism project seems to have stagnated afaict, and Silverlight 2.0 on Linux doesn't look like it will be out anytime soon, so I went with Air. After spending some time with the product, I've definitely grown in my appreciation of its ease of use and design. It's really nice to be able to leverage existing web development skills to build these types of applications.

So what does the Air application do? First off, I used air.Socket and JavaScript to implement a STOMP client.

First the connection code:
air.trace("setting up MessageQueue...");
this.socket = new air.Socket();
var self = this;

this.socket.addEventListener(air.Event.CONNECT, function(event) {
    self.sendCommand("CONNECT\nlogin:guest\npasscode:" + password + "\n\n");
    self.state = self.STATE.CONNECT;
});
this.socket.connect(this.server, 61613);
The main listener loop looks something like this:
this.socket.addEventListener(air.ProgressEvent.SOCKET_DATA, function(event) {

    switch (self.state) {
    case self.STATE.CONNECT:
    case self.STATE.READY:
        var data =;
        var lines = data.split("\n");
        if (lines[0] == "MESSAGE" && lines.length > 5) {
            // pull out the headers and body, then show the screen pop
        }
    }
});
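Behind that loop, STOMP frame handling is just string splitting: a command line, header lines, a blank line, then the body. A minimal parser sketch in plain Java (the class is mine):

```java
import java.util.HashMap;
import java.util.Map;

// A STOMP frame is: COMMAND, "name:value" header lines, a blank line, then
// the body (NUL-terminated on the wire). This parses one decoded frame.
public class StompFrame {
    final String command;
    final Map<String, String> headers = new HashMap<>();
    final String body;

    StompFrame(String raw) {
        String[] parts = raw.split("\n\n", 2);
        String[] lines = parts[0].split("\n");
        command = lines[0];
        for (int i = 1; i < lines.length; i++) {
            int colon = lines[i].indexOf(':');
            headers.put(lines[i].substring(0, colon), lines[i].substring(colon + 1));
        }
        body = parts.length > 1 ? parts[1] : "";
    }

    public static void main(String[] args) {
        StompFrame f = new StompFrame("MESSAGE\ndestination:/queue/jsmith\n\n4125550123");
        System.out.println(f.command + " " + f.headers.get("destination") + " " + f.body);
    }
}
```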
So a NetCenterPlus user installs the application via a web page (yet to be prettied up!).

Then, the user enters their NetCenter username and password (again, this dialog needs some UI love. Did I mention I'm not a graphic artist?):

You can read and write to a local encrypted store in Air via functions like this:
readFromLocalStore = function(key, defstr) {
    var item = air.EncryptedLocalStore.getItem(key);
    if (item == null) return defstr;
    return item.readUTFBytes(item.length);
};

saveToLocalStore = function(key, value) {
    var bytes = new air.ByteArray();
    bytes.writeUTFBytes(value);
    air.EncryptedLocalStore.setItem(key, bytes);
};
When NetCenterPlus receives an incoming screen pop, we use the DOM to set the incoming call caller id, then we do an authenticated HTTP GET on the NetCenter REST based API to lookup the contact's name. The code looks something like this:
    var cpnum = this.document.getElementById("callpop_number");
    cpnum.innerHTML = fnum;

    var url = "http://" + server + "/api/contact/byPhone/" + callnum;
    var request = new air.URLRequest(url);

    var loader = new air.URLLoader();
    var self = this;
    var loader_success = true;

    loader.addEventListener(air.IOErrorEvent.IO_ERROR, function(error) {
        air.trace("Failed to load: " + url);
        loader_success = false;
    });

    loader.addEventListener(air.Event.COMPLETE, function(event) {
        var data = new air.URLVariables(;
        var cpname = self.document.getElementById("callpop_name");
        cpname.innerHTML =;
    });

    loader.load(request);
You set up the login credentials in Air with a single line of code:
air.URLRequestDefaults.setLoginCredentialsForHost(this.server, username, password);
The NetCenter CRM is a Grails application that exposes a REST-based API via basic authentication tied into Active Directory. To set this up, I added the following lines to grails-app/conf/Config.groovy:
jsecurity.filter.config = """
authcBasic = org.jsecurity.web.filter.authc.BasicHttpAuthenticationFilter
authcBasic.applicationName = NetCenter API

/api/** = authcBasic
"""
The contact controller "byPhone" method that the Air application uses is very simple:
    def byPhone = {
        def contacts = Contact.withCriteria {
            eq('active', true)
            eq('', session.lid)
            or {
                // phone-number matching criteria elided
            }
        }

        return [ 'contacts': contacts ]
    }
Well that about describes how all these parts come together. We plan on adding a lot more functionality to the NetCenterPlus Air application and thus far I'm pretty pleased with the Air platform.

Tuesday, April 07, 2009

NetCenter Click to Call

I just completed adding "Click to Call" functionality to NetCenter. Since this is a bit difficult to demonstrate with screenshots, I made a YouTube video instead.

I implemented "Click to Call" using Aloha and RabbitMQ, testing the solution on sipX.

I have a grails PlaceCallService that sets up a connection to our RabbitMQ instance like this:
  static transactional = false;
  ConnectionParameters connectionParameters;
  ConnectionFactory connectionFactory;
  ConfigObject config = ConfigurationHolder.config;

  MessageQueueService() {
      connectionParameters = new ConnectionParameters();
      connectionFactory = new ConnectionFactory(connectionParameters);
  }

The actual message publish function looks something like this:
  def publish(message) {
      try {
          Connection conn = connectionFactory.newConnection(;
          Channel ch = conn.createChannel();

          ch.basicPublish("", config.placeCall.routingKey, null, message.getBytes());

          ch.close();
          conn.close();
      }
      catch (Exception e) {
          log.error("Main thread caught exception: " + e);
          return false
      }

      return true
  }

Then, in a "third-party call initiator daemon", I unpack the message and use the Aloha stack to place the call:
        try {
            // bean lookups and leg creation follow Aloha's ThirdPartyCall sample
            OutboundCallLegBean outboundCallLegBean = (OutboundCallLegBean) context.getBean("outboundCallLegBean");
            CallBean callBean = (CallBean) context.getBean("callBean");

            // create two call legs
            String callLegId1 = outboundCallLegBean.createCallLeg(URI.create(from), URI.create(to1));
            String callLegId2 = outboundCallLegBean.createCallLeg(URI.create(from), URI.create(to2));

            // join the call legs
            System.out.println(String.format("connecting %s and %s in call...", callLegId1, callLegId2));
            System.out.println(callBean.joinCallLegs(callLegId1, callLegId2));
        } catch (Exception e) {
            e.printStackTrace();
        }
This chunk of code is based on the helpful Third Party Call sample from Aloha's subversion repository.

Anyway, it's working well, consumes minimal resources on the web server (it just posts a message to the "/placeCall/request" queue), and only took a few days to set up and deploy into production. Many thanks to the Aloha team, RabbitMQ folks, and sipX gurus.

Friday, March 06, 2009

NetCenter CRM

For the last month or so, I've been working on "NetCenter", a Grails 1.1-based CRM system that will integrate with sipX for call detail records; Zimbra or Exchange 2007 for email, calendaring, and time tracking; and Alfresco or SharePoint for document management.

I've really enjoyed using Grails: it's a real productivity booster, and I really appreciate the separation of concerns you get with an MVC framework.

I completed the sipX integration first and am now working with Exchange 2007 Web Services so that users can associate meetings with accounts and mark them billable/non-billable.

First a few screenshots, then a brief overview of the sipx integration. Note: in the screenshots below the account and contact information is randomly generated test data, while the call records are real records coming out of our production sipX server.

Call Manager:

Account Calls:

Contact Calls:

I used the Grails Quartz plugin and added a grails-app/jobs/CdrSyncJob.groovy that looks at licensees with registered sipX servers and then queries each sipX instance for call detail records that have not yet been processed.

I wanted call detail report generation to be as fast as possible, so the CdrSyncJob looks up the sipX callee and caller phone numbers against the contact and licencedUser tables, writes a new "call" record into the NetCenter database, and marks the sipX call record as having been processed so it can be ignored the next time the job runs. Now, whenever anyone wants to view all calls made to any contact within a certain account, it's a simple database query with a few joins; it doesn't involve any phone number normalization, determining whether a call is related to a known contact, ignoring interoffice calls, or figuring out call direction.
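The up-front normalization that makes those reporting queries cheap can be as simple as reducing every number to a canonical digit string at sync time. A plain-Java sketch (the name and format choice are mine):

```java
public class PhoneKey {
    // Canonicalize a dialed/recorded number so CDR rows and contact rows can
    // be matched with a plain equality join: digits only, last ten kept.
    static String canonical(String number) {
        String digits = number.replaceAll("\\D", "");
        return digits.length() > 10 ? digits.substring(digits.length() - 10) : digits;
    }

    public static void main(String[] args) {
        // differently formatted renderings of the same number compare equal
        System.out.println(canonical("1 (412) 555-0123").equals(canonical("412.555.0123")));
    }
}
```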

Here are a few snippets from CdrSyncJob. First, the execute() method:
def execute() {

    if (Environment.current == Environment.DEVELOPMENT) return

    def licensees = Licensee.withCriteria {
        eq("active", true)
    }

    licensees.each { syncCdrs(it); }
}
Then syncCdrs begins with some Groovy SQL like this:
   def cdr = Sql.newInstance("jdbc:postgresql://${licensee.sipHost}/SIPXCDR", "username", "password", "org.postgresql.Driver")
cdr.eachRow("select * from view_call_records A, cdrs_sync B where and NOT(B.done)")
Hmmm, I guess I should point out that view_call_records and cdrs_sync are custom tables. Here's the SQL:
CREATE VIEW view_call_records as
select id, SUBSTRING(caller_aor FROM '.*<sip:(.*)@.*>.*') as caller,
LTRIM(LTRIM(SUBSTRING(callee_aor FROM '.*<sip:(.*)@.*>.*'), '8'), '1') as callee,
connect_time as start_time,
to_char(cdrs.end_time-cdrs.connect_time, 'MI') AS minutes,
to_char(cdrs.end_time-cdrs.connect_time, 'SS') as seconds
from cdrs where cdrs.termination != 'F' and cdrs.connect_time IS NOT NULL;

CREATE TABLE cdrs_sync (
    id integer PRIMARY KEY,
    done boolean DEFAULT FALSE
);
Anyway, the rest of syncCdrs is just about ignoring interoffice calls or calls from contacts we don't have on record, then adding new entries to the NetCenter call table:
new Call(callDirection: direction, callId:, contact: contact, dateStarted: it.start_time, minutes: it.minutes, seconds: it.seconds, licensee: licensee, owner: owner).save();
and marking the call as processed in the cdrs_sync table.

Next time I get a chance to blog, I hope to show the Exchange integration and some jQuery snippets. jQuery has been a big productivity booster as well. Web development has come a long way!

Tuesday, January 13, 2009

Integrating sipX with ejabberd

I recently completed integrating our sipX-based voip platform with our ejabberd XMPP server, so that users can see when others are on the phone. There are a lot of similar integrations that people have done with Asterisk using its AMI API, but I haven't found anything similar for sipX yet, so we rolled our own for now. While it's not terribly exciting, here's a screenshot of what it looks like when someone is on the phone:

The solution I came up with involves three parts. First, I set up a clustered RabbitMQ server (an open source implementation of AMQP). I plan on using it to facilitate a loosely coupled, event-driven architecture for integrating multiple open source applications. I'm pretty happy with RabbitMQ thus far; about the only complaint I have is that it doesn't have any message tracing capabilities right now (version 1.5.0), which made it more difficult to debug my client-side code. I'm also hoping that sometime soon we start seeing Debian packages for Python/Perl AMQP libraries. For now, I'm using Net::Stomp and the RabbitMQ STOMP adapter, which seemed like the most stable, easily deployed client-side solution.

On the XMPP server side, I created an erlang module that acts as a message consumer. Each virtual host in our ejabberd server listens on a separate queue for presence messages generated by the sipX side and sends out XMPP presence updates to online sessions.

After getting the RabbitMQ Erlang client library installed, here's the code I used to connect and set up my consumer:
Connection = amqp_connection:start(Uname, Pwd, ""),
Channel = amqp_connection:open_channel(Connection),
Qname = list_to_binary("/" ++ Host ++ "/presence/phone"),
Q = lib_amqp:declare_queue(Channel, Qname),
lib_amqp:bind_queue(Channel, <<"">>, Q, Qname),
lib_amqp:subscribe(Channel, Q, self(), false),

Then I created a handle_info function that looks like this:

handle_info({ {'basic.deliver', DeliveryTag, _, _, _, _ },
              {content, ClassId, Properties, PropertiesBin,
               [Payload]} = Info}, State) ->

    %% Message processing here, then send out the XMPP presence update...
    BroadcastPresence = fun({U, S, R}) ->
        Dest = jlib:make_jid(U, S, R),
        ejabberd_router:route(FromJID, Dest, Presence)
    end,
    Sessions = ejabberd_sm:get_vh_session_list(Host),
    lists:foreach(BroadcastPresence, Sessions),
    {noreply, State};
Now, on the sipX side, things are a bit uglier, and when I have more time I'd like to rework this end. For now, I created a PL/pgSQL AFTER trigger on the SIPXCDR.call_state_events table that handles new call state events ('S' and 'E' event_types, to be specific). For every call, this trigger inserts new rows into a new cse_summary table I created: one row when the call is set up and one at call termination, and it does this for each internal user. If the call involves two internal folks, you end up with 4 rows; if one side is external, you end up with only 2 rows. The trigger also looks up the XMPP jid for the extension and records that in the generated cse_summary rows.

When a row is created in the cse_summary table, a separate PL/Perl AFTER trigger uses Net::Stomp to generate a call state event message for the RabbitMQ cluster.

Here's what the PL/Perl trigger looks like:
# Connect to the RabbitMQ STOMP adapter
my $stomp = Net::Stomp->new({hostname=>'',port=>'61613'});
$stomp->connect({login=>$uid, passcode=>$pwd});

# Build the "domain,event_type,jid" payload from the newly inserted row
my $msg = sprintf("%s,%s,%s", $domain,
    $_TD->{"new"}{"event_type"}, $_TD->{"new"}{"jid"});

# Publish to this domain's phone-presence queue
$stomp->send({destination=>"/$domain/presence/phone", body=>($msg)});
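Since the payload is just a tiny CSV string, it's easy to sanity-check the wiring end to end. Here's a quick shell sketch of how a consumer picks it apart - note the presence "show" values are my own illustration, the real mapping lives in the Erlang module:

```shell
#!/bin/bash
# Payload format produced by the PL/Perl trigger: "domain,event_type,jid"
msg="example.com,S,alice@example.com"

# Split on commas into the three fields
IFS=, read -r domain event jid <<< "$msg"

# 'S' = call setup, 'E' = call end (show values here are hypothetical)
case "$event" in
  S) show="dnd" ;;
  E) show="available" ;;
esac

echo "$jid is now $show"
```

Running it prints `alice@example.com is now dnd`.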
Now, I'm just creating some Debian packages and RPMs (for the sipX side), documenting how it works, and thinking about our next integration.

Saturday, December 06, 2008

Load Balance Clustered Ejabberd Servers

I recently completed setting up our XMPP infrastructure. After spending some time reviewing the current capabilities of jabberd2, openfire, djabberd, and ejabberd, I decided that ejabberd had the best combination of features for our needs: virtual hosting, LDAP integration, clustering support, shared rosters, and reasonably good documentation!

So after setting up the first ejabberd node (im1) with a test virtual host and working LDAP integration, I set up our second ejabberd node (im2) by copying /etc/ejabberd/ejabberd.cfg to the 2nd node, then running through the following steps:

  • First, launch an Erlang shell as the ejabberd user, with erl -sname ejabberd@im2 -mnesia extra_db_nodes "['ejabberd@im1']" -s mnesia

  • Then, to replicate all the ejabberd tables in my configuration, I ran:
    mnesia:change_table_copy_type(schema, node(), disc_copies).
    mnesia:add_table_copy(offline_msg, node(), disc_only_copies).
    mnesia:add_table_copy(privacy, node(), disc_copies).
    mnesia:add_table_copy(sr_group, node(), disc_copies).
    mnesia:add_table_copy(sr_user, node(), disc_copies).
    mnesia:add_table_copy(roster, node(), disc_copies).
    mnesia:add_table_copy(last_activity, node(), disc_copies).
    mnesia:add_table_copy(disco_publish, node(), disc_only_copies).
    mnesia:add_table_copy(pubsub_node, node(), disc_copies).
    mnesia:add_table_copy(pubsub_state, node(), disc_copies).
    mnesia:add_table_copy(pubsub_item, node(), disc_only_copies).
    mnesia:add_table_copy(session, node(), ram_copies).
    mnesia:add_table_copy(s2s, node(), ram_copies).
    mnesia:add_table_copy(route, node(), ram_copies).
    mnesia:add_table_copy(iq_response, node(), ram_copies).
    mnesia:add_table_copy(caps_features, node(), ram_copies).
    mnesia:add_table_copy(motd_users, node(), disc_copies).
    mnesia:add_table_copy(motd, node(), disc_copies).
    mnesia:add_table_copy(acl, node(), disc_copies).
    mnesia:add_table_copy(config, node(), disc_copies).

After you quit the shell, you'll most likely need to move the resulting Mnesia database files to the ejabberd user's $HOME folder.

Once both nodes were working correctly, I set up an LVS-DR load balancer with ldirectord. This proved to be rather straightforward.

First, the realservers (each ejabberd instance, im1 and im2) had to be configured with a local interface that listens on the load balancer's VIP (virtual IP). The most reliable way I found to set this up was a simple
ip addr add $VIP/32 brd + dev lo label lo:vip
in /etc/rc.local, with $VIP standing in for the actual virtual IP address.

Then I set up a /etc/sysctl.d/60-ipvs-arp-rules.conf with
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
On Ubuntu (and I think Debian as well), you must also tweak /etc/sysctl.d/10-network-security.conf to disable source address validation.
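Concretely, disabling source address validation in that file means turning off reverse-path filtering - a sketch of the change (the exact keys shipped in this file can vary by release):

```
# /etc/sysctl.d/10-network-security.conf
# LVS-DR realservers must accept packets for the VIP arriving on lo,
# so reverse-path (source address) validation has to be off
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
```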
That's pretty much it for the realservers.

Setting up the load balancer involves setting up the VIP in /etc/network/interfaces
auto eth0:vip0
iface eth0:vip0 inet static
Then set up ldirectord (apt-get install ldirectord) in /etc/ with
# Global Directives

real= gate
real= gate
It'd be really cool if there were some kind of builtin healthcheck call you could make on an ejabberd node, but alas there isn't, so I just send it a string of garbage ("junk" to be exact) and look for a known string in the XMPP response. Seems to be working OK thus far...
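For reference, with placeholder addresses filled in (192.0.2.x stands in for the real VIP and realserver IPs, which aren't shown above), the health-checked stanza in my ldirectord config looks roughly like this - a sketch, so check ldirectord(8) for the exact service/checktype keywords your version supports:

```
# Global Directives
checktimeout=10
checkinterval=15

# XMPP client port behind the VIP; "gate" selects LVS-DR forwarding
virtual=192.0.2.10:5222
        real=192.0.2.11:5222 gate
        real=192.0.2.12:5222 gate
        protocol=tcp
        checktype=negotiate
        service=simpletcp
        request="junk"
        receive="stream"
```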
Monday, November 03, 2008

Alfresco on EC2

Over the weekend, I created an Alfresco Labs 3b AMI on EC2, Amazon's cloud computing platform.

I took one of the Alestic Ubuntu 8.10 base images, added my own ec2-tools_0.1.deb package, and built out an AMI with Labs 3b running on the system tomcat5.5 instead of the bundled Tomcat instance. That part was far more brutal than anything EC2-related. You have to make quite a few changes to the Catalina policy to get things working.

I made an Alfresco package that installs an /etc/tomcat5.5/policy.d/60alfresco.policy file that looks like this:
grant {
  permission java.lang.RuntimePermission "*";
  permission java.lang.RuntimePermission "accessDeclaredMembers";
  permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
  permission java.util.PropertyPermission "alfresco.jmx.dir", "read,write";
  permission java.util.PropertyPermission "webapp.root", "read,write";
  permission java.io.FilePermission "/usr/share/java/servlet-api-2.4.jar", "read";
};

grant codeBase "file:${catalina.home}/bin/tomcat-juli.jar" {
  permission java.io.FilePermission "/usr/share/tomcat5.5/webapps/alfresco/WEB-INF/classes/", "read";
  permission java.io.FilePermission "/var/lib/tomcat5.5/temp/-", "read,write,delete,execute";
  permission java.io.FilePermission "/var/lib/tomcat5.5/temp", "read,write,execute";
};
All of my AMIs have a script that can quickly upload an updated AMI. It looks something like this:

umount /var/local
ec2-bundle-vol -u $ACCOUNTID -c $CERTFILE -k $KEYFILE -p ubuntu-8.10-appsuite-1.0-20081101 --ec2cert /etc/ec2/amitools/cert-ec2.pem -r i386
ec2-upload-bundle -b -m /tmp/ubuntu-8.10-appsuite-1.0-20081101.manifest.xml -a $ACCESSKEY -s $SECRETKEY

This made life a bit easier as I made changes to the image and uploaded them. I unmount /var/local at the start of the script because that's where I mount my EBS volume.

Monday, October 20, 2008

Samba4 on Ubuntu Intrepid

Here's a brief rundown of my experiences with Samba4 on Ubuntu Intrepid.

I first tried the samba4 package in the Ubuntu Intrepid repositories, but when you do a
./setup/provision --domain=azulogic --adminpass=fubar --server-role='domain controller'
you get a Python stack dump with
IOError: [Errno 2] No such file or directory: '/usr/etc/samba/smb.conf'
I tried creating a "/usr/etc/samba" folder (though the distaste was high), but then proceeded to get further file path errors.

So, next I switched to the Debian experimental package. This worked much better.

After you apt-get install the package, you'll have to fix up /etc/init.d/samba4 - it's still looking for smbd (the samba3 daemon), whereas in samba4 it's now /usr/sbin/samba.

So, I just did a
ln -s /usr/sbin/samba /usr/sbin/smbd
to get it to work.

After getting krb5, DNS, and samba ready to go, I tried to join a Linux machine running winbind 2:3.2.3-1ubuntu3 to the domain. No luck though:
(~) net ads join -U Administrator
Enter Administrator's password:
Failed to join domain: failed to lookup DC info for domain 'AZULOGIC.COM' over rpc: NT_STATUS_INTERNAL_ERROR
How do you fix this? One way is to run in the "single" process model mode. I changed /etc/init.d/samba4 to launch the samba daemon with -M single. Then you see a nice:
(~) net ads join -U Administrator
Enter Administrator's password:
Using short domain name -- AZULOGIC
Joined 'LTS' to realm '
One final note: as far as I can tell, the Debian version (4.0.0alpha6-GIT-7fb9007) crashes when someone tries to change a password. So beware!