Saturday, December 06, 2008

Load Balance Clustered Ejabberd Servers

I recently completed setting up our XMPP infrastructure. After spending some time reviewing the current capabilities of jabberd2, openfire, djabberd, and ejabberd, I decided that ejabberd had the best combination of features for our needs: virtual hosting, LDAP integration, clustering support, shared rosters, and reasonably good documentation!

So after setting up the first ejabberd node (im1) with a test virtual host and working LDAP integration, I set up our second ejabberd node (im2) by copying /etc/ejabberd/ejabberd.cfg to the second node, then running through the following steps:

  • First, launch an Erlang shell as the ejabberd user:
    erl -sname ejabberd@im2 -mnesia extra_db_nodes "['ejabberd@im1']" -s mnesia

  • Then, to replicate all the ejabberd tables in my configuration, I ran:
    mnesia:change_table_copy_type(schema, node(), disc_copies).
    mnesia:add_table_copy(offline_msg, node(), disc_only_copies).
    mnesia:add_table_copy(privacy, node(), disc_copies).
    mnesia:add_table_copy(sr_group, node(), disc_copies).
    mnesia:add_table_copy(sr_user, node(), disc_copies).
    mnesia:add_table_copy(roster, node(), disc_copies).
    mnesia:add_table_copy(last_activity, node(), disc_copies).
    mnesia:add_table_copy(disco_publish, node(), disc_only_copies).
    mnesia:add_table_copy(pubsub_node, node(), disc_copies).
    mnesia:add_table_copy(pubsub_state, node(), disc_copies).
    mnesia:add_table_copy(pubsub_item, node(), disc_only_copies).
    mnesia:add_table_copy(session, node(), ram_copies).
    mnesia:add_table_copy(s2s, node(), ram_copies).
    mnesia:add_table_copy(route, node(), ram_copies).
    mnesia:add_table_copy(iq_response, node(), ram_copies).
    mnesia:add_table_copy(caps_features, node(), ram_copies).
    mnesia:add_table_copy(motd_users, node(), disc_copies).
    mnesia:add_table_copy(motd, node(), disc_copies).
    mnesia:add_table_copy(acl, node(), disc_copies).
    mnesia:add_table_copy(config, node(), disc_copies).

    After you quit the shell, you'll most likely need to move the resulting Mnesia database files to the ejabberd user's $HOME folder.

    Once both nodes were working correctly, I set up an LVS-DR load balancer with ldirectord. This proved to be rather straightforward.

    First, the realservers (each ejabberd instance, im1 and im2) had to be configured with a local interface that listens on the load balancer's VIP (virtual IP). The most reliable way I found to set this up was with a simple
    ip addr add brd + dev lo label lo:vip
    in /etc/rc.local.
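Spelled out with a placeholder VIP (192.0.2.10 below is purely illustrative, not the real address), the rc.local line looks like:

```sh
# /etc/rc.local (sketch; 192.0.2.10 stands in for the actual VIP)
# Bind the VIP to loopback so this realserver accepts traffic for it,
# while the ARP sysctl tweaks keep it from advertising the address.
ip addr add 192.0.2.10/32 brd + dev lo label lo:vip
```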

    Then I set up a /etc/sysctl.d/60-ipvs-arp-rules.conf with
    net.ipv4.conf.eth0.arp_ignore = 1
    net.ipv4.conf.eth0.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    On Ubuntu (and I think Debian as well), you must also tweak /etc/sysctl.d/10-network-security.conf to disable source address validation.
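Concretely, that file is where Ubuntu turns on reverse-path filtering (source address validation), so I'm assuming the tweak amounts to:

```
# /etc/sysctl.d/10-network-security.conf (relevant lines, sketch)
# Reverse-path filtering drops packets addressed to the VIP on lo,
# which breaks LVS-DR realservers, so turn it off.
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
```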
    That's pretty much it for the realservers.

    Setting up the load balancer involves configuring the VIP in /etc/network/interfaces:
    auto eth0:vip0
    iface eth0:vip0 inet static
    Then setting up ldirectord (apt-get install ldirectord) in /etc/ with
    # Global Directives

    real= gate
    real= gate
    It'd be really cool if there were some kind of built-in health-check call you could make on an ejabberd node, but alas there isn't, so I just send it a string of garbage ("junk" to be exact) and look for a string in the XMPP response. Seems to be working OK thus far...
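For reference, here's a sketch of what such an ldirectord stanza can look like. The VIP, the realserver addresses, and the check strings are all illustrative assumptions, not the exact production values:

```
# ldirectord.cf (sketch; 192.0.2.10 and the 10.0.0.x hosts are placeholders)
checktimeout=10
checkinterval=15
quiescent=no

# XMPP client port, direct routing ("gate") to each realserver
virtual=192.0.2.10:5222
        real=10.0.0.1:5222 gate
        real=10.0.0.2:5222 gate
        scheduler=wlc
        protocol=tcp
        checktype=negotiate
        service=simpletcp
        request="junk"
        receive="stream:error"
```

An XMPP server should answer a junk string with a stream error, which is what the receive= line matches here; adjust it to whatever your ejabberd actually returns.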
    Monday, November 03, 2008

    Alfresco on EC2

    Over the weekend, I created an Alfresco Labs 3b AMI on EC2, Amazon's cloud computing platform.

    I took one of the Alestic Ubuntu 8.10 base images, added my own ec2-tools_0.1.deb package, and built out an AMI with Labs 3b running on the system tomcat5.5 instead of the bundled Tomcat instance. That part was far more brutal than anything EC2-related: you have to make quite a few changes to the Catalina security policy to get things working.

    I made an Alfresco package that installs an /etc/tomcat5.5/policy.d/60alfresco.policy file that looks like this:
    grant {
      permission java.lang.RuntimePermission "*";
      permission java.lang.RuntimePermission "accessDeclaredMembers";
      permission java.lang.reflect.ReflectPermission "suppressAccessChecks";
      permission java.util.PropertyPermission "alfresco.jmx.dir", "read,write";
      permission java.util.PropertyPermission "webapp.root", "read,write";
      permission java.io.FilePermission "/usr/share/java/servlet-api-2.4.jar", "read";
    };

    grant codeBase "file:${catalina.home}/bin/tomcat-juli.jar" {
      permission java.io.FilePermission "/usr/share/tomcat5.5/webapps/alfresco/WEB-INF/classes/", "read";
      permission java.io.FilePermission "/var/lib/tomcat5.5/temp/-", "read,write,delete,execute";
      permission java.io.FilePermission "/var/lib/tomcat5.5/temp", "read,write,execute";
    };
    All of my AMIs have a script that can quickly upload an updated AMI. It looks something like this:

    umount /var/local
    ec2-bundle-vol -u $ACCOUNTID -c $CERTFILE -k $KEYFILE -p ubuntu-8.10-appsuite-1.0-20081101 --ec2cert /etc/ec2/amitools/cert-ec2.pem -r i386
    ec2-upload-bundle -b -m /tmp/ubuntu-8.10-appsuite-1.0-20081101.manifest.xml -a $ACCESSKEY -s $SECRETKEY
    This made life a bit easier as I made changes to the image and uploaded them. I unmount /var/local at the start of the script as that's where I mount my EBS volume.

    Monday, October 20, 2008

    Samba4 on Ubuntu Intrepid

    Here's a brief rundown of my experiences with Samba4 on Ubuntu Intrepid.

    I first tried the samba4 package in the Ubuntu intrepid repositories, but when you do a
    ./setup/provision --domain=azulogic --adminpass=fubar --server-role='domain controller'
    you get a python stackdump with
    IOError: [Errno 2] No such file or directory: '/usr/etc/samba/smb.conf'
    I tried creating a "/usr/etc/samba" folder (though the distaste was high), but then proceeded to get further file path errors.

    So, next I switched to the Debian Experimental package. This worked much better.

    After you apt-get install the package, you'll have to fix up /etc/init.d/samba4: it's still looking for smbd (the Samba 3 daemon), whereas in Samba 4 the binary is now /usr/sbin/samba.

    So, I just did a
    ln -s /usr/sbin/samba /usr/sbin/smbd
    to get it to work.

    After getting krb5, dns, and samba ready to go, I tried to join a linux machine running winbind 2:3.2.3-1ubuntu3 to the domain. No luck though:
    (~) net ads join -U Administrator
    Enter Administrator's password:
    Failed to join domain: failed to lookup DC info for domain 'AZULOGIC.COM' over rpc: NT_STATUS_INTERNAL_ERROR
    How do you fix this? One way is to run in the "single" process model mode. I changed /etc/init.d/samba4 to launch the samba daemon with -M single. Then you see a nice:
    (~) net ads join -U Administrator
    Enter Administrator's password:
    Using short domain name -- AZULOGIC
    Joined 'LTS' to realm 'AZULOGIC.COM'
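The init script change amounts to something like the following; the variable names are an assumption about how the Debian script is laid out, while -M single itself is the option described above:

```
# /etc/init.d/samba4 (hypothetical excerpt; variable names assumed)
DAEMON=/usr/sbin/samba
# Force the single-process model instead of the default forking model;
# this is what made "net ads join" succeed above.
DAEMON_OPTS="-M single"
```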
    One final note: as far as I can tell, the Debian version (4.0.0alpha6-GIT-7fb9007) crashes when someone tries to change their password. So beware!

    Thursday, October 16, 2008

    Secure Apt Repository Howto

    After a good bit of googling and poking around, I completed the setup of our secure apt repository here at nvizn.

    Here's how you'd do it for an Ubuntu intrepid repository.

    First, setup a directory tree that looks like this:
    mkdir -p /var/www/packages/dists/intrepid/main/binary-i386/
    mkdir -p /var/www/packages/intrepid/main
    Then, install apt-utils, which provides apt-ftparchive, the tool that will do most of the heavy lifting.
    apt-get install apt-utils
    Now, drop all your .debs into /var/www/packages/intrepid/main/ and create an apt-ftparchive configuration file at /etc/archive.config.

    Here's what mine looks like:
    Dir {
      ArchiveDir "/var/www/packages";
      CacheDir "/home/joel.reed/uploads/";
    };

    Default {
      Packages::Compress ". gzip bzip2";
      Sources::Compress ". gzip bzip2";
      Contents::Compress ". gzip bzip2";
    };

    APT::FTPArchive::Release::Codename "intrepid";
    APT::FTPArchive::Release::Suite "intrepid";
    APT::FTPArchive::Release::Origin "Joel W. Reed";

    TreeDefault {
      BinCacheDB "packages-$(SECTION)-$(ARCH).db";
      Directory "intrepid/$(SECTION)";
      Packages "$(DIST)/$(SECTION)/binary-$(ARCH)/Packages";
      SrcDirectory "intrepid/$(SECTION)";
      Sources "$(DIST)/$(SECTION)/source/Sources";
      Contents "$(DIST)/Contents-$(ARCH)";
    };

    Tree "dists/intrepid" {
      Sections "main";
      Architectures "i386";
    };
    Finally, run this sequence of commands:
    apt-ftparchive generate /etc/archive.config
    cd /var/www/packages/dists/intrepid/
    apt-ftparchive -c /etc/archive.config release . > Release
    rm -v Release.gpg
    gpg -v --output Release.gpg -ba Release
    When you're done, you'll end up with a /var/www/packages tree that looks something like this:
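Sketched from the configuration and commands above, the tree comes out roughly as:

```
/var/www/packages
├── dists
│   └── intrepid
│       ├── Release
│       ├── Release.gpg
│       ├── Contents-i386.gz
│       └── main
│           └── binary-i386
│               ├── Packages
│               ├── Packages.gz
│               └── Packages.bz2
└── intrepid
    └── main
        └── (your .deb files)
```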
    Now, to make all this work, you need a GPG key of course, Apache set up to serve /var/www/packages, and each client machine needs your public key. To fetch and add a key from a keyserver, do something like
    gpg --recv-keys B1850655 && gpg --export B1850655 | apt-key add -
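Client machines also need a sources.list entry pointing at the repository; the hostname below is a placeholder:

```
# /etc/apt/sources.list (packages.example.com is a placeholder)
deb http://packages.example.com/packages intrepid main
```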
    Hope this is helpful to you!

    Monday, October 13, 2008


    I haven't blogged for a while, because I've been putting a lot of hours into an open source startup company. It's been great fun to work with some new technologies like Groovy, Grails, CouchDB, and Samba4.

    Among other things, I set up an OpenLDAP server, built a few custom overlays, and integrated Zimbra, Alfresco, Openfire, SipX, Samba3, and an Ubuntu desktop. Each of these integrations has its pros and cons; Zimbra and SipX are perhaps the nicest.

    I'm hoping to blog about my experience with Samba4 shortly.

    Monday, February 04, 2008

    OpenTF 0.6.0 Release

    Wow - two months without a blog post and three months since my last OpenTF release! For the last month or two, I really haven't worked much on OpenTF, preferring instead to work on learning NT Greek and more about the Book of Isaiah.

    The latest release includes a few new goodies and many bugfixes. There's a new IRC changeset notification bot, support for CruiseControl (an open source continuous build framework), a MonoDevelop plugin for browsing TFS servers, and several new commands like "shelve", "rollback", and "merges".

    Over the next few months, I hope to further develop the MonoDevelop plugin, continue work on missing commands, and begin testing other open source Team Foundation tools for compatibility with the OpenTF libraries.