Monday, December 7, 2009

Lessons from Startup Weekend

I had an exhausting but fun weekend at the Athens Startup Weekend a few days ago. Along with Christos I joined Yannis, Panagiotis Christakos and Babis Makrinikolas on the Newspeek project. When Yannis pitched the idea on Friday night, the main concept was to create a mobile phone application that would provide a better way to view news on the go. I don't believe it was very clear in his mind then what would constitute a "better" experience, but after some chatting about it we all defined a few key aspects, which we refined later with lots of useful feedback and help from George. Surprisingly, for me at least, in only two days we managed to design, build and present a working prototype in front of the judges and the other teams. And even though the demo wasn't exactly on par with our accomplishments, I'm still amazed at what can be created in such a short time frame.
Newspeek, our product, had a server-side component that periodically collected news items from various news feeds, stored them and provided them to clients through a simple REST API. It also had an iPhone client that fetched the news items and presented them to the user in a way that respected the UI requirements and established UX norms for that device.

So, in the interest of informing future participants about what works and what doesn't work in Startup Weekend, here are the lessons I learned:

  1. If you plan to win, work on the business aspect, not on the technology. Personally, I didn't go to ASW with plans to create a startup, so I didn't care that much about winning. I mostly considered the event as a hackathon, and tried my best to end up with a working prototype. Other teams focused more on the business side of things, which is understandable, given the prize. Investors fund teams that have a good chance to return a profit, not the ones with cool technology and (mostly) working demos. Still, the small number of actual working prototypes was a disappointment for me. Even though the developers were the majority in the event, you obviously can't have too many of them in a Startup Weekend.
  2. For quick server-side prototyping and hosting, Google App Engine is your friend. Since everyone in the team had Java experience, we could have gone with a JavaEE solution and set up a dedicated server to host the site. But, since I've always wanted to try App Engine for Java and the service architecture mapped nicely to it, we tried a short experiment to see if it could fly. We built a stub service in just a few minutes, so we decided it was well worth it. Building our RESTful service was really fast, scalability was never a concern and the deployment solution was a godsend, since the hosting service provided for free by the event sponsors was evidently overloaded. We're definitely going to use it again for other projects.
  3. jQTouch rocks! Since our main deliverable would be an iPhone application, and there were only two of us who had ever built an iPhone application (of the Hello World variety), we knew we had a problem. Fortunately, I had followed the jQTouch development from a reasonable distance and had witnessed the good things people had to say, so I pitched the idea of a web application to the team and it was well received. iPhone applications built with web technologies and jQTouch can be almost indistinguishable from native ones. We all had some experience in building web applications, so the prospect of having a working prototype in only two days seemed within the realm of possibility again. The future option of packaging the application with PhoneGap and selling it in the App Store was also a bonus point for our modest business plan.
  4. For ad-hoc collaboration, Mercurial wins. Without time to set up central repositories, a DVCS was the obvious choice, and Mercurial has both bundles and a standalone server that make collaborative coding a breeze. If we had zeroconf/bonjour set up on all of our laptops, we would have used the zeroconf extension for dead easy machine lookup, but even without it things worked flawlessly.
  5. You can write code with a netbook. Since I haven't owned a laptop for the last three years, my only portable computer is an Asus EEE PC 901 running Linux. Its original purpose was to allow me to browse the web from the comfort of my couch. Lately however, I'm finding myself using it to write software more than anything else. During the Startup Weekend it had constantly open Eclipse (for server-side code), Firefox (for JavaScript debugging), Chrome (for webkit rendering), gedit (for client-side code) and a terminal, without breaking a sweat.
  6. When demoing an iPhone application, whatever you do, don't sweat. Half-way through our presentation, tapping the buttons stopped working reliably, so anxiety ensued. Since we couldn't make a proper presentation due to a missing cable, we opted for a live demo, wherein Yannis held the mic and made the presentation, and I posed as the bimbo that holds the product and clicks around. After a while we ended up both touching the screen, trying to make the bloody buttons click, which achieved the opposite effect. In retrospect, using a cloth occasionally would have made for a smoother demo, plus we could have slipped a joke in there to keep the spirits up.
All in all it was an awesome experience, where we learned how far we can stretch ourselves, made new friends and caught up with old ones. Next year, I'll make sure I have a napkin, too.

Thursday, August 20, 2009

Spell-checking in JavaScript

I just heard of Atwood's Law a few days ago. Apparently Jeff Atwood first published his discovery a couple of years ago, which is like a century in Internet time, but I can't keep up with everything. The Law states that "any application that can be written in JavaScript, will eventually be written in JavaScript". As Atwood explains, it is based on Tim Berners-Lee's Principle Of Least Power:

Computer Science spent the last forty years making languages which were as powerful as possible. Nowadays we have to appreciate the reasons for picking not the most powerful solution but the least powerful. The less powerful the language, the more you can do with the data stored in that language.


I think there is a common theme between Atwood's Law, Zawinski's Law ("Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can") and Greenspun's Tenth Rule ("Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp"). The inevitability of the specified outcome makes the world seem like a much simpler place. There is no need to argue. Believe and ye shall be redeemed.

I might have been in this state of mind when I came across Peter Norvig's "How to Write a Spelling Corrector", because I knew instantly I had to port the algorithm to JavaScript. The algorithm is quite simple and has already been ported to many different languages, so I seized the opportunity to study the differences in expressiveness, performance and style between these languages and my latest affection, JavaScript.

After a few nights' work I have it working and free (as in beer and speech) for anyone to use. Check out the speller project on GitHub. If you want to understand how the algorithm works, you should go and read Norvig's article, although you might get some hints from the comments in my code. The modestly condensed version of the code is 53 lines:


var speller = {};
speller.train = function (text) {
  var m, re = /[a-z]+/g, lowered = text.toLowerCase();
  while ((m = re.exec(lowered))) {
    speller.nWords[m[0]] = speller.nWords.hasOwnProperty(m[0]) ? speller.nWords[m[0]] + 1 : 1;
  }
};
speller.correct = function (word) {
  if (speller.nWords.hasOwnProperty(word)) return word;
  var candidates = {}, list = speller.edits(word);
  list.forEach(function (edit) {
    if (speller.nWords.hasOwnProperty(edit)) candidates[speller.nWords[edit]] = edit;
  });
  if (speller.countKeys(candidates) > 0) return candidates[speller.max(candidates)];
  list.forEach(function (edit) {
    speller.edits(edit).forEach(function (w) {
      if (speller.nWords.hasOwnProperty(w)) candidates[speller.nWords[w]] = w;
    });
  });
  return speller.countKeys(candidates) > 0 ? candidates[speller.max(candidates)] : word;
};
speller.nWords = {};
speller.countKeys = function (object) {
  var attr, count = 0;
  for (attr in object)
    if (object.hasOwnProperty(attr))
      count++;
  return count;
};
speller.max = function (candidates) {
  var candidate, arr = [];
  for (candidate in candidates)
    if (candidates.hasOwnProperty(candidate))
      arr.push(candidate);
  return Math.max.apply(null, arr);
};
speller.letters = "abcdefghijklmnopqrstuvwxyz".split("");
speller.edits = function (word) {
  var i, results = [];
  for (i=0; i < word.length; i++)
    results.push(word.slice(0, i) + word.slice(i+1));
  for (i=0; i < word.length-1; i++)
    results.push(word.slice(0, i) + word.slice(i+1, i+2) + word.slice(i, i+1) + word.slice(i+2));
  for (i=0; i < word.length; i++)
    speller.letters.forEach(function (l) {
      results.push(word.slice(0, i) + l + word.slice(i+1));
    });
  for (i=0; i <= word.length; i++)
    speller.letters.forEach(function (l) {
      results.push(word.slice(0, i) + l + word.slice(i));
    });
  return results;
};


It may not be as succinct as Norvig's Python version (21 lines), or the record-holding Awk and F# versions (15 lines), but it is much better than C (184 lines), C++, Perl, PHP, Rebol and Erlang. There is even a Java version with 372 lines. It must contain some sort of spell-checking framework in there, or something. However, the condensed version above, although correct, has terrible performance in most JavaScript engines. For real-world use you should pick the regular version, which may be slightly longer, but performs much better.

Since this was a toy project of mine, I wanted to play with ServerJS modules as well, in order to run it as a shell script. I turned the code into a securable module, so you can run it from the command line, using narwhal. I have a couple of scripts to that end. Of course since this is JavaScript, you can try it from your browser, by visiting the demo page, courtesy of GitHub Pages. Be sure to use a modern browser, like Firefox 3.5, Safari 4 or Chrome 3 (beta), otherwise you won't be able to run the test suite, since I used the brand-new Web Workers to make the long-running tasks run in the background.

Norvig's Python implementation took 16 seconds for test 1 and the best I got was 25 seconds with Safari 4 on my Mac. Narwhal is using Rhino by default, so it is definitely not competitive in such tests (139 seconds), but I'm planning to fix support for v8cgi and give that a try.

And it goes without saying that I'd love to hear about ways to improve the performance or the conciseness of the code. If you have any ideas, don't be shy, leave a comment or even better fork the code and send me a pull request on GitHub.

Tuesday, July 7, 2009

GSS authentication

As I've described before, the GSS API is a REST-like interface for interacting with a GSS-based service, like Pithos. Using regular HTTP commands a client can upload, download, browse and modify files and folders stored in the GSS server. These interactions have two important properties: they store no client state on the server and they are not encrypted. I have already mentioned the most important benefits of the stateless architecture of the GSS protocol, namely scalability and loose coupling along the client-server boundary. In the same line of thought, SSL/TLS encryption of the transport was avoided for scalability reasons. Consequently, these two properties of the communication protocol lead to another requirement: message authentication for each API call.


Since no session state is stored in the server, the client must authenticate each request as if it were the first one. Traditional web applications use an initial authentication interaction between client and server that creates a unique session ID, which is transmitted with each subsequent request in that same session. As long as it presents this ID, the client does not need to authenticate to the server again. Stateless protocols, like WebDAV for instance, cannot rely on such an ID and have to transmit authentication data in each call. Ultimately the tradeoff is a minor increase in the message payload, in return for a big boost in scalability. The HTTP specification defines an Authorization header for use in such cases and WebDAV uses the HTTP Digest Access Authentication scheme. The GSS API uses a slight variation on that theme, blended with some ideas from OAuth request signing.

Essentially the standard HTTP Authorization header is populated with a concatenation of the username (for identifying the user making the request) and an HMAC-SHA1 signature of the request. The request signature is obtained by applying a secret authentication token to a concatenated string of the HTTP method name, the date and time of the request and the actual requested path. The GSS API page has all the details. The inclusion of the date and time helps thwart replay attacks using a previously sniffed signature. Furthermore, a separate header with timestamp information, X-GSS-Date, is used to thwart replay attacks with a full copy of the payload. The authentication token is issued securely by the server to the client and is the time-limited secret that is shared between the client and server. Since it is not the user password that is used as a shared secret, were the authentication token to be compromised, any ill effects would be limited to the time period during which the token was valid. There are two ways for client applications to obtain the user's authentication token, and they are both described in detail in the API documentation.

In my experience, protocol descriptions are one thing and working code is quite another. In that spirit, I'm closing this post with a few tested implementations of the GSS API signature calculation, for client applications in different languages. Here is the method for signing a GSS API request in Java (full code here):


public static String sign(String httpMethod, String timestamp,
        String path, String token) {
    String input = httpMethod + timestamp + path;
    String signed = null;

    try {
        System.err.println("Token:" + token);
        // Get an HMAC-SHA1 key from the authentication token.
        System.err.println("Input: " + input);
        SecretKeySpec signingKey = new SecretKeySpec(
                Base64.decodeBase64(token.getBytes()), "HmacSHA1");

        // Get an HMAC-SHA1 Mac instance and initialize with the signing key.
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(signingKey);

        // Compute the HMAC on the input data bytes.
        byte[] rawMac = hmac.doFinal(input.getBytes());

        // Do base 64 encoding.
        signed = new String(Base64.encodeBase64(rawMac), "US-ASCII");

    } catch (InvalidKeyException ikex) {
        System.err.println("Fatal key exception: " + ikex.getMessage());
        ikex.printStackTrace();
    } catch (UnsupportedEncodingException ueex) {
        System.err.println("Fatal encoding exception: "
                + ueex.getMessage());
    } catch (NoSuchAlgorithmException nsaex) {
        System.err.println("Fatal algorithm exception: "
                + nsaex.getMessage());
        nsaex.printStackTrace();
    }

    if (signed == null)
        System.exit(-1);
    System.err.println("Signed: " + signed);
    return signed;
}
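
To see how the signature ends up on the wire, here is a rough sketch of a client issuing a signed GET request with the sign() method above. It assumes that the timestamp is formatted as an RFC 1123 HTTP date, that the same timestamp travels in the X-GSS-Date header, and that a single space separates the username from the signature in the Authorization header; the GSS API page is the authority on the exact format, and the names below are illustrative:


public static int signedGet(String username, String token, String path) throws Exception {
    // Needs java.net.HttpURLConnection, java.net.URL, java.text.SimpleDateFormat,
    // java.util.Date, java.util.Locale and java.util.TimeZone.

    // Format the timestamp as an RFC 1123 HTTP date (assumed; check the API page).
    SimpleDateFormat fmt = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss zzz", Locale.US);
    fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
    String date = fmt.format(new Date());

    // Sign method + timestamp + path with the shared authentication token.
    String signature = sign("GET", date, path, token);

    HttpURLConnection conn = (HttpURLConnection) new URL("http://host.domain" + path).openConnection();
    conn.setRequestMethod("GET");
    conn.setRequestProperty("X-GSS-Date", date);
    conn.setRequestProperty("Authorization", username + " " + signature);
    return conn.getResponseCode();
}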


Here is a method for signing GSS API requests in JavaScript, which uses the SHA-1 JavaScript implementation by Paul Johnston (full code here):


function sign(method, time, resource, token) {
  var q = resource.indexOf('?');
  var res = q == -1 ? resource : resource.substring(0, q);
  var data = method + time + res;
  // Use strict RFC compliance
  b64pad = "=";
  return b64_hmac_sha1(atob(token), data);
}

Here is a Tcl implementation by Alexios Zavras (full code here):


proc ::rest::_sign {id} {
    variable $id
    upvar 0 $id data
    set str2sign ""
    append str2sign $data(method)
    append str2sign $data(date)
    set idx [string first ? $data(path)]
    if {$idx < 0} {
        set str $data(path)
    } else {
        incr idx -1
        set str [string range $data(path) 0 $idx]
    }
    # append str2sign [::util::url::encode $str]
    append str2sign $str
    puts "SIGN $str2sign"
    set sig [::base64::encode [::sha1::hmac -bin -key $::GG(gsstoken) $str2sign]]
    set data(signature) $sig
}

I'd love to show implementations of the signature calculation for other languages as well, but since my time is scarce these days, I could use some help here. If you've written one yourself and you'd like to share, leave a comment and I'll update this post, with proper attribution of course.

Wednesday, June 17, 2009

Retrying transactions with exponential backoff

The method I described in my previous post about retrying transactions could use some improvement. A deficiency that will only become relevant in highly congested servers is the constant retry interval. When two or more transactions try to commit at the same time and fail, the code from the last post will probably retry them simultaneously again. Random runtime events (like process/thread scheduler decisions, JIT compiler invocations, etc.) might help avoid collisions, but in general the collided transactions may well collide again. And again. And then again, until the specified maximum number of retries is reached.


The general method to alleviate such problems is to randomize the retry intervals. The most well-known algorithm in this category is called exponential backoff. This is one way to implement it for the utility method in my previous post (full code in gss):


public T tryExecute(final Callable<T> command) throws Exception {
    T returnValue = null;
    // Schedule a Future task to call the command after delay milliseconds.
    int delay = 0;
    ScheduledExecutorService executor = Executors.newScheduledThreadPool(1);
    for (int i = 0; i < TRANSACTION_RETRIES; i++) {
        final int retry = i;
        ScheduledFuture<T> future = executor.schedule(new Callable<T>() {

            @Override
            public T call() throws Exception {
                return command.call();
            }
        }, delay, TimeUnit.MILLISECONDS);

        try {
            returnValue = future.get();
            break;
        } catch (ExecutionException e) {
            Throwable cause = e.getCause();
            if (!(cause instanceof EJBTransactionRolledbackException) ||
                    retry == TRANSACTION_RETRIES - 1) {
                executor.shutdownNow();
                throw new Exception(cause);
            }
            delay = MIN_TIMEOUT + (int) (MIN_TIMEOUT * Math.random() * (i + 1));
            String origCause = cause.getCause() == null ?
                    cause.getClass().getName() :
                    cause.getCause().getClass().getName();
            logger.info("Transaction retry #" + (i+1) + " scheduled in " + delay +
                    " msec due to " + origCause);
        }

    }
    executor.shutdownNow();
    return returnValue;
}

The notable changes in this version are the delayed invocations of the Callable and the retry interval calculation. The former is accomplished using a ScheduledFuture from the excellent utilities in java.util.concurrent, which gets executed after a random time period. The interval calculation is something that can be implemented in a number of ways, with monotonically increasing intervals or not. I opted for the formula above since it provided the fastest collision elimination in my tests, much faster than the monotonically increasing interval formulas I tried. With MIN_TIMEOUT set to, say, 200 msec, the first retry is scheduled between 200 and 400 msec after the failure, the second between 200 and 600 msec, and so on. The MIN_TIMEOUT constant itself is more of a black art. It should be tuned to the particular hardware and network setup in order to attain maximum efficiency: the minimum number of retries with the minimum interval between retries.

Another issue raised in a comment on my previous post was that the exception I am using as a guard, EJBTransactionRolledbackException, may be too generic for this purpose. This is definitely true: EJBTransactionRolledbackException just wraps an OptimisticLockException, as one would expect for this case, which in turn wraps the Hibernate-specific StaleObjectStateException that is thrown initially. However, contention in a database table, index or row might not only result in optimistic locking violations, but in deadlocks as well, when one transaction holds exclusive locks on resources that are needed by the other and vice versa. Deadlock detection is performed by the DBMS, which forces a rollback of both transactions. This time however the initial exception (at least in my testing; it might be DBMS- or JDBC driver-specific) is a GenericJDBCException which gets wrapped in a PersistenceException, which in turn is put inside an EJBTransactionRolledbackException before being received by our call site. Therefore, the generic nature of the guard is not a problem in this case. On the contrary, it covers more relevant cases and one might even argue that there is no better service that can be offered to the caller than retrying in all such occasions.

Monday, June 15, 2009

Retrying transactions in Java

When building a web-based system, the most often encountered persistence solution is the use of a relational DBMS. This provides many benefits, but probably the most important one is that its behavior and usage are widely understood after so many decades of use. The simplicity of the ACID transactional model, coupled with the nice fit of the HTTP request/response protocol, makes for a killer combination.

The simplicity starts to break down, however, when the load on the DBMS server increases, causing transactions to start failing. The DBMS can guarantee that no data corruption will occur, but handling such cases is not that simple. The scalable Optimistic Locking model requires transactions to be resubmitted when failure occurs. Since this is supposed to be rather rare (otherwise optimistic locking should probably not be used), developers often cheat by not dealing with such errors, letting them bubble up to the user, who is then responsible for resubmitting the transaction. This leads to simple, elegant codebases and unhappy users.

Adding ORM to the mix only makes things worse. In JavaEE, the JPA API hides much of the complexity of dealing with relational databases, but corner cases become harder. Add the container-managed transaction boundaries provided by session EJBs and one might be tempted to raise his hands up in despair.

However, a solution is not as hard as it sounds. Provided that session EJB methods are sufficiently free from side effects, one can replay a transaction a number of times from the call site (external to the session bean), with a small amount of code and little added complexity at the call sites. Since I seem to come up against this requirement quite often, I thought I should describe one way to do it. The following tryExecute() helper method retries any action supplied in a standard Callable interface a number of times, in the face of optimistic locking violations:


public T tryExecute(Callable<T> command) throws Exception {
    T returnValue = null;
    for (int i = 0; i < TRANSACTION_RETRIES; i++) {
        try {
            returnValue = command.call();
            break;
        } catch (EJBTransactionRolledbackException trbe) {
            if (i == TRANSACTION_RETRIES - 1)
                throw trbe;
        } catch (Exception e) {
            throw e;
        }
        logger.debug("Transaction retry: " + (i+1));
    }
    return returnValue;
}


By abstracting away the session bean method code in a parameterized class, we can reuse this method in every place that needs to retry transactions. Using a Callable as a parameter lets us return any values received from the session bean. For supplying bean methods that do not return values, we could create Runnable instances and wrap them in Callables using Executors.callable(), as sketched at the end of this post. You can see the original code here.

If you use the above code in a JavaSE environment without EJB, you will probably need to catch OptimisticLockException for JPA, or even StaleObjectStateException, if you don't mind depending on a Hibernate API. I use the above helper method like this in my code:


file = new TransactionHelper<FileHeaderDTO>().tryExecute(new Callable<FileHeaderDTO>() {
    @Override
    public FileHeaderDTO call() throws Exception {
        return sessionBean.createFile(user.getId(), folder.getId(), name, mimeType, uploadedFile);
    }
});


Java's verbosity makes this not that easy to understand at first sight, but what happens is very simple indeed: I wrap a one-liner call to the session bean method createFile() in a Callable and supply it to tryExecute(). The result from the bean method is received in a local variable as usual. One thing to keep in mind though is that, since Java doesn't have closures, the above anonymous class can only reference final variables. In my example user, folder, name, mimeType and uploadedFile are all final. You might have to resort to using temporary local variables for storing final values on some occasions.
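
As an aside, here is a rough sketch of the Executors.callable() trick mentioned above, applied to a hypothetical void bean method (deleteFile() and selectedFileId are made up for illustration); note how a value is first copied into a final local variable so that the anonymous class can reference it:


final Long fileId = selectedFileId; // copy into a final local, so the anonymous class can see it
new TransactionHelper<Object>().tryExecute(Executors.callable(new Runnable() {
    @Override
    public void run() {
        // A hypothetical void session bean method; there is no result to return.
        sessionBean.deleteFile(user.getId(), fileId);
    }
}));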

Thursday, June 4, 2009

Ubuntu on EEE PC

A few weeks ago I set out to upgrade Ubuntu on my EEE PC 901 from Intrepid to Jaunty. It was an, um, interesting experience, so I'm jotting down these notes for next time. Upgrading from Update Manager was not an option, because my root partition was not big enough to host the upgrade. The 901 has a 4GB SSD that I'm using as root and an 8GB (or 12GB) SSD that I've put /home on. It turns out that 4GB is OK for running a modern operating system, but a little short for updating it.

The usual procedure for upgrading is to use the official ISO images to boot from. As always you need a USB stick, or an external optical drive. I settled for the former, as you can see below.

The good news with Jaunty is that a special netbook edition is available for download. With previous releases you had to start with the desktop edition and do lots of tweaking to arrive at a satisfactory result. With the netbook edition however things were unbelievably smooth. Everything worked out of the box with my EEE PC. Well, almost everything, but I will get to that in a while.

The first step after downloading the ISO (for the security paranoid at least) is verifying the download:


$ md5sum ubuntu-9.04-netbook-remix-i386.img
8f921e001aebc3e98e8e8e7d29ee1dd4 ubuntu-9.04-netbook-remix-i386.img

Then, we can write the image to the disk. I used Ubuntu ImageWriter (package usb-imagewriter) from another Ubuntu PC:


That went very smoothly. The next step was booting the EEE PC with the newly formatted USB disk. Hitting ESC on boot allows one to change the boot drive sequence. If this is the first time you are replacing the original OS, you probably need to disable Boot Booster in the BIOS first.

The installation process is quite simple. The only important thing in my case was partitioning, since I wanted to preserve my /home partition intact. Last time I installed Ubuntu I went with ext2 filesystems, trying to squeeze as much life from my solid state disks as I could. After doing some more research on the subject, I decided that I'll probably throw away my EEE long before the disks begin to wear out. Either that, or I'll throw an SD card in the expansion slot. Life is full of choices, particularly in the $300 range.

So I selected manual partitioning and chose ext3 for my first SSD. This is a screenshot from my first attempt. No, actually it was my second attempt. I screwed up the first one. Turns out I screwed up this one, too. See if you can spot why.


It may not be that obvious, but swap should not be on the first disk. If you are using swap at all (and I know many suggest that you don't), it would be a better choice to put it on the second disk. With 1GB of RAM, swap should be at least 1GB, which leaves only 3GB for Linux. The default installation needs something north of 2GB, so that leaves less than a gig for installing the software you use. Unless of course you are planning to play symlink tricks, in which case I should probably offer my condolences. Anyway, I ended up chopping off 1GB from the second disk with gparted and redoing the installation, but fortunately you won't have to.

What you should definitely do, however, is put aside 16MB of space on the first disk for enabling Boot Booster in the BIOS again. This is an ASUS feature that persists the BIOS configuration on the disk, shaving a couple of seconds off each boot. It may not seem like much, but think about how many times you are going to boot this thing. If you add them up, it starts to matter. Boot Booster needs only 8MB of disk to function, but I found that the installer's partition editor would only allocate one cylinder for 8MB, while we need two. Using 16MB did the right thing.

The regular customization I always do is to put /tmp and /var/tmp on tmpfs, so that many common operations don't touch the disks at all. Put the following lines in /etc/fstab to do that:


tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0
tmpfs /var/tmp tmpfs defaults,noexec,nosuid 0 0

After editing /etc/fstab the proper way to remount these file systems is by following these steps:
  1. Log out of the Ubuntu desktop in order to minimize the number of open temp files.
  2. Press CTRL-ALT-F1 on your keyboard. You should see a login prompt. Login with your usual user/pass.
  3. Clear out the files in the existing /tmp and /var/tmp directories.
    sudo rm -rf /tmp/* /var/tmp/*
  4. Mount the filesystems using tmpfs.
    sudo mount /tmp
    sudo mount /var/tmp
  5. Reboot the EEE PC.
    sudo shutdown -r now
Next, from the boot-as-fast-as-you-can department comes another tweak. In /boot/grub/menu.lst set "timeout 0" in order to skip the kernel selection delay. If I ever need to boot a different kernel, I'll change it back to something larger, thankyouverymuch.

Now, in order to enable Boot Booster again, follow these instructions for configuring the small partition we created earlier. When you reboot, make sure to enable the option in the BIOS (it will be hidden if the proper partition isn't found).

Now you are all set. At least if you have no use for WPA-secured WiFi. Because if you do, you'll hit a bug in the included rt2860 driver. The details aren't that interesting, but there are two available options: either use the array.org kernel or patch the driver. The former solution used to be necessary with Intrepid, since lots of things were not working with the default kernel. However, after trying out both, I don't see much need for Adam's kernel in Jaunty, unless you are terrified when encountering patch, make and their ilk. Since I'm on a first-name basis with them, I went with the second option, hoping that a future kernel update will bring along the fixes.

All in all, the process this time around was very easy. Almost easy peasy. My EEE PC with Ubuntu Netbook Remix makes me almost forget that I'm not on my beloved Mac.

Almost.

Thursday, April 23, 2009

An introduction to the GSS API

Application Programming Interface or API design is one of my favorite topics in programming, probably because it is both a science and an art. It is a science because there are widely accepted principles about how to design an API, but at the same time applying these principles within the constraints of a given programming language requires the finesse of an experienced practitioner. Therefore it is with great pleasure that I'll try to explain the ins and outs of the GSS REST-like API, as promised before. As I've already mentioned, GSS is both the name of the source code project on Google Code and the GRNET-sponsored service for the Greek research and academic network (although its official name after leaving the beta stage will be Pithos). Since anyone can use the open-source code to set up a GSS service, in this post I'll use generic examples, so anyone writing a client for the GRNET service should modify them accordingly.

When developing an application in a particular programming language, we are used to thinking about the APIs presented to us by the various libraries, which are invariably specified in that same language. For instance, for communicating with an HTTP server from a Java program we might use the HttpClient library API. This library presents a set of Java classes, interfaces and methods for interacting with web servers. These classes hide the underlying complexity of performing the low-level HTTP protocol operations, allowing our mental process to remain constantly in a Java world. We could however interact with a web server without such a library, opting to implement the HTTP protocol interactions ourselves instead. Unfortunately, there is no such higher-level library for GSS yet, wrapping the low-level HTTP communications. Therefore this post will present a low-level API, in the sense that one has to make direct HTTP calls in his chosen programming language. The good news is that the following discussion is useful for programmers with any background, since there is support for the ubiquitous HTTP protocol in every modern programming language.

A RESTful API models its entities as resources. Resources are identified by Uniform Resource Identifiers, or URIs. There are four kinds of resources in GSS: files, folders, users & groups. These resources have a number of properties that contain various attributes. The API models these entities and their properties in the JSON format. There is also a fifth entity that is not modeled as a resource, but is important enough to warrant special mention: permissions.

Users · Users are the entities that represent the actual users of the system. They are used to log in to the service and to separate the namespaces of files and folders. User entities have attributes like full name, e-mail, username, authentication token, creation/modification times, groups, etc. The URI of a user with username paul would be:

http://host.domain/gss/rest/paul/
The JSON representation of this user would be something like this:
{
  "name": "Paul Smith",
  "username": "paul",
  "email": "paul@gmail.com",
  "files": "http://hostname.domain/gss/rest/paul/files",
  "trash": "http://hostname.domain/gss/rest/paul/trash",
  "shared": "http://hostname.domain/gss/rest/paul/shared",
  "others": "http://hostname.domain/gss/rest/paul/others",
  "tags": "http://hostname.domain/gss/rest/paul/tags",
  "groups": "http://hostname.domain/gss/rest/paul/groups",
  "creationDate": 1223372769275,
  "modificationDate": 1223372769275,
  "quota": {
    "totalFiles": 7,
    "totalBytes": 429330,
    "bytesRemaining": 10736988910
  }
}


Groups · Groups are entities used to organize users for easier sharing of files and folders among peers. They can be used to facilitate sharing files with multiple users at once. Groups belong to the user who created them and cannot be shared. The URI of a group named work created by the user with username paul would be:
http://host.domain/gss/rest/paul/groups/work
The JSON representation of this group would be something like this:

[
  "http://hostname.domain/gss/rest/paul/groups/work/tom",
  "http://hostname.domain/gss/rest/paul/groups/work/jim",
  "http://hostname.domain/gss/rest/paul/groups/work/mary"
]


Files · Files are the most basic resources in GSS. They represent actual operating system files from the client's computer that have been augmented with extra metadata for storage, retrieval and sharing purposes. Familiar metadata from modern file systems are also maintained in GSS, like file name, creation/modification times, creator, modifier, tags, permissions, etc. Furthermore, files can be versioned in GSS. Updating a versioned file retains the previous versions, while updating an unversioned file irrevocably replaces the old file contents. The URI of a file named doc.txt located in the root folder of the user with username paul would be:
http://host.domain/gss/rest/paul/files/doc.txt
The JSON representation of the metadata in this file would be something like this:

{
  "name": "doc.txt",
  "creationDate": 1232449958563,
  "createdBy": "paul",
  "readForAll": true,
  "modifiedBy": "paul",
  "owner": "paul",
  "modificationDate": 1232449944444,
  "deleted": false,
  "versioned": true,
  "version": 1,
  "size": 802,
  "content": "text/plain",
  "uri": "http://hostname.domain/gss/rest/paul/files/doc.txt",
  "folder": {
    "uri": "http://hostname.domain/gss/rest/paul/files/",
    "name": "Paul Smith"
  },
  "path": "/",
  "tags": [
    "work",
    "personal"
  ],
  "permissions": [
    {
      "modifyACL": true,
      "write": true,
      "read": true,
      "user": "paul"
    },
    {
      "modifyACL": false,
      "write": true,
      "read": true,
      "group": "work"
    }
  ]
}


Folders · Folders are resources that are used for grouping files. They represent the file system concept of folders or directories and can be used to mirror a client's computer file system on GSS. Familiar metadata from modern file systems are also maintained in GSS, like folder name, creation/modification times, creator, modifier, permissions, etc. The URI of a folder named documents located in the root folder of the user with username paul would be:
http://host.domain/gss/rest/paul/files/documents
The JSON representation of this folder would be something like this:

{
  "name": "documents",
  "owner": "paul",
  "deleted": false,
  "createdBy": "paul",
  "creationDate": 1223372795825,
  "modifiedBy": "paul",
  "modificationDate": 1223372795825,
  "parent": {
    "uri": "http://hostname.domain/gss/rest/paul/files/",
    "name": "Paul Smith"
  },
  "files": [
    {
      "name": "notes.txt",
      "owner": "paul",
      "creationDate": 1233758218866,
      "deleted": false,
      "size": 4567,
      "content": "text/plain",
      "version": 1,
      "uri": "http://hostname.domain/gss/rest/paul/files/documents/notes.txt",
      "folder": {
        "uri": "http://hostname.domain/gss/rest/paul/files/documents/",
        "name": "documents"
      },
      "path": "/documents/"
    }
  ],
  "folders": [],
  "permissions": [
    {
      "modifyACL": true,
      "write": true,
      "read": true,
      "user": "paul"
    },
    {
      "modifyACL": false,
      "write": true,
      "read": true,
      "group": "work"
    }
  ]
}

Working with these resources is accomplished by sending HTTP requests to the resource URI with the GET, HEAD, DELETE, POST and PUT methods. GET requests retrieve the resource representation, either the file contents or the JSON representations specified above. HEAD requests for files return just the metadata of the file and DELETE requests remove the resource from the system. PUT requests upload files to the system from the client, while POST requests perform various modifications to the resources, like renaming, moving, copying, moving files to the trash, restoring them from the trash, creating folders and more. The operations are numerous and I hope to cover them in more detail in a future post.
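
As a minimal illustration, here is a sketch of fetching the folder representation above with nothing more than the standard HTTP machinery of your language. The folder URI follows the examples in this post, and the request signing described in the GSS API documentation is elided for brevity:


import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FolderListingExample {
    public static void main(String[] args) throws Exception {
        // A folder URI in the style of the examples above.
        URL uri = new URL("http://host.domain/gss/rest/paul/files/documents");

        HttpURLConnection conn = (HttpURLConnection) uri.openConnection();
        conn.setRequestMethod("GET");
        // The Authorization and X-GSS-Date headers required for authenticated
        // requests would be set here, as described in the GSS API documentation.

        // Read the response body: the JSON folder representation shown above.
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null)
            json.append(line);
        in.close();
        System.out.println(json);
    }
}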

One important aspect of every RESTful API is the use of URIs to allow the client to maintain a stateful conversation. For example, fetching the user URI would provide the files URI for fetching the available files and folders. Fetching the files URI would in turn return the URIs for the particular files and folders contained in the root folder (along with other folder properties). Returning to the parent of the current folder would entail following the URI contained in the parent property. This mechanism removes the state handling from the server and puts the burden on the client, providing excellent scalability for the service. Furthermore, since the URIs are treated opaquely by the client, the API allows client reuse across server deployments. A client can target multiple GSS services, as long as they speak the same RESTful API. Moreover, links from service A can refer to resources in service B without a problem (in the same authentication domain, e.g. the same Shibboleth federation). This is the same as using a single web browser to communicate with multiple web servers, by following links among them.

Monday, April 20, 2009

Reconciling Apache Commons Configuration with JBoss 5

There are many ways to configure a JavaEE application. Among the available solutions are DBMS tables, JNDI, JMX MBeans and even plain old files in a variety of formats. While we have used most of the above on various occasions, I find that plain files resonate with all types of sysadmins when no other administrative interface is available. For such scenarios, Apache Commons Configuration is undoubtedly the best tool for the job. Recently, I came across an undocumented incompatibility when using Commons Configuration with JBoss 5 and I thought I should describe our solution for the benefit of others.

Usually we store our configuration files in the standard place for JBoss, which is JBOSS_HOME/server/default/conf for the default server configuration. This has the disadvantage that it is not as easy to remember as /etc in UNIX/Linux or \Program Files and \Windows in Windows systems, but it has the important advantage of being specified as a relative path in our code, making it more cross-platform without cluttering it with platform-specific if/else path resolution checks.

Commons Configuration can reload the configuration files automatically when they change in the file system, which helps prolong the server uptime. However, JBoss 5 has introduced the concept of a virtual file system that caches all file system accesses performed through the context class loaders using relative paths. Unfortunately this generates resource URLs of the form vfsfile:foo.properties that Commons Configuration does not know how to deal with. Fixing this requires extending FileChangedReloadingStrategy, like we do in gss. Alternatively, one could patch Commons Configuration with the following change and use the standard FileChangedReloadingStrategy unchanged:


Index: FileChangedReloadingStrategy.java
===================================================================
--- FileChangedReloadingStrategy.java (revision 764760)
+++ FileChangedReloadingStrategy.java (working copy)
@@ -46,6 +46,9 @@
/** Constant for the jar URL protocol.*/
private static final String JAR_PROTOCOL = "jar";

+ /** Constant for the JBoss MC VFSFile URL protocol.*/
+ private static final String VFSFILE_PROTOCOL = "vfsfile";
+
/** Constant for the default refresh delay.*/
private static final int DEFAULT_REFRESH_DELAY = 5000;

@@ -161,7 +164,8 @@

/**
* Helper method for transforming a URL into a file object. This method
- * handles file: and jar: URLs.
+ * handles file: and jar: URLs, as well as JBoss VFS-specific vfsfile:
+ * URLs.
*
* @param url the URL to be converted
* @return the resulting file or null
@@ -181,6 +185,18 @@
return null;
}
}
+ else if (VFSFILE_PROTOCOL.equals(url.getProtocol()))
+ {
+ String path = url.getPath();
+ try
+ {
+ return ConfigurationUtils.fileFromURL(new URL("file:" + path));
+ }
+ catch (MalformedURLException mex)
+ {
+ return null;
+ }
+ }
else
{
return ConfigurationUtils.fileFromURL(url);
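
For completeness, here is roughly what the subclass approach looks like. This is a sketch assuming Commons Configuration 1.x, where getFile() is the protected hook consulted by hasChanged() and the configuration field is accessible to subclasses; the class name is ours, not part of the library:


import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;

import org.apache.commons.configuration.ConfigurationUtils;
import org.apache.commons.configuration.reloading.FileChangedReloadingStrategy;

public class VFSFileChangedReloadingStrategy extends FileChangedReloadingStrategy {

    private static final String VFSFILE_PROTOCOL = "vfsfile";

    @Override
    protected File getFile() {
        URL url = configuration.getURL();
        if (url != null && VFSFILE_PROTOCOL.equals(url.getProtocol()))
            try {
                // Strip the vfsfile: prefix and resolve the rest as a plain file: URL.
                return ConfigurationUtils.fileFromURL(new URL("file:" + url.getPath()));
            } catch (MalformedURLException e) {
                return null;
            }
        return super.getFile();
    }
}


Registering it is then a one-liner: config.setReloadingStrategy(new VFSFileChangedReloadingStrategy()).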


Neither of these solutions covers the case of storing configuration files in zip or jar containers, but since it is something I haven't found a use for yet, I can't test a fix for it. If anyone is interested in such a use case, I'd advise extending FileChangedReloadingStrategy, combining the logic in jar: and vfsfile: URL handling.

Monday, April 13, 2009

GSS architecture

Handling a large number of concurrent connections requires many servers. Not only because scaling vertically (throwing bigger hardware at the problem) is very costly, but also because even if an application can be designed to scale vertically, the underlying stack probably cannot. Java applications like GSS, for instance, run on the JVM and although the latter is an excellent piece of engineering, using huge amounts of heap is not something it's tuned for. Big Iron servers with many cores and 20+ GB of RAM usually run more than one JVM, since garbage collection is not all that efficient with huge heaps. And since running application instances with a 4-8 GB heap size can be done with cheap off-the-shelf hardware, why spend big bucks on Big Iron?

So having a large number of servers is a sane choice, but it brings its own set of problems. Unless one partitions users across servers (having all requests a particular user makes delivered to the same server), all servers must have a consistent view of the system data in order to deliver meaningful results. Assigning user requests to particular servers usually requires expensive application-layer load balancers or customized application code on each server, so it would rarely be your first option. Having all servers work on the same data is a more tractable problem, since it can be solved by replicating the application state among server nodes. Usually only a small part of the application state needs to be replicated for each user: the part that concerns his current session. But even though session clustering is a well-studied field and implementations abound, having no session to replicate is an even better option.

For GSS we have implemented a stateless architecture for the core server, which should provide us with good scalability in a very cost-effective manner. The most important part of this architecture is the REST-like API that moves part of the responsibility for session state maintenance to the client applications, effectively distributing the system load to more systems than the available server pool. Furthermore, client requests can be authenticated without requiring an SSL/TLS transport layer (even though it can be used if extra privacy is required), which would entail higher load on the application servers or require expensive load balancers. On the server side, API requests are handled by servlets that enlist the services of stateless session beans, for easier transaction management. Our persistence story so far is JPA with a DBMS backing, plus high-speed SAN storage for the file contents. If or when this becomes a bottleneck, we have various contingency plans, depending on the actual characteristics of the load that will be observed.


The above image depicts the path user requests travel along, while various nodes interact in order to serve them. A key element in this diagram is the number of different servers, each effectively specializing in its own particular domain. Although the system can be deployed on a single physical server (and regularly is, for development and testing), consisting of a number of standalone sub-services instead of one big monolithic service is a boon to scalability.
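
In code terms, the path through the application tier boils down to a servlet handing the actual work to a stateless session bean. The following is only an illustrative sketch, not the actual gss classes; the bean interface and its getFolderAsJson() method are made up:


import java.io.IOException;

import javax.ejb.EJB;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A made-up local business interface, implemented by a @Stateless session bean.
interface ExternalAPI {
    String getFolderAsJson(String path);
}

public class FilesServlet extends HttpServlet {

    @EJB
    private ExternalAPI api; // the container injects the stateless bean here

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // No HTTP session is created or consulted; everything the bean needs is in the request.
        String json = api.getFolderAsJson(req.getPathInfo());
        resp.setContentType("application/json");
        resp.getWriter().write(json);
    }
}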

This high-level overview of the GSS architecture should help those interested find their way around the open-source codebase and explain some of the design decisions. But the most interesting part from a user's point of view would be the REST-like API, which allows one to take advantage of the service for scratching his own itch.

So that will be the subject of my next post.

Wednesday, April 8, 2009

Introducing GSS

During my recent work-induced blog hiatus, I've been working on a new software system, called GSS. I've been more than enjoying the ride so far and since we have released the code as open-source, I thought discussing some of the experience I've gained might be interesting to others as well.

GSS is a network, er grid, er I mean cloud service, for providing access to a file system on a remote storage space. It is the name of both a service (currently in beta) for the Greek research and academic community and the open source software used for it, which can also be used by others for deploying such services. It is similar in some ways to services like iDrive, DropBox and drop.io, but it can also be regarded as a more high-level Amazon S3. Its purpose is to let the desktop computer's file system meet the cloud. The familiar file system metaphors of files and folders are used to store information in a remote storage space that can be accessed from a variety of user and system interfaces, from any place in the world that has an Internet connection. All usual file manager operations are supported and users can share their files with selected other users or groups, or even make them public. Currently there are four user interfaces available, a web-based application, a desktop client, a WebDAV interface and an iPhone web application, in various stages of development. Underlying these user interfaces is a common REST-like API that can be used to extend the service in new, unanticipated ways.


The main focus of this service was to provide the users of the Greek research and academic community with a free, large storage space that can be used to store, access, backup and share their work, from as many computer systems as they want. Since the available user base is close to half a million (although the expected users of the service are projected to be in the low tens of thousands), we needed a scalable system that would be able to accommodate high network traffic and a high storage capacity at the same time. A Java Enterprise Edition server coupled with a GWT-based web client and a stateless architecture were our solution. In future posts I will describe the system architecture with all the gory details. The exposed virtual file system features file versioning, trash bin support, access control lists, tagging, full text search and more.

All of these features are presented through an API for third-party developers to create scripts, applications or even full-blown services that will fulfill their own particular needs or serve other niches. This API has a REST-like design and though it will probably fail a formal RESTful definition, it sports many of the advantages of such architectures:

  • system resources such as users, groups, files and folders are represented by URIs
  • GET, HEAD, POST, PUT and DELETE methods on resources have the expected semantics
  • HTTP caching is explicitly supported via Last-Modified, ETag & If-* headers (see the sketch after this list)
  • resource formats for everything besides files are simple JSON representations
  • only authenticated requests are allowed, except for public resources
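
Since caching deserves a concrete illustration, here is a rough sketch of a conditional GET against a made-up file URI in the style of the API; the request signing is elided, and the caching headers used are the ones listed above:


import java.net.HttpURLConnection;
import java.net.URL;

public class ConditionalGetExample {
    public static void main(String[] args) throws Exception {
        URL uri = new URL("http://host.domain/gss/rest/paul/files/doc.txt");

        // First request: remember the entity tag the server returns.
        HttpURLConnection first = (HttpURLConnection) uri.openConnection();
        // ... Authorization and timestamp headers for authenticated requests go here ...
        String etag = first.getHeaderField("ETag");

        // Later request: ask for the resource only if it has changed in the meantime.
        HttpURLConnection second = (HttpURLConnection) uri.openConnection();
        second.setRequestProperty("If-None-Match", etag);
        // ... authentication headers again ...
        if (second.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED)
            System.out.println("Cached copy is still valid");
    }
}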

Users are authenticated through the GRNET Shibboleth infrastructure. User passwords are never transmitted to the GSS service. Instead GSS-issued authentication tokens are used by both client and server to sign the API requests after the initial user login. SSL transport can provide even stronger privacy guarantees, but it is not required, nor enabled by default.

The GSS code base is GPL-licensed and therefore anyone can use it as a starting point to implement his own file storage service. We have yet to provide binary downloads, due to the various dependencies, but the build instructions should be enough to get someone started. We are always interested in source or documentation patches, of course (did I mention it's open source?). Most importantly, the REST API will ensure that clients developed for one such service can be reused for every other one.

I will have much more to say about the API in a future post. In the meantime you can peruse the code and documentation, or even try it out yourself. I'd be very interested in any comments you might have.

Saturday, March 14, 2009

Applications meme

It's been a while since my last post, but I've been wanting to write about the interesting stuff that I've been working on. Unfortunately it took longer than anticipated, but now we're almost done and I'll be posting soon with the gory details. However, since I got tagged, I suppose I can throw in a not-so-interesting post, about my Linux application preferences.


  1. Which desktop manager do you use more often ?

    Gnome. It's the only one that reminds me of OS X somewhat (appearance- and functionality-wise).

  2. Which desktop application you would not like to see implemented again on linux? And why?

    A desktop manager. If you are going to provide yet another incarnation of a 30-year old idea, please don't bother. The concept is pretty well understood: icons/workspace/file manager/drag'n'drop/mouse/menus/launcher/whatever, it's been done before. Like a million times. If you want to be useful, try to fix some of the shortcomings of the existing ones (notification mechanisms, translucent effects, faster access patterns, application integration, etc.).

  3. Which desktop application you definitely would like to see implemented on linux? Describe it briefly or point out to a similar application.

    Adobe Premiere, or iMovie, or any other semi-decent video editing software. This was the main reason I bought a Mac.

  4. Write the name of the last project (not the very best, the last!) that made you wish to thank their developers so you can thank them now!

    Git. You guys rock! (And if you improve on the UI, you will jazz!)

  5. (Optional) Link the blogs of 1-3 people you’d like to take part to this meme. (no more than three). you can skip this question if you like.

    Let's hear from chstath, ckotso and keramida (who may s/Linux/FreeBSD/ at his discretion).



Creative Commons License Unless otherwise expressly stated, all original material in this weblog is licensed under a Creative Commons Attribution 3.0 License.