Someone in the Ruby IRC channel pastied this code today and complained about how ugly it is:
```ruby
begin
  require 'vlad'
rescue LoadError
  require 'rubygems'
  begin
    require 'vlad'
  rescue LoadError
  end
end
```
It’s meant to account for a difference between Ruby 1.8 and 1.9: in 1.9, rubygems is loaded automatically, while in 1.8 you have to require it yourself. And this code is pretty ugly. I came up with a simplified version that takes advantage of require()’s return value and the `retry` statement:
```ruby
begin
  require 'vlad'
rescue LoadError
  retry if require('rubygems')
  # raise
end
```
The `retry` keyword jumps back to the start of the `begin` block, so you can try to load your libraries again. Used carelessly, this would turn into an infinite loop (LoadError occurs, retry, LoadError occurs, retry, ad nauseam), but since require() returns false if the library was already loaded, you can skip the retry in that case.
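To see the control flow in isolation, here’s a self-contained sketch of `retry` (the failure counter and messages are invented for illustration):

```ruby
# Minimal demonstration of `retry`: hitting `retry` inside the rescue
# clause jumps back to the top of the begin block.
attempts = 0

result = begin
  attempts += 1
  raise "flaky failure" if attempts < 3   # fail the first two tries
  "succeeded on attempt #{attempts}"
rescue RuntimeError
  retry if attempts < 3                   # try the whole block again
  raise                                   # give up: re-raise the error
end
```

Without the `attempts < 3` guard, the rescue clause would retry forever — exactly the infinite-loop hazard described above.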
This version eats the LoadError, which is useful for an optional dependency; if you don’t want that, you can uncomment the `raise` above. If the retry ended up not doing any good, it’ll just re-raise the LoadError that was rescued.
Frankly, this probably isn’t very useful in practice. After all, if you cared about 1.8/1.9 compatibility, you could just require(“rubygems”) first and get rid of all of the exception handling. I think it’s a pretty good example of how `retry` works, though.
I don’t play Achaea much these days, but in the interest of keeping my plugins up to date and useful, I’ve updated them to use Iron Realms’ new GMCP protocol. I’ve also contributed to the GMCP plugin that gets the GMCP data from the server; thanks to Maxhrk from the MUSHclient forums for getting the ball rolling!
If you’ve used any of the previous versions of these plugins, delete them from your plugins directory and remove them from MUSHclient (in File -> Plugins) before you install the new ones. You can (and probably should) keep the ATCP plugin, because other plugins that I didn’t make probably still depend on it.
Also, every one of these plugins has a new “readme.txt” file inside, containing instructions and troubleshooting tips.
UPDATE: If you enable GMCP using the GMCP plugin above, ATCP won’t work, because Achaea only supports one or the other at a time. This means that if you use ATCP-based plugins or systems (like Vadisys), you’ll have to either give them up or stick with ATCP. Luckily, all of the above plugins will work with ATCP as well, if you remove the GMCP plugin and use the ATCP plugin instead.
I’ll make this short: I’ve dropped the Dishes project because (1) I haven’t had enough time to work on it, and (2) someone else has already built Faye, which looks much better than what I was trying to do. I haven’t used it much yet, but there’s going to be a presentation on it at RubyConf 2010, which I’m excited to see.
Aspect’s been on hold for a while now as well, because I haven’t had the time to work on it. I have a job (yay!) working on a new website, and I haven’t been able to hack on Aspect. Hopefully once the site’s been launched I’ll be able to get back on it again.
I’ve just begun building the messaging model for Aspect, and I’m developing it as a separate, reusable Rack-based framework. It’s called Dishes, which is an extraordinarily lame pun derived from “asynchronous” sounding like it has a “sink” in it. So far things are looking good, but there’s a lot of work still ahead!
As it happens, I read up a little on BOSH, and it uses exactly the same two-connection model I described in my last post. It’s interesting reading and I’m sure to take a lot of inspiration from it. I don’t really want to adopt BOSH in its entirety because I think it’s way more than I need; Dishes is probably going to be fairly small.
Something cool I’m trying in Dishes is putting each request in its own Fiber. Since Aspect is going to be extremely I/O bound, it makes sense to re-invert the evented model using fibers to make handlers feel more synchronous. That will make it easier to use Dishes.
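The idea can be sketched in plain Ruby. Here the `pending` array is a fake stand-in for a real event loop, and `async_fetch` and the key name are invented for illustration:

```ruby
require 'fiber'

# A stand-in "reactor": completed I/O callbacks wait here until the
# loop delivers them, mimicking non-blocking I/O finishing later.
pending = []

# Callback-style API, the shape an evented library would expose.
async_fetch = lambda do |key, &callback|
  pending << lambda { callback.call("value-for-#{key}") }
end

# Synchronous-looking wrapper: suspend the fiber until the callback fires.
fetch = lambda do |key|
  fiber = Fiber.current
  async_fetch.call(key) { |result| fiber.resume(result) }
  Fiber.yield   # pause here; resumes with the callback's result
end

results = []
handler = Fiber.new do
  # Reads like blocking code, but never blocks the reactor.
  results << fetch.call("user:42")
end

handler.resume        # runs until fetch yields
pending.shift.call    # the "I/O" completes, resuming the fiber
```

The handler never sees a callback; the fiber machinery hides the inversion of control.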
Anyways, that’s enough late-night rambling for tonight. When I have a simple working Dishes example, I’ll post again and explain how everything works. Meanwhile, you can always take a look at the current code. I even have a basic goal application to aim for.
Comments always welcome!
(Before I begin, I should warn you that I haven’t had much time to actively work on this yet.)
I found an article about Starbucks’ method of handling coffee orders on Hacker News today. I was surprised, because I had explained this exact concept to some friends already, but using Pat & Oscar’s instead! This kind of query-response architecture strikes me as the perfect model for Aspect, because everything you do constitutes a request, and everything Aspect tells you is some form of response.
The Pat & Oscar’s analogy is a little different from the Starbucks one, though, and it highlights a few key points. If you’ve never been to a Pat & Oscar’s, it works like this: You go to the counter, make your order, receive a number, sit down at a table, and place the number card in the little holder on the table so they can find you. When the order’s ready, they come to you with the food. Importantly, you can make multiple orders and receive multiple numbers, and the orders will be served whenever they’re ready.
Now, how do you apply this to web communication? The classic HTTP protocol has a strictly blocking request/response format, meaning that every request must wait until the response is sent before you can reuse the connection. Most browsers cap how many connections you can have open to a single server, and the bare minimum is IE’s two. So we need to make this work using only two connections.
The solution is to use one connection for making requests, and the other for receiving results! You keep the results connection open constantly, and use the other whenever you make a request. The request connection must be freed as quickly as possible, so the server just returns an “order number” — a job ID. When the job is done, it’s pushed down the response connection along with its job ID, and the client immediately re-opens the response connection so one is always available. This technique is called long polling: instead of the client polling periodically, the server hangs on to the connection until it has something to send.
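In-process, the shape of this protocol looks something like the following sketch, where threads and a Queue stand in for the two HTTP connections (all names here are invented):

```ruby
jobs    = Queue.new   # finished work waiting for the results connection
next_id = 0

# The "request connection": kick off the work and return the order
# number immediately, so the connection can be freed right away.
submit = lambda do |work|
  id = (next_id += 1)
  Thread.new { jobs << [id, work.call] }   # deliver when done
  id
end

# The "results connection": block until some job is ready (long poll).
receive = lambda { jobs.pop }

order = submit.call(lambda { 2 + 2 })   # returns at once with a job ID
id, result = receive.call               # arrives whenever it's ready
```

Because every delivery carries its job ID, multiple outstanding “orders” can complete in any sequence — the Pat & Oscar’s number cards, in code.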
I believe that this is a powerful approach to Comet-like communication. Unlike pure long polling – where the server simply holds the connection instead of responding immediately, and the client doesn’t need to care – it does require some infrastructure on both sides of the network gap. But if it can be abstracted properly, it should be very easy to use: just a layer you build on top of.
I’ll be working on this as time permits, and of course I’ll open-source my work on this (though probably not Aspect itself) after a certain point. I’m a firm believer in open-sourcing platforms so everyone can benefit. If anyone else is interested in helping though, I’d be glad to make it public sooner! I think this is a useful model that definitely has applications beyond Aspect.
I’ve got a lot of interesting projects I’m dealing with right now, but I just don’t have enough time in the day to give each one the attention it needs. And naturally, some projects take more priority than others. Some of them are pretty cool, though, so I figured I’d list them here.
Aspect – The big one. I’ve been tinkering a lot with the MUSHclient source in my spare time instead, because it doesn’t take as much focus as building a whole new client, so at least I’m getting more experience with MUD clients. But this guy deserves a lot more attention from me.
Unnamed Rails project – This one takes priority, because I’m building it for my father and I actually get paid. It’s getting closer to completion, but I’m no designer, so getting the CSS just right takes a lot of my time. Inexperience rules the day here (but I’m not a total kludge!).
MUSHclient plugins and libraries – I’ve had plenty of ideas here, and a good amount have actually been completed. My newest project is a highlighting library which makes it much simpler to “paint” parts of lines different colors and styles without manually creating a trigger for each change, or worse, gagging and re-echoing the line with your changes. It should be simple to implement if I can just sit down and get down to business.
Misc. jobs – I do a lot of odds and ends for my father, like setting up Webalizer for a website. I’m very new to the whole Linux scene, so I (get|have) to learn something new with almost every job. (With Webalizer it was cron jobs.)
And then I do things in Real Life™ and in online communities like the MUSHclient forums. Of course, most of this is self-inflicted, and I enjoy everything I do; I’m definitely not complaining. I just come up with too many projects for my own good.
…As a rather funny side-note, I remember learning C++ and having no idea what I should do for my next project, and just kind of muddling around. Now I find myself with an excess of them. Life is bittersweet.
So it’s been a while since I’ve posted about Aspect. I was talking with a friend who had read up on Aspect here, and it seems I had forgotten to mention a rather key change I made: switching to Ruby. So I’ll take some time to explain exactly what’s up with Aspect right now.
Previously, I had explained that I was using Python, with the Tornado server. Unfortunately – and sorry, Python-lovers – I can’t stand Python. Development got to a point where I couldn’t make any progress because I was fighting the language. I had originally planned on using Ruby, in fact, but at the time I couldn’t find any Ruby libraries that did what I wanted.
Apparently I just didn’t look in the right place, because once I started looking again, I immediately found EventMachine, a Reactor-based library. EventMachine is pretty awesome, and suits my needs perfectly. But I still needed an HTTP server, and preferably one that could handle lots of concurrent connections. Thin fits the bill nicely. And lastly I needed a framework that could deal with holding onto connections until I have content ready. So far, Cramp handles that perfectly well.
Recently I’ve tossed Nginx into the mix, since I’m hoping I can run multiple Thin workers behind a front-end Nginx. Nginx also uses the event-based model, rather than giving each connection its own thread; thread-per-connection would bring my server to an early death, I think.
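The event-driven model those servers share can be illustrated with a toy IO.select loop in Ruby — a sketch of the pattern only, not how Nginx or Thin are actually implemented:

```ruby
require 'socket'

# One thread watches every socket at once; each ready socket is handled
# as an "event" instead of getting a thread of its own.
def run_reactor(server, rounds)
  clients = []
  rounds.times do
    readable, = IO.select([server] + clients, nil, nil, 0.1)
    next unless readable
    readable.each do |io|
      if io == server
        clients << server.accept        # new connection, no new thread
      elsif (line = io.gets)
        io.write(line.upcase)           # handle the read event
      else
        clients.delete(io)              # peer disconnected
        io.close
      end
    end
  end
end

server = TCPServer.new('127.0.0.1', 0)  # port 0: pick any free port
port   = server.addr[1]
Thread.new { run_reactor(server, 50) }

client = TCPSocket.new('127.0.0.1', port)
client.puts 'hello'
reply = client.gets
```

One loop, many connections: the cost of an idle connection is just an entry in the `clients` array, which is why this model copes with lots of long-running connections.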
So that’s my network pipeline. From start to finish, it’s built to handle multiple concurrent, persistent connections, and it should be scalable too. Next time I’ll post about the messaging layer I’m building between the user and Aspect.
I’ve been experimenting a bit with Nginx lately. I haven’t had a chance to actually benchmark it – nor, in fact, do I have any idea how I would actually do that – but I’m pretty happy with the learning curve. My only problem was with getting Passenger working, since I use RVM to manage my rubies. It turned out to be as simple as “rvm ruby --passenger” and setting the passenger_ruby configuration option.
Nginx seems to be a better choice to run Aspect behind than Apache, because Nginx is built to manage simultaneous connections. Just like Thin, it’s event-driven instead of thread-based, and I’m going to have plenty of long-running (and recurring) connections.
Now that I have a local server running on my own computer, I think I’ll have an easier time juggling my projects. I also recently discovered /etc/hosts, and I just can’t describe how much nicer it is to use “aspect.local” instead of “127.0.0.1” all the time.
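For anyone who hasn’t seen it, the whole trick is one line in /etc/hosts (the hostname is whatever you like):

```
# /etc/hosts -- point a friendly development hostname at loopback
127.0.0.1   aspect.local
```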
In other news, the 9 key on my laptop broke. You may also know this key as the “left parenthesis” key. Now… If you’ve done any programming at all, you should know just how important this key is. Gah!!
This is still entirely conceptual at the moment, so don’t get too excited. Also, if you’re not too into technical stuff, you might want to read only the first paragraph.
I’m planning on making Aspect extensible to a certain extent. Users will be able to create plugins written in Lua, run on the server in a highly sandboxed environment, with a specialized API so the plugins can do useful stuff. One such API will probably be a widget interface, to create and manage HTML widgets on the browser side. This means you could create plugins that manage a visual set of stat gauges, show a visual map, and so on, in a similar vein to MUSHclient and Mudlet.
Obviously, I have to give scripts access only to the utilities I want them to have. But what if a plugin goes into an infinite loop, or just takes too much time to execute? In the best case, it will lag everyone else connected to Aspect. That’s clearly not a good thing!
There is functionality in Lua to set a “hook” which can be called automatically every so often. Theoretically, I can use this to halt execution, go do other stuff, then come back to the plugins. But we also want some way to keep plugins completely separate, and we don’t want any one plugin to hog all the time so no other plugin gets a chance. It’s a bit of a sticky situation.
First of all, we can isolate each plugin by starting each one in a coroutine, and setting the coroutine function’s environment to its own special table. Coroutines are like threads, but they’re cooperative rather than preemptive. That is to say, a coroutine has to say “Okay, I’m going to take a break now” or “Hey, you, do some work” explicitly in order for another coroutine to run. And changing the environment keeps the plugins from sharing any state.
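The cooperative hand-off can be sketched with Ruby fibers, which behave much like Lua coroutines (plugin names and steps here are invented; in Lua, coroutine.yield plays the role of Fiber.yield):

```ruby
# Two "plugins", each in its own coroutine-like fiber; a tiny scheduler
# round-robins them, and each runs only until it cooperatively yields.
plugins = {
  "mapper" => Fiber.new { 3.times { |i| Fiber.yield "mapper #{i}" }; nil },
  "gauges" => Fiber.new { 2.times { |i| Fiber.yield "gauges #{i}" }; nil },
}

log = []
until plugins.empty?
  plugins.each do |_name, fiber|
    step = fiber.resume        # run the plugin until its next yield
    log << step if step
  end
  plugins.delete_if { |_, f| !f.alive? }   # drop plugins that finished
end
```

Note that if a plugin never yields, the scheduler never gets control back — which is exactly the time-hogging problem the hook functions are meant to solve.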
Now we’re down to one more problem (or at least the last major one for now). I mentioned before that I don’t want any one plugin to hog all the time. I also mentioned hook functions. It just so happens that you can set a hook function separately for each coroutine. If we could yield from that hook, it would be almost like preemptive threads. And while you can’t yield from a hook set with debug.sethook (in Lua), you can from a hook set with lua_sethook (in C). So I just have to figure out how to write C-based hook functions to yield every so often.
So that’s a rather verbose explanation of how I want to implement plugins. Well, it’s 2am and I needed to talk. Tl;dr summary: every plugin will be a completely isolated Lua coroutine that shares execution time with other plugins. Plugins will be able to do a lot of interesting things, like create and manage visual widgets on the user’s browser, like status bars and maps and stuff, and could even augment/replace the default output window and input bar. It’ll be extremely extensible.
Now if only I could figure out how lua_yield works.
I’ve been working on a new personal project lately, which I call Aspect. It’ll be a web-based MUD client, but without Java or Flash. (A bit like PHudBase, actually, but built differently.) I haven’t done much with it yet, but I’ve got a local webserver running using the Python-based Tornado server. Tornado is great because it’s built specifically for handling AJAX long polling, which is what I use to communicate between Aspect and the browser. I’ve never used Python before, so it’s an interesting experience.
I’m planning on making the client extremely customizable by way of plugins, drawing from my ongoing experiences with MUSHclient. I’m also taking philosophical inspiration from Mibbit, which seems to work similarly (though the focus is completely different). With some MUDs, though, there’s a bit of an issue. Many MUDs disallow multiplaying, which is often tracked through the user’s IP address. But all users playing through Aspect will appear to connect from the same IP address; namely, the IP of Aspect itself. This is something I’ll have to work out eventually, but I have some ideas.
More to come eventually. I’d like to keep most of the details under wraps for now, and there are a lot of details.