Lifehack: Free or Cheap SaaS Tools I Used to Get to Inbox Zero

This article originally appeared as a guest post on Scott Abel’s blog, The Content Wrangler.
 
Lately I’ve been really overwhelmed by my email inbox. This is not a new problem, but in the past I’ve been able to keep it at under a hundred emails; recently it has grown to nearly 300 and it has really begun to interfere with my getting things done.

So, last night, I took a good, hard look at what was really IN my inbox.

About 40% of the notes consisted of links sent to me by well-meaning people who thought I should check them out for various reasons. Another 30% were suggestions on how to make our products, marketing materials, services, etc. better from employees, customers, partners and other well-meaning people. Of the remaining 30%, about half were personal introductions to potential partners, customers, investors or other people with whom the authors thought I would want to connect. The other half were ‘to-do’ items of a business or personal nature, some sent by me to myself (ick!) or by other people.

I think maybe one or two messages actually consisted of correspondence — by that I mean something like the letters of yesteryear that we used to send through snail mail. It’s interesting to see how the bastardized email of today is so different from the purpose for which it was invented, but that’s the subject of a whole other article. However, while I’m digressing, it’s worth noting that

email functions brilliantly as a “better mailbox” than snail mail, but at the same time it performs really poorly at all the other functions that it’s used for today.

Email is not a contact management system, a customer relationship management (CRM) system, a link-sharing/social bookmarking tool, nor a support ticketing/issue tracking system. Not by a long shot.

The goal for me was to put all these messages that shouldn’t remain as emails into their proper home so I could deal with them appropriately while maintaining my sanity.

Now that I had performed some analytics, it was time to get organized! Here are the tools I used to clean up the mess: Basecamp, Highrise and Instapaper. Instapaper is free; however the 37signals products Basecamp and Highrise carry a small monthly fee.
[Note: They also have trial versions, but don’t expect to get too far with them since 37signals made the free versions just useful enough to show you their value without actually providing any.]
Getting from almost 300 emails to under 20 took about two hours and it was time well spent. I made one pass through my bloated inbox and took one of these actions, based on the type of email:

Email Type #1: “Hey, you should check out this link because. . . .”

Opened the link and used the “Read Later” bookmarklet from Instapaper to save the link for when I have time to read it. If the email containing the link had something interesting in it (besides the link), I copied that into the notes field for that link once I had saved it to Instapaper. If you care to share what you’re reading/bookmarking, you can also use a del.icio.us bookmarklet for this. I find Instapaper easier, though, because you can bookmark a link with one click. Del.icio.us forces you to enter tags and other metadata, which increases friction and slows down the process of bookmarking.

Bottom line: Bookmarking, per se, is a simple, rote task that shouldn’t take more than one click to accomplish.

Email Type #2: “Hey, you should make your product better by doing this. . . .”

Read the email. If there were specific action items associated with it, I created to-dos in Basecamp (under the project for the appropriate product) so that we can address them in a future release. We maintain a to-do list for each release of each product and another to-do list that serves as a backlog for each product. (Some agile tools refer to this as “the icebox.”) When we’re planning a release, we pop the most important things out of the backlog and move them into the current release to-do list.
If the to-dos were general, more thematic suggestions without specific action items associated with them, I copied the suggestions to one of our design writeboards in Basecamp. Then I responded to the email thanking them for the feedback and deleted it.

Bottom line: Product feedback and support tickets belong in Basecamp or a support ticketing system … or even a CRM, but they should never be kept in email as email is not the right tool for tracking the support ticket cycle.

Email Type #3: “Hey, you should sell to (or partner with) so-and-so. . . ”

Forward the email to Highrise’s email dropbox. Delete. Done. When I process my Highrise queue of messages, I can decide whether or not to pursue these leads on a case-by-case basis. Sales leads belong in your CRM system so that they can be tracked and managed. Email is the wrong tool for tracking the sales cycle. If you want to close sales deals and you’re using email as your CRM system, important communiqués are going to slip through the cracks and you’re going to lose business as a result.

Bottom line: E = mc2 but Email != CRM.
 
Email Type #4: “Hey, Chris, meet so-and-so. Hey, so-and-so, meet Chris”

Reply All and start the process of scheduling a good time to talk. However, there’s a bit of a hole in this, because if I then delete the message, how do I ensure that so-and-so and I actually end up talking/meeting? If you have any suggestions about how you’ve solved this problem and what tools you’ve used (besides stinkin’ email), please let me know in the comments field associated with this blog post. I guess I could use our CRM for this, but that’s kind of like using a bazooka to kill flies.

Bottom line: I don’t know what the best tool for this is, but I do know that it’s most definitely not email.

Email Type #5: To-do item (not related to a product or a lead)

Put it on my to-do list. Right now, somewhat ironically, this is an email that I keep perpetually in draft status. To-do lists are a funny thing. I’ve used Remember the Milk, Google Spreadsheets/Documents and a number of other tools, but frankly, nothing beats a text file. By keeping it as a draft email in Gmail, I always have access to it from anywhere, but you can easily accomplish this with Google Docs too, or a number of other tools.

Bottom line: Your inbox should not be your to-do list. Use a text document, a to-do management tool or even a piece of paper and a pen. There’s something inherently gratifying about the physical, visceral action of scratching something off my to-do list with a big, fat marker (preferably a Sharpie). No tool I have encountered can come close to emulating that feeling of accomplishment.

Email Type #6: Personal Correspondence

Print it on nice paper, frame it and hang it on the wall! Seriously, these have gotten so rare that I really don’t mind them at all.

Bottom line: This is what email was designed to do, so feel free to use it for that. Enjoy it, because your friends would probably rather update their Facebook status than send you an email. If they do send you emails (and there’s no to-do/action-item associated with them), then they’re a true friend. You should return the favor with a personal email of your own, or, if you really want to surprise them, drop a handwritten note to them in the postal mail, preferably with a designer stamp that reflects your sense of style.

There’s something really sexy about being retrosexual — try it, I guarantee you’ll get great results!

Conclusion: I didn’t quite reach Inbox Zero before my head hit the keyboard, but I am down to under 20 emails in my Inbox. Every time I hit “delete” I could feel my stress level, my blood pressure and my state of disorganization decreasing proportionately.

So, how many messages are in your inbox? What do you think of my approach? What tools and strategies do you use to manage all this email insanity? I’d love to hear your comments. Just don’t email them to me! :-)

How to Convince Your Company to Pay for a SXSWi Pass

Times are tough, right? Everyone is slashing spending, especially around travel and conference budgets. But you need (read: want) to be at SXSWi. So it’s time to convince your boss that your attendance at SXSWi is something that the business needs to be successful.

Fortunately, if your company does or wants to do anything with the interwebs (and seriously, who doesn’t these days?), this is easier than you thought. Just follow these five easy steps.

1. Look at the SXSWi speaker/panel lineup and pick ten panels that are relevant to your line of work. I’m a web 2.0 developer with more than a passing interest in social media, so this is easy. But the panels run the gamut of topics, so you should be able to find something that works for your business/industry. Here’s an example: Building Personal and Company Brands with Web 2.0 Tools. Every company wants a stronger brand, right?

2. Copy the titles and abstracts into an e-mail to your boss and elaborate on how you’ll benefit from them. More importantly, give specific reasons why what you learn will help you and your team, peers, etc. achieve 2009’s business goals. To continue with our example, my company needs to grow our social media cred. The panel consists of Saul Colt, C.C. Chapman and Gary Vaynerchuk. According to their bios (on their web sites), Saul is “an accomplished marketing professional, with more than a decade of diverse high-level experience and a respected publisher” and C.C.’s company, The Advance Guard, “focuses on helping brands of all sizes smartly and strategically leverage emerging technologies for radical marketing programs.” Gary doesn’t really require an explanation, but if your boss has been living in a cave, then you might want to drop a few adjectives like “inspirational” and “passionate.” Example: This panel will help me form an action plan on how to grow my company’s social media cred, following the examples set by these three extraordinary social media mavens.

3. Outline the maximum line item costs for the event. The pass, the travel, the hotel and the food. If you really want to go, make your food budget less than $50/day, your hotel budget less than $100/day and cover the rest (if necessary) with your own cash. Don’t provide a total, as it might overwhelm your boss at first brush. Besides, I’m sure he or she can add.

4. Plan a post-conference re-cap meeting. This is crucial! Set a date and make a list of team members who you will invite, including your boss. During this meeting, promise to share the highlights of what you learned at SXSWi and what you recommend that the business do differently. Explain how these revolutionary ideas will boldly move the company forward in ways they never could have imagined.

5. Split the difference. Remind your boss that the conference takes place Friday-Tuesday (March 13th to 17th). If you travel after work on Thursday or on Friday morning and return to work the following Wednesday, you’re only missing three days of work AND you’re donating your time to the company you love so much over the weekend.

There you have it, your “free” pass to SXSWi. Well, it’s not exactly free. You have to deliver on all the promises you’re making to your boss, especially if you want to go next year! Now if only it was this easy to justify the music festival. . . .

(Thanks to allisonb00, the inspiration for many things in my life, including this blog post.)

Top Ten Tips for Writing Plumtree Crawlers that Actually Work

Just in time for Halloween, I’ve decided to publish my Top Ten Tips for Writing Plumtree Crawlers that Actually Work. This post may scare you a little bit, but hey, that’s the spirit of Halloween, right?

[Editor’s note: yes, we’re still calling it Plumtree. Why? I did a Google search today and 771,000 hits came up for “plumtree” as opposed to around 300,000 for “aqualogic” and just over 400,000 for “webcenter.” Ignoring the obvious — that a short, simple name always wins over a technically convoluted one — it just helps clarify what we’re talking about. For example, if we say “WebCenter,” no one knows whether we’re talking about Oracle’s drag-n-drop environment for creating JSR-168 portlets (WebCenter Suite) or Plumtree’s Foundation/Portal (WebCenter Interaction). So, frankly, you can call it whatever you want, but we’re still gonna call it Plumtree so that people will know WTF we’re talking about.]

So, you want to write a Plumtree Crawler Web Service (CWS), eh?

Here are ten tips that I learned the hard way (i.e. by NOT doing them):

1. Don’t actually build a crawler
2. If you must, at least RTFM
3. “Hierarchyze” your content
4. Test first
5. When testing, use the Service Station 2.0 (if you can get it)
6. Code for thread safety
7. Write DRY code (or else)
8. Don’t confuse ChildDocument with Document
9. Use the named methods on DocumentMetaData
10. RTFM (again)

Before I get into the gory details, let me give you some background. First off, what’s a CWS anyway? It’s the code behind what Oracle now calls Content Services, which spider through various types of content (for lack of a better term) and import pointers to those bits of content into the Knowledge Directory. This ability to spider content and normalize its metadata is one of the most underrated features in Plumtree. (FYI, it was also the first feature we built and arguably, the best.)

Each bit of spidered content is called a Document or a Card or a Link depending on whether you’re looking at the product, the API or the documentation, respectively. It’s important to realize that CWSs don’t actually move content into Plumtree; rather, they store only pointers/links and metadata and they help the Plumtree search engine (known under the covers as Ripfire Ignite) build its index of searchable fulltext and properties.

Today, Plumtree ships with one OOTB CWS that knows how to crawl/spider web pages. Not surprisingly, it’s known as the Web Crawler. Don’t let the name mislead you: the web crawler can actually crawl almost anything, as I explain in my first tip, which is:

Don’t actually build a crawler.

But I’m getting ahead of myself.

So, back to the background on crawlers. Oracle ships five of ’em, AFAIK: one for Windows files, one for Lotus Notes databases, one for Exchange Public Folders, one for Documentum and one for Sharepoint. Their names give you blatantly obvious hints at what they do, so I won’t get into it. Along with the OOTB crawlers, Oracle also exposes a nice, clean API for writing crawlers in Java or .NET. (If you really want to push the envelope, you can try writing a crawler in PHP, Python, Ruby, Perl, C++ or whatever, but it’s hard enough to write one in Java or .NET, so I wouldn’t go there. If you do, though, make sure that your language has a really good SOAP stack.)

So, after reading this, you still want to write a crawler, yes?

Let’s get into my Top Ten Tips:

1. Don’t actually build a crawler

Yes, you really don’t want to go here. Building crawlers is not that hard, as there’s a clean, well-documented API. However, getting them to work is a whole other story.

Most applications these days have a web UI. So, take advantage of it. Point the OOTB web crawler at the web UI and see what it does. Some web UIs will work well, others won’t (particularly if they use frames or lots of JavaScript).

Let’s assume for a moment that this technique doesn’t work. Or perhaps you’re dealing with some awful client-server or greenscreen POS that doesn’t have a web UI. Either way, you may still be able to use the web crawler.

How? Well, the web crawler can crawl almost anything using something that we used to call the Woogle Web. Think of it this way. Say you want to crawl a database. Perhaps that database contains bug reports. Rather than waste your time trying to write a database crawler, just write an index.jsp (or .php, .aspx, .html.erb, .you-name-it) page that does something like select id from bugs and dumps out a list of all the bugs in the database. Then, hyperlink each one to a detail page (that’s essentially a select * from bugs where id = ? query). Your index page can be sinfully ugly. However, put some effort into your detail pages, making them look pretty AND using meta tags to map each field in the database to its value.

Then, simply point the OOTB web crawler at your index page, tell it not to import your index page, map your properties to the appropriate meta tags, crawl at a depth of one level, and get yourself a nice cup of coffee. By the time you get back, the OOTB web crawler will have created documents/links/cards for every bug with links to your pretty detail pages and every bit of text will be full-text searchable. So will every property, meaning that you can run an advanced search on bug id, component, assignee, severity, etc.

At Plumtree, we used to call this a Woogle Web. It may sound ridiculous, but

a Woogle Web is a great way to crawl virtually anything without lifting a finger.
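To make this concrete, here’s a minimal Ruby/ERB sketch of a Woogle Web. The bug records and field names here are hypothetical stand-ins for what would really come from a `select * from bugs` query; the point is the shape: an ugly index page full of links, and detail pages with one meta tag per field for the crawler to map to properties.

```ruby
require 'erb'

# Hypothetical records standing in for "select * from bugs" results.
BUGS = [
  { id: 1, component: 'search',  severity: 'high', summary: 'Index never finishes' },
  { id: 2, component: 'crawler', severity: 'low',  summary: 'Chokes on frames' }
]

# The index page can be sinfully ugly: a bare list of links is all
# the web crawler needs in order to discover the detail pages.
INDEX = ERB.new(<<~HTML)
  <html><body>
    <% bugs.each do |bug| %>
      <a href="bug_<%= bug[:id] %>.html">Bug <%= bug[:id] %></a>
    <% end %>
  </body></html>
HTML

# Each detail page emits one meta tag per database field, so the
# crawler can map every field to a searchable property.
DETAIL = ERB.new(<<~HTML)
  <html><head>
    <title>Bug <%= bug[:id] %>: <%= bug[:summary] %></title>
    <% bug.each do |field, value| %>
      <meta name="<%= field %>" content="<%= value %>">
    <% end %>
  </head><body><h1><%= bug[:summary] %></h1></body></html>
HTML

def woogle_index(bugs)
  INDEX.result_with_hash(bugs: bugs)
end

def woogle_detail(bug)
  DETAIL.result_with_hash(bug: bug)
end
```

Point the OOTB web crawler at the index page, set the depth to one, and every field becomes full-text searchable without a line of crawler code.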

However, a Woogle Web won’t work for everything. If you’re dealing with a product where you can’t even get your head around the database schema AND you have a published Java, .NET or Web Service API, then you might want to think about writing a custom crawler.

2. If you must, at least RTFM

If you’re anything like me, reading the manual is what you do after you’ve tried everything else and nothing has worked out. In the case of Plumtree crawlers, I recommend swallowing your pride (at least momentarily) and reading all of the documentation, including their “tips” (which are totally different from and not nearly as entertaining as my tips, but equally valuable).

Once you’re done reading all the documentation, you might also want to consult Tip #10.

3. “Hierarchyze” your content

Um, yeah, I know “hierarchyze” isn’t a word. But since crawlers only know how to crawl hierarchies, if your data aren’t hierarchical, you darn well better figure out how to represent them hierarchically before you start writing code. Even if you don’t think this step is necessary, just do it because I said so for now. You’ll thank me later.
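As a sketch of what I mean (hypothetical field names, nothing Plumtree-specific), flat records can be grouped into a two-level folder tree before the crawler ever sees them:

```ruby
# Group flat records into a folder-like hierarchy (component ->
# severity -> documents), since crawlers can only walk containers
# that hold documents and/or more containers.
def hierarchize(records)
  records.group_by { |r| r[:component] }
         .transform_values { |rs| rs.group_by { |r| r[:severity] } }
end

bugs = [
  { id: 1, component: 'search',  severity: 'high' },
  { id: 2, component: 'search',  severity: 'low'  },
  { id: 3, component: 'crawler', severity: 'high' }
]

tree = hierarchize(bugs)
# tree['search']['high'] now holds the "documents" in that folder
```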

4. Test first

Don’t even try to write your crawler and then run it and expect it to work. Ha! Instead, write unit tests for every modular bit of code that you throw down. To the extent that it’s possible, write these tests first. It’ll save your butt later.

5. When testing, use the Service Station 2.0 (if you can find it)

When you do finally get around to integration testing your crawler, it’ll save you a lot of time if you use the Service Station 2.0. However, it may take you a long time to get it, so start the process early.

Unlike every product Oracle distributes, Service Station is 1) free and 2) not available for download. Yes, you read that correctly.

To get it, you need to contact support. I called them and told them I needed it and after two weeks I got nothing but a confused voicemail back saying that support doesn’t fulfill product orders. Um, yeah. So I called back and literally begged to talk to someone who actually knew what this thing was. Then I painstakingly explained why I couldn’t get it from edelivery (because it’s not there) nor from commerce.bea.com (because it’s permanently redirecting to oracle.com) nor from one.bea.com (because there it says to contact support). So, after my desperate pleas and 15 calendar days of waiting, I got an e-mail with an FTP link to download the Service Station 2.0.

After installing this little gem, my life got a lot easier. Now instead of testing by launching a Plumtree job to kick off the crawler (and then watching it crash and burn), I could use the Service Station to synchronously invoke each method on the crawler and log the results.

Another handy testing tool is the PocketSOAP TCPTrace utility. (It’s also very handy for writing Plumtree portlets.) You can set it up between the Service Station and your CWS and watch the SOAP calls go back and forth in clear text. Very nice.

6. Code for thread safety

So, as the documentation says (and as I completely ignored), crawlers are multithreaded. The portal will launch several threads against your code and, unless you code for thread safety, these threads will proceed to eat your lunch.

Coding for threadsafety means not only that you need to synchronize access to any class-level variables, but also that you must use only threadsafe objects (e.g. in Java, use Vector or a Collections.synchronizedList wrapper instead of a plain ArrayList).

7. Write DRY code (or else)

Even though you’re probably writing your CWS in Java or .NET, stick to the ol’ Ruby on Rails adage:

Don’t Repeat Yourself.

Say, for example, that you need to build a big ol’ Map of all the Document objects in order to retrieve a document and send its metadata back to the mothership (Plumtree). It’s really important that you don’t build that map every time IDocumentProvider.attachToDocument is called. If you do, your crawler is going to run for a very very very long time. Crawlers don’t have to be super fast, but they shouldn’t be dogshit slow either.

As a better choice, build the Map the first time attachToDocument is called and store it as a class-level variable. Then, with each successive call to attachToDocument, check for the existence of the Map and, if it’s already built, don’t build it again. And don’t forget to synchronize not only the building of the Map, but also the access to the variable that checks whether the Map exists or not. Like I said, this isn’t a walk in the park. (See Tip #1.)
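In Ruby terms (a sketch of the pattern only; real CWS code would be Java or .NET, but the shape is identical), the build-once, synchronized map looks like this:

```ruby
# Build an expensive map exactly once, under a lock, and reuse it on
# every later call: the pattern described above for attachToDocument.
class DocumentCache
  def initialize(&loader)
    @loader = loader    # the expensive build (e.g. "select * from bugs")
    @mutex  = Mutex.new
    @map    = nil
  end

  def fetch(key)
    # Check, then check again inside the lock, so concurrent crawler
    # threads can't each trigger their own expensive build.
    if @map.nil?
      @mutex.synchronize { @map ||= @loader.call }
    end
    @map[key]
  end
end

builds = 0
cache = DocumentCache.new { builds += 1; { 1 => 'doc one', 2 => 'doc two' } }
threads = 8.times.map { Thread.new { cache.fetch(1) } }
threads.each(&:join)
# builds is 1: eight threads, one expensive build
```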

8. Don’t confuse ChildDocument with Document

IContainer has a getChildDocuments method. This, contrary to how it looks on the surface, does not return IDocument objects. Instead, it expects you to return an array of ChildDocument objects. These, I repeat, are not IDocument objects. Instead, they’re like little containers of information about child documents that Plumtree uses so that it knows when to call IDocumentProvider.attachToDocument. It is that call (attachToDocument) and not getChildDocuments that actually returns an IDocument object, where all the heavy lifting of document retrieval actually gets done.

You may not understand this tip right now, but drop by and read it again after you’ve tried to code against the API for a few hours, and it should make more sense.

9. Use the named methods on DocumentMetaData

This one really burned me badly. I saw that DocumentMetaData had a “put” method. So, naturally, I called it several times to fill it up with the default metadata. Then I wasted the next two hours trying to figure out why Plumtree kept saying that I was missing obvious default metadata properties (like FileName). The solution? Call the methods that actually have names like setFileName, setIndexingURL, etc. — don’t use put for standard metadata. Instead, only use it for custom metadata.

10. RTFM (again)

I can’t stress the importance of reading the documentation enough.

If you think you understand what to do, read it again anyway. I guarantee that you’ll set yourself up for success if you really read and thoroughly digest the documentation before you lift a finger and start writing your test cases (which of course you’re going to write before you write your code, right?).

* * *

As always, if you get stuck, feel free to reach out to the Plumtree experts at bdg. We’re here to help. But don’t be surprised if the first thing we do is try to dissuade you from writing a crawler.

Have a safe and happy Halloween and don’t let Plumtree Crawlers scare you more than they should.

Boo!

Chris Bucchere Speaking at the NovaRUG on June 18th, 2008

Calling all local Rubyists! I’m speaking about modular page design in Ruby on Rails at tomorrow night’s NovaRUG. The title of my talk is “To Portal or Not to Portal: How to Build DRY, Truly Modular Mashups in Rails.”

The meat of my talk is going to come from these two recent blog posts:

Modular Page Assembly in Rails (Part 1)
Modular Page Assembly in Rails (Part 2)

I’ll be followed by Arild Shirazi of FGM giving a presentation entitled “CSS for Developers.”

Get all the details here.

P.S.: Free pizza!

Modular Page Assembly in Rails (Part 2)

In Part 1, I explained how you can develop clean, DRY and encapsulated MVC code that allows for completely modular page assembly in Ruby on Rails.

In this follow up post, I explain how you can use a combination of layouts and content_for to apply title bars and consistent styles to your page components (or modules or portlets, or whatever you want to call them).

The code here is easy to follow and it pretty much speaks for itself. It consists of two layouts (which I called aggregate and snippet), a sample controller and two sample views. Let’s start with the controller for one page in your site that, say, aggregates a couple of modular page components together to show a nice view of company information.

Controller code (app/controllers/company_controller.rb):

def index
  render :action => 'index', :layout => 'aggregate'  
end

This controller simply delegates the rendering of the page to the index.html.erb view and tells Rails to use a layout called aggregate.

Now let’s inspect the view.

View code (app/views/company/index.html.erb):

<% content_for :left do %>
 <%= embed_action :controller => 'company', :action => 'company_list' %>
 <%= embed_action :controller => 'feed', :action => 'feed' %>
<% end %>

<% content_for :center do %>
 <%= embed_action :controller => 'company', :action => 'detail' %>
<% end %>

<% content_for :right do %>
 <%= embed_action :controller => 'home', :action => 'sponsors' %>
<% end %>

This view defines three content regions, with the end goal being to create a page with three columns of “portlets.” The left column contains two portlets: a list of all companies (company_list) and a news feed (feed). The center column contains a company detail portlet and the right column contains a portlet with information about sponsors. (Note that the portlets come from three different functional areas of the site, so they’re decomp’d appropriately into three different controllers.)

Now, let’s take a look at some layout magic.

Here’s the aggregate layout (app/views/layouts/aggregate.html.erb):

<table class="main" cellspacing="0" cellpadding="0">
  <tbody>
    <tr>
      <td id="column-left" class="column" valign="top">
        <div id="region-left" class="region"><%= yield :left %></div>
      </td>
      <td id="column-center" class="column" valign="top">
        <div id="region-center" class="region"><%= yield :center %></div>    
      </td>
      <td id="column-right" class="column" valign="top">
        <div id="region-right" class="region"><%= yield :right %></div>
      </td>
    </tr>
  </tbody>
</table>

I chose a table (yes, I still use tables) with three divs in it, one for each region of modules, but you could use pure divs with floating layouts or any other approach.

The three content regions, left, center and right, match up with the three content sections defined in the index view above using content_for. In case this isn’t obvious, when the layout encounters a page-level definition of a content region in the view, it renders it. If there is no definition for a particular region, the containing column will just collapse on itself, which is the behavior we want.

This is a slight digression, but note how I used CSS classes to identify the columns and regions in a general way (using classes) and a specific way (using ids). This allows me to style the whole module-carrying region with CSS using table.main as my selector, all the columns using table.main td.column as my selector or all the regions using table.main td.column div.region as my selector. I can also pick and choose different specific areas (e.g. table.main td.column#column-right) and define their style attributes using CSS. As you’ll see in a minute, I can write CSS selectors to say if module A is in the left column, apply style X but if module A is in the center or right column, apply style Y. Pretty cool.

Now, let’s explore the module layout. (Note that I’ve been calling page snippets portlets, modules or components, pretty much interchangeably. I think this illustrates that it doesn’t make a difference what we call ’em — e.g. portlets vs. gadgets — the concept is fundamentally clear and fundamentally the same.)

Module layout (app/views/layouts/snippet.html.erb):

<div id="<%= yield :id %>" class="snippet-container">
  <div class="snippet-title"><%= yield :title %></div>
  <div class="snippet-body"><%= yield :body %></div>
</div>

This layout expects three more content regions to be defined in the view: id, title and body. Here are the matching content regions from one sample view (for sponsors) — for brevity’s sake, I didn’t include all the views.

Sample module view (app/views/home/sponsors.html.erb):

<% content_for :id do %>sponsors<% end %>

<% content_for :title do %>Our Sponsors<% end %> 

<% content_for :body do %>
Please visit the sites of our wonderful sponsors!
<% end %>

Now, because of some nicely-placed classes and ids, I can once again use CSS selectors to give a common look-and-feel to all portlet containers (div.snippet-container), portlet titles (div.snippet-title) and to portlet bodies (div.snippet-body). Of course, if I want to diverge from the main look-and-feel, I can call out specific portlets: div.snippet-container#sponsors.

If I really want to get fancy, I can use CSS selectors to select, say, the sponsor portlet, but only when it’s running in the right column: table.main td#column-right div.snippet-container#sponsors.

So, in summary, using layouts, content_for and some crafty CSS, I can create a page of modules that can be styled generically or specifically. Combine this approach with what I described in Part 1, and you can “portal-ize” your Rails applications without using a portal!

Was this useful to you? If so, please leave a comment.

Modular Page Assembly in Rails (Part 1)

Recently I was faced with an interesting problem: I wanted to create a modular, portal-like page layout natively in Ruby on Rails without using another layer in the architecture like SiteMesh or ALUI. Java has some pretty mature frameworks for this, like Tapestry, but I found the Ruby on Rails world to be severely lacking in this arena.

I started with Rails 2.0 and the best I could come up with at first brush was to create an html.erb page composed of several partials. Each partial would basically resemble a “portlet.” This works fine, but with one showstopping pitfall — you can’t call controller logic when you call render :partial. That means in order for each page component (or portlet, if you like) to maintain its modularity, you would have to either 1) put all the logic in the partial view (which violates MVC) or 2) repeat all the logic for each component in the controller for every page (which violates DRY).

If that’s not sinking in, let me illustrate with an example. Let’s say you have two modular components. One displays the word “foo” and the other “bar”, which are each contained in page-level variables @foo and @bar, respectively. You want to lay out a page containing both the “foo” and the “bar” portlets, so you make two partials.

“foo” partial (app/views/test/_foo.html.erb):

<%= @foo %>

“bar” partial (app/views/test/_bar.html.erb):

<%= @bar %>

Now, you make an aggregate page to pull the two partials together.

aggregate page (app/views/test/aggregate.html.erb):

<%=render :partial => 'foo' %>
<%=render :partial => 'bar' %>

You want the resulting output to say “foo bar” but of course it will just throw an error unless you either embed the logic in the view (anti-MVC) or supply some controller logic (anti-DRY):

embedded logic in the view (app/views/test/_foo.html.erb):

<% @foo = 'foo' %>
<%= @foo %>

embedded logic in the view (app/views/test/_bar.html.erb):

<% @bar = 'bar' %>
<%= @bar %>

— OR —

controller logic (app/controllers/test_controller.rb):

def aggregate
 @foo = 'foo'
 @bar = 'bar'
end

Neither solution is optimal. Obviously, in this simple example, it doesn’t look too bad. But if you have hundreds of lines of controller logic, you certainly don’t want to dump that in the partial. At the same time, you don’t want to repeat it in every controller that calls the partial — this is supposed to be modular, right?

What a calamity.

I did some research on this and even read a ten-page whitepaper that left me with no viable solution, but my research did confirm that lots of other people were experiencing the same problem.

So, back to the drawing board. What I needed was a way to completely modularize each partial along with its controller logic, so that it could be reused in the context of any aggregate page without violating MVC or DRY. Enter the embed_action plugin.

This plugin simply allows you to invoke a modular bit of code in a controller and render its view, but not in a vacuum like render :partial. With it, I could put controller logic where it belongs and be guaranteed that no matter where I invoked the “portlet,” it would always render correctly.

Here’s the “foo bar” example, implemented with embed_action.

“foo” controller (app/controllers/foo_controller.rb):

def _foo
 @foo = 'foo' #this logic belongs here
end

“bar” controller (app/controllers/bar_controller.rb):

def _bar
 @bar = 'bar' #this logic belongs here
end

aggregate view (app/views/test/aggregate.html.erb):

<%= embed_action :controller => 'foo', :action => '_foo' %>
<%= embed_action :controller => 'bar', :action => '_bar' %>

That’s it! Note that there is no logic in the aggregate controller — that’s not where it belongs. Instead, the foo and bar logic has been modularized/encapsulated in the foo and bar controllers, respectively, where the logic does belong. Now you can reuse the foo and bar partials anywhere, because they’re 100% modular.
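The core idea can be simulated in plain Ruby. Here’s a toy sketch of what embed_action does conceptually — the helper body, the FooController class, and the string template below are illustrative stand-ins, not the plugin’s actual internals:

```ruby
require 'erb'

# Toy sketch (illustrative names, not the plugin's real code):
# run a controller action so its logic executes, then render the
# component's template with that controller's instance variables.
class FooController
  def _foo
    @foo = 'foo' # the component's logic lives with the component
  end
end

def embed_action(controller_class, action, template)
  controller = controller_class.new
  controller.send(action) # invoke the modular controller logic
  # render the "partial" in the controller's context via ERB
  ERB.new(template).result(controller.instance_eval { binding })
end

puts embed_action(FooController, :_foo, '<%= @foo %>') # prints foo
```

The key point is that the action runs before the template is evaluated, so the view never has to set up its own data.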

Thanks to embed_action, I was finally able to create a completely modular page (and site) design, with very little effort on my part.

In a follow-up post (Part 2), I’ll explain how you can create really nice-looking portlets using everything above plus layouts and content_for.

How to Build Your Own Temple of Ego in Five Minutes

My wife is arguably my biggest fan, although my mom probably deserves “honorable mention.”

If you too are a fanboy or fangirl of someone, like, say Robert Scoble, you may want to know what he’s blogging about, pod/vod-casting about, Twittering about, etc. Someone put together this great aggregator called Robert Scoble’s Temple of Ego.

I thought, wouldn’t it be great if we all had our own Temples of Ego?

Back to my wife. Despite her self-professed fanhood, she’s been having trouble lately (well, okay, ever) keeping up with all my web activity. This all stems from the fact that Feedhaus, a site I built and launched last fall, was selected as a SXSW Web Award finalist and I’ve been blathering about this fact in every online setting imaginable, including here on dev2dev. (Please vote for us, BTW.)

So, with upwards of five different blogs, Twitter, Facebook, Google Reader shared items, flickr, YouTube and del.icio.us — keeping track of my enormous ego is a formidable task.

But now, with the power of the semantic web and a great tool called Yahoo! Pipes, you can create your own Temple of Ego in five minutes.

Here’s mine.

Simply go to Yahoo! Pipes, log in and create a pipe. In the “Fetch Feed” node at the top, enter the RSS or Atom feed from whatever you want to include in your Temple of Ego. For example, I included all my blogs, my tweets (from Twitter), my Facebook posted items, my Google Reader shared items, my del.icio.us links, my flickr photos and my YouTube videos. That’s a good start.

Now, drag in a “Sort” node and sort by descending pubDate. This puts all the newest stuff first, known to geeks as “reverse chronological order.”

Finally, wire together the Fetch Feed node with the Sort node and then the Sort node with the Pipe Output node.
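The whole pipe boils down to a merge-and-sort. Here’s an in-memory toy equivalent of the Fetch Feed → Sort steps (the items are made up for illustration):

```ruby
require 'time'

# In-memory toy equivalent of the Fetch Feed -> Sort pipeline:
# merge entries from several feeds, then order them by pubDate
# descending -- reverse chronological, newest first.
items = [
  { :title => 'blog post',  :pubDate => Time.parse('2008-02-10') },
  { :title => 'tweet',      :pubDate => Time.parse('2008-02-12') },
  { :title => 'flickr pic', :pubDate => Time.parse('2008-02-11') }
]

merged = items.sort_by { |i| i[:pubDate] }.reverse # newest first
puts merged.map { |i| i[:title] }.join(', ')
# tweet, flickr pic, blog post
```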

Now, if you’re really egotistical, you can email all your friends a link to your Temple of Ego and encourage them to add the pipe’s outbound RSS to their feed reader of choice. (Here’s mine.)

So, what on earth does this have to do with ALUI?

ALI 6.5 — which the good folks at bdg are using to build social applications for Participate.08 — has some pretty slick RSS capabilities and some really beautiful user profile pages. Imagine if everyone’s profile page had the output from their Temple of Ego embedded in it. How powerful would that be? And, with ALI 6.5 and a little Yahoo! Pipes magic, setting this up in your ALI deployment will be a breeze.

Comments

Comments are listed in date ascending order (oldest first)

  • Thanks for helping me keep the title of “your biggest fan” — your Pipe implementation is working beautifully.

    Posted by: allisonbucchere on February 13, 2008 at 1:17 PM

  • this reminds me of a similar feature on google reader that lets you create a public feed based on your tags. so i could tag multiple feeds with the same tag. then if i make that tag public, it results in a feed that combines all feeds with that tag. pipes looks to be a little more powerful with respect to sorting, etc, but if you don’t need that, reader offers a little bit of the same. james

    Posted by: jbayer on February 13, 2008 at 5:17 PM

Write an ALUI IDS in Under 15 Lines Using Ruby on Rails

Not only is it possible to write an ALUI Identity Service in Ruby on Rails, it’s remarkably easy. I was able to do the entire authentication part in fewer than 15 lines of code! However, I ran into problems on the synchronization side and ended up writing that part in Java. Read on for all the gory details.

As part of building the suite of social applications for BEA Participate 2008, we’re designing a social application framework in Ruby on Rails and integrating it with ALI 6.5. Not being a big fan of LDAP, I decided to put the users of the social application framework in the database (which is MySQL). Now, when we integrate with ALI, we need to sync this user repository (just as many enterprises do with Active Directory or LDAP).

So I set out to build an IDS to pull in users, groups and memberships in Ruby on Rails.

Ruby on Rails clearly favors REST over SOAP in its web service support. However, it still supports SOAP for interoperability, and it mostly works. I did have to make one patch to Ruby’s core XML processing libraries to get things humming along. I haven’t submitted the patch back to Ruby yet, but at some point I will. Basically, the problem was that the parser didn’t recognize the UTF-8 encoding if it was enclosed in quotes (“UTF-8”). This patch suggestion guided me in the right direction, but I ended up doing something a little different because the suggested patch didn’t work.

I changed line 27 of lib/ruby/1.8/rexml/encoding.rb as follows:

 enc = enc.nil? ? nil : enc.upcase.gsub('"','') #that's a double quote inside single quotes
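In isolation, the patched expression behaves like this (normalize_encoding is just an illustrative wrapper name for the one-liner above, not part of REXML):

```ruby
# The patched expression in isolation: an encoding name that
# arrives wrapped in literal double quotes is upcased and the
# quotes are stripped, so the parser sees a name it recognizes.
def normalize_encoding(enc)
  enc.nil? ? nil : enc.upcase.gsub('"', '')
end

puts normalize_encoding('"utf-8"') # prints UTF-8
puts normalize_encoding('UTF-8')   # unchanged: UTF-8
```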

Now that Ruby’s XML parser recognized UTF-8 as a valid format, it decided that it didn’t support UTF-8! To work around this, I installed iconv, which is available for Windows and *nix and works seamlessly with Ruby. In fact, after installation, all the XML parsing issues went bye-bye.

Now, on to the IDS code. From your rails project, type:

ruby script/generate web_service Authenticate

This creates app/apis/authenticate_api.rb. In that file, place the following lines of code:

class AuthenticateApi < ActionWebService::API::Base
  api_method :Authenticate,
             :expects => [{:Username => :string},
                          {:Password => :string},
                          {:NameValuePairs => [:string]}],
             :returns => [:string]
end

All you’re doing here is extending ActionWebService and declaring the input/output params for your web service. Now type the following command:

ruby script/generate controller Authenticate

This creates the controller, where, if you stick with direct dispatching (which I recommend), you’ll be doing all the heavy lifting. (And there isn’t much.) This file should contain the following:

class AuthenticateController < ApplicationController
 web_service_dispatching_mode :direct
 wsdl_service_name 'Authenticate'
 web_service_scaffold :invoke

 def Authenticate(username, password, nameValuePairs)
   if User.authenticate(username, password)
     return ""
   else
     raise "-102" #generic username/password failure code
   end
 end
end

Replace User.authenticate with whatever mechanism you’re using to authenticate your users. (I’m using the login_generator gem.) That’s all there is to it! Just point your AWS (authentication web service) at http://localhost:3000/authenticate/api and you’re off to the races.
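If you’re not using a gem, a stand-in for User.authenticate can be as small as this — a hypothetical sketch with a hard-coded toy user store, not what login_generator actually does:

```ruby
require 'digest/sha1'

# Hypothetical stand-in for User.authenticate (illustrative only;
# the real app uses the login_generator gem): look up the login
# and compare SHA-1 password digests.
class User
  USERS = { 'chris' => Digest::SHA1.hexdigest('secret') } # toy store

  def self.authenticate(login, password)
    USERS[login] == Digest::SHA1.hexdigest(password)
  end
end

puts User.authenticate('chris', 'secret') # true
puts User.authenticate('chris', 'wrong')  # false
```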

Now, if you want to do some functional testing (independent of the portal), Rails sets up a nice web service scaffold UI that lets you invoke your web service and examine the result. Just visit http://localhost:3000/authenticate/invoke to see all of that tasty goodness.

There you have it — a Ruby on Rails-based IDS for ALUI in fewer than 15 lines of code!

The synchronization side of the IDS was almost as simple to write, but after countless hours of debugging, I gave up on it and rewrote it in Java using the supported ALUI IDK. Although I could never quite put my finger on it, the problem seemed to have something to do with subtleties in how BEA’s XML parser was handling UTF-8 newlines. I’ll post the code here just in case anyone has an interest in trying to get it to work. Caveat: this code is untested, and currently it fails on the call to GetGroups because of the aforementioned problems.

In app/apis/synchronize_api.rb:

class SynchronizeApi < ActionWebService::API::Base
  api_method :Initialize,
             :expects => [{:NameValuePairs => [:string]}],
             :returns => [:integer]
  api_method :GetGroups, :returns => [[:string]]
  api_method :GetUsers,  :returns => [[:string]]
  api_method :GetMembers,
             :expects => [{:GroupID => :string}],
             :returns => [[:string]]
  api_method :Shutdown
end

In app/controllers/synchronize_controller.rb:

class SynchronizeController < ApplicationController
  web_service_dispatching_mode :direct
  wsdl_service_name 'Synchronize'
  web_service_scaffold :invoke

  def Initialize(nameValuePairs)
    session['initialized'] = true
    return 2
  end

  def GetGroups()
    if session['initialized']
      session['initialized'] = false
      groups = Group.find_all
      
      groupNames = Array.new
      for group in groups
        groupNames << "<SecureObject Name=\"#{group.name}\" AuthName=\"#{group.name}\" UniqueName=\"#{group.id}\"/>" 
      end 
      return groupNames
    else
      return nil
    end
  end
  
  def GetUsers()
    if session['initialized']
      session['initialized'] = false
      users = User.find_all
      
      userNames = Array.new
      for user in users
        userNames << "<SecureObject Name=\"#{user.login}\" AuthName=\"#{user.login}\" UniqueName=\"#{user.id}\"/>" 
      end
      
      return userNames
    else
      return nil
    end
  end

  def Shutdown()
    return nil
  end
end

Comments

Comments are listed in date ascending order (oldest first)

  • Nice post, Chris. This is the first time I’ve seen this done!

    Posted by: dmeyer on January 20, 2008 at 4:16 PM

  • Thank you, David. I just noticed that part of my sync code was chomped off in the blog post because WordPress was assuming that was actually an opening HTML/XML tag. I made the correction so the above code now accurately reflects what I was testing.

    Posted by: bucchere on January 21, 2008 at 1:16 PM