Categories
Business Featured Posts Personal

Lifehack: Free or Cheap SaaS Tools I Used to Get to Inbox Zero

This article originally appeared as a guest post on Scott Abel’s blog, The Content Wrangler.
 
Lately I’ve been really overwhelmed by my email inbox. This is not a new problem, but in the past I’ve been able to keep it under a hundred emails; recently it has grown to nearly 300 and it has really begun to interfere with getting things done.

So, last night, I took a good, hard look at what was really IN my inbox.

About 40% of the notes consisted of links sent to me by well-meaning people who thought I should check them out for various reasons. Another 30% were suggestions on how to make our products, marketing materials, services, etc. better from employees, customers, partners and other well-meaning people. Of the remaining 30%, about half were personal introductions to potential partners, customers, investors or other people with whom the authors thought I would want to connect. The other half were ‘to-do’ items of a business or personal nature, some sent by me to myself (ick!) or by other people.

I think maybe one or two messages actually consisted of correspondence — by that I mean something like the letters of yesteryear that we used to send through snail mail. It’s interesting to see how the bastardized email of today is so different from the purpose for which it was invented, but that’s the subject of a whole other article. However, while I’m digressing, it’s worth noting that

email functions brilliantly as a “better mailbox” than snail mail, but at the same time it performs really poorly at all the other functions that it’s used for today.

Email is not a contact management system, a customer relationship management (CRM) system, a link-sharing/social bookmarking tool, nor a support ticketing/issue tracking system. Not by a long shot.

The goal for me was to put all these messages that shouldn’t remain as emails into their proper home so I could deal with them appropriately while maintaining my sanity.

Now that I had performed some analytics, it was time to get organized! Here are the tools I used to clean up the mess: Basecamp, Highrise and Instapaper. Instapaper is free; however the 37signals products Basecamp and Highrise carry a small monthly fee.
[Note: They also have trial versions, but don’t expect to get too far with them since 37signals made the free versions just useful enough to show you their value without actually providing any.]
Getting from almost 300 emails to under 20 took about two hours and it was time well spent. I made one pass through my bloated inbox and took one of these actions, based on the type of email:

Email Type #1: “Hey, you should check out this link because. . . .”

Opened the link and used the “Read Later” bookmarklet from Instapaper to save the link for when I have time to read it. If the email containing the link had something interesting in it (besides the link), I copied that into the notes field for that link once I had saved it to Instapaper. If you care to share what you’re reading/bookmarking, you can also use a del.icio.us bookmarklet for this. I find Instapaper easier, though, because you can bookmark a link with one click. Del.icio.us forces you to enter tags and other metadata, which increases friction and slows down the process of bookmarking.

Bottom line: Bookmarking, per se, is a simple, rote task that shouldn’t take more than one click to accomplish.

Email Type #2: “Hey, you should make your product better by doing this. . . .”

Read the email. If there were specific action items associated with it, I created to-dos in Basecamp (under the project for the appropriate product) so that we can address them in a future release. We maintain a to-do list for each release of each product and another to-do list that serves as a backlog for each product. (Some agile tools refer to this as “the icebox.”) When we’re planning a release, we pop the most important things out of the backlog and move them into the current release to-do list.
If the to-dos were general, more thematic suggestions without specific action items attached, I copied the suggestions to one of our design writeboards in Basecamp. Then I replied to thank the sender for the feedback and deleted the email.

Bottom line: Product feedback and support tickets belong in Basecamp or a support ticketing system … or even a CRM, but they should never be kept in email as email is not the right tool for tracking the support ticket cycle.

Email Type #3: “Hey, you should sell to (or partner with) so-and-so. . . ”

Forward the email to Highrise’s email dropbox. Delete. Done. When I process my Highrise queue of messages, I can decide whether or not to pursue these leads on a case-by-case basis. Sales leads belong in your CRM system so that they can be tracked and managed. Email is the wrong tool for tracking the sales cycle. If you want to close sales deals and you’re using email as your CRM system, important communiqués are going to slip through the cracks and you’re going to lose business as a result.

Bottom line: E = mc² but Email != CRM.
 
Email Type #4: “Hey, Chris, meet so-and-so. Hey, so-and-so, meet Chris”

Reply All and start the process of scheduling a good time to talk. However, there’s a bit of a hole in this, because if I then delete the message, how do I ensure that so-and-so and I actually end up talking/meeting? If you have any suggestions about how you’ve solved this problem and what tools you’ve used (besides stinkin’ email), please let me know in the comments field associated with this blog post. I guess I could use our CRM for this, but that’s kind of like using a bazooka to kill flies.

Bottom line: I don’t know what the best tool for this is, but I do know that it’s most definitely not email.

Email Type #5: To-do item (not related to a product or a lead)

Put it on my to-do list. Right now, somewhat ironically, this is an email that I keep perpetually in draft status. To-do lists are a funny thing. I’ve used Remember the Milk, Google Spreadsheets/Documents and a number of other tools, but frankly, nothing beats a text file. By keeping it as a draft email in Gmail, I always have access to it from anywhere, but you can easily accomplish this with Google Docs too, or a number of other tools.

Bottom line: Your inbox should not be your to-do list. Use a text document, a to-do management tool or even a piece of paper and a pen. There’s something inherently gratifying about the physical, visceral action of scratching something off my to-do list with a big, fat marker (preferably a Sharpie). No tool I have encountered can come close to emulating that feeling of accomplishment.

Email Type #6: Personal Correspondence

Print it on nice paper, frame it and hang it on the wall! Seriously, these have gotten so rare, that I really don’t mind them at all.

Bottom line: This is what email was designed to do, so feel free to use it for that. Enjoy it, because your friends would probably rather update their Facebook status than send you an email. If they do send you emails (and there’s no to-do/action-item associated with them), then they’re a true friend. You should return the favor with a personal email of your own, or, if you really want to surprise them, drop a handwritten note to them in the postal mail, preferably with a designer stamp that reflects your sense of style.

There’s something really sexy about being retrosexual — try it, I guarantee you’ll get great results!

Conclusion: I didn’t quite reach Inbox Zero before my head hit the keyboard, but I am down to under 20 emails in my Inbox. Every time I hit “delete” I could feel my stress level, my blood pressure and my state of disorganization decreasing proportionately.

So, how many messages are in your inbox? What do you think of my approach? What tools and strategies do you use to manage all this email insanity? I’d love to hear your comments. Just don’t email them to me! :-)

Categories
dev2dev Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

How to Build your own Temple of Ego in Five Minutes

My wife is arguably my biggest fan, although my mom probably deserves “honorable mention.”

If you too are a fanboy or fangirl of someone, like, say Robert Scoble, you may want to know what he’s blogging about, pod/vod-casting about, Twittering about, etc. Someone put together this great aggregator called Robert Scoble’s Temple of Ego.

I thought, wouldn’t it be great if we all had our own Temples of Ego?

Back to my wife. Despite her self-professed fanhood, she’s been having trouble lately (well, okay, ever) keeping up with all my web activity. This all stems from the fact that Feedhaus, a site I built and launched last fall, was selected as a SXSW Web Award finalist and I’ve been blathering about this fact in every online setting imaginable, including here on dev2dev. (Please vote for us, BTW.)

So, with upwards of five different blogs, Twitter, Facebook, Google Reader shared items, flickr, YouTube and del.icio.us — keeping track of my enormous ego is a formidable task.

But now, with the power of the semantic web and a great tool called Yahoo! Pipes, you can create your own Temple of Ego in five minutes.

Here’s mine.

Simply go to Yahoo! Pipes, log in and create a pipe. In the “Fetch Feed” node at the top, simply enter the RSS or ATOM feed from whatever you want to include in your Temple of Ego. For example, I included all my blogs, my tweets (from Twitter), my Facebook posted items, my Google Reader Shared Items, my del.icio.us links, my flickr photos and my YouTube videos. That’s a good start.

Now, drag in a “Sort” node and sort by descending pubDate. This puts all the newest stuff first, known to geeks as “reverse chronological order.”

Finally, wire together the Fetch Feed node with the Sort node and then the Sort node with the Pipe Output node.
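If you’re curious what the pipe actually does under the hood, here’s a rough Java equivalent: fetch each feed, pool the entries and sort them by pubDate, newest first. It’s only a sketch built on the open-source ROME feed library; the com.rometools package names, the TempleOfEgo class and the feed URLs are my stand-ins, not anything Yahoo! Pipes requires.

import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.Date;
import java.util.List;

import com.rometools.rome.feed.synd.SyndEntry;
import com.rometools.rome.feed.synd.SyndFeed;
import com.rometools.rome.io.SyndFeedInput;
import com.rometools.rome.io.XmlReader;

public class TempleOfEgo {
    public static void main(String[] args) throws Exception {
        // Stand-in feed URLs; substitute your blogs, Twitter, flickr, etc.
        String[] feedUrls = {
            "http://example.com/blog/feed",
            "http://example.com/photos/feed"
        };

        // "Fetch Feed": pull every feed and pool all the entries
        List<SyndEntry> entries = new ArrayList<SyndEntry>();
        SyndFeedInput input = new SyndFeedInput();
        for (String url : feedUrls) {
            SyndFeed feed = input.build(new XmlReader(new URL(url)));
            entries.addAll(feed.getEntries());
        }

        // "Sort": descending pubDate, a.k.a. reverse chronological order
        Collections.sort(entries, new Comparator<SyndEntry>() {
            public int compare(SyndEntry a, SyndEntry b) {
                Date da = a.getPublishedDate();
                Date db = b.getPublishedDate();
                if (da == null && db == null) return 0;
                if (da == null) return 1;   // undated entries sink to the bottom
                if (db == null) return -1;
                return db.compareTo(da);    // newest first
            }
        });

        // "Pipe Output": dump date and title, newest first
        for (SyndEntry entry : entries) {
            System.out.println(entry.getPublishedDate() + "  " + entry.getTitle());
        }
    }
}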

Now, if you’re really egotistical, you can email all your friends a link to your Temple of Ego and encourage them to add the pipe’s outbound RSS to their feed reader of choice. (Here’s mine.)

So, what on earth does this have to do with ALUI?

ALI 6.5 — which the good folks at bdg are using to build social applications for Participate.08 — has some pretty slick RSS capabilities and some really beautiful user profile pages. Imagine if everyone’s profile page had the output from their Temple of Ego embedded in it. How powerful would that be? And, with ALI 6.5 and a little Yahoo! Pipes magic, setting this up in your ALI deployment will be a breeze.

Comments

Comments are listed in date ascending order (oldest first)

  • Thanks for helping me keep the title of “your biggest fan” — your Pipe implementation is working beautifully.

    Posted by: allisonbucchere on February 13, 2008 at 1:17 PM

  • this reminds me of a similar feature on google reader that lets you create a public feed based on your tags. so i could tag multiple feeds with the same tag. then if i make that tag public, it results in a feed that combines all feeds with that tag. pipes looks to be a little more powerful with respect to sorting, etc, but if you don’t need that, reader offers a little bit of the same. james

    Posted by: jbayer on February 13, 2008 at 5:17 PM

Categories
bdg dev2dev Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

ALI G6 on Ubuntu?

Some of you may be familiar with my rants on the bdg blog about how Linux just isn’t ready for the desktop. My opinion on that matter has largely changed with the release of Ubuntu 6.06 LTS (Dapper Drake), which I have been running with minimal hassle on my newish Gateway MP6954 laptop since last summer. It has a tasty coffee-colored UI (mmm), it NEVER crashes, it basically takes care of itself with updates and has equivalent — or better — software for pretty much everything you’d ever want to do with Windows or OSX at a great price: free.

Of course ALUI is only officially supported on two Linux platforms: RHEL and Suse. But Linux is Linux, right? Well, sort of. I had all sorts of “fun” getting ALUI running on Oracle on Fedora. However, with Ubuntu, getting Oracle and ALUI up was a breeze.

First off, unless you call yourself a DBA, you don’t want to mess around with a full-blown Oracle instance. Instead, just follow these easy steps to install something called Oracle XE. It has certain limitations — the most important of which is that you can’t create more than one database.

My first — and really my only — mistake during this setup process came next (and it’s related to this one-database issue). I tried to drop the XE default database (ORACLE_SID=XE) and run the crdb1_oracle_unix.sql script to create the PLUM10 database. This was a bad idea. I poked around on Google a bit and then thought, well, I don’t really need my own database. (Had I had this epiphany before starting down that path, I could have saved two hours and had ALUI up and running on Ubuntu in fewer than 30 minutes.) So, instead of running crdb1_oracle_unix.sql, just edit create_tables_oracle.sql and remove any reference to PLUMINDEX, then run the following commands on the XE database:

$sqlplus sys as sysdba
SQL>create user plumtree identified by password;
SQL>grant connect, resource, create view to plumtree;

This creates the plumtree user on the XE database, which gives ALUI its own schema, which, for our purposes, is just as good as having your own DB. Now you can basically just run the out-of-the-box scripts (keeping in mind the changes I made to create_tables_oracle.sql):

$sqlplus plumtree/password@XE
SQL>@create_tables_oracle.sql
SQL>@load_seed_info_oracle.sql
SQL>@stored_procs_oracle.sql
SQL>@postinst_oracle.sql

At this point, ALUI was ready to rock. I only ran into one small snag. One of the native search libraries complained about a missing LD_LIBRARY_PATH dependency on libstdc++. This was not a showstopper. I did the following:

$ln -s /usr/lib/libstdc++.so.6.0.7 /usr/lib/libstdc++-libc6.1-1.so.2

From there I configured the bundled Tomcat to host the portal and the image server and voilà, ALUI 6.0 SP1, in all its glory, was up and running on Ubuntu. (BTW, I would have used ALUI 6.1.0.1, but when I wrote this article, the RHEL and Suse versions weren’t available yet.)

Comments

Comments are listed in date ascending order (oldest first)

  • I’ve also successfully installed ALUI 6.1.1 (6.1MP1) on Ubuntu 7.04 (Server). Required one workaround for the LAX installer shared libraries problem (can’t find libc.so.6 etc):
    $cp AquaLogicInteraction_v6-1_MP1 AquaLogicInteraction_v6-1_MP1.bak
    $cat AquaLogicInteraction_v6-1_MP1.bak | sed "s/export LD_ASSUME_KERNEL/#xport LD_ASSUME_KERNEL/" > AquaLogicInteraction_v6-1_MP1

    Posted by: rdouglas on May 7, 2007 at 10:45 AM

  • hey Chris, appreciate the post! just wanted to give the hint that to change the plumindex on the create_tables script, you can do this in vi: :1,$s/PLUMINDEX/USERS/g

    Posted by: jbell on June 2, 2007 at 8:57 PM

  • Chris, nice post…I referenced this post while trying to get the new ALUI 6.1 quickstart installer to correctly intall the portal on windows xp. I’ve tried the installer on several xp machines but it is still failing…i think the error has to do with the way the installer is setting up the paths/environmental variables – when i run the diagnostics tool i get an invalid entry point…my paths look correct and i’ve tried re-installing multiple times on multiple machines…any ideas? Thanks.

    Posted by: phil- on September 10, 2007 at 8:41 AM

  • Well, after some troubleshooting I figured it out…here is the solution…I hope this is helpful to someone in the future…I needed to rename the icuuc30.dll in C:\WINXP\system32 to icuuc30_from_system32.dll and paste the icuuc30.dll from C:\bea\alui\common\inxight\3.7.6\bin\native into the C:\WINXP\system32 directory before the installation would work.

    I did try just moving the INXIGHT_PATH variable so that it is loaded on the PATH before C:\WINXP\system32 but the error still occurred. BTW – icuuc30.dll is a component of ICU (International Components for Unicode) version 3.0

    Posted by: phil- on September 12, 2007 at 11:47 AM

  • Thank you so much for this post, I had the same problem on XP. I’m just curious, how were you able to debug this problem? What pointed you to icuuc30.dll?

    Posted by: fhkoetje on December 4, 2007 at 9:31 AM

Categories
Featured Posts Software Development

Say hello world to comet

A couple of weekends ago I inflicted upon myself a quest to discover what all the buzz was about regarding Comet. What I discovered is that there is quite a bit of code out there to help you get started but the documentation around that code, and about Comet in general, is severely lacking. All I really wanted to find was a Comet-based Hello World, which as any developer knows, is the cornerstone of any programming language or methodology.

Since I couldn’t find one on Google, I ascertained that no Hello World exists for Comet and therefore I took it upon myself to write one.

For those of you who are new to Comet, the first thing you should do is read Alex Russell’s seminal blog post on the topic. At its core, Comet is really just a message bus for browser clients. In one browser, you can subscribe to a message and in another you can publish a message. When a message gets published, every browser that’s subscribed (almost) instantaneously receives it.

What? I thought clients (browsers) had to initiate communication per the HTTP spec. How does this work?

Under the covers, Comet implementations use a little-known feature of some web server implementations called continuations (or hanging GETs). I won’t go into details here, but at a high level, a continuation initiates from the browser (as all HTTP requests must do) and then, when it’s received by the server, the thread handling it basically goes to sleep until it gets a message or times out. When it times out, it wakes up and sends a response back to the browser asking for a new request. When the thread on the server receives a message, it wakes up and sends the message payload back to the browser (which also implies that it’s time to send a new request). Via this mechanism, HTTP is more or less “inverted” so that the server is essentially messaging the client instead of vice-versa.
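To make that a little more concrete, here’s a rough sketch of the hanging-GET pattern written against Jetty 6’s continuation API (the org.mortbay.util.ajax classes, the same Jetty code this post borrows later). The servlet name, the shared message queue and the 30-second timeout are my own simplifications for illustration; real cometd fans each message out to every subscribed client rather than handing it to whichever request grabs it first.

import java.io.IOException;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.util.ajax.Continuation;
import org.mortbay.util.ajax.ContinuationSupport;

public class HangingGetServlet extends HttpServlet {

    private final LinkedList<String> messages = new LinkedList<String>();
    private final List<Continuation> waiting = new ArrayList<Continuation>();

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        synchronized (this) {
            Continuation continuation = ContinuationSupport.getContinuation(request, this);
            if (messages.isEmpty() && continuation.isNew()) {
                // Park the request for up to 30 seconds. Under Jetty, suspend()
                // throws a special exception that frees the thread; the request
                // is replayed when resume() is called or the timeout expires.
                waiting.add(continuation);
                continuation.suspend(30000);
            }
            // Replayed (or never suspended): hand back whatever is waiting.
            // An empty body just tells the browser to issue a fresh request.
            waiting.remove(continuation);
            String payload = messages.isEmpty() ? "" : messages.removeFirst();
            response.getWriter().write(payload);
        }
    }

    // Called by the publishing side; wakes up every parked request.
    public synchronized void publish(String message) {
        messages.add(message);
        for (Continuation continuation : waiting) {
            continuation.resume();
        }
    }
}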

A few questions immediately pop into mind, so let’s just deal with them right now:

Why is this better than Ajax alone?

It boils down to latency and users’ tolerance for it. In the worst case, traditional web applications force entire page refreshes. Ajax applications are a little better, because they can refresh smaller parts of a page in response to users’ actions, but the upshot is that the users are still waiting for responses, right? A Comet-driven application essentially removes the user from the picture. Instead of the user asking for fresh data, the server just sends it along as soon as it changes, giving the application more of a “realtime” feel and removing virtually all perceived latency.

So are we back to client server again?

Sort of. Comet gives you the benefit of server-to-client messaging without the deployment issues associated with fat clients.

Can’t applets do this?

Of course they can. But who wants to download an applet when some lightweight Javascript will do the trick?

Why the name Comet?

Well, clearly it’s a pun on Ajax. But it’s not the only name for this sort of technology. There’s something out there called pushlets which claims to do the same thing as Comet, but which didn’t seem to catch on, I guess.

Back to the whole point of this post: my hello world. I pieced this example together using dojo.io.cometd and a recent version of Tomcat into which I dropped the relevant parts of Jetty to provide support for continuations.

It’s finally time to say “hello world” to my hello world.

First off, download one of the more recent dojo builds that contains support for dojo.io.cometd. Drop dojo.js on your Java-based web/application server. (I used Tomcat, but you can use JBoss, Jetty, Weblogic, Websphere or any other web server with support for servlets.) Add this page in the root of your application:

<script src="js/dojo.js" type="text/javascript"></script>
<script type="text/javascript">//<![CDATA[
  dojo.require("dojo.io.cometd");
  cometd.init({}, "cometd");
  cometd.subscribe("/hello/world", false, "publishHandler");
  publishHandler = function(msg) { alert(msg.data.test); }
// ]]></script>
<input type="button" value="Click Me!" />

Without a cometd-enabled web server behind it, the above page does absolutely nothing.

So, to make this work, I needed to find a Java-based web/application server with support for continuations. I’m sure there are many ways to skin this cat, but I picked Jetty. You can get Jetty source and binaries if you’d like to follow along. Since all of our customers who embrace open source are lightyears more comfortable with Tomcat than they are with any other open source web/application server (ahem . . . Jetty), I decided to embed Jetty in Tomcat rather than run everything on Jetty alone. It’s all just Java, right?

Here I ran into a few snags. The maven build file for Jetty didn’t work for me, so I dropped everything in org.mortbay.cometd and org.mortbay.cometd.filter into my Eclipse project and just integrated it with the ant build.xml I was already using to build my web application. Here’s the relevant portion of my build.xml:

<javac srcdir="${srcdir}" destdir="${classdir}" debug="true" debuglevel="lines,vars,source">
  <classpath>
    <pathelement location="${jetty.home}/lib/jetty-util-6.0.1.jar"/>
    <pathelement location="${jetty.home}/lib/servlet-api-2.5-6.0.1.jar"/>
  </classpath>
</javac>

Once Jetty was essentially hacked into Tomcat, the rest was smooth sailing. I just wrote a JSP that dropped a “goodbye world” message onto the same old queue that I used in the last example, but I did so using server-side code. Here’s the JSP:

<%@page import="org.mortbay.cometd.*"%>
<%@page import="java.util.*"%>
<%
Bayeux b = (Bayeux)getServletContext().getAttribute(CometdServlet.ORG_MORTBAY_BAYEUX);
Channel c = b.getChannel("/hello/world");
Map message = new HashMap();
message.put("test", "goodbye world");
c.publish(message, b.newClient());
%>

This page does not produce any output of its own; rather, it just drops the “goodbye world” message on the queue. When you hit this page in a browser, any other browser listening to the /hello/world queue will get the message. The above JSP, along with the dojo page you created in the first step, should be enough to wire together two different flavors of Comet messaging: browser to server to browser and just plain old server to browser.

I’m curious 1) if this was helpful and 2) if you’d like to share what you’re doing with Comet with me (and please don’t say cleaning your kitchen).

Categories
dev2dev Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

How to Integrate PKI Certs or CAC Cards with ALI

In his 1947 speech to the House of Commons, Winston Churchill quipped, “It has been said that democracy is the worst form of government except all those other forms that have been tried.”

I’m not nearly as pithy as Sir Winston (nor as portly — at least not yet), but I feel the same way about passwords being used to protect web sites or other enterprise systems. In many ways, they’re the worst form of security out there except for everything else that’s been tried. Part of this has something to do with what I’ve coined Bucchere’s Axiom of Strong Passwords, which is a derivative of Murphy’s Law (which states that whatever can go wrong will). It goes something like this: the stronger a password is, the easier it is to hack. Why? Because if you force users into using a strong password, they’re more likely to write it down. And writing a password down defeats its purpose entirely.

The bottom line: passwords suck. But they’ve become the de-facto standard because they’re easier and cheaper than everything else we’ve tried, including PKI certs, biometrics (e.g. fingerprints, retina scans), CAC cards, RSA SecurID tokens, etc. (Even for a cert-based authentication scheme, you still need a key to generate your cert, which is essentially just a glorified password.)

Just because passwords are the de-facto standard for authentication does not mean that we should quit trying to use other, ostensibly better forms of security, especially if 1) you’re protecting particularly sensitive data, 2) you’re open to the internet and 3) you have the resources (e.g. $$$) to invest in more robust forms of security. And I’m not talking about just buying an SSL cert from Verisign and continuing to have your users write down their passwords on post-it notes attached to their monitors. (Note to self: remove the post it note on your monitor with your password on it when you get back to the office.) I’m talking about using some sort of “soft” cert (e.g. PKI) or “hard” cert (e.g. CAC) to protect your system and your data.

Now if your system is ALI (formerly known as Plumtree Foundation or Plumtree Portal), you’re in luck, because the eggheads at what was once known as Plumtree have made this particularly easy to do. In fact, the hardest part is just getting the user’s identity out of the cert (see below the code snippet for some suggestions). Once you’ve done that, just drop a class into a jar that implements the ISSOIntegration interface. (For those of you running on Windows, please don’t ask me to “port” this to C# — just take the Java code, drop it into Visual Studio.NET and then fix the syntax errors.)

But wait, SSO stands for “Single Sign On,” right? And what you’re really doing here is passing credentials from a cert to Plumtree and that has little or nothing to do with SSO. That’s a true statement. The subtlety here is that ISSOIntegration, while it contains the letters SSO in its name, can be used for pretty much any form of authentication, whether you are using an SSO product or not.

CertIntegration.java

package com.bdgportal.alui.auth;

import com.plumtree.openfoundation.util.*;
import com.plumtree.openfoundation.web.*;
import com.plumtree.portaluiinfrastructure.sso.*;

public class CertIntegration implements ISSOIntegration {
 
   private XPHashtable settings;
 
   public CertIntegration() {
   }

   public boolean Initialize(XPHashtable settings) {
     this.settings = settings;
     //String exampleSetting = ((XPArrayList)settings.GetElement("SettingName")).GetElement(0);
     return true; // indicate that initialization succeeded
   }

   public String GetSSOProductName() {
     return "My Favorite Cert Integration";
   }

   /**
    * Gets the username from the cert and returns it to Plumtree. This will fail if the username
    * does not have a matching account in Plumtree. This can be a Plumtree database user or a user
    * imported from an authentication source, in which case you need to include the auth source
    * prefix in the username, e.g. "MyAuthSource/cbucchere"
    *
    * @param request The wrapped HttpServletRequest from the web container.
    * @return The object passed back to Plumtree for authentication with the portal.
    */
   public SSOLoginInfo GetLoginInfo(IXPRequest request) {
     String userName = ((XPRequest)request).GetUnderlyingObject().getUserPrincipal().getName();
     return new SSOLoginInfo(userName);
   }

   public String[] GetSecureCookies() {
     return null;
   }

   public String[] GetSecureHeaders() {
     return null;
   }

   public boolean OnLogout(IXPResponse response, String returnURI) {
     return false;
   }   
}

The hardest part about all this, as I said above, is getting the user name out of the PKI cert/CAC card/retina scan/etc. In the example above, I made MANY assumptions. First, I assumed that your portal is running on WebLogic, which understands and correctly implements Principal, which is a Java Servlet’s way of knowing who’s using it. WebLogic lets you plug custom implementations of the Principal interface into its security infrastructure. All you need to do is implement java.security.Principal (it’s an interface) and then walk through a bunch of magical configuration steps to enable it.
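Since java.security.Principal is just an interface, the custom implementation itself is tiny; here’s a minimal sketch (the CertPrincipal class name and constructor are mine, and none of the WebLogic-specific SPI plumbing or configuration steps are shown):

import java.io.Serializable;
import java.security.Principal;

// Minimal illustrative Principal; this is just the "who is the user" holder.
public class CertPrincipal implements Principal, Serializable {

    private final String name;

    public CertPrincipal(String name) {
        // e.g. the CN pulled out of the client cert's subject DN
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public boolean equals(Object other) {
        return other instanceof CertPrincipal
            && name.equals(((CertPrincipal) other).getName());
    }

    public int hashCode() {
        return name.hashCode();
    }

    public String toString() {
        return name;
    }
}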

Speaking of magical configuration, I neglected to mention that there are two small configuration steps that you need to perform in order to get your shiny new ISSOIntegration working in ALI. In portalconfig.xml, you need to set the value of the SSOVendor setting to 100 (or greater) and then set CustomSSOClass to the fully qualified name of the class you wrote that implements ISSOIntegration. For our Java example above, that would be com.bdgportal.alui.auth.CertIntegration and for .NET, it would be the name of your C# class.

Speaking of .NET . . . as many of you know, it is an entirely different animal with its own way of provisioning security to web applications (e.g. System.Web.Security).

Regardless of your platform, you need to get the user name out of whatever authentication method you’re using. Once you’ve accomplished that, just drop the code above into your project and replace the getUserPrincipal().getName() with whatever mechanism you can find for getting your users’ names.
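As one example of such a mechanism: in a standard servlet container that requests a client cert, the Servlet spec exposes the certificate chain through the javax.servlet.request.X509Certificate request attribute, and you can pull the CN out of the subject DN. The helper below is just a sketch I put together to illustrate the idea (the class name and the CN convention are assumptions; adjust the DN parsing to however your certs map to portal user names):

import java.security.cert.X509Certificate;

import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;
import javax.servlet.http.HttpServletRequest;

public class CertUserName {

    // Returns the CN from the client cert's subject DN, or null if no cert
    // (or no CN) is present.
    public static String fromRequest(HttpServletRequest request) throws Exception {
        X509Certificate[] certs = (X509Certificate[])
            request.getAttribute("javax.servlet.request.X509Certificate");
        if (certs == null || certs.length == 0) {
            return null;
        }
        // Subject DN looks something like "CN=cbucchere, OU=bdg, O=Example, C=US"
        String dn = certs[0].getSubjectX500Principal().getName();
        for (Rdn rdn : new LdapName(dn).getRdns()) {
            if ("CN".equalsIgnoreCase(rdn.getType())) {
                return rdn.getValue().toString();
            }
        }
        return null;
    }
}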

Assuming you trust your authentication mechanism to return the appropriate user name, you’ll have users getting logged into the portal via pretty much however you would like — CAC, PKI, biometrics, etc.

If only implementing a democracy were this easy . . . .

Comments

Comments are listed in date ascending order (oldest first)

  • This is a wonderful article. However, I’ve researched for a long time but still cannot figure out what to do with BEA WebLogic to use Custom Identity Assertion. I wish this article had a link to documentation on how to “do the magical configuration steps”.

    Posted by: minh.tran on January 9, 2007 at 9:04 AM

  • This article was intended to be application server independent, but if you’re using BEA WebLogic, there’s a great article on how to set up custom identity providers which should work with this ALUI SSO solution.

    Posted by: bucchere on January 10, 2007 at 6:44 PM

  • NOTE: 1. the user’s password in the portal must be empty string. 2. jar should be put in portal.war and lib/java.

    Posted by: luotuoci on April 28, 2007 at 8:31 PM

Categories
dev2dev Featured Posts Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

Searching Intrinsic ALI Properties Using the PRC

There’s a problem with the IDK PRC API for search that’s tripped up users in the dev2dev forums and that stymied me for the first time today while coding up a custom search application for one of our customers.

The problem is that there’s a hardcoded limitation in the IDK that prevents you from calling PortalField.forID if the passed in object ID is less than 100. This prevents you from searching on some really useful properties, including e-mail address! For the life of me, I can’t figure out why this limitation was imposed.

The good news is that I found a workaround. It involves a quick two-file IDK patch that entails subclassing two classes. The only catch is that you need to put the child classes in the same package as the IDK (because the parent classes have package-private constructors).

Here’s the source code that does the trick.

com.plumtree.remote.prc.search.IntrinsicPortalField.java:

package com.plumtree.remote.prc.search;

import com.plumtree.remote.prc.search.PortalField;
import com.plumtree.remote.prc.search.xp.*;

public class IntrinsicPortalField extends PortalField {
  private IntrinsicPortalField(IntrinsicXPPortalField xpField) {
    super(xpField);
  }

  public static final IntrinsicPortalField EMAIL_ADDRESS;

  static {
    EMAIL_ADDRESS = new IntrinsicPortalField(IntrinsicXPPortalField.forID(26));
  }
}

com.plumtree.remote.prc.search.xp.IntrinsicXPPortalField.java:

package com.plumtree.remote.prc.search.xp;

import com.plumtree.openfoundation.util.XPIllegalArgumentException;

public class IntrinsicXPPortalField extends XPPortalField {

  private IntrinsicXPPortalField(String name, boolean isSearchable, boolean isRetrievable) {
    super(name, isSearchable, isRetrievable);
  }

  public static IntrinsicXPPortalField forID(int propertyId) throws XPIllegalArgumentException {
    return new IntrinsicXPPortalField("ptportal.propertyid." + propertyId, true, true);
  }
}

I used e-mail address (ID = 26) as an example, but you can put any properties in there that you want. Then, when you’re setting up your search filter, just use IntrinsicPortalField instead of PortalField. For example:

IFilterClause filter = searchFactory.createOrFilterClause();
filter.addStatement(IntrinsicPortalField.EMAIL_ADDRESS, Operator.Contains, searchQuery);

Since IntrinsicPortalField is a subclass of PortalField, the PRC has no problem with it. I’ve tested this with e-mail address and it works flawlessly. I’m sure other properties will work perfectly well too.

Enjoy!

Comments

Comments are listed in date ascending order (oldest first)

  • Thank you, Chris 🙂 Now that Chris has kindly posted a workaround, any possibility of having this put into an IDK hotfix?

    Posted by: ewwhitley on August 27, 2006 at 2:39 PM

  • I second Eric’s opinion. Can you guys just remove the artificial restriction of 100 from the IDK in the next release? Seems to work fine without it and it would obviate the need for my silly patch.

    Posted by: bucchere on August 30, 2006 at 2:31 PM

  • Here is an e-mail I received from a person who attempted to use my patch:

     

    I found your blog because I am having the same problem you describe with searching Intrinsic properties. However, I am now having trouble actually “patching” the IDK. How exactly would I go about repackaging everything with these new java files included? Thank you very much for your time and help.

    Here was my response:

    Thanks for your note. Assuming that you’re building a Java web application, all you need to do is compile the patch along with all your other application code. You can put the class files for the patch in WEB-INF/classes or you can make a jar (e.g. myapp.jar) and put the class files for the IDK patch there and then drop the jar in WEB-INF/lib. You can then put everything into a .war or .ear (or not).

    The magic of the Java classloader is that all the .class files in WEB-INF/classes and all your .class files inside jars in WEB-INF/lib all end up loaded into the same memory. That means that if you have two class files in two different jars, but they’re both in the com.plumtree.remote.portlet package (meaning you have the line package com.plumtree.remote.portlet; at the top of your source files and your .java files live in com/plumtree/remote/portlet), then they’ll act like they’re in the same package. This means that you’ll have access to all package private member methods, which the patch needs in order to compile.

    Posted by: bucchere on August 30, 2006 at 7:23 PM

  • Hi mate, I think this is very helpfull but I was wondering where can I find corresponding ID’s for all standart and custom user properties, when I’m using

    forID(26) method? Thanks in advance!

    Posted by: ggeorgiev on September 19, 2006 at 1:16 AM

  • This gets you the standard (intrinsic) ones:
    select objectid, name from plumdbuser.ptproperties where objectid < 200 order by objectid;
    1 Name
    2 Description
    3 Object Created
    4 Object Last Modified
    5 Open Document URL
    6 Content Type ID
    7 Plumtree Document Image UUID
    8 Content Language
    9 Content Tag
    26 Email Address
    50 Full Text Content
    60 Document Submit Content Source
    61 Document Upload Repository Server
    62 Document Upload DocID
    71 Related Communities
    72 Related Folders
    73 Related Portlets
    74 Related Experts
    75 Related Content Managers
    80 Snapshot Query Reference
    101 Keywords
    102 Subject
    103 Author
    104 Created
    105 Document Title
    106 URL
    107 Category
    111 Comments
    112 Modified
    152 Phone Number
    153 Title
    154 Department
    155 Manager
    156 Company
    157 Address
    158 Postal Code
    159 State or Province
    160 Country
    161 Employee ID
    162 City
    163 Address 2
    

    For the custom ones, change the where clause to >= 200.

    Posted by: bucchere on September 23, 2006 at 6:53 PM

Categories
Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

More adventures in desktop linux

Everything I do in linux seems to be an adventure. That couldn’t be more true for Oracle 10g. After fighting with the installer, monkeying around with the ALUI database scripts and editing the start-up script, I got the database to start, but it would only shut down immediately afterward. Drats!

This morning, I deleted every trace of Oracle 10g from my system and attempted an install of Oracle 9i. The adventure begins . . . .

First off, Oracle 9i requires JRE 1.3.1, which Sun is planning to retire very soon (as soon as Java 6 comes out). Damn, I remember working on Java 1.0 — am I getting old?

JRE 1.3.1 doesn’t install cleanly on Fedora Core 5. Then again, does anything? Java is closed-source — meaning you can’t build it yourself — so once again I was in a linux bind. When I tried to unpack the 1.3.1 JRE I downloaded from Sun, it gave me this:

tail: `-1' option is obsolete; use `-n 1' since this will be removed in the future
Unpacking...
tail: cannot open `+486' for reading: No such file or directory
Checksumming...
1 The download file appears to be corrupted. [etc]

I downloaded the file again a few times to make sure it really wasn’t corrupted. Of course it wasn’t.

Then I found this great blog post that explained exactly what was going wrong and offered an easy fix. Easy, that is, if you’re a developer. (I’m becoming more and more convinced every day that linux is not at all poised to take over the desktop unless the entire earth’s population goes out and gets a CS degree.)

Alas, the antiquated JRE was ready to roll and now it was time to run the Oracle installer. Of course, that didn’t run either. Instead, it spat out JRE errors:

Error occurred during initialization of VM
Unable to load native library: /tmp/OraInstall2006-08-19_11-59-35AM/jre/lib/i386/libjava.so: symbol __libc_wait, version GLIBC_2.0 not defined in file libc.so.6 with link time reference

Nice. Back to Google.

The fix this time came (ironically) from IBM’s web site. No problem, just make a change to libcwait.c, recompile it as a shared object and then set the LD_PRELOAD variable. I’m sure my mom could do that, right?

Then of course I had the standard “this only works under X” problem, but I had already figured that one out. Here’s the error:

Xlib: connection to ":0.0" refused by server
Xlib: No protocol specified
Exception in thread "main" java.lang.InternalError: Can't connect to X11 window server using ':0.0' as the value of the DISPLAY variable.
at sun.awt.X11GraphicsEnvironment.initDisplay(Native Method)
at sun.awt.X11GraphicsEnvironment.(X11GraphicsEnvironment.java:59)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:120)
at java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(GraphicsEnvironment.java:58)
at java.awt.Window.(Window.java:188)
at java.awt.Frame.(Frame.java:315)
at java.awt.Frame.(Frame.java:262)
at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:593)

And the fix (as root):

%>xhost +
%>xterm &
%>su - oracle
%>/tmp/Disk1/runInstaller &

And finally, the Oracle 9i installer launched. Now of course it’s totally hung at 18% on “Linking Oracle Net Required Support Files” and it’s been stuck there since before I started writing this blog post.

Gotta love it.

Categories
Software Development

Adventures in desktop linux

I’ve had such a good experience using Fedora on several of bdg’s enterprise systems (SugarCRM, Subversion, Bugzilla, Vetrics, Connotea, etc.) that I thought I would give desktop linux a shot.

What a mistake.

Actually, it was a good learning experience. But still, a mistake.

First I downloaded Fedora Core 5 (Bordeaux) using BitTorrent. My first problem was mastering the ISO files to CD. Windows has no native support for this (surprise) and for the life of me I couldn’t find a free product without filesize restrictions or other issues. Finally I remembered that I had purchased a license for Sateira DropToCD some time ago, so I attempted to use that miserable excuse for a program. I tried to burn the five CDs at 24x (~10 minutes each) and my computer would not recognize them. The CD-Rs, once burned, were useless, yet Windows did not show any data on them nor a volume label.

I did a little Googling and then remembered that I needed to burn at 4x in order to get Fedora Core 4 (and Solaris x86 — another mistake) to work. So I tried that (at ~30 minutes per CD) and again, total failure. Finally I used a real operating system, OS X, running on my wife’s Mac laptop, to burn the ISOs. (Of course OS X has built-in support for ISO burning that works like a charm.)

After all this nonsense, I was finally ready to install FC5. So I backed up all my company files, music, photos and other stuff to my Western Digital 250 Gb external firewire drive and off I went.

I must say, there are some nice things about FC5. Unfortunately, it’s a short list:

  1. The installer, Anaconda, is awesome.
  2. The graphic design is beautiful.
  3. Wireless networking just works.
  4. Firewire just works.

So I was off to a running start. But here is where my problems began. At the top of my shit list is CodeWeavers’ CrossOver Office. What a complete piece of garbage. From all their press releases, I was led to believe that they actually supported some useful Windows programs such as Office and, more recently, iTunes on various flavors of Linux. Don’t believe what you read. It’s all lies. Damn lies.

I started with Office 2003. That just failed utterly and completely. I wasn’t about to go back to Office XP, so I gave up on running M$ Office. FC5 comes with OpenOffice, which claims to support Word, Excel, etc. so I figured I would just use that.

Next I moved to iTunes. First off, installing it is a series of hacks and kludges. Upon following these ridiculous instructions, iTunes actually launched! But:

  1. All my playlists were gone, even though I repeatedly pointed iTunes to my backed up iTunes Music folder.
  2. The best feature in iTunes, search, didn’t work — the search box was grayed out.
  3. A basic feature — scrolling — was inconsistent and buggy.
  4. It crashed about 10 times before I completely gave up on it.

So now I had limited options. I decided that I would give up on purchasing DRM music through the iTunes store (and save about $500/yr in the process) and switch to Banshee, which claimed to be everything that iTunes was minus the music store.

Okay, so music is just music. But what about e-mail? I’m totally addicted to Outlook — the proof is my 1.5+ Gb .pst saved mail file. Without CrossOver Office running Outlook, I had to fall back on Evolution or Thunderbird. Access to saved mail, however, was a showstopper. To use my gi-normous .pst file in a non-M$ program, I needed to convert it to MBOX format. That proved impossible. Or at least not possible within my own personal constraints of time, patience and most importantly, sanity.

First I tried Thunderbird, because I remembered using its Outlook .pst conversion program. After struggling for a long time with compilation issues, linking issues/missing dependencies (including the wrong version of libstdc++) and segfaults, I finally got the ol’ T-bird working on FC5. But to my disbelief, the option to import a .pst was missing. After some Googling, I found out that Mozilla’s harebrained implementation actually relies on MAPI, so you need to have Outlook installed and configured on the machine with Thunderbird in order to convert from .pst to MBOX.

I tried various other programs, including a useless dungheap called MailNavigator. I also tried hand-compiling a C program called libpst that was supposed to work and didn’t. I was beginning to think that my .pst file had been corrupted, but that was impossible because it was running fine in Outlook.

After all this nonsense, I used my wife’s laptop to download a DOS boot disk with fdisk, deleted all my partition info, and now here I am back on Windows XP.

Lessons learned:

  1. Linux is not ready for the desktop, even if you’re a hardcore developer.
  2. Don’t believe anything CodeWeavers say about CrossOver Office. It just doesn’t work. Period.
  3. Windows, for all its faults, is actually not that bad. I can’t believe I just said that, but it’s true. 😉