Categories
bdg dev2dev Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

WLP + Adrenaline = ALI?

I recall sitting in a meeting in 1998 where we were discussing how to aggregate portlet content into a portal page. We talked a lot about iframes but couldn’t consider them as a serious integration option because of security, scalability/performance, caching and portal-to-portlet communication. Instead, we spent the next year building and testing the HTTPGadgetProvider, which later came to be called the “(Massively) Parallel Portal Engine.” (The term “Massively” was later dropped and I believe the name “Parallel Portal Engine” or PPE for short finally stuck.) I won’t go into details about how the PPE works, but if you’re interested, you can check out this great page in edocs that sums it up nicely.

So anyway, iframes are certainly a reasonable way to build a portal in a day. But in terms of building a robust enterprise portal that can actually withstand the demands of more than, say, ten users, and that will pass even the most rudimentary security evaluation, iframes are complete nonsense.

So, today, during my lunch break, I attended Peter Laird’s Webinar, which he advertised in his nascent blog. It was all about enterprise mashups, a topic that very much intrigues me. (Recall that PTMingle, my winning entry in the 2005 Plumtree Odyssey “Booth of Pain” coding competition, was a mashup between Hypergraph, Google Maps, del.icio.us and Plumtree User Profiling.)

Imagine my surprise when Peter described how you can mash up Google “Gadgets” and other resources available via URLs using Adrenaline, a “new” technology from the WLP team based on, of all things, iframes. It was like entering a worm hole and being transported back to 1998. (I was single again, I had no kids, I was thinner and I had more hair on my head . . . and less on my back.) But the weird thing about this parallel universe is that BEA engineers were telling me that iframes were a great way to mash up enterprise web content and that intranets all over the world could benefit from this revolutionary concept. Intranets? You mean the things that everybody replaced with portals in the last millennium? Iframes? I must have been dreaming . . . .

When I finally came back to my senses, a few things occurred to me.

First of all, it’s 2007. Portals are a thing of the past. For some of us, that will be a hard pill to swallow. But let’s face it, innovators have moved on to blogging, wikis, tagging/folksonomies and lots of other nice web 2.0 sites that all have rounded corners. The bleeding edge folks have decided that many is smarter than any. The rest of the world will catch up soon.

Secondly, if you are still building a portal or composite application of any flavor, iframes are not a viable solution. They fall short in the following ways:

Portal-to-Portlet Communication

Say you want to send information (like the name of the current user) down to a portlet running in an iframe. Hmmm, the request for an iframe comes from the browser, not from the portal. So, if anything needs to be passed into the iframe, I guess you have to put it in the URL in the request for the iframe. That’s great, but that URL is now visible in the page’s source. So a simple “Hello [your name]” portlet, where the portlet gets the name from the portal, is doable. But what about passing a password? That information would need to go first to the browser and then back to the remote tier, which, from a security standpoint, is a complete showstopper.
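
To make that concrete, here’s roughly what an iframe “portlet” looks like in the portal page’s source (the URL and parameters are hypothetical). Everything the portal “passes” to the portlet rides on a URL that anyone can read with View Source:

<!-- Hypothetical iframe portlet: the portal emits this tag, but the
     browser (not the portal) requests the URL, parameters and all. -->
<iframe src="https://portlet.example.com/hello?user=cbucchere&role=admin"
        width="100%" height="300"></iframe>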

Security

Let’s talk a little more about security. Since you’re using an iframe, the requests aren’t proxied by the portal. Instead, a page of HTML gets sent from the portal to the browser and then the browser turns around and makes requests to all the iframes on that page. Since the portal isn’t serving as a proxy, it can’t control what you do and don’t have rights to see, so security is completely thrown out the window. (Or should I say, thrown out of the iframe?) Moreover, in an enterprise deployment, the portal usually sits in the DMZ and proxies requests out to bits and pieces of internal systems in order to surface them for extranet users. If you’re using iframes, every bit of content needs to be visible from an end user’s browser. So what’s to stop an end user from scraping the URL out of a portal page and hitting a portlet directly? Nothing! (If I understand what I’m reading correctly, the WLP team is calling this a feature. I would call it a severe security risk.)

Scalability/Performance

Yes, this approach will work for Google Gadgets. But Google has more money than pretty much everyone. They can afford to spend frivolously on anything, including hardware. However, the rest of the world actually cares about the kind of load you put on a system when you create a “mashup.” A page consisting of five iframes is like five users hitting the site with five separate requests, separate sessions and separate little “browsers.” If any of the iframes forces a full-page refresh, or if the user does the unthinkable and, say, moves to another page, every request is reissued and the mashup content is regenerated. This simply does not scale beyond a few users, unless you have as much money and as much hardware as Google does.

Caching

A properly designed portal or content aggregation engine will only issue requests to portlets when necessary. In other words, each remote portlet will only get a request if it needs to be loaded because the portal doesn’t have a cached entry. Unfortunately, you can’t do this with iframes because the portal doesn’t even know they exist. (Remember, all requests for iframe content go directly from the browser to the remote content, bypassing the portal entirely.)
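
For contrast, here’s a minimal sketch in Java of the kind of thing a proxying portal can do that an iframe-based page can’t. The class and method names are made up for illustration; this is not the actual PPE implementation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PortletMarkupCache {

   private static class Entry {
      final String html;
      final long expiresAt;

      Entry(String html, long ttlMillis) {
         this.html = html;
         this.expiresAt = System.currentTimeMillis() + ttlMillis;
      }
   }

   private final Map<String, Entry> cache = new ConcurrentHashMap<String, Entry>();

   // Return cached markup while it's fresh; only on a miss (or an expired
   // entry) does the portal issue a server-to-server request to the portlet.
   public String getMarkup(String portletUrl, long ttlMillis) {
      Entry entry = cache.get(portletUrl);
      if (entry != null && System.currentTimeMillis() < entry.expiresAt) {
         return entry.html;
      }
      String html = fetchFromRemoteTier(portletUrl);
      cache.put(portletUrl, new Entry(html, ttlMillis));
      return html;
   }

   private String fetchFromRemoteTier(String url) {
      // The server-side HTTP request to the remote portlet goes here.
      return "";
   }
}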

What baffles me is why a company would acquire another company with a revolutionary technology (the PPE) and then start from ground zero and build a technology that does the same thing but without a portal-to-portlet communication model (preferences), security, scalability or caching. If consumers weren’t already confused, now they most certainly are.

As technologists, I hope you can see through the hype about Adrenaline and consider a product that actually allows you to mash up web content in a scalable and secure way and has been doing so since 1999. It’s called AquaLogic Interaction and it’s sold by a company we all know and love called BEA.

Comments

Comments are listed in date ascending order (oldest first)

  • I just discovered that the BID/AquaLogic (formerly Plumtree, Fuego, Flashline, etc.) folks are having another webinar, entitled “Harnessing Enterprise Mash-ups with Security and Control.” This webinar (I hope) will show:
    1. how ALI has been handling mashups since before “mashup” was even a buzzword and
    2. how Project Runner enables next generation mashups that allow you to invoke back-end applications and provision security, branding, SSO, etc. without actually funneling everything through the portal.

    If you were at today’s webinar and you’re now wondering how to do mashups with more robustness and security, then I hope you’ll attend this webinar. After all, offering customers different integration options when creating their mashups is just the responsible thing to do.

    Posted by: bucchere on January 10, 2007 at 7:31 PM

  • I’d like to add a couple points of clarity from BID product management. First of all, we’re happy to have passionate developers, but I fear this post may give the wrong impression about some of BEA’s technology and plans.

    WLP Adrenaline, ALUI, and project Runner are all complementary technologies that have a very exciting future when applied to problems such as Enterprise Mashups. You’ll be hearing more about them from BEA over the coming months through various venues, including Webinars targeted at WLP-specific use cases (such as Peter’s excellent talk) and ALUI use cases (including tomorrow’s Runner Webinar). There will also be the usual blogging and other activities.

    Just as WLP and ALUI product teams are aligned, these different technologies are aligned. Adrenaline offers WLP customers a way to extend their reach in fundamentally new ways, and Peter will expound on some technical subtleties to address some of Chris’ concerns. Runner, too, is very exciting, enabling a completely different set of use cases. As the details unfold we’ll demonstrate how well aligned these technologies are — just wait until you see them working together!

    – David Meyer

    Posted by: dmeyer on January 10, 2007 at 10:41 PM

  • Just for those that don’t know about Adrenaline, here’s an article introducing Adrenaline.

    Posted by: jonmountjoy on January 11, 2007 at 12:19 PM

  • Chris,

    As David writes, BEA is moving ahead with multiple approaches to address the enterprise mashup space. My webinar covered the approach WLP is taking, and in no way implied that ALUI is not also a viable player in this space. We offer our customers a choice of products, and different products make sense to different customers.

    As for the specific issues you raised:

    ** Technical Reply

    Good technical points, but I think you overemphasized the role of iframes within WLP. Let me cover the two places we showed the use of iframes:

    Use Case 1: injecting a portlet into a legacy webapp

    Demo: An iframe was used in the demo to inject a portlet into a legacy static html page with almost no modification to that page (one line change).

    WLP does support an alternative approach – an Ajax streamed portlet. I simply did not have time to demo it. Also, this is not a portal use case for including external non-portal content into a portal; instead it is the inverse, which is to publish existing portal content into legacy web applications. It was intended to show a very inexpensive way to energize a dated application until it is rationalized into a portal. The focus here is on minimizing the cost of supporting legacy, while building portlets in transit to a portal solution.

    Use Case 2: WLP as a Mashup composition framework

    Demo: Iframes were used to pull in non-WSRP capable components (e.g. Google Gadgets) onto a WLP page

    First, as background info, the WLP architecture supports the rendering of various types of portlets:

    • Local portlets (deployed within the webapp, JSF, JPF, etc)
    • WSRP portlets – an advanced remoting approach which handles security, inter-portlet communication, etc…
    • Iframe portlets – an available remoting approach
    • WLP partners with Kapow for remote clipped portlets (similar to the ALUI approach)

    In regards to this use case, you brought up specific concerns:

    Security

    Concerns about shared authentication were noted in my talk. If components come from outside the enterprise, there is no easy solution to that problem, regardless of what product you are using. However, I spoke of a couple approaches in the webinar, including SAML.

    If those components come from inside the enterprise, the security hacks you were referring to are generally not necessary. Our customers that expect SSO have a web SSO solution (typically, cookie powered, not password in the URL powered) in place within the enterprise.

    Caching/Performance

    The most serious concerns of yours appear to be performance related. Specifically, the concern is that a full page refresh of a page that contains N number of iframes will cause an N+1 number of requests. To expand on your concern, I will add that this is not only seen in pages with iframes, but also pages that use Ajax to pull in data. I would say that there are several reasons why this does not invalidate WLP’s approaches:

    1. Mashup pages with lots of iframe portlets

    Google Personalized Home Page makes use of iframes to implement their mashup framework. Many of the Gadgets on the page are rendered with an iframe. But you are mistaken in saying that this scales because Google is throwing tons of hardware at the problem. The iframe Gadgets rendered in GPHP are rendered not by Google, but by 3rd party gadget hosting servers around the world. Google does NOT have to process those iframe Gadget requests, it is a distributed approach. Likewise, you could create a WLP page where most of the portlets are iframe portlets that hit a distributed set of servers, if that makes sense. Or…

    2. Mashup pages with a mixture of portlets

    The 2nd demo in my webinar wasn’t showing a page with all iframe portlets. Rather, what the demo was showing was a WLP page with a couple of iframe portlets mixed in with local portlets. As shown above, WLP supports a number of portlet types, and a good approach is to build pages that are a mixture of that set.

    3. Ajax helps minimize page refreshes

    Your concern about iframe performance stems from the case in which the entire page refreshes. With the usage of Ajax becoming common, plus WLP 9.2’s built-in support for auto-generating Ajax portlets, this impact can be minimized. Page refreshes are becoming more rare. With WLP 10.0, which releases in a few months, the Ajax support has been expanded to support Ajax-based portal page changes, further reducing the likelihood of a page refresh.

    4. The “Bleeding Edge” guys are also using browser based mashup approaches

    You referred to the “Bleeding Edge” technologists in your blog as the people who are doing things correctly. What are they doing? Some of the time, those guys are doing browser-based mashups. They often use a combination of iframes and Ajax from the browser to implement their mashups. So the same approach that you dislike is already in common use across the web.

    ** Market Reply

    You state “Portals are a thing of the past”. An interesting opinion, but just that. IT cannot afford web sprawl, and so a framework for rationalization will always be in demand whether you call it a Portal or something new.

    New technologies continue to provide alternatives to existing methodologies and portals are no different. However, one thing that has distinguished portal frameworks is their ability to embrace new technologies. Struts, WSRP, JSF are all examples of this as are the Web 2.0 constructs like mashups and rich interfaces based on Ajax. This is all good news as the enterprise has a wealth of options to choose from.

    Posted by: plaird on January 11, 2007 at 2:56 PM

  • I must say, as a customer and developer, it’s great that Plumtree (I mean BEA, or is it Oracle?) management allows you guys to express your own opinions. It so happens I’ve spent quite a bit of time trying to get JBoss Seam (and ICEfaces) to work with AquaLogic 6.1. I’ve been looking at the iframe route, because the gateway stuff just isn’t working (it doesn’t properly rewrite the URLs for the Ajax stuff). I’ve come to hate the gateway. I bet it was a great idea before Ajax, but now it seems like almost every web 2.0 application is incompatible (needs major modification to get it to work). Or maybe I just don’t understand how to get it to work. Is there any good documentation on it? I’m hoping for some major improvements when 6.5 comes out though.

    Posted by: cmann50 on April 4, 2008 at 2:28 PM

Categories
dev2dev Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction

How to Integrate PKI Certs or CAC Cards with ALI

In his 1947 speech to the House of Commons, Winston Churchill quipped, “It has been said that democracy is the worst form of government except all those other forms that have been tried.”

I’m not nearly as pithy as Sir Winston (nor as portly — at least not yet), yet I feel the same way about passwords being used to protect web sites or other enterprise systems. In many ways, they’re the worst form of security out there, except for everything else that’s been tried. Part of this has to do with what I’ve coined Bucchere’s Axiom of Strong Passwords, a derivative of Murphy’s Law (which states that whatever can go wrong will). It goes something like this: the stronger a password is, the easier it is to hack. Why? Because if you force users into using a strong password, they’re more likely to write it down. And writing a password down defeats its purpose entirely.

The bottom line: passwords suck. But they’ve become the de facto standard because they’re easier and cheaper than everything else we’ve tried, including PKI certs, biometrics (e.g. fingerprints, retina scans), CAC cards, RSA SecurIDs, etc. (Even for a cert-based authentication scheme, you still need a key to generate your cert, which is essentially just a glorified password.)

Just because passwords are the de facto standard for authentication does not mean that we should quit trying to use other, ostensibly better forms of security, especially if 1) you’re protecting particularly sensitive data, 2) you’re open to the internet and 3) you have the resources (e.g. $$$) to invest in more robust forms of security. And I’m not talking about just buying an SSL cert from Verisign and continuing to have your users write down their passwords on post-it notes attached to their monitors. (Note to self: remove the post-it note on your monitor with your password on it when you get back to the office.) I’m talking about using some sort of “soft” cert (e.g. PKI) or “hard” cert (e.g. CAC) to protect your system and your data.

Now if your system is ALI (formerly known as Plumtree Foundation or Plumtree Portal), you’re in luck, because the eggheads at what was once known as Plumtree have made this particularly easy to do. In fact, the hardest part is just getting the user’s identity out of the cert (see the suggestions below the code snippet). Once you’ve done that, just drop a class into a jar that implements the ISSOIntegration interface. (For those of you running on Windows, please don’t ask me to “port” this to C# — just take the Java code, drop it into Visual Studio.NET and then fix the syntax errors.)

But wait, SSO stands for “Single Sign On,” right? And what you’re really doing here is passing credentials from a cert to Plumtree, which has little or nothing to do with SSO. That’s a true statement. The subtlety here is that ISSOIntegration, while it contains the letters SSO in its name, can be used for pretty much any form of authentication, whether you are using an SSO product or not.

CertIntegration.java

package com.bdgportal.alui.auth;

import com.plumtree.openfoundation.util.*;
import com.plumtree.openfoundation.web.*;
import com.plumtree.portaluiinfrastructure.sso.*;

public class CertIntegration implements ISSOIntegration {
 
   private XPHashtable settings;
 
   public CertIntegration() {
   }

   /**
    * Called by the portal at startup with the settings from portalconfig.xml.
    * Returns true if initialization succeeded.
    */
   public boolean Initialize(XPHashtable settings) {
     this.settings = settings;
     //String exampleSetting = ((XPArrayList)settings.GetElement("SettingName")).GetElement(0);
     return true;
   }

   /** The display name reported for this integration. */
   public String GetSSOProductName() {
     return "My Favorite Cert Integration";
   }

   /**
    * Gets the username from the cert and returns it to Plumtree. This will fail if the username
    * does not have a matching account in Plumtree. This can be a Plumtree database user or a user
    * imported from an authentication source, in which case you need to include the auth source
    * prefix in the username, e.g. "MyAuthSource/cbucchere"
    *
    * @param request The wrapped HttpServletRequest from the web container.
    * @return The object passed back to Plumtree for authentication with the portal.
    */
   public SSOLoginInfo GetLoginInfo(IXPRequest request) {
     String userName = ((XPRequest)request).GetUnderlyingObject().getUserPrincipal().getName();
     return new SSOLoginInfo(userName);
   }

   /** No SSO cookies need special handling in this integration. */
   public String[] GetSecureCookies() {
     return null;
   }

   /** No SSO headers need special handling in this integration. */
   public String[] GetSecureHeaders() {
     return null;
   }

   /** Returning false lets the portal run its default logout behavior. */
   public boolean OnLogout(IXPResponse response, String returnURI) {
     return false;
   }
}

The hardest part about all this, as I said above, is getting the user name out of the PKI cert/CAC card/retina scan/etc. In the example above, I made MANY assumptions. First, I assumed that your portal is running on WebLogic, which understands and correctly implements Principal, which is a Java servlet’s way of knowing who’s using it. WebLogic lets you plug custom implementations of the Principal interface into its security infrastructure. All you need to do is implement java.security.Principal (it’s an interface, not a class) and then walk through a bunch of magical configuration steps to enable it.
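
To give you a flavor of the easy part, here’s a minimal Principal implementation. This is just a sketch; the real work is in WebLogic’s identity assertion plumbing, which validates the cert and decides when to construct an object like this:

import java.security.Principal;

public class CertUserPrincipal implements Principal {

   private final String name;

   // The name comes from whatever validated the client cert upstream.
   public CertUserPrincipal(String name) {
      this.name = name;
   }

   // This is what request.getUserPrincipal().getName() ultimately returns.
   public String getName() {
      return name;
   }

   public boolean equals(Object other) {
      return other instanceof CertUserPrincipal
         && name.equals(((CertUserPrincipal) other).name);
   }

   public int hashCode() {
      return name.hashCode();
   }

   public String toString() {
      return name;
   }
}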

Speaking of magical configuration, I neglected to mention that there are two small configuration steps you need to perform in order to get your shiny new ISSOIntegration working in ALI. In portalconfig.xml, you need to set the value of the SSOVendor setting to 100 (or greater) and then set CustomSSOClass to the fully qualified name of the class you wrote that implements ISSOIntegration. For our Java example above, that would be com.bdgportal.alui.auth.CertIntegration; for .NET, it would be the name of your C# class.
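
The exact markup in portalconfig.xml varies a bit from version to version, so treat this as a sketch and match it against the existing entries in your own file:

<setting name="SSOVendor">
   <value xsi:type="xsd:integer">100</value>
</setting>
<setting name="CustomSSOClass">
   <value xsi:type="xsd:string">com.bdgportal.alui.auth.CertIntegration</value>
</setting>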

Speaking of .NET . . . as many of you know, it is an entirely different animal with its own way of provisioning security to web applications (e.g. System.Web.Security).

Regardless of your platform, you need to get the user name out of whatever authentication method you’re using. Once you’ve accomplished that, just drop the code above into your project and replace the getUserPrincipal().getName() call with whatever mechanism you can find for getting your users’ names.
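
For example, if your container terminates two-way (mutual) SSL itself, most servlet engines expose the client cert chain as a standard request attribute, and you can dig the CN out of the subject DN. This is a sketch, not part of the ALI API, and the attribute is only populated when the container is configured to request client certs:

import java.security.cert.X509Certificate;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;
import javax.servlet.http.HttpServletRequest;

public class CertNameExtractor {

   // Returns the CN from the client cert's subject DN, or null if the
   // browser didn't present a cert (or the container didn't ask for one).
   public static String getUserName(HttpServletRequest request)
         throws InvalidNameException {
      X509Certificate[] chain = (X509Certificate[])
         request.getAttribute("javax.servlet.request.X509Certificate");
      if (chain == null || chain.length == 0) {
         return null;
      }
      // The subject DN looks like "CN=cbucchere, OU=BDG, O=Example, C=US"
      LdapName dn = new LdapName(chain[0].getSubjectX500Principal().getName());
      for (Rdn rdn : dn.getRdns()) {
         if ("CN".equalsIgnoreCase(rdn.getType())) {
            return rdn.getValue().toString();
         }
      }
      return null;
   }
}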

Assuming you trust your authentication mechanism to return the appropriate user name, you’ll have users getting logged into the portal pretty much however you would like — CAC, PKI, biometrics, etc.

If only implementing a democracy were this easy . . . .

Comments

Comments are listed in date ascending order (oldest first)

  • This is a wonderful article. However, I’ve researched for a long time and still cannot figure out what to do with BEA WebLogic to use a Custom Identity Assertion. I wish this article had a link to a document explaining how to “do the magical configuration steps.”

    Posted by: minh.tran on January 9, 2007 at 9:04 AM

  • This article was intended to be application server independent, but if you’re using BEA WebLogic, there’s a great article on how to set up custom identity providers which should work with this ALUI SSO solution.

    Posted by: bucchere on January 10, 2007 at 6:44 PM

  • NOTE: 1. the user’s password in the portal must be an empty string. 2. the jar should be put in portal.war and lib/java.

    Posted by: luotuoci on April 28, 2007 at 8:31 PM

Categories
Plumtree • BEA AquaLogic Interaction • Oracle WebCenter Interaction • Software Development

Server-to-server SSL with Plumtree

Two different customers of ours have recently experienced problems with server-to-server SSL and Plumtree, so I thought I would shed some light on the issue in the hope that it might help someone else who’s having the same problem.

The reality is that server-to-server SSL is no different with Plumtree than it is in any other environment, but it’s just poorly understood in general. Also, Plumtree makes server-to-server requests in places where you might not think to look. For example, you might think that only end users hit the Image Server, but in fact both the Portal Server and the Collaboration Server make server-to-server connections to the Image Server to pull down JavaScript files. (It’s possible that Content Server, Studio Server and the new Analytics Server make similar requests; I just haven’t run into problems with these products — yet.)

So, here’s the gist: in order for one server to communicate with another server over SSL, the requesting server needs to have the host server’s certificate installed in its keystore. Doing this is pretty straightforward and well documented. First, you need to export the certificate from the host server and copy it over to the requesting server. You can read about how to do this in IIS or how to do this in a JVM-based application server such as WebLogic or Tomcat.

After exporting the cert, you’ll end up with a .cer file that you’ll need to install on the requesting server. Say that server is a Tomcat instance on which you’ve installed Plumtree’s Collaboration Server. In this case, this set of instructions should help you get that part working.
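
If both machines are running Java, the whole round trip is two keytool commands. The paths, aliases and passwords below are examples; by default a Tomcat JVM trusts the JRE’s cacerts file unless you’ve pointed javax.net.ssl.trustStore somewhere else:

# On the host server: export its certificate from the keystore that holds it
keytool -export -alias images.mycompany.com -keystore keystore.jks -file images.cer

# On the requesting server: import that cert into the JRE's trust store
keytool -import -trustcacerts -alias images.mycompany.com -file images.cer \
        -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit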

One gotcha is that the name of the server in the certificate must match the name being placed in requests made to that server. For example, if the requesting server is making a call to https://images.mycompany.com, you need to have images.mycompany.com as the name of the server in its certificate.