Bringing Web 2.0 to the en.terpri.se

First, let me establish one thing: I don’t work for BEA. Since leaving Plumtree more than four years ago, neither Plumtree nor BEA has paid me a dime. They don’t pay me to write this blog. They’re not paying me to speak at BEA Participate in May. Although we have a subcontract agreement in place, we have never actually subcontracted through Plumtree or BEA. You get the point: what I write here (or anywhere, for that matter) is not endorsed or sanctioned in any way by BEA.

The beauty of that is that I can be BEA’s sharpest critic or their most outspoken advocate. Today, I come to you, dear readers, as the latter.

I am here to tell you that I think the latest marketing positioning to come from BID — in the most apropos form of a “rogue” web site called en.terpri.se — is perhaps the finest writing I have yet read on the topic of bringing Web 2.0 to the enterprise.

This concept — which many of you already know as “Enterprise 2.0” — is not a new one. Consumer portals were not new in 1997, but at that time they were very new to the enterprise. In the same way, blogs, wikis, tagging and other social software have already infiltrated the consumer internet. But, as we have been saying since early this year and as others have been saying for a while now, these concepts are only just being embraced by the early adopters in corporate/enterprise computing.

But with projects Builder (Holland), Runner and Graffiti (now known as Pages, Ensemble and Pathways, respectively) nearing general availability, all of that is about to change. If you want to find out exactly how, I encourage you to read and digest all of en.terpri.se and its more traditionally branded counterpart, www.bea.com/enterprise.

Just as Plumtree took the world of enterprise computing by storm by introducing the concept of the corporate portal, BEA is about to re-revolutionize the enterprise by injecting it with a strong dose of Web 2.0.

I won’t rehash what they’ve already spelled out so concisely and intelligently on en.terpri.se; instead, I’ll give you my own take on the products based on what I’ve read there.

Pages (formerly known as Project Builder or Holland)

To call Pages a powerful blogging and wiki tool for the enterprise doesn’t really do it justice because it is, well, so much more than that. Imagine if you could use point-and-click/drag-and-drop tools to mash up structured data (RDF/RSS, the output of a SOAP-based web service, or the result of a SQL query) with unstructured, end-user-maintainable, version-controlled wiki-like content — and that’s just scratching the surface.

For those of you already familiar with AquaLogic products, think of how amazing Studio would be if it were somehow married to Publisher (we used to call this “Contudio” before it actually existed) and if Studio could tap into existing resources and then somehow weave published content into the resulting user interface output. Now put all this in the hands of the end-user (to give it that Web 2.0 magic), add a sprinkle of security/governance, auditing and enterprise administration and you’ve got Pages.
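If you want a feel for what’s happening underneath all that drag-and-drop, here is a rough Java sketch of the kind of structured-data mashup Pages automates. Everything in it (the feed URL, the JDBC connection string, the table name) is a placeholder I invented for illustration, not anything from the actual product:

```java
import java.net.URL;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Hypothetical sketch only: merge RSS items with rows from a SQL query,
// roughly the kind of mashup Pages assembles via drag-and-drop.
// The URL, connection string and table are invented; a real run would
// also need a JDBC driver on the classpath.
public class MashupSketch {
    public static void main(String[] args) throws Exception {
        List<String> items = new ArrayList<String>();

        // Structured source #1: an RSS feed (placeholder URL).
        Document rss = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new URL("http://example.com/news.rss").openStream());
        NodeList titles = rss.getElementsByTagName("title");
        for (int i = 0; i < titles.getLength(); i++) {
            items.add("[feed] " + titles.item(i).getTextContent());
        }

        // Structured source #2: a SQL query (placeholder connection and table).
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/crm", "user", "password");
        ResultSet rs = conn.createStatement().executeQuery("SELECT name FROM accounts");
        while (rs.next()) {
            items.add("[crm] " + rs.getString("name"));
        }
        conn.close();

        // In Pages, the merged result would feed an end-user-editable,
        // version-controlled page; here we just print it.
        for (String item : items) {
            System.out.println(item);
        }
    }
}
```

The point is simply that two very different structured sources land in one place; Pages layers the end-user tooling, versioning, security and governance on top of that.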

Ensemble (formerly known as Project Runner)

This may not be the best way to envision Ensemble, but it works for me: imagine taking all the tasty bits that Pages gives you and putting them in the hands of IT and developers. Instead of dragging and dropping, a developer can embed a Runner pagelet XML tag into his or her legacy (or newfangled long-tail/rogue) application, then proxy the application through the Runner “gateway” and, out of nowhere, the application can have, say, a collaboration discussion or wiki page embedded in it.

Not to mention that other enterprise services, such as security, SSO and auditing, can be mixed into the application simply because it’s running through the Ensemble gateway. With this incredible new product, pretty much anything is possible because it gives developers the tools to provide secure, scalable, auditable and maintainable mashups of just about anything in the enterprise or consumer web.
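Since I don’t have the actual tag syntax in front of me, here is a purely hypothetical Java sketch of the pattern the gateway embodies: proxy the legacy page, swap a placeholder tag for remotely rendered pagelet markup, and hand the merged page back to the browser. The URLs and the placeholder syntax are mine, not the product’s:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;

// Purely illustrative sketch of the gateway pattern: fetch a legacy page,
// replace a placeholder marker with remotely rendered pagelet markup, and
// return the merged page. The URLs and the placeholder are invented; they
// are not Ensemble's actual tags or APIs. A real gateway would also inject
// SSO credentials and audit the calls it proxies.
public class GatewaySketch {

    static String fetch(String url) throws IOException {
        BufferedReader in = new BufferedReader(new InputStreamReader(new URL(url).openStream()));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            sb.append(line).append('\n');
        }
        in.close();
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // 1. Proxy the legacy application's page.
        String legacyPage = fetch("http://legacy.example.com/app/page.html");

        // 2. Render the embedded service, e.g. a wiki or discussion pagelet.
        String pageletHtml = fetch("http://pagelets.example.com/wiki/render");

        // 3. Swap the placeholder the developer dropped into the page.
        String merged = legacyPage.replace("<!-- pagelet:wiki -->", pageletHtml);

        System.out.println(merged);
    }
}
```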

Pathways (formerly known as Project Graffiti)

Calling Pathways a next-generation Knowledge Directory may be an easy way to conceptualize it, but again it really doesn’t do it justice. Unlike the top-down, “mother knows best” taxonomies of the past, Pathways puts the power to categorize corporate knowledge in the hands of the knowledge consumer. Like del.icio.us and digg, Pathways is BEA’s recognition of the “many is smarter than any” principle. Unlike its consumer web counterparts, Pathways uses a page-ranking system that’s based on a whole slew of factors, including not just how or how much an entry is tagged, but also how “respected” the tagger is in terms of other entries he or she has tagged. Like the KD of the past, Pathways can import content from file shares, e-mail/groupware systems and even from SharePoint (gasp) — think CWSs — but very much unlike the KD of the past, control over the taxonomy and how well entries get ranked in search is ceded to the end user, where many argue it belongs.
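For the curious, here is a hypothetical Java sketch of the kind of ranking described above, where an entry’s score reflects both how often it is tagged and how established each tagger is. The reputation formula is something I made up purely to illustrate the idea; I have no insight into the actual algorithm:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of reputation-weighted tag ranking: an entry's score
// reflects not just how many times it is tagged, but how established each
// tagger is. The formula is invented purely for illustration.
public class TagRankSketch {

    // taggerReputation: user -> number of tags that user has applied to other entries
    static double score(List<String> taggers, Map<String, Integer> taggerReputation) {
        double total = 0.0;
        for (String tagger : taggers) {
            int reputation = taggerReputation.containsKey(tagger) ? taggerReputation.get(tagger) : 0;
            // Each tag counts for 1, boosted logarithmically by the tagger's track record.
            total += 1.0 + Math.log(1 + reputation);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Integer> reputation = new HashMap<String, Integer>();
        reputation.put("alice", 120); // prolific, well-established tagger
        reputation.put("bob", 2);     // brand-new user

        // One tag from alice outweighs two tags from bob.
        System.out.println(score(Arrays.asList("alice"), reputation));
        System.out.println(score(Arrays.asList("bob", "bob"), reputation));
    }
}
```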

Needless to say, I’m very excited about these new product initiatives for many reasons, not the least of which is that I’ve bet my entire company’s future on their success. So maybe I am a little biased. That being said, I’m not here to tell you that BEA invented Web 2.0 or even Enterprise 2.0. However, I am saying that — based on what I’ve experienced over the past ten years that I’ve been pushing the enterprise computing envelope — BEA is poised to execute on the Enterprise 2.0 reality better than any other company right now.

Mark my words: you will watch Pages, Ensemble and Pathways implementations spring up throughout the Fortune-whatever just as quickly as you saw enterprise portals replace intranets in the late 90s.

Better yet, in the spirit of Enterprise and Web 2.0, rather than watching this happen, let’s participate in it.

Comments

Comments are listed in date ascending order (oldest first)

  • Are these products built with partners or reskinned? Is that the reason why this isn’t on dev2dev? How long has this been on?

    Posted by: logicuser on April 1, 2007 at 1:53 PM

  • I’m not sure I fully understand your questions, but I think I can comment on them a bit. First off, the products are being built by BID’s core engineering team — many of the same folks who brought you ALUI, ALI Publisher, ALI Collaboration, ALI Studio, etc. I don’t know what you mean by “reskinned,” but these products are all new initiatives, although they undoubtedly leverage the experience the BID folks have garnered over the past 10 years they’ve spent building enterprise portals and other enterprise software.

    To answer your last question, the marketing documents and web site were only released last week to the public. I think you should expect to see GA for these products some time this summer, but don’t quote me on that — remember, I don’t work for BEA.

    As for this stuff not being on dev2dev, well, with my blog post, now it is! There’s also a lot of the same info located at www.bea.com/enterprise with more traditional BEA branding. I’m sure you’ll be seeing more on dev2dev soon.

    Posted by: bucchere on April 1, 2007 at 3:48 PM

  • These products have all been organically developed at BEA. In some cases we have leveraged our existing capabilities and technologies (e.g. our experiences with our search product informed our decisions with the new Pathways product), and in the end these are new products built by BEA.

    The http://en.terpri.se micro-site was launched last week, and it is meant to provide a Web 2.0 and Social Computing resource that will grow over time. We will have the blogs on dev2dev and en.terpri.se refer to each other as appropriate.

    And, of course, the same base product information was also made available concurrently last week at http://www.bea.com/enterprise.

    Cheers,

    Shane Pearson

    BEA Systems, Inc.

    VP, Marketing and Product Management

    Posted by: spearson on April 3, 2007 at 1:40 PM

Portals and SOA: Portals in a Service-Oriented Architecture

I’ve been invited to give the following talk at BEA Participate:

Why is a Service-Oriented Architecture important to an IT infrastructure and what are the elements and products needed to build out an SOA? These questions answered, plus a discussion on how portals are the practical starting point to leveraging SOA.

Quite honestly, the title and abstract make it sound like an invitation to engage in a lively game of buzzword bingo, but I assure you this talk will be light on the trite — you won’t hear me use the acronym SOA more than once or twice — and heavy on the real deal, rubber-meets-the-road stuff about how mere mortals/human beings are actually accomplishing the sort of things that SOA evangelists are preaching these days.

So, here’s what you can expect: I’ll talk a bit about some of the challenges of building integrated user experiences in today’s enormously complex and heterogeneous IT environment and show how a software developer — without superpowers — can piece together an integrated, true-to-the-principles-of-SOA application using ALUI, ALDSP (Data Services Platform) and ALESB (Enterprise Service Bus). This will culminate in an actual, real-life demo.

I will of course make sure to sacrifice a chicken to the Almighty Goddess of Demos or do whatever else I have to do to make sure my demo doesn’t crash. Scratch that, I’ll just run it on Linux and everything will be fine.

So, all joking aside, if you have any ideas for items you’d like me to include in (or exclude from) my talk, please post your comments here. I’ll be sure to give anyone who makes a good suggestion a “shout out” during my presentation. They’re actually giving me a whole hour this time, so there’ll be room for plenty of tomfoolery, geekspeak, silly anecdotes and still time to answer your insightful questions at the end. As one of my good friends and business partners said following my talk at last year’s BEA World,

you never know what to expect during one of [Chris Bucchere’s] talks.

I’m not sure exactly what he meant, but of course I took it as a compliment.

In closing, while we’re on the subject of BEA Participate, I just wanted to say thanks to Christine “Obi” Wan for giving me the opportunity to present and, more importantly, for putting together such a great-looking agenda, which you can review if you like, because now it’s posted on the BEA Participate site.

In the meantime, do your best to convince the powers that be at your company/organization that they will finally discover the secret to “leveraging SOA” if they send you to this conference. Also, please don’t mention that every past Odyssey has had several open bars.

Comments

Comments are listed in date ascending order (oldest first)

  • Working with AquaLogic, we all know how easy it is to plug our portlets into AquaLogic. We don’t need the AquaLogic portal running on our own computers to do this, we don’t need a special IDE, and we don’t need to upload WARs into the portal. It took time to explain this to my experienced J2EE colleagues who have some experience with IBM WebSphere. Here is what they do there:
    http://www-128.ibm.com/developerworks/websphere/techjournal/0410_barcia/0410_barcia.html
    A lot of the steps are pretty much the same, but have a look at step 11. Here is the core difference. So at least one benefit of SOA is that we don’t need to do step 11.

    Posted by: Bryazgin on April 13, 2007 at 7:03 AM

  • > Quite honestly, the title and abstract make it sound like an invitation to engage in a lively game of buzzword bingo

    True, I have the same issue. In my article (for a Russian development network) I want to stress the SOA architecture of AquaLogic, but I don’t want to use the word SOA. The audience is pretty technical, so they are all pretty much fed up with this word. Hmm, maybe I will end up with this:
    Avoid the nightmare of step number eleven!
    At least, “what the hell is this guy talking about?” will be a more predictable reaction. 🙂

    Posted by: Bryazgin on April 13, 2007 at 7:25 AM

  • Hi Dmitri! Thanks for your insightful comments.

    As I’m building the demo for my talk, I’ve noticed that these SOA tools encourage you to loosely couple everything. And that’s a good thing. As you pointed out, ALUI fits into this nicely with its loosely-coupled portlet architecture. The evil “Step 11” (too bad it wasn’t “Step 13”) is: “Select the Browse button and navigate to the WAR file for your portlet, then select Next (Figure 17).” Step 11 has some pretty awful implications for the enterprise. First off, it assumes that everything is Java, which, as much as I love Java, is just wrong wrong wrong in the heterogeneous enterprise. Secondly, it tightly couples your portlets to your portal, which is contrary to SOA.

    As an aside, I was listening to some Web 2.0 podcasts in the car the other day, and this guy who worked on Google Maps talked about “seams” in an architecture. To paraphrase, he basically said that everyone misuses the word “seamless.” Seams, just like in the textile industry, are critical to enterprise architecture. Just as seams hold swaths of fabric together and separate one bit of fabric from another, they also help define boundaries in the enterprise architecture that are equally critical to SOA. Without seams, everything must be homogeneous — applications must be bought from the same vendor, run on the same OS, be written in the same language, etc. — and this is completely contrary to the reality of enterprise software and systems and completely anti-SOA.

    To illustrate how not being “seamless” is actually a good thing, I’ve designed a demo system that involves bits of LAMP (Linux Apache MySQL PHP), bits of Java, bits of .NET and bits of Adobe Flash all held together with seams built with ALDSP, ALESB and ALUI. I’m still working on the technical side of things, but the use case is simple: a sales rep wants to quote his customer. Behind the scenes, his company is running a LAMP CRM server, a Flash/SQL Server product database, a .NET portal, and a Java-based Collaboration Server. Using a hybrid of ALDSP, ALESB and Java and .NET web services, the user experience is easy and seamless, but behind the scenes, it’s the powerful seams supported by ALDSP and ALESB that make this not only possible, but fairly straightforward.

    If you’re interested in hearing more, register for BEA Participate and [shameless plug]come to my talk[/shameless plug]! By the way, I’m co-presenting with Joseph Stanko, the BEA Engineering Manager responsible for the development of Ensemble (formerly known as Project Runner) — he will run several slides to help you understand the theory behind SOA and I will show the reality of how the AquaLogic stack truly enables SOA in the enterprise.

    Posted by: bucchere on April 14, 2007 at 6:07 AM

  • At last, I’ve finally finished my demo. I had some configuration issues with ALSB, but ultimately they boiled down to the interface between the keyboard and the chair, i.e. human error. I had the proxy service calling the business service, which, in turn, called the proxy service again. You should have seen the utter wasteland this little tidbit of mutual recursion made of my machine. Actually, I was impressed — Java would spit out a JVM_Bind error once it exceeded some internal maximum, but ALSB (running on WLS 9.2) would actually keep running. Nice.

    Anyway, now that I’m past all that, I have an ALDSP layer over two disparate data sources (one MySQL DB containing CRM info and one HSQL DB containing product info) exposing data through netui/beehive to a single ALI portlet. (The nifty little portlet uses script.aculo.us to show an interesting new take on the age-old concept of master-detail.) I also included an Adobe Flex-driven portlet. The two portlets use some client-side IPC (inter-portlet communication) to exchange info and then they call a proxy service on ALSB that takes info from both sources and creates a Word document (in the form of a sales quote). The business service also uploads this document to ALI Collaboration so that people can work on it collaboratively before sending it to the customer. (I may replace this last little bit with a .NET web service, just to show that Java and .NET are both acceptable alternatives for writing the “glue” or “seams” in a true service-oriented architecture.)

    Lastly, the event coordinators have locked in a time slot for us: Monday, May 7th at 4:30 PM in the Technical/Developer Track.

    If you’re “participating,” it would be great to see you at our talk or at the bdg booth. This year we have a cool — yet practical — giveaway that will definitely brighten your day. Looking forward to the conference!

    Posted by: bucchere on April 22, 2007 at 7:52 PM

BEA Participate

A quiet little announcement was made last week: BEA plans to host an ALUI (formerly Plumtree) and ALBPM (formerly Fuego) user conference! Surprisingly, I don’t see any reference to it on BEA’s web site, on dev2dev or really anywhere else, so I thought I would take a minute to promote the conference here.

Could this be a response to some customer and integrator concerns that there weren’t enough AL* breakout sessions at BEA World 2006? Possibly. Could this be the final nail in the coffin of what was once called the “Unified Portal Roadmap”? I’m not sure.

Regardless, you can bet that I’ll be there along with several other folks from bdg. Stay tuned for more information here about how we’ll be involved as an event sponsor, exhibitor and perhaps even as a presenter. I expect that we’ll have a lot of fun, share a great deal of what we know about ALUI and learn a great deal more from ALUI customers and other BEA partners.

The full extent of the information that currently exists about this conference can be found at http://www.bea.com/participate. We’ll be watching that space for more info and also posting several more times about our specific role in the conference. I suggest you do the same.

One obvious question any customer or partner should ask is: if I’m getting my budget together for 2007 conferences, should I attend BEA World or BEA Participate? If you’re a current ALUI or ALBPM customer, it’s a no brainer: attend BEA Participate. But what if you’re a prospect who is considering a portal or SOA solution from BEA? If you can afford it, I would say attend both!

Comments

Comments are listed in date ascending order (oldest first)

  • Now I’m officially confused. Very weird that these are separate unless they’re using BEA World as a venue for “technical building blocks” and “Participate” to sell business collaboration / process solutions – that’s the only way I can see this.

    I have to be careful how I word this, so if the tone comes across in any way negative, well… that’s not my intention. IMO I would not attend BEA World again if it’s a repeat of last year’s.

    I loved Odyssey – it was well organized, had _great_ sessions targeted toward user education and productivity, and was all about the customer – sharing best practices, discussing common problems, and engaging in one-on-one w/ engineers and product managers. Sessions were focused on empowering the customer and making sure they were just a bit better at their jobs when they left. It was always worthwhile and our entire team (repeatedly) came away saying “glad we went.” Awesome stuff all around and did a lot to let the customers sell the solutions to other customers (always a better way to go).

    In attending BEA World last year I got the constant nagging sensation that it was a big (overt) sales conference and not really about the user and how to better utilize tools. ALUI was barely even on the map (which really bothers me). I didn’t have the sense that my needs were being addressed as much as in previous years and I really didn’t come away with anything “tangible” I could take back to justify the fee. The customer keynotes were cool, but beyond that we struggled to find value.

    Doing something with a “Participate” focus thing is a _great_ idea on the part of BEA if it’s about targeting the customer and helping understand how to succeed with the tools (and make friends along the way ;). Keywords: using the tools to succeed in business. That, IMO, was always the point to me in attending.

    Obi-wan – hear me. This should really be incorporated into BEA World for the benefit of your current and prospective customers. It will really boost the value of BEA World and do something to hammer home the fact that BEA and Plumtree are one company with one comprehensive suite (something Jay Simons’ web conference last year did a great job of explaining). Separating things like this … well… I get it, but it does imply a continued level of separation that customers expressed concern with last year.

    That said — and I sincerely hope that didn’t come across as negative — I’m excited to see what 2007 brings for the new products. Seeing a bit of what they’re cooking up, it’s nice to see users finally getting past a lot of the geekware bits and into things they can build and use w/o IT bottlenecks. Very cool. Buy three 🙂

    Posted by: ewwhitley on February 12, 2007 at 7:28 AM

  • It’s not Obi-wan here, but Christine Wan and we’re definitely listening! BEA organized Participate to directly address the needs of business and IT users working with ALUI and ALBPM products. This is very much a forum for customers to gather and share best practices, to go deep with product managers and engineers and to hear the latest on new product developments.

    And it is an important complement to BEAWorld, providing much richer detail on these two specific product lines and more focus on bringing these specific users together in a forum where they can share experiences and ideas. The announcement last week was just a Save-the-Date. Stay tuned, you’ll see a lot more information to come on the bea.com homepage and bea.com/participate.

    Posted by: cwan on February 12, 2007 at 2:09 PM

  • Hi, Christine 🙂 Very cool – I’m glad to hear this. We loved the “interactive” and focused nature of the Odyssey sessions. You guys did such a good job on that I think we just kinda got spoiled and expected something on that order for BEA World last year. That’s what happens when you make us too happy year after year 😉

    Posted by: ewwhitley on February 12, 2007 at 7:51 PM

  • I know that many customers I spoke with during and after BEAWorld echoed the same sentiment of being “underwhelmed” simply from being spoiled by Odysseys past. Along those same lines, an Advanced Developer Conference either as part of Participate, an extension to it or separate from it would be awesome as well. I know that may be hard to do as part of this initial effort but it would be great at some point. We are definitely excited about it and all of this just builds anticipation until May.

    Posted by: kurtanderson on February 15, 2007 at 10:05 PM

WLP + Adrenaline = ALI?

I recall sitting in a meeting in 1998 where we were discussing how to aggregate portlet content into a portal page. We talked a lot about iframes but couldn’t consider them a serious integration option because of concerns around security, scalability/performance, caching and portal-to-portlet communication. Instead, we spent the next year building and testing the HTTPGadgetProvider, which later came to be called the “(Massively) Parallel Portal Engine.” (The term “Massively” was later dropped and I believe the name “Parallel Portal Engine” or PPE for short finally stuck.) I won’t go into details about how the PPE works, but if you’re interested, you can check out this great page in edocs that sums it up nicely.

So anyway, iframes are certainly a reasonable way to build a portal in a day. But, in terms of building a robust enterprise portal that can actually withstand the demands of more than, say, ten users, and that will pass even the most rudimentary security evaluation, iframes are complete nonsense.

So, today, during my lunch break, I attended Peter Laird’s Webinar, which he advertised in his nascent blog. It was all about enterprise mashups, a topic by which I’m very much intrigued. (Recall that PTMingle, my winning entry in the 2005 Plumtree Odyssey “Booth of Pain” coding competition was a mashup between Hypergraph, Google Maps, del.icio.us and Plumtree User Profiling.)

Imagine my surprise when Peter described how you can mash up Google “Gadgets” and other resources available via URLs using Adrenaline, a “new” technology from the WLP team based on, of all things, iframes. It was like entering a wormhole and being transported back to 1998. (I was single again, I had no kids, I was thinner and I had more hair on my head . . . and less on my back.) But the weird thing about this parallel universe is that BEA engineers were telling me that iframes were a great way to mash up enterprise web content and that intranets all over the world could benefit from this revolutionary concept. Intranets? You mean the things that everybody replaced with portals in the last millennium? Iframes? I must have been dreaming . . . .

When I finally came back to my senses, a few things occurred to me.

First of all, it’s 2007. Portals are a thing of the past. For some of us, that will be a hard pill to swallow. But let’s face it, innovators have moved on to blogging, wikis, tagging/folksonomies and lots of other nice web 2.0 sites that all have rounded corners. The bleeding edge folks have decided that many is smarter than any. The rest of the world will catch up soon.

Secondly, if you are still building a portal or composite application of any flavor, iframes are not a viable solution. They fall short in the following ways:

Portal-to-Portlet Communication

Say you want to send information (like the name of the current user) down to a portlet running in an iframe. Hmmm, the request for an iframe comes from the browser, not from the portal. So, if anything needs to be passed into the iframe, I guess you have to put it in the URL in the request for the iframe. That’s great, but that URL is now visible in the page’s source. So a simple “Hello [your name]” portlet where the portlet gets the name from the portal is doable. But what about passing a password? That information would need to go first to the browser and then back to the remote tier, which, from a security standpoint, is a complete showstopper.
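To make that concrete, here is a tiny Java sketch (the portlet URL is a placeholder) showing the only channel an iframe really gives you: whatever the portal wants to pass has to ride along in the iframe’s src URL, in plain view of the browser.

```java
import java.net.URLEncoder;

// Hypothetical sketch: with an iframe portlet, the only way the portal can
// pass data to the portlet is by writing it into the iframe's src URL, which
// the browser (and the page source, history and any intermediate proxies)
// can see. The URL is a placeholder.
public class IframeParamSketch {
    public static void main(String[] args) throws Exception {
        String user = "jsmith";
        String markup = "<iframe src=\"http://portlets.example.com/hello?user="
                + URLEncoder.encode(user, "UTF-8") + "\"></iframe>";
        // Fine for a "Hello, jsmith" portlet; a password or token in that URL,
        // however, would be handed straight to the end user's browser, which is
        // the showstopper described above.
        System.out.println(markup);
    }
}
```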

Security

Let’s talk a little more about security. Since you’re using an iframe, the requests aren’t proxied by the portal. Instead, a page of HTML gets sent from the portal to the browser and then the browser turns around and makes requests to all the iframes on that page. Since the portal isn’t serving as a proxy, it can’t control what you do and don’t have rights to see, so security is completely thrown out of the window. (Or should I say, thrown out of the iframe?) Moreover, in an enterprise deployment, the portal usually sits in the DMZ and proxies requests out to bits and pieces of internal systems in order to surface them for extranet users. If you’re using iframes, every bit of content needs to be visible from an end user’s browser. So what’s to stop an end-user from scraping the URL out of a portal page and hitting a portlet directly? Nothing! (If I understand what I’m reading correctly, the WLP team is calling this a feature. I would call it a severe security risk.)

Scalability/Performance

Yes, this approach will work for Google Gadgets. But Google has more money than pretty much everyone. They can afford to spend frivolously on anything, including hardware. However, the rest of the world actually cares about the kind of load you put on a system when you create a “mashup.” A page consisting of five iframes is like five users hitting the sites with five separate requests, separate sessions and separate little “browsers.” If any of the iframes forces a full-page refresh or if the user does the unthinkable and say, moves to another page, every request is reissued and the mashup content is regenerated. This simply does not scale beyond a few users, unless you have as much money and as much hardware as Google does.

Caching

A properly designed portal or content aggregation engine will only issue requests to portlets when necessary. In other words, each remote portlet will only get a request if it needs to be loaded because the portal doesn’t have a cached entry. Unfortunately, you can’t do this with iframes because the portal doesn’t even know they exist. (Remember, all requests for iframe content go directly from the browser to the remote content, bypassing the portal entirely.)
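Here is a minimal Java sketch, with invented names, of the caching a proxying portal can do precisely because the portlet request flows through it; with iframes there is nothing for the portal to intercept, so there is nowhere to put this logic.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of portal-side caching: because every portlet request
// flows through the portal, the portal can serve cached markup on a hit and
// skip the remote call entirely. (A real implementation would also expire
// entries.) With iframes, the browser talks to the remote tier directly, so
// the portal never sees the request.
public class PortletCacheSketch {
    private final Map<String, String> cache = new HashMap<String, String>();

    String render(String portletUrl) {
        String cached = cache.get(portletUrl);
        if (cached != null) {
            return cached; // cache hit: no request to the remote portlet at all
        }
        String markup = fetchRemote(portletUrl); // cache miss: gatewayed request
        cache.put(portletUrl, markup);
        return markup;
    }

    // Stand-in for the portal's gatewayed HTTP request to the remote portlet.
    String fetchRemote(String portletUrl) {
        return "<div>portlet content from " + portletUrl + "</div>";
    }

    public static void main(String[] args) {
        PortletCacheSketch portal = new PortletCacheSketch();
        System.out.println(portal.render("http://portlets.example.com/news")); // fetched
        System.out.println(portal.render("http://portlets.example.com/news")); // served from cache
    }
}
```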

What baffles me is why a company would acquire another company with a revolutionary technology (the PPE) and then start from ground zero and build a technology that does the same thing but without a portal-to-portlet communication model (preferences), security, scalability or caching. If consumers weren’t already confused, now they most certainly are.

As technologists, I hope you can see through the hype about Adrenaline and consider a product that actually allows you to mash up web content in a scalable and secure way and has been doing so since 1999. It’s called AquaLogic Interaction and it’s sold by a company we all know and love called BEA.

Comments

Comments are listed in date ascending order (oldest first)

  • I just discovered that the BID/AquaLogic (formerly Plumtree, Fuego, Flashline, etc.) folks are having another webinar, entitled “Harnessing Enterprise Mash-ups with Security and Control.” This webinar (I hope) will show:
    1. how ALI has been handling mashups since before “mashup” was even a buzzword, and
    2. how Project Runner enables next generation mashups that allow you to invoke back-end applications and provision security, branding, SSO, etc. without actually funneling everything through the portal.

    If you were at today’s webinar and you’re now wondering how to do mashups with more robustness and security, then I hope you’ll attend this webinar. By all means, it’s just the responsible thing to do in order to offer customers different integration options when creating their mashups.

    Posted by: bucchere on January 10, 2007 at 7:31 PM

  • I’d like to add a couple points of clarity from BID product management. First of all, we’re happy to have passionate developers, but I fear this post may give the wrong impression about some of BEA’s technology and plans.

    WLP Adrenaline, ALUI, and project Runner are all complementary technologies that have a very exciting future when applied to problems such as Enterprise Mashups. You’ll be hearing more about them from BEA over the coming months through various venues, including Webinars targeted at WLP-specific use cases (such as Peter’s excellent talk) and ALUI use cases (including tomorrow’s Runner Webinar). There will also be the usual blogging and other activities.

    Just as WLP and ALUI product teams are aligned, these different technologies are aligned. Adrenaline offers WLP customers a way to extend their reach in fundamentally new ways, and Peter will expound on some technical subtleties to address some of Chris’ concerns. Runner, too, is very exciting, enabling a completely different set of use cases. As the details unfold we’ll demonstrate how well aligned these technologies are — just wait until you see them working together!

    – David Meyer

    Posted by: dmeyer on January 10, 2007 at 10:41 PM

  • Just for those that don’t know about Adrenaline, here’s an article introducing Adrenaline.

    Posted by: jonmountjoy on January 11, 2007 at 12:19 PM

  • Chris,

    As David writes, BEA is moving ahead with multiple approaches to address the enterprise mashup space. My webinar covered the approach WLP is taking, and in no way implied that ALUI is not also a viable player in this space. We offer our customers a choice of products, and different products make sense to different customers.

    As for the specific issues you raised:

    ** Technical Reply

    Good technical points, but I think you overemphasized the role of iframes within WLP. Let me cover the two places we showed the use of iframes:

    Use Case 1: injecting a portlet into a legacy webapp

    Demo: An iframe was used in the demo to inject a portlet into a legacy static html page with almost no modification to that page (one line change).

    WLP does support an alternative approach – an Ajax streamed portlet. I simply did not have time to demo it. Also, this is not a portal use case for including external non-portal content into a Portal; instead it is the inverse, which is to publish existing portal content into legacy web applications. It was intended to show a very inexpensive way to energize a dated application until it is rationalized into a portal. The focus here is on minimizing the cost of supporting legacy, while building portlets in transit to a portal solution.

    Use Case 2: WLP as a Mashup composition framework

    Demo: Iframes were used to pull in non-WSRP capable components (e.g. Google Gadgets) onto a WLP page

    First, as background info, the WLP architecture supports the rendering of various types of portlets:

    • Local portlets (deployed within the webapp, JSF, JPF, etc)
    • WSRP portlets – an advanced remoting approach which handles security, inter-portlet communication, etc…
    • Iframe portlets – an available remoting approach
    • WLP partners with Kapow for remote clipped portlets (similar to the ALUI approach)

    In regards to this use case, you brought up specific concerns:

    Security

    Concerns about shared authentication were noted in my talk. If components come from outside the enterprise, there is no easy solution to that problem, regardless of what product you are using. However, I spoke of a couple approaches in the webinar, including SAML.

    If those components come from inside the enterprise, the security hacks you were referring to are generally not necessary. Our customers that expect SSO have a web SSO solution (typically, cookie powered, not password in the URL powered) in place within the enterprise.

    Caching/Performance

    The most serious concerns of yours appear to be performance related. Specifically, the concern is that a full page refresh of a page that contains N iframes will cause N+1 requests. To expand on your concern, I will add that this is not only seen in pages with iframes, but also pages that use Ajax to pull in data. I would say that there are several reasons why this does not invalidate WLP’s approaches:

    1. Mashup pages with lots of iframe portlets approach

    Google Personalized Home Page makes use of iframes to implement their mashup framework. Many of the Gadgets on the page are rendered with an iframe. But you are mistaken in saying that this scales because Google is throwing tons of hardware at the problem. The iframe Gadgets rendered in GPHP are rendered not by Google, but by 3rd party gadget hosting servers around the world. Google does NOT have to process those iframe Gadget requests, it is a distributed approach. Likewise, you could create a WLP page where most of the portlets are iframe portlets that hit a distributed set of servers, if that makes sense. Or…

    2. Mashup pages with a mixture of portlets

    The 2nd demo in my webinar wasn’t showing a page with all iframe portlets. Rather, what the demo was showing was a WLP page with a couple of iframe portlets mixed in with local portlets. As shown above, WLP supports a number of portlet types, and a good approach is to build pages that are a mixture of that set.

    3. Ajax helps minimize page refreshes

    Your concern about iframe performance stems from the case in which the entire page refreshes. With the usage of Ajax becoming common, plus with WLP 9.2’s built-in support for auto-generating Ajax portlets, this impact can be minimized. Page refreshes are becoming more rare. With WLP 10.0, which releases in a few months, the Ajax support has been expanded to support Ajax-based portal page changes, further reducing the likelihood of a page refresh.

    4. The “Bleeding Edge” guys are also using browser based mashup approaches

    You referred to the “Bleeding Edge” technologists in your blog as the people that are doing things correctly. What are they doing? Some of the time, those guys are doing browser based Mashups. They often use a combination of iframes and Ajax from the browser to implement their mashups. So the same approach that you dislike is already in common use across the web.

    ** Market Reply

    You state “Portals are a thing of the past”. An interesting opinion, but just that. IT cannot afford web sprawl, and so a framework for rationalization will always be in demand whether you call it a Portal or something new.

    New technologies continue to provide alternatives to existing methodologies and portals are no different. However, one thing that has distinguished portal frameworks is their ability to embrace new technologies. Struts, WSRP, JSF are all examples of this as are the Web 2.0 constructs like mashups and rich interfaces based on Ajax. This is all good news as the enterprise has a wealth of options to choose from.

    Posted by: plaird on January 11, 2007 at 2:56 PM

  • I must say, as a customer and developer, it’s great that Plumtree (I mean BEA, or is it Oracle?) management allows you guys to express your own opinions. It so happens I’ve spent quite a bit of time trying to get JBoss Seam (and ICEfaces) to work with AquaLogic 6.1. I’ve been looking at the iframe route, because the gateway stuff just isn’t working (it doesn’t properly rewrite the URLs for the Ajax stuff). I’ve come to hate the gateway. I bet it was a great idea before Ajax, but now it seems like almost every Web 2.0 application is incompatible (needs major modification to get it to work). Or maybe I just don’t understand how to get it to work. Is there any good documentation on it? I’m hoping for some major improvements when 6.5 comes out, though.

    Posted by: cmann50 on April 4, 2008 at 2:28 PM

Podcast Episode 4: bdg’s take on BEA’s Unified Portal Roadmap

I’m very pleased to announce that we’ve finally laid down the fourth episode of our podcast, nearly a year after Episode 3!

A lot has happened, a lot has changed, but a few things have stayed the same, including our trivia contest. Check out this episode’s question and e-mail us at [email protected] if you think you know the answer.

A good chunk of this episode covers bdg’s take on BEA’s Unified Portal Roadmap. You can read this press release if you want to get the official word from BEA Systems. Again, I want to remind everyone that I don’t work for BEA, so any opinions expressed on this blog or in the podcast are solely those of Chris Bucchere and bdg.

Enjoy the latest addition to the podcast and be sure to leave me a comment here if you like what you hear (or if you don’t).

Is Plumtree an “open” platform?

“We call this re-imagining Radical Openness. Radical Openness is our strategy to offer both J2EE and .NET versions of our entire application management framework, new points of integration for synchronizing the Enterprise Web environment with systems of record as well as desktop tools, and the ability to embed Enterprise Web services in any Web application,” Kunze continued. “Ultimately, we believe the way applications are being developed is fundamentally changing, and that with the Enterprise Web, applications can be developed in greater volumes, at lower cost, and on a wider variety of platforms than ever before.” –John Kunze, CEO, Plumtree, Inc. (excerpted from a 2003 press release).

bdg‘s response to this is that in some ways it is and in some ways it isn’t.

Plumtree is Open:

  • It runs on Windows (.NET or Java) or Solaris (Java)
  • It can embed portlets from anything that speaks HTTP(S)
  • It uses SOAP over HTTP for Crawlers, Authentication, Profiling and Search
  • It uses other nice, open-ish technologies like XML, SQL, HTML, CSS, Javascript
  • It runs on SQL Server or Oracle

Plumtree isn’t Open:

  • It runs only on Windows and Solaris, not AIX, HP-UX, Linux, or any other *nix
  • Its entire codebase, though highly pluggable and configurable, is proprietary
  • It uses proprietary headers (CSP, which stands for Content Server Protocol, no relation to Plumtree’s Content Server, don’t ask 🙂) to communicate information to and from portlets*
  • It only runs on SQL Server and Oracle, not MySQL or any other RDBMS

*Plumtree does support both WSRP and JSR-168 through plug-ins, though they limit functionality to some degree (more on this later).

I should preface all of this by saying that I still believe Plumtree is far and away the “best” portal solution for most mid- to large-size corporate intranets and even extranets for a whole host of reasons. I mean really, why would I bet my company on it if I didn’t?

However, it’s easy to confuse “open” with “pluggable” when they are in fact very different. When I hear things like, “my web service is written in Ruby on Rails, but .NET, Java and PHP clients use it all the time,” then I think “open.” (And no, if you’re wondering, I’ve never actually heard that, not even from Dave Thomas.) When I hear, “sure, you can replace the page navigation in my presentation layer, but only if you do it with Tapestry,” then I think “pluggable.”

Plumtree’s UI is pluggable; their WS/PRC server, EDK, CWS, AWS/PWS and SWS architectures are open; and their Portlets are, well, a little of both: they’re very open in that you can write them in anything that speaks HTTP(S), but only if you do it with their proprietary headers, but then, well, you can use JSR-168 or WSRP to get around that, but then, well, you can’t get all the functionality like Adaptive Portlets . . . .

When it comes to Plumtree’s Portlets (or Gadgets as they used to be called), it almost sounds like I’m arguing with myself.
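To see why I keep flip-flopping, look at what a portlet is on the wire. Here is a minimal, hypothetical Java servlet sketch; the header name is invented for illustration and is not the real CSP protocol. The endpoint itself is plain HTTP, which anything can serve (open), but the personalization depends on a header that only the portal’s gateway knows to send (proprietary):

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical sketch: the portlet is just an HTTP endpoint (anything that can
// serve HTTP could implement it), but it personalizes itself by reading a
// header that only the portal's gateway knows to send. The header name below
// is invented for illustration; it is not the real CSP protocol.
public class HelloPortlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String user = req.getHeader("X-Portal-User"); // placeholder for a CSP-style header
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<p>Hello, " + (user != null ? user : "guest") + "!</p>");
    }
}
```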

In summary, if you’re looking for a proprietary product that’s built on some open standards that you can extend using open standards (sometimes) but that only runs on certain platforms, well, then Plumtree is for you.

While I’m not in the business of making excuses for Plumtree, I must say that every time a company with a proprietary enterprise software product needs to support a new OS or browser or database or “thingy,” it has to run that combination through a testing matrix that grows exponentially each time a new “thingy” is added. That is a royal pain in the proverbial backside.

The complexity of the testing matrix alone is a great argument for open sourcing everything. (And yes, I understand that open and open source are not the same thing.) While I do see merit in commercial, proprietary software, I assure you that if Plumtree’s code base were open source it would already be running wild on Linux. Why? Because I would have compiled it myself. 🙂