Oracle Announces Roadmap for Plumtree / AquaLogic / WebCenter

UPDATE 2: I’ve incorporated all the great feedback and comments from ex-Plumtreevians, ex-BEA and ex- and current Oracle folks.

UPDATE: A bunch of Plumtreevians are contributing really good comments on this post over on Facebook.

I worked at Plumtree Software, Inc. from June 1998 to December 9th, 2002. In four-and-a-half years, the company grew from 25 employees to over 400 and it had thousands of happy customers before it was purchased by BEA Systems in 2005 for $220M. Here at bdg, we’ve been supporting dozens of Plumtree/AquaLogic Interaction (ALI)/WebCenter Interaction (WCI) customers since we opened our doors in December of 2002.

Back around 2005, BEA’s BID (Business Interaction Division) still had a lot of really smart engineers from Plumtree working on a lot of really interesting things, including Pages (think CMS 2.0), Pathways (kind of an enterprise version of del.icio.us) and Ensemble (the portlet engine/gateway, minus the overhead and UI of the portal itself).

They were also working on an enterprise social network, kind of a Facebook for business if you will.

However, there was a lot of wrangling at BEA, primarily between BID/AquaLogic and BEA’s flagship product, WebLogic (the world-class application server). Most of the strife came in the form of WebLogic Portal vs. AquaLogic/Plumtree Portal nonsense. Senior management at BEA, in their infinite wisdom, had taken a “let’s try not to alienate any customers” policy and in the process they confused all their customers and alienated/frustrated quite a few of them as well. They renamed Plumtree to AquaLogic User Interaction (ALUI), put in place a “separate but equal” policy with WebLogic Portal (WLP) and spewed some nonsense about how WLP was for “transactional portal deployments” vs. ALI for .NET and non-transactional portals, but no one, including BEA management, had any idea WTF that meant. To further confuse the issue, the WLP team, which also had a lot of really smart engineers, built products like “Adrenaline” (which was basically a less-functional and more buggy version of Ensemble) rather than do the unthinkable and integrate Ensemble into WLP so that WLP could finally host non-Java/JSR-168 portlets.

I was really pissed about BEA’s spineless portal strategy, their “separate but equal” policy between WLP and BID/ALUI and their waste of precious engineering resources in an arms race between WLP and ALUI rather than just stepping back, growing a spine, and coming up with a portal strategy.

Because I can’t keep my pie hole shut, I started several loud, messy and public fights with BEA management. Why? Because the real loser here is the customer.

And BEA, because management got mired in politics and chose to waste engineers’ time on in-fighting and competition instead of building enterprise Facebook, which Steve Hamrick and I arguably already wrote in our spare time. All they needed to do was product-ize that and they would have owned that market.

In 2008, Oracle inherited this clusterfuck of a portal strategy when they bought BEA for $7B+, giving me new hope that cooler heads would prevail and fix this mess. The first thing they did was fire all the impotent BEA managers who were afraid to make any decisions. It took Oracle a while, but alas, they have finally arrived at a portal strategy that makes sense. I first learned about this strategy when I crashed the WebCenter Customer Advisory Board last Thursday.

First of all, let me say this: under the leadership of Vince Casarez, current (and future) customers are in good hands.

I realized when he said “everyone still calls it Plumtree” that this was going to be a bullshit-free presentation.

He also said something regarding the “portal stew” at Oracle that puts all of my ranting and raving in perspective: “Oracle did not buy BEA for Plumtree or WLP, just like it didn’t buy Sun for Sun’s portal product.” To rephrase that, Oracle bought BEA for WebLogic (the application server, not the portal) and Sun for their hardware (not for Java, NetBeans and all the rest of Sun’s baggage).

So, let’s face it, portals are a relatively insignificant part of Oracle.

However, they’ve finally done what I called for in 2008 and what BEA never had the wits to do: pick a single portal strategy/stack and stick to it. So, if you’re a current Plumtree/ALUI/WCI or a current WLP customer, you have a future with Oracle.

Here’s the plan, as I understand it.

All roads lead to Web Center (not Web Center Interaction, but Web Center)

At the heart of Web Center will be WebLogic’s app server and portal. Plumtree/ALUI as a code base will be supported, but eventually put into maintenance mode and retired. You get nine or twelve years of support and patches (blah blah blah) but if you want new features, you need to switch to the new Web Center, powered by WLP. CORRECTION: WebCenter will not be “powered by WLP.” At its core will be the Oracle-developed, ADF-based WebCenter Portal running on WebLogic Server.

All the “server products” (Collaboration, Studio, Analytics, Publisher) will be replaced by Web Center Services or Web Center Suite

Publisher will be subsumed by WCM/UCM (Web Content Management / Universal Content Management, formerly Stellent). The other products will be more-or-less covered by similar offerings in Suite or Services.

What about Pages, Ensemble and Pathways?

Pages is dead as WCM/UCM does it better. Pathways is getting rolled into the new Web Center somehow, but I’m not sure how yet. Perhaps I can follow up with another blog post on that. Ensemble has been renamed “Pagelet Producer” — more on that below. CORRECTION: Pathways is now called “Activity Graph” and it will be part of the new WebCenter. Think of an enterprise-class version of the Facebook News Feed crossed with Salesforce Chatter and you’ll be on the right track.

What about .NET/SQL Server, IIS and everything else that isn’t Java?

This is a really interesting question and the key question that I think drove a lot of BEA’s failure to make any decision about portal strategy from 2005-2008. Plumtree had a lot of .NET customers and some of the biggest remaining Plumtree/ALUI customers are still running on an all-Microsoft stack. In fact, one of them told me recently that they have half a million named user accounts, two million documents and 72 Windows NT Servers to power their portal deployment.

So, let’s start with the bad news: Oracle doesn’t want you to run .NET/Windows and they REALLY don’t want you to run on SQL Server.

(That will change when Oracle acquires Microsoft, but that’s not gonna happen, at least not any time soon.) WebLogic app server and WLP/WCI, to the best of my knowledge, will not run on SQL Server. They will, however, run on Windows, but I would not recommend that approach.

It’s inevitable that large enterprises will have both .NET and Java systems along with a smattering of other platforms.

So, if you’re a .NET-heavy shop, you’ll need to bite the bullet and have at least one server running JRockit or Sun’s JVM, one of Oracle’s DBs (Oracle proper or MySQL), WLS/WLP/WCI and preferably Oracle Enterprise Linux, Solaris or some other flavor of Un*x. CORRECTION: WLP will run on SQL Server. Not sure about the new WebCenter Portal, but my guess is that it does not.

Now, for the good news: the new WCI, powered by WLP and in conjunction with the Pagelet Producer (formerly Ensemble) and the WSRP Producer (formerly the .NET Application Accelerator) will run any and all of your existing portlets, regardless of language or platform.

This was arguably the best feature in Plumtree and it will live on at Oracle.

.NET/WSRP and even MOSS (Sharepoint) Web Parts will run in WebCenter through the WSRP Producer. The Pagelet Producer will run portlets written in ANY language through what is essentially a next-generation, backwards-compatible CSP (Content Server Protocol, the superset of HTTP that allows you to get/set preferences, etc. in Plumtree portlets). So, in theory, if you’re still writing your portlets in ASP 1.0 using CSP 1.0 and GSServices.dll, they will run in the new Web Center via the Pagelet Producer. Time for us to update the PHP and Ruby/Rails IDKs? Indeed it is. Let me know if you need that sooner rather than later.
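To give a flavor of the get/set-preferences idea without pretending to reproduce the actual CSP spec, here’s a minimal Ruby sketch where portlet preferences ride on plain HTTP headers. The header names are made up for illustration; the real protocol defines its own.

```ruby
# Sketch of the CSP concept: preferences travel as HTTP headers between
# the portal gateway and the portlet. Header names below are hypothetical.
PREFIX = 'X-Portlet-Pref-'

# Parse preference headers from an incoming gatewayed request into a hash.
def read_prefs(headers)
  headers.each_with_object({}) do |(name, value), prefs|
    prefs[name.sub(PREFIX, '')] = value if name.start_with?(PREFIX)
  end
end

# Serialize preferences the portlet wants persisted back into response
# headers; the portal gateway strips these and stores them per user.
def write_prefs(prefs)
  prefs.each_with_object({}) { |(k, v), h| h["#{PREFIX}#{k}"] = v }
end

incoming = { 'Content-Type' => 'text/html', 'X-Portlet-Pref-city' => 'SF' }
read_prefs(incoming)         # => {"city"=>"SF"}
write_prefs('city' => 'NYC') # => {"X-Portlet-Pref-city"=>"NYC"}
```

Because the mechanism is just headers over HTTP, any language that can speak HTTP (ASP, PHP, Ruby, whatever) can play, which is exactly why the Pagelet Producer can be language-agnostic.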

How do I upgrade to the new WebCenter?

Well, first off, you have to wait for it to come out later this fall. Then, you have to start planning for what’s less of an upgrade and more of a migration. Oracle, between engineering and PSO, has promised to provide migration for all the portal metadata (users, communities, pages, portlets, security, etc.) from Plumtree/ALUI/WCI to the new Web Center, with WLP at its heart. (Wouldn’t it have made sense for some of those WLP engineers to start building that migration script in 2005 instead of trying to compete with ALUI by building Adrenaline? Absolutely.) All your Java portlets, if you’re using JSR-168 or JSR-286, will run natively in WLP through a wrapper in WebCenter Portal. Everything else will either run in the WSRP Producer (if it’s .NET) or in the Pagelet Producer (if it’s anything else). The only thing I don’t fully understand yet is how to migrate from Publisher to UCM, but I’m due to speak with Oracle’s PSO about that soon. Please contact me directly if you need to do a migration from Publisher to WCM/UCM that’s too big to do by hand.

The only other unanswered question in my mind is how the new WebCenter will handle AWS/PWS services — the integrations that bring LDAP/AD users and profile information/metadata into Plumtree/ALUI/WCI. I wrote a lot of that code for Plumtree anyway, so if Oracle’s not working on a solution for the new Web Center, perhaps I can help you with that somehow as well. CORRECTION: User and group objects are fully externalized in Web Center, so there is no need for AWS/PWS synchronization. (Thanks, Vince, for pointing that out.)

So, that’s my understanding of the new portal strategy at Oracle.

Kudos to Oracle’s management for listening to their customers, making some really hard decisions and picking a path that I think is smart and achievable.

I’m here to help if you have questions or need help with your portal strategy or technical implementation/migration.

Q&A

(Some other notes about discussions that have spawned from this original post.)

Q: What’s the future of the Microsoft Exchange portlets (Mail, Calendar and Contacts) and the CWS for crawling Exchange public folders? Retired and replaced with something Beehive-related? Still supported? For how long? Against what versions of Exchange?

A: We’ve got updated portlets for Mail & Calendar in WebCenter now for Exchange 2003 & 2007. We don’t have a Contacts portlet but it could be added quickly if we see a large demand. Crawling public folders can be done with an adapter we have for SES [Oracle Secure Enterprise Search] already. We’re working but aren’t done with a new version of KD on top of the new infrastructure that will come out post PS3. (Contributed by Vince Casarez.)

Q: If migration scripts are provided to move WCI metadata into WebCenter, I understand that a portlet is a portlet, but what about pages and communities, users and groups, content sources and crawlers, etc.? Do they all have analogous objects in WebCenter or is there some reasonable mapping to some other objects?

A: Pages and Communities follow a model where we extract/export the meta data and data, then run it through a set of scripts that create a WebCenter Space for each Collab project/community and a JSPx page for every page. Users and Groups will come out of the LDAP/AD directory they are already using and the scripts associate the right permissions to each of the migrated objects. I don’t recall what we did about crawlers but since we use SES directly, all the hundred or more connectors we ship for SES are now available for direct usage. The scripts go through a multiphase approach to move content, then portlets, then pages, then communities so that dependencies can be fixed up versus trying to do a manual fix up. (Contributed by Vince Casarez.)

Q: Will any existing WCI-related products that are slated for retirement (e.g. Publisher, Collab, Studio, Analytics, etc.) be re-released with support for Windows Vista, Windows 7, IE 8, IE 9 or Chrome?

A: For Publisher, we are planning a set of migrations to quickly move them to UCM. For Collab & Studio, we have new capabilities in WebCenter Spaces to match these functions. For Analytics, we’ve also rebuilt it on top of the WebCenter stack with over 50 portlets for the different metrics and made sure we provide APIs/access to the data directly. This analytics data also feeds the activity graph in providing recommendations for people on the content and UIs that are relevant to them. These are tied into the personalization engine that we brought over from the WLP side. So there is a rich blending of the best features from WLP with WCI key features. As for Neo [the codename for the next release of WCI], we are certifying the additional platforms. On the IE 8 front, we’ve just released patches for WCI 10gR3 customers to be able to use IE8 without upgrading to Neo. (Contributed by Vince Casarez.)

An Open Letter to the Java Community

In the wake of the Sun acquisition by Oracle, the much-lambasted Oracle vs. Google lawsuit over Google’s alleged JavaME patent infringement, and the rumblings I’ve been hearing at Oracle Open World / JavaOne / Oracle Develop 2010, I have a message to the Java community:

Quit your bitching and moaning and start doing something productive!

Now that I’ve offended all the Java fanboys/girls out there, let me explain:

  1. Why I’m qualified to give you all one big collective kick in the ass, and
  2. Why this collective ass-kicking is coming from a place of love, not hate.

My first experience with Java was in 1994/95, when Stanford started switching its Computer Science curricula from C/C++ to Java. After struggling with memory management, segmentation faults, horrific concurrency problems and the other ways I kept shooting myself in the foot, Java was a breath of fresh air. My first corporate experience with Java was working as a summer intern for JavaSoft (a former subsidiary of Sun) in 1997 porting Patrick Chan’s Java 1.0 sample applications (remember Hangman?) from JDK 1.0 to JDK 1.1.

I went on to join Plumtree. Originally, they were a Microsoft darling. I helped lead the charge to switch them from COM/DCOM, ASP 1.0 and SQL Server to Java and Oracle.

In 2002, I started a Plumtree-focused consulting firm, helping 50+ customers install, maintain and grow their Plumtree deployments. In all but a precious few of those accounts, I wrote all of the code in Java/JSP.

Since about 2008, we’ve been using Ruby on Rails for most of our software. When Rails hit the scene, I had a “breath of fresh air” moment similar to when I first encountered Java.

But this letter is not about Ruby or about Rails; it’s about Java. A language I’ve used since its very first iteration in 1994/95 and up to the present day. A language wherein I’ve written at least half a million lines of code, most of which still run in production today inside Plumtree/AquaLogic User Interaction/WebCenter Interaction, at major customer sites in the corporate world and in the federal government.

So, fast-forward to today, this is what I’m hearing about Java, in a nutshell:

  1. Oracle’s going to kill/close-source/fuck up Java
  2. Life’s not fair!
  3. Blah blah blah

All of this bitching and moaning starts right at the top with Java’s grandfather and CEW (Chief Executive Whiner) James Gosling, who is showing incredibly poor leadership, lousy judgment and massive immaturity with his totally irrelevant, outdated and hateful anti-Oracle bitch-fest.

I’ve heard people whining about everything around them that’s not running on Java: mobile applications, web sites, conference tools, Twitter, Facebook, etc.

I even saw someone complain on Twitter that the Black Eyed Peas, who Oracle paid an undoubtedly handsome sum of money to entertain your sorry asses last night, gave a shoutout to Oracle and not “The Java Community.” Seriously? Give it a rest, folks!

There are lots of choices of development stacks and people are free to choose the one that works best for them.

Embrace that freedom; don’t fight it.

And the word Oracle doesn’t mean “database” anymore. It is an umbrella term that could refer to thousands of different products.

Let’s take a look at some of the advantages of Oracle owning Java.

With respect to OpenWorld, the Java Community got:

  1. Your own conference with around 400 sessions
  2. Your own tent
  3. Your own street closure (Mason Street)
  4. An invitation to OTN Night, one of the best parties at OpenWorld

More importantly, with Oracle Corporation, the Java community gets:

  1. Cemented into the infrastructure of nearly all of Oracle’s products, meaning that nearly all of their customers — most of the Fortune 1000 — are now Java shops (if they weren’t already)
  2. Stability, stewardship, thousands of really bright engineers and nearly unlimited resources
  3. One of Corporate America’s most powerful legal teams backing you up
  4. A secure and promising future, including a just-announced roadmap for JDK 7 and 8

And, with all that being said, guess what?

Java is still open source.

Do you know what that means?

Let me answer that question with another question: what brilliant phoenix rose from the ashes of the debacle that was the AOL acquisition of Netscape in 1998?

It was Firefox, a free, open source-based browser that revolutionized the massively screwed up browser market and gave the dominant browser (IE 5, and later, IE 6) a true run for its money. From Wikipedia:

“When AOL (Netscape’s parent) drastically scaled back its involvement with Mozilla Organization, the Mozilla Foundation was launched on July 15, 2003 to ensure Mozilla could survive without Netscape. AOL assisted in the initial creation of the Mozilla Foundation, transferring hardware and intellectual property to the organization and employing a three-person team for the first three months of its existence to help with the transition and donated $2 million to the foundation over two years.”

IBM’s symbiotic relationship with Eclipse is another great example.

So, dear Java community, to ensure your own survival, please, in the name of Duke, stop complaining and start thinking strategically about how you can “pull a Firefox” here. You’re all brilliant engineers, so start putting all the effort you’re wasting in complaining toward something productive.

I love you all and I love all your passion and energy, but I hate your bitching — use that energy to go save the world, Java style!

Was The Facebook Hacker Story Irresponsible Journalism?

This article originally appeared as a guest post on All Facebook.
 
The media scandal du jour relates to how WikiLeaks leaked all this classified information about the war in Afghanistan, but let’s not overlook this extremely irresponsible piece of reporting that MSNBC published earlier this week about an alleged Facebook privacy breach.

Why is it irresponsible? Well, before I break it down for you, let’s take a few journalism lessons from Robert Scoble, who explains why Flipboard (an iPad application that turns RSS feeds into a magazine-like layout) is superior to the one-item-after-another streams of information that we’re used to browsing on the Facebook news feed, Twitter, etc. He writes:

“I remember that early eye tracking research showed that pages that had a single headline that was twice as big as any other headline were more likely to be read. Same for pages with photos. If you put two photos of equal size on the page, it would be looked at less often, or less completely, than a page that had a photo that was at least twice as big as any other.
I won a newspaper design contest in college because of this: my designs made sure that they included headlines that were twice as big as any other and photos that were twice as big as any other.”

MSNBC used these exact techniques to spin an oh-so-scary story about an alleged Facebook privacy breach.
This first screen shot is what I could see on an average (15″) monitor “above the fold.” (You can click the image to see it in actual size.) Note the massive font used for the headline and the four tiny images. Keep in mind that some internet users don’t know how to scroll (really, I’m not kidding), so by not showing a broken line of text at the bottom of the page, many people won’t know that the rest of the article is even there, let alone how to get to it.

If you endeavor to read past the headline, you’ll notice that they “end” the story with more scary talk from the alleged “hacker” and hide the final three paragraphs behind this completely absurd “Show More Text” link, which serves no purpose other than to obscure the truth, which is in the final (that’s right, the very last) paragraph of the article:

“No private data is available or has been compromised. Similar to a phone book, this is the information available to enable people to find each other, which is the reason people join Facebook. If someone does not want to be found, we also offer a number of controls to enable people not to appear in search on Facebook, in search engines, or share any information with applications.”

So, if I were to email MSNBC and tell them that I was “a researcher” or “a white-hat hacker” and I had discovered a huge scam — “You see, these conspirators from Yellow Pages have been collecting and amassing all this private data and delivering it to everyone’s doorstep!” — they would think I was completely insane. Well, change “Yellow Pages” to “Facebook” and “delivering it to everyone’s doorstep” to “making available for download” and I think you see my point.

So how did MSN get away with posting this completely absurd story? To understand that, we need to look at their demographic. I went to Alexa.com to find out. As I had guessed, their readers lean toward females of the Baby Boomer generation and up. The same people who don’t know how to change their default settings in their default browser (IE6) on their default operating system (Windows XP) to anything other than MSN.com. Big surprise? No: MSNBC is preying on innocent victims by using psychological tricks to create phobias for things that they don’t understand. And there’s nothing scarier than the fear of the unknown.

The premise that the media is out to scare us all into staying home and buying more security systems/guns/etc. is not news; Michael Moore built a really compelling case against Big Media’s fear tactics in Bowling for Columbine in 2002. However, an interesting question to ask in 2010 is:

if Big Media is prone to Big Lies and Misinformation, can social media serve as an antidote?

In other words, can investigative reporting by “citizen journalists” help suss the truth out of all the lies?

To help answer the question, I turned to the 875+ comments on the article. To do some highly unscientific semantic analysis, I read a small sample to identify which keywords were common in neutral-to-favorable comments (information, private/privacy, security, people/friends, public) vs. which keywords were prevalent in highly negative responses (wrong, attention, fame, fraud, scam, boring, crap). Then I ran all the comment text through a histogram tool.
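A minimal Ruby sketch of that keyword tally, using the keyword lists above (the sample comments here are invented for illustration; the real run used the article’s actual comment text):

```ruby
# Count keyword hits per sentiment bucket across a pile of comment text.
FAVORABLE = %w[information private privacy security people friends public]
NEGATIVE  = %w[wrong attention fame fraud scam boring crap]

def keyword_counts(comments, keywords)
  # Tokenize all comments into lowercase words, then total the keyword hits.
  words = comments.join(' ').downcase.scan(/[a-z']+/)
  keywords.sum { |k| words.count(k) }
end

comments = [
  'This is public information, like a phone book.',
  'Total scam -- this guy just wants attention and fame.',
  'People control their own privacy settings.'
]
fav = keyword_counts(comments, FAVORABLE) # => 4
neg = keyword_counts(comments, NEGATIVE)  # => 3
```

It’s a blunt instrument (no stemming, no negation handling, no sarcasm detection), which is why I called the analysis highly unscientific in the first place.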

Unfortunately, the results of my study show that most comments were favorable by a ratio of over 5:1. However, it all goes back to the demographic. After glancing at the TechCrunch coverage on this, it seems about 60-70% of the commenters call bullshit, which seems to be in line with a younger, male-dominated, tech-savvy demographic.

So what do you think? Can commenting/voting/Tweeting uncover the truth obscured though it is by the news outlets that report it? Or will we all just continue to propagate the monkey excrement that the mass media keep throwing at us?

Leave a comment to tell us what you think!

Friday Fun: Rails, Django and Caprese Salad

I had this Twitter argument today with former coworker, fellow web developer and friend Bryan Hughes:

bucchere: The Spring Framework is driving me crazy. If this were Rails, I’d be done already.

huuuze: @bucchere If it was Django, it’d be faster and ready to scale.

bucchere: @huuuze I’m not interested in a religious war right now. Please don’t provoke me. 😉

huuuze: @bucchere No war — even the Rails guys agree: http://is.gd/1ZZu

bucchere: @huuuze Apparently Gluon is even faster than Django. But is anyone using it? You have to consider factors other than performance.

huuuze: @bucchere Um, Django’s used by thousands. It’s not some fringe framework. Guaranteed anyone that’s used RoR and Django will prefer Django.

bucchere: @huuuze How could you make that “guarantee” when you’ve never used Rails? I said I didn’t want a religious war, you damn Python Nazi. 😉

huuuze: @bucchere I’ve built a couple sites using Rails. How many sites have you built using Django?

bucchere: @huuuze bdg’s svn server just crashed. I have more important things to do than continue this pointless argument.

huuuze: @bucchere Then quit wasting time on Twitter. I’m not trying to start anything with you. Just be aware that RoR isn’t the only game in town.

bucchere: @huuuze There are lots of religions too. And if I want to pick one and say the others are “wrong” then that’s my prerogative.

huuuze: @bucchere Whatever dude. Not sure why you’d say Django is “wrong.”

bucchere: @huuuze All I’m saying is that language/framework wars are like religious wars. I have mine, you have yours. Leave it at that.

bucchere: Enjoying a homemade caprese — my favorite salad. (Now watch while @huuuze tells me his favorite salad is better than mine.)

huuuze: @bucchere Having never tried caprese, I have no opinion on the matter.

bucchere: @huuuze LOL. I’m glad we can still be friends. 🙂

huuuze: @bucchere Get real. I’m only friends with Christians and Django users. 😉

* * *

So the time it took me to compile this discussion made me wonder why Twitter doesn’t have threaded discussions. Summize (now search.twitter.com) has “conversations” but, like Facebook’s wall-to-wall feature, just because the posts occur consecutively, it doesn’t mean that they’re actually “in” the same thread. If I were re-writing Twitter, adding threaded discussions — and with it, the ability to reply to a specific Tweet — would be near the top of my list.
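The missing piece is small: give every tweet an optional in-reply-to pointer and walk the chain backwards to reconstruct the conversation. A sketch in Ruby (data model invented for illustration, obviously not Twitter’s actual schema):

```ruby
# Each tweet optionally points at the tweet it replies to.
Tweet = Struct.new(:id, :user, :text, :in_reply_to)

# Walk the reply chain from a tweet back to the thread's root.
def thread_for(tweet, index)
  chain = [tweet]
  chain.unshift(index[chain.first.in_reply_to]) while index[chain.first.in_reply_to]
  chain
end

tweets = [
  Tweet.new(1, 'bucchere', 'The Spring Framework is driving me crazy.', nil),
  Tweet.new(2, 'huuuze',   'If it was Django, it would be faster.', 1),
  Tweet.new(3, 'bucchere', 'Not interested in a religious war.', 2)
]
index  = tweets.map { |t| [t.id, t] }.to_h
thread = thread_for(tweets.last, index)
thread.map(&:id) # => [1, 2, 3]
```

With that one extra field, “conversations” stop being a guess based on consecutive timestamps and become an actual linked thread.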

Happy Friday everyone (and happy 3-day weekend for hard-working and hard-twittering Americans)!

Middleware for the REST of us

I’m sitting in my third Oracle Fusion Middleware briefing, this one at the Willard Hotel in Washington, DC. Thomas Kurian has been going through all the products in the Oracle stack in excruciating detail.

First let me say this: Thomas Kurian is a really smart guy. He holds a BS in EE from Princeton summa cum laude (that’s Latin for really fucking good). He holds an MBA from the Stanford GSB. He’s been working for Oracle forever and he even knows how to pronounce Fuego (FWAY-go). I’m duly impressed.

Unfortunately, all those academic credentials and 10+ years in the industry are barely the minimum requirement for getting your head around the middleware space. Either I don’t have enough (0) letters after my name, or I just don’t get it.

For starters, there are way too many products — the middleware space is filled with “ceremonious complexity” (to quote Neal Ford). App servers, data services layers, service buses, web service producers and consumers — even portals, content management and collaboration have been sucked into this space. Don’t get me wrong: the goals of the stack are admirable — middleware tries to glue together all the heterogeneous, fragmented systems in the enterprise. Everyone knows that most enterprises are a mess of disparate systems and they need this glue to provide unified user experiences that hide the complexity of these systems from the people who have to use them. That makes the world a better place for everybody.

That was also, not coincidentally, one of Plumtree’s founding principles and the concept — integrating enterprise systems to improve the user experience — has guided my career since I got my lowly undergraduate degree in Computer Science from Stanford in 1998.

So, it’s a good concept, however, if you’re considering middleware because you’re trying to clean up the mess that your enterprise has become, you need to ask yourself the following fundamental question:

Does middleware add to or subtract from the overall complexity of your enterprise?

Your enterprise is already insanely complicated. You’ve got Java, .NET, perhaps Sharepoint, maybe an enterprise ERP system like SAP and say, an enterprise open source CRM system like SugarCRM or a hosted service like SalesForce.com. The bleeding edge IT folks and even (god forbid) people outside of IT are installing wikis written in PHP (e.g. MediaWiki) along with collaborative software like Basecamp written in Ruby on Rails. I’m not even going to mention all the green-screen mainframe apps still lurking in the enterprise — wait, I just did. This veritable cornucopia of systems just scratches the surface of what exists at many large — and even some mid-to-small-sized companies — today.

So clearly there’s a widespread problem. But what’s the solution?

At the end of his impressive presentation, I asked Thomas the following question:

“How can middleware from Oracle/BEA help you make sense of the fragmented, heterogeneous enterprise when you have existing collaborative (web 2.0) technologies written in PHP, Ruby on Rails, etc. running rampant throughout IT and beyond?”

(Okay, so I wasn’t exactly that pithy, but it was something close to that.)

His Aladdin-esque answer came in the form of three choices:

    1. “Take control of” and “centralize” your IT systems by replacing everything with Oracle WebCenter Spaces
    2. Ditto by migrating everything to UCM (Stellent)
    3. Build a services framework and aggregate everything in one of four ways:
        1. Use a Java transaction layer (JSR 227)
        2. Use a portlet spec like JSR 168 or WSRP
        3. Build RESTful web services
        4. Use the WebPart adapter for Sharepoint

I like to call answers one and two “The SAP Approach.” In other words, we’re SAP, we’re German, wir geben keinen Scheiß (we don’t give a shit) about your existing enterprise software, you’re now going to do it the SAP way (or the highway).

Will companies buy into that? Some companies may. Many will not. ERP is a well understood space, so this approach has worked for SAP. Enterprise 2.0 is not terribly well understood, so that means even more diversity in the enterprise software milieu.

So the only approach that I believe in is #3: integrate. Choose the right tool for the right problem, e.g. the WebPart adapter if you’re using Sharepoint. Use REST when appropriate, e.g. when you need a lightweight way to send some JSON or XML across the wire between nonstandard or homegrown apps. Use JSR 168/286 for your Java applications. Even use SOAP if the backend application already supports it.

Keep things loosely coupled so that you can plug different components in and out as needed.
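As a sketch of that glue (adapter names and payloads invented for illustration): each backend hides behind the same tiny interface, so swapping one system for another touches a single adapter rather than the aggregating UI layer.

```ruby
require 'json'

# Each enterprise system gets a thin adapter that speaks the backend's
# native protocol but exposes one common method returning a plain hash.
class CrmAdapter
  def fetch(id)
    # In real life: an HTTP GET against the CRM's REST endpoint.
    JSON.parse('{"id": "42", "name": "Acme Corp"}')
  end
end

class WikiAdapter
  def fetch(id)
    # In real life: a call to the PHP wiki's API, or a screen scrape.
    JSON.parse('{"id": "42", "title": "Acme runbook"}')
  end
end

# The user-facing layer only knows the common interface, never the backends.
def aggregate(adapters, id)
  adapters.map { |a| a.fetch(id) }
end

results = aggregate([CrmAdapter.new, WikiAdapter.new], '42')
```

The point isn’t the ten lines of Ruby; it’s that the aggregation layer stays ignorant of whether the data came from SAP, SugarCRM, a MediaWiki install or a green-screen mainframe.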

This requires a lot of development — the glue — but, I don’t think there’s any way around that. (You should take that with a grain of salt, because my company has been supplying the government and the commercial world with exactly that kind of development expertise since 2002.)

As for the overarching, user facing “experience” or “interaction” product — that’s where I’ve always used Plumtree (or AquaLogic Interaction).

Will I start using WebCenter Spaces? At this point, I’m still not sure.

If it can be used as the topmost bit of the architectural stack to absorb and surface all the enterprise 2.0 software that my customers are running, then perhaps. If it’s going to replace all the enterprise software that my customers are running, then no way José.

This conundrum really opens up a new market for enterprise software: I call it “Middleware for the REST of us” or MMM (not M&M, 3M or M3, because they’re already taken): “Mid-Market Middleware” — similar to the way 37signals approaches (with a great deal of hubris and a solid dose of arrogance) the “Fortune Five Million” by marketing their products toward the whole long-tail of small and medium-sized companies. Maybe the world needs a RESTful piece of hardware that just aggregates web services and spits out a nice UI, kind of like the “Plumtree in a Box” idea that Michael Young (former Plumtree Chief Architect, now Chief Architect at Redfin) had back in the last millennium.

Oracle WebCenter Spaces might be the right choice for some very large enterprises, but what about the REST of us?

How the New Facebook Utterly Destroyed my Favorite Application (and Why That Makes Me Sad)

I used to love Feedheads. It’s a simple, elegant and beautiful application that does one thing really well: help you share your Google Reader shared items.

Unfortunately, the “new” Facebook has rendered the application utterly useless and I can’t think of a good way, as an end-user, to fix it. In fact, as someone who’s built two Facebook apps, I can’t even think of a way that the Feedheads developers can fix it. What a calamity.

So here’s the problem: the News Feed (and the Mini Feed) introduced an option that allows end-users to set the story “size.” When a Google shared item story comes through Feedheads now, it defaults to the “one line” size and as a result, it doesn’t say anything other than “Chris posted an item to Feedheads.”

Thank you very much, Facebook. That piece of information is completely useless. People who are reading your feed need to click through into the Feedheads application in order to see what story you posted — and the whole point of Feedheads is to help you share your shared items, not make them harder to find.

(As a result of all this, Facebook also broke one of my applications, called WhyI. It has < 200 users, so very few people care, but . . . the point of the app was to help people ask themselves and their friends questions that have to be answered in five words or fewer. And of course, the questions and answers would show up in the Mini Feed and News Feed. But not anymore! Now it just says: “Chris posted a new mini-update using WhyI.” Again, a totally useless piece of information. Drats.)

As an end-user, I can set the “size” of each feed item. So that means, after I hit Shift-S in Google Reader — which doesn’t take much effort — I have to wait for the story to be published in Facebook and then, if I remember (which at this point is unlikely), I have to go into that little drop down on the right and set the size to “small” instead of the default, which is “one line.” And here’s the best part: I can’t tell Facebook to remember this, so I have to do it every time.

All this just to share a shared item on Google Reader through Feedheads . . . ick.

Here’s the best part. I just noticed that Facebook added their own feature to the new and “improved” news feed. You can import your shared items from Google Reader! And, not surprisingly, the news feed actually shows the stories’ titles. In other words, Facebook took a great application — Feedheads — and replaced the functionality with their own feature; in the process, they rendered Feedheads useless.

This makes me sad. I only have one thing to say:

Wow, Facebook, how very Microsoft of you.

meebo Sells Out

It has been a long time coming, but meebo has finally succumbed to the pressures of a basic business truth that they’ve been dutifully ignoring:

in order to stay in business, you actually have to make money.

Since their initial $3.5M financing round in December 2005, they’ve been very good at two things: spending money and generating buzz around their service offering: free, browser-based multi-protocol instant messaging that supports AIM, MSN, Yahoo!, GTalk, Jabber and ICQ. New features, including “meebo rooms” and iPhone integration, have also generated a fair amount of hype. But back to dollars and cents . . . .

Their primary investor is Sequoia Capital, which has a great track record that includes companies like Cisco, Yahoo!, PayPal and Plumtree. From their point of view, investing in meebo in order to flip it to a larger company doesn’t seem viable because if any of the big players (Google, AOL, Yahoo! or Microsoft) bought meebo, they would most certainly shut down the other channels, which is one of meebo’s most compelling features. So, how does Sequoia intend to monetize meebo?

The team has been fairly tight-lipped about their plans, although co-founder Seth Sternberg has dropped a few hints on their blog including selling ad space, partnering with other providers to provide fee-based SMS or other services, and (my personal favorite) selling virtual goods to “spice up” your IM avatar.

The San Jose Mercury News quotes Seth as saying:

“There are tons of ways we can make money, but we have to choose our priorities carefully.”

When you take the venture capital route, however, choosing the company’s priorities involves more than just the management team. Whether it was investor pressure or just common sense, we’ll never know, but yesterday meebo finally started devoting some of their copious dead space to advertising. They’re calling the new feature “meebo sponsors,” which is a euphemism for, ahem, “meebo advertisements.”

I have to give the team some credit because the introduction of ads on meebo was tastefully done — the ad is small, out-of-the-way and you can disable it with a single mouse click. However, if you click on the “try the Talib background” link, the results are quite shocking. Moreover, there’s no easy way to stop “trying” the Talib background. You have to navigate into your preferences and reset the background to whatever you had before.

A little “Are you sure?” could have gone a long way here.

meebo also plans to use the “holy grail” of advertising — targeting — to make sure these sponsor messages hit home. From the meebo blog: “We’ve already got a bunch of ideas to make [the ads] better, including preferences for the types of things you’re interested in. We’re hoping to figure out how to be selective, so if you indicate that you like movies, but not rap music, future sponsors will reflect that for you.”

It’s just a matter of time before meebo will be combing through your IM conversations looking for keywords like “BMW” or “Rolex” and using those data points to drive targeted ad campaigns.

Succumbing to financial pressure to allow advertising on your site is a slippery slope.

I’m curious to see where this leads, and whether meebo can keep serving ads — and their free service — without the ads becoming so obtrusive that their user community resents them.

While I commend them for finally taking a step toward financial responsibility, I worry that it won’t be long before the ads on meebo become burdensome enough that users no longer want to use the service, e.g. AOL’s pre-welcome-screen pop-ups of the late ’90s.

I’m definitely interested to see how this one plays out.

ALI G6 on Ubuntu?

Some of you may be familiar with my rants on the bdg blog about how Linux just isn’t ready for the desktop. My opinion on that matter has largely changed with the release of Ubuntu 6.06 LTS (Dapper Drake), which I have been running with minimal hassle on my newish Gateway MP6954 laptop since last summer. It has a tasty coffee-colored UI (mmm), it NEVER crashes, it basically takes care of itself with updates and has equivalent — or better — software for pretty much everything you’d ever want to do with Windows or OS X at a great price: free.

Of course ALUI is only officially supported on two Linux platforms: RHEL and SUSE. But Linux is Linux, right? Well, sort of. I had all sorts of “fun” getting ALUI running on Oracle on Fedora. With Ubuntu, however, getting Oracle and ALUI up was a breeze.

First off, unless you call yourself a DBA, you don’t want to mess around with a full-blown Oracle instance. Instead, just follow these easy steps to install something called Oracle XE. It has certain limitations — the most important of which is that you can’t create more than one database.

My first — and really my only — mistake during this setup process came next (and it’s related to this one-database issue). I tried to drop the XE default database (ORACLE_SID=XE) and run the crdb1_oracle_unix.sql script to create the PLUM10 database. This was a bad idea. I poked around on Google a bit and then thought, well, I don’t really need my own database. (Had I had this epiphany before starting down that path, I could have saved two hours and had ALUI up and running on Ubuntu in fewer than 30 minutes.) So, instead of running crdb1_oracle_unix.sql, just edit create_tables_oracle.sql and remove any reference to PLUMINDEX, then run the following commands on the XE database:

$sqlplus sys as sysdba
SQL>create user plumtree identified by password;
SQL>grant connect, resource, create view to plumtree;

This creates the plumtree user on the XE database, which gives ALUI its own schema, which, for our purposes, is just as good as having your own DB. Now you can basically just run the out-of-the-box scripts (keeping in mind the changes I made to create_tables_oracle.sql):

$sqlplus plumtree/password@XE
SQL>@create_tables_oracle.sql
SQL>@load_seed_info_oracle.sql
SQL>@stored_procs_oracle.sql
SQL>@postinst_oracle.sql

At this point, ALUI was ready to rock. I only ran into one small snag. One of the native search libraries complained about a missing LD_LIBRARY_PATH dependency on libstdc++. This was not a showstopper. I did the following:

$ln -s /usr/lib/libstdc++.so.6.0.7 /usr/lib/libstdc++-libc6.1-1.so.2

From there I configured the bundled Tomcat to host the portal and the image server and voilà, ALUI 6.0SP1, in all its glory, was up and running on Ubuntu. (BTW, I would have used ALUI 6.1.0.1, but when I wrote this article, the RHEL and SUSE versions weren’t available yet.)

Comments

Comments are listed in date ascending order (oldest first)

  • I’ve also successfully installed ALUI 6.1.1 (6.1MP1) on Ubuntu 7.04 (Server). Required one workaround for the LAX installer shared libraries problem (can’t find libc.so.6 etc):
    $cp AquaLogicInteraction_v6-1_MP1 AquaLogicInteraction_v6-1_MP1.bak
    $cat AquaLogicInteraction_v6-1_MP1.bak | sed "s/export LD_ASSUME_KERNEL/#xport LD_ASSUME_KERNEL/" > AquaLogicInteraction_v6-1_MP1

    Posted by: rdouglas on May 7, 2007 at 10:45 AM

  • hey Chris, appreciate the post! just wanted to give the hint that to change the plumindex on the create_tables script, you can do this in vi: :1,$s/PLUMINDEX/USERS/g

    Posted by: jbell on June 2, 2007 at 8:57 PM

  • Chris, nice post…I referenced this post while trying to get the new ALUI 6.1 quickstart installer to correctly install the portal on windows xp. I’ve tried the installer on several xp machines but it is still failing…i think the error has to do with the way the installer is setting up the paths/environment variables – when i run the diagnostics tool i get an invalid entry point…my paths look correct and i’ve tried re-installing multiple times on multiple machines…any ideas? Thanks.

    Posted by: phil- on September 10, 2007 at 8:41 AM

  • Well, after some troubleshooting I figured it out…here is the solution…I hope this is helpful to someone in the future…I needed to rename the icuuc30.dll in C:\WINXP\system32 to icuuc30_from_system32.dll and paste the icuuc30.dll from C:\bea\alui\common\inxight\3.7.6\bin\native into the C:\WINXP\system32 directory before the installation would work.

    I did try just moving the INXIGHT_PATH variable so that it is loaded on the PATH before C:\WINXP\system32, but the error still occurred. BTW – icuuc30.dll is a component of ICU (International Components for Unicode) version 3.0

    Posted by: phil- on September 12, 2007 at 11:47 AM

  • Thank you so much for this post, I had the same problem on XP. I’m just curious, how were you able to debug this problem? What pointed you to icuuc30.dll?

    Posted by: fhkoetje on December 4, 2007 at 9:31 AM

WLP + Adrenaline = ALI?

I recall sitting in a meeting in 1998 where we were discussing how to aggregate portlet content into a portal page. We talked a lot about iframes but couldn’t consider them as a serious integration option because of security, scalability/performance, caching and portal-to-portlet communication. Instead, we spent the next year building and testing the HTTPGadgetProvider, which later came to be called the “(Massively) Parallel Portal Engine.” (The term “Massively” was later dropped and I believe the name “Parallel Portal Engine” or PPE for short finally stuck.) I won’t go into details about how the PPE works, but if you’re interested, you can check out this great page in edocs that sums it up nicely.

So anyway, iframes are certainly a reasonable way to build a portal in a day. But, in terms of building a robust enterprise portal that can actually withstand the demands of more than, say, ten users, and that will pass even the most rudimentary security evaluation, iframes are complete nonsense.

So, today, during my lunch break, I attended Peter Laird’s Webinar, which he advertised in his nascent blog. It was all about enterprise mashups, a topic by which I’m very much intrigued. (Recall that PTMingle, my winning entry in the 2005 Plumtree Odyssey “Booth of Pain” coding competition was a mashup between Hypergraph, Google Maps, del.icio.us and Plumtree User Profiling.)

Imagine my surprise when Peter described how you can mash up Google “Gadgets” and other resources available via URLs using Adrenaline, a “new” technology from the WLP team based on, of all things, iframes. It was like entering a wormhole and being transported back to 1998. (I was single again, I had no kids, I was thinner and I had more hair on my head . . . and less on my back.) But the weird thing about this parallel universe is that BEA engineers were telling me that iframes were a great way to mash up enterprise web content and that intranets all over the world could benefit from this revolutionary concept. Intranets? You mean the things that everybody replaced with portals in the last millennium? Iframes? I must have been dreaming . . . .

When I finally came back to my senses, a few things occurred to me.

First of all, it’s 2007. Portals are a thing of the past. For some of us, that will be a hard pill to swallow. But let’s face it, innovators have moved on to blogging, wikis, tagging/folksonomies and lots of other nice web 2.0 sites that all have rounded corners. The bleeding edge folks have decided that many is smarter than any. The rest of the world will catch up soon.

Secondly, if you are still building a portal or composite application of any flavor, iframes are not a viable solution. They fall short in the following ways:

Portal-to-Portlet Communication

Say you want to send information (like the name of the current user) down to a portlet running in an iframe. Hmmm, the request for an iframe comes from the browser, not from the portal. So, if anything needs to be passed into the iframe, I guess you have to put it in the URL in the request for the iframe. That’s great, but that URL is now visible in the page’s source. So a simple, “Hello [your name]” portlet where the portlet gets the name from the portal is doable. But what about passing a password? That information would need to go first to the browser and then back to the remote tier, which, from a security standpoint, is a complete showstopper.
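A toy Python sketch (the URL and values here are hypothetical) makes the exposure obvious: whatever the portal wants to hand to an iframe’d portlet ends up in the src attribute, and therefore in the page source that every end user can view:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# The portal wants to pass the current user's name to the portlet.
params = {"user": "chris"}
iframe_src = "http://portlet.example.com/hello?" + urlencode(params)

# The browser (and anyone doing View Source) sees the full URL:
page_html = f'<iframe src="{iframe_src}"></iframe>'

# Anything sensitive -- a password, a session token -- would be just as
# visible, which is why iframes can't carry credentials safely.
exposed = parse_qs(urlparse(iframe_src).query)
```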

Security

Let’s talk a little more about security. Since you’re using an iframe, the requests aren’t proxied by the portal. Instead, a page of HTML gets sent from the portal to the browser and then the browser turns around and makes requests to all the iframes on that page. Since the portal isn’t serving as a proxy, it can’t control what you do and don’t have rights to see, so security is completely thrown out of the window. (Or should I say, thrown out of the iframe?) Moreover, in an enterprise deployment, the portal usually sits in the DMZ and proxies requests out to bits and pieces of internal systems in order to surface them for extranet users. If you’re using iframes, every bit of content needs to be visible from an end user’s browser. So what’s to stop an end-user from scraping the URL out of a portal page and hitting a portlet directly? Nothing! (If I understand what I’m reading correctly, the WLP team is calling this a feature. I would call it a severe security risk.)

Scalability/Performance

Yes, this approach will work for Google Gadgets. But Google has more money than pretty much everyone. They can afford to spend frivolously on anything, including hardware. However, the rest of the world actually cares about the kind of load you put on a system when you create a “mashup.” A page consisting of five iframes is like five users hitting the sites with five separate requests, separate sessions and separate little “browsers.” If any of the iframes forces a full-page refresh or if the user does the unthinkable and say, moves to another page, every request is reissued and the mashup content is regenerated. This simply does not scale beyond a few users, unless you have as much money and as much hardware as Google does.

Caching

A properly designed portal or content aggregation engine will only issue requests to portlets when necessary. In other words, each remote portlet will only get a request if it needs to be loaded because the portal doesn’t have a cached entry. Unfortunately, you can’t do this with iframes because the portal doesn’t even know they exist. (Remember, all requests for iframe content go directly from the browser to the remote content, bypassing the portal entirely.)
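To make the contrast concrete, here is an illustrative Python sketch of a portal-side TTL cache (this is not ALUI’s actual implementation, just the general shape of the idea). With iframes there is simply no place to put this logic, because the portal never sees the requests:

```python
import time

backend_calls = 0  # counts how often we actually hit the remote tier

def fetch_portlet_content(portlet_id):
    """Stand-in for an expensive request to a remote portlet."""
    global backend_calls
    backend_calls += 1
    return f"<div>content for {portlet_id}</div>"

cache = {}  # portlet_id -> (markup, timestamp)
TTL = 60.0  # seconds before a cached entry goes stale

def render_via_portal(portlet_id):
    # The portal consults its cache first; the remote tier is only
    # hit on a miss or after the entry expires.
    entry = cache.get(portlet_id)
    if entry and time.time() - entry[1] < TTL:
        return entry[0]
    markup = fetch_portlet_content(portlet_id)
    cache[portlet_id] = (markup, time.time())
    return markup

# Two page views, one backend request. An iframe-based page would have
# issued two browser requests straight to the remote server.
render_via_portal("news")
render_via_portal("news")
```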

What baffles me is why a company would acquire another company with a revolutionary technology (the PPE) and then start from ground zero and build a technology that does the same thing but without a portal-to-portlet communication model (preferences), security, scalability or caching. If consumers weren’t already confused, now they most certainly are.

As technologists, I hope you can see through the hype about Adrenaline and consider a product that actually allows you to mash up web content in a scalable and secure way and has been doing so since 1999. It’s called AquaLogic Interaction and it’s sold by a company we all know and love called BEA.

Comments

Comments are listed in date ascending order (oldest first)

  • I just discovered that the BID/AquaLogic (formerly Plumtree, Fuego, Flashline, etc.) folks are having another webinar, entitled “Harnessing Enterprise Mash-ups with Security and Control.” This webinar (I hope) will show:
    1. how ALI has been handling mashups since before mashups was even a buzzword and
    2. how Project Runner enables next generation mashups that allow you to invoke back-end applications and provision security, branding, SSO, etc. without actually funneling everything through the portal.

    If you were at today’s webinar and you’re now wondering how to do mashups with more robustness and security, then I hope you’ll attend this webinar. By all means, it’s just the responsible thing to do in order to offer customers different integration options when creating their mashups.

    Posted by: bucchere on January 10, 2007 at 7:31 PM

  • I’d like to add a couple points of clarity from BID product management. First of all, we’re happy to have passionate developers, but I fear this post may give the wrong impression about some of BEA’s technology and plans.

    WLP Adrenaline, ALUI, and project Runner are all complementary technologies that have a very exciting future when applied to problems such as Enterprise Mashups. You’ll be hearing more about them from BEA over the coming months through various venues, including Webinars targeted at WLP-specific use cases (such as Peter’s excellent talk) and ALUI use cases (including tomorrow’s Runner Webinar). There will also be the usual blogging and other activities.

    Just as WLP and ALUI product teams are aligned, these different technologies are aligned. Adrenaline offers WLP customers a way to extend their reach in fundamentally new ways, and Peter will expound on some technical subtleties to address some of Chris’ concerns. Runner, too, is very exciting, enabling a completely different set of use cases. As the details unfold we’ll demonstrate how well aligned these technologies are — just wait until you see them working together!

    – David Meyer

    Posted by: dmeyer on January 10, 2007 at 10:41 PM

  • Just for those that don’t know about Adrenaline, here’s an article introducing Adrenaline.

    Posted by: jonmountjoy on January 11, 2007 at 12:19 PM

  • Chris,

    As David writes, BEA is moving ahead with multiple approaches to address the enterprise mashup space. My webinar covered the approach WLP is taking, and in no way implied that ALUI is not also a viable player in this space. We offer our customers a choice of products, and different products make sense to different customers.

    As for the specific issues you raised:

    ** Technical Reply

    Good technical points, but I think you overemphasized the role of iframes within WLP. Let me cover the two places we showed the use of iframes:

    Use Case 1: injecting a portlet into a legacy webapp

    Demo: An iframe was used in the demo to inject a portlet into a legacy static html page with almost no modification to that page (one line change).

    WLP does support an alternative approach – an Ajax streamed portlet. I simply did not have time to demo it. Also, this is not a portal use case for including external non-portal content into a Portal; instead it is the inverse, which is to publish existing portal content into legacy web applications. It was intended to show a very inexpensive way to energize a dated application until it is rationalized into a portal. The focus here is on minimizing cost of supporting legacy, while building portlets in transit to a portal solution.

    Use Case 2: WLP as a Mashup composition framework

    Demo: Iframes were used to pull in non-WSRP capable components (e.g. Google Gadgets) onto a WLP page

    First, as background info, the WLP architecture supports the rendering of various types of portlets:

    • Local portlets (deployed within the webapp, JSF, JPF, etc)
    • WSRP portlets – an advanced remoting approach which handles security, inter-portlet communication, etc…
    • Iframe portlets – an available remoting approach
    • WLP partners with Kapow for remote clipped portlets (similar to the ALUI approach)

    In regards to this use case, you brought up specific concerns:

    Security

    Concerns about shared authentication were noted in my talk. If components come from outside the enterprise, there is no easy solution to that problem, regardless of what product you are using. However, I spoke of a couple approaches in the webinar, including SAML.

    If those components come from inside the enterprise, the security hacks you were referring to are generally not necessary. Our customers that expect SSO have a web SSO solution (typically, cookie powered, not password in the URL powered) in place within the enterprise.

    Caching/Performance

    The most serious concerns of yours appear to be performance related. Specifically, the concern is that a full page refresh of a page that contains N number of iframes will cause an N+1 number of requests. To expand on your concern, I will add that this is not only seen in pages with iframes, but also pages that use Ajax to pull in data. I would say that there are several reasons why this does not invalidate WLP’s approaches:

    1. Mashup pages with lots of iframe portlets approach

    Google Personalized Home Page makes use of iframes to implement their mashup framework. Many of the Gadgets on the page are rendered with an iframe. But you are mistaken in saying that this scales because Google is throwing tons of hardware at the problem. The iframe Gadgets rendered in GPHP are rendered not by Google, but by 3rd party gadget hosting servers around the world. Google does NOT have to process those iframe Gadget requests, it is a distributed approach. Likewise, you could create a WLP page where most of the portlets are iframe portlets that hit a distributed set of servers, if that makes sense. Or…

    2. Mashup pages with a mixture of portlets

    The 2nd demo in my webinar wasn’t showing a page with all iframe portlets. Rather, what the demo was showing was a WLP page with a couple of iframe portlets mixed in with local portlets. As shown above, WLP supports a number of portlet types, and a good approach is to build pages that are a mixture of that set.

    3. Ajax helps minimize page refreshes

    Your concern about iframe performance stems from the case in which the entire page refreshes. With the usage of Ajax becoming common, plus with WLP 9.2 built in support for auto-generating Ajax portlets, this impact can be minimized. Page refreshes are becoming more rare. With WLP 10.0, which releases in a few months, the Ajax support has been expanded to support Ajax based portal page changes, further reducing the likelihood of a page refresh.

    4. The “Bleeding Edge” guys are also using browser based mashup approaches

    You referred to the “Bleeding Edge” technologists in your blog as the people that are doing things correctly. What are they doing? Some of the time, those guys are doing browser based Mashups. They often use a combination of iframes and Ajax from the browser to implement their mashups. So the same approach that you dislike is already in common use across the web.

    ** Market Reply

    You state “Portals are a thing of the past”. An interesting opinion, but just that. IT cannot afford web sprawl, and so a framework for rationalization will always be in demand whether you call it a Portal or something new.

    New technologies continue to provide alternatives to existing methodologies and portals are no different. However, one thing that has distinguished portal frameworks is their ability to embrace new technologies. Struts, WSRP, JSF are all examples of this as are the Web 2.0 constructs like mashups and rich interfaces based on Ajax. This is all good news as the enterprise has a wealth of options to choose from.

    Posted by: plaird on January 11, 2007 at 2:56 PM

  • I must say, as a customer and developer, it’s great that Plumtree (I mean BEA, or is it Oracle?) management allows you guys to express your own opinions. It so happens I’ve spent quite a bit of time trying to get JBoss Seam (and ICEfaces) to work with AquaLogic 6.1. I’ve been looking at the IFrame route, because the gateway stuff just isn’t working (it doesn’t properly rewrite the URLs for the Ajax stuff). I’ve come to hate the gateway. I bet it was a great idea before Ajax, but now it seems like almost every web 2.0 application is incompatible (needs major modification to get it to work). Or maybe I just don’t understand how to get it to work. Is there any good documentation on it? I’m hoping for some major improvements when 6.5 comes out though.

    Posted by: cmann50 on April 4, 2008 at 2:28 PM