Oracle Announces Roadmap for Plumtree / AquaLogic / WebCenter

UPDATE 2: I’ve incorporated all the great feedback and comments from ex-Plumtreevians, ex-BEA and ex- and current Oracle folks.

UPDATE: A bunch of Plumtreevians are contributing really good comments on this post over on Facebook.

I worked at Plumtree Software, Inc. from June 1998 to December 9, 2002. In four and a half years, the company grew from 25 employees to over 400, and it had thousands of happy customers before it was purchased by BEA Systems in 2005 for $220M. Here at bdg, we’ve been supporting dozens of Plumtree/AquaLogic Interaction (ALI)/WebCenter Interaction (WCI) customers since we opened our doors in December of 2002.

Back around 2005, BEA’s BID (Business Interaction Division) still had a lot of really smart engineers from Plumtree working on a lot of really interesting things, including Pages (think CMS 2.0), Pathways (kind of an enterprise version of del.icio.us) and Ensemble (the portlet engine/gateway, minus the overhead and UI of the portal itself).

They were also working on an enterprise social network, kind of a Facebook for business if you will.

However, there was a lot of wrangling at BEA, primarily between BID/AquaLogic and BEA’s flagship product, WebLogic (the world-class application server). Most of the strife came in the form of WebLogic Portal vs. AquaLogic/Plumtree Portal nonsense. Senior management at BEA, in their infinite wisdom, had taken a “let’s try not to alienate any customers” policy and in the process they confused all their customers and alienated/frustrated quite a few of them as well. They renamed Plumtree to AquaLogic User Interaction (ALUI), put in place a “separate but equal” policy with WebLogic Portal (WLP) and spewed some nonsense about how WLP was for “transactional portal deployments” vs. ALI for .NET and non-transactional portals, but no one, including BEA management, had any idea WTF that meant. To further confuse the issue, the WLP team, which also had a lot of really smart engineers, built products like “Adrenaline” (which was basically a less functional, buggier version of Ensemble) rather than do the unthinkable and integrate Ensemble into WLP so that WLP could finally host non-Java/JSR-168 portlets.

I was really pissed about BEA’s spineless portal strategy, their “separate but equal” policy between WLP and BID/ALUI and their waste of precious engineering resources in an arms race between WLP and ALUI rather than just stepping back, growing a spine, and coming up with a portal strategy.

Because I can’t keep my pie hole shut, I started several loud, messy and public fights with BEA management. Why? Because the real loser here is the customer.

And BEA, because management got mired in politics and chose to waste engineers’ time on in-fighting and competition instead of building enterprise Facebook, which Steve Hamrick and I arguably already wrote in our spare time. All they needed to do was product-ize that and they would have owned that market.

In 2008, Oracle inherited this clusterfuck of a portal strategy when they bought BEA for $7B+, giving me new hope that cooler heads would prevail and fix this mess. The first thing they did was fire all the impotent BEA managers who were afraid to make any decisions. It took Oracle a while, but alas, they have finally arrived at a portal strategy that makes sense. I first learned about this strategy when I crashed the WebCenter Customer Advisory Board last Thursday.

First of all, let me say this: under the leadership of Vince Casarez, current (and future) customers are in good hands.

I realized when he said “everyone still calls it Plumtree” that this was going to be a bullshit-free presentation.

He also said something regarding the “portal stew” at Oracle that puts all of my ranting and raving in perspective: “Oracle did not buy BEA for Plumtree or WLP, just like it didn’t buy SUN for SUN’s portal product.” To rephrase that, Oracle bought BEA for WebLogic (the application server, not the portal) and Sun for their hardware (not for Java, NetBeans and all the rest of Sun’s baggage).

So, let’s face it, portals are a relatively insignificant part of Oracle.

However, they’ve finally done what I called for in 2008 and what BEA never had the wits to do: pick a single portal strategy/stack and stick to it. So, if you’re a current Plumtree/ALUI/WCI or a current WLP customer, you have a future with Oracle.

Here’s the plan, as I understand it.

All roads lead to Web Center (not Web Center Interaction, but Web Center)

At the heart of Web Center will be WebLogic’s app server and portal. Plumtree/ALUI as a code base will be supported, but eventually put into maintenance mode and retired. You get nine or twelve years of support and patches (blah blah blah) but if you want new features, you need to switch to the new Web Center, powered by WLP. CORRECTION: WebCenter will not be “powered by WLP.” At its core will be the Oracle-developed, ADF-based WebCenter Portal running on WebLogic Server.

All the “server products” (Collaboration, Studio, Analytics, Publisher) will be replaced by Web Center Services or Web Center Suite

Publisher will be subsumed by WCM/UCM (Web Content Management / Universal Content Management, formerly Stellent). The other products will be more-or-less covered by similar offerings in Suite or Services.

What about Pages, Ensemble and Pathways?

Pages is dead as WCM/UCM does it better. Pathways is getting rolled into the new Web Center somehow, but I’m not sure how yet. Perhaps I can follow up with another blog post on that. Ensemble has been renamed “Pagelet Producer” — more on that below. CORRECTION: Pathways is now called “Activity Graph” and it will be part of the new WebCenter. Think of an enterprise-class version of the Facebook News Feed crossed with Salesforce Chatter and you’ll be on the right track.

What about .NET/SQL Server, IIS and everything else that isn’t Java?

This is a really interesting question and the key question that I think drove a lot of BEA’s failure to make any decision about portal strategy from 2005-2008. Plumtree had a lot of .NET customers and some of the biggest remaining Plumtree/ALUI customers are still running on an all-Microsoft stack. In fact, one of them told me recently that they have half a million named user accounts, two million documents and 72 Windows NT Servers to power their portal deployment.

So, let’s start with the bad news: Oracle doesn’t want you to run .NET/Windows and they REALLY don’t want you to run on SQL Server.

(That will change when Oracle acquires Microsoft, but that’s not gonna happen, at least not any time soon.) WebLogic app server and WLP/WCI, to the best of my knowledge, will not run on SQL Server. They will, however, run on Windows, but I would not recommend that approach.

It’s inevitable that large enterprises will have both .NET and Java systems along with a smattering of other platforms.

So, if you’re a .NET-heavy shop, you’ll need to bite the bullet and have at least one server running JRockit or Sun’s JVM, one of Oracle’s DBs (Oracle proper or MySQL), WLS/WLP/WCI and preferably Oracle Enterprise Linux, Solaris or some other flavor of Un*x. CORRECTION: WLP will run on SQL Server. Not sure about the new WebCenter Portal, but my guess is that it does not.

Now, for the good news: the new WCI, powered by WLP and in conjunction with the Pagelet Producer (formerly Ensemble) and the WSRP Producer (formerly the .NET Application Accelerator) will run any and all of your existing portlets, regardless of language or platform.

This was arguably the best feature in Plumtree and it will live on at Oracle.

.NET/WSRP and even MOSS (Sharepoint) Web Parts will run in WebCenter through the WSRP Producer. The Pagelet Producer will run portlets written in ANY language through what is essentially a next generation, backwards-compatible CSP (Content Server Protocol, the superset of HTTP that allows you to get/set preferences, etc. in Plumtree portlets). So, in theory, if you’re still writing your portlets in ASP 1.0 using CSP 1.0 and GSServices.dll, they will run in the new Web Center via the Pagelet Producer. Time for us to update the PHP and Ruby/Rails IDKs? Indeed it is. Let me know if you need that sooner rather than later.

How do I upgrade to the new WebCenter?

Well, first off, you have to wait for it to come out later this fall. Then, you have to start planning for what’s less of an upgrade and more of a migration. Oracle, between engineering and PSO, has promised to provide migration for all the portal metadata (users, communities, pages, portlets, security, etc.) from Plumtree/ALUI/WCI to the new Web Center, with WLP at its heart. (Wouldn’t it have made sense for some of those WLP engineers to start building that migration script in 2005 instead of trying to compete with ALUI by building Adrenaline? Absolutely.) All your Java portlets, if you’re using JSR-168 or JSR-286, will run natively in WLP through a wrapper in WebCenter Portal. Everything else will either run in the WSRP Producer (if it’s .NET) or in the Pagelet Producer (if it’s anything else). The only thing I don’t fully understand yet is how to migrate from Publisher to UCM, but I’m due to speak with Oracle’s PSO about that soon. Please contact me directly if you need to do a migration from Publisher to WCM/UCM that’s too big to do by hand.

The only other unanswered question in my mind is how the new WebCenter will handle AWS/PWS services — the integrations that bring LDAP/AD users and profile information/metadata into Plumtree/ALUI/WCI. I wrote a lot of that code for Plumtree anyway, so if Oracle’s not working on a solution for the new Web Center, perhaps I can help you with that somehow as well. CORRECTION: User and group objects are fully externalized in Web Center, so there is no need for AWS/PWS synchronization. (Thanks, Vince, for pointing that out.)

So, that’s my understanding of the new portal strategy at Oracle.

Kudos to Oracle’s management for listening to their customers, making some really hard decisions and picking a path that I think is smart and achievable.

I’m here to help if you have questions or need help with your portal strategy or technical implementation/migration.

Q&A

(Some other notes about discussions that have spawned from this original post.)

Q: What’s the future of the Microsoft Exchange portlets (Mail, Calendar and Contacts) and the CWS for crawling Exchange public folders? Retired and replaced with something Beehive-related? Still supported? For how long? Against what versions of Exchange?

A: We’ve got updated portlets for Mail & Calendar in WebCenter now for Exchange 2003 & 2007. We don’t have a Contacts portlet but it could be added quickly if we see a large demand. Crawling public folders can be done with an adapter we have for SES [Oracle Secure Enterprise Search] already. We’re working but aren’t done with a new version of KD on top of the new infrastructure that will come out post PS3. (Contributed by Vince Casarez.)

Q: If migration scripts are provided to move WCI metadata into WebCenter, I understand that a portlet is a portlet, but what about pages and communities, users and groups, content sources and crawlers, etc.? Do they all have analogous objects in WebCenter or is there some reasonable mapping to some other objects?

A: Pages and Communities follow a model where we extract/export the metadata and data, then run it through a set of scripts that create a WebCenter Space for each Collab project/community and a JSPx page for every page. Users and Groups will come out of the LDAP/AD directory they are already using and the scripts associate the right permissions to each of the migrated objects. I don’t recall what we did about crawlers but since we use SES directly, all the hundred or more connectors we ship for SES are now available for direct usage. The scripts go through a multiphase approach to move content, then portlets, then pages, then communities so that dependencies can be fixed up versus trying to do a manual fix up. (Contributed by Vince Casarez.)

Q: Will any existing WCI-related products that are slated for retirement (e.g. Publisher, Collab, Studio, Analytics, etc.) be re-released with support for Windows Vista, Windows 7, IE 8, IE 9 or Chrome?

A: For Publisher, we are planning a set of migrations to quickly move them to UCM. For Collab & Studio, we have new capabilities in WebCenter Spaces to match these functions. For Analytics, we’ve also rebuilt it on top of the WebCenter stack with over 50 portlets for the different metrics and made sure we provide APIs/access to the data directly. This analytics data also feeds the activity graph, providing recommendations for people on the content and UIs that are relevant to them. These are tied into the personalization engine that we brought over from the WLP side. So there is a rich blending of the best features from WLP with WCI key features. As for Neo [the codename for the next release of WCI], we are certifying the additional platforms. On the IE 8 front, we’ve just released patches for WCI 10gR3 customers to be able to use IE8 without upgrading to Neo. (Contributed by Vince Casarez.)

Was The Facebook Hacker Story Irresponsible Journalism?

This article originally appeared as a guest post on All Facebook.
 
The media scandal du jour relates to how WikiLeaks leaked all this classified information about the war in Afghanistan, but let’s not overlook this extremely irresponsible piece of reporting that MSNBC published earlier this week about an alleged Facebook privacy breach.

Why is it irresponsible? Well, before I break it down for you, let’s take a few journalism lessons from Robert Scoble, who explains why Flipboard (an iPad application that turns RSS feeds into a magazine-like layout) is superior to the one-item-after-another streams of information that we’re used to browsing on the Facebook news feed, Twitter, etc. He writes:

“I remember that early eye tracking research showed that pages that had a single headline that was twice as big as any other headline were more likely to be read. Same for pages with photos. If you put two photos of equal size on the page, it would be looked at less often, or less completely, than a page that had a photo that was at least twice as big as any other.
I won a newspaper design contest in college because of this: my designs made sure that they included headlines that were twice as big as any other and photos that were twice as big as any other.”

MSNBC used these exact techniques to spin an oh-so-scary story about an alleged Facebook privacy breach.
This first screenshot is what I could see on an average (15″) monitor “above the fold.” (You can click the image to see it in actual size.) Note the massive font used for the headline and the four tiny images. Keep in mind that some internet users don’t know how to scroll (really, I’m not kidding), so by not showing a broken line of text at the bottom of the page, many people won’t know that the rest of the article is even there, let alone how to get to it.

If you endeavor to read past the headline, you’ll notice that they “end” the story with more scary talk from the alleged “hacker” and hide the final three paragraphs behind this completely absurd “Show More Text” link, which serves no purpose other than to obscure the truth, which is in the final (that’s right, the very last) paragraph of the article:

“No private data is available or has been compromised. Similar to a phone book, this is the information available to enable people to find each other, which is the reason people join Facebook. If someone does not want to be found, we also offer a number of controls to enable people not to appear in search on Facebook, in search engines, or share any information with applications.”

So, if I were to email MSNBC and tell them that I was “a researcher” or “a white-hat hacker” and I had discovered a huge scam — “You see, these conspirators from Yellow Pages have been collecting and amassing all this private data and delivering it to everyone’s doorstep!” — they would think I was completely insane. Well, change “Yellow Pages” to “Facebook” and “delivering it to everyone’s doorstep” to “making available for download” and I think you see my point.

So how did MSN get away with posting this completely absurd story? To understand that, we need to look at their demographic. I went to Alexa.com to find out. As I had guessed, their readers lean toward females of the Baby Boomer generation and up. The same people who don’t know how to change their default settings in their default browser (IE6) on their default operating system (Windows XP) to anything other than MSN.com. Big surprise? No: MSNBC is preying on innocent victims by using psychological tricks to create phobias for things that they don’t understand. And there’s nothing scarier than the fear of the unknown.

The premise that the media is out to scare us all into staying home and buying more security systems/guns/etc. is not news; Michael Moore built a really compelling case against Big Media’s fear tactics in Bowling for Columbine in 2002. However, an interesting question to ask in 2010 is:

if Big Media is prone to Big Lies and Misinformation, can social media serve as an antidote?

In other words, can investigative reporting by “citizen journalists” help suss the truth out of all the lies?

To help answer the question, I turned to the 875+ comments on the article. To do some highly unscientific semantic analysis, I read a small sample to see which keywords were common in neutral-to-favorable comments (information, private/privacy, security, people/friends, public) vs. which keywords were prevalent in highly negative responses (wrong, attention, fame, fraud, scam, boring, crap). Then I ran all the comment text through a histogram tool.
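The tally itself is simple enough to sketch in a few lines of Python. This is a rough reconstruction, not the actual tool I used; the keyword lists come from the post above, and the comment strings in the test are made up for illustration:

```python
from collections import Counter
import re

# Keyword buckets from my skim of the comment sample.
FAVORABLE = {"information", "private", "privacy", "security",
             "people", "friends", "public"}
NEGATIVE = {"wrong", "attention", "fame", "fraud", "scam", "boring", "crap"}

def keyword_histogram(comments):
    """Count how often each tracked keyword appears across all comments."""
    counts = Counter()
    for comment in comments:
        for word in re.findall(r"[a-z']+", comment.lower()):
            if word in FAVORABLE or word in NEGATIVE:
                counts[word] += 1
    return counts

def favorability_ratio(counts):
    """Ratio of favorable-keyword hits to negative-keyword hits."""
    fav = sum(n for w, n in counts.items() if w in FAVORABLE)
    neg = sum(n for w, n in counts.items() if w in NEGATIVE)
    return fav / neg if neg else float("inf")
```

Run `favorability_ratio(keyword_histogram(all_comments))` over the full comment dump and you get the single number I quote below. It’s crude (no stemming, no negation handling), which is exactly why I call it highly unscientific.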

Unfortunately, the results of my study show that most comments were favorable by a ratio of over 5:1. However, it all goes back to the demographic. After glancing at the TechCrunch coverage on this, it seems about 60-70% of the commenters call bullshit, which seems to be in line with a younger, male-dominated, tech-savvy demographic.

So what do you think? Can commenting/voting/Tweeting uncover the truth obscured though it is by the news outlets that report it? Or will we all just continue to propagate the monkey excrement that the mass media keep throwing at us?

Leave a comment to tell us what you think!

Free Breakfast Seminar in Alexandria, VA on 4/9

Join association thought leader Jeff DeCagna, Chris Hopkinson of DC-area startup DubMeNow and Chris Bucchere of The Social Collective for a free breakfast seminar entitled “Strategies for Association Success in the Era of Social Business” on Thursday, 4/9 in Alexandria, VA.

Registration is limited, but there are a few spots left. Sign up now!

My SXSW Panel: Social Networks for the Anti-Social

UPDATE: SXSW released a complete audio recording of this panel!

I’m at SXSW again this year. I attended SXSWi last year and, if my memory serves me correctly, I also attended SXSW Music in 1995, though I might be confusing it with H.O.R.D.E., Austin City Limits or one of the other great music festivals in this fine city which is known internationally for its eclectic music scene. Anyway, because The Social Collective is powering my.SXSW, I actually have the pleasure of spending 10 full days in Austin and attending all three festivals this year: Film, Music and Interactive. I’m also speaking, oddly enough, in a Music Panel called Social Networks for the Anti-Social.

I have to warn you, most panels (at any conference, not just SXSW) totally suck and this may not be an exception.

But who knows: it might be a completely magical and transcendental experience. You won’t know unless you check it out.

Feedback Loop

We’ve been keeping a close watch on what people are saying about my.SXSW and trying to respond to as much of the feedback as possible, either directly from us or via the folks at SXSW.

It’s no surprise — in this “2.0” world of hypersharing and total transparency — that we’ve seen literally hundreds of blog posts and tweets about my.SXSW, but we’ve only received a handful of e-mails.

We don’t really like e-mail anyway, so this is cool.

The SXSW help desk has received a lot of support requests via e-mail, with issue #1 being that the welcome e-mails and password reset e-mails aren’t showing up, most likely due to downstream spam filters. Ah, the irony! Again, this is why e-mail sucks, but it’s sort of something that’s hard to live with and also hard to live without.

So, how are we tracking and responding to feedback?

We’re using a jury-rigged system of free tools: search.twitter.com (remember Summize?), Google Alerts and Google Reader.

This “system” only takes a few minutes to set up and it can be used to track virtually anything being said about anything in a public space on the interwebs.

Basically, you can set up “comprehensive” Google Alerts and have them “delivered” via feed (or e-mail, but you already know how we feel about that). You can do the same with search.twitter.com.

Simply plug the feeds into Google Reader, organize them into folders/tags and voila, your feedback tracking system is ready to roll.
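If you wanted to script that aggregation step instead of relying on Reader, the core of it (pull several search or alert feeds, merge the items, drop duplicates) is just a few lines. Here’s a minimal sketch using only the Python standard library; the inline RSS snippets are stand-ins for what search.twitter.com or a Google Alerts feed would actually return, and in practice you’d fetch the real feed URLs with `urllib`:

```python
import xml.etree.ElementTree as ET

def parse_items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 feed."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def merge_feeds(feeds):
    """Merge items from several feeds, deduplicating by link."""
    seen, merged = set(), []
    for rss_text in feeds:
        for title, link in parse_items(rss_text):
            if link not in seen:
                seen.add(link)
                merged.append((title, link))
    return merged

# Stand-in feeds; a Twitter search feed and a Google Alert
# will often surface the same post, hence the duplicate link.
FEED_A = """<rss><channel>
  <item><title>my.SXSW launches</title><link>http://example.com/1</link></item>
  <item><title>SXSW tips</title><link>http://example.com/2</link></item>
</channel></rss>"""
FEED_B = """<rss><channel>
  <item><title>my.SXSW launches</title><link>http://example.com/1</link></item>
</channel></rss>"""
```

Calling `merge_feeds([FEED_A, FEED_B])` collapses the duplicate story into one entry, which is essentially what Reader does for you behind the scenes.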

We’re searching for terms like “SXSW,” so obviously we get a lot of false positives. However, it’s easy to manually “star” or “share” items in Google Reader and then publish the resulting list of shared or starred items back out as a feed to share with your team via a web page or, if you like, put it back in Google Reader. (Yikes! We know that sounds like it might be infinitely or mutually recursive, but actually, it works — trust us, we’ve tried it.)

So, here it is: a pretty comprehensive list of all the good, the bad and the ugly things people are saying about my.SXSW. Hey, it’s all public information on the interwebs anyway, so why not republish it all in one place?

Adding Context to my.SXSW Tweets

There has been quite a bit of chatter that we’ve been following in the SXSW community about the launch of my.SXSW. Fortunately, most of it has been good news. (Phew!) We’ve also gotten some great, constructive feedback and some fabulous ideas for new features that we wish we would have thought of ourselves!

There’s one issue that has come up a few times that we’d like to address in this post. Several people in the community have expressed concern about the lack of context in tweets coming from my.SXSW. (This only applies to people who have integrated their Twitter accounts.)

We’ve applied two changes to The Social Collective software to help with this. First off, we changed the application source from “The Social Collective” to “my.SXSW” with a link back to the my.SXSW site. Also, we added a “on http://my.sxsw.com” link back to messages that notify people when you join a group.

Keep the feedback coming — we’d love to hear from you!

How the New Facebook Utterly Destroyed my Favorite Application (and Why That Makes Me Sad)

I used to love Feedheads. It’s a simple, elegant and beautiful application that does one thing really well: help you share your Google reader shared items.

Unfortunately, the “new” Facebook has rendered the application utterly useless and I can’t think of a good way, as an end-user, to fix it. In fact, as someone who’s built two Facebook apps, I can’t even think of a way that the Feedheads developers can fix it. What a calamity.

So here’s the problem: the News Feed (and the Mini Feed) introduced an option that allows end-users to set the story “size.” When a Google shared item story comes through Feedheads now, it defaults to the “one line” size and as a result, it doesn’t say anything other than “Chris posted an item to Feedheads.”

Thank you very much, Facebook. That piece of information is completely useless. People who are reading your feed need to click through into the Feedheads application in order to see what story you posted — and the whole point of Feedheads is to help you share your shared items, not make them harder to find.

(As a result of all this, Facebook also broke one of my applications, called WhyI. It has < 200 users, so very few people care, but . . . the point of the app was to help people ask themselves and their friends questions that have to be answered in five words or fewer. And of course, the questions and answers would show up in the Mini Feed and News Feed. But not anymore! Now it just says: “Chris posted a new mini-update using WhyI.” Again, a totally useless piece of information. Drats.)

As an end-user, I can set the “size” of each feed item. So that means, after I hit Shift-S in Google Reader — which doesn’t take much effort — I have to wait for the story to be published in Facebook and then, if I remember (which at this point is unlikely), I have to go into that little drop down on the right and set the size to “small” instead of the default, which is “one line.” And here’s the best part: I can’t tell Facebook to remember this, so I have to do it every time.

All this just to share a shared item on Google Reader through Feedheads . . . ick.

Here’s the best part. I just noticed that Facebook added their own feature to the new and “improved” news feed. You can import your shared items from Google Reader! And, not surprisingly, the news feed actually shows the stories’ titles. In other words, Facebook took a great application — Feedheads — and replaced the functionality with their own feature; in the process, they rendered Feedheads useless.

This makes me sad. I only have one thing to say:

Wow, Facebook, how very Microsoft of you.

The Social Collective Debuts at RubyNation

We’re very pleased to announce that, together with the organizers of RubyNation, we debuted our social application “The Social Collective” today as a means for RubyNation conference attendees and other Rubyists to meet and interact with their peers.

This is a very similar codebase to what we deployed at BEA Participate in May, but without ALI or ALBPM. These BEA (now Oracle) products provided a great, scalable and flexible architecture, but we didn’t feel it was a good use of our resources (i.e. $$$s) to continue to use these products and we didn’t want to pass this cost on to RubyNation, which, BTW, is only charging $175 for two jam-packed days of Ruby awesomeness.

So, for those of you who have been following all this social goodness coming from bdg, there are now two distinct versions of The Social Collective: one that uses BEA/Oracle products and one that does not. This affects pricing (obviously), so if you’re interested in either, please contact us to find out more.

And in the meantime, if you’re as gung ho about Ruby as we are, sign up for an account and help us grow the Ruby community here in DC and beyond!

This Just In — BEA Participate Social App Stats

I find this a little hard to believe, but the numbers don’t lie. We had a whopping 75,000 page views the week of the conference!

That’s more than 100 page views per registered attendee. This chart was from our hottest day, Tuesday, 5/13.

Thanks to everyone for using our application. I think we may be on to something here!