Pathable, the award-winning provider of social networking services for conferences and events, announced today that it is acquiring The Social Collective, a long-time competitor in the market, in a cash-only deal. The terms of the deal are not being disclosed, but the move continues a growing consolidation of the event-centric social media segment, following the closure of EventVue in February 2010.
“Pathable has built a very compelling experience for the event organizer, enhancing the attendee networking experience beginning months before the event itself and resulting in a sustaining community after it’s over,” said Pathable CEO Jordan Schwartz. “The Social Collective has built a great reputation and business, serving world-class clients including Oracle and SXSW. But the needs of events are increasingly complex, and it’s simply better for everyone to have a single, combined effort to meet those core needs while we enhance and extend the existing solution.”
“We’ve always had a great respect for the solution Pathable has delivered to event managers, so we’re very pleased to be able to offer that to our customers today,” said Chris Bucchere, co-founder and CEO of The Social Collective.
“We’re proud of the part we’ve played in revolutionizing how event attendees connect, and now consolidating efforts under the Pathable platform will be a boon to our existing customers and those to come.”
The acquisition expands Pathable’s reach in the events industry, which already rests on a foundation of direct customers as well as reseller relationships with industry heavyweights such as Active Events (www.activeevents.com/), Cvent (www.cvent.com), Omnipress (www.omnipress.com) and Amiando (www.amiando.com), and heralds a trend toward standardization of the experience.
When you’re going to a conference, you don’t want to be thinking about which social networking solution the host has chosen and whether it will be effective at helping you connect with the other attendees. You want that layer to be invisible and effortless. Our offering has always been focused on letting the event and its attendees take the spotlight.
There are over 200 million attendees at conferences, conventions, tradeshows, meetings and events in the US alone, according to the Conventions Industry Council. Social networking is becoming an increasingly important component of serving them.
Pathable, Inc., a privately held Seattle-based company, was founded in 2008. Since that time, it has served hundreds of events, including those of Microsoft, SAP, GE Healthcare, Meeting Professionals International, and Dell, with private, branded online event communities that allow attendees to connect, schedule meetings, choose their session schedules and visit exhibitors for months around a face-to-face event.
I worked at Plumtree Software, Inc. from June 1998 to December 9th, 2002. In four-and-a-half years, the company grew from 25 employees to over 400, and it had thousands of happy customers before it was purchased by BEA Systems in 2005 for $220M. Here at bdg, we’ve been supporting dozens of Plumtree/AquaLogic Interaction (ALI)/WebCenter Interaction (WCI) customers since we opened our doors in December of 2002.
Back around 2005, BEA’s BID (Business Interaction Division) still had a lot of really smart engineers from Plumtree working on a lot of really interesting things, including Pages (think CMS 2.0), Pathways (kind of an enterprise version of del.icio.us) and Ensemble (the portlet engine/gateway, minus the overhead and UI of the portal itself).
They were also working on an enterprise social network, kind of a Facebook for business if you will.
However, there was a lot of wrangling at BEA, primarily between BID/AquaLogic and BEA’s flagship product, WebLogic (the world-class application server). Most of the strife came in the form of WebLogic Portal vs. AquaLogic/Plumtree Portal nonsense. Senior management at BEA, in their infinite wisdom, had taken a “let’s try not to alienate any customers” policy and in the process they confused all their customers and alienated/frustrated quite a few of them as well. They renamed Plumtree to AquaLogic User Interaction (ALUI), put in place a “separate but equal” policy with WebLogic Portal (WLP) and spewed some nonsense about how WLP was for “transactional portal deployments” vs. ALI for .NET and non-transactional portals, but no one, including BEA management, had any idea WTF that meant. To further confuse the issue, the WLP team, which also had a lot of really smart engineers, built products like “Adrenaline” (which was basically a less-functional and more buggy version of Ensemble) rather than do the unthinkable and integrate Ensemble into WLP so that WLP could finally host non-Java/JSR-168 portlets.
I was really pissed about BEA’s spineless portal strategy, their “separate but equal” policy between WLP and BID/ALUI, and their waste of precious engineering resources on an arms race between WLP and ALUI rather than stepping back, growing a spine, and committing to a single, coherent portal strategy.
Because I can’t keep my pie hole shut, I started several loud, messy and public fights with BEA management. Why? Because the real loser here is the customer.
And BEA, because management got mired in politics and chose to waste engineers’ time on in-fighting and competition instead of building enterprise Facebook, which Steve Hamrick and I arguably already wrote in our spare time. All they needed to do was product-ize that and they would have owned that market.
In 2008, Oracle inherited this clusterfuck of a portal strategy when they bought BEA for $7B+, giving me new hope that cooler heads would prevail and fix this mess. The first thing they did was fire all the impotent BEA managers who were afraid to make any decisions. It took Oracle a while, but alas, they have finally arrived at a portal strategy that makes sense. I first learned about this strategy when I crashed the WebCenter Customer Advisory Board last Thursday.
First of all, let me say this: under the leadership of Vince Casarez, current (and future) customers are in good hands.
I realized when he said “everyone still calls it Plumtree” that this was going to be a bullshit-free presentation.
He also said something regarding the “portal stew” at Oracle that puts all of my ranting and raving in perspective: “Oracle did not buy BEA for Plumtree or WLP, just like it didn’t buy SUN for SUN’s portal product.” To rephrase that, Oracle bought BEA for WebLogic (the application server, not the portal) and Sun for their hardware (not for Java, NetBeans and all the rest of Sun’s baggage).
So, let’s face it, portals are a relatively insignificant part of Oracle.
All roads lead to Web Center (not Web Center Interaction, but Web Center)
At the heart of Web Center will be WebLogic’s app server and portal. Plumtree/ALUI as a code base will be supported, but eventually put into maintenance mode and retired. You get nine or twelve years of support and patches (blah blah blah) but if you want new features, you need to switch to the new Web Center, powered by WLP. CORRECTION: WebCenter will not be “powered by WLP.” At its core will be the Oracle-developed, ADF-based WebCenter Portal running on WebLogic Server.
All the “server products” (Collaboration, Studio, Analytics, Publisher) will be replaced by Web Center Services or Web Center Suite
Publisher will be subsumed by WCM/UCM (Web Content Management / Universal Content Management, formerly Stellent). The other products will be more-or-less covered by similar offerings in Suite or Services.
What about Pages, Ensemble and Pathways?
Pages is dead as WCM/UCM does it better. Pathways is getting rolled into the new Web Center somehow, but I’m not sure how yet. Perhaps I can follow up with another blog post on that. Ensemble has been renamed “Pagelet Producer” — more on that below. CORRECTION: Pathways is now called “Activity Graph” and it will be part of the new WebCenter. Think of an enterprise-class version of the Facebook News Feed crossed with Salesforce Chatter and you’ll be on the right track.
What about .NET/SQL Server, IIS and everything else that isn’t Java?
This is a really interesting question and the key question that I think drove a lot of BEA’s failure to make any decision about portal strategy from 2005-2008. Plumtree had a lot of .NET customers and some of the biggest remaining Plumtree/ALUI customers are still running on an all-Microsoft stack. In fact, one of them told me recently that they have half a million named user accounts, two million documents and 72 Windows NT Servers to power their portal deployment.
So, let’s start with the bad news: Oracle doesn’t want you to run .NET/Windows and they REALLY don’t want you to run on SQL Server.
(That will change when Oracle acquires Microsoft, but that’s not gonna happen, at least not any time soon.) WebLogic app server and WLP/WCI, to the best of my knowledge, will not run on SQL Server. They will, however, run on Windows, but I would not recommend that approach.
It’s inevitable that large enterprises will have both .NET and Java systems along with a smattering of other platforms.
So, if you’re a .NET-heavy shop, you’ll need to bite the bullet and have at least one server running JRockit or Sun’s JVM, one of Oracle’s DBs (Oracle proper or MySQL), WLS/WLP/WCI and preferably Oracle Enterprise Linux, Solaris or some other flavor of Un*x. CORRECTION: WLP will run on SQL Server. Not sure about the new WebCenter Portal, but my guess is that it does not.
Now, for the good news: the new WCI, powered by WLP and in conjunction with the Pagelet Producer (formerly Ensemble) and the WSRP Producer (formerly the .NET Application Accelerator) will run any and all of your existing portlets, regardless of language or platform.
This was arguably the best feature in Plumtree and it will live on at Oracle.
.NET/WSRP and even MOSS (Sharepoint) Web Parts will run in WebCenter through the WSRP Producer. The Pagelet Producer will run portlets written in ANY language through what is essentially a next generation, backwards-compatible CSP (Content Server Protocol, the superset of HTTP that allows you to get/set preferences, etc. in Plumtree portlets). So, in theory, if you’re still writing your portlets in ASP 1.0 using CSP 1.0 and GSServices.dll, they will run in the new Web Center via the Pagelet Producer. Time for us to update the PHP and Ruby/Rails IDKs? Indeed it is. Let me know if you need that sooner rather than later.
How do I upgrade to the new WebCenter?
Well, first off, you have to wait for it to come out later this fall. Then, you have to start planning for what’s less of an upgrade and more of a migration. Oracle, between engineering and PSO, has promised to provide migration for all the portal metadata (users, communities, pages, portlets, security, etc.) from Plumtree/ALUI/WCI to the new Web Center, with WLP at its heart. (Wouldn’t it have made sense for some of those WLP engineers to start building that migration script in 2005 instead of trying to compete with ALUI by building Adrenaline? Absolutely.) All your Java portlets, if you’re using JSR-168 or JSR-286, will run natively in WLP through a wrapper in WebCenter Portal. Everything else will either run in the WSRP Producer (if it’s .NET) or in the Pagelet Producer (if it’s anything else). The only thing I don’t fully understand yet is how to migrate from Publisher to UCM, but I’m due to speak with Oracle’s PSO about that soon. Please contact me directly if you need to do a migration from Publisher to WCM/UCM that’s too big to do by hand.
The only other unanswered question in my mind is how the new WebCenter will handle AWS/PWS services — the integrations that bring LDAP/AD users and profile information/metadata into Plumtree/ALUI/WCI. I wrote a lot of that code for Plumtree anyway, so if Oracle’s not working on a solution for the new Web Center, perhaps I can help you with that somehow as well. CORRECTION: User and group objects are fully externalized in Web Center, so there is no need for AWS/PWS synchronization. (Thanks, Vince, for pointing that out.)
So, that’s my understanding of the new portal strategy at Oracle.
Kudos to Oracle’s management for listening to their customers, making some really hard decisions and picking a path that I think is smart and achievable.
I’m here to help if you have questions or need help with your portal strategy or technical implementation/migration.
(Some other notes about discussions that have spawned from this original post.)
Q: What’s the future of the Microsoft Exchange portlets (Mail, Calendar and Contacts) and the CWS for crawling Exchange public folders? Retired and replaced with something Beehive related? Still supported? For how long? Against what versions of Exchange?
A: We’ve got updated portlets for Mail & Calendar in WebCenter now for Exchange 2003 & 2007. We don’t have a Contacts portlet but it could be added quickly if we see a large demand. Crawling public folders can be done with an adapter we have for SES [Oracle Secure Enterprise Search] already. We’re working but aren’t done with a new version of KD on top of the new infrastructure that will come out post PS3. (Contributed by Vince Casarez.)
Q: If migration scripts are provided to move WCI metadata into WebCenter, I understand that a portlet is a portlet, but what about pages and communities, users and groups, content sources and crawlers, etc.? Do they all have analogous objects in WebCenter or is there some reasonable mapping to some other objects?
A: Pages and Communities follow a model where we extract/export the meta data and data, then run it through a set of scripts that create a WebCenter Space for each Collab project/community and a JSPx page for every page. Users and Groups will come out of the LDAP/AD directory they are already using and the scripts associate the right permissions to each of the migrated objects. I don’t recall what we did about crawlers but since we use SES directly, all the hundred or more connectors we ship for SES are now available for direct usage. The scripts go through a multiphase approach to move content, then portlets, then pages, then communities so that dependencies can be fixed up versus trying to do a manual fix up. (Contributed by Vince Casarez.)
Q: Will any existing WCI-related products that are slated for retirement (e.g. Publisher, Collab, Studio, Analytics, etc.) be re-released with support for Windows Vista, Windows 7, IE 8, IE 9 or Chrome?
A: For Publisher, we are planning a set of migrations to quickly move them to UCM. For Collab & Studio, we have new capabilities in WebCenter Spaces to match these functions. For Analytics, we’ve also rebuilt it on top of the WebCenter stack with over 50 portlets for the different metrics and made sure we provide APIs/access to the data directly. This analytics data also feeds the activity graph in providing recommendations for people on the content and UIs that are relevant to them. These are tied into the personalization engine that we brought over from the WLP side. So there is a rich blending of the best features from WLP with WCI key features. As for Neo [the codename for the next release of WCI], we are certifying the additional platforms. On the IE 8 front, we’ve just released patches for WCI 10gR3 customers to be able to use IE8 without upgrading to Neo. (Contributed by Vince Casarez.)
Join association thought leader Jeff DeCagna, Chris Hopkinson of DC-area startup DubMeNow and Chris Bucchere of The Social Collective for a free breakfast seminar entitled “Strategies for Association Success in the Era of Social Business” on Thursday, 4/9 in Alexandria, VA.
Registration is limited, but there are still a few spots left. Sign up now!
Just in time for Halloween, I’ve decided to publish my Top Ten Tips for Writing Plumtree Crawlers that Actually Work. This post may scare you a little bit, but hey, that’s the spirit of Halloween, right?
[Editor’s note: yes, we’re still calling it Plumtree. Why? I did a Google search today and 771,000 hits came up for “plumtree” as opposed to around 300,000 for “aqualogic” and just over 400,000 for “webcenter.” Ignoring the obvious — that a short, simple name always wins over a technically convoluted one — it just helps clarify what we’re talking about. For example, if we say “WebCenter,” no one knows whether we’re talking about Oracle’s drag-n-drop environment for creating JSR-168 portlets (WebCenter Suite) or Plumtree’s Foundation/Portal (WebCenter Interaction). So, frankly, you can call it whatever you want, but we’re still gonna call it Plumtree so that people will know WTF we’re talking about.]
So, you want to write a Plumtree Crawler Web Service (CWS), eh?
Here are ten tips that I learned the hard way (i.e. by NOT doing them):
1. Don’t actually build a crawler
2. If you must, at least RTFM
3. “Hierarchyze” your content
4. Test first
5. When testing, use the Service Station 2.0 (if you can find it)
6. Code for thread safety
7. Write DRY code (or else)
8. Don’t confuse ChildDocument with Document
9. Use the named methods on DocumentMetaData
10. RTFM (again)
Before I get into the gory details, let me give you some background. First off, what’s a CWS anyway? It’s the code behind what Oracle now calls Content Services, which spider through various types of content (for lack of a better term) and import pointers to those bits of content into the Knowledge Directory. This ability to spider content and normalize its metadata is one of the most underrated features in Plumtree. (FYI, it was also the first feature we built and arguably, the best.)
Each bit of spidered content is called a Document or a Card or a Link depending on whether you’re looking at the product, the API or the documentation, respectively. It’s important to realize that CWSs don’t actually move content into Plumtree; rather, they store only pointers/links and metadata and they help the Plumtree search engine (known under the covers as Ripfire Ignite) build its index of searchable fulltext and properties.
Today, Plumtree ships with one OOTB CWS that knows how to crawl/spider web pages. Not surprisingly, it’s known as the Web Crawler. Don’t let the name mislead you: the web crawler can actually crawl almost anything, as I explain in my first tip, which is:
Don’t actually build a crawler.
But I’m getting ahead of myself.
So, back to the background on crawlers. Oracle ships five of ’em, AFAIK: one for Windows files, one for Lotus Notes databases, one for Exchange Public Folders, one for Documentum and one for Sharepoint. Their names give you blatantly obvious hints at what they do, so I won’t get into it. Along with the OOTB crawlers, Oracle also exposes a nice, clean API for writing crawlers in Java or .NET. (If you really want to push the envelope, you can try writing a crawler in PHP, Python, Ruby, Perl, C++ or whatever, but it’s hard enough to write one in Java or .NET, so I wouldn’t go there. If you do, though, make sure that your language has a really good SOAP stack.)
So, after reading this, you still want to write a crawler, yes?
Let’s get into my Top Ten Tips:
1. Don’t actually build a crawler
Yes, you really don’t want to go here. Building crawlers is not that hard, as there’s a clean, well documented API. However, getting them to work is a whole other story.
Instead, you almost always want to reuse the OOTB web crawler. Maybe you don’t think that will work for your content. Or perhaps you’re dealing with some awful client-server or greenscreen POS that doesn’t have a web UI. Either way, you may still be able to use the web crawler.
How? Well, the web crawler can crawl almost anything using something that we used to call the Woogle Web. Think of it this way. Say you want to crawl a database. Perhaps that database contains bug reports. Rather than waste your time trying to write a database crawler, just write an index.jsp (or .php, .aspx, .html.erb, .you-name-it) page that does something like select id from bugs and dumps out a list of all the bugs in the database. Then, hyperlink each one to a detail page (that’s essentially a select * from bugs where id = ? query). Your index page can be sinfully ugly. However, put some effort into your detail pages, making them look pretty AND using meta tags to map each field in the database to its value.
Then, simply point the OOTB web crawler at your index page, tell it not to import your index page, map your properties to the appropriate meta tags, crawl at a depth of one level, and get yourself a nice cup of coffee. By the time you get back, the OOTB web crawler will have created documents/links/cards for every bug with links to your pretty detail pages and every bit of text will be full-text searchable. So will every property, meaning that you can run an advanced search on bug id, component, assignee, severity, etc.
At Plumtree, we used to call this a Woogle Web. It may sound ridiculous, but a Woogle Web is a great way to crawl virtually anything without lifting a finger.
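Here’s a minimal sketch of the trick in Python (the `bugs` table, its columns, and the file names are all made up for illustration): a throwaway script that dumps an ugly index page plus one meta-tagged detail page per row, ready for the OOTB web crawler to spider at a depth of one.

```python
import sqlite3
from pathlib import Path

def generate_woogle_web(db_path, out_dir):
    """Generate a crawlable index page plus one meta-tagged detail page per bug."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT id, title, component, severity FROM bugs").fetchall()

    # The index page can be sinfully ugly; the crawler only follows its links.
    links = "\n".join(
        f'<li><a href="bug-{bug_id}.html">Bug {bug_id}</a></li>'
        for bug_id, _, _, _ in rows)
    (out / "index.html").write_text(
        f"<html><body><ul>\n{links}\n</ul></body></html>")

    # Each detail page maps a database field to a meta tag, which the
    # crawler can pick up as a searchable property.
    for bug_id, title, component, severity in rows:
        (out / f"bug-{bug_id}.html").write_text(
            "<html><head>\n"
            f"<title>Bug {bug_id}: {title}</title>\n"
            f'<meta name="component" content="{component}">\n'
            f'<meta name="severity" content="{severity}">\n'
            f"</head><body><h1>{title}</h1></body></html>")
    conn.close()
```

Point the web crawler at `index.html`, tell it not to import the index page itself, and map your portal properties to the `component` and `severity` meta tags.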
However, a Woogle Web won’t work for everything. If you’re dealing with a product where you can’t even get your head around the database schema AND you have a published Java, .NET or Web Service API, then you might want to think about writing a custom crawler.
2. If you must, at least RTFM
If you’re anything like me, reading the manual is what you do after you’ve tried everything else and nothing has worked out. In the case of Plumtree crawlers, I recommend swallowing your pride (at least momentarily) and reading all of the documentation, including their “tips” (which are totally different from and not nearly as entertaining as my tips, but equally valuable).
Once you’re done reading all the documentation, you might also want to consult Tip #10.
3. “Hierarchyze” your content
Um, yeah, I know “hierarchyze” isn’t a word. But since crawlers only know how to crawl hierarchies, if your data aren’t hierarchical, you darn well better figure out how to represent them hierarchically before you start writing code. Even if you don’t think this step is necessary, just do it because I said so for now. You’ll thank me later.
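To make the idea concrete, here’s a small, hypothetical Python helper (the function name and record shape are mine, not Plumtree’s) that folds a flat list of records into nested containers, which is the shape a crawler expects to walk:

```python
def hierarchyze(records, path_keys):
    """Fold a flat list of record dicts into nested dicts keyed by
    path_keys, so a crawler can walk the result as containers with
    leaf documents at the bottom."""
    root = {}
    for rec in records:
        node = root
        # Descend (creating containers as needed) along the key path.
        for key in path_keys:
            node = node.setdefault(rec[key], {})
        # Leaf documents live under a reserved key in their container.
        node.setdefault("_documents", []).append(rec)
    return root
```

For example, `hierarchyze(bugs, ["component", "severity"])` turns a flat bug list into a component/severity folder tree that a hierarchical crawl can traverse.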
4. Test first
Don’t even try to write your crawler and then run it and expect it to work. Ha! Instead, write unit tests for every modular bit of code that you throw down. To the extent that it’s possible, write these tests first. It’ll save your butt later.
5. When testing, use the Service Station 2.0 (if you can find it)
When you do finally get around to integration testing your crawler, it’ll save you a lot of time if you use the Service Station 2.0. However, it may take you a long time to get it, so start the process early.
Unlike virtually every other product Oracle distributes, Service Station is 1) free and 2) not available for download. Yes, you read that correctly.
To get it, you need to contact support. I called them and told them I needed it and after two weeks I got nothing but a confused voicemail back saying that support doesn’t fulfill product orders. Um, yeah. So I called back and literally begged to talk to someone who actually knew what this thing was. Then I painstakingly explained why I couldn’t get it from edelivery (because it’s not there) nor from commerce.bea.com (because it’s permanently redirecting to oracle.com) nor from one.bea.com (because there it says to contact support). So, after my desperate pleas and 15 calendar days of waiting, I got an e-mail with an FTP link to download the Service Station 2.0.
After installing this little gem, my life got a lot easier. Now instead of testing by launching a Plumtree job to kick off the crawler (and then watching it crash and burn), I could use the Service Station to synchronously invoke each method on the crawler and log the results.
Another handy testing tool is the PocketSOAP TCPTrace utility. (It’s also very handy for writing Plumtree portlets.) You can set it up between the Service Station and your CWS and watch the SOAP calls go back and forth in clear text. Very nice.
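TCPTrace itself is a little Windows utility, but the idea is simple enough to sketch: a pass-through proxy that records every chunk it forwards. This toy Python version (the names and structure are my own, not PocketSOAP’s) shows the principle of sitting between the Service Station and your CWS:

```python
import socket
import threading

def run_trace_proxy(listen_sock, target_addr, log):
    """Accept one client on listen_sock, connect to target_addr, and
    pump bytes in both directions, appending every chunk to `log` so
    the cleartext SOAP traffic can be inspected."""
    client, _ = listen_sock.accept()
    upstream = socket.create_connection(target_addr)

    def pump(src, dst, tag):
        while True:
            chunk = src.recv(4096)
            if not chunk:  # peer closed its sending side
                try:
                    dst.shutdown(socket.SHUT_WR)
                except OSError:
                    pass
                return
            log.append((tag, chunk))  # capture before forwarding
            dst.sendall(chunk)

    threads = [
        threading.Thread(target=pump, args=(client, upstream, "request")),
        threading.Thread(target=pump, args=(upstream, client, "response")),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    client.close()
    upstream.close()
```

Every SOAP request and response passes through untouched, but you get a readable log of both directions.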
6. Code for thread safety
So, as the documentation says (and as I completely ignored), crawlers are multithreaded. The portal will launch several threads against your code and, unless you code for thread safety, these threads will proceed to eat your lunch.
Coding for thread safety means not only that you need to synchronize access to any class-level variables, but also that you must use only threadsafe objects (e.g. in Java, use Vector or a synchronized wrapper instead of a bare ArrayList).
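Here’s a sketch of what that means in practice (Python for brevity; your real CWS would be Java or .NET, and the class and method names here are hypothetical): shared crawl state guarded by a lock so concurrent attachToDocument-style calls can’t corrupt it.

```python
import threading

class CrawlState:
    """Class-level state shared by every crawler thread; all mutation
    goes through a lock so concurrent calls can't corrupt it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._seen = set()

    def mark_seen(self, doc_id):
        """Record a document id; return True only the first time it is seen.
        Without the lock, two threads could both see the id as unseen
        and both return True."""
        with self._lock:
            if doc_id in self._seen:
                return False
            self._seen.add(doc_id)
            return True
```

The check-and-add must happen inside one lock acquisition; locking each operation separately would still leave the race.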
7. Write DRY code (or else)
Even though you’re probably writing your CWS in Java or .NET, stick to the ol’ Ruby on Rails adage:
Don’t Repeat Yourself.
Say for example, that you need to build a big ol’ Map of all the Document objects in order to retrieve a document and send its metadata back to the mothership (Plumtree). It’s really important that you don’t build that map every time IDocumentProvider.attachToDocument is called. If you do, your crawler is going to run for a very very very long time. Crawlers don’t have to be super fast, but they shouldn’t be dogshit slow either.
As a better choice, build the Map the first time attachToDocument is called and store it as a class-level variable. Then, with each successive call to attachToDocument, check for the existence of the Map and, if it’s already built, don’t build it again. And don’t forget to synchronize not only the building of the Map, but also the access to the variable that checks whether the Map exists or not. Like I said, this isn’t a walk in the park. (See Tip #1.)
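The pattern from this tip looks like the following Python sketch (class names are mine, not Plumtree’s): build the expensive map once, under a lock, and have every later call reuse it.

```python
import threading

class DocumentIndex:
    """Builds the id -> document map once, on first use, no matter how
    many crawler threads call lookup() concurrently."""

    def __init__(self, load_all_documents):
        self._load_all = load_all_documents  # expensive: hits the backend
        self._lock = threading.Lock()
        self._map = None

    def lookup(self, doc_id):
        if self._map is None:              # cheap check before locking
            with self._lock:
                if self._map is None:      # re-check: another thread may
                    self._map = self._load_all()  # have built it already
        return self._map[doc_id]
```

The re-check inside the lock is the important part: without it, two threads that both saw `None` would each rebuild the map, and your crawl would slow to, well, a crawl.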
8. Don’t confuse ChildDocument with Document
IContainer has a getChildDocuments method. This, contrary to how it looks on the surface, does not return IDocument objects. Instead, it expects you to return an array of ChildDocument objects. These, I repeat, are not IDocument objects. Instead, they’re like little containers of information about child documents that Plumtree uses so that it knows when to call IDocumentProvider.attachToDocument. It is that call (attachToDocument) and not getChildDocuments that actually returns an IDocument object, where all the heavy lifting of document retrieval actually gets done.
You may not understand this tip right now, but if you drop by and read it again after you’ve tried to code against the API for a few hours, it should make more sense.
9. Use the named methods on DocumentMetaData
This one really burned me badly. I saw that DocumentMetaData had a “put” method. So, naturally, I called it several times to fill it up with the default metadata. Then I wasted the next two hours trying to figure out why Plumtree kept saying that I was missing obvious default metadata properties (like FileName). The solution? Call the methods that actually have names like setFileName, setIndexingURL, etc. — don’t use put for standard metadata. Instead, only use it for custom metadata.
10. RTFM (again)
I can’t stress the importance of reading the documentation enough.
If you think you understand what to do, read it again anyway. I guarantee that you’ll set yourself up for success if you really read and thoroughly digest the documentation before you lift a finger and start writing your test cases (which of course you’re going to write before you write your code, right?).
* * *
As always, if you get stuck, feel free to reach out to the Plumtree experts at bdg. We’re here to help. But don’t be surprised if the first thing we do is try to dissuade you from writing a crawler.
Have a safe and happy Halloween and don’t let Plumtree Crawlers scare you more than they should.
I’m headed to Oracle Open World on Saturday, 9/20. Here’s my proposed schedule. Like I said earlier, I’m probably going to spend most of my time in the unconference anyway, but here’s what looked interesting to me.
[Editor’s note: I’ve removed the gCal from here because it defaults to the current date, so it’s not really usable anymore, now that Oracle Open World 2008 is a thing of the past.]
If you’d prefer, you can also access this schedule in XML or ICAL format.
I’m sitting in my third Oracle Fusion Middleware briefing, this one at the Willard Hotel in Washington, DC. Thomas Kurian has been going through all the products in the Oracle stack in excruciating detail.
First let me say this: Thomas Kurian is a really smart guy. He holds a BS in EE from Princeton summa cum laude (that’s Latin for really fucking good). He holds an MBA from the Stanford GSB. He’s been working for Oracle forever and he even knows how to pronounce Fuego (FWAY-go). I’m dutifully impressed.
Unfortunately, all those academic credentials and 10+ years in the industry are barely the minimum requirement for getting your head around the middleware space. Either I don’t have enough (0) letters after my name, or I just don’t get it.
For starters, there are way too many products — the middleware space is filled with “ceremonious complexity” (to quote Neal Ford). App servers, data services layers, service buses, web service producers and consumers — even portals, content management and collaboration have been sucked into this space. Don’t get me wrong: the goals of the stack are admirable — middleware tries to glue together all the heterogeneous, fragmented systems in the enterprise. Everyone knows that most enterprises are a mess of disparate systems and they need this glue to provide unified user experiences that hide the complexity of these systems from the people who have to use them. That makes the world a better place for everybody.
That was also, not coincidentally, one of Plumtree’s founding principles and the concept — integrating enterprise systems to improve the user experience — has guided my career since I got my lowly undergraduate degree in Computer Science from Stanford in 1998.
So, it’s a good concept, however, if you’re considering middleware because you’re trying to clean up the mess that your enterprise has become, you need to ask yourself the following fundamental question:
Does middleware add to or subtract from the overall complexity of your enterprise?
Your enterprise is already insanely complicated. You’ve got Java, .NET, perhaps Sharepoint, maybe an enterprise ERP system like SAP and say, an enterprise open source CRM system like SugarCRM or a hosted service like SalesForce.com. The bleeding edge IT folks and even (god forbid) people outside of IT are installing wikis written in PHP (e.g. MediaWiki) along with collaborative software like Basecamp written in Ruby on Rails. I’m not even going to mention all the green-screen mainframe apps still lurking in the enterprise — wait, I just did. This veritable cornucopia of systems just scratches the surface of what exists at many large — and even some mid-to-small-sized companies — today.
So clearly there’s a widespread problem. But what’s the solution?
At the end of his impressive presentation, I asked Thomas the following question:
“How can middleware from Oracle/BEA help you make sense of the fragmented, heterogeneous enterprise when you have existing collaborative (web 2.0) technologies written in PHP, Ruby on Rails, etc. running rampant throughout IT and beyond?”
(Okay, so I wasn’t exactly that pithy, but it was something close to that.)
His Aladdin-esque answer came in the form of three choices:
“Take control of” and “centralize” your IT systems by replacing everything with Oracle Web Center Spaces
Ditto by migrating everything to UCM (Stellent)
Build a services framework and aggregate everything in one of four ways:
Use a Java transaction layer (JSR 227)
Use a portlet spec like JSR 168 or WSRP
Build RESTful web services
Use the WebPart adapter for Sharepoint
I like to call answers one and two “The SAP Approach.” In other words, we’re SAP, we’re German, wir geben keinen Scheiß about your existing enterprise software, you’re now going to do it the SAP way (or the highway).
Will companies buy into that? Some companies may. Many will not. ERP is a well understood space, so this approach has worked for SAP. Enterprise 2.0 is not terribly well understood, so that means even more diversity in the enterprise software milieu.
So the only approach that I believe in is #3: integrate. Choose the right tool for the right problem, e.g. the WebPart adapter if you’re using Sharepoint. Use REST when appropriate, e.g. when you need a lightweight way to send some JSON or XML across the wire between nonstandard or homegrown apps. Use JSR 168/286 for your Java applications. Even use SOAP if the backend application already supports it.
Keep things loosely coupled so that you can plug different components in and out as needed.
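As a trivial illustration of the REST option (Python standard library only; the endpoint path and payload are invented): one system exposes a JSON resource, and any client, in any language, can consume it with a plain HTTP GET. No stack, no bus, no ceremony.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    """A single read-only resource: GET /status returns a JSON document."""

    def do_GET(self):
        body = json.dumps({"system": "crm", "healthy": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve_status(port=0):
    """Start the service on an ephemeral port; return (server, port)."""
    server = HTTPServer(("127.0.0.1", port), StatusHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

The consumer side is just as light: `urllib.request.urlopen(...)` and `json.loads(...)`, whether the caller is Java, .NET, PHP or a shell script with curl.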
This requires a lot of development — the glue — but, I don’t think there’s any way around that. (You should take that with a grain of salt, because my company has been supplying the government and the commercial world with exactly that kind of development expertise since 2002.)
As for the overarching, user facing “experience” or “interaction” product — that’s where I’ve always used Plumtree (or AquaLogic Interaction).
Will I start using Web Center Spaces? At this point, I’m still not sure.
If it can be used as the topmost bit of the architectural stack to absorb and surface all the enterprise 2.0 software that my customers are running, then perhaps. If it’s going to replace all the enterprise software that my customers are running, then no way José.
This conundrum really opens up a new market for enterprise software: I call it “Middleware for the REST of us” or MMM (not M&M, 3M or M3, because they’re already taken): “Mid-Market Middleware” — similar to the way 37signals approaches (with a great deal of hubris and a solid dose of arrogance) the “Fortune Five Million” by marketing their products toward the whole long-tail of small and medium-sized companies. Maybe the world needs a RESTful piece of hardware that just aggregates web services and spits out a nice UI, kind of like the “Plumtree in a Box” idea that Michael Young (former Plumtree Chief Architect, now Chief Architect at RedFin) had back in the last millennium.
Oracle Web Center Spaces might be the right choice for some very large enterprises, but what about the REST of us?
This is a 30-minute clip from the general session at BEA Participate from back in May. Jay Simons and I demo the social application that bdg built for the conference, known now as The Social Collective.