Archive for the ‘BPEL’ Category

Oracle OpenWorld: The Big BPEL-ESB-OSB cook-off

Full disclosure: I am attending Oracle OpenWorld courtesy of Oracle.

The speaker in this session is Andreas Chatziantoniou from Accenture.  He’s discussing the overlap between Oracle’s BPEL, ESB (legacy Oracle), and OSB (BEA ESB) products. 

First up is BPEL. His slide states that BPEL should be used for system-to-system or service orchestration, when human workflow is needed, and when there are parallel request-response patterns. The next slide says that BPEL should not be used for complex data transformations, should not be used as a general-purpose programming language, and should not be used as a business modeling tool. At first glance, this may seem strange, but I think it's more an indication that BPEL is something that gets generated by your tool, not something people should be editing directly. This point could be made more clearly. He is also emphasizing that you should not use BPELJ (embedded Java in BPEL).

He’s now talking about “dehydration,” a term I had not heard before. He’s using it to refer to writing a process instance’s state to disk so it can be restored at a later time. He stated that this is a natural part of BPEL, but not part of ESB/OSB. I can live with that. A service bus shouldn’t be doing dehydration any more than a network switch should be.
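
To make the term concrete, here is a minimal conceptual sketch (the class and method names are mine, not Oracle's implementation) of what dehydration amounts to: persist the instance's state at a long-running wait, free up the resources, and restore the instance when the reply or timer arrives.

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch only: "dehydration" is persisting a process instance's state at a
// long wait so the engine can release threads and memory, then "rehydrating" it later.
public class ProcessInstance implements Serializable {
    private final String instanceId;
    private final Map<String, String> variables = new HashMap<>();

    public ProcessInstance(String instanceId) {
        this.instanceId = instanceId;
    }

    public void dehydrate(File store) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(store))) {
            out.writeObject(this); // state is now safe on disk; the in-memory instance can be evicted
        }
    }

    public static ProcessInstance rehydrate(File store) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(store))) {
            return (ProcessInstance) in.readObject(); // resume where the process left off
        }
    }
}
```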

Now on to ESB/OSB. His slide says they should be used for loose coupling, location transparency, mediation, error handling, transformation, load balancing, security, and monitoring. Good list, although it does include the two grey areas of mediation and transformation. You need to further define what types of mediation and transformation should and should not be done. The way I’ve phrased it is that ESBs should be about standards-in and standards-out. As long as you’re mediating and transforming between standards (and the same standards on both sides), it’s a good fit. If you are transforming between external and internal standards, as is the case in an external gateway, consider whether your ESB is the right fit, since these mappings can get quite complicated. Those are my words, not the speaker’s; sorry, this is something I’ve thought a lot about.

He’s now talking about mediation, specifically referring to a component that existed in Oracle’s legacy ESB. He said it connects components in a composite application. To me, this does not belong in a service bus, and in the case of Oracle Service Bus, it does not. He did not go into more detail on the type of mediation (e.g. security token mediation, message schema mediation, transport mediation). As I said previously, the term needs to be narrowed before you can make an appropriate decision on whether your mediation is really new business logic that belongs on a development platform, or mediation between supported standards that can be done by your connectivity infrastructure.

On transformation, Andreas focused more on what the platforms can do, rather than on what they should do, calling out that XML transformations via XQuery, XSLT, etc. can be done equally well on any of the platforms. His advice was to do it in the service bus and avoid mixed scenarios. I’m really surprised at that, given how CPU-intensive transformations and mappings can be. His point was that in a very large (50-60 step) BPEL process, handling transformations could get ugly. I see the logic in this, but I think if you do the analysis of where those transformations are needed, it may only be one activity, best handled by the platform running that activity itself.
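
For context, the kind of transformation being debated is a plain XSLT (or XQuery) step. A minimal JAXP sketch, with hypothetical file names, shows both why it is portable across the platforms and why it is CPU-intensive: every message gets parsed, mapped, and re-serialized.

```java
import javax.xml.transform.Templates;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class TransformStep {
    public static void main(String[] args) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        // Compile the stylesheet once; Templates is thread-safe and can be reused per message.
        Templates stylesheet = factory.newTemplates(new StreamSource("order-to-invoice.xsl"));
        // Each message still has to be parsed, mapped, and serialized; that is the CPU cost,
        // and it is the same whether this runs in BPEL PM, the legacy ESB, or OSB.
        stylesheet.newTransformer()
                  .transform(new StreamSource("order.xml"), new StreamResult("invoice.xml"));
    }
}
```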

Overall, the speaker spent too much time discussing what the products can do, calling out overlaps, and not enough time on what they should do. There was some good advice for customers, but I think it could have been made much simpler. My take on this whole debate has always been straightforward. A BPEL engine is a service development platform. You use it to build new services that are most likely composites of existing services. I like to think of it as an orchestrated service platform. As I previously said, though, you don’t write BPEL. You use your tool’s graphical modeler, and behind the scenes, it may (or may not) be generating BPEL.

A service bus is a service intermediary. You don’t use it to build services; you use it to connect service consumers and service providers. Unfortunately, in trying to market the service bus, most vendors succumbed to feature creep, whether by building their ESB from a legacy EAI product or by adding development-like features to win more sales. Think of it as a very intelligent router, meant to be configured by Operations, not coded by developers.

Oracle OpenWorld: An Architect’s View of the New Features of Oracle SOA Suite 11g Release 1

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

The first wave of industry standardization was around function-specific standards in areas causing headaches in the integration space. They’re emphasizing the role of SCA in standardizing the service platform, much as Java EE played a role in the evolution of the application server. I’ll be honest, I’m still not a big SCA fan. I know Oracle is, though. The one good thing being shown is that the hosting environments can be managed in a single, unified way, regardless of whether a service is hosted in BPEL PM or WebLogic. As long as there’s good tooling that hides the various SCA descriptors, this is a good thing.

Now they are talking about the Event Delivery Network (EDN). It’s nice to see a discussion of fundamentals rather than an attempt to jump straight into a CEP discussion. They’re talking about having an event catalog, utilizing an EDL (event description language), and easily connecting event producers and subscribers. This is a good step forward, in my opinion. It may actually get people to think about events as first-class citizens in the same way as services.
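
I don’t have the EDL syntax in front of me, so treat this as a rough conceptual sketch (the names are entirely my own, not Oracle’s EDN API) of what “events as first-class citizens” means: a catalog of named events that producers publish to and consumers subscribe to by name.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Rough conceptual sketch: a catalog of named events, decoupling whoever raises an
// event from whoever reacts to it. The event name "OrderBooked" is hypothetical.
public class EventCatalog {
    private final Map<String, List<Consumer<Object>>> subscribers = new HashMap<>();

    public void subscribe(String eventName, Consumer<Object> handler) {
        subscribers.computeIfAbsent(eventName, k -> new ArrayList<>()).add(handler);
    }

    public void publish(String eventName, Object payload) {
        subscribers.getOrDefault(eventName, List.of())
                   .forEach(handler -> handler.accept(payload));
    }

    public static void main(String[] args) {
        EventCatalog catalog = new EventCatalog();
        catalog.subscribe("OrderBooked", payload -> System.out.println("start fulfillment: " + payload));
        catalog.publish("OrderBooked", "order-42"); // producer only needs to know the catalog entry
    }
}
```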

Now, they’re on to Oracle Human Workflow. It is all task-based, with property-based configuration. The routing of tasks can be entirely dynamic, rather than based on static rules. It has integration with Oracle Business Rules. It publishes events on the EDN (e.g. onTaskAssigned, onTaskModified, etc.). Nice to see them eating their own dog food with the use of EDN.

They’ve now moved on to Service Data Objects. They’ve introduced entity variables into BPEL to allow working with SDOs.

Additional subjects in this session included Metadata Services (MDS) and the Dev-Test-Prod problem (changing environment-specific parameters as code is promoted through environments). On the latter, there are a large number of parameters that can now be modified via a “c-plan” (configuration plan) applied at deployment time. Anything that makes this easier is a good thing, in my opinion.

Think Orchestration, not BPEL

I was made aware of this response on the VOSibilities blog, from Alex Neihaus of Active Endpoints, to a podcast and post from David Linthicum. VOS stands for Visual Orchestration System. Alex took Dave to task over some of the “core issues” that Dave had listed in his post.

I read both posts and listened to Dave’s podcast, and as is always the case, there are elements of truth on both sides. Ultimately, I feel that the wrong question was being asked. Dave’s original post has a title of “Is BPEL irrelevant?” and the second paragraph states:

OK, perhaps it’s just me but I don’t see BPEL that much these days, either around its use within SOA problem domains I’m tracking, or a part of larger SOA strategies within enterprises. Understand, however, that my data points are limited, but I think they are pretty far-reaching relative to most industry analysts’.

To me, the question is not whether BPEL is relevant or not. The question is: how relevant is orchestration? When I first learned about BPEL, I thought, “I need a checkbox on my RFPs/RFIs to make sure import/export is supported,” but that was it. I knew the people working with these systems would not be hand-editing the BPEL XML; they’d be working with a graphical model. To that end, the BPMN discussion was much more relevant than BPEL.

Back to the question, though. If we start talking about orchestration, we get into two major scenarios:

  1. The orchestration tool is viewed as a highly productive development environment. The goal here is not to externalize processes, but rather to optimize the time it takes to build particular solutions. Many of the visual orchestration tools provide a significant number of “actions” or “adapters” that offer a visual metaphor for very common operations such as data retrieval or ERP integration (the sketch after this list shows the kind of composite being assembled). The potential exists for significant productivity gains. At the same time, many of the things that fall into this category aren’t what I would call frequently changing processes, so the whole value-add of being able to change the process definition more efficiently really doesn’t apply.
  2. The orchestration tool is viewed as a facility for process externalization. This is the scenario where the primary goal is flexibility in implementing process changes rather than developer productivity. I haven’t seen this scenario as often. In other words, the size of the space of “rapidly changing business processes” is debatable. I certainly have seen changes to business rules, but not necessarily to the process itself. On the other hand, many processes aren’t well defined to begin with, so the culture is merely reacting to change: we can’t say what we’re changing from or to, but we know that something in the environment is different.
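
To ground scenario 1, here is a hand-coded Java sketch (the service names and calls are hypothetical) of the kind of composite an orchestration tool lets you assemble visually: two parallel request-response calls followed by a step that combines the results.

```java
import java.util.concurrent.CompletableFuture;

// A hypothetical, hand-coded version of what an orchestration tool models on a canvas:
// two parallel request-response invocations, then a step that combines their results.
public class QuoteComposite {
    static String fetchInventory(String sku) { return "in-stock"; } // stand-in for a service call
    static String fetchPricing(String sku)   { return "19.99"; }    // stand-in for a service call

    public static void main(String[] args) {
        CompletableFuture<String> inventory = CompletableFuture.supplyAsync(() -> fetchInventory("SKU-1"));
        CompletableFuture<String> pricing   = CompletableFuture.supplyAsync(() -> fetchPricing("SKU-1"));

        // Join the parallel branches and build the composite response.
        String quote = inventory.thenCombine(pricing,
                (stock, price) -> "availability=" + stock + ", price=" + price).join();
        System.out.println(quote);
    }
}
```

The productivity claim of the visual tools is that you drag these steps onto a canvas and wire them together instead of writing and maintaining this plumbing by hand.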

So what’s my opinion? I still don’t get terribly excited about BPEL, but I definitely think orchestration tools are needed for two reasons:

  1. Developer productivity
  2. Integrated metrics and visibility

Most of the orchestration tools out there are part of a larger BPM suite, and the visibility they provide into how long activities take is a big positive in my book (but I’ve always been passionate about instrumentation and management technologies). As for process externalization, the jury is still out. I think there are some solid domains for it, just as there are for things like complex event processing, but it hasn’t hit the mainstream at the business level yet. It will continue to grow outward from the developer productivity standpoint, but that path is heavily focused on IT system processes, not business processes (just as OO is widely used within development, but you don’t see non-IT staff designing object models very often). As for BPEL, it’s still a mandatory checkbox, and as we see the separation of modeling and editing tools from the execution engine, its importance may grow. At the same time, how many organizations have separate Java tooling for writing standalone code versus writing Java code for SAP? We’ve been dealing with that question for far longer, so I’m not holding my breath waiting for a clean separation between tools and the execution environment.

Oslo, DSLs, MDD, and more…

It’s amazing how long it can take for some things to become a reality. Back in the cornfields of central Illinois in the early ’90s, I was in graduate school working for a professor who was researching visual programming languages. While the project was focused on building tools for visual programming languages, and not as much on visual programming itself, it certainly was interesting. Here we are, almost 15 years later, and what’s the latest news from Microsoft? Visual programming languages. Okay, they’re calling it model-driven development (which isn’t new either).

It will be very interesting to see if Microsoft can succeed in this endeavor. I suspect that they will, simply because Microsoft has a unique relationship with its development community, something that none of the Java players have. While there were competing tools for quite a few years, you’d be hard-pressed to find an organization doing Microsoft development that isn’t using Visual Studio. You’d also be hard-pressed to find one that isn’t leveraging the .NET framework. While the Java community has Eclipse, there’s still enough variation in the environment through the extensive plugin network that it’s a different beast. So, if Microsoft emphasizes model-driven development in Visual Studio, it’s safe to say that a good number of developers will follow suit. As a point of comparison, BEA did the same thing over five years ago when it introduced WebLogic Workshop in February of 2002. This article stated:

BEA is calling WebLogic Workshop the first integrated application development environment with visual interfaces to Java and J2EE… “We’re radically changing the way people development applications,” said Alfred Chuang, president and CEO of BEA… Chuang said WebLogic Workshop could improve the application development and deployment cycle by as much as 100 times.

Hmmm… I’m getting a sense of déjà vu. I’m hopeful that Microsoft can achieve greater success than BEA did. In point of fact, I don’t think BEA’s results had anything to do with the quality of its offering. At the time, I had someone working for me look into Workshop. I was very surprised when the answer was not, “This is just a bunch of fluff, we need to keep writing the code,” but instead, “Wow, this really did improve my productivity.” Unfortunately, many developers will take their text-based editor to the grave with them so they can continue to write code.

In a similar vein, Phil Windley had a post on domain-specific languages, or DSLs. He points out that he is “a big believer in notation. Using the right notation to describe and think about a problem is a powerful tool.” Further on, he states, “GPLs (General Purpose Languages) are, by definition, general. They are designed to solve a wide host of problems outside the domain I care about. As a result, expressing something I can put in a line of code in a DSL takes a dozen in most GPLs.” Now, Phil focuses on textual notations in his post, but I’d argue that there’s no reason the notation can’t be symbolic. It may make the exercise of writing a parser more difficult, but then again, I’m pretty sure the work I did back in grad school wound up having an in-memory representation of what was shown graphically in the editor that would have been pretty easy to parse. Of course, the example I used in my thesis wound up being music notation, which didn’t have such an internal representation, but I’m getting off topic. If there are any grad students reading this blog, I think a parser for music notation is an interesting problem because it doesn’t follow the procedural, flow-chart-like approach we all too often see.
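
To make Phil’s point concrete with an invented example (the DSL line and the scenario are mine, not his): a routing rule that might be a single line in a hypothetical DSL versus the general-purpose Java it stands in for.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Invented illustration. In a hypothetical routing DSL the rule might be one line, e.g.:
//     route orders where total > 10000 to "manual-review"
// In a general-purpose language, the same intent needs a class, a data structure,
// a loop, and the surrounding plumbing:
public class OrderRouter {
    public static String routeFor(double orderTotal) {
        return orderTotal > 10_000 ? "manual-review" : "auto-approve";
    }

    public static void main(String[] args) {
        Map<String, Double> orders = new LinkedHashMap<>();
        orders.put("A-1", 12_500.0);
        orders.put("A-2", 40.0);
        for (Map.Entry<String, Double> order : orders.entrySet()) {
            System.out.println(order.getKey() + " -> " + routeFor(order.getValue()));
        }
    }
}
```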

Anyway, back to the point. I agree 100% with Phil that a custom notation can make a smaller subset of programming tasks far more efficient. So much of what we do in corporate IT is just data-in/data-out operations, and a language tuned for this, whether graphical or textual, can help. The trend right now is toward graphical tools, toward model-driven development. There will always be a need for GPLs, and we’ll continue to have them. But given the smaller subset of typical corporate IT problems, it’s about time we start making things more efficient through DSLs and model-driven development. Let’s just hope it doesn’t take another 15 years.

Latest SOA Insights Podcast

Dana Gardner has posted the latest episode of his Briefings Direct: SOA Insights series. In this episode, the panelists (Tony Baer, Jim Kobielus, Brad Shimmin, and myself) along with guest Jim Ricotta, VP and General Manager of Appliances at IBM, discuss SOA Appliances and the recent announcements around the BPEL4People specification.

This conversation was particularly enjoyable for me, as I’ve spent a lot of time understanding the XML appliance space. As I’ve blogged about in the past, there’s a natural convergence between software-based intermediaries like proxy servers and network appliances. I’ve learned a lot working with my networking and security counterparts to come up with the right solutions. The other part of the conversation, on BPEL4People, was also fun, given my interest in human-computer interaction. I encourage you to give it a listen, and feel free to send me any questions you may have, or suggestions for topics you’d like to see discussed.

To ESB or not to ESB

It’s been a while since I posted something more infrastructure related. Since my original post on the convergence of infrastructure in the middle was reasonably popular, I thought I’d talk specifically about the ESB: Enterprise Service Bus. As always, I hope to present a pragmatic view that will help you make decisions on whether an ESB is right for you, rather than coming out with a blanket opinion that ESBs are good, bad, or otherwise.

From information I’ve read, there are at least five types of ESBs that exist today:

  1. Formerly EAI, now ESB
  2. Formerly BPM, now ESB
  3. Formerly MOM/ORB, now ESB
  4. The WS-* Enabling ESB
  5. The ESB Gateway

Formerly EAI, now ESB

There’s no shortage of ESB products that people will claim are simply rebranding efforts of products formerly known as EAI tools. The biggest thing to consider with these is that they are developer tools. Their strengths are going to lie in their integration capabilities, typically in mapping between schemas and having a broad range of adapters for third party products. There’s no doubt that these products can save a lot of time when working with large commercial packages. At the same time, however, this would not be the approach I’d take for a load balancing or content-based routing solution. Those are typically operational concerns where we’d prefer to configure rather than code. Just because you have a graphical tool doesn’t mean it doesn’t require a developer to use it.

Formerly BPM, now ESB

Many ESBs are adding orchestration capabilities, a domain typically associated with BPM products. In fact, many BPM products are simply rebrandings of EAI products, and there’s at least one I know of that is now also marketed as an ESB. There’s definitely a continuum here, and the theme is much the same: a graphical, schema-driven modeling tool, possibly with built-in adapter technology, definitely with BPEL import/export, but one that still requires a developer to leverage it properly.

Both of these categories are great if your problem set centers on orchestration and building new services more efficiently. If you’re interested in sub-millisecond operations for security, routing, throttling, instrumentation, etc., recognize that these solutions are primarily targeted at business processing and integration, not your typical “in-the-middle” non-functional concerns. It is certainly true that, from a modeling perspective, the graphical representation of a processing pipeline is well suited to non-functional concerns, but it’s the execution performance that really matters.

Formerly MOM/ORB, now ESB

These products bring things much closer to the non-functional world, although as more and more features are thrown under the ESB umbrella, they may start looking more like one of the first two approaches. In both cases, these products try to abstract away the underlying transport layer. In the case of MOM, all service communication is put onto a messaging backbone. The preferred model is to leverage agents/adapters on both endpoints (e.g. having a JMS client library for both the consumer and the provider), potentially not requiring any centralized hub in the middle. The scalability of messaging systems certainly can’t be denied; the bigger concern is whether agents/adapters can be provided for all systems. All of the products will have a fallback position of a gateway model via SOAP/HTTP or XML/HTTP, but you lose capabilities in doing so. For example, having endpoint agents can ensure reliable message delivery; if one endpoint doesn’t have an agent, you’ll only get reliability up to the gateway under the ESB’s control. In other words, you’re only as good as your weakest link. One key factor in evaluating this solution is the heterogeneity of your IT environment: the more varied systems you have, the greater the challenge in finding an ESB that supports all of them. In an environment where performance is critical, these may be good options to investigate, knowing that a proprietary messaging backbone can yield excellent performance versus trying to leverage something like WS-RM over HTTP. Once again, the operational model must be considered. If you need to change a contract between a consumer and a provider, the non-functional concerns are enforced at the endpoint adapters. These tools must have a model where those policies can be pushed out to the nodes, rather than requiring a developer to change some code or a model and go through a development cycle.
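
To illustrate the endpoint-agent point, here is a minimal JMS sketch (the JNDI names are hypothetical): persistent, transacted delivery is only guaranteed between parties that actually run the messaging client, which is why falling back to a gateway weakens the chain.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class ReliableSend {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext(); // provider settings come from the vendor's client library
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory"); // hypothetical name
        Queue orders = (Queue) jndi.lookup("jms/OrderQueue");                                 // hypothetical name

        Connection connection = factory.createConnection();
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(orders);
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // broker won't drop the message once it accepts it

        producer.send(session.createTextMessage("<order id='42'/>"));
        session.commit(); // this guarantee only holds between endpoints running the JMS agent
        connection.close();
    }
}
```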

The WS-* Enabling ESB

This category of product is the ESB that strives to give an enterprise a WS-*-based integration model. Unlike the MOM/ORB products, they probably only require agents on the service provider side, essentially to provide some form of service enablement capability; in other words, a SOAP stack. While five years ago this may have been very important, most major platforms now provide SOAP stacks, and many of the large application vendors provide SOAP-based integration points. If you don’t have an enterprise application server, these may be worthwhile options to investigate, as you’ll need some form of SOAP stack. Unlike simply adding Axis on top of Tomcat, these options may provide intercommunication between nodes, effectively like a clustered application server. If it’s not apparent, however, these options are very developer-focused: service enablement is a development activity. Also, like the MOM/ORB solutions, these products can operate in a gateway mode, but you do lose capabilities. Like the BPM/EAI solutions, they are focused on building services, so there’s a good chance that they may not perform as well for the “in-the-middle” capabilities typically associated with a gateway.
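
As a sketch of why service enablement is a development activity rather than a configuration one, here is a minimal, hypothetical JAX-WS example: exposing existing logic as a SOAP endpoint is code you write, build, and deploy.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical example: the class, operation, and URL are made up. The point is that
// publishing a SOAP endpoint is a coding task, not something Operations configures.
@WebService
public class CustomerLookupService {
    public String lookupCustomer(String customerId) {
        return "customer-" + customerId; // stand-in for a call into the legacy system
    }

    public static void main(String[] args) {
        Endpoint.publish("http://localhost:8080/services/customer", new CustomerLookupService());
        System.out.println("SOAP endpoint published");
    }
}
```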

The ESB Gateway

Finally, there are ESB products that operate exclusively as a gateway. This may be a hardware appliance, software on commodity hardware, or a software solution deployed as a stand-alone gateway. While the decision between a smart-network and a smart-node approach is frequently a religious one, there are certainly scenarios where a gateway makes sense. Perimeter operations are the most common, but it’s also true that many organizations don’t employ clustering technologies in their application servers and instead leverage front-end load balancers. If your needs focus exclusively on the “in-the-middle” capabilities, rather than on orchestration, service enablement, or integration, these products may provide the operational model (configure, not code) and the performance you need. Unlike an EAI-rooted system, an appliance is typically not going to provide an anything-to-anything integration model. Odds are that it will provide excellent performance across a smaller set of protocols, although there are integration appliances that can talk to quite a number of standards-based systems.

Final words

As I’ve stated before, the number one thing is to know what capabilities you need first, before ever looking at ESBs, application servers, gateways, appliances, web services management products, or anything else. You also need to know what the operational model is for those capabilities. Are you okay with everything being a development activity and going through a code release cycle, or are there things you want fully controlled by an operations staff that configures infrastructure according to a standard change management practice? There’s no right or wrong answer; it all depends on what you need. Orchestration may be the number one concern for one organization, service enablement of legacy systems for another, and security and rate throttling at the perimeter for a third. Hopefully this post will help you in your decision-making process. If I missed some of the ESB models out there, or if you disagree with this breakdown, please comment or trackback. This is all about trying to help end users better understand this somewhat nebulous space.

What’s the big deal about BPEL

Courtesy of this news release from InfoWorld, I found out that Microsoft Windows Workflow Foundation (WWF, which has nothing to do with endangered animals or professional wrestlers) is going to support BPEL. This is a good thing, but what does it really mean?

I’ll admit that I’ve never gotten very excited about BPEL. My view has always been that it’s really only important as an import/export format. You should have a line item on your RFI/RFP that says, “supports import/export of BPEL,” and you should ask the sales guy to demonstrate it during a hands-on session. Beyond this, however, what’s the big deal?

The BPM tools I’ve seen (I’ll admit I haven’t seen them all, nor even a majority of them) all have a nice graphical editor where you drag various shapes and connectors around, and they probably have some tree-like view where you draw lines between your input structure and your output structure. At most, you may need to hand-code some XPath and some very basic expressions. The intent of this environment is to extract the “business process” from the actual business services where the heavy-duty processing occurs. If you sort through the marketing hype, you’ll understand that this is all part of a drive to raise the level of abstraction and allow IT systems to be leveraged more efficiently. While we may not be there yet, the intent is to get tools into the hands of the people driving the requirements for IT: the business. Do you want your business users firing up XML Spy and spending their time writing BPEL? I certainly don’t.
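
For reference, “hand-code some XPath” usually means nothing more than an expression like the one in this hypothetical sketch (the document and paths are made up), which the tool then embeds in a mapping step.

```java
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class MappingExpression {
    public static void main(String[] args) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        // The kind of expression a modeler hand-codes for a mapping step: sum the line
        // amounts of a (hypothetical) order document to populate an output field.
        String total = xpath.evaluate("sum(/order/lines/line/@amount)", new InputSource("order.xml"));
        System.out.println("order total = " + total);
    }
}
```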

What are the important factors we should be concerned about with our BPM technologies, then? Repeating a common theme you’ve seen on this blog, it’s the M: Management. No one should need to see BPEL unless they’re migrating from one engine to another. There shouldn’t be a reason to exchange BPEL between partners, because it’s an execution language. Each partner executes its own processes, so the key concern is the services that allow them to integrate, not the behind-the-scenes execution. What is important is seeing the metrics associated with the execution of the processes to understand the bottlenecks that can occur. You can have many, many moving parts in an orchestration. Your true business process (that’s why it was in quotes earlier) probably spans multiple automated processes (where BPEL applies), multiple services, multiple systems, and multiple human interactions. Ultimately, the process is judged by the experiences of the humans involved, and if they start complaining, how do you figure out where the problems are? How do you understand how other forces (e.g. market news, company initiatives, etc.) influence the performance of those processes? I’d much rather see all of the vendors announcing support for BPEL start announcing support for some standard way of externalizing the metrics associated with process execution, so there can be a unified business process management view of what’s occurring, regardless of the platforms where everything is running or how many firewalls need to be traversed.
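
As a sketch of what I mean by externalizing metrics (entirely hypothetical; no such standard exists that I know of), each engine would emit per-activity timing data in an agreed-upon shape that a unified monitoring view could consume regardless of platform.

```java
// Entirely hypothetical sketch of the idea: each engine emits per-activity timing data in
// some agreed-upon shape, and a unified monitoring view consumes it no matter which
// platform executed the step or how many firewalls sit in between.
public class ActivityMetricsEmitter {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(120); // stand-in for an orchestration activity doing real work
        long duration = System.currentTimeMillis() - start;

        // In practice this would be published to a queue or monitoring endpoint, not printed;
        // the point is that the measurement leaves the engine in a standard, platform-neutral form.
        System.out.println("{\"process\":\"order-42\",\"activity\":\"checkCredit\"," +
                "\"startMillis\":" + start + ",\"durationMillis\":" + duration + "}");
    }
}
```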
