Archive for the ‘Infrastructure’ Category

Piloting within IT

Something I’ve seen at multiple organizations is problems with the initial implementation of new technology. In a perfect world, every new technology would be implemented using a carefully controlled pilot that exercised the technology appropriately, allowed repeatable processes to be identified and implemented, and added business value. Unfortunately, it’s that last item that always seems to do us in. Any project with business value tends to be run the same way as any other business project, which usually means schedule first, everything else second. As a result, sacrifices are made, and the project doesn’t have the appropriate buffers to account for the organization’s lack of experience. Even if professional services are leveraged, there’s still a knowledge gap in mapping the product’s capabilities to the business need.

One suggestion I’ve made is to look inside of IT for potential pilots. This can be a chicken-and-egg situation, because sometimes funding cannot be obtained unless the purchase is tied to a business initiative. IT is part of the business, however, and some funding should be reserved for operating efficiency improvements within IT, just as it should be for other non-revenue-producing areas, such as HR.

BPM technology is probably the best example for discussing this. To fully leverage BPM technology, you have to have a deep understanding of the business process. If you don’t understand the processes, there’s no tool you can buy that will give you that knowledge. There are packaged and SaaS solutions available that will give you their process, but odds are that your own processes are different. Who is the keeper of knowledge about business processes? While IT may have some knowledge, odds are it resides within the business itself, creating the challenge of working across departments when trying to apply the new technology. These communication gaps can pose large risks to a BPM adoption effort.

Wouldn’t it make more sense to apply BPM technology to processes that IT is familiar with? I’m sure nearly every large organization purchases servers and installs them in its data center. I’m also quite positive that many organizations complain about how long this process takes. Why not do some process modeling, orchestration, and execution using BPM technologies in our own backyard? The communication barriers are far lower, the risk is lower, and value can still be demonstrated through improved operational efficiencies.
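To make that concrete, here’s a minimal sketch of modeling the server-provisioning process as explicit, ordered steps, the kind of flow a BPM tool lets you model graphically. All of the names, fields, and steps here are hypothetical.

```java
// A minimal sketch of a server-provisioning process expressed as explicit,
// ordered steps. Step names and the ProvisioningRequest fields are invented.
import java.util.List;

public class ServerProvisioningProcess {

    record ProvisioningRequest(String requestor, String serverClass, String dataCenter) {}

    interface Step {
        void execute(ProvisioningRequest request);
    }

    // Each step would map to a human or automated task in the BPM model.
    private final List<Step> steps = List.<Step>of(
        r -> System.out.println("Approve purchase for " + r.requestor()),
        r -> System.out.println("Order hardware: " + r.serverClass()),
        r -> System.out.println("Rack and cable in " + r.dataCenter()),
        r -> System.out.println("Install OS baseline"),
        r -> System.out.println("Hand off to operations")
    );

    public void run(ProvisioningRequest request) {
        // A real BPM engine adds state persistence, timers, and escalation;
        // the value starts with making the sequence explicit and measurable.
        steps.forEach(step -> step.execute(request));
    }

    public static void main(String[] args) {
        new ServerProvisioningProcess().run(
            new ProvisioningRequest("app-team-a", "standard-web", "DC-East"));
    }
}
```

Even a sketch like this makes cycle time visible per step, which is exactly the kind of operational efficiency argument that can justify the pilot.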

My advice if you are piloting new technology? Look for an opportunity within IT first, if at all possible. Make your mistakes on that effort, fine-tune your processes, and then take it to the business with confidence that the effort will go smoothly.

Infrastructure in the Cloud

James Urquhart sent me an email about one of his posts and invited me to join the conversation. After reading his post and Simon Wardley’s post, it was interesting enough that I thought I’d throw in my two cents.

The topic of discussion was Google’s new App Engine. Per Google’s site:

Google App Engine lets you run your web applications on Google’s infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. With App Engine, there are no servers to maintain: You just upload your application, and it’s ready to serve your users.

The theme of James’ post, and why I think he invited me into the conversation, is whether this matters to the large enterprise. I tend to agree with James. While I think this is cool technology, in its present form it’s probably of little value to the typical large enterprise. At the same time, I would definitely qualify this, and Amazon’s cloud of services, as disruptive technologies. I can’t help but find myself making mental comparisons to Clayton Christensen’s discussion of steel mills. Smaller steel mills came along and catered to the low-end, low-margin area of the market that the larger, integrated steel mills were happy to give up. Over time, however, those smaller mills expanded their offerings until the business model of the larger mills was completely disrupted. Will something similar occur in the infrastructure space?

There are certainly parallels in the potential markets. Big enterprises are not the target of Google or Amazon, just as the smaller steel mills focused on rebar rather than on the more expensive and potentially lucrative market for structural beams or sheet metal. One key difference, however, is that it’s hard to figure out who the “big steel mill” is in this case. Clearly, both Google and any major enterprise currently buy servers, so Google is not disrupting HP, IBM, Dell, etc. What we’re really talking about disrupting is the internal IT data center. In most cases, save for outsourcing the data center to EDS or IGS, there is no business to be disrupted. The right comparison to the steel mills would be companies that leveraged products from the smaller mini-mills disrupting companies that leveraged products from the larger, integrated mills. While efficient cost controls are certainly part of the equation, there’s much more that goes into the disruption equation.

In the end, it’s very clear to me that tools like Google App Engine are good for the industry as a whole. They cater nicely to the low end of the market, and a company of Google’s size can sustain low margins or even a loss on making these services available. Over time, some of the companies that leverage them will become bigger companies and make additional requests of Google, which will in turn evolve the product, with each evolution making it more attractive to a broader set of customers, eventually including the big enterprise.

Is your vendor the center of the universe?

A recent post from James McGovern reminded me about some thoughts I had after a few different meetings with vendors.

Vendors have a challenge, and it all stems from a view that they can be the center of the universe. A customer buys their product and builds around it; the product thereby becomes the “center of the universe” for that customer, exhibiting a gravitational field that attempts to mandate that all other products abide by its laws of physics. In other words, every other product must integrate with it, but that’s the responsibility of those products. For reasons I went into in my last post, that doesn’t work well. It’s a very inward-facing view rather than a consumer-oriented one.

The challenge is that even if a vendor didn’t want to come across as the center of the universe, for some customers it is required. For example, if a customer doesn’t have a handle on enterprise identity management, a vendor can shoot itself in the foot if its product doesn’t provide some primitive identity management capabilities to account for customers that lack an enterprise solution. In the systems management space, you frequently hear the term “single pane of glass,” intended for the Tier 1 operations person. Once again, however, every monitoring system that deals with a specialized portion of the infrastructure will have its own console. It’s a difficult challenge to open up that console to other monitoring sources, and equally difficult to open up the data and events to an outside consumer. So what’s an enterprise to do?

To me, it all comes back to architecture. When evaluating these products, you have to evaluate them for architectural fit. Obviously, in order to do that, you need to have an architecture. The typical functional requirements don’t normally constitute an architecture. You can make this as complicated or as simple as you’d like. A passion of mine tends to be systems management capabilities, so I normally address this in an RFI/RFP with just one question:

Are all of the capabilities that are available in your user-facing management console also available as services callable by another system, orchestration engine, or script?

Now, there are obviously follow-ons to this question, but it does serve to open up the communication. Simply put, the best advice for corporate practitioners is to ensure that you are in charge of your architecture and setting the laws of physics for your universe, not the vendors you choose.
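To make the intent of the question concrete, here’s a minimal sketch of the kind of check it’s probing for: can a script do what the console does? The endpoint URL, resource names, and response format are entirely hypothetical.

```java
// A minimal sketch: if the console can list monitored services, a script
// should be able to do the same through a service interface. The URL and
// API shape here are placeholders, not any vendor's actual API.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ManagementApiCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest listServices = HttpRequest.newBuilder()
            .uri(URI.create("https://monitoring.example.com/api/services"))
            .header("Accept", "application/json")
            .build();

        HttpResponse<String> response =
            client.send(listServices, HttpResponse.BodyHandlers.ofString());
        System.out.println("Status: " + response.statusCode());
        System.out.println(response.body());
    }
}
```

If a vendor can’t show you something equivalent for every console capability, you’ve learned something important about how well the product will fit into an orchestrated environment.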

The Elusive Service Contract

In an email exchange with David Linthicum and Jason Bloomberg of ZapThink in response to Dave’s last podcast (big thanks to Dave for the shout-out and the nice comments about me in the episode), I made some references to the role of the service contract and decided that it was a great topic for a blog entry.

In the context of SOA Governance, my opinion is that the service contract is the “container” of policy that governs behavior at both design time and run time. According to Merriam-Webster, a contract is “a binding agreement between two or more persons or parties; especially: one legally enforceable.” Another Merriam-Webster definition is “an order or arrangement for a hired assassin to kill someone,” which could certainly have implications for SOA efforts, but I’m going to use the first definition. The key part of the definition is “two or more persons or parties.” In the SOA world, this means that in order to have a service contract, I need both a service consumer and a service provider. Unfortunately, the conversations around “contract-first development” that were dominant in the early days caused people to focus on one party, the service provider, when discussing contracts. If we get back to the notion of a contract as a binding agreement between two parties, and go a step further by saying that the agreement is specified through policies, the relationship between the service contract and design-time and run-time governance should become much clearer.

First, while I picked on “contract-first development” earlier, the functional interface is absolutely part of the contract. Rather than an agreement between designers and developers, however, it’s an agreement between a consumer and a provider on the structure of the messages. If I am a service provider and I have two consumers of the service, it’s entirely possible that I expose slightly different functional interfaces to them. I may choose to hide certain operations or pieces of information from one consumer (which may certainly be the case where one consumer is internal and another is external). These have an impact at design time, because there is a handoff from the functional interface policies in the service contract to the specifications given to a development team or an integration team.

Beyond this, however, there are non-functional policies that must be in the contract. How will the service be secured? What’s the load that the consumer will place on the service? What’s the expected response time from the provider? What are the notification policies in the event of a service failure? What are the implications when a consumer exceeds its expected load? Clearly, many of these policies will be enforced through run-time infrastructure. Some policies aren’t enforced on each request, but have implications on what goes on in a request, such as usage reporting policies. My service contract should state what reports will be provided to a particular consumer. This implies that the run-time infrastructure must be able to collect metrics on service usage, by consumer. Those policies may ripple into a business process that orchestrates the automated construction and distribution of those usage reports.

Hopefully, it’s also clear that a service contract exists between a single consumer and a single provider. While each party may bring a template to the table, much as a lawyer may have a template for a legal document like a will, the specific policies will vary by consumer. One consumer may send only 10,000 requests a day; another may send 10,000 requests an hour. Policies around expected load may then be enforced by your routing infrastructure for traffic prioritization, so that any significant deviation from the expected load doesn’t starve out the other consumers.
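Here’s a minimal sketch of that idea in code, using the two-consumer load example above: one contract per consumer-provider pair, each carrying its own policies. The policy fields, consumer names, and thresholds are purely illustrative.

```java
// A minimal sketch of a service contract as a per-consumer collection of
// policies. All names and numbers are invented for illustration.
import java.util.Map;

public class ServiceContracts {

    record ContractPolicies(int expectedRequestsPerHour, String securityScheme,
                            boolean usageReportRequired) {}

    // One contract per consumer-provider pair, not one per service.
    static final Map<String, ContractPolicies> CONTRACTS = Map.of(
        "consumer-internal", new ContractPolicies(10_000, "mutual-tls", true),
        "consumer-external", new ContractPolicies(400, "api-key", false)
    );

    // The kind of check a routing intermediary might make when prioritizing
    // traffic: is this consumer significantly over its contracted load?
    static boolean exceedsContract(String consumer, int observedRequestsThisHour) {
        ContractPolicies policies = CONTRACTS.get(consumer);
        return policies != null
            && observedRequestsThisHour > policies.expectedRequestsPerHour();
    }

    public static void main(String[] args) {
        System.out.println(exceedsContract("consumer-external", 1_200)); // true
        System.out.println(exceedsContract("consumer-internal", 1_200)); // false
    }
}
```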

The last comment I’d like to make is that there are definitely policies that exist outside of the service contract that influence design time and run time, so don’t think that the service contract is the container of all policies. I ran into this while consulting, when I thought the service contract could be used as a handoff document between the development team and the deployment team in Operations. What became evident was that policies governing service deployment in the enterprise were independent of any particular consumer. So, while an ESB or XML appliance may enforce the service contract policies around security, they also take care of load balancing requests across the multiple service endpoints that may exist. Since those endpoints process requests for any consumer, the policies that tell a deployment team how to configure the load balancing infrastructure aren’t tied to any particular service contract. This had become a situation where the service contract was trying to do too much. In addition to holding the policies that govern the consumer-provider relationship, it was also trying to be the container for turnover instructions between development and deployment, and a single document couldn’t do both well.

Where I think we need to get to is having some abstractions between these things. We need to separate policy management (the definition and storage of policies) from policy enforcement/utilization. Policy enforcement requires that I group policies for a specific purpose, and some of those policies may be applicable in multiple domains. Getting to this separation of management from enforcement, however, will likely require standardization in how we define policies, and those standards simply don’t exist today. Policies wind up being tightly coupled to the enforcement points, making it difficult to consume them for other purposes. Of course, the organizational culture needed to support this mentality is far behind the technology capabilities, so these efforts will be slow in coming, but as the dependencies in our solutions increase over time, we’ll see more and more progress in this space. To sum it up, my short-term guidance is to always think of the service contract in terms of a single consumer and a single provider, and as a collection of policies that govern the interaction. If you start with that approach, you’ll be well positioned as we move forward.
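Here’s a minimal sketch of that separation, assuming a hypothetical policy store feeding two different enforcement points. None of these interfaces reflect an actual product or standard; the point is simply that policies are defined and stored once, while each enforcement point pulls only what it can act on.

```java
// A minimal sketch of separating policy management (definition/storage)
// from policy enforcement. Interfaces and policy names are hypothetical.
import java.util.List;

public class PolicySeparation {

    record Policy(String name, String domain, String expression) {}

    // Management side: policies are defined and stored once, independent of
    // where they are enforced.
    interface PolicyStore {
        List<Policy> policiesForDomain(String domain);
    }

    // Enforcement side: the same store can feed a gateway, an app server
    // agent, or a deployment process.
    static void configureGateway(PolicyStore store) {
        store.policiesForDomain("security")
             .forEach(p -> System.out.println("Gateway enforcing: " + p.name()));
    }

    static void configureDeployment(PolicyStore store) {
        store.policiesForDomain("deployment")
             .forEach(p -> System.out.println("Deployment applying: " + p.name()));
    }

    public static void main(String[] args) {
        PolicyStore store = domain -> switch (domain) {
            case "security" -> List.of(new Policy("require-tls", "security", "transport == TLS"));
            case "deployment" -> List.of(new Policy("two-endpoints-min", "deployment", "replicas >= 2"));
            default -> List.of();
        };
        configureGateway(store);
        configureDeployment(store);
    }
}
```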

The Return of the ESB


Just when you thought it was safe to build your SOA, the ESB has returned. A whitepaper from Paul Fremantle of WSO2 seems to have stirred the ESB pot once again, with a number of pundits chiming in on the future of the ESB. I was surprised at how little conversation there was about it at the Gartner AADI Summit in comparison to the previous Gartner event I had attended two years prior, but that’s quickly changed.

I read Paul’s paper, and from my perspective, not much has changed from the views I expressed two years ago, first in a Burton Group Catalyst presentation and then in this followup blog post. The capabilities in that post’s bulleted list aren’t going to go away, and it surprises me that there continues to be so much debate in this space.

Joe McKendrick recently summarized a number of recent comments in a blog entry, including ones from Roy Schulte of Gartner, Lief Davidson of IBM, ZDNet blogger Dana Gardner, and Lorraine Lawson of IT Business Edge (who has given a couple of my recent blogs some publicity, thanks for that). There were a couple of things in Joe’s entry that just rubbed me the wrong way.

First is this notion of federated ESBs. Both Roy and Dana made comments around this, and I simply don’t agree. I’ll admit, there is a certain class of organizations that will have multiple ESBs. They’re the same organizations that have one of just about every technology because they’re so huge. Take them out of the equation, because I don’t believe they represent the masses. For the rest of us, what does “federated ESBs” mean? Does it mean that I have multiple ESBs performing redundant capabilities? Or does it mean that I partition the capabilities across multiple products/devices, with no two products providing the same capability, even though they may be capable of it? The latter is tolerable and likely; the former is not.

For example, Roy mentioned security policies and quality of service. I may rely on an XML appliance for service security at the perimeter, and then rely on app server capabilities or some security agent within the data center. For simplicity’s sake, if we define quality of service as some form of intelligent routing and traffic shaping, I may rely on an ESB, a high-end XML appliance or network device, or advanced clustering capabilities of an app server. This type of partitioning makes sense. What doesn’t make sense is having one ESB (pick your favorite) providing security, QoS, etc. for services from development group A and having another ESB providing the same for services from development group B. To put it in context, routing is a core capability according to my definitions. Would you let development group A put a Cisco router in front of their services and development group B put a Juniper router in front of theirs? You certainly wouldn’t do this if all of the services are hosted in the same data center. Yes, if your company has grown through acquisition and has lines of business all over the place, you may have different routers, but now we’re getting back to those super-large conglomerates I mentioned at the beginning.

For those of you now envisioning the “big honking ESB,” don’t think that way. Think of it more like the network. I may have many deployments of a single ESB product, each handling a portion of the services in the enterprise, with consumers directed to the appropriate ESB at design time rather than through some uber-ESB. Applications point at specific databases or hostnames; there’s no reason that services can’t work the same way. Again, I don’t view this as federation.

What I believe we need to strive for in this space is centralized policy management and distributed enforcement, with standards-based communication between the policy manager and the enforcement points (if only those standards existed). For example, one could make the argument that SAP could provide a second ESB that deals with services provided by the SAP infrastructure. I don’t view this as federation, however. From my perspective, I don’t even care whether SAP has an ESB or not. What I do care about is whether the entry points exposed by SAP can enforce my QoS policies, my security policies, my versioning policies, etc. Let me centrally manage the policies, push them out, and have it just work. The challenge with this is not the enforcement architecture and federation there, but rather the metadata repository that will hold all of this policy information. This is where federation is important, because I may be getting third-party products that come with their own pre-populated registry/repository of services that I need to manage.

The second statement I disagreed with was Paul’s comment that ESBs discourage the shared ownership of services. If they do, then I think ESB ownership is the problem. Most web applications require the configuration of a load balancing farm before they can be accessed. No one considers the keeper of the farms in network operations the “owner” of those web applications, so why would the team responsible for the configuration of the ESB be considered the owner of services? Personally, I think a lot of this stems from ESBs being targeted at developers, rather than at operations teams. As I’ve commented in the past, I think the inclusion of service development capabilities like orchestration and the leftovers from the EAI space messed up the paradigm and caused confusion. Keep mediation separate from development.

So, as I climb down from my soapbox, what are my parting words? I still believe that the capabilities in my post from last year are necessary, and an ESB is one way of providing them. Intermediaries are almost always used in web architectures, so there shouldn’t be such a strong aversion to having them as a front end to a service. That being said, too many intermediaries is a bad thing, because we haven’t gotten to the single-pane-of-glass management that I think is necessary. Rather, each intermediary has its own management console, and the chances of something getting missed or fat-fingered go up. Focus on an approach for centrally managing the policies, minimizing the number of places where policies are enforced, and keeping operational activities separated from service development activities.

Taxonomy or folksonomy?

Dan Foody of Progress Software had an interesting blog entry recently called UDDI in a Web 2.0 World. In it, he asks:

SOA What? With all of this Web 2.0 development, it’s clear that internet scale folksonomies work far better than taxonomies. On the other hand enterprises are, for the most part, stuck with UDDI-related SOA governance tools and their strict taxonomy and categorization mechanisms. The open question though… is this really a problem?

Aside: I love the use of “SOA What?” That’s exactly why I try to always say S-O-A. On the subject, however, I think Dan raises an interesting question. One of the questions I’ve asked some of the registry/repository vendors is “Can you be indexed by a Google Appliance?” Admittedly, I’m not a huge fan of taxonomy-based searching. At the same time, however, a typical enterprise asset repository may not have enough critical mass to generate appropriate metadata for folksonomy-based searching. The Web is filled with hyperlinks. How many links to a service detail page am I going to have inside a typical enterprise?

Personally, I’d rather try to find a way to build up the metadata than go crazy building taxonomies to support direct navigation. First off, you can quickly get into taxonomy hell, where you try to support so many variations that it becomes difficult to present them to the user. Second, people are used to using Google, Desktop Search, Spotlight, etc. Universal search is going to be a standard part of the office toolset, and we need to find a way to ensure relevant results get returned. This will likely require analysis of software development artifacts (including source code) and building up those relationships based upon presence within project repositories and the role of the user performing the search. A developer performing a search on “Customer service” will want to see very different results than a business manager.
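As a rough illustration of that last point, here’s a minimal sketch of role-aware search over asset metadata. The asset model, kinds, and scoring are all invented for illustration; a real implementation would sit behind a universal search engine rather than in application code.

```java
// A minimal sketch of role-aware search over service metadata: the same
// query ranks differently for a developer than for a business manager.
// The asset model and scoring weights are hypothetical.
import java.util.Comparator;
import java.util.List;

public class RoleAwareSearch {

    record Asset(String name, String description, String kind) {}

    static double score(Asset asset, String query, String role) {
        double base = asset.description().toLowerCase().contains(query.toLowerCase()) ? 1.0 : 0.0;
        // Boost artifacts relevant to the searcher's role: developers see
        // interface definitions first, business users see process docs first.
        double boost = switch (role) {
            case "developer" -> asset.kind().equals("wsdl") ? 0.5 : 0.0;
            case "business"  -> asset.kind().equals("process-doc") ? 0.5 : 0.0;
            default -> 0.0;
        };
        return base + boost;
    }

    public static void main(String[] args) {
        List<Asset> assets = List.of(
            new Asset("CustomerService.wsdl", "Customer service interface", "wsdl"),
            new Asset("Customer Escalation", "Customer service escalation process", "process-doc"));

        assets.stream()
              .sorted(Comparator.comparingDouble(
                  (Asset a) -> score(a, "customer service", "developer")).reversed())
              .forEach(a -> System.out.println(a.name()));
    }
}
```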

The challenge we face is that the documents and their metadata are scattered all over the place. I previously asked if metadata should be the center of the SOA universe. Neil Ward-Dutton replied that it is the center of the universe, and that it is inherently federated. We need intelligent crawlers that can infer the appropriate relationships and feed them into the universal search engine. Is anyone out there leveraging a Google appliance or another universal search option to facilitate searching for services and other IT assets? If so, like Dan, I’d love to hear about your experiences.

SOA and the Kitchen Sink

Mike “The Mad Greek” Kavis had an interesting post on SOA Lessons Learned over at IT Toolbox. First off, a big thank you to Mike for sharing some experiences about an effort that didn’t go the way it was originally planned. We can learn as much from these as we can from success stories.

The part of the post that caught my attention began with this line:

As we started the second wave of projects, I mandated that all code should be delivered with test harnesses, the build process should be 100% automated, and testing automation should be part of the project deliverables.

Mike went on to discuss how automation and governance were quickly forgotten when the schedule began to slip. The surprising thing to me was not that these aspects were dropped when the schedule began to slip, but that the implementation of things like continuous integration and testing automation was tied to the SOA effort to begin with.

To me, this was indicative of something I’ve seen at a number of places: using “SOA” as the umbrella to fix all things in need of improvement within the software development process. Continuous integration is a great thing, and everyone should be practicing it. But if your organization hasn’t adopted it or created a standard, repeatable way of doing it, don’t use the pilot of another key initiative (like SOA) as the vehicle to make it happen. SOA adoption is not dependent on having a continuous integration system. If you have one, will it make SOA easier? Yes, probably, but more so because it’s making all development easier, regardless of whether you’re practicing SOA or not. Give it its own pilot where it can succeed in a very managed fashion, and then roll it out to the rest of the enterprise.

Part of the problem is finding ways to adopt these improvements in the development process. Is the business going to care about continuous integration? You can argue that it should, but it’s really about internal IT processes. All too often, IT is left to take the congressional lawmaker approach: find some big project that is sure to be funded and then push everything but the kitchen sink into it under the radar. This creates additional risk and often results in the project team biting off more than they can chew. It’s unfortunate that IT frequently has little ability to improve its internal processes due to the project-centric nature of its work. Take another support organization like HR. While I’m sure some work in HR is project-driven, a lot of it is day-to-day operations. My suspicion is that organizations focused more on fixed-cost, day-to-day work like this probably have more ability to take on internal improvement initiatives.

My point in all of this is that an organization has to be careful about what it takes on. There are always opportunities for improvement, and the point should be quality, not quantity. Trying to take on everything is unlikely to lead to success in any of it. Taking on a smaller set of goals and ensuring that you do them well is a safer approach.

Services in a box

While doing my thinking outside the box, I ran across Joe McKendrick’s post discussing whether SOA can be boxed. Joe provided commentary on a blog entry from Jack van Hoof, who called out that the major ERP vendors could potentially deliver “SOA in a box.” He points out:

SAP offers a service bus, service registry, events registry, canonical data management, business processes, services deployments (!), business monitoring, business process management, security… out-of-the-box. Yes, of course the implementation must be tuned and configured. But it’s all there, out-of-the-box.

There’s a certain amount of truth to this statement. It’s probably better stated as services in a box, because after all, it’s a box. We have no idea what the underlying architecture of SAP is. While it may be the case that SAP or any other large ERP system could constitute the bulk of your infrastructure, that doesn’t mean you’re suddenly purchasing an SOA. If new business needs come along, it’s now up to your ERP system to provide those capabilities. If they’re new, then it’s the ERP vendor who must figure out how to quickly make the changes necessary, if they even choose to do so, since it would have to be something with broad customer applicability to make financial sense for them. It’s certainly possible to build custom code on top of the ERP system, but you’re always going to have that dependency. That’s not necessarily a bad thing; you just have to choose wisely where to leverage the ERP system.

I’d like to pick on one comment Joe made in his commentary. He stated:

…the idea of buying all SOA from one vendor flies in the face of the ultimate meaning and purpose of SOA — the ability to pick and choose tools, applications and services from any and all vendors. SOA is supposed to mean the end of vendor lock-in.

I don’t agree with this opinion. While services can be used to create an abstraction layer over vendor products, I don’t think it needs to be a goal. The factors that influence whether an organization leverages one vendor, lots of vendors, or no vendors really don’t come into play at all. What SOA should do is assist you in making appropriate vendor decisions. Just as I commented some time ago that SOA should neither increase nor decrease outsourcing, but instead increase the chance that outsourcing efforts are successful, the same holds true for choosing vendors. By breaking the problem domain down to a finer-grained level (services rather than applications), I can make better decisions on the vendor products I choose. If they don’t expose services for the capabilities I need, I’m going to look elsewhere. The only thing that could start leading toward better insulation from vendor lock-in would be more standards in the vertical domains. There are plenty of standards out there, but there are probably far more spaces that are not standardized.

So, what’s my advice? I don’t think you can buy “SOA in a box,” you can only buy “services in a box.” Your enterprise architects need to be the ones defining the architecture, and then leveraging it to ensure that not only your home-grown systems, but also your vendor systems, whether from one vendor or many, adhere to it.

Is it about the technology or not?

Courtesy of Nick Gall, this post from Andrew McAfee was brought to my attention. Andrew discusses a phrase which many of us have either heard or used, especially in discussions about SOA: “It’s not about the technology.” He posits that there are two meanings behind this statement:

  1. “The correct-but-bland meaning is ‘It’s not about the technology alone.’ In other words a piece of technology will not spontaneously or independently start delivering value, generating benefits, and doing precisely what its deployers want it to do.”
  2. “The other meaning … is ‘The details of this technology can be ignored for the purposes of this discussion.’ If true, this is great news for every generalist, because it means that they don’t need to take time to familiarize themselves with any aspect of the technology in question. They can just treat it as a black box that will convert specified inputs into specified outputs if installed correctly.”

In his post, Nick Gall states that discussions that are operating around the second meaning are “‘aspirational’ — the entire focus is on architectural goals without the slightest consideration of whether such goals are realistically achievable given current technology trends. However, if you try to shift the conversation from aspirations to how to achieve them, then you will inevitably hear the mantra ‘SOA is not about technology.'”

So is SOA about the technology or not? Nick mentions the Yahoo SOA group, of which I’m a member. The list is known for many debates on WS-* versus REST and even some Jini discussions. I don’t normally jump into these technology debates, not because the technology doesn’t matter, but because I view these as implementation decisions that must be made based upon your desired capabilities and the relative priorities of those capabilities. Anne Thomas Manes makes a similar point in her response to these blogs.

As an example, back in 2006 the debate around SOA technology was centered squarely on the ESB. I gave a presentation on the subject of SOA infrastructure at Burton Group’s Catalyst conference that summer which discussed the overlapping product domains for “in the middle” infrastructure, including ESBs. I specifically crafted my message to get people to think about the capabilities and operational model first, determine what your priorities are, and then go about picking your technology. If your desired capabilities are focused in the run-time operations space (as opposed to a development activity like orchestration), and if your developers are heavily involved with the run-time operations of your systems, technologies that are very developer-focused, such as most ESBs, may be your best option. If your developers are removed from run-time operations, you may want a more operations-focused tool, such as a WSM or XML appliance product.

This is just one example, but I think it illustrates the message. Clearly, making statements that flat out ignore the technology is fraught with risk. Likewise, going deep on the technology without a clear understanding of the organization’s needs and culture is equally risky. You need to have balance. If your enterprise architects fall into Nick’s “aspirational” category, they need to get off their high horse and work with the engineers involved with the technology to understand what things are possible today and what things aren’t. They need to be involved with the inevitable trade-offs that arise with technology decisions. If you don’t have enterprise architects, and instead have engineers with deep technical knowledge trying to push technology solutions into the enterprise, they need to be challenged to justify those solutions, beginning with a discussion of the capabilities provided, not the technology providing them. Only after agreement on the capabilities can we (and should we) enter a discussion on why a particular technology is the right one.

Composite Applications

Brandon Satrom posted some of his thoughts on the need for a composite application framework, or CAF, on his blog and specifically called me out as someone from whom he’d like to hear a response. I’ll certainly oblige, as inter-blog conversations are one of the reasons I do this.

Brandon’s posted two excerpts from the document he’s working on, here and here. The first document tries to frame up the need for composition, while the second document goes far deeper into the discussion around what a composite application is in the first place.

I’m not going to focus on the need for composition, for one very simple reason. If we look at the definition presented in the second post, as well as the one articulated by Mike Walker in his followup post, composite applications are ones which leverage functionality from other applications or services. If this is the case, shouldn’t every application we build be a composite application? There are vendors out there who market “Composite Application Builders,” which can largely be described as EAI tools focused on the presentation tier. They contain some form of adapter for third-party applications and legacy systems that allows functionality to be accessed from a presentation tier, rather than acting as a general-purpose service enablement tool. Certainly, there are enterprises that have a need for such a tool. My own opinion, however, is that this type of approach is a tactical band-aid. By jumping to the presentation tier, there’s a risk that these integrations are all done from a tactical perspective, rather than taking a step back and figuring out what services need to be exposed by your existing applications, completely separate from the construction of any particular user-facing application.

So, if you agree with me that all applications will be composite applications, then what we need is not a Composite Application Framework, but a Composition Framework. It’s a subtle difference, but it gets us away from the notion of tactical application integration and toward the strategic notion of composition simply being part of how we build new user-facing systems. When I think about this, I still wind up breaking it into two domains. The first is how to easily allow user-facing applications to consume services. In my opinion, there’s not much difference here from the things you need to do to make services easily consumable in general, regardless of whether the consumer is user-facing or not.

The assumption needs to be that a consumer is likely to be using more than one service, and that they’ll have a need to share some amount of data across those services. If the data is represented differently in those services, we create work for the consumer. The consumer must translate and transform the data from one representation to one or more additional representations. If this is a common pattern for all consumers, this logic will be repeated over and over. If our services all expose their information in a consistent manner, we can minimize the amount of translation and transformation logic in the consumer, and implement it once in the provider. It’s a great concept, but also a very difficult problem. That’s why I use the term consistent, rather than standard. A single messaging schema for all data is a standard, and by definition consistent, but I don’t think I’ll get too many arguments that coming up with that one standard is an extremely difficult, and some might say impossible, task.
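A minimal sketch of the consistency point: if every service emits the same Customer structure, the consumer composes them with no translation layer. The service interfaces, fields, and names here are hypothetical.

```java
// A minimal sketch: one canonical Customer representation shared by every
// service that emits customer data, rather than one shape per service.
// All names and fields are invented for illustration.
public class ConsistentRepresentations {

    record Customer(String id, String name, String segment) {}

    interface OrderService { String openOrders(Customer customer); }
    interface SupportService { String openTickets(Customer customer); }

    // The consumer hands the same object to both services; no mapping code.
    static void buildDashboard(Customer c, OrderService orders, SupportService support) {
        System.out.println(c.name() + ": " + orders.openOrders(c)
            + ", " + support.openTickets(c));
    }

    public static void main(String[] args) {
        Customer c = new Customer("42", "Acme Corp", "enterprise");
        buildDashboard(c,
            cust -> "3 open orders",   // stand-ins for real service calls
            cust -> "1 open ticket");
    }
}
```

If the two services each used their own Customer shape, every consumer would carry its own translation logic, which is exactly the repeated work the consistent representation avoids.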

Beyond this, what other needs are there that are specific to user-facing consumers? Certainly, there are technology decisions that must be considered. What’s the framework you use for building user-facing systems? Are you leveraging portal technology? Is everything web-based? Are you using AJAX? Flash? Is everything desktop-based using .NET and Windows Presentation Foundation? All of these things have an impact on how services targeted for use by the presentation tier must be exposed, and therefore must be factored into your composition framework.

Beyond this, however, it really comes down to an understanding of how applications are going to be used. I discussed this a bit in my Integration at the Desktop posts (here and here). The key question is whether you want a framework that facilitates inter-application communication on the desktop, or whether you want to deal with things in a point-to-point manner as they arise. The only way to know is to understand your users, not through a one-time analysis, but through continuous communication, so you know whether a need exists today and whether one is coming in the near future.

Any framework we put in place is largely about building infrastructure, and building infrastructure is not easy. You want to build it in advance of need, but sometimes gauging that need is difficult. Case in point: Lambert St. Louis International Airport has a brand new runway that essentially sits unused. Between the time the project was funded and completed, TWA was purchased by American Airlines, half of the flights in and out were cut, Sept. 11th happened, etc. The needs changed. They have great infrastructure, but no one to use it. Building an extensive composition framework at the presentation tier must factor in the applications your users currently leverage, the increased use of collaboration and workflow technology, the things users do on their own through Excel, web-based tools, and anything else they can find, how their job function is changing according to business needs and goals, and much more.

So, my recommendations in this space would be:

  1. Start with consistency of data representations. This has benefits for both service-to-service integration, as well as UI-to-service integration.
  2. Understand the technologies used to build user-facing applications, and ensure that your services are easily consumable by those technologies.
  3. Understand your users and continually assess the need for a generalized inter-application communication framework. Be sure you know how you’ll go from a standard way of supporting point-to-point communication to a broader communication framework if and when the need becomes concrete.

Providing good service

Beth Gold-Bernstein had a great post entitled “The Second S in SaaS” that outlined her experience in trying to get a backup restored from an online survey site.

This is clearly important when you’re dealing with external service providers, but I’d like to add that it is equally important for the services you build in house. The typical large enterprise today is rife with politics, with various organizations battling for control, whether they realize it or not. SOA strikes fear into the heart of many a project manager because the success of their effort is now dependent on some other team. Ultimately, however, success is not defined by getting the project done on time and on budget; success can only be determined by meeting the business goals that justified the project in the first place. If something goes wrong, what’s the easiest course of action? Point the finger at the elements that were outside of your control.

I experienced this many times over when rolling out some new web service infrastructure at an organization. Teams building services were required to use it, and whenever something went wrong, it was the first thing blamed, usually without any root cause analysis. Fortunately, I knew that in order to provide good service to the teams leveraging this new infrastructure, I needed to be on top of it. I usually knew about problems with services before they did, and because the infrastructure put in place increased visibility, it was very easy to show that the new infrastructure wasn’t the problem; in fact, it provided the information necessary to point to where the problem really was. Interestingly, this infrastructure sat in the middle, between the consumer and the provider. Arguably, the teams responsible for the services should have been looking at the same information I was, and been on top of these problems before some user calls up and says it’s broken.
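A minimal sketch of that proactive posture: poll each service endpoint and raise the alarm before a consumer does. The endpoint URLs and the simple status check are placeholders; real infrastructure would also track latency trends and per-consumer behavior.

```java
// A minimal sketch of knowing about problems before your consumers do:
// poll each service endpoint and flag failures. URLs are hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;

public class ServiceHealthPoller {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5)).build();

        List<String> endpoints = List.of(
            "https://services.example.com/customer/ping",
            "https://services.example.com/orders/ping");

        for (String endpoint : endpoints) {
            try {
                HttpResponse<Void> response = client.send(
                    HttpRequest.newBuilder(URI.create(endpoint)).build(),
                    HttpResponse.BodyHandlers.discarding());
                if (response.statusCode() != 200) {
                    System.out.println("ALERT " + endpoint + " returned " + response.statusCode());
                }
            } catch (Exception e) {
                // Raise the alarm before a consumer calls to say it's broken.
                System.out.println("ALERT " + endpoint + " unreachable: " + e.getMessage());
            }
        }
    }
}
```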

If you simply put services into production and ignore them until the fire alarm goes off, you’re going to continue to struggle to achieve higher levels of success with SOA adoption, whether you’re a SaaS provider or a service developer inside the enterprise.

Latest SOA Insights Podcast

Dana Gardner has posted the latest episode of his Briefings Direct: SOA Insights series. In this episode, the panelists (Tony Baer, Jim Kobielus, Brad Shimmin, and myself) along with guest Jim Ricotta, VP and General Manager of Appliances at IBM, discuss SOA Appliances and the recent announcements around the BPEL4People specification.

This conversation was particularly enjoyable for me, as I’ve spent a lot of time understanding the XML appliance space. As I’ve blogged about in the past, there’s a natural convergence between software-based intermediaries like proxy servers and network appliances. I’ve learned a lot working with my networking and security counterparts in trying to come up with the right solution. The other part of the conversation, on BPEL4People, was also fun, given my interest in human-computer interaction. I encourage you to give it a listen, and feel free to send me any questions you may have, or suggestions for topics you’d like to see discussed.

Open Group EA 2007: Andres Carvallo

Andres Carvallo is the CIO for Austin Energy. He was just speaking on how the Internet has changed the power industry. He brought up a point we’ve all experienced: we must call our local power company to tell them that the power is out. Contrast this with the things you can do with package delivery via the Internet, and it shows how the Internet age is changing customer expectations. While he didn’t go into it, my first reaction was that IT is much like the power company. All too often, we only know a system is down because an end user has told us so.

This leads to a discussion of something that is all too frequently overlooked: the management of our solutions. Visibility into what’s going on is all too often an afterthought. If you focus exclusively on outages, you’re missing the point. Yes, we do want to know when the .001% of downtime occurs. What makes things more important, however, is an understanding of what’s going on the other 99.999% of the time. It’s better to refer to this as visibility rather than monitoring, because monitoring leads to narrow thinking around outages, rather than the broader information set.

Keeping with the theme of the power industry, Austin Energy clearly needs to deal with the varying demands of the consumers of its product, which may range from major technology players in the Austin area to your typical residential customer. Certainly, all consumers are not created equal. Think about the management infrastructure that must be in place to understand these different consumers. Do you have the same level of management in your IT solutions to understand the different consumers of your services?
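A minimal sketch of what per-consumer visibility implies in code: usage counted by consumer rather than in a single aggregate. The consumer identifiers are hypothetical, and a real intermediary would capture far more than request counts.

```java
// A minimal sketch of per-consumer visibility: count and report usage by
// consumer rather than in aggregate. Consumer IDs are placeholders.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ConsumerUsageMetrics {
    private final Map<String, LongAdder> requestsByConsumer = new ConcurrentHashMap<>();

    // Called by an intermediary or service agent on every request.
    public void record(String consumerId) {
        requestsByConsumer.computeIfAbsent(consumerId, id -> new LongAdder()).increment();
    }

    public void report() {
        requestsByConsumer.forEach((consumer, count) ->
            System.out.println(consumer + ": " + count.sum() + " requests"));
    }

    public static void main(String[] args) {
        ConsumerUsageMetrics metrics = new ConsumerUsageMetrics();
        metrics.record("large-industrial-consumer");
        metrics.record("large-industrial-consumer");
        metrics.record("residential-consumer");
        metrics.report();
    }
}
```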

This is a very interesting discussion, especially given today’s context of HP’s acquisition of Opsware (InfoWorld report, commentary/analysis from Dana Gardner and Tony Baer).

Dilbert Governance, Part 2

I’ll be giving a webinar on Policy-Driven SOA Infrastructure with Mike Masterson from IBM DataPower next week, on Thursday at 1pm Eastern / 10am Pacific, and could probably find a way to tie today’s Dilbert to it. Give it a read.

As for the webinar, it will discuss themes that I’ve previously blogged about here, including the separation of non-functional concerns as policies, enforcing those policies through infrastructure, and their importance to SOA. Mike will cover the role of SOA appliances in this domain. You can register for it here.

To ESB or not to ESB

It’s been a while since I posted something more infrastructure-related. Since my original post on the convergence of infrastructure in the middle was reasonably popular, I thought I’d talk specifically about the ESB: the Enterprise Service Bus. As always, I hope to present a pragmatic view that will help you decide whether an ESB is right for you, rather than coming out with a blanket opinion that ESBs are good, bad, or otherwise.

From information I’ve read, there are at least five types of ESBs that exist today:

  1. Formerly EAI, now ESB
  2. Formerly BPM, now ESB
  3. Formerly MOM/ORB, now ESB
  4. The WS-* Enabling ESB
  5. The ESB Gateway

Formerly EAI, now ESB

There’s no shortage of ESB products that people will claim are simply rebranding efforts of products formerly known as EAI tools. The biggest thing to consider with these is that they are developer tools. Their strengths are going to lie in their integration capabilities, typically in mapping between schemas and having a broad range of adapters for third-party products. There’s no doubt that these products can save a lot of time when working with large commercial packages. At the same time, however, this would not be the approach I’d take for a load balancing or content-based routing solution. Those are typically operational concerns where we’d prefer to configure rather than code. Just because you have a graphical tool doesn’t mean it doesn’t require a developer to use it.
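For contrast, here’s a minimal sketch of content-based routing as a set of declarative rules. In an operations-oriented product, these rules would live in configuration pushed through change management rather than in compiled code; the rules and endpoints here are invented.

```java
// A minimal sketch of content-based routing expressed as a rule table.
// In practice these rules would be configuration, not code; all of the
// message fields and endpoints are hypothetical.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

public class ContentBasedRouter {

    record Message(String type, String region, String body) {}

    // Ordered rules: first match wins.
    private final Map<Predicate<Message>, String> routes = new LinkedHashMap<>();

    public ContentBasedRouter() {
        routes.put(m -> m.type().equals("order") && m.region().equals("EU"),
                   "https://eu.example.com/orders");
        routes.put(m -> m.type().equals("order"),
                   "https://us.example.com/orders");
    }

    public String route(Message message) {
        return routes.entrySet().stream()
            .filter(e -> e.getKey().test(message))
            .map(Map.Entry::getValue)
            .findFirst()
            .orElse("https://default.example.com/inbox");
    }

    public static void main(String[] args) {
        ContentBasedRouter router = new ContentBasedRouter();
        System.out.println(router.route(new Message("order", "EU", "...")));
    }
}
```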

Formerly BPM, now ESB

Many ESBs are adding orchestration capabilities, a domain typically associated with BPM products. In fact, many BPM products are simply rebranded EAI products, and there’s at least one I know of that is now also marketed as an ESB. There’s definitely a continuum here, and the theme is much the same: a graphical modeling tool that is schema-driven, possibly with built-in adapter technology, definitely with BPEL import/export, but still requiring a developer to properly leverage it.

Both of these categories are great if your problem set centers around orchestration and building new services in a more efficient manner. If you’re interested in sub-millisecond operations for security, routing, throttling, instrumentation, etc., recognize that these solutions are primarily targeted toward business processing and integration, not your typical “in-the-middle” non-functional concerns. It is certainly true that, from a modeling perspective, the graphical representation of a processing pipeline is well-suited for the non-functional concerns, but it’s the execution performance that really matters.

Formerly MOM/ORB, now ESB

These products bring things much closer to the non-functional world, although as more and more features are thrown under the ESB umbrella, they may start looking more like one of the first two approaches. In both cases, these products try to abstract away the underlying transport layer. In the case of MOM, all service communication is put onto some messaging backbone. The preferred model is to leverage agents/adapters on both endpoints (e.g., having a JMS client library for both the consumer and provider), potentially not requiring any centralized hub in the middle. The scalability of messaging systems certainly can’t be denied; the bigger concern is whether agents/adapters can be provided for all systems. All of these products will have a fallback position of a gateway model via SOAP/HTTP or XML/HTTP, but you lose capabilities in doing so. For example, having endpoint agents can ensure reliable message delivery. If one endpoint doesn’t have an agent, you’ll only be reliable up to the gateway under control of the ESB. In other words, you’re only as good as your weakest link.

One key factor in looking at this solution is the heterogeneity of your IT environment. The more varied your systems, the greater the challenge of finding an ESB that supports all of them. In an environment where performance is critical, these may be good options to investigate, knowing that a proprietary messaging backbone can yield excellent performance versus trying to leverage something like WS-RM over HTTP. Once again, the operational model must be considered. If you need to change a contract between a consumer and a provider, the non-functional concerns are enforced at the endpoint agents. These tools must have a model where those policies can be pushed out to the nodes, rather than requiring a developer to change some code or model and go through a development cycle.
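A minimal sketch of the endpoint-agent model using the standard JMS API: the consumer’s client library puts service requests onto the messaging backbone rather than calling the provider over HTTP. Obtaining the ConnectionFactory is vendor-specific, and the JNDI names and queue here are placeholders.

```java
// A minimal sketch of a consumer-side JMS endpoint agent. The JNDI names
// and destination are hypothetical; the API calls are standard JMS 2.0.
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class JmsServiceRequest {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Queue requestQueue = (Queue) jndi.lookup("jms/CustomerServiceRequests");

        try (Connection connection = factory.createConnection()) {
            // A transacted session is one way the endpoint agent provides
            // the reliable delivery a plain HTTP gateway hop gives up.
            Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(requestQueue);

            TextMessage request = session.createTextMessage("<getCustomer id=\"42\"/>");
            producer.send(request);
            session.commit();
        }
    }
}
```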

The WS-* Enabling ESB

This category of product is the ESB that strives to allow an enterprise to have a WS-*-based integration model. Unlike the MOM/ORB products, they probably only require the use of agents on the service provider side, essentially to provide some form of service enablement capability: in other words, a SOAP stack. While five years ago this may have been very important, most major platforms now provide SOAP stacks, and many of the large application vendors provide SOAP-based integration points. If you don’t have an enterprise application server, these may be worthwhile options to investigate, as you’ll need some form of SOAP stack. Unlike simply adding Axis on top of Tomcat, these options may provide the ability to have intercommunication between nodes, effectively like a clustered application server. If it’s not apparent by now, these options are very developer-focused. Service enablement is a development activity. Also, like the MOM/ORB solutions, these products can operate in a gateway mode, but you do lose capabilities. Like the BPM/EAI solutions, these products are focused on building services, so there’s a good chance they may not perform as well for the “in-the-middle” capabilities typically associated with a gateway.
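As a point of reference on how commonplace SOAP stacks have become, here’s a minimal service enablement sketch using the standard JAX-WS annotations. It assumes a JAX-WS runtime is available (bundled with Java EE platforms and some JDKs), and the service itself is a placeholder.

```java
// A minimal sketch of service enablement with a standard SOAP stack,
// using JAX-WS annotations. The service and its logic are placeholders.
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class CustomerLookupService {

    @WebMethod
    public String getCustomerName(String customerId) {
        return "Customer " + customerId; // stand-in for a real lookup
    }

    public static void main(String[] args) {
        // Publishes a SOAP endpoint with a generated WSDL at ?wsdl.
        Endpoint.publish("http://localhost:8080/customer", new CustomerLookupService());
    }
}
```

If this is all you need, the question becomes what the WS-* enabling ESB adds beyond the platform’s own stack, which is why the clustering and node-intercommunication capabilities matter in the evaluation.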

The ESB Gateway

Finally, there are ESB products that operate exclusively as a gateway. In some cases this may be a hardware appliance, it could be software on commodity hardware, or it could be a software solution deployed as a stand-alone gateway. While the decision between a smart-network and a smart-node approach is frequently a religious one, there are certainly scenarios where a gateway makes sense. Perimeter operations are the most common, but it’s also true that many organizations don’t employ clustering technologies in their application servers, and instead leverage front-end load balancers. If your needs focus exclusively on the “in-the-middle” capabilities, rather than on orchestration, service enablement, or integration, these products may provide the operational model (configure, not code) and the performance you need. Unlike an EAI-rooted system, an appliance is typically not going to provide an anything-to-anything integration model. Odds are that it will provide excellent performance for a smaller set of protocols, although there are integration appliances that can talk to quite a number of standards-based systems.

Final words

As I’ve stated before, the number one thing is to know what capabilities you need first, before ever looking at ESBs, application servers, gateways, appliances, web services management products, or anything else. You also need to know the operational model for those capabilities. Are you okay with everything being a development activity that goes through a code release cycle, or are there things you want fully controlled by an operational staff that configures infrastructure according to a standard change management practice? There’s no right or wrong answer; it all depends on what you need. Orchestration may be the number one concern for some, service enablement of legacy systems for another; another organization may need security and rate throttling at the perimeter. Hopefully this post helps with your decision-making process. If I missed some of the ESB models out there or if you disagree with this breakdown, please comment or trackback. This is all about trying to help end users better understand this somewhat nebulous space.
