Archive for the ‘Infrastructure’ Category

Enterprise Architect: Advisor versus Gatekeeper

A recent conversation with a colleague delved into the complicated world of new technology decisions. At every organization I’ve been at, this has been a source of contention between four major groups: Enterprise Architects, Domain Architects, Development Teams, and Engineering Teams. I specifically listed Enterprise and Domain Architects separately, because I’ve seen contention between those two groups. It’s easy to come up with scenarios where each one of these teams should be involved, but it’s most problematic when one team tries to “own” the process.

Why does this struggle for ownership occur? Let’s start with a development team. First and foremost, they spend nearly all of their time on projects, which is where things get delivered. They are typically the ones closest to the immediate business need as represented by the project requirements, so they have fewer challenges getting justification or funding. They claim they need the new technology to deliver to the business, so they have a vested interest and want control over the decision.

An engineering team is also involved with projects, but faces the challenge of remaining relevant when the project ends. The technology typically gets handed off to an operations team, and if the engineering team has built shared infrastructure, rather than one-off infrastructure, there may not be much to do until the vendor releases the next version. Given that, it’s likely that the engineering team will expand into the world of technology architecture, but potentially only with the vendors they know, rather than taking a vendor-neutral architectural approach. This creates a risk of driving technology adoption based on new features rather than on company need, and it can create conflict with the technology architecture team, if one exists, as well as with other technology areas when continued feature creep results in overlap with other domains. Unfortunately, engineering teams don’t have as much visibility into the business need, because that’s all funneled through projects and the development teams, so the stage is set for tension between the development and engineering areas.

Now let’s throw in architecture. First, there can be conflicts within architecture teams, if there’s a separation and different reporting relationships for enterprise architects and portfolio/domain architects. Second, enterprise and domain architects can frequently be disconnected from both the “need” process of the projects and the technology delivery of the engineering team. Clearly, the right place for these roles to play is at the portfolio management level, where the categorization, prioritization, and strategy that result in the needs of individual projects take place. That doesn’t always happen, though, and these roles often wind up having to get involved by mandate, becoming the gatekeeper or bottleneck that everyone tries to avoid.

As many have said, there are typically far more ways to mess something up than there are to do it correctly, and this is certainly one of them. Coming back to the idea of the trusted advisor, my theory is that any approach that tries to mandate that technology introduction only come through one path is probably not going to work. Lots of people get great ideas, and guess what, not all of them come out of any particular role, and plenty of them come from outside of IT. (By the way, the same should hold true in reverse: IT can come up with plenty of good business ideas, just as the business can come up with good IT ideas.) The role of the architect is to be the trusted advisor and provide the appropriate context to make things successful. If someone has a great idea about a new technology, don’t stifle them because you didn’t come up with it; advise them on how to make it work based upon the context you have as an enterprise or domain architect, or advise them that it’s not going to be successful based upon that same context. That’s what an advisor does, and providing the appropriate guidance, whether it is what the other person wants to hear or not, is what will create trust. There will always be people who are out of alignment with the business needs and priorities. If you don’t create alignment (either by adjusting the individual’s view or adjusting your needs and priorities), you will be destined for frustration, splintering, and a potential lack of success. Create alignment, and create an environment where appropriate ideas can thrive regardless of the source. That’s the role of the trusted advisor, and that’s a big part of what enterprise and domain architects should do.

All content written by and copyrighted by Todd Biske. If you are reading this on a site other than my “Outside the Box” blog, it’s probably being republished without my permission. Please consider reading it at the source.

Oracle OpenWorld: The Big BPEL-ESB-OSB cook-off

Full disclosure: I am attending Oracle OpenWorld courtesy of Oracle.

The speaker in this session is Andreas Chatziantoniou from Accenture.  He’s discussing the overlap between Oracle’s BPEL, ESB (legacy Oracle), and OSB (BEA ESB) products. 

First up is BPEL. His slide states that BPEL should be used for system-to-system or service orchestration, when human workflow is needed, and when there are parallel request-response patterns. The next slide says that BPEL should not be used for complex data transformations, should not be used to program, and should not be used as a business modeling tool. At first glance, this may seem strange, but I think it’s more of an indication that BPEL is something that gets generated by your tool; it’s not something people should be editing directly. This point could be made more clearly. He is emphasizing that you should not use BPELJ (embedded Java in BPEL).
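
To make the parallel request-response point concrete: BPEL expresses this declaratively (typically through a flow activity that the tooling generates), but the underlying pattern is simply a fan-out to several services followed by a fan-in of their responses. Here is a minimal sketch of that pattern in plain Java, with hypothetical service calls standing in for the partner services; it illustrates the orchestration shape, not Oracle’s API.

import java.util.concurrent.CompletableFuture;

public class ParallelOrchestration {
    // Hypothetical service calls standing in for two partner links in a BPEL flow
    static String callCreditService(String customerId) { return "creditScore=720"; }
    static String callAddressService(String customerId) { return "address=verified"; }

    public static void main(String[] args) {
        String customerId = "12345";
        // Fan out: invoke both services in parallel
        CompletableFuture<String> credit = CompletableFuture.supplyAsync(() -> callCreditService(customerId));
        CompletableFuture<String> address = CompletableFuture.supplyAsync(() -> callAddressService(customerId));
        // Fan in: wait for both responses and combine them into a single reply
        String combined = credit.thenCombine(address, (c, a) -> c + ", " + a).join();
        System.out.println(combined);
    }
}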

He’s now talking about “dehydration,” a term I had not heard before.  He’s using that to refer to the writing of a process state to disk so it can be restored at a later time.  He stated that this is a natural part of BPEL, but not part of ESB/OSB.  I can live with that.  A service bus shouldn’t be doing dehydration any more than a network switch should be.
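
For anyone else who hadn’t run across the term: dehydration just means persisting the state of a long-running process instance so the engine can free its resources and resume the work later. A toy sketch in Java, with a hypothetical ProcessState type and a local file, shows the basic idea; a real BPEL engine does this against a database-backed dehydration store.

import java.io.*;

public class Dehydration {
    // Toy stand-in for a long-running process instance's state
    record ProcessState(String instanceId, int currentStep) implements Serializable {}

    // "Dehydrate": write the state to disk so the engine can release memory
    static void dehydrate(ProcessState state, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(state);
        }
    }

    // "Rehydrate": restore the state when the awaited response finally arrives
    static ProcessState rehydrate(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (ProcessState) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File store = new File("process-1.state");
        dehydrate(new ProcessState("process-1", 3), store);
        System.out.println(rehydrate(store));
    }
}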

Now on to ESB/OSB. His slide says they should be used for loose coupling, location transparency, mediation, error handling, transformation, load balancing, security, and monitoring. Good list, although it does have the two grey areas of mediation and transformation. You need to further define what types of mediation and transformation should and should not be done. The way I’ve phrased it is that ESBs should be about standards-in and standards-out. As long as you’re mediating and transforming between standards (and the same standards on both sides), it’s a good fit. If you are transforming between external and internal standards, as is the case in an external gateway, consider whether your ESB is the right fit, since these mappings can get quite complicated. Those are my words, not the speaker’s; sorry, this is something I’ve thought a lot about.

He’s now talking about mediation, and specifically referring to a component that existed in Oracle’s legacy ESB. He said it connects components in a composite application. To me, this does not belong in a service bus, and in the case of Oracle Service Bus, it does not. He did not go into more detail on the type of mediation (e.g. security token mediation, message schema mediation, transport mediation). As previously said, this needs to be narrowed down to make an appropriate decision on whether your mediation is really new business logic that belongs on a development platform, or mediation between supported standards that can be done by your connectivity infrastructure.

On transformation, Andreas focused more on what the platforms can do, rather than on what they should do, calling out that XML transformations via XQuery, XSLT, etc. can be done equally well on any of the platforms. His advice was to do it in the service bus and avoid mixed scenarios. I’m really surprised at that, given how CPU-intensive transformations and mappings can be. His point was that in a very large (50-60 steps) BPEL process, handling transformations could get ugly. I see the logic in this, but I think if you do the analysis on where those transformations are needed, it may only be in one activity and best handled by the platform for that activity itself.
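
His observation that XQuery and XSLT transformations can run on any of these platforms is really a statement about the ubiquity of the underlying libraries. As a simple illustration (the file names are hypothetical), the standard JAXP API is all it takes to apply a stylesheet in Java, which is exactly why the decision of where to run a transformation is an architectural choice rather than a question of capability:

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.File;

public class XsltExample {
    public static void main(String[] args) throws Exception {
        // Compile the stylesheet once; applying it repeatedly is the CPU-intensive part
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer transformer = factory.newTransformer(new StreamSource(new File("order-to-invoice.xsl")));
        // Apply the mapping to one message; file names here are hypothetical
        transformer.transform(new StreamSource(new File("order.xml")),
                              new StreamResult(new File("invoice.xml")));
    }
}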

Overall, the speaker spent too much time discussing what the products can do, calling out overlaps, and not enough time on what they should do. There was some good advice for customers, but I think it could have been made much simpler. My take on this whole debate has always been straightforward. A BPEL engine is a service development platform. You use it to build new services that are most likely some composite of existing services. I like to think of it as an orchestrated service platform. As I previously said, though, you don’t write BPEL. You use the graphical modeler of your tool, and behind the scenes, it may (or may not) be creating BPEL.

A service bus is a service intermediary. You don’t use it to build services, you use it to connect service consumers and service providers. Unfortunately, in trying to market the service bus, most vendors succumbed to feature creep, whether by building their ESB from a legacy EAI product or by adding more development-like features to get more sales. Think of it as a very intelligent router, meant to be configured by Operations, not coded by developers.

Oracle OpenWorld: Michael Dell Keynote

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

Michael Dell started by presenting some facts: $1.2 trillion is spent annually on IT infrastructure, $400 billion on hardware/software and $800 billion on labor and services. The dilemma is that we spend 70% on keeping the lights on and only 30% on innovation. The desire is to flip that balance (the same message that Ann Livermore of HP delivered yesterday). Dell is making a commitment to taking $200 billion out of the $1.2 trillion spend by enabling the efficient enterprise through standardization, simplification, and automation.

On standardization, Michael discussed the role of x86 hardware and that today, 90% of all business applications are running on x86 hardware. According to Dell’s calculations, databases run up to 200% better on x86 systems than on proprietary hardware. Oracle and Dell are committed to making the technology work harder, not the user.

Moving on to simplification… the theme is pragmatic consolidation. He talked about Dell’s tiered storage capabilities, including iSCSI, solid state disks, and 10 Gigabit Ethernet. Similar to the opening keynote, he stated that 20x performance gains are possible with solid state storage technology. He then moved on to virtualization, giving examples of 20:1 server consolidation, 50% operational savings, and 1/3 of IT resources freed up for other efforts.

Oracle OpenWorld: An Architect’s View of the New Features of Oracle SOA Suite 11g Release 1

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

The first wave of industry standardization was around function-specific standards in areas causing headaches in the integration space. The speakers are emphasizing the role of SCA in the standardization of the service platform in the same way that Java EE played a role in the evolution of the application server. I’ll be honest, I’m still not a big SCA fan. I know Oracle is, though. The one good thing being shown is that the hosting environments can be managed in a single, unified way, regardless of whether that service is hosted in BPEL PM or WebLogic. As long as there’s good tooling that hides the various SCA descriptors, this is a good thing.

Now they are talking about the event delivery network. It’s nice to see a discussion on fundamentals rather than trying to jump into a CEP discussion. They’re talking about having an event catalog, utilizing an EDL (event description language), and easily connecting publishers and subscribers. This is a good step forward, in my opinion. It may actually get people to think about events as first-class citizens in the same way as services.
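
At its core, an event delivery network is a catalog of named event types plus a broker that connects publishers and subscribers without either knowing about the other. The sketch below shows that shape in plain Java; it is an illustration of the concept, not the Oracle EDN API, and the event and class names are made up.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventCatalogSketch {
    // The "catalog": event names mapped to their registered subscribers
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // A consumer declares interest in an event type from the catalog
    public void subscribe(String eventName, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventName, k -> new ArrayList<>()).add(handler);
    }

    // A provider publishes an event without knowing who is listening
    public void publish(String eventName, String payload) {
        subscribers.getOrDefault(eventName, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventCatalogSketch edn = new EventCatalogSketch();
        edn.subscribe("OrderShipped", payload -> System.out.println("Billing saw: " + payload));
        edn.publish("OrderShipped", "orderId=42");
    }
}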

Now, they’re on to Oracle Human Workflow. It is all task-based, with property-based configuration. The routing of tasks can be entirely dynamic, rather than based on static rules. It has integration with Oracle Business Rules. It publishes events on the EDN (e.g. onTaskAssigned, onTaskModified, etc.). Nice to see them eating their own dog food with the use of EDN.

They’ve now moved on to Service Data Objects. They’ve introduced entity variables into BPEL to allow working with SDOs.

Additional subjects in this session included Metadata Services (MDS) and the Dev-Test-Prod problem (changing of environment-specific parameters as code is promoted through environments). On the latter, there are a large number of parameters that can now be modified via a “c-plan,” applied at deployment time. Anything that makes this easier is a good thing in my opinion.
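
Whatever the specific mechanism, the general idea behind deployment-time parameter substitution is to package the composite with baseline values and overlay the environment-specific ones (endpoints, credentials, timeouts) at deployment. A rough sketch of that layering, with hypothetical property file and key names, might look like the following; Oracle’s actual mechanism is a declarative plan applied by the deployment tooling rather than code like this.

import java.io.FileInputStream;
import java.util.Properties;

public class DeployTimeConfig {
    public static void main(String[] args) throws Exception {
        // Baseline parameters packaged with the composite (hypothetical file names)
        Properties config = new Properties();
        try (FileInputStream base = new FileInputStream("composite-defaults.properties")) {
            config.load(base);
        }
        // Environment-specific overrides applied at deployment time, e.g. production endpoints
        try (FileInputStream overrides = new FileInputStream("prod-overrides.properties")) {
            config.load(overrides); // the later load wins, replacing dev/test values
        }
        System.out.println("Service endpoint: " + config.getProperty("partner.service.url"));
    }
}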

Black/White, Coding/Configuration, and other Shades of Gray

I’ve been going through the TOGAF 9 documentation, and in the Application Software section of the Technical Reference Model, two categories are recognized: Business Applications and Infrastructure Applications. They define these two as follows:

Business applications … implement business processes for a particular enterprise or vertical industry. The internal structure of business applications relates closely to the specific application software configuration selected by an organization.
Infrastructure applications … provide general purpose business functionality, based on infrastructure services.

There’s a lot more to the descriptions than this, but what jumped out at me was the typical black and white breakdown of infrastructure and “not” infrastructure. Normally, it’s application versus infrastructure, but since TOGAF uses the term infrastructure application, that wording obviously won’t work here; you get the point, though. What I’ve found at the organizations I’ve worked with is that there’s always a desire to draw a black and white line between the world of infrastructure and the application world. In reality, it’s not that easy to draw such a line, because it’s an ever-changing continuum. It’s far easier to see this from the infrastructure side, where infrastructure used to mean physical devices, but now clearly involves software solutions ranging from application servers to, as TOGAF 9 correctly calls out in its description of infrastructure applications, commercial off-the-shelf products.

The biggest challenge in the whole infrastructure/application continuum is knowing when to shift your thinking from coding to configuration. As things become more commoditized and more like infrastructure, your thinking has to shift to that of configuration. If you continue with a coding and customization mentality, you’re likely investing significant resources into an area without much potential for payback. There are parallels between this thinking and the cloud computing and software as a service movements. You should use this thinking when making decisions on where to leverage these technologies and techniques. If you haven’t changed your thinking from coding to configuration, it’s unlikely that you’re going to be able to effectively evaluate SaaS or cloud providers. When things are offered as a service, your interactions with them are going to be a configuration activity based upon the interfaces exposed, and it’s very unlikely that any interface will have as much flexibility as a programming language. If you make good decisions on where things should be configured rather than coded, you’ll be in good shape.

Governing Anonymous Service Consumers

On Friday, the SOA Chief (Tim Vibbert), Brenda Michelson, and I had a conversation on Twitter regarding SOA governance and anonymous service consumers. Specifically, how do you provide run-time governance for a service that is accessed anonymously?

If you’ve read this blog or my book, you’ll know that my take on run-time SOA governance is the enforcement and/or monitoring of compliance with the policies contained within the service contract. Therein lies the biggest problem: if the service consumer is anonymous, is there a contract? There’s certainly the functional interface, which is part of the contract, but there isn’t any agreement on the allowed request rates, hours of usage, etc. So what do we do?

The first thing to recognize is that while there may not be a formal contract that all consumers have agreed to, there should always be an implied contract. When two parties come to the table to establish an agreement, it’s likely that both sides come with a contract proposal, and the final contract is a negotiation between the two. The same thing must be considered here. If someone starts using a service, they have some implicit level of service that they expect to receive. Likewise, the service provider knows both the capacity it can currently handle and how it thinks a typical consumer will use the service. Unfortunately, these implied contracts can frequently be wrong. The advice here is that even if you are trying to lower the barrier for entry by having anonymous access, you still need to think about service contracts and design to meet some base level of availability.

The second thing to do, which may seem obvious, is to avoid anonymous access in the first place. It’s very hard to enforce anything when you don’t know where it’s coming from. Your authorization policy can simply be that you must be an authenticated user to use the service. Even in an internal setting, having some form of identity on the message, even if there are no authentication or authorization policies, becomes critical when you’re trying to understand how the systems are interacting, perform capacity planning, and especially in a troubleshooting scenario. Even services with low barriers to entry, like the Twitter API, often require identity.
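
Even a minimal identity requirement takes very little machinery to enforce. The sketch below (the header name and surrounding code are hypothetical) simply rejects any message that doesn’t say who it is from, which is the least you need to support the capacity planning and troubleshooting scenarios mentioned above.

import java.util.Map;

public class IdentityGate {
    // Minimal check: every message must carry some consumer identity, even without full auth
    static void handle(Map<String, String> headers, String body) {
        String consumerId = headers.get("X-Consumer-Id"); // hypothetical header name
        if (consumerId == null || consumerId.isBlank()) {
            throw new SecurityException("Rejecting anonymous request: identity is required");
        }
        // Knowing who called is what enables usage metrics, capacity planning, and troubleshooting
        System.out.println("Processing request from " + consumerId);
    }

    public static void main(String[] args) {
        handle(Map.of("X-Consumer-Id", "partner-portal"), "{\"action\":\"getQuote\"}");
    }
}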

The next thing you should do is leverage a platform with elasticity. That is, the available capacity should grow and shrink with the demand. If it’s anonymous, and new consumers can start using it simply by getting the URLs from someone else, you have no control over the rate at which usage will scale. If the implied level of availability is that the service is always available, you’ll need on-demand resources.

Finally, you still need to protect your systems. No request is completely anonymous, and there are things you can do to ensure the availability of your service against rogue consumers. Requests will have source IP addresses on them, so you can look for bad behavior at that level. You can still do schema validation, look for SQL injection, etc. In other words, you still need to do DoS protection. You also should be looking at the usage metrics on a frequent basis to understand the demand curve, and making decisions accordingly.
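
As one example of protecting availability from rogue consumers, a per-source-IP counter can throttle any address that exceeds an allowance within a time window. This is a bare-bones sketch of the idea, the kind of policy a gateway or service intermediary would normally enforce for you; a production version would also need a sliding window and shared state across nodes.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class PerIpRateLimiter {
    private final int maxRequestsPerWindow;
    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<>();

    PerIpRateLimiter(int maxRequestsPerWindow) {
        this.maxRequestsPerWindow = maxRequestsPerWindow;
    }

    // Returns false when a source IP exceeds its allowance within the current window
    boolean allow(String sourceIp) {
        return counts.computeIfAbsent(sourceIp, ip -> new AtomicInteger())
                     .incrementAndGet() <= maxRequestsPerWindow;
    }

    // In a real deployment a scheduled task would reset the window, e.g. once per minute
    void resetWindow() {
        counts.clear();
    }

    public static void main(String[] args) {
        PerIpRateLimiter limiter = new PerIpRateLimiter(2);
        System.out.println(limiter.allow("192.0.2.10")); // true
        System.out.println(limiter.allow("192.0.2.10")); // true
        System.out.println(limiter.allow("192.0.2.10")); // false: throttled
    }
}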

When is Redundancy Okay?

A common theme that comes up in architecture discussions is the elimination of redundancy. Simply stated, it’s about finding systems that are doing the same thing and getting rid of all of them except one. While it’s easily argued that there are cost savings just waiting to be realized, does this mean that organizations should always strive to eliminate all redundancy from their technology architectures? I think such a principle is too restrictive. If you agree, then what should the principle be?

The principle that I have used is that if I’m going to have two or more solutions that appear to provide the same set of capabilities, then I must have clear and unambiguous policies on when to use each of those solutions. Those policies should be objective, not subjective. So, a policy that says “Use Windows Server and .NET if your developer’s preferred language is C#, and use a Java platform if your developer’s preferred language is Java” doesn’t cut it. A policy that says, “Use C# for the presentation layer of desktop (non-browser) applications, and use Java for server-hosted business-tier services” is fine. The development of these policies is seldom cut and dried, however. Two factors that must be considered are the operational model/organizational structure and the development-time values/costs involved.

On the operational model/organizational structure side of things, there may be a temptation to align technology choices with the organizational structure. While this may work for development, the engineering and operations teams are frequently centralized, supporting all of the different development organizations. If each development group is free to choose its own technology, this adds cost for the engineering and operations teams, as they need expertise in all of the platforms involved. If the engineering and operations functions are not centralized, then basing technology decisions on the org chart may not be as problematic. If you do this, however, keep in mind that organizations change. An internal re-organization or a broader merger/acquisition could completely change the foundation on which the policies were defined.

On the development side of things, the common examples where this comes into play are environments that involve Microsoft or SAP. Both of these solutions, while certainly capable of operating in a heterogeneous environment, provide significant value when you stay within their environments. In the consumer space, Apple fits into this category as well. The model works best when it’s all Apple/Microsoft/SAP from top to bottom. There are certainly other examples; these are just the ones that people associate with this more strongly than others.

Using SAP as an example, they provide both middleware (NetWeaver) and applications that leverage that middleware. Is it possible to have SAP applications run on non-SAP middleware? Certainly. Is there significant value-add if you use SAP’s middleware? Very likely, yes. If your entire infrastructure is SAP, there are no decisions to be made. If not, you now have to decide whether you want both SAP middleware and your other middleware, or not. Likewise, if you’ve gone through a merger and have both Microsoft middleware and Java middleware, you’re faced with the same decision. The SAP scenario is a bit more complicated because of the applications piece. If we were only talking about custom development, the more likely choice is to go all Java, all C#, or all -insert your language of choice-, along with the appropriate middleware. Any argument about the value-add of one over the other is effectively a wash. When we’re dealing with out-of-the-box applications, it’s a different scenario. Deploying an SAP application that will automatically leverage SAP middleware needs to be compared against deploying the SAP application and then manually configuring the non-SAP middleware. In effect, I create additional work by not using the SAP middleware, which chips away at the cost reductions I may have gained by going with a single source of middleware.

So, the gist of this post is that a broad principle that says, “Eliminate all redundancy” may not be well thought out. Rather, strive to reduce redundancy where it makes sense, and where it doesn’t, make sure that you have clear and unambiguous policies that tell project teams how to choose among the options. Make sure you consider all use cases, such as when a solution may span domains. Your policies may say “use X if in domain X, use Y if in domain Y,” but you also need to give direction on how to use X and Y when the solution requires communication across domains X and Y. If you don’t, projects will either choose what they want (subjective, bad) or come back to you for direction anyway.

What are the Services?

I recently completed a certification in ITIL v3 Foundations. On the plus side, I found that the ITIL framework provided some great structure around the concept of service management that is very applicable to SOA. There was one key question, however, that I felt was left unanswered. What are the services?

My assumption going in was that ITIL was very much about running IT operations within an enterprise, so I expected to see some sort of a service domain model associated with the “business of IT.” That’s not the case, at least not in the material I was given. There are a number of roles defined that are clearly IT specific, but overall, I’d say that many of the processes and functions presented were not specific to IT at all. As an example, ITIL foundations won’t tell you whether server provisioning or application deployment should be services in your catalog or not. Without this, an effort to adopt ITIL can struggle in the same way as an SOA adoption effort can. I’ve seen first-hand an organization thrash around trying to define the right operational and engineering services. ITIL does offer the right guidance in helping you define them, in that it begins with understanding your customer.

This is the same question where many SOA initiatives struggle. We can have lots of conceptual talk about how to build services the right way, but actually defining the services that should be built is a challenge. In both ITIL and SOA adoption, there is a penalty for defining too many services. It’s probably much more pronounced in ITIL, because those services tend to carry a higher cost: managing and using them typically involves more manual effort than managing and using a web service, although if you’re doing business-driven SOA, the costs may be very similar.

Overall, I definitely felt there is a lot of value in the ITIL v3 framework, and I think if you are leading an SOA adoption effort, it’s worth learning about, as it will help your efforts. If you’re looking to improve IT operations, it will likewise help. Just know that you’ll still need to figure out what your services are on your own, and that can have a big impact on the success of your adoption efforts.

Fundamental Question on Virtualization

Since I first learned a bit about virtualization, there’s been one question that still keeps nagging me: isn’t this what operating systems were originally supposed to do? Back in my undergraduate days in the Computer Science department at the University of Illinois at Urbana-Champaign, I took a course in operating systems, and I seem to recall it being all about the allocation of memory, I/O, storage, and processor cycles among processes. This seems to be the exact same problem that virtualization is trying to solve. About the only differences I can see are that virtualization, at least on the server side, does try to go across physical boundaries with things like VMware’s VMotion, and it also allows us to avoid having to add physical resources just because one system requires Windows Server while another requires SuSE Linux.

So, back to the question. Did we simply screw up our operating systems so badly with so much bloat that they couldn’t effectively allocate resources? If so, you could argue that a new approach that removes all the bloat may be needed. That doesn’t necessarily require virtualization, however. There’s no reason why better resource management couldn’t be placed directly into the operating system. Either way, this path at least has the potential to provide benefits, because the potential value is more heavily based on the technology capabilities, rather than on how we leverage that technology.

In contrast, if the current state has nothing to do with the operating system’s capabilities, and everything to do with how we choose to allocate systems to those resources, then will virtualization make things any better? Put another way, how much of the potential value in applying virtualization is dependent on our ability to properly configure the VMs? If that number is significant, we may be in trouble.

This is also a key point of discussion as people look into cloud computing. The arguments are again based on economies of scale, but the value is heavily dependent on the ability to efficiently allocate the resources. If the fundamental problem is in the technology capabilities, then we should eventually see solutions that allow for both public-cloud computing and private-cloud computing (treat your internal data center as your own private cloud). If the problem is not the technology, then we’re at risk of taking our problems and making them someone else’s problem, which may not actually lead to a better situation.

What are your thoughts on this? Virtualization isn’t something I think about a lot, so I’m open to input on this. So far, the most interesting thing for me has been hearing about products that are designed to run on a hypervisor directly, which removes all of the OS bloat. The risk is that 15 years from now, we’ll repeat this cycle again.

Best of Breed or Best Fit?

I saw the press release from SoftwareAG that announced their “strategic OEM partnership” with Progress Software for their Actional products.  While I’m not going to comment on that particular arrangement, I did want to comment on the challenge that we industry practitioners face when trying to leverage vendor technologies these days.

There has been a tremendous amount of consolidation in the SOA space.  There’s also been a lot of consolidation in the Systems Management space, another area where I pay a lot of attention. Unfortunately, the challenge still comes down to an integration problem. The smaller companies may be able to be more nimble and add desired capabilities.  This approach is commonly referred to as a “best of breed” approach, where you pick the product that is the best for the immediate needs in a somewhat narrow area.  Eventually, you will need to integrate those systems into something larger.  This is where a “best fit” approach sometimes comes into play.  Here, the desire is to focus more on breadth of capability than on depth of capability.

The definition of what is appropriate breadth is always changing, which is why many of the “best fit” vendors have grown by acquisition rather than by continued enhancements and additions to their own solutions. Unfortunately, this approach doesn’t necessarily make the integration challenges go away. Sometimes it only means that a vendor is well positioned to offer consulting services as part of their offering, rather than having to go through a third-party systems integrator. It does mean that the customer has a “single throat to choke,” but I don’t know about you; I’d much rather have it all work and not have to choke anyone.

This recent announcement is yet another example of the relationships between vendors that can occur.  OEM relationships, rebranding, partnerships, etc.  Does it mean that we as end users get a more integrated product?  I think the answer is a firm maybe.

The only way that makes sense to me is to always retain control of your architecture. It doesn’t do any good to ask questions like “Does your product integrate with foobar?” or “How easy is it to integrate with such-and-such?” You need to know the specifics of where and how you want these systems to integrate, and then compare that to what the vendors have to say, whether it’s all within their own suite of branded products or involves partners and OEM agreements. The more specifics you have, the better. You may find that a highly integrated suite is integrated in name only, or you may find that it really does operate as a well-oiled machine. Perhaps you’ll see a small vendor that has worked their tail off to integrate seamlessly into a larger ecosystem, and perhaps you’ll find a small vendor that is best left as an island in the environment.

Then, after getting answers, go through a POC effort to actually prove it out and get your hands dirty (you execute the POC, not the vendor). There are many choices involved in integrating these systems, such as what the message schemas will be and the mechanism of the integration itself: are you integrating “at the glass” via cut and paste between applications? Are you integrating in the middle via service interactions in the business tier? Or are you integrating at the data layer, either through direct database access or through some data integration/MDM-like layer? Just those questions alone can cause significant differences in your architecture. The only way you’ll see what’s really involved in the integration effort is to sit down and try it out: first define how you’d like it to work through a reference architecture, then question the vendors on how well they map to that reference architecture, and finally get your hands dirty in a POC and try to make it work as advertised in those discussions.

Cloud versus Grid

I am back from vacation and trying to catch up on my podcasts. In an IT Conversations Technometria podcast, Phil Windley spoke with Rich Wolski. Rich is working on Eucalyptus, an open source implementation of the Amazon EC2 interface.

Rich gave a great definition of the difference between grid computing and cloud computing. Grid computing typically involves a small number of users requesting big chunks of resources from a homogeneous environment. Cloud computing typically involves a large number of users with relatively low resource requirements in a heterogeneous environment.

Rich and Phil went on to discuss the opportunities for academic research in the cloud computing and virtualization spaces. If you are considering when and how to leverage these technologies, give it a listen.

Gartner AADI: SAP Presentation

I’m in an SAP session now. They’ve got their “End-to-End SOA Composition and Middleware Platform” picture up right now. It’s always nice when your vendor’s picture aligns with your own. Specifically, they have a separation between their Enterprise SOA Provisioning layer and their SOA Interoperability layer. Enterprise SOA Provisioning includes “Service and Event Enablement” and “Connectivity and Integration.” In SOA Interoperability, they have “Service Bus and SOA Management.” Thanks to the confusion between the ESB space and the EAI space, these two layers are frequently combined, and I think they should be separate. The SOA Interoperability layer should be about mediating across a set of standards that the enterprise has adopted, while the enablement and integration layer is about hooking non-standard things into it. Push those pieces as close to the endpoints as possible, and put the stuff that’s required on all standards-based service messages in the middle. Unfortunately, they’ve now put up a slide on SAP NetWeaver Process Integration 7.1 and are positioning it to cover Service Bus, Service Integration, and SOA Management. So, conceptually they get it, but in terms of the product mapping, there could be some challenges if you don’t deploy it properly. If you separate out one PI environment for SOA Interoperability and a second environment for Enablement and Integration, a lot of the potential risks can be mitigated.

Enterprises need to think architecture, not integration

In a blog entry last week and his podcast for this week, David Linthicum lamented the fact that many technology vendors are too focused on integration and not enough on architecture. My opinion on this is that the problem lies first with enterprises, and not with technology vendors.

To explain this, I first need to split the technology product space into two large groups. First, there are products that are pure infrastructure. They are platforms on which someone else builds solutions. This is the familiar space of database platforms, application servers, network appliances, EAI platforms, ESBs, MOM servers, etc. For products in this space, I have absolutely no problem with the vendors providing products that are focused on making integration easier. Does this enable enterprises to build up layers of “glue” in the middle? Absolutely, but at the same time, the enterprise had to have a need (whether perceived or real) to make their integration efforts easier.

The second group of technology products are the actual business solution providers, whether it’s a big suite from SAP or Oracle, web-based solutions like Workday and Salesforce.com, or anything in between. These vendors absolutely should be focused on architecture first. At the same time, I don’t think many of these products are being marketed and sold on their integration benefits; they’re being sold on their business capabilities.

So, what’s the problem then? The problem comes when the enterprise IT staff involved with technology identification and selection is too focused on integration, rather than architecture. Almost always, when I hear an enterprise talk about integration, it’s a just-in-time effort. Someone is building some new system, and as part of the design of that system, they decide they need to talk to some other system. No thought of this need occurred in advance from either side of the integration effort. In putting together the solution, the focus is simply on the minimal amount of work to put the glue in the middle. As long as this trend continues, the infrastructure vendors are going to continue to market their products to this space. While it’s a noble quest to try to educate and market at the same time, it’s a risky strategy to present using a different mental model than that of your target audience.

The change that needs to occur is that integration needs to be a primary principle that is thought about at the time a system is placed into production. Normal behavior is to build a solution for my stakeholders and my users, and not think about anything else. In past posts (here, here), I’ve talked about three simple questions that all projects should start thinking about. One of those questions is “What services does your solution use / expose?” How many projects actually identify anything other than just what their front end consumes? Does anyone see this as a problem?

Let’s come back to the infrastructure vendors. They actually do need to think about architecture and services, but in a different space: management. I’ve railed on this in the past. How many vendors expose all of the capabilities in their user-facing management console through one or more service interfaces? If I want to embrace IT Systems Automation, how on earth am I going to do this with what these vendors give me? I’m not. I’m going to have to leverage management adapters in my automation environment. Does this sound familiar? It sure sounds like EAI to me. The best way I see to address this is to think about integration in advance. Don’t think about it at the time someone comes and says, “I need to talk to your system.” Think about it at the time you build your solution and ask the question, “How will other systems need to interact with this?” Yes, this is a bit of predicting the future, and we’ll probably expose things that no one ever uses, but I think an enterprise will be in a better state if it tries to anticipate in advance, thinking about architecture, rather than continuing with today’s approach of integrating on demand.
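
To make the complaint about management tools concrete: the ask is simply that whatever the management console can do should also be exposed as callable service operations, so automation doesn’t require screen scraping or proprietary adapters. The interface below is only a sketch of the shape such a contract might take; the operation names are hypothetical and not tied to any vendor’s product.

public class ManagementApiSketch {
    // The kind of service interface a management console's capabilities could also be exposed through
    interface DeploymentManagementService {
        String deploy(String application, String version, String environment);
        String status(String serverName);
    }

    public static void main(String[] args) {
        // Trivial stub: the point is that the operations are callable by other systems,
        // not locked inside a user-facing console (names here are hypothetical)
        DeploymentManagementService svc = new DeploymentManagementService() {
            public String deploy(String application, String version, String environment) {
                return application + "-" + version + " deployed to " + environment;
            }
            public String status(String serverName) {
                return serverName + ": RUNNING";
            }
        };
        System.out.println(svc.deploy("billing", "2.3", "prod"));
        System.out.println(svc.status("app-server-01"));
    }
}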

Another day, one less vendor

The press releases came out today that SOA Software has bought LogicLibrary, with blogosphere comments from Miko Matsumura, Dana Gardner, and Jeff Schneider. I see this as a step toward the bigger SOA platform players by SOA Software. At this point, most of the players in SOA platforms all now have a registry/repository offering. IBM has WebSphere Registry Repository, Oracle/BEA has AquaLogic Registry Repository (consisting of the OEM’d Systinet and purchased Flashline products), Tibco resells Systinet, SoftwareAG has the former WebMethods/Infravio, Iona has Artix Registry/Repository, SAP has their Enterprise Service Repository, and Microsoft has their Oslo efforts. I think it’s safe to say that the vendors that are trying to be the acquirer rather than the acquired have all realized that a registry/repository is the center of the SOA technology universe. Now if only they could talk to each other easily along with the CMDBs of the ITIL technology world.

In my “Future of ESBs” post, I talked about how selling an ESB on its own is a difficult proposition because of the relative value that a developer will place on it. The same thing certainly holds true for a registry/repository, and I think the market has shown that to be the case by now having all of the registry/repository providers get swallowed up by larger fish. It would be interesting to know how many times these products are sold on their own versus being bundled in as a value-add with a larger purchase.

The Future of ESBs

Yogish Pai had an interesting post titled “A decision maker’s concern about ESB.” In it, he provided two quotes, one from the Chief Architect of a financial services company and another from the CTO of a transportation company, both of which raised some concerns about leveraging an ESB.

ESBs have been one of the more controversial technology products in quite some time. They’ve been attacked as either rebranded EAI technology or efforts by vendors to “sell SOA” when most of us pundits have stated that you can’t buy SOA. I’ve posted in the past (here, here, and here) on ESBs with more of a neutral approach, discussing capabilities that are needed and simply pointing out that ESBs are one way of providing those capabilities, and that’s still my stance. I’ve had the opportunity to work with companies that had purchased an ESB as well as companies that wouldn’t touch one with a ten-foot pole. In both cases, the companies had found a suitable way to provide these capabilities, so you can’t say that one approach was better than the other.

What ultimately will decide the fate of the ESB will probably not be the specific technical capabilities associated with it, but the value that enterprises place on those capabilities. My past posts have stated my preference that the capabilities associated with this space really belong in the hands of operations rather than the hands of developers. As a result, you’d have to compare the cost/value of an ESB or other intermediary to the cost of other network intermediaries, such as switches, load balancers, and proxying appliances. Unfortunately, the ESB space is dominated not by traditional networking companies, but by middleware companies. As a result, the products are being marketed to developers with feature after feature being thrown in, creating overlap with service hosting platforms, integration brokers, and orchestration engines. This dilutes the benefits of the core capabilities and, if anything, can make it more complicated to get those things done. In addition, these products may now clash with other products in the vendor’s portfolio, putting the sales staff in a difficult position.

The challenge that I see is that the value of a typical network load balancer from the view of a developer is pretty low. From their perspective, the features provided by the load balancer are minimal compared to what they need from the typical application server. As a result, I suspect that ESBs are very likely to become bundled capabilities rather than standalone products. It certainly means that there’s room for open source products, given that developers aren’t placing a lot of value on those capabilities, yet they are necessary. Open source products still need mindshare, however, so it will be interesting to see where this goes.

