Archive for the ‘SOA’ Category

Implementing Effective Governance

According to Joe McKendrick’s ZDNet coverage of the recent Gartner Application Architecture, Development, and Integration summit, SOA governance and siloed thinking are top of mind.

If this really is the case, how do we make our governance efforts more effective? The more I think about this, the more I come back to a post of mine from earlier this year: “Want Successful Enterprise Architecture? Define ‘Enterprise’ First.” I’m convinced that this is a critical step for any effort that tries to go beyond a project-level scope, SOA initiatives included. If you don’t provide a structure that says which things will be implemented and managed at an enterprise level, versus a domain level or a project/team level, anything with the term “enterprise” in it will be a struggle.

Too often, the approach to governance is concerned with establishing oversight, not with establishing outcomes rooted in an agreed-upon definition of what will be managed at an enterprise level, at a domain level, and at the project level. Does it really help to set a standard that a particular coding library must be used when there is no central team that manages the library, no centralized support team, and no stated strategy for developer portability across projects? No, it just gets people up in arms and leads to accusations that EA or the governance team is an ivory tower setting arbitrary standards.

In my book, I defined governance as the combination of people, policies, and processes that are put in place to ensure the organization achieves one or more desired behaviors and outcomes. It’s not there simply to provide a check mark that says, “I went through a review.” In the absence of clearly defined desired behaviors and outcomes, that’s what you will have. There is no reason to have an enterprise architecture team review a project if nothing is managed (or desired to be managed) at an enterprise level. You need to have some idea of what those things are up front, along with a mechanism for quickly making decisions on new candidates for enterprise items. The project team must know that this analysis will be done and that it is a necessary part of achieving the company’s strategic goals, which they should be well aware of. Lack of communication of these goals can be just as detrimental, and it is often a symptom of a lack of agreement on enterprise goals, or of inadequately specified goals: “Sure, we need to cut our IT costs by sharing more systems. I’m all for it, as long as they’re not mine.” Someone needs to define exactly what the target areas are.

To be successful, we must define the desired outcome first. We must clearly establish the list of things that will be managed at an enterprise level, managed at a divisional level, or left to the discretion of individual projects/teams. In fact, it’s even more fundamental than this: we can’t even know what success is without doing this step. There was no shortage of companies in the past that stated they were adopting SOA; my question to them would be, “How do you know when you’ve been successful?” Simply having a bunch of services doesn’t mean you’ve adopted SOA; they have to be the right services. Too often, enterprise architecture teams are positioned for failure because this fundamental step has not happened. Before you task your enterprise architecture team with reviewing all projects, make sure you’ve defined what “enterprise” means. If you haven’t, task your enterprise architecture team with analyzing what’s out there and coming up with recommendations. Then your governance program will actually have a desired outcome to use in its reviews.

Maintaining a Service Mentality

On Twitter, Brenda Michelson of Elemental Links started a conversation with the question:

Do #entarch frameworks enable or constrain practice of (value from) enterprise architecture?

In my comments back to Brenda, it became clear to me that there’s a trap that many teams fall into, not just Enterprise Architecture: adopting an inward view rather than an outward one.

As an example, I once worked with a team that was responsible for the creation, delivery, and evolution of data access services. Over time, the teams that needed these services were expressing frustration that the services available were not meeting their needs. They could eventually get what they needed, but in a less than efficient manner. The problem was that the data services team’s primary goal was to minimize the number of services they created and managed. In other words, they wanted to make their own job as easy as possible. In doing so, they made the job of their customers more and more difficult. This team had an inward view. It’s very easy to fall into this trap, as performance objectives frequently come from internally measured items, not from the view of the customer.

EA teams that obsess over the adoption of EA frameworks fall into the same category. Can EA frameworks be valuable tools? Absolutely. But if your primary objective becomes proper adoption of the framework rather than delivering value to your customers, you have fallen into an internal view of your world, which is a recipe for failure.

Instead, teams should strive to maintain a service mentality. The primary focus should always be on delivering value to your customers. There’s a huge emphasis on EA becoming more relevant to the business; to do so, we need to deliver things that fit into the context of the business and how it currently makes decisions. If we walk in preaching that the business needs to change its entire decision-making process to conform to a framework, we’ll be shown the door. You must understand that you are providing a service to the teams you work with, helping them get their job done better than they could without you. While a framework can help, it should never be your primary focus. Internal optimizations of your process should be a secondary focus. In short, focus on what you deliver first and how you deliver it second. If you deliver useless information efficiently, it doesn’t do anyone any good.

A Lesson in Service Management

In the Wired magazine article on the relationship between AT&T and Apple (see: Bad Connection: Inside the iPhone Network Meltdown), the author, Fred Vogelstein, presents a classic service management problem.

In the early days of the iPhone, when data usage was coming in at levels 50% higher than what AT&T had projected, AT&T Senior VP Kris Rinne came to Apple and asked if they could help throttle back the traffic. Apple consistently responded that they were not going to mess up the consumer experience to make the AT&T network tenable.

In this conversation, AT&T fell into the trap that many service providers do: focusing on their internal needs rather than those of the customer. Their service was failing, and their first response was to try to change the behavior of their consumers to match what the service was providing, not to change the service to match what the consumers needed.

I’ve seen this happen in the enterprise. A team whose role was to deliver shared services became more focused on minimizing the number of services provided (which admittedly made their job easier) than on providing what their customers needed. As a result, frustration ensued: consumers were unhappy and increasingly unwilling to use the services. While not the case in this situation, an even worse possibility is where the service provider is the only choice for the consumer. The consumers become resigned to poor service, and morale goes down.

It is very easy to fall into this trap. A move to shared services is typically driven by a desire to reduce costs, and the fewer services a team has to manage, the lower its costs can be. This cannot be done at the expense of the consumer, though. First and foremost, your consumers must be happy, and consumer satisfaction must be part of the evaluation process for shared service teams. Balance that appropriately with financial goals, and you’ll be in a better position for success.

eReaders for Kids

Barnes and Noble has introduced a $149 Wi-Fi version of its Nook eReader. This has now reached a price point where I think parents may consider purchasing one for their children. Having recently moved, I know where my budget for book purchases has gone: kids’ books. They range from learning-to-read books all the way up to several-hundred-page series books like Harry Potter and Percy Jackson. While there’s no easy way to get all of these existing books onto an eReader (I think demand would shoot into the stratosphere if there were), there’s certainly no shortage of new book purchases in the future, either. So what would make a great kids’ eReader?

First, I think existing eReaders like the Nook or Kindle are probably fine for the Harry Potter/Percy Jackson age group, say 9 and up. They should have no problem using the device; it’s more a question of taking care of it. For the under-6 age group, I don’t think current eInk screens are going to provide the right amount of visual stimulation, so it’s probably a device best used while your child is in your lap and you’re reading to them. They’ll pick up the interface of the device and be ready to go when they reach the chapter-book stage of reading. The 7-8 age group is the trickier one. The device is going to get thrown into a school backpack, have who knows what smeared all over it from their hands, and so on; you get the point. It needs to be of equivalent durability to a Nintendo DS, which most 7-8 year olds I know already have.

In terms of features, I think Barnes and Noble has it right with Wi-Fi only. The kids aren’t going to be purchasing books in airports; it’s a reading device. I’d even be okay with a device that only allows USB sync, but since I wouldn’t expect the removal of Wi-Fi to change the price point, I’d rather have it than not. If you can give me a $100 price point with sync-only capabilities, like an iPod Nano or Shuffle, even better. Purchasing from the device would need to be disabled at the discretion of the parent, especially with the one-click purchase approach of the Kindle. As a parent, I would prefer to go to a website, make the purchase, and then choose to deliver to my kids’ devices when they connect. Add in date-based delivery options, and friends and family could purchase presents that automatically show up on the kids’ birthdays, or we could even link in to the North Pole and allow Santa to deliver them to the device on Christmas morning. eInk-based screens are a must, because the kids will forget to charge the device, and battery life is critical. Finally, we must be able to share books across multiple devices. I don’t want to have to buy separate copies of the latest book by Rick Riordan for each device, as my kids share the physical books now.

The real question is whether a dedicated device makes sense for your children. I think we’re looking at an age group of 7-11. From 12 and up, there’s a good chance your child will have an iPad/Netbook/Tablet/Laptop of their own with screen space suitable for reading. Does the independent eReader get put on the shelf at that point? I know I have stopped using my Kindle now that I have the Kindle app on my iPad. Personally, I think the answer is still yes, even if the device is only used for the five years from ages 7 to 11. Five years for any electronic device is a pretty good life span. We spend $150 on a Nintendo DS for probably five years of use; why wouldn’t we do the same for an eReader with more educational value? As long as there’s a software version of the reader for the multi-purpose device, all their books can go with them.

The final piece of the puzzle would be for Scholastic to tie their school book programs into this. Parents should be able to purchase for any eReader from the Scholastic website and have it tie into the classroom or school fundraising programs that they offer. While the vertically-integrated device and store models of Amazon and Barnes and Noble probably won’t allow purchases for other devices, a publisher-owned store should.

What is Architecturally Significant?

What looks to be a very simple question is actually a very tough one. The answer to this is of particular importance to a domain architecture team (a team whose scope is larger than a single project or solution), but the principles apply even to a solution architect. The solution architect has a slight advantage in that they’re typically working with a team that has a single, common goal: deliver the solution. Domain architects, however, must balance the delivery focus of project teams with setting the stage for systemic success across a broader portfolio of solutions, be it within a line of business or across the entire enterprise.

To me, architecture is about creating a categorization that establishes boundaries. These boundaries partition the solution into different areas. What’s the most frequent reason for partitioning? To create areas of responsibility. Within a project, you break things down to a sufficient level to be able to hand off units of work to individual developers or engineers, who then have responsibility for delivering that work. The biggest challenge is where those units of work overlap. When thinking of the typical Visio diagrams associated with architecture, this type of view is consistent with a boxes-and-lines view. We’re interested in what the boxes are and what’s on the lines (the interfaces and messages) that connect them.
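
To make the boxes-and-lines view concrete, here is a tiny, hypothetical Java sketch: the “line” between two boxes is an interface plus the messages that cross it, and everything behind the interface becomes a unit of responsibility that can be handed to one owner. All names here are invented for illustration.

```java
// Hypothetical illustration: the "line" between two boxes made concrete.
// The interface and its messages define the boundary; everything behind
// it can be assigned to a single team as a unit of responsibility.
public interface CustomerLookup {
    record CustomerQuery(String customerId) {}        // the message sent across the line
    record CustomerRecord(String id, String name) {}  // the message returned

    CustomerRecord find(CustomerQuery query);         // the interface on the line
}
```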

While this boxes, lines, and responsibility approach works for both project and domain architects, there is one big difference: the timeframe of responsibility. Once a project has been delivered, the development responsibilities typically go away. Your decisions on how to partition the project are based solely on getting it delivered. A domain architect, however, is interested in the full lifecycle of responsibility for a component. It’s not just the initial development, but the ongoing care and feeding, the onboarding of new consumers, etc. If we don’t partition things to support future change, the pain involved in supporting that change will be high. The partitioning that allows for an efficiently managed portfolio may not be the same partitioning that allows for the most efficient development. These needs have to be balanced. In a perfect world, the partitioning for portfolio management would occur outside the context of any project, allowing the “optimal” partitioning to be used as an input by the project architect to balance these needs. In reality, that context doesn’t exist, and we’re doing our best to build it as we go along.

This type of approach can be challenging for domain architects, because many people perceive the architect as the nuts-and-bolts person, looking at how things get built rather than what gets built. That’s because many architects got there by being senior developers or engineers. I’m not suggesting that the “how” portion isn’t important, especially because the “how” decisions also have a lot to do with partitioning, but the “what” is increasingly important, because it ultimately defines what must be managed for the long term. If those units are difficult to change over time because of poor partitioning from a responsibility and ownership viewpoint, it will be a struggle.

What are your thoughts on what things are architecturally significant?

Governance Technology and Portfolio Management

David Linthicum continued the conversation around design-time governance in cloud computing over at his InfoWorld blog. In it, he quoted my previous post, even though he chose to continue to use the design-time moniker. At least he quoted the paragraph where I state that I don’t like that term. He went on to state that I was “arguing for the notion of policy design,” which was certainly part of what I had to say, but definitely not the whole message. Finally, Dave made this statement:

The core issue that I have is with the real value of the technology, which just does not seem to be there. The fact is, you don’t need design-time service governance technology to define and define service policies.

Let’s first discuss the policy design comment. Dave is correct that I’m an advocate for policy-based service interactions. A service contract should be a collection of policies, most if not all of which will be focused on run-time interactions and can be enforced by run-time infrastructure. Taking a step backward, though, “policy design” is really a misnomer. I don’t think anyone really designs policies; they define them. Furthermore, the bulk of the definition that is required is probably just tweaking the parameters in a template.
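
To illustrate what “tweaking the parameters in a template” amounts to, here is a minimal, hypothetical sketch of a service contract modeled as a collection of parameterized policies. Every type and policy name here is invented for illustration; no vendor’s API is implied.

```java
import java.util.List;
import java.util.Map;

// Hypothetical illustration: a service contract as a collection of
// parameterized run-time policies. None of these types correspond to a
// real product API; they only show "definition as template tweaking."
public class ContractSketch {

    // A policy is a named template plus the parameter values that tweak it.
    record Policy(String template, Map<String, Object> parameters) {}

    // A service contract is simply the collection of policies agreed to
    // by a specific consumer and provider.
    record ServiceContract(String consumer, String provider, List<Policy> policies) {}

    public static void main(String[] args) {
        ServiceContract contract = new ServiceContract(
            "order-entry-app",
            "customer-lookup-service",
            List.of(
                // Most of the "definition" work is filling in template parameters.
                new Policy("rate-limit", Map.of("requestsPerMinute", 600)),
                new Policy("response-time", Map.of("p95Millis", 250)),
                new Policy("security", Map.of("tokenType", "SAML", "transport", "TLS"))
            )
        );
        // A run-time enforcement point would load this contract and enforce each policy.
        contract.policies().forEach(p ->
            System.out.println(p.template() + " -> " + p.parameters()));
    }
}
```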

Now, moving to Dave’s second comment, he made it very clear that he was talking about governance technology, not the actual governance processes. Speaking from a technology perspective, I’ll agree that for policy management, which includes policy definition, all of the work is done through the management console of the run-time enforcement infrastructure. There are challenges with separation of concerns, since many tools are designed with a single administration team in mind (e.g., can your security people adjust security policies across services while your operations staff adjusts resource consumption and your development team handles versioning, all without the ability to step on each other’s toes or do things they’re not allowed to do?). Despite this, the tooling is more than adequate for the vast majority (certainly better than 80-90%, in my opinion) of enterprise use cases.

The final comment from me on this subject, however, gets back to my original post. Your SOA governance effort involves more than policy management and run-time interactions. Outside of run-time, the governance effort has the closest ties to portfolio management efforts. How are you making your decisions on what to build and what to buy, whether provided as SaaS or in house? Certainly there is still a play for technologies that support these efforts. The challenge, however, is that the processes supporting portfolio management vary widely from organization to organization, so beyond a repository with an 80% complete schema for the service domain, there’s a lot of risk in trying to create tools to support it and be successful. How many companies actually practice systemic portfolio management versus “fire-drill” portfolio management, where a “portfolio” is produced on a once-a-year (or some other interval) basis in response to some event, then ignored for the rest of the time, only to be rebuilt when the next drill occurs? Until these processes become more systemic, governance tools are going to continue to be add-ons to other, more mature suites. SOA technologies tried to tie things to the run-time world. EA tools, on the other hand, are certainly moving beyond EA and into the world of “ERP for IT,” for lack of a better term. These tools won’t take over all corporate IT departments in the next 5 years, but I do think we’ll see increased utilization as IT continues its trend toward being a strategic advisor and manager of IT assets, and away from being the “sole provider.”

Governance Needs for Cloud Services

David Linthicum started a debate when he posted a blog entry with the attention-grabbing headline “Cloud computing will kill these 3 technologies.” One of the technologies listed was “design-time service governance.” This led to a response from K. Scott Morrison, CTO and Chief Architect at Layer 7, as well as a forum debate over at eBizQ. I added my own comments both to Scott’s post and to the eBizQ forum, and thought I’d post my thoughts here.

First, there’s no doubt that the run-time governance space is important to cloud computing. Clearly, a service provider needs to have some form of gateway (logical or physical) that requests are channeled through to provide centralized capabilities like security, billing, metering, traffic shaping, etc. I’d also advocate that it makes sense for a service consumer to have an outgoing gateway as well. If you are leveraging multiple external service providers, centralizing functions such as digital signatures, identity management, transformations, etc. makes a lot of sense. On top of that, there is no standard way of metering and billing usage yet, so having your own gateway where you can record your own view of service utilization and make sure that it’s in line with what the provider is seeing is a good thing.
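
To make that last point concrete, here is a minimal Java sketch of a consumer-side gateway keeping its own per-day, per-service call counts so invoices can be reconciled independently. The names are invented for illustration; this is not based on any specific gateway product.

```java
import java.time.LocalDate;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Illustrative sketch only: an outgoing gateway keeps an independent record
// of outbound calls so the provider's monthly invoice can be cross-checked.
public class MeteringSketch {

    private final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    // Called by the gateway for every outbound request it proxies.
    public void recordCall(String serviceName) {
        String key = LocalDate.now() + "|" + serviceName;
        counts.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    // At billing time, compare our count with the provider's claimed count.
    public boolean matchesInvoice(String serviceName, LocalDate day, long invoicedCalls) {
        LongAdder adder = counts.get(day + "|" + serviceName);
        long observed = (adder == null) ? 0 : adder.sum();
        return observed == invoicedCalls;
    }
}
```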

The real problem with Dave’s statement is the notion that design-time governance is only concerned with service design and development. That’s simply not true. In my book, I deliberately avoided this term, and instead opted for three timeframes of governance: pre-project, project, and run-time. There’s a lot more that goes on before run-time than design, and these activities still need to be governed. It is true that if you’re leveraging an external provider, you don’t have any need to govern the development practices. You do, however, still need to govern:

  • The processes that led to the decision of what provider to use.
  • The processes that define the service contract between you and the provider, both the functional interface and the non-functional aspects.
  • The processes executed when additional consumers within your organization begin using externally provided services.

For example, how is the company deciding which service provider to use? How is the company making sure that decisions by multiple groups for similar capabilities are in line with company principles? How is the company making sure that interoperability and security needs are properly addressed, rather than being left to whatever the provider dictates? What happens when a second consumer starts using the service, yet the bills are still being sent to the first consumer? Does the provider’s service model align with the company’s desired service model? Does the provider’s functional interface create undue transformation and integration work for the company? These are all governance issues that do not go away when you switch to IaaS, SaaS, or PaaS. You will need to ensure that your teams are aware of the contracts in place and don’t start sending service requests without being properly onboarded into the contractual relationship, and that your internal allocation of charges takes multiple consumers into account where necessary. All of this must happen before the first requests are sent in production, so the notion that run-time governance is the only governance concern in a cloud computing scenario is simply not true.
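
As a sketch of that onboarding concern, consider an outgoing gateway that refuses to proxy requests from internal consumers that haven’t been added to the contract with the external provider. This is illustrative Java with invented names, not a description of any real product.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: enforce "no requests without onboarding" at the
// consumer-side gateway, and attribute each call for internal chargeback.
public class OnboardingSketch {

    // provider -> set of internal consumers covered by the contract
    private final Map<String, Set<String>> onboarded = new ConcurrentHashMap<>();

    public void onboard(String provider, String consumer) {
        onboarded.computeIfAbsent(provider, k -> ConcurrentHashMap.newKeySet()).add(consumer);
    }

    // Called per request; unknown consumers are rejected rather than
    // silently riding on the first consumer's bill.
    public void checkAllowed(String provider, String consumer) {
        Set<String> allowed = onboarded.get(provider);
        if (allowed == null || !allowed.contains(consumer)) {
            throw new IllegalStateException(consumer + " is not onboarded for " + provider);
        }
    }
}
```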

A final point, added after some conversation with Lori MacVittie of F5 on Twitter: let’s not forget that someone still needs to build and provide these services. If you’re a service provider, you clearly still have technical, design-time governance needs in addition to everything else discussed earlier.

Book Review: Cloud Computing and SOA Convergence in Your Enterprise by David Linthicum

Full disclosure: I was provided a review copy of this book by the publisher free of charge. From the back cover: “Cloud Computing and SOA Convergence in Your Enterprise offers a clear-eyed assessment of the challenges associated with this new world–and offers a step-by-step program for getting there with maximum return on investment and minimum risk.”

My review in a nutshell: This is a very well-written, easy-to-read book, targeted at IT managers, that provides a robust overview of Cloud Computing and its relationship to SOA, and the core basics of a game plan for leveraging it.

This book was an extremely easy read, which is to be expected of any book from Dave, given the approachable style of his InfoWorld blog. He provides a taxonomy of cloud offerings, extending the typical three categories (Software-as-a-Service, Platform-as-a-Service, and Infrastructure-as-a-Service) to eleven. While some may think eleven is too many, a taxonomy is necessary as a core starting point, and Dave provides solid definitions for each category that an organization can choose to use. He goes on to provide a financial model to consider for making your cloud decisions, but correctly states that cost is only one factor in the decision-making process. He covers the other dimensions that should be part of your decision making in equal detail. In chapters 5 through 10, he walks through the steps associated with moving services, data, processes, governance, and testing into a cloud environment. Dave’s steps in these chapters are very straightforward. That being said, he does not sugarcoat the fact that these steps are not always easy to execute, and your success (or lack of it) is highly dependent on how large a domain you choose to attack.

For someone who has researched SOA and Cloud Computing in detail, this book may not provide a lot of new information, but what it does provide is a straightforward process for organizing your effort and making progress. Oftentimes, that can be the biggest challenge. For this reason, I do think the book is geared more toward the management side of IT and less toward the technical side (architects and developers), but as an architect, I did find the taxonomies presented valuable. The only area for improvement I saw would have been a stronger emphasis on the role the service model must play in the selection process, along with the importance of having service managers inside your IT organization. Dave discussed both of these topics; however, to make stronger ties between SOA and Cloud Computing (or even ITIL and Cloud Computing), these points could have been emphasized more strongly. Choosing the right cloud provider requires that you have solid requirements on what you need, which come from your service model. Ensuring that your requirements continue to be met and don’t get transformed into what the service provider would prefer to offer requires solid service management on your side.

Any cloud computing initiative will require that everyone involved have a base level of understanding of the goals to be achieved and the process for doing it. This book can help your staff gain that base understanding.

BPM and SOA Tool Linkage

I’ve been invited to participate in SearchSOA.com’s “Ask the Expert” series and will be fielding questions primarily on BPM technologies in the context of SOA, but I hope to see some EA related questions as well. My first response was posted on November 3rd, answering the question, “What is a key characteristic I should look for in BPM modeling tools, especially when looking to pair them with SOA?” You can read my response at SearchSOA.com.

Oracle OpenWorld: SOA-Enabled BPM Adoption, Reference Architecture and Methodology Aspects

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

The speakers for this session were Manas Deb, Sr. Director, SOA/BPM/Governance Product Management from Oracle and Mark Wilkins, Enterprise Architect, EAP, from Oracle.

They framed the session as a discussion of best practices from their EA practice, focused on companies whose goal is to adopt BPM built on a foundation of services. They began with a quick recap of what they feel SOA and BPM are, with SOA being focused on encapsulation and loose coupling, and BPM on improved efficiency. I’m not going to debate those definitions here; I’m just repeating them to establish the context of the presentation.

They, like many others, extended the three-tier model to insert processes between the presentation tier and the service tier. Notably, they did not claim the process layer was a new tier; rather, they presented it as an extension of the services tier. The one risk with this is that it immediately puts BPM into a technology context rather than a business context. That isn’t a problem in itself, but it shouldn’t be the sole framing of your BPM and SOA conversations. It may be the cornerstone for conversations with IT developers, engineers, and solution architects, but certainly not for analysts, business architects, and other non-IT staff.

The first slide on methodology emphasized what they call scopes. The examples shown included enterprise (cross-project) scope, project scope, and operations scope. At the enterprise scope, the interest is assessment, strategy, and planning: performing value-benefit analyses, forming CoEs, establishing roadmaps and maturity models, planning the portfolio, establishing governance, etc. The project scope is execution- and delivery-focused, while the operations scope is focused on measurements, scorecards, and keeping things running. It’s important to keep these scopes, or viewpoints, in mind and ensure that they all work together.

Mark went on to blow through a whole bunch of slides way too quickly. This should have been a 60 minute presentation. It appeared that there was some good content in there, including Oracle’s approach to the use of BPM and SOA conceptual reference architectures and how they eventually drive down to the physical view of the underlying infrastructure. He went on to show examples of the conceptual architectures for BPM and SOA, some information on a maturity model, a governance framework, and a few slides that tried to fit it all together. Once I’m able to download the slides, I’ll try to remember to come back and edit this post with the details. It’s unfortunate that a presentation that appeared to have very good content with appeal to architects got crammed into half the time frame of the other sessions.

Oracle OpenWorld: The Big BPEL-ESB-OSB cook-off

Full disclosure: I am attending Oracle OpenWorld courtesy of Oracle.

The speaker in this session is Andreas Chatziantoniou from Accenture.  He’s discussing the overlap between Oracle’s BPEL, ESB (legacy Oracle), and OSB (BEA ESB) products. 

First up is BPEL.  His slide states that BPEL should be used for system-to-system or service orchestration, when human workflow is needed, and when there are parallel request-response patterns.  The next slide says that BPEL should not be used for complex data transformations, should not be used as a general-purpose programming tool, and should not be used as a business modeling tool.  At first glance, this may seem strange, but I think it’s more an indication that BPEL is something that gets generated by your tool, not something people should be editing directly.  This point could have been made more clearly.  He also emphasized that you should not use BPELJ (embedded Java in BPEL).

He’s now talking about “dehydration,” a term I had not heard before.  He’s using it to refer to writing process state to disk so it can be restored at a later time.  He stated that this is a natural part of BPEL, but not part of ESB/OSB.  I can live with that.  A service bus shouldn’t be doing dehydration any more than a network switch should be.
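
Conceptually, dehydration is just persisting a process instance’s state so the engine can evict it from memory and restore (“rehydrate”) it when, say, an asynchronous response finally arrives. Here is a toy Java sketch of the idea; it is only an illustration of the concept, not how any actual BPEL engine implements it.

```java
import java.io.*;
import java.nio.file.*;
import java.util.HashMap;

// Toy illustration of "dehydration": write a long-running process instance's
// state to disk, free the memory, and restore it later to resume.
public class DehydrationSketch {

    public static class ProcessState implements Serializable {
        public String instanceId;
        public String currentActivity;
        public HashMap<String, String> variables = new HashMap<>();
    }

    static void dehydrate(ProcessState state, Path dir) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(
                Files.newOutputStream(dir.resolve(state.instanceId + ".state")))) {
            out.writeObject(state);  // persist state; the instance can leave memory
        }
    }

    static ProcessState rehydrate(String instanceId, Path dir)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(
                Files.newInputStream(dir.resolve(instanceId + ".state")))) {
            return (ProcessState) in.readObject();  // restore and resume
        }
    }
}
```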

Now on to ESB/OSB.  His slide says they should be used for loose coupling, location transparency, mediation, error handling, transformation, load balancing, security, and monitoring.  Good list, although it does have the two grey areas of mediation and transformation.  You need to further define what types of mediation and transformation should and should not be done.  The way I’ve phrased it is that ESBs should be about standards-in and standards-out.  As long as you’re mediating and transforming between standards (and the same standards on both sides), it’s a good fit.  If you are transforming between external and internal standards, as is the case in an external gateway, consider whether your ESB is the right fit, since these mappings can get quite complicated. Those are my words, not the speaker’s; sorry, this is something I’ve thought a lot about.

He’s now talking about mediation, specifically referring to a component that existed in Oracle’s legacy ESB.  He said it connects components in a composite application.  To me, this does not belong in a service bus, and in the case of Oracle Service Bus, it does not.  He did not go into more detail on the type of mediation (e.g., security token mediation, message schema mediation, transport mediation).  As previously said, this needs to be narrowed to make an appropriate decision on whether your mediation is really new business logic that belongs on a development platform, or mediation between supported standards that can be done by your connectivity infrastructure.

On transformation, Andreas focused more on what the platforms can do rather than on what they should do, calling out that XML transformations via XQuery, XSLT, etc. can be done equally well on any of the platforms.  His advice was to do it in the service bus and avoid mixed scenarios.  I’m really surprised at that, given how CPU-intensive transformations and mappings can be.  His point was that in a very large (50-60 steps) BPEL process, handling transformations could get ugly.  I see the logic in this, but I think if you do the analysis on where those transformations are needed, it may only be in one activity and best handled by the platform for that activity itself.
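
For reference, this is the kind of transformation being discussed, expressed with the standard JAXP API; where this code runs (service bus, BPEL activity, or elsewhere) is exactly the CPU-placement question raised above. The stylesheet and message contents are placeholders.

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

// Minimal XSLT transformation using the standard Java (JAXP) API.
// Each invocation costs CPU wherever it is hosted, which is why
// transformation placement matters in these architectures.
public class TransformSketch {
    public static String transform(String xml, String xslt) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xslt)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }
}
```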

Overall, the speaker spent too much time discussing what the products can do, calling out overlaps, and not enough time on what they should do.  There was some good advice for customers, but I think it could have been made much simpler. My take on this whole debate has always been straightforward.  A BPEL engine is a service development platform.  You use it to build new services that are most likely some composite of existing services.  I like to think of it as an orchestrated service platform.  As I previously said, though, you don’t write BPEL.  You use the graphical modeler for your tool, and behind the scenes, it may (or may not) be creating BPEL.

A service bus is a service intermediary.  You don’t use it to build services; you use it to connect service consumers and service providers.  Unfortunately, in trying to market the service bus, most vendors succumbed to feature creep, whether by creating their ESB from a legacy EAI product or by adding more development-like features to drive sales.  Think of it as a very intelligent router, meant to be configured by operations, not coded by developers.

Oracle OpenWorld: Five Steps to Better SOA Governance with Oracle Enterprise Manager

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

I’m having to recreate this post thanks to a bug in WordPress for the iPhone which managed to eat a couple posts, so my apologies for it being a bit shorter than hoped, since I had to recall what I was typing live.

In this session, James Kao from Oracle presented five steps to improving SOA governance. The core premise that was emphasized throughout is that the use of metadata is becoming more and more prevalent in the development world, as it is necessary to increase the efficiency of our development efforts. Examples include SCA descriptors and BPEL. We will have a big problem, however, if the operational tools can’t keep up with these advances. This same metadata needs to be leveraged in the run-time world to improve our operational processes. I’ll add to this that while much of the metadata is coming out of the SOA and BPM technology space, this concept should not be limited to just those areas. The concept of having metadata that describes solutions for gains in both the design time world and the run time world is extremely important.

The five steps presented were:

  1. Assess. (sorry lost the details on this one)
  2. Discover. This is where the metadata created at design time is leveraged to set up appropriate run-time governance (see the sketch after this list).
  3. Monitor. The systems must be instrumented appropriately, exposing metrics, in addition to leveraging external monitors to collect information about run-time behavior.
  4. Control. The four examples given here were policy management, service management, server/service provisioning, and change management. Clearly, this is the actionable step of the process. Based upon the data, we take action. Sometimes that action is reflected in changes to the infrastructure via provisioning and/or change management, sometimes that action is modifications to the policies that govern the systems.
  5. Share. Finally, just as the metadata from design time played a role in the run-time world, the metrics collected at run time can play a role in other processes. The information must be shared into systems like Oracle BAM or Oracle Enterprise Repository to provide a feedback loop so that appropriate decisions can be made for future solutions.
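
Here is the sketch of the “Discover” step referenced above: design-time metadata (a parsed descriptor, in this toy model) drives the automatic setup of run-time monitoring. Both the descriptor shape and the monitoring interface are invented for illustration; this is not Oracle Enterprise Manager’s API.

```java
import java.util.List;

// Hypothetical sketch of "Discover": design-time metadata drives run-time
// governance setup, so operations never hand-configures what to monitor.
public class DiscoverySketch {

    // Invented stand-in for design-time metadata (e.g., a parsed descriptor).
    record ServiceDescriptor(String name, String endpoint, List<String> operations) {}

    // Invented stand-in for whatever monitoring backend is in use.
    interface MonitoringBackend {
        void watch(String metricName);  // e.g., register a metric collector
    }

    // For each service found in the metadata, register per-operation metrics.
    static void configureMonitoring(List<ServiceDescriptor> metadata,
                                    MonitoringBackend backend) {
        for (ServiceDescriptor svc : metadata) {
            for (String op : svc.operations()) {
                backend.watch(svc.name() + "." + op + ".latency");
                backend.watch(svc.name() + "." + op + ".faults");
            }
        }
    }
}
```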

I was very impressed with James’ grasp of the space. While this session presented concepts and not a live demonstration, if Oracle Enterprise Manager can make these concepts a reality in a usable manner, this could be a very powerful platform for companies leveraging the red stack. Excellent talk.

Oracle OpenWorld: Using Oracle Web Services Manager to Manage Security

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

I’m having to recreate this post thanks to a bug in WordPress for the iPhone which managed to eat a couple posts, so my apologies for it being a bit shorter than hoped, since I had to recall what I was typing live.

In this talk, Vikas Jain gave an overview of Oracle Web Services Manager (OWSM), and Josh Bregman (I think) gave a demo of integration between OWSM and Oracle Entitlements Server (OES). For most of his portion, Vikas went over the architecture behind OWSM. It hasn’t changed too dramatically since I first saw it years ago, back when it was Confluent’s product, and that’s a good thing, since it had proper separation between policy enforcement and policy management. One thing I didn’t know, which is a good thing, is that the OWSM enforcement point is now an embedded agent within WebLogic Server. That is, it comes with WebLogic Server; there’s no separate install for it. This is a very important point, because if you need to do end-to-end identity propagation, you’ll need some kind of agent or native support for your identity formats on every node in the call chain. They did mention end-to-end identity propagation on a slide, but they didn’t go into any depth on it.

From a feature standpoint, OWSM has all of the necessary WS-* support, including WS-Policy, WS-Security, SAML, and WS-ReliableMessaging, to name a few.

One thing I was disappointed with: when they presented a slide on integrations with the rest of Oracle Fusion Middleware, Oracle Service Bus was not shown. SOA and WebLogic was a line item, and since OSB runs on WebLogic, a relationship could be inferred, but what I wanted to know about was the significant functionality overlap between OSB and OWSM. I did get to ask about this, and the first answer was that they felt there wasn’t a lot of overlap; frankly, I don’t agree with that in the slightest. On the plus side, however, they did say that in a future release of Oracle Service Bus, the security features of OSB will be fully provided by the OWSM agent, rather than by the underlying WebLogic (non-OWSM) capabilities as is currently the case. If so, they are working to eliminate the functional overlap, but there’s a long way to go. Oracle Service Bus is a policy enforcement point, just as OWSM agents are. OWSM can do more than just security, just as OSB can. Hopefully, this will be resolved in the future, and customers will not have to choose between two products from the same vendor to attack the same problem of enforcing service contract policies through a service intermediary.

Oracle OpenWorld: EA, BPM, and SOA

Full disclosure: I am attending Oracle OpenWorld courtesy of Oracle.

The speaker is Dirk Stähler from Opitz Consulting, and he is talking about how to bridge the information gap using Oracle BPA Suite and an integrated model.

He started by presenting the EA, BPM, and SOA problem which includes no unified methodology, unclear semantics, and no differentiation between EA, BPM, and SOA aspects.

He presented the three domains in a Venn diagram and called out the overlap in artifacts from each, including org structure, infrastructure, business processes, IT systems, and business objects. This overlap forms the foundation for the metamodel, which can be captured in Oracle’s BPA Suite.

In discussing this, he presented a pyramid, where EA is at the top (providing a conceptual blueprint of the org), underneath that is business process management (as a business design tool), then comes technical business process management (for IT specifications), and finally is information technology (supporting development). SOA spans one leg of the pyramid, impacting all four layers.

In discussing the artifacts, he defined domains for process architecture, application architecture, infrastructure architecture, data architecture, organization architecture, and service architecture. All of the artifacts can be captured in BPA Suite. In aligning this to EA, BPM, and SOA, he feels that EA covers application and infrastructure architecture, BPM covers organization, process, and data, and SOA covers service and some of data.

After this, he switched to a demo of BPA Suite, showing how to navigate the metamodel, associate different diagram types with different domains, etc. As someone with no experience with BPA Suite or any other EA tooling, I found this a good overview of how BPA Suite could be used to manage the various models associated with an EA practice. The metamodel description covered how to separate these things within BPA Suite; however, the talk did not get into any issues or concerns with having two or even three different audiences using one centralized tool and repository, and making sure they leverage each other’s work where appropriate.

For more information, they have published a book on their methodology; however, it is currently only available in German.

Oracle OpenWorld: An Architect’s View of the New Features of Oracle SOA Suite 11g Release 1

Full disclosure: I’m attending Oracle OpenWorld courtesy of Oracle.

The first wave of industry standardization was around function-specific standards in areas causing headaches in the integration space. The speakers emphasized the role of SCA in the standardization of the service platform, in the same way that Java EE played a role in the evolution of the application server. I’ll be honest: I’m still not a big SCA fan. I know Oracle is, though. The one good thing being shown is that the hosting environments can be managed in a single, unified way, regardless of whether a service is hosted in BPEL PM or WebLogic. As long as there’s good tooling that hides the various SCA descriptors, this is a good thing.

Now they are talking about the event delivery network. It’s nice to see a discussion of fundamentals rather than an immediate jump into a CEP discussion. They’re talking about having an event catalog, utilizing an EDL (event description language), and easily connecting publishers and subscribers. This is a good step forward, in my opinion. It may actually get people to think about events as first-class citizens in the same way as services.
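
To illustrate the general idea, an event delivery network amounts to a catalog of declared, described event types plus publish/subscribe wiring. This is a toy Java model of the concept, not Oracle’s EDN or EDL.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy model of an event delivery network: declared event types are
// discoverable in a catalog, and publishers are decoupled from subscribers.
public class EventNetworkSketch {

    record EventType(String name, String description) {}

    private final Map<String, EventType> catalog = new ConcurrentHashMap<>();
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    // Register an event type so it is discoverable, like an event catalog entry.
    public void declare(EventType type) {
        catalog.put(type.name(), type);
    }

    public void subscribe(String eventName, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventName, k -> new CopyOnWriteArrayList<>()).add(handler);
    }

    // Deliver the payload to every subscriber of the named event.
    public void publish(String eventName, String payload) {
        subscribers.getOrDefault(eventName, List.of()).forEach(h -> h.accept(payload));
    }
}
```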

Now, they’re on to Oracle Human Workflow. It is all task-based, with property-based configuration. The routing of tasks can be entirely dynamic, rather than based on static rules. It has integration with Oracle Business Rules. It publishes events on the EDN (e.g. onTaskAssigned, onTaskModified, etc.). Nice to see them eating their own dog food with the use of EDN.

They’ve now moved on to Service Data Objects. They’ve introduced entity variables into BPEL to allow working with SDOs.

Additional subjects in this session included Metadata Services (MDS) and the Dev-Test-Prod problem (changing of environment-specific parameters as code is promoted through environments). On the latter, there are a large number of parameters that can now be modified via a “c-plan,” applied at deployment time. Anything that makes this easier is a good thing in my opinion.
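
The underlying idea, sketched generically in Java, is deployment-time substitution of environment-specific values into an artifact. The ${...} placeholder convention here is invented for illustration and is not Oracle’s actual configuration-plan format.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Generic illustration of the Dev-Test-Prod problem: environment-specific
// values are substituted into a deployment artifact at deployment time.
public class ConfigPlanSketch {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace every ${key} in the artifact with the target environment's value;
    // unknown keys are left untouched so the gap is visible.
    static String apply(String artifact, Map<String, String> environment) {
        Matcher m = PLACEHOLDER.matcher(artifact);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = environment.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        String descriptor = "<endpoint>${service.url}</endpoint>";
        System.out.println(apply(descriptor,
                Map.of("service.url", "https://prod.example.com/customer")));
    }
}
```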
